My new blog

Posted on Updated on

Hi to all my readers.

I have now founded a new company, ME2Digital GmbH. All new posts will be published at the new home 😉

See you at the ME2Digital Blog

shutdown all pods in openshift and kubernetes

Posted on Updated on

We had to change some settings on our NFS server. To prevent data corruption we decided to shut down all pods.

This sounds easy, but I wanted the same number of pods to be running after the shutdown as before it.

First I saved the current replica counts to a text file.

for i in $( oc get project -o name|cut -d '/' -f2 ); do
  for j in $( oc get dc -n $i -o name |cut -d '/' -f2 ); do
    echo "${i}_${j}_"$( oc get dc -n $i $j \
       --template='{{ .spec.replicas }}') >> backup_dc_scale_data.txt
  done
done
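Each line written to backup_dc_scale_data.txt has the form <project>_<dc>_<replicas>. As a small standalone sketch (with a sample value instead of real `oc` output), this is how the fields are split back out with `cut`; it works because OpenShift project and dc names may not contain underscores:

```shell
# Sample line in the backup file format: <project>_<dc>_<replicas>
line="myproject_mydc_3"

# Split the underscore-separated fields, just like the restore loop does
project=$(echo "$line" | cut -d'_' -f1)
dc=$(echo "$line" | cut -d'_' -f2)
replicas=$(echo "$line" | cut -d'_' -f3)

echo "$project $dc $replicas"
# → myproject mydc 3
```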

Now scale all to 0

for i in $( < backup_dc_scale_data.txt ); do
  oc scale -n $( echo $i|cut -d'_' -f1) \
    dc/$( echo $i|cut -d'_' -f2) \
    --replicas=0
done

You can check with the following command how many pods are still running. Shutting down all the pods takes some time.

watch 'oc get po --all-namespaces |egrep -c Running'

If some pods are still running after that, check them and shut them down manually.

The pods in openshift-infra must be shut down via their replication controllers (rc).

oc scale rc -n openshift-infra heapster --replicas=0
oc scale rc -n openshift-infra hawkular-metrics --replicas=0
oc scale rc -n openshift-infra hawkular-cassandra-1 --replicas=0

With everything scaled down, we changed the settings on our NFS server.

After the change you can restore the previous number of pods.

for i in $( < backup_dc_scale_data.txt ); do
  oc scale -n $( echo $i|cut -d'_' -f1) \
    dc/$( echo $i|cut -d'_' -f2) \
    --replicas=$( echo $i|cut -d'_' -f3)
done

and scale the metrics pods back up:

oc scale rc -n openshift-infra heapster --replicas=1
oc scale rc -n openshift-infra hawkular-metrics --replicas=1
oc scale rc -n openshift-infra hawkular-cassandra-1 --replicas=1

You may also need to take care of the logging project, which we have not set up.


Posted on

Dear Reader.

don't even think about using Sunrise prepaid.
The support is really, really bad.

My prepaid connection only gets ~256 K.
I was in 2 different shops; they had no clue what was happening. They assumed various things but couldn't say what's wrong with the setup.

After a call to the support hotline, which also didn't help, I have now decided on another phone provider.
I will try Swisscom; I hope they are less crappy than Sunrise.

external Services with openshift v3 and haproxy

Posted on Updated on


OpenShift V3 offers a simple solution for calling external services:

Integrating External Services

This solution lacks some possibilities, such as using DNS names or more than one destination.

The solution described here offers you the flexibility of haproxy, including logs for the service.

Here is the picture of the solution.



  • openshift v3
  • own git repository
  • haproxy 1.6
  • socklog / syslog
  • destination address(es)
  • patience 😉

openshift v3 and git repo

I expect that you have access to an OpenShift instance (oc … / web console) and read/write access to a git repository.


You can use the official haproxy image on Docker Hub; I suggest using the alpine one.
I have used this repo for a proxy to Google.

socklog / syslog

Due to the fact that there is no official Docker Hub entry for socklog, you can use my repo.

destination address(es) or DNS name

Well, which service you want to connect to is your decision 😉


Now you should take a look into the excellent documentation of haproxy.
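To give an idea of what such a config can look like, here is a minimal sketch of an haproxy.cfg for this setup. The ports, the socklog service name, and the Google backend are assumptions based on this post; adapt them to your environment:

```haproxy
global
    # send logs to the socklog service created below (port 8514/udp)
    log rhel7-socklog:8514 local0
    maxconn 1024

defaults
    mode http
    log global
    option httplog
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend entry-point
    bind :8080
    default_backend be_google

backend be_google
    # an external destination by DNS name, which the built-in
    # "Integrating External Services" solution cannot do
    server srv_google www.google.com:80 check
```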

Start of Implementation

Create a new Project

oc new-project externel-service001

or, when you are an admin and want to be able to run these pods on dedicated nodes, you can use

oadm new-project externel-service001 --node-selector='your-dmz=external-router'

Create socklog/syslog

oc new-app https://gitlab.com/aleks001/rhel7-socklog \
    -e TZ=Europe/Vienna --dry-run -o yaml > 01_build_socklog.yaml
oc create -f 01_build_socklog.yaml

Q: Why do I use a file for the creation instead of creating directly?

A: For reproducibility and debugging. It's easier to run a

oc delete -f 01_build_socklog.yaml

than to search for all the components ;-).

Now we have a rhel7-socklog service with exposed port 8514/udp:

oc get svc
rhel7-socklog <none> 8514/UDP 3m

and a listening daemon which writes the requests out to stdout

oc logs -f <THE_SOCKLOG_POD>
listening on, starting.


Don't use the user/uid and group/gid options on OpenShift!

Don't use the daemon option on OpenShift!

Create haproxy

Commit it to your repo and create the app

oc new-app https://gitlab.com/aleks001/haproxy \
    -e TZ=Europe/Vienna --dry-run -o yaml > metadata/01_build_haproxy.yaml
oc create -f metadata/01_build_haproxy.yaml

After some time you will see the pods up and running, and in the log of the socklog pod you will see the log entries of haproxy.

[al@localhost openshift-external-services]$ oc logs -f <THE_SOCKLOG_POD>
listening on, starting.
local0.notice: Apr 27 18:29:18 haproxy[1]: Proxy entry-point started.
local0.notice: Apr 27 18:29:18 haproxy[1]: Proxy google started.

You can use configmaps to change the config of haproxy. The mount path is


and a sample template can be found here

Add route

To be able to use this service, now add a route:

oc expose svc haproxy

When everything works as expected you should see something like this.

local0.notice: Apr 27 19:56:25 haproxy[1]: Proxy entry-point started.
local0.notice: Apr 27 19:56:25 haproxy[1]: Proxy be_google started.
local0.info: Apr 27 19:56:55 haproxy[1]: [27/Apr/2016:19:56:55.189] entry-point be_google/srv_google/ 0/0/111/18/129 404 1686 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
local0.info: Apr 27 19:57:21 haproxy[1]: [27/Apr/2016:19:57:21.555] entry-point be_google/srv_google/ 0/0/42/18/60 404 1686 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"

You can hire me to create that for you.

HTTP to HTTPS with Openshift V3

Posted on

Because the current OpenShift V3 (3.1) offers a redirect from HTTP to HTTPS only with edge termination, you will need a concept similar to the one described below.


In a non-OpenShift-V3 setup you normally handle HTTP to HTTPS as follows.


The application server or reverse proxy handles the HTTP request and sends a redirect to HTTPS. It's an easy setup for most servers out there.

Because OpenShift V3 is not able to route one hostname to two backends, the flow above will not work in the current (3.1) version.

To be able to handle the redirect you will need something like this.


This means you need to set up a dedicated service with a haproxy >= 1.6.

Please take a look into the description of Server IP address resolution using DNS for full details.

The server line in haproxy must follow this pattern


as described in the OpenShift DNS documentation.
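As an illustration, a hedged sketch of such a server line with runtime DNS resolution in haproxy 1.6 syntax; the nameserver address and the <service>/<project> placeholders are assumptions you must adapt to your cluster:

```haproxy
resolvers openshift-dns
    # the cluster DNS address is an assumption; check your setup
    nameserver dns1 172.30.0.1:53

backend be_app
    # <service>.<project>.svc.cluster.local is the OpenShift DNS name pattern
    server srv_app <service>.<project>.svc.cluster.local:8080 check resolvers openshift-dns
```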

There is a RH Bugzilla entry for this and also a GitHub issue.

Make OpenShift console available on port 443 (https)

Posted on Updated on


The main reason why this blog post exists is that OpenShift V3 and Kubernetes are very closely bound to port 8443. This could change in the future.

I used several times a dedicated haproxy pod to provide the OpenShift v3 Web console on port 443 (https).

This concept could be used also for different services in the PaaS.

There are ansible variables, openshift_master_api_port and openshift_master_console_port, which suggest that you are able to change the ports.

These ports are 'internal' ports and not designed to be the public ports. So changing them could crash your OpenShift setup!

If you want to use these variables you will also need to change a lot in OpenShift v3 and Kubernetes.

The solution described here is more global and flexible than the external service solution.
The external service solution is much easier to set up; it is described here.

You will need the following to run this setup.

  • Time!
  • Understanding of OpenShift v3, Kubernetes and docker
  • SSL-Certificate for master.<your-domain> or *.<your-domain>
  • write access to a git repository
  • ssh key for deployment [optional]

Here is a rudimentary picture which shows the idea and the flow.



Btw: Did I say you will need time and knowledge? 😉

git clone

Due to the fact that you need to change the haproxy config, you must have a git repository from which OpenShift is able to build the haproxy image.

You can try to use this repo as a base.

git clone https://github.com/cloudwerkstatt/openshift-master.git

Adapt ENV

You need to change the OPENSHIFT_MASTER_SERVER variable in the Dockerfile

Adapt master.cfg

You need to change the container-files/etc/haproxy/master.cfg

Add this into the global section.

ca-base /etc/ssl
crt-base /etc/ssl

Add ssl options to the bind line

You need to add this to the bind line:

ssl no-sslv3 crt /etc/ssl/certificates-all.pem
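Put together, the relevant frontend of master.cfg could then look like the following sketch. The frontend name, port, and backend name are assumptions; only the ssl options on the bind line come from this post:

```haproxy
frontend main
    # terminate TLS with the certificate from the mounted secret
    bind :8443 ssl no-sslv3 crt /etc/ssl/certificates-all.pem
    default_backend openshift-master
```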

Test run

You can try the build with a simple docker build command

docker build --rm -t myhaproxy .

Now run

docker run -it --rm --net host -e OPENSHIFT_MASTER_SERVER=<your-master-ip> myhaproxy

When everything works you need to push the data to your git repository

git stuff

git add .
git commit -m 'init'
git push -u origin master

Create the project

oc new-project infra-services

Add ssl keys to service account

oc secrets new manage-certificate  certificates-all.pem=.../zerts_manage-certificate_all.pem
oc secrets add serviceaccount/default secret/manage-certificate

oc new-app

now create the app.

oc new-app <your-repo-url> --name=openshift-master

oc edit dc

You need to add the secret into the container.

Please take a look into the concept of the secrets here.

oc edit dc -n infra-services openshift-master
        volumeMounts: <-- Add from here
        - mountPath: /etc/ssl
          name: secret-volume
          readOnly: true <-- until this line
      terminationGracePeriodSeconds: 30
      volumes: <-- Add from here
      - name: secret-volume
        secret:
          secretName: manage-certificate <-- until this line

After saving the changes a rebuild will start.

oc expose

Make the setup publicly available via the OpenShift default router:

oc expose service openshift-master --hostname=manage.<your-domain>

Test #1

After all these steps and the build process you should now see a running pod 😉

A call to

curl -sS https://manage.<your-domain>|egrep hostPort

should now show the OpenShift internal masterPublicURL

egrep -i masterPublicURL /etc/origin/master/master-config.yaml

Ansible hosts file

To configure OpenShift with the new URL please add the following lines to the ansible hosts file

openshift_master_public_api_url=https://manage.{{ osm_default_subdomain }}
openshift_master_public_console_url={{ openshift_master_public_api_url }}/console
openshift_master_metrics_public_url={{ openshift_master_public_api_url }}/hawkular/metrics

and rerun the ansible playbook as described here

ANSIBLE_LOG_PATH=/tmp/ansible_log_$(date +%Y_%m_%d-%H_%M) ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml

Test #2

A call to

curl -sS https://manage.<your-domain>|egrep hostPort

should now show the new public masterPublicURL

egrep -i masterPublicURL /etc/origin/master/master-config.yaml

This should be master.<your-domain>.

ansible on centos 7.1

Posted on Updated on

You need the following packages after a CentOS minimal installation:

 yum install git gcc python-devel

Then follow the commands in the doc


If you get the following error.

# ansible all -m ping
Traceback (most recent call last):
 File "/root/ansible/bin/ansible", line 39, in <module>
 from ansible.utils.display import Display
 File "/root/ansible/lib/ansible/utils/display.py", line 28, in <module>
 from ansible import constants as C
 File "/root/ansible/lib/ansible/constants.py", line 26, in <module>
 from six.moves import configparser
ImportError: No module named six.moves

try to install

pip install six

This is six

This is the Python version used:

# python -V
Python 2.7.5


Posted on Updated on

I searched for a ping tool for the AJP protocol ( https://tomcat.apache.org/connectors-doc/ajp/ajpv13a.html ) and found a few: http://lmgtfy.com/?q=ajpping.

From my point of view they all have some weaknesses:

  • not accurate enough for current IT environments
  • don't measure the operations
  • don't write out a graph-able line

Because of this I used jffry's ( http://www.perlmonks.org/?node_id=766945 ) script as the base for my extended version.


With the output line

%Y-%m-%d %T host %s ip %s port %s connect %f syswrite %f sysread %f timeouted %d timeout %d good_answer %d

you can easily create a picture with your preferred tool (gnuplot, splunk, excel, R, …).
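The usual AJP ping approach (which, as far as I can tell, the script above also uses — treat this as an assumption) is to send an AJP13 CPing packet and wait for the CPong reply. The CPing request is 5 bytes: the magic 0x12 0x34, a two-byte payload length 0x0001, and the CPing type byte 0x0A; the CPong answer carries type byte 0x09. Assuming bash's printf with \x escapes, the request packet can be reproduced on the shell:

```shell
# Build the 5-byte AJP13 CPing request and show it as a hex string:
# magic 0x12 0x34, payload length 0x0001, CPing type byte 0x0A
cping=$(printf '\x12\x34\x00\x01\x0a' | od -An -tx1 | tr -d ' \n')
echo "$cping"
# → 123400010a

# To actually ping a connector, one could send it with nc and look for
# the CPong type byte 0x09 in the reply (host and port are placeholders):
#   printf '\x12\x34\x00\x01\x0a' | nc -w 2 <tomcat-host> 8009 | od -An -tx1
```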

certutil commands

Posted on

How to remove all entries in nss-db

rm <DIR_OF_NSS_DB>/*.db

How to see CAs in nss-db

certutil -L -d <DIR_OF_NSS_DB>

How to see KEYs in nss-db

certutil -K -d <DIR_OF_NSS_DB>

How to add CAs in nss-db

certutil -A -d <DIR_OF_NSS_DB> -n <NICK-NAME> -t C,C,P -i <CERTIFICATE-FILE>

How to create Key and CSR

head -c 2048 /dev/urandom > noise_file

certutil -R -s "CN=<HOST_COMMON_NAME>,OU=<OU>,O=<O>,L=<LOCATION>,C=<COUNTRY>,ST=<STATE>" -g 2048 -o mycert.req -d <DIR_OF_NSS_DB> -a -n <NICK-NAME> -z noise_file


Doc of certutil


Build own ChatSecure Android client

Posted on

Step by step description based on


to build your own chatsecure client

Most importantly, you need a 32-bit build platform, due to the fact that "aapt" is a 32-bit binary:

file adt-bundle-linux-x86_64-20140321/sdk/build-tools/android-4.4.2/aapt

adt-bundle-linux-x86_64-20140321/sdk/build-tools/android-4.4.2/aapt: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, not stripped

  1. I have used an Ubuntu 14.04 x32 instance with 1 GB RAM on https://digitalocean.com/
  2. I have downloaded the latest Linux 32bit ADT Bundle from https://developer.android.com/sdk/index.html
    1. wget https://dl.google.com/android/adt/22.6.2/adt-bundle-linux-x86-20140321.zip
  3. apt-get install ant libbcel-java openjdk-7-jdk unzip git
  4. unzip adt-bundle-linux-x86-20140321.zip
  5. export PATH=/root/adt-bundle-linux-x86-20140321/sdk/tools:${PATH}
  6. git clone https://github.com/guardianproject/ChatSecureAndroid.git
  7. cd ChatSecureAndroid/
  8. git submodule update --init
  9. ./update-ant-build.sh
  10. ant debug
  11. Now you have a ChatSecure-debug.apk in the bin directory

Collect MySQL Size in Zabbix on Ubuntu 12.04 LTS

Posted on

If you have a lot of such entries

sh: 1: [[: not found
sh: 1: : Permission denied

in your zabbix_agentd.log and you are on Ubuntu 12.04 LTS, then you have dash as /bin/sh (see the Ubuntu DashAsBinSh page).

I have used the following setup to solve this issue.

vi /etc/zabbix/mysql_size.sh


#!/bin/bash
echo "select sum($(case "$3" in both|"") echo "data_length+index_length";; data|index) echo "${3}_length";; free) echo "data_free";; esac)) from information_schema.tables$([[ "$1" = "all" || ! "$1" ]] || echo " where table_schema='$1'")$([[ "$2" = "all" || ! "$2" ]] || echo " and table_name='$2'");" | HOME=/var/lib/zabbix mysql -N

chmod 755 /etc/zabbix/mysql_size.sh

and changed the original line in


UserParameter=mysql.size[*],/etc/zabbix/mysql_size.sh $1 $2 $3
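The heart of mysql_size.sh is the case statement that maps the third argument of mysql.size[*] to a column expression of information_schema.tables. A standalone sketch of just that logic, runnable without MySQL:

```shell
# Map the mysql.size "metric" argument to the information_schema column(s),
# mirroring the case statement in mysql_size.sh
metric_expr() {
  case "$1" in
    both|"")    echo "data_length+index_length";;
    data|index) echo "${1}_length";;
    free)       echo "data_free";;
  esac
}

metric_expr both   # → data_length+index_length
metric_expr index  # → index_length
metric_expr free   # → data_free
```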

FPM in php 5.3.3

Posted on Updated on

The FastCGI Process Manager (FPM) is now part of the official php 5.3 branch.


The official page in the PHP manual can be found here

The official site for the php-fpm project is here

Build Alfresco community edition

Posted on

Due to a bug in alfresco 3.3 I decided to build my own alfresco community edition.

My first step was to download the latest tomcat 6 version and extract the tarball

cd /datadir/al/download/
mkdir alfresco
cd !$
mv <DOWNLOADED_.tar.gz> .
tar xfvz apache-tomcat-6.0.28.tar.gz

The second step was to read the Developer Runtime Configuration.

Then get the current subversion HEAD, change to the checked-out directory, and set some env vars.

svn co svn://svn.alfresco.com/alfresco/HEAD alfresco_20100711
cd alfresco_20100711/root
export TOMCAT_HOME=/datadir/al/download/alfresco/apache-tomcat-6.0.28
export VIRTUAL_TOMCAT_HOME=/datadir/al/download/alfresco/apache-tomcat-6.0.28
export APP_TOMCAT_HOME=/datadir/al/download/alfresco/apache-tomcat-6.0.28/
export CATALINA_OPTS="-server -Xmx1024M -XX:PermSize=128m"
ant -verbose -l ../ant-20100713-2303.log

On my machine the build time was ~30 minutes.

Now you have your own alfresco build.

Before you start tomcat, extract the original server.xml and web.xml from the tomcat tarball.

cd ../..
tar xfvz apache-tomcat-6.0.28.tar.gz  apache-tomcat-6.0.28/conf/server.xml apache-tomcat-6.0.28/conf/web.xml
cd apache-tomcat-6.0.28

After the successful startup you can connect to the alfresco share.


If you have some suggestions or optimizations please drop me a comment 😉


  1. Create an empty alf_data. I have used the alf_data from an installed 3.3.
  2. Add alfresco-global.properties to the alfresco.war. Current solution: cp /home/al/Alfresco/tomcat/shared/classes/alfresco-global.properties /datadir/al/download/alfresco/apache-tomcat-6.0.28/webapps/alfresco/WEB-INF/classes/

Listen and save a live stream

Posted on

I have tried to save some music from the local radio stations Ö3 and Kronehit.

After some trial and error I came up with these two lines:


gst-launch-0.10 -tv uridecodebin subtitle-encoding=UTF-8 \
uri=mms://gcssrv.pkf.speednet.at/WSX/oe3_live \
! tee name=myT myT. ! queue2 \
! alsasink myT. ! queue2 ! audioconvert ! lamemp3enc \
! multifilesink location=oe3_%05d


gst-launch-0.10 -tv uridecodebin subtitle-encoding=UTF-8 \
uri=http://onair.krone.at/kronehit.mp3 \
! tee name=myT myT. ! queue \
! alsasink myT. ! queue ! audioconvert ! lamemp3enc \
! multifilesink location=kronehit_%05d

The next step would be to save the stream automatically with the right name => Interpreter_Song.

If you have some ideas how I can do this please drop me a message, thanks ūüėČ

play radio stream with gstreamer

Posted on

Since I started to play with gstreamer I needed audio/video inputs.

As we know, the best source for such content is the NET, so I just searched for some radio streams and found two Austrian local stations.

Ö3 http://oe3.orf.at/ => mms://gcssrv.pkf.speednet.at/WSX/oe3_live

88.6 http://886.at/   => http://radiostream.de/stream/36795.pls

For Ö3 it is easy; on gentoo just use:

gst-launch-0.10 -v playbin2 uri=mms://gcssrv.pkf.speednet.at/WSX/oe3_live

On another distribution, plain gst-launch may be enough.

For 88.6 it is a little bit more difficult, because the link is a multimedia playlist.

  1. get the content of the pls file
curl -s http://radiostream.de/stream/36795.pls|egrep File
  2. take one of the streams and hand it over to gstreamer
gst-launch-0.10 -v playbin2 uri=http://ber.radiostream.de:36795

That's it, have fun 😉

Country select box in symfony setWidgets

Posted on

For my registration form there was a requirement to select which country a user is from.

Symfony has a nice helper called select_country_tag which I wanted to use in $this->setWidgets() in mysfGuardFormSignin.class.php; the problem is that select_country_tag does not return the array.

After a little bit of 'reverse engineering', my solution looks like this:

    $c = new sfCultureInfo(sfContext::getInstance()->getUser()->getCulture());
    $countries = $c->getCountries();

Here is the full class:

less lib/form/mysfGuardFormSignin.class.php 

class mysfGuardFormSignin extends sfForm
{
  public function configure()
  {
    $c = new sfCultureInfo(sfContext::getInstance()->getUser()->getCulture());
    $countries = $c->getCountries();

    $this->setWidgets(array(
      'username' => new sfWidgetFormInput(),
      'email'    => new sfWidgetFormInput(),
      'password' => new sfWidgetFormInput(array('type' => 'password')),
      'country'  => new sfWidgetFormSelect(array('choices' => $countries)),
    ));

    $this->setValidators(array(
      'username' => new sfValidatorString(),
      'email'    => new sfValidatorString(),
      'password' => new sfValidatorString(),
      'country'  => new sfValidatorString(),
    ));

    $this->validatorSchema->setPostValidator(new sfGuardValidatorUser());
  }
}


First Entry

Posted on Updated on

Hi all,

this is the blog of Aleksandar Lazic.
I hope this will not be 'just another blog', but we will see in the future 😉