My new blog

Posted on Updated on

Hi to all my readers.

I have now founded a new company, ME2Digital GmbH. All new posts will be published at the new home 😉

See you at the ME2Digital Blog

allow image pulling in openshift

Posted on Updated on

oadm policy add-cluster-role-to-group system:image-puller system:authenticated -n <project>

shutdown all pods in openshift and kubernetes

Posted on Updated on

We had to change some settings on our NFS server. To prevent data corruption we decided to shut down all pods.

This sounds easy, but I wanted the same number of pods to be running after the change as before the shutdown.

I created a text file in which the current replica settings are saved.

for i in $( oc get project -o name | cut -d '/' -f2 ); do
  for j in $( oc get dc -n $i -o name | cut -d '/' -f2 ); do
    echo "${i}_${j}_"$( oc get dc -n $i $j \
       --template='{{ .spec.replicas }}' ) >> backup_dc_scale_data.txt
  done
done
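Each line in the backup file has the form `<project>_<dc>_<replicas>`. Since OpenShift project and dc names cannot contain underscores, splitting on `_` is safe. A small sketch with a hypothetical line shows how the parts are recovered later:

```shell
# A hypothetical line as written by the loop above
line="myproject_myapp_3"

# Split the line back into its three parts
project=$( echo "$line" | cut -d'_' -f1 )
dc=$( echo "$line" | cut -d'_' -f2 )
replicas=$( echo "$line" | cut -d'_' -f3 )

echo "$project $dc $replicas"
# prints: myproject myapp 3
```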

Now scale all deployment configs to 0.

for i in $( < backup_dc_scale_data.txt ); do
  oc scale -n $( echo $i | cut -d'_' -f1 ) \
    dc/$( echo $i | cut -d'_' -f2 ) \
    --replicas=0
done

You can check with the following command how many pods are still running.

Shutting down all pods takes some time.

watch 'oc get po --all-namespaces |egrep -c Running'
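To see not just the count but which pods are still running, you can filter on the STATUS column (column 4 in the default `oc get po --all-namespaces` output). A sketch with sample output embedded, so the filtering itself is visible; with a live cluster you would pipe `oc`'s output in instead:

```shell
# Sample "oc get po --all-namespaces" output (hypothetical pod names)
sample='NAMESPACE   NAME          READY   STATUS      RESTARTS   AGE
proj1       app-1-abcde   1/1     Running     0          5d
proj2       job-1-build   0/1     Completed   0          2d'

# Print namespace and pod name for every pod still in the Running state
echo "$sample" | awk '$4 == "Running" {print $1, $2}'
# prints: proj1 app-1-abcde
```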

If some pods are still running, check them and shut them down manually.

The pods in openshift-infra must be shut down via their replication controllers (rc).

oc scale rc -n openshift-infra heapster --replicas=0
oc scale rc -n openshift-infra hawkular-metrics --replicas=0
oc scale rc -n openshift-infra hawkular-cassandra-1 --replicas=0

With everything stopped, we changed the settings on our NFS server.

After the change you can restore the number of pods

for i in $( < backup_dc_scale_data.txt ); do
  oc scale -n $( echo $i | cut -d'_' -f1 ) \
    dc/$( echo $i | cut -d'_' -f2 ) \
    --replicas=$( echo $i | cut -d'_' -f3 )
done

and the metrics ones.

oc scale rc -n openshift-infra heapster --replicas=1
oc scale rc -n openshift-infra hawkular-metrics --replicas=1
oc scale rc -n openshift-infra hawkular-cassandra-1 --replicas=1

You may also need to take care of the logging project, which we have not set up.

caddy logline for combined log format

Posted on Updated on

You can use this log line to get the "Combined Log Format" in Caddy.

log / stdout "{remote} - - [{when}] \"{method} {path} {proto}\" {status} {size} \"{>Referer}\" \"{>User-agent}\""

or the predefined format "{combined}"
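A minimal Caddyfile sketch putting the directive in context (the hostname and web root are assumptions, not from the original post):

```
# Hypothetical site definition; adjust hostname and root to your setup
example.com {
    root /var/www
    log / stdout "{combined}"
}
```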

The placeholders and the log directive are described in the Caddy documentation.

If you are behind a reverse proxy or load balancer, you will also need to add this directive to the Caddyfile to get the real IP.

realip {
    header X-Forwarded-For
    from 10.0.0.0/8
}

Of course you will change the from part 😉
The realip directive is described in the plugin's documentation.


Posted on

Dear Reader.

don't even think about using Sunrise prepaid.
The support is really, really bad.

My prepaid connection gets only ~256 K.
I was in 2 different shops; they had no clue what was happening. They assumed something but couldn't say what's wrong with the setup.

After a call to the support hotline, which also didn't help, I have now decided on another phone provider.
I will try Swisscom; I hope they are less crappy than the current one.

external Services with openshift v3 and haproxy

Posted on Updated on


OpenShift V3 offers a simple solution to call external services.

Integrating External Services

This solution lacks some possibilities, like using DNS names or more than one destination.

The solution described here offers you the flexibility of haproxy, including logs for this service.

Here is the picture for the solution.



  • openshift v3
  • own git repository
  • haproxy 1.6
  • socklog / syslog
  • destination address(es)
  • patience 😉

openshift v3 and git repo

I expect that you have access to an OpenShift cluster (oc CLI / web console) and read/write access to a git repository.


You can use the official haproxy image on Docker Hub; I suggest using the Alpine one.
I have used this repo for a proxy to Google.

socklog / syslog

Due to the fact that there is no official Docker Hub image for socklog, you can use my repo

destination address(es) or DNS name

Well, to which service you want to connect is your decision 😉


Now you should take a look into the excellent documentation of haproxy.

Start of Implementation

Create a new Project

oc new-project externel-service001

or, when you are an admin and want to be able to run these pods on dedicated nodes, you can use

oadm new-project externel-service001 --node-selector='your-dmz=external-router'

Create socklog/syslog

oc new-app \
    -e TZ=Europe/Vienna --dry-run -o yaml > 01_build_socklog.yaml
oc create -f 01_build_socklog.yaml

Q: Why do I use a file for the creation instead of creating the objects directly?

A: For reproduction and debugging. It's easier to do a

oc delete -f 01_build_socklog.yaml

than to search for all components ;-).

Now we have a rhel7-socklog service with exposed port 8514/udp

oc get svc
rhel7-socklog <none> 8514/UDP 3m

and a listening daemon that writes the requests to stdout

oc logs -f <THE_SOCKLOG_POD>
listening on, starting.


Don't use the user/uid and group/gid options in OpenShift!

Don't use the daemon option in OpenShift!

Create haproxy

Commit it to your repo and create the app

oc new-app \
    -e TZ=Europe/Vienna --dry-run -o yaml > metadata/01_build_haproxy.yaml
oc create -f metadata/01_build_haproxy.yaml

After some time you will see the pods up and running, and in the log of the socklog pod you will see the log entries of haproxy.

[al@localhost openshift-external-services]$ oc logs -f <THE_SOCKLOG_POD>
listening on, starting.
local0.notice: Apr 27 18:29:18 haproxy[1]: Proxy entry-point started.
local0.notice: Apr 27 18:29:18 haproxy[1]: Proxy google started.

You can use configmaps to change the config of haproxy. The mount path is


and a sample template can be found here

Add route

To be able to use this service, now add a route.

oc expose svc haproxy

When everything works as expected you should see something like this.

local0.notice: Apr 27 19:56:25 haproxy[1]: Proxy entry-point started.
local0.notice: Apr 27 19:56:25 haproxy[1]: Proxy be_google started.
Apr 27 19:56:55 haproxy[1]: [27/Apr/2016:19:56:55.189] entry-point be_google/srv_google/ 0/0/111/18/129 404 1686 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
Apr 27 19:57:21 haproxy[1]: [27/Apr/2016:19:57:21.555] entry-point be_google/srv_google/ 0/0/42/18/60 404 1686 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"

You can hire me to create that for you.

HTTP to HTTPS with Openshift V3

Posted on

Due to the fact that the current OpenShift V3 (3.1) only offers an HTTP-to-HTTPS redirect with edge termination, you will need a concept similar to the one described below.

In a non-OpenShift setup you normally handle HTTP to HTTPS as follows.


The application server or reverse proxy handles the HTTP request and sends a redirect to HTTPS. It's an easy setup for most servers out there.

Due to the fact that OpenShift V3 is not able to serve one hostname from two backends, the flow above will not work in the current (3.1) version.

To be able to handle the redirect you will need something like this.


This means you need to set up a dedicated service with a haproxy >= 1.6.

Please take a look into the haproxy description of Server IP address resolution using DNS for full details.

The server line in haproxy must follow this pattern


as described in OpenShift DNS .
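As an illustration (not the exact pattern from the post), a haproxy 1.6 config using runtime DNS resolution for an OpenShift service could look like this; the nameserver address, port, and service name are assumptions you must adapt to your cluster:

```
# Resolvers section pointing at the cluster DNS (address is an assumption)
resolvers openshift_dns
    nameserver dns1 172.30.0.1:53
    resolve_retries 3
    timeout retry   1s
    hold valid      10s

backend be_app
    # myservice.myproject.svc.cluster.local is a hypothetical service name
    server srv_app myservice.myproject.svc.cluster.local:8080 check resolvers openshift_dns
```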

There is a RH Bugzilla entry for this and also a GitHub issue.