allow image pulling in OpenShift


oadm policy add-cluster-role-to-group system:image-puller system:authenticated -n <project>
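
If you only want to allow the service accounts of one project to pull images from another project, instead of opening it up cluster-wide, the narrower per-project variant looks like this (the project names are placeholders):

oc policy add-role-to-group system:image-puller system:serviceaccounts:<pulling-project> -n <image-project>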

shut down all pods in OpenShift and Kubernetes


We had to change some settings on our NFS server. To prevent data corruption we decided to shut down all pods.

This sounds easy, but I wanted the same number of pods to be running after the shutdown as before.

First I created a text file where the current replica counts are saved.

for i in $( oc get project -o name|cut -d '/' -f2 ); do
  for j in $( oc get dc -n $i -o name |cut -d '/' -f2 ); do
    echo "${i}_${j}_"$( oc get dc -n $i $j \
       --template='{{ .spec.replicas }}') >> backup_dc_scale_data.txt
  done
done
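
The resulting file contains one line per deployment config in the form <project>_<dc>_<replicas>, for example (made-up names):

myproject_myapp_2
myproject_mydb_1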

Now scale all to 0

for i in $( < backup_dc_scale_data.txt ); do
  oc scale -n $( echo $i|cut -d'_' -f1) \
    dc/$( echo $i|cut -d'_' -f2) \
    --replicas=0
done

You can check how many pods are still running with the following command.

Shutting down all pods takes some time.

watch 'oc get po --all-namespaces |egrep -c Running'

If some pods are still running, check them and shut them down manually.

The pods in openshift-infra must be shut down via their replication controllers (rc).

oc scale rc -n openshift-infra heapster --replicas=0
oc scale rc -n openshift-infra hawkular-metrics --replicas=0
oc scale rc -n openshift-infra hawkular-cassandra-1 --replicas=0

Now we changed the settings on our NFS server.

After the change you can restore the number of pods.

for i in $( < backup_dc_scale_data.txt ); do
  oc scale -n $( echo $i|cut -d'_' -f1) \
    dc/$( echo $i|cut -d'_' -f2) \
  --replicas=$( echo $i|cut -d'_' -f3)
done

and the metrics ones.

oc scale rc -n openshift-infra heapster --replicas=1
oc scale rc -n openshift-infra hawkular-metrics --replicas=1
oc scale rc -n openshift-infra hawkular-cassandra-1 --replicas=1

You may also need to take care of the logging project, which we have not set up.

caddy logline for combined log format


You can use this log line to get the “Combined Log Format” in caddy.

log / stdout "{remote} - - [{when}] \"{method} {path} {proto}\" {status} {size} \"{>Referer}\" \"{>User-agent}\""

or the predefined format “{combined}”

The placeholders are described at https://caddyserver.com/docs/placeholders and the log directive at https://caddyserver.com/docs/log
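
For context, a minimal Caddyfile sketch using the predefined format could look like this (the site address and root are of course placeholders):

example.com {
    root /var/www/html
    log / stdout "{combined}"
}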

If you are behind a reverse proxy or load balancer, you will also need to add this directive to the Caddyfile to get the real IP.

realip {
    from 10.0.0.0/8
}

Of course you will change the from part😉
The realip directive is described at https://caddyserver.com/docs/realip

sunrise-ch-crappy


Dear Reader.

don’t even think about using Sunrise prepaid.
The support is really really bad.

My prepaid setup only has ~256 K.
I was in 2 different shops and they had no clue what happened. They assumed something but couldn’t say what’s wrong with the setup.

After a call to the support hotline, which also did not help, I have now decided on another phone provider.
I will try Swisscom; I hope they are less crappy than Sunrise.

external services with OpenShift v3 and haproxy


Introduction

OpenShift V3 offers a simple solution for calling external services:

Integrating External Services

This solution lacks some possibilities, like using DNS names or more than one destination.

The solution described here offers you the flexibility of haproxy, with logs for this service.

Here is the picture of the solution.

OpenShift_External_Services_haproxy

Prerequisites

  • openshift v3
  • own git repository
  • haproxy 1.6
  • socklog / syslog
  • destination address(es)
  • patience😉

openshift v3 and git repo

I expect that you have access to an OpenShift cluster (oc … / web console) and read/write access to a git repository.

Haproxy

You can use the official haproxy image on Docker Hub; I suggest using the alpine one.
I have used this repo for a proxy to Google.

socklog / syslog

Due to the fact that there is no official Docker Hub image for socklog, you can use my repo.

destination address(es) or DNS name

Well, which service you want to connect to is your decision😉

patience

Now you should take a look at the excellent haproxy documentation.

Start of Implementation

Create a new Project

oc new-project externel-service001

or, when you are an admin and want to be able to run these pods on dedicated nodes, you can use

oadm new-project externel-service001 --node-selector='your-dmz=external-router'

Create socklog/syslog

oc new-app https://gitlab.com/aleks001/rhel7-socklog \
    -e TZ=Europe/Vienna --dry-run -o yaml > 01_build_socklog.yaml
oc create -f 01_build_socklog.yaml
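
If you want to follow the build, you can do it for example like this; the build name rhel7-socklog-1 is just what the first build is typically called, so adjust it to what oc get builds shows:

oc get builds
oc logs -f build/rhel7-socklog-1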

Q: Why do I use a file for the creation instead of creating the app directly?

A: For reproducibility and debugging. It’s easier to do a

oc delete -f 01_build_socklog.yaml

than to search for all the components😉.

Now we have a rhel7-socklog service with exposed port 8514/udp

oc get svc
NAME            CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
rhel7-socklog   172.30.189.182   <none>        8514/UDP   3m

and a listening daemon which writes the requests out to stdout

oc logs -f <THE_SOCKLOG_POD>
listening on 0.0.0.0:8514, starting.

haproxy

Don’t use the user/uid and group/gid options in OpenShift!

Don’t use the daemon option in OpenShift!
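
For orientation, here is a minimal haproxy.cfg sketch along those lines (not the exact config from the repo above). The socklog service name, the nameserver address and the ports are assumptions you have to adapt to your cluster:

global
    # no 'daemon', no 'user'/'group' here, as noted above
    log rhel7-socklog:8514 local0
    maxconn 1024

defaults
    mode http
    log global
    option httplog
    timeout connect 5s
    timeout client  30s
    timeout server  30s

resolvers dns
    # cluster DNS, adjust to your environment
    nameserver ns1 172.30.0.1:53
    hold valid 10s

frontend entry-point
    bind *:8080
    default_backend be_google

backend be_google
    # haproxy 1.6 re-resolves the external name at runtime via the resolvers section
    server srv_google www.google.com:80 check resolvers dns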

Create haproxy

Commit it to your repo and create the app

oc new-app https://gitlab.com/aleks001/haproxy \
    -e TZ=Europe/Vienna --dry-run -o yaml > metadata/01_build_haproxy.yaml
oc create -f metadata/01_build_haproxy.yaml

After some time you will see the pods up and running, and in the log of the socklog pod you will see the log entries of haproxy.

[al@localhost openshift-external-services]$ oc logs -f <THE_SOCKLOG_POD>
listening on 0.0.0.0:8514, starting.
10.1.3.1: local0.notice: Apr 27 18:29:18 haproxy[1]: Proxy entry-point started.
10.1.3.1: local0.notice: Apr 27 18:29:18 haproxy[1]: Proxy google started.

You can use configmaps to change the config of haproxy. The mount path is

/usr/local/etc/haproxy

and a sample template can be found here
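
One way to wire such a configmap in could be the following; the configmap name haproxy-config is made up, and on older clients the command is oc volume instead of oc set volume:

oc create configmap haproxy-config --from-file=haproxy.cfg
oc set volume dc/haproxy --add --overwrite \
    --name=haproxy-config \
    --type=configmap --configmap-name=haproxy-config \
    --mount-path=/usr/local/etc/haproxy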

Add route

To be able to use this service, now add a route.

oc expose svc haproxy
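
You can then send a test request through the router, for example like this (the --template part only extracts the hostname of the route):

curl -v http://$(oc get route haproxy --template='{{ .spec.host }}')/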

When everything works as expected you should see something like this.

10.1.5.1: local0.notice: Apr 27 19:56:25 haproxy[1]: Proxy entry-point started.
10.1.5.1: local0.notice: Apr 27 19:56:25 haproxy[1]: Proxy be_google started.
10.1.5.1: local0.info: Apr 27 19:56:55 haproxy[1]: 10.1.2.1:41173 [27/Apr/2016:19:56:55.189] entry-point be_google/srv_google/216.58.212.132 0/0/111/18/129 404 1686 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
10.1.5.1: local0.info: Apr 27 19:57:21 haproxy[1]: 10.1.2.1:41427 [27/Apr/2016:19:57:21.555] entry-point be_google/srv_google/216.58.212.132 0/0/42/18/60 404 1686 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"

You can hire me to create that for you.

HTTP to HTTPS with Openshift V3


Due to the fact that the current OpenShift V3 (3.1) only offers a redirect from HTTP to HTTPS with edge termination, you will need a concept similar to the one described below.

https://alword.wordpress.com/2016/03/11/make-openshift-console-available-on-port-443-https/

In a non-OpenShift-V3 setup you normally handle the HTTP-to-HTTPS redirect as follows.

HTTP-HTTPS_Flow

The application server or reverse proxy handles the HTTP request and sends a redirect to HTTPS. It’s an easy setup for most servers out there.

Due to the fact that OpenShift V3 is not able to handle one hostname in two backends, the flow above will not work in the current (3.1) version.

To be able to handle the redirect you will need something like this.

HTTP-HTTPS-Workaround

This means you need to set up a dedicated service with a haproxy >= 1.6.

Please take a look at the description of Server IP address resolution using DNS for full details.

The server line in haproxy must follow this pattern

$SERVICE.$PROJECT.svc.cluster.local

as described in OpenShift DNS.
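
For illustration, a server line following that pattern could look like this, assuming a hypothetical service myapp in the project myproject and a resolvers section named dns defined elsewhere in the config:

server srv_myapp myapp.myproject.svc.cluster.local:8080 check resolvers dns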

There is a RH Bugzilla entry for this and also a GitHub issue.

Make OpenShift console available on port 443 (https)


Introduction

The main reason why this blog post exists is that OpenShift V3 and Kubernetes are very tightly bound to port 8443. This could change in the future.

We at Cloudwerkstatt GmbH use a dedicated haproxy pod to provide the OpenShift v3 Web console on port 443 (https).

This concept can also be used for other services in the PaaS.

There are ansible variables, openshift_master_api_port and openshift_master_console_port, which suggest that you are able to change the port.

These ports are ‘internal’ ports and not designed to be the public ports, so changing them could crash your OpenShift setup!

In case you want to use these variables you will also need to change a lot in OpenShift v3 and Kubernetes.

The solution described here is more global and flexible than the external service solution.
The external service solution is much easier to set up; it is described here.

You will need the following to run this setup.

  • Time!
  • Understanding of OpenShift v3, Kubernetes and docker
  • SSL-Certificate for master.<your-domain> or *.<your-domain>
  • write access to a git repository
  • ssh key for deployment [optional]

Here is a rudimentary picture which shows the idea and the flow.

OSv3-cons-443

Steps

Btw: Did I say you will need time and knowledge?😉

git clone

Due to the fact that you need to change the haproxy config, you must have a git repository from which OpenShift is able to build the haproxy image.

You can try to use this repo as a base.

git clone https://github.com/cloudwerkstatt/openshift-master.git

Adapt ENV

You need to change the OPENSHIFT_MASTER_SERVER variable in the Dockerfile

Adapt master.cfg

You need to change the container-files/etc/haproxy/master.cfg

Add this into the global section.

ca-base /etc/ssl
crt-base /etc/ssl

Add ssl options to bind line

You need to add this to the bind line:

ssl no-sslv3 crt /etc/ssl/certificates-all.pem
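
The complete bind line might then look roughly like this; the address and port depend on the master.cfg in the repo, so treat it only as an illustration:

bind *:8443 ssl no-sslv3 crt /etc/ssl/certificates-all.pem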

Test run

You can try the build with a simple docker build command

docker build --rm -t myhaproxy .

Now run

docker run -it --rm --net host -e OPENSHIFT_MASTER_SERVER=<your-master-ip> myhaproxy

When everything works you need to push the data to your git repository

git stuff

git add .
git commit -m 'init'
git push -u origin master

Create the project

oc new-project infra-services

Add ssl keys to service account

oc secrets new manage-certificate  certificates-all.pem=.../zerts_manage-certificate_all.pem
oc secrets add serviceaccount/default secret/manage-certificate
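
You can verify that the secret is attached to the service account, for example with:

oc describe serviceaccount default -n infra-services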

oc new-app

Now create the app.

oc new-app <your-repo-url> --name=openshift-master

oc edit dc

You need to add the secret into the container.

Please take a look at the concept of secrets here.

oc edit dc -n infra-services openshift-master
spec:
....
    spec:
      containers:
        volumeMounts: <-- Add from here 
        - mountPath: /etc/ssl
          name: secret-volume
          readOnly: true <-- until this line
      terminationGracePeriodSeconds: 30
      volumes: <-- Add from here
      - name: secret-volume
        secret:
          secretName: manage-certificate

After saving the changes a new deployment will be rolled out.

oc expose

Make the setup publicly available via the OpenShift default router

oc expose service openshift-master --hostname=manage.<your-domain>

Test #1

After all these steps and the build process you should now see a running pod😉

A call to

curl -sS https://manage.<your-domain>|egrep hostPort

should now show the OpenShift-internal masterPublicURL, which you can look up on the master with

egrep -i masterPublicURL /etc/origin/master/master-config.yaml

Ansible hosts file

To configure OpenShift with the new URL please add the following lines to the ansible hosts file

openshift_master_public_api_url=https://manage.{{ osm_default_subdomain }}
openshift_master_public_console_url={{ openshift_master_public_api_url }}/console
openshift_master_metrics_public_url={{ openshift_master_public_api_url }}/hawkular/metrics

and rerun the ansible playbook as described here

ANSIBLE_LOG_PATH=/tmp/ansible_log_$(date +%Y_%m_%d-%H_%M) ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml

Test #2

A call to

curl -sS https://manage.<your-domain>|egrep hostPort

should now show the new public masterPublicURL, which you can again verify on the master with

egrep -i masterPublicURL /etc/origin/master/master-config.yaml

which should now be manage.<your-domain>.