INT 21h

Hi, I am Vladimir Smagin, SysAdmin and Kaptain.

Credentials and other secrets from Vault to your containers at startup

№ 11183, sections Programming, Sysadmin, January 2nd, 2021

What if your database credentials are stored in Vault and you want to turn them into ENV variables for your application at container startup? You can do this for Kubernetes deployments or plain Docker containers with my small program vault-envs.

Add additional steps to your Dockerfile:

  • install my vault-envs program, which “converts” secrets into ENV variables
  • create or modify an entrypoint script that calls vault-envs and performs other pre-startup actions

The Dockerfile additions:

...
...
# add the Ubuntu/Debian repo and install vault-envs with fresh CA certificates
RUN curl http://deb.blindage.org/gpg-key.asc | apt-key add - && \
    echo "deb http://deb.blindage.org bionic main" | tee /etc/apt/sources.list.d/21h.list && \
    apt update
RUN apt install -y ca-certificates vault-envs

# copy entrypoint script
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]

Your entrypoint script will look like this:

#!/bin/bash

...
...

# fetch the Postgres credentials, exported with the PG_ prefix
eval export `vault-envs -token "$VAULT_TOKEN" \
        -vault-url https://vault.blindage.org \
        -vault-path /prod/crm/connection_postgres -envs-prefix "PG_"`

# fetch the MySQL credentials, exported with the MYSQL_ prefix
eval export `vault-envs -token "$VAULT_TOKEN" \
        -vault-url https://vault.blindage.org \
        -vault-path /prod/crm/connection_mysql -envs-prefix "MYSQL_"`

# fetch the API connection settings, no prefix
eval export `vault-envs -token "$VAULT_TOKEN" \
        -vault-url https://vault.blindage.org \
        -vault-path /prod/crm/connection_api`

...
...

exec "$@"

If some variable names are identical, they get overwritten by the next vault-envs call, so I used prefixes.
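The overwrite behavior is easy to demonstrate in plain shell. Here two hypothetical functions stand in for vault-envs calls, simply printing KEY=value pairs:

```shell
#!/bin/sh
# Hypothetical stand-ins for two vault-envs calls; the underlying secrets
# share the same key names (DB_USER, DB_PASS), so prefixes keep them apart.
pg_secrets()    { printf 'PG_DB_USER=postgres PG_DB_PASS=postgres'; }
mysql_secrets() { printf 'MYSQL_DB_USER=root MYSQL_DB_PASS=wp'; }

# each call exports its KEY=value pairs; without the prefixes the second
# call would silently overwrite the values exported by the first
eval export `pg_secrets`
eval export `mysql_secrets`

echo "$PG_DB_USER $MYSQL_DB_USER"   # postgres root
```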

Now build the image and run:

docker run --rm -e VAULT_TOKEN=s.QQmLlqnHnRAEO9eUeoggeK1n crm printenv

and see the results in the container console:

...
VAULT_RETRIEVER=vault-envs
PG_DB_PASS=postgres
PG_DB_PORT=5432
PG_DB_USER=postgres
PG_DB_HOST=db-postgres
PG_DB_NAME=crm
MYSQL_DB_HOST=mysql.wordpress
MYSQL_DB_PASS=
MYSQL_DB_PORT=3306
MYSQL_DB_USER=root
MYSQL_DB_NAME=wordpress
API_HOST=http://crm/api
API_TOKEN=giWroufpepfexHyentOnWebBydHojGhokEpAnyibnipNirryesaccasayls4
...

Wooh! You did it.
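In Kubernetes you would not pass the token on the command line. A minimal sketch (the Secret name vault-token and the crm Deployment/image names are assumptions) takes VAULT_TOKEN from a Secret instead:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vault-token            # hypothetical Secret name
stringData:
  token: s.QQmLlqnHnRAEO9eUeoggeK1n
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crm
spec:
  selector:
    matchLabels: {app: crm}
  template:
    metadata:
      labels: {app: crm}
    spec:
      containers:
      - name: crm
        image: crm             # the image built above
        env:
        - name: VAULT_TOKEN    # read by entrypoint.sh at startup
          valueFrom:
            secretKeyRef:
              name: vault-token
              key: token
```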


Add cache control and CORS to nginx ingress in Kubernetes

№ 11131, section "Sysadmin", November 3rd, 2020

annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($request_uri ~* \.(js|css|gif|jpe?g|png|woff|woff2|ico)) {
        expires 1M;
        add_header Cache-Control "public";
      }
    nginx.ingress.kubernetes.io/cors-allow-headers: >-
      DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,X-CSRF-Token,
      Authorization
    nginx.ingress.kubernetes.io/cors-allow-methods: 'GET, PUT, POST, DELETE, PATCH, OPTIONS'
    nginx.ingress.kubernetes.io/cors-allow-origin: '*'
    nginx.ingress.kubernetes.io/enable-cors: 'true'
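For context, here is how these annotations sit in a complete Ingress resource; the name, host, and backend service are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                       # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($request_uri ~* \.(js|css|gif|jpe?g|png|woff|woff2|ico)) {
        expires 1M;
        add_header Cache-Control "public";
      }
    nginx.ingress.kubernetes.io/enable-cors: 'true'
    nginx.ingress.kubernetes.io/cors-allow-origin: '*'
    nginx.ingress.kubernetes.io/cors-allow-methods: 'GET, PUT, POST, DELETE, PATCH, OPTIONS'
spec:
  ingressClassName: nginx
  rules:
  - host: example.blindage.org       # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app             # placeholder backend service
            port:
              number: 80
```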


Laravel applications scaling inside Kubernetes

№ 11122, section "Sysadmin", November 3rd, 2020

I ran into the problem of scaling a Laravel application today. It scales well, but the session management plugin, which stores data in files, gets in the way. If you do not share sessions between Pods, the browser will show error 419, page expired, after scaling.

First of all, I needed a Redis cluster to store the sessions. You can set it up however you want; I used the redis-operator that I wrote. For best results, I added balancing via haproxy and turned off persistent storage.

$ k get po -l instance=sessions-store
NAME                                       READY   STATUS    RESTARTS   AGE
sessions-store-haproxy-59b45854f4-48sfp    2/2     Running   0          5h16m
sessions-store-redis-0                     1/1     Running   0          5h16m
sessions-store-redis-1                     1/1     Running   0          5h15m
sessions-store-redis-2                     1/1     Running   0          5h15m
sessions-store-sentinel-586f47d744-4kgqx   1/1     Running   0          5h16m
sessions-store-sentinel-586f47d744-cfwng   1/1     Running   0          5h16m
sessions-store-sentinel-586f47d744-fd254   1/1     Running   0          5h16m

After that, you need to create a connection to the new Redis cluster in config/database.php.

    'redis' => [

        'client' => env('REDIS_CLIENT', 'phpredis'),
        ...
        'sessions' => [
            'host' => env('SESSION_REDIS_HOST', '127.0.0.1'),
            'password' => env('SESSION_REDIS_PASSWORD', null),
            'port' => env('SESSION_REDIS_PORT', 6379),
            'database' => 0,
        ],

    ],

Now apply a patch that lets config/session.php take the necessary connection parameters from the ENV.

    'driver' => env('SESSION_DRIVER', 'file'),
    'connection' => env('SESSION_CONNECTION', null),

Don’t forget about the PHP library for working with Redis in your Dockerfile:

RUN apt-get install -y php7.3-redis

You can also add support to php.ini if some non-Laravel scripts are used:

RUN sed -i 's/session.save_handler = files/session.save_handler = redis/g' /etc/php/7.3/fpm/php.ini
RUN sed -i 's/;session.save_path = "\/var\/lib\/php\/sessions"/session.save_path = "tcp:\/\/sessions-store-haproxy:6379"/g' /etc/php/7.3/fpm/php.ini

Now provide all the necessary environment variables to the Pod and you can start deploying.

  SESSION_DRIVER: redis
  SESSION_CONNECTION: sessions
  SESSION_REDIS_HOST: "sessions-store-haproxy"
  SESSION_REDIS_PASSWORD: ""
  SESSION_REDIS_PORT: 6379
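One way to deliver these variables (a sketch; the ConfigMap name is an assumption) is a ConfigMap referenced via envFrom in the Deployment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: laravel-sessions       # hypothetical name
data:                          # ConfigMap values must be strings
  SESSION_DRIVER: redis
  SESSION_CONNECTION: sessions
  SESSION_REDIS_HOST: sessions-store-haproxy
  SESSION_REDIS_PASSWORD: ""
  SESSION_REDIS_PORT: "6379"
---
# in the Deployment's container spec:
#   envFrom:
#   - configMapRef:
#       name: laravel-sessions
```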

Log in to Laravel and check Redis.

Nice.


Warm Image operator for Kubernetes

№ 11010, section "Sysadmin", May 28th, 2020

For example, you have a huge image with your software and a Pod running on a node. When the Pod moves to another node, the new node spends a minute or two downloading the image. This operator forces nodes to download the image before rescheduling, so the Pod starts faster.

It runs /bin/sh with an infinite loop on the specified image as a DaemonSet, with additional options like NodeSelector, Affinity, or resource limits. You can specify a custom command if your image does not contain a /bin/sh interpreter or you want to run your own script.

Your first warmer:

apiVersion: blindage.org/v1alpha1
kind: WarmImage
metadata:
  name: mongo4
spec:
  image: mongo
  version: "4"
  nodeSelector:
    node-role.kubernetes.io/master: ""

Now mongo:4 is warmed on all master nodes.
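Conceptually, the DaemonSet the operator creates for this warmer looks roughly like the following; this is a hand-written approximation for illustration, not the operator's actual output:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mongo4-warmer              # approximate naming
spec:
  selector:
    matchLabels: {warm-image: mongo4}
  template:
    metadata:
      labels: {warm-image: mongo4}
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: warmer
        image: mongo:4
        # idle forever so the image stays pulled on every selected node
        command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
```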

Repository: https://git.blindage.org/21h/warm-image-operator


cert-manager can’t resolve new domain to perform HTTP01 challenge

№ 10443, section "Sysadmin", December 14th, 2019

You created a new domain in an ingress resource to perform the HTTP01 challenge and obtain a new LE certificate, but something goes wrong in the log file:

E1214 14:35:06.644315 1 sync.go:183] cert-manager/controller/challenges "msg"="propagation check failed" "error"="failed to perform self check GET request 'http://test.k8s.blindage.org/.well-known/acme-challenge/nmxxZh0K7iXuOnqGRm52PqymHj8YFVpN2MryLfRdVoU': Get http://test.k8s.blindage.org/.well-known/acme-challenge/nmxxZh0K7iXuOnqGRm52PqymHj8YFVpN2MryLfRdVoU: dial tcp: lookup test.k8s.blindage.org on 10.245.0.10:53: no such host" "dnsName"="test.k8s.blindage.org" "resource_kind"="Challenge" "resource_name"="tls-test-k8s-blindage-org-749846670-0" "resource_namespace"="testing" "type"="http-01"

… and this error repeats multiple times without any progress. It is a managed Kubernetes cluster in DigitalOcean.

To solve this problem, just uncomment these lines in the cert-manager Helm chart values to provide your own nameservers:

podDnsPolicy: "None"
podDnsConfig:
  nameservers:
    - "1.1.1.1"
    - "8.8.8.8"

Voila! You've got a new certificate.

