INT 21h

Hi, I am Vladimir Smagin, SysAdmin and Kaptain.

Scaling Laravel applications inside Kubernetes

№ 11122, in the Sysadmin section, November 3rd, 2020

I ran into the problem of scaling a Laravel application today. The application itself scales well, but scaling is hampered by the session driver, which stores session data in files. If you do not share sessions between Pods, then after scaling the browser will show error 419, Page Expired.

First of all, I needed to create a Redis cluster to store the sessions. You can do it however you want; I used the redis-operator that I wrote. For best results, I added balancing via haproxy and turned off persistent storage.

$ k get po -l instance=sessions-store
NAME                                       READY   STATUS    RESTARTS   AGE
sessions-store-haproxy-59b45854f4-48sfp    2/2     Running   0          5h16m
sessions-store-redis-0                     1/1     Running   0          5h16m
sessions-store-redis-1                     1/1     Running   0          5h15m
sessions-store-redis-2                     1/1     Running   0          5h15m
sessions-store-sentinel-586f47d744-4kgqx   1/1     Running   0          5h16m
sessions-store-sentinel-586f47d744-cfwng   1/1     Running   0          5h16m
sessions-store-sentinel-586f47d744-fd254   1/1     Running   0          5h16m

After that, you need to create a connection to the new Redis cluster in config/database.php.

    'redis' => [

        'client' => env('REDIS_CLIENT', 'phpredis'),
        ...
        'sessions' => [
            'host' => env('SESSION_REDIS_HOST', '127.0.0.1'),
            'password' => env('SESSION_REDIS_PASSWORD', null),
            'port' => env('SESSION_REDIS_PORT', 6379),
            'database' => 0,
        ],

    ],

Now adjust config/session.php so that it takes the connection parameters from environment variables:

    'driver' => env('SESSION_DRIVER', 'file'),
    'connection' => env('SESSION_CONNECTION', null),

Don't forget to add the PHP Redis extension to the Dockerfile:

RUN apt-get install -y php7.3-redis

You can also switch PHP's own session handler to Redis in php.ini if the image runs additional non-Laravel scripts:

RUN sed -i 's/session.save_handler = files/session.save_handler = redis/g' /etc/php/7.3/fpm/php.ini
RUN sed -i 's/;session.save_path = "\/var\/lib\/php\/sessions"/session.save_path = "tcp:\/\/sessions-store-haproxy:6379"/g' /etc/php/7.3/fpm/php.ini

Now provide all the necessary environment variables to the Pod and you can start deploying:

  SESSION_DRIVER: redis
  SESSION_CONNECTION: sessions
  SESSION_REDIS_HOST: "sessions-store-haproxy"
  SESSION_REDIS_PASSWORD: ""
  SESSION_REDIS_PORT: 6379
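
If these values are kept in a ConfigMap, one way to hand them to the application Pod is envFrom. A minimal sketch, assuming the ConfigMap is called laravel-env; the container name and image are placeholders:

# fragment of the application Deployment spec
containers:
- name: laravel-app              # placeholder name
  image: my-laravel-app:latest   # placeholder image
  envFrom:
  - configMapRef:
      name: laravel-env          # ConfigMap holding the variables above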

Log in to Laravel and check that the sessions appear in Redis.
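
A quick way to check from the cluster side, assuming the pod name from the listing above:

# session keys should appear in Redis after logging in
$ kubectl exec -it sessions-store-redis-0 -- redis-cli keys '*'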

Nice.


Warm Image operator for Kubernetes

№ 11010, in the Sysadmin section, May 28th, 2020

Suppose you have a huge image with your software and a Pod running on a node. When the Pod moves to another node, the image takes a minute or two to download there. This operator forces nodes to download the image in advance, so the Pod starts faster.

It runs /bin/sh with an infinite loop on the specified image as a DaemonSet, with additional options such as nodeSelector, affinity, or resource limits. You can specify a custom command if your image does not contain a /bin/sh interpreter or if you want to run your own script.

Your first warmer:

apiVersion: blindage.org/v1alpha1
kind: WarmImage
metadata:
  name: mongo4
spec:
  image: mongo
  version: "4"
  nodeSelector:
    node-role.kubernetes.io/master: ""

Now you have warmed mongo:4 on all master nodes.
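
Under the hood the operator manages a DaemonSet per WarmImage resource that looks roughly like this. This is only a sketch: the names and labels are illustrative, not the operator's exact output.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: warm-mongo4
spec:
  selector:
    matchLabels:
      warm-image: mongo4
  template:
    metadata:
      labels:
        warm-image: mongo4
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: warmer
        image: mongo:4
        # keep the image on the node by running an idle loop
        command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]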

Repository: https://git.blindage.org/21h/warm-image-operator


cert-manager can’t resolve new domain to perform HTTP01 challenge

№ 10443, in the Sysadmin section, December 14th, 2019

You added a new domain to an Ingress resource so that cert-manager would perform the HTTP01 challenge and obtain a new Let's Encrypt certificate, but something goes wrong in the log file:

E1214 14:35:06.644315 1 sync.go:183] cert-manager/controller/challenges "msg"="propagation check failed" "error"="failed to perform self check GET request 'http://test.k8s.blindage.org/.well-known/acme-challenge/nmxxZh0K7iXuOnqGRm52PqymHj8YFVpN2MryLfRdVoU': Get http://test.k8s.blindage.org/.well-known/acme-challenge/nmxxZh0K7iXuOnqGRm52PqymHj8YFVpN2MryLfRdVoU: dial tcp: lookup test.k8s.blindage.org on 10.245.0.10:53: no such host" "dnsName"="test.k8s.blindage.org" "resource_kind"="Challenge" "resource_name"="tls-test-k8s-blindage-org-749846670-0" "resource_namespace"="testing" "type"="http-01"

…and this error repeats multiple times without any progress. This is a managed Kubernetes cluster in DigitalOcean.

To solve this problem, just uncomment these lines in the cert-manager Helm chart values to provide your own nameservers:

podDnsPolicy: "None"
podDnsConfig:
  nameservers:
    - "1.1.1.1"
    - "8.8.8.8"

Voilà! You get your new certificate.


cert-manager tries to issue a certificate in the parent zone

№ 10436, in the Sysadmin section, December 13th, 2019

The backstory: there was a DNS A record code.semantiqo.ru pointing at the IP of the Kubernetes load balancer. I wanted to do it properly and create the whole code.semantiqo.ru zone on the DigitalOcean name servers and manage it from there. Said and done! I added a Let's Encrypt certificate to the Ingress resource, then added three NS records pointing at the DigitalOcean servers in the hosting provider's panel and waited for everything to propagate and start working.

Time went on, and in the logs I kept seeing requests to the parent domain trying to create the challenge records.

I1212 19:07:55.716799 1 controller.go:129] cert-manager/controller/challenges "level"=0 "msg"="syncing item" "key"="production/tls-code-semantiqo-ru-2352965144-0"
I1212 19:07:55.717716 1 dns.go:104] cert-manager/controller/challenges/Present "level"=0 "msg"="presenting DNS01 challenge for domain" "dnsName"="code.semantiqo.ru" "domain"="code.semantiqo.ru" "resource_kind"="Challenge" "resource_name"="tls-code-semantiqo-ru-2352965144-0" "resource_namespace"="production" "type"="dns-01"
E1212 19:07:56.164072 1 controller.go:131] cert-manager/controller/challenges "msg"="re-queuing item due to error processing" "error"="POST https://api.digitalocean.com/v2/domains/semantiqo.ru/records: 404 The resource you were accessing could not be found." "key"="production/tls-code-semantiqo-ru-2352965144-0

I checked all the resources, even completely deleted all the secrets, resources, and so on; nothing helped, as if cert-manager simply did not see the new zone. Then it hit me: just kill that wretched pod with its wretched DNS cache. And what do you know? It worked! cert-manager finally saw the zone correctly and obtained the certificate.
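
If you hit the same thing, restarting the cluster DNS pods is enough to drop the cache. A sketch, assuming CoreDNS carries the usual k8s-app=kube-dns label:

# kill the DNS cache pods, the Deployment recreates them immediately
kubectl -n kube-system delete pod -l k8s-app=kube-dns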

To fix this problem for good, you can follow the link and configure things accordingly.


Private Docker Registry in DigitalOcean Kubernetes with s3 storage in Spaces

№ 10420, in the Sysadmin section, December 4th, 2019

Prepare a ConfigMap with the auth information. Use the command htpasswd -Bbn vlad 123 to create a login and password for each user. There is no need to restart the registry pods to apply changes. You may prefer to store it in a Secret resource; that is up to you.

Example:

---
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: registry-auth
data:
  htpasswd: |
    vlad:$2y$05$anFCx3pAPG/BNxPsEKcau.LPKjWFN7hHkoXbvIMp7Jie97uYafuSq
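
Alternatively, the same ConfigMap can be generated straight from an htpasswd file; the file name here is arbitrary:

# -B bcrypt, -b password on the command line, -c create the file
htpasswd -Bbc ./htpasswd vlad 123
kubectl create configmap registry-auth --from-file=htpasswd=./htpasswd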

Now create the bucket my-own-registry in Spaces and generate an access key id and secret key. Do not forget to set http_secret and nodeSelector; http_secret is required if you want to run multiple pods.
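
The bucket itself can be created through the S3 API. A sketch, assuming the aws CLI is configured with the Spaces access key and secret:

# Spaces is S3-compatible, so a plain S3 client works
aws s3 mb s3://my-own-registry --endpoint-url https://fra1.digitaloceanspaces.com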

Example:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: registry
    spec:
      containers:
      - name: registry
        image: registry:2
        ports:
        - name: registry
          containerPort: 5000
        volumeMounts:
        - mountPath: /auth
          name: auth
        env:
        - name: REGISTRY_STORAGE_DELETE_ENABLED
          value: "true"
        - name: REGISTRY_HEALTH_STORAGEDRIVER_ENABLED
          value: "false"
        - name: REGISTRY_AUTH
          value: "htpasswd"
        - name: REGISTRY_AUTH_HTPASSWD_REALM
          value: "Registry Realm"
        - name: REGISTRY_AUTH_HTPASSWD_PATH
          value: /auth/htpasswd
        - name: REGISTRY_STORAGE
          value: "s3"
        - name: REGISTRY_STORAGE_S3_ACCESSKEY
          value: "TVV3WXZ233MEPEBXFP7X"
        - name: REGISTRY_STORAGE_S3_SECRETKEY
          value: "ERlofd+hb9Ps1oBR5jUJuPa9NIMRSLxvUyulKJnt8S0"
        - name: REGISTRY_STORAGE_S3_BUCKET
          value: "my-own-registry"
        - name: REGISTRY_STORAGE_S3_REGION
          value: "fra1"
        - name: REGISTRY_STORAGE_S3_REGIONENDPOINT
          value: "https://fra1.digitaloceanspaces.com"
        - name: REGISTRY_LOG_LEVEL
          value: "info"
        - name: REGISTRY_HTTP_ADDR
          value: "0.0.0.0:5000"
        - name: REGISTRY_HTTP_SECRET
          value: sexy_pony
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 50m
            memory: 50Mi
      volumes:
      - name: auth
        configMap:
          name: registry-auth
      nodeSelector:
        doks.digitalocean.com/node-pool: infra

The last step exposes the registry. Set the limit on uploaded image size in proxy-body-size; a value of 0 means no limit.

Example:

---
apiVersion: v1
kind: Service
metadata:
  name: registry
  labels:
    name: registry
spec:
  ports:
  - port: 80
    targetPort: registry
    protocol: TCP
    name: registry
  selector:
    name: registry
  type: ClusterIP

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
  name: registry
spec:
  rules:
  - host: registry.k8s.blindage.org
    http:
      paths:
      - backend:
          serviceName: registry
          servicePort: registry
        path: /
  tls:
  - hosts:
    - k8s.blindage.org
    - '*.k8s.blindage.org'
    secretName: k8s-blindage-tls
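
Once DNS and the certificate are in place, a quick smoke test with the user created above:

docker login registry.k8s.blindage.org -u vlad -p 123
docker pull alpine:latest
docker tag alpine:latest registry.k8s.blindage.org/alpine:test
docker push registry.k8s.blindage.org/alpine:test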

Problems:

time="2019-12-14T22:03:19.448702167Z" level=info msg="PurgeUploads starting: olderThan=2019-12-07 22:03:19.439373039 +0000 UTC m=-601559.638413974, actuallyDelete=true"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xc4e6bd]

It's a bug.


