№ 11183 In sections: Programming, Sysadmin
from January 2nd, 2021
Tagged: Docker, Go, Kubernetes, Security, Vault
What if you store your database credentials in Vault and want to turn them into ENV variables for your application at container startup? You can do it for Kubernetes deployments or plain Docker containers with my small program vault-envs.
Add these additional steps to your Dockerfile:
...
...
# add Ubuntu/Debian repo and install vault-envs with fresh certificates
RUN curl http://deb.blindage.org/gpg-key.asc | apt-key add - && \
    echo "deb http://deb.blindage.org bionic main" | tee /etc/apt/sources.list.d/21h.list && \
    apt update
RUN apt install -y ca-certificates vault-envs
# copy entrypoint script
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Your entrypoint script will look like this:
#!/bin/bash
...
...
export eval `vault-envs -token "$VAULT_TOKEN" \
  -vault-url https://vault.blindage.org \
  -vault-path /prod/crm/connection_postgres -envs-prefix "PG_"`
export eval `vault-envs -token "$VAULT_TOKEN" \
  -vault-url https://vault.blindage.org \
  -vault-path /prod/crm/connection_mysql -envs-prefix "MYSQL_"`
export eval `vault-envs -token "$VAULT_TOKEN" \
  -vault-url https://vault.blindage.org \
  -vault-path /prod/crm/connection_api`
...
...
exec "$@"
If some variable names are identical, they get overwritten by the next vault-envs call, so I used prefixes.
Now build the image and run it:
docker run --rm -e VAULT_TOKEN=s.QQmLlqnHnRAEO9eUeoggeK1n crm printenv
and see the results in the container console:
...
VAULT_RETRIEVER=vault-envs
PG_DB_PASS=postgres
PG_DB_PORT=5432
PG_DB_USER=postgres
PG_DB_HOST=db-postgres
PG_DB_NAME=crm
MYSQL_DB_HOST=mysql.wordpress
MYSQL_DB_PASS=
MYSQL_DB_PORT=3306
MYSQL_DB_USER=root
MYSQL_DB_NAME=wordpress
API_HOST=http://crm/api
API_TOKEN=giWroufpepfexHyentOnWebBydHojGhokEpAnyibnipNirryesaccasayls4
...
Wooh! You did it.
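The same image works for the Kubernetes deployments mentioned above: the entrypoint only needs the VAULT_TOKEN environment variable. A minimal sketch with kubectl, assuming the secret name vault-token and the deployment name crm (both are my placeholders, not something vault-envs requires):

# store the Vault token in a Secret
kubectl create secret generic vault-token \
  --from-literal=VAULT_TOKEN=s.QQmLlqnHnRAEO9eUeoggeK1n

# expose every key of that Secret as an env variable in the deployment
kubectl set env deployment/crm --from=secret/vault-token

After that the entrypoint script picks up $VAULT_TOKEN exactly as in the docker run example.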
№ 11114 In section "Sysadmin"
from October 6th, 2020
Tagged: Docker, Linux
Typically this is not very useful because you can mount a directory into containers directly, but… who knows? Maybe you just want it.
For example, you have a directory on your hard drive and want to move its files into a Docker volume:
root@boroda:/tmp/future-volume# find .
.
./somedir
./somedir/config.yaml
./file1
./test.txt
./myfile2
Just run a move (or copy) command in a busybox container:
docker run --rm -it \
  -v my-docker-volume:/destination \
  -v /tmp/future-volume:/source \
  busybox \
  /bin/sh -c "mv /source/* /destination/ && find /destination"
This command mounts the volume (creating it if it does not exist yet), mounts the directory on disk, and moves the files from the disk into the volume.
After the move completes you'll see the tree of moved files:
/destination
/destination/somedir
/destination/somedir/config.yaml
/destination/file1
/destination/test.txt
/destination/myfile2
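If you would rather copy and keep the originals on disk, the same trick works with cp; a sketch with the same mounts (note /source/. so hidden files are included too):

docker run --rm -it \
  -v my-docker-volume:/destination \
  -v /tmp/future-volume:/source \
  busybox \
  /bin/sh -c "cp -a /source/. /destination/ && find /destination"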
That’s all, easy.
№ 10908 In section "Sysadmin"
from January 17th, 2020
Tagged: Docker, ELK
version: '3.7'
services:
  kibana:
    image: kibana:7.3.0
    depends_on:
      - elasticsearch
    networks:
      - elk
  elasticsearch:
    image: elasticsearch:7.3.0
    volumes:
      - esdata:/usr/share/elasticsearch/data
    networks:
      - elk
    ports:
      - 39200:9200
    environment:
      - "discovery.type=single-node"
      - "cluster.name=docker-cluster"
      - "bootstrap.memory_lock=true"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
  oauth:
    # cloned git repo with enabled bitbucket support
    build: ./oauth2_proxy
    image: oauth2proxy
    entrypoint:
      - oauth2_proxy
      - --upstream=http://kibana:5601
      - --email-domain=*
      - --http-address=0.0.0.0:4180
      - --bitbucket-team=my_organization
      - --client-id=zZYjbsBVMBDyaXvk5v
      - --client-secret=wxz3uFvKVBXR2EaQPJAcQyPY44XbyNKT
      - --provider=bitbucket
      - --cookie-secret=cy-BbEK5MgHg5NcQe8FcdQ==
      - --cookie-secure=true
    depends_on:
      - elasticsearch
      - kibana
    ports:
      - 127.0.0.1:4180:4180
    networks:
      - elk
networks:
  elk:
volumes:
  esdata:
    driver: local
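A short usage note based on the ports published above: Kibana is reachable only through the OAuth proxy on loopback, while the Elasticsearch API stays on port 39200.

# build the oauth2_proxy image from the cloned repo and start the stack
docker-compose up -d --build
# Kibana behind Bitbucket login:
#   http://127.0.0.1:4180
# Elasticsearch HTTP API:
#   http://localhost:39200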
№ 10453 In section "Sysadmin"
from December 16th, 2019
Tagged: Docker, Unit
Place the unit_config.json file in the project root directory; it will be moved to /state during the image build. You can find example configs under the #unit hashtag. Do not forget to change the timezone and the packages to install.
FROM ubuntu:eoan
ENV TZ=Asia/Tomsk
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN set -xe \
    && apt-get -y update \
    && apt-get -y install --no-install-recommends gnupg2 curl php mysql-client ca-certificates \
       php-curl php-mysql \
    && curl https://nginx.org/keys/nginx_signing.key | apt-key add - \
    && echo "deb https://packages.nginx.org/unit/ubuntu/ eoan unit" | tee -a /etc/apt/sources.list \
    && echo "deb-src https://packages.nginx.org/unit/ubuntu/ eoan unit" | tee -a /etc/apt/sources.list \
    && apt-get -y update \
    && apt-get -y install unit unit-php unit-dev \
    && unitd --version
RUN rm /etc/init.d/unit
WORKDIR /www/app
COPY . .
RUN mkdir -p /state/certs && mv unit_config.json /state/conf.json \
    && chmod 700 -R /state && chown root:root -R /state
RUN chown -R www-data:www-data /www/app
CMD ["unitd", "--no-daemon", "--state", "/state"]
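A quick build-and-run sketch; the published port is an assumption and must match the listener you define in unit_config.json:

# build the image from the project root (where unit_config.json lives)
docker build -t my-unit-app .
# run it; replace 8080 with whatever listener port your unit_config.json declares
docker run -d --name my-unit-app -p 8080:8080 my-unit-app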
№ 10420 In section "Sysadmin"
from December 4th, 2019
Tagged: DigitalOcean, Docker, Kubernetes
Prepare a ConfigMap with the auth information. Use the command htpasswd -Bbn vlad 123 to create a login and password for the users. There is no need to restart all registry pods to apply changes. You may prefer to store it in a Secret resource instead; it's your choice.
Example:
---
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: registry-auth
data:
  htpasswd: |
    vlad:$2y$05$anFCx3pAPG/BNxPsEKcau.LPKjWFN7hHkoXbvIMp7Jie97uYafuSq
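You can also generate this object straight from the htpasswd output instead of writing the YAML by hand; a sketch (if you go the Secret route, remember to switch the volume in the Deployment below from configMap to secret):

# ConfigMap variant, key name "htpasswd" matches the mount path /auth/htpasswd
kubectl create configmap registry-auth \
  --from-literal=htpasswd="$(htpasswd -Bbn vlad 123)"

# or the same data as a Secret
kubectl create secret generic registry-auth \
  --from-literal=htpasswd="$(htpasswd -Bbn vlad 123)"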
Now create the bucket my-own-registry in Spaces together with an access key ID and secret key. Do not forget to set http_secret and nodeSelector; http_secret is required if you want to run multiple pods.
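One way to create the bucket from the command line, assuming the AWS CLI is configured with your Spaces key pair (the fra1 endpoint matches the deployment below):

# Spaces speaks the S3 protocol, so any S3-compatible client works
aws s3 mb s3://my-own-registry \
  --endpoint-url https://fra1.digitaloceanspaces.com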
Example:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: registry
    spec:
      containers:
        - name: registry
          image: registry:2
          ports:
            - name: registry
              containerPort: 5000
          volumeMounts:
            - mountPath: /auth
              name: auth
          env:
            - name: REGISTRY_STORAGE_DELETE_ENABLED
              value: "true"
            - name: REGISTRY_HEALTH_STORAGEDRIVER_ENABLED
              value: "false"
            - name: REGISTRY_AUTH
              value: "htpasswd"
            - name: REGISTRY_AUTH_HTPASSWD_REALM
              value: "Registry Realm"
            - name: REGISTRY_AUTH_HTPASSWD_PATH
              value: /auth/htpasswd
            - name: REGISTRY_STORAGE
              value: "s3"
            - name: REGISTRY_STORAGE_S3_ACCESSKEY
              value: "TVV3WXZ233MEPEBXFP7X"
            - name: REGISTRY_STORAGE_S3_SECRETKEY
              value: "ERlofd+hb9Ps1oBR5jUJuPa9NIMRSLxvUyulKJnt8S0"
            - name: REGISTRY_STORAGE_S3_BUCKET
              value: "my-own-registry"
            - name: REGISTRY_STORAGE_S3_REGION
              value: "fra1"
            - name: REGISTRY_STORAGE_S3_REGIONENDPOINT
              value: "https://fra1.digitaloceanspaces.com"
            - name: REGISTRY_LOG_LEVEL
              value: "info"
            - name: REGISTRY_HTTP_ADDR
              value: "0.0.0.0:5000"
            - name: REGISTRY_HTTP_SECRET
              value: sexy_pony
          resources:
            limits:
              cpu: 100m
              memory: 200Mi
            requests:
              cpu: 50m
              memory: 50Mi
      volumes:
        - name: auth
          configMap:
            name: registry-auth
      nodeSelector:
        doks.digitalocean.com/node-pool: infra
The last step exposes the registry. Set the limit on image size via proxy-body-size; the value 0 means no limit.
Example:
---
apiVersion: v1
kind: Service
metadata:
  name: registry
  labels:
    name: registry
spec:
  ports:
    - port: 80
      targetPort: registry
      protocol: TCP
      name: registry
  selector:
    name: registry
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
  name: registry
spec:
  rules:
    - host: registry.k8s.blindage.org
      http:
        paths:
          - backend:
              serviceName: registry
              servicePort: registry
            path: /
  tls:
    - hosts:
        - k8s.blindage.org
        - '*.k8s.blindage.org'
      secretName: k8s-blindage-tls
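Once DNS and the certificate are in place, you can log in with the credentials from the htpasswd step and push images; myapp here is just a placeholder name:

docker login registry.k8s.blindage.org -u vlad -p 123
docker tag myapp:latest registry.k8s.blindage.org/myapp:latest
docker push registry.k8s.blindage.org/myapp:latest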
Problems:
time="2019-12-14T22:03:19.448702167Z" level=info msg="PurgeUploads starting: olderThan=2019-12-07 22:03:19.439373039 +0000 UTC m=-601559.638413974, actuallyDelete=true"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xc4e6bd]
It's a bug.