Blog Marcina Bojko

Linux, Windows, servers, and so on ;)


Packer Hyper-V support for CentOS 8.1 is here


Written by marcinbojko

April 22, 2020 at 19:22

Posted in work


Traefik 2.2 + docker-compose – easy start.


Traefik (https://containo.us/traefik/) is a cloud-native router (or, in our case, a load balancer). From the beginning it has offered very easy integration with Docker and docker-compose – using simple objects like labels instead of bulky, static configuration files.

So, why use it?

  • cloud-ready (k8s/docker) support
  • easy configuration, split into a static and a dynamic part. The dynamic part can (as the name suggests) change on the fly, and Traefik is quick to react and adjust.
  • support for modern and intermediate cipher suites (TLS)
  • support for HTTP(S) Layer 7 load balancing, as well as TCP and UDP (Layer 4)
  • out-of-the-box support for Let’s Encrypt – no need to run and worry about certbot
  • out-of-the-box Prometheus metrics support
  • docker/k8s friendly

In the attached example we’re going to create a simple template (static Traefik configuration) plus a dynamic, docker-related config, which can be reused in any of your docker/docker-compose/swarm deployments.
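The docker-compose file later in the post pulls image tags and hostnames from environment variables. A minimal `.env` file next to the compose file could look like this (all values here are assumptions – adjust them to your setup):

```env
TRAEFIK_TAG=v2.2
GRAFANA_TAG=latest
TRAEFIK_HOSTNAME=traefik.test-sp.develop
GRAFANA_HOSTNAME=grafana.test-sp.develop
```

docker-compose reads this file automatically when it sits in the same directory.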

Full example:

https://github.com/marcinbojko/docker101/tree/master/10-traefik22-grafana

traefik.yml

global:
  checkNewVersion: false
log:
  level: DEBUG
  filePath: "/var/log/traefik/debug.log"
  format: json
accessLog:
  filePath: "/var/log/traefik/access.log"
  format: json
defaultEntryPoints:
   - http
   - https
api:
  dashboard: true
ping: {}
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
  file:
    filename: ./traefik.yml
    watch: true
entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"
  metrics:
    address: ":8082"
tls:
  certificates:
    - certFile: "/ssl/grafana.test-sp.develop.cert"
      keyFile: "/ssl/grafana.test-sp.develop.key"
  stores:
    default:
      defaultCertificate:
        certFile: "/ssl/grafana.test-sp.develop.cert"
        keyFile: "/ssl/grafana.test-sp.develop.key"
  options:
    default:
      minVersion: VersionTLS12
      cipherSuites:
        - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
        - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
        - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
      sniStrict: true
metrics:
  prometheus:
    buckets:
      - 0.1
      - 0.3
      - 1.2
      - 5
    entryPoint: metrics

The example above gives a basic configuration listening on ports 80 and 443, automatically redirecting 80 to 443, and enabling modern cipher suites with HSTS.
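Note that HSTS itself is not set in the static file – in Traefik 2.x, HSTS headers are typically attached per router via the headers middleware, configured through labels. A sketch, using label names from the Traefik documentation (the middleware name `hsts` is an assumption):

```yaml
labels:
  # hypothetical 'hsts' middleware sending Strict-Transport-Security headers
  - "traefik.http.middlewares.hsts.headers.stsSeconds=31536000"
  - "traefik.http.middlewares.hsts.headers.stsIncludeSubdomains=true"
  - "traefik.http.middlewares.hsts.headers.stsPreload=true"
  # attach the middleware to a router
  - "traefik.http.routers.grafana-xxl-secure.middlewares=hsts"
```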

So, how do we attach a docker container to this configuration and let Traefik know about it?

docker-compose

version: "3.7"
services:
  traefik:
    image: traefik:${TRAEFIK_TAG}
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "8082:8082"
    networks:
      - front
      - back
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik/etc/traefik.yml:/traefik.yml
      - ./traefik/ssl:/ssl
      - traefik_logs:/var/log/traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`$TRAEFIK_HOSTNAME`, `localhost`)"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.service=api@internal"
      - "traefik.http.services.traefik.loadbalancer.server.port=8080"
  grafana-xxl:
    restart: unless-stopped
    image: monitoringartist/grafana-xxl:${GRAFANA_TAG}
    expose:
     - "3000"
    volumes:
      - grafana_lib:/var/lib/grafana
      - grafana_log:/var/log/grafana
      - grafana_etc:/etc/grafana
      - ./grafana/provisioning:/usr/share/grafana/conf/provisioning
    networks:
      - back
    depends_on:
      - traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.grafana-xxl-secure.entrypoints=https"
      # Host() takes a list of exact hostnames; it does not accept '*' wildcards
      - "traefik.http.routers.grafana-xxl-secure.rule=Host(`${GRAFANA_HOSTNAME}`)"
      - "traefik.http.routers.grafana-xxl-secure.tls=true"
      - "traefik.http.routers.grafana-xxl-secure.service=grafana-xxl"
      - "traefik.http.services.grafana-xxl.loadbalancer.server.port=3000"
      - "traefik.http.services.grafana-xxl.loadbalancer.healthcheck.path=/"
      - "traefik.http.services.grafana-xxl.loadbalancer.healthcheck.interval=10s"
      - "traefik.http.services.grafana-xxl.loadbalancer.healthcheck.timeout=5s"
    env_file: ./grafana/grafana.env

volumes:
  traefik_logs: {}
  traefik_acme: {}
  grafana_lib: {}
  grafana_log: {}
  grafana_etc: {}

networks:
  front:
    ipam:
      config:
        - subnet: 172.16.227.0/24
  back:
    ipam:
      config:
        - subnet: 172.16.226.0/24
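The compose file mounts `./traefik/ssl` as `/ssl`, so the certificate paths referenced in the static Traefik config must exist on the host. For local testing you can generate a self-signed pair with openssl (the CN below just matches the example filenames – use your real hostname, and a real certificate, in production):

```shell
# create the directory the compose file mounts as /ssl
mkdir -p traefik/ssl

# self-signed certificate + key matching the paths used in the traefik config
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=grafana.test-sp.develop" \
  -keyout traefik/ssl/grafana.test-sp.develop.key \
  -out traefik/ssl/grafana.test-sp.develop.cert
```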

Full example with Let’s Encrypt support:

https://github.com/marcinbojko/docker101/tree/master/11-traefik22-grafana-letsencrypt

Have fun!

Written by marcinbojko

April 21, 2020 at 19:47

Posted in work


Vagrant boxes – feel free to use them

Written by marcinbojko

November 26, 2019 at 19:49

Posted in work


Linux Mint Ansible playbook in version 1.1.9 for SysAdmin’s Day

Let’s include DevOps as well 😉

https://github.com/marcinbojko/linux_mint

Written by marcinbojko

July 26, 2019 at 18:22

Posted in work


HV-Packer in version 1.0.8

Written by marcinbojko

May 25, 2019 at 17:01

Posted in work


Veeam Instant Recovery – when life is too short to wait for your restore to complete.

(courtesy of @nixcraft)

“Doing backups is a must – restoring backups is hell.”
So how can we make this unpleasant part more civilised? Let’s say our VM is a couple of hundred GB in size, maybe even a couple of TB, and for some reason we have to do some magic: restore, migrate, transfer – whatever is desired at the moment.

But in opposition to our will, physics is quite unforgiving in this matter – restoring takes time. Even with a bunch of speed-friendly SSD arrays and a 10G/40G network at our disposal, a few hours without their favourite data can still be a ‘no-go’ for our friends on the business side.
In this particular case, Veeam Instant Recovery comes to the rescue.
How does it work?
It uses a quite interesting quirk – in reality, all the time you need to have your VM ready is spent restoring a small, KB-sized VM configuration, creating some sparse files for the drives, and mounting the drives themselves over the network. This way, within 3-5 minutes your machine is up and ready.

But, disks, you say, disks! “Where is my data, my precious data?”
It is still safe, protected in your backup repository, mounted here as read-only volumes.

Phase 1

So, after the initial deploy our VM is ready to serve requests. In the Veeam B&R Console, a task is also waiting for you to choose: begin migrating the VM from the repository onto a real server, or just power it off.

Phase 1 during restore:

  • IO Read is served from the mounted (read-only) disk in the backup repository
  • IO Write from the client is served into a new drive snapshot (or written directly into the sparse Disk_1 file)
  • In the background, data from the backup repository is being moved into the Disk_1_Sparse file




Phase 2

Phase 2 begins when there is no data left to move from the backup repository – this initiates the merging phase, when the ‘snapshot’ (changed data) is merged with the originally restored Disk_1.

As with everything, there are a few PROS and CONS.

PROS:

  • the machine is available within a few minutes, regardless of its size
  • during the Instant Recovery restore, the VM is alive and can (slowly) process all requests

CONS:

  • restoring can take a little longer than a real ‘offline’ restore
  • the space needed during the restore process can be as much as twice your VM size. If you run out of space during the restore, the process will fail and you’ll lose all new data.
  • your data is served with some delay – to read a needed block, the VM has to fetch it from the repository, which means on-the-fly deduplication and decompression
  • if your VM is under heavy use, especially IO Write, restoring can take much longer than anticipated, as there will be no IO left to serve reads and the transfer from the original disk
  • if your restore fails for ANY reason, your data is inconsistent. You’ll have to repeat the process from point 0 – any data changed during the previous restore attempt will be discarded.

So which targets should be ideal for this process?

Any VM which doesn’t change its data much but needs to be restored within a few minutes:

  • frontend servers

Any VM with small-to-medium IO Read usage and little IO Write:

  • fileservers

Which targets should we avoid?

  • database servers
  • cluster role members/nodes

Written by marcinbojko

March 24, 2019 at 19:17

Posted in work


DevOps Linux Mint workstation – your simple ansible playbook.

Written by marcinbojko

January 14, 2019 at 18:59

Posted in open source, work

