Blog Marcina Bojko

Linux, Windows, servers, and so on ;)

Packer Hyper-V support for CentOS 8.1 is here

Written by marcinbojko

April 22, 2020 at 19:22

Posted in work

Traefik 2.2 + docker-compose – easy start.

Traefik (https://containo.us/traefik/) is a cloud-native edge router (or, in our case, a load balancer). From the beginning it has offered very easy integration with Docker and docker-compose – using simple objects like labels instead of bulky, static configuration files.

So, why use it?

  • cloud-ready (k8s/docker) support
  • easy configuration, separated into a static and a dynamic part. The dynamic part can (as the name suggests) change on the fly, and Traefik is the first to react and adjust.
  • support for modern and intermediate cipher suites (TLS)
  • support for HTTP(S) Layer 7 load balancing, as well as TCP and UDP (Layer 4)
  • out-of-the-box support for Let’s Encrypt – no need to run and worry about certbot
  • out-of-the-box Prometheus metrics support
  • docker/k8s friendly

In the attached example we’re going to use it to create a simple template (the static Traefik configuration) plus a dynamic, Docker-related config, which can be reused for any of your docker/docker-compose/swarm deployments.

Full example:

https://github.com/marcinbojko/docker101/tree/master/10-traefik22-grafana

traefik.yml

global:
  checkNewVersion: false
log:
  level: DEBUG
  filePath: "/var/log/traefik/debug.log"
  format: json
accessLog:
  filePath: "/var/log/traefik/access.log"
  format: json
defaultEntryPoints:
   - http
   - https
api:
  dashboard: true
ping: {}
providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
  file:
    filename: ./traefik.yml
    watch: true
entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"
  metrics:
    address: ":8082"
tls:
  certificates:
    - certFile: "/ssl/grafana.test-sp.develop.cert"
      keyFile: "/ssl/grafana.test-sp.develop.key"
  stores:
    default:
      defaultCertificate:
        certFile: "/ssl/grafana.test-sp.develop.cert"
        keyFile: "/ssl/grafana.test-sp.develop.key"
  options:
    default:
      minVersion: VersionTLS12
      cipherSuites:
        - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
        - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
        - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
      sniStrict: true
metrics:
  prometheus:
    buckets:
      - 0.1
      - 0.3
      - 1.2
      - 5
    entryPoint: metrics

In the attached example we have a basic configuration listening on ports 80 and 443, automatically redirecting traffic from 80 to 443, and enabling a modern set of cipher suites with TLS 1.2 as the minimum version.
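
Once the stack is up, the redirect and the TLS settings can be verified quickly from the shell. A minimal sketch – it assumes Traefik is reachable on localhost (which the router rules shown below also match):

# expect a 3xx response pointing to the https:// URL
curl -sI http://localhost/ | head -n 5

# skip certificate verification and check the negotiated TLS version and cipher
curl -skvI https://localhost/ 2>&1 | grep -E 'SSL connection|HTTP/'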

So, how do we attach a Docker container to this configuration and let Traefik know about it?

docker-compose

version: "3.7"
services:
  traefik:
    image: traefik:${TRAEFIK_TAG}
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "8082:8082"
    networks:
      - front
      - back
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik/etc/traefik.yml:/traefik.yml
      - ./traefik/ssl:/ssl
      - traefik_logs:/var/log/traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`$TRAEFIK_HOSTNAME`, `localhost`)"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.service=api@internal"
      - "traefik.http.services.traefik.loadbalancer.server.port=8080"
  grafana-xxl:
    restart: unless-stopped
    image: monitoringartist/grafana-xxl:${GRAFANA_TAG}
    expose:
     - "3000"
    volumes:
      - grafana_lib:/var/lib/grafana
      - grafana_log:/var/log/grafana
      - grafana_etc:/etc/grafana
      - ./grafana/provisioning:/usr/share/grafana/conf/provisioning
    networks:
      - back
    depends_on:
      - traefik
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.grafana-xxl-secure.entrypoints=https"
      - "traefik.http.routers.grafana-xxl-secure.rule=Host(`${GRAFANA_HOSTNAME}`,`*`)"
      - "traefik.http.routers.grafana-xxl-secure.tls=true"
      - "traefik.http.routers.grafana-xxl-secure.service=grafana-xxl"
      - "traefik.http.services.grafana-xxl.loadbalancer.server.port=3000"
      - "traefik.http.services.grafana-xxl.loadbalancer.healthcheck.path=/"
      - "traefik.http.services.grafana-xxl.loadbalancer.healthcheck.interval=10s"
      - "traefik.http.services.grafana-xxl.loadbalancer.healthcheck.timeout=5s"
    env_file: ./grafana/grafana.env

volumes:
  traefik_logs: {}
  traefik_acme: {}
  grafana_lib: {}
  grafana_log: {}
  grafana_etc: {}

networks:
  front:
    ipam:
      config:
        - subnet: 172.16.227.0/24
  back:
    ipam:
      config:
        - subnet: 172.16.226.0/24
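
The ${TRAEFIK_TAG}, ${GRAFANA_TAG} and *_HOSTNAME variables come from an .env file which docker-compose reads from the same directory. A minimal sketch – the values below are only examples, adjust them to your environment:

# .env – read automatically by docker-compose
TRAEFIK_TAG=v2.2
GRAFANA_TAG=latest
TRAEFIK_HOSTNAME=traefik.test-sp.develop
GRAFANA_HOSTNAME=grafana.test-sp.develop

With the file in place, `docker-compose up -d` brings the stack up and Traefik picks up both containers from their labels.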

Full example with Let’s Encrypt support:

https://github.com/marcinbojko/docker101/tree/master/11-traefik22-grafana-letsencrypt

Have fun!

Written by marcinbojko

April 21, 2020 at 19:47

Posted in work

Vagrant boxes – feel free to use them

Written by marcinbojko

November 26, 2019 at 19:49

Posted in work

Linux Mint Ansible playbook in version 1.1.9 for SysAdmin’s Day

Let’s also include DevOps 😉

https://github.com/marcinbojko/linux_mint

Written by marcinbojko

July 26, 2019 at 18:22

Posted in work

That feeling

when after 6 months of work you’re going live with K8S 😉

Written by marcinbojko

July 20, 2019 at 10:22

Posted in work

HV-Packer in version 1.0.8

Written by marcinbojko

May 25, 2019 at 17:01

Posted in work

HV-Packer in version 1.0.7 with Windows Server 2019/1803/1809 Support

Written by marcinbojko

April 29, 2019 at 17:59

Posted in work

Veeam Instant Recovery – when life is too short to wait for your restore to complete.

(courtesy of @nixcraft)

„Doing backups is a must – restoring backups is hell.”
So how can we make this ‚unpleasant’ part more civilised? Let’s say our VM is a couple of hundred GB in size, maybe even a couple of TB, and for some reason we have to do some magic: restore, migrate, transfer – whatever is needed at the moment.

But, in opposition to our will, physics is quite unforgiving in that matter – restoring takes time. Even if we have a bunch of speed-friendly SSD arrays and a 10G/40G network at our disposal, a few hours without their favourite data can still be a ‚no-go’ for our friends from the „business side”.
In this particular case, Veeam Instant Recovery comes to the rescue.
How does it work?
It uses quite an interesting quirk – in reality, all you need to do to have your VM ready is restore its small, KB-sized configuration, create some sparse files for the drives, and mount the drives themselves over the network. This way, within 3-5 minutes your machine is up and ready.

But the disks, you say, the disks! „Where is my data, my precious data?”
It is still safe, protected in your backup repository, mounted here as read-only volumes.

Phase 1

So, after the initial deploy our VM is ready to serve requests. In the Veeam B&R console, a task is also waiting for you to choose: begin migration from the repository onto a real server, or just power the VM off.

Phase 1 during restore:

  • IO reads are served from the disk mounted (read-only) from the backup repository
  • IO writes from the client are stored in a new drive snapshot (or written directly into the sparse Disk_1 file)
  • in the background, data from the backup repository is being moved into the Disk_1_Sparse file




Phase 2

Phase 2 begins when there is no data left to move from the backup repository – this initiates the merging phase, when the ‚snapshot’ (the changed data) is merged with the originally restored Disk_1.

As with everything, there are a few PROS and CONS.

PROS:

  • the machine is available within a few minutes, regardless of its size
  • during the Instant Recovery restore, the VM is alive and can (slowly) process all requests

CONS:

  • restoring can take a little longer than a real ‚offline’ restore
  • the space needed during the restore process can be as much as twice your VM’s size. If you run out of space during the restore, the process will fail and you’ll lose all new data
  • your data is served with some delay – to read a needed block, the VM has to fetch it from the repository, which means on-the-fly deduplication and decompression
  • if your VM is under heavy use, especially IO writes, restoring can take much longer than anticipated, as there will be no IO left to serve reads and the transfer from the original disk
  • if your restore fails for ANY reason – your data is inconsistent. You’ll have to repeat the process from point zero – any data changed in the previous restore attempt will be discarded

So which targets should be ideal for this process?

Any VM which doesn’t change its data much but needs to be restored within a few minutes:

  • frontend servers

Any VM with small-to-medium IO read usage and not many IO writes:

  • fileservers

Which targets should we avoid?

  • database servers
  • cluster role members/nodes

Written by marcinbojko

March 24, 2019 at 19:17

Posted in work

Newest member in Packer’s family – Azure VM images with managed disks.

Written by marcinbojko

March 3, 2019 at 17:29

Posted in work

DevOps Linux Mint workstation – your simple ansible playbook.

Written by marcinbojko

January 14, 2019 at 18:59

Posted in open source, work

Hyper-V Packer Gen2 machines – version 1.0.5

https://github.com/marcinbojko/hv-packer

 

Written by marcinbojko

October 3, 2018 at 20:02

Posted in work

Hyper-V Packer Gen2 machines – version 1.0.4

Written by marcinbojko

May 21, 2018 at 19:44

Posted in work

Packer 1.2.3 and (finally) disk_block_size for Hyper-V

Finally, in version 1.2.3 of Packer, we are allowed to change the VHDX block size during its creation. For Linux machines the suggested value is 1 MiB instead of the default 32 MiB. How does it affect the size of our image?
Maybe like this?

[screenshot: resulting image size comparison]

It’s 59% of the initial image size 🙂
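
For reference, this is roughly how the option sits in the hyperv-iso builder section of a Packer JSON template. A minimal sketch – everything except disk_block_size (given in MiB) is an illustrative placeholder:

{
  "builders": [
    {
      "type": "hyperv-iso",
      "generation": 2,
      "vm_name": "centos7-template",
      "disk_size": 40960,
      "disk_block_size": 1,
      "iso_url": "http://mirror.example.com/CentOS-7-x86_64-Minimal.iso",
      "iso_checksum_type": "sha256",
      "iso_checksum": "put-the-real-sha256-here"
    }
  ]
}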

Written by marcinbojko

April 26, 2018 at 17:18

Posted in work

Simple Foreman Template (with Grafana Dashboard) for Zabbix 3.x

A small project, using Zabbix trappers instead of the zabbix-agent active mode. I wanted to have better control over the pushing layer and intervals.

https://github.com/marcinbojko/foreman-template

Written by marcinbojko

April 24, 2018 at 18:15

Hyper-V Packer Gen2 machines – version 1.0.3

The most „bad ass” release so far 🙂

  • `BREAKING FEATURE` – preparing the switch to submodules/subtree for ./scripts and ./files – to share common code with other providers
  • tree structure in `./scripts` and `./files`, moved to `./extras`
  • [Windows] added a `phase-3.ps1` script to put less generic stuff there. Just uncomment the line with `exit` to get rid of it
  • [Windows] added support for `Windows Server 1709 Edition (Standard)`
  • [Windows] removed some clutter from `bootstrap.ps1`
  • [Windows] added `exit 0` to most of the scripts, as some external commands were leaving packer with non-zero exit codes
  • [CentOS] added a `zeroing.sh` script to make compacting more efficient
  • [CentOS] reworked the UEFI bug – this time, after deploying from the image you can run the script `/usr/local/bin/uefi.sh`, which will recheck and re-add CentOS UEFI entries. For SCVMM deployments (which separate vhdx from vmcx) use `RunOnce`
  • [CentOS] removed clutter from `provision.sh`
  • [CentOS] removed screenfetch, replaced with neofetch
  • [CentOS] reworked `motd.sh` in `/etc/profile.d` to reflect .Xauthority existence

https://github.com/marcinbojko/hv-packer

Written by marcinbojko

February 23, 2018 at 19:34

Posted in work

Hyper-V Packer Gen2 machines – version 1.0.2

Written by marcinbojko

December 17, 2017 at 13:46

Posted in work

Hyper-V Packer’s Virtual Machine Generation 2 Templates – Windows 2016 and CentOS 7.4


As you probably know – HashiCorp’s Packer (https://www.packer.io/) is a great tool for generating customized OS images for future use.

You can use many builders, starting with VirtualBox, Azure or Amazon EC2. You can also use the mechanism of provisioners to customize your image even more.
That becomes very handy when you manage infrastructure and fight for every minute until your machine is available and, later on, ready to work. It’s advisable to put some effort into preparing the most up-to-date images possible, tailored to your organization’s custom needs.

However great a tool, it still lacked a lot of decent documentation on how to prepare images using a less common builder – Microsoft’s Hyper-V. Hyper-V has been a part of Microsoft Windows for a long time; it can be used as a standalone product (Microsoft Hyper-V Server) or as an additional feature (the Hyper-V role in Windows 2016/10). When using Packer with so-called „generation 1” images, the templates are no different from the common examples prepared for the popular VirtualBox builder.

With „Generation 2” images, some more work was required to achieve this goal.

What’s the use for Gen-2 images? A few features that can be useful:

  • secure boot (Windows/Linux)
  • UEFI boot
  • no IDE/COM burden from previous generations
  • boot from SCSI devices
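
On the Packer side, a „Generation 2” build boils down to a couple of extra settings in the hyperv-iso builder. A minimal, illustrative fragment (not the full template from the repository):

{
  "type": "hyperv-iso",
  "generation": 2,
  "enable_secure_boot": true
}

Setting generation to 2 makes the builder create a UEFI-based VM, and enable_secure_boot turns Secure Boot on for it; the remaining builder options stay largely the same as for Generation 1 templates.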

If you want to know more about „Generation 2” – read this post: https://www.altaro.com/hyper-v/comparing-hyper-v-generation-1-2-virtual-machines/

It took me a while to make them Hyper-V ready, also with System Center Virtual Machine Manager support. They were (and are) tested with both (2012 and 2016) editions and have become quite an important part of my deployments.

Long story short – here is my github repository with Hyper-V Windows 2016 and CentOS 7.4 Generation 2 templates.

https://github.com/marcinbojko/hv-packer

Leave me a sign if these were useful to you 🙂

 

 

Written by marcinbojko

November 11, 2017 at 19:30

Posted in work

CentOS 6/7 virtual machines and Production Checkpoints in Hyper-V 2016.

As we may know, Microsoft introduced a new way of doing snapshots/checkpoints in Hyper-V 2016. However, the term „production” is misleading, implying that Standard checkpoints are not production-ready – which is simply not true.
The biggest difference is that Production checkpoints are mostly used with VSS-aware applications (like MS SQL/Exchange, or MS Windows itself), allowing them to flush/sync/commit changes to the filesystem.

As a major difference – production checkpoints don’t save memory or CPU state; after a restore, the machine always starts powered off.

You can choose which way you want to do your snapshots here:

[screenshot: checkpoint type selection in the VM settings]
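
The same choice can also be made from PowerShell on the Hyper-V host – a short sketch (the VM name is just an example):

# switch the VM to production checkpoints, falling back to standard ones when VSS is not possible
Set-VM -Name "centos7" -CheckpointType Production

# verify the current setting
Get-VM -Name "centos7" | Select-Object Name, CheckpointType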

Windows-based virtual machines have supported this since previous versions of the integration services (2012 R2, 8/8.1) and from the start in the case of Windows 2016/10. What about Linux-based CentOS 6/7 machines?

When installed out of the box, without any additional packages, trying to take a production checkpoint of CentOS 7 (with all updates) gives us something like this:

[screenshot: production checkpoint failing for the CentOS 7 VM]

Quick how-to.

  1. If you’re using the external LIS (Linux Integration Services) package from Microsoft – remove it. It’s a piece of crap, breaking kernels from time to time, packed with ‚latest’ errors and workarounds rejected by the Linux kernel maintainers. It’s really not worth the risk to have it installed:
    yum remove microsoft-hyper-v kmod-microsoft-hyper-v

    or

    yum remove $(yum list installed | grep -i microsoft | awk '{print $1}')
  2. Check if your Hyper-V host offers all Integration Services for this VM (a PowerShell alternative is sketched after this list). [screenshot: VM Integration Services settings]
  3. Check and install hyperv-daemons
     yum info hyperv-daemons

    Available Packages
    Name        : hyperv-daemons
    Arch        : x86_64
    Version     : 0
    Release     : 0.29.20160216git.el7
    Size        : 4.5 k
    Repo        : base/7/x86_64
    Summary     : HyperV daemons suite
    URL         : http://www.kernel.org
    License     : GPLv2
    Description : Suite of daemons that are needed when Linux guest
                : is running on Windows Host with HyperV

    yum install hyperv-daemons -y
  4. Enable and start services
    systemctl enable hypervfcopyd
    systemctl enable hypervkvpd
    systemctl enable hypervvssd
    
    systemctl start hypervkvpd 
    systemctl start hypervvssd 
    systemctl start hypervfcopyd
  5. Check status
    [root@centos7 ~]# systemctl status hypervkvpd
    ● hypervkvpd.service - Hyper-V KVP daemon
     Loaded: loaded (/usr/lib/systemd/system/hypervkvpd.service; static; vendor preset: enabled)
     Active: active (running) since Wed 2017-07-26 02:37:30 CDT; 14s ago
     Main PID: 3478 (hypervkvpd)
     CGroup: /system.slice/hypervkvpd.service
     └─3478 /usr/sbin/hypervkvpd -n
    
    Jul 26 02:37:30 centos7 systemd[1]: Started Hyper-V KVP daemon.
    Jul 26 02:37:30 centos7 systemd[1]: Starting Hyper-V KVP daemon...
    Jul 26 02:37:30 centos7 KVP[3478]: KVP starting; pid is:3478
    Jul 26 02:37:30 centos7 KVP[3478]: KVP LIC Version: 3.1
    [root@centos7 ~]# systemctl status hypervvssd
    ● hypervvssd.service - Hyper-V VSS daemon
     Loaded: loaded (/usr/lib/systemd/system/hypervvssd.service; static; vendor preset: enabled)
     Active: active (running) since Wed 2017-07-26 02:37:30 CDT; 27s ago
     Main PID: 3485 (hypervvssd)
     CGroup: /system.slice/hypervvssd.service
     └─3485 /usr/sbin/hypervvssd -n
    
    Jul 26 02:37:30 centos7 systemd[1]: Started Hyper-V VSS daemon.
    Jul 26 02:37:30 centos7 systemd[1]: Starting Hyper-V VSS daemon...
    Jul 26 02:37:30 centos7 hypervvssd[3485]: Hyper-V VSS: VSS starting; pid is:3485
    Jul 26 02:37:30 centos7 hypervvssd[3485]: Hyper-V VSS: VSS: kernel module version: 129
    [root@centos7 ~]# systemctl status hypervfcopyd
    ● hypervfcopyd.service - Hyper-V FCOPY daemon
     Loaded: loaded (/usr/lib/systemd/system/hypervfcopyd.service; static; vendor preset: disabled)
     Active: active (running) since Wed 2017-07-26 02:37:30 CDT; 44s ago
     Main PID: 3492 (hypervfcopyd)
     CGroup: /system.slice/hypervfcopyd.service
     └─3492 /usr/sbin/hypervfcopyd -n
    
    Jul 26 02:37:30 centos7 systemd[1]: Started Hyper-V FCOPY daemon.
    Jul 26 02:37:30 centos7 systemd[1]: Starting Hyper-V FCOPY daemon...
    Jul 26 02:37:30 centos7 HV_FCOPY[3492]: starting; pid is:3492
    Jul 26 02:37:30 centos7 HV_FCOPY[3492]: kernel module version: 1

    As a result:
    [screenshot: production checkpoint completing successfully]
    and in /var/log/messages

    Jul 26 02:43:27 centos7 journal: Hyper-V VSS: VSS: op=FREEZE: succeeded
    
    Jul 26 02:39:25 centos7 systemd: Time has been changed
    
    Jul 26 02:39:25 centos7 journal: Hyper-V VSS: VSS: op=THAW: succeeded
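
For step 2, the state of the Integration Services can also be checked from the Hyper-V host with PowerShell instead of the GUI – a short sketch (the VM name is just an example):

# list all integration services offered to the VM and whether they are enabled
Get-VMIntegrationService -VMName "centos7"

# enable any services that are currently disabled
Get-VMIntegrationService -VMName "centos7" | Where-Object { -not $_.Enabled } | Enable-VMIntegrationService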

 

Written by marcinbojko

July 26, 2017 at 18:05

Posted in work

Petya (notPetya) ransomware attack and how to (quickly) vaccinate lots of machines

There were a lot of nice summary articles about the latest „ransomware” attack caused by Petya. Soon, researchers started to claim an almost permanent vaccine for this type of worm.

https://www.bleepingcomputer.com/news/security/vaccine-not-killswitch-found-for-petya-notpetya-ransomware-outbreak/

Even a patched OS won’t save you from infection, as one infected machine quickly spreads it using other protocols like WinRM.

So, how should one vaccinate hundreds of machines on a vast server farm?

For example, like this 🙂

win_manage:
  dsc_file:    
    petya_vaccine1:
      dsc_destinationpath: C:\Windows\perfc
      dsc_type: file
      dsc_attributes: readonly
      dsc_contents: ""
    petya_vaccine2:
      dsc_destinationpath: C:\Windows\perfc.dat
      dsc_type: file
      dsc_attributes: readonly
      dsc_contents: ""
    petya_vaccine3:
      dsc_destinationpath: C:\Windows\perfc.dll
      dsc_type: file
      dsc_attributes: readonly
      dsc_contents: ""

 

Written by marcinbojko

July 1, 2017 at 11:14

After LDI 2017


Written by marcinbojko

April 22, 2017 at 20:16

Posted in work
