Blog Marcina Bojko

Linux, Windows, servers, and so on ;)

Posts Tagged ‘linux’

CentOS 6/7 virtual machines and Production Checkpoints in Hyper-V 2016


As you may know, Microsoft introduced a new way of doing snapshots/checkpoints in Hyper-V 2016. However, the term ‘production’ is misleading, implying that Standard checkpoints are not production-ready – which is simply not true.
The biggest difference is that Production checkpoints are meant for VSS-aware applications (like MS SQL Server, Exchange, or Windows itself), allowing them to flush/sync/commit changes to the filesystem before the checkpoint is taken.

Another major difference: Production checkpoints don’t save memory or CPU state, so after a restore the machine always starts powered off.

You can choose which way you want to do your snapshots here:

[screenshot: choosing the checkpoint type in the VM settings]

Windows-based virtual machines have supported this since earlier versions of the integration services (Windows 2012 R2, 8/8.1), and from the start on Windows 2016/10. What about Linux-based CentOS 6/7 machines?

Installed out of the box, without any additional packages, a fully updated CentOS 7 responds to a production checkpoint attempt with something like this:

[screenshot: the production checkpoint fails with an error]

Quick how-to.

  1. If you’re using the external LIS (Linux Integration Services) package from Microsoft – remove it. It’s a piece of crap, breaking kernels from time to time, packed with ‘latest’ fixes and workarounds rejected by the Linux kernel maintainers. It’s really not worth the risk to keep it installed:
    yum remove microsoft-hyper-v kmod-microsoft-hyper-v

    or

    yum remove $(yum list installed | grep -i microsoft | awk '{print $1}')
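A more careful variant of the blanket removal above is to filter the installed-package list first and see exactly what would go. `filter_lis` is a helper name of my own; the sample listing below is illustrative – on a real guest, pipe `rpm -qa` into it instead. Note that `hyperv-daemons` is deliberately not matched: that is the in-distro replacement you want to keep.

```shell
# Pick the external Microsoft LIS packages out of an `rpm -qa` style listing.
filter_lis() {
  grep -i 'microsoft-hyper-v' || true
}

# Sample listing for a dry run; on the guest use: rpm -qa | filter_lis
sample="kernel-3.10.0-514.el7.x86_64
kmod-microsoft-hyper-v-4.1.3-20170623.x86_64
microsoft-hyper-v-4.1.3-20170623.x86_64
hyperv-daemons-0-0.29.20160216git.el7.x86_64"

to_remove=$(printf '%s\n' "$sample" | filter_lis)
printf '%s\n' "$to_remove"
# On the guest: yum remove $(rpm -qa | filter_lis)
```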
  2. Check that your Hyper-V host offers all Integration Services for this VM. [screenshot: the VM’s Integration Services settings]
  3. Check and install hyperv-daemons:
     yum info hyperv-daemons

    Available Packages
    Name : hyperv-daemons
    Arch : x86_64
    Version : 0
    Release : 0.29.20160216git.el7
    Size : 4.5 k
    Repo : base/7/x86_64
    Summary : HyperV daemons suite
    URL : http://www.kernel.org
    Licence : GPLv2
    Description : Suite of daemons that are needed when Linux guest : is running on Windows Host with HyperV

    yum install hyperv-daemons -y
  4. Enable and start services
    systemctl enable hypervfcopyd
    systemctl enable hypervkvpd
    systemctl enable hypervvssd
    
    systemctl start hypervkvpd 
    systemctl start hypervvssd 
    systemctl start hypervfcopyd
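The six enable/start calls above can be collapsed into one loop. The `DRY_RUN` guard is an addition of mine for illustration – with `DRY_RUN=1` the commands are only printed; drop the guard (or set it to 0) on the actual guest to execute them.

```shell
DRY_RUN=1
planned=""
for svc in hypervkvpd hypervvssd hypervfcopyd; do
  for action in enable start; do
    if [ "$DRY_RUN" = "1" ]; then
      # Record and print what would be run instead of running it.
      planned="$planned systemctl $action $svc;"
      echo "systemctl $action $svc"
    else
      systemctl "$action" "$svc"
    fi
  done
done
```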
  5. Check status
    [root@centos7 ~]# systemctl status hypervkvpd
    ● hypervkvpd.service - Hyper-V KVP daemon
     Loaded: loaded (/usr/lib/systemd/system/hypervkvpd.service; static; vendor preset: enabled)
     Active: active (running) since Wed 2017-07-26 02:37:30 CDT; 14s ago
     Main PID: 3478 (hypervkvpd)
     CGroup: /system.slice/hypervkvpd.service
     └─3478 /usr/sbin/hypervkvpd -n
    
    Jul 26 02:37:30 centos7 systemd[1]: Started Hyper-V KVP daemon.
    Jul 26 02:37:30 centos7 systemd[1]: Starting Hyper-V KVP daemon...
    Jul 26 02:37:30 centos7 KVP[3478]: KVP starting; pid is:3478
    Jul 26 02:37:30 centos7 KVP[3478]: KVP LIC Version: 3.1
    [root@centos7 ~]# systemctl status hypervvssd
    ● hypervvssd.service - Hyper-V VSS daemon
     Loaded: loaded (/usr/lib/systemd/system/hypervvssd.service; static; vendor preset: enabled)
     Active: active (running) since Wed 2017-07-26 02:37:30 CDT; 27s ago
     Main PID: 3485 (hypervvssd)
     CGroup: /system.slice/hypervvssd.service
     └─3485 /usr/sbin/hypervvssd -n
    
    Jul 26 02:37:30 centos7 systemd[1]: Started Hyper-V VSS daemon.
    Jul 26 02:37:30 centos7 systemd[1]: Starting Hyper-V VSS daemon...
    Jul 26 02:37:30 centos7 hypervvssd[3485]: Hyper-V VSS: VSS starting; pid is:3485
    Jul 26 02:37:30 centos7 hypervvssd[3485]: Hyper-V VSS: VSS: kernel module version: 129
    [root@centos7 ~]# systemctl status hypervfcopyd
    ● hypervfcopyd.service - Hyper-V FCOPY daemon
     Loaded: loaded (/usr/lib/systemd/system/hypervfcopyd.service; static; vendor preset: disabled)
     Active: active (running) since Wed 2017-07-26 02:37:30 CDT; 44s ago
     Main PID: 3492 (hypervfcopyd)
     CGroup: /system.slice/hypervfcopyd.service
     └─3492 /usr/sbin/hypervfcopyd -n
    
    Jul 26 02:37:30 centos7 systemd[1]: Started Hyper-V FCOPY daemon.
    Jul 26 02:37:30 centos7 systemd[1]: Starting Hyper-V FCOPY daemon...
    Jul 26 02:37:30 centos7 HV_FCOPY[3492]: starting; pid is:3492
    Jul 26 02:37:30 centos7 HV_FCOPY[3492]: kernel module version: 1

    As a result:
    [screenshot: the production checkpoint completes successfully]
    and in /var/log/messages:

    Jul 26 02:43:27 centos7 journal: Hyper-V VSS: VSS: op=FREEZE: succeeded
    
    Jul 26 02:39:25 centos7 systemd: Time has been changed
    
    Jul 26 02:39:25 centos7 journal: Hyper-V VSS: VSS: op=THAW: succeeded
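A quick sanity check that a production checkpoint really froze and thawed the guest filesystems is to look for the matching FREEZE/THAW pair in the log. `vss_ops` is a helper of my own; it is exercised below on the sample log lines from this post – on the guest, feed it /var/log/messages instead.

```shell
# Extract the VSS operation results from Hyper-V VSS log lines.
vss_ops() {
  grep -o 'op=[A-Z]*: succeeded' || true
}

sample="Jul 26 02:43:27 centos7 journal: Hyper-V VSS: VSS: op=FREEZE: succeeded
Jul 26 02:39:25 centos7 journal: Hyper-V VSS: VSS: op=THAW: succeeded"

ops=$(printf '%s\n' "$sample" | vss_ops)
printf '%s\n' "$ops"
# On the guest: vss_ops < /var/log/messages
```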

 

Written by marcinbojko

July 26, 2017 at 18:05

Posted in work


10 Myths about Hyper-V

In my lectures and meetings with both IT and management, I’ve had the pleasure of playing myth buster for Hyper-V. As much as I don’t appreciate Microsoft’s ‘way of life’ – Hyper-V is mostly feared due to a lack of proper knowledge and the very low quality of Microsoft’s support. The most common myths are:

  1. It’s very expensive
    Let’s calculate:
    If you’re going to use standalone hosts with Microsoft Hyper-V Server, your cost will be exactly zero point zero.
    If you’re going to run lots of Microsoft Windows virtual machines on them, you can rent them as SPLA licenses (per machine), or just rent a Windows Datacenter edition for the whole host.
    If you’re going to use Linux machines (assuming open-source, not paid editions) – again – zero point zero.
    If you’re going to use HA, all you need is one (preferably more) OS license for a Domain Controller.
    If you’re going to manage standalone hosts, all you need (and only as a suggestion) is a Microsoft Windows 10 Anniversary Edition machine. Just one 🙂
    You don’t have to pay extra for the fine features like HA, Live Migration, or Cluster Aware Updates. In Windows Server 2016 a few extra features are available only in the Datacenter edition (which I believe is a grave mistake), but that’s all.
  2. It requires System Center to manage
    No. In fact, SC is only useful when you have lots of VLANs, logical networks, templates to deploy, or Services – and nobody really uses the Services part. System Center Virtual Machine Manager is nothing more than an overgrown cancer on top of the PowerShell scripts it runs. Since the 2012 edition Microsoft has never managed to fix even the simplest things, like a responsive console that doesn’t need a manual refresh after every operation.
  3. It’s slower than VMware or ‘any’ other competitor
    No. Hyper-V’s overhead sits mostly at the storage level, and most problems with it are created at the infrastructure-design level.
    For example: if you have a lot of hosts and do not require their virtual machines to be highly available – do not (I repeat) DO NOT connect them all as cluster nodes.
    If you’re using one or two 1 Gb iSCSI NICs as the paths for low-priority machines, expect nothing but problems.
    Instead, use local storage combined with low-end hardware controllers. Even having 2 disks (mirror) or 4 (RAID 5 or RAID 10) for those machines is way better than one underpowered ‘best in the world’ storage array. Plan this usage carefully – you still have things like Shared Nothing Live Migration for maintenance on a specific host.
    Creating a lot of hosts and giving them all one or two Cluster Shared Volumes is just asking for trouble.
  4. It requires a lot of PowerShell knowledge
    No. And I am the best example here 😉 With a few exceptions – a script for installing Hyper-V hosts, maybe creating a few LACP teams – that’s all I’ve used PowerShell for.
  5. It doesn’t support Linux
    It does, and it does it very well.
    Official document: https://technet.microsoft.com/en-us/windows-server-docs/compute/hyper-v/supported-linux-and-freebsd-virtual-machines-for-hyper-v-on-windows
    A few of my lectures: https://marcinbojko.wordpress.com/2014/10/22/xxi-spotkanie-regionalnej-lubelskiej-grupy-microsoft-i-moj-wyklad-systemy-linux-na-platformie-hyper-v-2012/
    As a matter of fact, I use CentOS/RedHat and Ubuntu/Debian machines and appliances, and have to say: working with them on Hyper-V is a simple pleasure.
    In 2016, with things like Hyper-V 2016 and Veeam, support for Linux machines on Hyper-V is very much alive. Even our beloved ‘System Center Virtual Machine Manager’ supports creating templates for Linux machines, with a small agent that sets lots of things during and after deployment.
  6. It’s complicated to install, run, and maintain – especially HA & clusters
    No. It is as simple as clicking next, next, next, finish a few times.
    Using System Center or (better) Failover Cluster Manager from any Windows Server machine works perfectly out of the box. The rules are simple, and the wizard will tell you what to do next.
    With maintenance mode, live storage migration, and Cluster Aware Updates you can have a stable and secure environment for your machines. Even migrating machines between different clusters (Shared Nothing Migration) is secure and efficient.
  7. It requires specific hardware
    One of the biggest myths. Contrary to what you may have learned the hard way with VMware hosts, you do not need special NICs, special motherboards, or devices from a very narrow HCL. Hyper-V’s requirements are very small: a VT-enabled CPU, enough memory to fit the VMs and the host OS itself, one HDD, one NIC. For small setups this almost means you can use desktops and other workstations as a Hyper-V farm.
    After hearing this myth from one of my clients, I began to pursue the subject. It was someone from the VMware camp who had told him ‘you will need special hardware for SMB3 and SMB Direct’ – which is correct in the same way as ‘if you want milk, you need a cow’ 😉
  8. It doesn’t work with Azure
    Hyper-V 2016 is light years ahead of Azure :) They still seem to be using Windows 2008 as hosts, with all of its negative aspects.
    But jokes aside, using pre-built templates or products like Veeam and Windows Azure Pack, building your own hybrid cloud is one of the best things you can do. Don’t trust the sales guy from Microsoft forcing you to ‘move everything to the cloud, our cloud’. Don’t trust your IT guy saying ‘only on-premises or death!’. Live in both worlds.
  9. I know NOTHING about Hyper-V
    If you have ANY knowledge about Windows, you already have knowledge about Hyper-V itself.
  10. But migration from platform X/Y/Z is a pain in the ...
    Take a deep breath. Calculate it. Find tools to do it manually, or recreate all your machines using some kind of CM tool (like the aforementioned The Foreman/Puppet) – https://marcinbojko.wordpress.com/2016/10/04/puppet-the-foreman-powershell-dsc-your-system-center-in-a-box/. Calculate it again.
    Do it 😉

Written by marcinbojko

December 28, 2016 at 17:12

Puppet & The Foreman & Powershell DSC – your System Center in a box :)

A few weeks ago I started a little project – a complete Puppet module called win_manage.

My goal was to manage Windows-based machines almost as easily as Linux servers, with as little code inside as possible (you know, I am not a developer of any kind). And just when I was thinking KISS was no longer possible with this project, I found the Puppet PowerShell DSC module: https://github.com/puppetlabs/puppetlabs-dsc

Adding more resources is a breeze; the biggest part of the work was testing almost every setting provided by Microsoft, to have working examples for the day-to-day SysAdmin/DevOps job.

And yes, I know – there are plenty of tools like this, sold with different price plans, different support plans, and so on. But if you cannot afford pricey tools like Puppet Enterprise or System Center 2012 R2 in your environment, this little project comes to the rescue 🙂

First things first – why?

  1. We get excellent granularity using the Puppet and Foreman architecture, without complicated AD GPOs with filters.
  2. Nested groups/copying groups help a lot in creating cloned environments.
  3. It doesn’t matter what provider you use – physical, virtual, VMware, Hyper-V, Azure – it just works.
  4. With additional modules like Chocolatey and our private sources (and private CDNs) the story is complete – no more AD MSI voodoo. Software deployment and maintenance just got really better.
  5. Deploying is one thing; maintaining and managing is another. Securing running services and making changes permanent in your environment is just as important as the deployment itself.
  6. No more ‘just another script’ approach.
  7. Everyone can afford a simple machine with simple YAML examples 😉

So my work in progress looks just like this:


Dashboard


Host Groups


Parameters to set

We love YAML driven configuration: setting users, rules, applications is just as easy as writing very light code:

Setting registry:

tightvncpassword:
  dsc_key: HKEY_LOCAL_MACHINE\SOFTWARE\TightVNC\Server
  dsc_valuename: Password
  dsc_valuedata: af af af af af af af af
  dsc_valuetype: binary
tightvncpasswordcontrol:
  dsc_key: HKEY_LOCAL_MACHINE\SOFTWARE\TightVNC\Server
  dsc_valuename: ControlPassword
  dsc_valuedata: af af af af af af af af
  dsc_valuetype: binary

Adding features:

Web-Server:
  dsc_ensure: present
  dsc_name: Web-Server
  dsc_includeallsubfeature: true
DSC-Service:
  dsc_ensure: present
  dsc_name: DSC-Service

Installing and maintaining the latest versions of packages:

chocolatey:
  ensure: latest
powershell:
  ensure: latest
doublecmd:
  ensure: latest
conemu:
  ensure: latest

So, what next? I will be adding more DSC resources to the module and hopefully will be able to make it public. Stay tuned and keep your fingers crossed 😉

 

Written by marcinbojko

October 4, 2016 at 19:11

Linux Mint 17.1 and Napiprojekt

At work I really appreciate the Linux installed on my laptop, especially when I need a stable system for diagnosing, designing, or solving a problem. Applications, scripts, the UI – in the daily work of an admin/architect it has no equal; it is very hard for me to achieve similar functionality on a workstation running a Windows-family system.

At home – until now a Windows stronghold – the laptop mentioned above was always enough. Until I bought a good desktop setup with two solid 24-inch monitors.

The last remaining reason to use Microsoft’s system was games – but is it worth maintaining a whole OS with all its other tooling just for that? Thanks to pressure from Valve and the upcoming Steam Machines, more than 30% of the titles in my Steam account already have Linux counterparts. A quick dual-boot with Linux definitely confirmed this thesis.

The move itself (exceptionally quick: copying /home from the laptop and adding the required repositories and packages) is a topic for another article. The only things I missed from the previous OS were easy access to Napiprojekt’s resources and instantly matching subtitles to video files.

On Linux Mint 17/17.1 we can use a repository containing the latest version of QNapi (1.6-rc2-1) for our architecture, or download the .deb file directly.

sudo add-apt-repository ppa:patryk-prezu/ppa
sudo apt-get update && sudo apt-get install qnapi

Once QNapi is installed, it is worth adding two extra actions to the Nemo file manager, allowing you to download subtitles for all selected files, in the languages you choose.

In /usr/share/nemo/actions we create two files with the following names and contents:

file name: 98_qnapi_en.nemo_action

[Nemo Action]
Active=true
Name=Pobierz napisy EN z QNapi
Comment=Pobierz napisy EN z QNapi
Name[en]=Download EN subtitles with QNapi
Comment[en]=Download EN subtitles with QNapi
Exec=qnapi -l EN %F
Icon-Name=qnapi
Selection=any
Extensions=avi;mkv;mpg;mp4;asf;divx;mpg;ogm;rmvb;wmv

 

file name: 99_qnapi.nemo_action

[Nemo Action]
Active=true
Name=Pobierz napisy PL z QNapi
Comment=Pobierz napisy PL z QNapi
Name[en]=Download PL subtitles with QNapi
Comment[en]=Download PL subtitles with QNapi
Exec=qnapi -l PL %F
Icon-Name=qnapi
Selection=any
Extensions=avi;mkv;mpg;mp4;asf;divx;mpg;ogm;rmvb;wmv
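Since the two action files differ only in the language code, they can also be generated in one loop. This is a sketch: the file names and the English-only Name/Comment lines are simplified compared to the files above. It writes to the current directory, so it is safe to try; copy the results to /usr/share/nemo/actions afterwards.

```shell
# Generate one Nemo action file per subtitle language.
for lang in EN PL; do
  cat > "qnapi_${lang}.nemo_action" <<EOF
[Nemo Action]
Active=true
Name=Download ${lang} subtitles with QNapi
Comment=Download ${lang} subtitles with QNapi
Exec=qnapi -l ${lang} %F
Icon-Name=qnapi
Selection=any
Extensions=avi;mkv;mpg;mp4;asf;divx;ogm;rmvb;wmv
EOF
done
ls qnapi_*.nemo_action
```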

[screenshot: Nemo context menu with the new QNapi actions]

Written by marcinbojko

March 13, 2015 at 12:40

Posted in Uncategorized


HP Onboard Administrator 4.30 – Keeping up with Industry Standards

As you may (or may not) know – time flows. Once you’ve bought a quite expensive piece of equipment, you want it to stay up to standards and fully updated. You want it to last as long as your project does.
Vendors like to forget about that. It’s convenient to sell you new hardware every year.
I have terrible experience with the Java/ActiveX admin nightmares of DRAC, RSA, Onboard Administrator, and iLO. iLO/OA are almost always needed in a time of crisis, so six months after your last login you can expect, almost for sure, that something will go wrong with these interfaces.
Maybe it’s the Java cache, a Java security exception, your newer browser, a newer OS, a few installed patches – the story is always the same: you have to react quickly, only to find yourself in a never-ending loop of ‘which element causes this?’

It forces you to keep an almost never-ending supply of browsers and virtual machines with different OSes, each with different snapshots, and each snapshot with a different version of Java/Flash/browser/patch set. The plan becomes: to use THIS function, you have to use THIS version. If you want to switch to another function, you have to use a different combination (again: OS/browser/Java/patch set).

It’s a reversed version of Russian roulette – you have almost zero chance of finding the proper, working version in the first 6, 12, or even 48 shots.

And no, the magic ‘cloud’ is not going to solve this.

My quite harsh words are prompted by positive feedback for HP: with OA firmware 4.30 I can use Google Chrome on Linux again to log in, use the Remote Console, and manage blades.

Yeeey for me…

Buuuu for vendors.

Written by marcinbojko

September 13, 2014 at 10:11

Posted in Uncategorized, work


Compacting Virtual Disks in System Center Virtual Machine Manager 2012 R2 or Hyper-V 2012 R2

As we all know, disk space isn’t free. When you allocate disk space to a virtual machine, you want a proper, well-balanced size, to shorten future downtimes. But what is proper and balanced, you ask? There is no short answer to this, but if you’re an IT person, you will always try to allocate more than is needed at the moment. The decision is: should I use static or dynamic disks?

For me, there is no real need for static disks anymore, as there is no real speed difference. So the real advantage of dynamic disks is their smaller size.

In a production environment on Microsoft’s hypervisor, we can expect a VM to grow over time. Due to the nature of virtual machines and virtual disks, the real amount of data a machine uses does not match the virtual disk size. This happens because of hypervisor behaviour: deleting data inside the guest OS is no more than tearing the index card out of a book. The written data is still there, so there is no easy, automated way to compact a virtual disk.

The hypervisor does, however, try to allocate blocks marked as used first, so real expansion happens when you constantly change data without deleting it first. Great examples are databases, log and temporary trash disks, and just plain oversizing. I’ve had a case where a relatively small machine (a 20 GB CentOS install with a 600 GB virtual disk) grew within a few days to fill the whole 600 GB, because its logs were set to DEBUG.

So what are the cons of a virtual disk growing over time?

  • obviously, more space allocated on expensive VM storage
  • more obviously, when you use any kind of backup software, you’re forced to write and store multiple copies of data you don’t actually need
  • the CBT (Changed Block Tracking) table is bigger to process with every run
  • more network traffic with every backup job you have
  • live ‘shared nothing’ migration times grow out of all proportion. If you have a small machine with 20 GB of data on a 600 GB disk, you will have to transfer the whole fatso over the network to the other machine. Even with compression enabled for live migration, it is just unwanted.
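The scale of the problem in the example above is easy to put a number on: a guest with 20 GB of real data on a 600 GB dynamic disk that has grown to full size moves thirty times the useful data on every full backup or shared-nothing migration.

```shell
# Overhead ratio for the 20 GB data / 600 GB allocated example.
real_gb=20
allocated_gb=600
ratio=$(( allocated_gb / real_gb ))
echo "${ratio}x more data transferred than needed"
```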

So what can we do? We can just zero the data you’re not using, and the Hyper-V cmdlets will take care of the rest.

You have to plan downtime for the machine, but according to my tests, zeroing machines with 600 GB disks took 15 to 30 minutes. With smaller sizes it is just a matter of single minutes.
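A back-of-the-envelope check of that 15-30 minute figure, assuming sequential write speed is the bottleneck (the 400 MB/s local-storage figure below is my assumption, not a number from any measurement):

```shell
# 600 GB written sequentially at ~400 MB/s.
size_gb=600
speed_mb_s=400
seconds=$(( size_gb * 1024 / speed_mb_s ))
echo "$(( seconds / 60 )) minutes"
```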

Before you go:
– plan your downtime; compacting should not be interrupted
– take extra caution: zeroing important data instead of unneeded data is just one mistake away
– delete all snapshots/checkpoints
– make sure you have converted VHD to VHDX (optional)
– make sure your disk is set as a dynamic disk
– make sure you have enough room for compacting or resizing; remember that a 600 GB virtual disk may grow to its full size during this process
– remember that compacting clears the CBT table for your backup software – the next backup will be a full backup

The most usable zeroing methods I’ve found are:

Offline methods

Cons
– longer downtime
Pros
– more accurate (no background writes)
– faster (no background writes)
– smaller risk of process failures due to ‘out of space’ events

  • zerofree for Linux ext2/ext3/ext4 disks
  • ntfswipe for Windows based disks
  • dd for both Linux and Windows based disks or exotic fs (btrfs/xfs)

Online methods
Cons
– less accurate (background writes still happen)
– machines become slower or unresponsive
– slower process
– risk of applications failing due to ‘out of space’ events
Pros
– smaller downtime

– SDelete or CCleaner for Windows based disks
– dd for Linux based disks

 

Phase I – Offline zeroing (done with System Rescue CD 4.x)

  • Delete all unneeded data (logs, temps, big files, downloads, caches)
  • Unmount all ext2/3/4/NTFS volumes
  • for NTFS volume: ntfswipe -av /dev/volume_name
  • for ext2/3/4 volume: zerofree -v /dev/volume-name
  • for LVM: vgchange -a y;zerofree -v /dev/mapper/volume_name
Zeroing swap space:
# swapoff -a

# free
total used free shared buffers cached
Mem: 8056340 2643132 5413208 155072 606916 914796
-/+ buffers/cache: 1121420 6934920
Swap: 0 0 0

# blkid |grep swap
/dev/sdb2: UUID="adad0488-3b67-4444-b792-6a1a775b8821" TYPE="swap"
# dd if=/dev/zero|pv -treb|dd of=/dev/sdb2 bs=8192
dd: error writing ‘/dev/sdb2’: No space left on device
1,91GB 0:01:04 [30,3MB/s]
249981+4 records in
249980+4 records out
2047868928 bytes (2,0 GB) copied, 67,931 s, 30,1 MB/s

#mkswap /dev/sdb2 -U "adad0488-3b67-4444-b792-6a1a775b8821"
Setting up swapspace version 1, size = 1999868 KiB
no label, UUID=adad0488-3b67-4444-b792-6a1a775b8821
# swapon -a

# free
 total used free shared buffers cached
Mem: 8056340 3309648 4746692 159844 1112232 919708
-/+ buffers/cache: 1277708 6778632
Swap: 1999868 0 1999868

Phase I – Online zeroing

1. Delete all unneeded data (logs, temps, big files, downloads, caches)

For a Windows machine:
sdelete -z letter:

For a Linux machine:
# dd if=/dev/zero|pv -treb|dd of=/file.zero bs=4096;sync;sync;rm -rfv /file.zero;sync

Phase II – Compacting

1. Shutdown the machine

For System Center VMM 2012 R1/R2

[screenshot: the compact option in SC VMM]
For Hyper-V PowerShell:

Optimize-VHD -Path path_to_vhdx_file.vhdx -Mode Full

 

——-
System Rescue CD – http://www.sysresccd.org/Download
SDelete – http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx
CCleaner Portable – https://www.piriform.com/ccleaner/builds
zerofree – http://manned.org/zerofree/00be91ab

Written by marcinbojko

August 3, 2014 at 13:10

Red Hat Enterprise Linux 7 (RHEL7) – Generation 2 Hyper-V virtual machines

Good news for everyone who has been waiting for RHEL7 – the distribution, now ready to download, is the second distro (after Ubuntu 14.04) to support Generation 2 Hyper-V virtual machines.

The pros?

– starting/stopping a machine takes a fraction of the time it took with RHEL6/CentOS 6

– support for Hyper-V VSS (integrated into the system)

– 3.x-series kernels

– fixed bugs with hot-adding dynamic memory

[screenshots: RHEL7 running as a Generation 2 VM]

Now we’re waiting for the CentOS community to make its move 😉

 

Written by marcinbojko

June 12, 2014 at 18:29

Posted in work

