Blog Marcina Bojko

Linux, Windows, servers, and so on ;)

Posts Tagged ‘hyper-v’

5 serious issues/deal breakers with System Center Virtual Machine Manager.

System Center Virtual Machine Manager was Microsoft’s answer to VMware’s vSphere. It’s Microsoft, so what could have gone wrong? It’s Microsoft – so everything.
Below is a list of the most annoying things; some of them are so serious it makes you wonder – maybe PowerShell is the answer after all? Seriously? In 2017, Microsoft, you FORCE everybody to use a text console again?
In moments of doubt we used to call it an overgrown cancer on top of PowerShell commands.

Let’s start, sorted by weight of crime:

Deal breakers:

1) Terrible things you cannot do in SCVMM but can do in Hyper-V Manager, Failover Cluster Manager or PowerShell, such as:

  • rename your machine while it is powered on (sic)
  • change its MAC address from Dynamic to Static in any way other than manually copying it character by character
  • change the boot order (sic) of machines and templates
  • select all Integration Services offered
  • change the location of the smart paging file
  • change the cluster priority (high/medium/low/do not autostart)

and so on – see the PowerShell sketch below.
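As a hedged illustration – a minimal sketch, not an official SCVMM workaround – most of the items above take a single line of plain Hyper-V PowerShell on the host (the VM name and paths below are made-up examples):

# Assumption: run on the Hyper-V host; "WEB01" is an example VM name
# Rename a VM while it is powered on
Rename-VM -Name "WEB01" -NewName "WEB01-new"

# Switch the first network adapter from a dynamic to a static MAC address
$nic = Get-VMNetworkAdapter -VMName "WEB01-new" | Select-Object -First 1
Set-VMNetworkAdapter -VMNetworkAdapter $nic -StaticMacAddress $nic.MacAddress

# Change the boot order on a Generation 2 VM (boot from the first hard disk)
$disk = Get-VMHardDiskDrive -VMName "WEB01-new" | Select-Object -First 1
Set-VMFirmware -VMName "WEB01-new" -FirstBootDevice $disk

# Enable every Integration Service offered
Get-VM -Name "WEB01-new" | Get-VMIntegrationService | Enable-VMIntegrationService

# Move the smart paging file location (with the VM powered off)
Set-VM -Name "WEB01-new" -SmartPagingFilePath "D:\SmartPaging\WEB01-new"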

2) Console. The console is so terrible that its sorry state is just a good source of memes.

First – the console from Hyper-V Manager/Failover Cluster

[screenshot: Hyper-V Manager/Failover Cluster console window]

Then from SCVMM

[screenshot: SCVMM console window]

  • you cannot attach the console and then power on the machine. You HAVE to power on the machine, wait a few seconds for the console button to become available, then race against time to open it BEFORE the OS starts. You have a better chance of winning a Grand Prix than pulling off the trick above on the first run.
  • the only actions you have are Reconnect and Send Ctrl+Alt+Delete. The never-working ‚Clipboard’ added in SCVMM 2016 requires you to paste text HERE first, and only then is it pasted into the VM console
  • when the console starts before the machine does, you have to kill the application. It’s no good using it ever again – it won’t ‚click’ with the machine you’ve started. Exit? Something terrible may happen.

[screenshot: SCVMM console]

3) Requirements

  • MS SQL Server Standard or Enterprise. https://technet.microsoft.com/en-us/system-center-docs/system-requirements/sql-server-version-compatibility
  • 4 GB RAM required, 16 GB recommended (don’t even bother going below that)
  • A lot of not-really-working tricks are needed to manage hosts from other domains, especially without two-way trusts in place.
  • Price. With the whole gang of System Center tools, prepare to be robbed in broad daylight. It doesn’t matter that you have no intention of using the other components – you have to pay for them. You cannot just pick and buy the component you need – you have to buy and pay for the whole bundle.

4) GUI

  • General slowness of the GUI, regardless of the number of hosts, running tasks or library sizes.
  • The Jobs window – generally unusable with more than one admin or more than one job running, because of the flood of informational messages. Important actions (like who deleted or altered a machine) quickly scroll off the screen, buried under messages like ‚refresh was completed’.
  • Oh, did I mention the ‚Refresh’ habit? Learn it. Learn it and let your fingers memorize it, as you will be using it a lot.

Refresh is required for almost everything. In cases where you DID change something via PowerShell or Hyper-V Manager – I can understand, a refresh may be required. But you will have to hit REFRESH before, during and after ANY action you would like to perform. If not – expect the worst. Virtual machine seems not to respond to your commands? Maybe it’s locked for backup, maybe it hung, maybe it migrated to another host – you have to refresh, refresh and refresh again to persuade SCVMM that it has the most recent data.

Sometimes even a refresh doesn’t work. In cases like recovery or a cluster node failure, you shouldn’t count on SCVMM updating its status before the timer reaches a day or two. Take your time! Sometimes you will have to reboot SCVMM to persuade it to pick up the latest data. So when your action fails – search no more: the VM is probably locked, on another host or powered off. SCVMM takes its auto-refresh very slowly.
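Ironically, the refresh is just another job you can trigger yourself; a minimal sketch with the VMM PowerShell module (the server, host and VM names are examples, not defaults) looks roughly like this:

# Assumption: VMM console/module installed; "vmm01", "hv01" and "WEB01" are example names
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName "vmm01" | Out-Null

# Force SCVMM to re-read a single VM's state instead of waiting for its own timers
Get-SCVirtualMachine -Name "WEB01" | Read-SCVirtualMachine

# Or refresh a whole host when its state looks stale
Get-SCVMHost -ComputerName "hv01" | Read-SCVMHost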

  • General over-complexity of Logical Networks and Switches. You have to create every VLAN again, even if you’ve already done it on a dozen network devices, and fill in variables like subnets and gateways. Then you have to group it all together and, again, attach it to every Hyper-V switch on every host you have.
  • Adding your own custom fields and filling them in is, again, over-complicated and requires a lot of scripting and scheduling it the Windows way.
  • You cannot add, change or sort fields like Operating System. What Microsoft gives you are values like these:
    • Microsoft Windows Server 2012 R2
    • 64-bit edition of Microsoft Server

[screenshot: SCVMM Operating System field values]

  • Hyper-V Integration Services are always a few releases behind. This started to change with Windows Server 2016 and the idea of installing them via Windows Update.
  • Inability to rename the VM folder when the machine changes its name. You have to do a storage/live migration just to rename the folder.
  • Complexity of the generated scripts. One would think creating a new machine is easy: New-SCVirtualMachine with a bunch of parameters (see the sketch after this list). No. The generated script is long, heavy, complex and does things in a completely different manner.
  • Templates – the only way to refresh a template is to create it again, or to replace the VHDX in the library and do some internal tricks.
  • Inability to do anything with a machine while a job is running – all fields are grayed out and you have to wait for the job to finish or fail.
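For contrast, here is roughly what one would naively expect the deployment to look like with the VMM module – a hedged sketch, with the template, host and VM names being examples – while the script SCVMM actually generates runs into hundreds of lines:

# Assumption: "Template-W2012R2", "hv01" and "WEB02" are example names, not SCVMM defaults
$template = Get-SCVMTemplate -Name "Template-W2012R2"
$config   = New-SCVMConfiguration -VMTemplate $template -Name "WEB02"
Set-SCVMConfiguration -VMConfiguration $config -VMHost (Get-SCVMHost -ComputerName "hv01")
New-SCVirtualMachine -Name "WEB02" -VMConfiguration $config -StartVM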

5) Agent

  • if you’re lucky, agents are deployed ALMOST instantly, but adding a host to SCVMM requires it to restart
  • if you’re not lucky, then in the case of an SCVMM upgrade you will have to manually redeploy and reinstall all agents. Quite common, I’d say.
  • [IMPORTANT] The mess the agent leaves on the filesystem is just legendary. Let’s say we would like to migrate our machine from folder d:\vm to e:\vm. After the migration (when we choose the right option) we get:

– empty files in d:\vm\machinename

– machine in e:\vm\machinename

Let’s say we would like to migrate it back for some reason.

We will get:

  • empty d:\vm\machinename
  • empty e:\vm\machinename
  • machine in e:\vm\machinename (1)

And that’s after just two migrations. Do you see the pattern? After a few migrations we have complete chaos on the filesystems, with lots of empty, semi-empty, almost-empty and ‚soon-to-be-empty’ folders. You’ll end up removing them manually – again, if you’re lucky.

  • locked folders after a failed job. Yes, when your migration fails, you will end up with d:\vm\machinename which you’re unable to delete. Sometimes it can be deleted after some time, sometimes after an SCVMM/host reboot, sometimes never.

The list above, far from complete, applies to both SCVMM 2012 R2 and SCVMM 2016. It’s clear that SCVMM is not very high on Microsoft’s ‚to do’ list, as the same errors and mistakes are carried over to newer versions and haunt us to this day.

 

UPDATE (1)
Changed Requirements from (Enterprise) to (Standard, Enterprise)

Written by marcinbojko

February 4, 2017 at 20:08

Posted in work


Hyper-V Integration Services en masse ;)

Yes, we all „love” the way Microsoft gives us new stuff (like forced updates in Windows 10). As we usually know better what should be applied and when, let’s say it is time to upgrade our guest integration services to the latest version.
I usually express the strong conviction that Hyper-V Integration Services should be kept up to date – something most sysadmins disregard. These tools are responsible for keeping your VMs in shape and maintaining communication between them and host services, so they SHOULD always be updated. Of course, as with almost everything, TEST it before applying it to any important environment.
As of 2016-12-31, the latest Hyper-V Integration Services version is 6.3.9600.18398.
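To see which guests are actually behind, a quick check on the host with the Hyper-V module can look like this (just a sketch):

# List every VM with the Integration Services version it currently reports
Get-VM | Select-Object Name, State, IntegrationServicesVersion |
    Sort-Object IntegrationServicesVersion | Format-Table -AutoSize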

Let’s say we would like to upgrade all our Windows machines (those BEFORE Windows 10 and Windows Server 2016). We can do it manually (not cool, not cool at all), semi-automatically (no reboot) or in an almost… magical way – fully automatically 😉

Let’s start by making sure we have the proper source added to Chocolatey:

choco source add -n=public -s"https://www.myget.org/F/public-choco" --priority=10
  • semi-automatic (RDP, psexec, PowerShell) – see also the remoting sketch below
    choco install hvintegrationservices -y
  • fully automatic (with reboot) using The Foreman and win_manage

    win_manage:
        chocolatey_packages:
          hvintegrationservices:
            ensure: 6.3.9600.18398
        dsc_reboot:
          dsc_reboot:
            message: Machine requested a reboot
            when: pending
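If you don’t use The Foreman, the semi-automatic variant can also be fanned out from a single admin box with plain PowerShell remoting – a sketch, assuming Chocolatey and the source above are already present on the guests, and with made-up guest names:

# Example guest names; WinRM must be enabled on them
$guests = "WEB01", "SQL01", "APP01"
Invoke-Command -ComputerName $guests -ScriptBlock {
    choco install hvintegrationservices -y
}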

 

Written by marcinbojko

December 31, 2016 at 15:14

10 Myths about Hyper-V

During my lectures and meetings with both IT and management, I’ve had the pleasure of being a myth buster about Hyper-V. As much as I don’t appreciate Microsoft’s ‚way of life’, Hyper-V is mostly feared due to a lack of proper knowledge and very low quality support from Microsoft. The most common myths are:

  1. It’s very expensive
    Let’s calculate:
    If you’re going to use standalone hosts with Microsoft Hyper-V Server, your cost will be exactly zero point zero.
    If you’re going to run a lot of Microsoft Windows virtual machines on them – you can rent them as SPLA licenses (per machine), or just rent a Windows Datacenter edition license for the whole host.
    If you’re going to use Linux machines (assuming open-source, not a paid edition) – again – zero point zero.
    If you’re going to use HA, all you need is just one (preferably more) OS license for a Domain Controller.
    If you’re going to manage standalone hosts, all you need (and rather as a suggestion) is a Microsoft Windows 10 Anniversary Edition machine. Just one 🙂
    You don’t have to pay extra for all the fine features like HA, Live Migration or Cluster-Aware Updating. With Windows Server 2016 a few extra features are available only in the Datacenter edition (which I believe is a grave mistake), but that’s all.
  2. It requires System Center to be managed
    No. As a matter of fact, System Center is only useful when you have lots of VLANs, Logical Networks, templates to deploy, or Services. In any other case your VLANs are already accounted for, and you can drop Services since nobody uses that part. System Center Virtual Machine Manager is nothing more than an overgrown cancer on top of the PowerShell scripts it runs. Since the 2012 edition Microsoft has never managed to fix the simplest things, like a responsive console or not having to refresh it manually after every operation.
  3. It’s slower than VMware or ‚any’ other competitor
    No. The overhead of Hyper-V is mostly at the storage level, and most problems with it are created at the infrastructure design level.
    For example: if you have a lot of hosts and you do not require the virtual machines on them to be Highly Available – do not (I repeat) DO NOT connect them all as cluster nodes.
    If you’re using one or two 1 Gb iSCSI cards as your paths for low-priority machines – expect nothing but problems.
    Instead, use local storage combined with low-end hardware controllers. Even having 2 (mirror) or 4 (RAID 5 or RAID 10) disks for those machines is way better than having one underpowered ‚best storage in the world’. Plan this usage carefully – you still have things like Shared Nothing Live Migration (in case of maintenance on a specific host).
    Creating a lot of hosts and giving them all one or two Cluster Shared Volumes to share is just asking for trouble.
  4. It requires a lot of PowerShell knowledge
    No. And I am the best example here 😉 With a few exceptions – like a script for installing Hyper-V hosts, or maybe creating a few LACP teams – that’s all I have used PowerShell for (a short sketch follows after this list).
  5. It doesn’t support Linux
    It does, and it does it very well.
    Official document: https://technet.microsoft.com/en-us/windows-server-docs/compute/hyper-v/supported-linux-and-freebsd-virtual-machines-for-hyper-v-on-windows
    A few of my lectures: https://marcinbojko.wordpress.com/2014/10/22/xxi-spotkanie-regionalnej-lubelskiej-grupy-microsoft-i-moj-wyklad-systemy-linux-na-platformie-hyper-v-2012/
    As a matter of fact, I use CentOS/RedHat and Ubuntu/Debian machines and appliances, and I have to say: working with them on Hyper-V is simply a pleasure.
    In 2016, with things like Hyper-V and Veeam, support for Linux machines on Hyper-V is very much alive. Even our beloved ‚System Center Virtual Machine Manager’ supports creating templates for Linux machines, with a small agent that sets a lot of things during and after deployment.
  6. It’s complicated to install, run and maintain, especially HA and clusters
    No. It is as simple as clicking a few times: next, next, next, finish.
    Using System Center or (better) Failover Cluster Manager from any Windows Server machine works perfectly out of the box. The rules are simple, and the wizard will tell you what to do next.
    With maintenance mode, live storage migration and Cluster-Aware Updating you can have a stable and secure environment for your machines. Even migrating machines between different clusters (Shared Nothing Migration) is secure and efficient.
  7. It requires specific hardware
    One of the biggest myths. Unlike what you learn the hard way with VMware hosts, you do not need special NICs, special motherboards or any devices from a very narrow VMware HCL list. The requirements of Hyper-V are very small: a VT-enabled CPU, enough memory to fit the VMs and the host OS itself, one HDD, one NIC. For small setups it almost amounts to using desktops and other workstations as a Hyper-V farm.
    After hearing this statement from one of my clients, I began to pursue the subject. It was someone from the VMware camp who had told him: ‚you will need special hardware for SMB3 and SMB Direct’ – which is correct in roughly the same way as: ‚if you want milk, you need a cow’ 😉
  8. It doesn’t work with Azure
    Hyper-V 2016 is light years ahead of Azure 🙂 They still seem to be using Windows 2008 as hosts, with all of its negative aspects.
    But jokes aside, using pre-built templates or products like Veeam and Windows Azure Pack, creating your own hybrid cloud is one of the best things you can do. Don’t trust the sales guy from Microsoft forcing you to ‚move everything to the cloud, our cloud’. Don’t trust your IT guy saying ‚on premises or death!’. Live in both worlds.
  9. I know NOTHING about Hyper-V.
    If you have ANY knowledge about Windows – you have knowledge about Hyper-V itself.
  10. But migration from platform X/Y/Z is a pain in the….
    Take a deep breath. Calculate it. Find tools to do it manually, or recreate all your machines using some kind of CM tool (like the aforementioned The Foreman/Puppet) – https://marcinbojko.wordpress.com/2016/10/04/puppet-the-foreman-powershell-dsc-your-system-center-in-a-box/. Calculate it again.
    Do it 😉
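As promised in myth 4, here is more or less all the PowerShell a fresh host has ever needed from me – a rough sketch, where the team name, NIC names and switch name are made-up examples:

# Install the Hyper-V role with the management tools (reboot afterwards)
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools

# Create an LACP team out of two example NICs and put a virtual switch on top of it
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1", "NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name "vSwitch-LAN" -NetAdapterName "Team1" -AllowManagementOS $true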

Written by marcinbojko

December 28, 2016 at 17:12

And I landed on BitBucket…

Written by marcinbojko

August 27, 2016 at 09:47

Hyper-V default settings

I decided to make my script public via GitHub. What it does is help make a new Hyper-V installation easier. This little thing has helped me in numerous Hyper-V deployments.

https://github.com/marcinbojko/hv_default

Written by marcinbojko

July 14, 2016 at 17:39

Posted in open source, Uncategorized, work


Blogs I follow – MVPs and more.

It is no secret that I consider Microsoft a company in which engineering thought is smothered by the schizophrenic-marketing side, which shows in its successive moves. Engineers have no way of influencing their own products, the rules of the game change frequently mid-match, and there is a paranoid clinging to the US market alone (I don’t even want to comment on the castrated Xbox Live in Poland). After yet another conference at which I see the same faces of „celebrity evangelists”, I know that nothing in this situation will change for the better.

For those who haven’t seen it – I recommend it, it’s quite an eye-opener:

What saves the market and my willingness to work with Microsoft products are the REAL (not evangelical) MVPs – people employed day to day at all kinds of companies, dealing with real problems, solving them and sharing the solutions with the whole world.

I started my adventure with Hyper-V by trying to verify the differences between competing products – the language of TechNet is hopeless machine newspeak that in no way explains how to achieve what you are trying to do. I can safely say I would still be stuck in the IT Stone Age were it not for contact with MVPs – people who, largely selflessly, guided us through the maze of corporate intricacies.

Currently, to stay in the loop (Microsoft’s babble is a waste of my time), I follow:

  • Not very useful:

System Center: Virtual Machine Manager – http://blogs.technet.com/b/scvmm/

Basically just patch announcements and descriptions of solutions to problems long since solved by the community.

Virtualisation Blog – http://blogs.technet.com/b/virtualization/default.aspx

Same as above. Unfortunately.

  • Mandatory (Microsoft, Hyper-V, File Server):

Łukasz Kałużny – http://blog.kaluzny.pro/ – Łukasz’s blog was where I first tried to understand how Hyper-V works and what its advantages are. Although he writes less often lately, the way he explained things made him simply irreplaceable in the early stages.

Aidan Finn – http://www.aidanfinn.com/ – outspoken, politically incorrect, a hater of Linux and VMware. He posts both his own solutions and, in news-aggregator form, things related to Hyper-V and Azure. He is not afraid to point out, in a not-so-gentle way, who at Microsoft has lost their mind when introducing novelties (yes, yes! more confusion in licensing and support).

Ben Armstrong – http://blogs.msdn.com/b/virtual_pc_guy/ – the Hyper-V program manager. A guy with incredible knowledge, sharing WORKING solutions as part of his official role in Microsoft’s program.

Jose Barreto’s Blog – http://blogs.technet.com/b/josebda/default.aspx – another member of a Microsoft team (the File Server Team). His posts, in turn, focus on storage solutions. Let’s not forget that storage is a STRONG and KEY parameter of a good hypervisor.

Didier Van Hoye, ‚Working Hard in IT’ – http://workinghardinit.wordpress.com/ – where Microsoft’s official methods fail, or the people above have their hands tied for official reasons, Didier steps in. His articles can save your life (or career) when the next brilliant idea arrives from Microsoft’s marketing or management. My honest advice – save his notes to Evernote or Pocket, because in a moment of crisis you will have the solution served on a platter.

Michael Rueefli AKA Dr. MIRU – http://www.miru.ch/ – Michael focuses on Microsoft’s currently most underrated idea, Hyper-V Replica. This solution is great when it works – and when it doesn’t, there is a 99% chance Michael ALREADY knows why and what to do.

Kristian Nese – http://kristiannese.blogspot.com/ – Azure and Hyper-V.

  • Mandatory – sensible people in Poland:

Ziembor’s aggregator – http://ziembor.pl/plitproblogs/ – gathers posts from Polish professionals writing to help others. I will carelessly mention that this very blog is present in it.

Although I in no way aspire to the top league above (the nature of my work will not let me specialize in just a few chosen technologies), I would like to believe (and I do believe, looking at the statistics and emails from my blog) that a solution presented this way will come in handy for someone at a critical moment.

Heartfelt thanks to Michał Panasiewicz (with whom I argue as often as passionately) for inspiring this post.

And as above – dear Microsoft folks – the MVPs really are the only reason that gets me through daily contact with your technologies.

Written by marcinbojko

September 7, 2014 at 19:19

Posted in open source, work


Compacting Virtual Disks in System Center Virtual Machine Manager 2012 R2 or Hyper-V 2012 R2

As we all know, disk space isn’t free. When you allocate disk space to your virtual machine, you always want to allocate a proper, well-balanced size to shorten future downtimes. But what is proper and balanced, you ask? There is no short answer to this, but if you’re an IT person, you will always try to allocate more than is needed right at the moment. The decision is: should I use static or dynamic disks?

For me, there is no real need to use static disks anymore, as there is no real speed difference. So the real pro of using dynamic disks is their smaller size.
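For reference, a dynamically expanding disk is also what the Hyper-V cmdlets create by default – a quick sketch, with the paths and size being examples only:

# Dynamically expanding disk: the file starts small and grows with real data
New-VHD -Path "D:\VM\data01.vhdx" -SizeBytes 600GB -Dynamic

# Fixed disk for comparison: the full 600 GB is allocated up front
New-VHD -Path "D:\VM\data02.vhdx" -SizeBytes 600GB -Fixed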

In a production environment using Microsoft’s hypervisor, we can expect our VMs to grow over time. Due to the nature of virtual machines and virtual disks, the amount of data a machine really uses is not the same number as its virtual disk size. This happens because of hypervisor behaviour – deleting data from the OS inside a virtual machine is no more than throwing away the index card from a book. The written data is still there, so there is no easy, automated way to compact a virtual disk.

The hypervisor, however, tries to allocate blocks marked as used in the first place. So the real expansion happens when you are constantly changing your data without deleting it first. The great examples are databases, log and temporary trash disks, and just plain and simple oversizing. I’ve had a case where a relatively small machine (a 20 GB CentOS install with a 600 GB virtual disk) grew within a few days to fill the whole 600 GB because its logs were set to DEBUG.

So what are the cons of a virtual disk growing over time?

  • obviously, more space allocated on expensive VM storage
  • more obviously, when you’re using any kind of backup software, you’re forced to write and store, in multiple copies, data you really don’t need
  • the CBT (Changed Block Tracking) table is bigger to process with every move
  • more network traffic with every backup job you have
  • live ‚shared nothing’ migration times grow out of all proportion. If you have a small machine with 20 GB of data on a 600 GB disk, you will have to transfer this whole fatso over your network to the other machine. Even with compression enabled for live migration, it is just really unwanted.

So what can we do? We can just zero the data we’re not using, and the Hyper-V cmdlets will take care of the rest.

You have to plan downtime for the machine, but according to my tests, zeroing 600 GB machines took 15 to 30 minutes. With smaller sizes it is just a matter of single minutes.

Before you go (a PowerShell sketch of the preparation steps follows this list):
– plan your downtime; compacting should not be interrupted
– take extra caution – zeroing important data instead of unneeded data is just one mistake away
– delete all snapshots/checkpoints
– make sure you have converted VHD to VHDX (optional)
– make sure your disk is set as a dynamic disk
– make sure you have enough room for compacting or resizing. Remember – if you have a 600 GB virtual disk, during this process it may grow to that full size.
– remember that compacting resets the CBT table used by backup software – the next backup will be a full backup.
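A minimal sketch of those preparation steps in Hyper-V PowerShell (the VM name and paths are examples; run it on the host with the VM shut down):

# Remove all checkpoints/snapshots of the example VM
Get-VMSnapshot -VMName "WEB01" | Remove-VMSnapshot

# Optionally convert an old VHD to a dynamic VHDX
Convert-VHD -Path "D:\VM\WEB01\disk0.vhd" -DestinationPath "D:\VM\WEB01\disk0.vhdx" -VHDType Dynamic

# Confirm the disk really is dynamic and check its current physical size
Get-VHD -Path "D:\VM\WEB01\disk0.vhdx" | Select-Object VhdType, Size, FileSize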

The most usable zeroing methods I found were:

Offline methods

Cons
– longer downtime
Pros
– more accurate (no background writes)
– faster (no background writes)
– smaller risk of process failures due to ‚out of space’ events

  • zerofree for Linux ext2/ext3/ext4 disks
  • ntfswipe for Windows based disks
  • dd for both Linux and Windows based disks or exotic fs (btrfs/xfs)

Online methods
Cons
– less accurate (background writes are still happening)
– machines become slower or unresponsive
– slower process
– risk of applications failing due to ‚out of space’ events
Pros
– shorter downtime

– SDelete or CCleaner for Windows based disks
– dd for Linux based disks

 

Phase I – Zeroing Offline

Offline zeroing (done with System Rescue CD 4.x)

  • Delete all unneeded data (logs, temps, big files, downloads, caches)
  • Unmount all ext2/3/4/ntfs volumes
  • for NTFS volume: ntfswipe -av /dev/volume_name
  • for ext2/3/4 volume: zerofree -v /dev/volume-name
  • for LVM: vgchange -a y;zerofree -v /dev/mapper/volume_name
Zeroing swap space:
# swapoff -a

# free
total used free shared buffers cached
Mem: 8056340 2643132 5413208 155072 606916 914796
-/+ buffers/cache: 1121420 6934920
Swap: 0 0 0

# blkid |grep swap
/dev/sdb2: UUID="adad0488-3b67-4444-b792-6a1a775b8821" TYPE="swap"
# dd if=/dev/zero|pv -treb|dd of=/dev/sdb2 bs=8192
dd: error writing ‘/dev/sdb2’: No space left on device
1,91GB 0:01:04 [30,3MB/s]
249981+4 records in
249980+4 records out
2047868928 bytes (2,0 GB) copied, 67,931 s, 30,1 MB/s

#mkswap /dev/sdb2 -U "adad0488-3b67-4444-b792-6a1a775b8821"
Setting up swapspace version 1, size = 1999868 KiB
no label, UUID=adad0488-3b67-4444-b792-6a1a775b8821
# swapon -a

# free
 total used free shared buffers cached
Mem: 8056340 3309648 4746692 159844 1112232 919708
-/+ buffers/cache: 1277708 6778632
Swap: 1999868 0 1999868

Phase I – Online zeroing

1. Delete all unneeded data (logs, temps, big files, downloads, caches)

For a Windows machine:
sdelete -z letter:

For a Linux machine:
# dd if=/dev/zero|pv -treb|dd of=/file.zero bs=4096;sync;sync;rm -rfv /file.zero;sync

Phase II – Compacting

1. Shut down the machine

For System Center VMM 2012 R1/R2

[screenshot: compacting the virtual disk in the SCVMM console]
For Hyper-V PowerShell:

Optimize-VHD -Path path_to_vhdx_file.vhdx -Mode Full
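In practice I wrap the compacting in a few more lines, since the Full mode typically wants the VHDX mounted read-only – a sketch with example names only:

# Assumption: "WEB01" and the path are examples; the VM must stay off during the operation
Stop-VM -Name "WEB01"

$vhdx = "D:\VM\WEB01\disk0.vhdx"
Mount-VHD -Path $vhdx -ReadOnly          # Full mode expects a read-only mount
Optimize-VHD -Path $vhdx -Mode Full
Dismount-VHD -Path $vhdx

Start-VM -Name "WEB01"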

 

——-
System Rescue CD – http://www.sysresccd.org/Download
SDelete – http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx
CCleaner Portable – https://www.piriform.com/ccleaner/builds
zerofree – http://manned.org/zerofree/00be91ab

Written by marcinbojko

August 3, 2014 at 13:10
