
VMM Hates SAN Groups Or How To Kill Your Cluster

A really nice feature of VMM is that you can integrate it with any SAN that has an SMI-S interface and then perform storage tasks, such as adding disks or even deploying VMs based on SAN snapshots. In fact, if you set up an SMI-S SAN, many standard tasks will be updated to include SAN activities. This is where things start to go off the rails.

You see, most SANs use groups to manage access to LUNs. That way, when you add a LUN you only have to map it to a single group and all servers can see it.

Well, VMM doesn't work this way. It thinks in terms of servers. You'll see this if you add a new LUN from VMM: it will map each server to the LUN rather than adding any obvious group. That's fine, you might think, but things get nasty when you try to remove a server's access.

You see, VMM may not add servers to groups, but it absolutely knows enough about them to do some serious damage. If you remove a server from a cluster, part of the job is to remove that server's access to the cluster disks. This removes not only any access published directly to the server but also any groups that the server is a member of. The side effect is that every other server in the same SAN group loses its disk access too, effectively removing all SAN disks from all cluster nodes.

I first saw this with a SAN that I had never used before and just thought that it might be a bug in that vendor's SMI-S implementation, but I have recently seen the same behaviour with a totally different vendor.

So in short, groups make a heap of sense from the SAN point of view, but if you are going to use SMI-S with VMM then ONLY assign servers to the LUNs.
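
For what it's worth, the way I live with this now is to publish LUNs host by host from VMM PowerShell rather than touching the SAN groups at all. A rough sketch (the LUN and cluster names are made up; I believe these are the standard VMM storage cmdlets, but treat this as a starting point rather than gospel):


# Map a LUN to each cluster node individually rather than via a SAN-side group
# "CSV-Volume1" and "HV-Cluster1" are example names only
$lun = get-scstoragelogicalunit -Name "CSV-Volume1"
foreach ($node in (get-scvmhostcluster -Name "HV-Cluster1").Nodes) {
    register-scstoragelogicalunit -StorageLogicalUnit $lun -VMHost $node
}
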

VMM Bare Metal Builds and why you should use a Native VLAN

VMM Bare Metal Builds are an amazing way to ensure that your Hyper-V servers start out consistent. It’s a bit magical but part of that process just works better when you use a native VLAN. But why is that the case?

First let’s look at the VMM Bare Metal Build process.

  1. The VMM server connects to the hardware management interface and instructs the server to reset. This is immediate, and if you specified the wrong hardware management address, well, congratulations, you just rebooted a server.
  2. The new server being rebuilt goes through its boot process. Hopefully you have it configured to PXE boot. This will get a DHCP address and then request a PXE server to respond.
  3. The WDS server receives the PXE boot request and checks with the VMM server to see whether the request is authorised. If it is, it responds to the request and sends the WinPE image.
  4. The new server loads the WinPE operating system and connects to the network. This network connection is brand new and is in no way connected to the PXE boot. You've just booted into an OS, after all.
  5. The new server runs the VMM scripts to discover the hardware inventory and then sends this to the VMM server.
  6. Once the admin inputs the required information (new server name and possibly network information), the new server begins the build process by cleaning the specified disk and downloading the VHDX image.
  7. The new server then reboots. This time the server is not authorised to PXE boot, so it proceeds to boot off the new VHDX boot image.
  8. The new server then customises the sysprepped operating system, including any static IP address you provided, and performs any additional customisation required by the VMM build process (i.e. adding the Hyper-V and MPIO roles and installing the VMM agent).
  9. You should now be left with a server on the network using the configured network settings.
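
Incidentally, if you want to drive the discovery half of this from the VMM PowerShell console rather than the wizard, it looks something like this (the BMC address and Run As account name are made up, and I'd double-check the parameters against your VMM version):


# Kicks off steps 1 to 5 above: reset via the BMC, PXE boot, and hardware inventory
$bmc = get-scrunasaccount -Name "BMC-Admin"
find-sccomputer -BMCAddress "10.0.0.50" -BMCRunAsAccount $bmc -BMCProtocol "IPMI" -DeepDiscovery
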

There are a few things to note here. Each time the server either PXE boots or boots into WinPE, it's reliant on finding a DHCP server. If you're using trunked, VLAN-tagged ports, and very few people are not these days, then how is this request going to work? It needs to know which VLAN to tag the request with.

Now, you can configure most servers in the BIOS to PXE boot with VLAN tagging, and that's great. Now you have your WinPE image. How does WinPE know about the tagged network? That will depend on the NIC driver for your server. Is it even possible to modify the image so that, when the driver is loaded, it automatically uses VLAN tagging with the correct VLAN ID? It's possible, but it's something else that needs to be managed, and if VMM updates the WinPE image then you need to reconfigure it all over again.

Next, when you boot off the VHDX, this also needs to be configured with the correct VLAN ID. Now, I have to admit I have never got to this stage, since the NIC driver in WinPE has always been a blocker for me, but is VMM able to set the correct VLAN ID? You absolutely need to tell VMM which network switch and logical network to use, but does that mean it will set the VLAN ID correctly? If it doesn't then this is yet another blocker.

So, as you can see, it may be possible to use VLAN tagging throughout the VMM Bare Metal Build process, but you need to ask whether it's worth the additional overhead: managing the server BIOS, the WinPE drivers and configuration, and the OS customisation. There's a lot going on in this process and everything needs to work perfectly to result in a fully built server. Is it worth all that just to avoid setting a network as the native VLAN?

Skype for Business Admin and PowerShell Unresponsive

I had an interesting issue where a Skype for Business admin site would just sit at the spinning wheel at 100%. This environment had two Enterprise pools, so I checked the other site only to find the same thing. At this stage I was fairly convinced that it was bigger than just a bad server.

I then opened up PowerShell, which connected fine. Great!

Next I ran a command after much thought, or more to the point, after typing get-cs<couple of tabs><enter>, which happened to land on Get-CsAdDomain.

So this returned LC_DOMAINSETTINGS_STATE_FAILED. Urgh!

That looks pretty average for what, at this point, is an operational environment.

So next I ran Get-CsUser, and we waited. Yeah, there are a few users in the environment so some delay is to be expected, but after a couple of minutes I knew this wasn't going to end.

I checked the event log and found the following error in the Lync Server log:


Source: LS Remote PowerShell

Level: Error

Event ID: 35009

Remote PowerShell cannot create InitialSessionState.

Remote PowerShell cannot create InitialSessionState for user: S-1-5-21-XXXXXXXXX-XXXXXXXXX-XXXXXXXXX-XXXXX. Cause of failure: Thread was being aborted.. Stacktrace: System.Threading.ThreadAbortException: Thread was being aborted.

at System.Threading.WaitHandle.WaitOneNative(SafeHandle waitableSafeHandle, UInt32 millisecondsTimeout, Boolean hasThreadAffinity, Boolean exitContext)

at System.Threading.WaitHandle.InternalWaitOne(SafeHandle waitableSafeHandle, Int64 millisecondsTimeout, Boolean hasThreadAffinity, Boolean exitContext)

at System.Threading.WaitHandle.WaitOne(Int32 millisecondsTimeout, Boolean exitContext)

at Microsoft.Rtc.Management.Store.Sql.ClientDBAccess.OnBeforeSprocExecution(SprocContext sprocContext)

at Microsoft.Rtc.Common.Data.DBCore.ExecuteSprocContext(SprocContext sprocContext)

at Microsoft.Rtc.Management.Store.Sql.XdsSqlConnection.ReadDocItems(ICollection`1 key)

at Microsoft.Rtc.Management.ScopeFramework.AnchoredXmlReader.Read(ICollection`1 key)

at Microsoft.Rtc.Management.ServiceConsumer.CachedAnchoredXmlReader.Read(ICollection`1 key)

at Microsoft.Rtc.Management.ServiceConsumer.TypedXmlReader.Read(SchemaId schemaId, IList`1 scopeContextList, Boolean useDefaultIfNoneExists)

at Microsoft.Rtc.Management.ServiceConsumer.ServiceConsumer.ReadT

at Microsoft.Rtc.Management.RBAC.ServiceConsumerRoleStoreAccessor.GetRolesFromStore()

at Microsoft.Rtc.Management.Authorization.OcsRunspaceConfiguration.ConstructCmdletsAndScopesMap(List`1 tokenSIDs)

at Microsoft.Rtc.Management.Authorization.OcsRunspaceConfiguration..ctor(IIdentity logonIdentity, IRoleStoreAccessor roleAccessor, List`1 tokenGroups)

at Microsoft.Rtc.Management.Authorization.OcsAuthorizationPlugin.CreateInitialSessionState(IIdentity identity, Boolean insertFormats, Boolean insertTypes, Boolean addServiceCmdlets)

Cause: Remote PowerShell can fail to create InitialSessionState for varied number of reasons. Please look for other events that can give some specific information.

Resolution:

Follow the resolution on the corresponding failure events.


Well, that doesn't look so good. Reading this, it looked like it might be a database issue. That would make sense, since the CMS database is in a single location with all servers accessing it. Even if an object is in AD, Skype for Business will get information about it from a single place: the CMS.

If you have multiple pools, including fail-over pools, there is still just one CMS service.

The database server was busier than expected (60% average CPU for the SQL process and a few deadlocked processes reported in the SQL log), but nothing stood out as really bad and it did seem responsive.

It was at this point that other services using the same SQL server were also reported as down, and the SQL admin made the call to restart the SQL service.

Once restarted everything became responsive again.

Unfortunately I never got to the bottom of what was wrong on the SQL server, but I think it's still good to remember how heavily Skype for Business relies on the database service. Yes, there is a SQL instance on each Skype for Business server, but it isn't used for all processes.

WSUS Performance Issues

WSUS is often a service that is just left alone to do its own thing, but if left alone for long enough it will suffer extreme performance degradation.

This may first become obvious when other systems start to time out. In my case the Windows Update sync from VMM was timing out.


Error (24000)
Error connecting to the WSUS server: wsus-server, Port: 8531. Detailed error: The operation has timed out


When I logged on to the WSUS server I found it extremely slow. Some screens in the WSUS Admin Console would often time out and show a "Reset Server Node" error.

WSUS-Reset-Server-Node

The event log contained the following event:


The WSUS administration console was unable to connect to the WSUS Server via the remote API.

Verify that the Update Services service, IIS and SQL are running on the server. If the problem persists, try restarting IIS, SQL, and the Update Services Service.

System.Net.WebException — The operation has timed out


Useful but unfortunately not correct. After restarting all the services and even the server the performance was still shocking.

Since one of the slow screens showed the status of every synchronization, all 9,200 of them, my first thought was that this was a database fragmentation issue. Microsoft has a database optimization script for this located at https://gallery.technet.microsoft.com/scriptcenter/6f8cde49-5c52-4abd-9820-f1d270ddea61.

Unfortunately this didn’t really make much difference.

A more comprehensive solution was found at https://community.spiceworks.com/scripts/show/2998-wsus-automated-maintenance-formerly-adamj-clean-wsus. It will likely take several hours to run, but it returned the server to normal operation.

The script should be run with the -FirstRun parameter to get the server operational again. It will then automatically create a scheduled task to run the process daily at 8am, which will keep the service optimised.
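
For reference, the first run looks like this from an elevated PowerShell prompt on the WSUS server (assuming the script file name hasn't changed since I downloaded it):


# First run of the AdamJ maintenance script; expect it to churn for hours on a neglected server
.\clean-wsus.ps1 -FirstRun
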

Server Requirements

This may also be a good time to check that the server is sized sufficiently. It may be possible to run this service with a minimal amount of memory and CPU but over time this may not be sufficient.

According to the WSUS system requirements:

Memory: WSUS requires an additional 2 GB of RAM more than what is required by the server and all other services or software.

At first read this may seem to mean you can get away with a 2GB server, but it actually means 2GB dedicated to the WSUS service. If you are running a Windows 2016 server with Desktop Experience, that means a minimum of 4GB of RAM, and even that may be a little light depending on the configuration. Does the server also run the Windows Internal Database, or is it an external SQL service?

You may want to seriously think about giving the server at least 8GB of RAM, or even more, to give it some overhead.

Windows Internal Database and Memory

If you're using the Windows Internal Database then it won't matter how much memory you throw at the server; the database will use all of it. This is typical SQL Server behaviour when the service is configured with no maximum memory limit. You shouldn't accept this on a large SQL server, let alone for a small service like this.

To fix this, install the SQL command line utilities and, if you're using Windows 2012 R2, run the following commands.


sqlcmd -E -S \\.\pipe\Microsoft##WID\tsql\query

exec sp_configure 'show advanced options', 1;
reconfigure;
go

exec sp_configure 'max server memory', 4096;
reconfigure with override;
go

quit


This limits the database process to a maximum of 4GB of RAM. You can cut this down even further, but be aware of how large the database is: if it's a 20GB database and the memory is capped at 2GB then performance will suffer.
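
You can confirm the new cap took effect with another quick sqlcmd session (same pipe as above):


# Shows config_value and run_value for the setting; run_value should now be 4096
sqlcmd -E -S \\.\pipe\Microsoft##WID\tsql\query -Q "exec sp_configure 'max server memory'"
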

Hopefully this will allow you to use your WSUS server again.

Windows Core Hyper-V Setup Using PowerShell

In a previous post I gave some sample PowerShell commands to get a Windows Core server configured with the Hyper-V role and some base networking. Let's have a look at that script and what it does.


install-windowsfeature -name Hyper-V, Data-Center-Bridging, FailOver-Clustering, multipath-IO, hyper-v-powershell, rsat-clustering-powershell, rsat-clustering-cmdInterface, rsat-datacenterBridging-lldp-tools


First up, we need to install the features the server requires. Notice that we really do need to install the PowerShell management tools to do much locally. Yes, you can absolutely get away with running most commands remotely, but there are some changes, like networking, that you might still want to make locally.


new-netlbfoteam -Name "Switch1" -TeamMembers "vNIC1", "vNIC2" -loadbalancingalgorithm HyperVPort


Next we're going to create a Load Balancing and Failover (LBFO) network team. This is the older-style network team introduced in Windows 2012, and you could change this to the newer style of team if you really want to.


new-vmswitch -Name "VMSwitch1" -NetAdapterName "Switch1"


This part is easy. We create a Hyper-V switch connected to the network team from the previous step.
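
One thing to watch (my own preference, not part of the original script): when you give new-vmswitch a -NetAdapterName it also creates a default host vNIC named after the switch. Since we're about to add our own vNICs, you can suppress it:


# Optional: skip the default host vNIC since we add our own below
new-vmswitch -Name "VMSwitch1" -NetAdapterName "Switch1" -AllowManagementOS $false


Just be aware that if this team is your only network connection you'll have no connectivity until the management vNIC below exists, so do this from a console session.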


add-vmnetworkadapter -name "HV-Mgmt" -switchname "VMSwitch1" -managementos
add-vmnetworkadapter -name "HV-CSV" -switchname "VMSwitch1" -managementos
add-vmnetworkadapter -name "HV-LM" -switchname "VMSwitch1" -managementos


Now we can create some virtual network adapters for the Hyper-V host itself to use. In this case we have a vNIC each for Management, CSV traffic, and Live Migration. These adapters are all virtually plugged in to our virtual switch.


set-vmnetworkadaptervlan -vmnetworkadaptername "HV-CSV" -vlanid 2 -access -managementos
set-vmnetworkadaptervlan -vmnetworkadaptername "HV-LM" -vlanid 3 -access -managementos


We don't have these three separate network adapters just for the sake of it; they need to be on different networks to isolate the traffic. So here we configure them with different VLAN IDs. These VLANs need to have been configured on the network switch that the Hyper-V server plugs in to.
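
You can check what you've ended up with at any time. A quick way to list the VLAN configuration of all the host vNICs:


# Lists each host vNIC with its VLAN mode and ID
get-vmnetworkadaptervlan -managementos


The HV-Mgmt adapter should show as Untagged, while HV-CSV and HV-LM show their access VLAN IDs.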

So why don't we set a VLAN ID on the management vNIC? Well, you really want to be able to perform bare metal builds of the Hyper-V servers using VMM, and while it's possible to do this with VLAN tagging on the management adapter, it's far easier without it. By setting the management network as the native VLAN on the Hyper-V server's switch port, any untagged traffic will be put into the Hyper-V Management VLAN. This allows the server to PXE boot and load the WinPE environment without using a VLAN ID. The other side of this is that once you are in Windows you still don't use the actual VLAN ID. Just leave it blank.


New-VMSwitch -Name "VM-Switch2" -NetAdapterName "vNIC3","vNIC4" -EnableEmbeddedTeaming $true


Since we want to be fancy and use the new Windows 2016 Switch Embedded Teaming (SET) for the VM networks, this team is created a different way. We don't need to create the network team first; it's all managed within Hyper-V networking.


Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | Set-NetAdapterAdvancedProperty -RegistryValue "9014"


Almost at the end now. Hyper-V sees significant performance increases when jumbo frames are enabled, particularly when machines are migrated between hosts, but also for any other large network transfers. The problem is that all new network adapters, including the ones we created above, default to having jumbo frames disabled. Turn these on whenever possible. In fact, keep checking that they are still turned on. It's a simple change which results in huge performance benefits.
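
It's worth verifying that the change actually took, and nothing proves jumbo frames end to end like a do-not-fragment ping. A quick check (the target address is just an example; 8972 bytes of ICMP payload plus 28 bytes of headers makes a 9000 byte packet):


# Confirm the adapters now show 9014, then test against another jumbo-enabled host
Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet"
ping 10.0.1.11 -f -l 8972


If a switch port in between isn't configured for jumbo frames the ping will fail, which is exactly what you want to find out now rather than later.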


mpclaim -r -i -a ""


Finally, if you are using a SAN you'll likely have multiple pathways and require MPIO to be enabled. If you don't enable it, you'll see multiple copies of the same disk yet only be using a single path. MPCLAIM will discover any MPIO devices and then reboot the server to enable the configuration.

Now all you need to do is use sconfig to set the IP address for your new vNICs, change your server name and join the domain. Then you can use all your normal tools remotely.
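
If you'd rather script that last step too, it's all doable from PowerShell. A rough sketch, with made-up addresses, server name, and domain:


# Hypothetical values; adjust the interface alias, addresses and domain to suit
new-netipaddress -InterfaceAlias "vEthernet (HV-Mgmt)" -IPAddress 10.0.1.10 -PrefixLength 24 -DefaultGateway 10.0.1.1
set-dnsclientserveraddress -InterfaceAlias "vEthernet (HV-Mgmt)" -ServerAddresses 10.0.1.2, 10.0.1.3
add-computer -DomainName "corp.example.com" -NewName "HV01" -Restart
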

Windows Core isn’t so scary after all.


Update the VMM Bare Metal WinPE Image

The VMM Bare Metal build process is one of those processes that just seems magical when you first see it, but there's a lot going on to make it work. One of the common issues is that the server will boot using PXE but then either won't be able to continue talking to the VMM server or won't see any local disks. Both are usually related to the drivers contained in the WinPE boot image.

This image is managed by VMM, but you will find the current version on the WDS server in the RemoteInstall\DCMgr\Boot\Windows\Images directory, called boot.wim.

If you want to manually update this with new drivers then you can use the script below. You need to run this from the VMM server and it requires that the boot.wim file be located in c:\temp with all drivers extracted into a folder called c:\temp\Drivers. You also need a c:\temp\mount directory for the WinPE image to be mounted to.


$mount = "c:\temp\mount"
$winpeimage = "c:\temp\boot.wim"
$winpetemp = $winpeimage + ".tmp"
$drivers = "C:\temp\Drivers"

copy $winpeimage $winpetemp

dism /mount-wim /wimfile:$winpetemp /index:1 /mountdir:$mount
dism /image:$mount /add-driver /driver:$drivers /recurse
dism /unmount-wim /mountdir:$mount /commit

publish-scwindowspe -path $winpetemp
del $winpetemp


Once the WinPE image has been updated with the new drivers, VMM will distribute the new image to all WDS servers in the environment.

It is also possible to have VMM inject all drivers located in the VMM Library, but I try to stay away from this to minimise the size of the WinPE image. Let VMM install any non-critical drivers as part of its own process.

VMM Duplicate VMs

VMM may discover VMs that already exist in the environment and add them as new VMs. You will end up with two different VMs listed with the same name.

To confirm that this is the case, run the following command in the VMM PowerShell console.


get-vm “Duplicate-VM-Name” | FL NAME,ID,biosguid,location


If you have duplicate machines then everything except the ID, which is assigned by VMM, will match, as shown below.


Name : Duplicate-VM-Name
ID : 42635679-94fb-4149-ad26-66041a8c96eb
BiosGuid : 5cf412b5-3398-4c5a-951f-3e22c7f97d1a
Location : C:\ClusterStorage\volume1

Name : Duplicate-VM-Name
ID : 8c675f1e-6626-4805-b365-f9b6be3d6c7f
BiosGuid : 5cf412b5-3398-4c5a-951f-3e22c7f97d1a
Location : C:\ClusterStorage\volume1


Both of these VMM records refer to the same REAL VM, so if you delete one, the other will go into a missing state.

If you instead use PowerShell with the -Force parameter, the behaviour changes: this removes the VM from the VMM database but does not touch the real VM. You can use the following PowerShell command to do this.


get-vm “Duplicate-VM-Name” | WHERE ID -eq “8c675f1e-6626-4805-b365-f9b6be3d6c7f” | remove-vm -force


You will now just have a single VM again.
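
If the surviving record looks stale afterwards, a refresh doesn't hurt (again from the VMM PowerShell console):


# Force VMM to re-read the VM's state from the host
get-vm "Duplicate-VM-Name" | read-scvirtualmachine
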


VMM 2016 Cluster Upgrades and Resource Groups

In order to upgrade VMM from 2012 R2 to 2016 you need to deploy new management servers and effectively use a lift-and-shift upgrade process. This is because VMM 2012 R2 supports up to Windows Server 2012 R2, while VMM 2016 ONLY supports Windows Server 2016.

If you installed VMM as a failover cluster then you also need to think about how you are going to handle the cluster as part of this upgrade. With Windows 2016 you can add new nodes to an existing Windows 2012 R2 cluster, but there may be reasons to create a brand new cluster. Either way, you need to think carefully about the process you are going to follow.

If you are going to configure a new cluster then you need to decide whether you will use the same VMM Cluster Service name or a new name. If you use a new name then you will need to reassociate all agents once you have completed the installation. Think about any parts of the environment which may also rely on the old VMM server name.

If, on the other hand, you plan to reuse the old name, there are a couple of things to watch out for. Ironically the first and most important is actually the removal of the old VMM nodes. Even if you stop the old VMM cluster service, it appears that uninstalling the last node will still remove the cluster service computer name from the database. This will result in the new VMM service crashing and being unable to restart. Looking at the VMM log located in c:\programdata\VMMLogs\SCVMM.{GUID} you will see the following error:


Base Exception Method Name=Microsoft.VirtualManager.DB.SqlRetryCommand.ValidateReturnValue

Exception Message=Computer cluster-service-name is not associated with this VMM management server.

Check the computer name, and then try the operation again.


If you face this issue, the quickest way to fix it is to uninstall the VMM service, delete the VMM cluster role, and reinstall using the same database and user settings. There may be a way to fix it in the back-end database, but it's most likely not worth the effort at this point.

To avoid this, uninstall VMM from the old cluster nodes first, before doing the upgrade, and make sure that you always select the option to retain the database. You should have a backup of the database already though, right?
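
If you'd like that backup to be VMM's own, it can take one itself. Something like this from the VMM PowerShell console (the path is made up):


# Dumps a SQL backup of the VMM database to the given path; path is an example only
backup-scvmmserver -Path "\\backup01\vmm-backups"
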

The other issue you will need to deal with is cluster permissions. Remember that the VMM cluster service is a virtual computer object, just like the cluster name itself, and the cluster's computer account needs permission to manage the VMM cluster service's account.

When you run the first node installation it may fail after quite some time with the following error:


“Creation of the VMM resource group VMM failed.Ensure that the group name is valid, and cluster resource or group with the same name does not exist, and the group name is not used in the network.”


This is due to the cluster computer account not having access to modify the AD account of the VMM cluster service virtual server. Grant the new cluster computer account full control of the existing cluster service computer account and re-run setup.

While you’re at it make sure that you also grant access to the DNS entries in case these also need to change.
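
If you prefer the command line for the permissions change, dsacls can do it. A sketch with hypothetical names (the DN is the existing VMM cluster service computer object, the trustee is the new cluster's computer account):


# GA = generic all, i.e. full control; the DN and account names are examples only
dsacls "CN=VMM-Cluster,CN=Computers,DC=corp,DC=example,DC=com" /G "CORP\NEWCLUSTER$:GA"
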

Windows Server Core – Is it worth the hassle?

It’s been around for a long time now but how many environments are actually using Windows Server Core? It appears that it’s something that everyone knows they should be using but no one really wants to commit to.

Now Microsoft have made life harder with Windows 2016 by removing the ability to add and remove the GUI, meaning you need to commit up front.

So should you commit or run to the safety of the desktop experience? As expected that will depend.

Most servers come with ample memory and CPU to run the Desktop Experience, so there really is little requirement to run Core, but if you want to squeeze that little bit more out of your servers then maybe it's worth looking further. What else do you need to think about before taking the plunge?

What do you need the server for?

There are still a lot of services that rely on the desktop experience to work, and I'm not just talking about remote desktop services. Some print services will still want the desktop, for instance, and there are many application servers that will still want it too.

If you’re looking at a Hyper-V server then you just know that Microsoft want you to install core on it. Feeling guilty for hovering over the desktop experience option yet?

What is the driver support like?

This might sound like a strange question, but one of the limitations of Core is around driver management. Device Manager is available remotely, but only in read-only mode.

You can install, remove, and update drivers using the Core command line, but what if you want to modify settings? Hopefully there's a registry key or a configuration file, because it's not guaranteed that the vendor's device management utility will run on Core. Sometimes they will, but…
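
To be fair, the basic driver plumbing is all there. pnputil covers the install, remove, and update cases (the INF names here are hypothetical):


# List, add, and remove drivers in the driver store; file names are examples only
pnputil /enum-drivers
pnputil /add-driver C:\temp\Drivers\nic.inf /install
pnputil /delete-driver oem5.inf /uninstall


It's the vendor's settings utility on top of the driver that's the gamble.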

You may think that this isn’t really an issue but just think about when you need to tweak network driver settings. Some of these, like Jumbo Frames, are accessible using PowerShell but not all.

How often do you need to log on to the server anyway?

This question needs to be answered in multiple ways. First, what sort of admin staff are there? Do they install as much as they can on their own admin workstations or jump-boxes and do all their admin remotely, or is everything done on the server itself? Do they have an aversion to PowerShell and all other command lines? If you try to force Core on the wrong staff it's not going to end well.

The second part of this question is how often you need to use the server. Let's face it, a nice GUI is quite comforting, and if you need to do some manual task on the server every day then you just know you're going to be happier with a full desktop experience. But stop, you say. Why aren't you automating your daily task? If you are, then maybe you're ready for Core after all.

You’re taking the plunge. How bad is it really?

That all depends. How do you feel about seeing this when you log on?

Windows Core Desktop

If you’re a little concerned then let’s make you feel better.

Windows Core Powershell Desktop

So much better, right? As long as you get the drivers sorted, the actual setup isn't too bad now. Remember the bad old days of running VBScript and weird command lines that no one could ever remember just to get a server up and running? Well, they're all gone. Now you just run sconfig and you're presented with a fairly user-friendly, albeit ASCII, menu.

Windows Core SConfig

That will get you over the initial setup, but what about when you need to tweak things? Please don't tell me I have to use REG command lines to edit the registry!!!

Well, no you don't, and this is the not-so-dirty little secret about Core. It might be Desktop Experience-less, but that doesn't mean GUI-less.

Windows Core with GUI

In fact, the mistake that almost everyone makes is exiting the command prompt thinking it will log them off. Nope, that's not how it's done. You're suddenly left with a session with no interface. Don't worry though; just press CTRL-ALT-DEL and you get the ASCII version of the LoginUI.

Windows Core LoginUI

From here you can bring up the GUI version of task manager and re-run explorer.

Windows Core Task Manager

As you can tell, even if you do have some GUI tool that needs to run on the server, it might still be fine. After all, it is still Windows, just with a little less than normal.

Are there any advantages to offset the hassle of running Core? Absolutely. One major advantage is that the server will be left alone to just do what it’s intended to do. Admins won’t be logging on to the server and using browsers or other programs built in to the Desktop Experience.

The size of the installation will be significantly smaller, which not only means less disk space but also less patching. Yes, Microsoft are now using cumulative updates so you're likely still going to be patching monthly, but the time to install these updates and their potential impact will be smaller.

Boot time is also incredibly quick since it has so little to load on boot.

Finally, it will change how you look at managing a server. It's so easy to just have a process you follow for deploying a server, but wouldn't it be nice to have a scripted installer instead? How often do you find a small typo in configuration resulting in a slightly different configuration between servers? Core makes it so easy to create scripts for everything you do, which can then be reused in the future.

So as an example of this say you want to install a Hyper-V server. How hard is it to get a base level using a script? Well here’s a basic script to get you going.


install-windowsfeature -name Hyper-V, Data-Center-Bridging, FailOver-Clustering, multipath-IO, hyper-v-powershell, rsat-clustering-powershell, rsat-clustering-cmdInterface, rsat-datacenterBridging-lldp-tools

new-netlbfoteam -Name "Switch1" -TeamMembers "vNIC1", "vNIC2" -loadbalancingalgorithm HyperVPort

new-vmswitch -Name "Switch1" -NetAdapterName "Switch1"

add-vmnetworkadapter -name "HV-Mgmt" -switchname "Switch1" -managementos
add-vmnetworkadapter -name "HV-CSV" -switchname "Switch1" -managementos
add-vmnetworkadapter -name "HV-LM" -switchname "Switch1" -managementos

set-vmnetworkadaptervlan -vmnetworkadaptername "HV-CSV" -vlanid 2 -access -managementos
set-vmnetworkadaptervlan -vmnetworkadaptername "HV-LM" -vlanid 3 -access -managementos

New-VMSwitch -Name "VM-Switch2" -NetAdapterName "vNIC3","vNIC4" -EnableEmbeddedTeaming $true

Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | Set-NetAdapterAdvancedProperty -RegistryValue "9014"

mpclaim -r -i -a ""


Simple, right? So what does it all do? I'll go through it in the next post.