Upgrading Bitbucket using HTTPS on Windows from 4.x to 5.x

Just for a change I had to upgrade a Bitbucket 4.x server to 5.9. This is a major upgrade, and Atlassian provided a clear warning that the configuration file needed to be changed manually as part of the process.

The upgrade went well and I decided to start the service to see what state it would be in prior to the configuration file changes. This didn't go so well, with the service stopping almost immediately.

Checking the Bitbucket web server log I found the following error.

Caused by: java.lang.IllegalStateException: Failed to load property source from location 'file:/D:/Atlassian/ApplicationData/Bitbucket/shared/bitbucket.properties'

This seemed strange as the service had been running fine, but after checking the file permissions, sure enough, the Bitbucket service account had no access to the file. A simple fix, so I started the service again.

This time the service started but was using straight HTTP. That was fine; it showed that the service was healthy and talking to the database, so now on to the configuration changes.

The configuration that needed to be replicated was below.

<Connector port="443"
sslProtocol="TLS" />

Looking at the new bitbucket.properties file the format was a little different and not what was expected based on the upgrade documentation. It seemed on first look to use semi-colons as separators rather than being line separated.


To replicate this, the new configuration was set up using semi-colons. When the service was started, the result wasn't good.

Faulting application name: bserv64.exe, version:, time stamp: 0x51543b9d
Faulting module name: jvm.dll, version:, time stamp: 0x56ac212a
Exception code: 0xc0000005
Fault offset: 0x0000000000214f38
Faulting process id: 0xa70
Faulting application start time: 0x01d3dd132c763723
Faulting application path: C:\Atlassian\Bitbucket\5.9.1\bin\bserv64.exe
Faulting module path: c:\atlassian\bitbucket\4.6.0\jre\bin\server\jvm.dll
Report Id: 552e5d9f-4907-11e8-80db-001dd8b71d05
Faulting package full name:
Faulting package-relative application ID:

Well that didn't work. After looking again I saw my mistake, removed the semi-colons, and used line breaks instead. This didn't work either, with the Bitbucket log again recording an error.

Caused by: org.springframework.boot.context.embedded.tomcat.ConnectorStartFailedException: Connector configured to listen on port 443 failed to start

After more investigation I found that you need to specify the key-store and key password in this version even if the default password has been used.

Still the service wouldn’t start.

The other configuration items looked pretty clear so I looked at the key-store location parameter. In the old version this was keystoreFile="D:\Atlassian\keystore\bitbucket.jks"

All of the examples provided were for Linux, which used the typical /dir/file format. This surely wouldn't work for Windows, but I didn't find any examples of what to do.

Ultimately I removed the quotation marks and converted the backslashes to forward slashes.

So the final working configuration for 5.x is below.
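Reconstructed from memory, it looked something like the sketch below. The server.ssl.* keys are the Spring Boot style properties that Bitbucket 5.x uses, and the path and the changeit passwords here are placeholders (changeit is the Java keystore default), so substitute your own values:

```properties
server.port=443
server.secure=true
server.ssl.enabled=true
server.ssl.key-store=D:/Atlassian/keystore/bitbucket.jks
server.ssl.key-store-password=changeit
server.ssl.key-password=changeit
```

Note the forward slashes and the lack of quotation marks around the key-store path.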


The service now started using HTTPS and we were back in service.

Unable to delete Hyper-V Host in VMM due to SQL statement failure

I had an odd failure when deleting a Hyper-V server from VMM 2016. The job failed with a very generic Error 20413.

So the next step was to check the log file which gave me an unexpected error.

——————- Error Report ——————-

Error report created 4/26/2018 7:29:26 AM
CLR is not terminating

————— Bucketing Parameters —————


SCVMM Version=4.0.2244.0
SCVMM flavor=C-buddy-RTL-AMD64
Default Assembly Version=4.0.2244.0
Executable Name=vmmservice.exe
Executable Version=4.0.2244.0
Base Exception Target Site=140717336435616
Base Exception Assembly name=System.Data.dll
Base Exception Method Name=System.Data.SqlClient.SqlConnection.OnError
Exception Message=Unable to connect to the VMM database because of a general database failure.
Ensure that the SQL Server is running and configured correctly, then try the operation again.
Build bit-size=64

Great!! The service can't talk to SQL, I thought, but this message was also a little misleading and the next section was actually more important.

———— exceptionObject.ToString() ————

Microsoft.VirtualManager.DB.CarmineSqlException: Unable to connect to the VMM database because of a general database failure.
Ensure that the SQL Server is running and configured correctly, then try the operation again. ---> System.Data.SqlClient.SqlException: The DELETE statement conflicted with the SAME TABLE REFERENCE constraint "FK_tbl_WLC_VHD_VHD". The conflict occurred
The statement has been terminated.

Again they bury the lede. The first part again goes on about not being able to talk to SQL, but then they give you the actual issue: "The DELETE statement conflicted with the SAME TABLE REFERENCE constraint 'FK_tbl_WLC_VHD_VHD'. The conflict occurred. The statement has been terminated."

When VMM tries to delete the server it hits the "FK_tbl_WLC_VHD_VHD" foreign-key constraint, meaning other rows in the database still reference the objects being deleted. This is what blocks the deletion of the server object.

I found some mentions that this may be due to the server belonging to a cluster, which it was, and that VMM may take some time to clean up the references. This server had been removed from the cluster almost 12 hours earlier, so I doubted that waiting longer would help and decided to clean up the table.

This appeared to be caused by some orphaned objects that were still recorded in the database as present on the host even though they were long gone. These existed in the tbl_WLC_PhysicalObject table.

VMM uses GUIDs to refer to objects in the database, so I first needed to get the GUID for the server, which could then be used to target these entries. This was simple with PowerShell.

(Get-SCVMHost Hyper-V-Server-Name).ID

We then take that GUID and, after a quick DB backup, insert it into the following SQL query.

DELETE FROM [tbl_WLC_PhysicalObject] WHERE [HostId]='VM-Host-GUID'
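Before running the delete it can be worth confirming what is about to be removed. A quick sanity check, using the same placeholder GUID, might look like:

```sql
SELECT COUNT(*) FROM [tbl_WLC_PhysicalObject] WHERE [HostId]='VM-Host-GUID'
```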

Finally, back to VMM PowerShell to delete the Hyper-V server again. My Hyper-V server was already off the network, so I used -Force to just remove the database references.

remove-vmhost Hyper-V-Server-Name -Force

This time the job succeeded.

VMM Hates SAN Groups Or How To Kill Your Cluster

A really nice feature of VMM is that you can integrate it with any SAN that has an SMI-S interface and then perform storage tasks, such as adding disks or even deploying VMs based on SAN snapshots. In fact, if you set up an SMI-S SAN, many standard tasks will be updated to include SAN activities. This is where things start to go off the rails.

You see most SANs will use groups to manage access to LUNs. This way as you add a LUN you only have to add it to a single group and then all servers can see it.

Well, VMM doesn't work this way. It thinks in terms of servers. You'll see this if you add a new LUN from VMM: it will map each server to the LUN rather than adding it to any group. That's fine, you might think, but things get nasty when you try to remove a server's access.

You see, VMM may not add servers to groups, but it absolutely knows enough about them to do some serious damage. If you remove a server from a cluster, part of the job is to remove the cluster disk access. This will not only remove any direct access published to the server but also remove any groups that the server is a member of. The side effect is that every other server in those same SAN groups loses its disk access too, effectively removing all SAN disks from all cluster nodes.

I first saw this with a SAN that I had never used before and just thought it might be a bug in that vendor's SMI-S implementation, but I have recently seen the same behaviour with a totally different vendor.

So in short, groups make a heap of sense from the SAN point of view, but if you are going to use SMI-S with VMM then ONLY assign individual servers to the LUNs.

VMM Bare Metal Builds and why you should use a Native vLAN

VMM Bare Metal Builds are an amazing way to ensure that your Hyper-V servers start out consistent. It’s a bit magical but part of that process just works better when you use a native VLAN. But why is that the case?

First let’s look at the VMM Bare Metal Build process.

  1. The VMM Server connects to the hardware management interface and instructs the server to reset. This is immediate, and if you specified the wrong hardware management address, well, congratulations, you just rebooted a server.
  2. The new server being rebuilt goes through its boot process. Hopefully you have it configured to PXE boot. This will get a DHCP address and then request a PXE server to respond.
  3. The WDS server receives the PXE boot request and checks with the VMM server to see whether the request is authorised. If it is, it responds to the request and sends the WinPE image.
  4. The new server loads the WinPE operating system and connects to the network. This network connection is brand new and is in no way connected to the PXE boot; you've just booted into an OS, after all.
  5. The new server runs the VMM scripts to discover the hardware inventory and then sends this to the VMM server.
  6. Once the admin inputs the required information (New server name and possibly network information) the new server begins the build process by cleaning the specified disk and downloading the VHDX image.
  7. The new server then reboots. This time the server is not authorised to PXE boot so proceeds to boot off the new VHDX boot image.
  8. The new server then customises the sysprepped operating system, including any static IP address you provided, and performs any additional customisation required by the VMM build process (i.e. adding the Hyper-V and MPIO roles and installing the VMM agent).
  9. You should now be left with a server on the network using the configured network settings.

There are a few things to note here. Each time the server uses PXE or boots into WinPE it's reliant on finding a DHCP server. If you're using port-channel network connections, and few people are not these days, then how is this request going to work? It needs to know what vLAN to tag the request with.

Now you can configure most servers in the BIOS to PXE boot with vLAN tagging, and that's great. Now you have your WinPE image; how does WinPE know about the port-channel? This will depend on the NIC driver for your server. Is it even possible to modify it so that, when the driver is loaded, it automatically uses vLAN tagging with the correct vLAN ID? It's possible, but it's something else that needs to be managed, and if VMM updates the WinPE image then you need to reconfigure it again.

Next, when you boot off the VHDX this also needs to be configured with the correct vLAN ID. Now I have to admit I have never got to this stage, since the NIC driver in WinPE has always been a blocker for me, but is VMM able to set the correct vLAN ID? You absolutely need to tell VMM which network switch and logical network to use, but does that mean it will set the vLAN ID correctly? If it doesn't then this is again another blocker.

So as you can see it may be possible to use vLAN tagging throughout the VMM Bare Metal Build process, but sometimes you need to look at whether it's worth the additional overhead: managing the server BIOS, the WinPE drivers and configuration, and the OS customisation. There's a lot going on in this process and everything needs to work perfectly to result in a fully built server. Is it worth the additional overhead just to avoid setting a network as the native vLAN?

Skype for Business Admin and Powershell Unresponsive

I had an interesting issue where a Skype for Business admin site would sit at the spinning wheel at 100%. This environment had two Enterprise pools so I checked the other site to find the same thing. At this stage I was fairly convinced that it was bigger than just a bad server.

I then opened up PowerShell, which connected fine. Great!!

Next I ran a command after much thought, or more to the point, after typing get-cs<couple of tabs><enter>, which happened to land on Get-CsADDomain.


The output looked pretty average for what, at this point, was an operational environment.

So next I ran Get-CsUser, and we waited. Yeah, there are a few users in the environment so that's to be expected, but after a couple of minutes I knew that this wasn't going to end.

I checked the event log and found the following error in the Lync Server log:

Source: LS Remote PowerShell

Level: Error

Event ID: 35009

Remote PowerShell cannot create InitialSessionState.

Remote PowerShell cannot create InitialSessionState for user: S-1-5-21-XXXXXXXXX-XXXXXXXXX-XXXXXXXXX-XXXXX. Cause of failure: Thread was being aborted.. Stacktrace: System.Threading.ThreadAbortException: Thread was being aborted.

at System.Threading.WaitHandle.WaitOneNative(SafeHandle waitableSafeHandle, UInt32 millisecondsTimeout, Boolean hasThreadAffinity, Boolean exitContext)

at System.Threading.WaitHandle.InternalWaitOne(SafeHandle waitableSafeHandle, Int64 millisecondsTimeout, Boolean hasThreadAffinity, Boolean exitContext)

at System.Threading.WaitHandle.WaitOne(Int32 millisecondsTimeout, Boolean exitContext)

at Microsoft.Rtc.Management.Store.Sql.ClientDBAccess.OnBeforeSprocExecution(SprocContext sprocContext)

at Microsoft.Rtc.Common.Data.DBCore.ExecuteSprocContext(SprocContext sprocContext)

at Microsoft.Rtc.Management.Store.Sql.XdsSqlConnection.ReadDocItems(ICollection`1 key)

at Microsoft.Rtc.Management.ScopeFramework.AnchoredXmlReader.Read(ICollection`1 key)

at Microsoft.Rtc.Management.ServiceConsumer.CachedAnchoredXmlReader.Read(ICollection`1 key)

at Microsoft.Rtc.Management.ServiceConsumer.TypedXmlReader.Read(SchemaId schemaId, IList`1 scopeContextList, Boolean useDefaultIfNoneExists)

at Microsoft.Rtc.Management.ServiceConsumer.ServiceConsumer.ReadT

at Microsoft.Rtc.Management.RBAC.ServiceConsumerRoleStoreAccessor.GetRolesFromStore()

at Microsoft.Rtc.Management.Authorization.OcsRunspaceConfiguration.ConstructCmdletsAndScopesMap(List`1 tokenSIDs)

at Microsoft.Rtc.Management.Authorization.OcsRunspaceConfiguration..ctor(IIdentity logonIdentity, IRoleStoreAccessor roleAccessor, List`1 tokenGroups)

at Microsoft.Rtc.Management.Authorization.OcsAuthorizationPlugin.CreateInitialSessionState(IIdentity identity, Boolean insertFormats, Boolean insertTypes, Boolean addServiceCmdlets)

Cause: Remote PowerShell can fail to create InitialSessionState for varied number of reasons. Please look for other events that can give some specific information.


Follow the resolution on the corresponding failure events.

Well, that doesn't look so good. Reading this, it looked like it might be a database issue. This would make sense since the CMS database is in a single location with all servers accessing it. Even if an object is in AD, Skype for Business will get information about it from a single place: the CMS.

If you have multiple pools including fail-over pools then there is still just one CMS service.

The database server was busier than expected, but nothing stood out as really bad (60% average CPU for the SQL process and a few deadlocked processes reported in the SQL log), and it did seem responsive.

It was at this point that other services using the same SQL server were also reported as being down, and the SQL admin made the call to restart the SQL service.

Once restarted everything became responsive again.

Unfortunately I never got to the bottom of what was wrong in the SQL server, but it's still good to remember the heavy reliance on the database service in Skype for Business. Yes, there is a SQL service on each Skype for Business server, but this isn't used for all processes.

WSUS Performance Issues

WSUS is often a service that is just left alone to do its own thing, but if left alone for long enough it will experience extreme performance degradation.

This may first become obvious when other systems start to time out. In my case the VMM Windows Update sync was timing out.

Error (24000)
Error connecting to the WSUS server: wsus-server, Port: 8531. Detailed error: The operation has timed out

When logging on to the WSUS server I found it extremely slow. When loading some screens in the WSUS Admin Console it would often time out and show a "Reset Server Node" error.


The event log contained the following event:

The WSUS administration console was unable to connect to the WSUS Server via the remote API.

Verify that the Update Services service, IIS and SQL are running on the server. If the problem persists, try restarting IIS, SQL, and the Update Services Service.

System.Net.WebException — The operation has timed out

Useful, but unfortunately not correct. After restarting all the services, and even the server, the performance was still shocking.

Since one of the slow screens showed the status of the synchronizations, all 9200 of them, my first thought was that this was a database fragmentation issue. Microsoft has a database optimization script for this located at https://gallery.technet.microsoft.com/scriptcenter/6f8cde49-5c52-4abd-9820-f1d270ddea61.

Unfortunately this didn’t really make much difference.

A more comprehensive solution was found at https://community.spiceworks.com/scripts/show/2998-wsus-automated-maintenance-formerly-adamj-clean-wsus. It will likely take several hours to run, but it returned the server to normal operation.

The script should be run with a -firstrun parameter to get the server operational again. This will automatically create a scheduled task to run the process daily at 8am which will keep the service optimised.
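At the time of writing the downloaded script file was named Clean-WSUS.ps1 (check the current name in the download), so the first run would look something like this from an elevated PowerShell prompt:

```powershell
.\Clean-WSUS.ps1 -FirstRun
```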

Server Requirements

This may also be a good time to check that the server is sized sufficiently. It may be possible to run this service with a minimal amount of memory and CPU but over time this may not be sufficient.

According to the WSUS system requirements:

Memory: WSUS requires an additional 2 GB of RAM more than what is required by the server and all other services or software.

At first read this may seem to mean you can have a server with 2GB of RAM, but this is actually 2GB dedicated to the WSUS service. If you are running Windows Server 2016 with Desktop Experience, that alone requires a minimum of 4GB of RAM, and even this may be a little light depending on the configuration. Does the server also run the Windows Internal Database, or is it an external SQL service?

You may want to seriously think about giving the server at least 8GB of RAM, or even more, to give it some overhead.

Windows Internal Database and Memory

If you're using the Windows Internal Database then it won't matter how much memory you throw at the server; the database will use all of it. This is typical SQL behaviour when the service is configured with no maximum memory limit. You shouldn't accept this on a large SQL server, let alone a small service like this.

To fix this, install the SQL command line utilities and run the following commands if you're using Windows 2012 R2 or later (the WID named pipe path differs on older versions).

sqlcmd -E -S \\.\pipe\Microsoft##WID\tsql\query

exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max server memory (MB)', 4096;
reconfigure with override;
go


This will limit the database process to a maximum of 4GB of RAM. You can cut this down even further, but be aware of how large the database is; if it's a 20GB database and the memory is capped at 2GB then performance will suffer.
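To confirm the cap took effect, calling sp_configure with just the option name reads the current setting back:

```sql
exec sp_configure 'max server memory (MB)';
go
```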

Hopefully this will allow you to use your WSUS server again.

Windows Core Hyper-V Setup Using PowerShell

In a previous post I gave some sample PowerShell commands to get a Windows Core server configured with the Hyper-V role and some base networking. Let's have a look at that script and what it does.

install-windowsfeature -name Hyper-V, Data-Center-Bridging, FailOver-Clustering, multipath-IO, hyper-v-powershell, rsat-clustering-powershell, rsat-clustering-cmdInterface, rsat-datacenterBridging-lldp-tools

First up we need to install the features that we need for the server. Notice that we really do need to install the PowerShell management tools to do much locally. Yes, you can absolutely get away with running all commands remotely, but there are some changes, like networking, that you might still want to make locally.

new-netlbfoteam -Name "Switch1" -TeamMembers "vNIC1", "vNIC2" -loadbalancingalgorithm HyperVPort

Next we’re going to create a Load Balance and Failover Network Team. This is the older style Windows 2008/2012 network team and you could change this to the new style team if you really want to.

new-vmswitch -Name "VMSwitch1" -NetAdapterName "Switch1"

This part is easy. We need to create a Hyper-V switch which will be connected to the network team we created in the previous step.

add-vmnetworkadapter -name "HV-Mgmt" -switchname "VMSwitch1" -managementos
add-vmnetworkadapter -name "HV-CSV" -switchname "VMSwitch1" -managementos
add-vmnetworkadapter -name "HV-LM" -switchname "VMSwitch1" -managementos

Now we can create some virtual network adapters for the Hyper-V host to use. In this case we have a vNIC for Management, CSV Disk Management, and Live Migration. These adapters are all virtually plugged into our virtual switch.

set-vmnetworkadaptervlan -vmnetworkadaptername "HV-CSV" -vlanid 2 -access -managementos
set-vmnetworkadaptervlan -vmnetworkadaptername "HV-LM" -vlanid 3 -access -managementos

We don’t want to have these three separate network cards just for the sake of it, they need to be on different networks to isolate the traffic. So here we configure them with different VLAN IDs. These need to have been configured on the network switch that the Hyper-V server plugs in to.

So why don’t we have a VLAN ID for the management vNIC? Well you really want to be able to perform bare metal build of the Hyper-V servers using VMM and while it’s possible to do this with VLAN tagging on the management adapter it’s far easier without this. By enabling the management network as the native VLAN on the Hyper-V server port any untagged traffic will be put into the Hyper-V Management VLAN. This will allow the server to PXE boot and load the WinPE environment without using a VLAN ID. The other side of this is that once you are in Windows you still don’t use the actual VLAN ID. Just leave it blank.
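If the management vNIC has previously been tagged and needs to be returned to untagged mode, Set-VMNetworkAdapterVlan can do that explicitly (using the HV-Mgmt adapter name from the script above):

```powershell
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "HV-Mgmt" -Untagged
```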

New-VMSwitch -Name "VM-Switch2" -NetAdapterName "vNIC3","vNIC4" -EnableEmbeddedTeaming $true

Since we want to be fancy and use the new Windows 2016 Switch Embedded Teaming (SET) for the VM networks, the next team is created a different way. We don't need to create the network team first; it's all managed in Hyper-V networking.

Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | Set-NetAdapterAdvancedProperty -RegistryValue "9014"

Almost at the end now. Hyper-V sees significant performance increases when jumbo frames are enabled, particularly when machines are migrated between hosts, but also for any other large network transfers. The problem is that all new network adapters, including the ones we created above, default to having jumbo frames disabled. Turn these on whenever possible; in fact, keep checking that they are still turned on. It's a simple change which results in huge performance benefits.

mpclaim -r -i -a ""

Finally, if you are using a SAN you'll likely have multiple pathways and require MPIO to be enabled. If you don't enable it, you'll see multiple copies of the same disk and will only be using a single path. MPCLAIM will discover any MPIO devices and then reboot the server to enable the configuration.

Now all you need to do is use sconfig to set the IP address for your new vNICs, change your server name and join the domain. Then you can use all your normal tools remotely.

Windows Core isn’t so scary after all.


Update the VMM Bare Metal WinPE Image

The VMM Bare Metal build process is one of those processes that just seems magical when you first see it, but there's a lot going on to make it work. One of the common issues is that the server will boot using PXE but then will either not be able to continue talking to the VMM server or will not see any local disks. These issues are generally related to the drivers contained in the WinPE boot image.

This image is managed by VMM but you will find a current version on the WDS server in the RemoteInstall\DCMgr\Boot\Windows\Images directory which is called boot.wim.

If you want to manually update this with new drivers then you can use the script below. You need to run this from the VMM server and it requires that the boot.wim file be located in c:\temp with all drivers extracted into a folder called c:\temp\Drivers. You also need a c:\temp\mount directory for the WinPE image to be mounted to.

$mount = "c:\temp\mount"
$winpeimage = "c:\temp\boot.wim"
$winpetemp = $winpeimage + ".tmp"
$drivers = "C:\temp\Drivers"

copy $winpeimage $winpetemp

dism /mount-wim /wimfile:$winpetemp /index:1 /mountdir:$mount
dism /image:$mount /add-driver /driver:$drivers /recurse
Dism /Unmount-Wim /MountDir:$mount /Commit

publish-scwindowspe -path $winpetemp
del $winpetemp

Once the WinPE image has been updated with the new drivers, VMM will distribute the new image to all WDS servers in the environment.

It is also possible to have VMM inject all drivers located in the VMM Library, but I try to stay away from this to minimise the size of the WinPE image. Let VMM install any non-critical drivers as part of its own process.

VMM Duplicate VMs

VMM may discover VMs that already exist in the environment and add them as new VMs. You will end up with two different VMs listed with the same name.

To confirm that this is the case, run the following command in VMM PowerShell.

get-vm "Duplicate-VM-Name" | FL Name,ID,BiosGuid,Location

If you have duplicate machines then everything except the ID, which is assigned by VMM, will match, as shown below.

Name : Duplicate-VM-Name
ID : 42635679-94fb-4149-ad26-66041a8c96eb
BiosGuid : 5cf412b5-3398-4c5a-951f-3e22c7f97d1a
Location : C:\ClusterStorage\volume1

Name : Duplicate-VM-Name
ID : 8c675f1e-6626-4805-b365-f9b6be3d6c7f
BiosGuid : 5cf412b5-3398-4c5a-951f-3e22c7f97d1a
Location : C:\ClusterStorage\volume1

Both of these VMs refer to the same real VM, so if you delete one the second VM will go into a missing state.

If you use PowerShell with the -Force parameter the behaviour changes: this will remove the VM from the VMM database but will not touch the real VM. You can use the following PowerShell command to do this.

get-vm "Duplicate-VM-Name" | WHERE ID -eq "8c675f1e-6626-4805-b365-f9b6be3d6c7f" | remove-vm -force

You will now just have a single VM again.


VMM 2016 Cluster Upgrades and Resource Groups

In order to upgrade VMM from 2012 R2 to 2016 you need to deploy new management servers and basically use a lift-and-shift upgrade process. This is because VMM 2012 R2 supports up to Windows Server 2012 R2, while VMM 2016 ONLY supports Windows Server 2016.

If you installed VMM as a failover cluster then you also need to think about how you are going to handle the cluster as part of this upgrade. With Windows 2016 you can add new nodes to an existing Windows 2012 R2 cluster, but there may be reasons to create a brand new cluster. Either way, you need to think carefully about the process you are going to follow.

If you are going to configure a new cluster then you need to decide whether you will use the same VMM Cluster Service name or a new name. If you use a new name then you will need to reassociate all agents once you have completed the installation. Think about any parts of the environment which may also rely on the old VMM server name.

If on the other hand you plan to reuse the old name then there are a couple of things to watch out for. Ironically, the first and most important aspect is actually the removal of the old VMM nodes. Even if you stop the old VMM cluster service, it appears that uninstalling the last node will still remove the cluster service computer name from the database. This will result in the new VMM service crashing and being unable to restart. Looking at the VMM log located in c:\programdata\VMMLogs\SCVMM.{GUID} you will see the following error:

Base Exception Method Name=Microsoft.VirtualManager.DB.SqlRetryCommand.ValidateReturnValue

Exception Message=Computer cluster-service-name is not associated with this VMM management server.

Check the computer name, and then try the operation again.

If you face this issue, the quickest way to fix it is to uninstall the VMM service, delete the VMM cluster role, and reinstall using the same database and user settings. There may be a way to fix it in the back-end database, but it's most likely not worth the effort at this point.

To avoid this, uninstall VMM on the old cluster nodes first, before doing the upgrade. Just make sure that you always select the option to retain the database. You have a backup of the database already though, right?

The other issue you will need to deal with is cluster permissions. Remember that the VMM cluster service is a virtual server, with its own computer account, alongside the cluster service itself, and the cluster service needs access to manage the VMM cluster service.

When you run the first node installation it may fail after quite some time with the following error:

"Creation of the VMM resource group VMM failed. Ensure that the group name is valid, and cluster resource or group with the same name does not exist, and the group name is not used in the network."

This is due to the cluster computer account not having access to modify the AD account of the VMM cluster service virtual server. Grant the new cluster computer account full control of the existing cluster service computer account and re-run setup.

While you’re at it make sure that you also grant access to the DNS entries in case these also need to change.