This week in the cloud – 18th June 2018

There’s so much happening in the cloud space at the moment that I thought it would be good, for my own reference as much as anyone else’s, to produce a summary of some of the big changes from the past week, which has been particularly busy with the Microsoft Build conference.

The compute decision tree

The first resource I found isn’t new this week but is quite useful. There are so many different types of compute services, and choosing the wrong one can be catastrophic when migrating on-premises resources to the Azure cloud.

This and additional information is located at

The new DevTest Lab

Next up is a look at the DevTest Labs feature in Azure. If you haven’t heard of this, it’s a great way to spin up a new environment to do some testing without having that old hardware around or having to bother with building all the boring stuff.

With this you can deploy templates with multiple machines, which can include different components. This allows you to do things like deploy an SCCM environment, even though these include multiple servers and services: deploy multiple VMs with domain controllers (including standing up a new forest), SQL services, and the SCCM services, all using an automated process.

Then you can minimise costs by automating the shutdown of the environment, so that an idle DEV machine isn’t costing you money.

A great resource about this is found here.

PSTN Services in Teams getting closer

Here in NZ we will be holding our breath for a while longer before PSTN services are available in Office 365, but things are looking a little easier with the introduction in preview of Direct Routing. This is only available in Teams *sigh* but will allow an on-premises telephony gateway to directly integrate with Teams. No more Skype for Business on-premises environment or multi-VM cloud connector. Just install a supported physical or even virtual telephony gateway and away you go.

Let’s just hope that Teams can improve to the point that we will all accept them taking Skype for Business off us in the future.

Linux Everywhere even in your Azure AD

Microsoft now loves Linux. Really loves it. Loves it so much that they have now released a Linux distro in the form of Azure Sphere. This is a new IoT operating system which Microsoft will support for 10 years. While it has built-in integration with Azure, there appears to be nothing stopping it from connecting to another cloud service or even an on-premises environment.

Next up is a boring old Linux VM running in Azure. Fairly boring, but now you can integrate it with Azure AD as the identity provider. This will enable you to log on to a Linux machine in Azure using your on-premises credentials (synchronised to Azure AD).

Another Azure AD Service

Just because there are not enough ways to use the words Azure Active and Directory in a product name there is now also Azure Active Directory Domain Services. This isn’t a really new service but I have to admit I totally missed it and must have thought it was just one of the other Azure Active Directory services.

This time, though, it’s a full-on Active Directory service without the VM. This Azure service uses the Azure AD directory to stand up a full Active Directory service in Azure, complete with the features that Azure AD doesn’t include, such as Group Policy, Organisational Units, and NTLM/Kerberos authentication.

To be clear, this is still NOT your on-premises domain but another domain with the same users, passwords, and groups.

Details can be found here.

This is just a taster of some of the changes that have been introduced recently. Microsoft announced at Build that they had introduced 170 different new functions in Azure in the last year. Keeping up with these changes is going to be very difficult, and that’s without even including AWS.



LastPass on Firefox – The missing copy password function

Ever since Firefox removed the legacy plugin functionality, I’ve been annoyed that the new LastPass plugin didn’t have the Copy Username and Copy Password options.


It’s fine if you are just using the browser to log in, but what about when you need to log in using actual applications?

This required you to find the entry, choose to edit it, unhide the password, and manually copy it to the clipboard. 😩

I finally went looking for a solution, and it appears that the Firefox add-on now needs both the add-on and a native messaging component. I guess this wasn’t available when the add-on was first released, and LastPass doesn’t tell you to go get it.

So how do you know if you need it? Well, other than Copy Username and Copy Password not being available, you can go to the LastPass add-on, select More Options, and then About LastPass.


If you have the native messaging components then you should see this

LastPass with Native Components

If you don’t have them installed, there will be a button to take you to the LastPass web site to install them. This just seems to re-run the LastPass installer, but I could be wrong about that.

Once done, all the functions will be available again. Yay!


Office 365 Hybrid Send-As Functionality – Not quite there yet.

Recently Microsoft announced that mailbox delegation would be available between Cloud and On-Premises accounts. This would allow for a cloud mailbox user to send-as an on-premises mailbox.

Looking at the documentation, it appears that this should be working as of early May 2018. In particular:

As of February 2018 the feature to support Full Access, Send on Behalf and folder rights cross forest is being rolled out and expected to be complete by April 2018.

This feature requires the latest Exchange 2010 RU, Exchange 2013 CU10, Exchange 2016 or above but otherwise should just work.

Unfortunately, when a user tried to use this feature, it didn’t work.

This message could not be sent. Try sending the message again later, or contact your network administrator. You do not have the permission to send the message on behalf of the specified user. Error is [0x80070005-0x0004dc-0x000524].

Notice that this mentions Send on Behalf rights. Well, in this case the user didn’t have those, but instead had the more powerful Send-As rights.

Well, it looks like Microsoft is running a bit late on the rollout, with this other article now shifting the rollout completion to Q2 2018.

As of February 2018 the feature to support Full Access and Send on Behalf Of is being rolled out and expected to be complete by the second quarter of 2018.

Either way it’s not much longer, but in the interim you may need to keep assigning Send on Behalf rights prior to migrating mailboxes. This will save you having to use PowerShell to do this post-migration, since the on-premises ECP interface doesn’t support granting these rights to cloud mailboxes.
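As a rough sketch of that interim workaround (the mailbox and user names here are hypothetical), granting the rights on-premises before migration looks something like this in the Exchange Management Shell:

```powershell
# Run in the on-premises Exchange Management Shell BEFORE migrating the mailbox.
# "SharedMailbox" and "DelegateUser" are placeholder names for this example.

# Grant Send on Behalf - the right covered by the cross-premises rollout:
Set-Mailbox -Identity "SharedMailbox" -GrantSendOnBehalfTo @{Add="DelegateUser"}

# For comparison, the more powerful Send-As right is granted as an AD
# extended right rather than a mailbox property:
Add-ADPermission -Identity "SharedMailbox" -User "DelegateUser" -ExtendedRights "Send As"
```

Doing this before the move means the permission migrates with the mailbox instead of needing a post-migration PowerShell pass.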


Cisco UCS – Cisco Server Computing takes virtualisation a step further

I recently implemented a new Cisco UCS environment using Windows Server 2016 Hyper-V, managed by an MS VMM 2016 and SCCM Current Branch management environment. This was my first introduction to the Cisco UCS platform.

The Hardware

On first look it appeared to be just another blade enclosure environment.


In addition to the standard blade chassis, the Cisco UCS environment also requires external management controllers called Fabric Interconnects. This is where all the intelligence for the environment sits, and it can manage multiple chassis.


While a fabric interconnect can be installed as a single unit, I can’t see why anyone would ever want to do this, so in practice you cluster a pair of units. These are also not just the management controllers for the environment but also the conduit for all external communications.

These form active/passive management clusters, so just be aware that a management outage occurs when the active role changes. Blade traffic will continue to route, as this uses both the active and passive nodes at all times. If a fabric interconnect goes offline, it will just mean that some of the paths are no longer available. As long as you have paths for all services via all fabric interconnects and the servers are configured correctly, you won’t experience any issues.

There are a few caveats there, but unfortunately it is possible to install these units badly. This is not a unit to plug in quickly without any planning.


Cisco have produced validated designs which give step-by-step documentation to install an environment with specified hardware. The Windows 2016 Hyper-V with VMM validated design uses the following hardware:

  • UCS Blade Chassis
  • UCS Standalone Servers
  • UCS Fabric Interconnects
  • Nexus Switches
  • MDS Fibre Channel Switches
  • NetApp SAN

Put together, this gives the following physical design:

Cisco UCS Networking Design

It is absolutely possible to drop the MDS switches in this design and use the Nexus switches to provide the Fibre Channel connectivity. Also worth noting is that in this design the NetApps are used for both FC and iSCSI/SMB storage, thus requiring the connection to the Nexus switches.

Each blade chassis is connected via multiple connections to both fabric interconnects. This will provide all external connectivity including network and storage access as well as the management, which we will go into later.

Each port on the fabric interconnects will then be configured as either a server, network, or FC port. Server ports will be used to discover chassis and standalone UCS servers. Network ports will be configured using network templates for external connectivity.

FC ports cannot be specified directly; they are limited to a fixed range of ports, and where that range sits differs depending on the fabric interconnect model you are using. The UCS 6248s that I used required the FC ports to be at the top end of the port range on each fabric interconnect. If you wanted two FC ports per fabric interconnect, these would be assigned to ports 31 and 32 on each unit.
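That “top end of the port range” rule can be sketched as a tiny helper (the function name is my own, and the 32-port total matches the UCS 6248s described above; it simply illustrates the rule, not UCS Manager’s actual logic):

```python
def fc_port_range(total_ports: int, fc_port_count: int) -> list[int]:
    """Return the port numbers reserved for FC uplinks, assuming the
    fabric interconnect requires them at the top end of its port range
    (as on the UCS 6248s described above)."""
    return list(range(total_ports - fc_port_count + 1, total_ports + 1))

# Two FC ports on a 32-port fabric interconnect land on ports 31 and 32.
print(fc_port_range(32, 2))  # → [31, 32]
```

Ask for four FC ports instead and they would occupy ports 29 through 32, again counting back from the top of the range.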

The Virtualisation magic

This is reasonably standard so far so why did I say that it takes virtualisation a step further?

Well, each server does not get directly configured. In fact, Cisco would rather you forget that you even have servers and just think about resources.

Before you do anything you need to configure the external network configuration and external FC configuration as well as discover your servers.

Then everything is based on templates and service profiles. While it is possible to create a server from scratch without any templates, this is not encouraged and would likely result in a giant mess. Instead, you need to go through and create templates for everything.

You need to start with the addresses you will be using. This includes MAC addresses, FC addresses, and UUIDs. Next you need to create policies for the boot order, BIOS settings, power settings, and network configuration.
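To illustrate the address-pool idea (the 00:25:B5 prefix is Cisco’s recommended starting prefix for UCS MAC pools; the helper itself is a hypothetical sketch, not how UCS Manager is implemented), a pool is just a first address plus a size, and addresses are handed out by incrementing:

```python
def mac_pool(first_mac: str, size: int) -> list[str]:
    """Expand a MAC address pool: start at first_mac and increment,
    the way a UCS MAC pool hands out addresses to vNICs."""
    start = int(first_mac.replace(":", ""), 16)
    return [
        ":".join(f"{(start + i):012x}"[j:j + 2] for j in range(0, 12, 2))
        for i in range(size)
    ]

print(mac_pool("00:25:b5:00:00:00", 3))
# → ['00:25:b5:00:00:00', '00:25:b5:00:00:01', '00:25:b5:00:00:02']
```

The FC (WWN) and UUID pools work on the same principle: define the block once, and every service profile draws consistent, non-clashing identifiers from it.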

Then you need to configure all VLANs and VSANs, which can then be assigned to vNICs and vHBAs, each of which also has its own adapter configuration.

Then you need to create pools of servers which will be used to assign the configuration.

Next you create the service templates, which take all of the above information and form a configuration template. You then assign this template to a server pool.

Finally you can configure your servers by deploying the service templates to your server pools. This will give each server a base name as well as a starting number which it will increment.
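The naming scheme can be sketched like this (the base name is hypothetical; UCS Manager does this internally when you deploy profiles from a template):

```python
def profile_names(base: str, start: int, count: int) -> list[str]:
    """Generate service profile names from a base name and a starting
    number, incrementing once per server in the pool."""
    return [f"{base}{n}" for n in range(start, start + count)]

print(profile_names("HV-HOST-", 1, 4))
# → ['HV-HOST-1', 'HV-HOST-2', 'HV-HOST-3', 'HV-HOST-4']
```

Each generated name becomes a service profile that is then bound to whichever physical blade UCS Manager picks from the pool.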

You would think that this would result in blade 1 in chassis 1 being assigned the first profile, but Cisco really doesn’t want you to think that much about it. It will assign each service profile wherever it sees fit. If you really need to know where a server is physically located then you can look it up, but it’s definitely not front and centre.

Each blade will end up with what appears to be a physical NIC which is in fact the vNICs defined in the template as well as FCoE adapters to match the vHBA configuration.

Sounds like a lot of effort. Why bother?

It is a lot of effort up front, but once you’ve got your service templates, expanding the environment is quite amazing. This is particularly the case if you also use SAN boot rather than local disk. Have a hardware failure? Just reassign the service profile to another blade in the environment. The server will reboot and be operational with ALL hardware configuration being identical.

Most other blade environments will allow you to switch out a blade, with the new blade having the same FC and MAC address, but this goes so much further. It also saves a trip to the data centre as you can move the configuration to a new slot rather than having to replace the server in the same slot.

Need to install a new chassis? Connect four cables and power, discover the chassis, potentially upgrade firmware, and then add the new servers to the existing pools. Deploy 8 new servers with the existing service templates. Total time: stuff all.

Throw in the IPMI integration with VMM and you can deploy a new bare metal Hyper-V environment in no time at all.

Need to install a new network card? Sure that’s virtual. Change the service template and trigger a service profile update and all associated servers will now have the new vNIC.

What are the limitations?

As so many Facebook relationship statuses say: it’s complicated. Particularly when setting it up for the first time, you are almost guaranteed to be left scratching your head asking why a template just refuses to deploy. Unfortunately the error messages can be a little vague too, with “not enough compute error” and “not enough vNIC/vHBA error” plaguing me during my deployment.

This is definitely not a unit that you quickly install and have operational in a morning; the physical installation is just the start of the deployment process.

The Cisco environment really wants you to let go of where servers are physically located, which can be really counter-intuitive. If you are a bit too obsessive-compulsive for this chaos then you can manually deploy each server to a service template and manually assign a name, but you just know that someone at Cisco is shedding a tear.

You also have to understand just how much control you are handing over to the Cisco management environment. If you are deploying Hyper-V then you should be looking for how to configure jumbo frames on the physical network adapters. The problem is you just won’t find it on the physical adapters. This is because it’s configured on the vNIC template in the UCS management interface.

There are also still some rough edges in the environment. While vSphere 6.5 supports UEFI boot with Secure Boot, this just wouldn’t work for me and ultimately had to be disabled. This was documented as a bug for the current release at the time.

Is it worth the effort?

As always, it depends. If you just want a quick build for a static environment then this may not be for you. It’s fancy, but if the steep learning curve delays the deployment and the flexibility is then never used again, it’s a bit of a waste.

I actually really like this hardware for environments that are experiencing change or growth. Everything can be standardised while still allowing for huge growth. No longer will you have five different chassis configurations depending on the engineer assigned to the build.



Upgrading Bamboo results in HTTPS configuration disappearing

I recently upgraded Bamboo within the 5.x version. The actual upgrade went well, but when Bamboo was restarted the site was only available on the default HTTP port. In this case it was a simple fix, and it was a good thing that I had copied both the application files and the Bamboo home directory beforehand.

Even though the Bamboo installer says that it’s going to “upgrade”, what it’s really saying is that it will dump the new files in the old directory. This includes overwriting files like server.xml.

Unfortunately this removed the HTTPS section of this file. Luckily this was a simple copy and paste from the old configuration file.
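For reference, the section that goes missing is a standard Tomcat HTTPS Connector in server.xml. A minimal sketch looks something like this (the keystore path and password are placeholders, and the exact attributes will vary with your Bamboo/Tomcat version, so copy your own old file rather than this verbatim):

```xml
<!-- HTTPS connector overwritten by the Bamboo "upgrade"; restore it in
     conf/server.xml. Keystore path and password below are placeholders. -->
<Connector port="8443"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true"
           keystoreFile="/path/to/keystore.jks"
           keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS"
           maxThreads="150" connectionTimeout="20000" />
```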

Yes, this is in the upgrade guide, but that guide also doesn’t say that an “upgrade” is possible, so I figured this was a new function. Oh well.

Now that the server was available on HTTPS, I still had another problem: the site wasn’t available via our F5 load balancer. This was a little harder to spot, but was again a simple solution.

When you connect to the root site, it sends a 302 redirect to the web service address. The F5 load balancer looks for an HTTP 200 response to decide that the site is healthy, so you can’t point it at the root; instead we used the location it redirects you to, which in the old version was /userlogin!default.action?os_destination=%2Fstart.action.

Of course, as part of the upgrade this path was changed ever so slightly to /userlogin!doDefault.action?os_destination=%2Fstart.action. If the monitor requested the old path, well, no HTTP 200 for you, and so the site was marked as down.
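In F5 tmsh terms, the fixed health monitor looks roughly like this (a hedged sketch: the monitor name, host header, and timings are hypothetical, not copied from our config):

```
# Hypothetical tmsh definition of the HTTP monitor after the path update.
ltm monitor http bamboo_http_monitor {
    send "GET /userlogin!doDefault.action?os_destination=%2Fstart.action HTTP/1.1\r\nHost: bamboo.example.com\r\nConnection: close\r\n\r\n"
    recv "200 OK"
    interval 5
    timeout 16
}
```

The recv string only has to match the 200 status line, so a path that now returns a 302 (or 404) fails the match and the pool member is marked down.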

Once the health monitor was updated to the new URL the site was available again.