Deploy a Test Microsoft 365 Tenancy

In recent history it was easy to set up a lab to test new technologies, thanks to the ease of deploying VMs to run test workloads. But what do you do when these services become cloud services? Most companies aren’t keen on the idea of paying for a sandbox environment, even when there is a quantifiable value to having it.

Microsoft have provided a solution for this for developers through MSDN, but now they have opened up access to their cloud services through the Microsoft 365 Developer Program.

You can join their Dev Program at Developer Program – Microsoft 365. This allows you to deploy a new tenancy and includes 25 E5 licenses (just without Windows and PSTN services). These licenses expire every 90 days but at this stage can be renewed, so it’s an amazing deal.

These tenancies originally expired for good after 90 days, which was still useful, but the change to allow renewals is outstanding.

Just remember that since you are the admin of this environment, it still needs to be secured.

Also don’t get too attached. Microsoft could always change their mind and go back to non-renewable tenancies.

Migrating from Hybrid to Native Microsoft Cloud – Overview

While Microsoft would love businesses to consume their services as a native cloud service, most environments operate in a hybrid mode. What does this mean and when would you run in this mode?

How did we get here?

If we look back, most business systems were located on-premises on hardware running inside the company offices. When the Microsoft cloud was first released it was essentially Microsoft running their own server software. Few customers were able to migrate all of their services into the cloud, so Hybrid services were used to introduce customers to the cloud.

In this mode selected users could use Cloud Services while other users would use on-premises services.

Any Hybrid Cloud Service has a prerequisite of deploying Azure AD (AAD) in hybrid mode. You do this by installing Azure AD Connect on an Active Directory (AD) joined server, which replicates your on-premises AD users into AAD.

Azure AD Connect environment
Azure AD Connect provides the glue between AD and AAD identity providers

Note that the user accounts are replicated. They are not the same account. We either use single sign-on or password hash synchronisation to make it appear to the user that they are using the same account.

Over time, as more users are migrated to the Microsoft Cloud, there is less reliance on the on-premises services. But what is the process to move from a Hybrid to a Native Cloud environment?

Audit your environment

First you need to look at what services you are still using on-premises. Let’s assume that you’ve migrated all of your services to the cloud AND removed the old on-premises services (Exchange, SharePoint, Skype for Business, and Configuration Manager). You’re likely still left with AD.

In this case you’ll likely still be creating users in Active Directory and syncing them to AAD, if for no other reason than to make sure that you can add them to your AD-based groups.

Are your computers still connected to the on-premises Active Directory, or were they moved to Azure Active Directory connected computers? Do you still use any services located in the old Active Directory forest, on file servers for instance? Do you use Password Hash or Federated Authentication?

Once you understand what dependencies you still have on the on-premises environment you can work to remove them.

Migrate your Authentication

If you are still using Federated Authentication then your internal AD is authenticating any sign-in requests for your AD users. This will also stop you from having any native cloud users with the same domain name as your hybrid users.

This should be one of the first services you migrate. Once completed, all Office 365 login requests will be authenticated in AAD rather than being forwarded to AD.
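As a sketch of what that cutover can look like, assuming the MSOnline PowerShell module and a placeholder domain name (Convert-MsolDomainToStandard needs to run from the AD FS server):

```powershell
# Connect to the tenant with the MSOnline module
Connect-MsolService

# Check the current authentication mode for the domain
Get-MsolDomain -DomainName contoso.com | Select-Object Name, Authentication

# Convert the federated domain to standard (managed) authentication.
# With -SkipUserConversion $false the users are converted as well and
# their temporary passwords are written to the specified file.
Convert-MsolDomainToStandard -DomainName contoso.com `
    -SkipUserConversion $false -PasswordFile C:\Temp\passwords.txt
```

If you are moving to Password Hash Sync rather than cloud-only passwords, you would instead enable it in Azure AD Connect and switch the domain to managed with Set-MsolDomainAuthentication.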

Migrate users to Azure AD Devices

Even when your account is AD homed you can still log on using AAD if the computer is natively joined to AAD. Once the machine is removed from AD and added to AAD, users will log on using their AAD account. Remember that the accounts are separate, so if you log on to an AAD device then you are logging on using your AAD account. If you log on to an AD device then you are using your AD account.

Same user logging in to different devices
The same user uses a different identity provider by using a different machine
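A quick way to check which directory a Windows 10 machine is joined to is dsregcmd; a minimal sketch of what to look for:

```powershell
# Run on the workstation; no parameters needed.
# AzureAdJoined : YES together with DomainJoined : NO indicates a
# native AAD-joined device, while DomainJoined : YES means the machine
# still authenticates against on-premises AD.
dsregcmd /status
```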

Unfortunately the tools to make this migration are quite limited at this time. If you need to automate this process then Binary Tree Power365 can currently help migrate machines from AD to AAD.

Migrate all non-user AD objects

Before you can move your user objects you will need to move any groups that they are a member of. If you move the account first then it would no longer be a member, as the group is still owned in AD.

In Office 365 there are different types of groups which are migrated in different ways. If a group is a distribution group then this migration will need to be performed in Exchange Online. If on the other hand it’s a security group then this is owned by Azure AD.

It might also be worth cleaning up the rest of the directory at this stage. Remember that contacts also need to be moved, but you may also have other objects that need to be cleaned up. While you can see the source of most objects in the Azure AD portal, it is worth using the Azure AD Connect service to see exactly what is being sent to AAD.
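As a hedged example of auditing from the cloud side, assuming the AzureAD PowerShell module, the DirSyncEnabled property shows which objects are still mastered on-premises:

```powershell
# Connect with the AzureAD module
Connect-AzureAD

# Users still being synchronised from on-premises AD
Get-AzureADUser -All $true |
    Where-Object { $_.DirSyncEnabled -eq $true } |
    Select-Object DisplayName, UserPrincipalName

# Groups still being synchronised from on-premises AD
Get-AzureADGroup -All $true |
    Where-Object { $_.DirSyncEnabled -eq $true } |
    Select-Object DisplayName
```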

Migrate user accounts

Once your user accounts are finally ready to move, you face a problem. Microsoft currently doesn’t have a process to move a user from AD to AAD. Instead you need to stop the user from syncing, which will delete the user account in AAD. Once the account is deleted you can then recover it from the AAD recycle bin, complete with all data. This will then change the account to be a native cloud account.
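The delete-and-restore flow can be sketched with the MSOnline module (the UPN is a placeholder; the account must first be moved out of the Azure AD Connect sync scope and a sync cycle completed):

```powershell
# The account now sits in the AAD recycle bin; confirm it is there
Get-MsolUser -ReturnDeletedUsers |
    Select-Object UserPrincipalName, ObjectId

# Restore it; the account comes back as a native cloud account
# complete with its data
Restore-MsolUser -UserPrincipalName user@contoso.com
```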

Flip the Native Cloud Switch

Once all of your users have been migrated to be native cloud you can then disable the Sync between AD and AAD.

While this will feel like a substantial change this won’t impact your users at all as they will no longer be accessing any AD services.
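Flipping the switch itself is a single cmdlet; a sketch with the MSOnline module (note the change can take up to 72 hours to complete and is not easily reversed):

```powershell
# Confirm sync is currently on
(Get-MsolCompanyInformation).DirectorySynchronizationEnabled

# Disable directory synchronisation for the tenant
Set-MsolDirSyncEnabled -EnableDirSync $false
```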

Hopefully this gives you a good overview of the steps required to move from Hybrid to Native Cloud. In subsequent articles we’ll look at the details for each step and how you can minimise the user impact.

Configure Hybrid Public Folder with Exchange 2013/2016 (aka Modern Public Folders)

Public Folders don’t seem to have the usage that they used to, so it’s been a while since we worked with Public Folders in Exchange. So long, in fact, that what we last configured is now called Legacy Public Folders, with the new version, introduced in Exchange 2013, called Modern Public Folders.

A Refresher on Exchange Public Folders

In order to understand the new process of setting up Hybrid mode with Exchange Online you first need to understand some changes to how Public Folders work.

In Exchange 2010 public folders were stored in dedicated Public Folder Databases. These also had their own log files and had to be managed independently of any User Mailbox Databases.

With Modern Public Folders they have been moved into Mailboxes which are stored in a standard user database. The environment can contain multiple public folder mailboxes, each of which can contain different parts of the public folder hierarchy.

When a user accesses a public folder they are actually opening the mailbox that contains that part of the hierarchy. Unlike previous versions the data is only accessible from the server hosting the active database rather than any server hosting a public folder replica.

Configuring Hybrid Public Folders

What does this mean for configuring Hybrid mode Public Folders?

First of all if you searched for something like “Configure Exchange Public Folder Hybrid” and found this Exchange 2019 article referring to Exchange 2010 SP3 or later then you’ve got the wrong article. You need to look for this article which is only on the Exchange Online documents site.

This newer article skips all of the steps for setting up new Public Folder Mailboxes, resulting in just three steps:

1) Download the following files from Mail-enabled Public Folders – directory sync script

  • Sync-MailPublicFolders.ps1
  • SyncMailPublicFolders.strings.psd1

2) On Exchange Server, run the following command to synchronize mail-enabled public folders from your local on-premises Active Directory to O365.

Sync-MailPublicFolders.ps1 -Credential (Get-Credential) -CsvSummaryFile:sync_summary.csv

3) Enable the Exchange Online organisation to access the on-premises public folders. You will point to all of your on-premises public folder mailboxes.

Set-OrganizationConfig -PublicFoldersEnabled Remote -RemotePublicFolderMailboxes PFMailbox1,PFMailbox2,PFMailbox3

Issues When Configuring Hybrid Mode

There are a few things to be aware of with this process though, particularly the final step.

1) Remember that the new Public Folders are stored in user mailboxes which are associated with AD user accounts. If you aren’t syncing your entire Active Directory forest then the Public Folder Mailbox objects may not be synced to Exchange Online. So where are these stored by default? Well, the Users container in your Exchange-enabled domain of course.

It’s likely that you haven’t synced this, but you CAN move these objects to an OU that is being synced without any impact. Unfortunately this requirement isn’t included in the documentation. If these objects aren’t synced to Exchange Online then you’ll get the following message:

Set-OrganizationConfig -PublicFoldersEnabled Remote -RemotePublicFolderMailboxes PFMailbox1
Couldn't find object "PFMailbox1". Please make sure that it was spelled correctly or specify a different object.
    + CategoryInfo          : NotSpecified: (:) [Set-OrganizationConfig], ManagementObjectNotFoundException
    + FullyQualifiedErrorId : [Server=SYAPR01MB2717,RequestId=d79eaa00-ff32-4076-8791-54ba22e3cb76,TimeStamp=26/11/2018 7:13:26 AM] [FailureCategory=Cmdlet-ManagementObjectNotFoundException] C4302B7C,Microsoft.Exchange.Management.Sy
    + PSComputerName        :

2) Once you’ve moved the public folder mailbox objects remember that the -RemotePublicFolderMailboxes PFMailbox1,PFMailbox2,PFMailbox3 syntax is referring to the Public Folder Mailboxes and NOT the public folder names. You can find these in the ECP under Public Folder Mailboxes.

3) You also need to list all public folder mailboxes in the one command. If you add an additional public folder mailbox in the future then include all the mailboxes and not just the new one.

4) Finally, remember that your on-premises address book is different from your online address book. This means that any new mail-enabled public folders will only appear in your online address book if you sync them using the Sync-MailPublicFolders.ps1 script. If users can create these objects then you may want to think about scheduling this task.
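One way to schedule it, sketched with the built-in ScheduledTasks cmdlets (the paths and the exported credential file are hypothetical, and Import-Clixml only works for the account that originally exported the credential):

```powershell
# Run the sync script nightly at 2am using a saved credential
$action = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument ('-File C:\Scripts\Sync-MailPublicFolders.ps1 ' +
               '-Credential (Import-Clixml C:\Scripts\o365cred.xml) ' +
               '-CsvSummaryFile:C:\Scripts\sync_summary.csv')
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName 'Sync Mail Public Folders' `
    -Action $action -Trigger $trigger
```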

Only users who have been created on-premises and migrated to Exchange Online can access the on-premises Public Folder store. Only these users exist in the on-premises address book used to authenticate access.

It may not seem that way but ultimately this is a simple service to configure with just a few little gotchas to be aware of.

Misconfigured Skype for Business Edge Server Breaks Office 365 Hybrid Federation

We’ve been moving more customers to Office 365 recently. Not only are they seeing the business case stacking up from a cost point of view, but they are also after the cloud-only features which are now appearing more frequently. A troubling development with these migrations is the number of broken Skype for Business Edge servers that we are seeing.

Now these aren’t totally broken but just broken enough that when we try to integrate their on-premises Skype for Business environment with Office 365 services things go wrong.

How will you detect this?

This will often show up when trying to configure voicemail to use hosted voicemail in Exchange Online, since this is often the first hybrid service being deployed. When the call is redirected to the Exchange Online server it fails. Looking at the event logs on the front end server, it says that the dial plan wasn’t configured correctly.

Attempts to route to servers in an Exchange UM Dialplan failed

No server in the dialplan [] accepted the call with id [XXXXXXXXXXXXXXXXXXXXXXXXX].

Cause: Dialplan is not configured properly.


Check the configuration of the dialplan on Exchange UM Servers.

All the configuration looked fine, so we needed to dig into the SIP traffic a little more. We did this using Snooper. We could see the message being handed off to the edge server from the front-end server, but then the edge server connection timed out.

Response Data
504 Server time-out
ms-diagnostics: 1008;reason="Unable to resolve DNS SRV"

This was a little strange as the edge server was working fine for other federation partners, and DNS lookups were working on the edge server.

What was happening?

One thing that didn’t look right, though, was that the internal interface was configured to use the internal DNS server. Referring to the Edge server deployment guide confirmed that this wasn’t correct.

Interface configuration without DNS servers in the perimeter network
1. Install two network adapters for each Edge Server, one for the internal-facing interface, and one for the external-facing interface.

The internal and external subnets must not be routable to each other.

2. On your external interface, you’ll configure one of the following:

a. Three static IP addresses on the external perimeter network subnet. You’ll also need to configure the default gateway on the external interface, for example, defining the internet-facing router or the external firewall as the default gateway. Configure the adapter DNS settings to point to an external DNS server, ideally a pair of external DNS servers.

b. One static IP address on the external perimeter network subnet. You’ll also need to configure the default gateway on the external interface, for example, defining the internet-facing router or the external firewall as the default gateway. Configure the adapter DNS settings to point to an external DNS server, or ideally a pair of external DNS servers. This configuration is ONLY acceptable if you have previously configured your topology to have non-standard values in the port assignments, which is covered in the Create your Edge topology for Skype for Business Server article.

3. On your internal interface, configure one static IP on the internal perimeter network subnet, and don’t set a default gateway. Also leave the adapter DNS settings empty.

4. Create persistent static routes on the internal interface to all internal networks where clients, Skype for Business Server, and Exchange Unified Messaging (UM) servers reside.

5. Edit the HOST file on each Edge Server to contain a record for the next hop server or virtual IP (VIP). This record will be the Director, Standard Edition server or Front End pool you configured as the Edge Server next hop address in Topology Builder. If you’re using DNS load balancing, include a line for each member of the next hop pool.

How to fix it?

The edge servers were changed to meet this guidance by creating a hosts file with all servers in the topology using both short names and FQDNs, and by making the external adapter the only adapter with DNS settings, pointing at DNS servers external to the organisation.
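The changes themselves are simple; a sketch with example addresses and names (yours will differ):

```powershell
# Persistent static route from the internal edge interface to the
# internal networks where clients and servers live
route -p add 10.0.0.0 mask 255.0.0.0 172.16.0.1

# Hosts file entry for the next hop Front End pool
Add-Content -Path C:\Windows\System32\drivers\etc\hosts `
    -Value "172.16.0.10 fepool01.contoso.local fepool01"
```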

Voicemail started working once this change was made.

Why this happens

So why did this happen? Part of the setup for voicemail located in Office 365 is configuring a hosting provider in Skype for Business.

New-CsHostingProvider -Identity 'Exchange Online' -Enabled $True -EnabledSharedAddressSpace $True -HostsOCSUsers $False -ProxyFqdn "exap.um.outlook.com" -IsLocal $False -VerificationLevel UseSourceVerification

This provider has shared address space enabled. This means that endpoints with the same SIP domain name can be located either on-premises or in the cloud. In our case the endpoint is the Exchange Online UM service.

When a call is routed to Exchange Online UM, the front end looks up the local directory and sees that the user isn’t located on-premises. The call is passed to the Edge server, which performs a DNS lookup of the federation SRV record for the domain. Why is it doing this? Well, basically it’s trying to make a federation request with its own domain, and this is the start of that process. But the record only exists externally, so that lookup is failing. Since it can’t federate with itself it doesn’t go to the next step, which is establishing a connection to Exchange Online.

This can also be fixed by adding the DNS record to your internal DNS, but the edge server would still not be configured correctly. It’s possible that using the internal DNS server would result in something else not working later on. Far better to fix it properly.

Just as a matter of interest, if you tried to configure hybrid mode with Skype Online you would also experience issues where your on-premises users couldn’t see presence or send messages to cloud users. This has the same cause as the Exchange UM issue, with shared address space also enabled on this hosting provider:

New-CSHostingProvider -Identity SkypeforBusinessOnline -ProxyFqdn "sipfed.online.lync.com" -Enabled $true -EnabledSharedAddressSpace $true -HostsOCSUsers $true -VerificationLevel UseSourceVerification -IsLocal $false -AutodiscoverUrl https://webdir.online.lync.com/Autodiscover/AutodiscoverService.svc/root

Both Exchange Online and Skype for Business Online have hybrid relationships with Skype for Business on-premises. The only difference, apart from the provider endpoint address, is that the Skype Online provider is configured to host users while the Exchange Online provider hosts services.

Optimising WordPress for Running in Azure

Now that you have WordPress running in Azure there are a few housekeeping tasks that you may want to look at prior to switching over from your existing site.

Managing Storage

In a previous post we discussed the different types of deployment available in Azure. One thing to be aware of is that storage is managed differently between these, and if you decide to scale out then things change again. If you deploy WordPress in PaaS or a container then scaling is easy, but each new web service needs to be able to access both the database and the file repository.

You may have noticed in our instructions on how to deploy WordPress using hybrid containers that we also deployed a storage service. This is what we will be using to store all of those objects so that any web server can access them. Luckily WordPress has a plugin that makes this functionality easy as well. This should be one of the first plugins that you install. No point in having any data saved to the wrong place, after all.
Once this is added you need to configure the storage account that it will use. To do this you need a storage account key, which will be used by the plugin to access the blob storage. Log in to the Azure portal and open the storage account that you want to use. Under Blobs create a new container, then go to Access Keys to get one of the API keys. This information can then be put into the plugin configuration. In the WordPress admin go to Settings | Microsoft Azure and copy in the name of the storage account and the API key. If the authentication is successful you will then see the container that you created.
Make sure that Azure Storage is then selected as the default upload source and save the settings. At this point you can restore your old site into WordPress. When you do this you will see that all of the images are automatically saved in the Azure storage account, and when you go to upload a file it will automatically save there too.
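If you prefer to script the storage side, a sketch using the Az PowerShell module (the resource group, account and container names are placeholders):

```powershell
Connect-AzAccount

# Grab the first access key for the storage account
$key = (Get-AzStorageAccountKey -ResourceGroupName 'wordpress-rg' `
    -StorageAccountName 'wpmediastore')[0].Value

# Create the container the plugin will upload media into
$ctx = New-AzStorageContext -StorageAccountName 'wpmediastore' `
    -StorageAccountKey $key
New-AzStorageContainer -Name 'media' -Permission Blob -Context $ctx
```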

Stopping the spam

When we stood up the site it took 4 hours before some spammer noticed that they could use a bot to send spam to the contact form. Luckily email wasn’t configured so no one really got spammed, but it was still a lesson that we are running our own WordPress site and need to do some things ourselves now. So first up, let’s make it harder to spam the contact form. I settled on installing the Contact Form 7 plugin. This has an integration with reCAPTCHA, which is a free Google service. You will need to sign up for a reCAPTCHA account, which will give you a site key and a secret key. Put these into the Integration page for the contact plugin and then create a new contact form. We simply added the reCAPTCHA to the bottom of the form.
Once you’re happy with the form you can copy the short code for the form directly onto your page. Have a play with it and make sure that it’s working before you fix the email integration. For this we used the WP Mail SMTP plugin, which provides a way to modify the SMTP settings for WordPress. WordPress is not an SMTP server, so you need some external SMTP service that you can use. This plugin supports Google, Mailgun and Sendgrid, but you can also manually specify SMTP server settings. Since we have an Office 365 account we use that. For Office 365 the SMTP host will be your tenant name (which may be different from your domain name) with mail.protection.outlook.com at the end. For some reason the TLS option wasn’t working with Office 365, so we used SMTP on port 25 but enabled the Auto TLS option.
If you want to relay mail outside the organisation then it would be a good idea to set up an account in Office 365 and use authentication. If you don’t want to do this then remember that you will need to set up a connector in Exchange Online to authorise the web server to relay based on its IP address. If you just want to receive alerts yourself then no changes are required.

Configuring HTTPS

You really need to use HTTPS for your new site. The default site already has a valid SSL certificate, but if you want to use a custom name this will require additional work. First you need to set up the custom domain name by going to the app service and opening the custom domain properties. The process for adding a custom name differs depending on whether the site is live or not. If it isn’t, then you create a new CNAME pointing to the default Azure name. This is the azurewebsites.net name that was assigned when you first created the site. If the site is already live then you don’t really want to redirect it to the new Azure site just to add the custom domain. You may still have more work to do before you’re ready to go live, after all. To cater for this you need to create a TXT record in your DNS which has the name awverify, with the data containing your azurewebsites.net site name. If you want to have a host name for the site (eg www) then you will also need to create a record for this (eg awverify.www) with the data referring to the Azure site. Once this is done you can upload a public certificate and bind it to the custom domain.

If you went down the Windows PaaS route then you can use a Let’s Encrypt extension which will manage acquiring and renewing Let’s Encrypt certificates. This will result in a free cert associated with your custom domain. If you went down the container route then this is a little more difficult. There are solutions out there which involve deploying multiple containers. The first container runs an nginx reverse proxy which publishes the second container running WordPress. The nginx reverse proxy also has the Let’s Encrypt integration.

In the end we went a different way. We use Cloudflare to publish our site. This already has SSL, but things get a little funky with WordPress. If we configure the custom domain on Cloudflare but don’t use this name in WordPress then pages will break. If we set both to the same name and require SSL, well, don’t do that. WordPress will stop responding.
We added another plugin called CloudFlare Flexible SSL. This makes sure that all pages will display correctly to the end user. You then use CloudFlare to control the HTTPS configuration. You can then disable HTTP access from within CloudFlare if this is the route you want to go down.

Oh crap I changed the WordPress settings and now I can’t access my site!!!!

Yeah, we’ve been there. Fortunately there is an easy way to fix this. You will need to change the setting in the database to get things back again. You can do this using the Azure cloud shell. Log on to the database using the built-in mysql command (the server and admin names are the ones you created earlier).
mysql -h servername.mysql.database.azure.com -u adminname@servername -p
Then convert the site details back to using HTTP.

UPDATE wp_options SET option_value = replace(option_value, 'https://www.yoursite.com', 'http://www.yoursite.com') WHERE option_name = 'home' OR option_name = 'siteurl';
Reconnect to your site and breathe.

Deploying WordPress on Azure using Hybrid Containers

In the last post we looked at the different architectures that can be used to deploy WordPress in Azure. We decided to deploy our site using a hybrid container environment. This has the web service running in a Docker container but the database running as a separate resource. This makes the web service a simple component which can be easily replaced if problems are experienced, or scaled in response to load changes. If you haven’t already deployed any Azure services then you will need to start by deploying a resource group. This will group the WordPress resources together. Give this a name, select the subscription you will be using to pay for the service, and select the Azure region that you want to use.

MySQL Server Installation

Next deploy an Azure Database for MySQL server. This will be used to host the WordPress database and will allow us to deploy additional web services connected to the same database.
You’ll need to define some basic settings as part of this deployment and you will need to record some of the details for later. The server name needs to be unique across Azure and will ultimately end up with a fully qualified domain name of servername.mysql.database.azure.com. The server admin name and password should be something complex and will be used to remotely access the database. Finally you may want to modify the default pricing plan, which is designed for significant production systems.
Once the database server has successfully deployed you need to create the WordPress database on the MySQL server. If you already have the MySQL tools installed you can use these to connect to the server. Otherwise you can connect using the cloud shell directly from the Azure portal. This is the command prompt icon in the toolbar.
If you haven’t used this before it may prompt you to create a storage account. This will likely be deployed in a different Azure region from your WordPress services, but this won’t cause any problems. Once you are at the prompt you can use the following command to connect to your new MySQL server:
mysql -h servername.mysql.database.azure.com -u adminname@servername -p
Then run the following script to create a wordpress user account in the database and create a new database.
create user 'wordpress' IDENTIFIED BY 'Sup3r53crEtP455w0rd';

create database wordpress;
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress';
WordPress doesn’t support connecting to a MySQL database using SSL connections out of the box. There are ways to patch this behaviour by updating the code, but otherwise you will need to change the SSL settings of the MySQL server. You will also want to allow Azure resources to communicate with the server.
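Both changes can be scripted; a sketch assuming the Az.MySql PowerShell module (the names are placeholders, and the 0.0.0.0 start/end address is the special firewall rule that allows Azure services through):

```powershell
# Turn off SSL enforcement so a stock WordPress container can connect
Update-AzMySqlServer -ResourceGroupName 'wordpress-rg' `
    -ServerName 'wpdbserver' -SslEnforcement Disabled

# Allow Azure services (such as the web app) to reach the server
New-AzMySqlFirewallRule -ResourceGroupName 'wordpress-rg' `
    -ServerName 'wpdbserver' -Name 'AllowAzureServices' `
    -StartIPAddress 0.0.0.0 -EndIPAddress 0.0.0.0
```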

Web Service Installation

There are two parts to the web service. The billing component is called the App Service Plan. We’re going to be using Docker containers for the web service using a Linux App Service Plan. Give the Service Plan a name and assign it to your resource group and subscription. Then make sure that it’s a Linux service plan in the same location as your database. Finally make sure that you’re happy with the billing for the site.
Now you can create the Web Application and associate it with the new service plan. Again make sure that you set the OS to Linux and then select configure container.
Change to the Docker tab and then enter wordpress:latest. This is the public WordPress image and means that you get the latest WordPress setup whenever you update the container.
Once the web service deploys and starts you will be able to access it using the sitename.azurewebsites.net address. This will show the WordPress quickstart page. Select the correct language before proceeding to the database setup page.
Next fill in the connection settings for your MySQL server. You need to make sure that you use the full names in this section, so the username needs to be wordpress@servername and the database host needs to be servername.mysql.database.azure.com. If you don’t use this syntax then you will end up with a connection error.
All going well you should now see the WordPress welcome page.
There’s still more to do before you have everything ready to go into production, but with just a few steps you’ve got a serverless web server. This is what makes public cloud so powerful.

Options for deploying WordPress to Azure

We’ve been using WordPress.com to host our site. This is a cost-effective solution, particularly with the cheaper subscription levels, but we decided that we really needed to drink our own kool-aid and migrate to our own public cloud. This even resulted in a saving for us, as we have a Microsoft Partner account which has monthly Azure credits. Finally, since we are now running our own WordPress site there are no functionality restrictions, as is the case with the WordPress.com-hosted sites. With Azure there are a few different ways to deploy WordPress:
  1. Deploy WordPress using a VM in IaaS. This could be done using either one or several Windows or Linux VMs. Using this model you need to think about whether you want to scale up or out and design for this capability. You’re also running a full blown operating system, so you are paying to run this as well as having to maintain it yourself.
  2. Next you could deploy everything into a container. This would result in both the database and the web site running inside a container. This will have the smallest footprint, but scaling the solution will be a little harder as the database is contained inside the Docker container as well.
  3. You could also use the PaaS Web App service to run WordPress. This again can be either a Windows or Linux web service. In this case you will also need to deploy a database service, which does allow for the web service to be scaled out if required.
  4. Finally you can also use containers but with an external database. This will use a Docker image for the web service which connects to a dedicated database service. This solution actually runs on the Linux PaaS Web App, so the difference between the two is how you stand up your solution. Is it pulled in from a Docker image repository or do you push the web code using git into the web service?
In the end we decided on a WordPress docker image connecting to a Azure Database for mySQL server. This allowed for a shockingly quick deployment while still allowing some flexibility and the ability to expand. In the next article we’ll go through the process of how we set up this site.
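For reference, the same hybrid shape can be tried locally, since the official wordpress image takes its database connection from environment variables (the server name and password below are illustrative):

```powershell
docker run -d -p 80:80 `
  -e WORDPRESS_DB_HOST='servername.mysql.database.azure.com' `
  -e WORDPRESS_DB_USER='wordpress@servername' `
  -e WORDPRESS_DB_PASSWORD='Sup3r53crEtP455w0rd' `
  -e WORDPRESS_DB_NAME='wordpress' `
  wordpress:latest
```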

This week in the cloud – 18th June 2018

There’s so much happening in the cloud space at the moment that I thought it would be good for my own reference as much as anyone else’s to produce a summary of some of the big changes that have happened this week. This week has been particularly busy with the Microsoft Build Conference.

The compute decision tree

The first resource that I found isn’t new this week but is quite useful. There are so many different types of compute services, and choosing the wrong one can be catastrophic when migrating on-premises resources to the Azure cloud.

This and additional information is located in Microsoft’s Azure architecture documentation.

The new DEV lab

Next up is a look at the DevTest Labs function in Azure. If you haven’t heard of it, it’s a great way to spin up a new environment to do some testing without having that old hardware around or having to bother with building all the boring stuff.

With this you can deploy templates with multiple machines which can include different components. This allows you to do things like deploy an SCCM environment, even though these include multiple servers and services: deploy multiple VMs with domain controllers (including standing up a new forest), SQL services, and the SCCM services, all using an automated process.

Then you can minimise costs by automating the shutdown of the environment, so that an idle DEV machine isn’t costing you money.
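The same idle-cost idea can be sketched with the Azure CLI for a standalone test VM (DevTest Labs also has its own lab-wide shutdown policy; the resource names and time below are placeholders):

```powershell
# Schedule an automatic daily shutdown at 19:00 UTC for a test VM
az vm auto-shutdown --resource-group devtest-rg --name sccm-test01 --time 1900
```

With this in place the VM is deallocated each evening, so you only pay compute charges for the hours it’s actually in use.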

A great resource about this is found here.

PSTN Services in Teams getting closer

Here in NZ we will be holding our breath for a while longer before PSTN services are available in Office 365, but things are looking a little easier with the introduction, in preview, of Direct Routing. This is only available in Teams *sigh*, but it allows an on-premises telephony gateway to integrate directly with Teams. No more Skype for Business on-premises environment or multi-VM Cloud Connector: just install a supported physical (or even virtual) telephony gateway and away you go.

Let’s just hope that Teams improves to the point where we’ll all accept Microsoft taking Skype for Business away from us in the future.

Linux Everywhere even in your Azure AD

Microsoft now loves Linux. Really loves it. Loves it so much that they have released a Linux distro in the form of Azure Sphere. This is a new IoT operating system which Microsoft will support for 10 years. While it has built-in integration with Azure, there appears to be nothing stopping it from connecting to another cloud service or even an on-premises environment.

Next up is a boring old Linux VM running in Azure. Fairly boring, except that you can now integrate it with Azure AD as the identity provider. This enables you to log on to a Linux machine in Azure using your Azure AD credentials, which in a hybrid environment are synced from your on-premises accounts.
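Setting this up can be sketched with the Azure CLI as below. The resource names, user and scope are placeholders; the extension and role names are as documented for the Azure AD Linux sign-in preview.

```powershell
# Add the Azure AD login extension to an existing Linux VM (names are placeholders)
az vm extension set --resource-group linux-rg --vm-name ubuntu01 `
    --publisher Microsoft.Azure.ActiveDirectory.LinuxSSH --name AADLoginForLinux

# Grant a user the right to sign in to the VM with admin (sudo) rights
az role assignment create --role "Virtual Machine Administrator Login" `
    --assignee user@contoso.com --scope <vm-resource-id>
```

There is also a "Virtual Machine User Login" role for standard, non-sudo access.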

Another Azure AD Service

Just because there are not enough ways to combine the words Azure, Active and Directory in a product name, there is now also Azure Active Directory Domain Services. This isn’t really a new service, but I have to admit I totally missed it and must have assumed it was just one of the other Azure Active Directory services.

This time, though, it’s a full-blown Active Directory service without the VMs. This Azure service uses the Azure AD directory to stand up a complete Active Directory domain in Azure, including the features Azure AD doesn’t offer, such as Group Policy, Organisational Units and NTLM/Kerberos authentication.

To be clear, this is still NOT your on-premises domain, but a separate domain with the same users, passwords and groups.

Details can be found here.

This is just a taster of some of the changes that have been introduced recently. Microsoft announced at Build that they had introduced 170 new capabilities in Azure in the last year. Keeping up with these changes is going to get very difficult, and that’s without even including AWS.



Office 365 Hybrid Send-As Functionality – Not quite there yet.

Recently Microsoft announced that mailbox delegation would be available between cloud and on-premises accounts. This would allow a cloud mailbox user to send-as an on-premises mailbox.

Looking at the documentation, it appears that this should be working as of early May 2018. In particular:

As of February 2018 the feature to support Full Access, Send on Behalf and folder rights cross forest is being rolled out and expected to be complete by April 2018.

This feature requires the latest Exchange 2010 RU, Exchange 2013 CU10, or Exchange 2016 or above, but otherwise should just work.

Unfortunately, when a user tried to use this feature, it didn’t work.

This message could not be sent. Try sending the message again later, or contact your network administrator. You do not have the permission to send the message on behalf of the specified user. Error is [0x80070005-0x0004dc-0x000524].

Notice that this mentions send on behalf rights. In this case the user didn’t have those, but instead had the more powerful Send-As rights.

Well, it looks like Microsoft is running a bit late on the rollout, with this other article now shifting the rollout completion to Q2 2018.

As of February 2018 the feature to support Full Access and Send on Behalf Of is being rolled out and expected to be complete by the second quarter of 2018.

Either way it’s not much longer, but in the interim you may need to keep assigning send on behalf rights prior to migrating mailboxes. This will save you having to use PowerShell to do it post-migration, since the on-premises ECP interface doesn’t support granting these rights to cloud mailboxes.
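The interim workaround can be sketched in the on-premises Exchange Management Shell; the mailbox and user identities below are placeholders. Granting the right before the mailbox moves means the permission comes across with the migration.

```powershell
# Grant send on behalf rights on the on-premises mailbox before migrating it
Set-Mailbox -Identity "Finance Shared" -GrantSendOnBehalfTo "jane.doe"

# Verify the grant before starting the mailbox move
Get-Mailbox -Identity "Finance Shared" | Select-Object GrantSendOnBehalfTo
```

Once the cross-premises rollout completes, Send-As and Full Access should work without this workaround.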