Blog

Windows Server 2012 R2 fails to install .Net Framework 3.5

.Net Framework 3.5 is getting old now and really shouldn't be installed unless it's required, but if you do need it, it's now a real pain to install.

Almost everyone that has had the pleasure of trying to install this feature on Windows Server 2012 R2 will have found this Microsoft article, which basically says that if you've already installed KB2966827 or KB2966828 the installation will fail. The fix is to remove them before trying the installation again.

Well, these patches came out in 2014 and there doesn't appear to have been any update to this guidance since. Look at most new servers and you won't see these patches, but just try to install .Net Framework 3.5. It didn't go very well, did it?

When you run the installation it tries to find newer file versions which aren't present on the original source media. It needs these newer versions because of other updates that have already been installed. Of course, since this is the first time .Net Framework 3.5 is being installed, these files were never around to be patched. But the servicing stack knows that it needs them to remain stable and secure.

If the server is configured to use Windows Update then it will be able to download these files but otherwise the installation will fail.

Tracking down the offending patches

When you run the .Net installation it logs to c:\windows\logs\CBS\cbs.log. When the installation fails, have a look for something like the following:

CommitPackagesState: Started persisting state of packages
2018-01-29 09:03:42, Info                  CBS    Failed call to CryptCATAdminAddCatalog. [HRESULT = 0x800706be - RPC_S_CALL_FAILED]
2018-01-29 09:03:42, Info                  CBS    Failed to install catalog file \\?\C:\WINDOWS\CbsTemp\30644498_651525325\Package_for_KB4058702~31bf3856ad364e35~amd64~~16299.188.1.0.cat for package [HRESULT = 0x800706be - RPC_S_CALL_FAILED]
2018-01-29 09:03:42, Info                  CBS    Failed to install catalog for package: Package_for_KB4058702~31bf3856ad364e35~amd64~~16299.188.1.0 [HRESULT = 0x800706be - RPC_S_CALL_FAILED]
2018-01-29 09:03:42, Info                  CBS    Failed to stage package manifest. [HRESULT = 0x800706be - RPC_S_CALL_FAILED]
2018-01-29 09:03:42, Info                  CBS    Failed to add package. [HRESULT = 0x800706be - RPC_S_CALL_FAILED]
2018-01-29 09:03:42, Info                  CBS    Failed to persist package: Package_for_KB4058702~31bf3856ad364e35~amd64~~16299.188.1.0 [HRESULT = 0x800706be - RPC_S_CALL_FAILED]
2018-01-29 09:03:42, Info                  CBS    Failed to update states and store all resolved packages. [HRESULT = 0x800706be - RPC_S_CALL_FAILED]
2018-01-29 09:03:42, Info                  CSI    [email protected]/1/29:15:03:42.863 CSI Transaction @0x27adb0b1d50 destroyed
2018-01-29 09:03:42, Info                  CBS    Perf: Resolve chain complete.
2018-01-29 09:03:42, Info                  CBS    Failed to resolve execution chain. [HRESULT = 0x800706be - RPC_S_CALL_FAILED]
2018-01-29 09:03:42, Error                 CBS    Failed to process Multi-phase execution. [HRESULT = 0x800706be - RPC_S_CALL_FAILED]
2018-01-29 09:03:42, Info                  CBS    WER: Generating failure report for package: Package_for_KB4058702~31bf3856ad364e35~amd64~~16299.188.1.0, status: 0x800706be, failure source: Resolve, start state: Resolved, target state: Staged, client id: UpdateAgentLCU
2018-01-29 09:03:42, Info                  CBS    Not able to query DisableWerReporting flag.  Assuming not set... [HRESULT = 0x80070002 - ERROR_FILE_NOT_FOUND]

In this part of the log you will notice it mentioning KB4058702. The installation is trying to locate the .Net Framework 3.5 files from this patch, which simply don't exist on the system. But if you remove this patch and retry, you are likely to find another patch being mentioned.
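To save scrolling through the log by hand, the failing KB numbers can be pulled out programmatically. Here's a minimal Python sketch based on the excerpt above; the log location and message format are assumptions taken from that excerpt, so adjust if your log looks different.

```python
import re

# Matches the "Failed to install catalog for package: Package_for_KBnnnnnnn~..."
# and "Failed to persist package: ..." lines seen in the cbs.log excerpt above.
PATTERN = re.compile(r"Failed to (?:install catalog for|persist) package: Package_for_(KB\d+)")

def failing_kbs(log_text):
    """Return the unique KB numbers mentioned in failed-package lines, in order."""
    found = []
    for match in PATTERN.finditer(log_text):
        kb = match.group(1)
        if kb not in found:
            found.append(kb)
    return found

# Usage on the affected server:
#   with open(r"C:\Windows\Logs\CBS\cbs.log", errors="ignore") as f:
#       print(failing_kbs(f.read()))
```

Run it after each failed attempt to see which patch the installer is tripping over next.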

Ultimately I found that the following patches needed to be removed before .Net Framework 3.5 would install but you may find a slightly different list.

  • KB3195792
  • KB4058702
  • KB4040981
  • KB4014505
  • KB4014581
  • KB3048072
  • KB3142045
  • KB3072307
  • KB3188732
  • KB3188743
  • KB3210132
  • KB2966828

Once the installation is successful, make sure that you reinstall the patches you removed.
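If you have a long list of patches to work through, generating the uninstall commands saves some typing. This is a small sketch assuming the standard wusa.exe /uninstall /kb: syntax; the KB list shown is just an example, substitute your own.

```python
KBS_TO_REMOVE = ["KB3195792", "KB4058702", "KB4040981"]  # example values only

def kb_number(kb):
    """wusa takes the bare number, without the KB prefix."""
    return kb[2:] if kb.upper().startswith("KB") else kb

def uninstall_commands(kbs, quiet=True):
    """Build one wusa.exe uninstall command line per patch."""
    flags = " /quiet /norestart" if quiet else ""
    return ["wusa.exe /uninstall /kb:" + kb_number(kb) + flags for kb in kbs]

for command in uninstall_commands(KBS_TO_REMOVE):
    print(command)  # or run each via subprocess on the server
```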

Skype for Business CU fails to install – Error 1603: Server.msp had errors installing

Microsoft have done a good job making the patching process for Skype for Business as simple as possible, but over time you may suddenly come across a server that just will not install a CU.

When you look at the logs the error doesn’t give you a lot of information to work on:

Executing command: msiexec.exe  /update "Server.msp" /passive /norestart /l*vx "c:\patches\Server.msp-SRV01-[2018-11-28][19-27-42]_log.txt"

ERROR 1603: Server.msp had errors installing.

ERROR: SkypeServerUpdateInstaller failed to successfully install all patches

Right.

Luckily it does give you a log file in the first line. A REALLY BIG log file.

If you search for "error" you will likely find a few, but don't get too worried. One entry in particular points you to yet another log. This is located in the AppData\Local\Temp folder of the user running the upgrade and is called LCSSetup_Commands.txt. Inside you will find the following information:

Install-CsDatabase : Command execution failed: Install-CsDatabase was unable to find suitable drives for storing the database files. This is often due to insufficient disk space; typically you should have at least 32 GB of free space before attempting to create databases. However, there are other possible reasons why this command could have failed. For more information, see http://ift.tt/1Og9jlm

So it seems that Skype for Business won't patch the database if free disk space on the server drops below a certain threshold. The log mentions 32 GB, but we've found that it will go lower than this.
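A quick pre-flight check before running a CU avoids hitting this at all. Here's a minimal sketch using Python's standard library; the 32 GB figure is the threshold quoted in the installer's own log, and the default path is an assumption.

```python
import shutil

def free_space_check(path="C:\\", required_gb=32):
    """Return free space in GB and whether it clears the threshold the CU log quotes."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb, free_gb >= required_gb

# Run before patching:
#   free_gb, ok = free_space_check()
#   if not ok: do some housekeeping first
```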

After a bit of housekeeping the patch will run through successfully.

VMM bare metal build fails due to no matching logical network

If you deploy a new VMM bare metal build environment you may face an issue where the deployment fails with Error (21219).

The error description doesn’t appear to make a lot of sense either stating:

IP or subnet doesn’t match to specified logical network.


Recommended Action
Specify matching logical network and IP address or subnet.

If you face this, then you’ve likely checked and double checked all of your VMM networking and everything looks fine.

What’s happening?

This issue is caused by the build process looking up the IP address that you have allocated to the new server and comparing it to the logical network. When it does this, it finds that the IP address doesn't match the subnet configured on the logical network.

But you've checked this already and it DOES. You checked it again just now, just in case, and it still does, so this can't be your problem, right!?

Well no.

You see, VMM appears not to be looking at the logical network as a whole for a match, but specifically at the first network site that was created on the logical network. In most cases you created the management network site first, so no big deal. But if you didn't create it first, or you deleted it for some reason and recreated it, then it will no longer be the first created site.
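The behaviour we observed can be illustrated with a small Python sketch (the site names and subnets here are hypothetical). The host's IP is only ever compared against the first created network site, so a later site that does contain it never gets consulted:

```python
from ipaddress import ip_address, ip_network

# Network sites in creation order on the logical network (example values).
network_sites = [
    ("Storage", ip_network("10.1.20.0/24")),     # created first
    ("Management", ip_network("10.1.10.0/24")),  # created later
]

def vmm_style_match(host_ip, sites):
    """Mimic the observed check: only the first created site is consulted."""
    _name, subnet = sites[0]
    return ip_address(host_ip) in subnet

# A management host fails the check even though the Management site contains it.
```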

You've got to be kidding me. But it's easy to fix, right?

You would think so but notice that there are no re-order buttons on the subnets?

This means that the only way to “reorder” them is to delete all of the network sites and recreate them. And if you have created any VM networks, or bound the sites to any other configuration objects, then you'll be even happier to know that you will need to undo all of that configuration too.

Hopefully you’re deploying a new cluster and not deciding to deploy bare metal build to an existing one.

In case you think you must have missed this somewhere, it isn’t stated in the documentation. So is it a bug or a feature?

Either way just remember to create the host management subnet first in future.

Configure Hybrid Public Folder with Exchange 2013/2016 (aka Modern Public Folders)

Public Folders don't seem to get the usage that they used to, so it's been a while since we worked with them in Exchange. So long, in fact, that what we last configured is now called Legacy Public Folders; the new version, introduced in Exchange 2013, is called Modern Public Folders.

A Refresher on Exchange Public Folders

In order to understand the new process of setting up Hybrid mode with Exchange Online you first need to understand some changes to how Public Folders work.

In Exchange 2010 public folders were stored in dedicated Public Folder Databases. These also had their own log files and had to be managed independently of any user mailbox databases.

With Modern Public Folders, public folder content has been moved into mailboxes which are stored in standard mailbox databases. The environment can contain multiple public folder mailboxes, each of which can contain different parts of the public folder hierarchy.

When a user accesses a public folder they are actually opening the mailbox that contains that part of the hierarchy. Unlike previous versions, the data is only accessible from the server hosting the active database, rather than from any server hosting a public folder replica.

Configuring Hybrid Public Folders

What does this mean for configuring Hybrid mode Public Folders?

First of all if you searched for something like “Configure Exchange Public Folder Hybrid” and found this Exchange 2019 article referring to Exchange 2010 SP3 or later then you’ve got the wrong article. You need to look for this article which is only on the Exchange Online documents site.

This newer article skips all of the steps for setting up new Public Folder mailboxes, resulting in just three steps:

1) Download the following files from Mail-enabled Public Folders – directory sync script

  • Sync-MailPublicFolders.ps1
  • SyncMailPublicFolders.strings.psd1

2) On the Exchange server, run the following command to synchronize mail-enabled public folders from your local on-premises Active Directory to Office 365.


Sync-MailPublicFolders.ps1 -Credential (Get-Credential) -CsvSummaryFile:sync_summary.csv

3) Enable the Exchange Online organization to access the on-premises public folders. You will point it at all of your on-premises public folder mailboxes.


Set-OrganizationConfig -PublicFoldersEnabled Remote -RemotePublicFolderMailboxes PFMailbox1,PFMailbox2,PFMailbox3

Issues When Configuring Hybrid Mode

There are a few things to be aware of with this process though, particularly the final step.

1) Remember that the new Public Folders are stored in mailboxes which are associated with AD user accounts. If you aren't syncing your entire Active Directory forest then the public folder mailbox objects may not be synced to Exchange Online. So where are these stored by default? Well, the Users container of your Exchange-enabled domain, of course.

It's likely that you haven't synced this container, but you CAN move these objects to an OU that is being synced without any impact. Unfortunately this requirement isn't included in the documentation. If these objects aren't synced to Exchange Online then you'll get the following message:


Set-OrganizationConfig -PublicFoldersEnabled Remote -RemotePublicFolderMailboxes PFMailbox1
Couldn't find object "PFMailbox1". Please make sure that it was spelled correctly or specify a different object.
    + CategoryInfo          : NotSpecified: (:) [Set-OrganizationConfig], ManagementObjectNotFoundException
    + FullyQualifiedErrorId : [Server=SYAPR01MB2717,RequestId=d79eaa00-ff32-4076-8791-54ba22e3cb76,TimeStamp=26/11/201
   8 7:13:26 AM] [FailureCategory=Cmdlet-ManagementObjectNotFoundException] C4302B7C,Microsoft.Exchange.Management.Sy
  stemConfigurationTasks.SetOrganizationConfig
    + PSComputerName        : outlook.office365.com

2) Once you've moved the public folder mailbox objects, remember that the -RemotePublicFolderMailboxes PFMailbox1,PFMailbox2,PFMailbox3 syntax refers to the public folder mailboxes and NOT the public folder names. You can find these in the ECP under Public Folder Mailboxes.

3) You also need to list all public folder mailboxes in the one command. If you add an additional public folder mailbox in the future, then include all of the mailboxes and not just the new one.
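As a trivial illustration of points 2 and 3, the parameter takes the complete, comma-separated set of public folder mailbox names every time (the mailbox names below are hypothetical):

```python
existing = ["PFMailbox1", "PFMailbox2", "PFMailbox3"]
new_mailbox = "PFMailbox4"

# The cmdlet replaces the whole list, so always pass existing + new.
all_mailboxes = existing + [new_mailbox]
command = ("Set-OrganizationConfig -PublicFoldersEnabled Remote "
           "-RemotePublicFolderMailboxes " + ",".join(all_mailboxes))
print(command)
```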

4) Finally, remember that your on-premises address book is different from your online address book. This means that any new mail-enabled public folders will only appear in your online address book once you sync them using the Sync-MailPublicFolders.ps1 script. If users can create these objects then you may want to think about scheduling this task.

Also note that only users who have been created on-premises and migrated to Exchange Online can access the on-premises Public Folder store, as only these users exist in the on-premises address book used to authenticate access.

It may not seem that way but ultimately this is a simple service to configure with just a few little gotchas to be aware of.

Skype for Business Web Sites Fail to Work Using Microsoft Web Application Proxy

In a classic case of skim reading the documentation, we had trouble publishing a Skype for Business environment externally using Microsoft Web Application Proxy (WAP). This mainly impacted the Skype for Business mobile client, which was failing to log on.

You could still access all of the standard web sites, such as dialin and meet, through a browser though.

This happened due to a missed step when setting up the WAP. If the internal and external URLs are different, you need to disable the translation of URLs in the request headers. Use the following PowerShell command on the WAP server:


$Rule = (Get-WebApplicationProxyApplication -Name "Insert Rule Name to Modify").ID
Set-WebApplicationProxyApplication -ID $Rule -DisableTranslateUrlInRequestHeaders:$True

Once completed reload the mobile client and it should connect without issues.

Misconfigured Skype for Business Edge Server Breaks Office 365 Hybrid Federation

We've been moving more customers to Office 365 recently. Not only are they seeing the business case stack up from a cost point of view, but they are also after the cloud-only features which are now appearing more frequently. A troubling development with these migrations is the number of broken Skype for Business Edge servers that we are seeing.

Now, these aren't totally broken, just broken enough that when we try to integrate the on-premises Skype for Business environment with Office 365 services, things go wrong.

How will you detect this?

This will often show up when trying to configure hosted voicemail in Exchange Online, since this is often the first hybrid service to be deployed. When a call is redirected to the Exchange Online server it fails, and the event logs on the Front End server say that the dial plan wasn't configured correctly.


Attempts to route to servers in an Exchange UM Dialplan failed

No server in the dialplan [Hosted__exap.um.outlook.com__tenant.onmicrosoft.com] accepted the call with id [XXXXXXXXXXXXXXXXXXXXXXXXX].

Cause: Dialplan is not configured properly.

Resolution:

Check the configuration of the dialplan on Exchange UM Servers.


All the configuration looked fine, so we needed to dig into the SIP traffic a little more. We did this using Snooper. We could see the message being handed off to the Edge server from the Front End server, but then the Edge server connection timed out.


Response Data
504  Server time-out
ms-diagnostics:  1008;reason="Unable to resolve DNS SRV"

This was a little strange, as the Edge server was working fine for other federation partners and DNS lookups were working on the Edge server.

What was happening?

One thing that didn't look right, though, was that the internal interface was configured to use the internal DNS server. Referring to the Edge Server deployment guide confirmed that this wasn't correct.

https://docs.microsoft.com/en-us/skypeforbusiness/deploy/deploy-edge-server/deploy-edge-servers

Interface configuration without DNS servers in the perimeter network
1. Install two network adapters for each Edge Server, one for the internal-facing interface, and one for the external-facing interface.

Note
The internal and external subnets must not be routable to each other.


2. On your external interface, you’ll configure one of the following:


a. Three static IP addresses on the external perimeter network subnet. You’ll also need to configure the default gateway on the external interface, for example, defining the internet-facing router or the external firewall as the default gateway. Configure the adapter DNS settings to point to an external DNS server, ideally a pair of external DNS servers.


b. One static IP address on the external perimeter network subnet. You’ll also need to configure the default gateway on the external interface, for example, defining the internet-facing router or the external firewall as the default gateway. Configure the adapter DNS settings to point to an external DNS server, or ideally a pair of external DNS servers. This configuration is ONLY acceptable if you have previously configured your topology to have non-standard values in the port assignments, which is covered in the Create your Edge topology for Skype for Business Server article.


3. On your internal interface, configure one static IP on the internal perimeter network subnet, and don’t set a default gateway. Also leave the adapter DNS settings empty.

4. Create persistent static routes on the internal interface to all internal networks where clients, Skype for Business Server, and Exchange Unified Messaging (UM) servers reside.

5. Edit the HOST file on each Edge Server to contain a record for the next hop server or virtual IP (VIP). This record will be the Director, Standard Edition server or Front End pool you configured as the Edge Server next hop address in Topology Builder. If you’re using DNS load balancing, include a line for each member of the next hop pool.

How to fix it?

The Edge servers were changed to meet this guidance by creating a HOSTS file containing all servers in the topology, using both short names and FQDNs, and by making the external adapter the only adapter with DNS settings, pointing at DNS servers external to the organisation.
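Generating the HOSTS entries helps avoid typos when the topology has more than a couple of servers. A small sketch; the server names and addresses here are placeholders for your own topology.

```python
# Internal next-hop and topology servers (example values only).
servers = {
    "sfbfe01": ("10.0.1.11", "sfbfe01.contoso.local"),
    "sfbfe02": ("10.0.1.12", "sfbfe02.contoso.local"),
}

def hosts_entries(members):
    """One HOSTS line per server carrying both the FQDN and the short name."""
    return "\n".join(
        f"{ip}\t{fqdn}\t{short}"
        for short, (ip, fqdn) in sorted(members.items())
    )

print(hosts_entries(servers))
```

Paste the output into the HOSTS file on each Edge server.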

Voicemail started working once this change was made.

Why this happens

So why did this happen? Part of the setup for voicemail located in Office 365 is configuring a hosting provider in Skype for Business.


New-CsHostingProvider -Identity 'Exchange Online' -Enabled $True -EnabledSharedAddressSpace $True -HostsOCSUsers $False -ProxyFqdn "exap.um.outlook.com" -IsLocal $False -VerificationLevel UseSourceVerification

This provider has shared address space enabled. This means that endpoints with the same SIP domain name can be located either on-premises or in the cloud. In our case the endpoint is the Exchange Online UM service.

When a call is routed to Exchange Online UM, the Front End looks up the local directory and sees that the user isn't located on-premises. The call is passed to the Edge server, which performs a lookup of the _sipfederationtls._tcp.domain.com DNS record. Why is it doing this? Well, basically it's trying to make a federation request with its own domain, and this is the start of that process. But the _sipfederationtls._tcp.domain.com record only exists externally, so that lookup fails. Since it can't federate with itself, it never gets to the next step, which is establishing a connection to Exchange Online.
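The failure mode can be modelled with a toy lookup (this is an illustration, not real DNS code, and the domain is an example): the federation SRV record exists in the external zone only, so an Edge server pointed at internal DNS can never complete the self-federation step.

```python
def srv_name(sip_domain):
    # The record Skype for Business federation looks for.
    return f"_sipfederationtls._tcp.{sip_domain}"

# Toy zone contents (example domain).
external_dns = {srv_name("contoso.com"): "sip.contoso.com:5061"}
internal_dns = {}  # the record was never published internally

def resolve_srv(zone, sip_domain):
    # None models the "Unable to resolve DNS SRV" diagnostic seen in Snooper.
    return zone.get(srv_name(sip_domain))

# Internal lookup -> None -> 504 Server time-out; external lookup succeeds.
```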

This can also be fixed by adding the DNS record to your internal DNS, but the Edge server would still not be configured correctly, and it's possible that using the internal DNS server would result in something else not working later on. Far better to fix it properly.

Just as a matter of interest, if you tried to configure hybrid mode with Skype for Business Online you would also experience issues where your on-premises users couldn't see presence for, or send messages to, cloud users. This happens for the same reason as the Exchange UM issue, with shared address space also enabled on this hosting provider:


New-CSHostingProvider -Identity SkypeforBusinessOnline -ProxyFqdn "sipfed.online.lync.com" -Enabled $true -EnabledSharedAddressSpace $true -HostsOCSUsers $true -VerificationLevel UseSourceVerification -IsLocal $false -AutodiscoverUrl https://webdir.online.lync.com/Autodiscover/AutodiscoverService.svc/root

Both the Exchange Online and Skype for Business Online providers define hybrid relationships with on-premises Skype for Business. The only difference, apart from the provider endpoint address, is that the Skype Online provider is configured to host users while the Exchange Online provider hosts services.

Optimising WordPress for Running in Azure

Now that you have WordPress running in Azure there are a few housekeeping tasks that you may want to look at prior to switching over from your existing site.

Managing Storage

In a previous post we discussed the different types of deployment available in Azure. One thing to be aware of is that storage is managed differently between these, and if you decide to scale out then things change again. If you deploy WordPress in PaaS or a container then scaling is easy, but each new web service needs to be able to access both the database and the file repository.

You may have noticed in our instructions on how to deploy WordPress using hybrid containers that we also deployed a storage service. This is what we will be using to store all of those objects so that any web server can access them. Luckily WordPress has a plugin that makes this functionality easy as well. This should be one of the first plugins that you install; no point in having any data saved to the wrong place after all.

Once this is added you need to configure the storage account that it will use. In order to do this you need to create a storage account key, which will be used by the plugin to access the blob storage. Log in to the Azure portal and open the storage account that you want to use. Under Blobs you need to create a new container, then go to Access Keys to get one of the API keys. This information can then be put into the plugin configuration. In the WordPress admin go to Settings | Microsoft Azure and copy in the name of the storage account and the API key. If the authentication is successful you will then see the container that you created.

Make sure that Azure Storage is then selected as the default upload source and save the settings. At this point you can restore your old site into WordPress. When you do this you will see that all of the images are automatically saved in the Azure storage account, and when you upload a new file it will automatically be saved there too.

Stopping the spam

When we stood up the site it took four hours before some spammer noticed that they could use a bot to send spam through the contact form. Luckily email wasn't configured, so no one actually got spammed, but it was still a lesson that we are running our own WordPress site now and need to do some things ourselves. So first up, let's make it harder to spam the contact form. I settled on installing the Contact Form 7 plugin. This has an integration with reCAPTCHA, which is a free Google service. You will need to sign up for an account at https://google.com/recaptcha, which will give you a site key and a secret key. Put these into the Integration page for the contact plugin and then create a new contact form. We simply added the reCAPTCHA to the bottom of the form.
Once you're happy with the form you can copy its short code directly onto your page. Have a play with it and make sure that it's working before you fix the email integration. For this we used the WP Mail SMTP plugin, which provides a way to modify the SMTP settings for WordPress. Now, WordPress is not an SMTP server, so you need some external SMTP service that you can use. This plugin supports Google, Mailgun and SendGrid, but you can also manually specify SMTP server settings. Since we have an Office 365 account, we used that. For Office 365 the SMTP host will be based on your tenant name, which may be different from your domain name, with .mail.protection.outlook.com at the end. For some reason the TLS option wasn't working with Office 365, so we used SMTP on port 25 but enabled the Auto TLS option.
If you want to relay mail outside the organisation then it would be a good idea to set up an account in Office 365 and use authentication. If you don't want to do this, then remember that you will need to set up a receive connector in Exchange Online to authorise the web server to relay based on its IP address. If you just want to receive alerts yourself then no changes are required.
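For a quick sanity check of the relay settings outside WordPress, something like this Python sketch works. The host name, port and addresses are placeholders; STARTTLS on port 25 mirrors the Auto TLS behaviour described above.

```python
import smtplib
from email.message import EmailMessage

SMTP_HOST = "tenantname.mail.protection.outlook.com"  # substitute your tenant's host
SMTP_PORT = 25

def build_alert(sender, recipient, body):
    """Assemble a simple test message."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Contact form test"
    msg.set_content(body)
    return msg

def send(msg):
    # Opportunistic TLS, like the plugin's Auto TLS option.
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as smtp:
        smtp.starttls()
        smtp.send_message(msg)

# send(build_alert("web@contoso.com", "admin@contoso.com", "Testing the relay"))
```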

Configuring HTTPS

You really need to use HTTPS for your new site. The default site already has a valid SSL certificate, but using a custom name requires additional work. First you need to set up the custom domain name by going to the app service and opening the custom domain properties. The process for adding a custom name differs depending on whether the site is live or not. If it isn't, you create a new CNAME pointing to the default Azure name. This is the sitename.azurewebsites.net that was assigned when you first created the site.

If the site is already live then you don't really want to redirect it to the new Azure site just to add the custom domain; you may still have more work to do before you're ready to go live after all. To cater for this you need to create a TXT record in your DNS with the name awverify and the data containing your sitename.azurewebsites.net. If you want to have a host name for the site (e.g. www.sitename.com) then you will also need to create a record for it (e.g. awverify.www) with the data referring to the Azure site. Once this is done you can upload a public certificate and bind it to the custom domain.

If you went down the Windows PaaS route then you can use a Let's Encrypt extension which will manage acquiring and renewing Let's Encrypt certificates. This will result in a free cert associated with your custom domain. If you went down the container route then this is a little more difficult. There are solutions out there which involve deploying multiple containers: the first container runs an nginx reverse proxy which publishes the second container running WordPress, and the reverse proxy also has the Let's Encrypt integration.

In the end we went a different way. We use Cloudflare to publish our site. This already has SSL, but things get a little funky with WordPress. If we configure the custom domain on Cloudflare but don't use this name in WordPress then pages will break.
If we set both to the same name and require SSL... well, don't do that. WordPress will stop responding. We added another plugin called Cloudflare Flexible SSL, which makes sure that all pages display correctly to the end user. You then use Cloudflare to control the HTTPS configuration, and you can disable HTTP access from within Cloudflare if that is the route you want to go down.
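The awverify verification records described above follow a simple pattern, sketched here with placeholder names:

```python
azure_site = "sitename.azurewebsites.net"  # the default name Azure assigned

def awverify_records(hostnames, azure_site):
    """Record names Azure checks so a live domain can be added without redirecting it."""
    records = []
    for host in hostnames:
        name = "awverify" if host in ("", "@") else f"awverify.{host}"
        records.append((name, azure_site))
    return records

# e.g. for the apex and www:
#   awverify_records(["@", "www"], azure_site)
```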

Oh crap I changed the WordPress settings and now I can’t access my site!!!!

Yeah, we've been there. Fortunately there is an easy way to fix this. You will need to change a setting in the database to get things back again. You can do this using the Azure Cloud Shell, logging on with the built-in mysql command:
mysql -h wordpressdbserver.mysql.database.azure.com -u username@wordpressdbserver -p
Then convert the site details back to using HTTP.

UPDATE wp_options SET option_value = replace(option_value, 'https://www.sitename.com', 'http://www.sitename.com') WHERE option_name = 'home' OR option_name = 'siteurl';
Reconnect to your site and breathe.

Deploying WordPress on Azure using Hybrid Containers

In the last post we looked at the different architectures that can be used to deploy WordPress in Azure. We decided to deploy our site using a hybrid container environment. This has the web service running in a Docker container but the database running as a separate resource. This makes the web service a simple component which can easily be replaced if problems are experienced, or scaled in response to load changes.

If you haven't already deployed any Azure services then you will need to start by deploying a resource group. This will group the WordPress resources together. Give this a name, select the subscription you will be using to pay for the service, and select the Azure region that you want to use.

MySQL Server Installation

Next, deploy an Azure Database for MySQL server. This will be used to host the WordPress database and will allow us to deploy additional web services connected to the same database.
You'll need to define some basic settings as part of this deployment, and you will need to record some of the details for later. The server name needs to be unique across Azure and will ultimately end up with a fully qualified domain name of servername.mysql.database.azure.com. The server admin name and password should be something complex and will be used to remotely access the database. Finally, you may want to modify the pricing tier, as the default is designed for significant production systems.
Once the database server has successfully deployed you need to create the WordPress database on the MySQL server. If you already have the MySQL tools installed you can use them to connect to the server; otherwise you can connect using the Cloud Shell directly from the Azure portal. This is the command prompt icon in the toolbar.
If you haven't used this before it may prompt you to create a storage account. This will likely be deployed in a different Azure region from your WordPress services, but this won't cause any problems. Once you are at the prompt you can use the following command to connect to your new MySQL server:
mysql -h wordpressdbserver.mysql.database.azure.com -u username@wordpressdbserver -p
Then run the following script to create a wordpress user account, create a new database, and grant the user access to it.
create user 'wordpress' IDENTIFIED BY "Sup3r53crEtP455w0rd";

create database wordpress;
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress';
FLUSH PRIVILEGES;
WordPress doesn't support connecting to a MySQL database over SSL out of the box. There are ways to patch this behaviour by updating the code, but otherwise you will need to change the SSL enforcement setting on the MySQL server. You will also want to allow Azure services to communicate with the server.

Web Service Installation

There are two parts to the web service. The billing component is called the App Service Plan. We're going to be using Docker containers for the web service, which requires a Linux App Service Plan. Give the service plan a name and assign it to your resource group and subscription. Then make sure that it's a Linux service plan in the same location as your database. Finally, make sure that you're happy with the pricing for the plan.
Now you can create the Web App and associate it with the new service plan. Again, make sure that you set the OS to Linux and then select Configure container.
Change to the Docker tab and enter wordpress:latest. This is the public WordPress repository and means that you get the latest WordPress image whenever you update the container.
Once the web service deploys and starts you will be able to access it using the servicename.azurewebsites.net address. This will show the WordPress quick start page. Select the correct language before proceeding to the database setup page.
Next, fill in the connection settings for your MySQL server. You need to make sure that you use the full names in this section, so the username needs to be in the form yourusername@yourmysqlserver and the database host needs to be yourmysqlserver.mysql.database.azure.com. If you don't use this syntax then you will end up with a connection error.
All going well you should now see the WordPress welcome page.
There’s still more to do before you have everything ready to go into production, but in just a few steps you’ve got a serverless web server. This is what makes public cloud so powerful.

Options for deploying WordPress to Azure

We’ve been using WordPress.com to host our site. This is a cost-effective solution, particularly at the cheaper subscription levels, but we decided that we really needed to drink our own kool-aid and migrate to our own public cloud. This even resulted in a saving for us, as we have a Microsoft Partner account which includes monthly Azure credits. Finally, since we are now running our own WordPress site, there are none of the functionality restrictions that apply to WordPress.com sites. With Azure there are a few different ways to deploy WordPress:
  1. Deploy WordPress using a VM in IaaS. This could be done using one or several Windows or Linux VMs. Using this model you need to think about whether you want to scale up or out and design for that capability. You’re also running a full-blown operating system, so you’re paying to run this as well as having to maintain it yourself.
  2. Deploy everything into a container. This would result in both the database and the web site running inside a single container. This has the smallest footprint, but scaling the solution will be a little harder as the database is contained inside the Docker container as well.
  3. Use the PaaS Web App service to run WordPress. This again can be either a Windows or a Linux web service. In this case you will also need to deploy a database service, which does allow the web service to be scaled out if required.
  4. Finally, you can also use containers but with an external database. This uses a Docker image for the web service, which connects to a dedicated database service. This solution actually runs on the Linux PaaS Web App, so the difference between the two is how you stand up your solution: is it pulled in from a Docker image repository, or do you push the web code into the web service using git?
In the end we decided on a WordPress Docker image connecting to an Azure Database for MySQL server. This allowed for a shockingly quick deployment while still allowing some flexibility and the ability to expand. In the next article we’ll go through the process of how we set up this site.

This week in the cloud – 18th June 2018

There’s so much happening in the cloud space at the moment that I thought it would be good for my own reference as much as anyone else’s to produce a summary of some of the big changes that have happened this week. This week has been particularly busy with the Microsoft Build Conference.

The compute decision tree

The first resource that I found wasn’t new this week but will be quite useful. There are so many different types of compute services, and choosing the wrong one can be catastrophic when migrating on-premises resources to the Azure cloud.

This and additional information is located at https://docs.microsoft.com/en-us/azure/architecture/guide/technology-choices/compute-decision-tree.

The new DEV lab

Next up is a look at the DevTest Labs function in Azure. If you haven’t heard of this, it’s a great way to spin up a new environment to do some testing without having that old hardware around or having to bother with building all the boring stuff.

https://azure.microsoft.com/en-us/services/devtest-lab/

With this you can deploy templates with multiple machines which can include different components. This allows you to do things like deploy an SCCM environment, even though these include multiple servers and services: multiple VMs with domain controllers (including standing up a new forest), SQL services and the SCCM services, all using an automated process.

Then you can minimise costs by automating the shutdown of the environment so that an idle dev machine isn’t costing you money.

A great resource about this is found here. https://execmgr.net/2018/04/13/building-a-configmgr-lab-in-azure/

PSTN Services in Teams getting closer

Here in NZ we will be holding our breath for a while longer before PSTN services are available in Office 365, but things are looking a little easier with the introduction in preview of Direct Routing. This is only available in Teams *sigh* but will allow an on-premises telephony gateway to integrate directly with Teams. No more Skype for Business on-premises environment or multi-VM cloud connector; just install a supported physical or even virtual telephony gateway and away you go.

https://techcommunity.microsoft.com/t5/Microsoft-Teams-Blog/Direct-Routing-NOW-in-Public-Preview/ba-p/193915

Let’s just hope that Teams can improve to the point that we will all accept them taking Skype for Business off us in the future.

Linux Everywhere even in your Azure AD

Microsoft now loves Linux. Really loves it. Loves it so much that they have now released a Linux distro in the form of Azure Sphere. This is a new IoT operating system which Microsoft will support for 10 years. While it has built-in integration with Azure, there appears to be nothing stopping it from connecting to another cloud service or even an on-premises environment.

Next up is a boring old Linux VM running in Azure. Fairly boring, except that you can now integrate it with Azure AD as the identity provider. This will enable you to log on to a Linux machine in Azure using your on-premises credentials.

https://docs.microsoft.com/en-us/azure/virtual-machines/linux/login-using-aad
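As described in that article, this works by adding a VM extension and then granting a login role. A sketch, assuming an existing Linux VM named `linuxvm` in a placeholder resource group `lab-rg` (the feature was in preview at the time of writing, so the details may change):

```shell
# Install the AAD login extension on an existing Linux VM
az vm extension set \
  --resource-group lab-rg \
  --vm-name linuxvm \
  --publisher Microsoft.Azure.ActiveDirectory.LinuxSSH \
  --name AADLoginForLinux

# Grant a user the right to log in with admin (sudo) privileges;
# the subscription ID placeholder needs to be filled in
az role assignment create \
  --role "Virtual Machine Administrator Login" \
  --assignee user@contoso.com \
  --scope /subscriptions/<subscription-id>/resourceGroups/lab-rg
```

There is also a "Virtual Machine User Login" role for users who shouldn’t get sudo rights.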

Another Azure AD Service

Just because there are not enough ways to use the words Azure, Active and Directory in a product name, there is now also Azure Active Directory Domain Services. This isn’t really a new service, but I have to admit I totally missed it and must have thought it was just one of the other Azure Active Directory services.

This time, though, it’s a full-on Active Directory service without the VM. This Azure service uses the Azure AD directory service to stand up a full Active Directory service in Azure, complete with the features that Azure AD doesn’t include, such as Group Policy, Organisational Units and NTLM/Kerberos authentication.

To be clear, this is still NOT your on-premises domain but another domain with the same users, passwords and groups.

Details can be found here. https://azure.microsoft.com/en-us/services/active-directory-ds/.

This is just a taster of some of the changes that have been introduced recently. Microsoft announced at Build that they had introduced 170 different new functions in Azure in the last year. Keeping up with these changes is going to get very difficult, and that’s without even including AWS.