Category Archives: azure

From Windows Embedded to cloud: Microsoft announces the Connected Vehicle Platform

Microsoft has announced the Connected Vehicle Platform at the CES event under way in Las Vegas.

The company is not new to in-car systems, but its track record is disappointing. It used to be all about Windows Embedded, using Windows CE to make a vehicle into a smart device.

Ford was Microsoft’s biggest partner. It built Ford SYNC on the platform and in 2012 announced five years of partnership and 5 million SYNC-enabled vehicles.

However, in 2014 Ford announced SYNC 3 with no mention of Microsoft – because SYNC 3 uses BlackBerry’s QNX.

What went wrong? There’s a 2014 analysis from Bill Howard that offers a few clues. The bit that chimes with me is that Microsoft was too slow in updating the system. The overall Windows story over the last 10 years is convoluted to say the least, with many changes to the platform and disruptive (in a bad way) strategy shifts. The same factor is a large part of why Windows Phone failed.

It is not clear at this stage whether or not Microsoft’s Connected Vehicle Platform partners (which include Renault-Nissan and BMW) will use Windows Embedded in their solutions; but what is notable is that Microsoft’s release makes no mention of it. The company has shifted to a cloud strategy, and is primarily offering Azure services rather than mandating how manufacturers choose to consume them. The detail of the announcement identifies five key areas:

  • Telematics and Predictive services
  • Marketing (“Customer insights and engagement”)
  • Productivity (Office 365, Skype)
  • Connected ADAS (Advanced Driver Assistance Systems), i.e. the car helping you to drive
  • Advanced Navigation

Cortana also gets a mention. We may think of Cortana as a virtual assistant, but in this context it means a user interface to intelligent services.

There is big competition for all this of course, with Google, Amazon and Apple also in this space. There is also politics involved. If you read Howard’s analysis linked above, note that he mentions how the auto companies dislike restrictions such as Google insisting that you can’t have Google Search unless you also use Google Maps (I have no idea if this is still the case). There is a tension here. In-car systems are an important value-add for customers and critical to marketing vehicles, but the auto companies do not want their vehicles to become just another channel for big data-gathering companies like Google and Amazon.

Another point of interest is how smartphones interact with your car. If you want a simple and integrated experience, you can just dock your phone and use it for navigation, communication and entertainment – three key areas for in-car systems. On the other hand, a docked phone will not have the built-in screen and control of vehicle features that an embedded system can offer.

Hands on with Microsoft’s ADConnect

I’ve been trying Microsoft’s ADConnect tool, the replacement for the utility called DirSync, which synchronises on-premises Active Directory with Azure AD, the directory used by Office 365.

It is therefore a key piece in Microsoft’s hybrid cloud story.

In my case I have a small office set-up with Active Directory running on Server 2012 R2 VMs. I also have an Office 365 tenant that I use for testing Microsoft’s latest cloud stuff. I have long had a few basic questions about how the sync works so I created a small Server 2012 R2 VM on which to install it.

ADConnect can be installed on a Domain Controller, though this used to be unsupported for DirSync. It seems tidier, though, and less likely to cause problems, to give ADConnect its own server.

There are a number of prerequisites, but for me the only one that mattered was that your domain must be set up on the Office 365 tenant before you configure ADConnect. You cannot configure it using the default *.onmicrosoft.com domain.

Adding a domain to Office 365 is straightforward, provided you have access to the DNS records for the domain, and provided that the domain is not already linked to another Office 365 tenant. This last point can be problematic. For example, BT uses Office 365 to provide business email services to its customers. If you want to migrate from BT to your own Office 365, detaching the domain from BT’s tenant, to which you do not have admin access, is a hassle.

When I tried to set up my domain, I found another problem. At some point I must have signed up for a trial of Power BI, and without my realising it, this created an Office 365 tenant. I could not progress until I worked out how to get admin access to this Power BI tenant and assign my user account a different primary email address. The best way to discover such problems is to attempt to add the domain and note any error messages. And to resist the wizard’s efforts to get you to set up your domain in a different tenant to the one that you want.
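
Incidentally, you can see which domains a tenant already has, and their verification status, from PowerShell with the MSOnline module. A minimal sketch, assuming the module is installed and with example.com standing in for your domain:

# Connect to the tenant, then list domains with their verification status
Connect-MsolService
Get-MsolDomain | Select-Object Name, Status, Authentication

# Show the DNS record needed to verify a newly added domain
Get-MsolDomainVerificationDns -DomainName "example.com"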

That done, I ran the setup for ADConnect. If you use the Express settings, it is straightforward. It requires SQL Server, but installs its own instance of SQL Server Express LocalDB by default.

You enter credentials for your Office 365 tenant and for your on-premises AD, then the wizard tells you what it will do.

I was interested in the link on the next screen, which describes how to get all your Windows 10 domain-joined computers automatically “registered” to Azure AD, enabling smoother integration.

If you follow the link, and read the comments, you may be put off; I was. It involves configuring Active Directory Federation Services as well as Group Policy and looks fiddly. I suspect this is worth doing though, and hope that configuration will be more automated in due course.

The next step was to look at the outcome. One thing that is important to understand is that synced users are distinct from other Office 365 users. Imagine then that you have existing users in Office 365 and you want to match them with existing on-premises users, rather than creating new ones. This should work if ADConnect can match the primary email address. It will convert the matching Azure AD user into a synced user. Otherwise, it will just create new users, even if there are existing Azure AD users with the same names. If it goes wrong, there are ways to recover. Note that the users are not actually linked via the email address, they are linked by an attribute called an ImmutableID.
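
You can check this from PowerShell with the MSOnline module: a synced user has ImmutableId set, while an “In Cloud” user shows it as blank. A quick sketch, with user@example.com as a placeholder:

# A non-empty ImmutableId indicates a user synced from on-premises AD
Connect-MsolService
Get-MsolUser -UserPrincipalName "user@example.com" | Select-Object DisplayName, ImmutableId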

The Office 365 admin portal is fully aware of synced users and the user list shows the distinction. Users are designated as “In Cloud” or “Synced with Active Directory”.

Synced users cannot be deleted from the Office 365 portal. You delete them in on-premises AD and they disappear.
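
In other words, something like this, with jsmith standing in for the unwanted account (Remove-ADUser needs the RSAT ActiveDirectory module; the sync command runs on the ADConnect server):

# Remove the user from on-premises AD, then push a delta sync
Remove-ADUser -Identity jsmith
Start-ADSyncSyncCycle -PolicyType Delta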

The next obvious issue is that if you dive in like me and just install ADConnect with Express Settings, you will get all your on-premises users and groups in Azure AD. In my case I have things like “ASP.NET Machine Account”, various IUSR* accounts, users created by various applications, and groups like “DHCP Administrators” and “Exchange Trusted Subsystem” that do not belong in Office 365.

These accounts do not do much harm; they do not consume licenses or mess up Office 365. On the other hand, they are annoying and confusing. You may also have business reasons to exclude some users from synchronization.

Fortunately, there are various ways to fine-tune, both before and after initial synchronization. You can read about it here. This document also states:

With filtering, you can control which objects should appear in Azure AD from your on-premises directory. The default configuration takes all objects in all domains in the configured forests. In general, this is the recommended configuration.

I find this puzzling, in that I cannot see the benefit in having irrelevant service accounts and groups synced to Office 365 – though it is not entirely obvious what is safe to exclude.

I went back to the ADConnect tool and reconfigured, using the Domain and OU filtering option. This time, I selected what seems to be a minimal configuration.

The excluded objects are meant to be deleted from Office 365, but so far they have not been. I am not sure if this will fix itself. (Update: it did, though I also re-ran a full initial sync to help it along). If not, you can temporarily disable sync, manually delete them in the Office 365 portal, then re-enable sync.

What if you want to exclude a specific user? I used the steps described to create a DoNotSync filter based on setting extensionAttribute15. You use the ADConnect Synchronization Rules Editor to create the rule, then set the attribute using ADSIEdit or your favourite tool. This worked, and the user I marked disappeared from Office 365 on the next sync.
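
Setting the attribute itself is a one-liner with the ActiveDirectory module. Note that "NoSync" here is just an example value; it must match whatever your new rule checks for:

# Mark a user so the custom rule filters it out of the sync
Set-ADUser -Identity jsmith -Add @{extensionAttribute15="NoSync"}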

Incidentally, you can trigger an immediate sync using this PowerShell command:

Start-ADSyncSyncCycle -PolicyType Delta
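
A full sync, such as the initial one I re-ran above, uses the Initial policy type instead:

Start-ADSyncSyncCycle -PolicyType Initial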

Complications

Setting up ADConnect does introduce complexity into Office 365. You can no longer do everything through the portal. It is not only deletion that does not work. When I tried to set up a mailbox in Office 365 I hit this message:

“This user’s on-premises mailbox hasn’t been migrated to Exchange Online. The Exchange Online mailbox will be available after migration is completed.”

I can see the logic behind this, but there might be cases where you want a new empty mailbox; I am sure there is a way around it, but now there is more to go wrong.

Update: there is a rather important lesson hiding here. If you are running Exchange on-premises and want to end up on Office 365 with ADConnect, you must take care about the order of events. Once ADConnect is running, you cannot do a cutover migration of Exchange, only a hybrid migration. If you don’t want hybrid (which adds complexity), then do the cutover migration first. Convert the on-premises mailboxes to mail-enabled users. Then run ADConnect, which will match the users based on the primary email address.
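
The conversion itself is done in the Exchange Management Shell, along these lines. This is only a sketch: jsmith and the external address are placeholders, and you would do this after the mailbox contents have been migrated:

# Remove the on-premises mailbox but keep the AD user,
# then mail-enable the user, pointing at the Office 365 mailbox
Disable-Mailbox -Identity jsmith
Enable-MailUser -Identity jsmith -ExternalEmailAddress "jsmith@example.mail.onmicrosoft.com"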

It is also obvious that ADConnect is designed for large organisations and for administrators who know their way around Active Directory. There is a simplified sync tool in Windows Server Essentials, though I have not used it. It would be good to see something between Essentials and the complexity of ADConnect. For example, I had imagined that there might be a mapping tool that would let you see how ADConnect intends to match on-premises users with Office 365 users, and let you amend and exclude users with a few clicks.

Microsoft has been working on this stuff for some time and is not done yet. In preview for example is Group Writeback, which lets you sync Office 365 groups back to on-premises AD.

Microsoft might also consider using different icons for the various ADConnect utilities, as they do look a bit silly if you pin them to the taskbar:

[Screenshot: the ADConnect utilities pinned to the taskbar]

The tools are:

  • Azure ADConnect (Wizard)
  • Synchronization Rules Editor (advanced filtering)
  • Synchronization Service WebService Connector Config (SOAP stuff)
  • Synchronization Service Key Management (what it says)

On the plus side, I have not hit any mysterious Active Directory errors and it has all worked without having to set up certificates, reverse proxies, special DNS entries (other than the standard ones for Office 365), or anything too fiddly, though note that I avoided ADFS and automatic Windows 10 registration.

Final thoughts

If you need to implement this, it is worth doing what I did and trying it out on a test domain first. There seem to be quite a few pitfalls, and as ever, it is easier to get it right at the start than to fix things up afterwards.

The case of the disappearing Azure AD application registration

Some time ago I wrote a simple web application which runs on Microsoft Azure and uses Azure Active Directory for authentication. The application is used constantly and has proved reliable; however yesterday it stopped working. A quick debug session showed that the problem was an Azure AD permissions error.

In order to use Azure AD, applications have to be registered in the Azure management portal. I use the old portal for this; I am not sure that the functionality exists in the new portal yet. There is a nice how-to here.

One of the elements in the registration is a key which has a maximum lifetime of 2 years:

[Screenshot: the key duration setting in the application registration]

My application was deployed about two years ago so I went to the portal to see if it had expired.

What I found surprised me. The application was not listed at all. It had disappeared.

Instead of simply obtaining a new key and updating my application config, I had to create a new application registration and update several keys in the config, which was an annoyance.

There is a wider point here, in the whole category of dealing with “things that expire”. Some time ago, Microsoft suffered an extended Azure outage because of an expired certificate. It is a shame that Microsoft insists on a maximum two-year lifetime for this key but does not provide a check box for “alert me when this key is about to expire”. How difficult would that be?
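
In the meantime you can at least script your own check. A minimal sketch using the newer AzureAD PowerShell module, which postdates the old portal; "MyWebApp" is a placeholder:

# List the expiry dates of an app registration's keys (password credentials)
Connect-AzureAD
$app = Get-AzureADApplication -SearchString "MyWebApp"
Get-AzureADApplicationPasswordCredential -ObjectId $app.ObjectId | Select-Object KeyId, StartDate, EndDate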

Problems like this also mean that things which “just work” may not continue to do so. Of course a well organised enterprise setup can deal with this type of problem, but imagine, for example, the case of a small business with an application running on Azure where the developers have gone out of business or are otherwise no longer available. In fact the only thing I needed to change was in web.config, but I can imagine it could take some time to figure out what to do and what to change.

Reserved IPs and other Microsoft Azure annoyances

I have been doing a little work with Microsoft’s Azure platform recently. A common requirement is that you want a VM which is internet-accessible with a custom domain, for which the best solution is to create an A record in your DNS pointing to the IP number of the VM. In order to do this reliably, you need to reserve an IP number for the VM; otherwise Azure may assign a different IP number if you shut it down and later restart it. If you keep it running you can keep the IP number, but this also means you have to pay for the VM continuously.

Azure now offers reserved IP numbers. Useful; but note that you can only link a VM with a reserved IP number when it is created, and to do this you have to create the VM with PowerShell.
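
The sequence looks something like this, using the classic service management cmdlets; all the names, the instance size and the image are placeholders:

# Reserve the IP first, then create the VM bound to it
New-AzureReservedIP -ReservedIPName "webip" -Location "North Europe"

$imageName = "<image name from Get-AzureVMImage>"
New-AzureVMConfig -Name "myvm" -InstanceSize Small -ImageName $imageName |
    Add-AzureProvisioningConfig -Windows -AdminUsername "myadmin" -Password "placeholder-password" |
    New-AzureVM -ServiceName "mysvc" -Location "North Europe" -ReservedIPName "webip"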

What if you want to assign a reserved IP number to an existing VM? One suggestion is that you can capture an image from the VM, and then create a new VM from the image, complete with reserved IP. I went partially down this route but came unstuck because Azure for some reason captured the image into a different region (West Europe) than the region where the VM used to be (North Europe). When I ran the magic PowerShell script, it complained that the image was in the wrong region. I then found a post explaining how to move images between regions, which I did, but the metadata of the moved image was not quite the same and creating a new VM from the image did not work. At this point I realised that it would be easier to recreate the VM from scratch.

Note that when reserved IP numbers were announced in May 2014, program manager Mahesh Thiagarajan said:

The platform doesn’t support reserving the IP address of the existing Cloud Services or Virtual machines. We expect to announce support for this in the near future.

You can debate what is meant by “near future” and whether Microsoft has already failed this expectation.

There is another wrinkle here that I am not clear about. Some Azure VMs have special pricing, such as those with SQL Server pre-installed. The special pricing is substantial, often forming the largest part of the price, since it includes licensing fees. What happens to the special pricing if you fiddle with cloning VMs, creating new VMs with existing VHDs, moving VMs between regions, or the like? If the special pricing is somehow lost, how do you restore it so SQL Server (for example) is still properly licensed? I imagine this would mean a call to support. I have not seen any documentation relating to this in posts like this about moving a virtual machine into a virtual network.

And there’s another thing. If you want your VM to be in a virtual network, you have to do that when you create it as well; it is a similar problem.

While I am in complaining mode, here is another. Creating a VM with PowerShell is easy enough, but you do need to know the image name you are using. This is not shown in the friendly portal GUI:

[Screenshot: the portal’s VM image gallery, which shows friendly names only]

In order to get the image names, I ran a PowerShell script that exports the available images to a file. I was surprised how many there are: the resulting output has around 13,500 lines and finding what you want is tedious.
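
Filtering in PowerShell makes this more manageable; something like this, where the wildcard pattern is a guess at the image you are after:

# Export matching image names to a file
Get-AzureVMImage |
    Where-Object { $_.ImageName -like "*SQL-Server-2014*" } |
    Select-Object ImageName |
    Out-File -FilePath images.txt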

Azure is mostly very good in my experience, but I would like to see these annoyances fixed. I would be interested to hear of other things that make the cloud admin or developer’s life harder than it should be.

SSD storage has come to Azure VMs, along with faster Azure SQL

Microsoft has introduced SSD storage for Azure VMs. This is a catch-up with Amazon, which has been offering SSD storage since at least June 2014. It is an important feature though, and is now in preview. The SSDs are part of the Azure storage service but can only be used for disks attached to VMs, not for general-purpose blob storage. There are three virtual disks available:

             P10        P20        P30
Disk size    128GB      512GB      1TB
IOPS         500        2300       5000
Throughput   100 MB/s   150 MB/s   200 MB/s

Price is $6.90 per 100GB per month which, if I am reading this right, is less than Amazon’s $0.10 per GB per month ($10 per 100GB), as shown here.
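
For what it is worth, provisioning looks much like standard storage: you create a Premium storage account and attach disks to a (DS-series) VM in the usual way. A sketch with the classic cmdlets, with account and VM names as placeholders:

# Premium (SSD) storage account, then a new 128GB data disk (P10-sized)
New-AzureStorageAccount -StorageAccountName "mypremiumstore" -Location "West Europe" -Type "Premium_LRS"

Get-AzureVM -ServiceName "mysvc" -Name "myvm" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 128 -DiskLabel "data" -LUN 0 |
    Update-AzureVM
# use -MediaLocation on Add-AzureDataDisk to place the VHD in the premium account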

One obvious use case is for SQL Server running on a VM. This generally performs better than Microsoft’s Azure SQL database service. That said, Microsoft is also previewing an improved Azure SQL which supports most of the features of SQL Server 2014, including .NET stored procedures and in-memory columnstore queries. Microsoft’s Scott Guthrie says performance is better:

Our internal benchmark tests (using over 600 million rows of data) show query performance improvements of around 5x with today’s preview relative to our existing Premium Tier SQL Database offering and up to 100x performance improvements when using the new In-memory columnstore technology.

If you can make it work, Azure SQL makes better sense than running SQL Server in a VM, with all the hassles of server patching and of course Microsoft’s licensing fees; but the performance has to be there. Another factor which drives users to the VM option is that SQL Server Reporting Services is not available in Azure SQL.

Microsoft’s Azure outage: a troubling account of what went wrong

Microsoft’s Jason Zander has published an account of what went wrong yesterday, causing failure of many Azure services for a number of hours. The incident is described as running from 0.51 AM to 11.45 AM on November 19th though the actual length of the outage varied; an Azure application which I developed was offline for 3.5 hours.

Customers are not happy. From the comments:

So much for traffic manager for our VM’s running SQL server in a high availability SQL cluster $6k per month if every data center goes down. We were off for 3 hrs during the worst time of day for us; invoicing and loading for 10,000 deliveries. CEO is wanting to pull us out of the cloud.

So what went wrong? It was a bug in an update to the Storage Service, on which other services such as VMs and web sites depend. The update was already in production, but only for Azure Tables; this seems to have given the team the confidence to deploy it generally, but a bug in the Blob service caused it to loop and stop responding.

Here is the most troubling line in Zander’s report:

Unfortunately the issue was wide spread, since the update was made across most regions in a short period of time due to operational error, instead of following the standard protocol of applying production changes in incremental batches.

In other words, this was not just a programming error, it was an operational error that meant the usual safeguards whereby a service in one datacenter takes over when another fails did not work.

Then there is the issue of communication. This is critical since while customers understand that sometimes things go wrong, they feel happier if they know what is going on. It is partly human nature, and partly a matter of knowing what mitigating action you need to take.

In this case Azure’s Service Health Dashboard failed:

There was an Azure infrastructure issue that impacted our ability to provide timely updates via the Service Health Dashboard. As a mitigation, we leveraged Twitter and other social media forums.

This is an issue I see often; online status dashboards are great for telling you all is well, but when something goes wrong they are the first thing to fall over, or else fail to report the problem. In consequence users all pick up the phone simultaneously and cannot get through. Twitter is no substitute; frankly if my business were paying thousands every month to Microsoft for Azure services I would find it laughable to be referred to Twitter in the event of a major service interruption.

Zander also says that customers were unable to create support cases. Hmm, it does seem to me that Microsoft should isolate its support services from its production services in some way so that both do not fail at once.

Still, of the above it is the operational error that is of most concern.

What are the wider implications? There are two takes on this. One is to say that since Azure is not reliable try another public cloud, probably Amazon Web Services. My sense is that the number and severity of AWS outages has reduced over the years. Inherently though, it is always possible that human error or a hardware failure can have a cascading effect; there is no guarantee that AWS will not have its own equally severe outage in future.

The other take is to give up on the cloud, or at least build in a plan B in the event of failure. Hybrid cloud has its merits in this respect.

My view in general though is that cloud reliability will get better and that its benefits exceed the risk – though when I heard last week, at Amazon Re:Invent, of large companies moving their entire datacenter infrastructure to AWS I did think to myself, “that’s brave”.

Finally, for the most critical services it does make sense to spread them across multiple public clouds (if you cannot fall back to on-premises). It should not be necessary, but it is.

An Azure Web Site is a VM which supports multiple applications

This will be unnecessary for Azure experts, but I have seen some misunderstanding on this point, hence this post.

A “web site” is a unit of service on the Azure cloud platform which represents a web application hosted on IIS, Microsoft’s web server (but see below). You write a standard ASP.NET application and deploy it. Azure takes care of configuring the host VM, the server operating system, and IIS.

Using a web site is preferable to creating your own VM and installing IIS on it, for several reasons. One is that you do not have to worry about patching and maintaining the operating system. Another is that web sites can be scaled, manually or automatically, with an option for scheduling so that you can scale down the site for periods of low demand.

The main reason for using a VM rather than a web site is if the app has dependencies that fall outside what a web site can handle.

Another thing to know about Azure web sites is that they have four “plan modes,” but only two are worth considering for production. The Free and Shared modes host your application on a shared VM, and quotas are applied. If Azure decides your site is out of quota, it will stop responding. Fine for a prototype, but not something you want customers or users to see. This feature is not shown clearly on the table of features but it is in note 2:

Shared Instance: Free and Shared (Preview) tiers include 60 minutes and 240 minutes of CPU capacity per day, respectively. The Shared (Preview) Website rates are applied per website instance.

The Basic tier on the other hand is decent. It is a dedicated VM, and you can scale it (manually) to 3 instances. It costs around 25% less than a Standard tier site.

Why go Standard? You get 50GB storage thrown in (a Basic tier site has 10GB), auto-backup, auto-scale up to 10 instances, and a fixed IP address for SSL. If you have to buy a fixed IP address for a single-instance Basic tier site, the price goes above a Standard tier site, except for a Large instance.

Currently a Basic tier web site costs from £35.64 to £141.92 per month, and a Standard tier from £47.10 to £189.65, depending on the size of the VM.

It is a significant cost, but what may not be obvious is that you can deploy multiple applications to a single web site, which makes my statement above, “A ‘web site’ is a unit of service on the Azure cloud platform which represents a web application hosted on IIS”, not quite correct.

When you create a new web site, if you have one already, you can choose a “web hosting plan”. Here is an example:

[Screenshot: choosing a web hosting plan for a new web site]

In this case, there are two pre-existing web site VMs, one in East Asia and one in Europe. If you choose one of these two, the new web site will be added to that VM. If you choose “Create new web hosting plan”, you will create a new dedicated instance (or free, or Shared). Adding to an existing VM means no extra cost.

If you are a developer, it may well be better to run a single Basic VM for prototyping, and add multiple sites, rather than risking a free or shared instance which might be out of quota when you demonstrate it to your customer.

What is the limit to the number of web sites you can add? There is none, other than overloading the VM and getting unresponsive applications.

Postscript: the Web Site service is interesting as an example which blurs the boundaries between IaaS (Infrastructure as a service) and PaaS (Platform as a service). It is more PaaS than IaaS, in that you do not have to worry about maintaining the OS, but more IaaS than PaaS, in that you are still having to think about individual VMs. It would be more purist if Microsoft abstracted away the VMs and simply guaranteed a certain level of service, or scaled up automatically and billed for what you use. On the other hand, the Web Site concept puts a lot of control in the hands of the developer/admin and helps them to make the best use of the resources, while still removing most of the maintenance burden. I think it is a good compromise.

Microsoft Azure: new preview portal is “designed like an operating system” but is it better?

How important is the Azure portal, the web-based user interface for managing Microsoft’s cloud computing platform? You can argue that it is not all that important. Developers and users care more about the performance and reliability of the services themselves. You can also control Azure services through PowerShell scripts.

My view is the opposite though. The portal is the entry point for Azure and a good experience makes developers more likely to continue. It is also a dashboard, with an overview of everything you have running (or not running) on Azure, the health of your services, and how much they are costing you. I also think of the portal as an index of resources. Can you do this on Azure? Browsing through the portal gives you a quick answer.

The original Azure portal was pretty bad. I wish I had more screenshots; this 2009 post comparing getting started on Google App Engine with Azure may bring back some memories. In 2011 there were some big management changes at Microsoft, and Scott Guthrie moved over to Azure along with various other executives. Usability and capability improved fast, and one of the notable changes was the appearance of a new portal. Written in HTML 5, it was excellent, showing all the service categories in a left-hand column. Select a category, and all your services in that category are listed. Select a service and you get a detailed dashboard. This portal has evolved somewhat since it was introduced, notably through the addition of many more services, but the design is essentially the same.

[Screenshot: the old Azure portal]

The New button lets you create a new service:

[Screenshot: the New menu in the old portal]

The portal also shows credit status right there – no need to hunt through links to account management pages:

[Screenshot: credit status in the old portal]

It is an excellent portal, in other words, logically laid out, easy to use, and effective.

That is the old portal though. Microsoft has introduced a new portal, first demonstrated at the Build conference in April. The new portal is at http://portal.azure.com, versus http://manage.windowsazure.com for the old one.

The new portal is different in look and feel:

[Screenshot: the new preview portal]

Why a new portal and how does it work? Microsoft’s Justin Beckwith, a program manager, has a detailed explanatory post. He says that the old portal worked well at first but became difficult to manage:

As we started ramping up the number of services in Azure, it became infeasible for one team to write all of the UI. The teams which owned the service were now responsible (mostly) for writing their own UI, inside of the portal source repository. This had the benefit of allowing individual teams to control their own destiny. However – it now mean that we had hundreds of developers all writing code in the same repository. A change made to the SQL Server management experience could break the Azure Web Sites experience. A change to a CSS file by a developer working on virtual machines could break the experience in storage. Coordinating the 3 week ship schedule became really hard. The team was tracking dependencies across multiple organizations, the underlying REST APIs that powered the experiences, and the release cadence of ~40 teams across the company that were delivering cloud services.

The new portal is the outcome of some deep thinking about the future. It is architected, according to Beckwith, more like an operating system than like a web application.

The new portal is designed like an operating system. It provides a set of UI widgets, a navigation framework, data management APIs, and other various services one would expect to find with any UI framework. The portal team is responsible for building the operating system (or the shell, as we like to call it), and for the overall health of the portal.

Each service has its own extension, or “application”, which runs in an iframe (inline frame) and is isolated from other extensions. Unusually, the iframes are not used to render content, but only to run scripts. These scripts communicate with the main frame using the window.postMessage API call – familiar territory for Windows developers, since messages also drive the Windows desktop operating system.

Microsoft is also using TypeScript, a high-level language that compiles to JavaScript, and open source resources including Less and Knockout.

Beckwith’s post is good reading, but the crunch question is this: how does the new portal compare to the old one?

I get the sense that Microsoft has put a lot of effort into the new portal (which is still in preview) and that it is responsive to feedback. I expect that the new portal will in time be excellent. Currently though I have mixed feelings about it, and often prefer to use the old portal. The new portal is busier, slower and more confusing. Here is the equivalent of the previous New screen shown above:

[Screenshot: the New blade in the preview portal]

The icons are prettier, but there is something suspiciously like an ad at top right; I would rather see more services, with bigger text and smaller icons; the text conveys more information.

Let’s look at scaling a website. In the old portal, you select a website, then click Scale in the top menu to get to a nice scaling screen where you can set up autoscaling, define the number of instances and so on.

How do you find this in the new portal? You get this screen when you select a website (I have blanked out the name of the site).

[Screenshot: a website blade in the preview portal]

This screen scrolls vertically, and if you scroll down you can find a small Scale panel. Click it and you get to the scaling panel, which has a nicely done UI, though the way panels constantly appear and disappear takes some getting used to.

There are also additional scaling options in the preview portal (the old one only offers scaling based on CPU usage):

[Screenshot: scale settings in the preview portal]

The preview portal also integrates with Visual Studio Online for cloud-based devops.

The challenge for Microsoft is that the old portal set a high bar for clarity and usability. The preview portal does more than the old, and is more fit for purpose as the number and capability of Azure services increases, but its designers need to resist the temptation to let prettiness obstruct performance and efficiency.

Developers can give feedback on the portal here.

Microsoft integrates Azure websites with hybrid cloud

Microsoft has announced the integration of Azure websites with Azure virtual networks, including access to on-premises resources if you have a site-to-site VPN.

The Virtual Network feature grants your website access to resources running your VNET that includes being able to access web services or databases running on your Azure Virtual Machines. If your VNET is connected to your on premise network with Site to Site VPN, then your Azure Website will now be able to access on premise systems through the Azure Websites Virtual Network feature.

Azure websites let you deploy web applications running on IIS (Microsoft’s web server) hosted in Microsoft’s cloud. The application framework can be ASP.NET, Java, PHP, Node.js or Python. There are Free, Shared and Basic tiers which are mainly for prototyping, and a Standard tier which has auto-scaling features, managed through Microsoft’s web portal:

[Screenshot: the Azure websites scale page in the portal]

The development tool is Visual Studio, which now has strong integration with Azure.

Integration with virtual networks is a significant feature. You could now host what is in effect an intranet application on Azure if it is convenient. If it is only used in working hours, say, or mainly used in the first couple of hours in the morning, you could scale it accordingly.

Have a look at that web configuration page above, and compare it with the intricacies of System Center. It is a huge difference and shows that some parts of Microsoft have learned that usability matters, even for systems aimed at IT professionals.