Category Archives: azure

A mild case of Azure bill shock: is this the most over-priced service on Microsoft’s cloud?

I have been experimenting with accessing Azure storage from remote PCs and tried out the option to use SFTP, which was introduced last year. It works, though there are limitations: no support for SSH commands after connecting, no resume support for uploads, and no support for Azure AD authentication – this last is a bit of an issue, since fine-grained permissions can only be set with local users specific to the blob storage.
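Connecting is then just a matter of pointing any standard SFTP client at the blob endpoint, with a username of the form account.localuser. A hypothetical example, assuming a local user called myuser on a storage account called mystorage:

sftp mystorage.myuser@mystorage.blob.core.windows.net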

I actually thought I had turned this off after my experiment, but I had not. So I had SFTP enabled on a test storage account, doing nothing. I spotted it, of course, when I got a large (for my usage) bill. Simply having SFTP enabled on a storage account costs around $220 per month.

To be fair to Microsoft, the cost is documented and there is a notice in the portal, in the details for the storage account, that enabling SFTP incurs a charge, though it does not say how much.

The cost for enabling SFTP

The price is remarkable though, especially given that the SFTP support seems to be a bit of a hack. Perhaps Microsoft actually runs up a dedicated VM for this in the background, who knows?

“The cost is astronomical considering the service, it’s like $7.20 a day to use and roughly $220 a Month. It’s WAY cheaper to use a VM. This service is like 3x too much,” said a comment from another sufferer.

My advice is not to do this. My further advice is to track closely the actual spend on any new services you run up, since it is the only reliable way to avoid this kind of problem.
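If you want to check whether you are quietly paying for this, here is a minimal sketch with Az PowerShell, assuming a recent Az.Storage module that exposes the EnableSftp setting:

# List storage accounts that have SFTP enabled
Get-AzStorageAccount | Where-Object { $_.EnableSftp } |
    Select-Object StorageAccountName, ResourceGroupName

# Turn it off for a specific account
Set-AzStorageAccount -ResourceGroupName "my-rg" -Name "mystorage" -EnableSftp $false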

Microsoft to remove Azure “Basic” IP number and load balancer in favour of pricier options

Microsoft is removing some Azure features which were designated “Basic”, in favour of alternatives which have more features but are also more expensive.

A load balancer is a network component which balances traffic to virtual machines. The Basic load balancer is free but has a few limitations, such as no compatibility with availability zones, a maximum of 300 instances, no SLA (Service Level Agreement), and no support for NAT Gateway. Microsoft has emailed customers saying:

On 30 September 2025, Azure Basic Load Balancer will be retired. You can continue to use your existing Basic Load Balancers until then, but you’ll no longer be able to deploy new ones after 31 March 2025.

The Standard load balancer routes to availability zones, supports up to 5000 instances, is secure by default, and has a 99.9% SLA, but it costs $0.025 per hour, or around $18 per month, for up to 5 rules.

A Basic public IP number costs $0.0036 per hour, or about $2.60 per month. It is a perfectly good IP number but does not support zone resiliency. A Standard public IP number costs $0.005 per hour, or about $3.60 per month, and does support zone resiliency. A similar email has been sent to users of Basic public IP numbers, with the same dates.

Although these extra charges will not make much of a ripple in enterprise accounts, they can be noticeable, for example if you are an individual developing an application and trying to keep within a strict budget.
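If you are not sure whether you are affected, a quick sketch with Az PowerShell (assuming the Az.Network module) lists any Basic-SKU resources in the current subscription:

# Find Basic load balancers and Basic public IP addresses
Get-AzLoadBalancer | Where-Object { $_.Sku.Name -eq "Basic" } |
    Select-Object Name, ResourceGroupName
Get-AzPublicIpAddress | Where-Object { $_.Sku.Name -eq "Basic" } |
    Select-Object Name, ResourceGroupName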

Developing software for playing bridge

I am a duplicate bridge player in my spare time and enjoyed playing in my local club once or twice a week. That was before COVID-19 and then, in March this year, lockdown. Bridge clubs were no longer able to meet. There are more important things in the world; but bridge is both a lot of fun and a welcome distraction from weightier matters, and my thoughts soon turned to what we could do to continue playing in these new circumstances.

The answer was to play online; but while there are plenty of ways to play bridge online, the existing systems were not designed as a way for bridge clubs to meet in a new context. If anything, the reverse is true: online bridge sites were designed for people who could not easily get to a club or wanted to play at any time with whoever else happened to be available. Clubs like my own, by contrast, wanted to replicate their face-to-face meetings with an online equivalent. A further complication back in March was that the biggest online bridge site, called Bridgebase, was immediately overloaded and declared that it was unwilling to allow new people to qualify as directors, the people allowed to run online bridge sessions.

My immediate instinct was to build a new site for playing bridge. I was not quite starting from scratch. Back in the early days of Windows 8, I started work on a bridge game for Microsoft’s new and, as it turned out, ill-fated platform. I had got some way with it; I had created a bridge engine that understood about cards and hands and tricks and shuffling and scoring and all the various elements that go into playing bridge. It was written in C# and what is now UWP XAML. It was designed, of course, for a solo player. Here is the bidding screen:

image

and the play screen:

image

This is how it looks on Windows 10; it looked a bit better on Windows 8, though it would not win any prizes for design. My software could play bridge though; the reason I never finished it was that I never cracked getting the AI working. But for human-to-human play that did not matter. A weekend or two of coding, I thought, and I could have a website up and running so our club could play bridge online. I made an immediate start, registering the domain name YourBridgeClubOnline.co.uk.

Well, three months later and here we are.

image

image

It is, I have to say, still under development. But it works and we have been able to play bridge again, as a club.

What took you so long? Ha! Much of my old bridge engine code remains untouched and has proved useful; it all runs fine on .NET Core. Even the (useless) AI has been handy, as I can test the mechanics of play without involving others. But I had, of course, wildly underestimated the problem of converting a game for solo play on Windows into a multi-player web application. There is much to think about:

The UI. I am not a designer (I am sure you can tell) but spent ages puzzling over how to get a workable user interface in the browser for everything from tablets to desktops. Not smartphones yet, but that is coming. I decided early on to take a view on compatibility: no Internet Explorer, and the JavaScript fetch API is required. When time is against you, it is easier to say “just use another browser” than to waste too much time supporting old browsers.

Messaging – both the API kind, and the chat kind. I am using C#, ASP.NET Core and SignalR. In general it works well. SignalR uses WebSockets as first preference, but falls back to Server-Sent Events or long polling where necessary. In my first experiments I did my own polling, and switching to SignalR was a great relief.
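To give an idea of why SignalR was such a relief: relaying plays to everyone else at a table needs only a few lines in a hub. This is a simplified sketch rather than my actual code, and the hub, method and event names are made up:

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class TableHub : Hub
{
    // Each bridge table maps onto a SignalR group
    public Task JoinTable(string tableId) =>
        Groups.AddToGroupAsync(Context.ConnectionId, tableId);

    // Relay a played card to the other players (and any spectators) at the table
    public Task PlayCard(string tableId, string card) =>
        Clients.OthersInGroup(tableId).SendAsync("CardPlayed", card);
}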

Registration and login. I am using the stuff that comes in the box, ASP.NET Core Identity. It has saved me a ton of work. It’s a bit annoying and not too well documented. I don’t really like using GUIDs for the primary key, for example, and I believe there is a way to avoid it, but it isn’t top priority when you are going for Minimum Viable Product.
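For the record, one way to avoid the GUID keys (I have not tried this myself) is to derive your own user and role classes from the generic Identity base types; a rough sketch:

using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;

// Integer primary keys instead of the default GUID strings
public class AppUser : IdentityUser<int> { }
public class AppRole : IdentityRole<int> { }

public class AppDbContext : IdentityDbContext<AppUser, AppRole, int>
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
}

// Then in ConfigureServices:
// services.AddIdentity<AppUser, AppRole>()
//     .AddEntityFrameworkStores<AppDbContext>();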

JavaScript. I’ve written tons of it and I don’t even like the language. I have a new respect for it though. The thing is, it is very fast and there is nothing you cannot do. The worst thing is the friction of doing some debugging in the browser, and some in Visual Studio. I am thinking of switching to VS Code for development since it works nicely with ASP.NET Core and is better for JavaScript than Visual Studio.

Scoring. My Windows software could score a hand of bridge. But duplicate is different; you have to compare the scores with others who played the same hands and work out the percentages, then export the results to standard formats for display on club websites and submission to the English Bridge Union. It was more work than I had expected and I am not done yet; the system only understands Pairs at the moment, not Teams (a different way of scoring).
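For what it is worth, the core of Pairs (matchpoint) scoring is easy to state, even if the surrounding plumbing is not: each score on a board earns two matchpoints for every other score it beats and one for every score it ties. A minimal sketch, ignoring adjusted scores and averages:

using System.Collections.Generic;

public static class Matchpoints
{
    // Percentage for one pair's score on a board, compared with the
    // scores of the other pairs who played the same board
    public static double Percent(int score, IReadOnlyList<int> otherScores)
    {
        int mp = 0;
        foreach (int s in otherScores)
        {
            if (score > s) mp += 2;       // beat another pair: 2 matchpoints
            else if (score == s) mp += 1; // tie: 1 matchpoint
        }
        int top = 2 * otherScores.Count;  // a clear top beats everyone
        return top == 0 ? 100.0 : 100.0 * mp / top;
    }
}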

Directing. Someone has to manage an online bridge session, settle any arguments, and fix errors like cards played by accident. It all needs coding and there was nothing like it in the Windows version.

Movements. Imagine you have 28 people playing bridge (or 14 pairs). They need to all play the same hands, but never play the same hand twice, and it has to be so arranged that each pair plays against other pairs in a defined sequence so it is balanced and fair. We call this the movement. Online, you have a bit more flexibility because you don’t need to share physical cards: everyone can play the same hand at the same time if you like. It is still quite fiddly though, and I did not do any of this in the old Windows version. I saved some time by writing an import function to enable re-use of movements made for EBUScore, a widely used scoring and bridge session management application. There is more to do though.
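To give a flavour of why movements are fiddly, here is a sketch of the simplest one, a straight Mitchell: North-South pairs stay put, East-West pairs move up one table each round, and the boards move down one. (My own code imports movements from EBUScore rather than generating them like this.)

public static class Mitchell
{
    // Who sits where, and which board group they play, in a given round.
    // Tables, pairs and rounds are zero-based. This works as-is for an odd
    // number of tables; an even number needs a skip or relay, which is
    // exactly the kind of wrinkle that makes movements fiddly.
    public static (int NsPair, int EwPair, int BoardGroup) Seat(
        int table, int round, int tables)
    {
        int ew = (table + round) % tables;                         // EW move up
        int boards = ((table - round) % tables + tables) % tables; // boards move down
        return (table, ew, boards);
    }
}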

Claims. This is where, halfway through the hand, a player says, “There’s no point in playing on, I’m obviously going to win all the remaining tricks.” (A trick is a sequence of four cards played one from each hand, which is won by one of the pairs.) This statement is called a claim, and has to be agreed by the other players. Getting this working was more difficult than I had expected, because built into my bridge engine was the idea that you could score by counting the tricks each side had won. But claimed tricks are never played. With hindsight, I should have allowed for this from the beginning.

Database. Every detail of play has to be stored on the server. I am using Dapper and SQL Server currently, though it is possible that PostgreSQL would work just as well. I started with Entity Framework Core, which is still there as it is used by ASP.NET Core Identity, but I am happier with Dapper.
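Dapper keeps the data access pleasingly direct. A flavour of how it reads, with a made-up table and class for illustration:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading.Tasks;
using Dapper;

public class Play
{
    public int BoardId { get; set; }
    public int Sequence { get; set; }
    public string Card { get; set; }
}

public static class PlayStore
{
    // Fetch every card played on a given board, in play order;
    // Dapper maps each row straight onto a Play object
    public static async Task<IEnumerable<Play>> GetPlaysAsync(
        string connectionString, int boardId)
    {
        using var conn = new SqlConnection(connectionString);
        return await conn.QueryAsync<Play>(
            "SELECT * FROM Plays WHERE BoardId = @BoardId ORDER BY Sequence",
            new { BoardId = boardId });
    }
}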

Things that worked well

Three months is longer than I had thought it would take to get to a playable system, but I suppose as a spare-time project it is not too bad. It would not be possible without the likes of ASP.NET Core and Dapper and SignalR doing so much for you. C# is a delight for coding. I am also using an Azure App Service for all this testing and development, and that has worked well. I am deploying to a Linux container of course; but the nice thing about App Service is that it will scale to a considerable extent without the hassle of Kubernetes. If the project succeeds and needs to scale up, there is an Azure SignalR service ready and waiting. I was nevertheless interested to see that AWS now offers .NET Core on Elastic Beanstalk, complete with some nice Visual Studio integration. Trying it there would be an interesting experiment, though I’m not sure AWS is so savvy about SignalR.

Open Source?

Could this have been done quicker by making it open source and seeking collaborators early on? Will it become open source? I need help for sure, though I also feel the code needs some cleaning up before it is fit to share more widely. You will recall though that I had started out thinking that it would be a small matter to convert my solo bridge game to an online multiplayer web application. I figured it would be better to get something working and then ask for help. But I am open to offers! Note: this is not a commercial project.

Rewarding

Most of the software projects I have been involved in have been business applications. Bridge is a lot more fun. I do see software development as a creative act. I recall starting work on the bridge game back in 2011 (I think); starting a new blank project in Visual Studio and thinking, hmm, I had better write a class to represent a pack of cards. From that beginning I ended up with an application that could play bridge, after a fashion, and now one that multiple people can play concurrently. It is rewarding and I will not regret the time spent on it, irrespective of how much actual use it gets.

Microsoft’s strong financials, and some notes on Azure vs AWS and the risks of losing in mobile

Microsoft delivered another strong set of figures in its latest financial results, for the period April-June 2018. Total revenue of $30.085 billion was up 17% year on year, and all three of the company’s segments (Office, Azure and consumer) showed strong growth.

What’s notable? Largely this is more of the same, but a few things stand out. LinkedIn revenue increased 37% year on year – an acquisition that seems to be making sense for the company. Dynamics 365 revenue grew by 65%. The Dynamics story is all about cloud synergy. As an on-premises product, Dynamics CRM (the part of the suite I know best) was relatively undistinguished, but as a cloud product the seamless integration between Office 365 and Dynamics 365 (and Azure Active Directory) makes it compelling.

Windows 10 is doing OK, possibly as more businesses heave themselves off Windows 7 and buy new PCs with OEM licenses as they do.

Even areas in which Microsoft is far from dominant did well. Gaming was up 39%, Surface up 25%, and search advertising up 17%.

The biggest growth in the quarter, according to the breakdown here, was in Azure, up 89%. This growth is not without pain; The Register reports capacity issues in the UK South region, for example, with users getting the message “Unfortunately, due to high demand for virtual machines in this region, we are not able to approve your quota request at this time.” You can still create VMs, but not necessarily in the region you want.

Will Microsoft outpace AWS? My take on this has not changed. AWS does very little wrong and remains the pre-eminent cloud for IaaS and many services by some distance. What AWS does not have is Office 365, or armies of Microsoft partners helping enterprise customers to shunt more and more of their IT infrastructure into Azure. Microsoft makes more money from licensing: Windows Server, SQL Server, Office 365 and Dynamics seats, and so on. AWS does more business at a lower margin. These are big differences. I see it as unlikely that Azure will overtake AWS in the provision of essential cloud services like VMs, containers, cloud storage and so on. AWS also has a better reliability track record. However, the success of Azure means that enterprise customers no longer need to go to AWS to get the benefits of cloud. Perhaps the more interesting question is the extent to which AWS (or Google) can persuade enterprise customers to shift away from Microsoft’s high-margin applications.

Longer term, there is significant risk for the company in its retreat from mobile. We are now seeing Google work hard in the laptop market with Chromebooks alongside Android mobile. Coming sometime is Google Fuchsia which may be a single operating system for both. It is worth recalling that Microsoft built its success on winning users for its PC operating system; and that IBM lost its IT dominance by ceding this to Microsoft.

Here is the breakdown by segment, such as it is:  

Quarter ending June 30th 2018 vs quarter ending June 30th 2017, $millions

Segment | Revenue | Change | Operating income | Change
Productivity and Business Processes | 9,668 | +1,140 | 3,466 | +575
Intelligent Cloud | 9,606 | +1,784 | 3,901 | +990
More Personal Computing | 10,811 | +1,576 | 3,012 | +826

The segments break down as:

Productivity and Business Processes: Office, Office 365, Dynamics 365 and on-premises Dynamics, LinkedIn

Intelligent Cloud: Server products, Azure cloud services

More Personal Computing: Consumer including Windows, Xbox; Bing search; Surface hardware

Pusher: a nice solution for sending messages and notifications to web and mobile apps

Pusher is a London company which runs cloud services for publish/subscribe in web and mobile applications. The idea is to deliver real-time updates, a concept that has many use cases. Examples include price updates in finance apps, status updates to track a delivery, news updates, or anything where users want to monitor progress or keep in touch with fast-moving developments.

The service passed my “get up and running quickly” test. I created a free account (limited to 100 connections and 200k messages per day) and a new channel:

image 

I’m guessing it runs on AWS, looking at the datacentre locations:

image

I chose a JavaScript client and ASP.NET MVC for the back end. On my PC I pasted the JavaScript into a web page running locally on Apache (in Windows Subsystem for Linux). I also created a new ASP.NET MVC project and added the sample code with some trivial modifications. I was able to send a message to the web page; it triggers an annoying alert but of course you could easily amend this to update the UI in more user-friendly ways.

image
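The server side is equally slight. With the PusherServer NuGet package, triggering an event looks something like this (the channel and event names below are the sample ones; the credentials are placeholders):

using System.Threading.Tasks;
using PusherServer;

public static class Notifier
{
    // Push a message to every subscriber of my-channel
    public static async Task SendAsync()
    {
        var options = new PusherOptions { Cluster = "eu", Encrypted = true };
        var pusher = new Pusher("app-id", "app-key", "app-secret", options);
        await pusher.TriggerAsync("my-channel", "my-event",
            new { message = "hello world" });
    }
}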

Of course you could roll your own solution for this but what you get with Pusher is all the plumbing pre-done for many different clients and automatic scalability.

Pusher also has a service called Beams (formerly Push Notifications) which lets you send notifications to Android and iOS apps.

Pusher or roll your own? As with many cloud services, you are putting a high level of trust in Pusher (security and reliability) if you use the service, and you will need a paid subscription:

image

You are saving considerable development time though, and as Google and Apple update their SDKs or change the rules, Pusher will presumably adapt accordingly.

Can Azure easily do this, I wondered? I headed over to Azure Notification Hubs. First, I noticed that the amount of admin you have to do to support each device is greater. Second, Microsoft promised to support “push to web” in March 2016:

image

… but has not done so nor even bothered to update those asking:

image

It is odd that Microsoft, with all its drive behind Azure, is still in the habit of leaving customers in the dark in certain areas.

Cosmos DB or SQL Server? Do you need Kubernetes? VM or App Service? A guide to Azure worth checking out

One of the best features of Microsoft Build, possibly the best, is the exhibition. Microsoft sets up stands for each of its product teams, and the staff there generally include the people who actually build that product, making this a great way to interact with them and get authoritative answers to questions.

I interviewed several executives at Build and asked a couple of times: how can your customers work out which Azure service is the best fit for what they need? It is not a trivial question, now that there are so many different services with overlapping functionality.

It is critically important. You can waste a large amount of money and cause unnecessary frustration by selecting the wrong services.

None of these executives mentioned that Microsoft has a rather good guide for exactly this question. It is called the Azure Architecture Center and I discovered it on the show floor.

image

The stand was called Azure Clinic and I told the guy his costume reminded me of Dr GUI. He was too young to remember this MSDN character of old but another guy on the stand overheard and said it brought back bad memories!

You can find the Azure Architecture Center here. It does not make any assumptions about the depth of knowledge you have, which seems right to me since it is aimed at developers who are not sure exactly what they need. There is a ton of useful material, like this decision tree for the compute services (click to enlarge):

image 

Recommended.

From Windows Embedded to cloud: Microsoft announces the Connected Vehicle Platform

Microsoft has announced the Connected Vehicle Platform, at the CES event under way in Las Vegas.

image

The company is not new to in-car systems, but its track record is disappointing. It used to be all about Windows Embedded, using Windows CE to make a vehicle into a smart device.

Ford was Microsoft’s biggest partner. It built Ford SYNC on the platform and in 2012 announced five years of partnership and 5 million SYNC-enabled vehicles.

However, in 2014 Ford announced SYNC 3 with no mention of Microsoft – because SYNC 3 uses BlackBerry’s QNX.

What went wrong? There’s a 2014 analysis from Bill Howard that offers a few clues. The bit that chimes with me is that Microsoft was too slow in updating the system. The overall Windows story over the last 10 years is convoluted to say the least, with many changes to the platform and disruptive (in a bad way) strategy shifts. The same factor is a large part of why Windows Phone failed.

It is not clear at this stage whether or not Microsoft’s Connected Vehicle Platform partners (which include Renault-Nissan and BMW) will use Windows Embedded in their solutions; but what is notable is that Microsoft’s release makes no mention of it. The company has shifted to a cloud strategy, and is primarily offering Azure services rather than mandating how manufacturers choose to consume them. The detail of the announcement identifies five key areas:

  • Telematics and Predictive services
  • Marketing (“Customer insights and engagement”)
  • Productivity (Office 365, Skype)
  • Connected ADAS (Advanced Driver Assistance Systems), i.e. the car helping you to drive
  • Advanced Navigation

Cortana also gets a mention. We may think of Cortana as a virtual assistant, but what this means is a user interface to intelligent services.

There is big competition for all this of course, with Google, Amazon and Apple also in this space. There is also politics involved. If you read Howard’s analysis linked above, note that he mentions how the auto companies dislike restrictions such as Google insisting that you can’t have Google Search unless you also use Google Maps (I have no idea if this is still the case). There is a tension here. In-car systems are an important value-add for customers and critical to marketing vehicles, but the auto companies do not want their vehicles to become just another channel for big data-gathering companies like Google and Amazon.

Another point of interest is how smartphones interact with your car. If you want a simple and integrated experience, you can just dock your phone and use it for navigation, communication and entertainment – three key areas for in-car systems. On the other hand, a docked phone will not have the built-in screen and control of vehicle features that an embedded system can offer.

Hands on with Microsoft’s ADConnect

I’ve been trying Microsoft’s ADConnect tool, the replacement for the utility called DirSync, which synchronises on-premises Active Directory with Azure AD, the directory used by Office 365.

It is therefore a key piece in Microsoft’s hybrid cloud story.

In my case I have a small office set-up with Active Directory running on Server 2012 R2 VMs. I also have an Office 365 tenant that I use for testing Microsoft’s latest cloud stuff. I have long had a few basic questions about how the sync works so I created a small Server 2012 R2 VM on which to install it.

ADConnect can be installed on a Domain Controller, though this used to be unsupported for DirSync. However, it seems tidier to give ADConnect its own server, and less likely to cause problems.

There are a number of prerequisites, but for me the only one that mattered was that your domain must be set up on the Office 365 tenant before you configure ADConnect. You cannot configure it using the default *.onmicrosoft.com domain.

Adding a domain to Office 365 is straightforward, provided you have access to the DNS records for the domain, and provided that the domain is not already linked to another Office 365 tenant. This last point can be problematic. For example, BT uses Office 365 to provide business email services to its customers. If you want to migrate from BT to your own Office 365, detaching the domain from BT’s tenant, to which you do not have admin access, is a hassle.

When I tried to set up my domain, I found another problem. At some point I must have signed up for a trial of Power BI, and without my realising it, this created an Office 365 tenant. I could not progress until I worked out how to get admin access to this Power BI tenant and assign my user account a different primary email address. The best way to discover such problems is to attempt to add the domain and note any error messages. And to resist the wizard’s efforts to get you to set up your domain in a different tenant to the one that you want.

That done, I ran the setup for ADConnect. If you use the Express settings, it is straightforward. It requires SQL Server, but installs its own instance of SQL Server Express LocalDB by default.

image

You enter credentials for your Office 365 tenant and for your on-premises AD, then the wizard tells you what it will do.

image

I was interested in the link on the next screen, which describes how to get all your Windows 10 domain-joined computers automatically “registered” to Azure AD, enabling smoother integration.

image

If you follow the link, and read the comments, you may be put off; I was. It involves configuring Active Directory Federation Services as well as Group Policy and looks fiddly. I suspect this is worth doing though, and hope that configuration will be more automated in due course.

The next step was to look at the outcome. One thing that is important to understand is that synced users are distinct from other Office 365 users. Imagine then that you have existing users in Office 365 and you want to match them with existing on-premises users, rather than creating new ones. This should work if ADConnect can match the primary email address. It will convert the matching Azure AD user into a synced user. Otherwise, it will just create new users, even if there are existing Azure AD users with the same names. If it goes wrong, there are ways to recover. Note that the users are not actually linked via the email address, they are linked by an attribute called an ImmutableID.
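You can inspect this for yourself with the MSOnline PowerShell module (assuming you have it installed); synced users have the attribute populated, while cloud-only users do not:

Get-MsolUser | Select-Object UserPrincipalName, ImmutableId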

The Office 365 admin portal is fully aware of synced users and the user list shows the distinction. Users are designated as “In Cloud” or “Synced with Active Directory”.

image

Synced users cannot be deleted from the Office 365 portal. You delete them in on-premises AD and they disappear.

The next obvious issue is that if you dive in like me and just install ADConnect with Express Settings, you will get all your on-premises users and groups in Azure AD. In my case I have things like “ASP.NET Machine Account”, various IUSR* accounts, users created by various applications, and groups like “DHCP Administrators” and “Exchange Trusted Subsystem” that do not belong in Office 365.

These accounts do not do much harm; they do not consume licenses or mess up Office 365. On the other hand, they are annoying and confusing. You may also have business reasons to exclude some users from synchronization.

Fortunately, there are various ways to fine-tune, both before and after initial synchronization. You can read about it here. This document also states:

With filtering, you can control which objects should appear in Azure AD from your on-premises directory. The default configuration takes all objects in all domains in the configured forests. In general, this is the recommended configuration.

I find this puzzling, in that I cannot see the benefit in having irrelevant service accounts and groups synced to Office 365 – though it is not entirely obvious what is safe to exclude.

I went back to the ADConnect tool and reconfigured, using the Domain and OU filtering option. This time, I selected what seems to be a minimal configuration.

image

The excluded objects are meant to be deleted from Office 365, but so far they have not been. I am not sure if this will fix itself. (Update: it did, though I also re-ran a full initial sync to help it along.) If not, you can temporarily disable sync, manually delete them in the Office 365 portal, then re-enable sync.

What if you want to exclude a specific user? I used the steps described to create a DoNotSync filter based on setting extensionAttribute15. You use the ADConnect Synchronization Rules Editor to create the rule, then set the attribute using ADSIEdit or your favourite tool. This worked, and the user I marked disappeared from Office 365 on the next sync.

image
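Setting the attribute itself from PowerShell is a one-liner, assuming the ActiveDirectory module; the user name here, and the NoSync value, are just examples – use whatever value your rule checks for:

# Mark a user so that the DoNotSync rule excludes it from synchronization
Set-ADUser -Identity jsmith -Add @{extensionAttribute15 = "NoSync"}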

Incidentally, you can trigger an immediate sync using this PowerShell command:

Start-ADSyncSyncCycle -PolicyType Delta
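A full synchronization – which is what I re-ran above to help the deletions along – uses the Initial policy type instead:

Start-ADSyncSyncCycle -PolicyType Initial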

Complications

Setting up ADConnect does introduce complexity into Office 365. You can no longer do everything through the portal. It is not only deletion that does not work. When I tried to set up a mailbox in Office 365 I hit this message:

image

“This user’s on-premises mailbox hasn’t been migrated to Exchange Online. The Exchange Online mailbox will be available after migration is completed.”

I can see the logic behind this, but there might be cases where you want a new empty mailbox; I am sure there is a way around it, but now there is more to go wrong.

Update: there is a rather important lesson hiding here. If you are running Exchange on-premises and want to end up on Office 365 with ADConnect, you must take care about the order of events. Once ADConnect is running, you cannot do a cutover migration of Exchange, only a hybrid migration. If you don’t want hybrid (which adds complexity), then do the cutover migration first. Convert the on-premises mailboxes to mail-enabled users. Then run ADConnect, which will match the users based on the primary email address.

It is also obvious that ADConnect is designed for large organisations and for administrators who know their way around Active Directory. There is a simplified sync tool in Windows Server Essentials, though I have not used it. It would be good though to see something between Essentials and the complexity of ADConnect. For example, I had imagined that there might be a mapping tool that would let you see how ADConnect intends to match on-premises users with Office 365 users and let you amend and exclude users with a few clicks.

Microsoft has been working on this stuff for some time and is not done yet. In preview for example is Group Writeback, which lets you sync Office 365 groups back to on-premises AD.

image

Microsoft might also consider using different icons for the various ADConnect utilities, as they do look a bit silly if you pin them to the taskbar:

image

The tools are:

  • Azure ADConnect (Wizard)
  • Synchronization Rules Editor (advanced filtering)
  • Synchronization Service WebService Connector Config (SOAP stuff)
  • Synchronization Service Key Management (what it says)

On the plus side, I have not hit any mysterious Active Directory errors and it has all worked without having to set up certificates, reverse proxies, special DNS entries (other than the standard ones for Office 365), or anything too fiddly, though note that I avoided ADFS and automatic Windows 10 registration.

Final thoughts

If you need to implement this, it is worth doing what I did and trying it out on a test domain first. There seem to be quite a few pitfalls, and as ever, it is easier to get it right at the start rather than trying to fix things up afterwards.

The case of the disappearing Azure AD application registration

Some time ago I wrote a simple web application which runs on Microsoft Azure and uses Azure Active Directory for authentication. The application is used constantly and has proved reliable; however yesterday it stopped working. A quick debug session showed that the problem was an Azure AD permissions error.

In order to use Azure AD, applications have to be registered in the Azure management portal. I use the old portal for this; I am not sure that the functionality exists in the new portal yet. There is a nice how-to here.

image

One of the elements in the registration is a key which has a maximum lifetime of 2 years:

image

My application was deployed about two years ago so I went to the portal to see if it had expired.

What I found surprised me. The application was not listed at all. It had disappeared.

Instead of simply obtaining a new key and updating my application config, I had to create a new application registration and update several keys in the config, which was an annoyance.

There is a wider point here, in the whole category of dealing with “things that expire”. Some time ago, Microsoft suffered an extended Azure outage because of an expired certificate. It is a shame that Microsoft insists on a maximum two-year lifetime for this key but does not provide a check box for “alert me when this key is about to expire” – how difficult would that be?
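In the meantime, you can at least roll your own check. Something like this, with the AzureAD PowerShell module (an assumption on my part – other tools can do the same), lists the expiry date of every application key:

# List the password credentials and their expiry for each app registration
Get-AzureADApplication -All $true | ForEach-Object {
    $app = $_
    Get-AzureADApplicationPasswordCredential -ObjectId $app.ObjectId |
        Select-Object @{ n = "App"; e = { $app.DisplayName } }, EndDate
}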

Problems like this also mean that things which “just work” may not continue to do so. Of course a well-organised enterprise setup can deal with this type of problem, but imagine, for example, the case of a small business with an application running on Azure where the developers have perhaps gone out of business or are no longer available. In fact the only code I needed to change was in web.config, but I can imagine it could take some time to figure out what to do and what to change.