
Remote Desktop on Mac fails to connect with 0x207 error

I am setting up a new Mac and got this annoying error from the Microsoft Remote Desktop client.

Worse, a number of people have complained about this error, and much of the advice out there is useless, including the bad advice to disable NLA (Network Level Authentication) on the Windows PC. Don’t do that; it is bad for security.

One of the few helpful threads on the topic is this one, which points to this article on enabling integrated authentication on Mac and Linux using Kerberos. I followed the advice there and it worked. I’m not sure whether ALL CAPS is necessary for the domain, but I used it and the connection succeeded, as long as I entered user@ALLCAPS as the username in the RDP client as well.

.NET P/Invoke on Azure App Service for Linux

I have an online bridge game in development (yes, still!) and it is written in ASP.NET Core with C#. One of the things that interests bridge players is called double-dummy analysis; this is where you look at what would be the best play in a game if you knew where all the cards were, whereas when actually playing bridge you only see your own cards and, during play, another hand called Dummy, so half the cards are hidden.

Double-dummy analysis is a solved problem and bridge programmers benefit from an open source library called DDS (Double Dummy Solver), written primarily by Bo Haglund and Soren Hein. This is a C++ DLL that can also be compiled for Linux and macOS.

I wanted to integrate DDS into the bridge game in order to give players information at the end of a game including whether they were in the optimum contract and whether they beat the optimum score. I started by doing a new C# wrapper for DDS though borrowing from the work here. My version is 64-bit and wraps a few more functions. I compiled the native DLL for Windows and Linux using OpenMP for concurrency, which considerably improves performance (Boost is another option but I did not find much difference).
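As a flavour of the approach, here is a minimal P/Invoke sketch. DdsExampleFunction is a made-up name rather than a real DDS export, and the real wrapper also declares the DDS structs (deals, result tables) and marshals them by reference; this just shows the shape of the declarations.

using System.Runtime.InteropServices;

internal static class DdsNative
{
    // The bare library name "dds" resolves to dds.dll on Windows and libdds.so
    // on Linux, provided the file and all of its dependencies can be loaded.
    // DdsExampleFunction is illustrative only, not an actual DDS function.
    [DllImport("dds")]
    internal static extern int DdsExampleFunction(int mode);
}

The same managed declaration works on both platforms, which is why the only deployment question is whether the native library and its dependencies are present at runtime.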

Note: the usual caveats about P/Invoke apply here. During one of my tests I actually crashed the container running the app. The ASP.NET developers do a lot of work to make the platform reliable, and doing P/Invoke may introduce instability.

I added my wrapper into the ASP.NET application and it worked fine on my development machine. I deployed it to App Service and the P/Invoke calls did not work. Fixing this required a bit of a deep dive into Azure App Service for Linux.

I am deploying the native code .so library into the same directory as the compiled .NET code for the rest of the application. The error I got was:

Cannot open shared object file: No such file or directory

I raised the topic on Stack Overflow.

One of the things that puzzled me was that the unit tests, which include the P/Invoke code, ran OK in Azure Pipelines, which I use for deployment. But not when deployed.

The first point is that you get the “No such file” error not only when the file itself is not present (it was) but also when a dependency is missing. So step one is to SSH into the container running the ASP.NET app, which you can do with the Development Tools in the Azure portal. Note that with Azure App Service for Linux the app always runs in a container.

This gives you root permissions in the container though not to the host operating system. Navigate to the directory with the troublesome library and type:

ldd libdds.so

(or the name of your library). This will tell you if any dependencies are missing, or about other issues. I noticed two things. One was a missing dependency, libgomp.so.1, which is the OpenMP runtime library. Second, ldd reported that my library required at least GLIBC 2.29, whereas the available version was 2.28.

How could I fix the GLIBC version? This is determined by the version of Linux and you can use

ldd --version

to check the version you have. In my case it showed that I had Debian with GLIBC 2.28.

I did some more research. If you really want to know about Azure App Service for Linux, there are a few key documents.

The basics here: Operating system functionality – Azure App Service | Microsoft Learn

The FAQ here: App Service on Linux FAQ | Microsoft Learn

Here you will learn details like why you cannot use a file-based database like SQLite in Azure App Service for Linux:

“The file system of your application is a mounted network share. This enables scale out scenarios where your code needs to be executed across multiple hosts. Unfortunately this blocks the use of file-based database providers like SQLite since it’s not possible to acquire exclusive locks on the database file.”

But I digress. To go deeper still, check this post by Jim Cheshire:

Things You Should Know: Web Apps and Linux – Microsoft Tech Community

which has lots of critical information, like why a custom container on App Service must respond to ping.

So after reading through all this and greatly improving my understanding of how App Service for Linux works, I got to the heart of my problem. When you deploy a .NET Core application to App Service for Linux, it will by default use a container from the Microsoft Artifact Registry that matches the version of .NET you are using. If you check this page you will see that the current version for ASP.NET Core 6.0 is tagged mcr.microsoft.com/dotnet/aspnet:6.0

If you examine this container you will find that it runs Debian Buster, which uses GLIBC 2.28. This is a matter of slight concern, since Debian Buster is shown on the Debian releases wiki as having an approximate end of life of August 2022, though the LTS project extends that to June 2024.

Still, now I knew how to fix my problem. Either use a custom container image, or upgrade to .NET 7, or recompile libdds.so to run on Debian Buster.

I decided that the easiest short-term solution was to recompile. I downloaded Buster and recompiled the library.

What about libgomp.so.1? This was kind-of fixable by using SSH to run:

apt-get update

apt-get install libgomp1

This is not great though, since Azure could replace the container at any time, and it always will if you do something like scaling the plan up or down to change the specification of the VM. I tried copying the Buster version of libgomp.so.1 to the application directory. It works, but I also needed to add a linker option so that DDS can use a library in the same directory:

 -Wl,-rpath='${ORIGIN}'

as explained here.

I think a better solution is to move to deploying a custom container to App Service, which is an option.

Care is needed though as there is a bit of special sauce in the official container images if you want features like SSH in the portal to work properly. It also means revisiting my deployment scripts, so the above hack was an easier and quicker workaround for me.

Web page memory usage: how much is too much?

I have a web application that loads names into a picklist, and the question came up: how many names would be too many for the web page to handle? What about 5,000 names, for example?

Modern web browsers have a memory snapshot built in. Just press F12 and there it is. So I fired up my application with over 5,000 names loaded into an array and displayed in a scrolling div: 2.5 MB.

Then I visited Facebook. Logged in, the page reported over 78 MB.

Google’s clean-looking home page? Not logged in, 11.2 MB.

Twitter? Logged in, 83 MB.

This was enough for me to stop worrying about the memory impact of 5,000 names in my web application. With our casual acceptance of multi-MB web pages it is easy to forget that the complete works of Shakespeare in plain text is 5.5 MB and compresses to less than half of that.

Just because you can do something, does not mean you should. Smaller is better and faster, and software bloat of various kinds is responsible for many performance issues.

There are trade-offs though, both in time and in performance. Maybe it is better to load just a few names and retrieve more from the server when needed. It is easy to test such things. Nevertheless, I found it useful to get some perspective on the memory usage of modern web sites.

Using an M1 Mac after a lifetime of mainly Windows

So I got an M1 MacBook Pro back in April and it is time for a quick brain dump on my experience. I am not travelling as much as I did pre-lockdown, so although I got the Mac as a replacement for an ancient Windows laptop, it gets used at home too. My usual desktop PC is a few years old but a decent-spec gaming PC with a Core i7-7700 3.6 GHz, 16GB RAM and an Nvidia RTX 2060 GPU. I have been happy with it; but I do find myself thinking “why not just use the MacBook” when I need to fire up a computer, a subconscious preference that bears examination. Most of my work is writing, web browsing and coding.

I do not particularly prefer the macOS UI to that of Windows. It is more consistent, because Apple managed iOS versus macOS sensibly, whereas Microsoft made a hash of Windows desktop versus Windows CE versus Windows Phone versus Windows 8, and has now settled on a thing called WinUI; but scratch the surface of Windows and you still find UI that has not changed for decades.

I digress though. I do not mind the Windows UI, I am used to it. What I do mind though is annoyances like the always-broken Windows search, and the way certain actions cause lengthy pauses that make me wonder what my PC is doing. In my case, sorting a large directory in Windows Explorer takes an age. Another little issue is that creating a new folder works fine, but renaming it causes a long pause. There also seem to be some focus issues. I create a new folder, I rename it and press Enter. Eventually it renames, but half the time the focus mysteriously switches to a different folder.

I realise that these problems do not occur with a new install of Windows and that I could pop out and buy a Surface laptop and it would be fine. For a bit. Windows, it seems to me, still suffers from the cruft problem beautifully described by Verity Stob 20 years ago. I do not think Macs are completely immune (I had a Mac Mini where I upgraded the OS once too often and it crawled), but macOS does seem to me more resistant.

There is another thing that I like about the MacBook. You close the lid and it sleeps. You open the lid minutes, hours or days later, and it wakes. This has never worked well for me on Windows, though it is meant to do the same. I can believe that it is hard to implement, but when it works it is a huge benefit.

There is also the unwanted advertising that has crept into the Windows UI especially since Windows 11. Working on the MacBook I do notice its absence; I can better focus on what I want to do.

From a developer perspective, the performance of the M1 Pro is a delight. I work mostly in Visual Studio Code on both platforms; even on Windows I have come to prefer VS Code for most types of work. There is also the fact that Unix-like operating systems have won in server and web applications, so there is less friction there.

Launchpad: reminiscent of the Windows 8 Start screen?

Microsoft came up with a great application launcher in the Windows 95 Start menu – and improved it until it reached its peak in Windows 7. I also like the Windows 8 full-screen version. Windows 10 and 11 are not so good though. You get inadvertent web searches, as well as the problem of apps that you search for not appearing for strange reasons. The Mac Launchpad, which reminds me of the Windows 8 full-screen Start menu, seems to work well. You type what you want and all the matches appear.

What do I miss when not using Windows? It is mainly a matter of working out new ways to do certain tasks. I do miss Hyper-V and WSL (Windows Subsystem for Linux) though I have had success with UTM for running both Windows and Ubuntu on the Mac. The integration of WSL with the desktop OS is great though. Microsoft Office still works best on Windows though not to the extent of a few years back. There is no Paint or Notepad, and favourites like Notepad++ do not run natively, but Preview works for cropping images and alternatives to Windows utilities exist.

Sometimes you are pushed towards the command line, which is not a bad thing. There is no WinSCP, for example, so use scp instead and write some helper scripts for common tasks. You end up saving time. (I realise you can script WinSCP as well.) And there is no need for PuTTY; just type ssh or script the command line you need.

I do expect though to use Windows less in future, and for me that is a big change.

Book review: After Steve by Tripp Mickle

This is billed as a book about a company, but is more accurately described as about two people, Tim Cook and Jony Ive, respectively CEO and former Chief Design Officer at Apple, one of the world’s biggest and most profitable companies. The author Tripp Mickle is a reporter at the Wall Street Journal where he covered Apple for four years.

Mickle has a thesis: that under Cook Apple’s profitability has flourished but its design-led innovation has faltered, damaged in 2011 when co-founder Steve Jobs died at the age of 56. “It’s unclear if design will ever regain its position as the dominant voice over product direction,” he writes. In his epilogue, Mickle says that “Cook’s aloofness and unknowability made him an imperfect partner for an artist who wanted to bring empathy to every product.” The author mentions several times that Cook “seldom went to the design studio to see Ive’s team work.”

The book has amazing detail and represents the outcome of interviews with “more than two hundred current and former Apple employees” supplemented by further interviews with their family members, friends, suppliers of Apple, competitors, and government officials. There is lots of dialogue in the accounts of key incidents, drawn either from recordings or “reconstructed based on the recollections of people familiar with the events described.” As you read, you feel immersed in the company. It is a great achievement, particularly (as the author also notes) considering that “at Apple, current and former employees adhere to a strict code of silence.” There is a thick section of notes and references.

After Steve then is essential reading for Apple watchers. That said, I have a couple of reservations. At 512pp this is a lengthy work and, for me, too long. It is occasionally repetitive, and the writing, while professional, is at times pedestrian. Further, if your interest is in Apple the company rather than Cook and Ive, it is overly focused on those two people.

This last point is perhaps why Mickle misses the impact of Apple Silicon, the series of ARM-based processors which began with the A series and took over from Intel as the technology in Mac computers from November 2020 with the launch of the M1. Recently Apple has announced the M2 with claimed performance improvements of up to 18% for the CPU and 35% for the GPU, compared to the M1.

Apple Silicon matters because it dramatically improves over x86 in its power/performance ratio, making the company’s laptops and iPads a delight compared to their competition. It may not be design-based, and it builds on ARM and the work of others, but it is a huge advance and gives the company’s hardware an edge over its Windows and Android competition that is hard to counter. Johny Srouji, in charge of Apple Silicon? Not mentioned by Mickle.

I would have preferred the book to be shorter (though researchers may be glad of its detail). What of its central thesis? Mickle makes the point that Apple Watch has a disappointing lack of focus, which I agree with, and that projects like the Apple electric car appear to have faltered. The Beats acquisition had a mixed outcome, and this was a puzzle to me too. Apple did not need Beats, its culture was alien, and my sense is that Apple Music would have flourished equally well without it.

I do think though that since Jobs, Apple has developed something with iPhone-level impact, and that is Apple Silicon and the M series in particular. I also think that Mickle misses something of the big picture. Buying a smartphone or computer? There is the Android jungle, or the Windows jungle, or Apple. For many it is hardly a choice; and the fact that this is more than ever true more than a decade after the passing of Jobs is huge credit to those involved, and makes the accusation “how Apple became a trillion-dollar company and lost its soul” ring just a bit hollow.

Hands On ASP.NET Core

I’ve been putting together a quick web application (well, I thought it would be quick) in my spare time (hah!) and I picked ASP.NET Core on Linux as a sensible option given that I like working in C#. Overall it has been a reasonable experience so far and I still love the language. This is the most extensive work I have done so far with ASP.NET Core though and I have a few observations.

It is not a difficult framework to work with, but I believe it could be made more approachable. This is largely a matter of documentation, though another point of confusion is the transition Microsoft has been making from ASP.NET MVC to Razor Pages. These two frameworks are similar but different: they share a lot of technology, but some things work in one and not the other, and sometimes it is not clear whether what you are reading applies just to ASP.NET MVC, or just to Razor Pages, or to both, or to both but with a little tweaking to account for differences. I started with MVC because I am more familiar with it, but have shifted to Razor Pages because that seems to be the preferred direction; really I am equally happy with either.

If you are thinking of getting started with ASP.NET Core I recommend you start not with the framework, but with making sure that you are familiar with the following topics:

Dependency injection. If you are puzzling about something in the framework the answer may be “add it to the constructor and it magically works.” This is obvious if you are familiar with it but not otherwise.

Anonymous types. These seem to crop up quite a lot.

Lambdas and the arrow operator =>

LINQ queries
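To give a flavour of how the last three of these crop up together, here is a tiny, contrived example: a lambda passed to a LINQ query, projecting into an anonymous type. The data is invented for illustration.

using System;
using System.Linq;

var players = new[] { "Ann", "Bob", "Carol" };

var seats = players
    .Where(p => p.StartsWith("A") || p.Length > 3)       // lambda using the => operator
    .Select((p, i) => new { Name = p, Seat = i + 1 });    // LINQ projection into an anonymous type

foreach (var s in seats)
{
    Console.WriteLine($"{s.Name} sits at seat {s.Seat}");
}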

Now, the documentation. Unless you have perhaps found a good and up to date book you will probably start here.

Now, I do think there are lots of good things about docs.microsoft.com, the fact that it is all on GitHub and open for comment and improvement, the fact that it performs well, and the obvious effort that has gone into many of the topics.

That said, I do not much like this page. My biggest problem with it is that there is no simple link to a comprehensive reference. It is a bunch of little tutorials which may or may not tell you what you need to know. It gets better if you click into one of the topics and I like this page, for example, much better, with the hierarchical list of topics on the left.

It is still not great though. There is a big emphasis on tutorials, and while I agree that learning through doing is a great way to learn, the problem with the tutorials is that they tend to leave you with lots of questions and no obvious route to answers.

I will give you an example. I decided to use the ASP.NET Identity system in my application, because it saves a ton of tedious work doing registration, password reset, login, and so on, plus it is security-critical code that I would likely get wrong if I did it myself.

The problem you will immediately hit though is that you want to store additional data about users. This could be any kind of data but let’s call it additional profile data. For example, you want to let users upload an image which is then displayed in the application. There are some heavy articles about customizing identity but there is also this one on adding custom user data to an ASP.NET Core web app. It’s great but it does not actually tell you how to retrieve the custom user data in your application. Eventually I figured out a way of doing it. You just have to use dependency injection to get an instance of the UserManager class. So you pop this in the constructor for one of your classes:

UserManager<YourCustomUser> userManager

and store it in a private variable. Then you can do:

var MyTask = _userManager.GetUserAsync(User);
MyTask.Wait();
var MyUser = MyTask.Result;

or something similar if you are calling from a synchronous method, and it just works. (In an async method you can simply await the GetUserAsync call instead.)
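Put together, the pattern looks something like this. It is a sketch only, using Razor Pages; ApplicationUser and its ScreenName property stand in for whatever custom user class and profile data you have added.

using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc.RazorPages;
using System.Threading.Tasks;

// Hypothetical custom user class with one extra profile field
public class ApplicationUser : IdentityUser
{
    public string? ScreenName { get; set; }
}

public class ProfileModel : PageModel
{
    private readonly UserManager<ApplicationUser> _userManager;

    // The framework supplies UserManager via constructor injection
    public ProfileModel(UserManager<ApplicationUser> userManager)
    {
        _userManager = userManager;
    }

    public string? ScreenName { get; private set; }

    public async Task OnGetAsync()
    {
        // User is the ClaimsPrincipal for the current request
        var appUser = await _userManager.GetUserAsync(User);
        ScreenName = appUser?.ScreenName;   // the custom profile data added to ApplicationUser
    }
}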

Let me add something else. The actual API reference for ASP.NET Core is almost useless. It faithfully documents each class and method while often saying nothing about how or why to use it.

Data access

My application is really forms over data as so many are, so data access plays a big role. There seem to be plenty of tutorials on data access in the ASP.NET Core documentation but I don’t much like them. The problem is Entity Framework. Most of the documentation assumes it. It is not that Entity Framework is bad; it does seem to work well and while there is debate about how well it performs, in many cases it does not matter, and in other cases you can fine-tune it. My problem rather is that what Microsoft calls a “complex data model” is actually the normal case, where you have many-to-many relationships, and dealing with this in Entity Framework soon gets fiddly. I am guilty of lacking patience, but being familiar with SQL it is easier for me just to write the SQL and to know exactly what data is being saved and what data is being retrieved. I have left Entity Framework in place because the Identity system uses it (and it looks non-trivial to replace) but for the rest I have migrated to Dapper which seems ideal. It is not a full-featured ORM and it expects you to write the SQL but does a lot that saves time. My only complaint about Dapper is that (again) the documentation isn’t great but I’ve found it much simpler to grok than the more advanced aspects of Entity Framework.
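As a flavour of what that looks like, here is a minimal Dapper sketch; the Players table, the Active column and the use of Microsoft.Data.SqlClient are assumptions for illustration, not part of my actual schema.

using System.Collections.Generic;
using System.Data;
using Dapper;
using Microsoft.Data.SqlClient;

public class Player
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public static class PlayerData
{
    // You write the SQL yourself; Dapper maps the result columns onto Player by name.
    public static IEnumerable<Player> GetActivePlayers(string connectionString)
    {
        using IDbConnection db = new SqlConnection(connectionString);
        return db.Query<Player>(
            "SELECT Id, Name FROM Players WHERE Active = @active",
            new { active = true });
    }
}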

One thing I do like about Entity Framework is data migrations. Like most developers I have a local database and another one online and code-first data migrations save a lot of work creating database tables and keeping the schema in sync. Dapper does not have this.

StackOverflow

Of course it is true that no matter what your question is, someone has asked it before, and often the best place to find the answer is on StackOverflow. Big appreciation for the folk who take the time to answer questions there, though I’d add that it is not a place from which to copy code; it is a place to understand a solution. Out of date information is a problem, as it is in Microsoft’s own documentation.

Finally

I think ASP.NET Core is a great framework (or frameworks) but not as approachable as it could be. Documenting it in the best way is not an easy problem to solve, and every developer comes with different skills and requirements. Perhaps Microsoft could get someone suitable to write a nice book aimed at intermediate coders, and one that does not assume you want to use Entity Framework. Then offer it as a free download and/or publish it online as part of the documentation, and keep it up to date as new versions appear.

SQLite with .NET: excellent but some oddities

I have been porting a C# application which uses an MDB database (the old Access/JET format) to one that uses SQLite. The process has been relatively smooth, but I encountered a few oddities.

One is puzzling and is described by another user here. If you have a column that normally stores string values, but insert a string that happens to be numeric such as “12345”, then you get an invalid cast exception from the GetString method of the SQLite DataReader. The odd thing is that the GetFieldType method correctly returns String. You can overcome this by using GetValue and converting the result to a string, as in dr.GetValue(i).ToString().
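Here is a small sketch of the workaround, assuming the System.Data.SQLite provider and a hypothetical Items table with a text Code column:

using System;
using System.Data.SQLite; // the System.Data.SQLite provider discussed here

// Code is declared as a text column but may hold numeric-looking strings such as "12345"
using var conn = new SQLiteConnection("Data Source=app.db");
conn.Open();

using var cmd = new SQLiteCommand("SELECT Code FROM Items", conn);
using var dr = cmd.ExecuteReader();
while (dr.Read())
{
    // dr.GetString(0) can throw InvalidCastException for a value like "12345";
    // GetValue + ToString reads the value regardless of how SQLite typed it.
    var code = dr.GetValue(0).ToString();
    Console.WriteLine(code);
}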

Another strange one is date comparisons. In my case the application only stores dates, not times; but SQLite using the .NET provider stores the values as DateTime strings. The SQLite query engine returns false if you test whether “yyyy-mm-dd 00:00:00” is equal to “yyyy-mm-dd”. The solution is to use the date function: date(datefield) = date(datevalue) works as you would expect. Alternatively you can test for a value between two dates, such as greater than yesterday and less than tomorrow.

Performance is excellent. Unit tests of various parts of the application that make use of the database showed speed-ups of between 2 and 3 times over JET on average; one was 8 times faster. Note though that you must use transactions with SQLite (or disable synchronous operation) for bulk updates, otherwise database writes are very slow. The reason is that SQLite wraps every INSERT or UPDATE in its own transaction by default. So you get the effect described here:

Actually, SQLite will easily do 50,000 or more INSERT statements per second on an average desktop computer. But it will only do a few dozen transactions per second. Transaction speed is limited by the rotational speed of your disk drive. A transaction normally requires two complete rotations of the disk platter, which on a 7200RPM disk drive limits you to about 60 transactions per second.

Without a transaction, a unit test that does a bulk insert, for example, took 3 minutes, versus 6 seconds for JET. Refactoring into several transactions reduced the SQLite time to 3 seconds, while JET went down to 5 seconds.
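A sketch of the refactoring, again using System.Data.SQLite and an invented Players table: wrap the whole batch in one explicit transaction so there is a single commit instead of one per INSERT.

using System.Collections.Generic;
using System.Data;
using System.Data.SQLite;

public static class BulkLoader
{
    public static void InsertPlayers(SQLiteConnection conn, IEnumerable<string> names)
    {
        using var tx = conn.BeginTransaction();
        using var cmd = new SQLiteCommand("INSERT INTO Players (Name) VALUES (@name)", conn, tx);
        var p = cmd.Parameters.Add("@name", DbType.String);

        foreach (var name in names)
        {
            p.Value = name;
            cmd.ExecuteNonQuery();   // each INSERT joins the open transaction
        }

        tx.Commit();                 // one commit for the whole batch
    }
}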

HackerRank survey shows programming divides in more ways than one

Developer recruitment company HackerRank has published a survey of developer skills. The first place I look in any survey is who took part, and how many:

HackerRank conducted a study of developers to identify trends in developer education, skills and hiring practices. A total of 39,441 professional and student developers completed the online survey from October 16 to November 1, 2017. The survey was hosted by SurveyMonkey and HackerRank recruited respondents via email from their community of 3.2 million members and through social media sites.

I would like to see the professional and student responses shown separately. The world of work and the world of learning are different. This statement may also be incomplete, since several of the questions analyse what employers want, which suggests another source of data (not difficult to find for a recruitment company).

It is still a good read. It is notable for example that the youngest generation is learning to code later in life than those who are now over 35.

I am not sure how to interpret these figures, but can think of some factors. One is that the amount of stuff you can do with a computer without coding has risen. In the earliest days when computing became affordable for anyone (late seventies/early eighties), you could not do much without coding. This was the era of type-in listings for kids wanting to play games. That soon changed, but coding remained important to getting things done if you wanted to make a business database useful, or create a website. Today though you can do all kinds of business, leisure and internet computing without needing to see code, so the incentive to learn is lower. It has become a more specialist skill. It remains valuable though, so older people have reason to be grateful.

How do people learn to code? The most popular resource is Stack Overflow, followed by YouTube, with books coming in third. In truth the most popular resource must be Google search. Credit to Stack Overflow though: like Wikipedia, it offers a good browsing experience at a time when the web has become increasingly unpleasant to use, infected by pop-up surveys, autoplay videos and intrusive advertising, not to mention the actual malware out there.

No surprises in language popularity, though oddly the survey does not tell us directly what languages are most used or best known by the respondents. The most in demand languages are apparently:

1. JavaScript
2. Java
3. Python
4. C++
5. C
6. C#
7. PHP
8. Ruby
9. Go
10. Swift

If you ask what languages developers plan to learn next, Go, Python and Scala head the list. And then there is a fascinating chart showing which languages developers prefer grouped by age. Swift, apparently, is loved by 75% of those over 55, but only by 15% of those under 25, the opposite of what I would expect (though I don’t know if this is a percentage of those who use the language, or includes those who do not know it at all).

Frameworks are another notable topic. Everyone loves Node.js; but two of the frameworks on offer are “.NET Core” and “ASP”. This is odd, since .NET Core is not really a framework, and ASP normally refers to the ancient “Active Server Pages” framework which nobody uses any longer, while ASP.NET runs on .NET Core and so is not an alternative to it.

This may be a clue that the HackerRank company or community is not well attuned to the Microsoft platform. That itself is of interest, but makes me question the validity of the survey results in that area.

C# and .NET: good news and bad as Python rises

Two pieces of .NET news recently:

Microsoft has published a .NET Core 2.1 roadmap and says:

We intend to start shipping .NET Core 2.1 previews on a monthly basis starting this month, leading to a final release in the first half of 2018.

.NET Core is the cross-platform, open source implementation of the .NET Framework. It provides a future for C# and .NET even if Windows declines.

Then again, StackOverflow has just published a report on the most sought-after programming languages in the UK and Ireland, based on the tags on job advertisements on its site. C# has declined to fourth place, now below Python and with half the demand of JavaScript.

To be fair, this is more about increased demand for Python, probably driven by interest in AI, rather than decline in C#. If you look at traffic on the StackOverflow site, C# is steady, but Python is growing fast.

The point that interests me though is the extent to which Microsoft can establish .NET Core beyond the Microsoft-platform community. Personally I like C# and would like to see it have a strong future.

There is plenty of goodness in .NET Core. Performance seems to be better in many cases, and cross-platform support is a big advantage.

That said, there is plenty of confusion too. Microsoft has three major implementations of .NET: the .NET Framework for Windows, Xamarin/Mono for cross-platform, and .NET Core for, umm, cross-platform. If you want cross-platform ASP.NET you will use .NET Core. If you want cross-platform Windows/iOS/macOS/Android, then it’s Xamarin/Mono.

The official line is that by targeting a specification (a version of .NET Standard), you can get cross-platform irrespective of the implementation. It’s still rather opaque:

The specification is not singular, but an incrementally growing and linearly versioned set of APIs. The first version of the standard establishes a baseline set of APIs. Subsequent versions add APIs and inherit APIs defined by previous versions. There is no established provision for removing APIs from the standard.

.NET Standard is not specific to any one .NET implementation, nor does it match the versioning scheme of any of those runtimes.

APIs added to any of the implementations (such as, .NET Framework, .NET Core and Mono) can be considered as candidates to add to the specification, particularly if they are thought to be fundamental in nature.

Microsoft also says that plenty of code is shared between the various implementations. True, but it still strikes me that having both Xamarin/Mono and .NET Core is one cross-platform implementation too many.

Server shipments decline as customers float towards cloud

Gartner reports that worldwide server shipments have declined by 4.2% in the first quarter of 2017.

Not a surprise considering the growth in cloud adoption but there are several points of interest.

One is that although Hewlett Packard Enterprise (HPE) is still ahead in revenue (over $3 billion revenue and 24% market share), Dell EMC is catching up, still number two with 19% share but posting growth of 4.5% versus 8.7% decline for HPE.

In unit shipments, Dell EMC is now fractionally ahead, with 17.9% market share and growth of 0.5% versus HPE at 16.8% and decline of 16.7%.

Clearly Dell is doing something right where HPE is not, possibly through synergy with its acquisition of storage vendor EMC (announced October 2015, completed September 2016).

The larger picture though is not great for server vendors. Businesses are buying fewer servers since cloud-hosted servers or services are a good alternative. For example, SMBs who in the past might have run Exchange are tending to migrate to Office 365 or perhaps G Suite (Google apps). Maybe there is still a local server for Active Directory and file server duties, or maybe just a NAS (Network Attached Storage).

It follows that the big cloud providers are buying more servers but such is their size that they do not need to buy from Dell or HPE, they can go directly to ODMs (Original Design Manufacturers) and tailor the hardware to their exact needs.

Does that mean you should think twice before buying new servers? Well, it is always a good idea to think twice, but it is worth noting that going cloud is not always the best option. Local servers can be much cheaper than cloud VMs as well as giving you complete control over your environment. Doing the sums is not easy and there are plenty of “it depends”, but it is wrong to assume that cloud is always the right answer.