Your favourite article on The Register, and what that says about technology and the media

I’m at Mobile World Congress in Barcelona, meeting people new to me who ask, “Who do you write for?” On several separate occasions, after I mention The Register, the response has been: “Oh yes, I loved that Apple article.”

The piece they mean (not one of mine) is this one by Kieren McCarthy. It recounts the Reg’s efforts to attend the iPhone 7 launch; or more precisely, its efforts to get Apple PR to admit that the Reg was on a “don’t invite” list and would not be able to attend.

image

Why does everyone remember this piece? In short, because it is a breath of reality in a world of hype.

The piece also exposes hidden pressures that influence tech media. There are more people working in PR than in journalism, as I recall, and it is their job to manage media coverage so that it reflects as closely as possible the messaging that their customers, the tech companies, wish to put out.

Small tech companies and start-ups struggle to get any coverage and welcome almost any press interest. The giants though are in a more privileged position, none more so than Apple, for whom public interest in its news is intense. This means it can select who gets to attend its events and naturally chooses those it thinks will give the most on-message coverage.

I do not mean to imply that those favoured journalists are biased. I believe most people write what they really think. Still, consciously or unconsciously they know that if they drift too far from the vendor’s preferred account they might not get invited next time round, which is probably a bad career move.

Apple is in a class of its own, but you see similar pressures to a lesser extent with other big companies.

Another thing I’ve noticed over years of attending technology events is that the opportunities for open questioning of the most senior executives have diminished. They would rather have communication specialists answer the questions, and stay behind closed doors or give scripted presentations from a stage.

Here in Barcelona I’ve discovered the Placa de George Orwell for the first time:

image

Orwell knew as well as anyone the power of the media, even though he almost certainly did not say what is now often attributed to him, “Journalism is printing what someone else does not want printed: everything else is public relations.”

Still, as I move into a series of carefully crafted presentations, it is a thought worth keeping front of mind.

Finally, let me note that I have never worked full-time for The Register, though I have written a fair amount there over the years (the headlines, by the way, are usually not written by me). The more scurrilous aspects of some Reg pieces are not really my style, but I absolutely identify with The Register’s willingness to let writers say what they think, without worrying about what the vendor will think.

Should you go to Microsoft Build?

image

In the beginning there was the Professional Developers Conference (PDC) – the first was in 1992. They were fantastic events, with deep dives into the innards of Windows and how to develop applications on Microsoft’s platform. Much of the technology presented was in early preview and often did not work quite right; some things that were presented never made it to production, famous examples including “Hailstorm” also known as .NET My Services, and the WinFS file system originally slated for Windows Vista.

These events seemed to be a critical part of Microsoft’s development cycle. Internally teams would ready their latest stuff for a PDC session, which was then adapted and re-presented at other events around the world.

Then PDC kind of morphed into Microsoft Build, the first of which took place in September 2011. Unlike PDC, Build was specifically focused on Windows, and was originally associated with Windows 8 and its new app platform, WinRT. Part of the vision behind Windows 8 was that it would have a strong app ecosystem, and Build was about enthusing and informing developers about the possibilities.

As it turned out, the Windows 8 app ecosystem was a bit of a disaster for various reasons. Microsoft had another go with UWP (Universal Windows Platform) in Windows 10. Build in April 2015 was an amazing event, where the company appeared to be going all-out to make UWP on both desktop and mobile a success. Not only was the platform itself being enhanced, but we also got Project Centennial (deliver desktop applications via the Store), Project Astoria (run Android apps on Windows 10 Mobile) and Project Islandwood (compile iOS code for UWP).

Just a few months later the company made a huge about-turn. CEO Satya Nadella’s Aligning Engineering to Strategy memo signalled the beginning of the end for Windows Phone, and the departure of Stephen Elop and the dismantling of the Nokia devices acquisition. That was the end of the universal part of UWP.

Project Astoria was scrapped. The Windows Bridge for iOS (Project Islandwood) still just about exists, but its core rationale (get iOS apps to Windows Phone) is now irrelevant.

Nadella steered the company instead towards “the intelligent cloud” and to date that strategy has been successful, with impressive growth for Office 365 and Microsoft Azure.

Microsoft has announced Build 2018, which takes place in early May. Given the history of Build, I find it intriguing that Windows is not currently mentioned on the event’s home page. The page description metadata says:

Microsoft Build 2018, Seattle, WA May 7-9, 2018. Microsoft’s ultimate developer conference focused on cloud, artificial intelligence, mixed reality, and more.

In the main text of the page, about the only specific topics mentioned are these:

Take in keynotes by Microsoft CEO Satya Nadella and other visionaries behind the Intelligent Cloud and Intelligent Edge

That sounds like a focus on Azure and AI/ML/IoT/Big Data cloud services, and on mobile and IoT devices. It is a long way removed from the original concept of Build as all about Windows and its application platform.

Windows remains important to Microsoft, and to all of us who use it day to day. Still, if you think about cutting-edge software development today, Windows desktop applications are probably not the first thing to come to mind, nor UWP for that matter.

This being the case, it does make sense for the company to focus on its cloud services at Build, and on diverse mobile platforms through what is now an amazing range of cross-platform tools in Visual Studio.

Of course there will in fact also be Windows stuff at Build, including Windows and HoloLens Mixed Reality, Cortana skills and UWP improvements.

Still, if you can only get to one big Microsoft event in the year, Ignite in September is now a bigger deal and closer to the heart of the new (or current) Microsoft.

Let me add that these Microsoft events, whether Build or PDC, have on occasion seen some stunning announcements. Examples include the unveiling of C# and the .NET Framework, the 2003 Longhorn reveal (yes it all turned to dust), Windows 7 in 2008, and Windows 8 in 2011.

I would like to think that the company still has the capacity to surprise and amaze us; but it must be admitted that the current Build pitch is rather unexciting. Google I/O, incidentally, is on at the same time.

Microsoft introduces new feedback system for technical documentation, will delete existing comments

Microsoft is introducing a new feedback system for https://docs.microsoft.com, used for its technical documentation.

The new system, which you can already see for certain topics such as the Visual Studio IDE, is based on GitHub issues. When you leave a comment, you can specify whether it concerns documentation or product functionality.

image

So far so good, but the downside is that all existing comments will be deleted:

image

The statement “Old comments will not be carried over. If content within a comment thread is important to you, please save a copy.” is unhelpful. Nobody knows which comments will be useful to them in future.

Few things sap enthusiasm for community participation more than having all the past contributions into which you have put effort suddenly zapped. Nor is this the first time, as user guibirow notes:

As much I like the new system idea, I hate the fact that this is happening over and over.
It used to be a Disqus comment system, then moved to LiveFyre, then moved now to this new system, what will be the next?
The worst part of this all is that MS does not care about past content lost on these discussions, so many times I found issues described in the docs that are gone now.
Please, pay attention to your previous mistakes, don’t let the information be lost again, at lest import them as closed issue in the new system.

Sometimes progress has a cost, and that is understood. However, it is not impossible to migrate content from one system to another; it just takes effort.

Update: Microsoft’s Rob Eisenberg has responded with an explanation and mitigation plan regarding existing comments. He says that a straight migration of the comments is impossible:

    • There is a lot of garbage, spam and even dangerous content within existing LiveFyre comments which would violate GitHub terms of usage and our open source code of conduct, as well as cause security problems.
    • There isn’t a good way to map LiveFyre users to GitHub users and using a bot account to anonymously add comments is questionable with respect to OSS practices and GitHub terms of use.
    • For legal and privacy reasons, we cannot move user-associated data from one system to another without consent from users (GDPR).
    • LiveFyre conversations are threaded, while GitHub issues are not.
    • Placing the old comments into the GitHub Issues system would derail the entire GitHub Issues workflow for both customers and employees and muddle the data.
    • It isn’t clear whether there is a way to invoke GitHub APIs for a migration of this scale such that it wouldn’t violate GitHub API terms of use.

He also has an archiving proposal:

We would take the comments from an article on docs.microsoft.com and then convert them into a Markdown file. During this process, we would strip all user info (remember GDPR). The Markdown file would then be committed to a GitHub repo. Finally, at the bottom of the feedback section, next to the link that says "View on GitHub" we would add a second link that said something like "View Comment Archive". This link would connect you directly to the Markdown comment file for that page.

This sounds positive. At the same time, it is a mess that illustrates some of the disadvantages of a “best of breed” approach to solving technical problems. If Microsoft could use its own technology to host a documentation and commenting system, and a source code management and issue tracking system for that matter, this issue would not arise, and users would not need multiple accounts, which is what causes the legal issues mentioned above.

Microsoft in fact used to use its own platform for all of the above, but decided to shift to third-party solutions because they worked better. That seemed a good thing, improving user experience and productivity, but it becomes a problem when the apparent best third-party option changes.

Setting up PHP for development on Windows Subsystem for Linux in Windows 10

I have been working a little with PHP, for the first time in a while, and soon found it annoying not to have the convenience of instant application testing and line-by-line debugging. I have set up a PHP development environment before, using XAMPP for Windows and Eclipse, but it was fiddly. I also prefer PHP on Linux, which is where my scripts will be running.

Since Windows 10 now has a Linux environment built-in, called Windows Subsystem for Linux (WSL), I decided to set this up to run Apache, PHP and MySQL and to try debugging my scripts there.

My PC is a recent installation and I had not yet installed WSL. To do so, you have to both download a Linux distribution from the Store (I chose Ubuntu), and enable WSL in Windows features. Then restart, launch Ubuntu, set a username and password, and you are up and running.
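
For reference, the WSL feature can also be enabled from an elevated PowerShell prompt (a restart is still needed afterwards):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux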

Note the Linux commands that follow should be run as root using sudo.

Before doing anything else, I got Ubuntu up to date:

apt-get update

apt-get upgrade

Then I installed the LAMP suite:

apt-get install lamp-server^

(the final ^ is intentional; see the guide here).

To check that everything is working, I created the file phpinfo.php in /var/www/html with the following contents:

<?php phpinfo(); ?>

and restarted Apache:

/etc/init.d/apache2 restart

Note: if you have IIS running in Windows, or another web server, Apache will not be able to listen on port 80. Change the port in /etc/apache2/ports.conf and in /etc/apache2/sites-enabled/000-default.conf
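
For example, to move Apache to port 8080 (an arbitrary choice), change the Listen directive in ports.conf:

Listen 8080

and change the VirtualHost declaration in 000-default.conf to match:

<VirtualHost *:8080>

Then restart Apache and browse to localhost:8080 instead.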

Then I opened a web browser on the Windows side and browsed to localhost:

image

and

image

We are up and running, but not debugging PHP yet. Remember the basic rules of WSL:

  • you cannot change Linux files from Windows.
  • you can access Windows files from Linux.

We want to edit PHP from Windows, so we’ll define a site that uses Windows files. Windows files are under /mnt/c (or whatever drive letter you are using).

So if, for example, you have your PHP website in a folder called c:\websites\mysite, you can have Apache serve files from that folder.

The quickest way to get up and running is to create a symbolic link in the Apache home directory, in my case /var/www/html. Change to that directory and type:

ln -s /mnt/c/websites/mysite mysite

Now you can view the site at http://localhost/mysite/

This worked first time for me, complete with PHP running. You could also set up multiple virtual hosts in Apache, and use the hosts file in Windows to map other host names to localhost.
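
For example (a sketch, using a made-up hostname), you could create /etc/apache2/sites-available/mysite.conf containing:

<VirtualHost *:80>
    ServerName mysite.local
    DocumentRoot /mnt/c/websites/mysite
    <Directory /mnt/c/websites/mysite>
        Require all granted
    </Directory>
</VirtualHost>

Enable it with a2ensite mysite, restart Apache, and then add this line to C:\Windows\System32\drivers\etc\hosts on the Windows side:

127.0.0.1 mysite.local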

Next, you probably want PHP to show error messages. To do this, replace the default php.ini with the development version (or tweak it according to your own preferences). At the time of writing, on Ubuntu, the default PHP version is 7.0 and the development configuration is at /usr/lib/php/7.0/php.ini-development. So I backed up the php.ini at /etc/php/7.0/apache2, replaced it with the development version, and restarted Apache. My PHP form immediately showed me a non-fatal undefined index notice, so it worked.
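
Concretely, the steps I took were along these lines (run with sudo, as before):

cd /etc/php/7.0/apache2
cp php.ini php.ini.bak
cp /usr/lib/php/7.0/php.ini-development php.ini
/etc/init.d/apache2 restart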

There is one small inconvenience. Apache in WSL will only run during the session. So before starting work, you have to open Ubuntu and type:

sudo apache2ctl start

That said, background task support is coming to WSL, and I do not regard this as a big problem.

OK, this is cool: we can make changes to the PHP code in our favourite Windows editor, save, and view the results directly in the browser. But what about line-by-line debugging? For this, we are going to use Visual Studio Code with the PHP Debug extension:

image

Then on the Ubuntu side:

apt-get install php-xdebug

Restart Apache:

apache2ctl restart

Check that phpinfo.php now shows an Xdebug section. Then edit php.ini and add the following:

[XDebug]
xdebug.remote_enable = 1
xdebug.remote_autostart = 1

Restart Apache again and XDebug is ready to go.

Over in Visual Studio Code there is a little more work to do. The problem is that although everything is running on localhost, the location of the files looks different to Linux than to Windows. We can fix this with a pathMappings setting. In Visual Studio Code, open the PHP file you want to debug. Click the Debug icon and then the little gearwheel near top left; this will open launch.json. By default there are a couple of settings for XDebug. These are OK for a default setup, but we need to add path mappings so that the debugger knows where to find the files. For example:
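
Something like this (a sketch: the mapping assumes the symbolic link setup above, with the project folder open in VS Code; adjust the paths and the Xdebug port to suit):

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for XDebug",
            "type": "php",
            "request": "launch",
            "port": 9000,
            "pathMappings": {
                "/var/www/html/mysite": "${workspaceFolder}"
            }
        }
    ]
}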

image

Now you can set a breakpoint, start debugging, and open the page in your browser:

image

More guidance on the PHP Debug extension by Felix Becker is here.

Final thoughts

This is cool; but is it better or worse than an old-style VM running Linux and PHP? The WSL solution is lightweight and convenient, but unlike a VM it is not isolated, and you may hit issues unique to WSL, since not everything runs exactly as it would on a real Linux installation. I did happen to suffer crashes in Visual Studio and in Outlook while WSL was running; it may well be coincidence, but I cannot help wondering if WSL might be to blame.

Still, a great feature of WSL is that when you exit your session, it goes away, so it is not too intrusive. I plan to use it for PHP debugging and will see how it goes.

The price of free Wi-Fi, and is it a fair deal?

Here we are in a pub trying to get on the Wi-Fi. The good news: it is free:

image

But the provider wants my mobile number. I am a little wary. I hate being called on my mobile, other than by people I want to hear from. Let’s have a look at the T&C. Luckily, this really is free:

image

But everything has a cost, right? Let’s have a look at that “privacy” policy. I put privacy in quotes because in reality such policies are often bad news for your privacy:

image

Now we get to the heart of it. And I don’t like it. Here we go:

“You also agree to information about you and your use of the Service including, but not limited to, how you conduct your account being used, analysed and assessed by us and the other parties identified in the paragraph above and selected third parties for marketing purposes”

[You give permission to us and to everyone else in the world that we choose to use your data for marketing]

“…including, amongst other things, to identify and offer you by phone, post, our mobile network, your mobile phone, email, text (SMS), media messaging, automated dialling equipment or other means, any further products, services and offers which we think might interest you.”

[You give permission for us to spam you with phone calls, texts, emails, automated dialling and any other means we can think of]

“…If you do not wish your details to be used for marketing purposes, please write to The Data Controller, Telefónica UK Limited, 260 Bath Road, Slough, SL1 4DX stating your full name, address, account number and mobile phone number.”

[You can only escape by writing to us with old-fashioned pen and paper and a stamp and note you have to include your account number for the account that you likely have no clue you even have; and even then, who is to say whether those selected third parties will treat your personal details with equal care and concern?]

A fair deal?

You get free Wi-Fi, O2 gets the right to spam you forever. A fair deal? It could be OK. Maybe there won’t in fact be much spam. And since you only give your mobile number, you probably won’t get email spam (unless some heartless organisation has a database linking the two, or you are persuaded to divulge it).

In the end it is not the deal itself I object to; that is my (and your) decision to make. What I dislike is that the terms are hidden. Note that the thing you are likely to care about is clause 26, and you have to not only view the terms but also scroll right down to find it.

And why the opt-out by post only? There is only one reason I can think of: to make it difficult.

What the Blazor! After Silverlight, .NET in the browser reappears by another route

Silverlight, Microsoft’s browser plug-in which included a cut-down .NET runtime, once seemed full of promise for developers looking for an end-to-end .NET solution, cross-platform on Windows and Mac, and with support for “out of browser” applications for a native-like experience.

Silverlight was killed by various factors, including the industry’s rejection of old-style browser plug-ins, and warring factions at Microsoft which resulted in Silverlight on Windows Phone, but not on Windows 8. The Windows 8 model won, becoming the Universal Windows Platform (UWP) in Windows 10, but this is quite a different thing with no cross-platform support. Then there is Xamarin, which is cross-platform .NET; one day perhaps Microsoft will figure out what to do about having both UWP and Xamarin.

Yesterday Microsoft announced (though it was already known to those paying attention) Blazor, an experimental project for hosting the .NET Runtime in the browser via WebAssembly. The name derives from “Browser + Razor”, Razor being the syntax used by ASP.NET to combine HTML and C# in a web application. C# in Razor executes on the server, whereas in Blazor it executes on the client.

Blazor is enabled by work the Xamarin team has done to compile the Mono runtime to WebAssembly. Although shipping a .NET runtime to the browser sounds like a relatively large download, the team is hoping that a combination of smart linking (to strip unnecessary code from both applications and the runtime), caching and HTTP compression will make it acceptable.

This post by Steve Sanderson is a good technical overview. Some key points:

– You can run applications either as interpreted .NET IL (intermediate language) or pre-compiled

– Blazor is an SPA (Single Page Application) framework with solutions for routing, state management, dependency injection, unit testing and more

– UI components use HTML and CSS

– There will be a browser API which you can call from C# code

– You will be able to interop with JavaScript libraries

– Microsoft will provide ASP.NET libraries that integrate with Blazor, but you can use Blazor with any server-side technology
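
For a flavour of the programming model, here is a minimal component sketch based on the counter sample in the early previews; the exact syntax, particularly for event binding, is likely to change while Blazor remains experimental:

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button onclick="@IncrementCount">Click me</button>

@functions {
    int currentCount = 0;

    void IncrementCount()
    {
        currentCount++;
    }
}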

What version of .NET will be supported? This is where it gets messy. Sanderson says Blazor will support .NET Standard 2.0 or higher, but not completely: some functions will throw a PlatformNotSupportedException, because not all functions make sense in the context of a Blazor application running in the browser.

Blazor sounds promising, though the demo application on Azure currently gives me a 403 error. So there is this video from NDC Oslo instead.

The other question is whether Blazor has a future or will join Silverlight and other failed attempts to create a new application platform that works. Microsoft demands much patience from its .NET community.

HackerRank survey shows programming divides in more ways than one

Developer recruitment company HackerRank has published a survey of developer skills. The first place I look in any survey is who took part, and how many:

HackerRank conducted a study of developers to identify trends in developer education, skills and hiring practices. A total of 39,441 professional and student developers completed the online survey from October 16 to November 1, 2017. The survey was hosted by SurveyMonkey and HackerRank recruited respondents via email from their community of 3.2 million members and through social media sites.

I would like to see the professional and student responses shown separately; the world of work and the world of learning are different. This statement may also be incomplete, since several of the questions analyse what employers want, which suggests another source of data (not difficult to find for a recruitment company).

It is still a good read. It is notable for example that the youngest generation is learning to code later in life than those who are now over 35:

image

I am not sure how to interpret these figures, but can think of some factors. One is that the amount of stuff you can do with a computer without coding has risen. In the earliest days when computing became affordable for anyone (late seventies/early eighties), you could not do much without coding. This was the era of type-in listings for kids wanting to play games. That soon changed, but coding remained important to getting things done if you wanted to make a business database useful, or create a website. Today though you can do all kinds of business, leisure and internet computing without needing to see code, so the incentive to learn is lower. It has become a more specialist skill. It remains valuable though, so older people have reason to be grateful.

How do people learn to code? The most popular resource is Stack Overflow, followed by YouTube, with books coming in third. In truth the most popular resource must be Google search. Credit to Stack Overflow though: like Wikipedia, it offers a good browsing experience at a time when the web has become increasingly unpleasant to use, infected by pop-up surveys, autoplay videos and intrusive advertising, not to mention the actual malware out there.

No surprises in language popularity, though oddly the survey does not tell us directly which languages are most used or best known by the respondents. The most in-demand languages are apparently:

1. JavaScript
2. Java
3. Python
4. C++
5. C
6. C#
7. PHP
8. Ruby
9. Go
10. Swift

If you ask what languages developers plan to learn next, Go, Python and Scala head the list. And then there is a fascinating chart showing which languages developers prefer grouped by age. Swift, apparently, is loved by 75% of those over 55, but only by 15% of those under 25, the opposite of what I would expect (though I don’t know if this is a percentage of those who use the language, or includes those who do not know it at all).

Frameworks are another notable topic. Everyone loves Node.js; but two of the “frameworks” on offer are “.NET Core” and “ASP”. This is odd, since .NET Core is not really a framework, and ASP normally refers to the ancient Active Server Pages technology which nobody uses any longer, while ASP.NET runs on .NET Core and so is not an alternative to it.

This may be a clue that the HackerRank company or community is not well attuned to the Microsoft platform. That itself is of interest, but makes me question the validity of the survey results in that area.

C# and .NET: good news and bad as Python rises

Two pieces of .NET news recently:

Microsoft has published a .NET Core 2.1 roadmap and says:

We intend to start shipping .NET Core 2.1 previews on a monthly basis starting this month, leading to a final release in the first half of 2018.

.NET Core is the cross-platform, open source implementation of the .NET Framework. It provides a future for C# and .NET even if Windows declines.

Then again, Stack Overflow has just published a report on the most sought-after programming languages in the UK and Ireland, based on the tags on job advertisements on its site. C# has declined to fourth place, now below Python and with half the demand of JavaScript:

image

To be fair, this is more about increased demand for Python, probably driven by interest in AI, than about decline in C#. If you look at traffic on the Stack Overflow site, C# is steady, but Python is growing fast:

image

The point that interests me, though, is the extent to which Microsoft can establish .NET Core beyond the Microsoft-platform community. Personally I like C#, and would like to see it have a strong future.

There is plenty of goodness in .NET Core. Performance seems to be better in many cases, and cross-platform support is a big advantage.

That said, there is plenty of confusion too. Microsoft has three major implementations of .NET: the .NET Framework for Windows, Xamarin/Mono for cross-platform, and .NET Core for, umm, cross-platform. If you want cross-platform ASP.NET you will use .NET Core. If you want cross-platform Windows/iOS/macOS/Android, then it’s Xamarin/Mono.

The official line is that by targeting a specification (a version of .NET Standard), you can get cross-platform irrespective of the implementation. It’s still rather opaque:

The specification is not singular, but an incrementally growing and linearly versioned set of APIs. The first version of the standard establishes a baseline set of APIs. Subsequent versions add APIs and inherit APIs defined by previous versions. There is no established provision for removing APIs from the standard.

.NET Standard is not specific to any one .NET implementation, nor does it match the versioning scheme of any of those runtimes.

APIs added to any of the implementations (such as, .NET Framework, .NET Core and Mono) can be considered as candidates to add to the specification, particularly if they are thought to be fundamental in nature.
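
In practical terms, a library opts into the standard rather than a specific runtime via its project file; a minimal example, targeting the .NET Standard 2.0 version discussed above:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>

Such a library should then, in principle, be consumable from the .NET Framework, .NET Core and Mono alike.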

Microsoft also says that plenty of code is shared between the various implementations. True, but it still strikes me that having both Xamarin/Mono and .NET Core is one cross-platform implementation too many.

Strong financial results from Microsoft as it aims for breadth of services

Microsoft reported a big quarter (in terms of revenue) for the three months ending December 31st, with revenue of $28,918 million.

What’s notable? Mainly the big jump in Microsoft’s recent success stories: year on year Office 365 up by 41%, Azure up by 98%, Dynamics 365 up by 67%.

Windows is flat/weak as you would expect, and Surface hardware is standing still. Xbox grew a bit following the launch of Xbox One X.

LinkedIn is growing: revenue of $1.3 billion and “sessions growth of over 20%” in the quarter. In the earnings webcast, Microsoft’s Amy Hood said that the LinkedIn acquisition has both performed better, and seems more strategic, now than it did at the time.

Hood also made reference to the company’s ability to up-sell cloud users to higher-margin services. “Office 365 commercial revenue increased 41 percent from installed base growth across all customer segments, and ARPU [Average Revenue per User] expansion from continued customer migration to higher value offers in the E3 and E5 workloads.”

This point is key, and is the answer (from the provider’s point of view) to the lower margins implicit in moving from software to services. When Microsoft sells a licence for you to use Windows or Office, the margin is huge, because reproducing the software, or providing it for download, costs almost nothing; with a subscription, by contrast, there is significant cost to providing the service. However, the subscription has offsetting advantages, in particular the continuing interaction with the customer: it generates data which both the customer and the provider can mine (subject to appropriate privacy controls), and it gives the provider the opportunity to extend the relationship into new or upgraded services.

CEO Satya Nadella fielded a good question about Microsoft losing out to Sony in gaming and to Alexa and Google Home in voice devices. On gaming, Nadella referred to the PC alongside Xbox as a strategic asset. “PC gaming is a growth market,” he said, as well as software such as Minecraft now on mobile devices, giving the company a broad reach. He also remarked on Azure as a gaming back end.

As for Cortana in the home (or its absence), Nadella said that the focus is on the server-side cognitive services. He also talked about voice input and control of Office 365. The key point though was that Microsoft wants to work both with its own and with other voice assistant devices, so it can win on services even when competitor devices are in use. “One-turn dialogs on one speaker in one home, that’s just not our vision,” he said.

Nadella made another key point in the webcast, in answer to a question about how Azure Stack (a packaged version of Azure for installation on-premises) will impact Azure. “Computing is becoming more distributed, not less distributed,” he said. IoT and sensors play a large part in this. Everything goes to the cloud but computing on the edge (the new buzzword for local processing) is important for efficiency.

It is easy to see ways in which Microsoft could stumble. The PC will decline as the number of users who need a desktop or laptop computer diminishes. Microsoft’s failure in mobile could prove costly as competitors use synergy with their own applications and cloud services to steer customers away. There are opportunities such as home automation and payments which seem closed to the company now.

Then again, strong results such as these show how the company can succeed by continuing to migrate its business users to cloud services. It remains deeply embedded in business computing.

Here is my chart summarising Microsoft’s performance:   

Quarter ending December 31st 2017 vs quarter ending December 31st 2016, $millions

Segment                               Revenue   Change   Operating income   Change
Productivity and Business Processes      8953    +1774               3337     +284
Intelligent Cloud                        7795    +1037               2832     +541
More Personal Computing                 12170     +281               2510      -51

The segments break down as:

Productivity and Business Processes: Office, Office 365, Dynamics 365 and on-premises Dynamics, LinkedIn

Intelligent Cloud: Server products, Azure cloud services

More Personal Computing: Consumer including Windows, Xbox; Bing search; Surface hardware