QCon London 2017: IoT insecurity, serverless computing, predicting technical debt, and why .NET Core depends on a 36,000 line C++ file

I’m at the QCon event in London, a multi-vendor conference aimed primarily at enterprise developers and architects.

Image: Adam Tornhill speaks at QCon London 2017

A few notes on day one. Alasdair Allan gave a keynote on security and the internet of things; it was an entertaining and disturbing résumé of all that is wrong with the mad rush to connect everything to the internet, though it was short on answers. Our culture has to change so that organisations such as hotels, toy manufacturers, appliance vendors and even makers of medical equipment take security seriously, but it is not clear how this will come about unless so many bad things happen that customers start to insist on it.

Michael Feathers spoke on strategic code deletion, part of a track on “Dark code: the legacy/tech debt dilemma.” This was an excellent session; code is added to projects more often than it is removed, and lack of hygiene in this regard has risks including security, reliability and performance. But discovering which code is safe to remove is not always trivial, and Feathers explored some of the nuances and suggested some techniques.

Steve Faulkner gave a session on serverless JavaScript, or more specifically, using Amazon Web Services (AWS) Lambda and API Gateway. Faulkner said that the API Gateway was the piece that made Lambda viable for them; he is Director of Platform Engineering at Bustle, a busy content site based in the USA. In a nutshell, moving from EC2 VMs to Lambda has yielded both financial savings and easier management. The only downside is performance; each call to a Lambda function takes a minimum of 100ms whereas the same function on a VM might take 20ms. In the end it is not critical as performance remains satisfactory.
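
For anyone unfamiliar with the model, a function behind API Gateway is just an exported handler that receives the HTTP request as an event object and returns a response object. Here is a minimal sketch in TypeScript, assuming API Gateway's standard proxy-integration contract and a current Node.js runtime; it is illustrative, not Bustle's code:

// Minimal AWS Lambda handler for use behind API Gateway (proxy integration).
// The event carries the HTTP request; the returned object becomes the HTTP response.
export const handler = async (event: { path: string; httpMethod: string }) => {
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `You requested ${event.httpMethod} ${event.path}` }),
  };
};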

Faulkner said that AWS is ahead of its competitors (Microsoft, Google and IBM were mentioned) but when pressed said that both Microsoft and Google offered strong alternatives. Microsoft’s Azure Functions are spoilt by the need to specify a maximum scale, rather than scaling automatically, but its routing solution is in some ways ahead of AWS, he said. Google’s Functions will be great when out of beta.

Adam Tornhill spoke on A Crystal Ball to prioritise Technical Debt, another session in the dark code track. This was my favourite of the day. Tornhill presented a relatively simple way to discover what code you should refactor now in order to avoid future issues. His method is based on looking for files with many lines of code (a way of measuring complexity) and many commits (suggesting high importance and activity), the “hotspots” in your projects. For more detail and some utilities see Tornhill’s blog.
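
You can get a rough approximation of this hotspot analysis with plain git, without Tornhill's own tooling: count how often each file has changed, then cross-reference with file length. A sketch, run from the repository root:

git log --format= --name-only | grep -v '^$' | sort | uniq -c | sort -rn | head -20
git ls-files | xargs wc -l | sort -rn | head -20

Files near the top of both lists are the candidates to refactor first.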

Why do we end up with bad or risky code in our software? Tornhill said that developers often mistake organisational problems for technical problems and try unsuccessfully to fix them with tools.

He also mentioned an example of high-risk code: the file gc.cpp, which performs garbage collection in .NET Core, the next generation of Microsoft's .NET Framework. This file is over 36,000 lines and should be refactored. There is a discussion on the subject here, and it exactly bears out Tornhill's point. A developer proposed refactoring the file back in March 2015. Microsoft's Karel Zikmund defended the status quo:

Why it is this way? … Partly historical reasons (it is this way since the start). Partly because devs working on it didn’t feel the urge to refactor it. Partly because splitting of gc.cpp is non-trivial and risky and because it does not bring too big value (ramp up in the code base can be gained also in the combination of reading BOTR and debugging the code). Why it is staying this way? … Cost/benefit/risk ratio is IMO not in favor of a change here.

Few additional thoughts:
Am I happy that there is only 1 large file? No, but it doesn’t hurt me much either.
Do I see the disadvantages of large file? Yes, but I don’t think they are huge. More like minor annoyances with easy workarounds.
And to turn it around: Do you see the risk of any changes here? Do you see the cost of extra careful code reviews to mitigate the risk?

Strictly technically, we truly believe this is a formatting change. If it was simple to split it up and if it would be low risk and if it would be very easy to review, it might be worth the ‘minor’ improvements mentioned above … but I don’t see that combo happening (not on a noticeable scale in gc.cpp).
On a personal note: I also trust CLR team that if all these three things were true, the refactoring would have happened long time ago.

Note that some of this code goes back beyond .NET Core to the .NET Framework, the “historical reasons” that Zikmund mentions. We can see that the factors preventing change are as much organisational as technical.

Finally I attended a session on Microsoft’s Cognitive Services. Note this was in the “Sponsored solution track”. Microsoft also has a stand here focused on its Cognitive Services.

There is not much Microsoft Platform content at QCon and it seems under-represented, though many of the sessions are applicable to developers on any platform. I am not sure of all the reasons for this; there used to be an Advanced .NET track at QCon. It does reflect some overall development trends as well as the history and evolution of QCon itself. That said, there is a session on SQL Server on Linux so the company is not completely invisible here.

As for the session, it was a reasonable overview of Microsoft's expanding Cognitive Services APIs, which cover things like image recognition, speech recognition and more. I would have liked more depth and would have preferred to hear from a practitioner, in other words, "we built an application on Cognitive Services and this is what we learned." I am not altogether clear why the company is pushing this so hard, except that it is a driver for developers to use Azure. I asked about how developers should deal with the problem of uncertainty*, in other words, that Cognitive Services does not deliver absolute results but rather draws conclusions with a confidence score – e.g. it might be pretty sure that an image contains a human face, fairly sure that it is male, and somewhat confident that the age of the person is mid forties. When the speaker demoed speech recognition it went pretty well except that "Start" was transcribed as "Stop." This stuff is difficult.

Looking forward now to Day Two: Containers, Machine Learning, and more.

*More concisely expressed as “Systems are moving from the deterministic to the probabilistic” by Stephen Whitworth, who is now speaking on Machine Learning.

Microsoft improves Windows Subsystem for Linux: launch Windows apps from Linux and vice versa

The Windows 10 anniversary update introduced a major new feature: a subsystem for Linux. Microsoft marketing execs call this Bash on Windows; Ubuntu calls it Ubuntu on Windows; but Windows Subsystem for Linux is the most accurate description. You run a Linux binary and the subsystem redirects system calls so that it behaves like Linux.

The first implementation (which is designated Beta) has an obvious limitation. Linux works great, and Windows works great, but they do not interoperate, other than via the file system or networking. This means you cannot do what I used to do in the days of Services for Unix: type ls at the command prompt to get a directory listing.

That is now changing. Version 14951 introduces interop so that you can launch Windows executables from the Bash shell and vice versa. This is particularly helpful since the subsystem does not support GUI applications. One of the obvious use cases is launching a GUI editor from Bash, such as Visual Studio Code or Notepad++.
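
To give a flavour of it: because environment variables are not shared (see the limitations below), Windows binaries are most reliably invoked from Bash by their full path under /mnt, while from a Windows command prompt you can pass a command line to bash. For example, on a default installation:

/mnt/c/Windows/System32/notepad.exe

bash -c "ls -la"

The first, run from the Bash shell, launches Notepad; the second, run from cmd, gets a Linux directory listing.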

The nitty-gritty on how it works is here.

Limitations? A few. Environment variables are not shared so an executable that is on the Windows PATH may not be on the Linux PATH. The executable also needs to be on a filesystem compatible with DrvFs, which means NTFS or ReFS, not FAT32 or exFAT.

This is good stuff though. If you work on Windows, but love Linux utilities like grep, now you can use them seamlessly from Windows. And if you are developing Linux applications with say PHP or Node.js, now you can develop in the Linux environment but use Windows editors.

Note that this is all still in preview and I am not aware of an announced date for the first non-beta release.

Microsoft at Ignite: Building on Office 365, getting more like Google, Adobe mysteries and FPGA magic

I’m just back from Microsoft’s Ignite event in Atlanta, Georgia, where around 23,000 attendees mostly in IT admin roles assembled to learn about the company’s platform.

There are always many different aspects to this type of event. The keynotes (there were two) are for news and marketing hype, while there is lots of solid technical content in the sessions, of which of course you can only attend a small fraction. There was also an impressive Expo at Ignite, well supported both by third parties and by Microsoft, though getting to it was a long walk and I fear some will never find it. If you go to one of these events, I recommend the Microsoft stands because there are normally some core team members hanging around each one and you can get excellent answers to questions as well as a chance to give them some feedback.

The high level story from Ignite is that the company is doing OK. The event was sold out and Corporate VP Brad Anderson assured me that many more tickets could have been sold, had the venue been bigger. The vibe was positive and it looks like Microsoft’s cloud transition is working, despite having to compete with Amazon on IaaS (Infrastructure as a service) and with Google on productivity and collaboration.

My theory here is that Microsoft’s cloud advantage is based on Office 365, of which the core product is hosted Exchange and the Office suite of applications licensed by subscription. The dominance of Exchange in business made the switch to Office 365 the obvious solution for many companies; as I noted in 2011, the reality is that many organisations are not ready to give up Word and Excel, Outlook and Active Directory. The move away from on-premises Exchange is also compelling, since running your own mail server is no fun, and at the small business end Microsoft has made it an expensive option following the demise of Small Business Server. Microsoft has also made Office 365 the best value option for businesses licensing desktop Office; in fact, I spoke to one attendee who is purchasing a large volume of Office 365 licenses purely for this reason, while still running Exchange on-premises. Office 365 lets users install Office on up to 5 PCs, Macs and mobile devices.

Office 365 is only the starting point of course. Once users are on Office 365 they are also on Azure Active Directory, which becomes a hugely useful single sign-on for cloud applications. Microsoft is now building a sophisticated security story around Azure AD. The company can also take advantage of the Office 365 customer base to sell related cloud services such as Dynamics CRM online. Integrating with Office 365 and/or Azure AD has also become a great opportunity for developers. If I had any kind of cloud-delivered business application, I would be working hard to get it into the Office Store and try to win a place on the newly refreshed Office App Launcher.

Office 365 users have had to put up with a certain amount of pain, mainly around the interaction between SharePoint online/OneDrive for Business and their local PC. There are signs that this is improving, and a key announcement made at Ignite by Jeff Teper is that SharePoint (which includes Team Sites) will be supported by the new generation sync client, which I hope means goodbye to the ever-problematic Groove client and a bit less confusion over competing OneDrive icons in the notification area.

A quick shout-out too for Office 365 Groups, despite the confusing name (how many different kinds of groups are there in Office 365?). Groups are ad-hoc collections of users which you set up for a project, department or role. Groups then have an automatic email distribution list, shared inbox, calendar, file library, OneNote notebook (a kind of Wiki) and a planning tool. Nothing you could not set up before, but packaged in a way that is easy to grasp. I was told that usage is soaring, which does not surprise me.

I do not mean to diminish the importance of Azure, the cloud platform. Despite a few embarrassing outages, Microsoft has evolved the features of the service rapidly as well as building the necessary global infrastructure to support it. At Ignite, there were several announcements including new, more powerful virtual machines, IPv6 support, general availability of Azure DNS, faster networking up to an amazing 25 Gbps powered by FPGAs, and the public preview of a Web Application Firewall; the details are here.

My overall take on Azure? Microsoft has the physical infrastructure to compete with AWS, though Amazon's service is amazing, reliable and, I suspect, can be cheaper bearing in mind Amazon's clever pricing options and lower price for application services like database management, message queuing, and so on. If you want to run Windows Server and SQL Server in the cloud, Azure will likely be better value. Value is not everything though, and Microsoft has done a great job on making Azure accessible; with a developer hat on I love how easy it is to fire up VMs or deploy web applications via Visual Studio. Microsoft of course is busy building hooks to Azure into its products so that if you have System Center on-premises, for example, you will be constantly pushed towards Azure services (though note that the company has also added support for other public clouds in places).

There are some distinctive features in Microsoft’s cloud platform, not least the forthcoming Azure Stack, private cloud as an appliance.

I put "getting more like Google" in my headline; why is that? A couple of reasons. One is that CEO Satya Nadella focused his keynote on artificial intelligence (AI), which he described as "the ability to reason over large amounts of data and convert that into intelligence," and then, "How we infuse every application, Cortana, Office 365, Dynamics 365 with intelligence." He went on to describe Cortana (that personal agent that gets a bit in the way in Windows 10) as "the third run time … it's what helps mediate the human computer interaction." Cortana, he added, "knows you deeply. It knows your context, your family, your work. It knows the world. It is unbounded. In other words, it's about you, it's not about any one device. It goes wherever you go."

I have heard this kind of speech before, but from Google’s Eric Schmidt rather than from Microsoft. While on the consumer side Google is better at making this work, there is an opportunity in a business context for Microsoft based on Office 365 and perhaps the forthcoming LinkedIn acquisition; but clearly both companies are going down the track of mining data in order to deliver more helpful and customized experiences.

It is also noticeable that Office 365 is now delivering increasing numbers of features that cannot be replicated on-premises, or that may come to on-premises one day but Office 365 users get them first. Further, Microsoft is putting significant effort into improving the in-browser experience, rather than pushing users towards Windows applications as you might have expected a few years back. It is cloud customers who are now getting the best from Microsoft.

While Microsoft is getting more like Google, I do not mean to say that it is like Google. The business model is different, with Microsoft’s based on paid licenses versus Google’s primarily advertising model. Microsoft straddles cloud and on-premises whereas Google has something close to a pure cloud play – there is Android, but that drives advertising and cloud services rather than being a profit centre in itself. And so on.

There were a couple more notable events during Nadella’s keynote.

Image: Distinguished Engineer Doug Burger and one of Microsoft's custom FPGA boards.

One was Distinguished Engineer Doug Burger’s demonstration of the power of FPGA boards which have been added to Azure servers, sitting between the servers and the network so they can operate in part independently from their hosts (see my short interview with Burger here).

During the keynote, he gave what he called a “visual demo” of the impact of these FPGA accelerators on Azure’s processing power. First we saw accelerated image recognition. Then a translation example, using Tolstoy’s War and Peace as a demo:

The FPGA-enabled server consumed less power but performed the translation 8 times faster. The best was to come though. What about translating the whole of English Wikipedia? “I’ll show you what would happen if we were to throw most of our existing global deployment at it,” said Burger.

“Less than a tenth of a second” was the answer. Looking at that screen showing 1 Exa-op felt like being present at the beginning of a computing revolution. As the Top500 supercomputing site observes, “the fact the Microsoft has essentially built the world’s first exascale computer is quite an achievement.” Exascale is a billion billion operations per second.

However, did we see Wikipedia translated, or just an animation? Bearing in mind first, that Burger spoke of “what would happen”, and second, that the screen says “Estimated time”, and third, that the design of Azure’s FPGA network (as I understand it) means that utilising it could impact other users of the service (since all network traffic to the hosts goes through these boards), it seems that we saw a projected result and not an actual result – which means we should be sceptical about whether this would actually work as advertised, though it remains amazing.

One more puzzle before I wrap up. Adobe CEO Shantanu Narayen appeared on stage with Nadella, in the morning keynote, to announce that Adobe will make Azure its “preferred cloud.” This appears to include moving Adobe’s core cloud services from Amazon Web Services, where they currently run, to Azure. Narayen:

“we’re thrilled and excited to be announcing that we are going to be delivering all of our clouds, the Adobe Document Cloud, the Marketing Cloud and the Creative Cloud, on Azure, and it’s going to be our preferred way of bringing all of this innovation to market.”

Narayen said that Adobe’s decision was based on Microsoft’s work in machine learning and intelligence. He also looked forward to integrating with Dynamics CRM for “one unified and integrated sales and marketing service.”

This seems to me interesting in all sorts of ways, not only as a coup for Microsoft’s cloud platform versus AWS, but also as a case study in migrating cloud services from one public cloud to another. But what exactly is Adobe doing? I received the following statement from an AWS spokesperson:

“We have a significant, long-term relationship and agreement with Adobe that hasn’t changed. Their customers will want to use AWS, and they’re committed to continuing to make that easy.”

It does seem strange to me that Adobe would want to move such a significant cloud deployment, which as far as I know works well. I am trying to find out more.

Azure Stack on show at Microsoft Ignite

At the Expo here at Microsoft’s Ignite you can see Azure Stack – though behind glass.

Azure Stack is Microsoft’s on-premises edition of Azure, a private cloud in a box. Technical Preview 2 has just been released, with two new services: Azure Queue Storage and Azure Key Vault. You can try it out on a single server just to get a feel for it; the company calls this a “one node proof of concept”.

Azure Stack will be delivered as an appliance, hence the exhibition here. There are boxes from Dell, HP Enterprise and Lenovo on display. General availability is planned for mid-2017 according to the folk on the stand.

There is plenty of power in one of these small racks, but what if there is a fire or some other disaster? Microsoft recommends purchasing at least two, and locating them some miles from one another, so you can set up resilience just as you can between Azure regions.

Incidentally, the Expo at Ignite seems rather quiet; it is not on the way to anything other than itself, and I have to allow 10-15 minutes to walk there from the press room. I imagine the third party exhibitors may be disappointed by the attendance, though I may just have picked a quiet time. There is a huge section with Microsoft stands and this is a great way to meet some of the people on the various teams and get answers to your questions.

Reflections on QCon London 2016 – part one

I attended QCon in London last week. This is a software development conference focused on large-scale projects and with a tradition oriented towards Agile methodology. It is always one of the best events I get to attend, partly because it is vendor-neutral (it is organised by InfoQ), and partly because of the way it is structured. The schedule is divided into tracks, such as “Back to Java” or “Architecting for failure”, each of which has a track leader, and the track leader gets to choose who speaks on their track. This means you get a more diverse range of speakers than is typical; you also tend to hear from practitioners or academics rather than product managers or evangelists.

The 2016 event was well up to standard from my perspective – though bear in mind that with 6 tracks on each day I only got to attend a small fraction of the sessions.

This post is just to mention a few highlights, starting with the opening keynote from Adrian Colyer, who specialises in finding interesting IT-related research papers and writing them up on his blog. He seems to enjoy being contrarian and noted, for example, that you might be doing too much software testing – drawing I guess on this post about the art of testing less without sacrificing quality. The takeaway for me is that it is always worth analysing what you do and trying to avoid the point where the cost exceeds the benefit.

Next up was Gavin Stevenson on “love failure” – I wrote this up on the Reg – there is a perhaps obvious point here that until you break something, you don’t know its limitations.

On Monday evening we got a light-hearted (virtual) look at Babbage’s Analytical Engine (1837) which was never built but was interesting as a mechanical computer, and Ada Lovelace’s attempts to write code for it, thanks to John Graham-Cumming and illustrator Sydney Padua (author of The Thrilling Adventures of Lovelace and Babbage).

Tuesday and the BBC’s Stephen Godwin spoke on Microservices powering BBC iPlayer. This was a compelling talk for several reasons. The BBC is hooked on AWS (Amazon Web Services) apparently and stores 21TB daily into S3 (Simple Storage Service). This includes safety copies. iPlayer was rebuilt in 2013, Godwin told us, and the team of 25 developers achieves 34 live deployments per week on average; clearly the DevOps stuff is working here. Godwin advocates genuinely “micro” services. “How big should a microservice be? For us, about 600 Java statements,” he said.

Martin Thompson spoke on the characteristics of a good software engineer, though oddly the statement that has stayed with me is that an ORM (Object-Relational Mapping) “is the wrong abstraction for a database”, something that chimes with me even though I get the value of ORMs like Microsoft’s Entity Framework for rapid development where database performance is non-critical.

Then came another highlight: Google's Micah Lemonik on Architecting Google Docs. This talk sadly was not recorded; a touch of paranoia from Google? It was fascinating, partly from a historical perspective: Lemonik was involved in a small company called 2Web Technologies, which developed an Excel-like engine in 2003-4, and joined Google (which acquired 2Web) in 2005 to work on Google Sheets. The big story here was how Google Sheets became collaborative, so that more than one person could work on a spreadsheet simultaneously. "Google didn't like it initially," said Lemonik. "They thought it was too weird." The team persisted though, thinking about the editing process as "messages being transferred between collaborators" rather than as file updates; and it worked.

You can actually use today’s version in your own projects, with Google’s Realtime API, provided that you are happy to have your stuff on Google Drive.

I particularly enjoyed Lemonik’s question to the audience. Two people are working on a sheet, and one types “6” into a cell. Then the same person overtypes this with “7”. Then the collaborator overtypes the same cell with “8”. Next, the first person presses Ctrl-z for undo. What should be the result?

The audience split neatly into “6”, “7”, and just a few “8” (the rationale for “8” is that undo should only undo your own changes and not touch those made by others).

Google, incidentally, settled on “6”, maintaining a separate undo stack for each user. But there is no right answer.
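
To make "a separate undo stack for each user" concrete, here is a deliberately naive sketch in TypeScript (mine, not Google's code; it ignores the operational-transformation machinery a real collaborative editor needs). Each edit records the value it replaced, and undo only pops from the stack belonging to the user who pressed Ctrl-z:

type Edit = { cell: string; before: string; after: string };

class Sheet {
  private values = new Map<string, string>();      // cell -> current value
  private undoStacks = new Map<string, Edit[]>();  // user -> that user's own edits

  set(user: string, cell: string, after: string) {
    const before = this.values.get(cell) ?? "";
    this.values.set(cell, after);
    const stack = this.undoStacks.get(user) ?? [];
    stack.push({ cell, before, after });
    this.undoStacks.set(user, stack);
  }

  undo(user: string) {
    const edit = this.undoStacks.get(user)?.pop();
    if (edit) this.values.set(edit.cell, edit.before);  // restore what this user overwrote
  }

  get(cell: string) {
    return this.values.get(cell) ?? "";
  }
}

// The scenario from the talk: A types 6, A overtypes with 7, B overtypes with 8, A presses undo.
const sheet = new Sheet();
sheet.set("A", "B1", "6");
sheet.set("A", "B1", "7");
sheet.set("B", "B1", "8");
sheet.undo("A");                 // pops A's last edit and restores its "before" value
console.log(sheet.get("B1"));    // "6", the answer Google settled on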

Lemonik also discussed the problem of consistency when there are large numbers of contributors. A hard problem. “There have to be bounds to the system in order for it to perform well,” he said. “The biggest takeaway for me in building the system is that you just can’t have it all. All of engineering is this trade-off.”

I have more to say about QCon so look out for part two shortly.

New Delphi and C++ Builder Roadmap promises Linux server support

Embarcadero has published a new roadmap explaining what to expect in forthcoming editions of its RAD Studio suite, including Delphi and C++ Builder.

The company has been acquired by IDERA though the Embarcadero brand is to continue under the new ownership.

The roadmap covers two “development tracks”, though it is not completely clear what that means. One is described as the “Spring development track” which suggests a release in April, 12 months after RAD Studio XE8. However, the post adds that “The team is working the following features that will be included in 2016 releases,” raising the possibility that some features in this track may come later, perhaps in the scheduled summer update.

The Spring track, to be called “Berlin”, seems to be mainly a tidying-up exercise in any case, with features including Bluetooth LE support for Windows 10, DirectX 12 support, native support for Utf8String on all platforms (you mean it does not have this already?) and enhancements to the FireMonkey cross-platform framework.

“Spring” also offers C++ CLANG 3.3 on all platforms.

The second development track “will deliver a Fall release”, to be known as “Tokyo”, following the pattern of recent years where RAD Studio has two major updates every year. The Fall track is more interesting, and includes support for Delphi and C++ Builder on Linux Server, as well as “Linux platform support for console apps with IoT support.” I guess non-GUI Linux is the common thread here.

The IDE will remain on Windows, with cross-compilation for Linux. Initially supported distributions are Ubuntu Server and RedHat Enterprise, though further distributions will be added “as demand dictates”.

It is good to see Linux support back in Delphi. I remember Borland Kylix (2001-2003) well, but this was back in the days when desktop Linux looked like more of a thing.

The feature-list for Tokyo also includes Windows Centennial support. This is potentially big news. Centennial is a Microsoft project to deliver Windows desktop applications through the Windows Store, using application virtualisation based on the existing App-V technology to remove dependency issues. You can expect to hear more about Centennial at Microsoft’s Build conference at the end of March; it was covered at last year’s Build but we have not heard much more about it since.

Embarcadero is also promising a new installer for RAD Studio, based on its GetIt technology, which will reduce installation time and give more flexibility in selecting features. This would be welcome; I never look forward to installing RAD Studio as it tends to be a time-consuming process. It would also be good if it messed less with system environment variables, though I do not know if this is on the cards. The new installer will come in two phases, phase 1 in Berlin and phase 2 in Tokyo.

My own view is that two major releases a year is one too many, so I would prefer if Embarcadero scrapped Berlin and went straight to Tokyo.

Installing Windows 10 on Surface 3 with Windows To Go

I am working on a review of Surface 3, Microsoft’s recently released Atom-based tablet, and wanted to try Windows 10 on the device. How to do this though without endangering the correct functioning of my loan unit?

The ideal answer seemed to be Windows To Go (WTG), which lets you run Windows from a USB drive without touching what is already installed – well, apart from a setting in Control Panel that enables boot from Windows To Go.

Luckily I have an approved Windows To Go USB drive, a 32GB Kingston DataTraveler Workspace. I downloaded the Windows 10 ISO (64-bit, build 10074) and used the Control Panel applet on my Windows 8 desktop (which runs the Enterprise edition) to create a WTG installation.

(There are unofficial ways to get around both the requirement for Enterprise edition, and the need for an approved USB device, but I did not have to go there).

Next, I plugged the drive into the Surface 3 and restarted. Windows 10 came up immediately. An interesting feature was that I was prompted to sign into Office 365, rather than with a Microsoft account. It all seemed to work, though Device Manager showed many missing drivers.

The wifi driver must have been one of them, since I had no network.

I had anticipated this problem by downloading the Surface 3 drivers from here. These were inaccessible though, since a WTG installation by default has no access to the hard drive on the host PC. I could not plug in a second USB device with the drivers on it either, since there is only one USB port on the Surface 3.

No matter, you can mount the local drives using the Windows Disk Management utility. I did that, and ran the Surface 3 Platform Installer which I had downloaded earlier. It seemed to install lots of drivers, and I was then prompted to restart.
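
As an aside, the same thing can be done without the GUI: in a Windows To Go session the host's internal disks are offline by default, and the Storage cmdlets will bring them online from an elevated PowerShell prompt, along these lines:

Get-Disk | Where-Object IsOffline | Set-Disk -IsOffline $false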

Bad news. When trying to restart, boot failed with an “inaccessible boot device” error.

Fool that I am, I tried this operation again with a small variation. I rebuilt the WTG drive, and instead of mounting the drives on the host, I used it first on another PC, where the wifi worked straight away. I copied the Surface 3 files to the WTG drive C, then booted it on the Surface 3. Ran the Surface 3 Platform Installer, restarted, same problem “Inaccessible boot device”.

The third time, I did not run the Surface 3 Platform Installer. Instead, I installed the drivers one by one by right-clicking on the Unknown Devices in Device Manager and navigating to the Surface 3 driver files I had downloaded using another PC. That looks better.

I restarted, and everything still worked. I have wifi, Bluetooth, audio, cameras and everything. So something the Platform Installer tries to do breaks WTG on my device.

The next question is whether the system will update OK when set to Fast for the Windows 10 bleeding edge. So far though, so good.

Note: there is an issue with power management. If the Surface 3 sleeps, then it seems to wake up back in Windows 8 if you leave it long enough. Not too much harm done though; restart and you are back in Windows 10.

Note 2: new builds will not install on WTG; they complain about an unsupported UEFI layout.

Running ASP.NET 5.0 on Nano Server preview

I have been trying out Microsoft’s Nano Server Preview and wrote up initial experiences for the Register. One of the things I mentioned is that I could not get an ASP.NET app successfully deployed. After a bit more effort, and help from a member of the team, I am glad to say that I have been successful.

What was the problem? First, a bit of background. Nano Server does not run the .NET Framework, presumably because it has too many dependencies on pieces of Windows which Microsoft wanted to omit from this cut-down deployment. Nano Server does support .NET Core, also known as Core CLR, which is the open source fork of the .NET Framework. This enables it to run PowerShell, although with a limited range of cmdlets, and my main two ways of interacting with Nano Server are with PowerShell remoting, and Windows file sharing for copying files across.
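
For anyone trying the same thing, getting a remote session going is a matter of trusting the Nano Server's address on your client and then connecting; the IP address below is a placeholder for your own server:

Set-Item WSMan:\localhost\Client\TrustedHosts "192.168.1.50" -Force
Enter-PSSession -ComputerName 192.168.1.50 -Credential Administrator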

On your development machine, you need several pieces in order to code for ASP.NET 5.0. Just installing Visual Studio 2015 RC will do, except that there is currently an incompatibility between the version of the ASP.NET 5.0 .NET Core runtime shipped with Visual Studio, and what works on Nano Server. This meant that my first effort, which was to build an empty ASP.NET 5.0 template app and publish it to the file system, failed on Nano Server with a NativeCommandError.

This meant I had to dig a bit more deeply into ASP.NET 5.0 running on .NET Core. Note that when you deploy one of these apps, you can include all the dependencies in the app directory. In other words, apps are self-hosting. The binary that enables this bit of magic is called DNX (.NET Execution Environment); it was formerly known as the K runtime.

Developers need to install the DNX SDK on their machines (Windows, Mac or Linux). There is currently a getting started guide here, though note that many of the topics in this promising documentation are as yet unwritten.

However, after installation you will be able to use several handy commands:

dnvm This is the .NET Version manager. You can have several versions of the DNX runtime installed and this utility lets you list them, set aliases to save typing full paths, and manage defaults.

dnu This is the .NET Development Utility (formerly kpm) that builds and publishes .NET Core projects. The two commands I found myself using regularly are dnu restore which downloads Nuget (.NET repository) packages and dnu publish which packages an app for deployment. Once published, you will find .cmd files in the output which you use to start the app.

dnx This is the binary which you call to run an app. On the development machine, you can use dnx . run to run the console app in the current directory and dnx . web to run the web app in the current directory.

Now, back to my deployment issues. The Visual Studio templates are all hooked to DNX beta 4, and I was informed that I needed DNX beta 5 for Nano Server. I played around with trying to get Visual Studio to target the updated DNX but ran into problems so decided to ignore Visual Studio and do everything from the command line. This should mean that it would all work on Mac and Linux as well.

I had a bit of trouble persuading DNX to update itself to the latest unstable builds; the main issue I recall is targeting the correct repository. Your NuGet sources must include (currently) https://www.myget.org/F/aspnetvnext/api/v2.

Since I was not using Visual Studio, I based my samples on these, Hello World Console, MVC and Web apps that you can use for testing that everything works. My technique was to test on the development machine using dnx . web, then to use dnu publish and copy the output to Nano Server where I could run ./web.cmd in a remote PowerShell session.
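
Putting those commands together, the inner loop looked roughly like this:

dnu restore
dnx . web
dnu publish

then copy the contents of bin\output to a share on the Nano Server and run ./web.cmd in the remote PowerShell session.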

Note that I found it necessary to specify the CoreClr 64-bit runtime in order to get dnu to publish the correct files. I tried to make this the default but for some reason* it reverted itself to x86:

dnu publish --runtime "c:\users\[USERNAME]\.dnx\runtime\dnx-coreclr-win-x64.1.0.0-beta5-11701"

Of course the exact runtime version to use will change soon.

If you run this command and look in the /bin/output folder you will find web.cmd, and running this should start the app. The port on which the app listens is set in project.json in the top level directory of the project source. I set this to 5001, opened that port in the Windows Firewall on the Nano Server, and got a started message on the command line. However I still could not browse to the app running on Nano Server; I got a 400 error. Even on the development machine it did not work; the browser just timed out.

It turned out that there were several issues here. On the development machine, which is running Windows 10 build 10074, I discovered to my annoyance that the web app worked fine with Internet Explorer, but not in Project Spartan, sorry Edge. I do not know why.

Support also gave me some tips to get this working on Nano Server. In order for the app to work across the network, you have to edit project.json so that localhost is replaced either with the IP number of the server, or with a *. I was also advised to add dnx.exe to the allowed apps in the firewall, but I do not think this is necessary if the port is open (it is a nuisance, since the location of dnx.exe changes for every app).
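
For reference, the relevant fragment of project.json ends up looking something like this; I am basing it on the beta-era samples, so treat the exact hosting command as subject to change, but the point is the http://*:5001 binding:

"commands": {
  "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener --server.urls http://*:5001"
}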

Finally I was successful.

Final observations

It seems to me that ASP.NET vNext running on .NET Core has the characteristic of many open source projects: a few dedicated people who have little time for documentation and are so close to the project that their public communications assume a fair amount of pre-knowledge. The site I referenced above does have helpful documentation though, for the few topics that are complete. Some other posts I found helpful are this series by Steve Perkins, and the troubleshooting suggestions here, especially David Fowler's post.

I like the .NET Core initiative overall, since I like C# and ASP.NET MVC and now it is becoming a true cross-platform framework. That said, the code does seem to be in rapid flux and I doubt it will really be ready when Visual Studio 2015 ships. The danger I suppose is that developers will try it in the first release, find lots of problems, and never go back.

I also like the idea of running apps in Nano Server, a low-maintenance environment where you can get the isolation of a dedicated server for your app at low cost in terms of resources.

No doubt though, the lack of pieces that you expect to find on Windows Server will be an issue and I am not sure that the mainstream Microsoft developer ecosystem will take to it. Aidan Finn is not convinced, for example:

Am I really expected to deploy a headless OS onto hardware where the HCL certification has the value of a bucket with a hole in it? If I was to deploy Nano, even in cloud-scale installations, then I would need a super-HCL that stress tests all of the hardware enhancements. And I would want ALL of those hardware offloads turned OFF by default so that I can verify functionality for myself, because clearly, neither Microsoft’s HCL testers nor the OEMs are capable of even the most basic test right now.

Finn’s point is that if your headless server is having networking issues it is hard to troubleshoot, since of course remote tools will not work reliably. That said, I have personally run Hyper-V Server (which is essentially Server Core with just the Hyper-V role) with great success for several years; I started keeping notes on how to troubleshoot from the command line and found solutions to common problems. If networking fails with Nano Server then yes, you have a problem, but there is always something you can do, even if it means mounting the Nano Server VHD or VHDX on another VM. Windows Server admins have become accustomed to a local GUI though and adjusting even to Server Core has not been easy.

*the reason was that I did not use the -p argument with dnvm use, which would have made it persistent

Compile Android Java, iOS Objective C apps for Windows 10 with Visual Studio: a game changer?

Microsoft has announced the ability to compile Windows 10 apps written in Java or C++ for Android, or in Objective C for iOS, at its Build developer conference here in San Francisco.

Image: Objective C code in Visual Studio

The Android compatibility had been widely rumoured, but the Objective C support not so much.

This is big news, but oddly the Build attendees were more excited by the HoloLens section of the keynote (3D augmented reality) than by the iOS/Android compatibility. That is partly because this is the wrong crowd; these are the Windows faithful who would rather code in C#.

Another factor is that those who want Microsoft’s platform to succeed will have mixed feelings. Is the company now removing any incentive to code dedicated Windows apps that will make the most of the platform?

Details of the new capabilities are scant, though we will no doubt learn more as the event progresses. A few observations though.

Microsoft is trying to fix the "app gap": the fact that both the Windows Store and the Windows Phone Store (which are merging) have a poor selection of apps compared to iOS or Android. Worse, many developers simply ignore the platforms as too small to bother with. Lack of apps makes the platforms less attractive, so the situation does not improve.

The goal then is to make it easier for developers to port their code, and also perhaps to raise the quality of Windows mobile apps by enabling code sharing with the more important platforms.

There are apparently ways to add Windows-specific features if you want your ported app to work properly with the platform.

Will it work? The Amazon Fire and the Blackberry 10 precedents are not encouraging. Both platforms make it easy to port Android apps (Amazon Fire is actually a version of Android), yet the apps available in the respective app stores are still far short of what you can get for Google Android.

The reasons are various, but I would guess part of the problem is that ease of porting code does not make an unimportant platform important. Another factor is that supporting an additional platform never comes for free; there is admin and support to consider.

The strategy could help though, if Microsoft through other means makes the platform an attractive target. The primary way to do this of course is to have lots of users. VP Terry Myerson told us that Microsoft is aiming for 1 billion devices running Windows 10 within 2-3 years. If it gets there, the platform will form a strong app market and that in turn will attract developers, some of whom will be glad to be able to port their existing code.

The announcement though is not transformative on its own. Microsoft still has to drive lots of Windows 10 upgrades and sell more phones.

Visual Studio Code: an official Microsoft IDE for Mac, Windows, Linux

Microsoft has announced Visual Studio Code, a cross-platform, code-oriented IDE for Windows, Mac and Linux, at its Build developer conference here in San Francisco.

Visual Studio Code is partly based on the open source project OmniSharp. It supports IntelliSense code completion, Git source code management, and debugging with breakpoints and call stack.

I have been in San Francisco for the last few days and the dominance of the Mac is obvious. Sitting in a cafe in the Mission district I could see 10 Macs and no PCs other than my own Surface Pro. Some folk were coding too.

It follows that if Microsoft wants to make a go of cross-platform C#, and development of ASP.NET MVC web applications beyond the Windows developer community, then tooling for the Mac is important.

Visual Studio Code is free and is available for download here.

The IDE will lack the rich features and templates of the full Visual Studio, but if it is fast and clean, some Visual Studio developers may be keen to give it a try as well.