Category Archives: .net

Notes from the field: unexpected villain breaks Dynamics CRM and IIS on Windows Server 2012

Yesterday I was asked to convert a Dynamics CRM 2013 installation from an internal to an Internet Facing Deployment (IFD). It is a bit fiddly, but I have done this before so I was confident.

The installation in question is only for test; the company has its production CRM 2011 on another server. Because it is for test though, it is a small deployment on a single server.

I got to work running the Claims Based Authentication wizard in the CRM Deployment Manager but also noticed something odd about the server. WSUS (Windows Server Update Services) was installed though it was not in use. This seemed a bad idea, so I asked if I could remove it. Sure, I was told; it had only been a quick experiment. I removed WSUS and got on with the next steps of configuring IFD.

Unfortunately ADFS 2.0 (in this case) would not play ball. It could not communicate with CRM. I quickly saw why: attempting to browse to the special FederationMetadata.xml URL raised a 500 error.

I tried a few things. There are plenty of odd things that can go wrong: permissions on the private keys of the certificate used for the CRM web site, Service Principal Names, incorrect DNS entries and so on. All seemed fine. Still the error.

I decided to backtrack and temporarily disable Claims Based Authentication. Unfortunately it appeared that I had broken CRM completely. All access to the site raised the same 500.19 IIS error.

image

The error page IIS delivers says that the most likely causes are that the worker process cannot read the ApplicationHost.config or web.config file, that one of those files contains malformed XML, or that the NTFS permissions are incorrect.

I did a repair install on CRM. I reapplied the rollups. No difference.

I ran Process Monitor to try to figure out which configuration file was causing the problem. It was not a great help, but it did point me in the right direction: it seemed that ASP.NET was not working properly at all. I now focused on this rather than on CRM itself, noting also that there were not many CRM-related errors in the event log; I would expect more if CRM were really broken.

I created a hello world ASP.NET application and installed it in a separate site on a different port. Same error.

Searching for this particular error was not much help. In the context of CRM, the few users who had encountered something similar had reinstalled everything from scratch. However, at least I now knew that IIS rather than CRM was broken. This helpful MSDN article actually includes a hint to the solution:

For above specific error (mentioned in this example), DynamicCompressionModule module is causing the trouble. This is because of the XPress compression scheme module (suscomp.dll) which gets installed with WSUS. Since Compression schemes are defined globally and try to load in every application Pool, it will result in this error when 64bit version of suscomp.dll attempts to load in an application pool which is running in 32bit mode.

which is also referenced here. These refer to WSUS breaking 32-bit applications, but in my case after removing WSUS neither 64-bit nor 32-bit apps were running.

Let me put it more clearly. If you remove WSUS using the role wizard in Server Manager, a number of bits get left behind, including a setting in ApplicationHost.config (in %windir%\System32\inetsrv\config) that breaks IIS.
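
Based on those articles, the cleanup is to remove the leftover xpress compression scheme so that IIS no longer tries to load suscomp.dll. Roughly the following should do it from an elevated PowerShell prompt; treat the exact appcmd syntax as an assumption and back up applicationHost.config first:

    # Remove the xpress compression scheme (suscomp.dll) that WSUS leaves behind in ApplicationHost.config.
    # Back up %windir%\System32\inetsrv\config\applicationHost.config before making the change.
    & "$env:windir\System32\inetsrv\appcmd.exe" set config -section:system.webServer/httpCompression `
        "/-[name='xpress']" /commit:apphost

    # Restart IIS so every application pool reloads its configuration.
    iisreset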

So it was my attempt to clean up the server that had made it worse.

That said, this is also a Windows Server failure. Adding and then removing a role should, as far as possible, leave the server unchanged.

Once identified, the problem is easy to fix (this is often true). Still, several hours were wasted, and it is more evidence for Martin Fowler’s assertion that you should automate server configuration and spin up a new server from scratch when you want to make a change, to avoid configuration drift. There is a more detailed post on the same theme – Phoenix servers that rise from the ashes, not snowflake servers that are unique and ugly – here.

In a small business context this is perhaps harder to achieve – though the cost of entry gets lower all the time, through either cloud computing or internal virtualization platforms.

Running ASP.NET 5.0 on Nano Server preview

I have been trying out Microsoft’s Nano Server Preview and wrote up initial experiences for the Register. One of the things I mentioned is that I could not get an ASP.NET app successfully deployed. After a bit more effort, and help from a member of the team, I am glad to say that I have been successful.

image

What was the problem? First, a bit of background. Nano Server does not run the .NET Framework, presumably because it has too many dependencies on pieces of Windows which Microsoft wanted to omit from this cut-down deployment. Nano Server does support .NET Core, also known as Core CLR, which is the open source fork of the .NET Framework. This enables it to run PowerShell, although with a limited range of cmdlets. My two main ways of interacting with Nano Server are PowerShell remoting, and Windows file sharing for copying files across.
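
For anyone who has not managed a GUI-less server before, here is a minimal sketch of both methods; the IP address, account and paths are placeholders for your own Nano Server:

    # Trust the Nano Server for WS-Man, then open a remote PowerShell session to it.
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value "192.168.1.50" -Force
    $cred = Get-Credential "192.168.1.50\Administrator"
    Enter-PSSession -ComputerName 192.168.1.50 -Credential $cred

    # From the development machine, copy published files across using the administrative share.
    Copy-Item -Path .\bin\output\* -Destination \\192.168.1.50\c$\web -Recurse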

On your development machine, you need several pieces in order to code for ASP.NET 5.0. Just installing Visual Studio 2015 RC will do, except that there is currently an incompatibility between the version of the ASP.NET 5.0 .NET Core runtime shipped with Visual Studio, and what works on Nano Server. This meant that my first effort, which was to build an empty ASP.NET 5.0 template app and publish it to the file system, failed on Nano Server with a NativeCommandError.

This meant I had to dig a bit more deeply into ASP.NET 5.0 running on .NET Core. Note that when you deploy one of these apps, you can include all the dependencies in the app directory. In other words, apps are self-hosting. The binary that enables this bit of magic is called DNX (.NET Execution Environment); it was formerly known as the K runtime.

Developers need to install the DNX SDK on their machines (Windows, Mac or Linux). There is currently a getting started guide here, though note that many of the topics in this promising documentation are as yet unwritten.

image

However, after installation you will be able to use several handy commands:

dnvm This is the .NET Version Manager. You can have several versions of the DNX runtime installed, and this utility lets you list them, set aliases to save typing full paths, and manage defaults.

image

dnu This is the .NET Development Utility (formerly kpm) that builds and publishes .NET Core projects. The two commands I found myself using regularly are dnu restore, which downloads NuGet (.NET repository) packages, and dnu publish, which packages an app for deployment. Once published, you will find .cmd files in the output which you use to start the app.

dnx This is the binary which you call to run an app. On the development machine, you can use dnx . run to run the console app in the current directory and dnx . web to run the web app in the current directory.
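
Putting the three together, a typical inner loop looks something like this (all of the commands are described above; the dot simply means the project in the current directory):

    dnvm list        # show installed DNX runtimes and which one is active
    dnu restore      # download the NuGet packages the project depends on
    dnx . web        # run the web app locally for a quick test
    dnu publish      # package the app and its dependencies for deployment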

Now, back to my deployment issues. The Visual Studio templates are all hooked to DNX beta 4, and I was informed that I needed DNX beta 5 for Nano Server. I played around with trying to get Visual Studio to target the updated DNX but ran into problems so decided to ignore Visual Studio and do everything from the command line. This should mean that it would all work on Mac and Linux as well.

I had a bit of trouble persuading DNX to update itself to the latest unstable builds; the main issue I recall is targeting the correct repository. Your NuGet sources must include (currently) https://www.myget.org/F/aspnetvnext/api/v2.

Since I was not using Visual Studio, I based my samples on these: Hello World console, MVC and web apps that you can use to test that everything works. My technique was to test on the development machine using dnx . web, then to use dnu publish and copy the output to Nano Server, where I could run ./web.cmd in a remote PowerShell session.

Note that I found it necessary to specify the CoreClr 64-bit runtime in order to get dnu to publish the correct files. I tried to make this the default but for some reason* it reverted itself to x86:

dnu publish --runtime "c:\users\[USERNAME]\.dnx\runtime\dnx-coreclr-win-x64.1.0.0-beta5-11701"

Of course the exact runtime version to use will change soon.

If you run this command and look in the /bin/output folder you will find web.cmd, and running this should start the app. The port on which the app listens is set in project.json in the top level directory of the project source. I set this to 5001, opened that port in the Windows Firewall on the Nano Server, and got a started message on the command line. However I still could not browse to the app running on Nano Server; I got a 400 error. Even on the development machine it did not work; the browser just timed out.

It turned out that there were several issues here. On the development machine, which is running Windows 10 build 10074, I discovered to my annoyance that the web app worked fine with Internet Explorer, but not in Project Spartan, sorry Edge. I do not know why.

Support also gave me some tips to get this working on Nano Server. In order for the app to work across the network, you have to edit project.json so that localhost is replaced either with the IP address of the server, or with a *. I was also advised to add dnx.exe to the allowed apps in the firewall, but I do not think this is necessary if the port is open (it would be a nuisance anyway, since the location of dnx.exe changes for every app).
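
Pulling those tips together, something like the following got my published app reachable from another machine. The port and rule name are examples, and the firewall cmdlet is an assumption about what is available in a given Nano Server build:

    # 1. In project.json, change the "web" command so it listens on * (or the server's IP address)
    #    rather than localhost, then re-run dnu publish and copy the output across.

    # 2. On the Nano Server (in a remote PowerShell session), open the port the app listens on.
    New-NetFirewallRule -DisplayName "ASP.NET 5 test app" -Direction Inbound `
        -Protocol TCP -LocalPort 5001 -Action Allow

    # 3. Start the app from the published output.
    .\web.cmd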

Finally I was successful.

Final observations

It seems to me that ASP.NET vNext running on .NET Core has the characteristic of many open source projects: a few dedicated people who have little time for documentation and are so close to the project that their public communications assume a fair amount of prior knowledge. The site I referenced above does have helpful documentation though, for the few topics that are complete. Some other posts I found helpful are this series by Steve Perkins, and the troubleshooting suggestions here, especially David Fowler’s post.

I like the .NET Core initiative overall, since I like C# and ASP.NET MVC and it is now becoming a true cross-platform framework. That said, the code does seem to be in rapid flux and I doubt it will really be ready when Visual Studio 2015 ships. The danger, I suppose, is that developers will try it in the first release, find lots of problems, and never go back.

I also like the idea of running apps in Nano Server, a low-maintenance environment where you can get the isolation of a dedicated server for your app at low cost in terms of resources.

No doubt though, the lack of pieces that you expect to find on Windows Server will be an issue and I am not sure that the mainstream Microsoft developer ecosystem will take to it. Aidan Finn is not convinced, for example:

Am I really expected to deploy a headless OS onto hardware where the HCL certification has the value of a bucket with a hole in it? If I was to deploy Nano, even in cloud-scale installations, then I would need a super-HCL that stress tests all of the hardware enhancements. And I would want ALL of those hardware offloads turned OFF by default so that I can verify functionality for myself, because clearly, neither Microsoft’s HCL testers nor the OEMs are capable of even the most basic test right now.

Finn’s point is that if your headless server is having networking issues it is hard to troubleshoot, since of course remote tools will not work reliably. That said, I have personally run Hyper-V Server (which is essentially Server Core with just the Hyper-V role) with great success for several years; I started keeping notes on how to troubleshoot from the command line and found solutions to common problems. If networking fails with Nano Server then yes, you have a problem, but there is always something you can do, even if it means mounting the Nano Server VHD or VHDX on another VM. Windows Server admins have become accustomed to a local GUI though and adjusting even to Server Core has not been easy.

*the reason was that I did not use the -p argument with dnvm use, which would have made it persistent

HoloLens: a developer hands-on

I attended the “Holographic Academy” during Microsoft’s Build conference in San Francisco. It was aimed at developers, and we got hands-on experience of coding a simple HoloLens app and viewing the results. We were forbidden from taking pictures, so you will have to make do with my words; this also means I do not have to show myself wearing a bulky headset and staring at things you cannot see.

image

First, a word about HoloLens itself. The gadget is a headset that augments the real world with a 3D “projected” image. It is not really a hologram (otherwise everyone would see it), but a virtual hologram created by combining what you see with digital images.

The effect is uncanny, since the image you see appears to stay in one place. You can walk around it, seeing it from different angles, close up or far away, just as you could with a real image.

That said, there were a couple of issues with the experience. One was that if you went too close to a projected image, it disappeared. From memory, the minimum distance was about 18 inches. Second, the viewport where you see the augmented reality was fairly small, and you could easily see around it. This was detrimental to the illusion, and sometimes made it a struggle to see as much of your hologram as you might want.

I asked about both issues and got the same response, essentially “no comment”. This is prototype hardware, so anything could change. However, according to another journalist who attended a hands-on demo in January, the viewport has gotten smaller, suggesting that Microsoft is compromising in its effort to make the technology into a commercially viable product.

Another odd thing about the demo was that after every step, we were encouraged to whoop and cheer. There was a Microsoft “mentor” for every pair of journalists, and it seemed to me that the mentors were doing most of the whooping and cheering. It is obvious that this is a big investment for the company and I am guessing that this kind of forced enthusiasm is an effort to ensure a positive impression.

Lest you think I am too sceptical, let me add that the technology is genuinely amazing, with obvious potential both for gaming and business use.

The developer story

The development process involves Unity, Visual Studio, and of course the HoloLens device itself. The workflow is like this. You create an interactive 3D scene in Unity and build it, whereupon it becomes a Visual Studio project. You open the project in Visual Studio, and deploy it to HoloLens (connected over USB), just as you would to a smartphone. Once deployed, you disconnect the HoloLens and wear it in order to experience the scene you have created. Unity supports scripting in C#, running on Mono, which makes the development platform easy and familiar for Windows developers.

Our first “Holo World” project displayed a hologram at a fixed position determined by where you are when the app first runs. Next, we added the ability to move the hologram, selecting it with a wagging finger gesture, shifting our gaze to some other spot, and placing it with another wagging finger gesture. Note that for this to work, HoloLens needs to map the real world, and we tried turning on wire framing so you could see the triangles which show where HoloLens is detecting objects.

We also added a selection cursor, an image that looks like a red bagel (you can design your own cursor and import it into Unity). Other embellishments were the ability to select a sphere and make it fall to the floor and roll around, voice control to drop a sphere and then reset it back to the starting point, and “spatial audio” that appears to emanate from the hologram.

All of this was accomplished with a few lines of C# imported as scripts into Unity. The development was all guided so we did not have to think for ourselves, though I did add a custom voice command so I could say “abracadabra” instead of “reset scene”; this worked perfectly first time.

For the last experiment, we added a virtual underworld. When the sphere dropped, it exploded making a virtual pit in the floor, through which you could see a virtual world with red birds flapping around. It was also possible to enter this world, by positioning the hologram above your head and dropping a sphere from there.

HoloLens has three core inputs: gaze (where you are looking), gesture (like the finger wag) and voice. Of these, gaze and voice worked really well in our hands-on, but gesture was more difficult and sometimes took several tries to get right.

At the end of the session, I had no doubt about the value of the technology. The development process looks easily accessible to developers who have the right 3D design skills, and Unity seems ideally suited for the project.

The main doubts are about how close HoloLens is to being a viable commercial product, at least in the mass market. The headset is bulky, the viewport too small, and there were some other small issues, such as lag between where HoloLens thought moving physical objects were and where they actually were, as with a person walking around.

Watch this space though; it is going to be most interesting.

Compile Android Java, iOS Objective C apps for Windows 10 with Visual Studio: a game changer?

Microsoft has announced, at its Build developer conference here in San Francisco, the ability to compile apps written in Java or C++ for Android, or in Objective C for iOS, as Windows 10 apps.

image
Objective C code in Visual Studio

The Android compatibility had been widely rumoured, but the Objective C support not so much.

This is big news, but oddly the Build attendees were more excited by the HoloLens section of the keynote (3D virtual reality) than by the iOS/Android compatibility. That is partly because this is the wrong crowd; these are the Windows faithful who would rather code in C#.

Another factor is that those who want Microsoft’s platform to succeed will have mixed feelings. Is the company now removing any incentive to code dedicated Windows apps that will make the most of the platform?

Details of the new capabilities are scant, though we will no doubt learn more as the event progresses. A few observations though.

Microsoft is trying to fix the “app gap”: the fact that both the Windows Store and the Windows Phone Store (which are merging) have a poor selection of apps compared to iOS or Android. Worse, many developers simply ignore the platforms as too small to bother with. The lack of apps makes the platforms less attractive, so the situation does not improve.

The goal then is to make it easier for developers to port their code, and also perhaps to raise the quality of Windows mobile apps by enabling code sharing with the more important platforms.

There are apparently ways to add Windows-specific features if you want your ported app to work properly with the platform.

Will it work? The Amazon Fire and the Blackberry 10 precedents are not encouraging. Both platforms make it easy to port Android apps (Amazon Fire is actually a version of Android), yet the apps available in the respective app stores are still far short of what you can get for Google Android.

The reasons are various, but I would guess part of the problem is that ease of porting code does not make an unimportant platform important. Another factor is that supporting an additional platform never comes for free; there is admin and support to consider.

The strategy could help though, if Microsoft through other means makes the platform an attractive target. The primary way to do this of course is to have lots of users. VP Terry Myerson told us that Microsoft is aiming for 1 billion devices running Windows 10 within 2-3 years. If it gets there, the platform will form a strong app market and that in turn will attract developers, some of whom will be glad to be able to port their existing code.

The announcement though is not transformative on its own. Microsoft still has to drive lots of Windows 10 upgrades and sell more phones.

Why Windows Server is going Nano: think automation, Cloud OS

Yesterday Microsoft announced Windows Nano Server, which is essentially an installation option that is even more stripped-down than Server Core. Server Core, introduced with Windows Server 2008, removed the GUI in order to make the OS lighter weight and more secure. It is particularly suitable for installations that do nothing more than run Hyper-V to host VMs. You want your Hyper-V host to be rock-solid, and removing unnecessary clutter makes sense.

There was more to the strategy than that though, and it was at last week’s ChefConf in Santa Clara (attended by both Windows Server architect Jeffrey Snover and Azure CTO Mark Russinovich) that the pieces fell into place for me. Here are three key areas which Snover has worked on over the last 16 years or so (he joined Microsoft in 1999):

  • PowerShell, first announced as “Monad” in August 2002 and presented at the PDC conference in September 2003. Originally presented as a scripting platform, it is now described as an “automation engine”, though it is still pretty good for scripting.
  • Windows Server componentisation, that is, the ability to configure Windows Server by adding and removing components. Server Core was a sign of progress here, especially in the Server 2012 version where you can move seamlessly between Core and full Windows Server by adding or removing the various pieces. It is still not perfect, mainly because of dependencies that make you drag in more than you might really want when enabling a specific feature.
  • PowerShell Desired State Configuration, introduced in Server 2012 R2, which puts these together by letting you define the state of a server in a declarative configuration file and apply it to an OS instance (a minimal example is sketched below).
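
As a flavour of what that looks like, here is a minimal Desired State Configuration sketch (the node name and feature are illustrative): it declares that a server should have the IIS role, compiles the declaration to a MOF file, and then applies it.

    Configuration WebServerConfig {
        Node "Server01" {
            # Declare the desired state: the Web-Server (IIS) feature must be present.
            WindowsFeature IIS {
                Ensure = "Present"
                Name   = "Web-Server"
            }
        }
    }

    WebServerConfig -OutputPath .\WebServerConfig          # generates Server01.mof
    Start-DscConfiguration -Path .\WebServerConfig -Wait -Verbose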

I am not sure how much of this strategy was in Snover’s mind when he came up with PowerShell, but today it looks far-sighted. The role of a server OS has changed since Windows first entered this market, with Windows NT in 1993. Today, when most server instances are virtual, the focus is on efficiency (making maximum use of the hardware) and agility (quick configuration and on-demand scaling). How is that achieved? Two things:

1. For efficiency, you want an OS that runs only what is necessary to run the applications it is hosting, and on the hypervisor side, the ability to load the right number of VMs to make maximum use of the hardware.

2. For agility, you want fully automated server deployment and configuration. We take this for granted in cloud platforms such as Amazon Web Services and Azure, in that you can run up a new server instance in a few minutes. However, there is still manual configuration on the server once launched. Azure web apps (formerly web sites) are better: you just upload your application. Better still, you can scale it by adding or removing instances with a script or through the web-based management portal. Web apps are limited though and for more complex applications you may need full access to the server. Greater ability to automate the server means that the web app experience can become the norm for a wider range of applications.

Nano Server is more efficient. Look at these stats (compared to full Server):

  • 93 percent lower VHD size
  • 92 percent fewer critical bulletins
  • 80 percent fewer reboots

Microsoft has removed not only the GUI, but also 32-bit support and MSI (I presume the Windows Installer services). Nano Server is designed to work well on both sides of the hypervisor, either hosting Hyper-V or itself running in a VM.

Microsoft has also improved automation:

All management is performed remotely via WMI and PowerShell. We are also adding Windows Server Roles and Features using Features on Demand and DISM. We are improving remote manageability via PowerShell with Desired State Configuration as well as remote file transfer, remote script authoring and remote debugging.

Returning for a moment to ChefConf, the DevOps concept is that you define the configuration of your application infrastructure in code, as well as that of the application itself. Deployment can then be automated. Or you could use the container concept to build your application as a deployable package that has no dependencies other than a suitable host – this is where Microsoft’s other announcement from yesterday comes in: Hyper-V Containers, which provide a high level of isolation without quite being a full VM, and the already-announced Windows Server Containers, which are similar but a bit less isolated.

image

This is the right direction for Windows Server, though the detail to be revealed at the Build and Ignite conferences in a few weeks’ time will no doubt show limitations.

A bigger issue though is whether the Windows Server ecosystem is ready to adapt. I spoke to an attendee at ChefConf who told me his Windows servers were more troublesome than Linux. Did he use Server Core, I asked? No, he said; we like to be able to log on to the GUI. It is hard to change the culture so that running a GUI on the server is no longer the norm. The same applies to third-party applications: what will the requirements be if you want to install on Nano Server (no MSI)? Even if Microsoft has this right, it will take a while for its users to catch up.

Microsoft publishes new OneDrive API with SDK, sample apps

Microsoft has announced a new OneDrive API for programmatic access to its cloud storage service. It is a REST API which Microsoft Program Manager Ryan Gregg says the company is also using internally for OneDrive apps. The new API replaces the previous Live SDK, though the Live SDK will continue to be supported. One advantage of the new API is that you can retrieve changes to files and folders in order to keep an offline copy in sync, or to upload changes made offline.

Unfortunately this does not extend to only downloading the changed part of a file (as far as I can tell); you still have to delete and replace the entire file. Imagine you had a music file in which only the metadata had changed. With the OneDrive API, you will have to upload or download the entire file, rather than simply applying the difference. However, you can upload files in segments in order to handle large files, up to 10GB.

I have worked with file upload and download using the Azure Blob Storage service so I was interested to see what is now on offer for OneDrive. I went along to the OneDrive API site on GitHub and downloaded the Windows/C# API explorer, which is a Windows Forms application (why not WPF?). This uses a OneDrive SDK library which has been coded as a portable class library, for use in desktop, Windows 8, Windows Phone 8.1 and Windows Phone Silverlight 8.

image

I have to say this is not the kind of sample I like. I prefer short snippets of code that demonstrate things like: here is how you authenticate, here is how you iterate through all the files in a folder, here is how you download a file, here is how you upload a file, and so on. All these features are there in this app, but finding them means weaving your way through all the UI code and async calls to piece together how it actually works. On top of that, despite all those async calls, there are some performance issues which seem to be related to the smart tiles which display a preview image, where possible, from each file and folder. I found the UI becoming unresponsive at times, for example when retrieving my large SkyDrive camera roll.
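
To show the kind of snippet I mean, here is roughly how you list the children of the OneDrive root folder over the REST API, assuming you have already obtained an OAuth access token; the endpoint is the consumer OneDrive API endpoint, and the details should be treated as a sketch rather than gospel:

    # Assumes $accessToken already contains a valid OAuth token for the OneDrive API.
    $headers  = @{ Authorization = "bearer $accessToken" }
    $response = Invoke-RestMethod -Uri "https://api.onedrive.com/v1.0/drive/root/children" -Headers $headers

    # Each item carries a name, a size, and either a 'file' or 'folder' facet.
    $response.value | ForEach-Object { "{0}  {1} bytes" -f $_.name, $_.size }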

Gregg makes no reference in his post to OneDrive for Business, but my assumption is that the new API only applies to consumer OneDrive. Microsoft has said though that it intends to unify its two OneDrive services so maybe a future version will be able to target both.

At a quick glance the API looks different to the Azure Blob Storage API. They are different services but with some overlap in terms of features and I wonder if Microsoft has ever got all its cloud storage teams together to work out a common approach to their respective APIs.

I do not intend to be negative. OneDrive is an impressive and mostly free service and the API is important for lots of reasons. If you find the OneDrive integration in the current Windows 10 preview too limited (as I do), at least you now have the ability to code your own alternative.

Microsoft open sources heart of .NET: CoreCLR runtime now on GitHub

Microsoft’s CoreCLR is now available on GitHub. We knew this was coming, but it is still a significant step, since this piece is the very heart of .NET: the execution engine that consumes a .NET IL (Intermediate Language) executable and compiles it to machine code for execution. The IL can easily be decompiled back to C#; it is in a sense fairly close to what you wrote in the editor. The CLR compiles the IL to native code, and also handles garbage collection (automatic memory management) and interop with other native code libraries. The just-in-time compiler in CoreCLR is called RyuJIT.

CoreCLR is not the same as the .NET Framework CLR (as found in the Windows desktop today), though one thing we now learn is that it is a true subset:

CoreCLR is a subset of the .NET Framework CLR. They share the same codebase and are updated together. For example, an update to the .NET GC improves both CoreCLR and the .NET Framework CLR.

We setup a live 2-way mirror between the coreclr repo on GitHub and the .NET Framework TFS server within Microsoft. The latency of the mirror is low, measurable in minutes.

Contributions made to the coreclr repo are integrated to the Microsoft TFS server automatically and will become part of both the .NET Framework and .NET Core products. The same is true in reverse, that .NET Framework CLR changes (within the CoreCLR subset) are mirrored to the CoreCLR repo. These changes will sometimes result in large commits to unrelated components.

This is good news since it reduces the risk of fragmentation between the .NET Framework and the CoreCLR. Note that the same does not apply to the framework libraries, which are forked between .NET Framework and CoreFX. The reason for the fork is to enable cross-platform .NET and to benefit from greater modularity in the Framework without breaking the existing .NET Framework.

Some other points of interest:

  • CoreCLR will run on Linux and Mac, but not yet; this is work in progress
  • CoreCLR powers Windows Phone apps as well as ASP.NET 5
  • CoreCLR uses the CMake build system rather than MSBuild, because it runs cross-platform

There is a key architectural difference between CoreCLR and the .NET Framework, which is that in CoreCLR each application is deployed with the runtime and libraries it requires, whereas in the .NET Framework applications depend on a system-managed runtime and shared libraries. This has the advantage that applications are standalone, and you could run one from say a portable USB drive on a system which did not have .NET or Mono installed.

The disadvantage, aside from greater use of disk space, is that patching the same libraries across multiple applications is hard. In the interview here Microsoft offers a clue about how it might come up with a solution for this. Jan Kotas on the CLR team talks about an ideal scenario where identical copies of the same DLL are in fact shared even though each application appears to have its own copy. This sounds similar to the mechanism used by de-duplication in Windows Server. The file system makes it look as if several copies of a file exist in different directories, but in fact there is only one. If you update a file though, the right thing happens and only the virtual copy that you overwrite is changed. It sounds as if Kotas has in mind a variant where you could say, “update this file and all its instances elsewhere.” This would of course somewhat undermine the concept of app-isolated dependencies; but you know what they say about cakes and eating them:

“The ideal we should get to is every application has a local copy of everything. People eventually get to a point where through some OS mechanisms or through some other means the DLLs that are the same between different applications would get shared. That way nobody needs to worry about is this shared, or is it not shared. The ideal place that we’d like to get to is that sharing happens under the hood. It can happen through different mechanisms for different applications. [That would be the] ideal place for the runtime and how to version it.”

said Kotas. Possibly I am misinterpreting this; but it does sound like some kind of sharing-but-not-sharing solution to the patching problem.

Another point to note: a managed code application cannot execute without help. In order to run, every managed application needs three things:

1. The application code

2. The CLR – either CoreCLR or the .NET Framework CLR

3. A CLR host which loads the CLR and instructs it to execute the application. The CLR host has to be native code, for obvious reasons.

In the .NET Framework this third piece is invisible, since it is handled by the operating system (though apparently SQL Server is a special case). In the CoreCLR world though, you need to think about the CLR host. ASP.NET 5.0 has the KRuntime (K probably stands for Katana), which I think is the same as Project K. If you want to test CoreCLR today, you can use a host called CoreConsole which (as its name implies) lets you run console apps. Apparently there are a few technical problems using CoreCLR with ASP.NET 5 at the moment.

image

So that was 2014: Samsung stumbles, all change for Microsoft, Sony hack, more cloud, more mobile

What happened in 2014? One thing I did not predict was that Samsung would lose its momentum. Here are Gartner’s figures for global smartphone sales by vendor, for the third quarter of 2014:

image

Samsung is still huge, of course. But in 2013, Samsung seemed to be in such control of its premium brand that it could shape Android as it wished, rather than being merely an OEM for Google’s operating system. In the enterprise, Samsung KNOX held promise as a way to bring security and manageability to Android, but only in Samsung’s flavour. Today, that seems less likely. Market share is declining, and much of KNOX has been rolled into Android Lollipop. What is going wrong? The difficulty for Samsung is how to differentiate its products sufficiently, to avoid bleeding market share to keenly priced competition from vendors such as Xiaomi and Huawei. This is difficult if you do not control the operating system.

What of the overall mobile OS wars? 2014 brought few surprises: the Apple/Android duopoly continued, Blackberry further diminished its share, and Windows Phone struggled on, though it was not looking good for Microsoft’s OS as the year closed; the Nokia acquisition may have been fumbled.

All change at Microsoft

That brings me to Microsoft, a company I watch closely. 2014 saw Satya Nadella appointed as CEO and several strategic changes, though the extent to which Nadella introduced those changes is uncertain. What changes?

Office is going truly cross-platform, with first-class support for iOS and Android. I covered this recently on the Register; the summary is that there will be mobile versions of Office for iOS, Android and Windows (this last a Store app) with similar features, and that more and more of the functionality of desktop Office will turn up in the mobile versions. I learned from my interview with Technical Product Manager Kaberi Chowdhury that ODF (Open Document) support is planned, as is some level of programmability.

The plans for Office are a clue to the company’s wider strategy, which is focused on cloud and server. Key products include Office 365, Windows Azure, Active Directory (and Azure Active Directory), SQL Server, SharePoint, and System Center as a management tool for hybrid cloud.

The Windows client strategy is to bring back users who disliked Windows 8 with a renewed focus on the desktop in the forthcoming Windows 10, while retaining the Store app model for apps that are secure, touch-friendly, and easily deployed. It is still not clear what Windows 10 phones and tablets will look like, but we can expect convergence; no more Windows RT, but perhaps tablets running Windows Phone OS that are in effect the next generation of Windows RT without a desktop personality.

The company will also hedge its bets with full app support for Office and its cloud services on iOS and Android, and in doing so will make its Windows mobile offerings less compelling.

Microsoft’s developer tools are changing in line with this strategy. The next generation of .NET is open source and cross-platform on the server side, for Windows, Mac and Linux. Xamarin plugs the gap for .NET on iOS and Android, while Microsoft is also adding native support (not .NET based) for cross-platform mobile in the next Visual Studio.

These are big changes to the developer stack, and Microsoft is forking .NET between the continuing Windows-only .NET Framework, and the new cross-platform .NET Core. Developers have many questions about this; see this interview on the Register for what I could glean about the current plans. Watch out for the Build conference at the end of April, when the company will attempt to put it all together into a coherent whole for developers targeting either Windows 10, or cloud apps, or cloud services with cross-platform mobile clients.

This entire strategy is a logical progression from the company’s failure in mobile. Can it now succeed with client apps running on platforms controlled by its competitors? Alternatively, is there hope that Windows 10 can keep businesses hooked on Windows clients? Maybe 2015 will bring some answers, though with Windows 10 not expected until towards the end of the year there will be a long wait while iOS, Android and even Chrome OS (the Chromebook operating system) continue to build momentum.

A side effect is that C# now has a better chance of building a cross-platform user base, rather than being a Windows language. This has already happened in game development, thanks to the use of Mono and C# in the popular Unity game engine. Could it also happen with ASP.NET, deployed to Linux servers, now that this will be officially supported? Or is there little room for it alongside Java, PHP, Ruby, Node.js and the rest? 

The puzzle with Microsoft is that there is still too much mediocrity and complacency damaging the company’s offerings. How can it expect to succeed in the crowded wearables market with a band that is uncomfortable to wear? There is still an attitude in some parts of the company that the world will be happy to put up with problems that might be fixed in a future version after some long interval. Then again, the Azure team is doing great things and Windows Server continues to impress. Win or lose, there will be plenty of Microsoft news this year.

A theme for 2015: cloud optimization

Late last year I attended Amazon’s re:Invent conference in Las Vegas; I wrote this up here. The key announcement for me was Amazon Aurora, a MySQL clone, not so much because of its merits as a cloud database server, but more because it represents a new breed of applications that are designed for the cloud. If you design database storage with the knowledge that it will only ever run on a huge cloud-scale infrastructure, you can make optimizations that cannot be replicated on smaller systems. I tried to summarize what this means in another Register piece here. The fact that this type of technology can be rented by any of us at commodity prices increases the advantage of public cloud, despite reservations that many still have concerning security and control. It also poses a challenge for companies like Oracle and Microsoft whose technology is designed for on-premises as well as cloud deployment; they cannot achieve the same advantage unless they fork their products, creating cloud variants that use different architecture.

The Sony hack

The cyber invasion of Sony Pictures in late November was not just another hack; it was a comprehensive takedown in which (as far as I can tell) the company’s IT systems were entirely compromised and significantly damaged.

According to this report:

Mountains of documents had been stolen, internal data centers had been wiped clean, and 75 percent of the servers had been destroyed.

Most IT admins worry about disaster recovery (what to do after catastrophic system failure such as a fire in your data center) as well as about security (what to do if hackers gain access to sensitive information). In this case, both seemed to happen simultaneously. Further, as producing movies is in effect a digital business, the business suffered loss of some of its actual products, such as the unreleased “Annie”.

The incident is fascinating in itself, especially as we do not know the identity of the hackers or their purpose, but what interests me more are the implications.

Specifically, how many companies are equally at risk? It seems clear that Sony’s security was towards the weak end of the scale, but there is plenty of weak security out there, especially but not exclusively in smaller businesses.

With the outcome of the Sony hack so spectacular, it is likely that there will be similar efforts in 2015, as well as many businesses looking nervously at their own practices and wondering what they can do to protect themselves.

Cloud may be part of the answer, though even if the cloud provider does security right, that is no guarantee that its customers do the same.

Looking back on looking back

Here is what I wrote a year or so ago: Reflecting on 2013 – the year of not the PC, no privacy, and the Internet of Things. Most of it still applies. I have not achieved any of the three goals I set for myself though. Maybe this year…

SSD storage has come to Azure VMs, along with faster Azure SQL

Microsoft has introduced SSD storage for Azure VMs. This is a catch-up with Amazon, which has been offering this since at least June 2014. It is an important feature though, and is now in preview. The SSDs are part of the Azure storage service but can only be used for disks attached to VMs, not for general-purpose blob storage. There are three virtual disk types available:

               P10         P20         P30
  Disk size    128GB       512GB       1TB
  IOPS         500         2300        5000
  Throughput   100 MB/s    150 MB/s    200 MB/s

Price is $6.90 per 100GB per month, which if I am reading this right is less than Amazon’s $0.10 per GB per month ($10 per 100GB) as shown here.
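
As I understand the preview, you opt in by creating a storage account of the new premium type and placing your VM disks in it; premium disks can only be attached to the DS-series VM sizes. A sketch using the Azure PowerShell module of the time, where the account name and location are placeholders and the exact -Type value is my assumption:

    # Create a storage account backed by SSDs (Premium Storage preview).
    # "Premium_LRS" is assumed to be the account type for the preview; check the current documentation.
    New-AzureStorageAccount -StorageAccountName "mypremiumstore" -Type "Premium_LRS" -Location "West US"

    # VM disks created in this account get the P10/P20/P30 performance tiers shown above.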

One obvious use case is for SQL Server running on a VM. This generally performs better than Microsoft’s Azure SQL database service. That said, Microsoft is also previewing an improved Azure SQL which supports most of the features of SQL Server 2014, including .NET stored procedures and in-memory columnstore queries. Microsoft’s Scott Guthrie says performance is better:

Our internal benchmark tests (using over 600 million rows of data) show query performance improvements of around 5x with today’s preview relative to our existing Premium Tier SQL Database offering and up to 100x performance improvements when using the new In-memory columnstore technology.

If you can make it work, Azure SQL makes better sense than running SQL Server in a VM, with all the hassle of server patching and of course Microsoft’s licensing fees; but the performance has to be there. Another factor which drives users to the VM option is that SQL Server Reporting Services is not available in Azure SQL.

What is .NET Core, “the foundation of all future .NET platforms”?

I have been looking at .NET Core, an official Microsoft open source project which you can find on github and which is at the heart of Microsoft’s plans to open source most of its .NET technology.

Currently there are three Microsoft repositories for the .NET Core platform: the .NET Compiler Platform (“Roslyn”), ASP.NET 5, and the .NET Core Framework. Note that these are all v.Next versions of the .NET Framework. ASP.NET 5 and the .NET Core Framework are on GitHub, but Roslyn is on CodePlex, Microsoft’s open source repository site. There is also a GitHub repository for Entity Framework 7, currently part of ASP.NET, though I am not sure that it belongs there. The current version of EF is 6.1.1, but the code for this is on CodePlex. The KRuntime, which is the implementation of the parts of the .NET Runtime needed to host an ASP.NET application, is also in the ASP.NET repository. Its full name is the K Runtime Environment (KRE); I am not sure what K stands for. Note that Microsoft has only promised to open source the .NET server stack, not desktop frameworks like Windows Presentation Foundation.

I had a look at the .NET Core Framework. This is the key set of libraries for .NET applications. The easiest way to build the core libraries is from the command line. Open a Visual Studio 2013 Developer Command Prompt (which sets up the path and environment for command line builds), go to your clone of the github repository and type build.
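
In concrete terms, the whole thing is only a few commands; the repository URL below is an assumption based on where the .NET Core Framework code currently lives:

    # Run these from a Visual Studio 2013 Developer Command Prompt (or a PowerShell
    # session with the same environment) so that MSBuild and the compilers are on the path.
    git clone https://github.com/dotnet/corefx.git
    cd corefx
    .\build.cmd      # builds the .NET Core Framework libraries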

image

Cool. But what is in it? Not that much: System.Collections, Parallel LINQ, Vectors and XML libraries.

“More is coming soon. Stay tuned!” say the docs. And in this blog post by Microsoft’s Immo Landwerth:

Consider the subset we have today a down-payment on what is to come. Our goal is to open source the entire .NET Core library stack by Build 2015.

Landwerth says that Microsoft is “currently figuring out the plan for open sourcing the runtime”; this is the native code that creates the .NET Virtual Machine which executes .NET code.

Of course there is also Mono, the old open source implementation of .NET which is from an independent code base.

This is exciting stuff for .NET developers, especially since official runtimes for Linux and Mac are also promised, but also somewhat confusing. What is .NET Core versus what we have known as the .NET Framework?

Here is a diagram from Landwerth’s blog:

image

I presume that the top left box (.NET Framework) has not been promised as open source, but the other two boxes have. Note that ASP.NET 5 will run on either .NET Core or the full .NET Framework; and that .NET Native – the project to compile a .NET application as true native code – sits as part of .NET Core.

Store apps (also known as Windows Runtime apps, or Metro apps) are not covered in the above diagram, but since .NET Native currently only works for Store apps, maybe .NET Core is also the .NET runtime for Store apps. Landwerth says:

.NET Core is a modular development stack that is the foundation of all future .NET platforms. It’s already used by ASP.NET 5 and .NET Native.

There are also some clues about .NET Core in the home page for the github repository:

.NET Core and the .NET Framework have (for the most part) a subset-superset relationship. .NET Core is named "Core" since it contains the core features from the .NET Framework, for both the runtime and framework libraries. For example, .NET Core and the .NET Framework share the GC, the JIT and types such as String and List<T>. We’ll continue improving these components for both .NET Core and .NET Framework.

.NET Core was created so that .NET could be open source, cross platform and be used in more resource-constrained environments. We have also published a subset of the .NET Reference Source under the MIT license, so that you and the community can port additional .NET Framework features to .NET Core.

The second paragraph is intriguing. Microsoft has posted parts of the source for the .NET Framework library so that the community can port some of it to .NET Core. What this means I think is not that this code should be part of .NET Core (otherwise it becomes more than just core) but rather that it would run on .NET Core.

It seems, contrary to what you might have thought, that the full .NET Framework is not a superset of .NET Core, although it is intended to be close to that. This has interesting implications for future compatibility. If .NET Core is intended to be more agile and to evolve more rapidly than the .NET Framework, since it is somewhat free of backwards compatibility constraints, we will soon find that there are features in .NET Core that do not exist in the .NET Framework as well as vice versa, in other words, two incompatible stacks. That could be a problem.

Despite Microsoft’s impressive openness in publishing much of its .NET work and forming the .NET Foundation, I for one would appreciate a clearer presentation of the plans for .NET Core and .NET Framework and the extent to which .NET Framework should now be considered a legacy or Windows desktop only technology. I suspect the answer for the moment is “wait for Build.”