Tag Archives: windows server

No more infrastructure roles for Windows Nano Server, and why I still like Server Core

Microsoft’s General Manager for Windows Server Erin Chapple posted last week about Nano Server (under a meaningless PR-speak headline) to explain that Nano Server, the most stripped-down edition of Windows Server, is being repositioned. When it was introduced, it was presented not only as a lightweight operating system for running within containers, but also as suitable for infrastructure roles such as hosting Hyper-V virtual machines, hosting containers, and acting as a file server, web server or DNS server (but without AD integration).

In future, Nano Server will be solely for the container role, enabling it to shrink in size (for the base image) by over 50%, according to Chapple. It will no longer be possible to install Nano Server as a standalone operating system on a server or VM. 

This change prompted Microsoft MVP and Hyper-V enthusiast Aidan Finn to declare Nano Server all but dead (which I suppose it is from a Hyper-V perspective) and to repeat his belief that GUI installs of Windows Server are best, even on a server used only for Hyper-V hosting.

Prepare for a return to an old message from Microsoft, “We recommend Server Core for physical infrastructure roles.” See my counter to Nano Server. PowerShell gurus will repeat their cry that the GUI prevents scripting. Would you like some baloney for your sandwich? I will continue to recommend a full GUI installation. Hopefully, the efforts by Microsoft to diminish the full installation will end with this rollback on Nano Server.

Finn’s main argument is that the full GUI makes troubleshooting easier. Server Core also introduces a certain amount of friction: most documentation relating to Windows Server (especially from third parties) presumes you have a GUI, so you have to do some work to figure out how to do the same thing on Core.

Nevertheless I like Server Core and use it where possible. The performance overhead of the GUI is small, but running Core does significantly reduce the number of security patches and therefore required reboots. Note that you can run GUI applications on Server Core, if they are written to a subset of the Windows API, so vendors that have taken the trouble to fix their GUI setup applications can support it nicely.
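
As an aside, one thing that softens the Core-versus-GUI argument on Windows Server 2012 and 2012 R2 (though not on Server 2016, where the choice is fixed at install time) is that you can switch between the two with PowerShell. A minimal sketch; if the GUI binaries have been removed from the local image you will also need the -Source parameter pointing at installation media:

Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart
Uninstall-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart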

Another advantage of Server Core, in the SMB world where IT policies can be harder to enforce, is that users are not tempted to install other stuff on their Server Core Domain Controllers or Hyper-V hosts. I guess this is also an advantage of VMware. Users log in once, see the command-line UI, and do not try installing file shares, print managers, accounting software, web browsers (I often see Google Chrome on servers because users cannot cope with IE Enhanced Security Configuration), remote access software and so on.

Only developers now need to pay attention to Nano Server, but that is no reason to give up on Server Core.

Notes from the field: unexpected villain breaks Dynamics CRM and IIS on Windows Server 2012

Yesterday I was asked to convert a Dynamics CRM 2013 installation from an internal to an Internet Facing Deployment (IFD). It is a bit fiddly, but I have done this before so I was confident.

The installation in question is only for test; the company has its production CRM 2011 on another server. Because it is for test though, it is a small deployment on a single server.

I got to work running the Claims Based Authentication wizard in the CRM Deployment Manager, but I also noticed something odd about the server: WSUS (Windows Server Update Services) was installed though it was not in use. This seemed a bad idea, so I asked if I could remove it. Sure, they said; it had just been a quick experiment. I removed WSUS and got on with the next steps of configuring IFD.

Unfortunately ADFS 2.0 (in this case) would not play ball. It could not communicate with CRM. I quickly saw why: attempting to browse to the special FederationMetadata.xml URL raised a 500 error.

I tried a few things. There are plenty of odd things that can go wrong: permissions on the private keys of the certificate used for the CRM web site, Service Principal Names, incorrect DNS entries and so on. All seemed fine. Still the error.
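
For what it is worth, a couple of these checks (the federation metadata URL, DNS and the SPNs) can be scripted rather than clicked through. A rough sketch, using hypothetical names (crm.contoso.com for the CRM site, CONTOSO\svc-crm for the application pool account) and needing PowerShell 3.0 or later:

Invoke-WebRequest https://crm.contoso.com/FederationMetadata/2007-06/FederationMetadata.xml -UseBasicParsing
Resolve-DnsName crm.contoso.com
setspn -L CONTOSO\svc-crm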

I decided to backtrack and temporarily disable Claims Based Authentication. Unfortunately it appeared that I had broken CRM completely. All access to the site raised the same 500.19 IIS error.


The web page IIS delivers says that the most likely causes are that the worker process is unable to read the ApplicationHost.config or web.config file, that there is malformed XML in one of those files, or that NTFS permissions are incorrect.

I did a repair install on CRM. I reapplied the rollups. No difference.

I ran Process Monitor to try to figure out which configuration file was causing the problem. It was not a great help, but it did point me in the right direction: it seemed that ASP.NET was not working properly at all. I now focused on this rather than on CRM itself, observing also that there were not many CRM-related errors in the event log, and I would expect more if CRM were really broken.

I created a hello world ASP.NET application and installed it in a separate site on a different port. Same error.

Searching the web for this particular error did not turn up much. In the context of CRM, the few users who had encountered something similar reinstalled everything from scratch. However, at least I now knew that IIS rather than CRM was broken. This helpful MSDN article actually includes a hint to the solution:

For above specific error (mentioned in this example), DynamicCompressionModule module is causing the trouble. This is because of the XPress compression scheme module (suscomp.dll) which gets installed with WSUS. Since Compression schemes are defined globally and try to load in every application Pool, it will result in this error when 64bit version of suscomp.dll attempts to load in an application pool which is running in 32bit mode.

which is also referenced here. These refer to WSUS breaking 32-bit applications, but in my case after removing WSUS neither 64-bit nor 32-bit apps were running.

Let me put it more clearly. If you remove WSUS using the role wizard in Server Manager, a number of bits get left behind, including a setting in ApplicationHost.config (in %windir%\System32\inetsrv) that breaks IIS.
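
If you hit the same thing, one commonly documented fix (gentler than hand-editing ApplicationHost.config) is to remove the leftover xpress compression scheme with appcmd, assuming, as in the MSDN article above, that this scheme is the culprit; run it from an elevated prompt and follow it with an iisreset:

%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /-[name='xpress']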

So it was my attempt to clean up the server that had made it worse.

That said, this is also a Windows Server failure. Adding and then removing a role should, as far as possible, leave the system unchanged.

Once identified, the problem is easy to fix (this is often true). Still, several hours wasted and more evidence for Martin Fowler’s assertion that you should automate server configuration and spin up a new one from scratch when you want to make a change, to avoid configuration drift. There is a more detailed post on the same theme – Phoenix servers that rise from the ashes, not snowflake servers that are unique and ugly – here.

In a small business context this perhaps is harder to achieve – though the cost of entry gets lower all the time, through either cloud computing or internal virtualization platforms.

Windows Server 2012 R2, System Center 2012 R2, SQL Server 14: what’s new, and what is the Cloud OS?

Earlier this month I attended a three-day press briefing on what is coming in the R2 wave of Microsoft’s server products: Windows Server, System Center and SQL Server.

There is a ton of new stuff, too much for a blog post, but here are the things that made the biggest impression.

First, I am beginning to get what Microsoft means by “Cloud OS”. I am not sure that this is a useful term, as it is fairly confusing, but it is worth teasing out as it gives a sense of Microsoft’s strategy. Here’s what lead architect Jeffrey Snover told me:

I think of it as a central organising thought. That’s our design centre, that’s our north star. It’s not necessarily a product, it goes across some things … for example, I would absolutely include SQL [Server] in all of its manifestations in our vision of a cloud OS. Cloud OS has two missions. Abstracting resources for consumption by multiple consumers, and then providing services to applications. Modern applications are all consuming SQL … we’re evolving SQL to the more scale-out, elastic, on-demand attributes that we think of as cloud OS attributes.

If you want to know what Cloud OS looks like, it is something like this:


Yes, it’s the Azure portal, and one of today’s big announcements is that this is the future of System Center, Microsoft’s on-premise cloud management system, as well as Azure, the public cloud. Azure technology is coming to System Center 2012 R2 via an add-on called the Azure Pack: self-service VMs, web sites, SQL databases, service bus messaging, virtual networks, online storage and more.

Snover talked about another aspect of Cloud OS, which is also significant. He says that Microsoft sees cloud as an “operating system problem.” This is the key to how Microsoft thinks it can survive and prosper versus VMware, Amazon and so on. It has a hold of the whole stack, from the tiniest detail of the operating system (memory management, file system, low-level networking and so on) to the highest level, big Azure datacenters.

The company is also unusual in its commitment to private, public and hybrid cloud. The three-cloud story, which Microsoft reiterated obsessively during the briefing, is public cloud (Azure), private cloud (System Center) and hosted cloud (service providers). Ideally all three will look the same and work the same – differences of scale aside – though the Azure Pack is only the first stage towards convergence. Hyper-V is the common building block, and we were assured that Hyper-V in Azure is exactly the same as Hyper-V in Windows Server, from 2012 onwards.

I had not realised until this month that Snover is now lead architect for System Center as well as Windows Server. Without both roles, of course, he could scarcely architect “Cloud OS”.

Here are a few other things to note.

Hyper-V 2012 R2 has some great improvements:

  • Generation 2 VMs (64-bit Server 2012 and Windows 8 and higher only) strip out legacy emulation and use UEFI boot from SCSI
  • Replica supports a range of intervals from 30 seconds to 15 minutes
  • Data compression can double the speed of live migration
  • Live VM cloning lets you copy a running VM for troubleshooting offline
  • Online VHDX resize – grow or shrink
  • Linux guests now support Live Migration, Live Backup, Dynamic Memory and online VHDX resize
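
To give a flavour, the online VHDX resize in that list is a one-liner from PowerShell. A sketch with a hypothetical path; for an online resize the disk must be a VHDX attached to a virtual SCSI controller:

Resize-VHD -Path 'D:\VMs\test\data.vhdx' -SizeBytes 200GB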

SQL Server 14 includes in-memory optimization, code-named Hekaton, which can deliver stunning speed improvements. There is also compilation of stored procedures to native code, subject to some limitations. The snag with Hekaton? The tables you optimize have to fit in RAM.

Like Generation 2 VMs, Hekaton is the result of re-thinking a product in the light of technical advances. Old warhorses like SQL Server were designed when RAM was tiny, and everything had to be fetched from disk, modified, and written back. Bringing that model into RAM as-is is a waste. Hekaton removes the overhead of the disk/RAM model almost completely, though it does still write data back to disk when transactions complete. The data structures are entirely different.

PowerShell Desired State Configuration (DSC) is a declarative syntax for defining the state of a server, combined with a provider that knows how to read or apply it. It is work in progress, with limited providers currently, but immensely interesting, if Microsoft can both make it work and stay the course. The reason is that using PowerShell DSC you can automate everything about an application, including how it is deployed.
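
To make that concrete, here is a minimal sketch of the declarative syntax (the node name is hypothetical). Running the configuration generates a MOF file, which Start-DscConfiguration then pushes to the target:

Configuration WebServer {
    Node "SRV01" {
        # Ensure the IIS role is present; DSC corrects any drift from this state
        WindowsFeature IIS {
            Ensure = "Present"
            Name   = "Web-Server"
        }
    }
}
WebServer
Start-DscConfiguration -Path .\WebServer -Wait -Verbose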

Remember Whitehorse? This was a brave but abandoned attempt to model deployment in Visual Studio as part of application development. What if you could not only model a deployment, but execute it, using the cloud automation and self-service model to create the VMs and configure them as needed? As a side benefit, you could version-control your deployment. Linux is way ahead of Windows here, with tools like Puppet and Chef, but the potential is now here. Note that Microsoft told me it has no plans to do this yet, but “we like the idea”, so watch this space.

Storage improvements. Both data deduplication and Storage Spaces are getting smarter. Deduplication can now be used for running VHDs in a VDI deployment, with huge storage savings. Storage Spaces supports hybrid pools with SSDs alongside hard drives, with hot data automatically moved to the faster tier and the ability to pin files to the SSD tier.
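
Both are driven from PowerShell. A rough sketch, assuming a VDI volume D: and an existing tier named SSD_Tier (the Set-FileStorageTier step pins the file; the scheduled tier optimization job moves it on its next run):

Enable-DedupVolume -Volume D: -UsageType HyperV
Set-FileStorageTier -FilePath 'D:\VDI\gold-image.vhdx' -DesiredStorageTier (Get-StorageTier -FriendlyName 'SSD_Tier')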

Server Essentials for small businesses is now a role in Windows Server as well as a separate edition. If you use the role, rather than the edition, you can use the Essentials tools for up to 100 or so users. Unfortunately that will also mean Windows Server CALs; but it is a step forward from the dead-end 25-user limit in the current product. Small Business Server with bundled Exchange is still missed though, and not coming back. More on this separately.
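
If you want to try the role, it installs like any other. A sketch, assuming the feature name is ServerEssentialsRole; check the exact name on your build with Get-WindowsFeature first:

Get-WindowsFeature *Essentials*
Install-WindowsFeature ServerEssentialsRole -IncludeManagementTools -Restart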

What do I think overall? Snover is a smart guy and if you buy into the three-cloud idea (and most businesses, for better or worse, are not ready for public cloud) then Microsoft’s strategy does make sense.

The downside is that there remains a lot of stuff to deal with if you want to implement Microsoft’s private cloud, and I am not sure whether System Center admins will all welcome the direction towards using Azure tools on-premise, having learned to deal with the existing model.

The server folk at Microsoft have something to brag about though: 9 consecutive quarters of double-digit growth. It is quite a contrast with the declining PC market and the angst over Windows 8, leading to another question: long-term, can Microsoft succeed in server but fail in client? Or will (for better or worse) those two curves start moving in the same direction? Informed opinions, as ever, are welcome.

Notes from the field: USB 3.0 PCI Express cards, HP ML350 G6 and Server Core

If I search the web, get little help, and then solve a problem, I make a point of posting so that someone else will have a better experience. The challenge was this: finding a USB 3.0 PCI Express card that works in an HP ML350 G6 server, a popular choice for small business duties such as Small Business Server or Hyper-V Server. This particular example runs Hyper-V Server 2008 R2, based on Server Core, which can sometimes be awkward for installing drivers.

USB 3.0 is theoretically around 10 times faster than USB 2.0. If you are transferring large files or performing backup to an external drive, it can make a huge difference to performance.

Trawling the web was not particularly helpful. As this expert notes, there is no officially supported or recommended option for USB 3.0 on an ML350:

The ML350 G5 and G6 servers do not have, as a recommended option, a USB 3.0 and e-SATA controller, which would be clear to you by referring the quickspecs of the servers.

If you take the view that only recommended and certified components should be fitted to a server, give up and stop reading now. I do not disagree, but I tend towards a pragmatic approach, depending on your budget and how system-critical the server in question is.

Further, it can work. This guy used a HighPoint 1144A card and it kind of works, though on investigating I found some users reporting that only two of the four ports actually work and that you have to tolerate errors in Device Manager; it does not seem ideal. Another user noted that HP’s own card (which is designed for workstations, not the ML350) did not work, though maybe it works for others; I am not sure.

I did find some references to success with the Renesas USB 3.0 chipset, so I found a StarTech card that uses it, the PEXUSB3S2. I fitted it, but the server would not boot. A red LED on the server front panel indicated a “system critical” issue. Shame.

I tried a different card, bought in haste from Maplins. This one is a Transcend TS-PDU3. It also has a Renesas chipset. I fitted this to the PCI Express x16 slot in the ML350. Note: if you do this, you will need some kind of extender cable for the power, since this card (like most USB 3.0 cards) requires additional power direct from the power supply. The ML350 G6, at least in my case, has plenty of spare Molex power connectors, but they are on short cables and sited at the front of the computer, whereas the PCI Express slots are at the back.

Good news: the server booted.


Next up, drivers. No CD comes with this particular card, but you can download the drivers from the Transcend site. There are two drivers for different versions of the TS-PDU3. I used the second version (Molex and SATA power connectors). Fortunately the setup ran perfectly on Server Core; success.

I took the StarTech card and tried it in another PC, this one self-assembled with an Intel motherboard. This machine also runs Hyper-V Server, but the 2012 version. The machine booted properly, but the setup on the supplied CD did not run.


“Sorry, the install wizard can’t find the proper component for the current platform”, it remarked cryptically.

I went along to the StarTech site and found an updated driver which looks remarkably similar to the one I had installed for the Transcend card. It ran perfectly and all is well.

This is a good moment to mention Devcon.exe, an essential tool if you are installing device drivers on Server Core. You can use the GUI Device Manager remotely, but it is read-only. Devcon.exe is part of the WDK (Windows Driver Kit), and it is not too hard to find. Make sure you use the right version (32-bit or 64-bit) for your system.

On server core, run:

Devcon status * > devices.txt

to output the status of your devices to a text file. Open it in Notepad, which works on Server Core, and look for the word “problem” to see if there are issues. For example, Problem 28 is “no driver”. You also get the hardware ID from this output, needed if you use Devcon to install or update a driver. You may find things like audio devices that are not working; unlikely to be a worry on Server Core.
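
Two related tricks: findstr (which is present on Server Core) saves opening Notepad at all, and once you have the hardware ID, Devcon can install or update a driver from an extracted .inf. A sketch with hypothetical paths and IDs; yours will differ:

findstr /i "problem" devices.txt
devcon update C:\Drivers\usb3\RenesasUSB3.inf "PCI\VEN_1912&DEV_0015"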

In my case, on both servers, I can see that the USB 3.0 card has been correctly detected and that the driver is running.

Why did the StarTech card not work on the ML350? Here I am going to shrug and say that PCI Express cards can be problematic. Equally, if I get good results and no unexpected behaviour from the Transcend card, I am not going to worry that it is a cheap card that does not belong in a server.

The truth is, if you need USB 3.0 you really need it, and the only alternative is a new server.

How to run Server Manager or any application as a different user in Windows 8

If you are running Windows Server 2012 you can install the Remote Server Administration Tools (RSAT) on Windows 8, which lets you administer your servers from the comfort of the Windows 8 GUI, even if they run Server Core.

However, it is unlikely that you log onto your Windows 8 client with the same credentials you use to manage your servers.

The solution is to run the tools as a different user. The approach depends on which tool you are using. If you run PowerShell, for example, you can use the Enter-PSSession cmdlet with the -Credential argument:

Enter-PSSession yourservername -Credential yourdomain\youradmin

This will pop up a login prompt so you can start an administrative PowerShell session on the server.

But what about Server Manager? If you go to the Start screen (after installing the remote tools) and type Server Manager, you can right-click the shortcut (or flick up) and get these options:


Run as administrator will not help you, since this is the local administrator. Instead, choose Open file location.

Next, hold down the shift key and right-click the shortcut for Server Manager:


From the pop-up menu choose Run as different user and enter your server admin credentials.

Now you have a nice Dashboard from which to manage your remote server.
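
If you prefer to skip the Start screen gymnastics, runas from a command prompt does the same job. A sketch, assuming Server Manager is in its default location:

runas /user:yourdomain\youradmin "%windir%\system32\ServerManager.exe"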


Improving Windows Server: the really hard problem

At Microsoft’s Build conference last week I attended a Server 2012 press event led by Jeffrey Snover, the Lead Architect for the Windows Server Division.

He and others spoke about the key features of Server 2012 and how it justifies Microsoft’s claim that it is the cornerstone of the Cloud OS.

It is a strong release; but after the event I asked Snover what he thought about a problem which is at the micro-management level, far removed from the abstractions of cloud.

The Windows event log, I observed, invariably fills with errors and warnings. Many of these are benign; but conscientious administrators spend significant effort investigating them, chasing down knowledgebase articles, and trying to tweak Windows Server in order to fix them. It is a tough and time-consuming task.

When, I asked, will we see an edition of Windows Server that does a better job of eliminating useless and unnecessarily repetitive log entries and separating those which really matter from those which do not?

[I realise that the Event Viewer makes some effort to do this but in my experience it falls short.]


That’s hard, he said. It will take a long time.

Which is better than saying that the problem will never be solved; but you wonder.

I also realise that this issue is not unique to Windows. Your Linux or Mac machine also has logs full of errors and warnings. There is an argument that Windows makes them too easy to find, to the extent that scammers exploit it by cold-calling users (generally not server admins) to persuade them that they have a virus infection. On the other hand, ease of access to logs is a good thing.

What is hard is discerning, with respect to any specific report, whether it matters and what action if any is required. One reason, perhaps, why we will always need system administrators.