Tag Archives: windows server

Hands on with Windows Virtual Desktop

Microsoft’s Windows Virtual Desktop (WVD) is now in preview. This is virtual Windows desktops on Azure, and the first time Microsoft has come forward with a fully integrated first-party offering. There are also a few notable features:

– You can use a multi-session edition of Windows 10 Enterprise. Normally Windows 10 does not support concurrent sessions: if another user logs on, any existing session is terminated. This is an artificial restriction which is more to do with licensing than technology, and there are hacks to get around it but they are pointless presuming you want to be correctly licensed.

– You can use Windows 7 with free extended security updates to 2023. As standard, Windows 7 end of support is coming in January 2020. Without Windows Virtual Desktop, extended security support is a paid-for option.

– Running a VDI (Virtual Desktop Infrastructure) can be expensive but pricing for Windows Virtual Desktop is reasonable. You have to pay for the Azure resources, but licensing comes at no extra cost for Microsoft 365 users. Microsoft 365 is a bundle of Office 365, Microsoft Intune and Windows 10 licenses and starts at £15.10 or $20 per month. Office 365 Business Premium is £9.40 or $12.50 per month. These are small business plans limited to 300 users.

Windows Virtual Desktop supports both desktops and individual Windows applications. If you are familiar with Windows Server Remote Desktop Services, you will find many of the same features here, but packaged as an Azure service. You can publish both desktops and applications, and use either a client application or a web browser to access them.

What is the point of a virtual desktop when you can just use a laptop? It is great for manageability, security, and remote working with full access to internal resources without a VPN. There could even be a cost saving, since a cheap device like a Chromebook becomes a Windows desktop anywhere you have a decent internet connection.

Puzzling out the system requirements

I was determined to try out Windows Virtual Desktop before writing about it so I went over to the product page and hit Getting Started. I used a free trial of Azure. There is a complication though: Windows Virtual Desktop VMs must be domain-joined. This means that simply having Azure Active Directory is not enough. You have a few options:

Azure Active Directory Domain Services (Azure ADDS). This is a paid-for Azure service that provides domain-join and other services to VMs on an Azure virtual network. It costs from about £80.00 or $110.00 per month. If you use Azure ADDS you set up a separate domain from your on-premises domain, if you have one. However you can combine it with Azure AD Connect to enable sign-on with the same credentials.

There is a certain amount of confusion over whether you can use WVD with just Azure ADDS and not AD Connect. The docs say you cannot, stating that “A Windows Server Active Directory in sync with Azure Active Directory” is required. However a user reports success without this; of course there may be snags yet to be revealed.

Azure Active Directory with AD Connect and a site to site VPN. In this scenario you create an Azure virtual network that is linked to your on-premises network via a site to site VPN. I went this route for my trial. I already had AD Connect running but not the VPN. A VPN requires a VPN Gateway which is a paid-for option. There is a Basic version which is considered legacy, so I used a VPNGw1 which costs around £100 or $140 per month.

Update: I have replaced the VPN Gateway with one using the Basic SKU (around £20.00 or $26.00 per month) and it still works fine. Microsoft does not recommend this for production but for a very small deployment like mine, or for testing, it is much more cost effective.

This solution is working well for me but note that in a production environment you would want to add some further infrastructure. The WVD VMs are domain-joined to the on-premises AD which means constant network traffic across the VPN. AD integrates with DNS so you should also configure the virtual network to use on-premises DNS. The solution would be to add an Azure-hosted VM on the virtual network running a domain controller and DNS. Of course this is a further cost. Running just Azure ADDS and AD Connect is cheaper and simpler if it is supported.
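
For reference, pointing the virtual network at on-premises DNS can be scripted with the Az PowerShell module. This is only a sketch; the network name, resource group and DNS server address are placeholders for illustration:

$vnet = Get-AzVirtualNetwork -Name "wvd-vnet" -ResourceGroupName "wvd-rg"
# Replace with the address(es) of your on-premises DNS servers
$vnet.DhcpOptions.DnsServers = @("192.168.0.10")
Set-AzVirtualNetwork -VirtualNetwork $vnet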

Incidentally, I use pfSense for my on-premises firewall so this is the endpoint for my site-to-site VPN. Initially it did not work. I am not sure what fixed it but it may have been the TCP MSS Clamping referred to here. I set this to 1350 as suggested. I was happy to see the connection come up in pfSense.

Setup options

There are a few different ways to set up WVD. You start by setting some permissions and creating a WVD Tenant as described here. This requires PowerShell but it was pretty easy.
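
For anyone wondering what that PowerShell looks like, it is roughly along these lines, using the Microsoft.RDInfra.RDPowerShell module; the tenant name is your own choice and the GUIDs come from your Azure AD tenant and subscription (the values here are placeholders):

Install-Module -Name Microsoft.RDInfra.RDPowerShell
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"
New-RdsTenant -Name "MyWvdTenant" -AadTenantId "<aad-tenant-guid>" -AzureSubscriptionId "<subscription-guid>"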

The next step is to create a WVD host pool and this was less straightforward. The tutorial offers the option of using the Azure Portal and finding Windows Virtual Desktop – Provision a host pool in the Azure Marketplace. Or you can use an Azure Resource Manager template, or PowerShell.

I used the Azure Marketplace, thinking this would be easier. When I ran into issues, I tried using PowerShell, but had difficulty finding the special Windows 10 Enterprise Virtual Desktop edition via this route. So I went back to the portal and the Azure marketplace.
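
If you do want to hunt for the image from PowerShell, something like the following should list what is on offer; I give it only as a sketch, and the publisher and offer names are my assumptions rather than anything I have verified:

Get-AzVMImageOffer -Location "westeurope" -PublisherName "MicrosoftWindowsDesktop"
Get-AzVMImageSku -Location "westeurope" -PublisherName "MicrosoftWindowsDesktop" -Offer "Windows-10"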

Provisioning the host pool

Once your tenant is created, and you have the system requirements in place, it is just a matter of running through a wizard to provision the host pool. You start by naming it and selecting a desktop type: Pooled for multi-session Windows 10, or Personal for a VM per user. I went for the Pooled option.

Next comes VM configuration. I stumbled a bit here. Even if you specify just 10 (or 1) users, the wizard recommends a fairly powerful VM, a D8s v3. I thought this would be OK for the trial, but it would not let me continue using the trial subscription as it is too expensive. So I ended up with a D4s v3. Actually, I also tried using a D4 v3 but that failed to deploy because it does not support premium storage. So the “s” is important.

The next dialog has some potential snags.

This is where you choose an OS image; note that the default is Windows 10 Enterprise multi-session, for a pooled WVD. You also specify a user which becomes the default for all the VMs and is also used to join the VMs to the domain. These credentials are also used to create a local admin account on the VM, in case the domain join fails and you need to connect (I did need this).

Note also that the OU path is specified in the form OU=wvd,DC=yourdomain,DC=com (for example), not just the name of an OU; otherwise you will get errors on domain join.

Finally take care with the virtual network selection. It is quite simple: if you are doing what I did and domain-joining to an on-premises domain, the virtual network and subnet needs to have connectivity to your on-premises DCs and DNS.
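
A quick way to check that connectivity, from a VM already on the virtual network, is to confirm that the on-premises domain resolves and that a domain controller answers on the LDAP port; a sketch, with a made-up DC name:

Resolve-DnsName "yourdomain.local"
Test-NetConnection -ComputerName "dc01.yourdomain.local" -Port 389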

The next dialog is pretty easy. Just make sure that you type in the tenant name that you created earlier.

Next you get a summary screen which validates your selections.

I suggest you do not take this validation too seriously. I found it happily validated a non-working configuration.

Hit OK and you can deploy your WVD host pool. This takes a while, around 10-15 minutes in my case when it works. If it does not work, it can fail quickly or slowly depending on where in the process it fails.

My problem, after fixing issues like using the wrong type of OS image, was failure to join the VM to the domain. I could not see why this did not work. The displayed error may or may not be useful.

If the deployment got as far as creating the VM (or VMs), I found it helpful to connect to a VM to look at its event viewer. I could connect from my on-premises network thanks to the site to site VPN.
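
Once connected, a couple of quick checks on the VM itself help narrow down a failed domain join; a sketch:

# Domain join attempts are logged in detail here
Get-Content C:\Windows\debug\NetSetup.log -Tail 50
# Recent errors in the System event log are also worth a look
Get-WinEvent -LogName System -MaxEvents 50 | Where-Object { $_.LevelDisplayName -eq "Error" }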

I discovered several issues before I got it working. One was simple: I mistyped the name of the vmjoiner user when I created it so naturally it could not authenticate. I was glad when it finally worked.

Connection

Once I got the host pool up and running my trial WVD deployment was fine. I can connect via a special Remote Desktop Client or a browser. The WVD session is fast and responsive, and the VPN to my office is rather handy.

Observations

I think WVD is a good strategic move from Microsoft and will probably be popular. I have to note though that setup is not as straightforward as I had hoped. It would benefit Microsoft to make the trial easier to get up and running and to improve the validation of the host pool deployment.

It also seems to me that for small businesses an option to deploy with only Azure ADDS and no dependency on an on-premises AD is essential.

As ever, careful network planning is a requirement and improved guidance for this would also be appreciated.

Update:         

There seems to be a problem with Office licensing. I have an E3 license. It installs but comes up with a licensing error. I presume this is a bug in the preview.

This was my mistake as it turned out. You have to take some extra steps to install Office Pro Plus on a terminal server, as explained here. In my case, I just added the registry key SharedComputerLicensing with a setting of 1 under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\ClickToRun\Configuration. Now it runs fine. Thanks to https://twitter.com/getwired for the tip.
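
For reference, the same change can be made from an elevated PowerShell prompt; this is simply the setting described above expressed as a command:

New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Office\ClickToRun\Configuration" -Name "SharedComputerLicensing" -PropertyType String -Value "1" -Force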

Windows Server 2019 Essentials may be Microsoft’s last server offering for small businesses

Microsoft’s Windows Server Team has posted about Windows Server 2019 Essentials, stating that:

“There is a strong possibility that this could be the last edition of Windows Server Essentials.”

Server Essentials is an edition aimed at small organisations that includes 25 Client Access Licenses (CALs). If you go beyond that you have to upgrade to Windows Server Standard at a much higher cost. There are some restrictions in the product, such as lack of support for Remote Desktop Services (other than for admin use).

Microsoft has already greatly reduced its server offering for small businesses. Small Business Server, the last version of which was Windows Small Business Server 2011, bundled Exchange, SharePoint and System Update Services, and supported up to 75 users.

“Capabilities that small businesses need, like file sharing and collaboration are best achieved with a cloud service like Microsoft 365,” says the team, though also observing that Server 2019 will be supported according to the normal timeline, which means you will get something like mainstream support until 2024 and extended support until 2029 or so.

Good decision? There are several ways to look at this. Microsoft’s desire for small businesses to adopt cloud is not without self-interest. The subscription model is great for vendors, giving them a consistent flow of income and a vehicle for upselling.

Cloud also has specific benefits for small businesses. Letting Microsoft manage your email server makes huge sense, for example. The cloud model has brought many enterprise-grade features to organisations which would otherwise lack them.

Despite that, I do not altogether buy the “cloud is always best” idea. From a technical point of view, running stuff locally is more efficient, and from a business point of view, it can be cheaper. Of course there is also a legacy factor, as many applications are designed to run on a server on the local network.

Businesses do have a choice though. Linux works well as a file and print server, and pretty well as a Windows domain controller.

Network attached storage (NAS) devices like those from Synology and Qnap are easy to manage and include a bunch of features which are small-business friendly, including directory services and even mail servers if you still want to do that.

A common problem though with small businesses and on-premises servers (whether Windows or Linux) is weak backup. It makes sense to use the cloud for that, if nothing else.

Although it is tempting to rail at Microsoft for pulling the rug from under small businesses with their own servers, the truth is that cloud does mostly make better sense for them, especially with the NAS fallback for local file sharing.

No more infrastructure roles for Windows Nano Server, and why I still like Server Core

Microsoft’s General Manager for Windows Server Erin Chapple posted last week about Nano Server (under a meaningless PR-speak headline) to explain that Nano Server, the most stripped-down edition of Windows Server, is being repositioned. When it was introduced, it was presented not only as a lightweight operating system for running within containers, but also for infrastructure roles such as hosting Hyper-V virtual machines, hosting containers, file server, web server and DNS Server (but without AD integration).

In future, Nano Server will be solely for the container role, enabling it to shrink in size (for the base image) by over 50%, according to Chapple. It will no longer be possible to install Nano Server as a standalone operating system on a server or VM. 

This change prompted Microsoft MVP and Hyper-V enthusiast Aidan Finn to declare Nano Server all but dead (which I suppose it is from a Hyper-V perspective) and to repeat his belief that GUI installs of Windows Server are best, even on a server used only for Hyper-V hosting.

He writes: “Prepare for a return to an old message from Microsoft, ‘We recommend Server Core for physical infrastructure roles.’ See my counter to Nano Server. PowerShell gurus will repeat their cry that the GUI prevents scripting. Would you like some baloney for your sandwich? I will continue to recommend a full GUI installation. Hopefully, the efforts by Microsoft to diminish the full installation will end with this rollback on Nano Server.”

Finn’s main argument is that the full GUI makes troubleshooting easier. Server Core also introduces a certain amount of friction as most documentation relating to Windows Server (especially from third parties) presumes you have a GUI and you have to do some work to figure out how to do the same thing on Core.

Nevertheless I like Server Core and use it where possible. The performance overhead of the GUI is small, but running Core does significantly reduce the number of security patches and therefore required reboots. Note that you can run GUI applications on Server Core, if they are written to a subset of the Windows API, so vendors that have taken the trouble to fix their GUI setup applications can support it nicely.

Another advantage of Server Core, in the SMB world where IT policies can be harder to enforce, is that users are not tempted to install other stuff on their Server Core Domain Controllers or Hyper-V hosts. I guess this is also an advantage of VMWare. Users log in once, see the command-line UI, and do not try installing file shares, print managers, accounting software, web browsers (I often see Google Chrome on servers because users cannot cope with IE Enhanced Security Configuration), remote access software and so on.

Only developers now need to pay attention to Nano Server, but that is no reason to give up on Server Core.

Notes from the field: unexpected villain breaks Dynamics CRM and IIS on Windows Server 2012

Yesterday I was asked to convert a Dynamics CRM 2013 installation from an internal to an Internet Facing Deployment (IFD). It is a bit fiddly, but I have done this before so I was confident.

The installation in question is only for test; the company has its production CRM 2011 on another server. Because it is for test though, it is a small deployment on a single server.

I got to work running the Claims Based Authentication wizard in the CRM Deployment Manager but also noticed something odd about the server. WSUS (Windows Server Update Services) was installed though it was not in use. This seems a bad idea so I asked if I could remove it. Sure, it was just a quick experiment. I removed WSUS and got on with the next steps of configuring IFD.

Unfortunately ADFS 2.0 (in this case) would not play ball. It could not communicate with CRM. I quickly saw why: attempting to browse to the special FederationMetadata.xml URL raised a 500 error.

I tried a few things. There are plenty of odd things that can go wrong: permissions on the private keys of the certificate used for the CRM web site, Service Principal Names, incorrect DNS entries and so on. All seemed fine. Still the error.

I decided to backtrack and temporarily disable Claims Based Authentication. Unfortunately it appeared that I had broken CRM completely. All access to the site raised the same 500.19 IIS error.

The web page IIS delivers says that the most likely causes are that the worker process is unable to read the ApplicationHost.config or web.config file, or malformed XML in the applicationhost.config or web.config file, or incorrect NTFS permissions.

I did a repair install on CRM. I reapplied the rollups. No difference.

I ran Process Monitor to try to figure out what configuration file was causing the problem. It was not a great help, but did point me in the right direction to the extent that it seemed that ASP.NET was not working properly at all. I now focused on this rather than CRM itself, observing also that there were not many CRM-related errors in the event log and I would expect more if it was really broken.

I created a hello world ASP.NET application and installed it in a separate site on a different port. Same error.

Searching for help on this particular error was not particularly helpful. In the context of CRM, the few users that encountered something similar had reinstalled everything from scratch. However, now at least I knew that IIS rather than CRM was broken. This helpful MSDN article actually includes a hint to the solution:

For above specific error (mentioned in this example), DynamicCompressionModule module is causing the trouble. This is because of the XPress compression scheme module (suscomp.dll) which gets installed with WSUS. Since Compression schemes are defined globally and try to load in every application Pool, it will result in this error when 64bit version of suscomp.dll attempts to load in an application pool which is running in 32bit mode.

which is also referenced here. These refer to WSUS breaking 32-bit applications, but in my case after removing WSUS neither 64-bit nor 32-bit apps were running.

Let me put it more clearly. If you remove WSUS using the role wizard in Server Manager, a number of bits get left behind, including a setting in ApplicationHost.config (in /System32/Inetsrv) that breaks IIS.
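
In my case the culprit was the compression scheme entry left behind by suscomp.dll, as described in the quote above. One way to remove it (a sketch; check the entry name in your own ApplicationHost.config, though 'xpress' is the usual name) is with appcmd:

%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /-[name='xpress']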

So it was my attempt to clean up the server that had made it worse.

That said, this is also a Windows Server failure. Adding and then removing a role should, as far as possible, leave the server unchanged.

Once identified, the problem is easy to fix (this is often true). Still, several hours wasted and more evidence for Martin Fowler’s assertion that you should automate server configuration and spin up a new one from scratch when you want to make a change, to avoid configuration drift. There is a more detailed post on the same theme – Phoenix servers that rise from the ashes, not snowflake servers that are unique and ugly – here.

In a small business context this perhaps is harder to achieve – though the cost of entry gets lower all the time, through either cloud computing or internal virtualization platforms.

Windows Server 2012 R2, System Center 2012 R2, SQL Server 14: what’s new, and what is the Cloud OS?

Earlier this month I attended a three-day press briefing on what is coming in the R2 wave of Microsoft’s server products: Windows Server, System Center and SQL Server.

There is a ton of new stuff, too much for a blog post, but here are the things that made the biggest impression.

First, I am beginning to get what Microsoft means by “Cloud OS”. I am not sure that this a useful term, as it is fairly confusing, but it is worth teasing out as it gives a sense of Microsoft’s strategy. Here’s what lead architect Jeffrey Snover told me:

I think of it as a central organising thought. That’s our design centre, that’s our north star. It’s not necessarily a product, it goes across some things … for example, I would absolutely include SQL [Server] in all of its manifestations in our vision of a cloud OS. Cloud OS has two missions. Abstracting resources for consumption by multiple consumers, and then providing services to applications. Modern applications are all consuming SQL … we’re evolving SQL to the more scale-out, elastic, on-demand attributes that we think of as cloud OS attributes.

If you want to know what Cloud OS looks like, it is something like this:

Yes, it’s the Azure portal, and one of today’s big announcements is that this is the future of System Center, Microsoft’s on-premise cloud management system, as well as Azure, the public cloud. Azure technology is coming to System Center 2012 R2 via an add-on called the Azure Pack. Self-service VMs, web sites, SQL databases, service bus messaging, virtual networks, online storage and more.

Snover also talked about another aspect to Cloud OS, which is also significant. He says that Microsoft sees cloud as an “operating system problem.” This is the key to how Microsoft thinks it can survive and prosper versus VMWare, Amazon and so on. It has a hold of the whole stack, from the tiniest detail of the operating system (memory management, file system, low-level networking and so on) to the highest level, big Azure datacenters.

The company is also unusual in its commitment to private, public and hybrid cloud. The three cloud story which Microsoft re-iterated obsessively during the briefing is public cloud (Azure), private cloud (System Center) and hosted cloud (service providers). Ideally all three will look the same and work the same – differences of scale aside – though the Azure Pack is only the first stage towards convergence. Hyper-V is the common building block, and we were assured that Hyper-V in Azure is exactly the same as Hyper-V in Windows Server, from 2012 onwards.

I had not realised until this month that Snover is now lead architect for System Center as well as Windows Server. Without both roles, of course, he could scarcely architect “Cloud OS”.

Here are a few other things to note.

Hyper-V 2012 R2 has some great improvements:

  • Generation 2 VMs (64-bit Server 2012 and Windows 8 and higher only) strip out legacy emulation and boot from SCSI using UEFI
  • Replica supports a range of intervals from 30 seconds to 15 minutes
  • Data compression can double the speed of live migration
  • Live VM cloning lets you copy a running VM for troubleshooting offline
  • Online VHDX resize – grow or shrink
  • Linux now supports Live Migration, Live Backup, Dynamic memory, online VHDX resize

SQL Server 14 includes an in-memory optimization technology, code-named Hekaton, that can deliver stunning speed improvements. There is also compilation of stored procedures to native code, subject to some limitations. The snag with Hekaton? Your data has to fit in RAM.

Like Generation 2 VMs, Hekaton is the result of re-thinking a product in the light of technical advances. Old warhorses like SQL Server were designed when RAM was tiny, and everything had to be fetched from disk, modified, written back. Bringing that into RAM as-is is a waste. Hekaton removes the overhead of the disk/RAM model almost completely, though it does have to write data back to disk when transactions complete. The data structures are entirely different.

PowerShell Desired State Configuration (DSC) is a declarative syntax for defining the state of a server, combined with a provider that knows how to read or apply it. It is work in progress, with limited providers currently, but immensely interesting, if Microsoft can both make it work and stay the course. The reason is that using PowerShell DSC you can automate everything about an application, including how it is deployed.
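
As a flavour of the declarative syntax, here is a minimal sketch that ensures IIS is installed on a node; the node name is a placeholder:

Configuration WebServerConfig {
    Node "SERVER01" {
        WindowsFeature IIS {
            Ensure = "Present"
            Name   = "Web-Server"
        }
    }
}
# Generate the MOF file and push the configuration to the target node
WebServerConfig -OutputPath C:\DSC
Start-DscConfiguration -Path C:\DSC -Wait -Verbose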

Remember White Horse? This was a brave but abandoned attempt to model deployment in Visual Studio as part of application development. What if you could not only model it, but deploy it, using the cloud automation and self-service model to create the VMs and configure them as needed? As a side benefit, you could version control your deployment. Linux is way ahead of Windows here, with tools like Puppet and Chef, but the potential is now here. Note that Microsoft told me it has no plans to do this yet but “we like the idea” so watch this space.

Storage improvements. Both data deduplication and Storage Spaces are getting smarter. Deduplication can be used for running VHDs in a VDI deployment, with huge storage saving. Storage Spaces support hybrid pools with SSDs alongside hard drives, hot data automatically moved, and the ability to pin files to the SSD tier.
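
The tiering and pinning are driven from PowerShell. Here is a rough sketch of the shape of it; the pool, tier, size and path names are placeholders, and the right sizes and resiliency depend on your disks:

$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredDisk" -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,900GB -ResiliencySettingName Mirror
# Pin a hot file (a VDI parent VHD, say) to the SSD tier
Set-FileStorageTier -FilePath "D:\VMs\parent.vhdx" -DesiredStorageTierFriendlyName "SSDTier"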

Server Essentials for small businesses is now a role in Windows Server as well as a separate edition. If you use the role, rather than the edition, you can use the Essentials tools for up to 100 or so users. Unfortunately that will also mean Windows Server CALs; but it is a step forward from the dead-end 25-user limit in the current product. Small Business Server with bundled Exchange is still missed though, and not coming back. More on this separately.

What do I think overall? Snover is a smart guy and if you buy into the three-cloud idea (and most businesses, for better or worse, are not ready for public cloud) then Microsoft’s strategy does make sense.

The downside is that there remains a lot of stuff to deal with if you want to implement Microsoft’s private cloud, and I am not sure whether System Center admins will all welcome the direction towards using Azure tools on-premise, having learned to deal with the existing model.

The server folk at Microsoft have something to brag about though: 9 consecutive quarters of double digit growth. It is quite a contrast with the declining PC market and the angst over Windows 8, leading to another question: long-term, can Microsoft succeed in server but fail in client? Or will (for better or worse) those two curves start moving in the same direction? Informed opinions, as ever, are welcome.

Notes from the field: USB 3.0 PCI Express cards, HP ML350 G6 and Server Core

If I search the web, get little help, and then solve a problem, I make a point of posting so that someone else will have a better experience. The challenge was this: finding a USB 3.0 PCI Express card that works in an HP ML350 G6 server, a popular choice for small business duties such as Small Business Server or Hyper-V Server. This particular example runs Hyper-V Server 2008 R2, based on Server Core, which can sometimes be awkward for installing drivers.

USB 3.0 is theoretically around 10 times faster than USB 2.0. If you are transferring large files or performing backup to an external drive, it can make a huge difference to performance.

Trawling the web was not particularly helpful. As this expert notes, there is no officially supported or recommended option for USB 3.0 on an ML350:

The ML350 G5 and G6 servers do not have, as a recommended option, a USB 3.0 and e-SATA controller, which would be clear to you by referring the quickspecs of the servers.

If you take the view that only recommended and certified components should be fitted to a server, give up and stop reading now. I do not disagree, but I tend to a pragmatic approach, depending on your budget and how system-critical the server in question is.

Further, it can work. This guy used a HighPoint 1144A card and it kind of works, though on investigating I found some users reporting that only two of the four ports actually work and that you have to tolerate errors in Device Manager; it does not seem ideal. Another user noted that HP’s own card (which is designed for workstations and not the ML350) did not work, though maybe it works for others; I am not sure.

I did find some references to success with the Renesas USB 3.0 chipset, so I found a StarTech card that uses it, the PEXUSB3S2. I fitted it, but the server would not boot. A red LED on the server front panel indicated a “system critical” issue. Shame.

I tried a different card, bought in haste from Maplins. This one is a Transcend TS-PDU3. It also has a Renesas chipset. I fitted this to the PCI Express x16 slot in the ML350. Note: if you do this, you will need some kind of extender cable for the power, since this (and most USB 3.0 cards) require additional power direct from the power supply. The ML350 G6, at least in my case, has plenty of spare Molex power connectors, but they are on short cables and sited at the front of the computer, whereas the PCI Express slots are at the back.

Good news: the server booted.

Next up, drivers. No CD comes with this particular card, but you can download from the Transcend site. There are two drivers for different versions of the TS-PDU3. I used the second version (Molex and SATA power connectors). Fortunately the setup ran perfectly on Server Core; success.

I took the StarTech card and tried it in another PC, this one self-assembled with an Intel motherboard. This machine also runs Hyper-V Server, but the 2012 version. The machine booted properly, but the setup on the supplied CD did not run.

“Sorry, the install wizard can’t find the proper component for the current platform”, it remarked cryptically.

I went along to the StarTech site and found an updated driver which looks remarkably similar to the one I had installed for the Transcend card. It ran perfectly and all is well.

This is a good moment to mention Devcon.exe, an essential tool if you are installing device drivers on Server Core. You can use the GUI Device Manager remotely, but it is read-only. Devcon.exe is part of the WDK (Windows Driver Kit), and it is not too hard to find. Make sure you use the right version (32-bit or 64-bit) for your system.

On server core, run:

Devcon status * > devices.txt

to output the status of your devices to a text file. Open it in Notepad, which works on Server Core, and look for the word “problem” to see if there are issues. For example, Problem 28 is “no driver”. You also get the hardware ID from this output, needed if you use Devcon to install or update a driver. You may find things like audio devices that are not working; unlikely to be a worry on Server Core.
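
To then fix a device that shows a problem, the usual pattern is to list the hardware IDs and point Devcon at the right INF; the INF path and hardware ID below are only examples, not values from my system:

Devcon hwids *
Devcon update C:\drivers\usb3\driver.inf "PCI\VEN_1912&DEV_0015"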

In my case, on both servers, I can see that the USB 3.0 card has been correctly detected and that the driver is running.

Why did the StarTech card not work on the ML350? Here I am going to shrug and say that PCI Express cards can be problematic. Equally, if I get good results and no unexpected behaviour from the Transcend card, I am not going to worry that it is a cheap card that does not belong in a server.

The truth is, if you need USB 3.0 you really need it, and the only alternative is a new server.

How to run Server Manager or any application as a different user in Windows 8

If you are running Windows Server 2012 you can install the Remote Server Administration Tools on Windows 8, which lets you administer your server from the comfort of the Windows 8 GUI, even if your servers are Server Core.

However, it is unlikely that you log onto your Windows 8 client with the same credentials you use to manage your servers.

The solution is to run the tools as a different user. The approach you use depends on which tool you are using. If you run PowerShell, for example, you can use the Enter-PSSession cmdlet with the Credential argument:

Enter-PSSession yourservername -Credential yourdomain\youradmin

This will pop up a login prompt so you can start an administrative PowerShell session on the server.

But what about Server Manager? If you go to the Start screen (after installing the remote tools) and type Server Manager, you can right-click the shortcut (or flick up) and get these options:

Run as administrator will not help you, since this is the local administrator. Instead, choose Open file location.

Next, hold down the shift key and right-click the shortcut for Server Manager:

From the pop-up menu choose Run as different user and enter your server admin credentials.
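
As an aside, you can get the same result from a command prompt with runas, which saves the trip through Explorer; something like this, assuming the default install location:

runas /user:yourdomain\youradmin "C:\Windows\System32\ServerManager.exe"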

Now you have a nice Dashboard from which to manage your remote server.

Improving Windows Server: the really hard problem

At Microsoft’s Build conference last week I attended a Server 2012 press event led by Jeffrey Snover, the Lead Architect for the Windows Server Division.

He and others spoke about the key features of Server 2012 and how it justifies Microsoft’s claim that it is the cornerstone of the Cloud OS.

It is a strong release; but after the event I asked Snover what he thought about a problem which is at the micro-management level, far removed from the abstractions of cloud.

The Windows event log, I observed, invariably fills with errors and warnings. Many of these are benign; but conscientious administrators spend significant effort investigating them, chasing down knowledgebase articles, and trying to tweak Windows Server in order to fix them. It is a tough and time-consuming task.

When, I asked, will we see an edition of Windows Server that does a better job of eliminating useless and unnecessarily repetitive log entries and separating those which really matter from those which do not?

[I realise that the Event Viewer makes some effort to do this but in my experience it falls short.]

That’s hard, he said. It will take a long time.

Which is better than saying that the problem will never be solved; but you wonder.

I also realise that this issue is not unique to Windows. Your Linux or Mac machine also has logs full of errors and warnings. There is an argument that Windows makes them too easy to find, to the extent that scammers exploit it by cold-calling users (generally not server admins) to persuade them that they have a virus infection. On the other hand, ease of access to logs is a good thing.

What is hard is discerning, with respect to any specific report, whether it matters and what action if any is required. One reason, perhaps, why we will always need system administrators.