Category Archives: systems

Amazon Linux 2023: designed to be disposable

Amazon Linux 2023 is the default for Linux VMs on AWS EC2 (Elastic Compute Cloud). Should you use it? It is a DevOps choice; the main reason you might use it is that it feels like playing safe: AWS support will understand it, it should be performance-optimised for EC2, and it should work smoothly with AWS services.

Amazon Linux 2 was released in June 2018 and was the latest production version until March 2023, by which time it was very out of date. Based on CentOS 7, it was pretty standard and you could easily use additional repositories such as EPEL (Extra Packages for Enterprise Linux). It is easy to keep up to date with sudo yum update. However, there is no in-place upgrade to the new version.

Amazon Linux 2023 is different in character. It was released in March 2023 and the idea is to have a major release every 2 years, and to support each release for 5 years. It does not support EPEL or repositories other than Amazon’s own. The docs say:

At this time, there are no additional repositories that can be added to AL2023. This might change in the future.

Confusingly, the docs also describe how to add an external repository. You can also compile your own RPMs and install them that way; but if you do, keeping them up to date is down to you.

The key to why this is so lies in what AWS calls deterministic upgrades. Each release, including minor releases, is locked to a specific version of the repository. You can upgrade to a new release, but you have to specify it explicitly. This is what I got today from my installation on Hyper-V:

Amazon Linux 2023 offering a new release

The command dnf check-release-update looks for a new release and tells you how to upgrade to it, but does not do so by default.
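Checking for and moving to a new release is therefore a deliberate, two-step affair; something like this (the release version string below is only an example):

    # report whether a newer AL2023 release exists; does not install anything
    sudo dnf check-release-update

    # upgrade to a named release - you have to spell out the version you want
    sudo dnf upgrade --releasever=2023.3.20240219
    sudo reboot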

The reason, the docs explain, is that:

With AL2023, you can ensure consistency between package versions and updates across your environment. You can also ensure consistency for multiple instances of the same Amazon Machine Image (AMI). With the deterministic upgrades through versioned repositories feature, which is turned on by default, you can apply updates based on a schedule that meets your specific needs.

The idea is that if you have a fleet of Amazon Linux 2023 instances they all work the same. This is ideal for automation. The environment is predictable.

It is not ideal though if you have, say, one server, or a few servers doing different things, and you want to run them for a long time and keep them up to date. This will work, but the operating system is designed to be disposable. In fact, the docs say:

To apply both security and bug fixes to an AL2023 instance, update the DNF configuration. Alternatively, launch a newer AL2023 instance.

The emphasis (on the last sentence) is mine; but if you have automation so that a new instance can be fired up configured as you want it, launching a new instance is just as logical as updating an existing one, and arguably safer.
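As I read the docs, the DNF configuration change they refer to is an override of the releasever variable, so that dnf always resolves to the newest repository rather than the one the instance was launched with; roughly:

    # opt out of version locking: always track the latest AL2023 release
    echo "latest" | sudo tee /etc/dnf/vars/releasever
    sudo dnf upgrade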

My last server? HP ML310e G8 quick review

Do small businesses still need a server? In my case, I do still run a couple, mainly for trying out new releases of server products like Windows Server 2012 R2, System Center 2012, Exchange and SharePoint. The ability to quickly run up VMs for testing software is of huge value; you can do this with just a desktop but running a dedicated hypervisor is convenient.

My servers run Hyper-V Server 2012 R2, the free version, which is essentially Server Core with just the Hyper-V role installed. I have licenses for full Windows server but have stuck with the free one partly because I like the idea of running a hypervisor that is stripped down as far as possible, and partly because dealing with Server Core has been educational; it forces you into the command line and PowerShell, which is no bad thing.

Over the years I have bought several of HP’s budget servers and have been impressed; they are inexpensive, especially if you look out for “top value” deals, and work reliably. In the past I’ve picked the ML110 range, but this is now discontinued (though the G7 is still around if you need it). The main choice now is either the small Proliant Gen8 MicroServer, which packs in space for 4 SATA drives and up to 16GB RAM via 2 PC3 DDR3 DIMM slots, and supports the dual-core Intel Celeron G1610T or Pentium G2020T; or the larger ML310e Gen8 series, with space for 4 3.5" or 8 small form factor SATA drives and 4 PC3 DDR3 DIMM slots for up to 32GB RAM, with support for Core i3 or Xeon E3 processors with up to 4 cores. Both use the Intel C204 chipset.

I picked the ML310e because a 4-core processor with 32GB RAM is gold for use with a hypervisor, and there is not a huge difference in cost. While in a production environment it probably makes sense to use the official HP parts, I used non-HP RAM and paid around £600 plus VAT for a system with a Xeon E3-1220v2 4-core CPU, 32GB RAM, and a 500GB drive. I stuck in two budget 2TB SATA drives to make up a decent server for less than £800 all-in; it will probably last three years or more.

There is now an HP ML310e Gen8 v2, which might partly explain why the first version is on offer for a low price; the differences do not seem substantial, except that version 2 has two USB 3.0 ports on the rear in place of four USB 2.0 ports, and supports Xeon E3 v3 processors.

Will I replace this server? The shift to the cloud means that I may not bother; I was not even sure about this one. You can run up VMs in the cloud easily, on Amazon EC2 or Microsoft Azure, and for test and development that may be all you need. That said, I like the freedom to try things out without worrying about subscription costs. I have also learned a lot by setting up systems that would normally be run by larger businesses; it has given me a better understanding of the problems IT administrators encounter.

image

So how is the server? It is just another box of course, but feels well made. There is an annoying lock on the front cover; you can’t remove the side panel unless this is unlocked, and you can’t remove the key unless it is locked, so the solution if you do not need this little bit of physical security is to leave the key in the lock. It does not seem worth much to me since a miscreant could easily steal the entire server and rip off the panel at leisure.

On the front you get 4 USB 2.0 ports, UID LED button, NIC activity LED, system health LED and power button.

image

The main purpose of the UID (Unit Identifier) button is to help identify your server from the rear if it is in a rack. You press the button on the front and an LED lights at the rear. Not that much use in a micro tower server.

Remove the front panel and you can see the drive cage:

image

Hard drives are in caddies which are easily pulled out for replacement. However, note the “Non hot plug” label on these units; you must turn the server off before swapping a drive.

You might think that you have to buy HP drives which come packaged in caddies. This is not so; if you remove one of the caddies you find it is not just a blank, but allows any standard 3.5" drive to be installed. The metal brackets in the image below are removed and you just stick the drive in their place and screw the side panels on.

image

Take the side panel off and you will see a tidy construction with the 350W power supply, 4 DIMM slots, 4 PCI Express slots (one x16, two x8, one x4), and a transparent plastic baffle that ensures correct air flow.

image

The baffle is easily removed.

image

What you see is pretty much as it is out of the box, but with RAM fitted, two additional drives, and a PCI Express USB 3.0 card, since (annoyingly) the server comes with USB 2.0 only – fixed in the version 2 edition.

On the rear are four more USB 2.0 ports, two gigabit NIC ports, a blank where a dedicated ILO (Integrated Lights-Out) port would be, and video and serial connectors.

image

Although there is no ILO port on my server, ILO is installed. The luggage label shows the DNS name you need to access it. If you can’t get at the label, you can look at your DHCP server and see what address has been allocated to ILOxxxxxxxxx and use that. Once you log in with a web browser you can change this to a fixed IP address; probably a good idea in case, in a crisis, the DHCP server is not working right.
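If your DHCP role runs on Windows Server 2012, a quick way to find that lease from PowerShell looks something like this (the server name and scope are placeholders for your own):

    # list DHCP leases whose hostname starts with ILO (requires the DhcpServer module)
    Get-DhcpServerv4Lease -ComputerName "dc1" -ScopeId 192.168.0.0 |
        Where-Object { $_.HostName -like "ILO*" } |
        Select-Object IPAddress, HostName, ClientId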

ILO is one of the best things about HP servers. It is a little embedded system, isolated from whatever is installed on the server, which gets you access to status and troubleshooting information.

image

Its best feature is the remote console which gives you access to a virtual screen, keyboard and mouse so you can get into your OS from a remote session even when the usual remote access techniques are not working. There are now .NET and mobile options as well as Java.

image

Unfortunately there is a catch. Try to use this and a license will be demanded.

image

However, you can sign up for an evaluation that works for a few weeks. In other words, your first disaster is free; after that you have to pay. The license covers several servers and is not good value for an individual one.

Everything is fine on the hardware side, but what about the OS install? This is where things went a bit wrong. HP has a system called Intelligent Provisioning built in. You pop your OS install media in the DVD drive (or there are options for network install), run a wizard, and Intelligent Provisioning will update its firmware, set up RAID, and install your OS with the necessary drivers and HP management utilities included.

I don’t normally bother with all this but I thought I should give it a try. Unfortunately Server 2012 R2 is not supported, so I tried the Server 2012 x64 option, hoping this would also work with Hyper-V Server; but no go, it failed with an unattend script error.

Next I set up RAID manually using the nice HP management utility in the BIOS and tried to install using the storage drivers saved to a USB pen drive. It seemed to work but was not stable; it would sometimes fail to boot, and sometimes you could log on and do a few things but Windows would crash with a Kernel_Security_Check_Failure.

Memory problems? Drive problems? It was not clear; but I decided to disable embedded RAID in the BIOS and use standard AHCI SATA. Install proceeded perfectly with no need for additional drivers, and the OS is 100% stable.

I did not want to give up RAID though, so I wondered if I could use Storage Spaces on Hyper-V Server. Apparently you can. I joined the Hyper-V Server to my domain and then used Server Manager remotely to create a Storage Pool from my pair of 2TB drives, and then a mirrored virtual disk.
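The equivalent from PowerShell on the Hyper-V Server itself is along these lines, assuming the two new drives are the only ones eligible for pooling (the pool and disk names are my own):

    # find disks that can be pooled, then create the pool
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

    # create a mirrored virtual disk using all the space, then bring it online as a volume
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Mirror1" -ResiliencySettingName Mirror -UseMaximumSize
    Get-VirtualDisk -FriendlyName "Mirror1" | Get-Disk |
        Initialize-Disk -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS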

My OS drive is not on resilient storage but I am not too concerned about that. I can back up the OS (wbadmin works), and since it does nothing more than run Hyper-V, recovery should be straightforward if necessary.
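Backing up the OS volume to a share is a one-liner (the target path is a placeholder):

    wbadmin start backup -backupTarget:\\nas\backups -allCritical -quiet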

After that I moved across some VMs using a combination of Move and Export with no real issues, other than finding Move too slow on my system when you have a large VHD to copy.
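Both operations can also be scripted from the Hyper-V PowerShell module, which is handy on Server Core; roughly, with example names and paths:

    # export a copy of a VM to a file share
    Export-VM -Name "TestVM" -Path "\\nas\vm-export"

    # or move it directly to another host, storage and all ("shared nothing" migration)
    Move-VM -Name "TestVM" -DestinationHost "hv2" -IncludeStorage -DestinationStoragePath "D:\VMs\TestVM"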

The server overall seems a good bargain; HP may have its problems elsewhere, but the department that turns out budget servers seems to do an excellent job. My only complaint so far is the failure of the embedded RAID storage drivers on Server 2012 R2, which I hope HP will fix with an update.

Making sense of Microsoft’s Cloud OS

People have been talking about “the internet operating system” for years. The phrase may have been muttered in Netscape days in the nineties, when the browser was going to be the operating system; then in the 2000s it was the Google OS that people discussed. Most notably though, Tim O’Reilly reflected on the subject, for example here in 2010 (though as he notes, he had been using the phrase way earlier than that):

Ask yourself for a moment, what is the operating system of a Google or Bing search? What is the operating system of a mobile phone call? What is the operating system of maps and directions on your phone? What is the operating system of a tweet?

On a standalone computer, operating systems like Windows, Mac OS X, and Linux manage the machine’s resources, making it possible for applications to focus on the job they do for the user. But many of the activities that are most important to us today take place in a mysterious space between individual machines.

It is still worth reading, as he teases out what OS components look like in the context of an internet operating system, and notes that there are now several (but only a few) competing internet operating systems, platforms which our smart mobile phones or tablets tap into and to some extent lock us in.

But what on earth (or in the heavens) is Microsoft’s “Cloud OS”? I first heard the term in the context of Server 2012, when it was in preview at the end of 2011. Microsoft seems to like how it sounds, because it is getting another push in the context of System Center 2012 Service Pack 1, just announced. In particular, Michael Park from Server and Tools has posted on the subject:

At the highest level, the Cloud OS does what a traditional operating system does – manage applications and hardware – but at the scope and scale of cloud computing. The foundations of the Cloud OS are Windows Server and Windows Azure, complemented by the full breadth of our technology solutions, such as SQL Server, System Center and Visual Studio. Together, these technologies provide one consistent platform for infrastructure, apps and data that can span your datacenter, service provider datacenters, and the Microsoft public cloud.

In one sense, the concept is similar to that discussed by O’Reilly, though in the context of enterprise computing, whereas O’Reilly looks at a bigger picture embracing our personal as well as business lives. Never forget though that this is marketing speak, and Microsoft consciously works to blur together the idealised principles behind cloud computing with its specific set of products: Windows Azure, Windows Server, and especially System Center, its server and device management piece.

A nagging voice tells me there is something wrong with this picture. It is this: the cloud is meant to ease the administrative burden by making compute power an abstracted resource, managed by a third party far away in a datacenter in ways that we do not need to know. System Center on the other hand is a complex and not altogether consistent suite of products which imposes a substantial administrative burden on those who install and maintain it. If you have to manage your own cloud, do you get any cloud computing benefit?

The benefit is diluted; but there is plentiful evidence that many businesses are not yet ready or willing to hand over their computer infrastructure to a third-party. While System Center is in one sense the opposite of cloud computing, in another sense it counts because it has the potential to deliver cloud benefits to the rest of the business.

Further confusing matters, there are elements of public cloud in Microsoft’s offering, specifically Windows Azure and Windows Intune. Other bits of Microsoft’s cloud, like Office 365 and Outlook.com, do not count here because that is another department, see. Park does refer to them obliquely:

Running more than 200 cloud services for over 1 billion customers and 20+ million businesses around the world has taught us – and teaches us in real time – what it takes to architect, build and run applications and services at cloud scale.

We take all the learning from those services into the engines of the Cloud OS – our enterprise products and services – which customers and partners can then use to deliver cloud infrastructure and services of their own.

There you have it. The Cloud OS is “our enterprise products and services” which businesses can use to deliver their own cloud services.

What if you want to know in more detail what the Cloud OS is all about? Well, then you have to understand System Center, which is not something that can be explained in a few words. I did have a go at this, in a feature called Inside Microsoft’s private cloud – a glossary of terms, for which the link is currently giving a PHP error, but maybe it will work for you.

image

It will all soon be a little out of date, since System Center 2012 SP1 has significant new features. If you want a summary of what is actually new, I recommend this post by Mike Schutz on System Center 2012 SP1; and this post also by Schutz on Windows Intune and System Center Configuration Manager SP1.

My even shorter summary:

  • All System Center products now updated to run on, and manage, Server 2012
  • Upgraded Virtual Machine Manager supports up to 8000 VMs on clusters of up to 64 hosts
  • Management support for Hyper-V features introduced in Server 2012 including the virtual network switch
  • App Controller integrates with VMs offered by hosting service providers as well as those on Azure and in your own datacenter
  • App Controller can migrate VMs to Windows Azure (and maybe back); a nice feature
  • New Azure service called Global Service Monitor for monitoring web applications
  • Back up servers to Azure with Data Protection Manager

and on the device and client management side, there are new Intune and Configuration Manager features. It is confusing; Intune is a kind of cloud-based Configuration Manager, but it has features that are not included in the on-premise Configuration Manager and vice versa. So:

  • Intune can now manage devices running Windows RT, Windows Phone 8, Android and iOS
  • Intune has a self-service portal for installing business apps
  • Configuration Manager integrates with Intune to get supposedly seamless support for additional devices
  • Configuration Manager adds support for Windows 8 and Server 2012
  • PowerShell control of Configuration Manager
  • Ability to manage Mac OS X, Linux and Unix servers in Configuration Manager

What do I think of System Center? On the plus side, all the pieces are in place to manage not only Microsoft servers but a diverse range of servers and a similarly diverse range of clients and devices, presuming the features work as advertised. That is a considerable achievement.

On the negative side, my impression is that Microsoft still has work to do. What would help would be more consistency between the Azure public cloud and the System Center private cloud; a reduction of the number of products in the System Center suite; a consistent user interface across the entire suite; and simplification along the lines of what has been done in the new Azure portal so that these products are easier and more enjoyable to use.

I would add that any business deploying System Center should be thinking carefully about what they still feel they need to manage on-premise, and what can be handed over to public cloud infrastructure, whether Azure or elsewhere. The ability to migrate VMs to Azure could be a key enabler in that respect.

HP discontinues WebOS, considers PC spin-off. Should have stuck with Microsoft

Oh yes, and buys Autonomy, a fast-growing specialist in enterprise knowledge management.

Here’s the news from HP’s announcement:

As part of the transformation, HP announced that its board of directors has authorized the exploration of strategic alternatives for the company’s Personal Systems Group. HP will consider a broad range of options that may include, among others, a full or partial separation of PSG from HP through a spin-off or other transaction. (See accompanying press release.)

HP will discontinue operations for webOS devices, specifically the TouchPad and webOS phones. The devices have not met internal milestones and financial targets. HP will continue to explore options to optimize the value of webOS software going forward.

In addition, HP announced the terms of a recommended transaction for all of the outstanding shares of Autonomy Corporation plc for £25.50 ($42.11) per share in cash.

A few quick comments. First, the failure of webOS does not surprise me. There is not much wrong with webOS as such; in pure technical terms it deserves better. Its focus on adapting web technologies for local mobile applications is far-sighted; it is a more interesting operating system than Android and in some ways it is surprising that it went to HP and not to Google, which is a web technology specialist.

The problem is that HP, despite its size, is not big enough to make a success of webOS on its own. This was my comment from just over a year ago:

Mobile platforms stand (or fall) on several pillars: hardware, software, mobile operator partners, and apps. Apple is powering ahead with all of these. Google Android is as well, and has become the obvious choice for vendors (other than HP) who want to ride the wave of a successful platform. Windows Phone 7 faces obvious challenges, but at least in theory Microsoft can make it work though integration with Windows and by offering developers a familiar set of tools, as I’ve noted here.

It is obvious that not all these platforms can succeed. If we accept that Apple and Android will occupy the top two rungs of the ladder when it comes to attracting app developers, that means HP webOS cannot do better than third; and I’d speculate that it will be some way lower down than that.

Frankly, if HP did not want to do Android, it should have stuck with Microsoft. But this is where the webOS news ties in with the announcement about the Personal Systems Group. HP fell out with Microsoft last year, as I noted in my 2010 retrospective. I said the two companies should make up; but it looks as if HP is more inclined to give up on PCs and pursue other lines that have better margins – like enterprise software.

I am puzzled though by the PSG announcement. It is always curious when a company announces that it might or might not do something, and the fact that HP says it is considering a spin-off of its PC division will be enough to make its customers uncertain about the long-term future of HP PCs; some of them will buy elsewhere as a result. It would have paid HP either to say nothing, or to be more definite and aim for a speedy transition.

All this, on the eve of Microsoft’s detailed unveiling of Windows 8. What are the implications? More than I can put into a single post; but like Gartner’s reports of dramatically declining PC sales in Western Europe presented earlier this week, this is a sign of structural change in the industry.

Microsoft will be glad of one thing: it no longer has this major partner promoting a rival mobile and tablet operating system. Note that HP still is a major partner: even if it sells the Personal Systems Group, its server and services business will still be deeply entwined with Windows.

HP breaks 2.5 million web support links

The internet and search: the greatest resource ever for troubleshooting computer systems.

Except when you follow a promising link to find this:

image

On June 26th, the HP IT Resource Center forums were migrated to the HP Enterprise Business Community. This migration coincided with the release of the new HP Support Center, and the retirement of the legacy ITRC support portal. As part of the transition, we have migrated all ~2.5 million posts and ~712k users from the ITRC forums into the new community site.
As a result of this transition, all links/bookmarks/search results that attempt to load an ITRC forum page will redirect to this announcement page.

I understand the reasons; but I wish companies would think twice before doing this. Or three times. Eventually the search engines will stop listing the broken links, but other references to these support discussions will still be broken.

How much would it cost HP to keep the old links online in read-only form?

It is not just HP of course. These generic “sorry, we broke the link” pages pop up regularly on Microsoft’s site, for example, often after following a link on Microsoft’s own site.

The web is designed to tolerate broken links; it is one of the reasons why it works. However, that is no reason to break them with abandon.

Implications of Amazon’s cloud failure

Amazon is into day three of a major failure of its Elastic Compute Cloud at its North Virginia datacenter, and at the time of writing it is still not fully recovered.

image

I am reminded of a prescient remark by Tony Lucas at Flexiant, a UK cloud provider, who told me a couple of years ago (with commendable honesty) that cloud failures will be rare, but when they occur will be on a grand scale.

It seems that it is hard to engineer around the possibility of cascading failure. I am not sure what happened in North Virginia, but Amazon says on its status page that:

A networking event early this morning triggered a large amount of re-mirroring of EBS volumes in US-EAST-1. This re-mirroring created a shortage of capacity in one of the US-EAST-1 Availability Zones, which impacted new EBS volume creation as well as the pace with which we could re-mirror and recover affected EBS volumes. Additionally, one of our internal control planes for EBS has become inundated such that it’s difficult to create new EBS volumes and EBS backed instances.

It sounds like an automated recovery system built into the compute cloud actually became the problem, as a large number of volumes tried to fix themselves at the same time.

This is not the first Amazon outage, but I believe it is the most severe; though it could have been worse and I have not heard that any data was lost. What are the implications?

Any computer system can fail. There will be a lot of companies reflecting on this though, both those directly affected and others, and realising that the cloud can be a single point of failure, despite the scale and expertise which a company like Amazon invests in high availability.

Is Amazon EC2 more or less likely to fail for an extended period than Salesforce.com? Or Microsoft Azure? Or Google App Engine, or Gmail, or IBM’s evolving SmartCloud? Clearly an excellent question; but I am not sure how we go about answering it other than by reviewing historical performance. I do not expect any of these companies to take advantage of Amazon’s problems to proclaim their own superior resiliency; they will all be worrying too much about the same thing happening on their platforms.

My guess is that the industry will get better at this, and that at some unspecified future moment the chance of one of these cloud platforms failing for three days will become exceedingly small – of course risk can never be eliminated, only reduced.

It seems that the risk is not exceedingly small on Amazon’s cloud today; and we should probably assume that the same applies to other providers.

That is something we have always known, so in one sense nothing has changed. This outage is a sharp reminder though; and planning for failure is a hidden cost of cloud computing that has now been brought into the light.

When will Intel’s Many Integrated Core processors be mainstream?

I’m at Intel’s software tools conference in Dubrovnik, which I have attended for the last three years, and as usual the big topic is concurrent programming and how to write code that takes advantage of the multiple cores in today’s computers.

Clearly this remains a critical subject, but in some ways the progress over these last three years has been disappointing when it comes to the PCs that most of us use. Many machines are only dual-core, which is sub-optimal for concurrent programming since there is an overhead to multi-threaded programming that eats into the benefit of having two cores. Quad core is now common too, and more useful, but what about having 50 or 80 or more cores? This enables massively parallel processing of the kind that you can easily do today with general-purpose GPU programming using OpenCL or NVidia’s CUDA, but not yet on the CPU unless you have a supercomputer. I realise that GPU cores are not the same as CPU cores; but nevertheless they enable some spectacularly fast parallel processing.

I am interested therefore in Intel’s MIC or Many Integrated Core architecture, which combines 50 or more CPU cores on a single chip. MIC is already in preview, with hardware codenamed Knights Corner and a development kit called Knights Ferry. But when will MIC hit the mainstream for servers and workstations, and how long is it until we can have 50 cores on a commodity desktop PC? I spoke to Intel’s chief evangelist James Reinders.

Reinders first gave me some background on MIC:

“We’ve made those bold steps to dual core, quad core and we’ve got even ten core now, but if you look inside those microprocessors they have a very simple structure. All the cores are hooked together and share their connection to memory, through a shared cache usually that’s on the chip. It’s a simple computer structure, and we know from experience when you build computers with more and more processors, that eventually you go to more sophisticated connections between the cores. You don’t build a 1000-processor super computer and hook them all together with a bus to one memory.

“It’s inevitable that on a chip we need to design a more sophisticated connection. That’s what MIC’s about, that’s what the Larrabee project has always been about, a belief that we should take a bunch of x86 cores and hook them together with something more sophisticated. In this case it’s a ring, a bi-directional, 512-bit wide high performance ring, with multiple connections to memory off the chip, which gives us more bandwidth.

“That’s how I look at MIC, it’s putting a cluster-type of design on a chip.”

But what about timing?

“The first place you’ll see this is in servers and in workstations, where there’s a lot of demand for a lot of computation. In that case we’ll see that availability sometime by the end of 2012. The Intel product should be out late in that year.

“When will we see it in other devices? I think that’s a ways off. It’s a very high core count part, more than 50, it’s going to consume a fair amount of power. The same part 18 months later will probably consume half the power. So inside a decade we could see this being common on desktops, I don’t know about mobile devices, it might even make it to tablets. A decade’s a long time, it gives a lot of time for people to come up with innovative uses for it in software.

“We’ll see single core disappear everywhere.”

Incidentally, it is hard to judge how much computing power is “enough”. Although having many CPU cores may seem overkill for everyday computing, things like speech recognition or on-the-fly image processing make devices smarter at the expense of intense processing under the covers. From super computers to smartphones, if more computing capability is available history tells us that we will find ways to use it.

Setting up RemoteApp and secure FTP on Windows

I spent some time setting up RemoteApp and secure FTP for a small business which wanted better remote access without VPN. VPN is problematic for various reasons: it is sometimes blocked by public or hotel wifi providers, it is not suitable for poor connections, performance can be poor, and it means constantly having to think about whether your VPN tunnel is open or not. When I switched from connecting Outlook over VPN to connecting over HTTP, I found the experience better in every way; it is seamless. At least, it would be if it weren’t for the connection settings bug that changes the authentication type by itself on occasion; but I digress.

Enough to say that VPN is not always the best approach to remote access. There’s also SharePoint of course; but there are snags with that as well – it is powerful, but complex to manage, and has annoyances like poor performance when there are a large number of documents in a single folder. In addition, Explorer integration in Windows XP does not always work properly; it seems better in Vista and Windows 7.

FTP on the other hand can simply publish an existing file share to remote users. FTP can be horribly insecure; it is a common reason for usernames and passwords to be passed in plain text over the internet. Fortunately Microsoft now offers an FTP service for IIS 7.0 that can be configured to require SSL for both password exchange and data transmission; I would not consider it otherwise. Note that this is different from the FTP service that ships with the original Server 2008; if you don’t have Server 2008 R2 you need a separate download.
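Jumping ahead a little: once the FTP site exists, the crucial SSL settings can be scripted as well as set in IIS Manager. A sketch from PowerShell, assuming a site called FtpSite and the WebAdministration module:

    Import-Module WebAdministration

    # require SSL for both the control channel (credentials) and the data channel
    Set-ItemProperty "IIS:\Sites\FtpSite" -Name ftpServer.security.ssl.controlChannelPolicy -Value "SslRequire"
    Set-ItemProperty "IIS:\Sites\FtpSite" -Name ftpServer.security.ssl.dataChannelPolicy -Value "SslRequire"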

So how was the setup? Pretty frustrating at the time; though now that it is all working it does not seem so bad. The problem is the number of moving parts, including your network configuration and firewall, Active Directory, IIS, digital certificates, and Windows security.

FTP is problematic anyway, thanks to its use of multiple ports. Another point of confusion is that FTP over SSL (FTPS) is not the same thing as Secure FTP (SFTP), which runs over SSH; Microsoft offers an FTPS implementation. A third issue is that neither of Microsoft’s FTP clients, Internet Explorer or the command-line FTP client, supports FTP over SSL, so you have to use a third-party client like FileZilla. I also discovered that you cannot (easily) run an FTPS client behind an ISA Server firewall, which explained why my early tests failed.

Documentation for the FTP server is reasonable, though you cannot find all the information you need in one place. I also found the configuration perplexing in places. Take this dialog for example:

image

The Data Channel Port Range is disabled with no indication why – the reason is that you set it for the entire IIS server, not for a specific site. But what is the “External IP Address of Firewall”? The wording suggests the public IP address; but the example suggests an internal, private address. I used the private address and it worked.
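The split makes a little more sense when scripted: the data channel port range is a server-level property, while the firewall address is set per site. A sketch, with example values:

    # server-wide passive data channel port range (open the same range on the firewall)
    Set-WebConfigurationProperty -PSPath "MACHINE/WEBROOT/APPHOST" -Filter "system.ftpServer/firewallSupport" -Name lowDataChannelPort -Value 50000
    Set-WebConfigurationProperty -PSPath "MACHINE/WEBROOT/APPHOST" -Filter "system.ftpServer/firewallSupport" -Name highDataChannelPort -Value 50100

    # per-site firewall address - the private address worked for me
    Set-ItemProperty "IIS:\Sites\FtpSite" -Name ftpServer.firewallSupport.externalIp4Address -Value "192.168.0.10"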

As for RemoteApp, it is a piece of magic that lets you remote the UI of a Windows application, so it runs on the server but appears to be running locally. It is essentially the same thing as remote desktop, but with the desktop part hidden so that you only see the window of the running app. One of the attractions is that it looks more secure, since you can give a semi-trusted remote user access to specified applications only, but this security is largely illusory because under the covers it is still a remote log-in and there are ways to escalate the access to a full desktop. Open a RemoteApp link on a Mac, for example, and you get the full desktop by default, though you can tweak it to show only the application, but with a blank desktop background:

image

Setup is laborious; there’s a step by step guide that covers it well, though note that Terminal Services is now called Remote Desktop Services. I set up TS Gateway, which tunnels the Terminal Server protocol through HTTPS, so you don’t have to open any additional ports in your firewall. I also set up TS Web Access, which lets users navigate to a web page and start apps from a list, rather than having to get hold of a .RDP configuration file or setup application.
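For reference, the .RDP file that TS Web Access hands out is just a text file; a minimal RemoteApp-through-gateway example looks roughly like this (server names and the application alias are illustrative, and I am quoting the settings from memory):

    full address:s:rdshost.example.local
    remoteapplicationmode:i:1
    remoteapplicationprogram:s:||wordpad
    remoteapplicationname:s:WordPad
    gatewayhostname:s:remote.example.com
    gatewayusagemethod:i:1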

If you must run a Windows application remotely, RemoteApp is a brilliant solution, though note that you need additional Client Access Licenses for these services. Nevertheless, it is a shame that despite the high level of complexity in the configuration of TS Gateway, involving a Connection Authorization Policy and a Resource Authorization Policy, there is no setting for “only allow users to run these applications, nothing else”. You have to do this separately through Software Restriction Policies – the document Terminal Services from A to Z from Cláudio Rodrigues at WTS.Labs has a good explanation.

I noticed that Rodrigues is not impressed with the complexity of setting up RemoteApp with TS Gateway and the rest on Windows Server 2008 R2:

So years ago (2003/2004) we had all that sorted out: RDP over HTTPS, Published Applications, Resource Based Load Balancing and so on and no kidding, it would not take you more than 30 minutes to get all going. Simple and elegant design. More than that, I would say, smart design.

Today after going through all the stuff required to get RDS Web Access, RDS Gateway and RDS Session Broker up and running I am simply baffled. Stunned. This is for sure the epitome of bad design. I am still banging my head in the wall just thinking about how the setup of all this makes no sense and more than that, what a steep learning curve this will be for anyone that is now on Windows Server 2003 TS.

What amazes me the most is Microsoft had YEARS to watch what others did and learn with their mistakes and then come up with something clean. Smart. Unfortunately that was not the case … Again, I am not debating if the solution at the end works. It does. I am discussing how easy it is to setup, how smart the design is and so on. And in that respect, they simply failed to deliver. I am telling you that based on 15+ years of experience doing nothing else other than TS/RDS/Citrix deployments and starting companies focused on TS/RDS development. I may look stupid indeed but I know some shit about these things.

Simplicity and clean design are key elements on any good piece of software, what someone in Redmond seems to disagree.

My own experience was not that bad, though admittedly I did not look into load balancing for this small setup. I agree though: you have to do a lot of clicking to get this stuff up and running. I am reminded of the question I asked a few months back: Should IT administration be less annoying? I think it should, if only because complexity increases the risk of mistakes, or of taking shortcuts that undermine security.