A nearly perfect boombox: take your audio on the road with TDK’s Trek Max

I first heard the Trek Max at a busy press exhibition. Audio rarely sounds good in big noisy rooms, but I was struck that this TDK device was not dreadful; it made a valiant attempt to deliver the music. There was at least a little bass, there was volume, there was clarity, and all this from a small box, 24 x 10 x 5cm to be precise.


I asked for one on loan to review and it has not disappointed. There is not much in the box, just the Bluetooth speaker, a power supply/charger, and some mostly useless bits of paper.


The hardest task was getting that sticker off the front without leaving a gooey mark. Having done that to the best of my ability, I charged the unit, and paired with a phone. My attempt to use NFC (one-touch Bluetooth connect) failed with a Windows Phone, but worked with an Android tablet. It is no big deal; pairing is straightforward with or without NFC.

Then I played some music. I put on Santana Abraxas; this thing boogies, and does a great job with the complex percussion and propulsive guitar. I played Adele’s 21; it sounds like Adele singing, not the squawky sound you might expect from a device this size, and the drums on Don’t You Remember have a satisfying thud. I played Beethoven’s Third Symphony; the drama and power of the opening movement came over convincingly, albeit in miniature form.

I am not going to pretend that this is the best Bluetooth speaker I have heard; it has tough competition at much higher prices. Nor do I judge a thing like this against a home audio setup or a larger Bluetooth speaker that is only semi-portable. This is something to take with you, and it even sports a “weatherized” case. The manual makes clear that this means “splash-resistant” rather than anything more serious, and then only if you close the rubber flap over the panel on the right-hand side; still, a handy feature.

Any clever tricks? Just a couple. One is that you can use the Trek Max as a battery charger for your mobile phone (or any device compatible with USB power). Here is that side panel in detail:


From right to left, there is the USB power output (it has no other function), an AUX in for a wired audio connection, power in, and a master power switch which turns the entire unit off (including the USB power output).

The other party trick is the ability to work as a speakerphone. You are grooving along to music from your mobile when a call comes in. The music stops and a call button on the top illuminates. Press to answer, take the call hands-free, and press twice to end it. Neat.


Note that the Trek Max is surprisingly heavy for its size, around 1.25kg. It does not surprise me; there is a lot packed in, including a decent battery.

The speaker configuration is right and left drivers, a central woofer for the (mono-ed) lower frequencies, and passive bass radiators at the back to boost the bass.

It is worth noting that the Trek Max goes surprisingly loud – louder than I have heard before from a device of this size. That is important if you are outside or in a noisy room – but please do not annoy others too much!

The Trek Max A34 replaces the Trek A33. What is the difference? Primarily, NFC Bluetooth pairing; pause, resume, forward and back buttons (they work fine); and better sync with iOS devices: on these, the volume control on the Apple device directly controls the volume on the Trek, whereas on other devices the volume controls are independent.

Conclusion: a great little device, and make sure you hear it before dismissing it as too pricey for something of this size.


Weight: 1.25kg
Size: 24.1 x 9.8 x 5cm
Power output: 15W total
Bluetooth: 2.1 + EDR, A2DP, HFP, HSP, AVRCP
Battery life: 8 hours

Quick reflections on Amazon re:Invent, open source, and Amazon Web Services

Last week I was in Las Vegas for my first visit to Amazon’s annual developer conference re:Invent. There were several announcements, the biggest being a new relational database service called RDS Aurora – a drop-in replacement for MySQL but with 3x write performance and 5x read performance as well as resiliency benefits – and EC2 Container Service, for deploying and managing Docker app containers. There is also AWS Lambda, a service which runs code in response to events.
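Lambda’s model is simply a function that the service invokes with an event payload. Here is a minimal conceptual sketch of such a handler; the event fields used (bucket and key) are invented for illustration, not a real AWS event schema:

```python
# Minimal sketch of the Lambda programming model: a plain function that the
# service invokes once per event. The event fields (bucket/key) are
# illustrative only, not a real AWS event schema.
def handler(event, context):
    # Pull details out of the triggering event
    bucket = event.get("bucket", "unknown")
    key = event.get("key", "unknown")
    # Whatever the handler returns is passed back to the invoker
    return {"status": "processed", "object": f"{bucket}/{key}"}

# Simulate an invocation locally
result = handler({"bucket": "photos", "key": "cat.jpg"}, None)
```

The point of the model is that you write and pay for only the function; provisioning servers to run it is Amazon’s problem.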

You could read this news anywhere, but the advantage of being in Vegas was to immerse myself in the AWS culture and get to know the company better. Amazon is both distinctive and disruptive, and three things that its retail operation and its web services have in common are large scale, commodity pricing, and customer focus.

Customer focus? Every company I have ever spoken to says it is customer focused, so what is different? Well, part of the press training at Amazon seems to be that when you ask about its future plans, the invariable answer is “what customers demand.” No doubt if you could eavesdrop at an Amazon executive meeting you would find that this is not entirely true, that there are matters of strategy and profitability which come into play, but this is the story the company wants us to hear. It also chimes with that of the retail operation, where customer service is generally excellent; the company would rather risk giving a refund or replacement to an undeserving customer and annoy its suppliers than vice versa. In the context of AWS this means something a bit different, but it does seem to me part of the company culture. “If enough customers keep asking for something, it’s very likely that we will respond to that,” marketing executive Paul Duffy told me.

That said, I would not describe Amazon as an especially open company, which is one reason I was glad to attend re:Invent. I was intrigued, for example, that Aurora is a drop-in replacement for an open source product, and wondered if it actually uses any of the MySQL code. That seems unlikely, since MySQL’s GPL license would require Amazon to publish its own code if it did; then again, the InnoDB storage engine code at least used to be available under a dual license, so it is possible. When I asked Duffy, though, he said:

We don’t … at that level, that’s why we say it is compatible with MySQL. If you run the MySQL compatibility tool that will all check out. We don’t disclose anything about the inner workings of the service.

This of course touches on the issue of whether Amazon takes more from the open source community than it gives back.

Senior VP of AWS Andy Jassy

Someone asked Senior VP of AWS Andy Jassy, “what is your strategy of contributing to the open source ecosystem?”, to which he replied:

We contribute to the open source ecosystem for many years. Xen, MySQL space, Linux space, we’re very active contributors, and will continue to do so in future.

That was it, that was the whole answer. Aurora, despite Duffy’s reticence, seems to be a completely new implementation of the MySQL API and builds on its success and popularity; could Amazon do more to share some of its breakthroughs with the open source community from which MySQL came? I think that is arguable; but Amazon is hard to hate since it tends to price so competitively.

Is Amazon worried about competition from Microsoft, Google, IBM or other cloud providers? I heard this question asked on several occasions, and the answer was generally along the lines that AWS is too busy to think about it. Again this is perhaps not the whole story, but it is true that AWS is growing fast and dominates the market to the extent that, say, Azure’s growth does not keep it awake at night. That said, you cannot accuse Amazon of complacency since it is adding new services and features at a high rate; 449 so far in 2014 according to VP and Distinguished Engineer James Hamilton, who also mentioned 99% usage growth in EC2 year on year, over 1,000,000 active customers, and 132% data transfer growth in the S3 storage service.

Cloud thinking

Hamilton’s session on AWS Innovation at Scale was among the most compelling of those I attended. His theme was that cloud computing is not just a bunch of hosted servers and services, but a new model of computing that enables new and better ways to run applications that are fast, resilient and scalable. Aurora is actually an example of this. Amazon has separated the storage engine from the relational engine, he explained, so that only deltas (the bits that have changed) are passed down for storage. The data is replicated 6 times across three Amazon availability zones, making it exceptionally resilient. You could not implement Aurora on-premises; only a cloud provider with huge scale can do it, according to Hamilton.
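Hamilton’s description can be caricatured in a few lines of code: the database node ships only change records (deltas) down to the storage layer, which applies each one to six replicas spread across three zones. This is purely a conceptual sketch of the idea as he described it, not Aurora’s actual protocol:

```python
# Toy sketch of delta-shipping replication as Hamilton described it:
# the database sends only the changed data, and the storage layer applies
# each delta to six replicas across three availability zones (two per zone).
ZONES = ["az-1", "az-2", "az-3"]
replicas = {zone: [dict(), dict()] for zone in ZONES}  # 2 per zone = 6 total

def apply_delta(delta):
    """Apply one change record to every replica in every zone."""
    for zone in ZONES:
        for replica in replicas[zone]:
            replica.update(delta)

# Only the changed row crosses the wire; full pages never do
apply_delta({"row:42": "new value"})
```

Losing a replica, or even a whole zone, still leaves four copies of every change, which is why Hamilton argues the design is exceptionally resilient.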

Distinguished Engineer James Hamilton

Hamilton was fascinating on the subject of networking gear – the cards, switches and routers that push bits across the network. Five years ago Amazon decided to build its own, partly because it considered the commercial products to be too expensive. Amazon developed its own custom network protocol stack. It worked out a lot cheaper, he said, since “even the support contract for networking gear was running into 10s of millions of dollars.” The company also found that reliability increased. Why was that? Hamilton quipped about how enterprise networking products evolve:

Enterprise customers give lots of complicated requirements to networking equipment producers who aggregate all these complicated requirements into 10s of billions of lines of code that can’t be maintained and that’s what gets delivered.

Amazon knew its own requirements and built for those alone. “Our gear is more reliable because we took on an easier problem,” he said.

AWS is also in a great position to analyse performance. It runs so much kit that it can see patterns of failure and where the bottlenecks lie. “We love metrics,” he said. There is an analogy with the way the popularity of Google search improves Google search; it is a virtuous circle that is hard for competitors to replicate.

Closing reflections

Like all vendor-specific conferences there was more marketing than I would have liked at re:Invent, but there is no doubting the excellence of the platform and its power to disrupt. There are aspects of public cloud that remain unsettling; things can go wrong and there will be nothing you can do but wait for them to be fixed. The benefits though are so great that it is worth the risk – though I would always advocate having some sort of plan B, off-cloud or backed up with another cloud provider, if that is feasible.

Microsoft’s Azure outage: a troubling account of what went wrong

Microsoft’s Jason Zander has published an account of what went wrong yesterday, causing failure of many Azure services for a number of hours. The incident is described as running from 0:51 AM to 11:45 AM on November 19th, though the actual length of the outage varied; an Azure application which I developed was offline for 3.5 hours.

Customers are not happy. From the comments:

So much for traffic manager for our VM’s running SQL server in a high availability SQL cluster $6k per month if every data center goes down. We were off for 3 hrs during the worst time of day for us; invoicing and loading for 10,000 deliveries. CEO is wanting to pull us out of the cloud.

So what went wrong? It was a bug in an update to the Storage Service, which impacts other services such as VMs and web sites since they have a dependency on the Storage Service. The update was already in production but only for Azure Tables; this seems to have given the team the confidence to deploy the update generally but a bug in the Blob service caused it to loop and stop responding.

Here is the most troubling line in Zander’s report:

Unfortunately the issue was wide spread, since the update was made across most regions in a short period of time due to operational error, instead of following the standard protocol of applying production changes in incremental batches.

In other words, this was not just a programming error, it was an operational error that meant the usual safeguards whereby a service in one datacenter takes over when another fails did not work.
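The “incremental batches” protocol that was skipped is easy to illustrate: deploy to one slice, check its health, and only widen the rollout if the check passes. A conceptual sketch (the region names and health check are invented for illustration):

```python
# Sketch of a staged (incremental-batch) rollout: deploy region by region,
# halting as soon as a health check fails, so a bad update cannot reach
# every region at once. Region names and the health check are illustrative.
REGIONS = ["region-a", "region-b", "region-c", "region-d"]

def staged_rollout(regions, deploy, healthy):
    done = []
    for region in regions:
        deploy(region)
        if not healthy(region):
            # Stop the rollout: return what was deployed and where it failed
            return done, region
        done.append(region)
    return done, None

# Simulate an update that breaks the second region
deployed, failed = staged_rollout(
    REGIONS,
    deploy=lambda r: None,            # stand-in for the real deployment step
    healthy=lambda r: r != "region-b",
)
```

With this discipline a bug like the Blob service loop would have taken down one batch, not most regions at once; that is what makes the operational error the heart of the story.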

Then there is the issue of communication. This is critical since while customers understand that sometimes things go wrong, they feel happier if they know what is going on. It is partly human nature, and partly a matter of knowing what mitigating action you need to take.

In this case Azure’s Service Health Dashboard failed:

There was an Azure infrastructure issue that impacted our ability to provide timely updates via the Service Health Dashboard. As a mitigation, we leveraged Twitter and other social media forums.

This is an issue I see often; online status dashboards are great for telling you all is well, but when something goes wrong they are the first thing to fall over, or else fail to report the problem. In consequence users all pick up the phone simultaneously and cannot get through. Twitter is no substitute; frankly if my business were paying thousands every month to Microsoft for Azure services I would find it laughable to be referred to Twitter in the event of a major service interruption.

Zander also says that customers were unable to create support cases. Hmm, it does seem to me that Microsoft should isolate its support services from its production services in some way so that both do not fail at once.

Still, of the above it is the operational error that is of most concern.

What are the wider implications? There are two takes on this. One is to say that since Azure is not reliable try another public cloud, probably Amazon Web Services. My sense is that the number and severity of AWS outages has reduced over the years. Inherently though, it is always possible that human error or a hardware failure can have a cascading effect; there is no guarantee that AWS will not have its own equally severe outage in future.

The other take is to give up on the cloud, or at least build in a plan B in the event of failure. Hybrid cloud has its merits in this respect.

My view in general though is that cloud reliability will get better and that its benefits exceed the risk – though when I heard last week, at Amazon re:Invent, of large companies moving their entire datacenter infrastructure to AWS I did think to myself, “that’s brave”.

Finally, for the most critical services it does make sense to spread them across multiple public clouds (if you cannot fall back to on-premises). It should not be necessary, but it is.

Microsoft promises to fix OneDrive sync in Windows 10, with one engine for Business and Consumer

Microsoft’s Jason Moore has responded to feedback on the change to OneDrive sync in the latest Windows 10 preview. The change removed the “placeholder” feature, where OneDrive files and metadata all show up in Windows Explorer, but do not actually download until requested. It was not a popular move among Windows power users, as reported here.

It turns out there is more going on here than merely tweaking a feature. In his response, Moore states:

We stepped back to take a fresh look at OneDrive in Windows. The changes we made are significant. We didn’t just “turn off” placeholders – we’re making fundamental improvements to how Sync works, focusing on reliability in all scenarios, bringing together OneDrive and OneDrive for Business in one sync engine, and making sure we have a model that can scale to unlimited storage. In Windows 10, that means we’ll use selective sync instead of placeholders. But we’re adding additional capabilities, so the experience you get in Windows 10 build 9879 is just the beginning. For instance, you’ll be able to search all of your files on OneDrive – even those that aren’t sync’ed to your PC – and access those files directly from the search results. And we’ll solve for the scenario of having a large photo collection in the cloud but limited disk space on your PC.

This is good news since it goes to the heart of a more serious issue: the poor implementation of OneDrive sync in Windows, especially in the “Business” edition which has a sync engine based on Office Groove. The consumer OneDrive sync is not perfect either, with a tendency to create duplicate files if you use more than one PC. There is also some kind of bug which means you can edit a file, save it, email it as an attachment, and find that you actually emailed an old version (this has happened to me when submitting articles to editors; no fun).

I have written more on OneDrive issues and confusions here. The poor sync experience with OneDrive for Business is perhaps the weakest point in Office 365 currently; a significant problem.

Now we will get a single sync engine across both versions of OneDrive. If it is also a better sync engine than either of the current ones, Microsoft’s cloud customers will be delighted.

Moore adds: “Longer term, we’ll continue to improve the experience of OneDrive in Windows File Explorer, including bringing back key features of placeholders.”

Questions remain of course. Will Microsoft unify the server technology as well as the sync engines? Will the new sync engine come to Windows 7 and 8 as well as 10? Will the company fix the mobile apps as well? Will OneDrive ever approach the fast, seamless sync achieved by Dropbox?

Watch this space.

Microsoft kills best Windows OneDrive feature in new Windows 10 preview

In Windows 8.1, Microsoft integrated its OneDrive cloud storage with the Windows file system, so you see your OneDrive files in Windows Explorer.

There was a twist though: in Explorer you see all your OneDrive files, but they are not actually downloaded to your PC unless you specifically configure a file or folder for “offline” use, or open a file in which case it downloads on demand.

The strength of this feature is that you have seamless access to what might be multiple gigabytes of cloud files, without actually trying (and failing) to sync them to your nice, fast, but relatively small SSD, such as on a Surface tablet.
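The placeholder mechanism amounts to lazy downloading: the file listing is local metadata, while content is fetched and cached only on first open. A toy sketch of the idea (not Microsoft’s implementation; the class and file names are invented):

```python
# Toy sketch of placeholder files: metadata for every cloud file is held
# locally, so the full listing is always visible, but content is only
# fetched (and cached) the first time a file is opened.
class PlaceholderStore:
    def __init__(self, cloud):
        self.cloud = cloud   # simulated cloud storage: name -> content
        self.local = {}      # locally cached content (starts empty)

    def list_files(self):
        # Explorer can show every file without downloading anything
        return sorted(self.cloud)

    def open(self, name):
        if name not in self.local:
            # First access: download on demand, then cache locally
            self.local[name] = self.cloud[name]
        return self.local[name]

store = PlaceholderStore({"report.docx": b"q3 numbers", "photo.jpg": b"jpeg"})
names = store.list_files()     # all names visible, nothing downloaded yet
data = store.open("report.docx")  # only this file is actually fetched
```

The trade-off Microsoft cites is visible in the sketch: a file that looks present may be unreachable offline until it has been opened once.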

In the latest preview of Windows 10, Microsoft has killed the feature, supposedly on the basis that users did not understand it, says Gabe Aul:

In Windows 8.1, we use placeholders on your PC to represent files you have stored in OneDrive. People had to learn the difference between what files were “available online” (placeholders) versus what was “available offline” and physically on your PC. We heard a lot of feedback around this behavior. For example, people would expect that any files they see in File Explorer would be available offline by default. Then they would hop onto a flight (or go someplace without connectivity) and try to access a file they thought was on their PC and it wasn’t available because it was just a placeholder. It didn’t feel like sync was as reliable as it needed to be. For Windows 10, having OneDrive provide fast and reliable sync of your files is important. Starting with this build, OneDrive will use selective sync. This means you choose what you want synced to your PC and it will be. What you see is really there and you don’t need to worry about downloading it. You can choose to have all of your OneDrive files synced to your PC, or just the ones you select.

Many users did understand the feature though, and for them it is a disaster. No longer can you see all your OneDrive files in Windows Explorer, or search your cloud storage using the tools built into Windows.

This is just a preview though, and Microsoft may restore the feature, or add an advanced option for users who want it, if it gets enough feedback – which it is already getting.

The questions though: is there really time to revert the change, and is Aul telling the full story about why it was removed?

Amazon Reinvent: new products announced including Aurora database with claimed performance 5 times that of MySQL

Amazon is holding its third Reinvent conference in Las Vegas – 13,500 attendees catching up on Amazon’s Web Services platform. In this morning’s keynote, Amazon’s Senior VP of cloud services Andy Jassy evangelised the platform and announced a number of new services which, in typical Amazon style, are now available in preview.


Amazon is well ahead of its competitors in cloud services, in terms of market share and mindshare, and Jassy had no problems reeling off impressive statistics and case studies. A slide showing that AWS is not only larger but also growing faster year-on-year than its competition prompted a small protest. Microsoft claims that Amazon understated its rate of growth:


The refrain from those who spoke on behalf of companies such as Intuit (which intends to move 100% of its applications to AWS) was that no alternative cloud provider could offer a realistic alternative to AWS. With the progress being made by competitors I wonder for how long this will be true – and bear in mind that this is an Amazon conference – but it testifies to the dominance that Amazon has achieved.

Jassy made a key point about security and compliance. The relative security of public cloud versus private datacenters has long been debated, initially on the assumption that computing resources you own and guard yourself must be more secure than those hosted by third parties. The counter is that few organisations can afford the level of security that big public cloud providers can achieve. Jassy’s point was that the number of certifications achieved by AWS is now such that security and compliance have become drivers towards cloud computing.

The main news though was a series of product announcements:

Aurora relational database: a MySQL compatible database as a service for which Jassy claims 5x the performance of MySQL. He says that businesses stick with commercial, proprietary database managers because open source solutions lack the performance, but that Aurora now provides a solution at a commodity price. Unfortunately Aurora is not going to help those with applications locked into Oracle, SQL Server or others. Still, 5x performance is always welcome.

CodeDeploy: apparently based on a service Amazon uses internally, this is a deployment tool for pushing out updated applications to EC2 (Elastic Compute Cloud) VMs without downtime.

CodeCommit: a source code management service for Git repositories.

CodePipeline: automate your software release by defining a workflow of tests and approvals.

Key Management Service: if you manage encrypted data you will be familiar with the hassles of managing and rotating encryption keys. Here is a service to manage that.

AWS Config: A discovery service for the AWS resources you are using.

Service Catalog: a custom portal for users to browse and use AWS resources offered by an organisation.

This was day one; there is another keynote tomorrow and there may be more announcements.

There is no doubting the momentum behind AWS, and according to Jassy, there is still a long way to grow. Towards the end of the keynote he talked about businesses moving entire datacentres to AWS, for example when leases expire, and in the press Q&A session later he expressed the belief that eventually few companies will operate their own datacentres; he does not see much future for private cloud – in the sense of self-managed clouds on your own infrastructure. That is of course what you would expect Amazon to say.

Partnerships are key in this industry and I was interested to note the Reinvent sponsors:


The Diamond sponsors (who I presume have paid the most) are Accenture, Cloudnexa (AWS consultants), CSC (also consultants), Intel (I guess Amazon buys a lot of CPUs), Trend Micro and twilio (which must be doing well to be on this list).

Microsoft takes its .NET runtime open source and cross-platform, announces new C++ compilers for iOS and Android: unpacking today’s news

Microsoft announced today that the .NET runtime will be open source and cross-platform for Linux and Mac. There are several announcements and they are potentially confusing, so here is a quick summary.

The .NET runtime, also known as the CLR (Common Language Runtime), is the virtual machine that runs Microsoft’s C#, F# and Visual Basic .NET languages, performing just-in-time compilation to native code and providing interop between the application code and the operating system APIs. It is distinct from the .NET Framework, which is the library of mostly C# code that underlies application platforms like ASP.NET, Windows Presentation Foundation (WPF), Windows Forms, Windows Communication Foundation and more.

There is already a cross-platform version of .NET, an open source project called Mono founded by Miguel de Icaza in 2001, not long after the first preview release of C# in 2000. Mono runs on Linux, Mac and Windows. In addition, de Icaza is co-founder of Xamarin, which uses Mono together with its own technology to compile C# for iOS, Android and Mac OS X.

Further, some of .NET is already open source. At Microsoft’s Build conference earlier this year, Anders Hejlsberg made the Roslyn project, the next-generation compiler for C# and Visual Basic, open source under the Apache 2.0 license. I spoke to Hejlsberg about the announcement and wrote it up on the Register here. Note the key point:

Since Roslyn is the compiler for the forthcoming C# 6.0, does that mean C# itself is now an open source language? “Yes, absolutely,” says Hejlsberg.

What then is today’s news? Blow by blow, here are what seem to me the main pieces:

  • The CLR itself will be open source. This is the C++ code from which the CLR is compiled.
  • Microsoft will provide a full open source server stack for Mac and Linux including the CLR. This will not include the frameworks for client applications; no Windows Forms or WPF. Rather, it is the “.NET Core Runtime” and “.NET Core Framework”. Microsoft is working with the Mono team, which does support client applications, so there could be some interesting permutations (bear in mind that Mono also has its own runtime); for now, though, Microsoft is focused on the server stack.
  • Microsoft will release C++ frameworks and compilers for iOS and Android, using the open source Clang (C and C++ compiler front-end) and LLVM (code generation back end), but with Visual Studio as the IDE. If you are targeting iOS you will need a Mac with a build agent, or you can use a cloud build service (see below). The Android compiler is available now in preview, the iOS compiler is coming soon. “You can edit and debug a single set of C++ source code, and build it for iOS, Android and Windows,” says Microsoft’s Soma Somasegar, corporate VP of the developer division.
  • Microsoft has a new Android emulator for Windows based on Hyper-V. This will assist with Android development using Cordova (the HTML and JavaScript approach also used by PhoneGap) as well as the new C++ option.


  • The next Visual Studio will be called Visual Studio 2015 and is now available in preview; download it here.
  • There will be a thing called Connected Services to make it easier to code against Office 365, Salesforce and Azure
  • A new edition of Visual Studio 2013, called the Community Edition, is now available for free; download it here. The big difference between this and the current Express editions is, first, that the Community Edition supports multiple target types, whereas you needed a different Express edition for Web applications, Windows Store and Phone apps, and Windows desktop apps. Second, the Community Edition is extensible so that third parties can create plug-ins; today Xamarin was among the first to announce support. There may be some license restrictions; I am clarifying and will update later.
  • New Cloud Deployment Projects for Azure enable the cloud infrastructure associated with a project to be captured as code.
  • Release Management is being added to Visual Studio Online, Microsoft’s cloud-hosted Team Foundation Server.
  • Enhancements to the Visual Studio Online build service will support builds for iOS and OS X
  • Visual Studio 2013 Update 4 is complete. This is not a big update but adds fixes for TFS and Visual C++ as well as some new features in TFS and in GPU performance diagnostics.

The process by which these new .NET projects will interact with the open source community will be handled by the .NET Foundation.

What is Microsoft up to?

Today’s announcements are extensive, but with two overall themes.

The first is about open sourcing .NET, a process that was already under way, and the second is about cross-platform.

It is the cross-platform announcements that are more notable, though they go hand in hand with the open source process, partly because of Microsoft’s increasingly close relationship with Mono and Xamarin. Note that Microsoft is doing its own C++ compilers for iOS and Android, but leaving the mobile C# and .NET space open for Xamarin.

By adding native code iOS and Android mobile into Visual Studio, Microsoft is signalling real commitment to these platforms. You could interpret this as an admission that Windows Phone and Windows tablets will never reach parity with their rivals, but it is more a consequence of the company’s focus on cloud, and in particular Office 365 and Azure. The company is prioritising the promotion of its cloud services by providing strong tooling for all major client platforms.

The provision of new Microsoft server-side .NET runtimes for Mac and Linux is a surprise to me. The Mac is not much used as a server but very widely used for development. Linux is an increasingly important operating system within the Azure cloud platform.

A side effect of all this is that the .NET Framework may finally fulfil its cross-platform promise, something Microsoft suppressed for years by only supporting it on Windows. That is good news for those who like programming in C#.

The .NET Framework is changing substantially in its next version. This is partly because of the Roslyn compiler, which is itself written in C# and opens up new possibilities for rich refactoring and code transformation; and partly because of .NET Core and major changes in the forthcoming version of ASP.NET.

Is Microsoft concerned that by supporting Linux it might reduce the usage of Windows Server? “In Azure, Windows and Linux are a core part of our platform,” Somasegar told me. “Helping developers by providing a good set of tools and letting them decide what server they run on, we feel is all goodness. If you want a complete open source platform, we have the tools for them.”

How big are these announcements? “I would say huge,” Somasegar told me. “What it shows is that we are not being constrained by any one platform. We are doing more open source, more cross-platform, delivering Visual Studio free to a broader set of people. It’s all about having a great developer offering irrespective of what platform they are targeting or what kind of app they are building.”

That’s Microsoft’s perspective then. In the end, whether you interpret these moves as a sign of strength or weakness for Microsoft, developers will gain from these enhancements to Visual Studio and the .NET platform.

An Azure Web Site is a VM which supports multiple applications

This will be unnecessary for Azure experts, but I have seen some misunderstanding on this point, hence this post.

A “web site” is a unit of service on the Azure cloud platform which represents a web application hosted on IIS, Microsoft’s web server (but see below). You write a standard ASP.NET application and deploy it. Azure takes care of configuring the host VM, the server operating system, and IIS.

Using a web site is preferable to creating your own VM and installing IIS on it, for several reasons. One is that you do not have to worry about patching and maintaining the operating system. Another is that web sites can be scaled, manually or automatically, with an option for scheduling so that you can scale down the site for periods of low demand.


The main reason for using a VM rather than a web site is if the app has dependencies that fall outside what a web site can handle.

Another thing to know about Azure web sites is that they have four “plan modes,” but only two are worth considering for production. The Free and Shared modes host your application on a shared VM, and quotas are applied. If Azure decides your site is out of quota, it will stop responding. Fine for a prototype, but not something you want customers or users to see. This limitation is not clearly shown on the table of features, but it is spelled out in note 2:

Shared Instance: Free and Shared (Preview) tiers include 60 minutes and 240 minutes of CPU capacity per day, respectively. The Shared (Preview) Website rates are applied per website instance.

The Basic tier on the other hand is decent. It is a dedicated VM, and you can scale it (manually) to 3 instances. It costs around 25% less than a Standard tier site.

Why go Standard? You get 50GB storage thrown in (a Basic tier site has 10GB), auto-backup, auto-scale up to 10 instances, and a fixed IP address for SSL. If you have to buy a fixed IP address for a single instance Basic tier site, the price goes above a Standard tier site, except for a Large instance.

Currently a Basic tier web site costs from £35.64 to £141.92 per month, and a Standard tier from £47.10 to £189.65, depending on the size of the VM.
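To make that fixed-IP comparison concrete, here is a minimal Python sketch using the single small-instance prices above; the fixed IP price passed in is a placeholder for illustration, not a real Azure figure.

```python
# Monthly prices (GBP) for a single small instance, from the figures above.
BASIC_SMALL = 35.64
STANDARD_SMALL = 47.10

def cheaper_tier(needs_fixed_ip: bool, ip_price: float) -> str:
    """Return the cheaper tier once any fixed IP charge is added to Basic.

    Standard includes a fixed IP address for SSL; on Basic it is an
    extra monthly charge on top of the tier price.
    """
    basic_total = BASIC_SMALL + (ip_price if needs_fixed_ip else 0.0)
    return "Basic" if basic_total < STANDARD_SMALL else "Standard"
```

With these numbers, any fixed IP address costing more than £11.46 a month tips a single small instance in favour of Standard.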

It is a significant cost, but what may not be obvious is that you can deploy multiple applications to a single web site, which makes my statement above, “A ‘web site’ is a unit of service on the Azure cloud platform which represents a web application hosted on IIS”, not quite correct.

When you create a new web site, if you have one already, you can choose a “web hosting plan”. Here is an example:


In this case, there are two pre-existing web site VMs, one in East Asia and one in Europe. If you choose one of these two, the new web site will be added to that VM. If you choose “Create new web hosting plan”, you will create a new dedicated instance (or free, or Shared). Adding to an existing VM means no extra cost.

If you are a developer, it may well be better to run a single Basic VM for prototyping, and add multiple sites, rather than risking a free or shared instance which might be out of quota when you demonstrate it to your customer.

What is the limit to the number of web sites you can add? There is none, other than overloading the VM and ending up with unresponsive applications.

Postscript: the Web Site service is interesting as an example which blurs the boundaries between IaaS (Infrastructure as a service) and PaaS (Platform as a service). It is more PaaS than IaaS, in that you do not have to worry about maintaining the OS, but more IaaS than PaaS, in that you are still having to think about individual VMs. It would be more purist if Microsoft abstracted away the VMs and simply guaranteed a certain level of service, or scaled up automatically and billed for what you use. On the other hand, the Web Site concept puts a lot of control in the hands of the developer/admin and helps them to make best use of the resources, while still removing most of the maintenance burden. I think it is a good compromise.

Arcam’s high-end Solo soundbar: fixing TV sound

Arcam launched its Solo soundbar and subwoofer at a press event in London last week. While the concert footage was playing I was not thinking about the soundbar at the foot of the video screen, nor of the subwoofer sitting in the corner. Rather, I was thinking how wonderfully the great B.B. King is playing on this concert Blu-ray; and that is how it should be when auditioning hi-fi.


The Cambridge-based Arcam occupies a distinctive spot among UK audio manufacturers, neither low-budget nor at the silly-expensive end of the market. MD Charlie Brennan told me that the company’s focus is audio engineering: not lifestyle, nor ultra high-end, but products that are affordable and which sound great.

The aluminium-bodied Solo bar seems at first listen to be a good example. It is a solid product in every sense, weighing a hefty 6.4kg and featuring 100W of class D (highly efficient) amplification driving six speaker drivers: a midrange (4″), woofer (4″) and tweeter (1″) for each channel.


This is more than just a better audio system for your TV. The Solo bar has four HDMI inputs (with 4K support) and one output, so you can connect your games consoles and video streamer sources. There are also optical and coaxial digital (S/PDIF) inputs, and an analogue input for general purpose use, and an output for a subwoofer.


It does not end there. The Solo bar supports Bluetooth streaming with aptX (higher quality on devices that support it), both as a player (for your mobile device) and as a source (for your wireless headphones).

A setup microphone is included in the box, which accounts for the mic input on the panel. Its use is optional, but it is often worth running this type of setup routine: place the mic at the normal sitting position and have the unit optimise the sound for the room, taking into account the position of the subwoofer if present.

Room effects are huge and often ignored, so it is good to see this. However, you cannot fine-tune the results yourself; you either disable it, or enjoy the results the bar comes up with.

There is a controller app for iOS and Android but sadly not for Windows Phone, though all the functions are also accessible through the supplied remote. You can switch between unvarnished stereo and audio processing modes for Movie or Concert.


The Solo subwoofer has a 300W amplifier and a 10″ driver. We heard the system with and without the sub. My brief observation is that while it sounded good without the sub, adding it lifted the sound substantially; it is not so much the added bass that you notice, but greater realism. The sub also matters for those all-important explosions and sound effects in movies and games.

The Solo bar is £800 and the Solo subwoofer £500. That does not seem to me expensive given the quality I heard, but neither is it a casual purchase. There are drawbacks to the soundbar approach, notably two-channel sound rather than surround, but for many the simplicity of the system more than compensates (Brennan said that the soundbar market is one of the few areas of home audio that is growing).

Personally I would recommend getting the system with the sub if possible, as they are designed to work together.

You should be able to buy a Solo later this month, November 2014.

This is not a review; that will have to wait for an opportunity to try the system for myself and test it in detail.

More information on Arcam’s site here.

Writing for The Register

Since the beginning of October I have been working two days a week for The Register. I am still freelance for the other three days so also available for other work.

Why the Register? I have been contributing for some years and there are several things I like about the publication. It is known of course for its attention-grabbing headlines but you will also find solid technical content there; it was one of the first sites to report the Shellshock bug in Bash, for example, and did so in detail with strong follow-up posts, making the site a good one for admins to follow. There is also a strong developer readership which is good from my perspective. Editorially it is diverse and you will find plenty of different opinions expressed by the staff and contributors, which I consider a strength. Organisationally, The Register is refreshingly unbureaucratic.

It reminds me in some ways of the best days of Personal Computer World, a famous print magazine which ceased publication in 2009. PCW was a delight because it was not shy about covering small niches as well as mainstream technology, in the days when it had plenty of editorial pages to fill.

The comments are worth reading too; not all of them, but there are plenty of smart readers. On any specific topic, logic suggests that some of the readers will know more about it than the journalist; you should always glance at the comments.

The Register is also a well-read site; number 513 in the UK according to Alexa, and 2204 in the USA. Judging by Alexa it seems to be the most popular tech news site in the UK, though I am not an expert on web stats.

I will continue to post here of course, as well as covering hardware, gadgets and audio on http://gadgets.itwriting.com/.

In case you missed it, this is what I came up with in October – as it turned out, it was a bit more than two days a week’s worth; I am not superhuman:

Programming Office 365: Hands On with Microsoft’s new APIs

Microsoft unwraps new auto data-protection in Office 365 tools

Mozilla: Spidermonkey ATE Apple’s JavaScriptCore, THRASHED Google V8

Microsoft shows off spanking Win 10 PCs, compute-tastic Azure

Happy 2nd birthday, Windows 8 and Surface: Anatomy of a disaster

Entity Framework goes ‘code first’ as Microsoft pulls visual design tool

Lollipop unwrapped: Chromium WebView will update via Google Play

Microsoft and Dell’s cloud in a box: Instant Azure for the data centre

Migrate to the cloud and watch your business take flight

Docker’s app containers are coming to Windows Server, says Microsoft

Sway: Microsoft’s new Office app doesn’t have an Undo function

Influential scribe Charles Petzold: How I figured out the Windows API

Software gurus: Only developers can defeat mass surveillance

Xamarin, IBM lob cross-platform mobile app dev tools at Microsoft coders

Windows 10 feedback: ‘Microsoft, please do a deal with Google to use its browser’

No tiles, no NAP – next Windows for data centre looks promising

Vanished blog posts? Enterprise gaps? Welcome to Windows 10

One Windows: How does that work… and WTF is a Universal App?

Windows 10: One for the suits, right Microsoft? Or so one THOUGHT