Microsoft Office 365: migration hassles show why partners still have a role

I have been working with a small business migrating its email to Office 365. The task seems simple enough: migrate just over 100 mailboxes from on-premises Exchange 2007. There is no requirement for a hybrid deployment, so the normal approach is a cutover migration. You create a migration batch at the Office 365 end, which connects to the on-premises Exchange, sucks up all the mailboxes, and creates user accounts as it goes. Once the mailboxes are synched, the migration batch synchronises them again every 24 hours. You then change the MX (DNS) records so that mail goes to Office 365, get users to log on to the new mailboxes, and decommission the on-premises Exchange.

It sounds straightforward, and I am sure it works fine with small mailboxes and just a few of them. It is meant to work for up to 1,000 mailboxes though, so I did not think just over 100 would cause any problems.

Here is what I discovered.

First, we soon ran into problems. The migration batch seems remarkably slow, partly thanks to using an ADSL connection (fast download but slow upload) but even slower than that would suggest. Some mailboxes report “Failed” for a variety of reasons, the most common being that they simply stop synching for no apparent reason, or in some cases never start synching. Here are some of the error messages:

  • Error: MigrationPermanentException: Error: Job is poisoned, poison count = 6.
  • Error: SyncTimeoutException: Email migration failed for this user because no email could be downloaded for 5 hours, 0 minutes.
  • Error: MigrationPermanentException: Error: The job has not made progress since 28/07/2013 08:43:19. Job was picked up at 27/07/2013 17:11:15.


So what do you do if you get these errors?

My first observation is that documentation for cutover migration is thin. It makes little provision for anything going wrong. There are a few things (like avoiding messages over 25MB or hidden addresses in Exchange) but nothing about “job is poisoned”.

Another observation is that the Cutover Migration Batch is lacking in common-sense options. For example, with a slow-ish connection you might prefer to migrate only two or three mailboxes at a time, rather than the 16 which seems to be the fixed number.

I was also puzzled by the option to “Stop” a migration batch, which you can do using a toolbar button.


What are the consequences of stopping a batch? Can you stop it in the morning, and restart in the evening, to reduce bandwidth during the working day, for example? Or do bad things happen?

I headed for the support community. Unfortunately this is not too good either. There are a number of unfailingly polite Microsoft support people there, but they do not seem all that well informed when it comes to the details of what can go wrong. There is a lot of reference to support articles that may or may not answer the question; or an initial response that does not quite answer the question, with no satisfactory follow-up; or a retreat to private messages which, judging from the public responses, are not always helpful either.

After being told that it was OK to stop and restart a migration batch I tried it. It did not work well at all for me. I even got this lovely error message for a Room mailbox:

Error: ProvisioningFailedException: Couldn’t convert the mailbox because the mailbox "mailboxname" is already of the type "Room"

I had better success deleting the migration batch and creating a new one, which is easy because it remembers the connection settings from last time. Mailboxes in progress resumed where they left off, and even some failed mailboxes started synching again.

Still, the detail of this is not so important. Fundamentally, Office 365 seems to me a strong service at a reasonable price (though I like Exchange better than SharePoint), and Microsoft is pushing small businesses towards it so hard that it is becoming difficult to stay on Microsoft’s platform at all unless you migrate – the disappearance of Small Business Server with bundled Exchange makes a small deployment expensive.

This being the case, you would have thought Microsoft would put its best effort into the migration tools, which are critical not only for successful transition, but also as many people’s first impression of the service. I did not expect what look like immature tools with skimpy documentation and poor community support.

Is Microsoft struggling to scale the system quickly enough to meet demand?

In some ways this may actually be good for Microsoft partners, who still have their traditional role of puzzling out these kinds of problems and making life easier for their customers.

Appcelerator plans to rethink Titanium architecture, standardise on WebKit JavaScript engine

Appcelerator CEO Jeff Haynie has posted about his plans for Titanium, the company’s cross-platform mobile development toolkit.

The plan is to completely rewrite the core engine, while maintaining a mostly-compatible API. Central to the plans is the idea of using one JavaScript engine on all platforms:

With Ti.Next, we’ve created a small microkernel design that will allow us to have minimal bootstrap code in the native language (C, Java, C#, etc) that talks to a common set of compilers, tools and a single JavaScript Virtual Machine. We have found a way to make the WebKit KJS VM work on multiple platforms instead of using different VMs per platform. This means we can heavily optimize the microkernel (herein after called the “TiRuntime”) and maintenance, optimizations and profiling can be greatly simplified. We’re talking about ~5K LOC vs. 100K LOC per platform.

This will make it possible to share almost all the Titanium code itself across all platforms. The Titanium runtime itself will be shared code written in JavaScript.

Appcelerator says that Titanium code will be “faster than native code in most situations.”

No date for Ti.Next is given, though according to this slide deck the plan is to have the “first set of developer builds available soon to GitHub repo – possibly in the next 45-60 days”. It adds, “production builds are a ways away.”

Using a WebKit JavaScript engine on Windows Phone, for example, sounds interesting.

Ubuntu forum hack puts same-password users at risk

Canonical has announced a comprehensive security breach of its forums.

  • Unfortunately the attackers have gotten every user’s local username, password, and email address from the Ubuntu Forums database.
  • The passwords are not stored in plain text, they are stored as salted hashes. However, if you were using the same password as your Ubuntu Forums one on another service (such as email), you are strongly encouraged to change the password on the other service ASAP.
  • Ubuntu One, Launchpad and other Ubuntu/Canonical services are NOT affected by the breach.

If someone impersonates you on the Ubuntu forums it might be embarrassing, but probably not a calamity. The real risk is escalation. In other words, presuming the attacker is able to work out the passwords (they have all the time in the world to run password-cracking algorithms and dictionary attacks against the stolen data), those passwords could be used to compromise more valuable accounts that use the same password.
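To see why salted hashes offer only limited protection here, consider a minimal Python sketch of an offline dictionary attack. Everything in it is hypothetical: the hash scheme, salt and wordlist are illustrative, not Ubuntu’s actual storage format.

```python
import hashlib

def salted_hash(password, salt):
    # Generic salt+password hash for illustration; real schemes vary,
    # and should use a slow KDF (bcrypt, scrypt) rather than one fast hash.
    return hashlib.sha256((salt + password).encode()).hexdigest()

# What an attacker holds after a breach: a salt and a hash per user.
stolen_salt = "x9qL"
stolen_hash = salted_hash("letmein", stolen_salt)  # victim chose a weak password

def crack(salt, target, words):
    # Offline dictionary attack. The salt is stored alongside the hash,
    # so it defeats precomputed rainbow tables but does nothing to stop
    # trying each candidate word directly, at full speed.
    for word in words:
        if salted_hash(word, salt) == target:
            return word
    return None

wordlist = ["password", "123456", "qwerty", "letmein", "dragon"]
print(crack(stolen_salt, stolen_hash, wordlist))  # -> letmein
```

Once a weak password is recovered, every other account sharing it falls with no further cracking effort, which is exactly the escalation Canonical’s warning is about.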

Password recovery mechanisms can work against you. Businesses hate dealing with password reset requests so they automate them as much as they can. This is why Ubuntu’s warning about email accounts is critical: many web sites will simply email your password on request, so if your email is compromised many other accounts may be compromised too.

A better approach in a world of a million passwords is to use a random password generator alongside a password management database for your PC and smartphone. It is still a bit “all eggs in one basket” in that if someone cracks the password for your management database, and gets access, then they have everything.
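Generating one unique, random password per site is the easy half of that approach; the manager’s job is remembering them. A sketch using Python’s standard secrets module (the site names are just examples):

```python
import secrets
import string

def generate_password(length=16):
    # secrets.choice draws from a cryptographically secure RNG,
    # unlike random.choice, which is predictable and unsuitable here.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One independent password per service: a breach at any one of them
# then reveals nothing about the others.
vault = {site: generate_password() for site in ["forum", "email", "bank"]}
for site, password in vault.items():
    print(site, password)
```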

It is a dreadful mess. Two-factor authentication, which involves a secondary mechanism such as a security token, card reader, or an SMS confirmation code, is more secure; but best reserved for a few critical accounts otherwise it becomes impractical. Two-factor authentication plus single sign-on is an even better approach.
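The codes behind most token and SMS schemes are time-based one-time passwords. As a sketch of the idea, here is the standard TOTP derivation (RFC 6238) in Python, using the RFC’s published test secret rather than anything real:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, digits=6, step=30):
    # RFC 6238: HMAC the current 30-second time step with a shared
    # secret, then dynamically truncate to a short decimal code.
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and authenticator device share the secret once at enrolment;
# thereafter each derives the same short-lived code independently.
shared_secret = b"12345678901234567890"  # RFC 6238 test secret
print(totp(shared_secret, at=59))  # RFC 6238 test time -> 287082
```

Because the code changes every 30 seconds, a phished or cracked password alone is no longer enough to log in.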

Not just Windows: even Mac sales are down

Apple has released its third quarter financial results, and they are decent: year on year revenue is up fractionally, from $35.023 billion to $35.323 billion, though profit is down from $8.8 billion to $6.9 billion.

Compared to the third quarter of 2012, though, Mac sales are down 7%, from 4.02 million units to 3.754 million units. Revenue from the Mac has declined by 1%.

In isolation this is not a dramatic change, but the statistic is more interesting when you put it in context with what is happening with Windows PCs. Windows 8 is nearly a year old, released to manufacturing on August 1 2012 and generally available from October 26 2012.

Windows 8 has received a mixed reception, with many users reluctant to adopt the reinvented operating system, which replaces the Start menu and adds a touch-friendly tablet platform alongside the desktop user interface. So far it has done nothing to stem declining PC sales and may have accelerated the process. Gartner reports a 10.9% decline in worldwide PC sales in the second quarter of 2013 (the same period as Apple’s third fiscal quarter). Gartner’s figures also include Macs, for which it estimates only a 4.3% unit decline.

The evidence is that the long-standing trend of Mac gaining at the expense of Windows is continuing, insofar as Mac sales have declined by less than the PC market overall. However, the figures suggest that the decline in PC sales is due to fundamental changes in personal computing, in favour of tablets and other mobile devices, rather than market response to an unpopular Windows edition.

The twist here is that Windows 8 is designed for exactly that trend; and while there is plenty of scope for argument about how Microsoft has addressed it, there is little doubt that it was right to come up with a version of Windows for tablets – and one that was not a reprise of its previous stylus-based efforts.

My own view is that Windows 8 is a plausible strategy and that Microsoft should stick with it. Earlier comments on Windows RT are relevant here.

Hybrid devices that twist between laptops and tablets are not the answer though. They are transitional machines which end up too heavy and expensive to be good tablets. Many users will buy a cheaper laptop instead.

The question for Microsoft now is how much tablet market will be left by the time the Windows app ecosystem matures to the point when a Windows tablet can really compete with an iPad for usability and utility in pure slate form.

Apple has problems too. iPad sales are down by 27% from the same quarter last year, though iPhone is up 15%. The reason, I suspect, is Android.

Dragon Notes Review: quick voice to text for Windows 8, but is it good enough?

When I saw that Nuance had released a Dragon Notes app for Windows 8 I was intrigued for two reasons.

First, I am interested in tracking the health of the app market for Windows 8, and an app from a company as well respected as Nuance is worth looking at.

Second, I have great respect for the Dragon Dictate application for speech to text. Dragon Dictate is superb; indispensable if you cannot use a keyboard for some reason, and valuable even if you can, whether to fend off RSI (Repetitive Strain Injury) or to help transcribe an interview. If Notes is based on the same engine, it could be very useful.

I installed it for review and was surprised to find that it is not a real Windows 8 app installed from the Windows Store. Rather, it is a desktop app designed to look superficially like Metro, the touch-friendly user interface used by Windows Store apps. The effect is rather odd, since it does not run full screen or support the normal gestures and conventions, such as settings in the Charms menu.


Still, it is mostly touch-friendly. I say “mostly” because occasionally it departs from the Metro-style user interface and reverts to something more like desktop-style – like these small and ugly buttons in the delete confirmation dialog:


This is sloppy design; look at the lack of margin around the button captions, the childish “No Way!”, and the fact that these buttons are smaller than they should be for comfortable touch control.

In the main part of the user interface the design remains poor. The font size is too small and there appears to be no way to change it. “Settings” lets you access Help, select language, connect to Twitter and Facebook, and register the product. That is all.

The big question though: how well does it work? Dragon Notes is different from Dragon Dictate, in that there is no voice training; it just does its best with whatever voice it hears.

Notes are easy to make; just tap Record, and tap again (or stop talking) to finish. You can transcribe for a maximum of 30 seconds, though you can also append to an existing note.

My initial results on a Surface Pro tablet, using the built-in microphone, were dire. Hardly any words were recognised. Before giving up though, I had a look at the microphone settings and made a recording using Sound Recorder. The result was a distorted mess, and I do not blame Dragon Notes for making no sense of it. I changed the levels in Windows, reducing the “Microphone Boost” until the level was reasonable but not distorted.


The improvement in Dragon Notes was dramatic. Speaking a simple note slowly and carefully I could get almost perfect accuracy.

I attached a high quality Plantronics headset and tried Wordsworth’s Daffodils:


Not bad, but not perfect either. (I did dictate “over” rather than “o’er” as the latter is just too difficult for Dragon).

Here is one of my efforts with the built-in microphone:


Again, not that bad, but not something you could use without editing.

And that could be a problem. In the full Dragon Dictate you can use commands like “Select Fattening” and then select a correction, or repeat the word, or spell it. The only commands in Dragon Notes are for basic punctuation, posting to Facebook and Twitter, sending in an email, or searching the web.

This last is fun when it works. Tap to record, speak a word or phrase, then when it is recognised say “Search the web”.


Summary: simple voice to text that works up to a point. The user interface design is poor, but the app is basic enough that you will not struggle to use it.

Imitating a Metro user interface is a mistake; it is neither one thing nor the other. It is a shame Nuance did not do a proper Windows Store app.

That aside, how useful is this? It all hinges on the quality of the voice recognition, which will vary according to your voice, your microphone, and the quietness of your surroundings.

In the worst case it will be useless. In the best case, I can see some value in dictating a quick note rather than struggling to type with the on-screen keyboard, presuming you are in fact using a tablet.

It would help, though, if Dragon Notes recorded your voice as well as transcribing it, so that if the text is not intelligible you could later refer back to the recording.

A lot of the time you will end up having to edit the note with the keyboard to fix problems, which lessens its value.

Plenty of potential here, but with sloppy fake Metro design and features that are too limited it cannot yet be recommended.

More information on Dragon Notes is here.

Windows RT and Surface RT: Why Microsoft should persevere

Microsoft has reported a $900 million write-down on Surface RT inventory in its latest financial results. Was Surface RT a big mistake?

A loss of that size is a massive blunder, but the concept behind Surface RT is good and Microsoft should persevere. Here’s why.

Surface RT is experimental in two ways:

  • It was the first Microsoft-branded PC (or tablet if you prefer).
  • It was among the first Windows RT devices. Running on the ARM processor, Windows RT is locked down so that you can only install new-style Windows 8 apps, not desktop apps. However, the desktop is still there, and Microsoft bundles Office, a desktop application suite.

Microsoft had (and has) good reason to do both of these things.

Historically, DOS and Windows prospered because any hardware manufacturer could build machines running Microsoft’s operating system, creating a virtuous circle in which competition drove down prices and abundance created widespread application support.

This ecosystem is now dysfunctional. The experience of using Windows was damaged by OEM vendors churning out indifferent hardware bundled with intrusive trial applications. It is still happening, and when I have to set up a new Windows laptop it takes hours to remove unwanted software.

Unfortunately this cycle is hard to break, because OEM vendors have to compete on price, and consumers are seemingly poor at discriminating based on overall quality; too often they look for the best specification they can get for their money.

Further, Windows remains a well understood and popular target for malware. One of the reasons is that despite huge efforts from Microsoft with User Account Control (the technology behind “do you really want to do this” prompts in Windows Vista onwards), most users outside the enterprise still tend to run with full administrative rights for their local machine.

Apple exploited these weaknesses with Mac hardware that is much more expensive (and profitable), but which delivers a less frustrating user experience.

Apple has been steadily increasing its market share at the high end, but an even bigger threat to Windows comes from below. Locked-down tablets, specifically the Apple iPad and later Android tablets, also fixed the user experience, but at a relatively low price. Operating systems designed for touch control mean that a keyboard and mouse are no longer necessary, making for more elegant portable devices, and a wireless keyboard can easily be brought into use when needed.

Microsoft understood these trends, although late in the day. With Surface it began to manufacture its own hardware, an initiative which alongside the bricks-and-mortar Microsoft Stores (supplying trialware-free Windows PCs) aims to counter the corrosive race to the bottom among OEM vendors.

Windows 8 also introduces a new application model which is touch-friendly, secure, and offers easy app deployment via the app store.

In Windows RT the experiment is taken further, by locking down the operating system so that only these new-style apps can be installed.

Surface RT brings both these things together, solving many of the problems of Windows in a single package.

Why Surface RT failed

Surface RT is well made, though performance is disappointing; it seems that Nvidia’s Tegra 3 chipset is not quite sufficient to run Windows and Office briskly, though it is usable, and graphics performance is not bad.

There were several problems though.

  • The price was high, especially when combined with the clever keyboard cover.
  • It may solve the problems of Windows, but for many users it also lacks the benefits of Windows. They cannot run their applications, and all too often their printers will not print and other devices lack drivers.
  • Surface RT launched when the Windows 8 app store was new. The new app ecosystem also has its problems (all these things are inter-related) and in consequence few compelling apps were available.
  • Microsoft’s built-in apps were poor to indifferent, and Office was bundled without Outlook.

I was in New York for the launch of Surface RT. There were “Click In” ads everywhere and it was obvious that Microsoft had convinced itself that it could sell the device in large numbers immediately. That was a fantasy. I suppose that if consumers had taken Windows 8 to heart quickly (as opposed to resisting the changes it imposes) and if the app ecosystem had flourished quickly then it could have taken off but neither was likely.

Surface RT positives

Despite all the above, Surface RT is not a bad device. Personally I was immediately drawn to its slim size, long battery life, and high build quality. The keyboard cover design is superb, though not everyone gets on with the “touch” cover. I purchased one of the launch machines and still use it regularly for cranking out Word documents on the road.

Reviews on Amazon’s UK site are largely positive:


Surface RT is also improving as the software evolves. Windows 8.1, now in preview, adds Outlook and makes the device significantly more useful for Exchange users. Performance also gets a slight lift. The built-in apps are improving and app availability in general is much better than it was at launch, though still tiny compared to iPad or Android.

I have also been trying Surface Pro since receiving one at Microsoft’s Build conference last month. The Pro device has great performance and runs everything, but it is too bulky and heavy to be a satisfying tablet, and battery life is poor. I think of it more as a laptop, whereas Surface RT is a true tablet with a battery that gives pretty much a full day’s use when out and about.

Microsoft’s biggest mistake with Surface RT was not the concept, nor the quality of the device. Rather, it manufactured far too many units, thanks to unrealistic expectations of the size of the initial market. The sane approach would have been a limited release with the aim of improving and refining it.

I hope Microsoft perseveres both with Windows RT and with Surface RT. Give it better performance with something like Nvidia’s Tegra 4, plus Windows 8.1 and improved app support, and it is near-perfect.

The future of Windows

Desktop Windows will remain forever, but its decline is inevitable. Even if it fails, we should recognise that Microsoft is trying to fix long-standing and deep-rooted problems with Windows through its Windows 8, Surface and Windows RT initiatives, and there is some sanity in the solutions it has devised. Despite a billion dollars thrown away on excess Surface RT inventory, it should follow through rather than abandon its strategy.

Microsoft financials: nearly $1 billion lost on Surface RT but prospering in server and cloud

Microsoft has reported fourth quarter and full year results for its financial year ending June 30th 2013.

I am in the habit of tracking the results quarter by quarter with a simple table:

Quarter ending June 30th 2013 vs quarter ending June 30th 2012, $millions

Segment                     Revenue   Change   Profit   Change
Client (Windows + Live)        4411     +259     1099    -1323
Server and Tools               5502     +452     2325     +285
Online                          804      +69     -372    +6300
Business (Office)              7213     +889     4873     +745
Entertainment and Devices      1915     +134     -110     -142

What is notable in the figures? Windows profits are down, not so much due to declining PC sales but rather this:

These financial results include a $900 million charge, or a $0.07 per share impact, related to Surface RT inventory adjustments.

That said, Microsoft reports that after adjusting for deferred revenue, Windows client revenue decreased 6% for the quarter and 1% for the full year, so the PC decline is having an impact.

Business, which includes Office, SharePoint and Office 365, is performing well and the company reports $1.5 billion annual revenue for Office 365.

Server and Tools (almost all Server) continues to shine:

Server & Tools revenue grew 9% for the fourth quarter and 9% for the full year, driven by double-digit percentage revenue growth in SQL Server and System Center.

Even Online, which is essentially Bing-related advertising income, is showing signs of life, despite yet another loss:

Online Services Division revenue grew 9% for the fourth quarter and 12% for the full year, driven by an increase in revenue per search and volume. Bing organic U.S. search market share was 17.9% for the month of June 2013, up 230 basis points from the prior year period.

Windows Phone is hidden in Entertainment and Devices, which reported a loss despite $1.9 billion revenue. Microsoft says:

Windows Phone revenue, reflecting patent licensing revenue and sales of Windows Phone licenses, increased $222 million.

This means that Xbox is slightly down but overall revenue slightly up thanks to Windows Phone.

Overall both revenue and profit are a little higher than the previous year.

Losing a billion dollars on Surface RT is careless. Put simply, Microsoft ordered far too many of its experimental new ARM-based version of Windows, at a time when few new-style apps were available. I do not regard this as proof that the entire concept was wrong, though it is a significant mis-step however you spin it. See further post coming shortly.

Review: Hauppauge HD PVR 2 Gaming Edition Plus capture and streaming device

Hauppauge’s HD PVR 2 is a video capture device. The idea is that you connect it between a video source, such as an Xbox 360 or PlayStation 3, and the TV or home theatre system you normally use. Instant pass-through means you can continue to play games as normal, provided that the HD PVR 2 is powered up.


At the same time, the HD PVR 2 outputs the sound and video to a PC or Mac via USB. Capture software running on the computer lets you save your gaming session to disk, or broadcast it to a live streaming service so that your followers can watch your gaming triumphs and tragedies in real time, complete with voiceover commentary if you feel inclined to provide it.

I reviewed the original HD PVR 2 here. The Gaming Edition Plus has several new features:

  • Mac software is provided in the box, whereas before it was extra cost
  • An optical audio input is provided, so you can get surround-sound from a PS3
  • Updated software now includes StreamEez for live streaming of the captured video

In addition, whether because of firmware or driver updates, I found the HD PVR 2 Gaming Edition Plus generally less troublesome than the earlier model.

In the box

Kudos to Hauppauge for supplying a generous collection of cables.


Along with the software CD and a getting started leaflet, you get a USB cable, two HDMI cables, a 5-way special cable for connecting component video and stereo audio to the A/V input on the unit, and an adapter cable in case you prefer to use standard RCA cables for component video and audio.

The reason for both HDMI and component support is that the HD PVR 2 only works with unencrypted HDMI signals. This means it works with HDMI from the Xbox 360 but not from the PS3. In cases where unencrypted HDMI is not available, you will use the component option.


In order to get 5.1 surround sound without HDMI, you will need the optical in for audio.

The HD PVR2 itself is relatively compact. The snap below shows it with a CD so you can get a sense of the size.


Setup and usage

Setup is a matter of first making all the connections, including the USB connection to your computer, and then installing the software and drivers from the supplied CD.

There are two primary applications. One is Hauppauge Capture. You can use this to capture video in .TS (H.264) format, do basic editing, export videos to MP4, upload videos to YouTube, and stream to live services including Ustream. You can add a personal logo to your videos via Settings.

Capture is at a maximum of 1080p at 30fps, or you can downscale as needed.


The other supplied application is ArcSoft TotalMedia ShowBiz 3.5. This can also capture directly from the HD PVR 2, and in fact the documentation seems to steer you towards using ShowBiz rather than Hauppauge Capture. The ShowBiz editor has more features, including basic transition effects, storyboard and timeline, lettering, and upload to YouTube or export to file.

Setup was straightforward, though note that passthrough does not work until you have selected the video and audio input in settings on the PC. Once set, you can turn off or disconnect the computer and it continues to work.

Both applications worked well in my tests. While passthrough seems instant, there is a significant delay before video is captured, which is disorientating at first. I did experience occasional glitches. On one occasion the capture failed several minutes into a longer recording for no reason that I can see, but it seemed to be a one-off.

What about live streaming? I was excited to try this, and impressed by the ease of setup. Login is built into the capture application.


However I discovered that my ADSL broadband connection was too slow for live streaming and although I could see that the connection was working, the image simply stuttered and broke up.

Live streaming is also demanding on your hardware. See this thread for a discussion of the requirements.

In other words, for successful video capture any modern PC or Mac should work fine, but do not assume live streaming will work unless you have the right hardware and broadband connection.


I was impressed by how reliable the HD PVR 2 Gaming Edition Plus proved compared to the earlier version. If you want to get creative with video sourced from a gaming console, or any video source, you need a capture device, and this Hauppauge is an affordable and reliable choice. The supplied software is basic, but of course you can use other video editors like Sony Vegas or Adobe Premiere Pro with the files that you capture.


The HD PVR 2 costs around £130 – £150 in the UK. More details from the manufacturer’s website here.

Anders Hejlsberg says C# 6.0 to use Roslyn compiler, coming in next Visual Studio after VS 2013

A disappointment at Microsoft’s Build conference last month was lack of news about the next version of C#, version 6.0. C# architect Anders Hejlsberg did present a session, but it was on TypeScript, a language which compiles to JavaScript.

Aside: Hejlsberg talks about the new Xbox music app in Windows 8.1 (and Xbox One) which is written in JavaScript. It is a large app with 500,000 lines of code, and new features are now implemented in TypeScript (30,000 lines so far).

However, Hejlsberg did also talk about C# 6.0 at Build, during this Channel 9 Q&A, though you have to scroll through to reach the C# content (about 34 minutes in).


He confirmed that C# in Visual Studio 2013 is the same as before, but there will be new previews of the forthcoming “Roslyn” compiler soon, and that C# 6.0 will be in the “next Visual Studio after” which suggests Visual Studio 2014, presuming Microsoft sticks to its annual release cycle.

“We are at a point where the Roslyn compilers are done,” he said.

Roslyn, Hejlsberg explained, is the new compiler for “C#, and VB, and the language services in the IDE.”

Roslyn performance will be at least as good as the existing native compiler, says Hejlsberg. It is better suited to parallel processing so will take advantage of multi-core machines, “particularly for large projects.”

You can read more about Roslyn here. Microsoft describes it as “opening up the Visual Basic and C# compilers as APIs.” Practical benefits include features like instant porting of VB code to and from C#, and the use of C# and VB as macro languages within a .NET application.

Hejlsberg also says that Roslyn will enable a faster pace of evolution for C# in future.

Another aside: Xamarin, which provides a compiler for C# targeting iOS and Android, gets a nod of approval from Hejlsberg. “I’m a great fan of their work,” he says.

Blogger (and former Microsoft Excel developer) Wesner Moise provides a transcript of the key points.