New Office 365 OneDrive for Business sync client now supports team sites

Microsoft has announced new capabilities for its next-generation OneDrive for Business sync client – the software that lets users access OneDrive documents through Windows Explorer rather than having to go via a web browser.

Technically, there are two ways to access OneDrive with Windows Explorer. One uses WebDAV and only works online; the other makes a local copy of the documents and synchronises them when it can. Microsoft pushes users towards the second option. If you use WebDAV, repeated authentication prompts and the lack of offline access are annoyances that many find hard to live with.
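For the curious, the WebDAV route amounts to mapping the OneDrive URL as a network drive. A minimal sketch, with a hypothetical tenant and user (it relies on the Windows WebClient service and an existing browser sign-in):

net use O: "https://contoso-my.sharepoint.com/personal/user_contoso_com/Documents"

Hence the authentication annoyance: when the cached sign-in lapses, the mapped drive stops working until you sign in again in the browser.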

The problem is that the old OneDrive for Business sync client, called Groove, is just not reliable. Every so often it stops syncing, and often there is no solution other than to delete all the local copies and start again.

Microsoft is therefore replacing it with a new OneDrive for Business sync client, which has been in preview since September 2015. “The preview client adds OneDrive for Business connectivity to our proven OneDrive consumer client,” explained Microsoft, which is abandoning the problematic Groove.

There was a snag though. The new client supported only personal OneDrive for Business cloud storage, not Team Sites (part of SharePoint Online). Many businesses make more use of Team Sites than they do of the personal storage. Users with both had to run the old and new sync clients side by side.

I was among those complaining, so it is pleasing to see that Microsoft, a mere 15 months later, has met my request by adding support for Team Sites to its new client.

[Image: the feedback request for Team Sites support]

(I had no idea until I looked today how much support the feedback had received).

Today’s announcement also includes a new standalone Mac client, which can be deployed centrally, and an enhanced UI with an Activity Center.

There are also new admin features in the Office 365 dashboard, like blocking syncing of specified file types, control over device access, and usage reporting.

There may still be some snags – and note that the new client is still a preview.

Competitors like Dropbox and Box have some technical advantages, but Microsoft’s key benefit is integration with Office 365, and the fact that it comes as part of the bundle in most plans. If it can iron out the technical issues, of which sync has to date been the most annoying, it will significantly strengthen its cloud platform.

Publishing Exchange with pfSense

pfSense is a FreeBSD-based firewall which you can find here.

I wanted to publish Exchange through pfSense. I installed the Squid plugin which includes specific reverse proxy support for Exchange.

If you search for help with publishing Exchange on pfSense you will find this document by Mohammed Hamada.

Unfortunately the steps given seem to be incorrect in some places, certainly for my version, 2.3.2.

Here’s what I had to do to get it working:

1. A simple one not mentioned in his steps: you have to enable the Squid Proxy Server, otherwise Squid will not run.

2. Hamada sets a NAT rule to forward HTTPS traffic to his Exchange server:

[Image: NAT rule forwarding HTTPS to the Exchange server]

If you do this, it will bypass your reverse proxy. What you should do instead is to create a Firewall rule to accept HTTPS:

[Image: firewall rule accepting HTTPS]

You should also verify that the pfSense web GUI is not using the same port (443), in System/Advanced/Admin Access; if it is set to HTTP rather than HTTPS, there is no conflict. Normally, access to the web GUI from the WAN is blocked in any case. One other thing: in order to use port 443 in Squid Reverse Proxy General Settings, I set net.inet.ip.portrange.reservedhigh to 0 in System/Advanced/System Tunables.
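For reference, that tunable is a standard FreeBSD sysctl, so you can also check and set it from a shell on the pfSense box; a minimal sketch (the System Tunables entry is what makes the change persist across reboots):

sysctl net.inet.ip.portrange.reservedhigh
sysctl net.inet.ip.portrange.reservedhigh=0

Setting it to 0 empties the reserved port range, allowing a non-root process such as Squid to bind to ports below 1024, including 443.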

3. I did this, as well as setting up Exchange in Squid Reverse Proxy General Settings, whereupon OWA worked but remote Outlook and mobile clients did not, or at least not reliably. The main problem was this setting in Squid Reverse Proxy / General:

[Image: the problem setting in Squid Reverse Proxy / General]

This must be set to Intermediate rather than Modern (the default).

Now it works – though if pfSense experts out there have better ways to achieve the above I would be interested.

Update: one other thing to check: make sure that your pfSense box can resolve the internal hostname of your Exchange server. By default it may use external DNS servers even if you put internal DNS servers in General Setup, because of the setting “Allow DNS server list to be overridden by DHCP/PPP on WAN”.
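A quick way to verify is Diagnostics / DNS Lookup in the web GUI, or a lookup from a shell on the box; a minimal sketch, using a hypothetical internal hostname:

host exchange.internal.example.com

If this returns the public rather than the internal address, the resolver is still using the external DNS servers.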

Microsoft Office 365 Activation Hassles

Imagine you are a customer of Microsoft’s Office 365 service, including a subscription to the Office desktop applications like Word, Excel and Outlook.

One day you click on the shortcut for Word, but instead of opening, it just shows a “Starting” splash screen which never progresses.

Being smart, you try to start Word in safe mode by holding down the Ctrl key, but the exact same thing happens.

Annoying, when you want to do your work. What is going on?

I took a look at a case like this. Two things you should do (after the usual reboot):

1. Look in the event viewer. Here, I found a clue that the issue is related to software activation, specifically Event 2011 “Office Subscription Licensing exception”:

[Image: Event 2011 “Office Subscription Licensing exception” in Event Viewer]
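As an aside, you can query for the same event from a command prompt with wevtutil; a minimal sketch, assuming the event is written to the Application log as it was in this case:

wevtutil qe Application /q:"*[System[(EventID=2011)]]" /c:5 /rd:true /f:text

This prints the five most recent matching events in readable text form.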

2. For all things related to Office licensing, open a command prompt, go to (for example) C:\Program Files (x86)\Microsoft Office\Office16, and type:

cscript ospp.vbs /dstatus

In this case I got the following:

[Image: ospp.vbs /dstatus output showing two product keys, one expired]

This told me that Windows thinks TWO product keys for Office are installed. One has expired, the other is fine.

The guilty party may (or may not) be the trial version of Office typically pre-installed with a new PC. Or it could be a consequence of changing your Office 365 subscription. Neither would be the fault of the user, who is fully licensed and has done nothing other than follow Microsoft’s normal procedures for installing Office 365.

Solution: we reinstalled Office from the Office 365 portal, and attempted to remove the dud product key with:

cscript ospp.vbs /unpkey:<Last five characters of product key>

as explained here. All is well for the moment.
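For reference, the whole sequence from an elevated command prompt looks like this (the Office16 path varies with the Office version and bitness, and XXXXX is a placeholder for the last five characters of the expired key reported by /dstatus):

cd /d "C:\Program Files (x86)\Microsoft Office\Office16"
cscript ospp.vbs /dstatus
cscript ospp.vbs /unpkey:XXXXX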

This kind of thing drives me nuts though. Activation and subscription license checking is for the benefit of the vendor, not the user, and should never get in the way like this.

Further, cannot Microsoft find some way of informing the user when this happens, and not have Word simply hang on starting? How difficult is it to check for licensing and activation issues, and throw up a message?

More on MQA and Tidal: a few observations

I have signed up for a trial of the Tidal subscription service and have been listening to a few of the MQA-encoded albums that are available. You can find a list here. Most of the albums are from Warner, which is in the process of MQA-encoding all of its catalogue.

From my point of view, having familiar material available to test is a huge advantage. Previous MQA samples have all sounded good, but with no point of reference it is hard to draw conclusions about the value of the technology.

I have used both the software decoding available in the Tidal desktop app (running on Windows), and the external Meridian Explorer 2 DAC which is an affordable solution if you want something approaching the full MQA experience.

[Image: Meridian Explorer 2 DAC]

Note that on Windows you have to set Exclusive mode for MQA to work correctly. When using an MQA-capable DAC, you should also set Passthrough MQA. The Explorer 2 has a blue light which shows when MQA is on and working.

[Image: the Explorer 2’s MQA indicator light]

For these tests, I used the Talking Heads album Remain in Light, which I know well.

The Tidal master is different from any of my CDs. Here is the song Born Under Punches in Adobe Audition (after analogue capture):

[Image: waveform of the Tidal master]

Here is my remastered CD:

[Image: waveform of the remastered CD]

This is pretty ugly; it’s compressed for extra loudness at the expense of dynamic range.

Here is my older CD:

[Image: waveform of the older CD]

This is nicely done in terms of dynamic range, which is why some seek out older masterings, despite perhaps using inferior source tapes or ADC.

This image shows three variants of the track streamed by Tidal and captured via ADC into a digital recorder at 24-bit/96 kHz.

[Image: waveforms of the three captured variants]

The first is the track with full MQA enabled and decoded by the Explorer 2. The second is the “Hi-Fi” version as delivered by Tidal, essentially CD quality. The third is the “Master” version, in other words the same source as the first, but with Exclusive mode turned off in Tidal, which prevents MQA from working.

You can see at a glance that MQA is doing what it says it does and extending the frequency response. The CD-quality output has a maximum frequency response of 22.05 kHz, whereas the MQA output extends this to 48 kHz, at least as captured by my 24-bit/96 kHz recording (the theoretical maximum frequency response is half the sampling rate).
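To spell out the Nyquist arithmetic behind those figures:

maximum frequency = sampling rate / 2
44.1 kHz / 2 = 22.05 kHz (CD)
96 kHz / 2 = 48 kHz (my capture)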

Do they sound different though, bearing in mind that we cannot hear much above 20 kHz at best, and less than that as we age? I have been round this hi-res loop many times and concluded that for most of us there is not much benefit to hi-res as a delivery format. See here for some tests, for example.

MQA is not just extended frequency response though; it also claims to fix timing issues. However my captured samples are not really MQA; they are the output from MQA after a further ADC step. Of course this is not optimal but the alternative is to capture the digital output, which I am not set up to do.

An interesting question is whether the captured MQA output, after a second ADC/DAC conversion, can easily be distinguished from the direct MQA output. My subjective impression is, maybe. The first 30 seconds of Born Under Punches is a sort of collage of sounds including some vocal whoops, before David Byrne starts singing. What I notice listening to the Tidal stream with MQA enabled is that the different instruments sound more distinct from each other making the music more three-dimensional and dramatic. The vocals sound more natural. It is the best I have heard this track.

That said, I have not yet been able to set up any sort of blind test between the true MQA stream and my copy, which would be interesting, since what I have captured is plain old PCM.

There is a key point to note though, which is that the mastering offered by Tidal is better than any of the CD versions I have heard; the old Eighties mastering is more dynamic but sounds harsher to my ears.

With or without MQA, you might want to subscribe to Tidal just to get these superior digital transfers.

Update: it seems that the Tidal stream for Remain in Light (both MQA and Hi-Fi) is a different mix, possibly a fold-down from the 5.1 release. So it is not surprising that it sounds different from the CD. The question of whether the MQA decoded version sounds different still applies though.

The MQA enigma: audio breakthrough or another false dawn?

The big news in the audio world currently, announced at CES in Las Vegas, is that music streaming service Tidal has signed up to use MQA (Master Quality Authenticated), under the brand name Tidal Masters. MQA is a technology developed by Bob Stuart of Meridian Audio, based in Cambridge in the UK, though MQA seems to have its own identity despite sharing the same address as Meridian.

[Image: MQA logo]

What is MQA? The question is easy but the answer is not. Here is the official short description:

Conventional audio formats discard parts of the sound to keep file size down, but part of this lost detail is the subtle timing information that allows us to build a realistic 3D soundscape in our minds. … With MQA, we go all the way back to the original master recording and capture the missing timing detail. We then use advanced digital processing to deliver it in a form that’s small enough to download or stream.

At first sight it looks like another format for lossless audio, and the description on MQA’s site confuses matters by making a comparison with MP3:

MP3 brings you just 10% of what was recorded in the studio. Everything else is lost to fit the music into a conveniently small file. MQA brings you the missing 90%.

There are two problems with this statement. One is that MP3 (or its successor AAC) actually sounds very close to the original, such that in tests most cannot tell the difference; and the other is that audiophiles tend not to use MP3 anyway, preferring formats like FLAC or ALAC (Apple’s version) which are lossless.

There is more to it than that though. There are three core aspects to MQA:

1. “Audio origami”: MQA achieves higher resolution than CD (16-bit/44.1 kHz) by storing extra information in parts of the audio file that are otherwise wasted, because they represent audio below the noise floor (ie normally inaudible). There is a bit of double-think here, as removing unnecessary parts of audio files is the sort of thing that MP3 and AAC do, which the MQA folk have told us is bad because we are not getting 100%.

This is also similar in concept to HDCD (High Definition Compatible Digital), a technology developed by Pacific Microsonics in the Nineties and acquired by Microsoft. Of course MQA says its technology is quite different!

Note that you need an MQA decoder to benefit from this extra resolution, and there is a nagging worry that without it the music will actually sound worse (HDCD has the same issue).

2. Authentication. MQA verifies that the digital stream is not tampered with, for example by audio features that convert or enhance the sound with digital processing. This can be an issue particularly with PCs or Macs where the built-in audio processing will do this by default, unless configured otherwise.

3. Audio “de-blurring”. According to MQA’s team:

There’s a problem with digital – it’s called blurring. Unlike analogue transmission, digital is non-degrading. So we don’t have pops and crackles, but we do have another problem – pre- and post-ringing. When a sound is processed back and forth through a digital converter the time resolution is impaired – causing ‘ringing’ before and after the event. This blurs the sound so we can’t tell exactly where it is in 3D space. MQA reduces this ringing by over 10 times compared to a 24/192 recording.

If this is an issue, it is not a well-known one, at least, not outside the niche of audiophiles and hi-fi vendors who historically have come up with all sorts of theories about improving audio which do not always stand up to scientific scrutiny.

So is MQA solving a non-problem? That’s certainly possible; but I do find it interesting that MQA has received a generally warm reception from listeners.

Here’s one audiophile’s reaction:

Have never really “done” digital before. 16/44 has always sounded ghastly to my ears right from the start and still now. MQA did indeed “fix” the various forms of distortion that I could hear present in everything where the sampling rate was taken down to just 44. … My findings – those of an improved sense of solidity in the stereo image and the lack of that horrendous crystalline glassy edge to things, especially on the fade, seem to be being mirrored in what people are hearing. It doesn’t have that thing I describe as a “choppy sense of truncation” which I suspect others mean by “transients”.
Basically, per the post above, it’s a bit like “good analogue”. Digital can finally hold its head up high against an analog from master-to vinyl performance. And not only that, hopefully, walk all over it and give us something genuinely new.

If the history of audio has shown us anything, it is that subjective judgements about what makes something sound better (and whether it is better) are desperately unreliable. Further, it is often hard to make true comparisons, because to do so requires so much careful preparation: identical source material, exactly matched volume, and the ability to switch between sources without knowing which is which, to stop our clever brains from intervening and telling us we are hearing differences which our ears alone cannot detect.

We should be sceptical then; and even possibly depressed at the prospect of a proprietary format spoiling the freedom we have enjoyed since the removal of DRM from most downloadable audio files.

Still … is it possible that MQA has come up with a technology that really does make digital audio better? Of course we should allow for that possibility too.

I have signed up for Tidal’s trial and will report back shortly.

From Windows Embedded to cloud: Microsoft announces the Connected Vehicle Platform

Microsoft has announced the Connected Vehicle Platform, at the CES event under way in Las Vegas.

[Image: Microsoft’s Connected Vehicle Platform announcement]

The company is not new to in-car systems, but its track record is disappointing. It used to be all about Windows Embedded, using Windows CE to make a vehicle into a smart device.

Ford was Microsoft’s biggest partner. It built Ford SYNC on the platform and in 2012 announced five years of partnership and 5 million SYNC-enabled vehicles.

However in 2014 Ford announced SYNC 3 with no mention of Microsoft – because SYNC 3 uses BlackBerry’s QNX.

What went wrong? There’s a 2014 analysis from Bill Howard that offers a few clues. The bit that chimes with me is that Microsoft was too slow in updating the system. The overall Windows story over the last 10 years is convoluted to say the least, with many changes to the platform and disruptive (in a bad way) strategy shifts. The same factor is a large part of why Windows Phone failed.

It is not clear at this stage whether or not Microsoft’s Connected Vehicle Platform partners (which include Renault-Nissan and BMW) will use Windows Embedded in their solutions; but what is notable is that Microsoft’s release makes no mention of it. The company has shifted to a cloud strategy, and is primarily offering Azure services rather than mandating how manufacturers choose to consume them. The detail of the announcement identifies five key areas:

  • Telematics and Predictive services
  • Marketing (“Customer insights and engagement”)
  • Productivity (Office 365, Skype)
  • Connected ADAS (Advanced Driver Assistance Systems), ie. the car helping you to drive
  • Advanced Navigation

Cortana also gets a mention. We may think of Cortana as a virtual assistant, but in this context it means a user interface to intelligent services.

There is big competition for all this of course, with Google, Amazon and Apple also in this space. There is also politics involved. If you read Howard’s analysis linked above, note that he mentions how the auto companies dislike restrictions such as Google insisting that you can’t have Google Search unless you also use Google Maps (I have no idea if this is still the case). There is a tension here. In-car systems are an important value-add for customers and critical to marketing vehicles, but the auto companies do not want their vehicles to become just another channel for big data-gathering companies like Google and Amazon.

Another point of interest is how smartphones interact with your car. If you want a simple and integrated experience, you can just dock your phone and use it for navigation, communication and entertainment – three key areas for in-car systems. On the other hand, a docked phone will not have the built-in screen and control of vehicle features that an embedded system can offer.