Microsoft publishes new OneDrive API with SDK, sample apps

Microsoft has announced a new OneDrive API for programmatic access to its cloud storage service. It is a REST API which Microsoft Program Manager Ryan Gregg says the company is also using internally for OneDrive apps. The new API replaces the previous Live SDK, though the Live SDK will continue to be supported. One advantage of the new API is that you can retrieve changes to files and folders in order to keep an offline copy in sync, or to upload changes made offline.
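
To give an idea of what that looks like, here is a minimal sketch (mine, not Microsoft's sample code), assuming the documented view.delta endpoint, an invented folder path and an OAuth access token obtained elsewhere:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class DeltaSyncSketch
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // Hypothetical token; in a real app this comes from the OAuth sign-in flow.
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "ACCESS_TOKEN");

        // The first call returns every item plus a delta token; pass that token back
        // on later calls to receive only the items that have changed since.
        var response = await http.GetAsync("https://api.onedrive.com/v1.0/drive/root:/Music:/view.delta");
        Console.WriteLine(await response.Content.ReadAsStringAsync()); // parse the items and the next token here
    }
}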

Unfortunately this does not extend to downloading only the changed part of a file (as far as I can tell); you still have to delete and replace the entire file. Imagine you had a music file in which only the metadata had changed: with the OneDrive API, you would have to upload or download the entire file, rather than simply applying the difference. However, you can upload files in segments in order to handle large files, up to 10GB in size.
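
The segmented upload works by creating an upload session and then sending the file in byte ranges. Roughly, reusing the usings from the sketch above plus System.IO and System.Text.Json (the endpoint, file path and fragment size here are my assumptions, not copied from the documentation):

static async Task UploadInSegmentsAsync(HttpClient http, string localPath)
{
    // Ask the service for an upload session; the response JSON contains an uploadUrl.
    var create = await http.PostAsync(
        "https://api.onedrive.com/v1.0/drive/root:/Music/big.flac:/upload.createSession", null);
    string uploadUrl = JsonDocument.Parse(await create.Content.ReadAsStringAsync())
                                   .RootElement.GetProperty("uploadUrl").GetString();

    // Send the file in fragments, each marked with a Content-Range header.
    byte[] data = await File.ReadAllBytesAsync(localPath);
    const int fragmentSize = 10 * 1024 * 1024; // 10MB per segment, purely illustrative
    for (long pos = 0; pos < data.Length; pos += fragmentSize)
    {
        int len = (int)Math.Min(fragmentSize, data.Length - pos);
        var content = new ByteArrayContent(data, (int)pos, len);
        content.Headers.ContentRange = new ContentRangeHeaderValue(pos, pos + len - 1, data.Length);
        await http.PutAsync(uploadUrl, content);
    }
}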

I have worked with file upload and download using the Azure Blob Storage service so I was interested to see what is now on offer for OneDrive. I went along to the OneDrive API site on GitHub and downloaded the Windows/C# API explorer, which is a Windows Forms application (why not WPF?). This uses a OneDrive SDK library which has been coded as a portable class library, for use in desktop, Windows 8, Windows Phone 8.1 and Windows Phone Silverlight 8.

image

I have to say this is not the kind of sample I like. I prefer short snippets of code that demonstrate things like: here is how you authenticate, here is how you iterate through all the files in a folder, here is how you download a file, here is how you upload a file, and so on. All these features are there in this app, but finding them means weaving your way through all the UI code and async calls to piece together how it actually works. On top of that, despite all those async calls, there are some performance issues which seem to be related to the smart tiles which display a preview image, where possible, from each file and folder. I found the UI becoming unresponsive at times, for example when retrieving my large SkyDrive camera roll.
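
For instance, the kind of snippet I have in mind for downloading a file's content is just a couple of lines against the raw REST API (the path is invented, and I am assuming the same authenticated HttpClient as in the earlier sketch):

// Fetch the file content directly; OneDrive redirects to the download URL
// and HttpClient follows the redirect automatically.
byte[] bytes = await http.GetByteArrayAsync("https://api.onedrive.com/v1.0/drive/root:/Music/song.flac:/content");
await File.WriteAllBytesAsync(@"C:\Temp\song.flac", bytes);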

Gregg makes no reference in his post to OneDrive for Business, but my assumption is that the new API only applies to consumer OneDrive. Microsoft has said though that it intends to unify its two OneDrive services so maybe a future version will be able to target both.

At a quick glance the API looks different to the Azure Blob Storage API. They are different services but with some overlap in terms of features and I wonder if Microsoft has ever got all its cloud storage teams together to work out a common approach to their respective APIs.

I do not intend to be negative. OneDrive is an impressive and mostly free service and the API is important for lots of reasons. If you find the OneDrive integration in the current Windows 10 preview too limited (as I do), at least you now have the ability to code your own alternative.

Playing native DSD with Raspberry Pi 2 and Volumio

There are many intriguing debates within the world of audio, and one which has long interested me concerns DSD (Direct Stream Digital). This is an alternative technology for converting and recovering sound from digital storage. The more common PCM (Pulse Code Modulation) works by sampling sound at very short intervals and recording its volume. By contrast, DSD records the difference between one sample and the next, sampling at an even greater frequency to compensate for the fact that it only captures a single bit of data in each sample (i.e. on or off). For example, the standard used by CD is:

16-bit precision, 44.1 kHz sampling rate

and by SACD (a DSD format):

1-bit precision, 2.8224 MHz sampling rate
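
To put those side by side: a stereo CD stream works out at 16 × 44,100 × 2 = 1.4112 Mbit/s, while stereo SACD is 1 × 2,822,400 × 2 = 5.6448 Mbit/s, and the DSD sampling rate is exactly 64 times the CD rate, which is why standard-rate DSD is often called DSD64.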

The SACD was introduced by Philips and Sony in 1999 as an upgrade to CD, since it is a higher resolution format capable of a dynamic range of 120 dB and a frequency response up to 100 kHz. It was an effort, like the PCM-based DVD Audio, to convince the public that the CD is not good enough for the best quality sound.

SACD was largely unsuccessful, mainly because there was not really any dissatisfaction with CD quality among the general public, and even some experts argue that CD quality already exceeds what is required to be good enough for human hearing.

That said, SACD was popular in the niche audiophile community, more so than DVD Audio. Some listeners feel that SACD and DSD result in a more natural sound, and believe that PCM has some inherent harshness, even at higher-than-CD resolutions. Enthusiasts say that DSD stands for “Doesn’t sound digital”. For this reason, there is a regular stream of new SACD releases even today, and DSD downloads are also available from sites like Native DSD Music and Blue Coast Records. Some DSD downloads are at higher resolution than SACD – double-rate or quad-rate.

The resurgence of DSD has been accompanied by the increasing availability of DSD DACs (Digital to Analogue Converters). While these tend to be more expensive than PCM-only DACs, prices have come down, and a quick eBay search will find one for under £100.

There are several complications in the DSD vs PCM debate. While DSD is a reasonable format for storing digital audio, it is poor for processing audio, so many SACDs or DSD downloads have been converted to and from PCM at some point in their production history. If PCM really introduces harshness, the damage has presumably already been done by the time the recording gets back to DSD. That said, it is possible to find some examples that are captured straight to DSD; this can work well for live recording.

Another complication is that some consumer audio equipment converts DSD to PCM internally, to enable features like bass management.

Is there any value in pure DSD? I wanted to try it, preferably with DSD files rather than simply with SACD, since this is much easier for experimenting with different formats and conversions as well as enabling Double DSD and higher. Unfortunately SACD is rather hard to rip, though there is a way if you have the right early model of Sony PlayStation 3.

The first step was to get a DSD-capable DAC. I picked the Teac 301, which is a high quality design at a reasonable price. But how to get DSD to the DAC? Most DSD DACs support a feature called DSD over PCM (DoP), which conveys the DSD signal in a PCM-format wrapper. DoP is not a conversion to and from PCM; it merely looks like PCM for better compatibility with existing playback software.
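
As I understand the DoP convention (a sketch of the idea, not code from any particular player): each 24-bit PCM sample carries a marker byte in its top 8 bits, alternating between 0x05 and 0xFA, with 16 raw DSD bits below it, so standard-rate DSD rides inside a 176.4 kHz, 24-bit PCM stream with the audio data left untouched. In code the framing amounts to something like this:

class DoPSketch
{
    // Pack raw 1-bit DSD data (one channel, 8 samples per byte) into DoP frames:
    // two DSD bytes go in the low 16 bits of each 24-bit sample, marker byte on top.
    static int[] PackDoPFrames(byte[] dsdBytes)
    {
        var frames = new int[dsdBytes.Length / 2];
        byte marker = 0x05;
        for (int i = 0; i < frames.Length; i++)
        {
            frames[i] = (marker << 16) | (dsdBytes[2 * i] << 8) | dsdBytes[2 * i + 1];
            marker = marker == 0x05 ? (byte)0xFA : (byte)0x05; // markers alternate every frame
        }
        return frames;
    }
}

A DoP-aware DAC spots the alternating marker pattern and reconstructs the DSD stream; anything else just sees what looks like rather noisy PCM.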

Next, I used a Raspberry Pi 2 supplied by Element14 (cost around £25.00) and installed Volumio, a pre-built version of Linux which includes an audio streamer and web-based user interface. You download Volumio as a single file which you burn onto a micro SD card using a utility such as Win32DiskImager. Then you plug the card into the Pi, connect to a home network via Ethernet and to a DAC via USB, and power on.
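
(Incidentally, if you are preparing the card from a Linux machine rather than Windows, dd does the same job as Win32DiskImager, for example sudo dd if=volumio.img of=/dev/sdX bs=4M, where the image name and the /dev/sdX device are placeholders to replace with your own; writing to the wrong device will destroy its contents.)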

After a minute or two I could connect to Volumio using a web browser.

image

My music is stored mostly in FLAC format, but with a few DSD files in Sony’s DSF (DSD Stream File) format, and located on a Synology NAS (Network Attached Storage). In Volumio’s menu I went to Library and mounted the network share containing the media. Next, I tried to play some music. PCM worked, but not DSD. I changed the playback settings to enable DoP:

image
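
Behind the web UI, Volumio is built around MPD (Music Player Daemon), and as far as I can tell this toggle maps to MPD's dop option on the ALSA output. If you were configuring MPD by hand, the relevant block of mpd.conf would look roughly like this (the device name is illustrative and depends on how your DAC enumerates):

audio_output {
    type    "alsa"
    name    "USB DAC"
    device  "hw:1,0"   # whichever ALSA device your DAC appears as
    dop     "yes"      # wrap DSD as DSD-over-PCM on the way out
}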

Success: My DSD files play perfectly:

image

If you squint at this image you will see that the 5.6 MHz light is illuminated, indicating that the DAC is processing Double DSD.

It sounds lovely, but is it any better than the more convenient PCM format? I am sceptical, but intend to try some experiments, using a forthcoming audio show to find some willing listeners.

That aside, I am impressed with the capability of the Raspberry Pi for enabling a simple and cost-effective means of playing DSD over the network, albeit with the assistance of an external DAC. It plays PCM formats too of course, and with Volumio it is easily controlled using a mobile device, thanks to the touch-friendly web UI.

Universal Apps: a look at Microsoft’s first efforts on Phone and PC

Windows 10 for phones is now available in preview; I wrote a first-look piece for The Register here. I like it better than I had expected; it is a bit laggy, but pretty much stable, and with some compelling new features.

The main interest of the preview for me though is the appearance of first-party universal apps. Since these form a key part of the strategy for Windows 10, it seems to me that they merit close attention; after all, this is what Microsoft is hoping other developers will do when creating apps for Windows. Universal apps are not actually new in Windows 10 – you can write one today for Windows 8 and Windows Phone – but in the forthcoming Windows they run on the desktop rather than just in the tablet environment. There are also changes in the Windows Runtime API and frameworks, though these are currently undocumented as far as I am aware (wait for Build!).

How many Microsoft universal apps are there in Windows 10, designed for both tablet and phone? Quite a few. The ones I am looking at here are Settings (not sure if this is actually the same app), Calculator, Photos, Sound Recorder, Alarms and Feedback.

More are coming, most notably Outlook (including Mail and Calendar), Word, Excel and PowerPoint. The latter three are already available in preview in Windows 10 for PCs and tablets, but not yet for phone. However, the Android and iOS phone versions are probably a good indication of what is to come, at least for Word, Excel and PowerPoint. For Outlook there is some confusion caused by Microsoft acquiring third-party apps and rebadging them, so in these cases Windows 10 may diverge more from iOS and Android.

Enough apps then to be significant. In the screenshots that follow, I have shown in most cases three versions of each app: Windows Phone 8.1 (the equivalent app, not a universal app), Windows 10 PC, and Windows 10 phone. My general observations are:

1. The old Windows Phone version is more carefully tailored to a smartphone, with a chunky UI optimized for touch.

2. The new apps have more functionality, as you would expect for apps that need to work on the desktop where expectations are higher.

3. The new apps have a distinctive look and feel compared to either Windows Phone 8.1 apps or Windows 8 “Metro” apps. Needless to say, they look different from Windows 7 style desktop apps as well. These are still Windows Runtime apps (the platform underlying “Metro” or “Store” apps), but in general the UI is denser than before; there is more information in view on a single screen.

While I have some doubts about the usability of the new apps on a phone, this seems to me a good direction overall; the phone is benefiting from work Microsoft is doing for the PC and vice versa. I think we will see better, more useful apps on both platforms as a result.

Now for the screenshots:

Calculator

Windows Phone 8.1 Windows 10 Phone Windows 10 PC
image image image

A good example of how the new app is more functional but less well optimized for touch.

Alarms

Windows Phone 8.1 Windows 10 Phone Windows 10 PC
image image image

I have cheated a bit here, because there is no world clock in the old Alarms app!

Sound Recorder

Windows 10 Phone Windows 10 PC
image image

No Phone 8.1 version. But you can see this really is the same app. I am glad to see this on the phone; it is an update of an ancient Windows accessory and actually useful.

Photos

Windows Phone 8.1 Windows 10 Phone Windows 10 PC
image image image

Feedback

Windows 10 Phone Windows 10 PC
image image

While this is the same app, you can see that Microsoft has adapted the UI for the phone. In the Phone version, you hit the All Categories link to see the categories and select. In the PC version, they are listed in a left-hand column. The Universal App concept allows for a totally different UI on different devices if necessary.
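
Under the hood, the Windows 8.1 universal app model (which is all we have documentation for today) handles this with a shared project plus per-platform XAML, and conditional compilation where the code needs to branch; the names below are invented, but the pattern is something like this:

// Shared code in a Windows 8.1-era universal project. WINDOWS_PHONE_APP is
// defined only when building the phone head, so the same handler can take a
// different route on each device family. (Windows 10's model is not yet
// documented, so treat this as an illustration of the concept.)
private void ShowCategories()
{
#if WINDOWS_PHONE_APP
    // Phone: navigate to a separate full-screen categories page
    Frame.Navigate(typeof(CategoriesPage));
#else
    // PC/tablet: the categories are already listed in the left-hand column
    CategoriesList.Visibility = Visibility.Visible;
#endif
}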

Settings

Windows Phone 8.1 Windows 10 Phone Windows 10 PC
image image image

The Settings app is radically changed in Windows 10; a good thing, in that the Windows Phone 8.1 Settings is a hopelessly long and confusing list and needed some organisation. The Windows 10 PC version looks different but has the same sections and icons.

Restoring a system image backup on Windows 7 when system recovery fails

I was asked to look at a laptop over the weekend. It was an HP running Windows 7 Home Premium, and the user was having problems installing applications. I noticed several things about it:

  • Lots of utilities like registry cleaners, system care, driver accelerator and more were installed
  • When I tried to remove the third-party firewall and use the Windows firewall instead, the Windows firewall could not be fully enabled
  • Most applications could not be removed using Control Panel – Programs and Features
  • Right-clicking a network connection and choosing Properties gave an error

When Windows is in this kind of state it makes sense to reinstall from scratch. There was an intact recovery partition, so I backed up the data and ran system recovery. This seemed to go fine until right at the end, when it gave an error and invited me to contact HP support. Oddly, HP’s “Minimized Image Recovery” option also gave an error, but it did at least leave me with a working “Windows Basic” installation; Windows Basic is not much use, though, because of some arbitrary limitations Microsoft imposed.

Now I had a problem, in that the system recovery had successfully removed the old Windows install, but had failed to install a new one.

One solution would be to re-purchase Windows or try to get recovery media from HP, but before going down that route, I decided to use a system image backup that had been made earlier. There was a backup from a year or so ago on a USB hard drive. I booted using a Windows 7 DVD, chose Repair your computer, then System Image Recovery.

Unfortunately Windows refused to list the backed up system image, even though it was in the standard location under WindowsImageBackup. Since the backup was not listed, it could not be restored.

Fortunately there is another approach that works. A system image backup actually creates a virtual hard drive (.vhd) file for each of the drives you select. You can zap the contents back onto the real hard drive to restore it.

This HP has three partitions. One is a small system partition used for booting, one is the main partition (C drive) and one is the recovery partition. The main partition is the one that matters. Here is what I did.

First, I installed Drive Snapshot, a utility I’ve found reliable for this kind of work.

Next, I plugged in the USB drive and found the .vhd file. These are located in WindowsImageBackup\[NAME OF PC] and have long names with letters and numbers (actually a GUID) followed by .vhd. The old C drive will be the largest file (there are usually at least two .vhd files, the smaller one being the system partition).
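
For illustration, the layout on the backup drive is something like this (the names here are placeholders, and there may be a dated Backup subfolder in between):

WindowsImageBackup\[NAME OF PC]\
    {guid-of-system-partition}.vhd    (the small one)
    {guid-of-old-c-drive}.vhd         (the large one, the one that matters)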

Step 3 is to mount the vhd so it looks like a real drive in Windows. You do of course need a working Windows PC for this; even Windows Basic will do, or you can use a spare PC. I opened a command prompt using Run as administrator and ran DISKPART. The commands are:

select vdisk file="path\to\vhd\filename.vhd"

attach vdisk

I generally leave DISKPART open so you can detach the vdisk when you are done.

When you enter “attach vdisk” an additional drive will appear in Windows Explorer. This is your old drive. You can copy urgent documents or data from here if you like.

The goal though is to restore your PC. Run Drive Snapshot or an equivalent utility.

image

Choose Backup Disk to File. Select your old drive and back it up to an external USB drive. I hesitate to mention it, but you also need to keep the drive with the .VHD on it attached for obvious reasons! You can back up to that same drive if there is room.

Once complete, go back to DISKPART and enter:

detach vdisk

Now you need to use Drive Snapshot to restore your old hard disk. I was lucky in this case; I could run the utility in Windows Basic on the laptop itself and restore it from there. Drive Snapshot is smart enough that you can even restore the drive where it is running, after a reboot. You could also use pretty much any old version of Windows, no need to activate it, just to run the utility.

After the restore I was able to boot Windows and all was well, apart from the hundreds of Windows Updates needed for an OS that was a year out of date. In some cases though you might need to go back into system recovery to repair the boot configuration; it usually does that pretty well.

Microsoft open sources heart of .NET: CoreCLR runtime now on GitHub

Microsoft’s CoreCLR is now available on GitHub. We knew this was coming, but it is still a significant step, since this piece is the very heart of .NET: the execution engine that consumes a .NET IL (Intermediate Language) executable and compiles it to machine code for execution. The IL can easily be decompiled back to C#; it is in a sense fairly close to what you wrote in the editor. The CLR piece compiles it to a native executable, and also handles garbage collection (automatic memory management) and interop with other native code libraries. The just-in-time compiler in CoreCLR is called RyuJIT.
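
To make that concrete, a trivial C# method such as:

static int Add(int a, int b) { return a + b; }

compiles to IL along roughly these lines, which is why decompiling back to readable C# is so easy:

.method private hidebysig static int32 Add(int32 a, int32 b) cil managed
{
    .maxstack 2
    ldarg.0   // push a
    ldarg.1   // push b
    add       // add the two values on the evaluation stack
    ret       // return the result
}

It is the CLR's job, via RyuJIT in CoreCLR's case, to turn that into actual machine instructions at run time.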

CoreCLR is not the same as the .NET Framework CLR (as found in the Windows desktop today), though one thing we now learn is that it is a true subset:

CoreCLR is a subset of the .NET Framework CLR. They share the same codebase and are updated together. For example, an update to the .NET GC improves both CoreCLR and the .NET Framework CLR.

We setup a live 2-way mirror between the coreclr repo on GitHub and the .NET Framework TFS server within Microsoft. The latency of the mirror is low, measurable in minutes.

Contributions made to the coreclr repo are integrated to the Microsoft TFS server automatically and will become part of both the .NET Framework and .NET Core products. The same is true in reverse, that .NET Framework CLR changes (within the CoreCLR subset) are mirrored to the CoreCLR repo. These changes will sometimes result in large commits to unrelated components.

This is good news since it reduces the risk of fragmentation between the .NET Framework and the CoreCLR. Note that the same does not apply to the framework libraries, which are forked between .NET Framework and CoreFX. The reason for the fork is to enable cross-platform .NET and to benefit from greater modularity in the Framework without breaking the existing .NET Framework.

Some other points of interest:

  • CoreCLR will run on Linux and Mac, but not yet; this is work in progress
  • CoreCLR powers Windows Phone apps as well as ASP.NET 5
  • CoreCLR uses the CMake build system rather than MSBuild, because it runs cross-platform

There is a key architectural difference between CoreCLR and the .NET Framework, which is that in CoreCLR each application is deployed with the runtime and libraries it requires, whereas in the .NET Framework applications depend on a system-managed runtime and shared libraries. This has the advantage that applications are standalone, and you could run one from say a portable USB drive on a system which did not have .NET or Mono installed.
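
Concretely, that means a deployed application folder carries the runtime along with it; a sketch (file names invented, and not an exact manifest) would be:

MyApp\
    MyApp.dll                      (your application code)
    coreclr.dll                    (the execution engine itself)
    mscorlib.dll, System.*.dll     (just the framework libraries the app uses)

Copy the folder and you have copied the app, runtime and all.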

The disadvantage, aside from greater use of disk space, is that patching the same libraries across multiple applications is hard. In the interview here Microsoft offers a clue about how it might come up with a solution for this. Jan Kotas on the CLR team talks about an ideal scenario where identical copies of the same DLL are in fact shared even though each application appears to have its own copy. This sounds similar to the mechanism used by de-duplication in Windows Server. The file system makes it look as if several copies of a file exist in different directories, but in fact there is only one. If you update a file though, the right thing happens and only the virtual copy that you overwrite is changed. It sounds as if Kotas has in mind a variant where you could say, “update this file and all its instances elsewhere.” This would of course somewhat undermine the concept of app-isolated dependencies; but you know what they say about cakes and eating them:

“The ideal we should get to is every application has a local copy of everything. People eventually get to a point where through some OS mechanisms or through some other means the DLLs that are the same between different applications would get shared. That way nobody needs to worry about is this shared, or is it not shared. The ideal place that we’d like to get to is that sharing happens under the hood. It can happen through different mechanisms for different applications. [That would be the] ideal place for the runtime and how to version it.”

said Kotas. Possibly I am misinterpreting this; but it does sound like some kind of sharing-but-not-sharing solution to the patching problem.

Another point to note: a managed code application cannot execute without help. In order to run, every managed application needs three things:

1. The application code

2. The CLR – either CoreCLR or the .NET Framework CLR

3. A CLR host which loads the CLR and instructs it to execute the application. The CLR host has to be native code, for obvious reasons.

In the .NET Framework this third piece is invisible, since it is handled by the operating system (though apparently SQL Server is a special case). In the CoreCLR world though, you need to think about the CLR host. ASP.NET 5.0 has the KRuntime (K probably stands for Katana), which I think is the same as Project K. If you want to test CoreCLR today, you can use a host called CoreConsole which (as its name implies) lets you run console apps. Apparently there are a few technical problems using CoreCLR with ASP.NET 5 at the moment.

image