Do you need the new Raspberry Pi B+?

An updated Raspberry Pi board was released earlier this month, and the kind folk at Element14 sent me one to review.

image

The Raspberry Pi is a complete low-power computer which needs only a case, an SD card, and a standard USB power source to start doing real work. It is ideal for learning projects, home automation, practical applications like running a media server or client, or anything you can think of.

It is a little over two years since the first Pi was shipped in April 2012. The model progression is a little confusing: the first model was the B, followed in early 2013 by the A, a cut-down model with a single USB port and no Ethernet.

image

The new model has the same Broadcom BCM2835 SoC as all the other Pi models. The CPU is a 700 MHz ARM1176JZF-S.

So what is new? The highlights:

  • 4 USB 2.0 ports
  • The dedicated composite video port has been removed and is now shared with the audio jack, requiring an adaptor
  • The power draw is now 600 mA up to 1.8 A at 5 V, making it both lower power and, when necessary, higher power than the model B (750 mA up to 1.2 A at 5 V) – the arithmetic below this list spells it out. The USB ports can supply a little more power, making most bus-powered external hard drives usable, for example.
  • The SD card slot has been replaced by a micro SD card slot, a good move (all my SD cards are in fact micro SD cards with adaptors, which is common).
  • The GPIO (General Purpose Input Output) connector now has 40 pins rather than 26. The first 26 pins are the same as before, for compatibility.
  • The price is the same as for the B
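To put the power figures in watts (simple arithmetic from the numbers above): the B+ ranges from 0.6 A × 5 V = 3 W up to 1.8 A × 5 V = 9 W, against 0.75 A × 5 V = 3.75 W up to 1.2 A × 5 V = 6 W for the B – a lower floor for typical use, and more headroom when peripherals demand it.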

There are a few other changes which I noticed. One is that the LEDs have been moved. On the B, there are five LEDs grouped in the bottom right corner of the board: ACT (SD card access), PWR (power), FDX (LAN duplex), LNK (LAN activity) and 100 (100Mbit LAN connected). The B+ has two LEDs in the opposite corner, ACT and PWR, and two more LEDs on the LAN port itself. Personally I prefer the old arrangement.

The audio output is improved, according to Pi inventor Eben Upton, thanks to a “dedicated low-noise power supply.” Raspberry Pi Engineer jdb adds that “The output impedance and buffering for the audio port has been improved and the maximum output amplitude has been increased (~1.25V pk-pk).” However one blogger measured the output and considered it no better (or slightly worse).

Since the layout of the board has changed, a B+ Pi will not fit in your old model B case. I bought a new case but I don’t recommend this one:

image

This is a push-fit case and even though the board is held down by tabs, it moves and rattles slightly. I also worry about the case tabs breaking if you open it repeatedly. The tab that you need to press to open the case is sited by the micro SD slot, which is another mistake, since it presses against the board, making the case hard to reopen after the Pi is fitted. There is also too much space below the card slot, so if you are careless you can easily post your card into the case rather than into the slot. Finally, I don’t like the way the top of the case slopes down, reducing the space above the GPIO at its shallowest point.

I wish I had seen this Cyntech case which looks miles better, for a similar low price, though I haven’t actually tried it. I do like the idea of an optional spacer which lets you increase the case height to fit add-on boards.

Finally, a few notes on operation. If you have existing micro SD cards running on the B, they might or might not work on the B+. I use piCorePlayer as a streaming audio client, for which it is excellent, but my existing image would not boot on the B+.  Following a tip elsewhere, I installed the latest piCorePlayer download on the B, updated it to version 1.16A using the web UI, and it then worked on the B+.

image

I had no such problems with the standard Raspbian distro which worked fine on the B+.

image

So do you need the B+? If you have not yet tried a Pi, give it a try: it is fabulous. If you already have a B, you will find some nice improvements but nothing dramatic – though the extra USB ports in particular are most welcome.

More information is on the Element14 site or of course the official site.

Richard Thompson solo acoustic at the Warwick Folk Festival, 24 July 2014

It was a last minute decision. “Hey, Richard Thompson is on at the Warwick Folk Festival tomorrow. I wonder if there are any tickets left?” There were; and we were fortunate to end up about 6 rows from the stage in the large marquee which houses the main stage, on a balmy English summer evening – rather to RT’s surprise, it seems. “I’ve got a reputation for bringing disaster to festivals”, he told us, “in the form of rain and mud”. It was not to be; and the event yesterday was, as summer music festivals go, distinctly genteel, complete with chandelier in the wine tent.

image

The audience too was exceedingly well behaved; if anything a bit too subdued but nevertheless enjoying every minute of what turned out to be an outstanding concert.

image

A few quick reflections. RT was on excellent form; of course the guitar work is the big attraction but he also has a powerful voice which he uses to great effect in his various tales of woe.

It was great to see him play from a relatively close position, but frankly I have no idea how he gets the sound he does, and the variety of sounds; you see him moving his fingers and it looks like nothing extraordinary but the music he produces really is.

In Vincent Black Lightning, for example, you hear a tune, a bass accompaniment, little frills and decorations and runs, and it sounds like three guitars; and even while doing that he delivers an intense vocal performance, getting just the right throaty growl on the “he did Riiiiiide” refrain.

I was also struck by how much feeling he puts into songs that he has performed countless times. Of course that is what performers do; but we have all experienced events where old favourites are delivered in a throwaway manner; I never got that feeling yesterday.

There was plenty of between-song banter and RT took numerous requests, even complaining, “please shout your request in before I start the next song”, when someone succeeded in changing his mind about what to play.

It was great to hear some older songs, including Bright Lights and Genesis Hall. “I used to be in a band,” said RT, telling us that he left due to “musical differences,” and observing that Fairport Convention had got on fine without him.

I enjoyed every song; but one or two stood out for me. Johnny’s Far Away is a rollicking “sea shanty” in which the crowd joins in the chorus. Its subject is infidelity and RT says “it’s about what musicians get up to on the road.” Johnny is in a band which plays on a cruise where he has a fling with a “wealthy widow” or two; at home his wife Tracey “laying in the booze” consoles herself with “another man, a smoothie”. It is all very seamy, complete with Johnny returning home with “sores and all” as Tracey’s lover sneaks out the back, but is there a trace of sympathy as Tracey declares, “I can’t express myself with my old man,” and Johnny, “I can’t express myself with my old lady?” Or were they just trying to justify themselves to their temporary partners? It seems RT is reflecting on what he has observed in a lifetime on the road, but with some ambiguity, accentuated by the contrast between the sordid subject and the good-timey singalong chorus.

Humour, reflection, sharp observation, tinged with sadness, delivered with virtuoso performance: it makes for an intense experience and the evening flew by.

Beeswing was fantastic, wistful and beautiful.

The most sombre moment came towards the end when RT performed three “songs”, if that is the right word, with words taken from First World War diaries. The opening words: “I’ve never seen a dead body before I went to war and the trenches.”

These words are chanted more than sung, with sparse accompaniment, and the performance was potent but bleak. Nothing new for RT you might think; except that these are unsugared by the humour or melody which lightens other songs.

This is preparatory work for a performance at a centenary commemoration of the 1914-18 war which is set for sometime in 2016, we were told.

The purpose of such events is that we do not forget the horrors of the past nor, we vainly hope, repeat them; and I applaud RT for including this in the concert.

After that we moved to Wall of Death, whose title made a natural link with what had gone before, but which returned us to RT’s normal territory, juxtaposing merriment (a funfair) with gloom (death); but of course it is only a fun ride isn’t it?

A short encore of Keep Your Distance and that was it. Thank you RT for another fine concert.

The set list:

Bathsheba
Saving the Good Stuff for You
Valerie
The Ghost of You Walks
Johnny’s Far Away
The Story of Hamlet
Vincent Black Lightning
Dry My Tears
I Want to See the Bright Lights Tonight
Genesis Hall
Fergus Lang
Persuasion
Beeswing
I Misunderstood
Feel So Good
Read About Love
The Trenches
Wall of Death
Keep Your Distance (encore)

The UK government is adopting Open Document: some observations

The UK government is adopting the Open Document Format for Office Applications, for documents that are editable (read-only documents will be PDF or HTML). You can read Mike Bracken’s (Government Digital Service) blog on the subject here, and the details of the new requirements here. If you want to see the actual standards, they are on the OASIS site here.

I followed the XML document standards wars in some detail back in 2006-2008. The origins of ODF go back to Sun Microsystems (a staunch opponent of Microsoft) which acquired an Office suite called Star Office, made it open source, and supported OpenOffice.org. My impression was that Sun’s intentions were in part to disrupt the market for Microsoft Office, and in part to promote a useful open standard out of conviction. OpenOffice eventually found its way to the Apache Foundation after Oracle’s acquisition of Sun. You can find it here.

During this time, Microsoft responded by shifting Office to XML formats by default – these are the formats we know as .docx, .xlsx and so on. It also made the formats an open standard via ECMA and ISO, to the indignation of ODF advocates, who found every possible fault in the standards and the process. There were and are faults; but it has always seemed to me that an open XML standard for Microsoft Office documents was a real step forward from the wholly proprietary (but reverse-engineered) binary formats.

The standards wars are to some extent a proxy for the effort to shift Microsoft from its dominance of business document authoring. Microsoft charges a lot for Office, particularly for businesses, and arguably this is an unnecessary burden. On the other hand, it is a good product which I personally prefer to the alternatives on Windows (on the Mac I am not so sure), and considering the amount of use Office gets during the working day even a small improvement in productivity is worth paying for.

As a further precaution, Microsoft added ODF support into its own Office suite. This was poor at first, though it has no doubt improved since 2007. However I would not advise anyone to set Microsoft Office to use ODF by default, unless mandated by some requirement such as government regulation. It is not the native format and I would expect a greater likelihood that something could go slightly wrong in formatting or metadata.

Bracken does not mention Microsoft Office in his blog; but as ever, the interesting part of this decision is how it will impact Office users in government, or working with government. If it is a matter of switching defaults in Office, that is no big deal, but if it means replacing Microsoft Office with Open Office or its fork, Libre Office, that will have more impact.

The problem with abandoning Microsoft Office is not only that the alternatives may fall short, but also that the ecosystem around Microsoft Office and its document formats is richer – in other words, tools that consume or generate Office documents, add-ins for Office, and so on.

This also means that Microsoft Office documents are, in my experience, more interoperable (not less) than ODF documents.

That does not in itself make the UK government’s decision a bad one, because in making the decision it is helping to promote an alternative ecosystem. On the other hand, it does mean that the decision could be costly in constraining the choice of tools while the ODF ecosystem catches up (if it does).

How does the move towards cloud services like Office 365 and Google Docs impact on all this? Microsoft says it supports ODF in SharePoint; but for sure it is better to use Microsoft’s own formats there. For example, check the specifications for Office Online. You can edit docx in the browser, but not odt (Open Document Text); it is the same story with spreadsheets and presentations.

Google has recently added native support for the Microsoft formats to Google Docs.

Amazon’s Zocalo service, which I have just reviewed for the Register, can preview Microsoft’s formats in the browser; it also supports odt for preview, but not ods (Open Document Spreadsheet).

A good decision then by the UK government? Your answer may be partly ideological, but as a UK taxpayer, my feelings are mixed.

For more information on this and other government IT matters, I recommend Bryan Glick’s pieces over on Computer Weekly, like this one.

RemObjects previews native Apple Mac IDE for C#, .NET, Oxygene

RemObjects is previewing a new native Mac IDE for its Oxygene and C# compilers. Oxygene is a Delphi-like language (in other words, a variant of Object Pascal) which targets iOS, Mac, Android, Windows Phone and Windows. RemObjects C# shares the same targets. Both can compile to .NET assemblies for Windows, or to Mono for cross-platform .NET, or to a Mac or iOS executable (using the LLVM compiler), or to Java bytecode for the Android Dalvik runtime. You can get both Oxygene and RemObjects C# bundled in a product called Elements.

In the past, RemObjects has used Visual Studio as its IDE. While this is a natural choice for Windows users, much development today is done on the Mac. Requiring Mac users to develop in a Windows Virtual Machine adds friction, so RemObjects is now working on a native IDE for the Mac codenamed Fire.

image

I gave Fire the briefest of looks. Here are some of the options for a new .NET application:

image

Note the appearance of ASP.NET MVC 4, and even Silverlight.

Here are the options for a new Cocoa application:

image

If you are developing for Cocoa, you can edit the resource file in Apple’s Xcode and use it in your application. I started a new C# Cocoa app, made a few changes and then ran it from the IDE:

image

I imagine Microsoft will be keeping an eye on tools like this – if it is not, it should – since they fit with the strategy of supporting Microsoft services on multiple devices. Visual Studio is a fine tool but if Microsoft is serious about cross-platform, it needs strong Mac-native development tools. Xamarin came up with Xamarin Studio, which is cross-platform for Windows and Mac, but the RemObjects approach also looks worth investigating.

PS The first release of RemObjects C# lacked full generic support, for which failing Xamarin and Mono founder Miguel de Icaza took RemObjects to task on Twitter. I was amused to see this in the changelog for April 2014:

image

65764 Full support for Generics on Cocoa, as requested by Miguel

For more details on Fire, see here.

Microsoft Financials show cloud growth, Nokia loss

Microsoft has announced its financial results for the quarter ending June 30th 2014. How is it doing?

Quarterly revenue is up to $23.38 billion from $20.49 billion year on year, though $1.98 billion of that is phone hardware – Nokia, in other words. Operating income is up to $6.48 billion from $6.07 billion. Net income is down to $4.61 billion from $4.96 billion because of tax adjustments.

I am more interested in the segment breakdown, though Microsoft’s segments are not particularly clear:

Quarter ending June 30th 2014 vs quarter ending June 30th 2013, $millions

Segment Revenue Change Gross margin Change
Devices and Consumer Licensing 4694 +406 4407 +526
Computing and Gaming Hardware 1441 +274 18 +665
Phone Hardware 1985 N/A 54 N/A
Devices and Consumer Other 1880 +317 446 +78
Commercial Licensing 11222 +595 10296 +345
Commercial Other 2262 +688 691 +355

Revenue is actually up year on year in all segments. Windows has benefited from the end of XP support driving upgrades. The products Microsoft wants to talk about – Azure, SQL Server and System Center – are all growing revenue. “Commercial cloud revenue”, in other words Office 365, CRM Online and Azure, grew 147% and is now a $4.4 billion business at the current run rate.

The bad news is that Nokia contributed a $692 million loss (a reduction in operating income). Microsoft says it sold 5.8 million Lumia (Windows) phones and 30.3 million non-Lumia phones, with the majority of Lumia sales being low-cost devices.

Bing search grew revenue by 40% and US search share is up to 19.2% according to Microsoft.

Microsoft CEO Satya Nadella promises “One Windows” in place of three, but should that be two?

Microsoft released its latest financial results yesterday, on which I will post separately. However, this remark from the earnings call transcript (Q&A with financial analysts) caught my eye:

In the year ahead, we are investing in ways that will ensure our Device OS and first party hardware align to our core. We will streamline the next version of Windows from three Operating Systems into one, single converged Operating System for screens of all sizes. We will unify our stores, commerce and developer platforms to drive a more coherent user experiences and a broader developer opportunity. We look forward to sharing more about our next major wave of Windows enhancements in the coming months.

What are the three versions of Windows today? I guess, Windows x86, Windows RT (Windows on ARM), and Windows Phone. On the other hand, there is little difference between Windows x86 and Windows RT other than that Windows RT runs on ARM and is locked down so that you cannot install desktop apps. The latter is a configuration decision, which does not make it a different operating system; and if you count running on ARM as being a different OS, then Windows Phone will always be a different OS unless Microsoft makes the unlikely decision to standardise on x86 on the phone (a longstanding relationship with Qualcomm makes this a stretch).

Might Nadella have meant PC Windows, Windows Phone and Xbox? It is possible, but the vibes from yesterday are that Xbox will be refocused on gaming, making it more distinct from PC and phone:

We made the decision to manage Xbox to maximize enterprise value with a focus on gaming. Gaming is the largest digital life category in a mobile first, cloud first world. It’s also the place where our past success, revered brand and passionate fan base present us a special opportunity.

With our decision to specifically focus on gaming we expect to close Xbox Entertainment Studios and streamline our investments in Music and Video. We will invest in our core console gaming and Xbox Live with a view towards the broader PC and mobile opportunity.

said Nadella.

As a further aside, what does it mean to “manage Xbox to maximize enterprise value”? It is not a misprint, but perhaps Nadella meant to say entertainment? Or perhaps the enterprise he has in mind is Microsoft?

Never mind; the real issue is the development platform, and making it easier to build applications for PC, phone and tablets without rewriting all your code. That is the promise of the Universal App announced earlier this year at the Build conference.

That sounds good; but remember that Windows 8.x is two operating systems in one. There is the desktop side which is what most of us use most of the time, and the tablet side (“Metro”) which is struggling. Universal Apps run on the tablet side. The desktop side has different frameworks and different capabilities, making it in effect a separate platform for developers.

“One Windows” then is not coming soon. But we might be settling on two.

Farewell Nokia X? Not quite, but the signs are clear as Microsoft bets on Universal Apps

I could never make sense of Nokia X, the Android-with-Microsoft-services device which Nokia announced less than a year ago at Mobile World Congress in Barcelona:

If Nokia X is a worse Android than Android, and a worse Windows Phone than Windows Phone, what is the point of it and why will anyone buy?

Nokia X is Android without Google’s Play Store; if Amazon struggles to persuade developers to port apps to Kindle Fire (another non-Google Android) then the task for Nokia, lacking Amazon’s ecosystem, is even harder. Now, following Microsoft’s acquisition, it makes even less sense: how can Microsoft simultaneously evangelise both Windows Phone and an Android fork with its own incompatible platform and store?

Nokia X was meant to be a smartphone at feature phone prices, or something like that, but since Windows Phone runs well on low-end hardware, that argument does not stand up either.

Now Nokia X is all but dead. Microsoft CEO Satya Nadella:

image

Second, we are working to integrate the Nokia Devices and Services teams into Microsoft. We will realize the synergies to which we committed when we announced the acquisition last September. The first-party phone portfolio will align to Microsoft’s strategic direction. To win in the higher price tiers, we will focus on breakthrough innovation that expresses and enlivens Microsoft’s digital work and digital life experiences. In addition, we plan to shift select Nokia X product designs to become Lumia products running Windows. This builds on our success in the affordable smartphone space and aligns with our focus on Windows Universal Apps.

and former Nokia CEO Stephen Elop, now in charge of Microsoft devices:

In addition to the portfolio already planned, we plan to deliver additional lower-cost Lumia devices by shifting select future Nokia X designs and products to Windows Phone devices. We expect to make this shift immediately while continuing to sell and support existing Nokia X products.

Nadella has also announced a huge round of job cuts, mainly of former Nokia employees: around 12,500, which is roughly 50% of those who came over. Nokia’s mobile phone business is not all Windows Phone (Lumia) and Nokia X. In addition, it sells really low-end phones, the kind you can pick up for £10 at a supermarket, and the Asha range of budget smartphones. Does Microsoft have any interest in Asha? Elop does not even mention it.

It seems then that Microsoft is focusing on what it considers strategic: Windows Phone at every price point, and Universal Apps which let developers create apps for both Windows Phone and full Windows (8 and higher) from a single code base.

Microsoft does also intend to support Android and iOS with apps, but has no need to make its own Android phones in order to do so.

My view is that Nokia did a good job with Windows Phone within the constraints of a difficult market; not perfect (the early Lumia 800 devices were buggy, for example), but better by far than Microsoft managed with any other OEM partner. I currently use a Lumia 1020 which I regard as something of a classic, with its excellent camera and general high quality.

It seems to me reassuring (from a Windows Phone perspective) that Microsoft is keeping Windows Phone engineering in Finland:

Our phone engineering efforts are expected to be concentrated in Salo, Finland (for future, high-end Lumia products) and Tampere, Finland (for more affordable devices). We plan to develop the supporting technologies in both locations.

says Elop, who also notes that Surface and Xbox teams will be little touched by today’s announcements.

Incidentally, I wrote recently about Universal Apps here (free registration required) and expressed the view that Microsoft cannot afford yet another abrupt shift in its developer platform; the continuing support for Universal Apps in the Nadella era makes that less likely.

Speculating a little, it also would not surprise me if Universal Apps were extended via Xamarin support to include Android and iOS – now that is really a universal app.

Will Microsoft add some kind of Android support to Windows Phone itself? This is rumoured, though it could be counter-productive in terms of winning over developers: why bother to create a Windows Phone app if your Android app will kind-of run?

Further clarification of Microsoft’s strategy is promised in the public earnings call on July 22nd.

A note on Azure storage and downloading large files

I have written a simple ASP.NET MVC application for upload and download of files to/from Azure storage.

Getting large file upload to work was the first exercise, described here. That is working well; but what about download?

If your files in Azure storage are public, you can simply serve a URL to the file. If they are not public though, you have a couple of choices:

1. Download the file under application control, by writing to Response.OutputStream or using a FileResult action.

2. Issue a Shared Access Signature (SAS) to the client which enables it to retrieve the file directly from Azure storage. The SAS is sent as a URL argument which tells Azure storage that the request is authorised. The browser downloads the file directly, so it makes no difference to your web application if the file is large.

Note that if you use the first option, it will not work with large files if you simply call DownloadToStream or similar:

container.GetBlockBlobReference(FileName).DownloadToStream(Response.OutputStream);

Why not? Well, the way this code works is that it downloads the large file to the web server, then sends it to the browser. What if your large file is 5GB? The browser will wait a long time for the first byte to be served (giving the user an unresponsive page); but before that happens, the web application will probably throw an exception because it does not like downloading such a large file.

This means the SAS option is a good one, though note that you have to specify an expiry time, which could cause problems for users on a slow connection.
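For illustration, here is a minimal sketch of issuing a SAS with the .NET storage client library (the Microsoft.WindowsAzure.Storage namespace); the one-hour expiry and the method name are my own choices, not anything prescribed:

using System;
using Microsoft.WindowsAzure.Storage.Blob;

public static string GetDownloadUrl(CloudBlobContainer container, string fileName)
{
    // Reference the private blob (no network call happens here)
    CloudBlockBlob blob = container.GetBlockBlobReference(fileName);

    // Create a read-only SAS token valid for one hour
    string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
    });

    // Append the token to the blob URI and hand the result to the browser
    return blob.Uri + sas;
}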

Another option is to serve the file in chunks. Use CloudBlockBlob.DownloadRangeToStream to write to Response.OutputStream in a loop until the download is complete. Call Response.Flush() after each chunk to send the chunk to the browser immediately.
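Something like this, inside the MVC action – a sketch only; the 4MB chunk size is an arbitrary choice, and the container and FileName variables are assumed from the earlier code:

// Serve a private blob to the browser in 4MB chunks
const long chunkSize = 4 * 1024 * 1024;

CloudBlockBlob blob = container.GetBlockBlobReference(FileName);
blob.FetchAttributes(); // populates blob.Properties.Length

Response.BufferOutput = false;
Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=" + FileName);
Response.AddHeader("Content-Length", blob.Properties.Length.ToString());

long offset = 0;
while (offset < blob.Properties.Length)
{
    long count = Math.Min(chunkSize, blob.Properties.Length - offset);
    blob.DownloadRangeToStream(Response.OutputStream, offset, count);
    Response.Flush(); // send the chunk to the browser immediately
    offset += count;
}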

This gives the user a nice responsive download experience complete with a cancel option as provided by the browser, and does not crash the application on the server. It seems to me a reasonable approach if the web application is also hosted on Azure and therefore has a fast connection to Azure storage.

What about resuming a failed download? The SAS approach should work, as Azure supports it. You could also support this in your app with some additional work, since resuming means reading the Range header in a GET request. I have not tried doing this but you might find some clues here.
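If you do try it, the outline might look like this untested sketch, which handles only the simple “bytes=1000-” form of the header (real Range handling has more cases, and Content-Length should then be the remaining bytes rather than the full size):

// Untested: resume support for the simple "bytes=start-" case only
long start = 0;
string range = Request.Headers["Range"];
if (!string.IsNullOrEmpty(range) && range.StartsWith("bytes="))
{
    start = long.Parse(range.Substring(6).Split('-')[0]);
    Response.StatusCode = 206; // Partial Content
    Response.AddHeader("Content-Range", string.Format("bytes {0}-{1}/{2}",
        start, blob.Properties.Length - 1, blob.Properties.Length));
}
// then run the chunked loop above with offset starting at "start"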

Microsoft StorSimple brings hybrid cloud storage to the enterprise, but what about the rest of us?

Microsoft has released details of its StorSimple 8000 Series, the first major new release since it acquired the hybrid cloud storage appliance business back in late 2012.

I first came across StorSimple at what proved to be the last MMS (Microsoft Management Summit) event last year. The concept is brilliant: present the network with infinitely expandable storage (in reality limited to 100TB – 500TB depending on model), storing new and hot data locally for fast performance, and seamlessly migrating cold (i.e. rarely used) data to cloud storage. The appliance includes SSD as well as hard drive storage so you get a magical combination of low latency and huge capacity. Storage is presented using iSCSI. Data deduplication and compression increase effective capacity, and cloud connectivity also enables value-add services including cloud snapshots and disaster recovery.

image

The two new models are the 8100 and the 8600:

  8100 8600
Usable local capacity 15TB 40TB
Usable SSD capacity 800GB 2TB
Effective local capacity 15-75TB 40-200TB
Maximum capacity including cloud storage 200TB 500TB
Price $100,000 $170,000

Of course there is more to the new models than bumped-up specs. The earlier StorSimple models supported both Amazon S3 (Simple Storage Service) and Microsoft Azure; the new models support only Azure blob storage. VMware VAAI (vStorage APIs for Array Integration) is still supported.

On the positive side, StorSimple is now backed by additional Azure services – note that these only work with the new 8000 series models, not with existing appliances.

The Azure StorSimple Manager lets you manage any number of StorSimple appliances from the Azure portal – note this is in the old Azure portal, not the new preview portal, which intrigues me.

image

Backup snapshots mean you can go back in time in the event of corrupted or mistakenly deleted data.

image

The Azure StorSimple Virtual Appliance has several roles. You can use it as a kind of reverse StorSimple; the virtual device is created in Azure at which point you can use it on-premise in the same way as other StorSimple-backed storage. Data is uploaded to Azure automatically. An advantage of this approach is if the on-premise StorSimple becomes unavailable, you can recreate the disk volume based on the same virtual device and point an application at it for near-instant recovery. Only a 5MB file needs to be downloaded to make all the data available; the actual data is then downloaded on demand. This is faster than other forms of recovery which rely on recovering all the data before applications can resume.

image

The alarming check box “I understand that Microsoft can access the data stored on my virtual device” was explained by Microsoft technical product manager Megan Liese as meaning simply that data is in Azure rather than on-premise but I have not seen similar warnings for other Azure data services, which is odd. Further to this topic, another journalist asked Marc Farley, also on the StorSimple team, whether you can mark data in standard StorSimple volumes not to be copied to Azure, for compliance or security reasons. “Not right now” was the answer, though it sounds as if this is under consideration. I am not sure how this would work within a volume, since it would break backup and data recovery, but it would make sense to be able to specify volumes that must remain always on-premise.

All data transfer between Azure and on-premise is encrypted, and the data is also encrypted at rest, using a service data encryption key which according to Farley is not stored or accessible by Microsoft.

image

Another way to use a virtual appliance is to make a clone of on-premise data available, for tasks such as analysing historical data. The clone volume is based on the backup snapshot you select, and is disconnected from the live volume on which it is based.

image

StorSimple uses Azure blob storage but the pricing structure is different from standard blob storage; unfortunately I do not have details of this. You can access the data only through StorSimple volumes, since it is stored using StorSimple-specific internal data objects. Data stored in Azure is made redundant using the usual Azure “three copies” principle; I believe this includes geo-redundancy, though this may be a customer option.

StorSimple appliances are made by Xyratex (which is being acquired by Seagate) and you can find specifications and price details on the Seagate StorSimple site, though we were also told that customers should contact their Microsoft account manager for details of complete packages. I also recommend the semi-official blog by a Microsoft technical solutions professional based in Sydney which has a ton of detailed information here.

StorSimple makes huge sense, but with six-figure pricing it is an enterprise-only solution. How would it be, I muse, if the StorSimple software were adapted to run as a Windows service rather than only in an appliance, so that you could create volumes in Windows Server that use similar techniques to offer local storage that expands seamlessly into Azure? That also makes sense to me, though when I asked about the possibility at a Microsoft Azure workshop I was rewarded with blank looks; but who knows, they may know more than is currently being revealed.

Amazon Mobile SDK adds login, data sync, analytics for iOS and Android apps

Amazon Web Services has announced an updated AWS Mobile SDK, which provides libraries for mobile apps using Amazon’s cloud services as a back end. Version 2.0 of the SDK, supporting iOS and Android (including Amazon Fire), is now in preview, adding several new features:

Amazon Cognito lets users log in with Amazon, Facebook or Google and then synchronize data across devices. The data is limited to 20MB, stored as up to 20 datasets of key/value pairs. All data is stored as strings, though binary data can be encoded as a base64 string of up to 1MB. The intent seems geared towards things like configuration or game state data, rather than documents.
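As a generic illustration of that constraint (this is plain .NET, not the Amazon SDK, and the file name is made up):

using System;
using System.IO;

// Binary data must be stored as a string: base64-encode it and respect the 1MB cap
byte[] payload = File.ReadAllBytes("gamestate.bin"); // hypothetical file
string encoded = Convert.ToBase64String(payload);
if (encoded.Length > 1024 * 1024)
    throw new InvalidOperationException("Too large for a single Cognito value");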

Amazon Mobile Analytics collects data on how users are engaging with your app. You can get data on metrics including daily and monthly active users, session count and average daily sessions per active user, revenue per active user, retention statistics, and custom events defined in your app.

Other services in the SDK, which were already supported in version 1.7, include push messaging for Apple, Google, Fire OS and Windows devices; Amazon S3 storage (suitable for any amount of data, unlike the Cognito sync service); SimpleDB and DynamoDB NoSQL database services; an email service; and SQS (Simple Queue Service) messaging.

Windows Phone developers, or those using cross-platform tools to build mobile apps, cannot use Amazon’s mobile SDK, though all the services are published as REST APIs, so you could use them from languages other than Objective-C or Java by writing your own wrapper.

The list of supported identity providers for Cognito is short though, with notable exclusions being Microsoft accounts and Azure Active Directory. Getting round this is harder since the federated identity services are baked into the server-side API.

image