Microsoft StorSimple brings hybrid cloud storage to the enterprise, but what about the rest of us?

Microsoft has released details of its StorSimple 8000 Series, the first major new release since it acquired the hybrid cloud storage appliance business back in late 2012.

I first came across StorSimple at what proved to be the last MMS (Microsoft Management Summit) event last year. The concept is brilliant: present the network with infinitely expandable storage (in reality limited to 100TB – 500TB depending on model), storing new and hot data locally for fast performance, and seamlessly migrating cold (ie rarely used) data to cloud storage. The appliance includes SSD as well as hard drive storage, so you get a magical combination of low latency and huge capacity. Storage is presented using iSCSI. Data deduplication and compression increase effective capacity, and cloud connectivity also enables value-add services including cloud snapshots and disaster recovery.
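To make the tiering idea concrete, here is a minimal Python sketch of the kind of policy an appliance like this might apply: recently touched blocks stay on SSD, warm blocks sit on local hard drives, and cold blocks migrate to cloud storage. The thresholds, names and structure are my own illustration, not StorSimple's actual algorithm.

```python
# Toy tiering policy: hot data on SSD, warm data on HDD, cold data in cloud blobs.
# Thresholds and names are invented for illustration only.
import time

SSD, HDD, CLOUD = "ssd", "hdd", "cloud"

def choose_tier(last_access, now):
    """Pick a tier based on how recently a block was read or written."""
    idle = now - last_access
    if idle < 60 * 60:             # touched in the last hour: keep on SSD
        return SSD
    if idle < 30 * 24 * 3600:      # touched in the last month: keep on local disk
        return HDD
    return CLOUD                   # cold: migrate to cloud storage

def retier(blocks):
    """Return the target tier for every block in {block_id: last_access_time}."""
    now = time.time()
    return {block_id: choose_tier(ts, now) for block_id, ts in blocks.items()}

if __name__ == "__main__":
    now = time.time()
    blocks = {"a": now - 120, "b": now - 7 * 24 * 3600, "c": now - 90 * 24 * 3600}
    print(retier(blocks))  # {'a': 'ssd', 'b': 'hdd', 'c': 'cloud'}
```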

The two new models are the 8100 and the 8600:

                                           8100        8600
Usable local capacity                      15TB        40TB
Usable SSD capacity                        800GB       2TB
Effective local capacity                   15-75TB     40-200TB
Maximum capacity including cloud storage   200TB       500TB
Price                                      $100,000    $170,000

Of course there is more to the new models than bumped-up specs. The earlier StorSimple models supported both Amazon S3 (Simple Storage Service) and Microsoft Azure; the new models support only Azure blob storage. VMware VAAI (vStorage APIs for Array Integration) is still supported.

On the positive side, StorSimple is now backed by additional Azure services – note that these only work with the new 8000 series models, not with existing appliances.

The Azure StorSimple Manager lets you manage any number of StorSimple appliances from the Azure portal – note this is in the old Azure portal, not the new preview portal, which intrigues me.

Backup snapshots mean you can go back in time in the event of corrupted or mistakenly deleted data.

The Azure StorSimple Virtual Appliance has several roles. You can use it as a kind of reverse StorSimple; the virtual device is created in Azure, at which point you can use it on-premise in the same way as other StorSimple-backed storage. Data is uploaded to Azure automatically. An advantage of this approach is that if the on-premise StorSimple becomes unavailable, you can recreate the disk volume based on the same virtual device and point an application at it for near-instant recovery. Only a 5MB file needs to be downloaded to make all the data available; the actual data is then downloaded on demand. This is faster than other forms of recovery, which rely on recovering all the data before applications can resume.
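The "tiny metadata file now, data later" recovery model is easy to illustrate. The Python sketch below restores only a map of block IDs up front and fetches block contents from the cloud the first time they are read; fetch_block_from_cloud and the block map format are invented for the example, not a real StorSimple or Azure API.

```python
# Metadata-first recovery: the small block map is restored immediately,
# block contents are pulled from the cloud tier only when actually read.
# fetch_block_from_cloud is a placeholder, not a real API.

def fetch_block_from_cloud(block_id):
    """Stand-in for downloading one block from the cloud tier."""
    return f"<data for {block_id}>".encode()

class LazyVolume:
    def __init__(self, block_map):
        self.block_map = block_map   # offset -> block ID (the small metadata file)
        self.cache = {}              # blocks already downloaded

    def read(self, offset):
        if offset not in self.cache:  # first access: hydrate from the cloud
            self.cache[offset] = fetch_block_from_cloud(self.block_map[offset])
        return self.cache[offset]

# The volume is usable as soon as the (tiny) block map is available;
# cold blocks are hydrated only on demand.
vol = LazyVolume({0: "blk-0001", 4096: "blk-0002"})
print(vol.read(0))
```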

The alarming check box “I understand that Microsoft can access the data stored on my virtual device” was explained by Microsoft technical product manager Megan Liese as meaning simply that data is in Azure rather than on-premise but I have not seen similar warnings for other Azure data services, which is odd. Further to this topic, another journalist asked Marc Farley, also on the StorSimple team, whether you can mark data in standard StorSimple volumes not to be copied to Azure, for compliance or security reasons. “Not right now” was the answer, though it sounds as if this is under consideration. I am not sure how this would work within a volume, since it would break backup and data recovery, but it would make sense to be able to specify volumes that must remain always on-premise.

All data transfer between Azure and on-premise is encrypted, and the data is also encrypted at rest, using a service data encryption key which, according to Farley, is not stored by or accessible to Microsoft.
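The principle here is that chunks are encrypted before they leave the appliance, so the cloud side only ever stores ciphertext. The sketch below shows that idea using the Python cryptography package's Fernet as a stand-in cipher; it is not StorSimple's actual encryption scheme or key format.

```python
# Illustration of encrypt-before-upload with a key the customer holds.
# Uses the third-party "cryptography" package (pip install cryptography)
# purely as a stand-in cipher.
from cryptography.fernet import Fernet

service_data_encryption_key = Fernet.generate_key()  # kept on-premise, never uploaded
cipher = Fernet(service_data_encryption_key)

def encrypt_for_upload(plaintext):
    """Encrypt a data chunk before it leaves the appliance for cloud storage."""
    return cipher.encrypt(plaintext)

def decrypt_after_download(ciphertext):
    """Decrypt a chunk pulled back from the cloud tier."""
    return cipher.decrypt(ciphertext)

chunk = b"cold block destined for the cloud tier"
assert decrypt_after_download(encrypt_for_upload(chunk)) == chunk
```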

Another way to use a virtual appliance is to make a clone of on-premise data available, for tasks such as analysing historical data. The clone volume is based on the backup snapshot you select, and is disconnected from the live volume on which it is based.
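Conceptually this is like building a new volume from a point-in-time copy of the block map, so that later writes to the live volume and writes to the clone never touch each other. A toy Python sketch, with names invented for illustration rather than taken from StorSimple:

```python
# Clone-from-snapshot: the clone starts from the snapshot's block map and is
# thereafter independent of the live volume. Purely illustrative.
import copy

class Volume:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})

    def snapshot(self):
        """A point-in-time copy of the block map."""
        return copy.deepcopy(self.blocks)

    @classmethod
    def clone_from_snapshot(cls, snap):
        """A new, disconnected volume built from the chosen snapshot."""
        return cls(copy.deepcopy(snap))

live = Volume({0: b"v1"})
snap = live.snapshot()        # e.g. last night's backup snapshot
live.blocks[0] = b"v2"        # the live volume keeps changing

historical = Volume.clone_from_snapshot(snap)
print(historical.blocks[0])   # b'v1' -- analysis sees the snapshot, not live data
```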

StorSimple uses Azure blob storage, but the pricing structure is different from that of standard blob storage; unfortunately I do not have details of this. You can access the data only through StorSimple volumes, since the data is stored using internal data objects that are StorSimple-specific. Data stored in Azure is redundant using the usual Azure “three copies” principle; I believe this includes geo-redundancy though this may be a customer option.

StorSimple appliances are made by Xyratex (which is being acquired by Seagate) and you can find specifications and price details on the Seagate StorSimple site, though we were also told that customers should contact their Microsoft account manager for details of complete packages. I also recommend the semi-official blog by a Microsoft technical solutions professional based in Sydney, which has a ton of detailed information here.

StorSimple makes huge sense, but with 6 figure pricing this is an enterprise-only solution. How would it be, I muse, if the StorSimple software were adapted to run as a Windows service rather than only in an appliance, so that you could create volumes in Windows Server that use similar techniques to offer local storage that expands seamlessly into Azure? That also makes sense to me, though when I asked at a Microsoft Azure workshop about the possibility I was rewarded with blank looks; but who knows, they may know more than is currently being revealed.

Microsoft takes aim at VMware, talks cloud and mobile device management at MMS 2013

I am attending the Microsoft Management Summit in Las Vegas (between 5,000 and 6,000 attendees, I was told), where Brad Anderson, corporate vice president of Windows Server & System Center, gave the opening keynote this morning.

There was not a lot of news as such, but a few things struck me as notable.

Virtualisation rival VMware was never mentioned by name, but was frequently referenced by Anderson as “the other guys”. Several case studies from companies that had switched from “the other guys” were mentioned, with improved density and lower costs claimed as you would expect. The most colourful story concerned Domino’s (pizza delivery), which apparently manages 15,000 servers across 5,000 stores using System Center and has switched to Hyper-V in 750 of them. The results:

  • 28% faster hard drive writes
  • 36% faster memory speeds
  • 99% reduction in virtualisation helpdesk calls

That last figure is astonishing but needs more context before you can take it seriously. Nevertheless, there is momentum behind Hyper-V. Microsoft says it is now optimising products like Exchange and SQL Server specifically for running on virtual machines (that is, Hyper-V) and it now looks like a safe choice, as well as being conveniently built into Windows Server 2012.

I also noticed how Microsoft is now letting drop some statistics about use of its cloud offerings, Azure and Office 365. The first few years of Azure were notable in that the company never talked about the numbers, which is reason to suppose that they were poor. Today we were told that Azure storage is doubling in capacity every six to nine months, that 420,000 domains are now managed in Azure Active Directory (also used by Office 365), and that Office 365 is now used in some measure by over 20% of enterprises worldwide. Nothing dramatic, but this is evidence of growth.

Back in October 2012 Microsoft acquired a company called StorSimple which specialises in integrating cloud and on-premise storage. There are backup and archiving services as you would expect, but the most innovative piece is called Cloud Integrated Storage (CiS), which lets you access storage that is partly on-premise and partly in the cloud via the standard iSCSI protocol. There was a short StorSimple demo this morning which showed how you could use CiS for a standard Windows disk volume. Despite the inherent latency of cloud storage, performance can be good thanks to data tiering, which puts the most active data on the fastest storage and the least active data in the cloud. From the white paper (find it here):

CiS systems use three different types of storage: performance-oriented flash SSDs, capacity-oriented SAS disk drives and cloud storage. Data is moved from one type of storage to another according to its relative activity level and customer-chosen policies. Data that becomes more active is moved to a faster type of storage and data that becomes less active is moved to a higher capacity type of storage.

CiS also uses compression and de-duplication for maximum efficiency.
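De-duplication plus compression is straightforward to sketch: key each chunk by a content hash so identical chunks are stored once, and compress what you do store. The Python below is purely illustrative; the chunk size, hashing and layout are my assumptions, not CiS internals.

```python
# Block-level de-duplication plus compression: chunks are keyed by a content
# hash so identical chunks are stored once, and each stored chunk is compressed.
import hashlib
import zlib

CHUNK_SIZE = 64 * 1024  # arbitrary for this example

class DedupStore:
    def __init__(self):
        self.chunks = {}  # sha256 digest -> compressed chunk

    def put(self, data):
        """Split data into chunks, store each unique chunk compressed, return the recipe."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:          # de-duplication: store once
                self.chunks[digest] = zlib.compress(chunk)
            recipe.append(digest)
        return recipe

    def get(self, recipe):
        """Rebuild the original data from its list of chunk digests."""
        return b"".join(zlib.decompress(self.chunks[d]) for d in recipe)

store = DedupStore()
payload = b"A" * 200_000 + b"B" * 10_000   # highly repetitive data
recipe = store.put(payload)
assert store.get(recipe) == payload
print(len(store.chunks), "unique chunks stored")  # far less than the raw size suggests
```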

This is a powerful concept and could be just the thing for admins coping with increased demands for storage. I can also foresee this technology becoming part of Windows server, integrated into Storage Spaces for example.

A third topic in the keynote was mobile device management. When Microsoft released Service Pack 1 of Configuration Manager (part of System Center) it added the ability to integrate with InTune for cloud management of mobile devices, provided the devices run iOS, Android, Windows RT, or Windows Phone 8. A later conversation with product manager Andrew Conway confirmed that InTune, rather than EAS (Exchange ActiveSync) policies, is Microsoft’s strategic direction for mobile device management, though EAS is still used for Android. “Modern devices should be managed from the cloud” was the line from the keynote. InTune includes policy management as well as a company portal where users can install corporate apps.

What if you have a BlackBerry 10 device? Back to EAS. A Windows Mobile 6.x device? System Center Configuration Manager can manage those. There is still some inconsistency then, but with iOS and Android covered InTune does support a large part of what is needed.