Category Archives: cloud computing

Los Angeles chooses Google over Exchange for email – who will follow?

When the city council of Los Angeles needed to replace its Novell email system, it looked at two main options. One was Microsoft Exchange, the other Google Apps; and Google won the deal.

There is one fascinating caveat. According to David Sarno at the LA Times:

The contract was approved pending an amendment that would require Google to compensate the city in the event that the Google system was breached and city data exposed or stolen. No such clause existed in the contract.

Compensation sounds like something more substantial than the fee refund offered by a typical SLA (Service Level Agreement) – and this is about security, not interruption of service.

I would be intrigued to know whether Microsoft pitched a traditional on-premises solution (most likely), or whether it sought to match Google like-for-like with a hosted Exchange offering.

It’s been a good month or so for Google Apps. I’ve heard of deals with Rentokil Initial (up to 35,000 users worldwide) and Jaguar Land Rover (15,000 worldwide). Deals like this put Google on the map for many more organisations.

Could 2010 be the year of the cloud?

Visual Studio 2010 and .NET Framework 4.0 – a simply huge release

I’ve been exercising the new beta 2 of Visual Studio 2010. It is hard to encapsulate in a few words, because this is a simply huge release. OK, so I did download the Ultimate version; but the changes at every level seem greater than in Visual Studio 2008. One of the reasons is that this is the first full update to the .NET Framework since version 2.0 in late 2005. Versions 3.0 and 3.5 extended 2.0 but did not replace it. Another factor is that Visual Studio 2010 has a new editor built with Windows Presentation Foundation, and has a different look and feel from its predecessor. In addition, there is a new language, Visual F#, though I don’t hear much buzz about it; I think elevating IronRuby or IronPython to this status would have attracted more interest – but they are dynamic languages, whereas Visual F# is a functional language.

When you are assessing Visual Studio you are in part assessing Microsoft’s platform, and as that platform has sprawled, so too has the tool. It is now so large that it is difficult to have in-depth knowledge of the entire thing. I also notice this when speaking to Microsoft folk about the product.

So what is new?

If you need to acclimatise, I suggest you start with What’s new in .NET Framework 4.0. This is a large topic in itself. Some of the things to look out for are in What’s new in the Base Class Library, including complex numbers, the Location API, IObservable&lt;T&gt; for observable collections, and other tweaks and enhancements.
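To give a feel for what IObservable&lt;T&gt; adds: it bakes the push-based observer pattern into the Base Class Library. Here is a rough sketch of that pattern – in Python rather than C#, with invented names, so this is the shape of the idea and not the .NET API:

```python
class Observable:
    """A loose analogue of the push-based IObservable<T>/IObserver<T> pair."""

    def __init__(self):
        self._observers = []

    def subscribe(self, on_next):
        # IObservable<T>.Subscribe returns a handle used to unsubscribe;
        # here a simple callable plays that role.
        self._observers.append(on_next)
        return lambda: self._observers.remove(on_next)

    def push(self, value):
        # Analogous to calling IObserver<T>.OnNext on each subscriber.
        for on_next in list(self._observers):
            on_next(value)


received = []
source = Observable()
unsubscribe = source.subscribe(received.append)
source.push(1)
source.push(2)
unsubscribe()
source.push(3)  # no longer observed
print(received)  # [1, 2]
```

The point of standardising this in the framework is that any producer and consumer can interoperate without agreeing on a bespoke event protocol first.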

Then there are things like in-process side-by-side execution – the ability to run two versions of the Framework at once in the same process, which is remarkable.

Parallel programming with PLINQ and the Task Parallel Library is another major topic.
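The gist of PLINQ is that a declarative query over a collection can be spread across cores with minimal code changes – in C#, essentially inserting AsParallel() into the query. As a loose analogue in Python rather than C# (everything here is illustrative, not the .NET API):

```python
from concurrent.futures import ThreadPoolExecutor

def expensive(n):
    # Stand-in for a per-item computation; in CPython, threads help
    # most when the work is IO-bound rather than CPU-bound.
    return n * n

numbers = range(10)

# Sequential version, analogous to a plain LINQ Select.
sequential = [expensive(n) for n in numbers]

# Parallel version, analogous to numbers.AsParallel().Select(expensive).
# map preserves input order, as PLINQ can be asked to do with AsOrdered().
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(expensive, numbers))

print(parallel == sequential)  # True: same results, work spread across threads
```

The attraction in both cases is the same: the parallel version reads almost identically to the sequential one, with the scheduling left to the runtime.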

COM interop is changing; you no longer need to deploy Primary Interop Assemblies, because the compiler can include only the types you need in your application.

Next, take a look at what’s new in specific frameworks, such as WPF version 4 and ASP.NET MVC 2.

After that, you might be ready to look at new stuff in specific languages: including the dynamic keyword in C#, implicit line continuation in VB, lambda expressions in VC++, the concurrency runtime, and the arrival of Visual F#.

With that sorted, check out the new tools in the Visual Studio IDE. I’m thinking of the new code editor, the updated WPF visual designer, the new visual designer for Silverlight, and the Tools for SharePoint development; and not forgetting the updated modelling and application lifecycle management tools.

But isn’t this the era of cloud computing? That’s another part of the problem; the Windows-oriented tools seem less important if you are immersed in the latest cloud news. That said, don’t forget Windows Azure, though I was disappointed to find that the Windows Azure Tools for Visual Studio are a separate download, and not done yet.

I’m impressed that Microsoft seems to be pulling all this together successfully; it is a significant integration task. And as ever I’d be interested in what developers think – was the new code editor really necessary? Is Microsoft addressing the right areas? Has Microsoft done enough to support new Windows 7 features? And is performance OK in this version (it was a problem in beta 1)?

Microsoft quarterly results: server and tools shine, overall decline

Microsoft has reported its results for the quarter ending 30th Sept 2009. I’ve got into the habit of making a small table to help make sense of the figures:

Quarter ending Sept 30th 2009 vs quarter ending Sept 30th 2008, $millions

Segment | Revenue | % change | Profit | % change
Client (Windows) | 2620 | -38.8 | 1463 | -52.17
Server and Tools | 3434 | 0.50 | 1283 | 22.89
Online | 490 | 5.80 | -480 | -49.53
Business (Office) | 4404 | -11.1 | 2863 | -11.37
Entertainment and devices | 1891 | -0.11 | 312 | 96.22

A quick glance tells you that Windows suffered a sharp decline, though Microsoft says this is because it has deferred $1.47 billion of Windows 7 upgrade revenue, and that adding this back would reduce the decline to 4% year on year.
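Microsoft’s deferral claim can be sanity-checked against the table above. Working back from the reported revenue and the year-on-year change, and adding the $1.47 billion back in (a rough calculation on my part, not Microsoft’s):

```python
reported = 2620      # client (Windows) revenue, $ millions, from the table
pct_change = -38.8   # year-on-year change, from the table
deferred = 1470      # deferred Windows 7 upgrade revenue, $ millions

# Prior-year revenue implied by the reported figure and percentage change
prior_year = reported / (1 + pct_change / 100)

# Revenue with the deferral added back, and the resulting decline
adjusted = reported + deferred
decline = (prior_year - adjusted) / prior_year * 100

print(round(prior_year))     # ~4281
print(round(decline, 1))     # ~4.5 – close to Microsoft's "4%" figure
```

So the claim holds up, give or take rounding in the percentages quoted.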

Note that even with the deferral, Windows is highly profitable.

The star here is Server and Tools, growing in the downturn and delivering strongly increased profits. I doubt that tools account for much of this; I’m guessing it reflects the positive reception for Server 2008.

Online is as dismal as ever. Clearly the Live properties are still not performing. Presuming Azure is in this category, it’s possible that it will start to turn things round; I guess that is more likely than an improvement in the fortunes of existing products such as Bing.

Office strikes me as pretty good, bearing in mind the weak economy and the fact that Microsoft is now talking about Office 2010. Entertainment and devices is ticking along, but nothing special.

I’m guessing Windows 7 will deliver Microsoft a great next quarter no matter what; but when, if ever, will it be profitable online?

Disclaimer: I am not a financial analyst, and hold no shares in companies about which I write. Please do not misconstrue this as investment advice; I know nothing about the subject.


Web advertising goes outside: digital signage using force.com and Media RSS

In the last 10 years or so, video advertising screens have replaced static posters in busy public places like the London Underground. This is known in the trade as digital signage or Digital Out of Home (DOOH) advertising; and I was interested to speak to a company at the recent Salesforce.com Service Cloud 2 launch which is running digital signage systems on the Salesforce force.com platform. The company is signagelive, run by http://www.remotemedia.co.uk/, and its secret sauce is to use the internet and commodity technology to run 10,000 displays around the world cheaply and efficiently.

As I understood it from my brief conversation, a force.com application provides customers with a dashboard for managing their screens, usable from any web browser. Content is served to the screens over the Internet using Media RSS. This is well suited to the purpose, since it is easy for customers to update, and fail-safe: if the system fails or the connection breaks, screens simply carry on displaying the last version of the feed that they retrieved. Since Media RSS is a standard, the content can also feed desktop applications; and of course it does not have to be advertising, though often it is.

A signagelive display could be a low-powered network-connected device attached to a display, or a display alone with enough intelligence to retrieve a Media RSS feed and display its images – what you can do in the home with something like an eStarling connected photo frame or a PhotoVu wireless digital picture frame, but with bigger displays. The company is looking forward to displays which include on-chip Adobe Flash players, since this will enable animation and video to be included at little extra cost. The media itself is currently stored on company servers, but is likely to move to Amazon S3 in future – which makes sense for scalability, pay-as-you-go pricing, and for taking advantage of Amazon’s global network, reducing latency.
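For the curious, a Media RSS feed is ordinary XML in the media: namespace, which is part of what makes it practical for low-powered display devices. A minimal sketch of the client side in Python (the feed content below is invented for illustration; a real display would fetch its feed over HTTP and cache the last good copy so it can keep playing if the connection drops):

```python
import xml.etree.ElementTree as ET

MEDIA_NS = {"media": "http://search.yahoo.com/mrss/"}

# An invented, minimal Media RSS feed of the sort a display might poll.
FEED = """<?xml version="1.0"?>
<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">
  <channel>
    <title>Example signage feed</title>
    <item>
      <title>Ad one</title>
      <media:content url="http://example.com/ad1.jpg" type="image/jpeg"/>
    </item>
    <item>
      <title>Ad two</title>
      <media:content url="http://example.com/ad2.jpg" type="image/jpeg"/>
    </item>
  </channel>
</rss>"""

def media_urls(feed_xml):
    """Return the media:content URLs in document order."""
    root = ET.fromstring(feed_xml)
    return [el.get("url") for el in root.findall(".//media:content", MEDIA_NS)]

urls = media_urls(FEED)
print(urls)  # ['http://example.com/ad1.jpg', 'http://example.com/ad2.jpg']
```

A display device would simply loop through the listed media, re-fetching the feed on a schedule, which is why a stale cached copy degrades so gracefully.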

If you want to see an example, apparently the London Docklands Light Railway screens are driven by signagelive; they are also in Harrods.

CEO Jason Cremlins has a blog post about the future of DOOH. My further thought is that if you had devices able to run Flash applications, you could put this together with touch screens and add interactivity to the mix.

The boundaries between internet and non-internet advertising are blurring. Ad networks such as those run by Google can be extended to networks using this kind of technology in a blink. Why shouldn’t advertisers be able to select airport lounges or underground stations alongside Adsense for websites?

The less compelling aspect of the technology is that as the costs of running these advertising networks come down, the likelihood of intrusive advertising screens invading every possible public space increases.

I also found this interesting as an innovative use of the Salesforce platform. As I recall, it hooks into other force.com applications to handle billing, customer record management and so on, and shows the potential for Salesforce to move beyond CRM. With the Adobe Flash aspect as well this example brings together a number of themes that I’ve been mulling over and I enjoyed hearing about it.

Traditional IT is a scam, says Salesforce.com CEO Marc Benioff, introducing Service Cloud 2

Yesterday I attended the London launch for Service Cloud 2 from Salesforce.com. A weary but still ebullient Marc Benioff showed off his new book Behind the cloud – sure to be a bestseller if only for the copies his own company has purchased – and introduced a demo of Service Cloud 2.

There are several elements to Service Cloud 2, which puts customer service alongside CRM as a core Salesforce offering. Traditional call centres are last-century technology, says Benioff, and today’s customers go to Google, Facebook or Twitter before picking up the phone. Salesforce Knowledge is a multi-tenant knowledgebase – a specialist type of content management system – that hooks into a customer service online dashboard as well as being exposed to Google etc. Salesforce Answers is a:

complete, customisable website that facilitates question/answer style conversations between customers

The idea is to promote interaction with customers and to feed customer knowledge (since customers often provide the best support) back into the knowledgebase. Salesforce Answers can be published as a Facebook site as well as on the Web. Another feature is the ability to monitor Twitter, pick up what people are saying about your product, and intervene as appropriate. You can also have a Twitter account to which customers can address queries.

If customers do in fact pick up the phone, the same information along with customer details can be used to offer support.

We saw an impressive demo based on Dell.com’s Salesforce application – Dell is a big Salesforce customer, and CEO Michael Dell a friend of Benioff. A customer calls in, and all their details, purchase and support history pop up automatically, based, presumably, on their incoming number. A few quick taps and the representative is able to answer their question. You would imagine that every competent call centre has a similar arrangement – having said which, we’ve all had experience of call centres where you are passed from rep to rep, repeating your details and your problem with each new contact. We also saw a Facebook page and a Twitter encounter where a customer got quick and accurate responses. Of course the demo problems were nice easy ones like “How do I fit more RAM”, not more intricate ones like “why does Windows freeze every third time I boot”.

The core of the Salesforce proposition is that multi-tenant applications are more cost-effective than traditional in-house IT. The most striking statistic Benioff offered is that they support 63,000 customers on just 1500 Dell PCs. “What a scam traditional IT is”, said Benioff, referring to the low utilisation of most in-house servers – though virtualisation also goes some way towards solving this problem.

One of Benioff’s tips for success is to bring customers and prospects together for informal marketing, and this launch was an example. It was hosted at the London Stock Exchange, a great location for a business-oriented presentation, and I was surrounded by men in suits, unlike the informal attire at the more technology-focussed events I attend.

At the party afterwards there was a piece of marketing genius. Smiling staff circulated at the party with armfuls of Flip video cameras. Guests were asked to say something about Salesforce.com into the camera – questions like “How has using Salesforce impacted your business?” – and to agree to allow their piece to be used in marketing. In return they got a camera. I wonder if the company will disclose that last piece of information alongside the comments?

Disclosure: I got a Flip too.

While most customers are positive, I did hear some grumbles as well. Cost is one: while you save on infrastructure cost, the Salesforce model is not necessarily cheap, and extras like training are a significant cost. I also heard how the system sometimes fails, not with downtime as such, but with things like scheduled data exports (for backup) failing to run; I presume these are resource hogs and get shunted out of the way to keep the system responsive for immediate transactions. I also heard that the platform now has its own legacy and that some things work in odd ways because they are too difficult to change.

Another worry is lack of control. If something goes wrong, there is nothing you can do beyond harrying support. One customer said it was like being on a train that is late. If you are between stations you cannot even get off the train.

Still, I believe the cloud is the future because of sheer economics; it is more cost-effective. Further, when it comes to multi-tenanted applications Salesforce.com is undoubtedly the leader in its segment.

I spoke to another customer about a particularly interesting use of the platform and will post about that separately.

Rentokil Initial adopting Google Apps – largest deployment yet, apparently

Following a successful 100-day trial with 800 users, Rentokil Initial is deploying Google Apps Premier Edition globally to “up to 35,000 colleagues” by the end of 2010, in what the press release says is the:

Largest deployment of Google Apps™ Premier Edition to replace multiple email systems with a standard global email solution … The new platform will provide a single web-based communication and collaboration suite to replace the Group’s existing 180 email domains and 40 mail systems across its six operating divisions.

Note that the focus is on email, though the release also talks about “communication and collaboration”, including Google chat and video and shared calendars.

Rentokil is keen on the translation service which Google offers:

…the frustrations of not having access to a single company-wide email address database will disappear and the translation difficulties faced by those colleagues wanting to collaborate with others around the world will be lessened

says CIO Bryan Kinsella.

There is no mention of word processing, spreadsheets or presentation graphics in the release, suggesting that a wholesale move to Google for documents is not currently envisaged. That said, I suspect that once an organization signs up for email and collaboration services, they will end up using other parts of the platform as well.

Google’s progress in the Enterprise is interesting to watch. If it is successful, it will have a profound impact on the IT industry, and there will be less work for all those support organizations that spend their time keeping Microsoft systems up and running.

When the unthinkable happens: Microsoft/Danger loses customer data

Danger, a company acquired by Microsoft in April 2008, provides synchronization and online data storage for mobile devices, the best-known being the T-Mobile Sidekick. Here’s the Danger promise:

Data is always synchronized and backed up
Danger-powered devices are always connected to the Danger service. All user data is automatically and securely backed up over-the-air, and emails, photos, and organizer data are automatically synchronized with a Web-based application. All changes that are made on the device are instantly and automatically reflected on the user’s computer, and vice versa.

That dream is in tatters thanks to a currently unspecified server failure. Problems started over a week ago, culminating in this devastating “status update”:

Regrettably, based on Microsoft/Danger’s latest recovery assessment of their systems, we must now inform you that personal information stored on your device – such as contacts, calendar entries, to-do lists or photos – that is no longer on your Sidekick almost certainly has been lost as a result of a server failure at Microsoft/Danger … we recognize the magnitude of this inconvenience.

The word “inconvenience” does not express what some users are experiencing. Here’s an example:

I too have lost business contacts (over 200), family and friends mailing, email address & phone #. Good luck now with holiday cards.  Without my calendar, I now have no clue when all my upcoming appts are.  In addition, I have lost passwords, account codes and my gym workout routine.  I was unable to do my side jobs over the past two weekends without these codes.  To recover the information will take hours of my time worth way more than the month of service credit in addition to the money I have already lost not being able to work.

The entire reason I chose to stay with the sidekick and renew with t-mobile was because of the piece of mind knowing that my data information was backed up to an online system. 

So what next? People are drawing a variety of conclusions, the most obvious being either that the cloud can never be trusted, and/or that Microsoft can never be trusted. Of course there is no such thing as total data (or any other kind of) security, but risks can be minimized, and in the absence of nuclear war, earthquake or volcanic eruption this looks inexcusable – but bad things happen.

The company is promising an update tomorrow (October 12th). Personally I doubt that the data is really irrecoverable, knowing a little about what data forensics can achieve, but it may be economically irrecoverable. Still, the best thing Microsoft could do would be to announce that it can get the data back after all. Failing that, we need to understand as much as possible about what went wrong so that we can make our own judgment about what to conclude.

Presuming that the data does not reappear, this is going to get messy. What happens when the marketing information says one thing, but the small print says another (as is often the case)? One user found this in his contract:

The services and devices are provided on an “as-is” and “with all faults” basis and without warranties of any kind

which may well be typical. Then again, what about T-Mobile’s relationship with Microsoft?

Finally, while I accept that data may be safer in a cloud service provider’s data centre than on my cheap local hard drives, it is also obvious that cloud + local backup is even safer. Apparently this is one thing that Danger made somewhat difficult, and I’ve known this to be true of other cloud-based services.

Update: rumour has it that this was a failed SAN (Storage Area Network) upgrade without a backup. Further rumours of the poor state of Danger (and Windows Mobile) within Microsoft are in this RoughlyDrafted article.

Adobe uses Amazon platform for cloud LiveCycle ES2

Just spotted this from today’s Adobe’s LiveCycle ES2 announcement:

Adobe is also announcing the ability for enterprise customers to deploy LiveCycle ES2 as fully managed production instances in the cloud, with 24×7 monitoring and support from Adobe, including product upgrades. LiveCycle ES2 preconfigured instances will be hosted in the Amazon Web Services cloud computing environment.

This is neat: Amazon’s Elastic Compute Cloud handles the infrastructure, but customers get fully supported hosted services from Adobe.

Maintaining a global infrastructure for high-volume cloud services is hugely expensive, which restricts it to a few very large companies. Using Amazon removes that requirement at a stroke. I wonder if Adobe also uses Amazon for Acrobat.com – hosted conferencing and document-based collaboration – or plans to do so?

Future of Web Apps cheers the independent Web

The Future of Web Applications conference in London is always a thought-provoking event, thanks to its diversity, independence and character. That said, it is a frustrating creature at times. The frustration on day 1 was the barely functional wi-fi, which ruined a promising interactive application called HelloApp, built with ASP.NET MVC. HelloApp would have told us who we were sitting next to, what their interests were, their Twitter ID and so on. Microsoft must be disappointed that it flopped, since the developers, some of them more used to technologies like PHP and Ruby, said how impressed they were with the framework and Visual Studio. The poor connectivity was a shame, and a bad slip-up for a web application conference. Even the speakers had to work mostly offline – cloud devotees beware.

Ryan Carson at the Future of Web Apps London, 2009

FOWA has been at London Excel recently, but this event was back at its earlier venue of Kensington Town Hall – more crowded, but a better atmosphere and easier to get to. I suspect a little downsizing, but much prefer it. Organizer Ryan Carson has his heart set on enabling start-ups, proffering business advice and uniting developers, designers and money folk, though many attendees are not in the start-up category at all. When revealing the results of a survey showing that many web app hopefuls had fewer than 1,000 site visitors a month, he shook his head despairingly: “you’re never gonna build a business on that kind of traffic”.

Carson has excellent contacts and the day kicked off with Digg’s Kevin Rose on how to get those visitor numbers up – he should know if anyone does. Rose exceeded my expectations with tips on massaging your visitor egos, avoiding analysis paralysis, hanging round event parties to meet influencers even when you can’t afford to attend the event, and even how to hack the press.

After that the day was disappointingly low-key, at least until midday. Then we got Francisco Tolmasky from 280 North, and it all changed. Tolmasky’s line is that we should use pure web technology but with the richness of desktop applications, and to enable this he’s put forward Cappuccino, a JavaScript framework inspired by Apple’s Objective-C and Cocoa – Cappuccino uses Objective-J. This now has a visual development tool (web-based, of course) called Atlas, and in Tolmasky’s demo it looked superb. See here for more details.

The surprising twist is that after developers told Tolmasky that they (or their companies) were not willing to trust code to the web, 280 North came up with a desktop version of Atlas, with the added ability to create desktop applications as well. I am not clear about all the runtime details, though it no doubt involves WebKit, but Tolmasky’s differentiator versus alternatives like Java or Adobe AIR is that Atlas uses only web APIs.

We heard a lot at FOWA about social media, how to use it for marketing, and how to integrate it into applications. Cat Lee from Facebook gave us a breathless presentation on how simple it is to hook into Facebook Connect. It was OK but it was a sales pitch, and that never goes down well at FOWA. 

The later afternoon sessions were excellent. Bruce Lawson of Opera gave us an entertaining overview of how HTML 5 would make life easier for developers. There was nothing new here, but nevertheless a revealing moment. He showed some rich media working in HTML 5 and made the comment, jabbing at Adobe Flash and Microsoft Silverlight, that the web was too important to place control in the hands of any one vendor. A loud and spontaneous cheer went up.

This was echoed later when Aza Raskin of Mozilla gave us a browser-centric view of social media, suggesting that the browser could broker our “social graph” by integrating with multiple identity providers. Raskin’s line: social media is too important to be in the hands of any one vendor.

The Guardian’s Chris Thorpe gave a bold presentation about how the Guardian wants to embed itself in the web through its open platform. Like most print media, the Guardian has many challenges around its future business model (disclaimer: I write for the Guardian from time to time); but Thorpe’s presentation shows that his newspaper is coming up with an intelligent response, promoting interaction and building out into the wider web rather than erecting paywalls. Having said that, maybe the Guardian will try other business models too; it is a journey into the unknown.

Overall a day for social media and the open web, and a good antidote to the more vendor-centric conferences at which I often find myself. Next week, for example, it is the Flash-centric Adobe MAX; and having heard very little about Flash at FOWA that will make an interesting contrast.

Apple is like Microsoft

That was my first thought after seeing the news that Google CEO Dr Eric Schmidt is leaving the Apple board. Steve Jobs:

Unfortunately, as Google enters more of Apple’s core businesses, with Android and now Chrome OS, Eric’s effectiveness as an Apple Board member will be significantly diminished, since he will have to recuse himself from even larger portions of our meetings due to potential conflicts of interest. Therefore, we have mutually decided that now is the right time for Eric to resign his position on Apple’s Board.

I realise that we are more used to the idea that Apple is Microsoft’s polar opposite. Apple has design and beautiful hardware; Microsoft has OEMs with their model-a-minute systems that are never quite right. Apple has the iPhone, which everyone wants; Microsoft has Windows Mobile, which everyone puts up with (if they don’t have an iPhone). Apple has the iPod, which everyone uses; Microsoft has the Zune, which nobody uses. And so on.

Nevertheless, Apple and Microsoft are companies from the same era, and they both make most of their money by constantly upgrading the client and persuading us to buy into the latest version. Although Apple has some investment in the cloud, with MobileMe and, more importantly, the App Store, these exist primarily to support its client devices.

Google on the other hand is invested in the cloud. Projects like Android and Chrome OS may run on the client, but they are not profit centres in themselves – they exist to promote Google’s web-based services (see Google Chrome OS – the Web’s the thing). It is important for Google to make these investments, as without them the client-centric giants (Apple and Microsoft) have too much power to impair web-based computing in favour of the old model.

Recently Apple has been making life miserable for App Store developers by denying applications that compete with built-in iPhone features – most visibly in the case of Google Voice. Unfortunately, by protecting the iPhone in this way, Apple is diminishing its usefulness in the cloud era.

Apple is not quite like Microsoft. Apple can grow by taking market share from Microsoft, whereas it is harder for Microsoft to do the reverse (though Windows 7 is a good attempt). Apple can make more inroads into business computing. It can broaden the market for the iPhone by making a wider range of devices and lowering the price of entry, as it did with the iPod. The digital home is another promising market.

On the other hand, Microsoft has more of a cloud platform than Apple. Microsoft has Bing-Yahoo search, Hotmail and Messenger, Windows Azure and Silverlight. It has failed so far, but in theory it could build this into a viable alternative to Google.

Still, now that Apple and Google have started to break their alliance and openly compete, it’s clear that Apple and Microsoft are on the same side of a great divide, with Google on the other.
