
Marc Benioff: Google deal is aimed at a common enemy

Here at Dreamforce Europe, I asked Salesforce.com CEO Marc Benioff about the company’s agreement with Google, in which Salesforce becomes an OEM for Google Apps. We saw this demonstrated in the keynote. You can start an email via Gmail from within a Salesforce contact. When sent – provided you click the Salesforce send button and not the Gmail send button – the email is added to the contact history. A similar feature lets you attach a Google document to a Salesforce record.

It’s a useful feature; but long term, will Salesforce.com and Google be competitors rather than partners? It is a natural question, since both companies are promoting their services as a platform for applications. Salesforce has the Apex programming language, while Google has its App Engine. According to Benioff, App Engine is primarily for Python developers, while Salesforce.com is a platform for enterprise applications. This struck me as downplaying Google’s likely ambitions in the enterprise market.

I therefore asked Benioff whether the agreement with Google included any non-compete element, or whether Google might be a future platform competitor. He did not answer my question, but said:

The enemy of my enemy is my friend

The identity of the enemy is unspecified; but given that Benioff used Microsoft .NET as the example of what his platform is supposedly replacing, that Google Docs competes with Microsoft Office, and that Benioff makes constant jibes at the complexity and expense of developing for Windows, I guess we can draw our own conclusions.

For sure, it did little to allay my suspicion that Salesforce.com and Google will not always be so warm towards one another.

As an aside, there are ironies in Benioff’s characterization of .NET. Microsoft launched .NET as a “platform for web services”, which is exactly what Salesforce.com has become. Microsoft was a key driver behind the standardization and adoption of SOAP, which is the main protocol in the Salesforce.com API.

Live Mesh: Hailstorm take 2?

So says Spolsky, in a rant about both unwanted mega-architectures and the way big companies snaffle up all the best coders.

Is he right? Well, I attended the Hailstorm PDC in 2001 and I still have the book that we were given: .NET My Services specification. There are definitely parallels, not least in the marketing pitch (from page 3):

.NET My Services will enable the end user to gain access to key information and receive alerts about important events anywhere, on any device, at any time. This technology will put users in total control of their data and make them more productive.

Swap “.NET My Services” for “Live Mesh” and you wouldn’t know the difference.

But is it really the same? Spolsky deliberately intermingles several points in his piece. He says it is the same stuff reheated. One implication is that because Hailstorm failed, Live Mesh will fail. Another point is that Live Mesh is based on synchronization, which he says is not a killer feature. A third point is that the thing is too large and overbearing; it is not based on what anyone wants.

Before going further, I think we should ask ourselves why Hailstorm failed, and what some of the people involved think. Mark Lucovsky, chief software architect for Hailstorm and now at Google, says in this post:

I believe that there are systems out there today that are based in large part on a similar set of core concepts. My feeling is that the various RSS/Atom based systems share these core concepts and are therefore very similar, and more importantly, that a vibrant, open and accessible, developer friendly eco-system is forming around these systems.

Joshua Allen, an engineer still at Microsoft, disagrees:

All of these technologies predate Hailstorm by a long shot.  There is a reason they succeeded where Hailstorm failed.  It’s because Hailstorm failed to adopt their essence; not because they adopted Hailstorm’s essence …. the “principles” Mark’s blog post cites are actually principles of the technologies Hailstorm aimed to replace.

But as Allen shows in the latter part of his post, the technology was incidental to the main reasons Hailstorm failed:

  1. Hailstorm intended to be a complete, comprehensive set of APIs and services ala Win32.  Everything — state management, identity, payments, provisioning, transactions — was to be handled by Hailstorm.
  2. Hailstorm was to be based on proprietary, patented schemas developed by a single entity (Microsoft).
  3. All your data belonged to Microsoft.  ISVs could build on top of the platform (after jumping through all sorts of licensing hoops), but we controlled all the access.  If we want to charge for alerts, we charge for alerts.  If we want to charge a fee for payment clearing, we charge a fee.  Once an ISV wrote on top of Hailstorm, they were locked in to our platform.  Unless we licensed a third party to implement the platform as well, kind of like if we licensed Apple to implement Win32.

Hailstorm’s technology was SOAP plus Passport authentication. There were some technical issues. I recall that Passport in those days was suspect. Some smart people worked out that it was not as secure as it should be, and there was a general feeling that it was OK for logging into Hotmail but not something you would want to use for online banking. As for SOAP, it gets a bad rap these days but it can work. That said, these problems were merely incidental compared to the political aspect. Hailstorm failed for lack of industry partners and public trust.

Right, so is Live Mesh any different? It could be. Let me quickly lay out a few differences.

  1. Live Mesh is built on XML feeds, not SOAP messaging. I think that is a better place to start (see the sketch after this list).
  2. Synchronization is a big feature of Mesh that wasn’t in Hailstorm. I don’t agree with Spolsky; I think this is a killer feature, if it works right.
  3. Live Mesh is an application platform, whereas Hailstorm was not. Mesh plus Silverlight strikes me as appealing.
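On the first point, here is a minimal sketch of what consuming a Mesh-style Atom feed could look like, using the .NET 3.5 syndication classes. The feed URL is made up, and the real Mesh endpoints and authentication have not been published, so treat it as an illustration of the feed-based approach rather than the actual Mesh API.

```csharp
// Minimal sketch: reading an Atom feed with the .NET 3.5 syndication classes.
// The URL is hypothetical; real Live Mesh endpoints and authentication
// have not been published yet.
using System;
using System.ServiceModel.Syndication; // System.ServiceModel.Web.dll (.NET 3.5)
using System.Xml;

static class MeshFeedSketch
{
    static void Main()
    {
        string feedUrl = "https://mesh.example.com/user/devices/feed"; // made-up endpoint

        using (XmlReader reader = XmlReader.Create(feedUrl))
        {
            SyndicationFeed feed = SyndicationFeed.Load(reader);
            foreach (SyndicationItem item in feed.Items)
            {
                Console.WriteLine("{0} (updated {1})", item.Title.Text, item.LastUpdatedTime);
            }
        }
    }
}
```

On the second point, reading a feed is only half the story: a syncing client would presumably also merge changes back using FeedSync-style metadata, and that is the part we have not yet seen in detail.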

Still, even if the technology is better, what about the trust aspect? Will Mesh fail for the same reasons?

It is too soon to say. We do not yet know the whole story. In principle, it could be different. Mesh is currently Passport (now Live ID) only. Will it be easy to use alternative authentication providers? If the company listens to its own Kim Cameron, you would think so.

Currently Mesh cloud data resides only on Microsoft’s servers, though it can also apparently do peer-to-peer sync. Will we be able to run Mesh entirely from our own servers? That is not yet known. What about one user having multiple meshes, say one for work, one personal, and one for some other role? Again, I’m not sure if this is possible. If there is only One True Mesh and it lives on Live.com, then some Hailstorm spectres will rise again.

Finally, the world has changed in the last 7 years. Google is feared today in the way that Microsoft was feared in 2001: the entity that wants to have all our information. But Google has softened us up to be more accepting of something like Live Mesh or even Hailstorm. Google already has our search history, perhaps our email, perhaps our online documents, perhaps an index of our local documents. Google already runs on many desktops; Google Checkout has our credit card details. What boundary can Live Mesh cross, that Google has not already crossed?

Hailstorm revisited is an easy jibe, but I’m keeping an open mind.

Mono on the iPhone

Unlocked iPhone, of course. Miguel de Icaza has the details and some video links.

Flash, Silverlight, Mono, Java: surely Jobs won’t keep all these runtimes officially forbidden for ever? It strikes me that Flash has the best chance of getting there, simply because without it the Web is a little bit broken for iPhone users. It’s an influential device and its runtime support (or lack thereof) will be a factor in web development trends.


Steve Ballmer: post Yahoo, we will be a PHP shop

Steve Ballmer took a few questions yesterday at Mix08 in Las Vegas, and I asked him what Microsoft would do with all Yahoo’s PHP applications if its takeover bid succeeds, especially where they duplicate home-grown applications that are running on ASP.NET. PHP is deeply embedded in Yahoo’s culture, and Rasmus Lerdorf, who invented PHP, works at Yahoo as Infrastructure Architect.

He gave me a fuller answer than I expected, which is worth quoting in its entirety:

There’s really two different questions. In a number of areas, and I won’t go into specifics, but we will have to make some kind of integration plans after presumably we reach deal and it will be appropriate to talk to the Yahoo guys. We shouldn’t have two of everything. It won’t make sense to have two search services, two advertising services, two mail services, and we’ll have to sort some of that through. Some of that technology undoubtedly will come from Microsoft’s side, and some will undoubtedly come from Yahoo’s side, whatever technology comes, it will also come with an infrastructure that runs it.

You ask what we will do with those PHP applications? I’m sure a bunch of them will be running, at high scale and in production for a long time to come.

I think there’s going to be a lot of innovation in the core infrastructure which we have on Windows today with ASP.NET, and Yahoo have in Linux and PHP today, and over time probably most of the big applications on the Internet will wind up being rebuilt and redone, whether those are ours, or Yahoo’s, or any of the other competitors. But for the foreseeable future we will be a PHP shop, I guess if we own Yahoo, as well as being an ASP.NET shop.

One of the things I love which we got into the new Windows Server, is that we put a lot of attention in to making sure that PHP applications run well on Windows Server. That’s not the current Yahoo environment and I’m not suggesting that we would transition that way, but for those of you who do have PHP skills, we are going to try and make Windows Server the best place to have PHP applications in the future.

It was a good answer, though I’d still expect integration to be difficult. One danger is that post-merger infighting over what gets preserved and what gets scrapped could stifle innovation. Microsoft’s Live platform actually looks increasingly interesting, as we’ve learned here at Mix, and I imagine that some of these teams will be nervous about what will happen to their efforts if Microsoft-Yahoo becomes a reality.

Microsoft promises WPF DataGrid, big performance improvement for .NET clients

Microsoft’s Scott Guthrie posts about coming service updates to client-side .NET (Windows Forms and Windows Presentation Foundation). He says we can expect:

  • A new, quicker and more efficient setup framework
  • 25%-40% faster start-up for applications using .NET 2.0 and higher, and smaller runtime footprint
  • More hardware acceleration in WPF, plus better video performance and data-handling improvements
  • A DataGrid, Ribbon, and Calendar/DatePicker for WPF (see the sketch after this list)
  • Improved WPF designer for Visual Studio 2008
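Since the DataGrid is only announced, here is a hedged sketch of how I would expect it to be used, assuming it ends up in System.Windows.Controls and binds like the existing WPF ItemsControls; the class and property names are guesses until the bits ship.

```csharp
// Speculative sketch of the announced WPF DataGrid. The DataGrid type and its
// AutoGenerateColumns / IsReadOnly properties are assumptions about the final
// API; everything else is standard WPF and C# 3.0.
using System;
using System.Collections.Generic;
using System.Windows;
using System.Windows.Controls;

public class Invoice
{
    public string Customer { get; set; }
    public DateTime Due { get; set; }
    public decimal Amount { get; set; }
}

public static class DataGridSketch
{
    [STAThread]
    public static void Main()
    {
        // Hypothetical control: assumed to generate one column per public property.
        var grid = new DataGrid
        {
            AutoGenerateColumns = true,
            IsReadOnly = true
        };
        grid.ItemsSource = new List<Invoice>
        {
            new Invoice { Customer = "Contoso",  Due = DateTime.Today.AddDays(30), Amount = 1200m },
            new Invoice { Customer = "Fabrikam", Due = DateTime.Today.AddDays(14), Amount = 450m }
        };

        var window = new Window { Title = "DataGrid sketch", Content = grid };
        new Application().Run(window);
    }
}
```

If it behaves the way the existing ListView does, most real use would be in XAML with explicit column definitions and data binding rather than code-behind like this.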

These address common real-world complaints. I’m sceptical; when version 1.0 of the .NET Framework came out, Microsoft said it was working to reduce the runtime memory footprint for Windows Forms applications, but it never happened. Let’s hope this time it will be different.