
Zoho CEO on Flash vs Javascript

Zoho is an online office suite. I was interested in comments from Zoho’s Sridhar Vembu on why it is coded using Javascript rather than Flash. He gives five reasons:

  1. Web standards. “Flash, for all its advantages, sits in a separate space from the browser.”
  2. Open source libraries more widely available
  3. Vector graphics can be done in browsers (SVG, VML)
  4. Mobile support – “one word – iPhone”
  5. Smaller size = faster loading

Note that he is not rejecting Flash in all circumstances; he merely regards it as less suitable than Javascript for his company’s premier product and web application.
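Sridhar's third point is easy to demonstrate: a browser that understands SVG can render vector graphics from a few lines of script, no plugin required. Here's a minimal sketch of my own (illustrative only, not Zoho code) that builds SVG markup which could be injected into a page:

```javascript
// Build a minimal SVG document as a string -- the same markup a
// browser-based app could insert into the page for vector drawing.
function svgCircle(cx, cy, r, fill) {
  return '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">' +
         '<circle cx="' + cx + '" cy="' + cy + '" r="' + r +
         '" fill="' + fill + '"/></svg>';
}

var markup = svgCircle(50, 50, 40, 'red');
console.log(markup);
```

In IE, which lacked SVG at the time, you would generate VML instead; that is the browser difference Flash developers never have to think about.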

Convinced? It’s a fair case, though I suspect you could equally easily make a case for Flash, citing reasons like:

  1. No need to code around browser differences
  2. Faster code thanks to just-in-time compilation
  3. More consistent font rendering across different platforms and browsers
  4. Easier coding of complex effects and layouts

Sridhar’s most compelling point

One way of investigating further is to contrast the Flash-based Buzzword with Zoho Writer. They are very different. Zoho’s user interface is busy and cluttered by comparison, though it has some ambitious features which Buzzword lacks (Insert Layer, for example). Personally I prefer the cleaner UI. But is that really because of Flash vs Javascript, or simply the outcome of different design decisions? Zoho’s apps are like its website: too much stuff thrown at the user. I count 25 products advertised on its home page – fourteen apps, four utilities, one beta, four add-ons, and two uncategorised (iZoho and Zoho in Facebook). Overwhelming.

Users don’t care about Flash vs Javascript; they care about usability and productivity.

Another twist is what happens when these apps introduce offline support. Zoho has already done so, using Google Gears, but I don’t much like the implementation. It is modal and intrusive. I want offline synch to happen seamlessly when I hit Save; it should only raise its own UI when there is a conflict. There is also the point that Adobe’s Kevin Lynch made at the Max conference last month (and no doubt elsewhere): it is counter-intuitive to open a browser, when offline, to access a web application. Adobe has AIR, and Mozilla is also working on solutions to this. But to my mind Flash has an advantage here. Think: AIR, web storage, local cache. Whoever gets this right will grab a lead in the online office wars.
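To illustrate the behaviour I have in mind, here is a sketch of my own — all names are illustrative, and this is emphatically not how Zoho or Google Gears actually implement it. Save always writes to a local cache; a background sync pushes changes to the server when a connection is available; the user is only interrupted when there is a genuine conflict:

```javascript
// Non-modal offline save: write locally first, sync when online,
// and surface UI only on a genuine conflict. Illustrative sketch,
// not the Google Gears API.
function createDocStore(server) {
  var local = {};            // local cache: id -> {text, version, dirty}
  return {
    save: function (id, text) {
      var doc = local[id] || { version: 0 };
      local[id] = { text: text, version: doc.version, dirty: true };
    },
    sync: function (online) {
      if (!online) return [];              // offline: silently queued
      var conflicts = [];
      for (var id in local) {
        var doc = local[id];
        if (!doc.dirty) continue;
        var remote = server[id];
        if (remote && remote.version > doc.version) {
          conflicts.push(id);              // only now raise any UI
        } else {
          server[id] = { text: doc.text, version: doc.version + 1 };
          doc.version += 1;
          doc.dirty = false;
        }
      }
      return conflicts;                    // ids needing user attention
    }
  };
}

var server = {};
var store = createDocStore(server);
store.save('memo', 'first draft');
store.sync(false);                         // offline: no dialog, no fuss
var conflicts = store.sync(true);          // online: pushed silently
console.log(conflicts.length, server.memo.text);
```

The point is that the common path (no conflict) never shows the user anything at all.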


Vista myths and reality

CNET’s inclusion of Vista on a list of top ten terrible tech products has drawn some attention. Here’s the blurb:

Its incompatibility with hardware, its obsessive requirement of human interaction to clear security dialogue box warnings and its abusive use of hated DRM, not to mention its general pointlessness as an upgrade, are just some examples of why this expensive operating system earns the final place in our terrible tech list.

Fair? Let’s have a look:

  • Incompatibility with hardware

Not fair. I don’t think Vista is worse in this respect than any other new operating system. I have used Vista from day one and my only outright failure is an aged Umax scanner – that’s across several desktops and laptops.

  • Obsessive requirement of human interaction to clear security dialogue box warnings

Not fair. This is about UAC, right? Which you can turn off if you want. But you won’t see these dialogues often – only if you install software, perform admin tasks, or run badly designed applications like, say, LG PC Suite (I’ve suffered from this one recently).

In all cases UAC is working as designed. After all, the purpose of UAC is not just immediate security, but to force app developers to design apps that do not undermine Windows security.

  • Abusive use of hated DRM

Not fair. I’ve not run into any DRM issues with Vista. Some claim that Vista performance problems are DRM-related but I’m sceptical.

  • General pointlessness as an upgrade

Now this is a tough one. What is the benefit of Vista? Then again, what can you do in XP that you cannot do in Windows 2000? It’s certainly open to argument, but I don’t agree. I prefer Vista; I regard it as more secure; and there are a number of small details that I like, which together add up to a better experience.

What Crave didn’t say

Despite the above, I do have some Vista gripes.

One is performance. The spinning bagel – I see it often. The Windows Explorer loading thermometer – you know, the green bar – what kind of nonsense is that?

Second, audio. This matters to me. And here’s a telling comment on my blog post:

I’m a pro audio user with thousands of dollars invested in MOTU audio interfaces and many years of recording experience. For most of us who use our computers to record, Vista has been a painful lesson. Often we need to run much smaller audio buffers to get lower latency than gamers or home theatre enthusiasts. This is something that was no problem on a well tuned XP machine. Unfortunately Vista has proved itself to be a very poor alternative. Even pro audio apps that register with MMCSS to guarantee CPU time to critical audio threads perform poorly. My feeling is that the move of most of the audio driver components from kernel mode to user mode is at the root of the issues we’re seeing. This move was made to reduce the likelihood that a bad audio driver could cause a BSOD. The trade-off however, has been much worse audio performance at low latency, regardless of how much money you spend on top-shelf audio interfaces.

Third, app compatibility. This is the crux of the matter. Microsoft designed Vista to make life difficult for apps that trample all over the Windows security model. To mitigate this it then has a bunch of stuff that tries to make life better for those apps, but which may cause further problems.

You can think of this as a battle for the future of Windows. If Vista wins, then the bad apps gradually get replaced by good apps, and in a few years the compatibility stuff will become irrelevant and life will be better for Windows users.

Alternatively, if the bad apps win, then users just revert to XP or turn off UAC so that the bad apps continue to work right. What is the way forward for Windows then? I do not know who will win this contest.

Finally, let’s acknowledge that Microsoft put a ton of energy into Vista that has not resulted in any immediate benefit to the user. One energy sink was the years wasted going down the wrong path prior to the notorious reset. The other energy sink is all this UAC and compatibility stuff which makes sense long-term, but not as a “wow! that’s better” experience. Possibly DRM is a third example.

Bottom line: Vista is not as bad as its detractors make out, but not as good as it should be. I know, I’ve said this before.


Another crack at the online office suite

The creator of Hotmail is having a crack at the online office suite market with Live Documents.

I’ve signed up but not received an invite yet. Of course I’ll take a look, but two things caught my immediate interest:

1. Uses Flash and Flex:

Built using RIA technologies such as Flash and Flex, Live Documents allow users to view and edit documents within any common browser on any operating system from anywhere.

2. Connected Desktop Client:

Live Documents gives you continuous two-way syncing of your documents and edits; between your PCs and the web and vice versa.

Sync is a big deal and would make this more interesting to me than, say, Google Docs in its present incarnation.

There’s integration with Microsoft Office and (in preparation) OpenOffice.


Saving the planet with Sun’s thin client or an Asus Eee PC

I spoke yesterday at an Education Forum on the subject of open source software. While I was there I sat in on a discussion led by Sun’s Simon Tindall, on the subject of thin clients. Sun has been beating this drum for some time, not least because it sells suitable servers, though as far as I can tell the take-up has been relatively modest. The argument is usually about manageability, but Tindall majored on the energy aspect. He claimed typical power consumption of 8 watts for a Sun Ray 2, versus maybe 50 – 120 watts for a traditional PC. That excludes the display, plus the additional power consumption of the necessary chunky server for a Sun Ray 2, but there’s little doubt that a thin client approach will save a significant amount of energy.
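The energy case is easy to quantify, at least roughly. Using Tindall's 8 watts against an assumed 85-watt midpoint for a PC, and assuming (my figures, purely for illustration) eight hours' use a day over 250 working days:

```javascript
// Rough annual energy saving per seat, thin client vs desktop PC.
// 8 W (Sun Ray 2, as quoted) vs an assumed 85 W midpoint for a PC;
// 8 hours/day, 250 working days -- both assumptions, not measurements.
var thinWatts = 8, pcWatts = 85;
var hoursPerYear = 8 * 250;                      // 2000 hours
var savedKWh = (pcWatts - thinWatts) * hoursPerYear / 1000;
console.log(savedKWh);                           // 154 kWh per seat per year
```

Multiply that by a school or office full of seats, and subtract something for the server, and the saving is still substantial.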

The views of users were mixed. There were enthusiasts, but reservations were also expressed about performance, particularly as multimedia becomes increasingly important. These devices have relatively weak graphics, and peripheral support can be a problem.

Even so, if we are serious about reducing energy consumption it strikes me that this area is worth looking at. Windows Vista has some new power-saving features, but also makes the problem worse by using rich graphical effects in the main Windows user interface. Constant disk activity from services like the search indexer cannot help either.

I am not sure about using a thin client for all my work, but I like the idea of minimalist devices that let you accomplish common tasks without firing up an energy-hungry PC or laptop. Much of the time, Internet, email and word processing is all I need, for example. A device like the Asus Eee PC is interesting in this context: small screen, solid-state disk, low power consumption – perhaps even less than a Sun Ray 2 with a typical display. There’s also the OLPC, which draws no more than 15 watts.


Huge update to MFC unveiled

Microsoft’s Herb Sutter has more details on the “massive update” to the Microsoft Foundation Classes (MFC), about which I blogged back in August.

The focus seems to be on UI features, including Vista themes, Office 2007 Ribbon-alike, new dialogs, task panes, docking, tabbing, and so on.

How big an update is this? Here’s what Sutter says:

This update nearly doubles the size of MFC. Now, “nearly doubles the size of X” can be a bad thing. In this case, though, it’s a Good Thing… in my opinion, at least.

MFC was originally designed as a thin C++ wrapper for the Windows API, which accounts for its ugliness when considered purely as an application framework. I don’t know if the update fixes any of those underlying issues, but it will be handy for developers who need a quick route to an up-to-date Windows UI.

I interpret this as Microsoft acknowledging the continuing importance of native code versus .NET programming, though personally I would still rather use CodeGear’s Delphi.


Google’s Open Handset Alliance site: not mobile, not open

I was browsing the web on my mobile, as one does, and came across a news item about the Open Handset Alliance, Google’s new initiative to foster a Linux-based operating system for mobile devices, codenamed Android. I clicked the link, but thought I’d mis-clicked, because this is what I got:

Open Handset Alliance site showing only a Google search page

Puzzled, I checked out the site later on a PC. Everything was fine:

Open Handset Alliance showing blurb about commitment to openness

The problem is that Google automatically detects mobile browsers and redirects them to an “/m” version of the site. Which in this instance is completely useless. There is no obvious way round it – I tried amending the URL, but it bounced straight back to Google search. This is one of the reasons I dislike the mobile web.
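The fix is not hard; it is the escape hatch Google failed to provide. Only redirect when a mobile version of the page actually exists, and honour an override so the user can insist on the full site. A hypothetical sketch (the parameter name and function are my invention):

```javascript
// Decide whether to redirect a mobile browser to the "/m" site.
// Redirect only if a mobile page exists, and respect a "full site"
// override in the query string (parameter name is illustrative).
function mobileRedirect(path, isMobile, mobilePages, query) {
  if (!isMobile) return path;
  if (query && query.indexOf('full=1') !== -1) return path;  // user opted out
  var mobile = '/m' + path;
  return mobilePages.indexOf(mobile) !== -1 ? mobile : path; // no useless bounce
}

var pages = ['/m/news'];
console.log(mobileRedirect('/news', true, pages, ''));       // -> /m/news
console.log(mobileRedirect('/about', true, pages, ''));      // -> /about (no /m page)
console.log(mobileRedirect('/news', true, pages, 'full=1')); // -> /news
```

Either check would have spared me the bounce to a bare search page.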

Let me add that Google has done a mixed job on the “open” aspect, even if you visit with a supported browser. Most of the site doesn’t mention Google. It places itself modestly in alphabetical order under Software Companies, in the list of members.

So far so good, but then I hit the terms of service:

1.2 Your use of products, software, services and websites in connection with the Open Handset Alliance website (referred to collectively as the “Services” in this document) is subject to the terms of a legal agreement between you and Google.

4.3 As part of this continuing innovation, you acknowledge and agree that Google may stop (permanently or temporarily) providing the Services (or any features within the Services) to you or to users generally at Google’s sole discretion, without prior notice to you.

5.5 Unless explicitly permitted to do so by Google, you agree that you will not reproduce, duplicate, copy, sell, trade or resell the Services for any purpose.

Ouch. Those pesky lawyers just don’t get this open thing, do they?

Why Silverlight?

I noticed this question in a comment to Rob Blackwell’s Reg article on Silverlight:

…given that MS never does anything without a commercial reason … why Silverlight? What sales will it make? What competition will it kill? As far as I can see, there’s nothing that will tie SL exclusively to a particular MS product.

Answer: it’s all about the platform stack. Microsoft does not want to cede this space to Adobe and Flash, because it is strategic. Use Flash, and you will likely use Adobe’s tools rather than Visual Studio, and the Java-based LiveCycle and JEE rather than ASP.NET and Windows Server. Use Silverlight, and you will use Visual Studio, ASP.NET, XAML, SQL Server – all the Microsoft stuff.

What about the Internet as an advertising platform? Flash/Silverlight is the client runtime.

What about the Internet as a broadcasting platform? Same story.

I speculated recently about the future of gaming.

Silverlight is partly defensive. In other words, less about “what sales it will make?” than about, “what sales will it avoid losing?” Web developers need to support cross-platform clients; if Microsoft cannot provide the tools and server-side platform to make that work, developers will look elsewhere.

I picked up a hint here at Tech-Ed that SQL Server Compact Edition may find its way into a future Silverlight. A cross-platform local database store makes a lot of sense; Adobe already has this in the form of SQLite. If Adobe’s AIR proves popular, Microsoft could relatively easily push Silverlight in that direction as well, providing a way of running Silverlight outside the browser.

Doesn’t this undermine Windows? Maybe a little, and I am sure this is a point of debate within Microsoft, but it is worth it.

Silverlight’s big problem: devices. Flash on iPhone: possible, even likely. Silverlight on iPhone, Nokia? A stretch.

Why Entity Framework when we have LINQ to SQL?

I’ve just returned from Carl Perry’s Tech Ed session on the Entity Framework, an object-relational library for ADO.NET, initially implemented for SQL Server. Perry is a Senior Program Manager Lead on the SQL Server team. The Entity Framework is the first implementation of what Microsoft calls the Entity Data Model. Generate a data model from a database, tweak the model in Visual Studio’s designer, then generate code to use in combination with LINQ (Language Integrated Query). I found this code snippet from Perry’s slides illuminating:

using (AdventureWorksModel model = new AdventureWorksModel())
{
    var query = from c in model.Customer
                where c.MiddleName == null
                select new {
                    FirstName = c.FirstName,
                    LastName = c.LastName,
                    EmailAddress = c.EmailAddress };

    foreach (var c in query)
    {
        Response.Write(String.Format("<p>{0}\t{1}\t{2}</p>",
            c.FirstName,
            c.LastName,
            c.EmailAddress));
    }
}

In the above code, AdventureWorksModel is an instance of an Entity Framework model, and as you can see makes for clean strongly-typed coding against the database.

But doesn’t Microsoft already have a shiny new object-relational layer called LINQ to SQL? Why bother with Entity Framework?

There appears to be considerable overlap, but the Entity Framework has higher ambitions. Perry said that LINQ to SQL is fine when your entities map closely to database tables, but Entity Framework is better for more complex mappings. It is not there yet, but it looks as if Microsoft will evolve the framework to enable model-first development and add features like the ability to define constraints in the model. All very familiar in the modeling world. The question may become: why bother with LINQ to SQL?

Entity Framework is not new; for example it is described in this paper from 2006. However, I had not looked at it before in any detail. You can download a beta here.

OpenSocial: where’s the identity story?

The media is ga-ga right now about Google’s OpenSocial story but most accounts are missing the key question here, which is about identity rather than APIs. An honourable exception is David Berlind – one of my top 10 tech journalists – who posed an interesting question at a press briefing but received an incomplete answer (a common experience). He asked how identities are mapped between containers.

I’ll explain. OpenSocial is an API for writing social widgets – JavaScript applets which hook into your relationships as expressed by people you’ve added as “friends”. Marc Andreessen has a great overview of the API, which is based on the concept of containers and apps. A container is a site such as MySpace or Orkut, which is where you’ve defined a set of relationships. An app is a widget or other JavaScript application that calls the OpenSocial APIs. The OpenSocial docs, which have just gone live, define a JavaScript API for widgets, and data APIs for People, Activities and Persistence, where Persistence covers retrieving and updating key/value pairs. All the data APIs use the GData protocol.
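To make the container/app split concrete, here is a toy model of my own of the Persistence idea — a friend graph plus per-user key/value pairs, owned by the container and exposed to apps. The names are mine, not the actual OpenSocial JavaScript API:

```javascript
// Toy model of an OpenSocial-style container: it owns the friend
// graph and a per-user key/value store that apps read and write.
// All names are illustrative, not the real OpenSocial API.
function createContainer() {
  var friends = {};   // user -> [friend, ...]
  var data = {};      // user -> {key: value}
  return {
    addFriend: function (user, friend) {
      (friends[user] = friends[user] || []).push(friend);
    },
    getFriends: function (user) { return friends[user] || []; },
    setData: function (user, key, value) {
      (data[user] = data[user] || {})[key] = value;
    },
    getData: function (user, key) {
      return (data[user] || {})[key];
    }
  };
}

// An "app" written against this interface runs unchanged in any
// container that exposes it -- which is the whole OpenSocial pitch.
var orkut = createContainer();
orkut.addFriend('alice', 'bob');
orkut.setData('alice', 'topTrack', 'Blue Monday');
console.log(orkut.getFriends('alice'), orkut.getData('alice', 'topTrack'));
```

Note what the container does not expose: any way to reach friends defined in a *different* container. That is the gap the rest of this post is about.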

The big deal about OpenSocial is that many containers will support the same API, making it easier for developers to write apps. For example, music discovery site iLike, already big on FaceBook, can write one app that will more or less work in both MySpace and Orkut. And some new container site can start up, and by supporting the OpenSocial APIs be immediately attractive to developers with existing OpenSocial apps.

That’s definitely an advantage, but who are my friends? My Facebook friends? MySpace friends? Orkut friends? LinkedIn friends? If the social app concept is as big as people think it is (count me as a little sceptical), then the existence of multiple incompatible friend networks will soon become intolerable. Further, if I sign into a new container site, the big barrier to entry is that I have to recreate a friends network on this new site. Some sites work around this problem in the crudest possible way. You have to give the new container site your username and password for some other container, and it pretends to be you and sucks out your existing contacts.

There is no simple answer to this. Even if you could do it, many people would be reluctant to merge multiple existing networks, because they represent different roles which they want to keep distinct. Social and business are the obvious ones, but that’s just the start.

There are several implications. First, the impact of OpenSocial on Facebook is probably less than some are implying. Facebook’s advantage is its bank of existing accounts and relationships. MySpace has lots of these too, and the arrival of OpenSocial has not changed that fact.

Second, the OpenSocial API is not such a big story yet. The big story will be when common identity gets added to OpenSocial. Right now, we have several containers vying to be the one true identity provider for the internet, and a brave but so far unsuccessful effort by OpenID to free us from that alarming prospect. OpenSocial could evolve some federated way to unite identities. Or Google could try to make its Google Account system the centre of our digital lives.

The identity wars matter more than the API wars.

Update

Others are also asking about this. Here’s a couple:

Dennis Howlett on a first Enterprise take

Bob Warfield who talks about a meta-social network

I’ll add more info as I find it, or by all means comment.

Now I understand what a rich internet application is

For a while now I’ve been puzzling over what exactly is meant by the term “Rich Internet Application” or RIA. Microsoft wants the initials to stand for “Rich Interactive Application” but it is losing that battle – see this great post by Dare Obasanjo. It is Adobe’s term, but it has never been clear to me exactly what it means. I’ve seen it refer to everything from internet-connected desktop applications, to Flash applications running in the browser, or even plain old HTML and JavaScript.

The way to understand a term is to look at its origin, and here I got a big clue from Adobe’s Chief Software Architect Kevin Lynch. At a press briefing during Adobe Max Europe last week, Lynch described what happened:

The whole move of Adobe to rich internet applications was actually driven by the community. It was people using the Flash player about 2001, 2002, to start creating not just interactive media or animation experiences, but application experiences. The first one at that time was something called the Broadmoor Hotel reservation system. It was a 5 or 6 page HTML process to check out and they were having a lot of drop off. They turned that into a one-screen check out process in Flash, and they saw their reservations increase by 50%. We actually named that trend. We thought OK, we can do more to support that, and we called it Rich Internet Applications. Then we focused on enabling more of those to be made with these technologies, so a new virtual machine in Flash player, the Flex framework, Flex Builder, all of that was driven by some of those early developers who were pushing the boundaries.

So there you have it. The Broadmoor hotel case study, which I recall seeing demonstrated at the 2002 Macromedia devcon, was apparently a significant influence on the evolution of the Flash player. The first press release about it was in November 2001. The case study is still online, and the application is still around today.

I don’t think we will get closer than this to a definition. Adobe will continue to use it to mean Flash applications; Microsoft will continue to try and de-brand it – the same way it tried to use “blogcast” in place of “podcast”, according to this article. I tend to agree that the concept is bigger than Adobe; but language is organic and cannot be so easily manipulated.