Why Google Gears? Thoughts from Google Developer Day

Google Gears is a browser plug-in to support running web applications offline. It has several components:

A local server – not a complete web server, but a cache for web pages. One of its benefits is to solve versioning issues. For example, what if you had an application that retrieved one page from the cache, complete with Javascript, and another from the Web, including some updated Javascript? The app would likely break. The Gears local server lets you define a set of pages as an application, so you can ensure that either all or none of the pages are delivered from the cache.
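
A rough sketch of how this looks with the beta Javascript API – the store name and manifest filename here are my own, so treat the details as indicative rather than definitive:

    // gears_init.js, included via a script tag, provides google.gears.factory
    var localServer = google.gears.factory.create('beta.localserver');
    var store = localServer.createManagedStore('my-offline-app');
    store.manifestUrl = 'site-manifest.json';  // JSON file listing every page in the app, plus a version string
    store.checkForUpdate();                    // capture or refresh the whole set as a unit

Because the pages are captured as a single versioned set, you cannot end up with a mix of old and new files.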

A local database. SQLite of course. I can think of many uses for this – whether or not your application needs to work offline. Searching and displaying data from a local database will be quicker than retrieving it remotely. In the current beta, there is no limit to the size of the database you can download or create on the user’s machine.
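
Basic usage looks something like this – again a sketch against the beta API, with the database and table names invented for the example:

    var db = google.gears.factory.create('beta.database');
    db.open('notes-db');
    db.execute('create table if not exists notes (id integer primary key, body text)');
    db.execute('insert into notes (body) values (?)', ['Hello from Gears']);
    var rs = db.execute('select id, body from notes');
    var out = '';
    while (rs.isValidRow()) {
      out += rs.field(0) + ': ' + rs.field(1) + '\n';
      rs.next();
    }
    rs.close();

Under the covers this is ordinary SQLite, so standard SQL applies.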

A WorkerPool for running Javascript in a background thread. Again, there are many possible applications, but a key reason for its inclusion is so you can do long-running synchronization tasks in the background.
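
In the beta you create a worker from a string of Javascript, and the worker talks to the page purely by message passing – there is no shared state. A sketch (the message contents are mine):

    var workerPool = google.gears.factory.create('beta.workerpool');
    workerPool.onmessage = function(messageText, senderId) {
      // runs on the main page when the worker reports back
    };
    var workerCode =
      'var wp = google.gears.workerPool;' +
      'wp.onmessage = function(messageText, senderId) {' +
      '  // do the long-running work here, off the UI thread' +
      '  wp.sendMessage("finished: " + messageText, senderId);' +
      '};';
    var workerId = workerPool.createWorker(workerCode);
    workerPool.sendMessage('start-sync', workerId);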

A Javascript library to enable access to all these goodies.

Synchronization

Synchronization is integral to the Gears concept. The idea is that your web application works the same online and offline; and then when you reconnect, any changes you made offline are transparently synched back to the server. Google’s demo app for Gears is Reader, a blog reader app, but you can see how this would work nicely with Documents and Spreadsheets, removing one of the disincentives for its use. I’m reminded of comments from James Governor and others about the Synchronized Web – cloud storage, but with full offline capability.
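
Note that Gears does not do the synchronization for you: the pattern is to queue changes in the local database while offline and replay them to the server when the connection returns, typically from a WorkerPool thread. A crude sketch of the idea – the table and function names are mine, and the actual server call is only indicated by a comment:

    var db = google.gears.factory.create('beta.database');
    db.open('sync-demo');
    db.execute('create table if not exists pending (id integer primary key, payload text)');

    // called whenever the user changes something while offline
    function saveChange(payload) {
      db.execute('insert into pending (payload) values (?)', [payload]);
    }

    // called when connectivity returns
    function pushPendingChanges() {
      var rows = [];
      var rs = db.execute('select id, payload from pending');
      while (rs.isValidRow()) {
        rows.push({ id: rs.field(0), payload: rs.field(1) });
        rs.next();
      }
      rs.close();
      for (var i = 0; i < rows.length; i++) {
        // POST rows[i].payload to the server with XMLHttpRequest; on success:
        db.execute('delete from pending where id = ?', [rows[i].id]);
      }
    }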

Gears vs Apollo

How does Gears impact Adobe, which is promoting offline web applications in the guise of Apollo, desktop applications running on Flash? You can argue this either way. On the one hand, you could say that Gears removes the need for Apollo: now any web application can work offline. On the other hand, you could say that Gears is not targeting the same space. Apollo is for desktop apps; Gears is for web applications that happen to work offline.

My take is that Google is making its pitch for ubiquitous web apps which break the offline barrier. The attraction of Gears is that it is seamless, at least for the user. Look at the Reader example: it’s the same app, but now it works offline. As I see it, Google is saying that you don’t need Apollo (or WPF, or Java) for compelling apps that work connected and disconnected. That said, it’s not against Flash; there are even handy Google Javascript APIs for simplifying SWF hosting.

Another twist is that Adobe says it is supporting the Gears API in Apollo. That presumably means Apollo now has a fast embedded SQL database engine, which must be a good thing.

 

SQLite will be everywhere

One of the core components in Google’s new Gears API is SQLite, an open source database engine. I’ve been an enthusiast for SQLite for a while now – I first blogged about it in 2003. I’ve also worked a little on SQLite wrappers for Java and Delphi.

It’s a superb embedded database engine and I’m pleased but not surprised to see it now picked up by Google. It’s part of PHP 5.x, and also used by Apple for Core Data and Spotlight search in OS X. Now it is part of Gears and I imagine it will be widely deployed. Google is also apparently contributing to the project – Full Text Search has been mentioned here at Developer Day – though I’ve not yet looked at this in detail. Congratulations to the primary author D. Richard Hipp, truly a star of the open source world, and thanks to him for making SQLite “completely free and unencumbered by copyright”.
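
If it is the fts2 virtual table module that Google has been working on, usage through the Gears database would presumably look something like this – an untested sketch, with the table and column names mine:

    var db = google.gears.factory.create('beta.database');
    db.open('fts-demo');
    db.execute('create virtual table docs using fts2(title, body)');
    db.execute('insert into docs (title, body) values (?, ?)',
               ['Gears notes', 'Offline web applications with a local SQLite database']);
    var rs = db.execute('select title from docs where body match ?', ['offline']);
    while (rs.isValidRow()) {
      // rs.field(0) is the title of each matching document
      rs.next();
    }
    rs.close();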

 

Google’s offline problem

Here at Developer Day I attended the workshop on new Maps API features. Unfortunately I was one of the last into the session and could not connect to the internet. I suspect a problem with IP address allocation, but I don’t know for sure. I spent some time trying to get it working, then gave up and returned to the blogger lounge, where the wi-fi worked perfectly.

A let-down; yet it nicely illustrates why we need Gears.

That said, even Gears isn’t going to enable offline geocode lookup.

Next up is the session on Gears.

 


My question to Google

I grabbed the first question after the opening keynote today. It was prompted by my visit to the Google Gears site – I’d intended to install the beta, and was confronted with the terms of service dialog.

I asked:

Why does Google display an 8-page agreement in a box 7 lines high?

More significantly, why does it include this clause which strikes me as unreasonable:

12. Software updates
12.1 The Software which you use may automatically download and install updates from time to time from Google. These updates are designed to improve, enhance and further develop the Services and may take the form of bug fixes, enhanced functions, new software modules and completely new versions. You agree to receive such updates (and permit Google to deliver these to you) as part of your use of the Services.

Of course I’ll have to install Gears; I can’t do my job otherwise. But I’m inclined to do so in a virtual machine, because I prefer to keep control of what gets installed.

There’s plenty more in the agreement that you might object to – have a read and see.

It all sits uncomfortably with the stuff we’ve heard about how much Google loves open source, Creative Commons licenses and so on.

My question wasn’t answered, but Chris DiBona invited me to email him with the question, which I’ve done, referencing this post.


Google Developer Day begins

I’m early to the London event; but registration is open and I get a flimsy red bag with oddments including a tin of “Goo” which turns out to be thinking putty. The event is at The Brewery in the heart of the City. We are ushered into the Blogger Lounge – stylish, with bright-coloured cushions, soft pastel lighting, fresh-squeezed orange juice and no chairs. A quick glance around the room tells me that Macs outnumber Windows by about 4 to 1.

The event will kick off with a keynote from Chris DiBona (“Developer Message”) and Ed Parsons (“Geo Message”). Then I’ve got API workshops – lots of AJAX and Maps – before closing with another keynote live from Mountain View.

I’m already familiar in a broad sense with Google’s developer offerings, but what is the strategy? Getting closer to that is one reason to be here. The other is to assess how useful all this stuff is in the real world – to developers, that is, rather than to Google.

[Photo: delegate using a laptop station at Google Developer Day in London]

More as it happens.


Not convinced by LINA

Set for public release next month, LINA is a new approach to cross-platform development. Write your app once, for Linux, then deploy using a lightweight virtual machine, implemented for Windows, Mac and Linux. Why a VM even for Linux on Linux? Well, on Linux, compatibility is a problem, with a multitude of different distributions out there. A VM provides a secure, reliable and predictable environment for your app. LINA’s creators claim to have solved the obvious problems: access to resources in the host operating system, and matching the look and feel which the user is expecting.

I’ll look at it with interest when it appears next month, but I’m sceptical. It strikes me as a heavyweight approach, and I’d like to see the extent to which LINA blends with the host O/S before believing all the claims. Some of the publicity annoys me too. Here’s a quote from the white paper:

All computer users – individuals and organizations alike – make the most fundamental software decision when they choose an operating system. Historically, this choice locks the user into a single, clearly-demarcated realm of available software. As a result, Windows and Mac users have virtually no access to the vast world of Open Source software.

I do most of my work on Windows Vista, so apparently I have “virtually no access” to open source software. Yet happily installed on the Vista box in front of me is:

  • Open Office
  • Firefox
  • Filezilla
  • Apache web server (installed with Delphi for PHP)
  • Tortoise SVN (Subversion client)
  • 7-Zip file archiver
  • Audacity sound editor
  • Ethereal Network Protocol Analyzer
  • NetBeans Java IDE
  • Eclipse Java IDE

and I’m sure I would find more if I spent time looking. All open source, mostly cross-platform. Some of this is on the techie side; but the first two above are true mainstream apps.

Writing cross-platform apps is still a challenge, but easier than was the case a few years back, with numerous viable approaches available. So do we really need LINA?

 

Offline blog authoring with Word 2007

After writing a blog with Adobe’s Contribute, part of the new Creative Suite, I thought I should try the same task in Microsoft Word 2007. It’s quite a contrast. Word does not attempt to display the surrounding furniture of the blog, so it feels less cluttered than Contribute, and you get the benefit of Word’s proofing tools. The famous Office ribbon is reduced to three tabs: Blog Post, Insert and Add-Ins; ironically, the only add-in available is Adobe’s Contribute toolbar. It’s a comfortable editing environment, but it does not feel safe. For example, I can insert a WordArt text object, or shapes of various kinds, but it’s not clear what sort of code it will generate, and as with Contribute there is no easy way to view the HTML.

Another problem with Word is the lack of any Insert Tag option. A Technorati tag is just a hyperlink, so I could do this manually, but that is extra work in comparison to Contribute or Live Writer, which have Insert Tag built in. Word does offer an Insert Category button, but you can only select one category each time you drop down the list, whereas in Live Writer you can add multiple categories in one operation, by checking boxes.

I can see the appeal of blog authoring in Word for someone who is familiar with Office and does not want to learn a new tool, but this is my least favourite of the three tools I’ve been trying. So far I prefer Contribute for its features, and Live Writer for its focused design. I suspect Writer will remain the tool I actually use.

 

Offline blog authoring with Adobe Contribute

I generally use Microsoft’s Windows Live Writer to write my blog entries. It has a few annoyances, but I like it better than trying to type directly into WordPress. After installing Adobe’s Creative Suite 3 I noticed a new Contribute toolbar appearing in my web browser, including a Post to Blog button, reminding me that blog authoring is a feature of the new suite and that I ought to try it out. I opened Contribute and set up a connection to this blog; in fact, I’m writing this post in Contribute now.

As you would expect, Adobe has provided a sophisticated tool. Contribute sets up a template that lets you edit a blog entry within an editable area on a page that replicates the blog itself. It is more WYSIWYG than Live Writer. The editing tools are impressive too: along with basic HTML formatting, there is an Insert menu offering Flash, Video and PDF, a spell checker, a table editor, and an image editor with options to rotate, crop, sharpen, set brightness and adjust contrast. Inserting Technorati tags is easy, as is selecting categories from those I’ve defined.

Any complaints? Well, I miss the clean, uncluttered appearance of Live Writer. It feels a touch over-engineered. And if you want to inspect or edit the HTML code, you have to open the blog entry in Dreamweaver, which isn’t a great experience because you get the template as well as the blog entry.

It may sound strange, but Contribute does more than I need. I might use it for authoring WordPress pages, as opposed to blog entries, but otherwise I’m likely to stick with Live Writer. Unless Word 2007 can tempt me away; mini-review coming shortly.


How many XBox 360s have failed?

Simple question. In the early days Microsoft stuck to its story about 3-5%, muttering about “industry average”. More recently Peter Moore, in an interview with Mercury News, ducked the question, saying:

I can’t comment on failure rates, because it’s just not something – it’s a moving target. What this consumer should worry about is the way that we’ve treated him. Y’know, things break, and if we’ve treated him well and fixed his problem, that’s something that we’re focused on right now. I’m not going to comment on individual failure rates because I’m shipping in 36 countries and it’s a complex business.

In the absence of official figures, there is anecdotal evidence. It’s the folk with broken consoles who make a noise, so anecdotal evidence can’t be wholly trusted. Yet a notable feature of surveys like this one in 360 Gamer is the number of users with multiple failures – 3, 4, 5, even more.

Another intriguing aspect is that users with broken 360s report a significant success rate with a crude repair technique – deliberate overheating. There are several variations. In one you remove the motherboard and apply a heat gun or even a hairdryer. In another you wrap the XBox in towels and turn it on. It suggests that the most common problem with the 360 is that soldered joints fail. Overheating causes components to expand and, if you are lucky, remakes the connections. It’s not a good repair and the XBox will likely fail again soon. In particular, the towel trick is silly – apart from the obvious fire risk, overheating in general is bad for electronic components and likely to shorten their life.

The evidence suggests an inherent manufacturing or design problem with the XBox 360. I think 3-5% is wildly optimistic; it would not surprise me if the true figure is 30% or higher. Multiple failures suggest that, at a minimum, entire batches of faulty machines were produced. And because Microsoft is tight-lipped we still do not know when or whether the problem has been fixed. Is it still present in new 360s today? What about the forthcoming Elite?

There is another long-standing irritation connected with the 360’s DRM. A 360 supports multiple profiles, so that family members can maintain their own game progress, high scores, XBox Live accounts and so on. If you purchase and download a Live Arcade game, it is available to all the profiles on that machine. However, if you replace the machine the rules change. The games can be re-downloaded by the original purchaser for free, but on the new machine they are only unlocked for that player’s profile, not for the others which share the machine. In other words, if your 360 breaks and is replaced, you have something not quite as good as what you had before.

Microsoft’s standard policy on receiving a broken 360 is to send out a refurbished model immediately. This means you never get your original machine back, so you always suffer this problem. Third-party repairers are likely to be better in this respect, though you will have to pay, of course, and hope that they use a more effective technique than towels or hairdryers.

Nothing can be done about the number of faulty 360s now out there, but Microsoft could do a couple of things to improve the situation. First, come clean about the problem and tell us how many are affected and what has been done to fix it. Second, figure out how to restore unlocked Arcade games properly on replacement machines.

Perhaps you guessed: my own (December 2005) 360 failed this weekend, three red lights, code 0020. Another particle of anecdotal evidence.

 

Sutter on Concurrency

Herb Sutter, software architect at Microsoft and C++ guru, has posted his slides (PDF) from OGDC, a game development conference. His talk was on the challenge of programming for concurrency. If you’re not familiar with the subject, his earlier article The Free Lunch is Over is a great starting point.

The free lunch is the assumption that ever-faster processors will fix our slow applications. It’s now well known that chipmakers are running into a wall on clock speed, but getting very good at providing multiple processors. The secret of faster or smarter software is to take advantage of those multiple processors with concurrent programming.

A few highlights from the slides:

  • Sutter says that manycore processors are improving rapidly: “Intel could build 100-Pentium chips today if they wanted to.”
  • He observes that the issue is largely solved on the server, but not on the client
  • Locking is inadequate as a way of managing shared state. In particular, it breaks composability
  • He favours transactional memory to reduce but not eliminate dependency on locks: “Version memory ‘like a database.’ Automatic concurrency control, rollback and retry for competing transactions”

Finally, Sutter says:

The concurrency sea change impacts the entire software stack: Tools, languages, libraries, runtimes, operating systems. No programming language can ignore it and remain relevant.

My comment: we’ve seen threading get a little easier in programming languages like C# and Java, thanks to wrapper classes, and in C++ OpenMP can work magic, but what is the radical language innovation that will make concurrency achievable for mortals?