Google Wave: a disruptive approach to collaboration

I watched the Google Wave Developer Preview session at Google IO.

It is tedious to sit through 1 hour 20 mins of conference presentation; but it is worth watching at least a little of it to see some Wave demos. In essence, Google is presenting email++ and hopes it will catch on. A “wave” is loosely analogous to an email conversation or a thread on a discussion board. You participate using a browser-based client. Unlike email, in which messages are copied hither and thither, a wave exists on a server, so you always see the latest version. Again unlike email, a wave offers real-time synchronization, so that I see your replies or comments as you type. This means you can use a wave like an instant messaging system, or like a wiki (collaborative document editing). You can also embed rich media, and extend the system with robots (non-human participants which add functionality) or with gadgets such as polls, games and applications. Waves can be embedded into web pages, just as the video above is embedded into this blog post.

The Wave client is built with Google Web Toolkit, and according to the keynote:

we couldn’t have built something like this without GWT

On the server side, robots are programmed with an API in Java or Python. You can check out the APIs here, and sign up for a developer account. The Wave protocol is also published, and is documented here. Google is talking about launch later this year. Mashable has a good overview.
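To make that concrete, here is roughly what a minimal robot looks like using the Python API as presented at launch – a sketch based on the early documentation, so treat the exact names as provisional:

  from waveapi import events
  from waveapi import robot

  def OnParticipantsChanged(properties, context):
      # Post a greeting blip whenever the participant list changes
      wavelet = context.GetRootWavelet()
      wavelet.CreateBlip().GetDocument().SetText('Hello, new wave participant!')

  if __name__ == '__main__':
      # Robot metadata; the URLs are placeholders for your own App Engine app
      greeter = robot.Robot('greeter',
                            image_url='http://greeter.example.appspot.com/icon.png',
                            version='1',
                            profile_url='http://greeter.example.appspot.com/')
      greeter.RegisterHandler(events.WAVELET_PARTICIPANTS_CHANGED,
                              OnParticipantsChanged)
      greeter.Run()

Robots are hosted as App Engine applications, and join a wave like any other participant.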

Significance of Wave

This is the bit that interests me most. Why bother with Wave? Well, Google is hoping we will find this to be a compelling alternative and partner to email, Facebook, instant messaging, Twitter, discussion boards, and more. For example, you could develop a new kind of discussion board in which the topics are waves rather than threaded discussions. The impact might include:

  • Driving users to Google Chrome and other browsers optimized for fast JavaScript, and away from Microsoft IE.
  • Promoting use of Google-sponsored APIs like OpenSocial, upon which Wave builds.
  • Shifting attention away from classic email servers such as Microsoft Exchange and Lotus Notes.
  • Offering an alternative to Microsoft Office and Adobe Acrobat for document collaboration.
  • Getting more of us to run our content on Google’s servers and use Google’s identity system. This is not required as I understand it – the keynote mentions the ability to run your own Wave servers – but in practice it is inevitable.

The demos are impressive, though it looks as if a large wave with many contributors could become hard to navigate. It is definitely something I look forward to trying.

Finally, it is notable that Google’s Flash-aversion is continuing here: all the client stuff is done with JavaScript and HTML.

I am not sure how this might work offline; but I imagine Google could do something with Chrome and Gears, while no doubt there is also potential for a neat Adobe AIR client, using the embedded WebKit HTML renderer.

A few good things about Bing – but where is the webmaster’s guide?

So Bing (Bing Is Not Google?) is Microsoft’s new search brand. A few good things about it:

1. Short memorable name, short memorable url

2. Judging by the official video at http://www.decisionengine.com/ Microsoft realises that it has to do something different from Google; doing the same thing almost as well, or even just a little better, is not enough.

3. Some of the ideas are interesting – morphing the results and the way they are displayed according to the type of search, for example. In the video we see a search for a digital camera that aggregates user reviews from all over the Internet (supposedly); whereas searching for a flight gets you a list of flight offers with fares highlighted.

This kind of thing should work well with microformats, about which Google and Yahoo have also been talking – see my recent post here. But does Bing use them? That’s unknown at the moment, because the Bing Reviewer’s Guide says little about how Bing derives its results. I don’t expect Microsoft to give away its commercial secrets, but it does have a responsibility to explain how web authors can optimise their sites for Bing – presuming that it has sufficient success to be interesting. Where is the webmaster’s guide?
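To illustrate the microformats point, this is the sort of structured markup a search engine can extract reviews from – a hand-rolled hReview example with made-up names and values; whether Bing consumes anything like it is exactly the open question:

  <!-- A review marked up with the hReview microformat; the class names
       come from the microformat spec, everything else is invented -->
  <div class="hreview">
    <span class="item"><span class="fn">ExampleCam X100 digital camera</span></span>
    rated <span class="rating">4</span> out of 5 by
    <span class="reviewer vcard"><span class="fn">A. Reviewer</span></span>.
    <span class="description">Fine lens, mediocre battery life.</span>
  </div>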

Some things are troubling. The Bing press material I’ve seen so far is relentlessly commercial, tending to treat users as fodder for ecommerce. While I am sure this is how many businesses perceive them – why else do you want users to come to your site? – it is not a user-centric view. Most searches do not result in a purchase.

There’s a snippet in the reviewer’s guide about why Bing will deliver trustworthy results for medical searches:

Bing Health provides you with access to medical information from nine trusted medical resources, including the Mayo Clinic, the American Cancer Society and MedlinePlus.

No doubt these are trusted names in the USA. Still, reliance on a few trusted brands, while it is good for safety in a sensitive area such as health, is also a route to a dull and sanitized version of the Internet. I am sure there are far more than nine reliable sources of medical information on the Web; and if Bing takes off, those others will want to know why they have been excluded.

Back to the introduction in the Reviewer’s Guide:

In a world of excessive choice and too much information, it’s often difficult to make the right decision. What you need is more than just a search engine; you need a decision engine that provides useful tools to help you get what you want fast, rather than simply presenting a list of Web links. Bing is such a decision engine. It provides an easy way to make more informed choices. It organizes popular results by category to help you get the answers you’re looking for without having to guess at the right way to formulate your query. And built right into Bing is a set of intelligent tools to help you accomplish important tasks such as buying a product, planning a trip or finding a local business.

Like many of us, I’ve been searching the web since its earliest days. I found portals and indexes like early Yahoo and dmoz unhelpful: always out of date, never sufficiently thorough. I used DEC’s AltaVista instead, because it seemed to search everywhere. Google came along and did the same thing, but better. Too much categorization and supposed intelligence can work against you, if it hides the result that you really want to see.

Live Search, I’ve come to realise (or theorise), frequently delivers terrible results for me because of faulty localisation. It detects that I am in the UK and prioritises what it thinks are UK results, even though for most of my searches I only care about the quality of the information, not where the web sites are located. It’s another example of the search engine trying to be smart, and ending up with worse results than if it had not bothered.

Still, I’ll undoubtedly try Bing extensively as soon as I can; I do like some of its ideas and will judge it with an open mind.


Spotify demos mobile music streaming with offline option – for Android

If you have any interest in the future of the music industry, I recommend taking a look at the following video:

There are a couple of reasons why this demo of streaming music to a Google Android mobile is interesting. First, if Spotify delivers the kind of performance and quality it has on the desktop, this will be a great facility for music fans. Second, it is interesting to see how it handles the offline problem – when, say, you descend into the London Underground. Simple: just mark a track for offline use, and it downloads to local storage. I’m presuming this is encrypted in some way to prevent you from converting it to a standard MP3; but if it is always available anyway, who cares?

Will this be free, or a premium service? I’m guessing the latter but don’t yet have any more details.

Of course everyone is asking for an iPhone version. See for example this post:

It’s interesting that Spotify has chosen Android as the mobile debut, rather than iPhone – although it’s safe to assume the company is working on Apple’s handset too, among others.

Hmmm, I wonder what chance this would have of getting past Apple’s iPhone app censorship? It seems to me that what we are seeing is the beginning of the end for the iTunes download model.

The battle to be part of the emerging cloud stack: Force.com for Google App Engine

I was interested in today’s announcement of Force.com for Google App Engine. App Engine lets you build Python or, since April 7th this year, Java applications and run them on Google’s servers. Salesforce.com already offered Python libraries for its Force.com platform, but these have now been joined by Java libraries which are more complete:

The Java toolkit supports the complete Partner WSDL of the Force.com Web Services API. All operations and objects are included in the library and documentation. If you are a Java developer, you can also leverage the Java resources found here.

whereas the Python toolkit only supports “many of the key Force.com Web Services API calls”. I suspect the Java toolkit will have more impact, because Java is the language and platform many enterprises already use for application development.
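As an illustration of the kind of code this enables – a sketch only, since I have not yet tried the toolkit; the Python side is, as I understand it, based on the open source beatbox SOAP client, and the handler and credentials below are placeholders:

  # Sketch: querying Force.com from a Google App Engine request handler,
  # assuming the toolkit follows the conventions of the beatbox client
  import beatbox
  from google.appengine.ext import webapp

  class AccountList(webapp.RequestHandler):
      def get(self):
          svc = beatbox.PythonClient()
          svc.login('user@example.com', 'password-plus-security-token')
          for record in svc.query('SELECT Id, Name FROM Account LIMIT 10'):
              self.response.out.write('%s\n' % record['Name'])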

On the other side, there is also a Google Data API Toolkit for Force.com.

Why is Salesforce.com cosying up to Google? The way I see it, there is an emerging cloud stack and vendors need to be part of that stack or be marginalized.

What’s a cloud stack? You can interpret the expression in various ways. Sam Johnston has a go at it here, identifying six layers:

  • Infrastructure
  • Storage
  • Platform
  • Application
  • Services
  • Clients

There isn’t a single cloud stack, and all parts of it are available from multiple vendors as well as from open source. It is a major shift in the industry though, and there is no reason to think that the same vendors who currently succeed in the on-premise stack will also succeed in the cloud stack, rather the contrary. You could describe the RIA wars (Adobe Flash vs browser vs Silverlight) as a battle for share of the client stack, for example, and one in which Microsoft is unlikely to win as much share as it has enjoyed with Windows.

By positioning itself as a platform that integrates well with Google App Engine, Salesforce.com is betting that Google will continue to be an important player, and that it will pay to be perceived as complementary to its platform.

A factor which holds back Force.com adoption is that it is expensive. Developers can experiment for free, but rolling out a live application means subscriptions for all your users. Getting started with App Engine, on the other hand, is free, with fees kicking in only after you have succeeded sufficiently that you are generating substantial traffic (and hopefully making or saving some money).

Adobe Presentations goes public

Adobe has gone public with Presentations, a cloud-based presentation graphics application built with Flash and delivered through the Acrobat.com portal. It runs in Firefox 3.x, IE 6 or higher, or Safari 3.x or higher, on Windows or Mac, provided that the latest Flash Player 10 is installed. No mention of Linux, though it might work.

Presentations is up against two obvious rivals: Microsoft PowerPoint and Google Docs. Why use Adobe Presentations? Adobe is highlighting the advantages for collaboration: no need to email slides hither and thither. You can use PowerPoint with collaborative servers like SharePoint, but it’s still a point well made: a free account on Acrobat.com is far easier to set up and manage. The Flash UI is elegant and easy to use, and while it lacks all the features of PowerPoint, it seems to cover the essentials pretty well. You can insert .FLV (Flash Video) files, which opens up all sorts of interesting possibilities. At first glance, Adobe Presentations seems to be way ahead of Google Docs, with transitions, themes, colour schemes, opacity control, and general Flash goodness.

It’s good, it’s free: does Adobe have a winner? I can see a couple of problems. One issue is that people are nervous about relying on a live connection to the Internet during a presentation. Given that conferences and hotels often have wifi connectivity issues, that’s not an irrational concern. Presentations does have a solution, which is export to PDF, but nevertheless Adobe has to overcome that instinctive reaction: cloud-based presentations? No thanks. Having PDF as the sole export option is restrictive too; it would be great to see PowerPoint import and export, but I suspect it is too tightly wedded to Flash for this to work.

As Mike Downey observed on Twitter, it is also a shame that you cannot embed a presentation into a web site, though of course you could include a link.

Presentations has a lot in common with Buzzword, the Acrobat.com word processor, which does not seem to have taken off despite its strong features. Will this be different? Potentially, but Adobe needs to work on public perception, which is Microsoft for offline, Google for online.

I reckon Adobe would gain substantially by adding AIR support to Acrobat.com. This makes obvious sense for both Buzzword and now Presentations. Users would have the comfort and performance of local files, plus the collaborative benefits of online. Why not?

Book Review: IronPython in Action

I guess that to many people IronPython, an implementation of the Python programming language that runs on Microsoft’s .NET platform, is little more than a curiosity. For them, a glance at the foreword and preface to this new Manning title may be enough to show that it is at least interesting. In the foreword, IronPython creator Jim Hugunin describes how he first worked on Python for .NET as an exercise to prove “why the CLR is a terrible platform for dynamic languages”:

The plan was turned upside down when the prototypes turned out to run very well – generally quite a bit faster than the standard C-based Python implementations.

Next is a preface from the authors, Michael Foord and Christian Muirhead, about how IronPython came to be used in an innovative programmable, testable spreadsheet called Resolver One:

Resolver One is in use in financial institutions in London, New York and Paris, and consists of 40,000 lines of IronPython code with a further 150,000 in the test framework. Resolver One has been tuned for performance several times, and this has always meant fine tuning our algorithms in Python. It hasn’t (yet) required even parts of Resolver One to be rewritten in C#.

Why is Python used in Resolver One?

As a dynamic language, Python was orders of magnitude easier to test than the languages they had used previously.

If that piques your interest, you may find the book itself worth reading. It is aimed at Python programmers who do not know .NET, and .NET programmers who do not know Python, rather than at existing IronPython developers. The authors run through the basics of Python and its .NET integration, so that by the end of Part 1 you could write a simple Windows Forms application. Part 2 is on core development techniques and covers duck typing, model-view-controller basics, handling XML, unit and functional testing, and metaprogramming – this is where you generate and execute code at runtime.
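To give a flavour of where Part 1 ends up, here is a minimal Windows Forms application in IronPython – my own illustrative snippet rather than one from the book; run it with ipy.exe:

  # Minimal IronPython Windows Forms app; note how IronPython lets you
  # set .NET properties via keyword arguments in a constructor call
  import clr
  clr.AddReference('System.Windows.Forms')
  from System.Windows.Forms import Application, Form, Label

  form = Form(Text='Hello from IronPython')
  form.Controls.Add(Label(Text='IronPython meets Windows Forms', AutoSize=True))
  Application.Run(form)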

The third part of the book covers .NET integration, and how to use IronPython with WPF (Windows Presentation Foundation), PowerShell, ASP.NET, ADO.NET (database access), web services, and in Silverlight. Finally, Part 4 shows how to extend IronPython with libraries written in C# or VB.NET, how to use Python objects in C# or VB, and how to embed the IronPython engine into your own application.

It’s a well-written book and fulfils its purpose nicely. I like the way it is honest about areas where IronPython is more verbose or awkward than C# or VB.NET. As someone who knows .NET well, but Python little, I could have done without all the introductory .NET explanations, but I understand why they are there. I particularly liked part two, on core programming techniques, which is a thought-provoking read for any developer with an interest in dynamic languages.

According to the IronPython site, you can get 35% off by ordering from the Manning site with the code codeplex35.

It is also available on Amazon.com and Amazon.co.uk.

A few jottings on hi-fi and misleading science

My interest in hi-fi began a few decades ago, when I was out looking for a new cassette deck. At that time I had the view that all amplifiers sounded pretty much the same, because I was aware that the frequency response of an amp was flat and its distortion low across the audible range.

I was in a store comparing a bank of cassette decks with a tape of my own that I’d brought along and a pair of headphones. There were a couple of amplifiers and two switchbox comparators, so I could listen to several decks through one amplifier, then several other decks through the second amp.

I began to suspect that the comparison was unfair, because all the decks going through the first amp sounded better – more musical and enjoyable – than those going through the second amp. I realised that, contrary to my expectation, the amplifiers were contributing to the sound, and that the first one sounded better. It was an A&R Cambridge A60. So I bought that instead, and loved it.

The lesson was that the frequency response and distortion specs were not telling the whole story. It was better to buy what sounded best.

Unfortunately subjectivism has problems too. In particular, once people have been trained to distrust specs they become vulnerable to exploitation. Listening alone is not enough, for all sorts of reasons: what we hear is influenced by expectations, by small variations in volume that we misinterpret as quality differences, by changes in multiple variables that make it impossible to know what we are really comparing, and so on. We need science to keep the industry honest.

Another factor is that advances in technology have made life harder for the hi-fi industry. Digital music eliminates things like wow and flutter, rumble and surface noise, and audio quality that is good enough for most people is now available for pennies. In search of margin, hi-fi retailers settled on selling expensive cables and equipment supports with ever-diminishing scientific rationale. Beautiful, chunky, elegant, gold-plated interconnects look like they should sound better, so, like jackdaws attracted by shiny buttons, we may think they do – even though common sense tells us that the audio signal has already passed through many very ordinary cables before it reaches us, so why should the particular stretch covered by this interconnect make any difference?

My respect for the power of the mind in this regard was increased by an incident during the Peter Belt years. Peter Belt was an audio eccentric who marketed a number of bizarre theories and products in the UK, mainly in the eighties, and attracted some support from the hi-fi press. See here for an enthusiast view of his work; and here for a woman convinced that she can improve the sound of her CDs by putting them in the deep freezer for 24 hours:

Freezing The Downward Spiral made it far more engaging than it has ever been. For instance, the layers at the end of the song "Closer" are more in evidence. Little bits of sound present themselves that I have never heard before. NIN’s sound is close to industrial, with what at times sounds like machinery droning in the background. After freezing this disc, these sounds became more easily discernible. The overall NIN experience increased tenfold for me after freezing the disc.

Another of Belt’s theories is or was that you could improve the sound of any hi-fi equipment with four supports (such as small rubber feet) by placing a triangular sheet of paper under one of them, to make it in some mystical sense three-legged.

I tried this with a friend. He had a high-end turntable and knew nothing of Peter Belt or his theories. I told him I knew of a madcap theory that I wanted to disprove. We played a record, and then I said I would make a change that would make no difference to the sound. I took a small thin triangular piece of paper and placed it under one of the four feet of the turntable. It did not affect its stability. We played the record again. He said it definitely sounded better. What is more, I thought it sounded better too – or at least, that was my subjective impression. My rational mind told me it sounded just the same. Still, he left the bit of paper there.

I don’t doubt that we have more to learn about sound reproduction; we measure what we can, but we may measure the wrong things or in the wrong way. That does not mean that every wild theory about how to improve hi-fi has equal validity. There is one simple technique that helps to assess whether some particular thing is worth spending money on, and that is blind testing. Listen to A, then to B, and see if you can tell which is which. If the differences you hear when you know which is which disappear when you listen blind, then you know that the feature you are testing does not audibly affect sound quality. It might still be worth having; there is no law against liking beautiful cables or shiny amplifiers that cost more than a house.
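Simple arithmetic shows why a run of correct identifications in a blind test is persuasive. This little calculation – illustrative only – gives the probability of scoring at least k out of n trials by pure guessing:

  # Chance of at least k correct answers out of n blind trials when
  # guessing at random (each trial a 50/50 call, as in an ABX test)
  from math import comb

  def p_by_chance(k, n):
      return sum(comb(n, i) for i in range(k, n + 1)) / 2.0 ** n

  print(round(p_by_chance(12, 16), 3))  # 0.038

Twelve correct out of sixteen would happen by luck less than four per cent of the time; do much better than that consistently, and the difference is real.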

I suspect that few or none of Peter Belt’s improvements would survive such trials.

Blind testing is not perfect. Even if you can hear a difference, that does not tell you which sounds more accurate to the source, or which is more enjoyable. Ironically, very poor equipment has nothing to fear from blind testing, in that it will most likely sound different; though there is merit in scoring your preferences blind as well.

Sometimes blind testing yields surprising results, like the trials which show that high-resolution audio like SACD and DVDA can pass through a conversion to CD quality and back and still sound the same. I’ve written more on this subject here.

I think we should learn from such tests and not fear them. They help us to focus on the things that yield less contentious improvements, like using the best available sources, and maintaining excellence all the way through from initial recording to final mastering. In the strange world of hi-fidelity neither specs, nor price, nor casual listening, nor even science will tell us everything about how to get the best sound; but some combination of all of these will enable us to spend our money and time wisely, and do more of what really counts: enjoying the music.


Apple censors iPhone application, threatens developer livelihood

There is an alarming report here from James Montgomerie, who has developed an application for the iPhone called Eucalyptus. It displays public domain texts hosted at Project Gutenberg.

Apple has refused to allow his application to be placed into the App Store, which in effect means nobody can buy it or use it. The reason: one of the texts on Project Gutenberg is the Kama Sutra, which Apple’s assessors consider pornographic.

It’s doubtful reasoning. Eucalyptus is a client and does not contain the contentious text itself; it is no different in this respect from Safari, the iPhone’s web browser, which can also display the Kama Sutra, or indeed many far more objectionable web pages.

Still, it’s the wider issues that are more interesting here. Montgomerie writes:

I suspect that no-one at Apple knows how genuinely torturous the app store approval process is for developers personally after a rejection. When they hold the key to the only distribution pipe for something you’ve spent a lot of your time on – in my case a year – something you’re hoping could provide you with a livelihood – and polite email enquiries are not replied to – not even with an autoresponder, it is extremely frustrating. I don’t think I’ve ever felt as powerless in my life (and I’ve had to deal with US immigration authorities…). I think anyone that knows me would confirm that I’m a very level-headed person, but this is the only thing in my adult life I can recall losing sleep over (although perhaps that’s also a consequence of being otherwise lucky in life so far).

Let’s do a bit of what-if. What if Microsoft exerted equal control over what applications were allowed on Windows? What if Apple extended its iPhone control to Mac computers? Unacceptable, could never happen, you might think. Sure; but if it is unacceptable in that wider context, is it not unacceptable on the iPhone as well?

Apple is not the first company to lock down a platform. Locked mobile phones have done this to some extent for years. Games consoles like Xbox and Playstation do it. Apple is taking heat because of its success in creating a device that users want to use as a platform for all kinds of applications; potentially, in some future version, it may be able to do most of the things for which we currently use laptops. Therefore we should be concerned both about the way Apple is using its control over iPhone distribution, and more fundamentally that it has that level of control at all.

Fortunately Apple does not control the Internet; and Montgomerie has done the right thing by appealing to public opinion. Apple’s PR machine will take note and no doubt resolve the immediate case.*

Nevertheless, this story and others like it are a real concern. Perhaps your next phone should run Android?

*UPDATE: this is exactly what happened:

Earlier today I received a phone call from an Apple representative. He was very complimentary about Eucalyptus. We talked about the confusion surrounding its App Store rejections, which I am happy to say is now fully resolved. He invited me to re-build and submit a version of Eucalyptus with no filters for immediate approval, and that full version is now available on the iPhone App Store.

See also: Friendly to users, hostile to competition: get ready for more app stores


Is high-resolution audio (like SACD) audibly better than CD?

Music is something I care about; so when the industry came up with something better than CD for playing it back, some ten years ago, I took a keen interest. There was an ugly format war – SACD vs DVDA – but when universal players appeared that could play either format, I purchased one, along with a few examples of each type of disc.

On the whole I think they sound good, and of course they also have multi-channel capability which is nice if you are properly set up for that. I haven’t purchased either type of disc in large numbers though, mainly because of the price premium, and also because they are awkward to rip to a music server, which is how I do most of my listening. Most SACDs are hybrid, which means they have a standard CD layer as well as a high resolution layer, and you can rip the CD layer easily enough; but then you do not get the benefit of high resolution sound.

But is there a benefit, other than more care in mastering that could equally have been applied to a CD? High resolution is certainly useful for audio professionals who are processing the sound, but some argue that even CD audio is sufficiently accurate for human hearing. Two people, Brad Meyer and David Moran, conducted a series of blind tests in 2007 to test the point: Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback. In other words, they used a box that converted the output from a high-resolution player to CD-quality digital and back, and found that nobody could reliably tell the difference at normal listening levels.

Meyer and Moran’s research throws into question much of the rationale behind SACD and DVDA, which are marketed on the basis of superior sound quality. It has caused a debate in the audio industry, though perhaps not as much as you might expect. Some argue that the test was flawed, others that these “absurd” results prove that blind testing simply does not work. Others agree with the results and find them unsurprising. It is notable, though, that even critics and vendors with a stake in the audible superiority of high-resolution sound have not yet (as far as I know) come up with a repeat test that corrects the flaws they see in the original and achieves different results – though it is still possible that someone may do so.

I wrote this up with more detail here.


Microsoft having another go at Windows help: Help 3

Online help is a part of Windows full of dead-ends and back-alleys. I’m not going to attempt the story in detail here; but it goes back many years. By online help I mean local help of course; in ancient times the word “online” meant something on your computer as opposed to being in a printed book.

The first help engine I remember, WinHelp in Windows 3.x, used .hlp files. It was well liked, but authoring the files was an arcane process involving Word, RTF, a help compiler, and a certain amount of black magic.

In 1997 Microsoft replaced .hlp with .chm (compiled HTML); its initial efforts were less good than the old .hlp, but this evolved into a decent help engine despite one or two quirks.
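For those who have never authored one, a compiled HTML help file is built from ordinary HTML topic pages plus a small project file which the help compiler turns into the .chm. This example is from memory, so check the HTML Help Workshop documentation for the exact option names:

  ; project.hhp – input to the HTML Help compiler (layout from memory;
  ; option names may differ slightly from the real thing)
  [OPTIONS]
  Compiled file=MyApp.chm
  Contents file=toc.hhc
  Index file=index.hhk
  Default topic=welcome.htm
  Title=MyApp Help

  [FILES]
  welcome.htm
  using.htm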

After that it gets messy. In 2001 Microsoft announced Help 2, still HTML-based but with all sorts of improvements. It was used by MSDN and in Visual Studio; its viewer is the Microsoft Document Explorer. However, despite help authoring tool vendors gearing up for Help 2, Microsoft announced in 2003 that it would not be made into a general Windows help engine, but would be used only for Visual Studio. Since then Help 2 has had a curious status: it is possible to author for it, and those building Visual Studio extensions have needed to do so, but it has never replaced compiled HTML.

There was a similar story with Vista Help. Microsoft built a new help engine for Vista but drew back from making this available to 3rd party applications. In fact, there is a rather wonderful tool called Guided Help which lets you include application automation within Help, complete with “show me” and “do it” functionality. You can get the Guided Help SDK if you know where to look, and it works, but the project was mostly abandoned. You are still meant to use HTML Help 1.4 for your own applications.

Now Microsoft is talking about Help 3. Microsoft’s Terry Clancy mentions it in an informative post about Visual Studio 2010:

Visual Studio 10 will come with a completely re-engineered Help system that introduces a new flexible, standards based Help framework which will ultimately be used in other products beyond Visual Studio. Help3 is a help system replacement for Microsoft Help 2.x . This new help system will be easier to produce content for, and will interfere less with Visual Studio itself. The standards based approach delivers not only a much better local experience but also a seamless transition to an online web browser and with infrastructure and tooling much more consistent other Visual Studio and internet technologies.

Will Help 3 ever replace the seemingly immortal HTML Help 1.x? Place your bets.

In practice, desktop help is less important than it used to be. Online help now means the Internet; or users just use Google.