All posts by onlyconnect

On Quadrophenia, rock classics, tribute shows, and aging

The Who’s Quadrophenia is currently on tour in the UK – but it is not performed by The Who. No, this is the Quadrophenia Rock Show, Music Lyrics & Concept by Pete Townshend – stage adaptation by Jeff Young, John O’Hara and Tom Critchley.

Quadrophenia is among my favourite albums – not for the daft story, but because the music and lyrics speak to me of the frustration and glory of being human, or something. But do I want to see it performed by musicians other than The Who? At one time I’d have said, no way. Why settle for an imitation when you can have the real thing?

The trouble is, you can’t any more. Keith Moon died in 1978; John Entwistle in 2002. Roger Daltrey and Pete Townshend still tour and no doubt put on a good show from time to time – I saw The Who in January 2002, at which time Entwistle was still around, and enjoyed it tremendously. Still, at best with these aging bands there is always an element of “it’s amazing how good they are considering”, and at worst it can be embarrassing. I saw Jethro Tull in Derby in 2007, and while the musicianship was generally impressive, my memory is dominated by the failings of Ian Anderson’s voice, which spoilt most of the songs through no fault of his.

It is also rather strange to see bands whose music is laden with the sexual tension of youth performing the same songs at a later stage of life. What is “Hope I die before I get old” meant to mean, sung by a 65-year-old Daltrey?

The bottom line is that I have mixed feelings about seeing performances like these. I still go to see Bob Dylan, who is even older, but that’s partly because I see it as a pilgrimage to see one of the greats, and partly because Dylan is more able to be his age, thanks to the songs he writes and continues to write, and the fact that he’s been fixin’ to die since his very first album in 1962.

So when I saw that the Quadrophenia show is on locally, I thought twice about it. Is it possible that a tribute show of younger performers could put more energy into it than the current Who? Well, yes, it is possible. And once old rockers like The Who and The Rolling Stones hang up their touring boots for the last time, it will be this or nothing.

I’m also encouraged by knowing that Pete Townshend is involved to some degree in the show. He talks about it – or rather writes about it, since it’s an email interview – in an illuminating piece in The Times. He includes a comment pertinent to this post:

Have you ever been to see a rock musical based on a back-catalogue?

I live inside one. Musicals based on back-catalogues are becoming a saturated market. How can rock musicals avoid being watered-down exercises in asset-stripping?

Let me ask another question. When all those nostalgic for the music of their youth have moved on, will today’s revered rock classics ever be performed live? In most cases, I’m guessing the answer is no. In a few cases though, maybe an evening out to hear a performance of Blonde on Blonde or The Dark Side of the Moon or Quadrophenia will be accepted in the same way as we treat other music from composers long gone, who knows?

I’m booking to see Quadrophenia.

Cloud Computing survey: more fog than cloud

Yesterday I attended a presentation from NTT Communications, a managed hosting provider, on the plans of 200 CFOs and CIOs from larger UK organizations (500+ employees) with respect to cloud computing. Since NTT would presumably like more companies to put more stuff on its hosted servers, it was no doubt hoping for a strong endorsement of the idea. Unfortunately for NTT, that was not the case. Fewer than 20% of those surveyed think they are using cloud computing now, a bit more than 20% think they will adopt some of it in the next two years, but – and here’s the real killer – cloud computing is way down the list of investment priorities, at around 5%. I’m not clear 5% of what exactly; but the report says it is the lowest priority.

What are companies spending money on instead? Servers and storage, network infrastructure, security, company web sites, backup and disaster recovery, unified communications, desktops and laptops, software – almost anything else, in other words.

What’s wrong with the cloud? The three top issues, for those surveyed, are security, immaturity, and reliability.

These are valid concerns, though each one is open to debate; but the entire survey was undermined by the fact that most of those surveyed admitted to not knowing what cloud computing is. The reason is not ignorance, but the many and various ways the term is used. The common strand is that it is something to do with the internet, but even that is undermined if we describe virtual on-premise servers as a “private cloud”.

What are the varieties of cloud? Almost infinite, but here are a few:

  • Multi-tenanted applications such as Salesforce CRM, Google Docs, NetSuite. This is the model that has the biggest inherent economic advantage.
  • Hosted application platforms including Google App Engine, Microsoft Azure, Force.com. These are hosted application servers, where you write the code, taking advantage of integrated hosted services for storage, identity, transactions and so on.
  • Utility services such as Amazon S3. It’s a great example: S3 offers nothing but storage, though you can use it in conjunction with other Amazon web services.
  • On-demand infrastructure such as Amazon EC2. You get virtual servers to do what you like with. NTT’s services are mainly in this broad category. It’s cloud but you are mostly not getting the benefits of multi-tenancy.
  • Anything on the internet. Running a web application? Hey, you’re in the cloud.

If we are going to have a sane discussion about these things, we need to know what we are talking about. Maybe rather than asking companies whether or not they are doing that cool cloud stuff, it would be better to enquire how they see their use of the internet evolving.

Another big question is the extent to which companies are willing to buy in their IT infrastructure as a third-party service. Although it makes obvious financial sense in most cases, it is a big ask given how business-critical it is, hence the concerns about security, immaturity, reliability.

Smaller companies with ad-hoc IT systems are likely to be more amenable to the idea, but this group was not covered by NTT’s survey.

Conclusions? The main one is “watch this space”. In the end I reckon sheer economics will drive cloud computing adoption – in all the areas described above – but the one thing NTT’s survey proves is that larger organisations are in no hurry to make that jump.


Windows 7: July RTM, October 22 launch

News is drifting out that Microsoft intends to launch Windows 7 – that is, have PCs with it pre-loaded on retail sale – on October 22.

Not unexpected news – it is exactly what many of us predicted last year, after seeing it at PDC – but it is good to have it confirmed and will help users considering PC purchase decisions. There should be an announcement very soon about free upgrade offers, where you buy a PC with Vista now, and get a free upgrade to 7 when available.

By the way, there’s a further gallery of Windows 7 images up on the Guardian Technology site. This is not just more of the same: I included some of the less publicised corners of the new OS, such as the new-but-not-improved Movie Maker, PowerShell scripting, and the option to remove Internet Explorer.

Update: An official announcement is here:

Microsoft will deliver Release to Manufacturing (RTM) code to partners in the second half of July. Windows 7 will become generally available on Oct. 22, 2009, and Windows Server 2008 R2 will be broadly available at the same time.

I’ve also amended the title of this post to remove the ambiguity between “Windows: 7 July” and “Windows 7: July” 🙂


Google Wave: a disruptive approach to collaboration

I watched the Google Wave Developer Preview session at Google IO.

It is tedious to sit through 1 hour 20 mins of conference presentation; but it is worth watching at least a little of it to see some Wave demos. In essence, Google is presenting email++ and hopes it will catch on. A “wave” is loosely analogous to an email conversation or a thread on a discussion board. You participate using a browser-based client. Unlike email, in which messages are copied hither and thither, a wave exists on a server so you always see the latest version. Again unlike email, a wave offers real-time synchronization, so that I see your replies or comments as you type. This means you can use a wave like an instant messaging system or like a wiki (collaborative document editing). You can also embed rich media and extend the system with robots (non-human participants which add functionality) or gadgets, such as polls, games, applications. Waves can be embedded into web pages, just as the above video is embedded into this blog post.
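
To make the model concrete, here is a toy sketch in Python of the core idea: a wave is a single server-held conversation that every participant reads live, rather than a set of copies flying between inboxes as with email. This is purely illustrative – the class and method names are invented and have nothing to do with Google’s actual APIs:

```python
# Toy illustration of the wave model: one shared server-side object,
# so every participant always sees the current state of the conversation.
class Wave:
    def __init__(self, title):
        self.title = title
        self.participants = set()
        self.blips = []          # messages/edits, in order

    def add_participant(self, who):
        self.participants.add(who)

    def append_blip(self, author, text):
        # Unlike email, nothing is copied to each recipient;
        # the change lands once, on the shared wave.
        self.blips.append((author, text))

    def render(self):
        # Any participant rendering the wave sees the latest version.
        return "\n".join(f"{a}: {t}" for a, t in self.blips)

wave = Wave("Lunch plans")
wave.add_participant("alice")
wave.add_participant("bob")
wave.append_blip("alice", "Where shall we eat?")
wave.append_blip("bob", "The usual place?")
print(wave.render())
```

Robots and gadgets would then be extra participants or widgets acting on the same shared object – which is what makes the real-time, wiki-like behaviour fall out naturally.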

The client-side API for Wave is written in Google Web Toolkit, and according to the keynote:

we couldn’t have built something like this without GWT

The server side, such as robots, has an API in Java or Python. You can check out the APIs here, and sign up for a developer account. The Wave protocol is also published, and is documented here. Google is talking about launch later this year. Mashable has a good overview.

Significance of Wave

This is the bit that interests me most. Why bother with Wave? Well, Google is hoping we will find this to be a compelling alternative and partner to email, Facebook, instant messaging, Twitter, discussion boards, and more. For example, you could develop a new kind of discussion board in which the topics are waves rather than threaded discussions. The impact might include:

  • Driving users to Google Chrome and other browsers optimized for fast JavaScript, and away from Microsoft IE.
  • Promoting use of Google-sponsored APIs like OpenSocial, upon which Wave builds.
  • Shifting attention away from classic email servers such as Microsoft Exchange and Lotus Notes.
  • Offering an alternative to Microsoft Office and Adobe Acrobat for document collaboration.
  • Getting more of us to run our content on Google’s servers and use Google’s identity system. This is not required as I understand it – the keynote mentions the ability to run your own Wave servers – but it is inevitable.

The demos are impressive, though it looks as if a large wave with many contributors could become hard to navigate. It is definitely something I look forward to trying.

Finally, it is notable that Google’s Flash-aversion is continuing here: all the client stuff is done with JavaScript and HTML.

I am not sure how this might work offline; but I imagine Google could do something with Chrome and Gears, while no doubt there is also potential for a neat Adobe AIR client, using the embedded WebKit HTML renderer.

A few good things about Bing – but where is the webmaster’s guide?

So Bing (Bing Is Not Google?) is Microsoft’s new search brand. A few good things about it:

1. Short memorable name, short memorable url

2. Judging by the official video at http://www.decisionengine.com/ Microsoft realises that it has to do something different from Google; doing the same thing almost as well, or even just a little better, is not enough.

3. Some of the ideas are interesting – morphing the results and the way they are displayed according to the type of search, for example. In the video we see a search for a digital camera that aggregates user reviews from all over the Internet (supposedly); whereas searching for a flight gets you a list of flight offers with fares highlighted.

This kind of thing should work well with microformats, about which Google and Yahoo have also been talking – see my recent post here. But does Bing use them? That’s unknown at the moment, because the Bing Reviewer’s Guide says little about how Bing derives its results. I don’t expect Microsoft to give away its commercial secrets, but it does have a responsibility to explain how web authors can optimise their sites for Bing – presuming that it has sufficient success to be interesting. Where is the webmaster’s guide?
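
To see why microformats matter here, consider how a crawler might pull structured review data out of marked-up pages and aggregate it. The sketch below is a toy in Python, loosely modelled on the hReview pattern; the page markup is invented, and none of this is taken from Bing:

```python
from html.parser import HTMLParser

# Toy extractor: pull out elements whose class is "rating" from
# hReview-style markup - the kind of structured data a search engine
# could aggregate across many sites into a single product score.
class RatingExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_rating = False
        self.ratings = []

    def handle_starttag(self, tag, attrs):
        if ("class", "rating") in attrs:
            self._in_rating = True

    def handle_data(self, data):
        if self._in_rating and data.strip():
            self.ratings.append(float(data.strip()))
            self._in_rating = False

# Two invented reviews of the same (hypothetical) camera, on two sites.
page = """
<div class="hreview">
  <span class="item">SuperZoom 500 camera</span>
  <span class="rating">4.5</span>
</div>
<div class="hreview">
  <span class="item">SuperZoom 500 camera</span>
  <span class="rating">3.0</span>
</div>
"""
parser = RatingExtractor()
parser.feed(page)
average = sum(parser.ratings) / len(parser.ratings)
print(average)  # → 3.75
```

With markup like this, the aggregation step is trivial; without it, the engine is reduced to guessing which numbers on a page are ratings.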

Some things are troubling. The Bing press material I’ve seen so far is relentlessly commercial, tending to treat users as fodder for ecommerce. While I am sure this is how many businesses perceive them – why else do you want users to come to your site? – it is not a user-centric view. Most searches do not result in a purchase.

There’s a snippet in the reviewer’s guide about why Bing will deliver trustworthy results for medical searches:

Bing Health provides you with access to medical information from nine trusted medical resources, including the Mayo Clinic, the American Cancer Society and MedlinePlus.

No doubt these are trusted names in the USA. Still, reliance on a few trusted brands, while it is good for safety in a sensitive area such as health, is also a route to a dull and sanitized version of the Internet. I am sure there are far more than nine reliable sources of medical information on the Web; and if Bing takes off those others will want to know why they have been excluded.

Back to the introduction in the Reviewer’s Guide:

In a world of excessive choice and too much information, it’s often difficult to make the right decision. What you need is more than just a search engine; you need a decision engine that provides useful tools to help you get what you want fast, rather than simply presenting a list of Web links. Bing is such a decision engine. It provides an easy way to make more informed choices. It organizes popular results by category to help you get the answers you’re looking for without having to guess at the right way to formulate your query. And built right into Bing is a set of intelligent tools to help you accomplish important tasks such as buying a product, planning a trip or finding a local business.

Like many of us, I’ve been searching the web since its earliest days. I found portals and indexes like early Yahoo and dmoz unhelpful: always out of date, never sufficiently thorough. I used DEC’s AltaVista instead, because it seemed to search everywhere. Google came along and did the same thing, but better. Too much categorization and supposed intelligence can work against you, if it hides the result that you really want to see.

Live Search, I’ve come to realise (or theorise), frequently delivers terrible results for me because of faulty localization. It detects that I am in the UK and prioritises what it thinks are UK results, even though for most of my searches I only care about the quality of the information, not where the web sites are located. It’s another example of the search engine trying to be smart, and ending up with worse results than if it had not bothered.

Still, I’ll undoubtedly try Bing extensively as soon as I can; I do like some of its ideas and will judge it with an open mind.


Spotify demos mobile music streaming with offline option – for Android

If you have any interest in the future of the music industry, I recommend taking a look at the following video:

There are a couple of reasons why this demo of streaming music to a Google Android mobile is interesting. First, if Spotify delivers the kind of performance and quality it has on the desktop, this will be a great facility for music fans. Second, it is interesting to see how it handles the offline problem, such as when you descend into the London Underground. Simple: just mark a track for offline use, and it downloads to local storage. I’m presuming this is encrypted in some way in order to prevent you from converting it to a standard MP3; but if it is always available anyway, who cares?

Will this be free, or a premium service? I’m guessing the latter but don’t yet have any more details.

Of course everyone is asking for an iPhone version. See for example this post:

It’s interesting that Spotify has chosen Android as the mobile debut, rather than iPhone – although it’s safe to assume the company is working on Apple’s handset too, among others.

Hmmm, I wonder what chance this would have of getting past Apple’s iPhone app censorship? It seems to me that what we are seeing is the beginning of the end for the iTunes download model.

The battle to be part of the emerging cloud stack: Force.com for Google App Engine

I was interested in today’s announcement of a new Force.com for Google App Engine. App Engine lets you build Python or, since April 7th this year, Java applications and run them on Google’s servers. Salesforce.com already offered Python libraries for its Force.com platform, but these have now been joined by Java libraries which are more complete:

The Java toolkit supports the complete Partner WSDL of the Force.com Web Services API. All operations and objects are included in the library and documentation. If you are a Java developer, you can also leverage the Java resources found here.

whereas the Python toolkit only supports “many of the key Force.com Web Services API calls”. I suspect the Java toolkit will have more impact, because Java is the language and platform many enterprises already use for application development.

On the other side, there is also a Google Data API Toolkit for Force.com.

Why is Salesforce.com cosying up to Google? The way I see it, there is an emerging cloud stack and vendors need to be part of that stack or be marginalized.

What’s a cloud stack? You can interpret the expression in various ways. Sam Johnston has a go at it here, identifying six layers:

  • Infrastructure
  • Storage
  • Platform
  • Application
  • Services
  • Clients

There isn’t a single cloud stack, and all parts of it are available from multiple vendors as well as from open source. It is a major shift in the industry though, and there is no reason to think that the same vendors who currently succeed in the on-premise stack will also succeed in the cloud stack, rather the contrary. You could describe the RIA wars (Adobe Flash vs browser vs Silverlight) as a battle for share of the client stack, for example, and one in which Microsoft is unlikely to win as much share as it has enjoyed with Windows.

By positioning itself as a platform that integrates well with Google App Engine, Salesforce.com is betting that Google will continue to be an important player, and that it will pay to be perceived as complementary to its platform.

A factor which holds back Force.com adoption is that it is expensive. Developers can experiment for free, but rolling out a live application means subscriptions for all your users. Getting started with App Engine, on the other hand, is free, with fees kicking in only after you have succeeded sufficiently that you are generating substantial traffic (and hopefully making or saving some money).

Adobe Presentations goes public

Adobe has gone public with Presentations, cloud-based presentation graphics built with Flash and hosted on the Acrobat.com portal. It runs in Firefox 3.x, IE 6 or higher, or Safari 3.x or higher, on Windows or Mac, provided that the latest Flash Player 10 is installed. There is no mention of Linux, though it might work.

Presentations is up against two obvious rivals: Microsoft PowerPoint and Google Docs. Why use Adobe Presentations? Adobe is highlighting the advantages for collaboration: no need to email slides hither and thither. You can use PowerPoint on collaborative servers like SharePoint, but it’s still a point well made. A free account on Acrobat.com is far easier to set up and manage. The Flash UI is elegant and easy to use, and while it lacks all the features of PowerPoint, it seems to cover the essentials pretty well. You can insert .FLV (Flash Video) files, which enables all sorts of interesting possibilities. At first glance, Adobe Presentations seems to be way ahead of Google Docs, with transitions, themes, colour schemes, opacity control, and general Flash goodness.

It’s good, it’s free: does Adobe have a winner? I can see a couple of problems. One issue is that people are nervous about relying on a live connection to the Internet during a presentation. Given that conferences and hotels often have wifi connectivity issues, that’s not an irrational concern. Presentations does have a solution, which is export to PDF, but nevertheless Adobe has to overcome that instinctive reaction: cloud-based presentations? No thanks. Having PDF as the sole export option is restrictive too; it would be great to see PowerPoint import and export, but I suspect it is too tightly wedded to Flash for this to work.

As Mike Downey observed on Twitter, it is also a shame that you cannot embed a presentation into a web site, though of course you could include a link.

Presentations has a lot in common with Buzzword, the Acrobat.com word processor, which does not seem to have taken off despite its strong features. Will this be different? Potentially, but Adobe needs to work on public perception, which is Microsoft for offline, Google for online.

I reckon Adobe would gain substantially by adding AIR support to Acrobat.com. This makes obvious sense for both Buzzword and now Presentations. Users would have the comfort and performance of local files, plus the collaborative benefits of online. Why not?

Book Review: IronPython in Action

I guess that to many people IronPython, an implementation of the Python programming language that runs on Microsoft’s .NET platform, is little more than a curiosity. For them, a glance at the preface and foreword to this new Manning title may be enough to show that it is at least interesting. In the foreword, IronPython creator Jim Hugunin describes how he first worked on Python for .NET as an exercise to prove “why the CLR is a terrible platform for dynamic languages”:

The plan was turned upside down when the prototypes turned out to run very well – generally quite a bit faster than the standard C-based Python implementations.

Next is a preface from the authors, Michael Foord and Christian Muirhead, about how IronPython came to be used in an innovative programmable, testable spreadsheet called Resolver One:

Resolver One is in use in financial institutions in London, New York and Paris, and consists of 40,000 lines of IronPython code with a further 150,000 in the test framework. Resolver One has been tuned for performance several times, and this has always meant fine tuning our algorithms in Python. It hasn’t (yet) required even parts of Resolver One to be rewritten in C#.

Why is Python used in Resolver One?

As a dynamic language, Python was orders of magnitude easier to test than the languages they had used previously.

If that piques your interest, you may find the book itself worth reading. It is aimed at Python programmers who do not know .NET, and .NET programmers who do not know Python, rather than at existing IronPython developers. The authors run through the basics of Python and its .NET integration, so that by the end of Part 1 you could write a simple Windows Forms application. Part 2 is on core development techniques and covers duck typing, model-view-controller basics, handling XML, unit and functional testing, and metaprogramming – this is where you generate and execute code at runtime.

The third part of the book covers .NET integration, and how to use IronPython with WPF (Windows Presentation Foundation), PowerShell, ASP.NET, ADO.NET (database access), web services, and in Silverlight. Finally, Part 4 shows how to extend IronPython with libraries written in C# or VB.NET, how to use Python objects in C# or VB, and how to embed the IronPython engine into your own application.

It’s a well-written book and fulfils its purpose nicely. I like the way the book is honest about areas where IronPython is more verbose or awkward than C# or VB.NET. As someone who knows .NET well, but Python little, I could have done without all the introductory .NET explanations, but I understand why they are there. I particularly liked part two, on core programming techniques, which is a thought-provoking read for any developer with an interest in dynamic languages.
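
Part two’s territory is easy to illustrate with a fragment of ordinary Python – the example below is mine, not the book’s, and runs under CPython or IronPython alike. Duck typing means a function cares only that its argument has the right method, not what class it belongs to:

```python
import io

# Duck typing: save() accepts anything with a write() method --
# a real file, an in-memory buffer, or our own invented class.
class NullWriter:
    """Discards everything, but quacks like a file."""
    def __init__(self):
        self.count = 0

    def write(self, text):
        self.count += len(text)

def save(doc, target):
    # No isinstance() checks: if it has write(), it will do.
    target.write(doc)

buffer = io.StringIO()
save("hello", buffer)

sink = NullWriter()
save("hello", sink)

print(buffer.getvalue(), sink.count)  # → hello 5
```

In IronPython the same trick extends to .NET types: anything exposing the expected members can be passed in, which is much of what makes it so convenient for testing.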

According to the IronPython site, you can get 35% off by ordering from the Manning site with the code codeplex35.

It is also available on Amazon.com and Amazon.co.uk.

A few jottings on hi-fi and misleading science

My interest in hi-fi began a few decades ago when I was out looking for a new cassette deck. At that time I had the view that all amplifiers sounded the same, pretty much, because I was aware that the frequency response of an amp was flat and its distortion low across the audible range.

I was in a store comparing a bank of cassette decks with a tape of my own that I’d brought along and a pair of headphones. There were a couple of amplifiers and two switchbox comparators, so I could listen to several decks through one amplifier, then several other decks through the second amp.

I began to suspect that the comparison was unfair, because all the decks going through the first amp sounded better – more musical and enjoyable – than those going through the second amp. I realised that contrary to my expectation the amplifiers were contributing to the sound, and that the first one sounded better. It was a Cambridge A&R A60. So I bought that instead, and loved it.

I realised therefore that the frequency response and distortion specs were not telling the whole story. It was better to buy what sounded best.

Unfortunately subjectivism has problems too. In particular, once people have been trained to distrust specs they become vulnerable to exploitation. Listening alone is not enough, for all sorts of reasons: what we hear is influenced by expectations, small variations in volume that we mis-interpret as quality differences, changes in multiple variables that make it impossible to know what we are really comparing, and so on. We need science to keep the industry honest.

Another factor is that advances in technology have made it harder for the hi-fi industry. Digital music eliminates things like wow and flutter, rumble and surface noise, and audio quality that is good enough for most people is now available for pennies. In search of margin, hi-fi retailers settled on selling expensive cables or equipment supports with ever-diminishing scientific rationale. Beautiful, chunky, elegant, gold-plated interconnects look like they should sound better, so, like jackdaws attracted by shiny buttons, we may think they do – even when common sense tells us that the audio signal has already passed through many very ordinary cables before it reaches us; why should the particular stretch covered by this interconnect make any difference?

My respect for the power of the mind in this regard was increased by an incident during the Peter Belt years. Peter Belt was an audio eccentric who marketed a number of bizarre theories and products in the UK, mainly in the eighties, and attracted some support from the hi-fi press. See here for an enthusiast view of his work; and here for a woman convinced that she can improve the sound of her CDs by putting them in the deep freezer for 24 hours:

Freezing The Downward Spiral made it far more engaging than it has ever been. For instance, the layers at the end of the song "Closer" are more in evidence. Little bits of sound present themselves that I have never heard before. NIN’s sound is close to industrial, with what at times sounds like machinery droning in the background. After freezing this disc, these sounds became more easily discernible. The overall NIN experience increased tenfold for me after freezing the disc.

Another of Belt’s theories is or was that you could improve the sound of any hi-fi equipment with four supports (such as small rubber feet) by placing a triangular sheet of paper under one of them, to make it in some mystical sense three-legged.

I tried this with a friend. He had a high-end turntable and knew nothing of Peter Belt or his theories. I told him I knew of a madcap theory that I wanted to disprove. We played a record, and then I said I would make a change that would make no difference to the sound. I took a small thin triangular piece of paper and placed it under one of the four feet of the turntable. It did not affect its stability. We played the record again. He said it definitely sounded better. What is more, I thought it sounded better too – or at least, that was my subjective impression. My rational mind told me it sounded just the same. Still, he left the bit of paper there.

I don’t doubt that we have more to learn about sound reproduction; that we measure what we can, but we may measure the wrong things or in the wrong way. That does not mean that every wild theory about how to improve hi-fi has equal validity. There is one simple technique that helps to assess whether some particular thing is worth spending money on, and that is blind testing. Listen to A, then to B, and see if you can tell which is which. If the differences you hear when you know which is which disappear, then you know that the feature you are testing does not affect the audible sound quality. It might still be worth having; there is no law against liking beautiful cables or shiny amplifiers that cost more than a house.
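
Repeated trials are what give a blind test its teeth. As a quick worked example (my arithmetic, not a standard from any audio body): if a listener picks correctly 12 times out of 16, the chance of doing at least that well by pure guessing is small enough to take seriously:

```python
from math import comb

# Probability of getting at least k of n trials right by pure guessing
# (p = 0.5 per trial), i.e. the one-sided binomial tail.
def guess_tail(n, k):
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

p = guess_tail(16, 12)
print(round(p, 3))  # → 0.038, comfortably below the usual 0.05 threshold
```

A couple of lucky picks prove nothing; a consistent result over enough trials is hard to argue with, which is exactly why the wilder claims tend to avoid this kind of test.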

I suspect that few or none of Peter Belt’s improvements would survive such trials.

Blind testing is not perfect. Even if you can hear a difference, that does not tell you which sounds more accurate to the source, or which is more enjoyable. Ironically, very poor equipment has nothing to fear from blind testing, in that it will most likely sound different; though there is merit in scoring your preferences blind as well.

Sometimes blind testing yields surprising results, like the trials which show that high-resolution audio like SACD and DVDA can pass through a conversion to CD quality and back and still sound the same. I’ve written more on this subject here.

I think we should learn from such tests and not fear them. They help us to focus on the things that yield less contentious improvements, like using the best available sources, and maintaining excellence all the way through from initial recording to final mastering. In the strange world of hi-fidelity neither specs, nor price, nor casual listening, nor even science will tell us everything about how to get the best sound; but some combination of all of these will enable us to spend our money and time wisely, and do more of what really counts: enjoying the music.
