All posts by onlyconnect

Measure the dynamic range of your CDs and downloads

A new development in the loudness wars backlash: a downloadable application to measure the dynamic range of your music.

Audiophiles have been complaining for years about the tendency of modern CD and download mastering to maximize the loudness of the music at the expense of dynamic range. This type of compression squashes the peaks and amplifies the quieter parts to achieve a more even sound, and the result is then encoded at as high an overall level as possible so that the music sounds as loud as it can. The idea is to make the music stand out more. Unfortunately the music also sounds less natural and loses the contrast between louder and quieter passages, making it more fatiguing and generally less interesting.
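
To make the mechanism concrete, here is a toy sketch in Java of the squash-then-boost idea: samples above a threshold are flattened, then make-up gain pushes everything towards full scale. It is purely illustrative; real mastering limiters are far more sophisticated than this, and the numbers are arbitrary.

    // Toy illustration of loudness-war mastering: flatten the peaks,
    // then apply make-up gain so the whole track sits near full scale.
    public class LoudnessCrusher {

        static double[] crushDynamics(double[] samples) {
            final double threshold = 0.5;   // peaks above this get squashed
            final double makeupGain = 1.9;  // push the quieter material back up
            double[] out = new double[samples.length];
            for (int i = 0; i < samples.length; i++) {
                double v = samples[i];
                if (Math.abs(v) > threshold) {
                    v = Math.signum(v) * threshold;                      // brick-wall the peak
                }
                out[i] = Math.max(-1.0, Math.min(1.0, v * makeupGain));  // louder overall
            }
            return out;
        }

        public static void main(String[] args) {
            double[] quietThenLoud = { 0.05, 0.1, 0.9, -0.95, 0.2 };
            for (double v : crushDynamics(quietThenLoud)) {
                System.out.printf("%.2f ", v); // quiet parts get louder, peaks all hit the same ceiling
            }
        }
    }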

Once established though, this practice is hard to reverse, since new releases that have full dynamic range will sound quieter than average, which the industry considers risky.

Even the forthcoming Beatles remasters will be mastered to some extent for loudness. This is the careful but self-contradictory wording from the press release:

Finally, as is common with today’s music, overall limiting – to increase the volume level of the CD – has been used, but on the stereo versions only. However, it was unanimously agreed that because of the importance of The Beatles’ music, limiting would be used moderately, so as to retain the original dynamics of the recordings.

The latest counter-initiative comes from German mastering engineer Friedemann Tischmeyer, who has set up the Pleasurize Music Foundation with the aim of establishing industry standards for loudness and dynamic range. Windows users can download the TT Dynamic Range Meter, a utility which measures the dynamic range of a WAV or MP3 file (a Mac version is in preparation). Higher numbers are better, and the Foundation intends to set 14 as the minimum standard, which to judge by my own measurements seems ambitious; even releases from well before the loudness wars do not meet it.
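
I don’t know the exact formula the TT meter uses, but the basic idea is to compare peak level with average level: the harder a track has been limited, the smaller the gap between the two. As a very rough sketch only, assuming a 16-bit little-endian PCM WAV file and nothing like the Foundation’s official algorithm, you could estimate a peak-to-RMS ratio in Java like this:

    // Crude peak-to-RMS estimate for a 16-bit little-endian PCM WAV file.
    // Illustrative only: NOT the TT Dynamic Range Meter algorithm.
    import java.io.File;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;

    public class CrudeDynamicRange {
        public static void main(String[] args) throws Exception {
            AudioInputStream in = AudioSystem.getAudioInputStream(new File(args[0]));
            byte[] buf = new byte[4096];
            double peak = 0, sumSquares = 0;
            long count = 0;
            int read;
            while ((read = in.read(buf)) > 0) {
                for (int i = 0; i + 1 < read; i += 2) {
                    int sample = (buf[i + 1] << 8) | (buf[i] & 0xff); // little-endian 16-bit
                    double v = sample / 32768.0;
                    peak = Math.max(peak, Math.abs(v));
                    sumSquares += v * v;
                    count++;
                }
            }
            double rms = Math.sqrt(sumSquares / count);
            // Express the ratio in dB; higher numbers mean more dynamic range survives.
            System.out.printf("Peak-to-RMS: %.1f dB%n", 20 * Math.log10(peak / rms));
        }
    }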

I did a few measurements. Here is one of the loudest releases of all time, Iggy Pop’s Raw Power remaster (score 2, ouch):

Here’s Katy Perry’s I Kissed a Girl – a typical modern release (score 5):


Here are a couple of measurements showing a progression. The track is This Year’s Girl, from This Year’s Model by Elvis Costello and the Attractions. Here is an early CD from the eighties (score 11):

and here’s the most recent Deluxe Remaster (score 7):

showing how remasters tend to boost volume and diminish dynamic range. But can anyone beat Iggy Pop?

Google App Engine to be less free: quotas reduced from May 25th

I’m blogging this because I’ve only just noticed it; I’m not sure when it was announced. From 25th May, Google is reducing the resource quotas allowed for App Engine applications before you have to start paying. The question “by how much” is tough to answer, because the quota system is complex. Here’s the relevant document; there are quotas for bandwidth in and out, internal API calls, CPU time, data sent to and received from the internal datastore, emails sent, and use of the image and caching services.

Still, what caught my eye is this:

The new free quota levels to take effect on May 25th will be as follows:

  • CPU Time: 6.5 hours of CPU time per day
  • Bandwidth: 1 gigabyte of data transferred in and out of the application per day
  • Stored Data & Email Recipients: unchanged

Currently, you are allowed 10 gigabytes in and 10 gigabytes out per day. So it looks to me as if by some measures the quotas have been reduced to one tenth of what they were; unless the new limit aggregates incoming and outgoing transfer, in which case it would be one twentieth.

The spin is that:

We believe these new levels will continue to serve a reasonably efficient application around 5 million page views per month, completely free.

It’s true that the old limits are generous. Still, the real lesson here is not to build your business on “free” services; at any moment the terms can change, sometimes severely. While the same is true of paid-for services, it is more difficult for a provider to make extreme changes to something customers pay for.

It is also a reminder of Google’s usual tactic, to buy market share with generous initial terms. Remember all those Google Checkout incentives when the company was fighting to win customers from PayPal?

I’m actually more comfortable with Amazon’s approach to web services: nothing free, but commodity pricing from the get-go.

Google App Engine is easier than Windows Azure for getting started

Hello App Engine

Yesterday morning I saw the news that Google App Engine was now open for Java as well as Python applications – in beta, that is. I signed up and received notification of access almost immediately. I read the notes on getting started with Eclipse. Fortunately I already had Eclipse installed; I just needed to run it and enter the URL of the Google plug-in into the software update configuration dialog. The plug-in downloaded and installed in a few moments. Then it was a matter of File – New – Other, selecting Google Web Application Project, entering a project and package name, and clicking Finish.

The wizard creates a skeleton Java Servlet application. I made a few trivial modifications to both the servlet code and the home page. Clicked Run, and the app runs on a local server. It worked. Next, I needed to deploy it. I signed into my App Engine console, and created a new application. I had to find a name that was not yet taken, and selected javaisgo. This generates an application ID. I copied this into the appengine-web.xml file which the Eclipse wizard had generated. Then I hit Deploy App Engine Project in the toolbar. I was prompted for my Google account name and password, the application uploaded, and it was done; you can see the results at http://javaisgo.appspot.com/.
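
For the record, the application itself could hardly be simpler. The wizard’s skeleton is a plain Java servlet, along these lines (class and package names here are illustrative, not necessarily what the wizard generates):

    // Minimal servlet of the kind the Eclipse wizard generates, with the
    // greeting trivially modified; this is all that runs at the URL above.
    package com.example.javaisgo;

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class JavaIsGoServlet extends HttpServlet {
        @Override
        public void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("text/plain");
            resp.getWriter().println("Hello App Engine – Java is go");
        }
    }

Everything else – the servlet mapping and the appengine-web.xml carrying the application ID – is generated for you, which is a large part of why the process feels so painless.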

Although Python is dynamic and fashionable, Java is probably the world’s most popular language for business development. I expect the ability to create and deploy applications so easily and for free will be attractive to many, leaving aside anxiety about Google’s plans to take over the Internet.

Hello Windows Azure

All this reminded me that although I had signed up for the Windows Azure CTP (Community Tech Preview) a while back, I had not got round to deploying a web application. I did deploy a Mesh-Enabled Web Application, part of the Live Framework, a process which I found frustrating and ultimately disappointing. That is a different kind of thing though; an Azure ASP.NET application takes a similar approach to Google App Engine – write your web application, then deploy it to the cloud provider’s servers.

I already had an Azure account and developer token; how long would it take to deploy a Hello World project to Azure? Well, first you have to install the Windows Azure Tools for Visual Studio. I tried running this, but the dependencies were not in place. Although I already had Visual Studio 2008 SP1, SQL Server, and .NET Framework 3.5, I needed to add IIS 7.0 with ASP.NET to my development machine, and to configure the .NET Framework to support WCF HTTP Activation, which is off by default. See here for details. I did all this and the tools installed. The SDK also gets installed. When you build and run the samples, it starts up two new services on your machine, Development Fabric and Development Storage.

Next, I started Visual Studio, which apparently has to be run with full local admin rights for Azure to work. New project, Visual C#, Cloud services, Web Cloud Service (essentially an ASP.NET application). This looked familiar, and I quickly added a button and an event handler to make sure it worked. When you debug, it runs on the local development fabric.

Time to deploy. This is where I ran into some difficulties. I logged into the development portal and created a new Hosted Services project, called Azure is go. Next, I went back to Visual Studio and used the Publish wizard. Note: you must not use Publish from the Build menu, as this does not work. You need to right-click the solution in the Solution Explorer and choose Publish from there. This compiles two files: a .cspkg which contains your application, and a ServiceConfiguration.cscfg which configures it.

The wizard is disappointing: it merely opens an Explorer window showing your deployment files, and opens the Developer Portal in a web browser. You deploy your project to a Staging area; then when you are happy with it, hit Promote to copy it to the production URL. In order to deploy, you have to select the two deployment files manually in a web upload dialog. The reason the Wizard opens the Explorer window is to show you where they are so you can copy the path. All rather clunky, though not difficult.

After I did this, the Developer Portal displayed a spinning bagel and the words Package is deploying; then eventually it said Allocated. I hit Run, and it said Initializing. Nothing seems to happen quickly with Azure. It said Initializing for a long time; I got bored and hit Refresh. This may have been a mistake. My Staging icon went red and I got a message: InternalServerError – Information is not available.

I decided to delete the deployment and retry. I clicked Delete, whereupon Azure told me it couldn’t delete the deployment because “tenant status is currently Running”. That almost seemed hopeful; yet the test URL still did not work. I clicked Stop – pause while it stops – then Delete – pause while it deletes – then re-compiled and re-deployed.

My second effort worked, after the usual pauses. I then selected Promote and the application arrived at its final URL: http://azureisgo.cloudapp.net. As with App Engine, it is good to know that your web site is running on a scalable data center rather than on a single machine or virtual machine, and currently without any cost.

These hello world experiences may not seem important later, when you are buried in the intricacies of a real application, but they do have an impact on your desire to explore and experiment with a new platform. Judging from my own experience, getting started with Google App Engine is easier than it is with Azure, even for a Windows developer already set up with Visual Studio. The long pauses as Azure thinks about deploying your project also make a bad impression, in contrast to App Engine’s near-instant response. Maybe it will all come right with Visual Studio 2010 and the final release of Azure. In the meantime, this does nothing to shake my feeling that Microsoft’s Azure launch needs attention if it is to win developer mindshare.

Salesforce.com = CRM + platform?

Organizations evolve; and that can be an untidy process. Salesforce.com started out as an online application for CRM (Customer Relationship Management), and that remains its core business, as suggested by its name. Seeing its success, observers naturally asked whether the company would break out of that niche to service other needs, such as ERP (Enterprise Resource Planning). Sometimes there were hints that this is indeed the case; I recall being told by one of the executives last year that if the company was still called Salesforce.com in five years’ time it would have failed. However, rather than developing new applications itself, the company has chosen to encourage third parties to do this, by opening its underlying platform. The platform is called Force.com, and supports its own programming language called Apex. Third-party applications are sold on the AppExchange, and either extend the CRM functionality or address new and different areas. According to CEO Marc Benioff this morning, there are now 750 applications on AppExchange.

A question I’ve asked a couple of times is whether Salesforce.com gives any assurance to its third-party partners that it will not compete with them by rolling features similar to those in an AppExchange offering into its core platform. I’ve not received a clear answer, though EMEA co-president Lyndsey Armstrong told me last year that it just was not an issue; and Benioff today at Cloudforce told me it has not proved to be a problem so far. It is an interesting question though, since if Salesforce.com did choose to expand into new application areas, this kind of competition would be all but inevitable. It therefore seems to me that the company is more interested in growing its platform business, and continuing to grow its CRM business, than in addressing new kinds of online applications itself. There were also broad hints today that the company intends to improve its platform as an application server.

Let’s speculate for a moment. What if Salesforce gets acquired, say by Oracle, a move which would not be unexpected? If such a thing happened, it would make sense for existing Oracle applications like the E-business Suite or PeopleSoft Enterprise to get extended or merged or migrated into Force.com. That might be less comfortable for AppExchange 3rd parties.

BT brings Ribbit to the UK via Salesforce.com

Ribbit is an internet service for integrating voice communications into web applications. It is a US start-up that was acquired by BT in July 2008, but until now has not been available to UK customers. Today at Cloudforce BT announced Ribbit for Salesforce.com, an extension to the Salesforce CRM application and platform. In essence it adds a voice mailbox to your Salesforce.com account, enhanced with voice to text transcription. You can receive voice messages and have them sent to you as SMS or email; you can also use it as a voice memo utility where you dial in yourself to record the message.

A typical use case is for recording voice notes immediately after a meeting, perhaps when you get back to your car. These notes can then be attached to contacts or prospects for later reference.

The feature also adds a Flash-based VOIP phone to Salesforce.com, so you can make calls from your computer (not that this is anything new).

The cost will be around £35 per month.

I asked at the press briefing whether the voice-to-text really works. It’s good enough, I was told; the interesting part is how it is done. First the message is processed using third-party technology. The automatic transcription assigns a confidence level to each word or phrase, and where confidence is low a human corrects it. Despite the human involvement it is still only 80% – 90% accurate. I would like to know more about how many messages end up needing human intervention and how that impacts the time it takes – overall we were told transcription takes 5-10 minutes on average. It would also be interesting to know who is doing this, where they are and how much they are paid – it sounds like an ideal use for Amazon’s Mechanical Turk.

Ribbit is also an example of Enterprise Flash. Its API is Flash/Flex. This works out well for Salesforce.com integration, as Salesforce.com also has a Flex API. It’s not so good for mobile devices, partly because Flash is not always available (hello iPhone), and partly because Flash Lite does not give access to a microphone, making it useless for voice communication. Apparently a REST API is also under development, though that won’t solve the client piece.

BT says there is more to come, both in terms of other Ribbit applications, and integration with other BT services.

My reaction: Ribbit/Salesforce.com integration looks convenient, but is it really worth £35 per month when you could just record notes on a portable device instead? Well, the big feature turns out to be the automatic transcription. One BT representative at the meeting said he uses the service to have his voicemail emailed to him, and as a result rarely needs to dial in to listen to the message. That has real value – text is better than voice for lots of reasons. That said, is the transcription service really good enough? I sensed some hesitancy about this, though with human involvement it certainly could be.

Intel network driver 64-bit annoyance: won’t install, won’t uninstall

I’m mostly using Vista 64-bit these days and enjoying it. The system is stable, fast and responsive. At least, it was until I started using a dual display; this morning I got a blue screen, which Windows assured me was caused by the display driver. I also noticed that the Windows Problem Reports and Solutions applet said that my Intel network driver was causing problems and should be updated, so after updating the NVidia graphics driver, I also downloaded the latest from Intel and tried to run it.

It wouldn’t install. It got almost to the end, then declared:

Error 1713. Intel Network Connections cannot install one of its required products. Contact your technical support group.

The event log also refers to System Error: 1605. This means: This action is only valid for products that are currently installed.

The solution is to uninstall the existing Intel networking utility. Not easy, since bizarrely the uninstall fails with a report that the software is not designed for this version of Windows. So why did you let me install it then? And why not let me remove it?

The next step is to use msizap, the tool behind the Windows Installer Cleanup Utility, to delete the installation record from the Installer database. Note that this does not remove any actual files. I downloaded the cleanup utility here, but it did not work; when I tried to run it, the GUI did not appear. That download seemed quite old, so I installed the more recent Windows Installer 4.5 SDK which includes an updated msizap. That version is command-line only, but I copied the new msizap to the Windows Installer Clean Up folder in Program Files (x86); then the GUI worked too. I removed the Intel networking utility, re-installed the updated driver, and all is well.

Thanks also to this thread for some useful pointers.


CSS: a long wait for the aha moment

I’ve been messing around with web form design recently; I started with a table layout, decided it was horrible and unmanageable, and redid it with CSS. I came across this example which had more or less the layout I wanted – it’s the form about halfway down that looks like this:

Looks simple enough; and is based on an idea by CSS guru Eric Meyer; so I copied the code and tried it. Unfortunately my version looked like this:

I squinted at the code again and noticed that a style defining a little hack called div.spacer was missing from the sample code, though it is mentioned earlier in the article. Added it, and now my form looked like this:

Better, but frankly not that close to what I wanted. What was wrong? Should the fields float:left instead of float:right? It made little difference. Then I noticed that the example on the page was real code, not just an image, so I downloaded the actual CSS it was using. The problem turned out to be an obvious error: the sum of the widths assigned to the label style and the field style was greater than the width of the containing div, so the fields could not sit alongside their labels. Increase the container width, and all is well:

Great stuff; but it reminded me how tricksy CSS is. With both cascades and inheritance, and exceptions such as the fact that some properties do not inherit, figuring out exactly what style attributes apply to an individual element is a challenge. The positioning rules are complex and often do not work as I first expect. Styles can be defined in numerous places; and while external CSS files are easier to manage than those defined within HTML, they soon get long and hard to navigate.

By way of mitigation, CSS is powerful; yet as the above example shows, it can still need workarounds (div.spacer in this case) to achieve the result you want.

I suppose I’m waiting for that aha moment when it all makes perfect sense; but it seems a long time coming. Worried that it was just me, I found this reassuring post from Sho Kuwamoto:

I used to think of myself as knowing a lot about CSS. For starters, I’d been responsible for the CSS implementation in Dreamweaver. I was also a member of the W3C CSS working group. I wasn’t a major contributor (I didn’t author any of the chapters of the spec, for example), but I thought I knew the spec pretty well.

It’s been a while since I’ve touched CSS, and in coming up with the design for this blog, I was reminded again how difficult it is to use CSS to get the layout you want. It was incredibly difficult. I couldn’t get it to work and I ended up having to google around to figure out how other people had done their page layouts.

I’ve also noticed that the aforementioned Eric Meyer is increasingly critical of the language. In his post Wanted: Layout System he writes:

Maybe CSS isn’t the place for this. Maybe there needs to be a new layout language that can be defined and implemented without regard to the constraints of the existing CSS syntax rules, without worrying about backwards compatibility. Maybe that way we can not only get strong layout but also arbitrary shapes, thus leaving behind the rectangular prison that’s defined the web for almost two decades.

It’s too late of course. Now that Microsoft’s Internet Explorer 8 is out, there is decent support for CSS across all the major browsers. What’s the chance of getting agreement on a new layout system now? The only realistic alternative is to work increasingly in Adobe Flash or Microsoft Silverlight, which is proprietary badness but can be attractive.

Flash library for Facebook, Silverlight library for MySpace

Adobe and Facebook have announced that ActionScript 3, the language of Flash 9 and higher, is now officially supported by Facebook along with JavaScript and PHP. Information about coding for Facebook with Flash is here, and the library itself is on Google Code.

MySpace has announced the MySpace Silverlight SDK which will be hosted on Microsoft’s CodePlex open source site. The focus of the Microsoft Silverlight work seems to be on wrapping the Open Social API used by MySpace in a C# library.

Note that there is already documentation on creating Flash applications for MySpace. On the Facebook side, here’s an intriguing fact: there’s also an Fb:silverlight tag, though the documentation remarks: “For now this feature has no functionality.” Fb:swf is better supported. David Justice has been working on a Facebook library for Silverlight. It’s clear though that Flash is currently the more established option on both platforms, reflecting its maturity and broader acceptance.

Smart developers can already devise code to access the public APIs of platforms like Facebook and MySpace from a variety of clients; this is about making that easier. It benefits the social networking sites if a wider group of developers has access to their platforms, complete with multimedia features; equally it benefits the plug-in vendors if their runtimes work smoothly with the broadest possible range of services. Therefore we should expect more of this type of announcement.

It is interesting to see technology partnerships bridging political divides. Microsoft has a stake in Facebook, for example, while Google has a partnership with MySpace.

Perhaps the most interesting outcome may be more Facebook applications based on AIR, Adobe’s Flash platform for the desktop. The existence of AIR applications like Twhirl and Tweetdeck has significantly boosted Twitter; maybe it is now Facebook’s turn.

Open Cloud Manifesto – but from a closed group?

I’ve read the Open Cloud Manifesto with interest. It’s hard to find much to disagree with; I especially like this point on page 5:

Cloud providers must not use their market position to lock customers into their particular platforms and limit their choice of providers.

Companies like IBM won’t do that? I’m sceptical. Still, it is all very vague; and companies not on the list of supporters have been quick to point out the lack of any effort to achieve cross-industry consensus:

Very recently we were privately shown a copy of the document, warned that it was a secret, and told that it must be signed "as is," without modifications or additional input.  It appears to us that one company, or just a few companies, would prefer to control the evolution of cloud computing, as opposed to reaching a consensus across key stakeholders

says Microsoft’s Steve Martin. Amazon, perhaps the most prominent cloud computing pioneer, is another notable absentee.

It is a general truth that successful incumbents rarely strive for openness; whereas competitors who want to grow their market share frequently demand it.

The manifesto FAQ says:

There are many reasons why companies may not be listed. This moved quickly and some companies may not have been reached or simply didn’t have time to make it through their own internal review process.

A poor excuse. If a few more months would have added Microsoft, Amazon, Google and Salesforce.com to the list, it would have been well worth it and added hugely to its impact.

That said, I’ve found Amazon reluctant to talk about interoperability between clouds, while Salesforce.com makes no secret of its lock-in:

… you are making a platform decision, and our job is to make sure you choose our platform and not another platform, because once they have chosen another platform, getting them off it is usually impossible.

said CEO Marc Benioff when I quizzed him on the subject. I guess it could have taken more than a few months.

Sys-con vs Aral Balkan in Web 2.0 war over intellectual property

Aral Balkan is well-known in the Adobe Flash community as an independent speaker and developer; I first came across him a few years back as a champion of open source Flash. On Friday Balkan was surprised to find that he was apparently a key author at Ulitzer.com, a new online publication from Sys-con media which has been launched with the extravagant claim:

Ulitzer is designed to replace Wikipedia with Its three dimensional live content offerings and dynamic topic structure.

Balkan found that he had an entire sub-domain on Ulitzer devoted to his work, with articles he had written. He had not been consulted about this or offered any payment and was indignant, declaring on Twitter (note the sub-domain has been removed):

WTF is Ulitzer and why am I listed as an author on it? Sys-con, remove me now!!! http://aralbalkan.ulitzer.com

He was not alone; he and a number of other authors contacted Sys-con to have their content removed. Balkan expressed his feelings in a series of tweets referencing @plagiarismtoday and remarking that:

They’ve never had any respect for authors/speakers. I was once announced as speaking at an event I wasn’t approached about!

Fair enough; and the drama could have ended there, except that Sys-con decided to fight back on its blog.

Sys-Con libels me, calls me a "gay son of a bitch" in article titled "Turkish Fags Who Live in London"

tweeted Balkan in response to an intemperate blog post, letting loose a salvo of tweets expressing his emotions towards the company.

Next up, Sys-con blogged under the heading Turkish Web Designer declares Death on Twitter:

Company representatives contacted the Interpol and Scotland Yard to locate the Turkish Web Designer who is suspected to live in London. Aral Balkan seemed to be organizing a Twitter group who may harm the company representatives according to his Twitter logs.

and went on to recall the attempted assassination of Pope John Paul II.

All very silly; though in saying that I don’t want to underestimate the impact this kind of outburst can have on individuals – it can be profoundly disturbing. Certainly it is not the way a reputable media company should behave. It strikes me that Sys-con has underestimated the influence of a popular individual armed with Twitter, a blog, and the attention of numerous influential folk at companies including Adobe and Microsoft, which are Sys-con’s advertising clients or potential clients – Sys-con’s site is currently plastered with ads for Microsoft’s Visual Studio.

The episode interests me because at heart it is a battle over intellectual property. One way to look at Ulitzer.com is that it gets free content from others and profits from it – something which Experts Exchange also does, but in that case successfully and openly. That’s worrying for those of us who make a business from selling our content. The episode is causing some bloggers to have second thoughts about the Creative Commons license:

Take a scroll down the right-hand side of this blog and you will see that I have removed the Creative Commons License and reverted to specific copyright protection.

Why? Some rather interesting facts have come to light about a certain publishing house over the last few days. It seems that they are doing nothing short of scraping blogs and recycling content under the auspices of a "publishing portal" labelled with their brand, claiming the original blog authors as their own featured authors as if the content was written specifically for them.

says Robert Turrall.

I doubt Sys-con need fear assassins; but it should not underestimate the power of a community or the importance of reputation.