NVIDIA CUDA 4.0 simplifies GPU programming, aims for mainstream

NVIDIA has announced CUDA 4.0, a major update to its C++ toolkit for general-purpose programming on the GPU. The idea is to take advantage of the many cores of NVIDIA’s GPUs to speed up tasks that may not be graphics-related.

There are three key features:

Unified Virtual Addressing provides a single address space for the main system RAM and the GPU RAM, or even RAM across multiple GPUs if available. This significantly simplifies programming.

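To make that concrete, here is a minimal sketch of my own (not NVIDIA sample code), assuming CUDA 4.0 or later on a 64-bit system with a UVA-capable GPU: with a single address space, cudaMemcpy can be passed cudaMemcpyDefault and work out the copy direction from the pointers themselves, instead of needing explicit host-to-device or device-to-host flags.

```cpp
// Minimal UVA sketch; error checking omitted for brevity.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t n = 1 << 20;
    float *host = NULL, *dev = NULL;

    cudaMallocHost((void**)&host, n * sizeof(float)); // pinned host memory, part of the unified address space
    cudaMalloc((void**)&dev, n * sizeof(float));      // device memory, also in the same address space

    // With UVA the runtime infers the copy direction from the pointer values,
    // so one cudaMemcpyDefault flag replaces the host-to-device / device-to-host flags.
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyDefault);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDefault);

    cudaFree(dev);
    cudaFreeHost(host);
    printf("copies done\n");
    return 0;
}
```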

GPUDirect 2.0 is NVIDIA’s name for peer-to-peer communication between multiple GPUs in the same computer. Instead of copying data from one GPU to main memory and then to a second GPU, it can be transferred directly between the GPUs.
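
As a rough illustration rather than anything official, the CUDA 4.0 runtime exposes this through peer-to-peer calls along these lines (the device numbers and buffer size are just placeholders):

```cpp
// Peer-to-peer copy sketch for two GPUs in one machine; assumes devices 0 and 1
// support peer access (e.g. two Fermi-class cards). Error checking omitted.
#include <cuda_runtime.h>
#include <cstddef>

int main() {
    const size_t bytes = 1 << 20;
    float *buf0 = NULL, *buf1 = NULL;
    int canAccess = 0;

    // Can device 0 address device 1's memory directly?
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);

    cudaSetDevice(0);
    cudaMalloc((void**)&buf0, bytes);
    if (canAccess) cudaDeviceEnablePeerAccess(1, 0); // enable access from device 0 to device 1 (flags must be 0)

    cudaSetDevice(1);
    cudaMalloc((void**)&buf1, bytes);

    // Copy device 1 -> device 0; with peer access enabled this avoids staging through system RAM.
    cudaMemcpyPeer(buf0, 0, buf1, 1, bytes);

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}
```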

Thrust is a C++ template library for CUDA, similar to the parallel algorithms in the C++ Standard Template Library (STL). NVIDIA claims that typical Thrust routines are 5 to 100 times faster than their STL or Intel Threading Building Blocks equivalents. Thrust is not really new, but it is now being pushed into the mainstream of CUDA programming.
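
For a flavour of what this looks like in practice, here is a minimal sketch (my own, assuming the Thrust headers bundled with the CUDA toolkit) of a GPU sort that reads almost exactly like its STL equivalent:

```cpp
// Thrust sketch: sort a million random integers on the GPU. Compile with nvcc.
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <cstdlib>

int main() {
    thrust::host_vector<int> h(1 << 20);
    for (size_t i = 0; i < h.size(); ++i) h[i] = rand();

    thrust::device_vector<int> d = h;            // copy the data to the GPU
    thrust::sort(d.begin(), d.end());            // parallel sort runs on the device
    thrust::copy(d.begin(), d.end(), h.begin()); // copy the results back to the host
    return 0;
}
```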

Other new features include debugging (cuda-gdb) support on Mac OS X, support for new/delete and virtual functions in C++, and improvements to multi-threading.

The common theme of these features is to make it easier for mortals to move from general C/C++  programming to CUDA programming, and to port existing code. This is how NVIDIA sees CUDA progress:

[image: NVIDIA’s view of CUDA progress]

Certainly I see increasing interest in GPU programming, and not just among super-computer researchers.

A weakness is that CUDA only works on NVIDIA GPUs. You can use OpenCL for generic GPU programming but it is less advanced.

CUDA 4.0 release candidate will be available from March 4 if you sign up for the CUDA Registered Developer Program.

Another cloud fail: disappearing Google accounts

Every time a story like this runs it sets back cloud computing. Yesterday many users of Google Mail reported a problem with missing email:

I was on my eMail normally and when I refreshed all my account settings, eMail, labels, contacts etc has just disappeared.

Google’s Apps Status Dashboard has a series of updates:

[image: Google Apps Status Dashboard updates]

It does say that the issue affects “less than 0.08% of the Google Mail userbase”. While that does not sound like much, if Google Mail has 150 million users that would be 120,000 people. Of those accounts, only a proportion will be critical, as some of us use Gmail only casually; but some people are severely inconvenienced:

This really is wildly inconvenient and worrisome, though. I rely on my Gmail an enormous amount for my job, and not having access to it is really crippling me. I can’t even do my work at this point, because all the material I need is in attachments on Gmail, so all I can do is wait until I (hopefully) get it back! I suppose I should have saved my files to my computer, but hindsight is 20/20.

Google is indicating that it will restore the data soon though it is all rather vague.

Of course there are also failed Exchange Servers and the like out there; sometimes backups fail too and data is lost. Cloud providers like Google do tend to lack transparency though, making times like this anxious ones for those who are affected.

The real lesson: if you have data you really care about, keep it in more than one place.

What’s in HP’s Beats Audio, marketing aside?

If you are like me you may be wondering what is actually in Beats Audio technology, which comes from HP in partnership with Beats by Dr Dre.

The technical information is not that easy to find; but a comment to this blog directed me to this video:

http://www.precentral.net/what-beats-audio


According to this, it comes down to four things:

1. Redesigned headphone jack with better insulation, hence less ground noise.


2. Discrete headphone amp to reduce crosstalk. This is also said to be “more powerful”, but since we do not know what it is more powerful than, I am not going to count that as technical information.

3. Isolated audio circuitry.

4. Software audio profiles which I think means some sort of equalizer.

These seem to me sensible features, though what I would really like to see is specifications showing the benefits versus other laptops of a comparable price.

There may be a bit more to Beats Audio in certain models. For example, the Envy 14 laptop described here has a “triple bass reflex subwoofer”, though this user was not greatly impressed:

I ran some audio tone test sites and found out the built in laptop speakers do not generate any sound below 200 Hz. In the IDT audio drivers speaker config there is only configuration for 2 speaker stereo system, no 2.1 speaker system (which includes subwoofer). I’m miffed, because on HP advertising copy claims “HP Triple Bass Reflex Subwoofer amplifiers put out 12W total while supporting a full range of treble and bass frequencies.” Clearly I am not getting “full range” frequencies.

Still, what do you expect from a subwoofer built into a laptop?

Straining to hear: the benefits of SACD audio

A discussion on a music forum led me to this SACD, on which pianist George-Emmanuel Lazaridis plays Liszt’s Grandes études de Paganini. It was recommended as a superb performance and a superb recording.


I bought it and have to agree. The music is beautiful and the recording astonishingly realistic. Close your eyes and you can almost see the piano hammers striking the strings.

Since this sounds so good, I took the opportunity to explore one of my interests: the audible benefits of SACD or other high-resolution audio formats versus the 16/44 resolution of CD.

I have set up a simple comparison test. While it is imperfect and would not pass scientific scrutiny, I report it for its anecdotal interest.

First I set my Denon SACD player to its best quality, without any bass management or other interference with the sound.

Then I wired the analog output from Front Left and Front Right to one input on my amplifier, and the analog Stereo output to an external analog-to-digital converter (ADC). The ADC is set to 16/44. When played in SACD stereo mode, these two sets of analog outputs should be the same.

The output from the ADC is then connected to a digital input on the amplifier.

Now I can use the amplifier remote to switch between pure SACD, and SACD via an additional conversion to and from 16/44 sound, which in theory could be encoded on a CD.

At first I could just about tell which was which. The SACD sounded a little more open, with more depth to the sound. It was more involving. I could not describe it as a huge difference, but perhaps one that would be hard to do without once you had heard it. A win for SACD?

Then I realised that the output from the ADC was slightly too low; the SACD was a little louder. I increased the volume to compensate.

Having matched the volume more exactly, I could no longer tell the difference. Both sounded equally good.

I enlisted some volunteers with younger and sharper hearing than mine, but without positive results.

I am not going to claim that nobody could tell the difference. I also recognise that a better SACD player, or a better audio system, might reveal differences that my system disguises.

Still, the test is evidence that on a working system of reasonable quality, the difference is subtle at most. Which is also what science would predict.

The SACD still sounds wonderful of course; and it has a surround sound option which a CD cannot deliver. I also believe that SACDs tend to be engineered with more attention to the demands of high-end audio systems than CDs, which are tailored for the mass market.

Against that, CDs are more convenient because you can rip them to a music server. Personally I rarely play an actual CD these days.

Don’t be fooled. 24-bit will not fix computer audio

Record producer Jimmy Iovine, now chairman of Interscope and CEO of Beats by Dr Dre, says there are huge quality problems in the music industry. I listened to his talk during HP’s launch event for its TouchPad tablet and new smartphones.

“We’re trying to fix the degradation of music that the digital revolution has caused,” says Iovine. “Quality is being destroyed on a massive scale”.

So what has gone wrong? Iovine’s speech is short on technical detail, but he identifies several issues. First, he implies that 24-bit digital audio is necessary for good sound:

We record our music in 24-bit. The record industry downgrades that to 16-bit. Why? I don’t know. It’s not because they’re geniuses.

Second, he says that “the PC has become the de facto home stereo for young people” but that sound is an afterthought for most computer manufacturers. “No-one cares about sound”.

Finally, he says that HP, working with – no surprise – his own company Beats by Dr Dre, has fixed the problem:

We have a million laptops with Beats audio in with HP … HP’s laptops, the Envy and the Pavilion, actually feel the way the music feels in the studio. I can tell you, that is the only PC in the world that can do that.

Beats Audio is in the TouchPad as well, hence Iovine’s appearance. “The TouchPad is a musical instrument,” says Iovine.

I am a music and audio enthusiast and part of me wants to agree with Iovine. Part of me though finds the whole speech disgraceful.

Let’s start with the positive. It is true that the digital revolution has had mixed results for audio quality in the home. In general, convenience has won out over sound quality, and iPod docks are the new home stereo, compromised by little loudspeakers in plastic cabinets, usually with lossy-compressed audio files as the source.

Why then is Iovine’s speech disgraceful? Simply because it is disconnected from technical reality for no other reason than to market his product.

Iovine says he does not know why 24-bit files are downgraded to 16-bit. That is implausible. The first reason is historical. 16-bit audio was chosen for the CD format back in the eighties. The second reason is that there is an advantage in reducing the size of audio data, whether that is to fit more on a CD, or to reduce download time, bandwidth and storage on a PC or portable player.

But how much is the sound degraded when converted from 24-bit to 16-bit? PCM audio has a sampling rate as well as a bit-depth. CD or Redbook quality is 16-bit sampled at 44,100 Hz, usually abbreviated to 16/44. High resolution audio is usually 24/96 or even 24/192.

The question then: what are the limitations of 16/44 audio? We can be precise about this. Nyquist’s Theorem says that a 44,100 Hz sampling rate is enough to perfectly recapture a band-limited audio signal whose highest frequency is below 22,050 Hz. Human hearing may extend to 20,000 Hz in ideal conditions, but few can hear much above 18,000 Hz, and this diminishes with age.

Redbook audio also limits the dynamic range (the difference between the quietest and loudest passages) to around 96 dB.
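
To put rough numbers on those limits (my back-of-envelope arithmetic, not taken from any specification): the sampling rate caps the highest representable frequency at half its value, and each bit of resolution adds roughly 6 dB of dynamic range:

$$
f_{\max} = \frac{f_s}{2} = \frac{44{,}100}{2} = 22{,}050 \text{ Hz}, \qquad
\mathrm{DR}_{16} \approx 20 \log_{10}\!\big(2^{16}\big) \approx 96 \text{ dB}, \qquad
\mathrm{DR}_{24} \approx 20 \log_{10}\!\big(2^{24}\big) \approx 144 \text{ dB}
$$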

In theory then it seems that 16/44 should be good enough for the limits of human hearing. Still, there are other factors which mean that what is achieved falls short of what is theoretically possible. Higher resolution formats might therefore sound better. But do they? See here for a previous article on the subject; I have also done a more recent test of my own. It is difficult to be definitive; but my view is that in ideal conditions the difference is subtle at best.

Now think of a PC or tablet computer. The conditions are far from ideal. There is no room for a powerful amplifier, and any built-in speakers are tiny. Headphones partly solve this problem for personal listening, even more so when they are powered headphones such as the high-end ones marketed by Beats, but that has nothing to do with what is in the PC or tablet.

I am sure it is true that sound quality is a low priority for most laptop or PC vendors, but one of the reasons is that the technology behind digital audio converters is mature and even the cheap audio chipsets built into mass-market motherboards are unlikely to be the weak link in most computer audio setups.

The speakers built into a portable computer are most likely a bit hopeless – and it may well be that HP’s are better than most – but that is easily overcome by plugging in powered speakers, or using an external digital-to-analog converter (DAC). Some of these use USB connections so that you can use them with any USB-equipped device.

Nevertheless, Iovine is correct that the industry has degraded audio. The reason is not 24-bit vs 16-bit, but poor sound engineering, especially the reduced dynamic range inflicted on us by the loudness wars.

The culprits: not the PC manufacturers as Iovine claims, but rather the record industry. Note that Iovine is chairman of a record company.

It breaks my heart to hear the obvious distortion in the loud passages during a magnificent performance such as Johnny Cash’s version of Trent Reznor’s Hurt. That is an engineering failure.

Tiny data projectors using Texas Instruments DLP chips

Remember when data projectors were huge and expensive, and had bulbs so delicate that you were not meant to move them for half an hour after switch off?


Things are different now. At Mobile World Congress you can hold an HD projector in the palm of your hand or build one into a mobile phone. The projectors I saw were based on DLP Pico chipsets from Texas Instruments, which contain up to 2 million hinge-mounted microscopic mirrors. If you add a light source and a projection lens, you get a tiny projector.

The obvious use case is that you can turn up at an ad-hoc meeting and show photos, charts or slides on the nearest wall, instead of huddling round a laptop screen or setting up an old-style data projector.

Hands on with Google Cloud Connect: Microsoft docs in Google’s cloud

Google has released Cloud Connect for Microsoft Office, and I gave it a quick try.

Cloud Connect is a plug-in for Microsoft Office which installs a toolbar into Word, Excel and PowerPoint. There is no way that I can see to hide the toolbar. Every time you work in Office you will see Google’s logo.


From the toolbar, you sign in to a Google Docs account, for which you must sign up if you have not already done so. The sign-in involves accepting a rather bewildering dialog that grants Cloud Connect on your computer permission to access Google Docs and your contacts on your behalf.

The Cloud Connect settings synchronise your document with Google Docs every time you save, or whenever the document is updated on Google’s servers.


Once a document is synced, the Cloud Connect toolbar shows a URL to the document:

[image: Cloud Connect toolbar showing the document URL]

You get simultaneous editing if more than one person is working on the document. Google Docs will also keep a revision history.

You can easily share a document by clicking the Share button in the toolbar:

[image: sharing a document from the Cloud Connect toolbar]

I found it interesting that Google stores your documents in their original Microsoft formats, not as Google documents. If you go to Google Docs in a web browser, they are marked by Microsoft Office icons.


If you click on them in Google Docs online, they appear in a read-only viewer.

That said, in the case of Word and Excel documents the online viewer has an option to Edit Online.


This is where it gets messy. If you choose Edit Online, Google Docs converts your Office document to a Google document, with possible loss of formatting. Worse still, if you make changes, these are not synced back to Microsoft Office because you are actually working on a second copy:

[image: Google Docs listing showing two copies of the same spreadsheet]

Note that I now have two versions of the same Excel document, distinguished only by the icon and by the title having been forced to lower case. One is a Google spreadsheet, the other an Excel spreadsheet.

Google says this is like SharePoint, but better:

Google Cloud Connect vastly improves Microsoft Office 2003, 2007 and 2010, so companies can start using web-enabled teamwork tools without upgrading Microsoft Office or implementing SharePoint 2010.

Google makes the point that Office 2010 lacks web-based collaboration unless you have SharePoint, and says its $50 per user per year Google Apps for Business is more affordable. I am sure that is less than typical SharePoint rollouts – though SharePoint has other features. The best current comparison would be with the Business Productivity Online Standard Suite at $10 per user per month, which is more than Google but still relatively inexpensive. BPOS is dated though, and an even better comparison will be Office 365, which includes SharePoint 2010 online, though this is still in beta.

Like Google, Microsoft has a free offering, SkyDrive, which also lets you upload and share Office documents.

Microsoft’s Office Web Apps have an advantage over Cloud Connect, in that they allow in-browser editing without conversion to a different format, though the editing features on offer are very limited compared with what you can do in the desktop applications.

Despite a few reservations, I am impressed with Cloud Connect. Google has made setup and usage simple. Your document is always available offline, which is a significant benefit over SharePoint – and one day I intend to post on how poorly Microsoft’s SharePoint Workspace 2010 performs both in features and usability. Sharing a document with others is as easy as with other types of Google documents.

The main issue is the disconnect between Office documents and Google documents, and I can see this causing confusion.

Update: I uninstalled Cloud Connect after a couple of days. Two reasons. First, the chunky toolbar is annoying and takes valuable working space. Second, I had performance issues when working with documents opened from SharePoint. I guess the two do not get on well together.

Microsoft has its own unsurprisingly negative take on the product here. Apparently Cloud Connect uses the Track Changes feature under the covers, hence breaking this feature for any other purpose. If so, I would like to have been warned about this. On the other hand, I still like the usability of Cloud Connect. Microsoft is right to observe that auto-sync could result in inadvertent document sharing; but the simple and prominent sharing dialog is easier to use than SharePoint permissions.

Where is Microsoft going with its Rich Client API? Microsoft drops some clues as developers fret

A discussion taking place in a Windows Presentation Foundation (WPF) newsgroup, in a thread called WPF vNext, shows how Microsoft’s confused rich client development strategy is affecting developers, and offers some clues about what is coming.

Developer Rudi Grobler, who posted on his blog some wishes for Windows Phone, Silverlight and WPF, describes his difficulty in discerning Microsoft’s direction:

The strategy for the future is very vague… I daily get questions about should I use WPF or Silverlight? Is WPF dead? Is Silverlight dead? etc…

Jeremiah Morrill describes his frustration with WPF performance:

Microsoft has known of WPF’s performance problems since the first time they wrote a line of code for it.  You will be hard pressed to find a customer that hasn’t complained about perf issues.  And you will not have gone to a PDC in the last few years and not hear folks bring this up to the WPF team. This is 3rd party info by now, but I’ve been told the issues I have noted have been brought up internally, only to be disregarded.

and voices his frustration with what has happened to Silverlight:

Silverlight’s strategy USED to be about cross-platform, get-the-runtime-on-every-device-out-there, but it’s obvious that is not the strategy any more.  What happened to Silverlight on set-top-boxes?  Android? I read an article that some people saw it on XBox, but nobody has talked about it since.  Cross-platform with OSX has become symbolic at best.

Developer Peter O’Hanlon describes how the uncertainty has affected his business:

I run a small consultancy, and I bet the company on WPF because I could sell the benefits of faster development time for desktop applications. We have spent a lot of time learning the ins and outs of the platform and saw that Silverlight gave us a good fit for developing web apps. In one speech Microsoft caused me months of work repairing the damage when Muglia seemed to suggest that these technologies are dead and Microsoft are betting the farm on Html 5. We hand our code over to the client once we have finished, and they ask us why they need to invest in a dead technology. I don’t care what you say on this thread, Microsoft gave the impression that html 5 was the way to go.

[…] Muglia’s statement about the future being html caused serious issues for my company. We lost two bids because the managers didn’t want to commit to "dead" technology.

Microsoft’s Jaime Rodriguez, WPF Technical Evangelist, offers the following response:

You are telling us to improve perf in WPF. We hear this loudly and we are trying to figure how to solve it. Unfortunately, there are a few pieces to consider:

1)      First of all,  a lot of our customers are telling us to invest more into Silverlight.  Let’s say (again made up) that demand is  4-to 1. How do we justify a revamp of the graphics architecture in WPF.  This is not trivial work; the expertise in this space is limited, we can’t clone our folks to 5x to meet everyone’s needs.

2)      Let’s assume we did take on the work.  My guess (again, I am not engineering) is that it would take two years to implement and thorougly test a release.  At the stage that WPF is at, a rearchitecture or huge changes on the graphics stack would be 80% about testing and 20% about the dev work.    It is not a trivial amount of work.   Would we get the performance you want across myriad of devices? We don’t know. WPF bet on hardware, and there is new devices out  there that are trading hardware for battery, weight, or simply for cost.  it would suck to do that much work, make you wait a long time, and then not get there. Let’s get real on the asks; you say "improve perf" but you are asking us to do a "significant re-write"; these two asks are different.

3)      By the time we get there, what will be a more powerful framework?  Silverlight, WPF, C++, or SuperNew.Next ??  we don’t know today.  We go back to #1 and look at demand We are in agreement that "customers" is the driving principle.

The WPF has looked at the trade-offs, and risk many times.  We are also looking at what customers need. Jer, to you it is all about graphics.  To many others, it is about data.  So, how do we serve all customers??
The strategy is exactly what you have seen/heard:

1)      WPF 4.5 is going to have some significant data binding performance improvements.

2)      We are not redoing the graphics framework, but we are doing a lot of work to let you interoperate with lower level graphics so that if you need more graphics perf you can get it, and still keep the RAD of the rest of the framework.

[…] Hope it helps; apologies if it does not, and again, wait for Rob Relyea or someone else to make it official.  That is just my 2c as a person who bet heavily on WPF but has seen the data that drives the trade-offs the team has to make.

This will be disappointing to former Microsoft evangelist Scott Barnes, who has initiated a Fix WPF campaign.

The problem though is lack of clarity about the strategy. Look at Rodriguez’s third point above. Nobody can predict the future; but what is Microsoft’s current bet? Silverlight, HTML5, or maybe SuperNew.Next – for example, the rumoured new native code UI for Windows 8 or some variant of it?

My own view is that the current difficulties are rooted in what happened with Longhorn and the fact that the Windows team abandoned WPF back in 2004. I’ve written this up in more detail here.

Lest this post be misinterpreted, let me emphasise that Microsoft has a good track record in terms of supporting its Windows APIs long-term, even the ones that become non-strategic. Applications built with the first version of .NET still run; applications built with Visual Basic 6 mostly still run; applications built for ancient versions of Windows often still run or can be coaxed into running. Build an application with WPF or Silverlight today, and it will continue to work and be supported for many years to come.

My guess is that events like the coming 2011 MVP Summit and Mix 2011 in April will bring some clarity about Microsoft’s mobile, tablet, Windows and cross-platform story for rich clients.

Update: Barnes has his own take on this discussion here.

Microsoft still paying the price for botched Vista with muddled development strategy

Professional Developers Conference 2003. Windows Longhorn is revealed, with three “pillars”:

  • Avalon, later named Windows Presentation Foundation (WPF)
  • Indigo, later named Windows Communication Foundation (WCF)
  • WinFS, the relational file system that was later abandoned

With the benefit of hindsight, Microsoft got many things right with the vision it set out at PDC 2003. The company saw that a revolution in user interface technology was under way, driven by the powerful graphics capabilities of modern hardware, and that the old Win32 graphics API would have to be replaced, much as Windows itself replaced DOS and the command line. XAML and WPF were its answer, bringing together .NET, DirectX, vector graphics, XML and declarative programming to form a new, rich presentation framework that was both designer-friendly and programmer-friendly.

Microsoft also had plans to take a cut-down version of WPF cross-platform as a browser plugin. WPF/Everywhere, which became Silverlight, was to take WPF to the Mac and to mobile devices.

I still recall the early demos of Avalon, which greatly impressed me: beautiful, rich designs which made traditional Windows applications look dated.

Unfortunately Microsoft largely failed to execute its vision. The preview of Longhorn handed out at PDC, which used Avalon for its GUI, was desperately slow.

Fast forward to April 2005, and Windows geek Paul Thurrott reports on Longhorn progress:

I’m reflecting a bit on Longhorn 5048. My thoughts are not positive, not positive at all. This is a painful build to have to deal with after a year of waiting, a step back in some ways. I hope Microsoft has surprises up their sleeves. This has the makings of a train wreck.

Thurrott was right. But why did Longhorn go backwards? Well, at some point – and I am not sure of the date, but I think sometime in 2004 – Microsoft decided that the .NET API for Longhorn was not working: performance was too poor, defects too many. The Windows build was rebased on the code for Server 2003 and most of .NET was removed, as documented by Richard Grimes.

Vista, as we now know, was not a success for Microsoft, though it was by no means all bad and laid the foundation for the well-received Windows 7. My point though is how this impacted Microsoft’s strategy for the client API. WPF shipped in Vista and was also back-ported to Windows XP, but it was there as a runtime for custom applications, not as part of the core operating system.

One way of seeing this is that when Longhorn ran into the ground and had to be reset, the Windows team within Microsoft vowed never again to depend on .NET. While I do not know if this is correct, as a model it makes sense of what has subsequently happened with Silverlight, IE and HTML5, and Windows Phone:

  • Windows team talks up IE9 at PDC 2010 and does not mention Silverlight
  • Microsoft refuses to deliver a tablet version of Windows Phone OS with its .NET application API, favouring some future version of full Windows instead

Note that in 2008 Microsoft advertised a job vacancy that included this in the description:

We will be determining the new Windows user interface guidelines and building a platform that supports it. We’ll eliminate much of the drudgery of Win32 UI development and enable rich, graphical, animated user interface by using markup based UI and a small, high performance, native code runtime.

In other words, the Windows team has possibly been working on its own native code equivalent to XAML and WPF, or perhaps a native code runtime for XAML presentation markup. Maybe this could appear in Windows 8 and support a new touch-oriented user interface.

In the meantime though, Microsoft’s developer division has continued a strong push for .NET, Silverlight and most recently Windows Phone. Look at Visual Studio or talk to the development folk, and you still get the impression that this is the future of Windows client applications.

All this adds up to a muddled development story, which is costly when it comes to evangelising the platform.

In particular, eight years after PDC 2003 there is no clarity about Microsoft’s rich client or RIA (Rich Internet Application) designer and developer story. Is it really WPF, Silverlight and .NET, or is it some new API yet to be revealed, or will IE9 as a runtime play a key role?

There is now a little bit more evidence for this confusion and its cost; but this post is long enough and I have covered it separately.

Appcelerator releases Titanium Mobile 1.6

Appcelerator has released Titanium Mobile 1.6, an update to its cross-platform app framework for Apple iOS and Google Android.

The update adds 26 features for Android and 9 for iOS. The Facebook integration has been completely redone to keep up with the latest Facebook API. There is also beta support for the Android NDK – native code development.

Android 1.6 is now deprecated and will not be supported in future releases.

While not a big release in itself, Titanium Mobile 1.6 is required for the forthcoming Titanium+Plus modules, libraries which add support for features such as barcode reading and PayPal payments.

There is no sign yet of Aptana integration, following the acquisition of this JavaScript IDE in January.

Updating to the 1.6 SDK was delightfully easy on Windows. Just open Titanium Developer and click the prompt.
