Tag Archives: software development

GPU Programming for .NET: Tidepowerd’s GPU.NET gets some improvements, more needed

When I attended the 2010 GPU Technology Conference hosted by NVIDIA I encountered Tidepowerd, which has a .NET library called GPU.NET for GPU programming.

GPU programming enables amazing performance improvements for certain types of code. Most GPU programming is done in C/C++, but Tidepowerd's GPU.NET lets you write that code in .NET, simply marking any methods you want to run on the GPU with a [Kernel] attribute:

[Kernel]
private static void AddGpu(float[] a, float[] b, float[] c)
{
    // Get the thread id and total number of threads
    int ThreadId = BlockDimension.X * BlockIndex.X + ThreadIndex.X;
    int TotalThreads = BlockDimension.X * GridDimension.X;

    for (int ElementIndex = ThreadId; ElementIndex < a.Length; ElementIndex += TotalThreads)
    {
        c[ElementIndex] = a[ElementIndex] + b[ElementIndex];
    }
}

GPU.NET is now at version 2.0 and includes Visual Studio Error List and IntelliSense support. This is useful, since some C# code will not run on the GPU. Strings, for example, are not supported. Take a look at this article which lists .NET OpCodes that do not work in GPU.NET.

GPU.NET requires an NVIDIA GPU with CUDA support and a CUDA 3.0 driver. It can run on Mac and Linux using Mono, the open source implementation of .NET. In principle, GPU.NET could also work with AMD GPUs or others via a vendor-specific runtime, but the latest FAQ says:

Support for AMD devices is currently under development, and support for other hardware architectures will follow shortly.

Another limitation is support for multiple GPUs. If you want to do serious supercomputing relatively cheaply, stuffing a PC with a bunch of Tesla GPUs is a great way to do it, but currently GPU.NET only uses one GPU per active thread, as far as I can tell from this note:

The GPU.NET runtime includes a work-scheduling system which can distribute device method (“kernel”) calls to multiple GPUs in the system; at this time, this only works for applications which call device-based methods from multiple host threads using multiple CPU cores. In a future release, GPU.NET will be able to use multiple GPUs to execute a single method call.

I doubt that GPU.NET or other .NET libraries will ever compete with C/C++ for performance, but ease of use and productivity count for a lot too. Potentially GPU.NET could bring GPU programming to a far broader range of .NET developers.

It is also worth checking out hoopoe's CUDA.NET and OpenCL.NET, which are free libraries. I have not done a detailed comparison, but would be interested to hear from others who have.

What’s coming in Microsoft Visual Studio

Microsoft is beginning to talk in detail about the next version of Visual Studio, though currently mostly in the area of ALM (Application Lifecycle Management) tools.

Continuous testing and support for diverse test frameworks

The new Visual Studio will support unit tests that run in the background. Visual Studio VP Jason Zander adds that:

With Visual Studio vNext we are enabling you to use your favorite unit testing framework integrated deeply into the IDE. We will support MS Test, xunit, and nunit with vNext. You will also be able to target both .NET and native C++ code. Adding test frameworks is an extensibility point as well so if you don’t see your favorite one listed here, you can easily add it.

Storyboarding in PowerPoint

This is not exactly a Visual Studio feature; but the new version will include a PowerPoint plug-in and templates that let you mock up a user interface. Why bother, when Microsoft already has SketchFlow in Expression Blend, and tools in Visio for laying out a GUI? Apparently many users are more comfortable in Office.

Integration with System Center

Visual Studio 2010 already includes a virtual lab management feature that lets you test applications on virtual machines managed by System Center Virtual Machine Manager. But what about deployed applications? A new connector for System Center Operations Manager brings similar integration, so that bugs can be reported directly to Team Foundation Server complete with stack trace enabled by IntelliTrace, a historical debugging feature.

Context switching

The thinking here is that when developers are interrupted they lose the flow of their work. Context switching lets you shelve your code changes, open windows and other state tied to the current task. You can then do other work; when you later resume the task, Visual Studio restores that state.

New Team Explorer

Team Explorer is the connector and window in Visual Studio that forms the client for Team Foundation Server. This has been revamped for the new version, and now uses “full asynchronous communication” to improve load time and responsiveness. There are new views for common categories of information, including work items, pending changes, builds, reports and bugs.

New Agile collaboration tools

There are new tools in the Web Access client for Team Foundation Server for feedback and collaboration on projects using an Agile methodology. The Backlog view shows features to be implemented in a sprint, a unit of project iteration, while the Task Board shows the backlog in a new visual view.

Connector for Project Server

A new connector for Project Server enables project-style views of project progress, such as Gantt charts.

Feedback tools

A new feedback mechanism aimed at stakeholders lets users enter feedback into Team Foundation Server. Tools include a web recorder that lets users comment on actions in a web application with linked recordings.

Code Clone Detection

This is a code quality feature that analyses a project looking for common code that should be refactored into a shared block.

Code Review

Code Review lets team members comment on code, similar in some ways to a commented document in Word.

Hosted Team Foundation Server on Azure

“Any team up and running within 30 seconds” is Microsoft’s claim for a new hosted option for Team Foundation Server. An exaggeration no doubt; but since a full-featured TFS takes some effort and infrastructure to implement, the hosted option will be welcome.

Visual Studio tends to be synchronized to some extent with new versions of Windows, so I would guess we will learn more about Visual Studio vNext at the Professional Developers Conference (though it may be called something else) in Anaheim on September 13-16 this year.

You can read more about Visual Studio vNext on Jason Zander’s blog and in a white paper [pdf]. 

Delphi and C++ Builder XE Starter Editions announced

Embarcadero has announced Starter Editions for both Delphi XE and C++ Builder XE, rapid development environments for native Windows applications.

These are not toy versions. The main technical difference between the Starter editions and the Professional versions is the absence of UML modelling, Class Explorer and Resource Manager tools. You also miss out on code completion for HTML, Live Code Templates, Subversion support, the translation manager, refactoring and unit testing.

Not a big deal: most of these omissions are either not critical or can be worked around. Most features are the same, and you can build excellent high-performance applications with these Starter Editions.

The real restriction is the licensing:

Delphi XE Starter can be used by individuals who will earn less than US $1,000 for the applications they create with Delphi, or organizations or companies with five or fewer developers and less than US$1,000 in total annual revenue. Purchase the Professional edition or higher for larger scale commercial use.

with a similar wording for C++ Builder XE Starter.

The other question: how much? At the time of writing the Starter Editions are not in the online store, but according to this article in SD Times they will be $199 each, or $149 for upgrades. Ownership of a Starter Edition gives you a $100 discount if you later upgrade to a higher edition.

Delphi is as good as ever, especially bearing in mind that Microsoft has no real equivalent. Visual Studio is mostly .NET-based, whereas Delphi compiles to native code; and Visual C++ is more challenging to learn and arguably less productive. It is true that developers are waiting impatiently for 64-bit Delphi and for a promised compiler for OS X (and perhaps iOS?); but in the meantime if you need to build Windows applications do not ignore it.

Update: the European price is €199 each, or upgrades for €149.

Visual Studio 2010 nine months on: how good has it proved?

Visual Studio 2010 was released on April 12th 2010. Nine months on, how good has it proved to be?


I researched Visual Studio 2010 in depth at the time, and was impressed overall. It was a huge release, partly because the IDE was rebuilt using Windows Presentation Foundation (WPF), and partly because of a large number of new features, including the F# language. Performance was always going to be an issue with the move to a .NET-based IDE, but on my machines I found it satisfactory.

Others have been less pleased with the performance. The comments to Jason Zander’s announcement of the Service Pack 1 beta last month make interesting reading. Here is the negative:

I am a professional .NET developer and I am really upset with VS 2010. It crashes more often than VS 2008. It is slow as hell. It even crashes when debugging. VS 2010 is built with WPF which is causing all these problems.

and here is the positive:

I don’t know what y’all complaining about – VS2010 is blazingly fast… at least on my machine.

I am not sure whether the performance issues depend more on the type of work you are doing, the size of the projects, or some other factor. One issue may be graphics performance, since this makes a big difference to WPF, whereas it matters less to Visual Studio 2008 and earlier.

Thinking back to this time last year, I also recall how focused Visual Studio 2010 seemed on .NET, including Silverlight. Later on we got the announcement of Visual Studio LightSwitch, a RAD database application tool which builds Silverlight clients. It now seems obvious, especially following the PDC (Professional Developers Conference) in November, that the vision of the developer team at Microsoft did not align with the vision of the Windows team; and that the Windows team seems to have won that argument internally.

It is odd, because Silverlight has the potential to solve problems for the company. It is a technology that extends from the desktop to Windows Phone 7, which is well-suited to app store deployment thanks to the way apps are isolated, and which potentially can run on multiple platforms. Now with Silverlight 5, promised for release this year, Microsoft is adding more Windows-specific features and allowing more fragmentation between versions. Silverlight on Windows Phone 7 is based on version 3, the Mac version has more limited capabilities than the Windows version, and so on.

Microsoft said at PDC that “HTML 5” is its broad-reach platform. That suggests that what Visual Studio needs is HTML 5 designers and JavaScript libraries that integrate with Microsoft’s server technologies and make it easier to develop HTML applications for multiple form factors, including small devices.

It is a confusing story, and I would love to know if the subject came up in CEO Steve Ballmer’s recent discussions with Bob Muglia, President of Microsoft’s Server and Tools business. The outcome of those discussions is that Muglia will be leaving Microsoft in the summer.

We will have to wait for Visual Studio 2012, maybe, to discover any change in its direction. In the meantime, SP1 adds a new help viewer, in response to many complaints, as well as a few new features for testing and debugging. There is also a long list of bug-fixes, some of which look significant.

Let me add that while the list looks bad, it is no more than you would expect for a tool of this complexity, and in my own testing Visual Studio 2010 has worked well.

I agree though with some of the commenters who note that Microsoft is slow to react when bugs are reported. It will be more than a year after the initial release when SP1 is finished, though you can use the beta for production code if you dare.

I would be interested in hearing from users of Visual Studio 2010. How are you finding it, or did you try it and go back to Visual Studio 2008? I realise that adoption of a new IDE for production work tends to be slow, because developers are reluctant to switch mid-project.

Creating a Web Application for the Google Chrome Web Store

I noticed an old post here getting a lot of hits: My first Google Chrome Web Application. Unfortunately it was based on an early version of Chrome’s app format. Here is an update.

My web application in this example is this blog. I created a manifest in Notepad.
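A minimal hosted-app manifest looks something like this – note that the name, description, icon filename and URLs below are illustrative rather than my exact file, so substitute your own:

{
  "name": "ITWriting.com",
  "description": "This blog, packaged as a Chrome web app",
  "version": "1.0",
  "icons": { "128": "icon_128.png" },
  "app": {
    "urls": [ "http://itwriting.com/" ],
    "launch": { "web_url": "http://itwriting.com/" }
  }
}

The app section is what makes it a hosted web app rather than an extension: web_url is the page Chrome opens when the app icon is clicked.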


Next, using my artistic skills, I made an icon of the required size: 128×128. I used .png format.

Then I put the manifest and the icon into a folder called itwriting-app. I tested it by using Chrome’s Tools – Extensions – Load unpacked extension. It worked fine.


Next I compressed the folder to a zip file. I just right-clicked in Windows and chose Send to – Compressed (zipped) folder.

Then I logged into the Developer Dashboard at the Chrome Web Store (I had to pay $5.00) and uploaded the app.


Next, I had to complete some metadata. I chose a couple of categories, uploaded the icon as the image for the app, and uploaded a screenshot of a sample article. I clicked Publish Changes and it was done.


If you click Install, you get an icon in the Chrome Apps list, which appears when you open a new tab.


Of course it is just a link to a web site. Why is this interesting?

A few reasons. One is that it is easy to get started, which promotes usage.

Next, you can charge for your app. You use the Licensing API to check whether the user has paid, is a trial user, or has not paid. This also depends on the user’s Google ID, promoting Google’s identity system as well as its payment system. Users get single sign-on if they are already logged into Google. Developers do not have to worry about storing passwords, which can be an embarrassment.

Web Apps are also interesting if you request additional permissions. There are three at the moment: geolocation, notifications, and unlimited storage. These give additional capabilities to your app. You can also enable autoupdating.

Finally, Google wants us to accept that web applications are apps too, blurring the boundaries between desktop, mobile device, and web.

What you read in 2010: top posts on ITWriting.com

With three days to go, traffic on ITWriting.com in 2010 is more than 50% up over that of 2009 with over 1 million unique visitors for the first time. Thank you for your attention in another crazy year in technology.

So what did you read? It is intriguing to look at the stats for the whole year, which are different in character from stats for a week or month. The reason is that over a short period, it is the news of the day that is most read – posts like The Java Crisis and what it means for developers. Over the year though, it is the in-depth technical posts like How to backup Small Business Server 2008 on Hyper-V that draw more readers, along with those posts that are a hit with people searching Google for help with an immediate problem like Cannot open the Outlook window – what sort of error message is that?

The most-read post in 2010 though is in neither category. In July I made a quick post noting that the Amazon Kindle now comes with a web browser based on WebKit and a free worldwide internet connection. Mainly thanks to some helpful comments from users it has become a place where people come for information on that niche subject.

On the programming side, posts about Microsoft’s changing developer story are high on the list:

Lessons from Evernote’s flight from .NET

Microsoft wrestles with HTML5 vs Silverlight futures

Microsoft’s Silverlight dream is over

Another post which is there in the top twenty is this one about Adobe Flash and web services:

SOA, REST and Flash/Flex – why Flash does not PUT

along with this 2009 post on the pros and cons of parallel programming:

Parallel Programming: five reasons for caution. Reflections from Intel’s Parallel Studio briefing

This lightweight post also gets a lot of hits:

Apple iPad vs Windows Tablet vs Google Chrome OS

It is out of date now and I should do a more considered update. Still, it touches on a big theme: the success of the Apple iPad. When you take that alongside the interest in Android tablets, perhaps we can say that 2010 was the year of the tablet. I first thought the tablet concept might take off back in 2003/2004 when I got my first Acer tablet. I was wrong about the timing and wrong about the operating system; but the reasons why tablets are a good idea still apply.

Watching these trends is a lot of fun and I look forward to more surprises in 2011.

First impressions of Google TV – get an Apple iPad instead?

I received a Google TV as an attendee at the Adobe MAX conference earlier this year; to be exact, a Logitech Revue. It is not yet available or customised for the UK, but with its universal power supply and standard HDMI connections it works OK, with some caveats.

The main snag with my evaluation is that I use a TV with built-in Freeview (over-the-air digital TV) and do not use a set top box. This is bad for Google TV, since it wants to sit between your set top box and your TV, with an HDMI in for the set top box and an HDMI out to your screen. Features like picture-in-picture, TV search, and the ability to choose a TV channel from within Google TV, depend on this. Without a set-top box you can only use Google TV for the web and apps.


I found myself comparing Google TV to Windows Media Center, which I have used extensively both directly attached to a TV, and over the network via Xbox 360. Windows Media Center gets round the set top box problem by having its own TV card. I actually like Windows Media Center a lot, though we had occasional glitches. If you have a PC connected directly, of course this also gives you the web on your TV. Sony’s PlayStation 3 also has a web browser with Adobe Flash support, as does the Nintendo Wii, though its browser is more basic.


What you get with Google TV is a small set top box – in my case it slipped unobtrusively onto a shelf below the TV – plus a wireless keyboard, an HDMI connector, and an IR blaster. Installation is straightforward and the box recognised my TV to the extent that it can turn it on and off via the keyboard. The IR blaster lets you position an infra-red transmitter optimally for any IR devices you want to control from Google TV – typically your set top box.

I connected to the network through wi-fi initially, but for some reason this was glitchy and would lose the connection for no apparent reason. I plugged in an ethernet cable and all was well. This problem may be unique to my set-up, or something that gets a firmware fix, so no big deal.

There is a usability issue with the keyboard. This has a trackpad which operates a mouse pointer, under which are cursor keys and an OK button. You would think that the OK button represents a mouse click, but it does not. The mouse click button is at top left on the keyboard. Once I discovered this, the web browser (Chrome, of course) worked better. You do need the OK button for navigating the Google TV menus.

I also dislike having a keyboard floating around in the living room, though it can be useful especially for things like Gmail, Twitter or web forums on your TV. Another option is to control it from a mobile app on an Android smartphone.

The good news is that Google TV is excellent for playing web video on your TV. YouTube has a special “leanback” mode optimised for viewing from a distance, which works reasonably well, though amateur videos that look tolerable in a small frame in a web browser look terrible played full-screen in the living room. BBC iPlayer works well in on-demand mode; the download player would not install. Overall it was a bit better than the PS3, which is also pretty good for web video, but probably not by enough to justify the cost if you already have a PS3.

The bad news is that the rest of the Web on Google TV is disappointing. Fonts are blurry, and the resolution necessary to make a web page viewable from 12 feet back is often annoying. Flash works well, but Java seems to be absent.

Google also needs to put more thought into personalisation. The box encouraged me to set up a Google account, which will be necessary to purchase apps, giving me access to Gmail and so on; and I also set up the Twitter app. But typically the living room is a shared space: do you want, for example, a babysitter to have access to your Gmail and Twitter accounts? It needs some sort of profile management and log-in.

In general, the web experience you get by bringing your own laptop, netbook or iPad into the room is better than Google TV in most ways apart from web video. An iPad is similar in size to the Google TV keyboard.

Media on Google TV has potential, but is currently limited by the apps on offer. Logitech Media Player is supplied and is a DLNA client, so if you are lucky you will be able to play audio and video from something like a NAS (network attached storage) drive on your network. Codec support is limited.

In a sane, standardised world you would be able to stream music from Apple iTunes or a Squeezebox server to Google TV but we are not there yet.

One key feature of Google TV is for purchasing streamed videos from Netflix, Amazon VOD (Video on Demand) or Dish Network. I did not try this; they do not work yet in the UK. Reports are reasonably positive; but I do not think this is a big selling point since similar services are available by many other routes. 

Google TV is not in itself a DVR (Digital Video Recorder) but can control one.

All about the apps

Not too good so far then; but at some point you will be able to purchase apps from the Android marketplace – which is why attendees at the Adobe conference were given boxes. Nobody really knows what sort of impact apps for TV could have, and it seems to me that as a means of running apps – especially games – on a TV this unobtrusive device is promising.

Note that some TVs will come with Google TV built-in, solving the set top box issue, and if Google can make this a popular option it would have significant impact.

It is too early then to write it off; but it is a shame that Google has not learned the lesson of Apple, which is not to release a product until it is really ready.

Update: for the user’s perspective there is a mammoth thread on avsforum; I liked this post.

Is the triumph of the GPU the failure of the CPU?

I’m at NVIDIA’s GPU tech conference in San Jose. The central theme of the conference is that the capabilities of modern GPUs enable substantial performance gains for general computing, not just for graphics, though most of the examples we have seen involve some element of graphical processing. The reason you should care about this is that the gains are huge.

Take Matlab for example, a popular language and IDE for algorithm development, data analysis and mathematical computation. We were told in the keynote here yesterday that Matlab is offering a parallel computing toolkit based on NVIDIA’s CUDA, with speed-ups from 10 to 40 times. Dramatic performance improvements like this open up new possibilities in computing.

Why has GPU performance advanced so rapidly, whereas CPU performance has levelled off? The reason is that they use different computing models. CPUs are general-purpose. The focus is on fast serial computation, executing a single thread as rapidly as possible. Since many applications are largely single-thread, this is what we need, but there are technical barriers to increasing clock speed. Of course multi-core and multi-processor systems are now standard, so we have dual-core or quad-core machines, with big performance gains for multi-threaded applications.

By contrast, GPUs are designed to be massively parallel. A Tesla C1060 has not 2 or 4 or 8 cores, but 240; the C2050 has 448. These are not the same as CPU cores, but nevertheless do execute in parallel. The clock speed is only 1.3GHz, whereas an Intel Core i7 Extreme is 3.3GHz, but the Intel CPU has a mere 6 cores. An Intel Xeon 7560 runs at 2.266GHz and has 8 cores. The lower clock speed in the GPU is one reason it is more power-efficient.

NVIDIA’s CUDA initiative is about making this capability available to any application. NVIDIA made changes to its hardware to make it more amenable to standard C code, and delivered CUDA C with extensions to support it. In essence it is pretty simple. The extensions let you specify functions to execute on the GPU, allocate memory for pointers on the GPU, and copy memory between the GPU (called the device) and the main memory on the PC (called the host). You can also synchronize threads and use shared memory between threads.
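To give a flavour of what this looks like, here is a minimal CUDA C sketch of a vector addition – an illustration of the pattern rather than production code, with no error checking:

#include <cuda_runtime.h>
#include <stdio.h>

// __global__ marks a kernel: a function that runs on the GPU (the device)
__global__ void addGpu(const float *a, const float *b, float *c, int n)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) memory
    float *a = (float *)malloc(bytes);
    float *b = (float *)malloc(bytes);
    float *c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Allocate device (GPU) memory
    float *dA, *dB, *dC;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);

    // Copy the inputs from host to device
    cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);

    // Launch the kernel: <<<number of blocks, threads per block>>>
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    addGpu<<<blocks, threadsPerBlock>>>(dA, dB, dC, n);

    // Copy the result back; cudaMemcpy waits for the kernel to finish
    cudaMemcpy(c, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", c[0]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(a); free(b); free(c);
    return 0;
}

The kernel launch syntax, the cudaMalloc/cudaMemcpy calls and the host/device split are exactly the housekeeping described above; a plain C version would just be a loop.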

The reward is great performance, but there are several disadvantages. One is the challenge of concurrent programming and the subtle bugs it can introduce.

Another is the hassle of copying memory between host and device. The device is in effect a computer within a computer. Shifting data between the two is relatively slow.

A third is that CUDA is proprietary to NVIDIA. If you want your code to work with ATI’s equivalent, called Stream, then you should use the OpenCL library, though I’ve noticed that most people here seem to use CUDA; I presume they are able to specify the hardware and would rather avoid the compromises of a cross-GPU library. In the worst case, if you need to support both CUDA and non-CUDA systems, you might need to support different code paths depending on what is detected at runtime.
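Detecting at runtime whether a CUDA-capable GPU is present is the easy part; a sketch along these lines, where runGpuPath and runCpuPath are hypothetical stand-ins for your own implementations:

#include <cuda_runtime.h>

/* Returns 1 if at least one CUDA-capable GPU is available, 0 otherwise */
static int cudaAvailable(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    return (err == cudaSuccess && count > 0);
}

/* At startup, pick a code path:
   if (cudaAvailable()) runGpuPath(); else runCpuPath(); */

The harder part is writing and maintaining both implementations.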

It is all a bit messy, though there are tools and libraries to simplify the task. For example, this morning we heard about GMAC, which makes host and device appear to use a single address space, though I imagine there are performance implications.

NVIDIA says it is democratizing supercomputing, bringing high performance computing within reach for almost anyone. There is something in that; but at the same time as a developer I would rather not think about whether my code will execute on the CPU or the GPU. Viewed at the highest level, I find it disappointing that to get great performance I need to bolster the capabilities of the CPU with a specialist add-on. The triumph of the GPU is in a sense the failure of the CPU. Convergence in some form or other strikes me as inevitable.

NVIDIA talks up GPU computing, presents roadmap

At the NVIDIA GPU Technology Conference in San Jose CEO Jen-Hsun Huang talked up the company’s progress in GPU computing, showed some example applications, and announced a high-level roadmap for future graphics chip architectures. NVIDIA has three areas of focus, he said: the Quadro line for visualisation, Tesla for parallel computing, and GeForce/Tegra for personal computing. Tegra is a system on a chip aimed at mobile devices. Mobile, says Huang, is “a completely disruptive force to all of computing.”

NVIDIA’s current chip architecture is called Fermi. The company is settling on a two-year product cycle and will deliver Kepler in 2011 with 3 to 4 times the performance (expressed as Gigaflops per watt) of Fermi. Maxwell in 2013 will have around 12 times the performance of Fermi. In between these architecture changes, NVIDIA will do “kicker” updates to refresh its products, with one for Fermi due soon.

The focus of the conference though is not on super-fast graphics cards in themselves, but rather on using the GPU for general purpose computing. GPUs are very, very good at doing mathematics fast and in parallel. If you have an application that does intensive calculations, then executing that part of the code on the GPU can offer impressive performance increases. NVIDIA’s CUDA library for C lets you do exactly that. Another option is OpenCL, a standard that works across GPUs from multiple vendors.

Adobe uses CUDA for the Mercury Playback engine in Creative Suite 5, greatly improving performance in After Effects, Premiere Pro and Photoshop, but with the annoyance that you have to use a compatible NVIDIA graphics card.

The performance gain from GPU programming is so great that it is unavoidable for applications in relevant areas, such as simulation or statistical analysis. Huang gave a compelling example during the keynote, bringing heart surgeon Dr Michael Black on stage to talk about his work. Operating on a beating heart is difficult because it presents a moving target. By combining robotic surgery with software that is able to predict the heart’s movement through simulation, he is researching how to operate on a heart almost as if it were stopped and with just a small incision.

Programming the GPU is compelling, but difficult. NVIDIA is keen to see it become part of mainstream programming, for obvious reasons, and there are new libraries and tools which help with this, like Parallel Nsight for Visual Studio 2010. Another interesting development, announced today, is CUDA for x86, being developed by PGI, which will let your CUDA code run even when an NVIDIA GPU is not present. Even if the performance gains are limited, it will mean developers who need to support diverse systems can run the same code, rather than having a different code path when no CUDA GPU is detected.

That said, GPU programming still has all the challenges of concurrent development, prone to race conditions and synchronization problems.

Stuffing a server full of GPUs is a cost-effective route to super-computing. I took a brief look at the exhibition, which includes this Colfax CXT8000 with 8 Tesla GPUs; it also has three 1200W power supplies. It may cost $25,000 but if you look at the performance you are getting for the price, machines like this are great value.


Delphi XE includes licenses for older versions back to Delphi 7

I’ve just picked up that Delphi XE, the latest RAD Windows development suite from Embarcadero, includes licenses for older versions going back to Delphi 7.

There’s an explanation and list of what’s on offer here. Delphi 7 was the last version to use the old fully native code IDE and is delightfully fast and lightweight by today’s standards. Delphi 2007 was the last version before the big Unicode changes in Delphi 2009, which often broke code, so it could be useful for older projects.

The FAQ includes a few points of interest. Embarcadero is dismissive of the old Delphi for .NET (before Prism) and will not supply it:

That is an old technology that was replaced by Delphi Prism and we don’t want to encourage use of that old product.

If you have purchased XE and want to take advantage of the offer, you must do so within 180 days.