GPU programming for .NET

Exhibiting here at the NVIDIA GPU Technology Conference is a Cambridge-based company called Tidepowerd, whose product GPU.NET brings GPU programming to .NET developers. The product includes both a compiler and a runtime engine, and one advantage of this hybrid approach is that your code will run anywhere: it will use either NVIDIA or AMD GPUs, provided they support GPU programming, or fall back to the CPU if neither is available. From the samples I saw, it also looks easier than coding in CUDA C or OpenCL; you just add an attribute to have a function run on the GPU rather than the CPU. Of course the underlying issues remain – the GPU is still isolated and cannot access global variables available to the rest of your application – but the Visual Studio tools try to warn of any such issues at compile time.
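To give a flavour of the attribute-based approach, here is a minimal sketch of what such a kernel might look like. The attribute and index intrinsic names below are my own illustrative assumptions, not GPU.NET's documented API:

```csharp
// Hypothetical sketch only: [Kernel], ThreadIndex, BlockIndex and
// BlockDimension are assumed names, not confirmed GPU.NET identifiers.
public static class VectorMath
{
    // An attribute marks the method for GPU execution; the compiler
    // translates its IL, and the runtime dispatches it to whichever
    // device is available (NVIDIA, AMD, or CPU fallback).
    [Kernel]
    private static void AddKernel(float[] a, float[] b, float[] result)
    {
        // Compute this thread's global index, CUDA-style.
        int i = ThreadIndex.X + BlockIndex.X * BlockDimension.X;
        if (i < result.Length)
            result[i] = a[i] + b[i];
    }
}
```

Note the kernel can only touch its parameters – consistent with the isolation caveat above, it has no access to the application's global state.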

GPU.NET is in development and will go into public beta “by October 31st 2010” – head to the web site for more details.

I am not sure what sort of performance or other compromises GPU.NET introduces, but it strikes me that this kind of tool is needed to make GPU programming accessible to a wider range of developers.



3 comments to GPU programming for .NET

  • Paul

    Interesting development – thanks for the post, Tim.

    It would certainly be great to access the GPU from .NET languages. I’m curious, though, about better GPU support for unmanaged applications (since most existing computationally-intensive apps are unmanaged). For example, are existing math library vendors adding seamless GPU support? Intel’s a big one in this space, but I can’t see them optimizing their Math Kernel Library for another vendor’s hardware. But if they don’t, they risk being left behind in performance.

    It will be very interesting to see how this market evolves in the next 12 months.

  • John

    I was about to post a comment on the previous post (about GPUs) when I read this. GPUs need Java, .NET, scripting and so on if they are to see mainstream adoption.

  • Hi there,

    I’d like to say thanks for the post, Tim.

    I completely agree with John that GPUs have a long way to go before mainstream adoption. I also think we will see a dramatic spike in their adoption as soon as tools bridge the gap for the market. The price/performance is there; it’s just difficult for developers to create and integrate software for GPUs.

    In the future, GPU.NET will support scripting languages such as IronPython, thanks to JIT compilation.

    Paul, you might be interested in solutions like AccelerEyes’ Jacket for MATLAB. They have been working closely with NVIDIA and have achieved some great performance optimizations for their hardware. Another would be PGI’s Accelerator Fortran compilers.

    Unmanaged code will always give the developer more tunability than managed code, but sometimes the ability to develop and deploy code quickly will trump that.

    It certainly is an exciting time for a rapidly growing technology, and the more accessible we make it for developers, I hope, the more it will gain favor in the HPC space.