Category Archives: c++

Embarcadero launches free Community Edition of Delphi and C++Builder, mainly for non-commercial use

A new Community Edition of Delphi and C++Builder, visual development tools for Windows, Mac, Android and iOS, has been released by Embarcadero.

The tools are licensed for non-commercial use or for commercial use (for up to 5 developers) where revenue is less than $5000 per year. It is not totally clear to me, but I believe this means the total revenue (or for non-profits, donations) of the individual or organisation, not just the revenue generated by Community Edition applications. From the EULA:

The Community Edition license applies solely if Licensee cumulative annual revenue (of the for-profit organization, the government entity or the individual developer) or any donations (of the non-profit organization) does not exceed USD $5,000.00 (or the equivalent in other currencies) (the “Threshold”). If Licensee is an individual developer, the revenue of all contract work performed by developer in one calendar year may not exceed the Threshold (whether or not the Community Edition is used for all projects).

Otherwise, the Community Editions are broadly similar to the Professional Editions of these tools. Note that even the Professional Edition lacks database drivers other than for local or embedded databases so this is a key differentiator in favour of the Architect or Enterprise editions.

An annoyance is that you cannot install both the Delphi and C++Builder Community Editions on the same PC. For that you need RAD Studio, which has no Community Edition.

Delphi and C++ Builder are amazing tools for Windows desktop development, with a compiler that generates fast native code. For cross-platform there is more competition, not least from Microsoft’s Xamarin tools, but the ability to share code across multiple platforms has a powerful attraction.

Get Delphi Community Edition here and C++Builder Community Edition here.

Adapting a native code DLL to be called from a Store or Universal Windows app

I am writing a Bridge game in C# – yes, I have been doing this for some time; it does run now, but it is not ready for public unveiling.

It is good fun though and a learning experience, as I am writing it as a Windows 8 Store app. This means it can also be a Universal Windows Platform app, but I have kept it compatible with Windows 8.1 as I don’t want to lose that large market of Windows 8 users who have not upgraded to 10. Hmm.

Bridge is a card game in which a pack of 52 cards is dealt into 4 hands of 13 cards. Each hand is played as a sequence of 13 four-card “tricks”, and each trick is won by one of two opposing pairs of players according to the cards played. Each pair of course tries to win as many tricks as possible, so one point of interest is how many tricks can be won if you play perfectly (ie with full knowledge of all four hands). Another point of interest is how each card played affects the potential number of tricks you can win with best play. For example, leading a King might cost you a trick (or more) if your opponents hold both the Ace and the Queen of that suit.

This is called “double dummy” analysis and smart people have written algorithms to calculate the answers. A double dummy analysis is useful in a bridge game for two reasons. One is that users may like to know, after playing a hand, what their best score could have been, or even to analyse the hand and see how the outcome would have varied had they played this card rather than that card at trick such-and-such. The other is that you can use it to assist the software in finding the best play. Of course it is important that the software plays fair by not using knowledge of all four hands beyond what would be known by human players; but it is legitimate to try out various possible hands that match what is currently known and use double dummy analysis on these hands.

One such smart person is Bo Haglund, who wrote a C++ Windows library for double dummy analysis called Double Dummy Solver (DDS) and released it as open source under the Apache 2 license. It works very well, is widely used in the Bridge software community, and has now been ported to Mac and Linux; you can find the latest code on GitHub.

Modifying a native code DLL for use with a Store app

I wanted to use the library in my own Bridge game but faced a compatibility problem. Windows Store apps can only call into DLLs that meet certain requirements, such as using only a subset of the Windows API, and DDS did not meet those requirements. My choice was either to port the DLL to C#, or to modify the code so that it would work as a Windows Runtime native DLL.

I have no doubt that the code could be ported to C# but it looks like rather a long job that would result in a library with slower performance (please feel free to prove me wrong). I thought it would be more realistic to modify the code, so I created a new Windows 8.1 DLL project in Visual Studio 2013 (I am now using Visual Studio 2015 but it is the same for this) and set about modifying the code so that it would compile.

In no particular order, here are some notes on what I learned.

I was able to get the DLL to compile after disabling the multi-threading support (more on this later), and commenting out some functions that I don’t yet need.

Another issue I hit was that Visual C++ by default performs “Security Development Lifecycle” checks (compile with /sdl). This means that common functions like strcpy, strcat, sprintf and others will not compile. You have to use the “secure” versions of those functions: strcpy_s, strcat_s, sprintf_s and so on. These are specific to Microsoft’s libraries though. Of course you can just not compile with /sdl, or define _CRT_SECURE_NO_WARNINGS, but I chose to fix all of these. Now the library compiled.
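
As an illustration, a typical fix looks something like this (a minimal sketch with invented variable names, not the actual DDS code; the _s variants take the size of the destination buffer so the runtime can check for overruns):

const char* suitName = "Spades";   // invented example data
int tricks = 9;
char buf[80];

// Before: rejected when compiling with /sdl
// strcpy(buf, suitName);
// sprintf(buf, "%s, tricks won: %d", suitName, tricks);

// After: the "secure" variants take the destination buffer size
strcpy_s(buf, sizeof(buf), suitName);
sprintf_s(buf, sizeof(buf), "%s, tricks won: %d", suitName, tricks);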

But did it work? No. I had introduced a stupid bug which took me a while to fix. Did it then work? Yes, but it took me some time to get it working from C#.

Next, I kept getting DllNotFoundException errors. OK, so you have to add the DLL as content in your C# project, and make sure it is set to copy to your output. I still got DllNotFoundException. It turns out that you get this exception even when the DLL is present, if a dependency of the DLL is not found. What dependency was not found? I downloaded the Sysinternals Process Monitor utility, set the filter to monitor my C# game, and excluded SUCCESS results. Then I tried to load the DLL. This told me that it was looking for the file msvcr120_app.dll (the Windows Runtime version of the Visual C++ runtime library). My first thought was to add the runtime libraries from the appx deployment packages, in:

C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1\ExtensionSDKs\Microsoft.VCLibs\12.0

Then I discovered that all you need to do is to add a reference to the Visual C++ runtime packages, which is much easier. That fixed the DllNotFoundException.

Next, I had some problems calling the 64-bit DLL with Platform Invoke (PInvoke) from C#. I found it easier to compile both my C# app and the DLL itself as 32-bit code. I may go back to the 64-bit option later.
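
For what it is worth, the DLL side of a P/Invoke call is just a plain C export, roughly like this hedged sketch (AnalyseDeal is an invented name, not one of the real DDS exports, and the C# declaration in the comment is illustrative only; the bitness of the DLL must match the calling process, which is why compiling both as 32-bit simplifies things):

// C linkage prevents C++ name mangling, so P/Invoke can find the function;
// __stdcall must match the CallingConvention declared on the C# side
extern "C" __declspec(dllexport) int __stdcall AnalyseDeal(const char* pbnDeal, int* tricks);

// C# side (for reference):
// [DllImport("dds.dll", CallingConvention = CallingConvention.StdCall, CharSet = CharSet.Ansi)]
// static extern int AnalyseDeal(string pbnDeal, out int tricks);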

Concurrency issues

Now I had everything working; except that my DDS port was far inferior to the standard one, because it was single-threaded. The original used QueueUserWorkItem, which is not available in a Windows Runtime DLL. I searched for what to do, and came across this MSDN article which recommends using RunAsync, WorkItemHandler and IAsyncAction. However my DLL was not currently compiled with /ZW (“Consume Windows Runtime Extension”). I could add that of course; but then my DLL would have a dependency on the Windows Runtime, and if I wanted to use the code for, say, Windows 7, it would not work, or not without yet more #ifdef blocks. No big deal perhaps; but my preference was to avoid this dependency.

There may be other solutions, but the one that I found was to use the Concurrency Runtime. Previously, QueueUserWorkItem was called in a for loop. I simply modified this to use a parallel_for loop instead, using the example here for guidance. I also added:

#include <ppltasks.h>

using namespace concurrency;

to the top of the code. It works well, speeding up the analysis by about three times on my quad-core desktop. Of course I was greatly helped by the fact that the code was already written with concurrency in mind.
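
The shape of the change is roughly as follows (a simplified sketch under my own naming, not the actual DDS source; note that parallel_for itself is declared in ppl.h):

#include <ppl.h>
using namespace concurrency;

void SolveChunk(int k);   // stands in for the real per-thread worker in DDS

void SolveAllChunks(int noOfChunks)
{
    // Before: one work item queued per chunk
    // for (int k = 0; k < noOfChunks; k++)
    //     QueueUserWorkItem(SolveChunkCallback, &chunkData[k], 0);

    // After: the Concurrency Runtime partitions the range across cores
    // and blocks until every iteration has completed
    parallel_for(0, noOfChunks, [](int k) {
        SolveChunk(k);
    });
}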

The effect is spoiled by the time it takes to load the DLL, but fortunately you can get DDS to solve multiple boards in one call, though I have yet to experiment with this.

Coding Office for cross platform: Microsoft explains its approach

At last month’s @Scale conference in San Francisco, developers from a number of well-known companies (Google, Facebook, Twitter, Dropbox and others) spoke about the challenge of scaling applications and services to millions or even billions of users.

Among the speakers was Igor Zaika, Distinguished Engineer in the Microsoft Office team, and the video (embedded below) is illuminating not only as an example of how to code across multiple platforms, but also as an insight into where the company is taking Office.

Zaika gives a brief résumé of the history of Office, mentioning how the team has experienced the highs and lows of cross-platform code. Word 6.0 (1993) was great on Windows but a disaster on the Mac. The team built an entire Win32 emulation layer for the Mac, enabling a high level of code reuse, but resulting in a poor user experience and lots of platform-specific bugs and performance issues in the Mac version.

Next came Word 98 for the Mac, which took the opposite approach, forking the code to create an optimized Mac-specific version. It was well received and great for user experience, but “it was only fun for the first couple of years,” says Zaika. As the Windows version evolved, merging code from the main trunk into the Mac version became increasingly difficult.

Today Microsoft is committed not only to Mac and Windows versions of Word, but to all the major platforms, by which Zaika means Apple (including iOS), Android, Windows (desktop and WinRT) and Web. “If we don’t, we are not going to have a sustainable business,” he says.

WinRT is short for the Windows Runtime, also known as Metro, or as the Store App platform. Zaika says that the relationship between WinRT and Win32 (desktop Windows) is similar to that between Apple’s OS X and iOS.

Time for a brief digression of my own: some observers have said that Microsoft should have made a dedicated version of Windows for touch/mobile, rather than attempting to do both at once in Windows 8. The truth is that it did, but chose to bundle both into a single operating system. Windows RT (the ARM version used in Surface RT) is a close parallel to the iPad, since only WinRT apps can be installed. What seems to be happening now is that Windows Phone and Windows RT will be merged, so that the equivalence of WinRT and iOS will be closer and more obvious.

Microsoft’s goal with Office is to achieve high content fidelity and consistency of functionality across all platforms, but to use native UX/UI frameworks so that each version integrates properly with the operating system on which it runs. The company also wants to achieve a faster shipping cycle; the traditional two-year cycle is not fast enough, says Zaika.

What then is Microsoft’s technical strategy for cross-platform Office now? The starting point, Zaika explains, is a shared core of C++ code. Office has always been written in C/C++, and “that has worked out well for us,” he says, since it is the only language that compiles to native code across all the platforms (web is an exception, and one that Zaika did not talk much about, except to note the importance of “shared service code,” cloud-based code that is used for features that do not need to work offline).

In order for the shared non-visual code to work correctly cross-platform, Microsoft has a number of platform abstraction layers (PALs). No #ifdefs (to handle platform differences) are allowed in the shared code itself. However, rather than a monolithic Win32 emulation as used in Word 6.0 for the Mac, Microsoft now has numerous mini-PALs. There is also a willingness to compromise, abandoning shared code if it is necessary for a good platform experience.
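
To make the idea concrete, a mini-PAL is just a narrow platform-neutral interface with one implementation per platform; a hedged sketch along these lines (invented names, not Microsoft’s actual code):

// file_pal.h - shared code includes only this header and contains no #ifdefs
namespace pal {
    bool FileExists(const char* path);
}

// file_pal_win32.cpp - compiled only into Windows builds
#include "file_pal.h"
#include <windows.h>
bool pal::FileExists(const char* path) {
    return GetFileAttributesA(path) != INVALID_FILE_ATTRIBUTES;
}

// file_pal_posix.cpp - compiled only into Mac, iOS and Android builds
#include "file_pal.h"
#include <unistd.h>
bool pal::FileExists(const char* path) {
    return access(path, F_OK) == 0;
}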

How do you ensure cross-platform fidelity in places where you cannot share code? The answer is unit testing, says Zaika, and there is a strong reliance on this in Office development.

There is also an abstraction layer for document rendering. Office requires composition, animation and touch APIs on each platform. Microsoft uses DirectX on Win32, a thin layer over Apple’s CoreAnimation API on Mac and iOS, a thin layer over XAML on WinRT, and a thinnish layer over Java on Android.

The outcome of Microsoft’s architectural work is a high level of code sharing, despite the commitment to native frameworks for UX. Zaika showed a slide revealing code sharing of over 95% for PowerPoint on WinRT and Android.

What can Microsoft-watchers infer from this about the future of Office? While there are no revelations here, it does seem that work on Office for WinRT and for Android is well advanced.

Office for WinRT has implications for future Windows tablets. If a version of Office with at least the functionality of Office for iPad runs on WinRT, there is no longer any need to include the Windows desktop on future Windows tablets – by which I mean not laptop replacements like Surface 3.0, but smaller tablets. That will make such devices less perplexing for users than Surface RT, though with equivalent versions of Office on both Android and iOS tablets, the unique advantages of Windows tablets will be harder to identify.

Thanks to WalkingCat on Twitter for alerting me to this video.

Microsoft and developer trust

David Sobeski, former Microsoft General Manager, has written about Trust, Users and The Developer Division. It is interesting to me since I recall all these changes: the evolution of Microsoft C++ from Programmer’s Workbench (which few used) to Visual C++ and then Visual Studio; the original Visual Basic, and the transition from VBX to OCX; DDE, OLE, OLE Automation and COM automation; the arrival of C# and .NET and the misery of Visual Basic developers who had to learn .NET; how DCOM (Distributed COM) was the future, especially in conjunction with Transaction Server, and then how it wasn’t, and XML web services were the future, with SOAP and WSDL, until they weren’t because REST is better; the transition from ASP to ASP.NET (totally different) to ASP.NET MVC (largely different); and of course the database APIs, the canonical case of Microsoft’s API mind-changing, as DAO gave way to ADO gave way to ADO.NET, not to mention various other SQL Server client libraries, and then there was LINQ and LINQ to SQL and Entity Framework, and it is hard to keep up (speaking personally, I have not yet really got to grips with Entity Framework).

There is much truth in what Sobeski says; yet his perspective is, I feel, overly negative. At least some of Microsoft’s changes were worthwhile. In particular, the transition to .NET and the introduction of C# was successful, and it proved a strong and popular platform for business applications – more so than would have been the case if Microsoft had stuck with C++ and COM-based Visual Basic forever; and yes, the flight to Java would have been more pronounced if C# had not appeared.

Should Silverlight XAML have been “fully compatible” with WPF XAML as Sobeski suggests? I liked Silverlight; to me it was what client-side .NET should have been from the beginning, lightweight and web-friendly, and given its different aims it could never be fully compatible with WPF.

The ever-expanding Windows API is bloated and inconsistent for sure; but the code in Petzold’s Programming Windows mostly still works today, at least if you use the 32-bit edition (1998). In fact, Sobeski writes of the virtues of Win16 transitioning to Win32s, Win32 and Win64 in a mostly smooth fashion, without making it clear that this happened alongside the introduction of .NET and other changes.

Even Windows Forms, introduced with .NET in 2002, still works today. ADO.NET too has been resilient, and if you prefer not to use LINQ or Entity Framework then concepts you learned in 2002 will still work now, in Visual Studio 2013.

Why then does this talk of developer trust resonate so strongly? It is all to do with the Windows 8 story: not so much the move to Metro itself, but the way Microsoft communicated (or did not communicate) with developers, and the abandonment of frameworks that were well liked. 2010 was the darkest year for Microsoft platform developers. Up until PDC in October, rumours swirled. Microsoft was abandoning .NET. Everything was going to be HTML or C++. Nobody would confirm or deny anything. Then at PDC 2010 it became obvious that Silverlight was all but dead, in terms of future development; the same Silverlight that a year earlier had been touted as the future both of the .NET client and the rich web platform, in Microsoft’s vision.

Developers had to wait a further year to discover what Microsoft meant by promoting HTML so strongly. It was all part of the strategy for the tablet-friendly Windows Runtime (WinRT), in which HTML, .NET and C++ are intended to be on an equal footing. Having said which, not all parts of the .NET Framework are supported, mainly because of the sandboxed WinRT environment.

If you are a skilled Windows Forms developer, or a skilled Win32 developer, developing for WinRT is a hard transition, even though you can use a familiar language. If you are a skilled Silverlight or WPF developer, you have knowledge of XAML which is a substantial advantage, but there is still a great deal to learn and a great deal which no longer applies. Microsoft did this to shake off its legacy and avoid compromising the new platform; but the end result is not sufficiently wonderful to justify this rationale. In particular, there could have been more effort to incorporate Silverlight and the work done for Windows Phone (also a sandboxed and touch-based platform).

That said, I disagree with Sobeski’s conclusion:

At the end of the day, developers walked away from Microsoft not because they missed a platform paradigm shift. They left because they lost all trust. You wanted to go somewhere to have your code investments work and continue to work.

Developers go where the users are. The main reason developers have not rushed to support WinRT with new applications is that they can make more money elsewhere, coding for iOS and Android and desktop Windows. All Windows 8 machines other than those running Windows RT (a tiny minority) still run desktop applications, whereas no version of Windows below 8 runs WinRT apps, making it an easy decision.

Changing this state of affairs, if there is any hope of change, requires Microsoft to raise the profile of WinRT among users more than among developers, by selling more Windows tablets and by making the WinRT platform more compelling for users of those tablets. Winning developer support is a factor of course, but I do not take the view that lack of developer support is the chief reason for lacklustre Windows 8 adoption. There are many more obvious reasons, to do with the high demands a dual-personality operating system makes on users.

That said, the events of 2010 and 2011 hurt the Microsoft developer community deeply. The puzzle now is how the company can heal those wounds but without yet another strategy shift that will further undermine confidence in its platform.

Hands on Cross-Platform Windows and Mac development with C++ Builder XE3

I have been writing about Embarcadero’s RAD Studio XE3, which includes Delphi and C++ Builder, and as part of the research I set this up for cross-platform development on a Mac.

My setup uses a Parallels Virtual Machine to run Windows 7, on which RAD Studio XE3 is installed. This is convenient for Mac development, since the IDE itself is Windows only. That said, if I were doing this in earnest I would use multiple displays or perhaps separate physical machines, since it is no fun debugging in a VM with the application running in another operating system behind it.

Is it straightforward to configure? Not too bad. You have to install Xcode on the Mac, and in addition, you have to install the Xcode command line tools, which you can do from Xcode itself, in Preferences – Downloads – Components, or as a separate download.

Then you need to find the Platform Assistant (paserver), an agent which runs on the Mac to support remote debugging. I was annoyed to find that this has a dependency on Java SE 6, though to be fair it was downloaded and installed automatically. Actually I find this amusing, after hearing from an Embarcadero VP how native code is all the rage and nobody uses managed code any more. Except Embarcadero, for the paserver.

Once that is all up and running you are done on the Mac side. On Windows, having installed RAD Studio of course, you then need to set up a remote profile. The way to do this is first to start a new cross-platform project, which means using the FireMonkey framework. Then right-click TargetPlatforms in the Project Manager and add a platform. If you add OS X but no remote profile exists, you will be prompted to create one.

This is where something went slightly wrong. I created a profile and could connect OK. However, when I tried to build the project, I got an error: Unable to open include file ‘CoreFoundation/CoreFoundation.h’. You get this if for some reason the required library files have not been pulled over from the Mac. The fix is to edit the profile and click Update Local File Cache.

After that I was away. Set breakpoints if needed, build and debug.

Cross-platform is not new in RAD Studio; it was in XE2 too, and in some ways in better shape, since you could target iOS as well as OS X. C++ Builder XE3 is actually a new generation though: with the 64-bit compiler in Update 1, it is the first release to use Clang and LLVM, and from what I understand this architecture represents the future for Embarcadero’s tools.

Updates are promised in 2013 for both Delphi and C++Builder – this roadmap is most of what we have to go on – which will add first iOS and later Android support, at what the company calls a “low cost”. Unlike the iOS support in XE2, the coming update will not use the Free Pascal compiler, but the new architecture based on LLVM. This also suggests that the add-on will replace some of the guts of Delphi when it arrives, so it will be significant and somewhat risky.

The cross-platform capabilities look good, though I am somewhat wary of FireMonkey, which is less complete and mature than the Windows-only VCL. For example, no web browser component is supplied, which is a significant limitation, though I am sure there are ways of hacking this, perhaps through ChromiumEmbedded, for which a Delphi FireMonkey wrapper exists.

It is worth a bit of effort, since Delphi and C++Builder are productive tools, and the output is true native code, which still has advantages.

More information on RAD Studio XE3 is here.

Embarcadero launches C++ Builder XE3: first built on Clang

Embarcadero has released C++ Builder XE3, the first version built on the open source Clang front end for the LLVM compiler. This has enabled the product to support many new features, including extensive C++ 11 support and a 64-bit compiler.

While it is a shame that the old Borland C/C++ compiler is no more, it makes sense for Embarcadero to bring its VCL (Visual Component Library) and FireMonkey frameworks to Clang rather than continuing to work on its own compiler.

The other big change is cross-platform support. Through FireMonkey, C++ Builder XE3 supports Windows (including Windows 8) and Mac OS X, with iOS and Android promised for 2013.

Although Windows 8 is supported on the desktop, there is no official support for the Windows Runtime (Windows Store apps). Instead, Embarcadero has a curious application framework called Metropolis which fakes the Windows 8 style but with desktop applications, as if the Windows 8 world were not already sufficiently confusing.

The big question is how compatible VCL applications created for earlier versions of C++ Builder are with the XE3 release. With a new compiler and major changes to the VCL in order to support the new compiler, you might expect some issues.

“That’s what we’ve been spending all of our time on,” Embarcadero VP Michael Swindell told me. “This is fully compatible with all our previous C++ dialects. We’ve completely re-engineered the C++ front end but it’s engineered to be compatible with C++ Builder applications and Borland C++ applications.”

I would rather hear that from developers, though, than from Embarcadero.

Although C++ Builder is a cross-platform compiler, it only runs on Windows. A common scenario is to run Windows in a virtual machine on a Mac, using VMware Fusion or Parallels.

Similar changes are on the way for Delphi, which uses the same VCL and FireMonkey frameworks but with the Delphi language based on Object Pascal.

Note that the new Clang-based compiler is 64-bit only. You are meant to continue using the old Borland compiler for 32-bit, making it hard to maintain a single code base for both.

Embarcadero adopts open source Clang for future C++ versions

A couple of months ago Embarcadero’s John Ray Thomas published a roadmap for the company’s C++ tools. Coming soon: not only a long-awaited 64-bit compiler for Windows, but also native iOS and Android support. On top of that, there are plans for “the very best in C++11 and C99 language and library compliance in the industry.”

Sounds good; but this forthcoming upgrade is not quite what it seems. I spoke to technical evangelist David Intersimone about the changes. The company is adopting Clang, an open source project which creates a C/C++/Objective C front-end for the LLVM compiler. “We’re integrating all that into our IDE,” said Intersimone. “We’re also going to be using that same toolchain world with our Delphi compiler as well.”

“We have analysed what to do about C++. Our compilers both for Delphi and C++ have been around for a lot of years, and over time it just got harder and harder to add new capabilities to make programming simpler and also to add power and richness to the languages. In C++ in particular the language continues to be updated, with now C++ 11. So we made the decision to use Clang and LLVM on the C++ side, for the 64-bit compiler. We’re going to keep our existing compiler for 32-bit Windows for now, and then eventually replace that.

“For Delphi we’re still using our existing compiler to do some of the work, but we’ve been working on a next-generation compiler for Delphi and that is still in the works. For a while we’ll have two compilers, the existing architecture compilers, and then new compilers.”

Embarcadero has its own C++ extensions to support component development, and is working on adding them to Clang. “We’re leveraging and extending the Clang compiler with the property-method-event extensions that we added to our own C++ compiler.”
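
For readers unfamiliar with these extensions, this hedged sketch shows roughly what they look like in C++Builder’s dialect (an invented component of my own; __property, __published and __fastcall are the non-standard keywords, while TComponent and TNotifyEvent are VCL types):

class TCounter : public TComponent {
private:
    int FValue;
    TNotifyEvent FOnChange;
    void __fastcall SetValue(int AValue) {
        FValue = AValue;
        if (FOnChange) FOnChange(this);   // fire the event, Delphi-style
    }
public:
    __fastcall TCounter(TComponent* Owner)
        : TComponent(Owner), FValue(0), FOnChange(NULL) {}
__published:   // members here appear in the IDE's Object Inspector
    __property int Value = {read = FValue, write = SetValue, default = 0};
    __property TNotifyEvent OnChange = {read = FOnChange, write = FOnChange};
};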

It is a sad moment in some ways, bearing in mind the long history of what was once the Borland C++ compiler. Equally, it is a sensible move. Intersimone said that the work of keeping up with the evolving C++ specification was disproportionate to the benefits. “It’s a monster language, with a lot of power and a lot of complexity. It made perfect sense to fit our extensions [to Clang] versus building a compiler from scratch and having to continue to track the language into the future.”

Embarcadero can focus instead on its IDE and tools, and on the frameworks for Windows and for cross-platform.

Look out for a more detailed interview with David Intersimone in a future article for Hardcopy.

GPU computing with NVIDIA in Beijing

I’m in Beijing for NVIDIA’s GPU Technology Conference; I attended last year’s event in San Jose and found it fascinating, partly because it has an academic and research flavour with a huge variety of projects on display.

This year the event is in Beijing, reflecting the level of HPC (High Performance Computing) activity in this region.

NVIDIA’s business is graphics processors, though it has expanded into the SoC (System on a Chip) business with its ARM-based Tegra chipset. This conference though is focused on the other end of the scale: Tesla GPUs that are primarily designed not for driving a display, but for rapid processing using massively parallel computing.

The Tesla business is relatively small for NVIDIA – less than 5% of its overall revenue, I was told – and the company treats it partly as research and development. That said, GPU computing is coming into the mainstream and the business is expected to grow. NVIDIA’s desktop GPU cards also support GPU computing.

I recently reviewed a video format converter from Cyberlink; the product was unexceptional except that it can take advantage of GPU computing, when available, to speed up converting from one video format to another. Since I do have a suitable graphics card (though sadly not a Tesla) this made a substantial difference, converting several times faster than another format converter I tried.

Of course NVIDIA is not the only player; there is an open standard (OpenCL) for GPU computing, and other GPU vendors such as AMD implement OpenCL. NVIDIA implements OpenCL too, but also has its own CUDA architecture, which tends to be the focus of its conference, as you would expect.

More reports soon.

C++ 11 is approved by ISO: a big day for native code development

Herb Sutter reports that C++0x, which will be called C++ 11, has been unanimously approved by the ISO C++ committee. The “11” in the name refers to the year of approval, 2011. The current standard is C++ 98, as amended by C++ 03, so it has taken 8 or 13 years to update it, depending on how you count.

This means that compiler makers can get on with implementing the full C++ 11 standard. Most current compilers implement some of the features already. This Apache wiki shows the current status. A quick glance suggests that the open source GCC is ahead of the pack, followed by Intel C++ and then perhaps Microsoft Visual C++.

C++ 11 is pretty much compatible with C++ 03, so existing code should still work. However there are many new features, enough for Bjarne Stroustrup to say in his feature summary:

Surprisingly, C++0x feels like a new language: The pieces just fit together better than they used to and I find a higher-level style of programming more natural than before and as efficient as ever. If you timidly approach C++ as just a better C or as an object-oriented language, you are going to miss the point. The abstractions are simply more flexible and affordable than before. Rely on the old mantra: If you think of it as a separate idea or object, represent it directly in the program; model real-world objects, and abstractions directly in code. It’s easier now.

Concurrent programming is better supported in C++ 11, important for getting the best performance from modern hardware.
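
For instance, C++ 11 standardises threads, futures and lambdas, so code like this minimal illustration needs no platform-specific threading API (my own example, not from Sutter or Stroustrup):

#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    std::vector<int> data(1000000, 1);
    auto mid = data.begin() + data.size() / 2;

    // std::async, lambdas and auto are all new in C++ 11:
    // sum the first half on another thread while this thread sums the second
    auto firstHalf = std::async(std::launch::async, [&] {
        return std::accumulate(data.begin(), mid, 0);
    });
    int secondHalf = std::accumulate(mid, data.end(), 0);

    std::cout << firstHalf.get() + secondHalf << "\n";   // prints 1000000
}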

It is curious how the programming landscape has changed in recent years. A few years back, you might have foreseen a day when most programming would be in .NET, Java or JavaScript: all varieties of managed code. While those languages do still dominate, native code has come more to the fore, thanks to factors like Apple’s focus on Objective C, and signs of internal conflict at Microsoft over the best language for coding Windows applications.

That said, C++ 11 remains a demanding language to learn and use. As Stroustrup notes, since C++ 11 is a superset of C++ 98 it is technically harder to learn all of it, though new libraries and abstractions should help beginners. The reasons for using or not using C++ are not going to change significantly with this new standard.

Adobe AIR is user-hostile compared to native apps says BankSimple CTO

Alex Payne, CTO at BankSimple, has written an analysis of Adobe AIR from the user’s perspective. The scenario: his team was looking for an alternative to Campfire for group chat, and selected HipChat. They liked the features of HipChat, but not the desktop app, which is built using Adobe AIR:

My team experienced a number of the usual problems one has with AIR applications: lousy performance, odd interface bugs, key combinations and UI elements that didn’t conform to our operating system. AIR apps exist in an uncanny valley between a web application and a desktop application, and the result is unsettling and annoying. Pretty soon, we were itching to go back to Campfire (via the native Mac client Propane), even though HipChat has better features and the promise of improved reliability.

Payne investigated further and came to the conclusion that users prefer native apps, and that cross-platform toolkits are for the benefit of software companies, not users. Echoes of Steve Jobs’ Thoughts on Flash:

Flash is a cross platform development tool. It is not Adobe’s goal to help developers write the best iPhone, iPod and iPad apps. It is their goal to help developers write cross platform apps.

And lest you think this is bad for AIR but good for Java, note that Payne adds:

For anyone who used a computer in the 1990s, AIR probably brings back scarring memories of Java apps: slow, ugly, inconsistent, awkward.

I was also reminded of Evernote’s experience with .NET versus native code, which I blogged here.

Payne is not all wrong; neither is Jobs. That said, the distinction between what is good for users and what is good for developers is not absolute. Maintaining a single cross-platform code base, for example, is good for both users and developers, because it reduces bugs and assists feature-compatibility across platforms. It is also good for users of minority platforms who might otherwise have nothing.

Another question: how many of the issues Payne identifies are inherent to using AIR (or another cross-platform runtime), and how many are implementation issues? It is impossible to know without drilling into the details; but I don’t believe that all AIR (or Java, or .NET) apps have “lousy performance”.

It is true that ActionScript code is slower than Java or .NET code, and much slower than compiled C/C++, but speed of script execution is not always the performance bottleneck that users will notice most.

This is seemingly one of those never-ending computing debates; but a post like Payne’s is a reminder that neither Adobe AIR, nor any cross-platform runtime, is a perfect solution to the challenge of multiple client platforms.