IntelliJ IDEA: the best IDE for programming Android?

Late last year the JetBrains team released IntelliJ IDEA 12, the latest version of its Java IDE.

Java today has many roles, but two dominate: server-side programming on one of the many Java application servers, and coding Android apps. IntelliJ IDEA has long had the former role well covered (this release is also the first with support for Java 8), but its Android support has been less mature. With this release, it seems to me that it has come together.

The big new feature for Android is the inclusion of a visual user interface designer. Standard Android layouts are defined in XML, and the IntelliJ IDEA tool is a two-way designer that lets you flip between visual and code views. I found it to work well.

The starting point for an Android app is the New Project dialog. This hooks into the Android SDK installed on your machine. In this example I am using Android 4.1 “Jelly Bean”.


Next, you select a target device (actual or emulated) with the option to create a “Hello World” activity as a starting point. The project then opens in the IDE.


It is not obvious how to get from here to the new UI designer. The New dialog will not help you.


What you do is hold down Control and click the word main in setContentView(R.layout.main).


The default layout is a LinearLayout. If you are making, for example, a calculator, you probably want a TableLayout or GridLayout. I found it useful to be able to flip between text and design views. The design view can save a lot of typing. The text view is excellent when you want to see the exact code and perform text operations like copy or search and replace.
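As a sketch of the markup involved, a calculator-style GridLayout might begin like this in the text view (the id and label here are made up for illustration; the design view generates similar markup as you drag controls onto the grid):

```xml
<?xml version="1.0" encoding="utf-8"?>
<GridLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:columnCount="4">

    <!-- One calculator key; a real layout would declare one per button -->
    <Button
        android:id="@+id/button7"
        android:text="7" />

</GridLayout>
```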


I was surprised not to find an instant way to create an event handler (unless I missed it) but this is easily done in the editor. With IntelliJ IDEA, it is always worth pressing Alt-Enter as this will offer a prompt of potentially useful actions.
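For illustration, assuming the layout declares a Button with the id button1 (a made-up id) and the usual android.widget.Button and android.view.View imports, wiring up a click handler in the editor might look like this sketch:

```java
// Runs in the activity's onCreate, after setContentView(R.layout.main).
// "button1" is a hypothetical id; use whatever id your layout declares.
Button button = (Button) findViewById(R.id.button1);
button.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        // Handle the click; a breakpoint here is a handy place to start debugging
    }
});
```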


I hooked up an event listener and was able to set a breakpoint and debug my app:


Is this the best IDE for Android development? There is the mighty Eclipse of course; but while Eclipse can do most things, I am not surprised to see comments like this:

Usability: Intellij user experience is much easier to grasp. The learning curve in Intellij is by far faster. It seems using Intellij makes developing easier and more natural. Dropdowns, code completion, quick view, project wizards, etc, are all possible both in Eclipse and Intellij, but the experience in Intellij is much more satisfying.

That said, Eclipse is completely free, whereas the free Community Edition of IntelliJ IDEA has limitations – but as far as I can tell, Android support is included.

Review: Logitech Z553 2.1 Speaker System

Logitech’s Z553 speaker system has a striking appearance, dominated by a cylindrical down-firing subwoofer which also contains the power supply and amplifier.



Two small satellite speakers provide mid and high frequencies, each with two 2″ drivers and designed like binoculars stood on their side.


Both the sub and the satellites have integrated stands including three firm rubber-spiked feet, preventing any rocking motion.

These speakers are designed for several scenarios:

  • Position the satellites either side of a computer screen on your desk, have the subwoofer on the floor.
  • Position the satellites either side of a television, subwoofer on the floor, sit back and enjoy.
  • Connect your smartphone or tablet for ad-hoc music or video.

One thing to avoid: do not site the satellites on the floor, where they will sound dreadful. They must be on a desk or table.

The system is purely analogue (no digital input or dock) and purely wired, though there is a wired remote which Logitech calls a “control pod”. This pod has a rotary on-off and volume control, a red power LED, and a small and fiddly bass adjustment control which seems primarily to set the volume of the subwoofer; it makes a dramatic difference to the level of bass.


The main connections for the Z553 are on the back of the subwoofer.


Here you will find the power connector, 3.5mm stereo line in socket, left and right RCA inputs, RCA outputs for the satellite speakers, and a special connector for the control pod.


The control pod has connections of its own. On the left of what I suppose is the back of the pod there is an additional line in and a headphone socket.

The top of the pod is the volume control, which has a smooth, weighty feel that makes it pleasant to operate. That said, the ergonomics of the pod are not quite right. It is too easy to spin the main volume control by accident when operating the bass control, or just by brushing against it with your hand. The cable for the pod is a nuisance, and it is a shame Logitech does not provide a wireless remote.

By way of mitigation, many sources provide their own volume control. For example, I used the speakers with Logitech’s discontinued Squeezebox Touch as input, and was able to use the digital volume control remotely from a web browser or tablet.

The connections are not difficult, but if you hate wires this might not be the system for you.

Another oddity concerns the inputs. There are three altogether: the line-in jack on the pod, the line-in jack on the sub, and the RCA inputs on the sub. However, there is no way to switch between inputs if you have several connected; the sounds are simply combined.

The Z553 system goes pretty loud, but the gain is not quite sufficient in some cases. I connected a Nokia Lumia 800 smartphone and found that even at maximum volume on both phone and Z553, I was not getting the maximum possible undistorted output.

Sound quality

If you care mainly about sound quality, you will be impressed. I was. I tried the Z553 in several scenarios, including close listening on a desk and playing at the other side of the room. The bass is rich and deep, and the integration between the satellites and the subwoofer seamless. Volume was fine for normal listening, though it would not do for parties or if you like your music very loud.

Compare the Z553 to a mid-price hi-fi system, and you may wonder why you bothered spending more. To be fair, the Z553 does have limitations. Compared to my usual active monitors, there is a little smearing of notes and congestion, and the bass is a little soft. You do not get the startling realism and depth you get from a high-end system. The Z553 holds up well though, given the price difference.


Pros and cons

The sound quality is great for the price, and the build feels good too. Just a few annoyances:

  • No input switcher
  • Awkward wired remote volume and tone control
  • Too many wires
  • Gain barely adequate for some sources

The styling is a matter of taste; I consider it inoffensive but would not recommend these speakers for their appearance. For me the sound quality is a higher priority, and for the price I cannot fault it. An excellent buy.


  • Drivers: subwoofer 4″; satellites 2 x 2″ each
  • Power: satellites 2 x 10 watts RMS; subwoofer 1 x 20 watts RMS; maximum sound level quoted as 88dBC
  • Satellite speakers: 160mm (6.3″) high
  • Subwoofer cabinet: 381mm (15″) high, 160mm (6.3″) diameter


Notes from the field: USB 3.0 PCI Express cards, HP ML350 G6 and Server Core

If I search the web, get little help, and then solve a problem, I make a point of posting so that someone else will have a better experience. The challenge was this: finding a USB 3.0 PCI Express card that works in an HP ML350 G6 server, a popular choice for small business duties such as Small Business Server or Hyper-V Server. This particular example runs Hyper-V Server 2008 R2, based on Server Core, which can sometimes be awkward for installing drivers.

USB 3.0, with its 5Gbit/s signalling rate, is theoretically around ten times faster than USB 2.0’s 480Mbit/s. If you are transferring large files or backing up to an external drive, it can make a huge difference to performance.

Trawling the web was not particularly helpful. As this expert notes, there is no officially supported or recommended option for USB 3.0 on an ML350:

The ML350 G5 and G6 servers do not have, as a recommended option, a USB 3.0 and e-SATA controller, which would be clear to you by referring the quickspecs of the servers.

If you take the view that only recommended and certified components should be fitted to a server, give up and stop reading now. I do not disagree, but I tend towards a pragmatic approach, depending on your budget and on how critical the server in question is.

Further, it can work. This guy used a HighPoint 1144A card with partial success, though on investigation I found some users reporting that only two of the four ports actually work, and that you have to tolerate errors in Device Manager; it does not seem ideal. Another user noted that HP’s own card (which is designed for workstations, not the ML350) did not work, though perhaps it works for others; I am not sure.

I did find some references to success with the Renesas USB 3.0 chipset, so I found a StarTech card that uses it, the PEXUSB3S2. I fitted it, but the server would not boot; a red LED on the server’s front panel indicated a “system critical” issue. Shame.

I tried a different card, bought in haste from Maplin: a Transcend TS-PDU3, which also has a Renesas chipset. I fitted it to the PCI Express x16 slot in the ML350. Note: if you do this, you will need some kind of extender cable for the power, since this card (like most USB 3.0 cards) requires additional power direct from the power supply. The ML350 G6, at least in my case, has plenty of spare Molex power connectors, but they are on short cables at the front of the computer, whereas the PCI Express slots are at the back.

Good news: the server booted.


Next up, drivers. No CD comes with this particular card, but you can download one from the Transcend site. There are two drivers for different versions of the TS-PDU3; I used the second version (Molex and SATA power connectors). Fortunately, the setup ran perfectly on Server Core; success.

I took the StarTech card and tried it in another PC, this one self-assembled with an Intel motherboard. This machine also runs Hyper-V Server, but the 2012 version. The machine booted properly, but the setup on the supplied CD did not run.


“Sorry, the install wizard can’t find the proper component for the current platform”, it remarked cryptically.

I went along to the StarTech site and found an updated driver which looks remarkably similar to the one I had installed for the Transcend card. It ran perfectly and all is well.

This is a good moment to mention Devcon.exe, an essential tool if you are installing device drivers on Server Core. You can use the GUI Device Manager remotely, but it is read-only. Devcon.exe is part of the WDK (Windows Driver Kit), and it is not too hard to find. Make sure you use the right version (32-bit or 64-bit) for your system.

On server core, run:

devcon status * > devices.txt

to output the status of your devices to a text file. Open it in Notepad, which works on Server Core, and look for the word “problem” to see if there are issues. For example, Problem 28 is “no driver”. You also get the hardware ID from this output, needed if you use Devcon to install or update a driver. You may find things like audio devices that are not working; unlikely to be a worry on Server Core.
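Devcon can also install or update a driver from an INF file once you have the hardware ID. As a sketch (both the INF path and the hardware ID below are made-up placeholders; substitute the ID reported in your own devices.txt):

```bat
rem List all devices and their status; redirect to a file for inspection
devcon status * > devices.txt

rem Update the driver for a specific device, identified by hardware ID
rem (the INF path and ID here are examples only)
devcon update C:\drivers\usb3.inf "PCI\VEN_1234&DEV_5678"
```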

In my case, on both servers, I can see that the USB 3.0 card has been correctly detected and that the driver is running.

Why did the StarTech card not work on the ML350? Here I am going to shrug and say that PCI Express cards can be problematic. Equally, if I get good results and no unexpected behaviour from the Transcend card, I am not going to worry that it is a cheap card that does not belong in a server.

The truth is, if you need USB 3.0 you really need it, and the only alternative is a new server.

Making sense of Microsoft’s Cloud OS

People have been talking about “the internet operating system” for years. The phrase may have been muttered in Netscape days in the nineties, when the browser was going to be the operating system; then in the 2000s it was the Google OS that people discussed. Most notably though, Tim O’Reilly reflected on the subject, for example here in 2010 (though as he notes, he had been using the phrase way earlier than that):

Ask yourself for a moment, what is the operating system of a Google or Bing search? What is the operating system of a mobile phone call? What is the operating system of maps and directions on your phone? What is the operating system of a tweet?

On a standalone computer, operating systems like Windows, Mac OS X, and Linux manage the machine’s resources, making it possible for applications to focus on the job they do for the user. But many of the activities that are most important to us today take place in a mysterious space between individual machines.

It is still worth reading, as he teases out what OS components look like in the context of an internet operating system, and notes that there are now several (but only a few) competing internet operating systems, platforms which our smart mobile phones or tablets tap into and to some extent lock us in.

But what on earth (or in the heavens) is Microsoft’s “Cloud OS”? I first heard the term in the context of Server 2012, when it was in preview at the end of 2011. Microsoft seems to like how it sounds, because it is getting another push in the context of System Center 2012 Service Pack 1, just announced. In particular, Michael Park from Server and Tools has posted on the subject:

At the highest level, the Cloud OS does what a traditional operating system does – manage applications and hardware – but at the scope and scale of cloud computing. The foundations of the Cloud OS are Windows Server and Windows Azure, complemented by the full breadth of our technology solutions, such as SQL Server, System Center and Visual Studio. Together, these technologies provide one consistent platform for infrastructure, apps and data that can span your datacenter, service provider datacenters, and the Microsoft public cloud.

In one sense, the concept is similar to that discussed by O’Reilly, though in the context of enterprise computing, whereas O’Reilly looks at a bigger picture embracing our personal as well as business lives. Never forget though that this is marketing speak, and Microsoft consciously works to blur together the idealised principles behind cloud computing with its specific set of products: Windows Azure, Windows Server, and especially System Center, its server and device management piece.

A nagging voice tells me there is something wrong with this picture. It is this: the cloud is meant to ease the administrative burden by making compute power an abstracted resource, managed by a third party far away in a datacenter in ways that we do not need to know. System Center on the other hand is a complex and not altogether consistent suite of products which imposes a substantial administrative burden on those who install and maintain it. If you have to manage your own cloud, do you get any cloud computing benefit?

The benefit is diluted; but there is plentiful evidence that many businesses are not yet ready or willing to hand over their computing infrastructure to a third party. While System Center is in one sense the opposite of cloud computing, in another sense it counts, because it has the potential to deliver cloud benefits to the rest of the business.

Further confusing matters, there are elements of public cloud in Microsoft’s offering, specifically Windows Azure and Windows Intune. Other bits of Microsoft’s cloud, such as Office 365, do not count here because that is another department, see. Park does refer to them obliquely:

Running more than 200 cloud services for over 1 billion customers and 20+ million businesses around the world has taught us – and teaches us in real time – what it takes to architect, build and run applications and services at cloud scale.

We take all the learning from those services into the engines of the Cloud OS – our enterprise products and services – which customers and partners can then use to deliver cloud infrastructure and services of their own.

There you have it. The Cloud OS is “our enterprise products and services” which businesses can use to deliver their own cloud services.

What if you want to know in more detail what the Cloud OS is all about? Well, then you have to understand System Center, which is not something that can be explained in a few words. I did have a go at this, in a feature called Inside Microsoft’s private cloud – a glossary of terms, for which the link is currently giving a PHP error, but maybe it will work for you.


It will all soon be a little out of date, since System Center 2012 SP1 has significant new features. If you want a summary of what is actually new, I recommend this post by Mike Schutz on System Center 2012 SP1; and this post also by Schutz on Windows Intune and System Center Configuration Manager SP1.

My even shorter summary:

  • All System Center products now updated to run on, and manage, Server 2012
  • Upgraded Virtual Machine Manager supports up to 8000 VMs on clusters of up to 64 hosts
  • Management support for Hyper-V features introduced in Server 2012 including the virtual network switch
  • App Controller integrates with VMs offered by hosting service providers as well as those on Azure and in your own datacenter
  • App Controller can migrate VMs to Windows Azure (and maybe back); a nice feature
  • New Azure service called Global Service Monitor for monitoring web applications
  • Back up servers to Azure with Data Protection Manager

and on the device and client management side, there are new Intune and Configuration Manager features. It is confusing; Intune is a kind of cloud-based Configuration Manager, but it has features that are not in the on-premise Configuration Manager, and vice versa. So:

  • Intune can now manage devices running Windows RT, Windows Phone 8, Android and iOS
  • Intune has a self-service portal for installing business apps
  • Configuration Manager integrates with Intune to get supposedly seamless support for additional devices
  • Configuration Manager adds support for Windows 8 and Server 2012
  • PowerShell control of Configuration Manager
  • Ability to manage Mac OS X, Linux and Unix servers in Configuration Manager

What do I think of System Center? On the plus side, all the pieces are in place to manage not only Microsoft servers but a diverse range of servers and a similarly diverse range of clients and devices, presuming the features work as advertised. That is a considerable achievement.

On the negative side, my impression is that Microsoft still has work to do. What would help would be more consistency between the Azure public cloud and the System Center private cloud; a reduction of the number of products in the System Center suite; a consistent user interface across the entire suite; and simplification along the lines of what has been done in the new Azure portal so that these products are easier and more enjoyable to use.

I would add that any business deploying System Center should be thinking carefully about what they still feel they need to manage on-premise, and what can be handed over to public cloud infrastructure, whether Azure or elsewhere. The ability to migrate VMs to Azure could be a key enabler in that respect.

Amazon AutoRip: great service, or devaluing music?

Or possibly both. Amazon’s AutoRip service means that when you buy one of a limited, but considerable, range of CDs, you get an MP3 version in your Amazon cloud player for free. Even past purchases are automatically added, which means US customers have received emails informing them that hundreds or in some cases thousands of tracks have been added to their Amazon cloud player.


The service adds value to CD purchases in several ways. You get instant delivery, so you can start listening to your music straight away, and when the CD comes in the post, you can enjoy the artwork and play it on your hi-fi for best quality.

Amazon is differentiating from Apple, which only sells a download.

The devil is in the detail though. Here are a few comments from Steve Hoffman’s music forum:

Got Auto-rip Pink Floyd’s DSOTM 2011 mastering of the DSOTM SACD that I bought in 2003.


I now have autorips of cd’s I no loner own…..interesting concept.


I now have autorips of CDs I bought as gifts.

These customers have done nothing wrong. They bought a CD from Amazon and gave it away or sold it, but it is still in their Amazon history, so now they have the MP3s.

Another interesting point is that Amazon appears to treat all versions of the same recording as equal. This is why I have included the comment about the Pink Floyd album above. Record companies have done well over the years by persuading fans to buy the same CD again in a remastered version, sometimes with bonus tracks. The Beatles 2009 remastered CDs are a well-known example. But if customers with unremastered CDs are now getting remastered MP3s automatically, this type of sale is harder to make.

The gift issue is more serious. The terms and conditions say:

Albums purchased in orders including one or more items marked as “gifts” at purchase are not eligible for AutoRip.

and intriguingly:

If you cancel your order or return this album, our normal order cancellation and product return policies will apply regarding the physical version of this album. However, if you download any of the tracks on the MP3 version of the album from your Cloud Player library (including if you have enabled auto-download to a device and any of the tracks on the MP3 version of the album auto-download), you will be considered to have purchased the MP3 version of the album from the Amazon MP3 Store and we will charge your credit card (or other payment method) for the then-current price of the MP3 version of the album (which will be non-refundable and may be a higher price than the physical version of the album).

Someone, therefore, has thought about the problem, though I predict unhappy customers who buy a faulty CD, return it, and find they have been charged anyway thanks to an auto-download feature whose implications they might not understand.

Note also that many CDs are purchased as gifts without being marked as gifts in Amazon’s system. The idea of marking items as gifts is that you can have gift wrapping and get an item sent to another address, but if you plan to do your own wrapping, it is not necessary.

Here is something else. Audio enthusiasts are not happy with MP3s, preferring the real and/or psychological benefits of the lossless CD format for sound quality. For many people though, the audio is indistinguishable or they do not care about the difference.

What do you do if you receive a CD in the post, having already downloaded and enjoyed the MP3 versions of the tracks? I imagine some customers will figure that they have no use for the CD and sell it.  Provided they do not return the CD to Amazon, I cannot see anything in Amazon’s terms and conditions that forbids this, though I can see ethical and possibly legal difficulties in some territories.

The consequence is that someone may lose a sale.

Subscription is the future

My view on this is simple. The only sane way to sell music today is via subscription – the Spotify or Xbox Music model. The idea of “owning” music (which was never really ownership, but rather a licence tied to physical media) is obsolete with today’s technology.

Amazon’s new initiative demonstrates how little value there is in a downloaded MP3 file – so vanishingly small that Amazon can give them away to past customers for nothing.

The cross-platform app problem. What should the BBC do?

The BBC released a new sports app last week. In the comments to the announcement though, there is little attention given to the app or its content. Rather, the discussion is about why the BBC has apparently prioritised iOS over Android, since the Android version is not yet ready, with an occasional interjection from a Windows Phone user about why there is nothing at all for them.


BBC I think you need to actually catch up on what’s happening. Android is huge now. You should be launching both platforms together. A lot of people I know have switched to an Android device and your app release almost feels like discrimination!

says one user; while the BBC’s Lucie Mclean, product manager for mobile services, replies:

Back in July, when we launched the Olympics app for iPhone and Android together, we saw over three times as many downloads of the iPhone version. Android continues to grow apace but this, together with the development and testing complexity, led us to the decision to phase the iOS app first.

The BBC’s technology correspondent spoke to head of iPlayer David Danker about this problem back in December. Danker claims that the BBC spends more “energy” (I am not sure if that means time or just frustration) on Android than on Apple, and mainly blames Android fragmentation and the prevalence of low-end devices for the delays:

It’s not just fragmentation of the operating system – it is the sheer variety of devices. Before Ice Cream Sandwich (an early variant of the Android operating system) most Android devices lacked the ability to play high quality video. If you used the same technology as we’ve always used for iPhone, you’d get stuttering or poor image quality. So we’re having to develop a variety of approaches for Android

A couple of things are obvious. One is that Apple’s clearly-defined iOS development platform and limited range of devices is a win for developers. Despite frustrations over things like the way apps are sandboxed or Apple’s approval process, it is easier to target iOS than Android because the platform is more consistent. iOS users are also relatively prosperous and highly engaged with the web and the app store, so that even though Apple’s overall platform market share has fallen behind that of Android, it is still the most important market in some contexts.

Another is that the BBC cannot win. From a PR perspective, it should probably do simultaneous iOS and Android releases even if that means a delay; but even then there will be complaints over differences in detail between the iOS and Android implementations. Further, the voices of neglected minorities, such as Windows Phone and, soon, BlackBerry 10 users, will grow louder if iOS and Android achieve parity.

In all this, it is worth noting that the BBC gets one thing right, prioritising the mobile web:

The decision to launch the core mobile browser site first (before either app) was itself to ensure that users got a quality product across as wide a range of devices as possible.

says Mclean.

Personally I wonder if the BBC needs to do all these niche apps. The iPlayer app is the one that really matters, particularly when it offers download for offline viewing; but is a sports app so necessary?

Should it not concentrate instead on first, the mobile web site, and second, APIs that third-party developers can use, enabling developers on each platform to create high quality apps?

Another option would be to make cross-platform a religion, and cover all significant platforms while giving up some of the benefits of native code. High quality video is a problem; but in many scenarios the quality of the video is not such a big issue provided that it works and is intelligible.

Perhaps the BBC could help make video work better in Cordova (an open source framework for cross-platform mobile apps). Having the BBC invest its publicly funded resources in open source cross-platform development would be better PR than developing expensive apps for single platforms.

A good quarter for Nokia, but Lumia still has far to go

Some good news from Nokia at last. The company reports sales ahead of expectations along with “underlying profitability” in the fourth quarter of 2012.

Success for Windows Phone? It is a positive sign, but short of a breakthrough. Here are the details. I am showing three quarters for comparison: fourth quarter 2011, third quarter 2012, and fourth quarter 2012.

                                Q4 2011    Q3 2012    Q4 2012
  Mobile phone units, millions    113.5       77.0       79.6
  Smartphone units, millions       19.6        6.3        6.6
    of which Lumia, millions          ?        2.9        4.4

Looking in more detail at the smartphone units: the Q4 2011 smartphones were mostly Symbian. Lumia (Windows Phone) launched in October 2011, but with only two models and in limited territories (it also sold short of expectations and, rumour has it, suffered a high rate of returns).

Lumia units increased by 51% over Q3, but considering that Q3 was a bad quarter as customers waited for Windows Phone 8 that is a decent but not stunning improvement. Lumia units exceeded Symbian units, but remain far short of what Nokia used to achieve with Symbian.

There is also a warning about Q1 2013:

Seasonality and competitive environment are expected to have a negative impact on the first quarter 2013 underlying profitability for Devices & Services, compared to the fourth quarter 2012.

That said, here is what Nokia said in the Q3 release:

Nokia expects the fourth quarter 2012 to be a challenging quarter in Smart Devices, with a lower-than-normal benefit from seasonality in volumes, primarily due to product transitions and our ramp up plan for our new devices.

It looks as if the company prefers to be cautious in its financial statements.


Got a Ruby on Rails application running? Patch it NOW

A security issue has been discovered in Ruby on Rails, a popular web application framework. It is a serious one:

There are multiple weaknesses in the parameter parsing code for Ruby on Rails which allows attackers to bypass authentication systems, inject arbitrary SQL, inject and execute arbitrary code, or perform a DoS attack on a Rails application. This vulnerability has been assigned the CVE identifier CVE-2013-0156.
Versions Affected:  ALL versions
Not affected:       NONE
Fixed Versions:     3.2.11, 3.1.10, 3.0.19, 2.3.15

and also worth noting:

An attacker can execute any ruby code he wants including system("unix command"). This effects any rails version for the last 6 years. I’ve written POCs for Rails 3.x and Rails 2.x on Ruby 1.9.3, Ruby 1.9.2 and Ruby 1.8.7 and there is no reason to believe this wouldn’t work on any Ruby/Rails combination since when the bug has been introduced. The exploit does not depend on code the user has written and will work with a new rails application without any controllers.

You can grab patched versions here.
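If you cannot deploy the patched versions immediately, the official advisory also documents workarounds; for Rails 3 applications the main one is to disable XML parameter parsing in an initializer (the filename here is my choice; the line itself comes from the advisory):

```ruby
# config/initializers/disable_xml_params.rb
# Workaround from the CVE-2013-0156 advisory: stop Rails parsing
# XML request parameters until the application can be upgraded.
ActionDispatch::ParamsParser::DEFAULT_PARSERS.delete(Mime::XML)
```

Rails 2.3 applications use ActionController::Base.param_parsers.delete(Mime::XML) instead. Either way, this only closes the XML vector, so upgrading remains the real fix.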

How quickly can an organisation patch its applications? As Sourcefire security architect Adam J. O’Donnell observes, this is where strong DevOps pays dividends:

Modern web development practices have made major leaps when it comes to shortening the time from concept to deployment.  After a programmer makes a change, they run a bunch of automated tests, push the change to a code repository, where it is picked up by another framework that assures the changes play nice with every other part of the system, and is finally pushed out to the customer-facing servers.  The entire discipline of building out all of this infrastructure to support the automated testing and deployment of software is known as DevOps.

In a perfect world, everyone practices devops, and everyone’s devops workflow is working at all times.  We don’t live in a perfect world.

For many organizations changing a library or a programming framework is no small task from a testing and deployment perspective.  It needs to go through several steps between development and testing and finally deployment.  During this window the only thing that will stop an attacker is either some form of network-layer technology that understands how the vulnerability is exploited or, well, luck.

This site runs WordPress, and if I look at the logs I see constant attack attempts. In fact, I see the same attacks on sites which do not run WordPress. The bots that do this are not very smart; they try some exploit against every site they can crawl, and do not care how many 404s (page not found errors) they get. Once in a while, they hit. Sometimes it is the little-used applications, the tests and prototypes, that are more of a concern than the busy sites, since they are less likely to be patched, and might provide a gateway to other sites or data that matter more, depending on how the web server is configured.

Hands on Cross-Platform Windows and Mac development with C++ Builder XE3

I have been writing about Embarcadero’s RAD Studio XE3, which includes Delphi and C++ Builder, and as part of the research I set this up for cross-platform development on a Mac.

My setup uses a Parallels Virtual Machine to run Windows 7, on which RAD Studio XE3 is installed. This is convenient for Mac development, since the IDE itself is Windows only. That said, if I were doing this in earnest I would use multiple displays or perhaps separate physical machines, since it is no fun debugging in a VM with the application running in another operating system behind it.

Is it straightforward to configure? Not too bad. You have to install Xcode on the Mac, and in addition, you have to install the Xcode command line tools, which you can do from Xcode itself, in Preferences – Downloads – Components, or as a separate download.


Then you need to find the Platform Assistant (paserver), an agent which runs on the Mac to support remote debugging. I was annoyed to find that this has a dependency on Java SE 6, though to be fair it was downloaded and installed automatically. I also find this amusing, after hearing from an Embarcadero VP how native code is all the rage and nobody uses managed code any more. Except Embarcadero, for the paserver.

Once that is all up and running you are done on the Mac side. On Windows, you then need to sort out a remote profile, after having installed RAD Studio of course. The way to do this is first to start a new cross-platform project, which means using the FireMonkey framework. Then right-click TargetPlatforms in the project manager and add a platform. If you add OSX but no remote profile exists, you will be prompted to create one.


This is where something went slightly wrong. I created a profile and could connect OK. However, when I tried to build the project, I got an error: Unable to open include file ‘CoreFoundation/CoreFoundation.h’. You get this if for some reason the required library files have not been pulled over from the Mac. The fix is to edit the profile and click Update Local File Cache.


After that I was away. Set breakpoints if needed, build and debug.


Cross-platform is not new in RAD Studio; it was in XE2, and in some ways better there, since you could target iOS as well as OSX. C++ Builder XE3 is actually a new generation though: with the 64-bit compiler added in Update 1, it is the first release to use Clang and LLVM, and from what I understand this represents the future for Embarcadero’s tools.

Updates are promised in 2013 for both Delphi and C++Builder – this roadmap is most of what we have to go on – which will add first iOS and later Android support, at what the company calls a “low cost”. Unlike the iOS support in XE2, the coming update will not use the Free Pascal compiler, but the new architecture based on LLVM. This also suggests that the add-on will replace some of the guts of Delphi when it arrives, so it will be significant and somewhat risky.

The cross-platform capabilities look good, though I am somewhat wary of FireMonkey, which is less complete and mature than the Windows-only VCL. For example, no WebBrowser component is supplied, which is a significant limitation, though I am sure there are ways of hacking this, perhaps through ChromiumEmbedded, for which a Delphi FireMonkey port exists.

It is worth a bit of effort, since Delphi and C++ Builder are productive tools, and the output is true native code, which still has advantages.

More information on RAD Studio XE3 is here.

Microsoft updates .NET Framework 4.5 for Windows 8, Server 2012 to fix performance, bugs

Microsoft has released an update for .NET Framework 4.5 which you may have noticed flying past if you keep an eye on Windows Update in Windows 8. The update is described here, and it is a big one. For example, in the Network Class Library:

Assume that you run a .NET Framework 4.5-based application that uses asynchronous APIs to read chunked responses. In this situation, the chunked responses may be read synchronously.

The HttpWebRequest class lets callers read an HTTP response either synchronously or asynchronously. However, if the response is a chunked HTTP response, then parts of the response are read by using synchronous I/O (Winsock calls) even when the caller uses the asynchronous code path. In this situation, the calling thread is blocked until data is received on the network.
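For reference, a chunked HTTP response frames its body as hex-length-prefixed chunks terminated by a zero-length chunk; that framing is what the buggy code path was reading synchronously. A minimal decoder sketch for the wire format (in Ruby, purely to illustrate the framing, not the .NET code path):

```ruby
# Decode an HTTP chunked transfer-encoded body: each chunk is a hex
# size line, CRLF, that many bytes of data, CRLF; a zero-size chunk
# marks the end of the body.
def decode_chunked(body)
  out = +''
  until body.empty?
    size_line, body = body.split("\r\n", 2)
    size = size_line.to_i(16)
    break if size.zero?          # last-chunk marker
    out << body[0, size]
    body = body[size + 2..] || ''  # skip chunk data plus trailing CRLF
  end
  out
end

puts decode_chunked("4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n")  # Wikipedia
```

Because the receiver cannot know the total body length up front, it must keep reading chunk by chunk, which is why blocking (synchronous) reads on this path tie up the calling thread until data arrives.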

Given this and other issues, the update is highly recommended. Maybe we will see fewer pauses in Windows Store apps, some of which have not delivered on the “fast and fluid” promise.