A few facts about Microsoft’s new Windows Runtime

I’ve just come out of Martyn Lovell’s talk on WinRT internals here at BUILD in Anaheim, California.

Make no mistake: Microsoft has re-invented the Windows API in WinRT. Just to recap, WinRT is the API for Metro-style applications: the touch-centric, app-centric platform for tablets and, one presumes, eventually for Windows Phone (though Microsoft has yet to admit it).

WinRT is only usable from Metro applications. You cannot call WinRT from a Win32 application, nor vice versa*. I think it is reasonable to assume that a future version of Windows running only WinRT is a possibility, and that Windows 8 on ARM will look a bit like that, with Win32 still present but mainly out of sight; that said, I am speculating.

Does that mean Win32 is now legacy? In a way, but such a huge legacy that for the moment we should think of Windows 8 as two platforms side by side.

There is no inter-app communication in WinRT other than by the pre-defined contracts built into the system (though Lovell noted that you could always use the file system and polling for a crude inter-process communication).

There is no way to install a shared dynamic library. Apps can only use the system libraries together with what you install with the app. Each app lives in its own context and is isolated. In other words, WinRT is not extensible, other than within your app’s code*.

If you figure out a way to bypass limitations of WinRT by calling other Windows APIs, your app might work but the submission process for the Windows Store will prohibit it.

Versioning is built into WinRT. This means that when Windows 9 comes along, you will be able to code just against the Windows 8 versions of the classes, for compatibility, and your IDE can support this by only exposing the Windows 8 version of the API.

The CLR exists in the Metro environment, for use by .NET applications, complete with JIT (just-in-time) compilation. However, only a subset of the .NET Framework libraries is included: Microsoft aimed to include only what was necessary for Metro. I am not sure yet what is in and what is out, beyond the obvious (no Windows Forms, for example), but will be investigating what is documented. From the .NET side, the native WinRT APIs look similar to COM callable wrappers. That said, you do not normally need to care about WinRT interfaces, even though they are there in WinRT. Normally you interact with WinRT classes, making it more natural for .NET developers than working with COM.

WinRT is full of asynchronous calls. Lovell told us that Microsoft had seen in the past that if both synchronous and asynchronous APIs are available for the same function, then developers often use the synchronous version even when they should not, making applications less responsive. The new await keyword in C# makes this easy to code.
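
As an illustration of how this feels from C#, here is a minimal sketch (not taken from the talk, and assuming WinRT file picker and file I/O classes along the lines Microsoft has shown) in which a WinRT class is created and used like any ordinary .NET class, and its asynchronous operation is consumed with await:

    using System.Threading.Tasks;
    using Windows.Storage;
    using Windows.Storage.Pickers;

    public class DocumentOpener
    {
        // FileOpenPicker is a WinRT class, but no COM plumbing is visible here;
        // it is constructed and used like any other .NET class.
        public async Task<string> ReadPickedFileAsync()
        {
            var picker = new FileOpenPicker();
            picker.FileTypeFilter.Add(".txt");

            // The picker returns a WinRT asynchronous operation; await frees the
            // UI thread while the user chooses a file, then resumes with the result.
            StorageFile file = await picker.PickSingleFileAsync();
            if (file == null)
                return string.Empty;

            return await FileIO.ReadTextAsync(file);
        }
    }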

WinRT uses the same metadata format as .NET (the format you can inspect with ILDasm). This means you get rich metadata for IntelliSense and debugging, but note that the actual runtime is not .NET; Microsoft has simply borrowed the metadata format.

WinRT objects are reference counted like COM for memory management, with weak references to avoid circularity. You should not have to worry about this; you can code according to the conventions of your language.

There are three ways to write WinRT applications. One is C++, in which case you write directly to the “projection” of WinRT into your language. The second is .NET, in which case your code goes via the CLR. The third is HTML and JavaScript, in which case your code goes via the “Chakra” JavaScript engine also used by Internet Explorer 9 and higher. Lovell assured me that there is little difference in performance in most cases, though there could be advantages for C++ in certain niche scenarios. Of course we heard that story for .NET as well, but from what I have seen it is more plausible in WinRT.

There is no message loop in WinRT. There is no GDI in WinRT. All graphics are via DirectX. XNA, the .NET games framework, is not supported. It seems that you will need to use C++ for fancy DirectX coding, though this is not confirmed. Of course your XAML or Canvas code will be rendered by DirectX under the covers.

It is fascinating to see how Microsoft has borrowed XAML and the .NET metadata format, while keeping WinRT native rather than .NET at its core. My take on this is that Microsoft intended to preserve the productivity of .NET, but without any performance compromise.

Despite the inclusion of .NET though, the fact that only a subset of the Framework is available, and that interop to the Windows API will not work*, means that most existing apps will need considerable work to be ported to Metro.

*Updates

A few clarifications.

It has been shown that you can call WinRT from Win32 (the favoured term for Win32 now seems to be “desktop applications”), though I’m not sure how useful it is.

Concerning P/Invoke (Platform Invocation) to Win32 APIs, apparently this does work for a small, specified subset of the Windows API. It also works for your own native code DLL, with the proviso that if your native code DLL calls a disallowed Win32 API it will raise an error.
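
For illustration, here is a hedged sketch of what such a P/Invoke declaration looks like from a C# Metro-style app. The DLL name and function are hypothetical, standing in for a native DLL you package with your app; a permitted Win32 function would be declared in the same way:

    using System.Runtime.InteropServices;

    public static class NativeMethods
    {
        // "MyNativeLib.dll" and AddNumbers are hypothetical. The DLL would be
        // deployed inside the app package, and must itself call only Win32 APIs
        // that are allowed for Metro-style apps, or an error will be raised.
        [DllImport("MyNativeLib.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern int AddNumbers(int a, int b);
    }

    // Usage: int sum = NativeMethods.AddNumbers(2, 3);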

WinRT is partially extensible. A Framework Extension is a library which you can reference as a dependency in your app’s manifest. When the app is deployed, it will download this dependency from the Windows Store. An example is the C Runtime Library. An extension library installs into its own directory, and can be used by multiple WinRT apps provided each one references it in its manifest. The caveat is that only Microsoft can create these extensions: there is no way to create your own shared extension for general distribution, though an enterprise can deploy a shared extension internally.

Which Microsoft cloud? Windows Server 8 shows Azure is not everything

I was fortunate to attend a two-day drilldown into what is coming in Windows Server 8 last week, just before the BUILD conference under way in Anaheim, California. It is an impressive release, with two things standing out for me.

One is that Microsoft has successfully re-engineered Windows Server so that it is both sufficiently modular that you can transition from Server Core to full Server and back without reinstall, and also sufficiently detached from the Windows GUI that everything runs and can be configured without the need to log on to the Windows desktop on the server itself. This is a huge achievement.

Second, much of the engineering in Server 8 is focussed on making it better for cloud hosting. This is the focus of changes in both Hyper-V and IIS: isolation of virtual networks, proper bandwidth and CPU quotas and throttling, and the ability to move VMs freely between hosts without taking them offline, and to replicate them for failover purposes. You can read more in my piece on The Register.

The question this raises for me is about Windows Server clouds and Azure. Of course Azure runs on Windows Server, but Azure is a platform: all the VMs are stateless, and when you use Azure you are buying into a whole set of services that might or might not match your needs. At a developer event yesterday, one developer explained how he could not use Azure because he needed to install a third-party application. The Hyper-V role helps a little, but it is not ideal as you still need to solve the stateless problem; at any time, changes you make to the server may be reverted.

If you simply rent plain Windows Server VMs in the cloud, you lose some of the benefits of cloud computing since you are responsible for everything about how the server is configured and maintained; but you also get complete freedom to set it up as you want.

One of the issues with moving from running your own Exchange and SharePoint, for example, to a cloud-hosted service like Office 365 is that you lose control of your destiny. If the service goes down, you have to beg and plead with support to get information and to speed recovery.

Now consider a scenario in which you have your Exchange and SharePoint on hosted Hyper-V VMs with replication (now coming in Server 8) to an alternate provider such as Amazon Web Services, or to your own on-premise servers. If the service goes down, you failover to the replicas.

Another compelling idea relates to live migration. Imagine you have a VM running on premise, and want to move it to the cloud. Without interruption of service, you could in principle migrate it from on-premise to the cloud and back at will. You need a fast connection of course, but this aspect is constantly improving.

The bottom line: plain Windows Server on a VM has many attractions versus an entire platform like Azure.

The snag is, Microsoft does not offer this type of hosting at the moment. Well, that is not necessarily a snag, depending on what you think about hosting with Microsoft; but for some there is considerable reassurance in hosting with a company of Microsoft’s size, one which should in theory have the best understanding of what it takes to host Windows Server.

My guess is that Microsoft will either add this capability to Azure – without the limitations of the Hyper-V role, but with replication and failover – or else develop a new cloud service alongside Azure for this purpose.

My further guess is that it would be popular, possibly more so than Azure is today.

Here comes Windows 8 – but what about the apps?

I’ve spent what feels like most of the night trying out the first developer preview of Windows 8, using an Intel tablet PC loaned by Microsoft for that purpose. The early preview is frustrating, in that many of what will be standard apps like Mail and Contacts are missing, but it is already obvious that Microsoft has done a great job with what I am calling the “Metro” platform within Windows 8. Here is Control Panel in the new user interface:

[Screenshot: Control Panel in the new Metro-style user interface]

This is the touch-optimized personality of the new operating system, featuring a Start menu with live tiles like an evolved Windows Phone 7, apps that run full-screen to create an "immersive user interface", and swipe controls to show application menus, switch apps, or access standard features.

It is a delight to use; but this is Metro, with its own Windows Runtime (WinRT), a native code API which is wrapped for access by either HTML and JavaScript apps (which also use the IE 10 runtime), or by C/C++, VB or C# apps driving XAML-defined user interfaces – yes, kind of like Silverlight but not Silverlight.

What about all our Windows apps? For that we need the desktop personality in Windows 8. Tap the Desktop tile, or launch a "Desktop" app, and it suddenly appears, looking much like Windows 7.

The problem: while Windows 8 "Metro" looks great, there are currently zero apps for it, or at least only those supplied with the preview, because it is brand new.

In truth then, Microsoft has not quite done what would have been ideal, which is to make Windows touch-friendly. That would have been impossible. Instead, it has integrated Windows with a new touch-friendly platform.

The key question: will this new platform attract the support it needs from developers in order to become successful in its own right, so that we can do most of our work there and retreat to the desktop only for legacy apps, or apps which really need mouse and keyboard?

It is a big ask, and we have seen HP with WebOS, and probably RIM with PlayBook, fail at this task.

Of course it is still Windows; but I do have a concern that a proportion of users will try Windows 8, find the transitions between Desktop and Metro unsettling, and stick with Windows 7.

Let me add these are very much first impressions; and that Metro really does look good. Perhaps it will win; but a lot of momentum has to build behind it for that to be possible.

Building Windows – when Microsoft shows its hand

I’m in Anaheim, California on the eve of Microsoft’s BUILD conference. I have heard the phrase “wait until BUILD” so many times from Microsoft over the last few months that it has given this conference a special flavour. After Wednesday, the company will have to think of another way to avoid awkward questions like what is happening to Silverlight.

This is the latest chapter in the progression of Windows: server, client and mobile. In particular, I will be trying to understand Microsoft’s software development platform. Whatever it looks like, it will be diverse, and include native code, HTML and JavaScript, .NET code including Silverlight, and perhaps some new hybrid. What will be the pros and cons of each approach, how do developers create apps that span desktop, tablet and mobile, and how will the delivery model change in the app store era?

Interesting questions; but the other theme is how effectively Microsoft will compete as the importance of desktop Windows shrinks. Cloud, mobile and tablet are the key areas here, and after many mis-steps, time is running out.

Not much to add except “watch this space” over the next few days; though I would be interested in any specific comments or questions on Microsoft’s strategy.

PhoneGap comes to Windows Phone

Nitobi has announced PhoneGap for Windows Phone 7, nicely timed just before the Microsoft BUILD conference next week.

PhoneGap is a cross-platform mobile development tool that uses the HTML and JavaScript engine on the phone as its runtime, supplemented by extensions which give access to other device features:

After unpackaging the contents of the www folder, your www/index.html file is loaded into an embedded headless browser control. This is essentially the same paradigm as other platforms, except here it is an IE9 browser and not a webkit variant. IE9 is a much more standards-compliant browser than previous IEs, and implements commonly used html5 features like DOMContentLoaded events, addEventListener interfaces, and CSS3. Be sure to use <!DOCTYPE html> to get the html5 implementation otherwise the browser may fallback to a compatibility mode, and your code will likely choke and die.

The version for Windows Phone 7, just released in preview, is extended to support features including the camera, accelerometer, contacts, and notifications. There is also support for plugins:

PhoneGap-WP7 maintains the plugability of other platforms via a command pattern, to allow developers to add functionality with minimal fuss, simply define your C# class in the WP7GapClassLib.PhoneGap.Commands namespace and derive your class from BaseCommand.
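
As a hedged sketch of what that looks like: only the namespace and the BaseCommand base class below come from the documentation quoted above; the method signature and the PluginResult/DispatchCommandResult calls are my assumptions about how the command pattern is wired up, so check the PhoneGap-WP7 sources before relying on them.

    // Assumed: BaseCommand and PluginResult are defined in the PhoneGap WP7 class library.
    namespace WP7GapClassLib.PhoneGap.Commands
    {
        public class EchoCommand : BaseCommand
        {
            // Invoked from JavaScript via PhoneGap's command dispatcher; the
            // plugin's arguments arrive as a JSON-encoded options string.
            public void Echo(string options)
            {
                // Hand the result back to the JavaScript caller.
                DispatchCommandResult(new PluginResult(PluginResult.Status.OK, options));
            }
        }
    }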

In general Windows Phone 7 is not well supported by cross-platform toolkits, so PhoneGap support is an interesting development. PhoneGap has a high profile currently, and is being integrated into a diverse range of tools, from Adobe Dreamweaver to Embarcadero RadPHP, as well as the standard PhoneGap tools based on Eclipse.

Review: Continuous Delivery by Jez Humble and David Farley

I like this book. I know I like it because I find myself wanting to quote from it frequently. It is a book that almost every software developer should read, even if you disagree with parts of it – which is likely, because it is opinionated. The authors always give reasons for their opinions though, which means that if you disagree, you need to articulate why that is; or they may even change your mind. In consequence you find yourself learning as you read.

The authors are software theoreticians, but they are also practitioners; in fact they are practitioners first and theoreticians afterwards. This means they are pragmatic rather than dogmatic. Here is an example. Chapter 13 discusses software dependencies, and page 372 covers circular dependencies, “probably the nastiest dependency problem.” A circular dependency is when component A depends on component B, and component B also depends on component A.

A bad idea; but the authors write:

Surprisingly, we have seen successful projects with circular dependencies in their build systems. You may argue with our definition of “successful” in this case, but there was working code in production, which is enough for us.

As an aside, this kind of dry humour is characteristic, as also evident in remarks like this:

We are certain that, occasionally, manually intensive releases work smoothly. We may well have been unlucky in having mostly seen the bad ones.

The subject of the book is Continuous Delivery. So what is that? Well, if Continuous Integration is about ensuring that your software always builds, then Continuous Delivery is about ensuring that your software always deploys. The final form, as it were, of Continuous Delivery is Continuous Deployment, where you are so confident of your automated build and deploy process that any checked-in code that passes its tests can be deployed immediately. I was confused about the difference between Continuous Delivery and Continuous Deployment so I wrote a post about it; it turns out that there is not much difference.

The principle behind Continuous Delivery is that software is not done until it is released. If the release process is long, arduous and infrequent, then you are not really doing Agile development. A section of chapter 1 is devoted to release anti-patterns, and these form an excellent rationale for taking an interest in Continuous Delivery.

My guess is that anyone who has been involved in professional software development will wince a little while reading through these anti-patterns, thinking “that is what we used to do” or even “that is what we do”.

That said, Humble and Farley do not fall into the trap of merely writing about how not to do it. Rather, they address in some detail the kinds of problems you will face if you decide to embrace the Continuous Delivery methodology. The key ingredient in Continuous Delivery is that pretty much everything must be automated, otherwise it is too difficult to do. But how do you automate something like Acceptance Testing? That is the subject of chapter 8. How do you automate a deployment at all? That is the subject of chapter 6. The authors are not on a higher plane than the rest of us, and much of the advice is straightforward, even at the level of “Always use relative paths,” which is a tip in chapter 6.

The authors talk a lot about testing, as you would expect, but there is also extensive discussion of software configuration management, describing different approaches such as centralised and distributed version control and even specific tools. The chapter on Advanced Version Control is a particularly good read. Humble and Farley articulate the point that branching and merging is antithetical to Continuous Integration and therefore Continuous Delivery:

If different members of the team are working on separate branches or streams then by definition they’re not continuously integrating (p 390)

Does this mean branches are a bad idea? Not always, say the authors, but they also state:

Our strong recommendation is to create long-lived branches only on release … new work is always committed to the trunk (p 392)

The reason is not only to enable Continuous Integration, but also because merging is complex and error-prone.

Software configuration management is not easy, but it is a relatively mature aspect of software development. This is less true of what you might call infrastructure configuration management; yet infrastructure dependencies such as versions and configurations of the operating system or web server are a common reason for deployment failures. Several chapters discuss this problem in detail. In principle, the authors say:

The desired state of your infrastructure should be specified through version-controlled configuration.

This leads to some thoughtful discussion of how to achieve this.

Another theme, as you would expect, is that development and operations people need to be working together and not in isolation. To some extent this is a DevOps book.

A great book then; but there are flaws. One is that there is some repetition because of the way the book is organised. This is good if you are inclined to read chapters in isolation, but not so good if you are reading straight through. In practice I did not find it too annoying, but it is there.

Another issue is that while the authors do cover Microsoft .NET to some extent, this is usually in the form of a brief mention and there is more focus on Java. This may be in part because of their preference for open source. It is still a good read for .NET developers, because the principles are platform-agnostic, but Microsoft platform developers may find it irritating at times. Team Foundation Server, say the authors, is “essentially an inferior knock-off of Perforce” (p 386).

The discussion of specific tools is a strength but also a weakness, in that the tools will change over time and the book will become dated.

This is not the last word on Continuous Delivery, but it is an enjoyable and thought-provoking read. Recommended.

 

Amazon entices Android developers with $50 incentive

Amazon is offering Android developers $50 of AWS (Amazon Web Services) credit if they submit an app to the Amazon Android app store.

Although the announcement refers to apps that actually make use of AWS, this does not seem to be a pre-condition:

September 7 – November 15: Android developers who submit an app that is approved to the Amazon Appstore for Android through October 15 will receive a $50 promotional code towards the use of AWS products and services

The move ties in with reports of Amazon developing its own Android-based tablet/Kindle. Exactly what Amazon will offer is still under wraps.

Amazon is an interesting contender in the mobile wars because it has its own instant ecosystem – millions of customers who are already signed up with accounts and stored credit card details. Add in Kindle eBooks, the MP3 store, and the Amazon Instant Video Store for streaming video, and it amounts to a comprehensive content offering that approaches that of Apple.

The AWS element is also significant, and in this respect Amazon is ahead of Apple. Of course there is nothing to stop you using AWS with apps for iOS or other platforms, though there is synergy when it comes to payments.

The relationship with Google is interesting, in that Google controls Android but Amazon is not hooking into Google services or the official Android Marketplace. Amazon is showing no sign of developing its own search engine though, so Google will still get some benefit if Amazon devices are popular, provided Google remains the default for search.

Windows Phone 7 apps, stats and future

Justin Angel, a former Microsoft employee who worked on Silverlight, has posted his analysis of the 24,505 apps he found in the Windows Phone 7 marketplace, exploiting a loophole that lets you get the download links. A few highlights:

  • 97% of the apps are not obfuscated, meaning that it is trivial (with easily available tools) to decompile the source.
  • 90% are Silverlight vs 10% XNA. This is not so much an indicator of the popularity of the two frameworks, but more an indicator of how many apps are graphic-rich games rather than some other kind of utility. Of course if you are making a very simple app, Silverlight is easier than XNA, so that may be a factor too.
  • 99% are C# vs 1% Visual Basic and a smattering of F#. A fascinating stat that makes me wonder about the future of Visual Basic.

There are more interesting stats about libraries and components used, for which I refer you to the original post.

Does it matter? Well, Windows Phone 7 has not been a big success so far, though the reasons for that are not so much the quality of the OS or the ease of developing apps, but rather its low profile at retail and the fact that most operators and manufacturers don’t really need it: Apple and Android between them pretty much have the market.

That said, there are a few reasons why Windows Phone or some evolution of it may yet be significant. Nokia is betting on it, and while Nokia is undoubtedly in difficulties, this must work in Microsoft’s favour. Further, fear, uncertainty and doubt surrounding Android patent and copyright issues may persuade some industry players to give Windows Phone another look.

Perhaps more significantly, when Microsoft unveils its developer strategy at the BUILD conference next week, it is likely that the application model in Windows Phone, or some evolution of it, will integrate with what is planned for Windows 8. NVIDIA is already talking about how Windows 8 will run Windows Phone apps.

For these reasons I believe there is at least a glimmer of hope for Microsoft in the mobile world; certainly the developer story to be officially told next week will be an interesting one.

Internet security hangs on a DNS thread, as hacks of The Register, Telegraph and Acer sites demonstrate

Several well-known web sites including The Register, The Daily Telegraph, UPS.com and Acer.com suffered a DNS hack on Sunday evening. The consequence is that visitors to the sites may see a Turkish hack message.

The hacked sites share a common registrar, Ascio Technologies, and were registered through NetNames. Both NetNames and Ascio are brands of GroupNBT. Zone-h suggests:

It appears that the turkish attackers managed to hack into the DNS panel of NetNames using a SQL injection and modify the configuration of arbitrary sites, to use their own DNS.

This kind of attack is more serious than simply hacking into a web server and defacing the content. DNS maps internet names to the IP numbers that identify actual servers on the internet. This means that the hackers can intercept not only web requests for the affected names, but also email. Hackers could also read cookies placed on users’ computers by the real sites, possibly gaining access to user accounts in cases where there is a saved logon.

What this means is that access to DNS records is security-critical. It should give any business pause for thought. How strong is the username/password which gives access to your ISP or registrar’s control panel, allowing the DNS records to be changed? How secure are the servers themselves at that ISP or registrar? It is these that were cracked in this case, according to Zone-h.

Fixing a DNS problem is never instant, since records are replicated across the internet and any changes take time to propagate. This also explains why some users see hacked sites, while others get through to the correct destination. It is possible that the hackers chose to strike at the weekend, in the hope that corrective action would take longer. At the time of writing (23.30 on Sunday) the sites I checked have been fixed at source, including The Register and The Daily Telegraph, but some users are still seeing defaced sites.

Hands on with Delphi XE2 for Apple iOS

Last week Embarcadero released RAD Studio XE2. RAD Studio is the suite of tools based on Delphi, a language – originally called Object Pascal – and visual development tool which still has a loyal following. XE2 is the most interesting new release for years, introducing a 64-bit compiler for Windows and cross-platform support for Apple’s OS X and iOS.

I have been trying the final release, paying particular attention to the iOS support, bearing in mind the importance of Apple’s mobile platform. The RAD Studio IDE only runs on Windows, so the most convenient way to target Apple’s platform is to install on a Windows virtual machine. I used a Parallels VM running Windows 7 64-bit, hosted on OS X Lion.

Setting up for iOS development with RAD Studio XE2 involves several steps. First, you have to use the new FireMonkey application framework in order to do cross-platform work. FireMonkey emerged after Embarcadero acquired the intellectual property of a company called KSDev early in 2011, along with its founder Eugene Kryukov:

KSDev’s intellectual property has been purchased by Embarcadero Technologies, the makers of Delphi and C++Builder Rapid App Development Tools. I am excited to announce that I have joined Embarcadero’s next gen frameworks team leading a very exciting project. As a result I will no longer operate the KSDev company and will not be accepting any further orders for KSDev products.

The products in question were Delphi frameworks called VGScene and DXScene, and these seem to have been melded with remarkable speed into what is now called FireMonkey. FireMonkey controls such as buttons and listboxes are all custom drawn, which is good for cross-platform consistency, but bad if you want your application to look and feel truly native. FireMonkey is not compatible with Delphi’s VCL (Visual Component Library), though the basic controls like TButton and TEdit are similar. FireMonkey applications can be either 3D, with the emphasis on Flash-like visual effects, or HD, used for more traditional user interfaces.

Support for Mac OS X is more fully integrated than for iOS. You can easily add an OS X target to a FireMonkey application, but for iOS you have to create a new application that only targets iOS. Another difference is that Embarcadero has its own Mac compiler, whereas the iOS support depends on the FreePascal open source compiler. If you are targeting OS X, you can code and debug entirely from the Delphi IDE, whereas for iOS you have to export your project and compile in Xcode.

In order to prepare for iOS development, you first need a Mac with Xcode and the iOS SDK installed. Next, install RAD Studio XE2 on Windows. Then find the FireMonkey-iOS folder in the directory where RAD Studio XE2 is installed. This contains FireMonkey-iOS.dmg. Copy this to the Mac side, mount it, and run the FireMonkey iOS installers to add FreePascal and the FireMonkey libraries to your Xcode setup.

If you are also doing OS X development you will need to install the Platform Assistant on the Mac, but for iOS this is not required.

Now you can go over to the Windows side, start a new application observing all the tasty new options, and choose a FireMonkey HD iOS application.

This creates a new form sized for an iPhone 4.0, though of course you can amend this. There is a tool palette which looks well-stocked with components, but note the following warning:

While you are designing your iOS application, you can only use components that are supported on iOS devices. However, the Tool Palette might contain components that are Windows-only or otherwise not supported on iOS.

That is an annoyance, and contributes to a feeling that iOS support is a little, dare I say, unfinished. Still, undaunted I built my sample app, following the path I have trodden before by creating a simple calculator.

[Screenshot: the sample calculator app, with green buttons]

You might wonder why all the buttons are green. I did, too, and played around a little trying to change it. This seems to involve creating a custom style. I started doing this, but decided it was not necessary for my simple test. It does make the point that the default appearance does not have the iOS look and feel.

There is what seems to me a small bug in the designer. If you select more than one control, the sizing tabs disappear and there is no visual evidence that the controls are selected, other than a heading in the Object Inspector that reads “n items selected.” At first I thought it was impossible to select more than one control, but this is not the case. However, there is no clipboard support in the visual designer. For example, if you want several buttons that are exactly the same, you need to add them individually, then multi-select and set the properties as needed.

While developing an iOS app, you can test it by running it on Windows within the IDE. When it is ready to test on iOS, you need to export the project. To do this, you need a command-line tool called dpr2xcode.exe, which is in the RAD Studio bin folder. Running this from the command-line is inconvenient, so the usual approach is to use Configure Tools from the Tools menu to add it to the IDE.

It is puzzling that Embarcadero has not included this by default.

Running the tool creates an xcode sub-folder in your project directory, with an .xcodeproj project file along with some default icons. I then copied the entire project folder to the Mac. It is also possible to use a shared folder accessed from both Windows and Mac, though I found this does not work if the folder is on the Windows side, so I simply copied it back and forth.

I opened the project in Xcode, and was prompted to “Modernize” it in Xcode jargon, to no ill effect. At this point I could successfully build it and run in the iPhone emulator.

Of course I wanted to test it on an actual device. I attached an iPhone 4 and did the Apple provisioning dance. After the usual messing around with certificates, it worked.

And here it is on the iPhone:

[Photo: the calculator app running on the iPhone]

It works, and to that extent I am impressed. That said, I am disappointed with the performance. This is subjective, but I am talking about the responsiveness of the UI. There are perceptible pauses, which for such a simple app is surprising. I have created this same app numerous times using different development tools, and had expected that the Delphi version would be up there with the best, but while it is acceptable it is less responsive than some of the others.

Let me add, though, that a Delphi developer will find the process described above a lot easier than learning Objective-C, and I was able to create this fully working app in an afternoon, so I should not complain too much.

Maybe when Embarcadero comes up with its own iOS compiler there will be some improvement.