
Windows Phone and Windows 8 convergence: a few more hints from Microsoft

With Nokia in the midst of the US launch of its Lumia 900, the phone that both Nokia and Microsoft hope will win some market share for Windows Phone 7, this is not the best moment, from a marketing perspective, to talk about Windows Phone 8. Especially not when Windows Phone 8 will have a new kernel based on Windows 8 rather than Windows CE – news which leaked in early February and was made almost official by writer Paul Thurrott, who has access to advance information under NDA:

Windows Phone 8, codenamed Apollo, will be based on the Windows 8 kernel and not on Windows CE as are current versions. This will not impact app compatibility: Microsoft expects to have over 100,000 Windows Phone 7.5-compatible apps available by the time WP8 launches, and they will all work fine on this new OS.

Nevertheless, Microsoft is talking a little about Windows Phone 8. Yesterday Larry Lieberman posted about the future of the Windows Phone SDK. After echoing Thurrott’s words about compatibility, he added:

We’ve also heard some developers express concern about the long term future of Silverlight for Windows Phone. Please don’t panic; XAML and C#/VB.NET development in Windows 8 can be viewed as a direct evolution from today’s Silverlight. All of your managed programming skills are transferrable to building applications for Windows 8, and in many cases, much of your code will be transferrable as well. Note that when targeting a tablet vs. a phone, you do of course, need to design user experiences that are appropriately tailored to each device.

Panic or not, these are not comforting words if you love Silverlight. Lieberman is saying that if you code today in Silverlight, you had better learn to code for WinRT instead in order to target future versions of Windows Phone.
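
As a rough sketch of what that transition looks like in practice (my own illustration, not Microsoft’s; the control and handler names are hypothetical), a typical XAML event handler is the same C# in Silverlight for Windows Phone and in a WinRT XAML app – the visible change is mainly in the namespaces:

```csharp
// Silverlight for Windows Phone (today):
using System.Windows;            // RoutedEventArgs
using System.Windows.Controls;   // Button, TextBlock

// WinRT / Metro-style XAML on Windows 8 – swap the using directives:
// using Windows.UI.Xaml;            // RoutedEventArgs
// using Windows.UI.Xaml.Controls;   // Button, TextBlock

public partial class MainPage
{
    // In a real project this field is generated from the XAML; it is declared
    // here so that the sketch stands alone.
    private TextBlock GreetingText = new TextBlock();

    // Hypothetical handler wired to a Button in XAML; the body is unchanged
    // whichever platform the page is compiled for.
    private void GreetButton_Click(object sender, RoutedEventArgs e)
    {
        GreetingText.Text = "Hello from XAML and C#";
    }
}
```

The differences that bite are in the surrounding plumbing – navigation, application lifecycle, and any Silverlight API with no WinRT equivalent – which is presumably why Lieberman says “much” rather than “all” of your code will move across.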

The odd thing here is that while Lieberman says:

today’s Windows Phone applications and games will run on the next major version of Windows Phone.

(in bold so that you do not doubt it), he also says that “much of your code will be transferrable as well”, which is equivalent to saying “not all your code will be transferrable.” So how is it that “non-transferrable code” nevertheless runs on Windows Phone 8 if it is already compiled for Windows Phone 7? It sounds like some kind of compatibility layer; I would be interested to know more about how this will work.

I was also intrigued by this comment from Silverlight developer Morton Nielsen:

Its really hard to sell this investment to customers with all these rumors floating, and you only willing to say that my skill set is preserved is only fuel onto that. The fact is that there is no good alternative to Silverlight, and its an awesome solution for distribution LOB apps, but the experience on win8 is horrible at best. And it doesn’t help that the blend team is ignoring us with a final v5, and sl5 is so buggy it needs 100% DEET but we don’t see any GDRs any longer.

What are these acronyms? DEET just means insect repellent, i.e. bug fixes. GDR is likely “General Distribution Release”; I take Nielsen to be saying that no bug-fix releases are turning up for Silverlight 5, implying that Microsoft has abandoned it.

All in all, this does not strike me as a particularly reassuring post for Windows Phone developers hoping that their code will continue to be useful, despite Lieberman’s statement that:

I hope we’ve dispelled some of your concerns

Still, it has been obvious for some time that WinRT, not Silverlight, is how Microsoft sees the future of its platform so nobody should be surprised.

Update: Several of you have commented that Lieberman talks about WinRT on Windows 8, not on Windows Phone 8. Nobody has said that WinRT will be on Windows Phone 8, only that the kernel will be that of Windows 8 rather than Windows CE. That said, Lieberman does specifically refer to “the long term future of Silverlight for Windows Phone” and goes on to talk about WinRT. The implication is that WinRT is the future direction for Windows Phone as well as for Windows 8 on tablets. Maybe that transition will not occur until Windows Phone 9; maybe Windows Phone as an OS will disappear completely and become a form factor for Windows 8 or Windows 9. This aspect is not clear to me; if you know more, I would love to hear.

Multicore processor wars: NVIDIA squares up to Intel

I first became aware of NVIDIA’s propaganda war against Intel at the 2012 GPU Technology conference in Beijing. CEO Jen-Hsun Huang stated that CPUs are remarkably inefficient for multicore processing:

The CPU is fast and is terrific at single-threaded performance, but because so much of the electronics inside the CPU is dedicated to out of order execution, branch prediction, speculative execution, all of the technology that has gone into sustaining instruction throughput and making the CPU faster at single-threaded applications, the electronics necessary to enable it to do that has grown tremendously. With four cores, in order to execute an operation, a floating point add or a floating point multiply, 50 times more energy is dedicated to the scheduling of that operation than the operation itself. If you look at the silicon of a CPU, the floating point unit is only a few per cent of the overall die, and that is consistent with the energy used to sequence and schedule the instructions when running complicated programs.

That figure of 50 times surprised me, and I asked Intel’s James Reinders for a comment. He was quick to respond, noting that:

50X is ridiculous if it encourages you to believe that there is an alternative which is 50X better.  The argument he makes, for a power-efficient approach for parallel processing, is worth about 2X (give or take a little). The best example of this, it turns out, is the Intel MIC [Many Integrated Core] architecture.

Reinders went on to say:

Knights Corner is superior to any GPGPU type solution for two reasons: (1) we don’t have the extra power-sucking silicon wasted on graphics functionality when all we want to do is compute in a power efficient manner, and (2) we can dedicate our design to being highly programmable because we aren’t a GPU (we’re an x86 core – a Pentium-like core for “in order” power efficiency). These two turn out to be substantial advantages that the Intel MIC architecture has over GPGPU solutions that will allow it to have the power efficiency we all want for highly parallel workloads, but able to run an enormous volume of code that will never run on GPGPUs (and every algorithm that can run on GPGPUs will certainly be able to run on a MIC co-processor).

So Intel is evangelising MIC against GPGPU solutions such as NVIDIA’s Tesla line. Yesterday NVIDIA’s Steve Scott spoke up to put the other case. If Intel’s point is that a Tesla is really a GPU pressed into service for general computing, then Scott’s first point is that the cores in MIC are really CPUs, albeit of an older, simpler design:

They don’t really have the equivalent of a throughput-optimized GPU core, but were able to go back to a 15+ year-old Pentium design to get a simpler processor core, and then marry it with a wide vector unit to get higher flops per watt than can be achieved by Xeon processors.

Scott then takes on Intel’s most compelling claim, compatibility with existing x86 code. It does not matter much, says Scott, since you will have to change your code anyway:

The reality is that there is no such thing as a “magic” compiler that will automatically parallelize your code. No future processor or system (from Intel, NVIDIA, or anyone else) is going to relieve today’s programmers from the hard work of preparing their applications for the future.
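
Scott’s point holds even on an ordinary multicore CPU, never mind a GPU or MIC. As a trivial sketch (my own example, nothing to do with either vendor’s toolchain), no compiler will turn the serial loop below into the parallel one for you; the programmer has to assert that the iterations are independent and restructure the code accordingly:

```csharp
using System;
using System.Threading.Tasks;

class ParallelSketch
{
    static void Main()
    {
        var data = new double[10000000];

        // Serial version: correct, but stays on one core however clever the compiler.
        for (int i = 0; i < data.Length; i++)
            data[i] = Math.Sqrt(i) * 0.5;

        // Parallel version: the programmer has explicitly declared that the
        // iterations are independent and may run concurrently.
        Parallel.For(0, data.Length, i =>
        {
            data[i] = Math.Sqrt(i) * 0.5;
        });

        Console.WriteLine(data[data.Length - 1]);
    }
}
```

Scaling the same idea out to thousands of GPU threads, or to MIC’s wide vector units, is where the real porting effort lies, whichever architecture you target.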

What is the real story here? It would, of course, be most interesting to compare the performance of MIC vs Tesla, or against the next generation of NVIDIA GPGPUs based on Kepler; and may the fastest and most power-efficient win. That will have to wait though; in the meantime we can see that Intel is not enjoying seeing the world’s supercomputers install NVIDIA GPGPUs – the Oak Ridge National Laboratory Jaguar/Titan (the most powerful supercomputer in the USA) being a high profile example:

In addition, 960 of Jaguar’s 18,688 compute nodes now contain an NVIDIA graphical processing unit (GPU). The GPUs were added to the system in anticipation of a much larger GPU installation later in the year.

Equally, NVIDIA may be rattled by the prospect of Intel offering strong competition for Tesla. It has not had a lot of competition in this space.

There is an ARM factor here too. When I spoke to Scott in Beijing, he hinted that NVIDIA would one day produce GPGPUs with ARM chips embedded for CPU duties, perhaps sharing the same memory.

Run Metro apps in a window on Windows 8

I have been drilling into the Visual Studio 11 beta recently. This includes a simulator for debugging Windows 8 Metro-style apps, and I was surprised by the way it works. Unlike the Windows Phone emulators, which are isolated environments for testing apps, the simulator is actually a window into your own machine.

image

You can do some strange stuff. For example, you can not only debug your app in the simulator; you can also run up Visual Studio 11 on the desktop within the simulator and edit your app there as well. It will not let you run the simulator within the simulator though – I tried!

It occurred to me that the Metro simulator accomplishes one of the things some users of the Consumer Preview have asked for. It lets you run Metro apps in a window, so that you can resize them, minimize them, and avoid the jarring context switch between full-screen Metro and the normal desktop with the taskbar.

image

What is the simulator? It is actually a remote desktop session into your own machine. Normally you cannot do this, as client versions of Windows only allow one session at a time and you already have one running, but Microsoft has given itself special permission.

Running Metro apps in a window is not its intended purpose, but it is interesting to try as it shows how this might have worked if Microsoft had taken a more desktop-centric approach to the dual personality in Windows 8.

A further thought is to consider why the Visual Studio team decided to do things this way. Microsoft’s developers saw the necessity of working in the Visual Studio IDE while also exercising the Metro-style app.

Well, what if you are not a developer, but you still want to have Excel open while you check out, for example, the Bing Finance app? It is not only developers that may have good reasons to have a desktop and a Metro app running side by side.

Dual monitors accomplish this of course, and to some extent so does the “Snap” split view if you have the right screen resolution, but running Metro in its own window is a rather convenient solution.

Apple breaks web storage in iOS 5.1, does not care about web apps?

Many iOS apps which rely on web storage APIs for persistent data have been broken by the recent upgrade to iOS 5.1. The issue affects apps built with PhoneGap, or others which use WebKit APIs to store data. The effect for users is that they lose all their data after the upgrade. For example, it sounds like the issue has hit this app:

image

Another developer says:

My statistics show users abandoning ship as their settings are wiped over and over, after each app restart.
This is a critical error that must be patched as soon as possible. Remember there’s also a delay from Apples app approval process to consider.

Put more precisely, WebKit used to store its local databases in Library/WebKit, which is a location that the OS regards as persistent and which is backed up to iCloud. In iOS 5.1 this data is stored in Library/Caches, which means it is regarded as temporary and is likely to be deleted. The W3C Candidate Recommendation says of localStorage:

User agents should expire data from the local storage areas only for security reasons or when requested to do so by the user.

An embedded browser is not quite the same as a web browser though, and if you are using SQLite in WebKit then that falls outside the W3C HTML 5 API, since Web SQL is no longer included.

The issue is complicated in that there also seems to be a bug, described here, which causes data to be lost after upgrading an app to a newer version; and there are problems with actual web apps as well as with apps that use an embedded UIWebView.

PhoneGap is fixable in that it can call native APIs and there is work going on to implement this. The danger is that more platform-specific code undermines the cross-platform benefits.

Discussions on the Apple developer forums during the beta period for iOS 5.1 show that Apple was aware of the issue and that it is by design. The impression given is that Apple was annoyed by the number of developers using web storage to speed up their apps (whether web or native) rather than just to store customer-created content, and felt it was imposing too much of a burden on the constrained storage space in an iOS device.

It does not help that there is no way to increase the storage in an iPad or iPhone other than by replacing it with a newer one with more memory.

The problem is a real one, but you cannot escape the impression that Apple considers solutions like PhoneGap, or even web apps that behave like local apps, as a kind of workaround or hack that is to be discouraged in favour of apps written entirely with the iOS SDK.

Apple benefits from true native apps as they are more likely to be exclusive to its platform, and must be sold through the App Store with a fee to Apple.

The official Data Storage Guidelines for iOS are here.

Developers dislike monochrome Visual Studio 11 beta

Microsoft is having trouble convincing developers that its new Metro-influenced Visual Studio user interface, in the forthcoming version now in beta, is a good idea.

To be more precise, it is not so much Metro as the way Microsoft has chosen to use it, with toolbox icons now black and white. The change also affects menus such as IntelliSense in the code editor. Here is the new design:

image

or you can choose a “Dark” colour scheme:

image

and the old 2010 design for comparison:

image

Developers voting on this over at UserVoice, the official feedback site, have made this the single biggest issue, with 4707 votes.

image

They do not much like the All Caps in the toolbox names either.

Microsoft has marked this as “Under review” so maybe there could yet be a more colourful future for Visual Studio 11.

Adobe will charge a royalty for use of “Premium features” in Flash Player

Adobe has announced that from August 1 2012, developers who make use of hardware-accelerated Stage3D in Flash Player, in combination with Domain Memory, will pay a 9% net revenue share as royalty. Net revenue is what remains after taxes, payment processing fees and “social network platform fees” (sounds like Facebook) are deducted.

“Domain Memory” is a block of memory declared as a byte array that is used as memory by the Alchemy C/C++ to ActionScript compiler. Allocating some bytes from this byte array is much faster than asking the Flash Player to grab some real memory from the system for your new object or variable, and manipulating memory via this technique is quicker too. In other words, it is a hack to improve performance.

Adobe is aiming the new licensing arrangement at games developers. Most developers will not be affected because of the following:

  • A license is only needed if both Stage3D hardware acceleration and Domain Memory are used. Use just one of these and you are fine.
  • If the game or app is packaged using Adobe AIR for iOS, Android, Windows or Mac (in other words, anywhere) then no license is needed.
  • Applications that make less than $50,000 in revenues (not clear whether this is net or gross) will be royalty-free.
  • Applications released before July 31 2012 will remain royalty-free.

There may be a program fee however, which I imagine will apply whether or not you pay royalties.

Although the new royalty is not all that onerous, it is significant as a change of direction. Until now, the deal with all these runtimes – Flash Player, Silverlight, Java – has been that you might pay for the tools, but the runtime is free.

If you are considering Flash versus other runtimes for your new project, Adobe has now informed you that future free use of the runtime is not a foregone conclusion. Who knows what Adobe will define as “premium features” that might require royalties in future?

According to the FAQ, further premium features are indeed planned:

We are already planning premium features that enable "instant play" gaming experiences for content that relies on large assets which will be able to cache data using a local storage API. For content publishers looking for better branding and user acquisition, another planned new feature would allow apps to request if the user would like to create a shortcut on the desktop, task bar or start menu pointing to the application.

Overall it seems a curious move, at a time when Adobe seems to be moving away from Flash and towards HTML5 as its long-term strategy. The company may profit a little from a few high-profile games, but the dampening effect on Flash usage in the long term will offset any advantage.

No developer likes to pay runtime royalties and I would guess that Adobe’s move will spark an immediate search for alternatives.

Update: there is a great discussion of the issue, with participation from Adobe’s Thibault Imbert, here. Why the change in direction, when Adobe has previously made money from its tools?

at some point you are capped. Ask any tooling company today, hence why you see companies going to consumers, services, because games could generate millions of revenue with maybe 200 copies of Flash Builder and Flash Pro sold. Is it a good business? Not really.

says Imbert. Another issue is that third-party tools for Flash have been taking market share away from Adobe, which must hurt:

The model where Adobe invests all of the resources in developing the Flash Player, and then projects such as Haxe and Unity pull developers away from Adobe tooling is one that was not sustainable under the old model. Under the new model, it doesnt matter which tools and technologies you are using to develop Flash content, since revenue is generated based on the runtime and not tooling.

says Adobe’s Mike Chambers.

Microsoft open sources further ASP.NET Frameworks, publishes code with Git

Microsoft has released two further ASP.NET frameworks as open source, joining ASP.NET MVC, which was already open source. These are published on CodePlex, Microsoft’s open source repository site, using the newly added Git support. You can find the code here.

The two additional frameworks are ASP.NET Web API and ASP.NET Web Pages. Just to recap, ASP.NET supports several frameworks:

ASP.NET Web Forms: the original framework shipped with .NET 1.0 and greatly enhanced since then. Excellent for quickly assembling a dynamic web site but somewhat heavyweight with its ViewState field and complex page lifecycle. Designed in pre-Ajax days.

ASP.NET MVC: A more elegant framework with separation of content from code, amenable to test-driven development, based on controllers and routing.

ASP.NET Web Pages, formerly known as Razor: An alternative view engine designed to work with ASP.NET MVC. Uses the .cshtml or .vbhtml extension in place of .aspx. A declarative language with codewords like @foreach and @if – though Microsoft’s Scott Guthrie says it is not a language but rather a template markup syntax.
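
For anyone who has not seen the syntax, a short hypothetical .cshtml fragment gives the flavour; the code after each @ is ordinary C#:

```cshtml
@{
    // Hypothetical inline data; a real page would pull this from a model or database.
    var products = new[] { "Widget", "Gadget", "Sprocket" };
}

@if (products.Length == 0)
{
    <p>Nothing to show.</p>
}

<ul>
    @foreach (var name in products)
    {
        <li>@name</li>
    }
</ul>
```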

ASP.NET Web API: formerly known as WCF Web API, this is a framework for building REST services. A key framework if you have a cloud-plus-mobile target in mind. It is now installed with ASP.NET MVC.
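
To give an idea of the programming model, here is a minimal, hypothetical Web API controller (the Product type and data are mine, purely for illustration). Derive from ApiController, and with the default route, public methods whose names start with Get are matched to HTTP GET requests such as /api/products or /api/products/1, the result being serialised to JSON or XML by content negotiation:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Web.Http;

// Hypothetical model type for the sketch.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductsController : ApiController
{
    private static readonly List<Product> Products = new List<Product>
    {
        new Product { Id = 1, Name = "Widget" },
        new Product { Id = 2, Name = "Gadget" }
    };

    // GET /api/products
    public IEnumerable<Product> GetAllProducts()
    {
        return Products;
    }

    // GET /api/products/1
    public Product GetProduct(int id)
    {
        var product = Products.FirstOrDefault(p => p.Id == id);
        if (product == null)
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return product;
    }
}
```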

So why is ASP.NET Web Forms not open source? According to Microsoft’s Scott Hanselman:

The components that are being open sourced at this time are all components that are shipped independently of the core .NET framework, which means no OS components take dependencies on them. Web Forms is a part of System.Web.dll which parts of the Windows Server platform take a dependency on. Because of this dependency this code can’t easily be replaced with newer versions except when updates to the .NET framework or the OS ships.

though it is not clear why this prevents the code being published.

Hanselman adds that Microsoft is not only publishing the code, but also taking contributions:

Today we continue to push forward and now ASP.NET MVC, Web API, Web Pages will take contributions from the community.

Why is Microsoft doing this? Within Microsoft, there have always seemed to be open source advocates like Hanselman, and others who pull back. One answer is that the open source folk are winning more arguments now.

Another take is that this is the outcome of industry-wide changes. Microsoft’s platform is less dominant than it was; it still reigns on the desktop, but Macs, tablets and smartphones are eroding its position on the client, and on the web Netcraft’s figures show steady decline since June 2010:

image

Most of the competition is open source and it is possible that this is a factor behind the latest moves. Microsoft is not open sourcing its IIS web server yet, though Hanselman does make the point that ASP.NET MVC runs well on Mono, the open source implementation of the .NET Framework, which is often used with Apache.

Developers: will you do Metro?

It is fascinating to watch the Metro-fication of all things Microsoft, from the Xbox 360 user interface to Windows Phone to Windows 8 to forthcoming versions of Office and other applications.

Future versions of Dynamics products were previewed at the Convergence 2012 event (which included a session called CRM goes Metro) and there are a bunch of screenshots here.

image

Microsoft calls Metro a design language and you can see its guiding principles here. Calling it a language does not seem quite right; the word “style” is more accurate, but it does have building block elements (and yes it is blocky) which I guess make it more than just a style.

A safe prediction at this point is that all Microsoft’s products will be touched by Metro influence, even though not all will become full Metro apps running on the Windows Runtime (WinRT).

In the past the style adopted by Microsoft for its own applications has strongly influenced third-party applications as well. Once Windows, Office, Dynamics and other apps have a Metro look, other apps that do not may begin to look dated or out of place.

Metro is controversial though, perhaps even more so than the Office Ribbon which replaced menus in Office 2007. There is some connection: members of the Office team who worked with Steven Sinofsky on the design of Office 2007, including Julie Larson-Green and Jensen Harris, are now working with him on Windows 8. Harris has written extensively about the work on Office 2007 on his Office User Interface Blog, though the last substantial post was in 2008.

What’s not to like about Metro? Here are a few arguments against:

  • Beauty is in the eye of etc; but the blockiness of the Metro style does give it a utilitarian appearance. In Windows Phone 7 it is nice to use, but not so great to look at.
  • The Live Tile concept, where shortcut blocks can be populated with current information, adds a random element to Metro start screens which does not always look good.
  • The emphasis on simplicity and immersion makes Metro vulnerable to the accusation that it wastes too much precious screen space.
  • Metro tends to be a horizontally scrolling style, though I am not sure if this is baked into the guidelines. This takes some adjustment since most of us are more used to vertical scrolling to see more content.
  • Metro seems to be optimized for a touch UI, and while its advocates insist that it is just as good with keyboard and mouse, that is a stretch. Metro seems to be a big bet on touch as the future of human-computer interaction.

On the other hand, the usability of Windows Phone 7 is a point in its favour, and some are convinced. Paul Greenberg, in a positive take on Microsoft’s strategy based on his trip to Convergence 2012, says:

They have nailed UX (a.k.a user experience). Nailed it. Their combination of the extremely well done Metro interface and their work on natural user interfaces involving voice and touch is the new gold standard – and I’m someone who loves Apple products. (please, Mac fanboys, spare my life.)

I would be interested to hear from developers: do you expect to embrace the Metro style in your apps, whether in WinRT or elsewhere?

What’s new in SQL Server 2012?

Microsoft’s SQL Server 2012 is released next month but is available to download now (I am not sure what the distinction is). I have a high regard for Microsoft’s database server; it seems to me that the team mostly gets it right. The product has become somewhat diffuse though, especially as the Business Intelligence aspect has grown, and this may account for what to me is a rather unfocused launch for SQL Server 2012, even though its name suggests that it is the most significant release since SQL Server 2008.

The following slide summarises the new features, presumably with the type size suggesting the importance of each one.

image

But is the ODBC Driver for Linux really more important than the SQL Server Data Tools, for example? Not in my view; but that reflects how SQL Server represents different things to different people.

So what are the key new features? Here’s my quick take.

Always On

A new feature called Availability Groups, which is an improved version of database mirroring.

Improved failover clustering, which supports multi-site clustering across subnets – able to fail over across datacentres.

ColumnStore Index

A new type of index for data warehouses. This is actually pretty simple: the name says it all. Here is Microsoft’s illustration:

image

and explanation:

A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored.

Why do this? Because it is more efficient when the query only requests a few columns from the table. Microsoft claims performance improvements from 6X to 100X in cases where the data can be cached in RAM, and thousand-fold improvements where the working set does not fit in RAM.
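
The intuition is easy to show outside the database. In the hypothetical sketch below (a toy illustration of column-oriented versus row-oriented layout in C#, not of SQL Server’s actual storage engine), totalling one column only touches the memory holding that column when the data is laid out by column, whereas the row layout drags every other field through the cache as well – and values within a column tend to compress better too, which adds to the gain.

```csharp
using System;

class ColumnStoreSketch
{
    // Row-oriented layout: every field of a sale sits alongside the others.
    struct SaleRow
    {
        public int ProductId;
        public int Quantity;
        public decimal Amount;
    }

    static void Main()
    {
        const int n = 1000000;

        // Row store: summing Amount also pulls ProductId and Quantity
        // through the cache, because they share the same pages.
        var rows = new SaleRow[n];
        decimal totalFromRows = 0;
        for (int i = 0; i < n; i++)
            totalFromRows += rows[i].Amount;

        // Column store: each column is its own contiguous array (ProductId and
        // Quantity would be separate arrays), so a query that only needs Amount
        // reads only the Amount data.
        var amounts = new decimal[n];
        decimal totalFromColumns = 0;
        for (int i = 0; i < n; i++)
            totalFromColumns += amounts[i];

        Console.WriteLine("{0} {1}", totalFromRows, totalFromColumns);
    }
}
```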

SQL Server Data Tools

This is my favourite feature, probably because it is developer-focused. These are the tools that were code-named “Juneau” and which install into Visual Studio 2010. There are some visual tools, but this is essentially a code-centric approach to database design, where you design your database with all its tables, queries, triggers, stored procedures and so on. You can then build it and test it against a private “localdb” instance of SQL Server. What I like is that the database project includes the entire design of your database in a form that can be checked into source control and compared against other schema versions. Here is the Add New Item dialog for a database project:

image

Data Quality Services

Data Quality Services (DQS) lets you check your data against a Data Quality Knowledge Base (DQKB), the contents of which are specific to the type of data in the database and may be created and maintained by your business or obtained from a third party. If your data includes addresses, for example, the DQKB might have all valid city names to prevent errors. Features of DQS include data cleansing, de-duplication through data matching, profiling a database for quality, and monitoring data quality.

image

Illustration and more details are here.

Updated SQL Server Management Studio

SQL Server Management Studio now runs in the Visual Studio 2010 shell.

LocalDB

LocalDB is a local instance of SQL Server aimed at developers and for use as an embedded database in single-user applications. It is a variant of SQL Server Express, but different in that it does not run as a service. Rather, the LocalDB process is started on demand by the SQL native client and closed down when there are no more connections. You can attach database files at runtime by using AttachDBFileName in the connection string. LocalDB is intended to replace user instances which are now deprecated.
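
As a sketch of how that looks from client code (my own example; the database file name is hypothetical, and “(localdb)\v11.0” is the default instance name for SQL Server 2012 LocalDB), a C# application just opens a connection and LocalDB starts on demand:

```csharp
using System;
using System.Data.SqlClient;

class LocalDbSketch
{
    static void Main()
    {
        // Opening the connection starts the LocalDB process if it is not already
        // running; AttachDbFileName attaches the .mdf database file at run time.
        var connectionString =
            @"Data Source=(localdb)\v11.0;" +
            @"AttachDbFileName=|DataDirectory|\MyApp.mdf;" +
            "Integrated Security=True";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT @@VERSION", connection))
            {
                Console.WriteLine(command.ExecuteScalar());
            }
        }
    }
}
```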

FileTables

This is the most intriguing feature in SQL Server 2012. It is described here:

The FileTable feature brings support for the Windows file namespace and compatibility with Windows applications to the file data stored in SQL Server … In other words, you can store files and documents in special tables in SQL Server called FileTables, but access them from Windows applications as if they were stored in the file system, without making any changes to your client applications.

and the purpose:

Enterprises can move this data from file servers into FileTables to take advantage of integrated administration and services provided by SQL Server. At the same time, they can maintain Windows application compatibility for their existing Windows applications that see this data as files in the file system.

Integration of the file system and the database is not a new idea, and Microsoft has tried variants before, such as the “M” drive that was once part of Exchange, the aborted WinFS feature planned for Windows Longhorn (Vista), and SharePoint, which can store documents in SQL Server while presenting them as Windows file shares through WebDAV.

That said, FileTables in SQL Server 2012 are not an attempt to reinvent the file system, but are presented more as a way of supporting legacy applications while managing data in SQL Server. It is an interesting feature though, and it would not surprise me if users find some unexpected ways to exploit it.

Power View

Codenamed “Project Crescent”, this is a web-based reporting client for businesses that have embraced Microsoft’s platform, because it has several key dependencies:

  • SharePoint Server Enterprise Edition
  • SQL Server Reporting Services
  • Silverlight on the client

In fact, Power View is described as:

a feature of SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Server 2010 Enterprise Edition

Power View reports that I have seen do look good, and there is an Office ribbon-style designer for customising the report. That said, I would guess that Microsoft now wishes it had used HTML 5 rather than Silverlight for this – there are those Apple iPad and Windows 8 Metro users to think of, after all.

Microsoft emphasises that Power View is not a replacement for Report Designer or Report Builder, but an ad-hoc reporting tool.

Closing thoughts

There is more in SQL Server 2012, as a glance back at the initial slide will tell you, but the above is a starting point if you are wondering what it is all about. It is also worth noting that Microsoft still gives away SQL Server Express, which supports up to 10GB per database and includes many of the same features as the paid-for versions; it is the same product at heart.

Someone who finds that SQL Server Express actually meets all their needs asked me why Microsoft gives it away. My guess is that this is a consequence of all the other free database engines available, such as MySQL and PostgreSQL, interesting newer NoSQL options like MongoDB, and of course the equivalent free versions of Oracle and IBM DB2. A proportion of customers who start with SQL Server Express will grow into the paid-for editions.

This does make SQL Server Express an excellent choice for smaller scale applications and small businesses, particularly since it integrates smoothly into Microsoft’s developer stack. Having said which, I am becoming something of an Entity Framework sceptic, but that is a story for another day – and fortunately you do not have to use EF if you do not want to.

PhoneGap is Adobe, Cordova is Apache

The hot cross-platform mobile toolkit PhoneGap was created by Nitobi, a company acquired by Adobe last year. Almost at the same time, the project was submitted to Apache as an open source project. However, the Apache project is not called PhoneGap; it was briefly known as Callback and is now called Cordova (the name of the street in Vancouver where Nitobi was based).

A new official blog post explains why PhoneGap was renamed at Apache, but also makes the point that the PhoneGap brand will continue.

PhoneGap is a distribution of Apache Cordova. You can think of Apache Cordova as the engine that powers PhoneGap, similar to how WebKit is the engine that powers Chrome or Safari. (Browser geeks, please allow me the affordance of this analogy and I’ll buy you a beer later.)

Over time, the PhoneGap distribution may contain additional tools that tie into other Adobe services, which would not be appropriate for an Apache project. For example, PhoneGap Build and Adobe Shadow together make a whole lot of strategic sense. PhoneGap will always remain free, open source software and will always be a free distribution of Apache Cordova.

Read it carefully, because it is still potentially confusing. Note that PhoneGap “will always remain free, open source software” though it may gain hooks into commercial Adobe tools. At least, that is how I read it.

I would also expect that Adobe will come up with design and development tools for which PhoneGap (or Cordova) is invisible to the user. You will just be able to build for multiple platforms.

The post adds:

Currently, the only difference is in the name of the download package and will remain so for some time.

I will add that there is great brand awareness of PhoneGap and what it is, and little for Cordova, so if you want to be understood, talk about PhoneGap.