Appcelerator CEO on EMEA expansion, Titanium vs PhoneGap, and how WebKit drives HTML5 standards

I spoke to Appcelerator CEO Jeff Haynie yesterday, just before today’s announcement of the opening of an EMEA headquarters in Reading. It has only 4 or 5 staff at the moment, mostly sales and marketing, but will expand into professional services and training.

Appcelerator’s product is a cross-platform (though see below) development platform for both desktop and mobile applications. The mobile aspect makes this a hot market to be in, and the company says it has annual growth of several hundred percent. “We’re not profitable yet, but we’ve got about 1,300 customers now,” Haynie told me. “On the developer numbers side, we’ve got about 235,000 mobile developers and about 35,000 apps that have been built.”

Jeff Haynie, Appcelerator

In November 2011, Red Hat invested in Appcelerator and announced a partnership based on using Titanium with OpenShift, Red Hat’s cloud platform.

Another cross-platform mobile toolkit is PhoneGap, which has received lots of attention following the acquisition of Nitobi, the company which built PhoneGap, by Adobe, and also the donation of PhoneGap to the Apache Foundation. I asked Haynie to explain how Titanium’s approach differs from that of PhoneGap.

Technically what we do and what PhoneGap does is a lot different. PhoneGap is about how you take HTML, wrap it in a web browser, put it into a native container and expose some of the basic APIs. Titanium is really about how you expose JavaScript as an API for native capabilities, and let you build a real native application or an HTML5 application. We offer both: a true native application – I mean the UI is native and you get full access to all the APIs as if you had written it natively, but you are writing it in JavaScript. We have also now got an HTML5 product where that same codebase can be deployed into an HTML5 web-driven interface. We think that is wildly different technically and delivers a much better application.

Haynie agrees that cross-platform tools can compromise performance and design, and even resists placing Titanium in the cross-platform category:

Titanium is a real native UI. When you’re in an iPhone TableView it’s actually a real native TableView, not an HTML5 table that happens to look like a TableView. You get the best of both worlds. You get a JavaScript-driven, web-driven API, but when you actually create the app you get a real app. Then we have an open extensible API so it’s really easy if you want to expose additional capabilities or bring in third-party libraries, very similar to what you do in Java with JNI [Java Native Invocation].

The category has got a bit of a bad rap. We wouldn’t really describe ourselves as cross-platform. We’re really an API that allows you to target multiple different devices. It’s not write-once run anywhere, it’s really API driven.

80% of our core APIs are meant to be portable. Filesystems, threads, things like that. Even some of the UI layer, basic views and buttons and things like that. But then you have a Titanium iOS namespace [for example] which allows you to access all the iOS-specific APIs, that aren’t portable.
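To make the contrast concrete, here is a sketch of the pattern Haynie describes: JavaScript calls that map to native UI objects. The factory-method style follows Titanium’s documented API, but the Ti object below is a stub of my own so that the snippet stands alone; on a device, createTableView hands back a genuinely native control.

```javascript
// Stub of the Titanium namespace, for illustration only. In a real
// Titanium app the Ti object is provided by the runtime, and these
// factory calls return proxies for native UI widgets.
var Ti = {
  UI: {
    createTableViewRow: function (opts) {
      return { title: opts.title };
    },
    createTableView: function (opts) {
      // On iOS the real implementation wraps a native UITableView,
      // not an HTML table styled to look like one.
      return { nativeClass: "UITableView", rows: opts.data || [] };
    }
  }
};

// The application code: plain JavaScript, no HTML involved.
var rows = ["Apples", "Pears", "Oranges"].map(function (title) {
  return Ti.UI.createTableViewRow({ title: title });
});
var table = Ti.UI.createTableView({ data: rows });
```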

I asked Haynie for his perspective on the mobile platform wars. Apple and Android dominate, but what about the others?

RIM and Microsoft are fighting for third place. I would go long on Microsoft. Look at Xbox, look at the impact of long-term endeavours, they have the sustainability and the investment power to play the long game, especially in the enterprise. We’ll see Microsoft make significant strides in Windows 8 and beyond.

Even within Android, there are going to be a lot of different types of Android that will be both complementary and competitive with Google. They will continue to take the lion’s share of the market. Apple will be a smaller but highly profitable and vertically integrated ecosystem. In my opinion Microsoft is a bit of a bridge between both. They’re more open than Apple, and more vertically integrated than Google, with tighter standardisation and stacks.

I wouldn’t quite count RIM out. They still have a decent market share, especially in certain parts of the world and certain types of application. But they’ve got a long way to go with their new platform.

So will Titanium support Windows 8 “Metro” apps, running on the new WinRT runtime?

Yes, we don’t have a date or anything to announce, but yes.

I was also interested in his thoughts on Adobe, particularly as there is some flow of employees from Adobe to Appcelerator. Is he seeing migration of developers from Flex, Flash and AIR to Titanium?

Adobe has had a tremendously successful product in Flash, the web wouldn’t be the web today if it wasn’t for Flash, but the advent of HTML5 is encroaching on that. How do they move to the next big thing? I don’t know if they have a next big thing. And they’re dealing in an ecosystem that’s not necessarily level ground. That’s churning up lots of dissenting and different opinions inside Adobe, is what we’re hearing.

We’re seeing a large degree of people that are Flash, ActionScript oriented that are migrating. We’ve hired a number of people from Adobe. Quite a lot of people in our QA group actually came out of the Adobe AIR group. Adobe is a fantastic company, the question is what’s their future and what’s their plan?

Finally, we discussed web standards. With a product that depends on web technology, does Appcelerator get involved in the HTML5 standards process? The question prompted an intriguing response with regard to WebKit, the open source browser engine.

We’re heavily involved in the Eclipse foundation, but not in the W3C today. I spent about three and a half years on the W3C in my last company, so I’m familiar with the process and the people. The W3C process is largely driven – and I know the PhoneGap people have tried to get involved – by the WHATWG and the HTML5 working group, which ultimately are driven by the browser manufacturers … it’s a largely vendor-oriented, fragmented space right now, that’s the challenge. We still haven’t managed to get a royalty-free, IPR-free codec for video.

I’d also say that one of the biggest factors pushing HTML5 is less the standardisation itself and more WebKit. WebKit has become the de facto [standard], which has really been driven by Apple and Google and against Microsoft. That’s driving HTML5 forward as much as the working group itself.

Wikipedia goes dark for a day to protest against proposed US legislation

All English Wikipedia requests today redirect to a page protesting against proposed US legislation, specifically the draft SOPA and PIPA bills.


Other sites will also be protesting, including Reddit (a 12 hour protest) and Mozilla, the Firefox people.

Many web searchers will be discovering the value of the cached pages held by search engines. That aside, this is a profound issue that is about more than just SOPA and PIPA. SOPA stands for the Stop Online Piracy Act, and PIPA for the Protect IP Act. The problem is that the internet is the most powerful means of sharing information that mankind has devised. This brings many benefits, but not so much if it is your proprietary information that is being exchanged for free (music, video, ebooks, software), or if it gives easy access to counterfeit versions of your products; designer handbags, watches, drugs and the like are particularly vulnerable, because intellectual property forms a large proportion of the value of the purchase.

If you consider this issue at the highest level, there are three broad solutions:

  1. Do nothing, on the grounds that the world has changed and there is nothing you can do that is effective. Technology has made some forms of copyright impossible to enforce. Affected businesses have to adapt to this new world.
  2. Introduce legislation that widens the responsibility for web sites that enable or facilitate copyright infringement beyond the sites themselves, to include search engines, ISPs and payment processors. One of the debates here is how much the owners of the pipes, the infrastructure on which the internet runs, should take legal responsibility for the content that flows through them. Such legislation might be somewhat effective, but at a heavy cost in terms of forcing many sites and services offline even if they have only a slight and tangential relationship to infringing content, and greatly raising the cost of providing services. At worst we might end up with a censored, filtered, limited, expensive internet that represents a step backwards in technology. The further risk is that such legislation may put too much power in the hands of the already powerful, since winning legal arguments is in practice about financial muscle as well as facts, rights and wrongs.
  3. Find some middle path that successfully restrains the flow of infringing content but without damaging the openness of the internet or its low cost.

There is of course a risk that legislators may think they are implementing the third option, while in fact coming close to the second option. There is also a risk that attempting to implement the third option may in practice turn out to be the first option. It is a hard, complex problem; and while I agree that the proposed legislation is not the right legislation (though note that I am not in the USA), there is no disputing the core fact, that the internet facilitates copyright infringement.

There are also aspects of today’s internet that concern me as I see, for example, children relying on the outcome of Google searches to paste into their homework with little understanding of the distinction between what is authoritative, what is propaganda, and what is anecdotal or simply wrong.

In other words, the “no control” approach to the internet has a downside, even if the upside is so great that it is worth it.

Meet Resilient File System (ReFS), a new file system for Windows

Microsoft has announced the Resilient File System (ReFS), a replacement for the NTFS file system which has been used since the first release of Windows NT in 1993.

The new file system increases limits in NTFS as follows:

Limit                      NTFS                    ReFS
Max file size              2^64 – 1 bytes          2^64 – 1 bytes
Max volume size            2^40 bytes              2^78 bytes
Max files in a directory   2^32 – 1 (per volume)   2^64
Max file name length       32K unicode (255 unicode)   32K unicode
Max path length            32K                     32K

I have done my best to set out the NTFS limits but it is complicated, and there are limitations in the Windows API as well as in NTFS. See this article for more on NTFS limits; and this article for an explanation of file name and path length limits in the Windows API.
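To put the volume-size jump in perspective, a quick back-of-the-envelope calculation (my arithmetic, not Microsoft’s):

```javascript
// 2^40 bytes is one tebibyte (TiB), while the new 2^78-byte ceiling is
// 2^38 times larger: 2^78 = 2^8 * 2^70, and 2^70 bytes is one zebibyte
// (ZiB), so ReFS allows volumes of up to 256 ZiB. Powers of two are
// represented exactly by JavaScript numbers, so no precision is lost.
var ntfsVolumeLimit = Math.pow(2, 40); // bytes, per the table above
var refsVolumeLimit = Math.pow(2, 78); // bytes
var growthFactor = refsVolumeLimit / ntfsVolumeLimit; // 2^38
var zebibytes = refsVolumeLimit / Math.pow(2, 70);    // 256
```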

Microsoft’s announcement focuses on two things. One is resilience, with claims that ReFS is better at preserving data in the event of power failure or other calamity. Another is how ReFS is designed to work alongside Storage Spaces, about which I posted earlier this month.

Of the two, Storage Spaces will be more visible to users. In addition, it sounds as if ReFS will not be the default in Windows 8 client:

…we will implement ReFS in a staged evolution of the feature: first as a storage system for Windows Server, then as storage for clients, and then ultimately as a boot volume. This is the same approach we have used with new file systems in the past.

Note that there are losses as well as gains in ReFS. Short file names are gone, so are quotas, so is compression:

The NTFS features we have chosen to not support in ReFS are: named streams, object IDs, short names, compression, file level encryption (EFS), user data transactions, sparse, hard-links, extended attributes, and quotas.

Overall ReFS strikes me as a conservative rather than radical upgrade. This is not the return of WinFS, an abandoned project which was to bring relational file storage to Windows. It will not help, in itself, with the biggest problem client users have with their file system: finding their stuff. Nor does it have built-in deduplication, which can make storage substantially more efficient. Microsoft says the file system is pluggable (as is NTFS) so that features like deduplication can be added by other providers or by Microsoft with other products.

OEMs are still breaking Windows: can Microsoft fix this with Windows 8?

Mark Russinovich works for Microsoft and has deep knowledge of Windows internals; he created the original Sysinternals tools which are invaluable for troubleshooting.

His account of troubleshooting a new PC purchased by a member of his family is both amusing and depressing, though I admire his honesty:

My mom recently purchased a new PC, so as a result, I spent a frustrating hour removing the piles of crapware the OEM had loaded onto it (now I would recommend getting a Microsoft Signature PC, which are crapware-free). I say frustrating because of the time it took and because even otherwise simple applications were implemented as monstrosities with complex and lengthy uninstall procedures. Even the OEM’s warranty and help files were full-blown installations. Making matters worse, several of the craplets failed to uninstall successfully, either throwing error messages or leaving behind stray fragments that forced me to hunt them down and execute precision strikes.

What he is describing, remember, is his company’s core product, following its mutilation by one of the companies Microsoft calls “partners”.

Russinovich adds:

As my cleaning was drawing to a close, I noticed that the antimalware the OEM had put on the PC had a 1-year license, after which she’d have to pay to continue service. With excellent free antimalware solutions on the market, there’s no reason for any consumer to pay for antimalware, so I promptly uninstalled it (which of course was a multistep process that took over 20 minutes and yielded several errors). I then headed to the Internet to download what I – not surprisingly given my affiliation – consider the best free antimalware solution, Microsoft Security Essentials (MSE).

Right. I do the same. However, the MSE install failed, probably thanks to a broken transfer application used to migrate files and settings from an old PC, and it took him hours of work to identify the problem and complete the install.

What interests me here is not so much the specific problems, but Microsoft’s big problem: that buying a new Windows PC is so often a terrible user experience. Not always: business PCs tend to be cleaner, and some OEMs are better than others. Nevertheless, although I have had Microsoft folk tell me a number of times that its partners were getting the message, that to compete with Apple they need to deliver a better experience, the problem has not been cracked.

There is something about the ecosystem which ensures that users get a bad product. It goes like this, I guess: customers are price-sensitive, and to hit the required price OEM vendors have to take money from the crapware vendors and others desperate to drive users towards their products. Yet in doing so they perpetuate the situation where you have to buy Apple, or be a computer professional, in order to get a clean install. That describes a broken ecosystem.

Microsoft’s Signature PCs are another option, but they are only available from Microsoft stores.

The next interesting question is whether Microsoft can fix this with Windows 8. It may want to follow the example of Windows Phone 7, which is carefully locked down so that OEMs and operators can add their own apps, but their ability to customise the operating system is limited, protecting the user experience. It is hard to see how Microsoft can achieve the same with the x86 version of Windows 8, since this remains an open platform, though it may be possible to insulate the Metro side from too much tinkering. Windows 8 on ARM, on the other hand, may well follow the Windows Phone pattern.

Nokia Drive on the Lumia: it works

Over the weekend I took the opportunity to try out Nokia Drive, a turn-by-turn navigation app which comes bundled with the Lumia 800 I have been testing. Well, it was not so much “took the opportunity” as “try anything”, since the TomTom the driver was relying on had lost its signal somewhere in the depths of rural England.

I fired up Nokia Drive, entered the destination, and was impressed. It picked up a signal, displayed a well-designed screen stating what was the next turn and how far away, showed our location and progress complete with the road name, and spoke out clear instructions in a voice that was less robotic than some.


I was a passenger in this case; how does this work if you are the driver? It turns out that Nokia Drive disables the screen saver (which developers can do with a couple of lines of code – check out UserIdleDetectionMode) so it runs continuously. This is a battery drain, so for longer journeys you will need some sort of car kit; you can get by with just a bracket to hold the phone and a standard micro USB power supply.

For basic navigation this seems to me as good as a TomTom, though there are a few things missing. You cannot calculate a route offline, it does not show time to destination, and it does not have speed camera warnings.

Nevertheless, a significant benefit for Nokia’s Windows Phone users.

PHP Developer survey shows dominance of mobile, social media and cloud

Zend, a company which specialises in PHP frameworks and tools, has released the results of a developer survey from November 2011.

The survey attracted 3,335 respondents drawn, it says, from “enterprise, SMB and independent developers worldwide.” I have a quibble with this, since I believe the survey should state that these were PHP developers. Why? Because I have an email from November which asked me to participate and said:

Zend is taking the pulse of PHP developers. What’s hot and what matters most in your view of PHP?

There is a difference between “developers” and “PHP developers”, and much though I love PHP, the survey should make this clear. Nevertheless, if you participated but mainly use Java or some other language, your input is still included. Later the survey states that “more than 50% of enterprise developers and more than 65% of SMB developers surveyed report spending more than half of their time working in PHP” – but if the respondents were already identified as PHP developers, that is not a valuable statistic.

Caveat aside, the results make good reading. Some highlights:

  • 66% of those surveyed are working on mobile development
  • 45% are integrating with social media
  • 41% are doing cloud-based development

Those are huge figures, and demonstrate how distant now is the era when mobile was a small niche alongside mainstream development. It is the mainstream now – though you would get a less mobile-oriented picture if you surveyed enterprise developers alone. Similar thoughts apply to social media and cloud deployment.

The next figures that caught my eye relate to cloud deployment specifically.

  • 30% plan to use Amazon
  • 28% will use cloud but are undecided which to use
  • 10% plan to use Rackspace
  • 6% plan to use Microsoft Azure
  • 5% have another public cloud in mind (Google? Heroku?)
  • 3% plan to use IBM Smart Cloud

The main message here is: look how much business Amazon is getting, and how little is going to giants like Microsoft, IBM and Google. Then again, these are PHP developers, in which light 6% for Microsoft Azure is not bad – or are these PHP developers who also work in .NET?

I was also interested in the “other languages used” section. 82% use JavaScript, which is no surprise given that PHP is a web technology, but more striking is that 24% also use Java, well ahead of C/C++ at 17%, C# at 15% and Python at 11%.

Finally, the really important stuff. 86% of developers listen to music while coding, and the most popular artists are:

  1. Metallica
  2. Pink Floyd and Linkin Park (tied for second place)


The mystery of unexpected expiring sessions in ASP.NET

This is one of those posts that will not interest you unless you have a similar problem. That said, it does illustrate one general truth, that in software problems are often not what they first appear to be, and solving them can be like one of those adventure games where you think your quest is for the magic gem, but when you find the magic gem you discover that you also need the enchanted ring, and so on.

Recently I have been troubleshooting a session problem on an ASP.NET application running on a shared host (IIS 7.0).

This particular application has a form with some lengthy text fields. Users complete the form and then hit save. The problem: sometimes they would take too long thinking, and when they hit save they would lose their work and be redirected to a login page. It is the kind of thing most of us have experienced once in a while on a discussion forum.

The solution seems easy: just increase the session timeout. However, this had already been done, and the sessions still seemed to time out too early. Failure one.

My next thought was to introduce a workaround, especially as this is a shared host where we cannot control exactly how the server is configured. I set up a simple AJAX script that ran in the background and called a page in the application from time to time, just to keep the session alive. I also had it write a log for each ping, in order to track the behaviour.

By the way, if you do this, make sure that you disable caching on the page you are pinging. Just pop this at the top of the .aspx page:

<%@ OutputCache Duration="1" Location="None" VaryByParam="None"%>
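The ping script itself need not be complicated. Here is a sketch of the sort of thing I mean; the page name KeepAlive.aspx, the log shape and the five-minute interval are illustrative rather than taken from the actual application:

```javascript
// Build a ping function that requests a lightweight page so the ASP.NET
// session stays alive, recording each ping so the behaviour can be tracked.
// sendRequest is injected: in the browser it would wrap XMLHttpRequest.
function makeKeepAlive(sendRequest, log) {
  return function pingSession() {
    // Cache-busting timestamp, in addition to disabling caching on the page.
    var url = "KeepAlive.aspx?ts=" + Date.now();
    sendRequest(url);
    log.push({ url: url, at: new Date().toISOString() });
  };
}

// Browser wiring would look roughly like this:
//   var log = [];
//   var ping = makeKeepAlive(function (url) {
//     var xhr = new XMLHttpRequest();
//     xhr.open("GET", url, true);
//     xhr.send();
//   }, log);
//   setInterval(ping, 5 * 60 * 1000); // ping every five minutes
```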

It turned out though that the session still died. One moment it was alive, next moment gone. Failure two.

This pretty much proved that session timeout as such was not the issue. I suspected that the application pool was being recycled – and after checking with the ISP, who checked the event log, this turned out to be the case. Check this post for why this might happen, as well as the discussion here. If the application pool is recycled, then your application restarts, wiping any session values. On a shared host, it might be someone else’s badly-behaved application that triggers this.

The solution then is to change the way the application stores session variables. ASP.NET has three session modes. The default is InProc, which is fast but not resilient, and for obvious reasons not suitable for apps which run on multiple servers. If you change this to StateServer, then session values are stored by the ASP.NET State Service instead. Note that this service is not running by default, but you can easily enable it, and our helpful ISP arranged this. The third option is to use SQLServer, which is suitable for web farms. Storing session state outside the application process means that it survives pool recycling.
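For reference, the switch is a small change in web.config. This is a sketch only: the connection string shown is the State Service’s default local endpoint, and the 60-minute timeout is illustrative.

```xml
<!-- Sketch of a web.config change from InProc to StateServer session state.
     The stateConnectionString below is the ASP.NET State Service's default
     local endpoint; the timeout value is in minutes and is illustrative. -->
<system.web>
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=loopback:42424"
                timeout="60" />
</system.web>
```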

Note the small print though. Once you move away from InProc, session variables are serialized, not just held in memory. This means that classes must have the System.Serializable attribute. Note also that objects might emerge from serialization and deserialization a little different from how they went in, if they hold state that is more complex than simple properties. The constructor is not called, for example. Further, some properties cannot sensibly be serialized. See this article for more information, and why you might need to do custom serialization for some classes.

After tweaking the application to work with the State Service though, the outcome was depressing. The session still died. Failure three.

Why might a session die when the pool recycles, even if you are not using InProc mode? The answer seems to be that the new pool generates a new machine key by default. The machine key is used to encrypt and decrypt the session values, so if the key changes, your existing session values are invalid.

The solution was to specify the machine key in web.config. See here for how to configure the machine key.
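The relevant web.config fragment looks something like this; the key values are placeholders, and real keys should be randomly generated hex strings of the appropriate length, identical across any servers that share sessions.

```xml
<!-- Sketch only: pin the machine key so an application pool recycle does not
     generate a new one. The key values here are placeholders - generate your
     own random hex keys rather than copying anything from an article. -->
<system.web>
  <machineKey validationKey="[128 hex characters]"
              decryptionKey="[48 hex characters]"
              validation="SHA1"
              decryption="AES" />
</system.web>
```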

Everything worked. Success at last.

Windows Phone, Windows 8, and Metro Metro Metro feature in Microsoft’s last keynote at CES

I watched Microsoft CEO Steve Ballmer give the last in a long series of Microsoft keynotes at the Consumer Electronics Show in Las Vegas.


There were three themes: Windows Phone, Windows 8, and Xbox with Kinect. It was a disappointing keynote though, mainly because of the lack of new news. Most of the Windows Phone presentation could have been from last year, except that we now have Nokia involvement which has resulted in stronger devices and marketing. What we have is in effect a re-launch necessitated by the failure of the initial launch; but the presentation lacked the pizzazz that it needed to convince sceptics to take another look. That said, I have enjoyed using Nokia’s Lumia 800 and still believe the platform has potential; but Microsoft could have made more of this opportunity. A failed voice demo did nothing other than remind us that voice control in Windows Phone is no Apple Siri.


What about Windows 8? Windows Chief Marketing Officer Tami Reller gave a presentation, and I was hoping to catch a glimpse of new stuff since the preview at last year’s BUILD conference. There was not much though, and Reller was using the same Samsung tablet as given to BUILD delegates. We did get a view of the forthcoming Windows Store that I had not seen before:


Reller mainly showed the Metro interface, in line with a general focus on Metro also emphasised by Ballmer. She talked about ARM and said that Metro apps will run on both Intel and ARM editions of Windows 8; notably she did not say the same thing about desktop apps, which implies once again that Microsoft intends to downplay the desktop side in the ARM release.

Reller also emphasised that Windows 8 Metro works well on small screens, as if to remind us that it will inevitably come to Windows Phone in time.

Windows 8 looks like a decent tablet OS, but the obvious questions are why users will want this when they already have iOS and Android, and why Microsoft is changing direction so dramatically in this release of Windows. The CES keynote was a great opportunity to convince the world of the merits of its new strategy, but instead it felt more as if Microsoft was ducking these issues.

Xbox and Kinect followed, and proved firmer ground for the company, partly because these products are already successful. There was a voice control demo for Xbox which worked perfectly, in contrast to the Windows Phone effort. We also heard about Microsoft’s new alliance with News Corporation, which will bring media including Fox News and the Wall Street Journal to the console. We also saw the best demo of the day, a Sesame Street interactive Kinect game played with genuine enthusiasm by an actual child.

Microsoft unveiled Kinect for Windows, to be released on 1st February, except that there was not much to say about it. The product is already available for pre-order, and there was more to be learned from the retail listing.


The new product retails at $249.99, compared to $149 for the Xbox version, but seems little different. Here is what the description says:

This Kinect Sensor for Windows has a shortened USB cable to ensure reliability across a broad range of computers and includes a small dongle to improve coexistence with other USB peripherals. The new firmware enables the depth camera to see objects as close as 50 centimeters in front of the device without losing accuracy or precision, with graceful degradation down to 40 centimeters. “Near Mode” will enable a whole new class of “close up” applications, beyond the living room scenarios for Kinect for Xbox 360.

I imagine hackers are already wondering if they can get the new firmware onto the Xbox edition and use that instead. Kinect for Windows does not come with any software.

What is the use of it? That is an open question. Potentially it could be an interesting alternative to a mouse or touch screen, face recognition could be used for personalisation, and maybe there will be some compelling applications. If so, none were shown at CES.

I am not sure of the extent of Microsoft’s ambitions for this first Windows release of Kinect, but at $249 with no software (the Xbox version includes a game) I would think it will be a hard sell, other than to developers. If wonderful apps appear, of course, I will change my mind.

Top languages on Github: JavaScript reigns, Ruby and Python next

I cloned a github repository today, and while browsing the site noticed the language stats:


Git was originally developed for the Linux kernel and is mainly used by the open source community. I was interested to see JavaScript, the language of HTML5, riding so high. PHP, C and C++ are lower than I would have guessed, Ruby and Python higher.

Here are some figures for the venerable Sourceforge:

Java (7,163) 19%
C++ (6,449) 17%
C (4,752) 13%
PHP (3,521) 10%
Python (2,694) 7%
C# (2,481) 7%
JavaScript (2,011) 5%
Perl (1,138) 3%
Shell (757) 2%
Visual Basic .NET (688) 2%
Delphi/Kylix (581) 2%

This comes with a health warning. I have taken the figures from what you get if you browse the directory and drop down Programming Languages; but the total is only about 37,000, whereas Sourceforge hosts around 324,000 projects. I am not sure what accounts for the discrepancy; it could be that language is not specified for the other projects, or that they are dormant, or some other reason. But I hope the proportions indicate something of value.

Github is madly trendy, and Sourceforge ancient, so this tells us something about how open source activity has shifted towards JavaScript, Ruby and Python, and away from Java, C/C++ and C#.

Of course the overall picture of programming language usage is quite different. For example, you can get some kind of clue about commercial activity from the job boards: one currently has 77,457 US vacancies for Java, 22,413 for JavaScript, and only 5,030 for Ruby.

Nevertheless, interesting to see what languages developers on Github are choosing to work with, and perhaps an indicator of what may be most in demand on the job boards a few years from now.

Finally, looking at these figures I cannot help thinking how short-sighted Microsoft was in abandoning IronPython and IronRuby back in 2010.

How many clouds is too many? AcerCloud announced in Las Vegas

Acer has announced its AcerCloud in the run-up to CES in Las Vegas. This is a service that spans mobile devices, PCs and the internet, the aim being that pictures, documents and multimedia are available from any device. Take a picture on your smartphone, and it appears seamlessly on your PC. Download a video to your PC, and view it on your tablet. Play music stored at home from your tablet while out and about.

The press release is short on technical details, but does say:

AcerCloud intelligently uses local and cloud storage together so all data is always available

That said, it is more PC-centric than some cloud services. It seems that Acer considers the PC or notebook to be the primary repository of your data, with the cloud acting as a kind of cache:

Professionals can update sales documents on a PC and save them, and the documents will be put into the personal cloud and streamed to other devices. They can then go to their meeting with their notebook or tablet PC and have immediate access to all the updated files. The files will be temporarily accessible for 30 days in the personal cloud and on the devices, or they can choose to download the files on to other devices for long-term storage.

One of the features, which failed in the CES demo, is that a PC which is in hibernation can be woken up through wi-fi to deliver your content on demand:

As long as the main PC is in sleep (standby/hibernation) mode, Acer Always Connect technology can wake it up through Wi-Fi® so media can be retrieved via a mobile device.

This whole thing would work better if the cloud, rather than the home PC, were the central repository of data. A PC or notebook sitting at home is unreliable. It has a frail hard drive. It might be a laptop on battery power, and the battery might expire. The home broadband connection might fail – and most home connections are much slower uploading to the internet than downloading from it.

Another question: if you are one of the professionals Acer refers to, will you want to put your faith in AcerCloud for showing documents at your business meeting?

Acer wants to differentiate its products so that users seek out an Acer PC or tablet. The problem though is that similar services are already available from others. Dropbox has a cloud/device synchronisation service that works well, with no 30-day expiry. Microsoft’s SkyDrive is an excellent, free cloud storage service with smart features like online editing of Office documents. Google Music will put all your music in the cloud. Apple iCloud shares content seamlessly across Apple devices, and so on.

The problem with this kind of effort is that if it is less than excellent, it has a reverse effect on the desirability of the products, being one more thing users want to uninstall or which gets in the way of their work.

We will see then.

Finally, I note this statement:

AcerCloud will be bundled on all Acer consumer PCs starting Q2 2012. It will support all Android devices, while future support is planned for Windows-based devices.

Android first.