Xamarin acquires LessPainful, announces Test Cloud for mobile apps

Xamarin, a company which provides tools for cross-platform development in C#, has announced its acquisition of LessPainful and the creation of cloud-based testing for mobile apps based on LessPainful’s technology and the Calabash scripting language it created.

The Test Cloud will perform automated user-interface tests on real devices, hosted by Xamarin, will provide detailed reports in the event of test failures, and will support Continuous Integration so that bugs are caught as early as possible.


“After you’ve conquered the cross-platform mobile development problem, testing is the next large pain point,” says Xamarin CEO Nat Friedman. “You can’t just get by with manual testing. There’s a need for the same level of tools and processes in mobile testing that you have in desktop and web testing.”

“Quality is actually more important on mobile than in other places. Mobile sessions are very short. People are really intolerant of low quality on mobile. The release cycles are shorter too. People are revving more frequently, and testing is a bigger challenge.”

Another issue with mobile testing is the number of devices out there, especially if you throw cross-platform into the mix. “You have on Android all these manufacturers who customise the OS in different ways, you have multiple different versions that are in use, and you have multiple different form factors and device capabilities. The testing permutation matrix is huge.”

“Automated UI testing is the only kind of testing that can ensure that the app does what it is supposed to do.”

Friedman says that the Xamarin UI tests are more robust than competing UI test frameworks because they do not depend on UI image recognition. “The right answer is object-based: you identify user interface elements on the screen by object IDs.”
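The distinction is easy to illustrate in code. The sketch below is not Xamarin’s or Calabash’s actual API; it is a self-contained Ruby toy (element names and structures invented) showing why an object-ID query survives layout changes that would defeat pixel- or image-based matching:

```ruby
# Illustrative sketch only: a toy view tree and an object-ID lookup in the
# spirit of Calabash-style queries. All names here are invented.
ViewElement = Struct.new(:id, :type, :x, :y, :children)

# Walk the tree and return the element with a matching ID, or nil.
def find_by_id(root, id)
  return root if root.id == id
  root.children.each do |child|
    found = find_by_id(child, id)
    return found if found
  end
  nil
end

# The same screen rendered on two devices with different layouts:
phone  = ViewElement.new(:root, :screen, 0, 0, [
  ViewElement.new(:save_button, :button, 10, 400, [])
])
tablet = ViewElement.new(:root, :screen, 0, 0, [
  ViewElement.new(:save_button, :button, 600, 50, [])
])

# An object-ID lookup succeeds on both; coordinate- or image-based matching
# would have to be redone for every form factor in the device matrix.
[phone, tablet].each do |screen|
  button = find_by_id(screen, :save_button)
  puts "#{button.type} found at (#{button.x}, #{button.y})"
end
```

The same idea is why the "testing permutation matrix" Friedman describes becomes tractable: one script, keyed on IDs, runs unchanged across devices.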

How does testing on real devices work? If you have 50 developers testing on 27 devices in Xamarin’s cloud, will there be racks and racks of devices to support them? “That’s what it looks like at our end, racks and racks of devices,” confirmed Friedman. “The service is going to be built based on device/hour usage. We’ll be able to scale up to match what people need.

“We talk to developers who spend $8,000 a month just to get new devices. That’s not counting the labour and everything else they need to do, to set up their own testing infrastructure. It’s a giant pain point.”

Xamarin’s Test Cloud will offer plug-ins for Jenkins, TeamCity, and Microsoft’s Team Foundation Server, to support Continuous Integration.

The scripting language for the Test Cloud is either Calabash, or C# via a scripting option which is under development by Xamarin.

The Test Cloud is not just for applications developed using Xamarin’s C# framework; it also supports other frameworks, including native iOS apps written in Objective-C. However, only iOS and Android are supported.

Availability is set for the third quarter of 2013.

Xamarin’s Evolve conference is currently under way in Austin, Texas, with around 600 developers in attendance. Friedman says the company is growing fast: 1,000 developers a day download the tools, there are over 15,000 paid developers, and the company now has 65 employees.

More information on the Xamarin Test Cloud is here.

Intel fights back against iOS with free tools for HTML5 cross-platform mobile development

Today at its Software Conference in Paris, Intel presented its HTML5 development tools.


There are several components, starting with the XDK, a cross-platform development kit based on HTML5, CSS and JavaScript designed to be packaged as mobile apps using Cordova, the open source variant of PhoneGap.

There is an intriguing comment here:

The XDK is fully compatible with the PhoneGap HTML5 cross platform development project, providing many features that are missing from the open source project.

PhoneGap is Adobe’s commercial variant of Cordova. It looks as if Intel is doing its own implementation of features which are in PhoneGap but not Cordova, which might not please Adobe. Apparently code that Intel adds will be fed back into Cordova in due course.

Intel has its own JavaScript app framework, formerly called jqMobi and now called Intel’s App Framework. This is an open source framework hosted on GitHub.

There are also developer tools which run as an extension to Google Chrome, and a cloud-based build service which targets the following platforms:

  • Apple App Store
  • Google Play
  • Nook Store
  • Amazon Appstore for Android
  • Windows 8 Store
  • Windows Phone 8

And web applications:

  • Facebook
  • Intel AppUp
  • Chrome Store
  • Self-hosted

The build service lets you compile and deploy for these platforms without requiring a local install of the various mobile SDKs. It is free and according to Intel’s Thomas Zipplies there are no plans to charge in future. The build service is Intel’s own, and not related to Adobe’s PhoneGap Build, other than the fact that both share common source in Cordova. This also is unlikely to please Adobe.

You can start a new app in the browser, using a wizard.


Intel also has an iOS to HTML5 porting tool in beta, called the App Porter Tool. The aim is to convert Objective-C to JavaScript automatically, and while the tool will not convert all the code successfully it should be able to port most of it, reducing the overall porting effort.

Given that the XDK supports Windows 8 modern apps and Windows Phone 8, this is also a route to porting from iOS to those platforms.

Why is Intel doing this, especially on a non-commercial basis? According to Zipplies, it is a reaction to “walled garden” development platforms, which, while not specified, must include Apple iOS and to some extent Google Android.

Note that both iOS and almost all Android devices run on ARM, so another way of looking at this is that Intel would rather have developers work on cross-platform apps than have them develop exclusively for ARM devices.

Zipplies also says that Intel can optimise the libraries in the XDK to improve performance on its processors.

You can access the HTML5 development tools here.

Review: Three-in-one Jabra Revo headphones and headset: wired, wireless and USB

If headphones are judged on versatility, the Jabra Revo wins the prize. It works wired and wireless, it’s a USB audio device, it’s a headset with remote control, and as a final flourish it folds into a moderately compact size that you can slip in the supplied bag.


You might think that the result of this flexibility would be a fiddly and complex device, but this is not the case. The Revo has an elegant design and looks modern and sleek. The construction feels high quality as well; these headphones are lovely to handle.

In the solid plastic box you get the headphones, a drawstring bag, a USB cable, and an audio cable (with four-pole 3.5mm jacks, suitable for a headset connection to a mobile phone or tablet). The cables are braided for tangle-free connections, and bright orange so you will not miss them.


There is also a “Getting started” leaflet which I recommend you read, since not everything is obvious.

Step one is to charge the headphones. This is done using the USB cable. No charger is supplied, but you probably have a few of these already, or you can plug into any PC or Mac. A red light comes on while charging, and turns off when charging is complete, which takes about two hours from flat.

Step two is to pair the headphones with your mobile device. For this, you can set the three-way on/off/pairing switch, tucked under the right-hand earphone, to the pairing position and pair in the normal way. Alternatively, just set it to the On position and touch an NFC-enabled device to the left earphone (as I noted, not everything is obvious). This should then pair automatically, subject to a prompt on your mobile device.

I had mixed success with NFC. A Sony Xperia T smartphone failed twice, with a message “Could not pair Jabra Revo”, but worked on the third attempt. A Nokia Lumia 620 worked on the second attempt.

More than one device can be connected simultaneously, though only one at a time will play. I found this worked; I could play music on one device, then press play on another device and it automatically switched.

The good news is that Bluetooth audio worked well for me, with no skips or stutters, perhaps thanks to Jabra’s long experience with mobile communications. Volume was low to begin with, but note that the back of the right-hand earphone is also a touch volume control, and with a few strokes you can get more than enough volume.

There are also buttons at the centre of each earphone.

The right-hand button is multi-function, and does play/pause, or answer/end call, or reject a call if you hold it down, or redial last number if you double-tap.

The left-hand button is for the Jabra Sound App for iOS or Android. It is meant to launch the app, but this did not work for me with the Sony Xperia.

If you want to use the headphones wired, just plug the audio cable into the headphones. No battery power is required. If you want to use them as a USB device, attach the USB cable to a computer, wait for the drivers to install, and it works. I tried it with Skype and got reasonable results, though the microphone quality is not as good as the sound quality of the headphones.

Jabra Sound app

If you have an Apple iOS or Google Android device, you can download the Jabra Sound app. This is a music player which claims to optimise sound for the headphones. The app is free but requires a code, supplied with the headphones, to activate it.

Using the app, you specify which Jabra headphones you are using. Next, you can set Dolby Processing, Mobile Surround, and Equalisation. If you turn Dolby Processing off, the other options are disabled as well.


I am a sceptic when it comes to this kind of processing, and the Jabra Sound app did nothing to convince me that it is worthwhile. I listened to I.G.Y. by Donald Fagen, which is a well-recorded track, and found that adding “Mobile Surround” made it noticeably worse, less natural and less clear. The equaliser could be useful though, particularly as the Revo are not the most neutral headphones I have heard.


Jabra Sound is a music player and only works with local music files. You cannot use it with Spotify or Google Play or other streaming services.

Revo in use

The comfort of these on-ear headphones is good, though tastes vary and I found them just a little stiff. Then again, wireless implies mobility and a firm fit is no bad thing.

How about the sound? There are a couple of points to note. First, not all connections are equal. I found that the wired connection sounds best, followed by the USB connection, followed by the wireless connection. That does not mean that wireless sounds bad, but I did find it slightly grainy in comparison. Only slightly; if you think Bluetooth audio means low quality sound, think again.

Second, the Revo seems to accentuate the bass, a little too much for my taste. This may be good marketing as many people seem to prefer this kind of sound, but if you want to hear what the mastering engineer intended you may prefer a more neutral sound.

These points aside, the sound is sweet, clear and refined. They are not reference quality, being easily bettered by, say, high-end Sennheisers. Judged purely on the basis of sound quality for the price, the Revo is nothing special. On the other hand, this is a bundle of smart technology, considering that it is also a wireless headset with a built-in touch volume control. This makes it hard to make a fair comparison. Given the capabilities of the product overall, the sound quality is decent.

I have mixed feelings about the touch controls. The ability to control volume and skip music tracks using taps and strokes is elegant, but inevitably there is more scope for mis-taps than with conventional buttons, and I found the volume control imprecise. That said, it is great to have volume and play/pause on the headphones themselves.


The Revo has a lot going for it. Elegant design, high quality construction, good wireless performance without any skips or stutters, and unmatched flexibility – remember, this is a headset that you can use for phone calls as well as for enjoying music.

On the negative side, the tonality is a little bass-heavy and the sound quality is good but no better than it should be, considering the premium price.

If the flexibility is something you can make use of, the Revo is a strong contender.


Driver size 40mm
Impedance 32 Ohm
Frequency response (no tolerance given) 20Hz – 20,000Hz
Sensitivity 119 dB at 1V/1kHz
Weight 240g
Battery life 12 hours playback/10 days standby
Charge time 2 hours
Wireless range 10m


Review: RHA MA350 Earphones

I have just spent a happy few hours comparing earbuds, ranging from a freebie supplied with a budget smartphone to my favourite Digital Silence DS-421D.


The reason was to assess the RHA MA350 earbuds. These are well-specified for the price, made of aluminium and supplied with a small bag and three sizes of tips.


The cable is braided for tangle-resistance. There is no microphone, but if you want a headset you can get the similar MA350i, which does have a microphone and remote but costs about £15 more.

Comfort is good and the earbuds felt secure. With three sizes of tips you will probably find one that fits nicely.

What about the sound? This is what counts from my perspective, and for the money I am impressed. No, they are not the equal of the Digital Silence, which has active noise cancelling and costs four times as much. They are miles better than the worst models I tried though.

Cheap earbuds can be really nasty, with a small, boxy, shouty sound. These by contrast sound sweet, clear and crisp. The soundstage is a little constricted and the depth limited compared to the best, but they are never harsh or annoying.

Like many earbuds, they are bass-shy, but not in the least tinny.

The MA350 are advertised as “noise isolating” and I think that is fair. In a noisy environment they do an excellent job of shutting out sound, as good as you can expect from earbuds without active noise cancelling.

The tangle-free cable works well, but there is one small annoyance. The right and left markings are faintly inscribed on the cable protector rather than on the body of the earbud, and you have to squint closely to see which is which.

If you are looking for decent earbuds at a reasonable price, these are easy to recommend. I also noticed the generous 3 year warranty, and the fact that replacement tips are easy to get hold of.

The manufacturer’s specifications are as follows:

Drivers 10mm Mylar
Frequency response (no tolerance given) 16Hz – 22,000Hz
Impedance 16 ohms
Sensitivity 103dB
Rated/Max power 3/10mW
Weight 11g
Cable 1.2m braided


The PC puzzle: does the sales drop implicate or justify Windows 8?

Gartner has joined IDC in releasing figures showing a steep drop in PC sales for the first quarter of 2013.

Worldwide PC shipments totalled 79.2 million units in the first quarter of 2013, an 11.2 per cent decline from the first quarter of 2012, according to preliminary results by Gartner, Inc. Global PC shipments went below 80 million units for the first time since the second quarter of 2009. All regions showed a decrease in shipments, with the EMEA region experiencing the steepest decline.

says the release. In EMEA the decline was 16%. In the US, the decline was only 9.6%, but marked the sixth consecutive quarter of decline.

Gartner does not give worldwide figures for Apple, but says that its shipments grew by 7.4% in the US, which is a particularly strong market for Apple, giving it an 11.6% market share.

One bright spot for Microsoft:

Unlike the consumer PC segment, the professional PC market, which accounts for about half of overall PC shipments, has seen growth, driven by continuing PC refreshes.

That will please the folk at the event I am attending right now, the Microsoft Management Summit in Las Vegas, which is about managing servers, PCs and other devices in the enterprise. The consumerisation of IT is real, and so is Bring Your Own Device, but never underestimate the extent to which Windows is embedded in business.

Still, does the overall decline prove that Windows 8 was a huge mistake, and that Windows/Microsoft is now set for long-term decline?


Not necessarily. There is another way to look at these figures, which is that Microsoft was correct to conclude, back when Windows 8 was planned, that tablets and touch devices would erode the traditional PC market, and that it had to take the risk of reshaping its desktop operating system accordingly.

It is plausible, even likely, that PC sales would not have declined so fast if Windows 8 had been less radical. On the other hand, the long term cost of not reshaping the Windows UI for touch, nor introducing the app store model of software deployment, would probably be greater.

Put another way, the Windows 8 experiment means that PC sales may eventually stop declining, whereas without it they would continue to trend downward, even though the curve for this last quarter might have been less shocking.

Even if you accept this reasoning, you can still argue that the Windows 8 tablet personality is so poorly executed that it cannot compete with iOS and Android devices. Most Windows 8 users live on the desktop, even those with touch screens and tablets. I am seeing a lot of Surface Pro here in Vegas, with users loving its portability, performance, and elegant keyboard cover, but I see it being used like a laptop, not like a tablet.

Microsoft undoubtedly made mistakes in the initial release of Windows 8, the biggest problem being that the Windows Runtime side, which supports the tablet personality, was rushed out and is really not finished. Creating excellent and good-looking apps is harder than it should be, which is one reason why there are so few. There were other mistakes too:

  • The Windows 8 experience for new users, especially those with long familiarity with earlier versions, is so poor that many prefer to stick with Windows 7. A few tweaks and compromises would have made this easier.
  • Windows RT, the ARM-based edition which runs only “Modern” apps and Office, is spoilt by poor performance as well as the lack of good apps. The absence of Outlook from Office in Windows RT spoils it for the business market, where it is potentially attractive as a cost-effective, secure tablet operating system.
  • Microsoft’s OEM and retail partners do not seem to know how to sell Windows 8.

When I put these points to some Microsoft folk informally here at MMS the answer I got was “Blue will make you happy.” Blue, according to these guys, is not the code name for a new version of Windows. Rather, it is a process of incremental updates which users will get automatically. It is well-known of course that significant Windows 8 updates are on the way, and builds have been leaked.

Windows 8 has made a bad start, but it is not all bad. The desktop side (which is what most of us use most of the time) improves on Windows 7, and it is plausible that a combination of user learning along with updates that make the transition to the new Start screen less jarring will make adoption easier.

Equally, the Windows Runtime side will get better. I expect to see new and improved components for developers building apps, and better reliability and performance. Outlook is rumoured to be coming to Windows RT, and at some point we may also see versions of Office applications appear in the Modern UI.

Windows RT will have a tough fight with Intel-based tablets, but users will win either way, since next-generation ARM chipsets are much faster and Intel is making great strides with low-power, high-performance chipsets of its own.

Incidentally, Windows RT is not quite dead. I heard a questioner here at MMS ask about how to deploy their forthcoming purchase of a “large quantity” of RT devices.

Microsoft is at times a stumbling giant, but it is stumbling in the right direction with Windows 8, and it may yet work out. Even if by then it is called Windows 9.

Microsoft takes aim at VMware, talks cloud and mobile device management at MMS 2013

I am attending the Microsoft Management Summit in Las Vegas (between 5,000 and 6,000 attendees, I was told), where Brad Anderson, corporate vice president of Windows Server & System Center, gave the opening keynote this morning.


There was not a lot of news as such, but a few things struck me as notable.

Virtualisation rival VMware was never mentioned by name, but was frequently referenced by Anderson as “the other guys”. Several case studies from companies that had switched from “the other guys” were mentioned, with improved density and lower costs claimed as you would expect. The most colourful story concerned Domino’s (pizza delivery), which apparently manages 15,000 servers across 5,000 stores using System Center and has switched to Hyper-V in 750 of them. The results:

  • 28% faster hard drive writes
  • 36% faster memory speeds
  • 99% reduction in virtualisation helpdesk calls

That last figure is astonishing but needs more context before you can take it seriously. Nevertheless, there is momentum behind Hyper-V. Microsoft says it is now optimising products like Exchange and SQL Server specifically for running on virtual machines (that is, Hyper-V) and it now looks like a safe choice, as well as being conveniently built into Windows Server 2012.

I also noticed how Microsoft is now letting drop some statistics about use of its cloud offerings, Azure and Office 365. The first few years of Azure were notable in that the company never talked about the numbers, which is reason to suppose that they were poor. Today we were told that Azure storage is doubling in capacity every six to nine months, that 420,000 domains are now managed in Azure Active Directory (also used by Office 365), and that Office 365 is now used in some measure by over 20% of enterprises worldwide. Nothing dramatic, but this is evidence of growth.

Back in October 2012 Microsoft acquired a company called StorSimple which specialises in integrating cloud and on-premise storage. There are backup and archiving services as you would expect, but the most innovative piece is called Cloud Integrated Storage (CiS) and lets you access, via the standard iSCSI protocol, storage that is partly on-premise and partly in the cloud. There was a short StorSimple demo this morning which showed how you could use CiS for a standard Windows disk volume. Despite the inherent latency of cloud storage, performance can be good thanks to data tiering, which puts the most active data on the fastest storage and the least active data in the cloud. From the white paper (find it here):

CiS systems use three different types of storage: performance-oriented flash SSDs, capacity-oriented SAS disk drives and cloud storage. Data is moved from one type of storage to another according to its relative activity level and customer-chosen policies. Data that becomes more active is moved to a faster type of storage and data that becomes less active is moved to a higher capacity type of storage.
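The policy the white paper describes can be sketched as a simple rule: rank data by recent activity and assign the hottest data to the fastest tier. This Ruby sketch is a conceptual illustration only; the thresholds and names are invented and are not StorSimple’s actual algorithm:

```ruby
# Conceptual sketch of activity-based data tiering, in the spirit of the
# CiS description above. Tiers and thresholds are invented for illustration.
# Fastest/smallest to slowest/largest: flash SSD, SAS disk, cloud storage.

# Assign a tier based on how often a piece of data was accessed recently.
def tier_for(access_count)
  case access_count
  when 100..Float::INFINITY then :ssd    # hot data on flash
  when 10...100             then :sas    # warm data on disk
  else                           :cloud  # cold data in the cloud
  end
end

# Hypothetical volumes with recent access counts:
volumes = { "db-index" => 250, "current-invoices" => 40, "archive-2011" => 3 }
volumes.each do |name, accesses|
  puts "#{name} -> #{tier_for(accesses)}"
end
```

A real implementation would re-evaluate placement continuously and honour the customer-chosen policies the paper mentions, but the core idea is this mapping from activity level to storage tier.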

CiS also uses compression and de-duplication for maximum efficiency.

This is a powerful concept and could be just the thing for admins coping with increased demands for storage. I can also foresee this technology becoming part of Windows Server, integrated into Storage Spaces for example.

A third topic in the keynote was mobile device management. When Microsoft released service pack 1 of Configuration Manager (part of System Center) it added the ability to integrate with InTune for cloud management of mobile devices, provided that the devices are iOS, Android, Windows RT, or Windows Phone 8. A later conversation with product manager Andrew Conway confirmed that InTune rather than EAS (Exchange ActiveSync) policies is Microsoft’s strategic direction for mobile device management, though EAS is still used for Android. “Modern devices should be managed from the cloud” was the line from the keynote. InTune includes policy management as well as a company portal where users can install corporate apps.

What if you have a BlackBerry 10 device? Back to EAS. A Windows Mobile 6.x device? System Center Configuration Manager can manage those. There is still some inconsistency then, but with iOS and Android covered InTune does support a large part of what is needed.

Office 2013 Home and Business requires a Microsoft account to activate, a nuisance for Office 365 users

A small business contacted me with a perplexing problem related to Office 2013 and Office 365. The scenario looks like this:

  • All their staff have Office 365 E1 accounts (for small and midsize businesses)
  • They normally buy laptops with Microsoft Office. That would normally be the OEM version or more recently the Product Key Card (PKC) equivalent. This is licensed only for the PC on which it is first installed.
  • Since they already have Office, purchasing the more expensive Office 365 subscription (£9.80 vs £5.20, or £55.20 extra per user per year) which includes desktop Office is poor value (update: see comments for more notes on this option).
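The arithmetic behind that comparison is easy to check. This snippet simply reproduces the figures quoted above, working in pence to avoid floating-point rounding:

```ruby
# Extra annual cost of the Office 365 tier that includes desktop Office,
# using the per-user monthly prices quoted above (in pence).
with_desktop_office    = 980  # £9.80 per user per month
without_desktop_office = 520  # £5.20 per user per month

extra_per_year = (with_desktop_office - without_desktop_office) * 12
puts format("£%.2f extra per user per year", extra_per_year / 100.0)
# prints "£55.20 extra per user per year"
```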

With me so far? Now comes the moment when a new member of staff joins, for whom a new laptop is purchased. They buy with it the closest equivalent to the Office 2010 Product Key Card, which is Home and Business 2013, this guy:


Note the designation Home and Business, indicating that it is fine for business use.

Next, they set up the laptop for Office 365 and install their new Office 2013. Only there is a problem. Office Home and Business cannot be activated without a “Microsoft account”. You might think that an Office 365 subscription counts as a “Microsoft account”, but it is the wrong kind: it is an “organizational” account in Microsoft’s jargon, which is a subtly different creature. The Office 2013 purchase is then tied, to some extent, to that account.

Specifically, the normal way to install is to go to http://www.office.com/setup. When you do, you enter the supplied product key, following which the unavoidable next step is to sign in with a Microsoft account.

Another feature of Office Home and Business 2013 (again different from Office 2010) is that there is no way (that I know of) to install it other than via Click-to-Run, which uses application virtualisation. Personally I prefer the non-virtualised install, after experiencing problems with previous versions of Click-to-Run. Maybe these are fixed now, maybe not, but this choice has been removed.

You can also install from a DVD as discussed here, if you download the DVD image from Microsoft. Unfortunately this is still a click-to-run install, and still requires a Microsoft account. You can enter the product key when invited to activate, but the process will not complete without logging in online. If you sign into Office 365 instead, you get an error. I also spotted this message:


It says, “You’re currently signed in with an organizational account. To view or manage any consumer subscriptions you may have purchased, please sign in with your Microsoft account.” This intrigues me, since if you have purchased a perpetual product called “Home and Business” you might imagine that it is neither a consumer product nor a subscription.

There are a couple of problems with the requirement for a Microsoft account. One is that the business does not want the employee to start using features like SkyDrive which are attached to any Microsoft account other than Office 365. Another is that the employee may leave, and the laptop be transferred to somebody new. With the old Office 2010 PKC, which did not require a hook to a Microsoft account, that was a smooth transition. Office is licensed for the laptop, not the individual. The new Office 2013 is still licensed only for one laptop, but also has some sort of relationship to an individual Microsoft account, which will be a nuisance if that person leaves the company.

You can overcome these problems by purchasing a volume license for Office 2013 instead. The ideal product is Office Professional Plus. You can install it without using Click-to-Run and it does not require a Microsoft account to activate. But you guessed it: this costs more than double the cost of Home and Business 2013. The approximate ex-VAT cost in the UK is £150 for Home and Business, versus £375 for Professional Plus.

The dependency on a Microsoft account is not clear on Microsoft’s site. The specifications for Office Home and Business are here. It says:

Certain online functionality requires a Microsoft account.

True; but in this case the product cannot be activated at all without a Microsoft account. It is useless without it.

The workaround is to give in and create a Microsoft account just for the purpose of activating Office. Of course you need an email address for this, though apparently (taking this from the discussion referenced above) you can activate up to 10 Office 2013 installs with one Microsoft account.

Once activated, there is no problem that I am aware of with using the product with Office 365.

It is still messy, since that Office install is forever linked with the Microsoft account you use, even though it is intended for use with Office 365.

Taking a wider perspective, it also seems that there may be purchasers who want to use Microsoft Office in part because (unlike, say, Google Apps) it does not require online sign-in. They may prefer not to have a Microsoft account. With Office 2010 that was easy, but not with this new edition, and I am not seeing this spelt out in the product descriptions. Once you get it home, you will spot this on the packaging:


Considering the complications of using Home and Business 2013 with Office 365, it looks like the best option is to upgrade to the Office 365 subscription type which includes desktop Office, but that is a heavy financial penalty for a business that has already purchased Office for all its laptops.

Google forks WebKit into Blink: what are the implications?

Yesterday Google announced that it is forking WebKit to create Blink, a new rendering engine to be used in its Chrome browser:

Chromium uses a different multi-process architecture than other WebKit-based browsers, and supporting multiple architectures over the years has led to increasing complexity for both the WebKit and Chromium projects. This has slowed down the collective pace of innovation – so today, we are introducing Blink, a new open source rendering engine based on WebKit.

Odd that not long ago we were debating the likelihood and merits of WebKit becoming the de facto standard for HTML. Now Google itself is arguing against such a thing:

… we believe that having multiple rendering engines—similar to having multiple browsers—will spur innovation and over time improve the health of the entire open web ecosystem.

Together with the announcement from Mozilla and Samsung of a new Android browser which, one assumes, may become the default browser on Samsung Android phones, there is now significant diversity/competition/fragmentation in the browser market (if you can call it a market when everything is free).

The stated reason for the split concerns multi-process architecture, with claims that Google was unwilling to assist with integrating Chromium’s multi-process code into WebKit:

Before we wrote a single line of what would become WebKit2 we directly asked Google folks if they would be willing to contribute their multiprocess support back to WebKit, so that we could build on it. They said no.

At that point, our choices were to do a hostile fork of Chromium into the WebKit tree, write our own process model, or live with being single-process forever. (At the time, there wasn’t really an API-stable layer of the Chromium stack that packaged the process support.)

Writing our own seemed like the least bad approach.

Or maybe it was the other way around and Apple wanted to increase its control over WebKit and optimize it for OS X and iOS rather than for multiple platforms (which would be the Apple way).

It matters little. Either way, it is unsurprising that Apple and Google find it difficult to cooperate when Android is the biggest threat to the iPhone and iPad.

The new reality is that WebKit, instead of being a de facto standard for the Web, will now be primarily an Apple rendering engine. Chrome/Chromium will be all Google, making it less attractive for others to adopt.

That said, several third parties have already adopted Chromium, thanks to the attractions of the Chromium Embedded Framework which makes it easy to use the engine in other projects. This includes Opera, which is now a Blink partner, and Adobe, which uses Chromium for its Brackets code editor and associated products in the Adobe Edge family.

The benefit of Blink is that diverse implementations promote the importance of standards. The risk of Blink is that if Google further increases the market share of Chrome, on desktop and mobile, to the point where it dominates, then it is in a strong position to dictate de facto standards according to its own preferences, as suggested by this cynical take on the news.

The browser wars are back.

Make your iPhone a close-up or wide-angle camera with Olloclip

Here’s a gadget I came across at Mobile World Congress earlier this year. The Olloclip is a clip-on supplementary lens for the iPhone or iPod Touch, giving it three new modes: wide-angle, fisheye, and macro.


In the box you get the reversible lens with covers for each end, an adapter clip for the slimmer iPod Touch, and a handy bag.


The lens clips onto the corner of the iPhone, covering the on/off button, which you cannot use while the Olloclip is attached. There are different models for the iPhone 4/4S and iPhone 5, which is a drawback: every time you upgrade to a new iPhone, you will have to buy a new Olloclip, or do without it.


Still, that is a small price to pay for amazing new photographic capabilities, and to some extent you get them. I was particularly impressed by the macro mode. Here is my snap of a coin, taken as close as I could quickly manage with the iPhone 4 alone:


Snap on the Olloclip, and I can capture a world of detail that was previously unavailable.


I also tried the wide-angle and fisheye modes, both of which work as advertised.

The twist here is that the Olloclip gives your iPhone camera features which your purpose-built compact camera may not have. If you want or need to take the kind of shots which the Olloclip enables, it is a great choice, spoilt a little by the inconvenience of clipping and unclipping the lens.


Twilio integrates with Google App Engine for cloud telephony applications

Cloud telephony company Twilio has announced a partnership with Google to integrate its API with App Engine, Google’s platform for cloud applications. Google has a clear explanation of what this enables here. You can have your application respond to incoming SMS texts or voice calls, and send an SMS back, or for voice, play messages, record the call, or ask for further digits to be pressed to route the call appropriately. You can also use the API to initiate calls or send texts.
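The inbound side works by Twilio posting the incoming SMS or call details to a URL on your app, which answers with a TwiML document telling Twilio what to do. TwiML is plain XML, so here is a minimal sketch, using only the Python standard library, of building the reply-to-an-SMS response; the function name is my own, and in a real App Engine handler you would return this string as the HTTP response with a text/xml content type.

```python
# Build the TwiML document Twilio expects in reply to an incoming SMS.
# <Response><Message>...</Message></Response> instructs Twilio to text
# the given body back to the number that messaged you.
import xml.etree.ElementTree as ET

def sms_reply(body_text):
    response = ET.Element("Response")
    message = ET.SubElement(response, "Message")
    message.text = body_text
    # Serialise to a string ready to return as the HTTP response body.
    return ET.tostring(response, encoding="unicode")

print(sms_reply("Thanks, we got your message."))
# -> <Response><Message>Thanks, we got your message.</Message></Response>
```

Voice calls follow the same pattern with different verbs, such as Play, Record and Gather, in place of Message.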

If you look here there are how-to examples (generic to Twilio, not specific to App Engine) for some of the things you can do with Twilio:

  • Automated reminder calls
  • Click to call on your web site
  • Company directory
  • IVR (Interactive Voice Response) for automated support
  • Conferencing
  • Phone polls
  • Voice mail
  • Voice transcription

and more of course. Help desk and other kinds of support are the most obvious applications, but there are no limits: if you want to build voice calls or SMS messaging into your app, Twilio is the obvious solution.

The relationship with Google is not exclusive. Twilio already has integration with Windows Azure, Microsoft’s cloud platform. Google has one-upped Microsoft though: the Azure promotion gets you free credit for 1,000 texts or minutes, while Google App Engine customers get 2,000 free texts or minutes.

You can also use Twilio on any platform that can use a REST API. There is a module for Node.js, and libraries for PHP, Python, Ruby, C#, Java and Apex (used by Salesforce.com).
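Because it is just a REST API, initiating an outbound SMS is a single authenticated POST from any platform. The sketch below, assuming the Messages resource of Twilio's 2010-04-01 API, builds such a request with the Python standard library; the account SID, token and phone numbers are placeholders, and the request is constructed but deliberately not sent, so nothing here touches the network.

```python
# Build (but do not send) an outbound-SMS request to Twilio's REST API.
from urllib.parse import urlencode
from urllib.request import Request
import base64

ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # placeholder
AUTH_TOKEN = "your_auth_token"                       # placeholder

def build_sms_request(from_number, to_number, body):
    url = ("https://api.twilio.com/2010-04-01/Accounts/%s/Messages.json"
           % ACCOUNT_SID)
    # Form-encoded parameters name the sender, recipient and message text.
    data = urlencode({"From": from_number, "To": to_number,
                      "Body": body}).encode("ascii")
    req = Request(url, data=data)  # supplying data makes this a POST
    # Twilio authenticates with HTTP basic auth: account SID and auth token.
    credentials = base64.b64encode(
        ("%s:%s" % (ACCOUNT_SID, AUTH_TOKEN)).encode("ascii")).decode("ascii")
    req.add_header("Authorization", "Basic " + credentials)
    return req

req = build_sms_request("+15005550006", "+15005550007", "Hello from Twilio")
print(req.get_method(), req.full_url)
```

Passing the prepared request to `urllib.request.urlopen` would actually send the message; the official helper libraries listed above wrap exactly this kind of call.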