Category Archives: software

Microsoft results: old business model still humming, future a concern

Microsoft has published its latest financials. Here is my at-a-glance summary:

Quarter ending March 31st 2012 vs quarter ending March 31st 2011, $millions

Segment                        Revenue   Change   Profit   Change
Client (Windows + Live)           4624     +177     2952     +160
Server and Tools                  4572     +386     1738     +285
Online                             707      +40     -479     +297
Business (Office)                 5814     +485     3770     +457
Entertainment and Devices         1616     -319     -229     -439

What is notable? Well, Windows 7 is still driving Enterprise sales, but more striking is the success of Microsoft’s server business. The company reports “double-digit” growth for SQL Server and more than 20% growth in System Center. This seems to be evidence that the company’s private cloud strategy is working; and from what I have seen of the forthcoming Server 8, I expect it to continue to work.

Losing $229m in Entertainment and Devices seems careless, though the beleaguered Windows Phone must be in there too. Windows Phone is not mentioned in the press release.

Overall these are impressive figures for a company widely perceived as being overtaken by Apple, Google and Amazon in the things that matter for the future: mobile, internet and cloud.

At the same time, those “things that matter” are exactly the areas of weakness, which must be a concern.

A quiet revolution in UK government IT: open source ousting big-vendor lock-in

The most striking and surprising presentation at the Monki Gras developer event in London earlier this week was from two quietly spoken men from the UK government’s Cabinet Office. James Stewart and Matt Wall work on the Government Data Service (GDS), and what they are doing is revolutionary.

What is the GDS? “It’s a new branch of the cabinet office which exists to deliver public services, public sector information in-house, rather than the traditional out-sourcing model,” they explained, though it turned out to be rather more than that.

Wall described his experience of talking to government workers about their IT needs.

A common thing you see from very small to very large is someone in government who wants to get something done, who has a business problem or a user need that they want to serve, surrounded by a complex array of integrators, vendors, contractors, suppliers, and all of that, kind-of locked into that, their ability to manoeuvre or deliver services [is limited].

he explained. The only solution is to reform the way software is procured. They described their boss Mike Bracken’s goal:

We want to move from government procuring systems to government commissioning them, whether we build them ourselves, or just that we know what it is we’re asking for. We need that knowledge.

This is also about breaking the hold of the large vendors and finding ways to work on a smaller scale.

If you want to buy something in government, traditionally, some software or some system, the amount of momentum that you have to get up, the amount of people you can easily engage with, they tend to be from companies that are absolutely vast and they tend to take projects that are absolutely vast, the whole mechanism of working is stultifying for everyone involved. It is not just us, a small group of developers sitting in an office able to write some stuff, because that’s not scalable, you can’t do that for everyone. It’s finding small to medium sized companies, partners, out there in the market and finding ways to engage them … why should five very large companies get all the work?

Mike Bracken and the Cabinet Office minister Francis Maude are currently on the West Coast of the USA, they said.

They were invited to meet the usual suspects, Oracle, the major systems integrators. They cancelled it. They’re visiting Joyent, they’re visiting 10Gen, they’re visiting Twilio [applause]. It’s a wholesale change. We’re looking at how great web services are built.

There is also a commitment to open source. “All of the code that we’re producing is open source and out on the Internet,” they said.

What tools do they use?

Most of the core apps are in Ruby, with a mixture of Sinatra and Rails, and some Scala. We’re using a mixture of MySQL and Mongo for the database,

they told us.

The GDS is currently only about 30 people, 10 of whom are developers. How much impact can such a small team have?

We’ve just started and we’re very small. We’re already having a significant impact in some quite large and some quite small projects. The incoming demand that we face across central government and local government is absolutely astronomical, and one of the things that’s important to resolve over the coming years is how to manage that demand and provide services, abilities and communities for people … we never want to parachute into somewhere, rewrite all the systems and then go off somewhere else; that’s not sustainable.

Can this small group really change government IT so profoundly? That is an open question, and perhaps in the long term they will fail. There is no doubting though that this particular team is doing inspiring work. This blog post from GDS yesterday describes how open source participation was used to fix a government web site; it may seem a small thing, but as a new and different approach it is significant.

For more information see Mike Bracken’s post This is why we are here, and take a look at the team’s early work on GOV.UK, which is in beta.

image

How to brew better software: The Monki Gras in London

I attended The Monki Gras in London yesterday, a distinctive developer event arranged by the analyst firm RedMonk.

This was not only a developer event, with the likes of Andre Charland and Dave Johnson from the PhoneGap team at Adobe, Mike Milinkovich the executive director of the Eclipse Foundation, and Jason Hoffman with Bryan Cantrill from cloud services (and Node.js sponsors) Joyent. It was also a serious beer event, complete with a range of craft beers, a beer tasting competition with nine brews to try, and a talk plus a free book from beer expert Melissa Cole. An unusual blend of flavours.

image

In charge of the proceedings was RedMonk co-founder and all round impresario James Governor. I am a big fan of RedMonk and its developer-focused approach; it has been a fresh and heady brew in the dry world of IT analysts.

image

The Monki Gras did seem like an attempt by a regular IT conference sufferer to fix problems often encountered. The Wi-Fi worked, the food was fresh, unusual and delicious, the coffee was superb; though brewing good coffee takes time so the queues were long. Not everything scales. Fortunately this was a small event, and a rare treat for the couple of hundred or so who attended.

That said, there were frustrations. The sessions were short, which in general is a good thing, but left me wanting more depth and more details in some cases; we did not learn much about PhoneGap other than a brief overview, for example.

Nevertheless there was serious content. RedMonk’s Stephen O’Grady made the point succinctly: IT decision makers are ignorant about what developers actually use and what they want to use, which is one reason why there is so much dysfunction in this industry. Part of the answer is to pay more attention, and several sessions covered different aspects of analytics: Matt LeMay from bitly on what users click on the Web; Matt Biddulph (ex BBC, Dopplr, Nokia) gave a mind-stretching talk on social network analysis which, contrary to what some think, was not invented by Facebook but predates the Internet; and O’Grady shared some insights from developer analytics at RedMonk.

I had not noticed before that github now gets nearly double the number of commits that Google Code does. That is partly because developers like git, but may also say something about Google’s loss of kudos in the open source developer community.

Kohsuke Kawaguchi, lead for the Jenkins Continuous Integration project and an architect at CloudBees, spoke on building a developer community. His context was how Jenkins attracted developers, but his main point has almost limitless application: “Make everything easy, relentlessly.”

Something I see frequently is how big companies (the bigger the worse) place obstacles in front of developers or users who have an interest in their products or services. Examples are enforced registration, multiple clicks through several complex pages to get to the download you want, complex installs, and confusing information. It all adds friction. If the target is sufficiently compelling, like apps on Apple’s app store, developers will get there anyway; but if you are not Apple, that friction can be fatal.

The Joyent guys did not speak about Node.js, sadly, but rather on the distinction between a VP of engineering and a Chief Technology Officer. Sounds dry and abstruse? I thought so too, but the delivery was so energetic that they were soon forgiven. Hoffman and Cantrill moved on to talk about management antipatterns in the software industry, prompting many wry nods of recognition from the audience. “It is very hard for middle management to add value,” said Cantrill.

Milinkovich made the point that the most valued open source projects generally make their way to a software foundation; PhoneGap to Apache is a recent example. He then gave the talk he really wanted to give, noting that as new software stacks emerge they have a tendency to re-implement CORBA, a middleware specification from the Nineties that tackled problems including remote objects, language independence, and transactions across the Internet. CORBA is remembered for drowning in complexity, but Milinkovich’s point is that the creators of exciting new stacks like Node.js should at least research and learn from past experience.

Milinkovich also found time to proclaim that “Flash is dead, Silverlight is dead, browser plugins are dead.” Perhaps premature; but I did not hear many dissenting voices.

I tweeted the conference extensively yesterday (losing at least one follower but gaining several more). Look out also for a couple of follow-up posts on topics of particular importance.

Storage Spaces coming to Windows 8 client as well as server

Steven Sinofsky has posted on the Building Windows 8 blog, making it clear that Storage Spaces is coming to the Windows 8 client as well as to Windows Server 8.

I took a hands-on look at Storage Spaces back in October.

The feature lets you add and remove physical drives from a pool of storage, and create virtual disks in that pool with RAID-like resiliency if more than one physical drive is available. There is also “thin provisioning”, which lets you create a virtual disk bigger than the available physical space. It sounds daft at first, but makes sense if you think of it as a resource to which you add media as needed rather than paying for it all up-front.
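
To make the thin provisioning idea concrete, here is a toy model in Python. This is purely conceptual and nothing like how Windows actually implements it; the class and numbers are invented for illustration:

```python
class ThinDisk:
    """Toy model of a thin-provisioned virtual disk: the advertised
    (virtual) size can exceed physical capacity, and physical space
    is only consumed as data is actually written."""

    def __init__(self, virtual_gb, physical_gb):
        self.virtual_gb = virtual_gb    # size the disk claims to be
        self.physical_gb = physical_gb  # media actually in the pool
        self.used_gb = 0

    def write(self, gb):
        if self.used_gb + gb > self.physical_gb:
            # In Storage Spaces, this is the point at which you would
            # be prompted to add another physical drive to the pool
            raise IOError("pool exhausted: add physical media")
        self.used_gb += gb

# A 10 TB virtual disk backed by only 2 TB of physical storage:
disk = ThinDisk(virtual_gb=10240, physical_gb=2048)
disk.write(500)
print(disk.used_gb, "GB used of", disk.virtual_gb, "GB advertised")
```

Writes succeed until the physical pool is exhausted, at which point the fix is to add media rather than to have paid for the full capacity up-front.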

The server version includes data deduplication so that similar or identical files occupy less physical space. Another feature which is long overdue is the ability to allocate space to a virtual folder rather than to a drive letter.
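
The principle behind deduplication is simple enough to sketch: split data into chunks, hash each chunk, and store any given chunk only once. A toy Python illustration (fixed 4 KB chunks; Windows’ real implementation is far more sophisticated, so treat this as concept only):

```python
import hashlib

def dedup_store(files):
    """Toy block-level deduplication: identical 4 KB chunks are
    stored once and shared between files."""
    store = {}  # chunk hash -> chunk data
    logical = physical = 0
    for data in files:
        for i in range(0, len(data), 4096):
            chunk = data[i:i + 4096]
            logical += len(chunk)
            key = hashlib.sha256(chunk).hexdigest()
            if key not in store:
                store[key] = chunk
                physical += len(chunk)  # only new chunks cost space
    return logical, physical

# Two files sharing most of their content deduplicate well:
a = b"x" * 8192 + b"unique-to-a"
b_ = b"x" * 8192 + b"unique-to-b"
logical, physical = dedup_store([a, b_])
print(f"logical {logical} bytes, physical {physical} bytes")
```

The shared 8 KB of identical content is stored once, so the physical footprint is roughly a quarter of the logical size in this contrived example.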

I do not know if all these features will come to the Windows 8 client version, but as data deduplication is not mentioned in Sinofsky’s post, and the dialog he shows does not include a folder option, it may well be that these are server-only. This is the new Windows 8 dialog:

image

Storage Spaces occupies a middle ground: enterprises will typically have more grown-up storage systems such as a Fibre Channel or iSCSI SAN (Storage Area Network), while at the other end of the scale, individual business users do not want to bother with multiple drives at all. Nevertheless, for individuals with projects like storing large amounts of video, or small businesses looking for good value but reliable storage based on cheap SATA drives, Storage Spaces looks like a great feature.

Most computer professionals will recall seeing users struggling with space issues on their laptop, not realising that the vendor (Toshiba was one example) had partitioned the drive and that they had a capacious D drive that was completely empty. It really is time that Microsoft figured out how to make storage management seamless and transparent for the user, and this seems to me a big step in that direction.

NVIDIA CEO Jen-Hsun Huang beats the drum for GPU computing

In his keynote at the GPU Technology Conference here in Beijing, NVIDIA CEO Jen-Hsun Huang presented the simple logic of GPU computing. The main constraint on computing is power consumption, he said:

Power is now the limiter of every computing platform, from cellphones to PCs and even datacenters.

CPUs are optimized for single-threaded computing and are relatively inefficient. According to Huang a CPU spends 50 times as much power scheduling instructions as it does executing them. A GPU by contrast is formed of many simple processors and is optimized for parallel processing, making it more efficient when measured in FLOP/s (Floating Point Operations per Second), a way of benchmarking computer performance. Therefore it is inevitable that computers make use of GPU computing in order to achieve best performance. Note that this does not mean dispensing with the CPU, but rather handing off processing to the GPU when appropriate.

This point is now accepted in the world of supercomputers. The computer at the Chinese National Supercomputing Center in Tianjin has 14,336 Intel CPUs, 7,168 NVIDIA Tesla GPUs, and 2,048 custom-designed 8-core CPUs called Galaxy FT-1000, and can achieve 4.7 Petaflop/s for a power consumption of 4.04 megawatts (MW), as presented this morning by the center’s Vice Director Xiaoquian Zhu. It is currently the second fastest supercomputer in the world.

Huang says that without GPUs the world would wait until 2035 for the first Exascale (1 Exaflop/s) supercomputer, presuming a power constraint of 20 MW and current rates of year-on-year performance improvement, whereas by combining CPUs with GPUs it can be achieved in 2019.
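
The arithmetic behind these efficiency claims is easy to check. A quick sketch using the figures quoted above (the calculation and rounding are mine):

```python
# Efficiency of the Tianjin system, from the figures quoted above:
tianjin_flops = 4.7e15   # 4.7 Petaflop/s
tianjin_power = 4.04e6   # 4.04 megawatts

gflops_per_watt = tianjin_flops / tianjin_power / 1e9
print(f"Tianjin system: {gflops_per_watt:.2f} GFLOP/s per watt")

# An Exascale machine within Huang's 20 MW budget would need:
exa_flops = 1e18
exa_power = 20e6
required = exa_flops / exa_power / 1e9
print(f"Exascale at 20 MW: {required:.0f} GFLOP/s per watt")

# So efficiency must improve by roughly this factor:
print(f"Improvement needed: ~{required / gflops_per_watt:.0f}x")
```

In other words, hitting Exascale within a 20 MW envelope demands roughly a forty-fold improvement in performance per watt over today’s second-fastest machine, which is the gap Huang argues only GPU computing can close in the 2019 timeframe.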

Supercomputing is only half of the GPU computing story. More interesting for most users is the way this technology trickles down to the kind of computers we actually use. For example, today Lenovo announced several workstations which use NVIDIA’s Maximus technology to combine a GPU designed primarily for driving a display (Quadro) with a GPU designed primarily for GPU computing (Tesla). These workstations are aimed at design professionals, for whom the ability to render detailed designs quickly is important. The image below shows a Lenovo S20 on display here. Maybe these are not quite everyday computers, but they are still PCs. Approximate price to follow soon when I have had a chance to ask Lenovo. Update: prices start at around $4500 for an S20, with most of the cost being for the Tesla board.

image

Exchange 2010 Service Pack 2 with Office 365 migration wizard and retro Outlook Mini

Microsoft has released Exchange 2010 SP2, which I have successfully installed on my small system.

image

There is a description of what’s new here. The most notable features are the Hybrid Configuration Wizard for setting up co-existence between on-premise Exchange and Office 365, and Outlook Mini for low-end phones with basic browsers.

A hybrid setup lets you include on-premise Exchange and Office 365 Exchange in a single organisation. You can move mailboxes back and forth, archive messages online (even from on-premise mailboxes), and synchronize Active Directory information. The feature is not new, but the wizard is.

image

This looks similar to the Exchange migration tools for BPOS and Office 365, so this is mainly a matter of baking them into the product.

Outlook Mini is very retro; I like it. It is also called Outlook Mobile Access and is similar to a feature of Exchange 2003, though it is new code; it is actually built using Outlook Web Access forms and accessed at the URL yourexchange/owa/oma. There is no automatic redirection, so users will have to be shown where to find it.

image image


Finally, this note amused me as evidence of how far litigation issues have permeated into Microsoft’s products. But what is the point of a “litigation hold” if it is so easily bypassed?

In Exchange 2010 SP2, you can’t disable or remove a mailbox that has been placed on litigation hold. To bypass this restriction, you must either remove litigation hold from the mailbox, or use the new IgnoreLegalHold switch parameter when removing or disabling the mailbox.

Twilio: programmable telephony, SMS comes to the UK, Europe

Web telephony provider twilio, which is based in San Francisco, has today announced its first international office, in London. You can now purchase UK telephone numbers at a cost of $1.00 per month, or Freephone numbers for $2.00 per month.

Twilio is not in competition with Skype or Google Voice; rather it offers an API so that you can incorporate voice calls and SMS messaging into web or mobile applications. The REST API lets you provision numbers with various options for what happens to incoming calls (conferencing, forwarding to another number or voice over IP, recording, transcriptions), as well as notifications so that you can get email or SMS alerts.
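
Incoming call behaviour is defined by having your web application return a small XML document, called TwiML, from the URL attached to the number. A minimal sketch in Python: the Say and Record verbs are real TwiML, but check twilio’s documentation for the full attribute list before relying on the details shown here:

```python
import xml.etree.ElementTree as ET

def voicemail_twiml(greeting):
    """Build a TwiML response: speak a greeting to the caller, then
    record a message and ask for a transcription."""
    response = ET.Element("Response")
    say = ET.SubElement(response, "Say")
    say.text = greeting
    ET.SubElement(response, "Record", transcribe="true", maxLength="120")
    return ET.tostring(response, encoding="unicode")

print(voicemail_twiml("Please leave a message after the tone."))
```

Your app serves this document over HTTP; twilio fetches it when a call comes in and executes the verbs in order, which is what makes the voicemail-and-transcription scenario a few lines of web code rather than a telephony project.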

CEO and co-founder Jeff Lawson came from Amazon Web Services (AWS), and has a similar business model in that twilio targets developers and offers infrastructure as a service, rather than selling complete applications to its customers. Twilio does not own any datacenters, but uses mainly AWS and some RackSpace virtual servers to provide a resilient and scalable service.

The launch partner for the UK is Zendesk, a cloud-based helpdesk provider, which is using twilio to add voice to what was previously an email-based product. Zendesk forms an excellent case study. Using the service, you can provision a support number and have calls redirected to agents, or have a voicemail recorded, using a simple setup procedure. Calls can be recorded and you can have alerts sent when they are received.

What this means is that even the smallest businesses can offer helpdesk support using a pay-as-you-go model.

Lawson observes that twilio’s voice and SMS APIs are respectively the 6th and 13th most popular on ProgrammableWeb (5th, he says, if you combine voice and SMS) and claims very rapid growth in traffic using the API, though he will not talk about revenue. The company has around 60 employees in San Francisco and initially just one in the UK.

The service is also launching in beta in five other European countries: Poland, France, Portugal, Austria and Denmark. Eleven more countries will be added by the end of 2011, though there are prominent omissions – no Germany or Spain, for example.

I was impressed by the demo and presentation at the press launch. Lawson provisioned a conferencing number and had us dial in during the briefing. He says twilio is engaged in disrupting on-premise telephony applications with a cloud service, in the same way Salesforce.com has done for CRM (Customer Relationship Management). The service is inexpensive to set up; Lawson said that this commodity pay-as-you-go pricing is essential for disruptive technology to succeed, another strategy borrowed from AWS.

There are server libraries for web platforms including Ruby, PHP, Java and C#, and client SDKs for JavaScript, Android and iOS.

Delphi XE2 FireMonkey for Windows, Mac, iOS: great idea, but is it usable?

I am sure all readers of this blog will know by now that Delphi XE2 (and RAD Studio XE2) has been released, and that to the astonishment of Delphi-watchers it supports not only 64-bit compilation on Windows, but also cross-platform apps for Windows, Mac OS X and even iOS for iPhone and iPad (with Android promised).

I tried this early on and was broadly impressed – my app worked and ran on all three platforms.

image

However it is an exceedingly simple app, pretty much Hello World, and there are some worrying aspects to this Delphi release. FireMonkey is based on technology from KSDev, which was acquired by Embarcadero in January this year. To go from acquisition to full Delphi integration and release in a few months is extraordinary, and makes you wonder what corners were cut.

It seems that corners were cut: you only have to read this post by developer and Delphi enthusiast Chris Rolliston:

To put it bluntly, FireMonkey in its current state isn’t good enough even for writing a Notepad clone (I know, because I’ve been trying). You can check out Herbert Sauro’s blog for various details (here, also a follow up post here). For my part, here’s a highish-level list of missing features and dubious coding practices, written from the POV of FireMonkey being a VCL substitute on the Mac (since on OS X, that is what it is).

Fortunately I did not write a Notepad clone but a Calculator clone, which explains why I did not run into as many problems.

Update: See also A look at the 3D side of FireMonkey by Eric Grange:

…if you want to achieve anything beyond a few poorly texture objects, you’ll need to design and write a lot of custom code rather than rely on the framework… with obvious implications of obsolescence and compatibility issues whenever FMX finally gets the features in standard.

There has already been an update for Delphi XE2 which is said to fix over 120 bugs as well as an open source licensing issue. I also noticed better performance for my simple iOS calculator after the update.

Still, FireMonkey early adopters face some significant issues if they are trying to make VCL-like applications, which I am guessing is a common scenario. There is a mismatch here, in that FireMonkey is based on VGScene and DXScene from KSDev, and the focus of those libraries was rich 2D and 3D graphics. Some Delphi developers undoubtedly develop rich graphical applications, but a great many do not, and I would judge that if Embarcadero had been able to deliver something more like a cross-platform VCL that just worked, the average Delphi developer would have been happier.

The company must be aware of this, and one reading of the journey from VGScene/DXScene to FireMonkey is that Embarcadero has been madly stuffing bits of VCL into the framework. Eventually, once the bugs are shaken out and missing features implemented, we may have something close to the ideal.

In the meantime, you can make a good case for Adobe Flash and Flex if what you really want is cross-platform 2D and 3D graphics; while VCL-style developers may be best off using the current FireMonkey more for trying out ideas and learning the new Framework than for real work, pending further improvements.

On the positive side, even though FireMonkey is a bit rough, Embarcadero has delivered a development environment for Windows and Mac that works. You can work in the familiar Delphi IDE and code around any problems. The Delphi community is not short of able developers who will share their workarounds.

I have some other questions about Delphi. Why are there so many editions, and who uses the middleware framework DataSnap, or other enterprisey features like UML modeling?

There appear to be five editions of Delphi XE2: Starter, Professional, Enterprise, Ultimate and Architect, where Architect has features missing in Ultimate – should the Ultimate be called the Penultimate? It breaks down like this:

  • Starter: low cost, restrictive license that is mainly non-commercial (you are allowed revenue up to $1000 per year). No 64-bit, no Mac or iOS. $199.00
  • Professional: The basic Delphi product. Missing a few features like UML diagramming, no DataSnap. Limited IntraWeb. $899.00.
  • Enterprise: For more than double the price, you get DataSnap and dbExpress server drivers. $1,999.00
  • Ultimate: Adds a developer edition of Embarcadero’s DBPowerStudio. $2,999.00
  • Architect: Adds more UML modeling, and a developer edition of Embarcadero’s ER/Studio database modeling tool. $3,499.00

The RAD Studio range is similar, but adds C++ Builder, PHP and .NET development. No Starter version. Prices from $1,399.00 for Professional to $4,299.00 for Architect. The non-Ultimate Ultimate is $3,799.00.

All prices discounted by around 40% for upgraders.

The problem for Embarcadero is that Delphi is such a great and flexible tool that you can easily use it for database or multi-tier applications with just the Professional edition. See here, for example, for REST client and server suggestions. Third parties like devart do a good job of providing alternative data access components and dbExpress drivers. I would be interested to know, therefore, what proportion of Delphi developers buy into the official middleware options.

As an aside, I wondered about DataSnap licensing. I looked at the DataSnap page, which says for licensing information look here – and “here” is a MIDAS article from 2000; yes Embarcadero, that is 11 years ago. It proves, if nothing else, what a ramshackle web site has evolved over the years.

Personally I would prefer to see Embarcadero focus on the Professional edition and improve humdrum things like FireMonkey documentation and bugs, and go easy on enterprise middleware which is a market that is well served elsewhere.

I have seen huge interest in Delphi as a productive, flexible, high-performance tool for Windows, Mac and mobile, but the momentum is endangered by quality issues.

Subversion 1.7 released: just one .svn directory per working copy

Yesterday saw the 1.7 release of Subversion, the widely used open source version control system. It is a significant release with many new features, bug-fixes and performance improvements, and I suggest reading the release notes or the complete change log. One thing to highlight is that the default working copy metadata storage is now a single SQLite database per working copy, rather than a .svn directory containing metadata in every sub-directory.

I upgraded my TortoiseSVN, which is already updated to 1.7, and tried upgrading one of my own projects. Here is the .svn folder before the upgrade:

image

and after

image

Those pesky .svn folders can be a nuisance, so this is a welcome change, although there is a downside, as the release notes warn:

It is not safe to copy an SQLite file while it’s being accessed via the SQLite libraries. Consequently, duplicating a working copy (using tar, cp, or rsync) that is being accessed by a Subversion process is not supported for Subversion 1.7 working copies, and may cause the duplicate (new) working copy to be created corrupted.
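
Subversion aside, SQLite itself does provide a safe way to duplicate a live database: the online backup API, which Python exposes (from 3.7) as Connection.backup. A sketch of copying a database file safely; the file and table names here are made up for the demonstration, and this is not a Subversion tool:

```python
import os
import sqlite3

def safe_copy(src_path, dest_path):
    """Copy an SQLite database using the online backup API, which
    takes the necessary locks -- unlike cp/tar/rsync on the raw file."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    src.backup(dest)
    src.close()
    dest.close()

# Demonstration with a throwaway database standing in for wc.db:
for f in ("wc_demo.db", "wc_demo_copy.db"):
    if os.path.exists(f):
        os.remove(f)

src = sqlite3.connect("wc_demo.db")
src.execute("CREATE TABLE nodes (path TEXT)")
src.execute("INSERT INTO nodes VALUES ('trunk/readme.txt')")
src.commit()
src.close()

safe_copy("wc_demo.db", "wc_demo_copy.db")
copy = sqlite3.connect("wc_demo_copy.db")
print(copy.execute("SELECT path FROM nodes").fetchone()[0])
```

For a 1.7 working copy the practical advice remains simpler: make sure no Subversion process is touching the working copy before duplicating it with ordinary file tools.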

Subversion is less fashionable since the advent of distributed version control systems like git and mercurial; though for corporate development Subversion remains popular because a centralised system is easier to control.

WANdisco’s Jessica Thornsby has a helpful post on the new 1.7 features, with more details on the benefits of the new working copy metadata management system.

Sneak Peeks at Adobe MAX 2011 … and that annoying updater

The Sneaks session at Adobe MAX is always fun as well as giving some insight into what is coming from the company, though note that these are research projects and there is no guarantee that any will make it into products.

This time we also got commentary from Rainn Wilson, an actor in the US version of The Office. His best moment came during the MAX Awards just before the sneaks, when he put a little ad lib into one of the award intros:

Customers demand … that the little Adobe Acrobat update pop-up window just go away for a while, go the way of the Microsoft paper clip Clippy, the customer is demanding right now. I’m tired of clicking No No No No No.

I only read a PDF occasionally, he said.

We all know the reasons for that updater (and the one for Flash), but he is right: it is a frequent annoyance. What is the fix? There would be some improvement if Adobe were to make a deal with Microsoft and Apple to include Flash and Adobe Reader servicing in system update mechanisms like Windows Update, but beyond that it takes a different model of computing, where the operating system is better protected. It is another reason why users like Apple iOS and why Microsoft is building a locked-down Windows client for ARM.

Now, on to the sneaks.

1. Local Layer Ordering

image

We are used to the idea of layer ordering, but what about a tool that lets you interleave layers, with a pointer to put this part on top, this part underneath? You can do this with pieces of paper, but less easily with graphics software, at least until Local Layer Ordering makes it into an Adobe product.

2. Project rub-a-dub

image

The use case: you have a video with some speech, but want to re-record the speech to fix some problem. In this case it is hard to do it perfectly so that the lip synch is right. Project rub-a-dub automatically modifies the newly recorded speech to align it correctly.

3. Liquid Layout

image

This one is for the InDesign publishing software: it is about intelligent layout modification to deliver the same content on different screen sizes and orientation. I was reminded of the way Times Reader works, creating different numbers of columns on the fly, but this is InDesign.

4. Synchronizing crowd-sourced multi-camera video

image

This one struck me as a kind of video version of PhotoSynth, where multiple views of the same scene are combined to make a composite. This is for video and is a bit different, in that it does not attempt to make a single video image, but does synchronize playback of multiple videos with a merged soundtrack. We saw a concert example, but it could be fascinating if applied to a moment of revolution, say, if many individuals captured the event on their mobiles.

5. Smart debugging – how did my code get here?

image

This is a debugging tool based on a recorded trace, letting you step backwards as well as forwards through code. We have seen similar tools before, such as in Visual Studio 2010. Another facet of this one though is an English-like analysis of “how did my code get here”, which you can see if you squint at my blurry snap above.

6. Near-field communications for AIR

image

This demo showed near-field communications for Adobe AIR for mobile. We are most familiar with this for applications like payments, where you wave your mobile at a sensor, but it has plenty of potential for other scenarios, such as looking up product details without having to scan a barcode.

7. Pixel Nuggets: find commonality in your digital photos

The idea of this one is to identify “like” images by searching and analysing a collection. For example, you could perhaps point it at a folder with thousands of images and find all the ones which show flowers.

8. Monocle: telemetry data for Flex applications

image

In this demo, Deepa Subramaniam showed what I guess is a kind of profiler, showing a visualization of where your code is spending its time.

9. Video Mesh – amazing video editing

image

My snap does not capture this well, but it was amazing to watch. As I understand it, this is software that analyses a video to get an intelligent understanding of its objects and perspective. In the example, we saw how a person walking across the front of the screen image could be made to walk more towards the rear, or behind a pillar, with correct size and perspective.

10. GPU Parallelism in Flash

image

This demo used a native extension to perform intensive calculations using GPU parallelism. We saw how an explosion of particles was rendered much more quickly, which of course I cannot capture in a static image, so I am showing Adam Welc’s lighthearted intro slide instead. I am a fan of general purpose computing on the GPU and would love to see this in Flash.

11. Re-focus an image

image

This is a feature that I’d guess will almost certainly show up in Photoshop or perhaps in a future tablet app: take an out-of-focus image and make it an in-focus one. The demo we saw was an image suffering from camera shake. The analysis worked out the movement path of the camera, which you can see in the small wiggly line in the right panel above, and used it to move parts of the image back so they are properly superimposed. I would guess this really only works for images out of focus because of camera shake; it will not fix incorrect lens settings. I have also seen a similar feature built into the firmware of a camera, though I am sure Photoshop can do a much better job, if only because of the greater processing power available.

This was a big hit with the MAX crowd though. Perhaps most of us were thinking of photos we have taken that could do with this kind of processing.