Hands on with Office 365 – great service, some hassles

I have been trying Microsoft’s Office 365 which has recently gone into public beta, and is expected to go live later this year.

This cloud service provides Exchange 2010, SharePoint 2010 with Office Web Apps, and Lync Server – a complete collaboration service for organisations that prefer not to run these servers themselves, which is understandable given their cost and complexity.

Trying the beta is a little complex when you already have a working email and collaboration infrastructure. I chose to use a virtual machine running Windows 7 Professional. I also pre-installed Office 2010 Professional in an attempt to get the best experience.

Initial sign-up is easy and I was soon online looking at the admin screen. I could also sign into Outlook Web Access and view my SharePoint site.


Hassles started when I clicked to set up desktop applications. This is done by a helper application which configures and updates Outlook, SharePoint and Lync on your desktop PC. At this point I had not configured my own domain; I was simply username@username.onmicrosoft.com.


The wizard successfully configured SharePoint and Lync, but not Outlook.


There was a “Learn more” link; but I was in a maze of twisty passages, all alike, none of which seemed to lead to the information I needed.

Part of the problem – and I have noticed this with BPOS as well – is that the style of the online help is masterful at telling you things you know already, while neglecting to tell you what you need to know. It also has a patronising style that I find infuriating, and a habit of showing you marketing videos at every opportunity.

I did eventually find instructions for configuring Outlook manually for Office 365, but they did not work. I also noticed discrepancies in the instructions. For example, this document says that the Exchange server is ch1prd0201.mailbox.outlook.com and that the proxy server for Outlook over HTTP is pod51004.outlook.com. That did not match the server given in my online account for IMAP, POP3 and SMTP use, which was a different podnnnnn.outlook.com. I tried all sorts of combinations and none worked.

Eventually I found this comment in another help document:

Currently, the only supported scenario for configuring Outlook to work with Office 365 is a fully migrated environment.

I am not sure if this is true, but it did seem to explain my problems. Of course it would be easy for Microsoft to surface this information in a more obvious place, such as building it into the setup wizard. Anyway, I decided to go for the full Office 365 experience and to set up a domain.

Fortunately I have a domain which I obtained for a bright idea that I have yet to find time for. I added it to Office 365. This is a process which involves first adding a CNAME record to the DNS in order to prove ownership, and then making Office 365 the authoritative nameserver for the domain. I was not impressed by the process, because when Microsoft took over the nameserver role it threw away existing settings. This means that if you have a web site or blog at that domain, for example, it will disappear from the internet after the transfer. Once transferred, you can reinstate custom records.
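The ownership-verification step works like most hosted-service domain proofs: you publish a DNS record whose name and target the service dictates, and it checks for the record before accepting the domain. As a sketch of what such a record looks like in zone-file form – the record name and target here are invented for illustration, and Office 365 supplies the actual values during setup:

```
; Hypothetical ownership-verification record (illustrative values only).
; The service generates a unique record name and tells you what to publish.
ms12345678.example.com.  3600  IN  CNAME  verify.example-service.com.
```

Once the service can resolve the record, it accepts that you control the domain; the nameserver transfer is a separate, later step.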

Still, I had chosen an unused domain so that I did not care about this. I set up a new user with an email address at the new domain, and I amended the default SharePoint web site address to use the domain as well.


That all worked fine; but what about Outlook? The bad news was that the setup wizard still failed to configure Outlook, and I still did not know the correct server settings.

I could have contacted support; but I had one last try. I went into the mail applet in control panel and deleted the Outlook profile, so Outlook had no profile at all. Then I ran Outlook, went through the setup wizard, and it all worked, using autodiscover. Out of interest, I then checked the server settings that the wizard had found, which were indeed different in every case from those in the various help documents I had seen.

A few hassles then, and I am not happy with the way this stuff is documented, but nevertheless it all looks good once set up. The latest Exchange and SharePoint make a capable collaboration platform, the storage limits are generous – up to 25GB per Exchange mailbox – and I think it makes a lot of sense. I expect Microsoft’s online services to win huge amounts of business that is currently going to Small Business Server, and some business from larger organisations too. Migration from existing Microsoft-platform servers should be smooth.

The biggest disappointment so far is that in Lync online the Enterprise Voice feature is disabled. This means no general-purpose voice over IP, though you can call PC to PC. To get this you have to install Lync on-premise:

Organizations that want to leverage the full benefits of Microsoft Unified Communications can purchase and deploy Microsoft Lync Server 2010 on their premises as part of Microsoft Office 365. Lync Server 2010 on-premises delivers full enterprise voice and premises-based, dial-in audio conferencing, enabling customers to reduce costs and increase productivity by replacing or enhancing traditional PBX systems.

though it is confusing since Enterprise Voice is listed as a feature of the high-end E4 edition; I believe this implies an on-premise server alongside Office 365 in the cloud.

Perhaps the biggest question is the unknown: will Office 365 live up to its promised 99.9% scheduled uptime SLA, and how will its reliability compare to that of Google Apps?

Office 365 is priced at $10 per user per month for the basic service (E1), $16 to add Office Web Apps (E2), $24 to add licenses for Office Professional, Exchange archiving and voicemail (E3), and $27 to add Enterprise Voice (E4). The version in beta is E3.

The rise of the eBook is a profound change in our culture

The Association of American Publishers has announced that in February 2011 ebooks ranked above print in all trade categories. Note that these figures are for the USA, and that in revenue ebooks are well behind print – $164.1M vs $441.7M. It is also worth noting that print sales are falling fast, down 24.8% year on year, whereas ebooks are growing fast, up 202.3% year on year.


This does sound like a reprise of what has happened in the music industry, where broadly speaking physical formats are heading toward obsolescence, downloads are growing, but the overall pie is smaller because of the ease of piracy. There is perhaps another more subtle point: when the marginal cost of production is near zero, prices too tend to race to the bottom in a competitive market.

Books are not equivalent to music. Physical books still have advantages. They have zero battery requirements, work well in sunlight, some have beautiful pictures, you can write on them and fold back the corner of a page, and so on. There are more advantages to ebooks though, in cost, weight, searchability, interactivity, and freedom from the constraints of a printed page. Years ago I was in the book publishing industry, and convinced that ebooks would take off much sooner than in fact they did. Much money was wasted on false dawns. I remember – though it was long after I was involved – how some booksellers invested in Microsoft’s .lit format, readable on PCs and Pocket PCs, only to discover that there was little market for it.

What changed? It was no single thing; but factors include the advent of high-contrast screens that are both low-power and readable outside; the appearance of dedicated tablet-style readers that are lightweight but with book-sized screens; the marketing muscle of Amazon with the Kindle and Apple with the iPad – though the iPad screen is sub-optimal for reading – and some mysterious change in public perception that caused ebooks to transition from niche to mainstream.

Books are not going away of course, just as CDs and even vinyl records are still with us. I think though we can expect more high street closures, and libraries wondering what exactly their role is meant to be, and that the publishing industry is going to struggle with this transition just as the music industry has done. Ebook growth will continue, and as Amazon battles its rivals we will see the price of the Kindle fall further. Apple will lock its community more tightly to iTunes, as its policy on forbidding in-app purchases that do not go through its own App Store and pay the Apple tax plays out.

That is all incidental. What I am struggling to put into words is what the decline of the printed word means for our culture. You can argue that it is merely a symptom of what the internet has brought us, which is true in its way; but it is a particularly tangible symptom. No longer will you be able to go into someone’s room and see clues about their interests and abilities by glancing at bookshelves.

I am on a train, and by one of life’s strange synergies someone has just sat down next to me and pulled out a Kindle.

I do not mean to be negative. Much though I love books, there are now better ways to store and read words, and while the printed word may be in decline, the written word has never been more popular. I am in no doubt though that this is a profound change.

Native apps better than web apps? That’s silly talk says PhoneGap president

When I attended Mobile World Congress in February one of my goals was to explore the merits of the various different approaches to writing cross-platform mobile apps. One of the key ones is PhoneGap, and I got in touch with Nitobi’s president and co-founder André Charland. As it turned out he was not at that particular event, but he kept in touch and I spoke to him last week.

PhoneGap works by using the installed HTML and JavaScript engine on the device as a runtime for apps. That is not as limiting as it may sound, since today’s devices have high performance JavaScript engines, and PhoneGap apps can be extended with native plug-ins if necessary. But aren’t there inconsistencies between all these different browser engines?

Sure, it’s kinda like doing web development today. Just a lot better because it’s just different flavours of WebKit, not WebKit, Gecko, whatever is in IE, and all sorts of other differentiation. So that’s definitely how it is, but that is being overcome rather quickly I’d say with modern mobile JavaScript libraries. There’s JQuery Mobile, there’s Sencha Touch, there’s DoJo Mobile just released, SproutCore, which is backed by Strobe, which is kinda the core of Apple’s MobileMe.

There’s tons of these things, Zepto.js which is from the scriptaculous guy, Jo which is a framework out of a Palm engineer, the list of JavaScript frameworks coming out is getting longer and longer and they’re getting refined and used quite a bit, and those really deal with these platform nuances.

At the same time, phone manufacturers, or iOS, Android, WebOS, and now RIM, they’re competing to have the best WebKit. That means you’re getting more HTML5 features implemented quicker, you’re getting better JavaScript performance, and PhoneGap developers get to take advantage of that.

says Charland. He goes further when I put to him the argument made by native code advocates – Apple CEO Steve Jobs among them – that PhoneGap apps can never achieve the level of integration, the level of performance that they get with native code. Will the gap narrow?

I think it will go away, and people will look back on what they’re saying today and think, that was a silly thing to say.

Today there are definitely performance benefits you can get with native code, and our answer to that is simply that PhoneGap is a bundle made of core libraries, so at any point in your application that you don’t want to use HTML and JavaScript you can write a native plugin, it’s a very flexible, extensible architecture … So you can do it. We don’t necessarily say that’s the best way to go. Really if you’re into good software development practices the web stack will get you 90%, 95% of the way there, so that apps are indistinguishable from native apps.

Some of the native features we see in iOS apps, they’re reminiscent of Flash home pages of ten years ago, sure you can’t do it in HTML and JavaScript but it doesn’t add any value to the end user, and it detracts from the actual purpose of the application.

The other thing is, a lot of these HTML and JavaScript things, are one step away from being as good in a web stack as they are in native. When hardware acceleration gets into WebKit and the browser, then performance is really just as good.
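The plug-in architecture Charland describes can be pictured as a small dispatch layer: application JavaScript asks for a named capability, gets a native implementation where one has been registered, and otherwise stays inside the web stack. A minimal sketch of the idea – this is not PhoneGap’s actual API, and the plugin names are invented:

```javascript
// Toy model of a hybrid-app plugin bridge (illustrative only --
// real PhoneGap exposes native features through objects it injects,
// such as navigator.*, rather than a registry like this).
const nativePlugins = {};

function registerPlugin(name, impl) {
  nativePlugins[name] = impl;
}

function invoke(name, args, webFallback) {
  // Prefer a registered native implementation; otherwise use the
  // pure HTML/JavaScript fallback, staying within the web stack.
  const impl = nativePlugins[name] || webFallback;
  return impl(args);
}

// Hypothetical native barcode scanner registered by the host app:
registerPlugin("scanBarcode", () => "native-scan-result");

console.log(invoke("scanBarcode", {}));               // uses the native plugin
console.log(invoke("vibrate", {}, () => "web-shim")); // falls back to web code
```

The point of the design is that the 90–95% of an app that is ordinary UI and logic never touches the bridge at all.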

Charland is also enthusiastic about Adobe’s recent announcement, that PhoneGap is integrated into Dreamweaver 5.5:

Two things are exciting from our perspective. It gives us massive reach. Dreamweaver is a widely used product that ties in very nicely to the other parts of the creative suite toolchain, so you can get from a high-level graphic concept to code a lot quicker. Having PhoneGap and JQuery Mobile in there together is nice, JQuery Mobile is definitely one of the more popular frameworks that we see our community latching on to.

The other thing is that Dreamweaver targets a broader level of developer, it’s maybe not super hard core, either Vi or super-enterprise, Eclipse guys, you know, it’s people who are more focused on the UI side of things. Now it gives them access to quickly use PhoneGap and package their applications, test them, prove their concepts, send them out to the marketplace.

He says Adobe should embrace HTML and Flash equally.

I also asked about Windows Phone support, and given that Microsoft shows no sign of implementing WebKit, I was surprised to get a strongly positive response:

We have something like 80% of the APIs in PhoneGap running on Windows Phone already. That’s open and in the public repo. We are just waiting basically for the IE9 functionality to hit the phone. The sooner they get that out in public, the sooner we can support Windows Phone 7. We have customers knocking at our door begging for it, we’ve actually signed contracts to implement it, with some very large customers. Just can’t get there soon enough, really. I think it’s an oversight on their part to not get IE9 onto the phone quicker.

PhoneGap is at version 0.94 at the moment; Charland says 0.95 will be out “in a few weeks” and he is hoping to get 1.0 completed by O’Reilly OSCON in July.

I’ve posted nearly the complete transcript of my interview, so if you are interested in Charland’s comments on building a business on open source, and how PhoneGap compares to Appcelerator’s Titanium, and what to do about different implementations of local SQL on devices, be sure to read the longer piece.

Is Appcelerator Titanium native? And what does native mean anyway?

Of course we all know that Microsoft’s IE9 and the forthcoming IE10 are native – VP Dean Hachamovitch said so many times during his keynote at the Mix 2011 conference earlier this week. That has sparked a debate about what native means – so here is another interesting case.

Appcelerator’s Titanium cross-platform tool for mobile development is native, or at least that is what it claims:


Now, I am not sure that native has a precise definition, but to me it suggests a compiled application, rather than one interpreted at runtime. So this description of how Titanium executes JavaScript – its main language – is surprising:

In a Titanium Mobile application, your source code is packaged into a binary file and then interpreted at runtime by a JavaScript engine bundled in by the Titanium build process. In this guide, you will learn more about the JavaScript runtime environment that your code runs in. Titanium runs your application’s JavaScript using one of two JavaScript interpreters – JavaScriptCore on iOS (the interpreter used by Webkit) and Mozilla Rhino on Android and BlackBerry.

So a Titanium application is actually interpreted.
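To make the distinction concrete, “packaged into a binary file and then interpreted at runtime” means the shipped app still carries JavaScript that an embedded engine evaluates when the app launches, rather than code compiled ahead of time to machine code. In miniature, using plain JavaScript as a stand-in for the bundled interpreter:

```javascript
// The shipped package carries JavaScript source; an embedded engine
// (JavaScriptCore on iOS, Rhino on Android/BlackBerry in Titanium's
// case) evaluates it at launch. Here new Function() stands in for
// that bundled interpreter -- a sketch, not Titanium's mechanism.
const packagedSource = "exports.add = function (a, b) { return a + b; };";

const exportsObj = {};
new Function("exports", packagedSource)(exportsObj);

console.log(exportsObj.add(2, 3)); // 5
```

The app binary is native in the sense that the host shell and the engine are native code; the application logic itself is still interpreted.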

Native is a vague enough term that Appcelerator can no doubt justify its use here. “Native UI” is fair enough, so is “Native capabilities.” Native performance? That seems to me a stretch, though JavaScript performance is good and constantly improving. Appcelerator even has a web page devoted to what it means by native.

Titanium is also open source. Anyone doubtful about how it works need only consult the code.

In the light of Microsoft’s statements, it is interesting that what Appcelerator really means by native is “not a web page”:

Build Native Apps … Everything else is basically a web page.

So can an application be both native and interpreted? What about Silverlight apps on Windows Phone 7, are they native? Adobe AIR apps, surely they are not native? Google Android has a Native Development Kit which is introduced thus:

The Android NDK is a companion tool to the Android SDK that lets you build performance-critical portions of your apps in native code.

The implication is that byte code executed by the Dalvik virtual machine, which is the normal route for an Android app, is in some sense not native code. Which also implies that Appcelerator’s claims for Titanium are at least open to misunderstanding.

Oracle says OpenOffice non-strategic, ceases commercial versions. Time to reunite with Libre Office?

The OpenOffice story has taken a curious turn today with Oracle announcing that it intends to cease the commercial versions of this office suite and to move the project to a non-commercial organisation.

What the press release does not say is that there is already a non-commercial organisation working on the OpenOffice code. The Document Foundation was formed in September 2010 against Oracle’s wishes. This online OpenOffice meeting shows some of the tensions:

(21:59:42) louis_to: your role in the Document Foundation and LibreOffice makes your role as a representative in the OOo CC untenable and impossible
(22:00:01) Andreas_UX: I would support that. I think that the more we discuss the more we will harden the fronts
(22:00:17) louis_to: it causes confusion, it is a plain conflict of interest, as TDF split from OOo

In this dialogue, louis_to is Louis Suárez-Potts, who works at Oracle as OpenOffice.org community manager.

Oracle’s Edward Screven, Chief Corporate Architect, says in the new press release:

We believe the OpenOffice.org project would be best managed by an organization focused on serving that broad constituency on a non-commercial basis. We intend to begin working immediately with community members to further the continued success of Open Office.

Why is Oracle distancing itself from OpenOffice? The implication is that it is non-strategic and not broadly adopted among Oracle customers, because these two factors are given as reasons for continuing with Linux and MySQL:

We will continue to make large investments in open source technologies that are strategic to our customers including Linux and MySQL. Oracle is focused on Linux and MySQL because both of these products have won broad based adoption among commercial and government customers.

The question now: will Oracle try to set up an independent foundation to compete with the Document Foundation? Or will there be a reconciliation, which would seem the only sensible way forward?

Background: OpenOffice was originally a commercial suite called Star Office. It was bought by Sun and made free and open source in an attempt to loosen Microsoft’s hold on business computing. While OpenOffice has been popular, it has had little impact in the business world on the success of Windows and Office. That said, it is possible that Microsoft’s development of the Office Ribbon and the huge effort behind Office 2007 was partly driven by a desire to differentiate and improve its product in response to the OpenOffice competition.

All-new Adobe Audition is re-written for cross-platform, some features not yet ported

Adobe’s forthcoming Creative Suite 5.5 includes a significant change to its audio editing support. The Soundbooth application has gone, replaced by a new version of Adobe Audition for both Mac and Windows.


I thought this was good news. Audition has always been an excellent product, even back in the days when it was Cool Edit from Syntrillium – Adobe acquired Syntrillium’s technology in 2003. I found it difficult to understand why Adobe had two audio products, especially when Soundbooth is not as capable as Audition. Until now though, Audition was Windows-only, and Creative Suite is cross-platform for Mac and Windows.

Now Adobe’s Durin Gleaves has posted in detail about the history of Soundbooth and Audition. The rationale for Soundbooth was not that suite users required a simpler audio editor, as Adobe had told me previously, but rather that porting Audition was too difficult:

The Audition team looked at the 15 years of legacy Windows code and were not confident the application could be ported quickly enough to satisfy the CS release schedule. As an audio editor was necessary in the suite package, we created Soundbooth which was a simple audio editor built on top of Premiere Pro’s media playback engine. This enabled the team to provide value to the Suite, but the limitations of a playback engine crafted to handle large video files was not ideal for detailed audio production.

To Adobe’s credit, it did not give up on bringing Audition to Creative Suite but has spent two years re-writing Audition in cross-platform code:

So we’ve spent the past two years re-writing Audition from the ground-up, preserving or updating our core DSP, modernizing the code to take advantage of current hardware and operating system technology, and emphasizing increased productivity and speed with every feature.

says Gleaves. The new Audition is optimised for multi-core systems and makes full use of background processing to improve productivity. On the Mac it supports Core Audio and Apple AudioUnit effects, and on Windows ASIO, though there is no mention of WASAPI, the low-latency audio API in Windows Vista and Windows 7. Steinberg’s VST (Virtual Studio Technology) is supported on both platforms.

It is not all good news though. To some extent Audition in CS 5.5 is a new application, and not all the features of Audition 3 have made it across. Gleaves lists the following as features which are not in this version:

  • Tone and noise generation
  • Pitch correction
  • Scientific filters
  • Graphic Phase Shifter
  • MIDI support
  • CD burning

Most of these are likely to return in a future update.

While it is a shame to see missing features, it makes sense for Adobe to unify its audio development effort on a new and solid base.

One other thing I should mention. Soundbooth has a feature called Analyze Speech for which I had high hopes, as I frequently need to transcribe interviews, but in practice the results were disappointing. I suspect it may work reasonably well when there is a script with which to match the audio. That does raise the question though: are there any features in Soundbooth that will be missed following the transition to Audition?

Windows Phone at Mix 2011: what Microsoft said and did not say

Yesterday Microsoft’s Joe Belfiore (phone VP) and Scott Guthrie (developer VP) took the stage at the Mix 2011 conference in Las Vegas to tell us what is new with Windows Phone.

The opening part of the keynote was significant. Belfiore spent some time talking about the “update situation”.


This is all to do with who controls what ends up on your phone. If you buy a Windows PC or laptop, you can get updates from Microsoft using Windows update or by downloading service packs; the process is between you and Microsoft.

Not so with Windows Phone. The operators have a say as well; and operators are not noted for delivering speedy OS updates to users. Operators seem to have difficulty with the notion that by delivering strong updates to existing devices that have already been purchased, they build user loyalty and satisfaction. They are more geared to the idea of delivering new features with new hardware. Updating existing phones can cause support calls and other hassles, or even at worst bricked devices. They would rather leave well alone.

When Microsoft launched Windows Phone it announced that there would be regular updates under Microsoft’s control; but this has not been the case with the first update, codenamed “NoDo”. The update process has been delayed and inconsistent between operators, just like the bad old days of Windows Mobile.

Belfiore went on about testing and phones being different from PCs and improvements to the process; but in the end it seems to me that Microsoft has given in:

Mobile operators have a very real and reasonable interest in testing updates and making sure they’re going to work well on their phones and on their network. Especially if you think about large operators with huge networks, they are the retailer who sells the phone, so they have to deal with returns, they take the support calls and they have to worry about whether their network will stay up and perform well for everyone … From our point of view, that’s quite reasonable, and our belief and understanding is that it’s standard practice in the industry that phones from all different vendors undergo operator testing before updates are made available.

That “testing” label can cover any amount of prevarication. It appears that Microsoft is unable to achieve what Apple has achieved: the ability to update its phone OS when it wants to. That is a disadvantage for Microsoft and there is no sign of improvement.

More positively, Microsoft announced a number of significant new features in the first major update to the OS, codenamed Mango. This is for existing devices as well as new ones, though new devices will have enhanced hardware. Belfiore focused on what matters for developers, and hinted that there will be other end-user features. A few bullet points:

  • Internet Explorer 9 is on Mango – “The same exact code that has just shipped and is now getting installed on tons and tons of PCs is the code base that will be on the phone” said Belfiore. No, it is not built in Silverlight.
  • Limited multitasking for third-party apps. This is in the form of “Live agents” which run in the background. Full apps cannot multitask as I understand it, though they can be suspended in memory for fast switching. Currently apps appear to do this but it is faked; now it will be for real, with the proviso that a suspended app may get shut down if its memory is needed by the OS.
  • Multiple live tiles for a single app.
  • Fixed marketplace search so that music does not appear when you search for an app.
  • Apps can register with search so that Bing searches can integrate with an app.
  • There will be a built in SQL Server CE database with programmatic access using Linq (Language Integrated Query).
  • Full TCP/IP socket support
  • Access to raw camera data for interesting imaging applications or barcode processing
  • 1,500 new APIs in Mango
  • Performance improvements including a better garbage collector that apparently gives a significant boost
  • Improved tools with the ability to simulate GPS on the emulator, capture performance trace log from phone

It adds up to a decent update, though more Windows Phone 7.5 than Windows Phone 8 (I do not know what the official name will be). Belfiore also mentioned new apps coming to Windows Phone 7, including Spotify, Skype and Angry Birds.

But what was not said? Here are a few things I would like to have heard:

  • When will we get Adobe Flash on Windows Phone? Not mentioned.
  • What about Silverlight in the browser? You would think this would be easy to implement; but I have not seen it confirmed (let me know if you have news).
  • When will Nokia ship Windows Phone devices? Nokia’s Marco Argenti appeared on stage but said nothing of substance.
  • The Mango update is coming “in the fall” but when will current users get updates?
  • Will Windows Phone 8 move away from Windows CE to full Windows, so the same OS will work across phone, tablets and desktop PCs?

Above all, I would like convincing news about how Microsoft intends to get Windows Phone better exposure and fuller support from operators. I still hardly see it in retailers, and it seems a long way down the list when you talk to a salesperson about what new phone you should buy. I do not have a Windows Phone at the moment, but when I tried it for a couple of weeks I mostly liked the user interface – I found the soft buttons on the Mozart annoying because they are easy to press accidentally – and I also like the developer tools, though I would like to see a native code development option. In the end though, it is no use developing for Windows Phone if your customers are asking for Apple iOS and Google Android.

Microsoft shared the following figures:

  • 12,000+ apps
  • 35,000 registered developers
  • 1.5 million tool downloads

It is a start, but these are not really big numbers, and the proportion of tool downloaders that end up delivering apps seems small so far.

A lot rests on the Nokia partnership and how that plays out.

It now appears that we will need to wait until September and the newly announced PDC (Professional Developers Conference) in Anaheim 13th-16th September before we learn more about the long-term mobile strategy.

Update: Microsoft’s Phil Winstanley tells me that the Windows Phone OS is just called “Windows Phone” regardless of version; but that the Mango update is referred to as “Windows Phone OS 7.5” when it is necessary to differentiate. If that sounds confusing, do not blame me!

Spotify is now less free but still a better deal than Apple iTunes

Spotify’s Daniel Ek has announced restrictions to Spotify’s free edition:

  • Users will be able to play any track for free up to 5 times only
  • Total listening time for free users will be limited to 10 hours per month
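The announced limits amount to simple per-user bookkeeping: a play count per track, and a running total of listening time per month. As a toy sketch of the rules as stated – the names and data structures here are invented, not Spotify’s:

```javascript
// Toy model of the announced free-tier limits (illustrative only).
const MAX_PLAYS_PER_TRACK = 5;
const MAX_MINUTES_PER_MONTH = 10 * 60;

function canPlay(user, trackId) {
  const plays = user.playCounts[trackId] || 0;
  return plays < MAX_PLAYS_PER_TRACK &&
         user.minutesThisMonth < MAX_MINUTES_PER_MONTH;
}

function recordPlay(user, trackId, minutes) {
  user.playCounts[trackId] = (user.playCounts[trackId] || 0) + 1;
  user.minutesThisMonth += minutes;
}

const user = { playCounts: {}, minutesThisMonth: 0 };
for (let i = 0; i < 5; i++) recordPlay(user, "favourite-track", 4);

console.log(canPlay(user, "favourite-track")); // false: five free plays used
console.log(canPlay(user, "another-track"));   // true: within both limits
```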

The changes are presented as a necessity:

It’s vital that we continue offering an on-demand free service … but to make that possible we have to put some limits in place going forward.

You can easily escape the restrictions by subscribing to the Unlimited service at £4.99 per month (or equivalent in your currency), or the Premium service at £9.99. Unlimited offers music without advertisements, while Premium includes mobile and offline music, and a higher bitrate of 320 kbps.

While it is a shame to see free Spotify become less attractive, the free and premium services are well priced. For the cost of one album per month you can play anything on Spotify’s service as often as you like. The main downside is that there are gaps in what is available. Over time, my guess is that either Spotify will win the argument and the business, and those gaps will be filled; or of course it may fail.

Spotify’s problem is that it has to pay even for the music that is streamed for free. That is always a difficult business model, and it seems that advertising is not enough to pay for it at the rates the music companies require.

If the restrictions result in a surge of new paid subscriptions, this may even work out well for the company, though the service is still not available in the USA.

Personally I think Spotify is inherently a better deal than iTunes downloads, for example, which offer an unlimited license but only on a track by track basis and with no resale value. Anyone who still buys music is likely to spend less with Spotify, and to get more choice. The subscription model is the only one that makes sense in the internet era.

At the same time, I can understand why the music companies want to maintain a high price for streamed music. They are playing a high-risk game though, since by making legal music more expensive and adding friction, they make illegal music more attractive.

For example, there is now more incentive for a user to record a favourite track during one of their five free listens, and never pay for it again; or to get the tracks they want from a friend’s ripped CD – both actions that are untraceable.

When will Intel’s Many Integrated Core processors be mainstream?

I’m at Intel’s software tools conference in Dubrovnik, which I have attended for the last three years, and as usual the big topic is concurrent programming and how to write code that takes advantage of the multiple cores in today’s computers.

Clearly this remains a critical subject, but in some ways the progress over these last three years has been disappointing when it comes to the PCs that most of us use. Many machines are still only dual-core, which is sub-optimal for concurrent programming since the overhead of multi-threaded code eats into the benefit of having two cores. Quad-core is now common too, and more useful, but what about having 50 or 80 or more cores? This enables massively parallel processing of the kind that you can easily do today with general-purpose GPU programming using OpenCL or NVIDIA’s CUDA, but not yet on the CPU unless you have a supercomputer. I realise that GPU cores are not the same as CPU cores; nevertheless they enable some spectacularly fast parallel processing.
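To make the overhead point concrete, here is a minimal sketch in Python (my own illustration, nothing to do with Intel’s tools): splitting a task across worker threads adds chunking and coordination cost, so on a small task, or a machine with only two cores, the parallel version can actually run slower than the serial one.

```python
# Minimal sketch: summing a list by splitting it across worker threads.
# The answer is identical to the serial sum, but the chunking and thread
# coordination add overhead that only pays off when the per-chunk work
# is large enough and enough cores are available.
from concurrent.futures import ThreadPoolExecutor

def chunked_sum(data, workers=4):
    """Sum `data` by splitting it into `workers` chunks, one per thread."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, chunks))

if __name__ == "__main__":
    data = list(range(100_000))
    # Same answer as the serial sum; the extra machinery is pure overhead
    # until the workload is big enough to amortise it.
    assert chunked_sum(data) == sum(data)
```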

I am interested therefore in Intel’s MIC, or Many Integrated Core, architecture, which combines 50 or more CPU cores on a single chip. MIC is already in preview, with hardware codenamed Knights Corner and a development kit called Knights Ferry. But when will MIC hit the mainstream for servers and workstations, and how long until we can have 50 cores on a commodity desktop PC? I spoke to Intel’s chief evangelist James Reinders.

Reinders first gave me some background on MIC:

“We’ve made those bold steps to dual core, quad core and we’ve got even ten core now, but if you look inside those microprocessors they have a very simple structure. All the cores are hooked together and share their connection to memory, through a shared cache usually that’s on the chip. It’s a simple computer structure, and we know from experience when you build computers with more and more processors, that eventually you go to more sophisticated connections between the cores. You don’t build a 1000-processor super computer and hook them all together with a bus to one memory.

“It’s inevitable that on a chip we need to design a more sophisticated connection. That’s what MIC’s about, that’s what the Larrabee project has always been about, a belief that we should take a bunch of x86 cores and hook them together with something more sophisticated. In this case it’s a ring, a bi-directional, 512-bit wide high performance ring, with multiple connections to memory off the chip, which gives us more bandwidth.

“That’s how I look at MIC, it’s putting a cluster-type of design on a chip.”
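Reinders’ bus-versus-ring point can be illustrated with a toy calculation (my own sketch, not Intel’s figures): on a bidirectional ring a message between two of N cores travels the shorter way round, so the worst case is N/2 hops, rather than every core contending for a single shared bus.

```python
# Toy model of a bidirectional ring interconnect: the distance between
# two cores is the shorter way round the ring, so with n cores no
# message ever crosses more than n // 2 hops.
def ring_hops(src, dst, n):
    """Hops between cores src and dst on an n-core bidirectional ring."""
    clockwise = (dst - src) % n
    return min(clockwise, n - clockwise)

if __name__ == "__main__":
    n = 50  # a MIC-class core count
    worst = max(ring_hops(0, d, n) for d in range(n))
    print(worst)  # 25: no message crosses more than half the ring
```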

But what about timing?

“The first place you’ll see this is in servers and in workstations, where there’s a lot of demand for a lot of computation. In that case we’ll see that availability sometime by the end of 2012. The Intel product should be out late in that year.

“When will we see it in other devices? I think that’s a ways off. It’s a very high core count part, more than 50, it’s going to consume a fair amount of power. The same part 18 months later will probably consume half the power. So inside a decade we could see this being common on desktops, I don’t know about mobile devices, it might even make it to tablets. A decade’s a long time, it gives a lot of time for people to come up with innovative uses for it in software.

“We’ll see single core disappear everywhere.”

Incidentally, it is hard to judge how much computing power is “enough”. Although having many CPU cores may seem overkill for everyday computing, features like speech recognition or on-the-fly image processing make devices smarter at the cost of intense processing under the covers. From supercomputers to smartphones, history tells us that if more computing capability is available, we will find ways to use it.

As Cisco closes down Flip, is device convergence finally happening?

Cisco is closing down the Flip video camera business it acquired with Pure Digital in May 2009:

Cisco will close down its Flip business and support current FlipShare customers and partners with a transition plan.

A sad day for Flip enthusiasts. The cool thing about a Flip device is that making a video is quick, easy and cheap. Most commentators say Flip is being killed because smartphones now do this equally well; though this thoughtful post by Michael Mace argues it is more to do with Cisco not understanding the consumer market, and being too slow to deliver upgraded Flip devices:

It’s almost impossible for any enterprise company to be successful in consumer, just as successful consumer companies usually fail in enterprise. The habits and business practices that make them a winner in one market doom them in the other.

Maybe it is a bit of both. I have a Flip and I rarely use it, though I am not really a good example since I take more still pictures than videos. Most of the time it stays at home, because I already have too many things to carry and too many devices to keep charged.

My problem though is that convergence is happening too slowly. I have slightly different requirements from most people. I do interviews so I need high quality recordings, and I take snaps which I use to illustrate posts and articles. I also do a lot of typing on the road.

This means I end up taking a Windows 7 netbook – I have given up travelling with a full-power laptop – for typing, email, and browsing the web.

The netbook has a built-in microphone which is rubbish, and a microphone input which I find does not work well either, so I carry a dedicated recorder as well. It is an antique, an iRiver H40, but with a 40GB hard drive, six hours of battery life on its original battery, and a decent microphone input with plug-in power, it still works well for me. I use a small Sony table microphone which gives me excellent quality, making it possible to transcribe interviews even when there is background noise. Even though it is “only voice”, I find that recording in high quality with a proper microphone is worth the effort; when the iRiver finally gives up I might replace it with something like the Edirol R-09HR.

As for photos, I have tried using a smartphone but get better results from a dedicated Canon camera, so much so that it is worth carrying this extra device.

Of course I still need a mobile phone. I am also tempted to pack a tablet or Amazon Kindle for reading; but how many devices is too many?

I am still hopeful that I may find a smartphone with a good enough camera and good enough audio recording; with an add-on keyboard I could perhaps leave the netbook at home as well, or take a tablet instead of a netbook.

But for now I am still weighed down with phone, camera, recorder, microphone and netbook. Roll on converged devices, I can’t wait!