All posts by onlyconnect

As Microsoft releases new tools for Windows Phone, developers ask: how is it selling?

Microsoft has released Visual Basic for Windows Phone Developer Tools – not a lot to report, I guess, except that what you could already do in C# you can now also do in Visual Basic.

Still, when someone at Microsoft asked me what I thought of the Windows Phone 7 developer platform, I replied that the tools look good for the most part – though I would like to see a native code option, and it seems unfortunate that mobile operators can install native code apps while the rest of us officially cannot – but that the bigger question is the size of the market.

We all know that a strong and large community of developers is critical to the success of a platform – but as I’ve argued before, developers will go where their customers are, rather than selecting a platform based on the available tools and libraries. It is a bit of both of course: the platform has to be capable of running the application, and ease of development is also a factor, but in the end nothing attracts developers more than a healthy market.

Therefore the critical question for developers is how well Windows Phone 7 is selling.

Nobody quite knows, though Tom Warren makes the case for not much more than 126,000, that being the number of users of the Windows Phone Facebook application.

I’m not quite convinced when Warren says:

It’s likely that most users will connect their Facebook account so the statistics could indicate nearly accurate sales figures.

Not everyone loves Facebook, and when I was trying out Windows Phone 7 I found myself reluctant to keep it permanently logged in. Even so, I’d agree that well over 50% of users will enable Facebook integration, so it is a useful statistic.

Although that suggests a relatively small number in the context of overall smartphone sales, my perception is that lack of availability is part of the reason, so it is too early to judge the platform’s success. I do not see many Windows Phone 7 devices in the mobile phone shops that I pass in the UK; in fact it is unusual to see one at all. I am not sure whether this is mainly because of supply shortages, or because Microsoft and its partners found it difficult to convince the trade that this would be a sought-after device, or both.

Some bits of anecdotal evidence are encouraging for Microsoft. Early adopters seem to like it well enough. Nevertheless, it is a minority player at the moment and that will not change soon.

Developers are therefore faced with a small niche market. Microsoft has done a fair job with the tools; now it needs to get more devices out there, to convince developers that once they have built their applications, there are enough customers to make it worthwhile.

HTML 5 Canvas: the only plugin you need?

The answer is no, of course. And Canvas is not a plugin. That said, here is an interesting proof of concept blog and video from Alexander Larsson: a GTK3 application running in Firefox without any plugin.

GTK is an open source cross-platform GUI framework written in C but with bindings to other languages including Python and C#.

So how does native C code run in the browser without a plugin? The answer is that the HTML 5 Canvas element, already widely implemented and coming to Internet Explorer in version 9, has a rich drawing API that goes right down to pixel manipulation if you need it. In Larsson’s example, the native code is actually running on a remote server. His code receives the latest image of the application from the server and transmits mouse and keyboard operations back, creating the illusion that the application is running in the browser. The client only needs to know what has changed in the image, so although sending screen images sounds heavyweight, it is amenable to optimisation and compression.

It is the same concept as Windows remote desktop and terminal services, or remote access using VNC, but translated into a browser application that requires no additional client or setup.
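
The crucial optimisation is sending only the regions of the screen that have changed between frames. Here is a minimal C# sketch of that idea – my own illustration, not Larsson’s code, which is C running against GTK – comparing two successive frames tile by tile and returning the rectangles that need to be re-encoded and sent to the browser:

using System;
using System.Collections.Generic;
using System.Drawing;

static class FrameDiff
{
    // Compare two captured frames of the remote application's window and
    // return the tiles that differ; only these regions need to be compressed
    // and pushed to the browser for drawing onto the canvas.
    public static List<Rectangle> ChangedTiles(Bitmap previous, Bitmap current, int tileSize)
    {
        var dirty = new List<Rectangle>();
        for (int y = 0; y < current.Height; y += tileSize)
        {
            for (int x = 0; x < current.Width; x += tileSize)
            {
                int w = Math.Min(tileSize, current.Width - x);
                int h = Math.Min(tileSize, current.Height - y);
                if (TileDiffers(previous, current, x, y, w, h))
                    dirty.Add(new Rectangle(x, y, w, h));
            }
        }
        return dirty;
    }

    static bool TileDiffers(Bitmap a, Bitmap b, int x, int y, int w, int h)
    {
        // GetPixel keeps the sketch simple; a real implementation would use
        // LockBits and compare raw pixel data for speed.
        for (int j = y; j < y + h; j++)
            for (int i = x; i < x + w; i++)
                if (a.GetPixel(i, j) != b.GetPixel(i, j))
                    return true;
        return false;
    }
}

On the client side, the browser draws each received tile at its co-ordinates with the canvas drawImage call and posts mouse and keyboard events back to the server.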

There are downsides to this approach. First, it puts a heavy burden on the server, which is executing the application code as well as supplying the images, especially when there are many simultaneous users. Second, there are tricky issues when the user expects the application to interact with the local machine, such as playing sounds, copying to the clipboard or printing; everything the user sees is an image rather than character-by-character text, for example. Third, it is not well suited to graphics that change rapidly, as in a game with fast-paced action.

On the other hand, it solves an immense problem: getting your application running on platforms which do not support the runtime you are using – native applications, Flash and Silverlight on Apple’s iPad and iPhone, for example. I recall seeing a proof of concept for Flash at an Adobe MAX conference (not the most recent one) as part of the company’s research into how to break into Apple’s walled garden.

It is not as good as a true local application in most cases, but it is better than nothing.

Now, if Microsoft were to do something like this for Silverlight, enabling users to run Silverlight apps on their Apple and Linux devices, I suspect attitudes to the viability of Silverlight in the browser would change considerably.

Microsoft removes Drive Extender from new Windows Home Server, users rebel

Microsoft’s Windows Home Server has a popular feature called Drive Extender [Word docx] which lets you increase storage space simply by adding an internal or external drive – no fussing with drive letters. In addition, Drive Extender has some resilience against drive failure, duplicating files stored in shared folders when more than one drive is available.

Recognising the usefulness of this feature for business users as well as in the home, Microsoft prepared a significantly upgraded Drive Extender for the next version of Windows Home Server, code-named Vail, and for new “Essentials” editions of Small Business Server (SBS) and Storage Server. Anandtech has an explanation of the changes, necessary to support business features such as the Encrypted File System.

The new version is more complex though, and it seems Microsoft could not get it working reliably. Rather than delay the new products, Microsoft decided to drop the feature, as announced by product manager Michael Leworthy. Note the rating on the announcement.

Part of the problem is that rather than discuss difficulties in the implementation, Leworthy presented the decision as something to do with the availability of larger drives:

We are also seeing further expansion of hard drive sizes at a fast rate, where 2Tb drives and more are becoming easy accessible to small businesses.  Since customers looking to buy Windows Home Server solutons from OEM’s will now have the ability to include larger drives, this will reduce the need for Drive Extender functionality.

He added that “OEM partners” will implement “storage management and protection solutions”.

Unfortunately for Microsoft, Drive Extender is a key feature of Windows Home Server for many of its users. The announcement drew comments like this:

My great interest in Vail has just evaporated.  Drive Extender is the great feature of Home Server, and what my personal data storage is based around.  I have loved owning my WHS but unfortunately without DE I will be looking for other products now.

A thread (requires login to WHS beta) on the beta feedback site Microsoft Connect attracted thousands of votes in a couple of days.

One of the concerns is that while Drive Extender 2 may be needed for the business servers, version 1 is fine for home users. Therefore it seems that the attempt to bring the technology to business servers has killed it for both.

The SBS community is less concerned about the issue than home users. For example, Eriq Neale says:

While I can see how the Home Server folks are going to lament the loss of DE from their product, as cool as it is, removing that technology removes a LOT of roadblocks I was expecting for Aurora and Breckenridge, and that’s good news for my business.

though Wayne Small says:

I know that a few of my fellow MVPs were told of this recently and sworn to secrecy under our NDA, and we honestly were dumbstruck as to the fact it had been cancelled.  I can only assume that the powers that be at Microsoft know what they are truly doing by removing this feature.  On the flip side however, it means that any server backup or antivirus product that worked with Windows Server 2008 R2 will now most certainly work with SBS 2011 Essentials without modification!  See – there is a silver lining there somewhere.

What should Microsoft do? I guess it depends on how badly broken Drive Extender 2 is. Perhaps one option would be to keep Drive Extender 1 in Vail, but leave it out of the business servers. Another idea would be to delay the products while Drive Extender 2 is fixed, presuming it can be done in months rather than years.

Or will Microsoft ignore the feedback and ship without any form of Drive Extender? Microsoft may be right to do so, in that shipping a server with broken storage management would be a disaster, no matter how much users like the feature.

How will online services impact Microsoft’s partner business?

2010 is the year Microsoft got serious about cloud services. Windows Azure opened for real business in November 2009 – OK, just before 2010 – and CEO Steve Ballmer took to telling the world how Microsoft is “all in” for cloud computing whenever he got up to speak. Office and SharePoint 2010 launched in May 2010, complete with the ability to create and edit Office documents from a web browser. Microsoft also announced Office 365, essentially an upgrade of its existing BPOS suite, with hosted Exchange, SharePoint and Lync (formerly Office Communicator). And it announced Small Business Server 2011, including an Essentials edition, formerly codenamed “Aurora”, which is little more than Windows Home Server plus Active Directory and points small businesses towards cloud services for email and document collaboration.

I’d guess that Microsoft’s cloud conversion is driven in part by the progress Google, Salesforce.com and others have made in persuading businesses that hosted internet services make more sense than maintaining your own servers and server applications in many cases.

But what is the impact on Microsoft partners, who have been kept busy supplying and configuring servers, implementing backup, keeping systems running, and then upgrading them as they become obsolete? On the face of it they have less to do in a hosted world, and although Microsoft offers commission on the sale of online subscriptions, that might not compensate for lost business.

Then again, cloud services offer new opportunities, still need configuring, and look likely to be a source of new business for partners particularly at a time when the majority of businesses have not yet made the transition.

I’m researching a further piece on the subject and would love to hear honest views from partners such as resellers and solution providers about how Microsoft’s online services are affecting partner business now and in the future. Or maybe you think this cloud thing is overdone and it will be business as usual for a while yet. You can contact me by email – tim(at)itwriting.com – or of course comment below.

The Microsoft Azure VM role and why you might not want to use it

I’ve spent the morning talking to Microsoft’s Steve Plank – whose blog you should follow if you have an interest in Azure – about Azure roles and virtual machines, among other things.

Windows Azure applications are deployed to one of three roles, where each role is in fact a Windows Server virtual machine instance. The three roles are the web role for IIS (Internet Information Server) applications, the worker role for general applications, and newly announced at the recent PDC, the VM role, which you can configure any way you like. The normal route to deploying a VM role is to build a VM on your local system and upload it, though in future you will be able to configure and deploy a VM role entirely online.

It’s obvious that the VM role is the most flexible. You will even be able to use 64-bit Windows Server 2003 if necessary. However, there is a critical distinction between the VM role and the other two. With the web and worker roles, Microsoft will patch and update the operating system for you, but with the VM role it is up to you.

That does not sound too bad, but it gets worse. To understand why, you need to think in terms of a golden image for each role, which is stored somewhere safe in Azure and gets deployed to your instance as required.

In the case of the web and worker roles, that golden image is constantly updated as the system gets patched. In addition, Microsoft takes responsibility for backing up the system state of your instance and restoring it if necessary.

In the case of the VM role, the golden image is formed by your upload and only changes if you update it.

The reason this is important is that Azure might at any time replace your running VM (whichever role it is running) with the golden image. For example, if the VM crashes, or the machine hosting it suffers a power failure, then it will be restarted from the golden image.

Now imagine that Windows Server needs an emergency patch because of a newly-discovered security issue. If you use the web or worker role, Microsoft takes responsibility for applying it. If you use the VM role, you have to make sure it is applied not only to the running VM, but also to the golden image. Otherwise, you might apply the patch to the running instance, only for Azure to replace it later with the unpatched golden image.

Therefore, to maintain a VM role properly you need to keep a local copy patched and refresh the uploaded golden image with your local copy, as well as updating the running instance. Apparently there is a differential upload, to reduce the upload time.

The same logic applies to any other changes you make to the VM. It is actually more complex than managing VMs in other scenarios, such as the Linux VM on which this blog is hosted.

Another point which all Azure developers must understand is that you cannot safely store data on your Azure instance, whichever role it is running. Microsoft does not guarantee the safety of this data, and it might get zapped if, for example, the VM crashes and gets reverted to the golden image. You must store persistent data in Azure’s database or blob storage services instead.
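
As a simple illustration, here is a minimal sketch of persisting a small piece of application state to blob storage, using the StorageClient library that shipped with the Azure SDK at the time; the account details are placeholders and method names differ slightly between SDK versions:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class DurableState
{
    static void Main()
    {
        // The connection string would normally come from the role's service
        // configuration; the account name and key here are placeholders.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");

        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        // Anything written to the instance's local disk can vanish when the
        // VM is reverted to the golden image, so keep durable state in a blob.
        CloudBlobContainer container = blobClient.GetContainerReference("appstate");
        container.CreateIfNotExist();

        CloudBlob blob = container.GetBlobReference("settings.xml");
        blob.UploadText("<settings><theme>dark</theme></settings>");
    }
}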

The same issue limits the extent to which you can customize the web and worker VMs. Microsoft will be allowing full administrative access to the VMs if you require it, but there is little point in making extensive changes to an individual instance, since it could be reverted to the golden image at any time. The guidance is that if manual changes take more than five minutes to do, you are better off using the VM role.

A further implication is that you cannot realistically use an Azure VM role to run Active Directory, since Active Directory does not take kindly to being reverted to an earlier state. Plank says that third parties may come up with solutions that involve persisting Active Directory data to Azure storage.

Although I’ve talked about golden images above, I’m not sure exactly how Azure implements them. However, if I have understood Plank correctly, it is conceptually accurate.

The bottom line is that the best scenario is to live with a standard Azure web or worker role, as configured by you and by Azure when you created it. The VM role is a compromise that carries a significant additional administrative burden.

25 years of Windows: triumph and tragedy

I wrote a (very) short history of Windows for the Register, focusing on the launch of Windows 1.0 25 years ago.

I used Oracle VirtualBox to run Windows 1.0 under emulation since it more or less works. I found an old floppy with DOS 3.3 since Windows 1.0 does not run on DOS 6.2, the only version offered by MSDN. In the course of my experimentation I discovered that Virtual PC still supports floppy drives but no longer surfaces this in the UI. You have to use a script. Program Manager Ben Armstrong says:

Most users of Windows Virtual PC do not need to use floppy disks with their virtual machines, as general usage of floppy disks has become rarer and rarer.

An odd remark in the context of an application designed for legacy software.

What of Windows itself? Its huge success is a matter of record, but it is hard to review its history without thinking how much better it could have been. Even in version 1.0 you can see the intermingling of applications, data and system files that proved so costly later on. It is also depressing to see how mistakes in the DOS/Windows era went on to infect the NT range.

Another observation: it took Microsoft 8 years to release a replacement for DOS/Windows – Windows NT in 1993 – and another 8 years to bring Windows NT to the mainstream on desktop and server with Windows XP in 2001. It is now 9 years later; will there ever be another ground-up rewrite, or do we just get gradual improvements/bloat from now on?

I don’t count 64-bit Windows as a ground-up rewrite since it is really a port of the 32-bit version.

Finally, lest I be accused of being overly negative, it is also amazing to look at Windows 1.0, implemented in fewer than 100 files in a single directory, and Windows 7/Server 2008 R2, a platform on which you can run your entire business.

What you are saying about the Java crisis

A week or so ago I posted about the Java crisis and what it means for developers. The post attracted attention both here and later on The Guardian web site where it appeared as a technology blog. It was also picked up by Reddit prompting a discussion with over 500 posts.

So what are you saying? User LepoldVonRanke takes a pragmatic view:

I’d much rather have Java given a purpose and streamlined from a central authoritative body with a vision, than a community-run egg-laying, wool-growing, milk-giving super cow pig-sheep, that runs into ten directions at the same time, and therefore does not go anywhere. The Java ship needs a captain. Sun never got a good shot at it. There was always someone trying to wrestle control over Java away. With the Oracle bully as Uberfather, maybe Java has a place to go.

which echoes my suggestion that Java might technically be better off under more dictatorial control, unpalatable though that may be. User 9ren is sceptical:

Theoretically, the article is quite right that Java could advance faster under Oracle. It would be more proprietary, and of course more focussed on the kinds of business applications that bring in revenue for Oracle. It would be in Oracle’s interest; and the profit motive might even be a better spur than Sun had.

But – in practice – can they actual execute the engineering challenges?

Although Oracle has acquired many great software engineers (eg. from Sun, BEA Systems, many others), do they retain them? Does their organizational structure support them? And is Oracle known for attracting top engineering talent in general?

In its formation, Oracle had great software engineers (theirs was the very first commercial relational database, a feat many thought impossible). But that was 40 years ago, and now it’s a (very successful) sales-driven company.

There’s an important point from djhworld:

Java is hugely popular in the enterprise world, companies have invested millions and millions of pounds in the Java ecosystem and I don’t see that changing. Many companies still run Java 1.4.2 as their platform because it’s stable enough for them and would cost too much to upgrade.

The real business world goes at its own pace, whereas tech commentators tend to focus on the latest news and try to guess the future. It is a dangerous disconnect. Take no notice of us. Carry on coding.

On Reddit, some users focused on my assertion that the C# language was more advanced than Java. Is it? jeffcox111 comments:

I write in C# and Java professionally and I have to say I prefer C# hands down. Generics are very old news now in .Net. Take a look at type inference, lambdas, anonymous types, and most of all take a look at LINQ. These are all concepts that have been around for 3 years now in .Net and I hate living without them in Java. With .Net 5 on the horizon we are looking forward to better asynchronous calling/waiting and a bunch of other coolness. Java was good, but .Net is better these days.

and I liked this remark on LINQ:

I remember my first experience with LINQ after using C# for my final-year project (a visual web search engine). I asked a C# developer for some help on building a certain data structure and the guy sent me a pseudocode-looking stuff. I thanked him for the help and said that I’d look to find a way to code it and he said "WTF, I just gave you the code".

From there on I’ve never looked back.
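
If you have not used these features, here is a minimal sketch (with invented data) of the kind of C# the commenters are talking about: type inference with var, anonymous types, lambdas and a LINQ query over an in-memory collection.

using System;
using System.Linq;

class LinqSketch
{
    static void Main()
    {
        // An anonymous type and an implicitly typed array hold some sample data.
        var orders = new[]
        {
            new { Customer = "Acme", Amount = 120.0 },
            new { Customer = "Acme", Amount = 80.0 },
            new { Customer = "Widgets Ltd", Amount = 45.5 }
        };

        // Lambdas and LINQ operators group, aggregate and sort in a few lines.
        var totals = orders
            .GroupBy(o => o.Customer)
            .Select(g => new { Customer = g.Key, Total = g.Sum(o => o.Amount) })
            .OrderByDescending(t => t.Total);

        foreach (var t in totals)
            Console.WriteLine("{0}: {1}", t.Customer, t.Total);
    }
}

The equivalent in the Java of the day means hand-written loops or a third-party library.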

Another discussion point is write once – run anywhere. Has it ever been real? Does it matter?

The company I work for has a large Java "shrinkwrap" app. It runs ok on Windows. It runs like shit on Mac, and it doesn’t run at all on Linux.

write once, run anywhere has always been a utopian pipe dream. And the consequence of this is that we now have yet another layer of crap that separates applications from the hardware.

says tonymt, though annannsi counters:

I’ve worked on a bunch of Java projects running on multiple unix based systems, windows and mac. GUI issues can be a pain to get correct, but its been fine in general. Non-GUI apps are basically there (its rare but I’ve hit bugs in the JVM specific to a particular platform)

Follow the links if you fancy more – I’ll leave the last word to A_Monkey:

I have a Java crisis every time I open eclipse.

WS-I closes its doors – the end of WS-* web services?

The Web Services Interoperability Organization has announced [pdf] the “completion” of its work:

After nearly a decade of work and industry cooperation, the Web Services Interoperability Organization (WS-I; http://www.ws-i.org) has successfully concluded its charter to document best practices for Web services interoperability across multiple platforms, operating systems and programming languages.

In the whacky world of software though, completion is not a good thing when it means, as it seems to here, an end to active development. The WS-I is closing its doors and handing maintenance of the WS interoperability profiles to OASIS:

Stewardship over WS-I’s assets, operations and mission will transition to OASIS (Organization for the Advancement of Structured Information Standards), a group of technology vendors and customers that drive development and adoption of open standards.

Simon Phipps blogs about the passing of WS-I and concludes:

Fine work, and many lessons learned, but sadly irrelevant to most of us. Goodbye, WS-I. I know and respect many of your participants, but I won’t mourn your passing.

Phipps worked for Sun when the WS-* activity was at its height and WS-I was set up, and describes its formation thus:

Formed in the name of "preventing lock-in" mainly as a competitive action by IBM and Microsoft in the midst of unseemly political knife-play with Sun, they went on to create massively complex layered specifications for conducting transactions across the Internet. Sadly, that was the last thing the Internet really needed.

However, Phipps links to this post by Mike Champion at Microsoft which represents a more nuanced view:

It might be tempting to believe that the lessons of the WS-I experience apply only to the Web Services standards stack, and not the REST and Cloud technologies that have gained so much mindshare in the last few years. Please think again: First, the WS-* standards have not in any sense gone away, they’ve been built deep into the infrastructure of many enterprise middleware products from both commercial vendors and open source projects. Likewise, the challenges of WS-I had much more to do with the intrinsic complexity of the problems it addressed than with the WS-* technologies that addressed them. William Vambenepe made this point succinctly in his blog recently.

It is also important to distinguish between the work of the WS-I, which was about creating profiles and testing tools for web service standards, and the work of other groups such as the W3C and OASIS which specify the standards themselves. While work on the WS-* specifications seems much reduced, there is still work going on. See for example the W3C’s Web Services Resource Access Working Group.

I partly disagree with Phipps about the work of the WS-I being “sadly irrelevant to most of us”. It depends who he means by “most of us”. Granted, all this stuff is meaningless to the world at large; but there are a significant number of developers who use SOAP and WS-* at least to some extent, and interoperability is key to the usefulness of those standards.

The Salesforce.com API is mainly SOAP based, for example, and although there is a REST API in preview it is not yet supported for production use. I have been told that a large proportion of the transactions on Salesforce.com are made programmatically through the API, so here is one place at least where SOAP is heavily used.

WS-* web services are also built into Microsoft’s Visual Studio and .NET Framework, and are widely used in my experience. Visual Studio does a good job of wrapping them so that developers do not have to edit WSDL or SOAP requests and responses by hand. I’d also suggest that web services in .NET are more robust than DCOM (Distributed COM) ever was, and work successfully over the internet as well as on a local network, so the technology is not a failure.
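
To show what that wrapping looks like, here is a minimal sketch of calling a SOAP service from .NET using WCF; the service contract, operation and endpoint address are invented for illustration, and in a real project Visual Studio would generate the proxy code for you from the service’s WSDL.

using System;
using System.ServiceModel;

// A hypothetical contract; Add Service Reference generates something similar
// (plus a strongly typed client class) from the service's WSDL.
[ServiceContract]
public interface IStockQuoteService
{
    [OperationContract]
    decimal GetQuote(string symbol);
}

class Program
{
    static void Main()
    {
        // BasicHttpBinding speaks plain SOAP 1.1, the flavour most
        // WS-I Basic Profile services expect. The URL is a placeholder.
        var factory = new ChannelFactory<IStockQuoteService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://example.com/StockQuoteService.svc"));

        IStockQuoteService proxy = factory.CreateChannel();
        Console.WriteLine("MSFT: {0}", proxy.GetQuote("MSFT"));

        factory.Close();
    }
}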

That said, I am sure it is true that only a small subset of the WS-* specifications are widely used, which implies a large amount of wasted effort.

Are SOAP and WS-* dying, and is REST the future? The evidence points that way to me, but I would be interested in other opinions.

The Beatles come to Apple iTunes

Apple made an extraordinary fuss about the arrival of Beatles music on its iTunes download store – even allowing the news to take over its home page for a day or two.

Why? I can think of a few reasons. Because Steve Jobs was born in 1955 and this is the music of his teen years. Because it is the finale in a long battle between Apple Computer and Apple Corps Ltd. And because the Beatles are arguably the pinnacle of popular music, regularly topping lists like the Rolling Stone 500 Greatest Albums of All Time. In fact, Beatles albums occupy four of the top ten slots.

It follows that the Beatles coming to iTunes is a landmark moment for Apple (computer) and shows the extent to which it now dominates music delivery.

That said, some observers were bewildered. Beatles fans already have the music and have ripped their CDs to music servers and iPods, so iTunes availability will make no difference to them; and people born a decade or two later than Steve Jobs mostly do not revere the band in the same way.

Speaking personally, those four albums are not in my top ten all-time favourites, good though they are, and I am more likely to put on Lennon’s cathartic Plastic Ono Band album than Sergeant Pepper.

I also wonder how long iTunes can sustain its position. To my mind, the streaming model of Spotify, where you pay a subscription and can listen to anything you want, makes more sense than the download model of iTunes.

But you want to own the music? Well, you cannot; even a CD or LP only sells you a licence. An iTunes purchase is more ephemeral than a CD, because it is a personal licence with no resale value, and comes with no physical container that you can put on the shelf. It is also, in the case of the Beatles albums and many others, more expensive to buy the iTunes download than the CD, so you are paying a premium for the convenience of near-instant digital delivery.

It follows that iTunes offers rather poor value in an absolute sense. It is best to think of it as a service; and Apple does a nice job of making music easy to find and enjoy.

Final note: even if you have no interest in buying, it is worth running up iTunes and playing the hitherto unavailable video Live at the Washington Coliseum, 1964, which you can stream for free for an introductory period.

Now you can rent GPU computing from Amazon

I wrote back in September about why programming the GPU is going mainstream. That’s even more the case today, with Amazon’s announcement of a Cluster GPU instance for the Elastic Compute Cloud. It is also a vote of confidence for NVIDIA’s CUDA architecture. Each Cluster GPU instance has two NVIDIA Tesla M2050 GPUs installed and costs $2.10 per hour. If one GPU instance is not enough, you can use up to 8 by default, with more available on request.

GPU programming in the cloud makes sense in cases where you need the performance of a supercomputer, but only occasionally. It could also enable some powerful mobile applications, perhaps in financial analysis or image manipulation, where you use a mobile device to input data and view the results, while the cloud does the heavy lifting.

One of the ideas I discussed with someone from Adobe at the NVIDIA GPU conference was to integrate a cloud processing service with Photoshop, so you could send an image to the cloud, have some transformative magic done, and receive the processed image back.

The snag with this approach is that in many cases you have to shift a lot of data back and forth, which means you need a lot of bandwidth available before it makes sense. Still, Amazon has now provided the infrastructure to make processing as a service easy to offer. It is now over to the rest of us to find interesting ways to use it.