Tag Archives: windows

Silverlight 4.0 released to the web; tools still not final

Microsoft released the Silverlight 4.0 runtime yesterday. Developers can also download the Silverlight 4 Tools; but they are not yet done:

Note that this is a second Release Candidate (RC2) for the tools; the final release will be announced in the coming weeks.

Although it is not stated explicitly, I assume it is fine to use these tools for production work.

Another product needed for Silverlight development but still not final is Expression Blend 4.0. This is the designer-focused IDE for Silverlight and Windows Presentation Foundation. Microsoft has made the release candidate available, but it looks as if the final version will be even later than that for Silverlight 4 Tools.

Disappointing in the context of the launch of Visual Studio 2010; but bear in mind that Silverlight has been developed remarkably fast overall. There are huge new features in version 4, which was first announced at the PDC last November; and that followed only a few months after the release of version 3 last summer.

Why all this energy behind Silverlight? It’s partly Adobe Flash catch-up, I guess, with Silverlight 4 competing more closely with Adobe AIR; and partly a realisation that Silverlight can be the unifying technology that brings together web and client, mobile and desktop for Microsoft. It’s a patchy story of course – not only is the appearance of Silverlight on Apple iPhone or iPad vanishingly unlikely, but more worrying for Microsoft, I hear few people even asking for it.

Even so, Silverlight 4.0 plus Visual Studio 2010 is a capable platform; it will be interesting to see how well it is taken up by developers. If version 4.0 is still not enough to drive mainstream adoption, then I doubt whether any version will do it.

That also raises the question: how can we measure Silverlight take-up? The riastats charts tell us about browser deployment, but while that is important, it only tells us how many have hit some Silverlight content and allowed the plug-in to install. I look at things like activity in the Silverlight forums:

Our forums have 217,426 threads and 247,562 posts, contributed by 77,034 members from around the world. In the past day, we had 108 new threads, 529 new posts, and 70 new users.

it says currently – substantial, but not yet indicative of a major platform shift. Or job stats – 309 UK vacancies right now, according to itjobswatch, putting it behind WPF at 662 vacancies and Adobe Flash at 740. C# on the other hand has 5349; Java 6023.

Microsoft maybe gets the cloud – maybe too late

Microsoft CEO Steve Ballmer gave a talk on the company’s cloud strategy at the University of Washington yesterday. Although a small event, the webcast was widely publicised and coincides with a leaked internal memo on “how cloud computing will change the way people and businesses use technology”, a new Cloud website, and a Cloud Computing press portal, so it is fair to assume that this represents a significant strategy shift.

According to Ballmer:

about 70 percent of our folks are doing things that are entirely cloud-based, or cloud inspired. And by a year from now that will be 90 percent

I watched the webcast, and it struck me as significant that Ballmer kicked off with a vox pop video where various passers-by were asked what they thought about cloud computing. Naturally they had no idea, the implication being, I suppose, that the cloud is some new thing that most people are not yet aware of. Ballmer did not spell out why Microsoft made the video, but I suspect he was trying to reassure himself and others that his company is not too late.

I thought the vox pop was misconceived. Cloud computing is a technical concept. What if you did a vox pop on the graphical user interface? Or concurrency? Or Unix? Or SQL? You would get equally baffled responses.

It was an interesting contrast with Google’s Eric Schmidt who gave a talk at last month’s Mobile World Congress that was also a big strategy talk; I posted about it here. Schmidt takes the cloud for granted. He does not treat it as the next big thing, but as something that is already here. His talk was both inspiring and chilling. It was inspiring in the sense of what is now possible – for example, that you can go into a restaurant, point your mobile at a foreign-language menu, and get back an instant translation, thanks to Google’s ability to mine its database of human activity. It was chilling with its implications for privacy and Schmidt’s seeming disregard for them.

Ballmer on the other hand is focused on how to transition a company whose business is primarily desktop operating systems and software to one that can prosper in the cloud era:

If you think about where we grew up, other than Windows, we grew up with this product called Microsoft Office. And it’s all about expressing yourself. It’s e-mail, it’s Word, it’s PowerPoint. It’s expression, and interaction, and collaboration. And so really taking Microsoft Office to the cloud, letting it run in the cloud, letting it run from the cloud, helping it let people connect and communicate, and express themselves. That’s one of the core kind of technical ambitions behind the next release of our Office product, which you’ll see coming to market this June.

Really? That’s not my impression of Office 2010. It’s the same old desktop suite, with a dollop of new features and a heavily cut-down online version called Office Web Apps. The problem is not only that Office Web Apps is designed to keep you dependent on offline Office. The problem is that the whole model is wrong. The business model is still based on the three-year upgrade cycle. The real transition comes when the Web Apps are the main version, to which we subscribe, which get constant incremental updates and have an API that lets them participate in mash-ups across the internet.

That said, there are parallels between Ballmer’s talk and that of Schmidt. Ballmer spoke of 5 dimensions:

  • The cloud creates opportunities and responsibilities
  • The cloud learns and helps you learn, decide and take action
  • The cloud enhances your social and professional interactions
  • The cloud wants smarter devices
  • The cloud drives server advances

In the most general sense, those are similar themes. I can even believe that Ballmer, and by implication Microsoft, now realises the necessity of a deep transition, not just adding a few features to Office and Windows. I am not sure though that it is possible for Microsoft as we know it, which is based on Windows, Office and Partners.

Someone asks if Microsoft is just reacting to others. Ballmer says:

You know, if I take a look and say, hey, look, where am I proud of where we are relative to other guys, I’d point to Azure. I think Azure is very different than anything else on the market. I don’t think anybody else is trying to redefine the programming model. I think Amazon has done a nice job of helping you take the server-based programming model, the programming model of yesterday that is not scale agnostic, and then bringing it into the cloud. They’ve done a great job; I give them credit for that. On the other hand, what we’re trying to do with Azure is let you write a different kind of application, and I think we’re more forward-looking in our design point than on a lot of things that we’re doing, and at least right now I don’t see the other guy out there who’s doing the equivalent.

Sorry, I don’t buy this either. Azure does have distinct advantages, mainly to do with porting your existing ASP.NET application and integrating with existing Windows infrastructure. I don’t believe it is “scale agnostic”; something like Google App Engine is better in that respect. With Azure you have to think about how many virtual machines you want to purchase. Nor do I think Azure lets you write “a different kind of application.” There is too little multi-tenancy, too much of the old Windows server model remains in Azure.

Finally, I am surprised how poor Microsoft has become at articulating its message. Azure was badly presented at last year’s PDC, which Ballmer did not attend. It is not an attractive platform for small-scale developers, which makes it hard to get started.

Google Chrome usage growing fast; Apple ahead on mobile web

Looking at my browser stats for February, one thing stands out: Google Chrome. The top five browsers are these:

  1. Internet Explorer 40.5%
  2. Firefox 34.1%
  3. Chrome 10.5%
  4. Safari 4.3%
  5. Opera 2.9%

Chrome usage has more than doubled in six months, on this site.

I don’t pretend this is representative of the web as a whole, though I suspect it is a good leading indicator because of the relatively technical readership. Note that although I post a lot about Microsoft, IE usage here is below that on the web as a whole. Here are the figures from NetMarketShare for February:

  1. Internet Explorer 61.58%
  2. Firefox 24.23%
  3. Chrome 5.61%
  4. Safari 4.45%
  5. Opera 2.35%

and from StatCounter:

  1. Internet Explorer 54.81%
  2. Firefox 31.29%
  3. Chrome 6.88%
  4. Safari 4.16%
  5. Opera 1.94%
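The disagreement between the two trackers is easy to quantify; here is a throwaway Python sketch, with the numbers copied straight from the two lists above (both trackers report percentages of page hits, not of users):

```python
# February desktop figures as quoted above, per tracker.
netmarketshare = {"IE": 61.58, "Firefox": 24.23, "Chrome": 5.61,
                  "Safari": 4.45, "Opera": 2.35}
statcounter = {"IE": 54.81, "Firefox": 31.29, "Chrome": 6.88,
               "Safari": 4.16, "Opera": 1.94}

# Positive means NetMarketShare reports the higher share.
for browser in netmarketshare:
    gap = netmarketshare[browser] - statcounter[browser]
    print(f"{browser:8} {gap:+.2f} points")
```

The IE and Firefox figures differ by around seven points between the two trackers, which is why I say distrust both; but the Chrome, Safari and Opera figures agree to within about a point.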

There are sizeable variations (so distrust both), but similar trends: gradual decline for IE, Firefox growing slightly, Chrome growing dramatically. Safari I suspect tracks Mac usage closely, a little below because some Mac users use Firefox. Mobile is interesting too, here’s StatCounter:

  1. Opera 24.26%
  2. iPhone 22.5%
  3. Nokia 16.8%
  4. Blackberry 11.29%
  5. iTouch 10.87%
  6. Android 6.27%

Note that iPhone/iTouch would be top if combined. Note also the complete absence of IE: either Windows Mobile users don’t browse the web, or they use Opera to do so.

I’m most interested in how Chrome usage is gathering pace. There are implications for web applications, since Chrome has an exceptionally fast JavaScript engine. Firefox is fast too, but on my latest quick SunSpider test, Firefox 3.6 scored 998.2ms vs Chrome 4.0’s 588.4ms (lower is better). IE 8.0 is miserably slow on this of course; just for the record, 5075.2ms.
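Since SunSpider reports a total time in milliseconds, lower is better, and dividing one score by another gives a rough relative speed. A quick sketch using the figures from my test run above:

```python
# SunSpider totals from the test run quoted above, in milliseconds.
scores_ms = {"Chrome 4.0": 588.4, "Firefox 3.6": 998.2, "IE 8.0": 5075.2}

# Express each score as a multiple of Chrome's time.
baseline = scores_ms["Chrome 4.0"]
for browser, ms in scores_ms.items():
    print(f"{browser:12} {ms:7.1f}ms  ({ms / baseline:.1f}x Chrome's time)")
```

On these figures Firefox takes about 1.7 times as long as Chrome, and IE 8 about 8.6 times as long.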

Why are people switching to Chrome? I’d suggest the following. First, it is quick and easy to install, and installs into the user’s home directory on Windows so does not require local administrative rights. Second, it starts in a blink, contributing to a positive impression. Third, Google is now promoting it vigorously – I frequently see it advertised. Finally, users just like it; it works as advertised, and generally does so quickly.

Windows trackpad annoyances: disappearing pointer, auto clicking

Today the mouse pointer on my Toshiba laptop running Windows 7 disappeared. I could see that the trackpad or mouse (it made no difference if I plugged in a USB mouse) was working, because I could see mouseover effects as I moved round the screen, but the actual pointer was not visible.

The immediate workaround is to go to Control Panel, search for Mouse, and click Make it easier to see the mouse pointer. Tab to Display Pointer Trails and press the spacebar. This lets you see the mouse at least while it is moving. It disappears again when stationary.

It’s definitely an improvement; but not a complete fix. What is? Well, rebooting; but the problem may recur. Things that might help: tweaking display settings, updating the video driver, avoiding hibernation. If anyone has a definitive fix, I’d love to hear.

While I’m on the subject, here’s another constant annoyance. Every new laptop install of Windows 7 that I’ve seen has a feature called tapping enabled. This converts taps on your touchpad into mouse clicks. It is the first thing I turn off. The reason it drives me crazy is that it always detects unintended taps. The consequences are severe: buttons appear to click themselves, dialogs close unexpectedly, work can even be lost.

Worse still, it is not an easy setting to find. First, you must have the proper driver installed – usually it’s a Synaptics driver, though downloaded from your laptop vendor’s site. Second, you have to go to Control Panel, Mouse, Change Mouse Settings, Advanced tab, click Advanced Feature Settings, then click Settings under Detailed Settings for Touch Pad operations, then uncheck Enable Tapping. At least, that’s how it is on mine; the path may vary slightly on others.

This is a setting that should be off by default; and a setting that should be easy to find, not buried under obscure labels like “Advanced”.

I have lost count of the number of people who have been delighted when I’ve showed them how to disable this feature. “Thank you; I wondered why it kept clicking by itself.”

Fragmentation and the RIA wars: Flash is the least bad solution

The latest salvo in the Adobe Flash wars comes from the Free Software Foundation, in an open letter to Google:

Just think what you can achieve by releasing the VP8 codec under an irrevocable royalty-free license and pushing it out to users on YouTube? You can end the web’s dependence on patent-encumbered video formats and proprietary software (Flash) … Apple has had the mettle to ditch Flash on the iPhone and the iPad – albeit for suspect reasons and using abhorrent methods (DRM) – and this has pushed web developers to make Flash-free alternatives of their pages. You could do the same with YouTube, for better reasons, and it would be a death-blow to Flash’s dominance in web video.

Fair point; but one thing the FSF misses is that Apple’s stance has not only “pushed web developers to make Flash-free alternatives of their pages”. It has also pushed developers into making Apple-specific apps as an alternative to web pages – which to my mind is unfortunate.

The problem goes beyond web pages. If you have an application that goes beyond HTML and JavaScript, maybe for offline use or to integrate with other local applications or hardware, there is no cross-platform solution for the iPhone, iTouch or forthcoming iPad.

While I understand that non-proprietary platforms are preferable to proprietary platforms, it seems to me that a free cross-platform runtime is less evil than a vendor-controlled platform where I have to seek approval and share income with the vendor just to get my app installed.

More broadly, it is obvious that the days of Windows on the desktop, Web for everything else are over. We are seeing a proliferation of devices, each with its own SDK: alongside Apple there is Palm WebOS, Nokia/Intel Meego, Google Android, and when Windows Phone 7 comes along, Microsoft Silverlight.

The question: if you have an application and want to reach all these platforms, what do you do? A web app if possible; but otherwise?

It is the new fragmentation; and frankly, Adobe Flash is the closest thing we have to a solution, particularly with the native compilation option for iPhone that is coming in Creative Suite 5.

I don’t like the idea of a single company owning the runtime that unifies all these platforms. That’s not healthy. Still, at least Adobe is currently independent of the obvious industry giants: Google, Apple, Microsoft, IBM and so on.

Dealing a death-blow to Flash is all very well, but the end result could be something worse.

Should IT administration be less annoying?

I am more a developer than an IT administrator but sometimes find myself doing (and writing about) admin-type tasks. I am usually under time pressure and I find myself increasingly irritated by annoyances that take up precious time.

It seems to me that there is a hidden assumption in IT, that usability is all-important when it comes to end users, but that the admin can tolerate any amount of complexity and obscurity, provided that the end result is happy users with applications that work. The analogy I suppose is something like that of a motor car with an engineer who gets hands grubby under the bonnet, and a driver who settles back in a comfortable seat and uses only clean, smooth and simple controls to operate the vehicle.

That said, any engineer will tell you that some vehicles are easier to work on than others, and some documentation (whether paper or electronic) more precise and helpful than others. No engineer minds getting oil on their hands, but wasting time because the service manual did not mention that you have to loosen the widget before you can remove the doodah is guaranteed to annoy.

A little detail that I’ve been pondering is the Internet Explorer Enhanced Security Configuration found in server versions of the Windows operating system. This is a specially locked-down configuration of IE that is designed to save you from getting malware onto your server.

That’s a worthy goal; and another good principle is not to browse the web at all on a server. Still, as we all know, the first thing you have to do on a Windows server is to install patches and drivers, some of which are not available on Windows Update. In addition, not all servers are mission-critical; I find myself setting them up and tearing them down on a regular basis for trying out new software. It may therefore happen that you open up IE to grab a patch from somewhere; and it is a frustrating experience. JavaScript does not work; files do not download. The usual solution is to add the target site to Trusted Sites – thereby giving the site more trust than it really needs. The sequence goes something like this:

1. Browse to vendor’s site to find driver.

2. Notice nothing works; click Tools – Internet Options – Security, Trusted Sites, then the Sites button.

3. Click Add, forgetting to uncheck the box that says “Require server verification (https:)”.

4. Get a dialog complaining that sites in this zone must use the https:// prefix.

5. Wonder briefly why IE did not spot that you are adding a site with an http: prefix before rather than after you clicked Add.

6. Uncheck the box, repeat the Add, go back to IE, refresh page to make scripts etc run and likely lose your progress through the site.

7. Find that the site now redirects to ftp://vendorsite.com and you have to repeat the process.

A minor issue of course; but if this is a sequence you have gone through a few times you will agree that it is annoying and not really thought through. Perhaps it is to do with Windows server having a GUI that it does not really need; on Linux or even Server Core you would use the fine wget utility, having found the URL of the file you need using the browser that you have running alongside your terminal window.
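For that matter, anything with a Python interpreter to hand can do the same one-shot fetch in a few lines, with no browser zones or admin rights involved. A sketch using only the standard library; the URL here is obviously a placeholder:

```python
import shutil
import urllib.request

def fetch(url: str, dest: str) -> None:
    """wget-style one-shot download: stream the response straight to disk."""
    with urllib.request.urlopen(url) as response, open(dest, "wb") as out:
        # copyfileobj avoids buffering the whole file in memory.
        shutil.copyfileobj(response, out)

# e.g. fetch("http://example.com/driver.exe", "driver.exe")
```

Not that a Python interpreter is any more likely than wget to be on a freshly installed Windows server, of course; the point is how little it takes once you step outside the browser.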

I also realise there are many ways round it, ranging from something to do with laptops and USB pen drives, to installing Google Chrome which only takes a few clicks, does not require admin rights, and happily downloads anything.

What prompts this little rant is not actually IE Enhanced Security Configuration, which is a familiar enemy, but a day spent figuring out the subtleties of Microsoft’s App-V: brilliant in concept, but not the easiest thing to set up, thanks to verbose but unhelpful documentation, a dependency on SQL Server being set up in the right way (which is not clearly spelt out), and lack of support for Windows x64 clients except in the beta of App-V 4.6, which is available from a Microsoft Connect URL that in fact reports non-availability.

At times like this, the system seems downright hostile. Of course this does not matter, because administrators are trained to do this, and don’t mind provided that the users are happy in the end.

But I don’t actually believe that. In Windows 7 Microsoft deliberately targeted the things that annoy users because, under pressure from Apple, it figured out that this was necessary in order to compete. The result is an OS that users generally like much better. The things that annoy admins are different, but equally affect how much they enjoy their work; and effort in this area is equally worthwhile though less visible to end-users.

In fairness, initiatives like the web platform installer show that in some areas at least, Microsoft has learned this lesson. There is, however, plenty still to do, especially in these somewhat neglected areas like App-V.

My final reflection: when Microsoft came out with Windows NT Server back in 1993 I expect that being easier to use than Unix was one of the goals. Perhaps it was, then; but Windows soon developed its own foibles that were as bad or worse.

Adobe Flash getting faster on the Mac

According to Adobe CTO Kevin Lynch:

Flash Player on Windows has historically been faster than the Mac, and it is for the most part the same code running in Flash for each operating system. We have and continue to invest significant effort to make Mac OS optimizations to close this gap, and Apple has been helpful in working with us on this. Vector graphics rendering in Flash Player 10 now runs almost exactly the same in terms of CPU usage across Mac and Windows, which is due to this work. In Flash Player 10.1 we are moving to CoreAnimation, which will further reduce CPU usage and we believe will get us to the point where Mac will be faster than Windows for graphics rendering.

Video rendering is an area we are focusing more attention on — for example, today a 480p video on a 1.8 Ghz Mac Mini in Safari uses about 34% of CPU on Mac versus 16% on Windows (running in BootCamp on same hardware). With Flash Player 10.1, we are optimizing video rendering further on the Mac and expect to reduce CPU usage by half, bringing Mac and Windows closer to parity for video.

Also, there are variations depending on the browser as well as the OS — for example, on Windows, IE8 is able to run Flash about 20% faster than Firefox.

Many of us are not aware of these kinds of differences, because we live in one browser on one operating system, but the non-uniform performance of Flash helps to explain divergent opinions of its merits.

I would be interested to see a similar comparison for Linux, which I suspect would show significantly worse performance than on Windows or Mac.

The mystery of the slow Exchange 2007: when hard-coded values come back to haunt you

Following a migration from Microsoft Small Business Server 2003 to SBS 2008, users were complaining that Exchange was slower than before in some scenarios. How could this be? The new machine had 64-bit goodness and far more RAM than before.

I checked out the machine’s performance and noticed something odd. Store.exe, the Exchange database, usually grabs vast amounts of RAM, but in this case it was using surprisingly little, around 640MB. Could this be related to the performance issue?

I speculated that Exchange memory usage was being limited in some way, so I looked up where such a limit is set and found this article. I ran ADSI Edit and there it was: a 640MB limit (or thereabouts), set in msExchESEParamCacheSizeMax.

I removed the limit, restarted Exchange 2007, and it immediately said “thank you very much” and grabbed 8GB instead.

Why did this setting exist? No doubt because back in the days of SBS 2003 and a much less powerful 32-bit machine, someone set it in order to prevent store.exe from crippling the box. It is another example of why Small Business Server is harder to manage than full server setups, where Exchange invariably has a dedicated server (or several).
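A further wrinkle, as I understand it: the attribute stores a count of ESE database pages rather than a byte value, and the page size differs between Exchange versions, so the same stored number means a different cache ceiling after a migration. A sketch of the arithmetic; the page sizes (4KB for Exchange 2003, 8KB for Exchange 2007) and the example page count are my assumptions for illustration:

```python
def cache_limit_bytes(pages: int, page_size_kb: int) -> int:
    """Byte ceiling implied by an ESE cache limit expressed as a page count."""
    return pages * page_size_kb * 1024

# A hypothetical value: the sort of page count that yields ~640MB
# on a 4KB-page store.
pages = 163840

for version, page_kb in [("Exchange 2003", 4), ("Exchange 2007", 8)]:
    mb = cache_limit_bytes(pages, page_kb) / (1024 * 1024)
    print(f"{version}: {pages} pages at {page_kb}KB -> {mb:.0f}MB ceiling")
```

Which is one more reason a percentage of physical RAM would have been a saner way to express the limit than an absolute page count.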

SBS 2008 cannot be installed as an in-place upgrade; but the official migration process does preserve Active Directory; and since that is where this value lives, and since it is not specific to any version of Exchange, it was dutifully transferred.

Why wasn’t the setting discovered and changed before? Well, you will observe that it is somewhat hidden. The main chances of finding it would be either if you were deeply schooled in the ways of Exchange, or if one of the Best Practices Analyzer (BPA) tools picked it up, or if the users screamed that Exchange was slow (which is what happened) and you figured out what was wrong.

The SBS BPA did not notice it. The Exchange BPA did, kind-of. It was not shown as a critical problem, but listed for information under “Non-Default Settings”, ironically with a tick beside it, as “Maximum ESE cache size changed”. Summoning help on this setting leads to this article which refers to Exchange 2000.

An admin failure, yes, but arguably also a defect in Exchange and SBS. Typical Microsoft: critical setting, hard-coded when it would make more sense to use a percentage value, not checked by setup and persistent across major upgrades of Exchange, deeply buried in Active Directory.

Mentioned here just in case it saves someone time when trying to figure out why their shiny 64-bit Exchange 2007 is running worse than 32-bit Exchange 2003 ever did.

Silverlight 4 with COM can do anything – on Windows

At PDC Microsoft played down the significance of adding COM support to Silverlight 4 when run out of the browser and fully trusted (you can also be out of the browser and not fully trusted). The demos were of Office automation, and journalists were told that the feature was there to satisfy the requests of a few Enterprise customers.

Now former Microsoft Silverlight program manager Justin Angel, who has implemented his blog in Silverlight, has spelt out what we all knew, that Silverlight with COM support can do just about anything. His richly-illustrated blog post has code examples for:

  • reading and writing to any file (subject I guess to the permissions of the current user)
  • executing any command or file
  • emulating user input with WShell.SendKeys
  • pinning files to the Windows 7 taskbar
  • reading any registry values
  • adding an application to the Windows startup folder
  • doing text to speech using Windows built-in engine
  • accessing local databases with ODBC
  • automating scanners and cameras
  • using the Windows 7 location API, accessing the full .NET Framework
  • and of course … automating Microsoft Office.

Well, fully trusted means fully trusted; and these are great features for powerful, though Windows-only, Silverlight applications. I just hope no user installs and trusts one of these applets thinking it is “only Silverlight” and can’t do much harm.

The post also has comments on the lack of any equivalent feature for the Mac in Silverlight 4:       

If Microsoft chooses to not go ahead with Mac support in Silverlight 4 RTM, well, it’s not because they couldn’t

says Angel, suggesting that it would be easy to add AppleScript support. (I had to type that quote – no clipboard support in Silverlight 3).

Of course there is time for Microsoft to unveil such a feature, say at Mix10 in March, though I wouldn’t count on it.

New HP and Microsoft agreement commits $50 million less than similar 2006 deal

I’ve held back comment on the much-hyped HP and Microsoft three-year deal announced on Wednesday mainly because I’ve been uncertain of its significance, if any. It didn’t help that the press release was particularly opaque, full of words with many syllables but little meaning. I received the release minutes before the conference call, during which most of us were asking the same thing: how is this any different from what HP and Microsoft have always done?

It’s fun to compare and contrast with this HP and Microsoft release from December 2006 – three years ago:

We’ve agreed to a three-year, US$300 million investment between our two companies, and a very aggressive go-to-market program on top of that. What you’ll see us do is bring these solutions to the marketplace in a very aggressive way, and go after our customers with something that we think is quite unique in what it can do to change the way people work.

$300 million for three years in 2006; $250 million for three years in 2010. Hmm, not exactly the breakthrough partnership it has been billed as. Look here for what the press release should have said: it’s mainly common-sense cooperation and joint marketing.

Still, I did have a question for CEOs Mark Hurd and Steve Ballmer which was what level of cloud focus was in this new partnership, drawing these remarks from Ballmer:

The fact that our two companies are very directed at the cloud is the driving force behind this deal at this time. The cloud really means a modern architecture for how you build and deploy applications. If you build and deploy them to our service that we operate that’s called Windows Azure. If a customer deploys them inside their own data centre or some other hosted environment, they need a stack on which to build, hardware software and services, that instances the same application model that we’ll have on Windows Azure. I think of it as the private cloud version of Windows Azure.

That thing is going to be an integrated stack from the hardware, the virtualization layer, the management layer and the app model. It’s on that that we are focusing the technical collaboration here … we at Microsoft need to evangelize that same application model whether you choose to host in the cloud or on your own premises. So in a sense this is entirely cloud motivated.

Hurd added his insistence that this is not just more of the same:

I would not want you to write that it sounds a lot like what Microsoft and HP have been talking about for years. This is the deepest level of collaboration and integration and technical work we’ve done that I’m aware of … it’s a different thing than what you’ve seen before. I guarantee Steve and I would not be on this phone call if this was just another press release from HP and Microsoft.

Well, you be the judge.

I did think Ballmer’s answer was interesting though, in that it shows how much Microsoft (and no doubt HP) are pinning their hopes on the private cloud concept. The term “private cloud” is a dubious one, in that some of the defining characteristics of cloud – exporting your infrastructure, multi-tenancy, shifting the maintenance burden to a third-party – are simply not delivered by a private cloud. That said, in a large organisation they might look similar to most users.

I can’t shake off the thought that since HP wants to carry on selling us servers, and Microsoft wants to carry on selling us licences for Windows and Office, the two are engaged in disguised cloud avoidance. Take Office Web Apps in Office 2010 for example: good enough to claim the online document editing feature; bad enough to keep us using locally installed Office.

That will not work long-term and we will see increasing emphasis on Microsoft’s hosted offerings, which means HP will sell fewer servers. Maybe that’s why the new deal is for a few dollars less than the old one.