All posts by onlyconnect

New in Windows 7 RC: Windows XP Mode, Remote Media Streaming

A new feature in Windows 7 has been announced as part of the Release Candidate rollout. Called XP Mode (XPM), it lets users run applications in a virtual instance of Windows XP itself, for excellent compatibility. Although not part of the retail Windows 7 package, XPM will be available as a free download, or PC vendors may pre-install it at no extra cost.

The neat aspect of this is that XP applications don’t have to run within an XP desktop, but can be published to the host system. What this means is that users can start an XP application from the Windows 7 desktop, and only see the application window. This is more user-friendly than having to cope with two operating systems at once.

The main advantage is compatibility. Since this really is XP, pretty much anything that works on XP should run correctly. That said, since the hardware is virtualized there could be issues with some devices, or with applications that require accelerated graphics.

Another aspect is security. For example, if you have some applications that do not work properly with UAC (User Account Control) enabled, you can run them in XP Mode rather than compromising the security of the entire system.

It is a clever move from Microsoft, since it will remove most compatibility concerns that could otherwise impede adoption.

Another interesting new feature is Remote Media Streaming:

Windows 7 offers new functionality called Remote Media Streaming that enables you to access your home-based digital media libraries over the Internet from another Windows 7-based computer outside the home. Simply associate two or more computers running Windows 7 with your online ID provider credentials (such as your Windows Live™ email address and password) and allow Internet access to your media.

says the press release. This feature extends to any PC in your home network, so if you have a fast enough connection you need never be parted from your music. Then again, you could just run Spotify. There’s also support for MOV files in Windows Media Player.

There’s a few more detail changes in the UI; I’ll report further when I’ve had a look.

Windows 7 RC will be released to TechNet and MSDN subscribers on April 30th, and made generally available on May 5th.

Microsoft’s quarterly results: will it ever make sense of the cloud?

Most comments on Microsoft’s quarterly results are understandably focused on the overall picture: a quarterly revenue decline for the first time ever.

Revenue decline can be forgiven during a recession, but it’s more interesting to look at the breakdown. I made a simple quarter-on-quarter table to look at the pattern:

Quarter ending March 31st 2009 vs quarter ending March 31st 2008, $millions

Division                    Revenue   % change   Profit   % change
Client (Windows)               3404     -15.60     2514     -19.29
Server and Tools               3467      +7.07     1344     +24.44
Online                          721     +14.47     -575    -154.42
Business (Office)              4505      -4.78     2877      -7.99
Entertainment and Devices      1567      -1.57      -31    -129.25

The weak Windows client figures are unsurprising. The poorly-received Windows Vista is out in the market, and the highly-praised Windows 7 is being prepared for release. When anyone asks me, I suggest that they should wait for Windows 7 before buying a new PC or laptop, if they are in a position to delay.

The Business division (Office) remains massively profitable, even though it too has declined a little. Office may be ludicrously expensive, but there’s little evidence of a significant shift to cheaper or free alternatives.

It’s also notable that the server and tools business continues to perform well. Again, I’m not surprised: Server 2008 strikes me as a solid product, and there’s not much wrong with products like SQL Server 2008 and Visual Studio.

Not much to say about entertainment and devices. Xbox is doing so-so; Windows Mobile is rather a mess.

The real shocker here is the online business. Revenue is down and losses have grown. It is no use just blaming the recession: this is a sector that is growing in importance. Should Microsoft back out and leave it to Google? That would be as if Kodak had refused to invest in digital photography. But something is badly wrong here.

That said, I’m guessing that the figures mostly represent the failure of the various Windows Live properties to attract advertising income; the small market share of Live Search must be an important factor. The newer cloud computing business model, where Microsoft sells subscriptions to its online platform and services, is largely still in beta – I’m thinking of things like Windows Azure and Live Mesh. Further, I’m not sure where Microsoft puts revenue from things like hosted Exchange or hosted Dynamics CRM, which straddle server and online. There is still time for the company to get this right.

I’m not convinced though that Microsoft yet has the will or the direction to make sense of its online business. Evidence: the way the company blows hot and cold about Live Mesh; the way SQL Server Data Services was scrapped and replaced by full online SQL Server at short notice; and the ugly and confusing web site devoted to Windows Azure.

When I looked at Virtual Earth recently I was impressed by its high quality and ease of development. It illustrates the point that within Microsoft there are teams which are creating excellent online services. Others are less strong; but what is really lacking is the ability to meld everything together into a compelling online platform.

That could change at any time; but we’ve been waiting a long while already.

Ubuntu 9.04 not so jaunty

I still love Ubuntu, but it's hard to find much to enthuse about in the latest release, 9.04, also known as Jaunty Jackalope. As this post observes, most of the changes are under the hood, so users will not notice much difference from the previous release, Intrepid Ibex (8.10). Well, there's faster start-up, and OpenOffice.org 3.0 – but then again, I installed OpenOffice.org 3.0 as soon as Intrepid came out, so this is not really exciting.

My own upgrade went better than the last one, but I’ve still had problems. Specifically:

  • I had to edit Grub's menu.lst manually after the upgrade. I always have to do this, since the upgrade detects the hard drive configuration incorrectly; see the sketch after this list.
  • My Adobe AIR installation was broken and had to be re-installed.
  • I've lost hardware graphics acceleration and desktop effects. This is a laptop with embedded Intel graphics; apparently this is a common problem, and Intel graphics support in Jaunty is work in progress. See here for more details and an experimental suggested fix, which is not for the faint-hearted.

There are other updates, of course, and I was glad to see Mono 2.0.1 and MonoDevelop 2.0 available in the repository, for .NET development on Linux. If Jaunty is the same as before, but faster and more stable, that is no bad thing, though the shaky Intel graphics support undermines that argument.

My question: why is Canonical persevering with its policy of supposedly major releases every six months? This looks to me like a minor update; would it not be better presented as updates to 8.10, with efforts focused on 9.10 in October? Six-monthly releases must be a heavy burden for the team.

I don’t mean to put you off Ubuntu. It is well worth trying either as a companion or alternative to Windows and Mac.

Update:

I have fixed my desktop effects. How? First, a little more about the problem. DRI (Direct Rendering Infrastructure) was not enabled. My graphics card (from lspci -nn | grep VGA) is:

Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller [8086:27a2] (rev 03)

The problem I had before was reported in Xorg.0.log as:

(EE) intel(0): [dri] DRIScreenInit failed. Disabling DRI.

I also noticed that /dev/dri/card0 did not exist on my system.

Well, I tried the technique described here. That is, I booted into an older version of the kernel, the oldest available on my system being 2.6.22-14. DRI magically started working. Then I rebooted into the latest version of the kernel, 2.6.28-11. DRI still works. So I am sorted. I'd be interested to know why this works.
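
One way to verify the fix, rather than trusting the desktop effects dialog, is the following check (assuming the mesa-utils package is installed), which should now report "direct rendering: Yes":

glxinfo | grep "direct rendering"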

Parallel Programming: five reasons for caution. Reflections from Intel’s Parallel Studio briefing.

I’m just back from an Intel software conference in Salzburg where the main topic was Parallel Studio, a new suite which adds Intel’s C/C++ compiler, debugging and profiling tools into Visual Studio. To some extent these are updates to existing tools like Thread Checker and VTune, though there are new features such as memory checking in Parallel Inspector (the equivalent to Thread Checker) and a new user interface for Parallel Amplifier (the equivalent to VTune). The third tool in the suite, Parallel Composer, is comprised of the compiler and libraries including Threading Building Blocks and Intel Integrated Performance Primitives.

It is a little confusing. Mostly Parallel Studio replaces the earlier products for Windows developers using Visual Studio; though we were told that there are some advanced features in products like VTune that mean you might want to stick with them, or use both.

Intel’s fundamental point is that there is no point in having multi-core PCs if the applications we run are unable to take advantage of them. Put another way, you can get remarkable performance gains by converting appropriate routines to use multiple threads, ideally as many threads as there are cores.

James Reinders, Intel's Chief Evangelist for software products, introduced the products and explained their rationale. He is always worth listening to, and did a good job of summarising the "free lunch is over" argument, and explaining Intel's solution.

That said, there are a few caveats. Here are five reasons why adding parallelism to your code might not be a good idea:

1. Is it a problem worth solving? Users only care about performance improvements that they notice. If you have a financial analysis application that takes a while to number-crunch its data, then going parallel is a big win. If your application is a classic database forms client, it is probably a waste of time from a performance perspective. You care much more about how well your database server is exploiting multiple threads on the server, because that is likely to be the bottleneck.

There is another reason to do background processing, and that is to keep the user interface responsive. This matters a lot to users. Intel said little about this aspect; Reinders told me it is categorised as convenience parallelism. Nevertheless, it is something you probably should be doing, but it requires a different approach from parallelising for performance.
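
As a sketch of the difference: for convenience parallelism you typically want one extra thread rather than one per core. Something like the following (modern C++ shown for brevity; in 2009 you would reach for Win32 threads or a BackgroundWorker instead, and crunch here is a made-up stand-in for the slow work):

#include <future>

struct Report { long long total; };

// Stand-in for a calculation slow enough to freeze the UI.
Report crunch(int n) {
    long long t = 0;
    for (int i = 0; i < n; ++i) t += i;
    return Report{t};
}

// Called from the UI thread: launches the work in the background and
// returns at once, so the interface stays responsive. The UI later
// polls the future (or is notified) and displays the result.
std::future<Report> start_crunch(int n) {
    return std::async(std::launch::async, crunch, n);
}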

2. Will it actually speed up your app? There is an overhead in multi-threading, as you now have to manage the threads as well as performing your calculations. The worst case, according to Reinders, is a dual-core machine, where you have all the overhead but only one additional core. If the day comes when we routinely have, say, 64 cores on our desktop or laptop, then the benefit becomes overwhelming.
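
Amdahl's law – my gloss, not a figure from the briefing – makes the dual-core point concrete. If a fraction p of a program's work can be parallelised across n cores, the best possible speedup is:

S(n) = \frac{1}{(1-p) + p/n}

With p = 0.95, two cores give at most 1/(0.05 + 0.475) ≈ 1.9x, before subtracting any threading overhead, while 64 cores give around 15x.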

3. Is it actually desirable on a multi-tasking operating system? Consider this: an ideally parallelised application, from a performance perspective, is one that uses 100% CPU across all cores until it completes its task. That's great if it is the only application you are running, but what if you started four of these guys (same or different applications) simultaneously on a quad-core system? Now each application is contending with the others; there is no longer a performance benefit, and most likely the whole system is going to slow down. There is no perfect solution here: sometimes you want an application to go all-out and grab whatever CPU it needs to get the job done as quickly as possible, while at other times you would prefer it to run with lower priority because there are other things you care about more, such as a responsive operating system, other applications you want to use, or energy efficiency.

This is where something like Microsoft’s concurrency runtime (which Intel will support) could provide a solution. We want concurrent applications to talk to the operating system and to one another, to optimize overall use of resources. This is more promising than simply maxing out on concurrency in every individual application.

4. Will your code still run correctly? Edward Lee argues in a well-known paper, The Problem with Threads, that multi-threading is too dangerous for widespread use:

Many technologists are pushing for increased use of multithreading in software in order to take advantage of the predicted increases in parallelism in computer architectures. In this paper, I argue that this is not a good idea. Although threads seem to be a small step from sequential computation, in fact, they represent a huge step. They discard the most essential and appealing properties of sequential computation: understandability, predictability, and determinism. Threads, as a model of computation, are wildly nondeterministic, and the job of the programmer becomes one of pruning that nondeterminism. Although many research techniques improve the model by offering more effective pruning, I argue that this is approaching the problem backwards. Rather than pruning nondeterminism, we should build from essentially deterministic, composable components. Nondeterminism should be explicitly and judiciously introduced where needed, rather than removed where not needed.

I put this point to Reinders at the conference. He gave me a rather long answer, saying that it is partly a matter of using the right libraries and tools (Parallel Studio, naturally), and partly a matter of waiting for something better:

Lee articulates the dangers of threading. Did we magically fix it, or do we really know what we're doing in inflicting this on the masses? It really comes down to determinism. If programmers make their program non-deterministic, getting out of that mess is something most programmers can't do, and if they can it's horrendously expensive.

He’s right, if we stayed with Windows threads and Pthreads and programming at that level, we’re headed for disaster. What you need to see is tools and programming templates that avoid that. The evil thing is what we call shared mutable state. When you have things happening in parallel, the safest thing you can do is that they’re totally independent. This is one of the reasons that parallelism on servers works so well, in that you do lots and lots of transactions and they don’t bump into each other, or they only interface through the database.

Once we start opening up shared mutable state, encouraging threading, we set ourselves up for disaster. Parallel Inspector can help you figure out what disasters you create and get rid of them, but ultimately the answer is that you need to encourage people to use programming like OpenMP or Threading Building Blocks. Those generally guide you away from those mistakes. You can still make them.

One of the open questions is: can you come up with programming techniques that completely avoid the problem? We do have one that we've just started talking about, called Ct … but I think we're at the point now where OpenMP and Threading Building Blocks have proven that you can write code with that and get good results.
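
A tiny example of the distinction Reinders is drawing – mine, not his. Both loops count positive samples; the first shares a mutable counter across threads and races, while the reduction clause in the second gives each thread a private copy that OpenMP combines at the end:

#include <vector>

int count_racy(const std::vector<int>& v) {
    int hits = 0;                 // shared mutable state
    #pragma omp parallel for
    for (int i = 0; i < static_cast<int>(v.size()); ++i) {
        if (v[i] > 0) ++hits;     // data race: updates can be lost
    }
    return hits;                  // non-deterministic result
}

int count_safe(const std::vector<int>& v) {
    int hits = 0;
    #pragma omp parallel for reduction(+:hits)
    for (int i = 0; i < static_cast<int>(v.size()); ++i) {
        if (v[i] > 0) ++hits;     // each thread updates its own copy
    }
    return hits;                  // deterministic, same as the serial loop
}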

Reinders went on to distinguish between three types of concurrent programming, referring to some diagrams by Microsoft's David Callahan. The first is explicit, unsafe parallelism, where the developer has to do it right. The second is explicit, safe parallelism. The best approach, according to Reinders, would be to use functional languages, but he thinks it unlikely that they will catch on in the mainstream. The third type is implicit parallelism that's safe, where the developer does not even have to think about it. An example is the math kernel library in IPP (Intel Integrated Performance Primitives), where you just call an API that returns the right answers, and happens to use concurrency for its work.

Intel also has a project called Ct (C/C++ for Throughput), a dynamic runtime for data parallelism, which Reinders considers also falls into the implicit parallelism category.

It was a carefully nuanced answer, but my conclusion is to proceed with caution.

5. Will your application need a complete rewrite? This is a big maybe. Intel's claim is that many applications can be updated for parallelism with substantial benefits. A guy from Nero gave a presentation, though, and said that an attempt to parallelise one of their applications, a media transcoder, had failed because the architecture was not right, and it had to be completely redone. So I guess it depends.

This brings to mind another thing which everyone agrees is a hard challenge: how to design an application for effective parallelism. Intel has a tool in preparation called Parallel Advisor, to be part of Parallel Studio at a future date, which is meant to identify candidates for parallelism, but that will not be a complete answer.

Go parallel, or not?

None of the above refutes Intel’s essential point: that effective concurrent programming is essential to the future of computing. This is an evolutionary process though, and at this point there is every reason to be cautious rather than madly parallelising every piece of code you touch.

Additional Links

Microsoft has a handy Parallel Computing home page.

David Callahan: Design considerations for Parallel Programming

Cannot open the Outlook window – what sort of error message is that?

I’m actually enjoying Outlook 2007 on my desktop, especially since applying the February patch. It opens in a couple of seconds even from cold. I’m running on Vista 64-bit, and not using cached Exchange mode.

Until today, that is. I started Outlook and got this bewildering message: "Cannot start Microsoft Office Outlook. Cannot open the Outlook window."

This Microsoft article suggested I might not have a default gateway. That was nonsense, so I opened Mail setup (I can get to this through the Properties of the Outlook shortcut), clicked Show Profiles, added a new profile and set the new one as default.

Outlook works fine now. I’m not impressed with the error message though.

Update

While using a new profile works, there is an easier fix as noted in many of the comments to this post. You can run Outlook with the /resetnavpane argument. Here’s the step by step:

1. Press Windows key and R together to open the Run dialog.

2. In the Open field, type:

outlook.exe /resetnavpane


Note: there is a space between outlook.exe and /resetnavpane. There are no other spaces in what you have to type.

3. Click OK. Outlook should open.
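
As I understand it, this switch discards the saved navigation pane settings, which live in an XML file in your profile; a corrupt copy of that file seems to be the usual trigger for the error, which would also explain why creating a fresh mail profile worked.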


The end of Sun’s bold open source experiment

This is a sad day for Sun. It sought to re-invent its business through open source, and the experiment has failed, culminating not in a re-invigorated company but in acquisition by an old-school proprietary software company, Oracle.

It is possible to build a successful business around open source software. Zend is doing it with PHP; Red Hat has done it with Linux. These are smaller companies though, and they have not tried to migrate an older business built on a proprietary model. A further complication is that Sun is a hardware business, and although open source is an important part of its hardware strategy as well as its software strategy, it is a different kind of business.

Maybe the strategy was good, but it was the recession, or the server market, that killed Sun. In the end it does not make any difference: the outcome is what counts.

Reading the official overview of the deal, I see lots of references to "open" and "standards-based", which mean nothing, but no mention of open source.

The point of interest now is what happens to Sun’s most prominent open source projects: OpenOffice.org, MySQL, Java and OpenSolaris. Developers will be interested to see what happens to NetBeans, the open source Java IDE, following the Oracle acquisition, and how it will relate to Oracle’s JDeveloper IDE. These open source projects have a momentum of their own and are protected by their licenses, but a significant factor is what proportion of the committers – those who actually write the software and commit their changes to the repository – are Sun employees. Although it is not possible to take back open source code, it is possible to reduce investment, or to start creating premium editions available only to commercial subscribers, which already appeared to be part of MySQL’s strategy.

I presume that both OpenOffice and Java will feature in Oracle’s stated intention to build an end-to-end integrated solution:

Oracle will be the only company that can engineer an integrated system – applications to disk – where all the pieces fit and work together so customers do not have to do it themselves. Our customers benefit as their systems integration costs go down while system performance, reliability and security go up.

says CEO Larry Ellison, who also says nothing about open source. This will involve invading Microsoft’s turf – something Sun was always willing to do, but not particularly successful at executing.

The best outcome for the open source community will be if Oracle continues to support Sun’s open source projects along the same lines as before. Even if that happens, the industry has lost a giant of the open source world.

Some good comments from RedMonk's Michael Coté here.

What’s new in Exchange 2010 and Hyper-V R2

Mark Wilson’s blog has the best summary I’ve seen on what’s coming in Exchange 2010 and what’s new in Hyper-V R2.

The big thing in Hyper-V R2 is live migration. The big thing in Exchange 2010 is, well:

For me, it seems that Exchange 2010 is not a major upgrade – just as 2003 was an incremental change built on 2000, 2010 builds on 2007 but, nevertheless, the improvements are significant.

says Wilson. Microsoft's product releases (irrespective of whether the main version number is incremented) can often be categorised as either a major release or fine-tuning, and it seems that Exchange 2010 is in the latter category. Not a bad thing, given that there was a lot for admins to learn in Exchange 2007. Still, there is a lot in Exchange 2010 if you are excited about compliance, auditing and rights management, as well as some interesting new storage options:

In what will be a massive shift for many organisations, Microsoft is encouraging Exchange 2010 customers to store mailbox data on inexpensive local disks and to replicate databases between servers rather than using SAN-based replication.

There’s also no sign yet of Exchange moving to SQL Server rather than its own Blue JET Extensible Storage Engine. Confused about Red JET, Blue JET and Exchange? Roger Jennings wrote an extensive discussion of the matter.

And what of the VSS plug-in that enables Exchange-aware backup without purchasing a third-party solution? Promised in June 2008, still not delivered. I will be interested to see if it arrives with Exchange 2010, expected towards the end of this year. It's no longer an issue for me personally; I'm using the old NTBackup copied from 64-bit Windows Server 2003, and it seems to work fine for this purpose. The reason Microsoft does not care about this is that most users are either enterprises, which are meant to use Data Protection Manager, or small businesses with Small Business Server, which has its own backup solution. That does not excuse broken promises.

Is Silverlight the problem with ITV Player? Microsoft, you have a problem.

I sat down last night to watch a programme on ITV’s catch-up service, using the Silverlight-based ITV Player. It was watchable, but not too good. Now and again the stream would pause for buffering, and I saw the Silverlight busy icon for a while. Quality is also an issue. Sometimes it is great; sometimes it is horribly pixelated.

I took a look at the ITV forums. It seems to be a common problem. The Best of ITV section is dominated by complaints. Some are from an aggrieved minority running Linux or PowerPC Macs; but there are plenty of others. My experience is relatively good; other issues include broadcasts that only play the ads, codec problems, and streams failing completely halfway through a programme. Here's a sample:

Believe me guys even if you had Windows OS the player still wouldn’t work its completely rubbish; 6 times i’ve tried to watch Britains Got Talent and it either vanishes, or skips etc.
Rubbish, rubbish, rubbish! BBC iPlayer is excellent compared to this, i’m quite disappointed!

Readers of this blog will know that I have nothing against Silverlight, though my interest is more in the application development side than video streaming. Still, the impact of one on the other should not be discounted. You can guess what the pundits in the ITV forum are calling for. It’s Adobe Flash, because they have seen it working well for the BBC and elsewhere.

Now switch scenes to a development team weighing up whether to use Flash or Silverlight for an upcoming RIA (Rich Internet Application) project. If the exec responsible struggled to watch ITV Player the night before, thanks as far as she can tell to the Silverlight plug-in, that becomes a factor in the outcome.

I understand why people blame Silverlight for these problems; but I realise that this may be wrong, cross-platform issues aside. Maybe ITV has inadequate servers, or there is some other technical issue, and Silverlight is innocent.

If you know the answer to this, please let me know or comment below.

Microsoft must realise, though, that this is the most visible use of Silverlight for many UK folk. Some may also remember how BBC iPlayer transformed its reputation when it moved from using primarily Microsoft technology – though not Silverlight, and made worse by poor peer-to-peer client software – to Adobe’s Flash platform. I suggest that Redmond’s finest give it some attention; though who knows, it may be too late.

RIA (Rich Internet Applications): one day, all applications will be like this

I loved this piece by Robin Bloor on The PC, The Cloud, RIA and the future. My favourite line:

Nowadays very few Mac/PC users have any idea where any program is executing.

And why should they? Users want stuff to just work, after all. Bloor says more clearly than I have managed why RIA is the future of client computing. He emphasises the cost savings of multi-tenancy, and the importance of offline capability; he says the PC will become a caching device. He thinks Google Chrome is significant. So do I. He makes an interesting point about piracy:

All apps will gradually move to RIA as a matter of vendor self interest. (They’d be mad not to, it prevents theft entirely.)

Bloor has said some of this before, of course, and been only half-right. In 1997 he remarked that

Java is the epicenter of a software earthquake, and the shockwaves are already shaking the foundations of the software industry.

predicting that browser-hosted or thin Java clients would dominate computing. He was wrong about Java's impact, though perhaps he could have been right if Sun had evolved the Java client runtime to be more like Adobe Flash or Microsoft Silverlight before its recent hurried efforts with JavaFX. I also suspect that Microsoft and Windows have prospered more than Bloor expected in the intervening 12 years. These two things may be connected.

I think Bloor is more than half-right this time round, and that the RIA model with offline capability will grow in importance, making Flash vs Silverlight vs AJAX a key battleground.

Google’s cut-down Java: wanton and irresponsible, or just necessary?

Sun’s Simon Phipps stirred things up last weekend when he called Google’s actions wanton and irresponsible. Its crime: delivering a cut-down Java library for use on its App Engine platform, “flaunting the rules” which forbid creating sub-sets of the core classes.

It does sound as if Google is not talking to Sun as much as it might. Still, let's note that Google has good reason to omit certain classes or methods. App Engine is a distributed, shared environment; this means that some things make no sense – for example, writing to a local file – and other things may be unacceptable, such as grabbing a large slice of CPU time for an extended period.

Salesforce.com addressed this same issue by inventing a new language, called Apex. It’s Java-like, but not Java. The company therefore avoided accusations of creating an incompatible Java, and conveniently ensured that Apex code would run only on Force.com, at least until someone attempts to clone it.

Google’s approach was to use Java, but leave a few things out. This FAQ gives an overview; and the article Will it play in App Engine lists common frameworks and libraries with notes on whether they work. Given that languages like JRuby, Groovy and Rhino work fine, it’s clear that core App Engine Java is not too badly damaged. The big omissions are JDBC (because you are meant to use the App Engine datastore, which is not relational), and Enterprisey things like JMS, EJB and JNDI. Google is nudging, or shoving, developers towards RESTful APIs along with its built-in services.

Will you be able to escape App Engine if you have a change of heart after deployment? I'd guess that porting the code will not be all that hard. Perhaps the biggest lock-in is with identity; you could roll your own, I guess, but Google intends you to use Google accounts and supplies a Java API. Microsoft is ahead of Google here since it does support federated identity, if you can get your head round it: you can authenticate users in the Microsoft cloud against your own directory using Geneva. The best Google can offer is Directory Sync, though even that is some protection from identity lock-in.

Java support on App Engine is actually a vote of confidence in Java; if what is good for Java is good for Sun, then Sun is a winner here. That said, just where is the benefit for Sun if companies host Java applications, built with Eclipse, on Google’s platform? Not much that I can see.
