Microsoft partners are not whooping and cheering for Office 365

There is a telling moment in the day two keynote at Microsoft’s Worldwide Partner Conference. “Now we’ve added Office 365,” says Corporate VP Jon Roskill. “Do you guys feel the momentum?” There is a muted cheer, not the big whoop Roskill is looking for. “Now let’s have some momentum, whoo!” he repeats. Another barely audible cheer.

Why are partners not whooping and cheering?  Take a look at the Microsoft-commissioned Forrester report [PDF] on the total economic impact of Office 365. This report claims a remarkable payback period of only 2 months for a midsize organization moving to Office 365.

Looking at the figures in more detail, Forrester claims $54,000 saved over three years in eliminated hardware, $10,000 over the period in eliminated third-party software, $25,000 saved on web conferencing (Lync Online is bundled with Office 365), and $18,000 in “internal labor and professional services” saved on planning and implementation. There is an even bigger saving in support, though here I find it hard to puzzle out exactly what Forrester is claiming. It talks about “savings of $206,350 over three years” from simplified support and outsourced administration of infrastructure, but also refers to $146,250 in admin and support costs for Office 365; I am not sure whether the $206,350 is a net figure. Forrester also throws in $260,625 saved on reduced travel thanks to online collaboration, which strikes me as highly speculative.

I suggest therefore that you do not take Forrester’s figures too seriously; but it is still worth noting that many of the savings come from revenue that would otherwise have gone to partners. How much partner income is lost will depend on the extent to which an organization outsources its IT admin, planning, support and administration, and on the margins partners achieve on things like third-party software; but it is considerable.

Of course there are also new business opportunities for partners. Presuming the savings from Office 365 and Microsoft’s other cloud offerings are real, a cloud-oriented partner has a strong sales pitch to both existing and new customers. Partners also get an ongoing commission on subscriptions.

There is also an opportunity for new applications which link to cloud services. Yesterday Microsoft announced that the Windows Azure Marketplace, which used to offer data services and application building blocks, now also offers finished applications in US markets.

It is also true that Microsoft’s cloud offering is more partner-friendly than others, because it is a hybrid solution. The Forrester report mentioned above assumes the use of Active Directory Federation Services for single sign-on between on-premise servers and Office 365, a key feature which has been under-reported in the media coverage I have seen for Office 365. This feature, along with the fact that Microsoft’s server products like Exchange, SharePoint and Dynamics CRM can be deployed either on-premise or as hosted services, means that there is flexibility over what is hosted and what is on-premise.

Nevertheless, it is hard to construct a reality in which the savings customers get from cloud services are real, without the further implication that total partner revenue will diminish, even though certain individual partners who take advantage of the new opportunities may end up winners.

This is true even if Microsoft succeeds in retaining all of its existing Microsoft-platform customers, rather than losing them to Google or other cloud providers. The consequences of a migration to Google, which is inherently not a hybrid platform, seem to me more severe.

Is there any way to put a positive spin on this, from a partner’s perspective? Here are a couple of thoughts.

First, even if certain kinds of IT business are under threat from cloud migration, it is also true that the transforming impact of IT and the internet on businesses is far from complete. Much of what businesses currently do with IT can be greatly improved; there is still a thirst for new and improved business applications; and new technology, including not only the cloud but also massively parallel computing and of course mobile, presents many new opportunities.

Second, it seems to me that partners should not be asking themselves how to preserve their existing business, but instead planning for change. It seems inevitable that demand for skills in installing and nursing servers, deploying applications, and maintaining and supporting clients will diminish; and that is a good thing, because these activities are IT plumbing, and reducing them frees resources for other activities with more business potential.

Behind the whooping and cheering, Microsoft’s message to partners is a tough one. Change, or die.

Google+, Bing social search, and internet monopolies

The big new thing in social media right now is Google+, the search giant’s latest attempt to grab a slice of the social internet from Facebook and Twitter. I have been trying it for a few days and, like everyone else, have enjoyed playing with circles, the ability to categorise contacts into groups and choose who you are sharing with. I like that it addresses a core issue, the fact that we want to share different things with different people, but dislike the added complexity. In practice, if I have a personal message I am likely to use email or some other form of direct messaging, whereas what I post on a social networking site I will likely address to everyone.

Still, Google+ is a decent effort, and irrespective of how it compares in detail to its rivals, I think it may take off simply because Google has other properties, specifically Google search and Google Android, which will point you to it.

The value of social networks to a search company was highlighted this week, not by Google but by Microsoft at its Worldwide Partner Conference. The opening keynote was short on big news, but did include a demo of new features in Bing, that other search engine.

Stefan Weitz, Director of Influentials, showed how Bing can interact with Facebook so that your search results are annotated with the preferences of your friends. Here, Weitz has searched for “Mango” and Bing shows a section of results marked as Liked by your Facebook friends:

image

He then searches for Hawaii hotels for kids and sees this:

image

Once again, he sees two of his own contacts who have Liked a specific web site. He can go to the site with more confidence, or even click the name to interact directly with his contact and find out more.

This is powerful stuff, though the examples are contrived, and this is only going to work if you and your contacts do many of the same searches with the same search engine. The Microsoft/Facebook alliance has an advantage over Google in that Facebook has a bigger and more mature social graph; but Google has the advantage of a far larger search share, especially outside the USA. On this site, for example, here are the figures for July:

  • Google 90%
  • Bing 3.7%
  • Yahoo! 3.4%

You can figure out how much that leaves for “Other”.

Another Bing move also merits reflection. Weitz went on to demonstrate how Bing wants you to do the transaction as well as the search on its portal. It is actually fine for Bing to do this with its small market share; but I am not sure that I like the implications for search in general.

This hints at my central concern, which is monopoly. One reason I like Twitter is that I have no sense that Twitter wants to take over my digital life. I know Google does; it wants my searches, my email, my documents, my music, my location, and now my friends.

I know Facebook wants a big slice of it too; it wants me to live inside its walled garden.

These thoughts chime for me with another incident from the last few days. I posted something for sale on eBay, the dominant online auction site, and found that it has tilted its terms and conditions further in its own favour, by insisting that I set up automatic payment of its fees before it would allow me to list the item. It also happens that PayPal, owned by eBay, has recently sent me a notice advising that it is restricting the number of sales that can be funded by credit card, I presume because it dislikes the consumer protection that comes with paying by credit card.

The connection here is that eBay and PayPal only have the liberty to make these unilateral changes to their terms because of a lack of competition. Yes, there are other online markets; but if you actually want to sell stuff, there is little real-world choice. Well, there is Amazon; but that is another organisation which, for all its many merits, is constantly extending its reach.

It is curious, in a way, that when the web first appeared it seemed to be a great opportunity for the little guys – because on the Internet, nobody knows you’re a dog – but what we are now seeing is that winner-takes-all applies to a degree which goes beyond anything in the bricks-and-mortar world.

Renault’s electric Frendzy includes RIM PlayBook, external 37” screen

I am not a motoring journalist, but this is a car I would like to review. Renault’s Frendzy, which will be shown at the Frankfurt Motor Show in September, has several notable features:

1. It’s electric

2. An asymmetric design in which the passenger’s side represents business, and the driver’s side leisure.

3. An external 37” widescreen display embedded into the passenger side door, which is a sliding affair with no window.

4. A dock for the RIM PlayBook.

image

The PlayBook does obvious things like navigation, but also more than that:

As soon as it is plugged in, it becomes an integral element of the vehicle and configures itself into the Renault environment. Continuity of work is assured once the device is removed, and it can of course be used for all of the renowned BlackBerry PlayBook tablet features.

The device has an important role to play, too, in the customization of the vehicle as it controls the exterior screen while the vehicle is in motion and when parked, for business as well as for personal uses – pictograms illustrating life with electric vehicles, or the viewing of a film, for example.

says the release.

5. RFID sensors in the door sills. If you are delivering goods that have RFID chips, you can have a truly intelligent courier service. The packages can inform the vehicle of their destination, talk to the navigation system to display the route, and I presume could even raise the alarm if you drove away from the destination having forgotten to deliver parcel 3 of 3.

All cool; and I have not even mentioned that the interior lighting can be switched from green for work to orange for leisure.

At the same time, I have some questions.

I am not sure whether giving the driver easy access to a full-featured tablet is wise, as I would rather he concentrated on driving than on posting messages to Facebook or engaging in the latest MMORPG (Massively Multiplayer Online Role-Playing Game).

It also seems a bit of a waste having the 37” screen on the outside. I can see the sense for advertising, though having the screen on the side means it will be more visible to pedestrians than to other motorists, and even 37” may not be big enough to get a message across at the kind of distance that will be typical. Renault says:

… a large external screen that can display useful messages or information (such as “making deliveries” or “back in five minutes”, the battery-charging method or the remaining charge) or advertising messages, either whilst parked or on the move.

I am not sure that I really want to tell the world how low my battery is.

The one thing you cannot do with the external screen is watch a film on it, unless you park and have a picnic I guess, though not to worry:

Depending on their mood of the moment, children can watch a film or play games on the touch-sensitive pad which slides out from the back of the driver’s seat. They can even draw on a special slate integrated into the sliding door.

Finally, I am amused by the trouble Renault has taken with the sound scheme – yes, since electric vehicles are inherently silent but need to make a noise for safety reasons, even the sound has a personality:

FRENDZY’s dual personality prompted Renault and IRCAM (Institut de Recherche et Coordination Acoustique / Musique) to develop a broad range of sounds. The programme has led to a variety of sounds that are emitted both inside and outside of the vehicle to ensure that everyone can tell whether it is in business or passenger car mode, thanks simply to its sound signature.

More information on the Frendzy is here.

Embarcadero promises Delphi everywhere: Mac, iOS this year, Android, BlackBerry, Windows Phone to follow

I noticed the following remark from Embarcadero’s David Intersimone regarding Delphi, its application builder based on Pascal.

We are putting Delphi (and C++Builder) everywhere this year and over the next 5 years. Today you can use Delphi for Desktop, Client/Server, Multi-Tier, Cloud, Web, Web Services (REST and SOAP). This year you will also be able to build for Macintosh and iOS. Linux is also on the roadmap for the coming years along with Android, Blackberry and Windows Phone 7.

Welcome news; though Delphi enthusiasts are all too familiar with bold promises. Two years ago I interviewed Embarcadero’s CEO Wayne Williams and he promised cross-platform Delphi in 2010; but when Delphi XE appeared last year neither Mac nor 64-bit (another longstanding request) was included.

That said, I am still a big Delphi fan. Mobile is a particularly interesting prospect. I have tried numerous cross-platform mobile toolkits and they all have problems; on the other hand they are improving fast and in a couple of years things like Appcelerator’s Titanium and  Nitobi’s PhoneGap may be hard to catch.

Update: what will Delphi’s Android support look like? I would be interested to know whether Embarcadero is working on its own compiler, or whether it is partnering with RemObjects, in which case what Intersimone is referring to may be Project Cooper:

“Cooper” is a new and exciting research project going on in the RemObjects Software Labs, to bring the Oxygene language to the Java and Android platforms. The original Oxygene for .NET set out to bring a modern and “next generation” Object Pascal to the .NET world; Project “Cooper” is taking this endeavor to the next level, expanding the reach of Oxygene to the second big managed platform.

In other words, Project Cooper will compile Delphi code to Java.

Note that Embarcadero officially adopted Oxygene and offers it as its own product called Prism. It seems plausible that the same will happen with Project Cooper. Since Windows Phone is a .NET platform, there is also potential for Oxygene/Prism to target Microsoft’s mobile platform:

Windows Phone 7 – Microsoft’s new Windows Phone 7 uses Silverlight for application development,  and did I mention Delphi Prism does Silverlight?

says Jim McKeeth at RemObjects.

What about Delphi on the Mac and on iOS? There is also a possible Oxygene/Prism route here, via MonoMac: Delphi to .NET/Mono to Mac. However, I suspect Delphi developers would be disappointed if this turned out to be Embarcadero’s approach to Mac and iOS support. Programmers choose Delphi because they like compilation to native code.

Google Plus demands your location on iPhone, iPad and mobile devices – but you still have control

Last week I signed up for Google+ (you can find me here), and one of the first things I tried was to sign in on an Apple iPad.

I was annoyed to see the following message:

image

Google demanded the right to use my location with Google Plus; otherwise it would not let me sign in. But what if you want to use Google Plus without sharing your location with the world? Since Google Plus works fine on desktop PCs without location information, why should you not use it on an iPad in the same way?

This led me to investigate the W3C Geolocation API. In fact, I wrote my own web page to test how it works: I went over to Bing Maps, signed up for a developer account, and wrote a small amount of JavaScript. You can try it here if you have a reasonably modern browser. I have not bothered to test for older browsers that do not support geolocation.

You will notice a couple of things about this test page. One is that it will ask your consent before attempting to retrieve your location. Another is that, on a home broadband connection, it is rather inaccurate. According to Internet Explorer 9 I am in Berkhamsted – do not try to visit me there though; I am nowhere near.

image

However, if you try this on an iPad or other mobile device, you will likely get much better results. If I use the iPad, even on home wifi, it shows my house dead centre of the map.

That is only if you give consent, though. Since Google+ is a web application, this consent is determined by Safari, irrespective of what terms and conditions you agreed with Google. If it bothers you, you can even go to Settings – Location Services and disable location services for Safari completely:

image

That said, Google could add code that tries to retrieve your location and refuses to let you use Google+ if access is denied – but it has not done so. In fact, so far the only time I have seen Safari prompt for consent in Google+ is when making a post:

image

If you agree, this allows Google+ to geotag your post.

I am sure there are other ways Google plans to use your location in Google+. For the moment though, if you would rather maintain location privacy, Google+ still allows you to do so.

Hands on debugging an Azure application – what to do when it works locally but not in the cloud

I have been writing a Facebook application hosted on Microsoft Azure. I hit a problem where my application worked fine on the local development fabric, but failed when deployed to Azure. The application was not actually crashing; it just did not work as expected. Specifically, either the Facebook authentication or the ASP.NET Forms Authentication was failing; when I tried to log on, the log on failed.

This scenario, where the app works locally but not on Azure, is potentially a bad one because you do not have the luxury of breakpoints and variable inspection. There are several approaches. You can have the application write a log, which you could download or view by using Remote Desktop to the Azure instance. You can have the application output debug messages to HTML. Or you can use IntelliTrace.

I tried IntelliTrace. It is easy to set up: just check the box when deploying.

image

Once deployed, I tried the application. I clicked the Log On button, after which the screen flashed but it still asked me to log on. The log on had failed.

image

I closed the app, opened Server Explorer in Visual Studio, drilled down into the Windows Azure Compute node and selected View IntelliTrace Logs.

image

The logs took a few minutes to download. Then you can view the IntelliTrace log summary, which includes a list of exceptions. You can double-click an exception to start an IntelliTrace debug session.

image

Useful, but I still could not figure out what was wrong. I also found that IntelliTrace did not show the values for local variables in its debug sessions, though it does show exceptions in detail.

Now, if you really want to debug and trace an Azure application you had better read this MSDN article, which explains how to create custom debugging and trace agents and write logs to Azure storage. That seems like a lot of work, so I resorted to the old technique of writing debug messages to HTML.
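
The technique is nothing sophisticated: collect messages as the code runs, then echo them into the page output so you can read them in the browser, where no debugger can reach. Here is a minimal sketch, assuming an ASP.NET Web Forms page with a Literal control named DebugOutput; the control and helper names are illustrative, not taken from my actual application.

using System;
using System.Text;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class LogOn : Page
{
    // Stand-in for a control that would normally be declared in the .aspx
    // markup as <asp:Literal ID="DebugOutput" runat="server" />
    protected Literal DebugOutput;

    // Accumulates messages for the lifetime of the request
    private readonly StringBuilder debugLog = new StringBuilder();

    private void Debug(string message)
    {
        debugLog.AppendFormat("{0:u} {1}<br/>", DateTime.UtcNow, message);
        DebugOutput.Text = debugLog.ToString();
    }

    protected void LogOnButton_Click(object sender, EventArgs e)
    {
        Debug("LogOnButton_Click fired");
        // ... attempt the Facebook and Forms Authentication steps here,
        // logging before and after each call
        Debug("authentication attempted");
    }
}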

At this point I should mention something you must do in order to debug on Azure and remain sane. This is to enable Web Deploy:

image

It is not that hard to set up, though you do need to enable Remote Desktop which means a trip to the Azure management portal. In my case I am behind a firewall so I needed to configure Web Deploy to use the standard SSL port. All is explained here.

Why use Web Deploy? Well, normally when you deploy to Azure the service actually builds, copies and spins up a new virtual machine image for your app. That process is fundamental to Azure’s design and means there are always at least two copies of the VM in existence. It is also slow, so if you are making changes to an app, deploying, and then testing, you will spend most of your time waiting for Azure.

Web Deploy, by contrast, writes to your existing instance, so it is many times quicker. Note that once you have your app working, it is essential to deploy it properly, since Azure might revert your app to the last VM you created.

With Web Deploy enabled I got back to work. I discovered that FormsAuthentication.SetAuthCookie was not working. The odd thing being, it worked locally, and it had worked in a previous version deployed to Azure.

Then I began to figure it out. My app runs in a Facebook canvas, which means it is served in an iframe from a different site than Facebook, so its cookies may be rejected as third-party cookies. When I ran the app locally, the app was in a different IE security zone, so different rules applied.

But why had it worked before? I realised that the previous time it worked I had been using Google Chrome. That was it: IE worked locally, but only Chrome worked when deployed.

I have given up trying to fix the specific problem for the moment. I have dug into it a little, and discovered that cookie handling in a Facebook canvas with IE is a long-standing problem, and that the Facebook C# SDK may have bugs in this area. It is not essential for my sample; I have found I can get by with the Facebook session. To get the user ID, for example:

FacebookWebContext.Current.Session.UserId

The time has not been wasted though as I have learned a bit about Azure debugging. I was also amused to discover that my Azure VM has activation problems:

image

The frustration of developing for Facebook with C#

I am researching a piece on developing for Facebook with Microsoft Azure, and of course the first thing I did was to try it out.

It is not easy. The first problem is that Facebook does not care about C#. There are four SDKs on offer: JavaScript, Apple iOS, Google Android, and PHP. This has led to a proliferation of experimental and third-party SDKs which are mostly not very good.

The next problem is that the Facebook API is constantly changing. If you try to wrap it neatly in an SDK, it is likely that some things will break when the next big change comes along.

This leads to the third problem, which is that Google may not be your friend. That helpful article or discussion on developing for Facebook might be out of date now.

Now, there are a couple of reasons why it should be getting better. Jim Zimmerman and Nathan Totten at Thuzi (Totten is now a technical evangelist at Microsoft) created a new C# Facebook SDK, needing it for their own apps and frustrated with what was on offer elsewhere. The Facebook C# SDK looks like it has some momentum.

C# 4.0 actually works well with Facebook, thanks to the dynamic keyword, which makes it easier to cope with Facebook’s changes and also lets it map closely to the official PHP SDK, as Totten explains.
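
To give a flavour of what that means in practice, here is a minimal sketch using the SDK’s FacebookClient. The only assumption is that you already have a valid access token; name and id are standard Graph API user fields.

using System;
using Facebook; // Facebook C# SDK

class GraphExample
{
    static void ShowCurrentUser(string accessToken)
    {
        var client = new FacebookClient(accessToken);

        // The result is dynamic, so fields are resolved at runtime; if
        // Facebook adds or renames fields there is no strongly-typed
        // wrapper class that needs regenerating
        dynamic me = client.Get("me");
        Console.WriteLine("Logged in as {0} (id {1})", me.name, me.id);
    }
}

This is much the same shape as calling api("/me") in the official PHP SDK, which is Totten’s point.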

Nevertheless, there are still a few problems. One is that documentation for the SDK is sketchy to say the least. There is currently no reference for it on the Codeplex site, and most of the comments are the kind that produces impressive-looking automatic documentation but actually tells you nothing of substance. Plucking one at random:

FacebookClient.GetAsync(System.Collections.Generic.IDictionary<string,object>)

Summary:
Makes an asynchronous GET request to the Facebook server.

Parameters:
parameters: The parameters.

Another problem, inherent to dynamic typing, is that IntelliSense (auto-completion in Visual Studio) has limited value. You constantly need to reference the Facebook documentation.

Finally, the SDK has changed quite a bit in different versions and some of the samples reference old versions.

In particular, I found it a struggle getting OAuth authentication and access token retrieval working, and ended up borrowing Totten’s sample code here, which mostly works – though note that the sample does not cope with the same user logging out and logging in again. I fixed this by changing his InMemoryUserStore to use a ConcurrentDictionary instead of a ConcurrentBag, as sketched below, though there are plenty of other ways you can store users.
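
For illustration only – this is the shape of the change rather than Totten’s actual code, and the FacebookUser type here is my own stand-in – keying the store by Facebook user ID means a returning user overwrites the stale entry instead of piling up duplicates:

using System.Collections.Concurrent;

// Illustrative user record; the real sample stores more than this
public class FacebookUser
{
    public long FacebookId { get; set; }
    public string AccessToken { get; set; }
}

public class InMemoryUserStore
{
    // A ConcurrentBag can only add, so a user who logs out and back in
    // leaves a stale duplicate behind; AddOrUpdate replaces it instead
    private readonly ConcurrentDictionary<long, FacebookUser> users =
        new ConcurrentDictionary<long, FacebookUser>();

    public void Add(FacebookUser user)
    {
        users.AddOrUpdate(user.FacebookId, user, (id, existing) => user);
    }

    public FacebookUser Get(long facebookId)
    {
        FacebookUser user;
        return users.TryGetValue(facebookId, out user) ? user : null;
    }
}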

I’m puzzled why Microsoft does not invest more in making this easier. Microsoft invested in Facebook and it is easy to get the impression that Microsoft and Facebook are in some sort of informal alliance versus Google. Windows Phone 7, for example, ties in closely with Facebook and is probably the best Facebook phone out there.

As it is, although I prefer coding in C# to PHP, I would say that choosing PHP as the platform for your Facebook app will present less friction.

HP breaks 2.5 million web support links

The internet and search: the greatest resource ever for troubleshooting computer systems.

Except when you follow a promising link to find this:

image

On June 26th, the HP IT Resource Center forums were migrated to the HP Enterprise Business Community. This migration coincided with the release of the new HP Support Center, and the retirement of the legacy ITRC support portal. As part of the transition, we have migrated all ~2.5 million posts and ~712k users from the ITRC forums into the new community site.
As a result of this transition, all links/bookmarks/search results that attempt to load an ITRC forum page will redirect to this announcement page.

I understand the reasons; but I wish companies would think twice before doing this. Or three times. Eventually the search engines will stop listing the broken links, but other references to these support discussions will still be broken.

How much would it cost HP to keep the old links online in read-only form?

It is not just HP, of course. These generic “sorry, we broke the link” pages pop up regularly on Microsoft’s site, for example, often when following a link from elsewhere on Microsoft’s own site.

The web is designed to tolerate broken links; it is one of the reasons why it works. However, that is no reason to break them with abandon.

ReSharper 6.0 arrives: intelligent editing and decompiling for Visual Studio

JetBrains has released ReSharper 6.0, an add-on for Visual Studio 2008 and 2010 that delivers a remarkable range of tools, mostly focused on code editing and static analysis. There is also a unit test runner and a source code decompiler.

The heart of ReSharper is refactoring, hence the name, and it adds a large number of refactoring options to Visual Studio. These are nicely integrated with the editor, not only as right-click menu options, but with light-bulb suggestions that appear automatically. Here, for example, ReSharper is telling me that I could use implicit type declaration, and offering to make the change for me, or alternatively to suppress this type of suggestion forever if I do not like it:

image
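
The code in the screenshot does not matter much; the suggestion amounts to something like the following illustrative snippet (the IClaimsIdentity cast is assumed from the screenshot, and the surrounding methods are invented for the example).

using System.Security.Principal;
using Microsoft.IdentityModel.Claims; // Windows Identity Foundation

public class Example
{
    // Before: the explicit declaration repeats the type already named in the cast
    public string NameExplicit(IPrincipal user)
    {
        IClaimsIdentity identity = (IClaimsIdentity)user.Identity;
        return identity.Name;
    }

    // After accepting ReSharper's suggestion: implicit typing, same behaviour
    public string NameImplicit(IPrincipal user)
    {
        var identity = (IClaimsIdentity)user.Identity;
        return identity.Name;
    }
}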

Source code decompiling is also nicely done. In the above code, IClaimsIdentity is part of the .NET Framework so the source code is not normally available. With ReSharper though, I can navigate to decompiled source:

image

This could be legally sensitive, so I have to pass a Decompiler Legal Notice in which JetBrains attempts to disclaim liability.

image

Then I am in, though the results are not exciting in this instance:

image

If you only want the decompiler, you may find the free dotPeek is all you need.

The what’s new list in ReSharper 6.0 is long. It includes support for JavaScript, ASP.NET Razor, CSS and HTML; better XAML support, including creating properties and dependency properties from usage; and macros for file headers, which automate things like inserting the current date and time.

The pricing is not excessive: in the UK it costs £148 for a personal license or £259 for a commercial license. If you think ReSharper will save you time and improve your code quality, which it likely will, it will soon pay for itself.

Now you can rip SACDs

Sony’s Super Audio CD (SACD) is an audiophile format featuring high-resolution and multi-channel sound. The discs are copy protected, and until now it has not been possible to create an exact copy. Of course you can capture the analogue output and re-digitise it, and certain players from manufacturers such as Oppo enable you to capture digital output converted from Sony’s DSD (Direct Stream Digital) format to high-resolution PCM (Pulse-Code Modulation); but still, it is not an exact copy.

Ripping an SACD is still not that easy. The crack depends on getting hold of an early model of the PlayStation 3 that has not been updated to the latest firmware. Recent PS3s do not play SACD at all, and you need firmware 3.55 or lower, from before Sony removed the capability of running an alternative operating system. There is no downgrade path, so it is a matter of scouring eBay for one that has not been updated.

Once you have the right hardware you can follow the instructions here  to rip the SACD:

SACD-Ripper supports the following output formats:
– 2ch DSDIFF (DSD)
– 2ch DSDIFF (DST) (if already DST encoded)
– 2ch DSF (DSD)
– mch DSDIFF (DSD)
– mch DSDIFF (DST)
– mch DSF (DSD)
– ISO (due to the 4GB FAT32 size limit on the PS3, files will be split when larger)

There is some discussion of the procedure here from where I have grabbed this image:

image

Is it worth it? Good question. There are SACD enthusiasts who swear that DSD reproduces sound with a natural fidelity that PCM cannot match. On the other hand, researchers conducted a test showing that listeners could not tell the difference if the output from SACD was converted to CD standard PCM. I have also seen papers suggesting that DSD is inferior to PCM and may colour the sound. Expect heated opinions if you enter this debate.

Nevertheless, there are many great-sounding SACDs out there and the format is not completely dead. Universal Japan, for example, issues SACDs made of SHM (Super High Material) at premium prices, and whether it is thanks to the super material itself, or simply clean mastering from good tape sources, these are proving popular within the niche audiophile market.

The fact that these discs cannot be perfectly ripped is part of the appeal from the industry’s perspective. Now that is no longer the case, and the torrent sites will be able to offer DSD files with full SACD quality.