All posts by onlyconnect

When remote desktop does not connect: changing Windows DNS settings remotely

This was annoying. I tried to remote desktop into my Hyper-V Server today and could not. The message:

Remote Desktop cannot verify the identity of the remote computer because there is a time or date difference between your computer and the remote computer.

[Image: the Remote Desktop error dialog]

Hmm. I typed:

net time \\myhypervbox

and it was the same as the time on my desktop.

A Google or two later, and I discovered that this message is caused by an incorrect DNS setting on the target computer. That made sense, since a DNS server died recently. I had changed the settings on the VMs but forgot to do it on the Hyper-V host. Thank you, Microsoft, for a misleading error message.

Of course my Hyper-V server has no screen attached. So how to change the DNS setting? Umm, not by remote desktop.

I fiddled with netsh for a bit. This looked promising, but it was not playing ball: when I tried to list the interfaces, it gave an error saying it could not do so while remote access is not running. Further, I have two network cards in this machine, and Hyper-V creates virtual interfaces, so I was not sure what the correct network interface name was.
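For the record, this is the sort of command that should do the job, where the interface name and the new DNS server address are placeholders for your own values:

netsh -r myhypervbox interface ip set dns "Local Area Connection" static 192.168.0.2

The -r switch targets a remote machine; in my case this was the part that refused to cooperate.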

Next up was the registry editor. Run Regedit, choose File – Connect Network Registry. That worked. I went to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\tcpip\Parameters\Interfaces

This lists the network interfaces as GUIDs. I went through them one by one, and in the two cases where the NameServer entry was set to the dead server, I changed it to the new one.
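If you prefer the command line, reg.exe can write to a remote registry too. A sketch, with the interface GUID and the new DNS server address both placeholders:

reg add "\\myhypervbox\HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{interface-guid}" /v NameServer /t REG_SZ /d 192.168.0.2

You still need to find the right GUIDs first, which is where browsing in Regedit helps.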

There is also a NameServer entry in the top-level Parameters key, but this was blank and I left it alone.

If you want to know what all these keys do, there is a guide here.

I rebooted the machine, remotely of course:

shutdown /m \\myhypervbox /r

and when it restarted remote desktop worked again.

SQL Server 2011 Denali publishes tables as Windows network folders

I’ve been testing the new Community Technology Preview (CTP) of SQL Server 2011, codenamed “Denali”.

Here is an intriguing feature. You can now create a new kind of table called a FileTable. A FileTable is mapped to a folder on the filesystem, though you are not meant to access it directly once it is managed by SQL Server. However, you can access the folder in Windows Explorer, or over the network, as a network share. When you do this, a SQL Server component intercepts the Windows API calls and updates the FileTable. FileTables build on the existing FILESTREAM feature in SQL Server 2008, and the documents in the folder are stored as FILESTREAM data.
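Going by the CTP documentation, creating a FileTable looks something like this, assuming a database that already has a FILESTREAM filegroup; the database, table and directory names here are my own:

ALTER DATABASE MyDocs
    SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'MyDocs');
GO

CREATE TABLE Documents AS FileTable
    WITH (FileTable_Directory = 'Documents');
GO

The FileTable_Directory value becomes the folder name you see under the share.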

The illustration shows a folder in Windows Explorer that is also a SQL Server FileTable.

[Image: a FileTable folder open in Windows Explorer]

Is this the return of WinFS, the fabled relational file system that was originally planned for Windows Longhorn but later abandoned? Not really. According to the docs:

FileTables remove a significant barrier to the use of SQL Server for the storage and management of unstructured data that is currently residing as files on file servers. Enterprises can move this data from file servers into FileTables to take advantage of integrated administration and services provided by SQL Server. At the same time, they can maintain Windows application compatibility for their existing Windows applications that see this data as files in the file system.

Embarcadero RAD Studio XE2 will have cross-platform compilation

A Google search for RAD Studio XE2, presumably the successor to RAD Studio XE (which includes Delphi, Delphi Prism for .NET, C++ Builder and RAD PHP), throws up the following page:

[Image: the RAD Studio XE2 page turned up by the search]

Unfortunately, the actual links require a login for a closed beta.

Hmm, what caught my eye was the entry for cross-platform applications. Good to see this coming soon.

Adobe releases 64-bit Flash Player 11 beta, AIR 3 with packager for Windows, Mac, Android

Adobe has released a beta version of Flash Player 11 and AIR 3. The AIR release is of limited interest since as yet there is no public SDK; Adobe mainly wants to test compatibility.  That said, the announcement describes a key new feature, the ability to package AIR applications as standalone executables on Windows, Mac and Android. You can already do this on Apple iOS, a feature that was forced on Adobe by Apple’s refusal to allow application runtimes on iOS – unless they are WebKit or FileMaker. This is new for the other platforms though, and I assume comes as a result of the popularity of the iOS packager. The effect is that you no longer have to advertise the fact that your app runs on AIR or require users to obtain the runtime; your app will just work.

Adobe may have its eye on the Mac App Store, which will disallow applications that require a runtime. Extending the AIR packager to desktop OS X should get around that limitation.

64-bit Flash Player is also a big deal, and really long overdue, though there has already been a preview codenamed Square which offered 64-bit. Although there are probably not many Flash applications that really need 64-bit, this is good for compatibility with 64-bit browsers and of course desktop applications when compiled with AIR. There could also be value in 64-bit for business intelligence clients which manipulate large datasets.

Another new feature in Flash Player 11 is Stage3D, codenamed Molehill, which is a new API for hardware-accelerated 3D graphics. Stage3D has its own shader language, called AGAL (Adobe Graphics Assembly Language); my heart sinks a little when I see vendors inventing new languages rather than using one that is already available, such as the OpenGL Shading Language (GLSL), but Adobe says AGAL is simpler and more secure. If you would like to use GLSL with Stage3D, check out the third-party Mandreel framework, which compiles GLSL shaders to AGAL.

Flash Player 11 also has a built-in H.264/AVC software encoder for cameras, which will improve video chat and video conferencing, and adds potential for applications that stream video out as well as in.

Native JSON support will simplify and accelerate the handling of data in this popular format.
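If I am reading the beta documentation correctly, this works like the JSON object that browsers expose to JavaScript, so parsing no longer needs a third-party library such as as3corelib:

// Native in Flash Player 11: parse and generate JSON with no extra libraries
var result:Object = JSON.parse('{"product":"Flash Player","version":11}');
trace(result.product); // Flash Player
var text:String = JSON.stringify(result);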

Another feature that caught my eye is socket progress events. When transferring data, it is important to give feedback to the user on progress. A new property lets developers monitor the number of bytes remaining in the write buffer, and a new event is raised when data is being sent, enabling more informative data transfer applications.
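Going by the beta documentation, the new property is Socket.bytesPending and the new event is an OutputProgressEvent; a minimal sketch, with the host and port made up:

import flash.net.Socket;
import flash.events.OutputProgressEvent;

var socket:Socket = new Socket("example.com", 8080);
socket.addEventListener(OutputProgressEvent.OUTPUT_PROGRESS, onOutputProgress);

function onOutputProgress(e:OutputProgressEvent):void {
    // bytesPending reports how much of the write buffer is still unsent
    trace("Bytes still to send: " + socket.bytesPending);
}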

LZMA compression for SWF files, the compiled format for Flash content, is claimed to reduce SWF size by up to 40%.

When do we get a full release? Adobe is taking its time, but my hunch is that it will be in 2011, maybe in time for the MAX conference in October.

Microsoft partners are not whooping and cheering for Office 365

There is a telling moment in the day two keynote at Microsoft’s Worldwide Partner Conference. “Now we’ve added Office 365,” says Corporate VP Jon Roskill. “Do you guys feel the momentum?” There is a muted cheer, not the big whoop Roskill is looking for. “Now let’s have some momentum, whoo!” he repeats. Another barely audible cheer.

Why are partners not whooping and cheering?  Take a look at the Microsoft-commissioned Forrester report [PDF] on the total economic impact of Office 365. This report claims a remarkable payback period of only 2 months for a midsize organization moving to Office 365.

Looking at the figures in more detail, Forrester claims $54,000 saved over three years in eliminated hardware, $10,000 over the period in eliminated third-party software, $25,000 saved in web conferencing (Lync Online is bundled with Office 365), and $18,000 in “internal labor and professional services” saved on planning and implementation. There is an even bigger saving in support. Here I find it hard to puzzle out exactly what Forrester is claiming. It talks about “savings of $206,350 over three years” from simplified support and outsourced administration of infrastructure, but also refers to $146,250 costs in admin and support costs for Office 365; I am not sure if the $206,350 is a net figure. Forrester also throws in $260,625 saved on reduced travel thanks to online collaboration, which strikes me as highly speculative.

I suggest therefore that you do not take Forrester’s figures too seriously; but it is still worth noting that many of the savings come from revenue that would otherwise have gone to partners. How much partner income is lost will depend on the extent to which an organization outsources its IT admin, planning, support and administration, and on the margins partners achieve on things like third-party software; but it is considerable.

Of course there are also new business opportunities for partners. Presuming the savings from Office 365 and Microsoft’s other cloud offerings are real, a cloud-oriented partner has a strong sales pitch both to existing and new customers. Partners get an ongoing commission from subscriptions.

There is also an opportunity for new applications which link to cloud services. Yesterday Microsoft announced that the Windows Azure Marketplace, which used to offer data services and application building blocks, now also offers finished applications in US markets.

It is also true that Microsoft’s cloud offering is more partner-friendly than others, because it is a hybrid solution. Forrester’s report mentioned above assumes the use of Active Directory Federation Services for single sign-on between on-premise Active Directory and Office 365, a key feature which has been under-reported in the media coverage of Office 365 that I have seen. This feature, along with the fact that Microsoft’s server products like Exchange, SharePoint and Dynamics CRM can be deployed on-premise or run as hosted services, means that there is flexibility over what is hosted and what is on-premise.

Nevertheless, it is hard to construct a reality in which the savings customers get from cloud services are real, without the further implication that total partner revenue will diminish, even though certain individual partners who take advantage of the new opportunities may end up winners.

This is true even if Microsoft succeeds in retaining all of its existing Microsoft-platform customers, rather than losing them to Google or other cloud providers. The consequences of a migration to Google, which is inherently not a hybrid platform, seem to me more severe.

Is there any way to put a positive spin on this, from a partner’s perspective? A couple of thoughts on this.

First, even if certain kinds of IT business are under threat from cloud migration, it is also true that the transforming impact of IT and the internet on businesses is far from complete. Much of what businesses currently do with IT can be greatly improved; there is still a thirst for new and improved business applications; and new technology, including not only the cloud but also massively parallel computing and of course mobile, presents many new opportunities.

Second, it seems to me that partners should not be asking how to maintain their existing business, but should instead be planning for change. It seems inevitable that demand for skills in installing and nursing servers, deploying applications, and maintaining and supporting clients will diminish; and that is a good thing, because these activities are IT plumbing, and if they can be reduced it frees resources for activities with more business potential.

Behind the whooping and cheering, Microsoft’s message to partners is a tough one. Change, or die.

Google+, Bing social search, and internet monopolies

The big new thing in social media right now is Google+, the search giant’s latest attempt to grab a slice of the social internet from Facebook and Twitter. I have been trying it for a few days and, like everyone else, have enjoyed playing with circles, the ability to categorise contacts into groups and choose whom you share with. I like that it addresses a core issue, the fact that we want to share different things with different people, but dislike the added complexity. In practice, if I have a personal message I am likely to use email or some other form of direct messaging, whereas what I post on a social networking site I will likely address to everyone.

Still, Google+ is a decent effort, and irrespective of how it compares in detail to its rivals, I think it may take off simply because Google has other properties, specifically Google search and Google Android, which will point you to it.

The value of social networks to a search company was highlighted this week, not by Google but by Microsoft at its Worldwide Partner Conference. The opening keynote was short on big news, but did include a demo of new features in Bing, that other search engine.

Stefan Weitz, Director of Influentials, showed how Bing can interact with Facebook so that your search results are annotated with the preferences of your friends. Here, Weitz has searched for “Mango”, and Bing shows a section of results marked as Liked by your Facebook friends:

[Image: Bing results for “Mango”, with a section marked Liked by your Facebook friends]

He then searches for “Hawaii hotels for kids” and sees this:

[Image: Bing results for “Hawaii hotels for kids”, annotated with Facebook friends’ Likes]

Once again, he sees two of his own contacts who have Liked a specific web site. He can go to the site with more confidence, or even click the name to interact directly with his contact and find out more.

This is powerful stuff, though the examples are contrived, and this is only going to work if you and your contacts do many of the same searches with the same search engine. The Microsoft/Facebook alliance has an advantage over Google in that Facebook has a bigger and more mature social graph; but Google has the advantage of a far larger search share, especially outside the USA. On this site, for example, here are the figures for July:

  • Google 90%
  • Bing 3.7%
  • Yahoo! 3.4%

You can figure out how much that leaves for “Other”.

Another Bing move also merits reflection. Weitz went on to demonstrate how Bing wants you to do the transaction, as well as the search, on its portal. It is actually fine for Bing to do this with its small market share; but I am not sure that I like the implications for search in general.

This hints at my central concern, which is monopoly. One reason I like Twitter is that I have no sense that Twitter wants to take over my digital life. I know Google does; it wants my searches, my email, my documents, my music, my location, and now my friends.

I know Facebook wants a big slice of it too; it wants me to live inside its walled garden.

These thoughts chime for me with another incident from the last few days. I posted something for sale on eBay, the dominant online auction site, and found that it has tightened its terms and conditions further in its own favour by insisting that I set up automatic payment of its fees before it would allow me to post the item. It also happens that PayPal, owned by eBay, has recently sent me a notice advising that it is restricting the number of sales that can be funded by credit card, I presume because it dislikes the consumer protection you gain by paying with a credit card.

The connection here is that eBay and PayPal only have the liberty to make these unilateral changes in their terms because of lack of competition. Yes, there are other online markets; but if you actually want to sell stuff, there is little real-world choice. Well, there is Amazon; and there is another organisation which, for all its many merits, is constantly extending its reach.

It is curious, in a way, that when the web first appeared it seemed to be a great opportunity for the little guys – because on the Internet, nobody knows you’re a dog – but what we are now seeing is that winner-takes-all applies to a degree which goes beyond anything in the bricks and mortar world.

Embarcadero promises Delphi everywhere: Mac, iOS this year, Android, Blackberry, Windows Phone to follow

I noticed the following remark from Embarcadero’s David Intersimone regarding Delphi, its application builder based on Pascal.

We are putting Delphi (and C++Builder) everywhere this year and over the next 5 years. Today you can use Delphi for Desktop, Client/Server, Multi-Tier, Cloud, Web, Web Services (REST and SOAP). This year you will also be able to build for Macintosh and iOS. Linux is also on the roadmap for the coming years along with Android, Blackberry and Windows Phone 7.

Welcome news; though Delphi enthusiasts are all too familiar with bold promises. Two years ago I interviewed Embarcadero’s CEO Wayne Williams and he promised cross-platform Delphi in 2010; but when Delphi XE appeared last year neither Mac nor 64-bit (another longstanding request) was included.

That said, I am still a big Delphi fan. Mobile is a particularly interesting prospect. I have tried numerous cross-platform mobile toolkits and they all have problems; on the other hand they are improving fast and in a couple of years things like Appcelerator’s Titanium and  Nitobi’s PhoneGap may be hard to catch.

Update: what will Delphi’s Android support look like? I would be interested to know whether Embarcadero is working on its own compiler, or whether it is partnering with RemObjects, in which case what Intersimone is referring to may be Project Cooper:

“Cooper” is a new and exciting research project going on in the RemObjects Software Labs, to bring the Oxygene language to the Java and Android platforms. The original Oxygene for .NET set out to bring a modern and “next generation” Object Pascal to the .NET world; Project “Cooper” is taking this endeavor to the next level, expanding the reach of Oxygene to the second big managed platform.

In other words, Project Cooper will compile Delphi code to Java.

Note that Embarcadero officially adopted Oxygene and offers it as its own product called Prism. It seems plausible that the same will happen with Project Cooper. Since Windows Phone is a .NET platform, there is also potential for Oxygene/Prism to target Microsoft’s mobile platform:

Windows Phone 7 – Microsoft’s new Windows Phone 7 uses Silverlight for application development,  and did I mention Delphi Prism does Silverlight?

says Jim McKeeth at RemObjects.

What about Delphi on the Mac and on iOS? There is also a possible Oxygene/Prism route here, via MonoMac: Delphi to .NET/Mono to Mac. However, I suspect Delphi developers would be disappointed if this turned out to be Embarcadero’s approach to Mac and iOS support. Programmers choose Delphi because they like compilation to native code.

Google Plus demands your location on iPhone, iPad and mobile devices – but you still have control

Last week I signed up for Google+ (you can find me here), and one of the first things I tried was signing in on an Apple iPad.

I was annoyed to see the following message:

[Image: the Google+ sign-in message demanding use of my location]

Google demanded the right to use my location with Google+, otherwise it would not let me sign in. But what if you want to use Google+ without sharing your location with the world? Since Google+ works fine on desktop PCs without location information, why should you not use it on an iPad in the same way?

This led me to investigate the W3C Geolocation API. In fact, I wrote my own web page to test how it works. I went over to Bing Maps, signed up for a developer account, and wrote a small amount of JavaScript to test it. You can try it here if you have a reasonably modern browser. I have not bothered to test for older browsers that do not support geolocation.
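The heart of the page is a call to the standard API. My page passes the coordinates to a Bing Maps control, but a minimal version looks something like this:

if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(
        function (position) {
            // Fires only once the user has given consent
            var coords = position.coords;
            alert("Latitude: " + coords.latitude + ", longitude: " + coords.longitude +
                  " (accurate to " + coords.accuracy + " metres)");
        },
        function (error) {
            alert("Could not get location: " + error.message);
        }
    );
}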

You will notice a couple of things about this test page. One is that it will ask your consent before attempting to retrieve your location. Another is that on a home broadband connection, it is rather inaccurate. According to Internet Explorer 9 I am in Berkhamsted – do not try and visit me there though, I am nowhere near.

[Image: Bing Maps placing me in Berkhamsted]

However, if you try this on an iPad or other mobile device, you will likely get much better results. If I use the iPad, even on home wifi, it shows my house dead centre of the map.

That is only if you give consent, though. Since Google+ is a web application, this consent is determined by Safari, irrespective of what terms and conditions you agreed with Google. If it bothers you, you can even go to Settings – Location Services and disable them for Safari completely:

[Image: the iPad Location Services settings, with a switch for Safari]

That said, Google could add code that tries to retrieve your location and refuses to let you use Google+ if access is denied – but it has not done so. In fact, so far the only time I have seen Safari prompt for consent in Google+ is when making a post:

[Image: Safari asking for consent to use the current location]

If you agree, this allows Google+ to geotag your post.

I am sure there are other ways Google plans to use your location in Google+. For the moment though, if you would rather maintain location privacy, Google+ still allows you to do so.

Hands on debugging an Azure application – what to do when it works locally but not in the cloud

I have been writing a Facebook application hosted on Microsoft Azure. I hit a problem where my application worked fine on the local development fabric, but failed when deployed to Azure. The application was not actually crashing; it just did not work as expected. Specifically, either the Facebook authentication or the ASP.NET Forms Authentication was failing; when I tried to log on, the log on failed.

This scenario, where the app works locally but not on Azure, is potentially a bad one because you do not have the luxury of breakpoints and variable inspection. There are several approaches. You can have the application write a log, which you could download or view by using Remote Desktop to the Azure instance. You can have the application output debug messages to HTML. Or you can use IntelliTrace.

I tried IntelliTrace. It is easy to set up: just check the box when deploying.

[Image: the Enable IntelliTrace option in the deployment dialog]

Once deployed, I tried the application. I clicked the Log On button, after which the screen flashed but still asked me to log on. The log on had failed.

[Image: the application still showing the Log On prompt]

I closed the app, opened Server Explorer in Visual Studio, drilled down into the Windows Azure Compute node and selected View IntelliTrace Logs.

[Image: View IntelliTrace Logs in Visual Studio’s Server Explorer]

The logs took a few minutes to download. Then you can view the IntelliTrace log summary, which includes a list of exceptions. You can double-click an exception to start an IntelliTrace debug session.

[Image: the IntelliTrace log summary, with its list of exceptions]

Useful, but I still could not figure out what was wrong. I also found that IntelliTrace did not show the values for local variables in its debug sessions, though it does show exceptions in detail.

Now, if you really want to debug and trace an Azure application you had better read this MSDN article which explains how to create custom debugging and trace agents and write logs to Azure storage. That seems like a lot of work, so I resorted to the old technique of writing messages to HTML.
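My variant of the technique is nothing clever: a static helper that accumulates messages, which the page then writes into its output. A sketch, with all the names my own:

using System;
using System.Text;

// Crude diagnostics for when the debugger is out of reach.
// Not thread-safe, but good enough for poking at a single test instance.
public static class DebugLog
{
    private static readonly StringBuilder messages = new StringBuilder();

    public static void Write(string message)
    {
        // Timestamped entries, ready to dump straight into the page as HTML
        messages.AppendFormat("{0:T} {1}<br/>", DateTime.UtcNow, message);
    }

    public static string Dump()
    {
        return messages.ToString();
    }
}

Calls such as DebugLog.Write("SetAuthCookie returned") go wherever you are suspicious, and a literal control on the page displays DebugLog.Dump().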

At this point I should mention something you must do in order to debug on Azure and remain sane. This is to enable Web Deploy:

[Image: the Enable Web Deploy option in the deployment dialog]

It is not that hard to set up, though you do need to enable Remote Desktop which means a trip to the Azure management portal. In my case I am behind a firewall so I needed to configure Web Deploy to use the standard SSL port. All is explained here.

Why use Web Deploy? Well, normally when you deploy to Azure the service actually builds, copies and spins up a new virtual machine image for your app. That process is fundamental to Azure’s design and means there are always at least two copies of the VM in existence. It is also slow, so if you are making changes to an app, deploying, and then testing, you will spend most of your time waiting for Azure.

Web Deploy, by contrast, writes to your existing instance, so it is many times quicker. Note that once you have your app working, it is essential to deploy it properly, since Azure might revert your app to the last VM you created.

With Web Deploy enabled I got back to work. I discovered that FormsAuthentication.SetAuthCookie was not working. The odd thing was that it worked locally, and it had worked in a previous version deployed to Azure.

Then I began to figure it out. My app runs in a Facebook canvas. Since the app is served from a different site than Facebook, cookies may be rejected. When I ran the app locally, the app was in a different IE security zone, so different rules applied.

But why had it worked before? I realised that on the previous occasion I had used Google Chrome. That was it. IE worked locally; but only Chrome worked when deployed.

I have given up trying to fix the specific problem for the moment. I have dug into it a little, and discovered that cookie handling in a Facebook canvas with IE is a long-standing problem, and that the Facebook C# SDK may have bugs in this area. It is not essential for my sample; I have found I can get by with the Facebook session. To get the user ID, for example:

FacebookWebContext.Current.Session.UserId

The time has not been wasted though as I have learned a bit about Azure debugging. I was also amused to discover that my Azure VM has activation problems:

[Image: the Azure VM reporting a Windows activation problem]

The frustration of developing for Facebook with C#

I am researching a piece on developing for Facebook with Microsoft Azure, and of course the first thing I did was to try it out.

It is not easy. The first problem is that Facebook does not care about C#. There are four SDKs on offer: JavaScript, Apple iOS, Google Android, and PHP. This has led to a proliferation of experimental and third-party SDKs which are mostly not very good.

The next problem is that the Facebook API is constantly changing. If you try to wrap it neatly in an SDK, it is likely that some things will break when the next big change comes along.

This leads to the third problem, which is that Google may not be your friend. That helpful article or discussion on developing for Facebook might be out of date now.

Now, there are a couple of reasons why it should be getting better. Jim Zimmerman and Nathan Totten at Thuzi (Totten is now a technical evangelist at Microsoft) created a new C# Facebook SDK, needing it for their own apps and frustrated with what was on offer elsewhere. The Facebook C# SDK looks like it has some momentum.

C# 4.0 actually works well with Facebook, thanks to the dynamic keyword, which makes it easier to cope with Facebook’s changes and also lets it map closely to the official PHP SDK, as Totten explains.
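The point is easy to illustrate. The Graph API returns arbitrary JSON, and dynamic lets you dereference whatever comes back without declaring types first; something like this, with the access token a placeholder:

using Facebook;

var fb = new FacebookClient("ACCESS_TOKEN");

// Get returns the JSON response; with dynamic there is no typed wrapper,
// so new fields Facebook adds do not break existing code
dynamic me = fb.Get("me");
string name = me.name;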

Nevertheless, there are still a few problems. One is that documentation for the SDK is sketchy, to say the least. There is currently no reference for it on the CodePlex site, and most of the comments are the kind that produces impressive-looking automatic documentation but actually tells you nothing of substance. Plucking one at random:

FacebookClient.GetAsync(System.Collections.Generic.IDictionary<string,object>)

Summary:
Makes an asynchronous GET request to the Facebook server.

Parameters:
parameters: The parameters.

Another problem, inherent to dynamic typing, is that IntelliSense (auto-completion in Visual Studio) has limited value. You constantly need to reference the Facebook documentation.

Finally, the SDK has changed quite a bit in different versions and some of the samples reference old versions.

In particular, I found it a struggle getting OAuth authentication and access token retrieval working, and ended up borrowing Totten’s sample code here, which mostly works – though note that the code in the sample does not cope with the same user logging out and logging in again; I fixed this by changing his InMemoryUserStore to use a ConcurrentDictionary instead of a ConcurrentBag, though there are plenty of other ways you can store users.
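For what it is worth, the change amounts to keying the store on the Facebook user ID, so that a returning user replaces his old entry instead of piling up duplicates as a bag does. A sketch of the idea; FacebookUser here is a stand-in for whatever user type you are storing:

using System.Collections.Concurrent;

public class FacebookUser
{
    public long FacebookId { get; set; }
    public string AccessToken { get; set; }
}

public class InMemoryUserStore
{
    // Keyed by Facebook user ID: adding the same user twice
    // overwrites the old entry rather than duplicating it
    private readonly ConcurrentDictionary<long, FacebookUser> users =
        new ConcurrentDictionary<long, FacebookUser>();

    public void AddOrUpdate(FacebookUser user)
    {
        users[user.FacebookId] = user;
    }

    public FacebookUser Find(long facebookId)
    {
        FacebookUser user;
        return users.TryGetValue(facebookId, out user) ? user : null;
    }
}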

I’m puzzled why Microsoft does not invest more in making this easier. Microsoft invested in Facebook and it is easy to get the impression that Microsoft and Facebook are in some sort of informal alliance versus Google. Windows Phone 7, for example, ties in closely with Facebook and is probably the best Facebook phone out there.

As it is, although I prefer coding in C# to PHP, I would say that choosing PHP as the platform for your Facebook app will present less friction.