Tag Archives: microsoft

Microsoft’s Windows 10 October 2018 update on hold after some users suffer deleted documents: what to conclude?

Microsoft has paused the rollout of the October 2018 Windows update for Windows 10 while it investigates reports of users losing data after the upgrade.


Update: Microsoft’s “known issues” page now asks affected users to “minimize your use of the affected device”, suggesting that file recovery tools will be needed to restore documents, with uncertain results.

Windows 10, first released in July 2015, was the advent of “Windows as a service.” It was a profound change. The idea is that whether in business or at home, Windows simply updates itself from time to time, so that you always have a secure and up to date operating system. Sometimes new features arrive. Occasionally features are removed.

Windows as a service was not just for the benefit of us, the users. It is vital to Microsoft in its push to keep Windows competitive with other operating systems, particularly as it faces competition from increasingly powerful mobile operating systems that were built for the modern environment. A two-year or three-year upgrade cycle, combined with the fact that many do not bother to upgrade, is too slow.

Note that automatic upgrade is not controversial on Android, iOS or Chrome OS. Some iOS users on older devices have complained of performance problems, but in general there are more complaints about devices not getting upgraded, for example because of Android operators or vendors not wanting the bother.

Windows as a service has been controversial though. Admins have worried about the extra work of testing applications. There is a Long Term Servicing Channel, which behaves more like the old 2-3 year upgrade cycle, but it is not intended for general use, even in business. It is meant for single-purpose PCs such as those controlling factory equipment, or embedded into cash machines.

Another issue has been the inconvenience of updates. “Restart now” is not something you want to see just before giving a presentation, or working on it at the last minute, for example. Auto-restart occasionally loses work if you have not saved documents.

The biggest worry though is the update going wrong. For example, causing a PC to become unusable. In general this is rare. Updates do fail, but Windows simply rolls back to the previous version, annoying but not fatal.

What about deleting data? Again it is rare; but in this case recovery is not simple. You are in the realm of disk recovery tools, if you do not have a backup. However it turns out that users have reported updates deleting data for some time. Here is one from 4 months ago:

[screenshot: a user’s report of data loss after an earlier update]

Why is the update deleting data? It is not yet clear, and there may be multiple reasons, but many of the reports I have seen refer to user documents stored outside the default location (C:\users\[USERNAME]\). Some users with problems have multiple folders called Documents. Some have moved the location the proper way (Location tab in properties of special folders like Documents, Downloads, Music, Pictures) and still had problems.
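If you want to check where your own known folders currently point, Windows records the redirections per user in the registry. A quick read-only check (note that Documents appears under its legacy value name, Personal):

# Read-only: list where this user's known folders currently point
Get-ItemProperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders'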

Look through miglog.xml though (here is how to find it) and you will find lots of efforts to make sense of the user’s special folder layout. This is not my detailed diagnosis of the issue, just an observation having ploughed through long threads on Reddit and elsewhere; of course these threads are full of noise.

Here is an example of a user who suffered the problem and had an unusual setup: the location of his special folders had been moved (before the upgrade) to an external drive, but there was still important data in the old locations.

We await the official report with interest. But what can we conclude, other than to take backups (which we knew already)?

Two things. One is that Microsoft needs to do a better job of prioritising feedback from its Insider hub. Losing data is a critical issue. The feedback hub, like the forums, is full of noise; but it is possible to identify critical issues there.

This is related of course to the suspicion that Microsoft is now too reliant on unpaid enthusiast testers, at the expense of thorough internal testing. Both are needed and both, I am sure, exist. What though is the proportion, and has internal testing been reduced on the back of these widespread public betas?

The second thing is about priorities. There is a constant frustration that vendors (and Microsoft is not alone) pay too much attention to cosmetics and new features, and not enough to quality and fixing long-standing bugs and annoyances.

What do most users do after Windows upgrades? They are grateful that Windows is up and running again, and go back to working in Word and Excel. They do not care about cosmetic changes or new features they are unlikely to use. They do care about reliability. Such users are not wrong. They deserve better than to find documents missing.

One final note. Microsoft released Windows 10 1809 on 2nd October. However the initial rollout was said to be restricted to users who manually checked Windows Update or used the Update Assistant. Microsoft said that automatic rollout would not begin until Oct 9th. In my case though, on one PC, I got the update automatically (no manual check, no Insider Build setting) on October 3rd. I have seen similar reports from others. I got the update on an HP PC less than a year old, and my guess is that this is the reason:

With the October 2018 Update, we are expanding our use of machine learning and intelligently selecting devices that our data and feedback predict will have a smooth update experience.

In other words, my PC was automatically selected to give Microsoft data on upgrades expected to go smoothly. I am guessing though. I am sure I did not trigger the update myself, since I was away all day on the 2nd October, and buried in work on the 3rd when the update arrived (I switched to a laptop while it updated). I did not lose data, even though I do have a redirected Documents folder. I did see one anomaly: my desktop background was changed from blue to black, and I had to change it back manually.

What should you do if you have this problem and do not have backups? Microsoft asks you to call support. As far as I can tell, the files really are deleted so there will not be an easy route to recovery. The best chance is to use the PC as little as possible; do a low-level copy of the hard drive if you can. Shadow Copy Explorer may help. Another nice tool is Zero Assumption Recovery. What you recover is dependent on whether files have been overwritten by other files or not.

Update: Microsoft has posted an explanation of why the data loss occurred. It’s complicated and all to do with folder redirection (with a dash of OneDrive sync). It affected some users who redirected “known folders” like Documents to another location. The April 2018 update created spurious empty folders for some of these users. The October 2018 update therefore sought to delete them, but in doing so also deleted non-empty folders. It still looks like a bad bug to me: these were legitimate folders for storing user data and should not have been removed if not empty.

More encouraging is that Microsoft has made some changes to its feedback hub so that users can “provide an indication of impact and severity” when reporting issues. The hope is that Microsoft will find reports of severe bugs more easily and therefore take action.

Updated 8th Oct to remove references to OneDrive Sync and add support notes. Updated 10th Oct with reference to Microsoft’s explanatory post.

Linux applications and .NET Core on a Chromebook makes this an increasingly interesting device

I have been writing about Google Chromebooks of late and as part of my research went out and bought one, an HP Chromebook 14 that cost me less than £200. It runs an Intel Celeron N3350 processor and has a generous (at this price) 32GB storage; many of the cheaper models have only 16GB.

This is a low-end notebook for sure, but still boots quickly and works fine for general web browsing and productivity applications. Chrome OS (the proprietary version of the open source Chromium OS) is no longer an OS that essentially just runs Google’s Chrome browser, though that is still the main intent. It has for some time been able to run Android applications; these run in a container which itself runs Android. Android apps run fairly well though I have experienced some anomalies.

Recently Google has added support for Linux applications, though this is still in beta. The main motivation for this seems to be to run Android Studio, so that Googlers and others with smart Pixelbooks (high-end Chromebooks that cost between £999 and £1,699) can do a bit more with their expensive hardware.

I had not realised that even a lowly HP Chromebook 14 is now supported by the beta, but when I saw the option in settings I jumped at it.


It took a little while to download but then I was able to open a Linux terminal. Like Android, Linux runs in a container. It is also worth noting that Chrome OS itself is based on Linux so in one sense Chromebooks have always run Linux; however they have been locked down so that you could not, until now, install applications other than web apps or Android.

Linux is therefore sandboxed. It is configured so that you do not have access to the general file system. However the Chromebook Files application has access to your user files in both Chrome OS and Linux.


I found little documentation for running Linux applications so here are a few notes on my initial stumblings.

First, note that the Chromebook trackpad has no right-click. To right-click you do Alt-Click. Useful, because this is how you paste from the clipboard into the Linux terminal.

Similarly, there is no Delete key. To Delete you do Alt-Backspace.

I attribute these annoyances to the fact that Chrome OS was mostly developed by Mac users.

Second, no Linux desktop is installed. I did try installing the lightweight LXDE desktop, with partial success: it installs, but does not work properly.

The idea is that you install GUI applications which run in their own window. It is integrated so that once installed, Linux applications appear in the Chromebook application menu.

I installed Firefox ESR (Extended Support Release).  Then I installed an application which promises to be particularly useful for me, Visual Studio Code. Next I installed the .NET Core SDK, following the instructions for Debian.
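For the record, the steps amounted to something like this in the Linux terminal (a sketch; the VS Code .deb file name and package versions will vary, and I am assuming the container is Debian 9, as it was at the time of writing):

# Firefox ESR from the Debian repositories
sudo apt-get update
sudo apt-get install firefox-esr

# Visual Studio Code: download the amd64 .deb from code.visualstudio.com, then
sudo apt-get install ./code_*_amd64.deb

# .NET Core SDK: register Microsoft's package feed for Debian 9, then install
wget -q https://packages.microsoft.com/config/debian/9/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo apt-get update
sudo apt-get install apt-transport-https dotnet-sdk-2.1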


Everything worked, and after installing the C# extension for VS Code I am able to debug and run .NET Core applications.
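A quick smoke test from the terminal confirms the SDK works (the project name is arbitrary):

dotnet new console -o hello
cd hello
dotnet run
# prints "Hello World!"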

I understand that you will not be so lucky with VS Code if you have an ARM Chromebook. Intel x86 is the winner for compatibility.

What is significant to me is not only that you can now run desktop applications on a Chromebook, but also that you can work on a Chromebook without needing to be deeply hooked into the Google ecosystem. You still need a Google account of course, for log in and the Play Store.

You will also note from the screenshot above that Chrome OS is no longer just about a full-screen web browser. Multiple overlapping windows, just like Windows and Mac.

These changes might persuade me to spend a little more on a Chromebook next time around. Certainly the long battery life is attractive. Following a tip, I disabled Bluetooth, and my Chromebook battery app is reporting 48% remaining, 9 hrs 23 minutes. A little optimistic I suspect, but still fantastic.

Postscript: I was always a fan of the disliked Windows RT, which combined a locked-down operating system with the ability to run Windows applications. Maybe container technology is the answer to the conundrum of how to provide a fully capable operating system that is also protected from malware. Having said which, there is no doubt that these changes make Chromebooks more vulnerable to malware; even if it only runs in the Linux environment, it could be damaging and steal data. The OS itself though will be protected.

Microsoft Azure Stack: a matter of compliance

At the Ignite conference last week in Orlando, Microsoft’s hardware partners were showing off their latest Azure Stack boxes.

In conversation, one mentioned to me that Azure Stack was selling better in Europe than in the USA. Why? Because stricter compliance regulations (perhaps alongside the fact that the major cloud platforms are all based in North America) make Azure Stack more attractive in Europe.

[Image: Lenovo’s Azure Stack]

Azure Stack is not just “Azure for your datacentre”. It is a distinctive way to purchase IT infrastructure, where you buy the hardware but pay for the software with a usage-based model.

Azure / Azure Stack VMs are resilient, so you cannot compare the value directly with simply running up a VM on your own server. Azure Stack is a premium option. The benefits are real. Microsoft mostly looks after the software, you can use the excellent Azure management tools, and you get deep integration with Azure in the cloud. Further, you can reduce the cost by scaling back at times of low demand, which is especially easy if you use abstracted services such as App Service rather than raw VMs.

How big is the premium? I would be interested to hear from anyone who has done a detailed comparison, but my guess is that running your own servers with Windows Server Datacenter licenses (allowing unlimited VMs once all the cores are licensed) is substantially less expensive.

You can see therefore that there is a good fit for organizations that want to be all-in on the cloud, but need to run some servers on-premises for compliance reasons.

Microsoft will add an Azure-hosted Windows (and Office) Virtual Desktop to Office 365 for small businesses

What if small businesses could add a virtual Windows desktop option to their Office 365 subscription, enabling users to log on to a remote desktop and run the full desktop versions of Office as well as other Windows applications, without the hassle of managing local PC desktops?

This is coming, according to information gleaned from the announcements here at Microsoft’s Ignite conference in Orlando. It is all part of a new Azure service called Windows Virtual Desktop, which sees Microsoft getting serious about desktop virtualisation on Azure for the first time.

[Image: Microsoft Ignite is under way in Orlando, Florida]

Windows desktop virtualization is already available on Azure, but Microsoft currently points you to its partners Citrix or VMware for this. You will still be able to do this, and the third-party solutions still have advantages, especially in terms of management and imaging tools, but of course they are expensive. Windows Virtual Desktop will be interesting to large organisations which are already licensed for virtual desktop access via subscriptions including Windows 10 Enterprise E3 and E5.

There will apparently be options for both hosted Windows 10, and shared hosting using Remote Desktop Services, but exactly what will be in the Office 365 offering is not yet clear.

A cost-effective solution for small businesses wanting a hosted virtual desktop on Azure is something new though and if Microsoft prices it right, I would expect it to be popular. Virtual desktops are handy for staff working at home or on the go, for example.

Will the pricing be right? That is not yet known of course. But it does look hopeful that Microsoft may be moving away from its policy of making Windows desktop virtualisation deliberately expensive in order to protect its licensing income. 

Microsoft’s 82 Ignite announcements: what really matters

Microsoft’s PR team has helpfully summarised many of the announcements at the Ignite event, kicking off today in Orlando. I count 82, but you might make it fewer or many more, depending on what you call an announcement. And that is not including the business apps announcements made at the end of last week, most notably the arrival of the HoloLens-based Remote Assist in Dynamics 365.


Not all announcements are equal. Some, like the release of Windows Server 2019, are significant but not really news; we knew it was coming around now, and the preview has been around for ages. Others, like larger Azure managed disk sizes (8, 16 and 32TB) are cool if that is what you need, but hardly surprising; the specification of available cloud infrastructure is continually being enhanced.

Note that this post is based on what Microsoft chose to reveal to press ahead of the event, and there is more to come.

It is worth observing though that of these 82 announcements, only 3 or 4 are not cloud related:

  • SQL Server 2019 public preview
  • [Windows Server 2019 release] – I am bracketing this because many of the new features in Server 2019 are Azure-related, and it is listed under the heading Azure Infrastructure
  • Chemical Simulation Library for Microsoft Quantum
  • Surface Hub 2 release promised later this year

Microsoft’s journey from being an on-premises company, to being a service provider, is not yet complete, but it is absolutely the focus of almost everything new.

I will never forget an attendee at a previous Microsoft event a few years back telling me, “this cloud stuff is not relevant to us. We have our own datacenter.” I cannot help wondering how much Office 365 and/or Azure that person’s company is consuming now. Of course on-premises servers and applications remain important to Microsoft’s business, but it is hard to swim against the tide.

Ploughing through 82 announcements would be dull for me to write and you to read, so here are some things that caught my eye, aside from those already mentioned.

1. Azure confidential computing in public preview. A new series of VMs using Intel’s SGX technology lets you process data in a hardware-enforced trusted execution environment.

2. Cortana Skills Kit for Enterprise. Currently invite-only, this is intended to make it easier to write business bots “to improve workforce productivity” – or perhaps, an effort to reduce the burden on support staff. I recall examples of using conversational bots for common employee queries like “how much holiday allowance do I have remaining, and which days can I take off?”. As to what is really new here, I have yet to discover it.

3. A Python SDK for Azure Machine Learning. Important given the popularity of Python in this space.

4. Unified search in Microsoft 365. Is anyone using Delve? Maybe not, which is why Microsoft is bringing a search box to every cloud application, using Microsoft Graph, AI and Bing to search across all company data and bring you personalized results. Great if it works.

5. Azure Digital Twins. With public preview promised on October 15, this lets you build “comprehensive digital models of any physical environment”. Once you have the model, there are all sorts of possibilities for optimization and safe experimentation.

6. Azure IoT Hub to support the Android Things platform via the Java SDK. Another example of Microsoft saying, use what you want, we can support it.

7. Azure Data Box Edge appliance. The assumption behind Edge computing is both simple and compelling: it pays to process data locally so you can send only summary or interesting data to the cloud. This appliance is intended to simplify both local processing and data transfer to Azure.

8. Azure Functions 2.0 hits general availability. Supports .NET Core, Python.

9. Helm repositories in Azure Container Registry, now in public preview.

10. Windows Autopilot support extended to existing devices. This auto-configuration feature previously only worked with new devices. Requires Windows 10 October Update, or automated upgrade to this.

Office and Office 365

In the Office 365 space there are some announcements:

1. LinkedIn integration with Office 365. Co-author documents and send emails to LinkedIn contacts, and surface LinkedIn information in meeting invites.

2. Office Ideas. Suggestions as you work to improve the design of your document, or suggest trends and charts in Excel. Sounds good but I am sceptical.

3. OneDrive for Mac gets Files on Demand. A smarter way to use cloud storage, downloading only files that you need but showing all available documents in Mac Finder.

4. New staff scheduling tools in Teams. Coming in October. “With new schedule management tools, managers can now create and share schedules, employees can easily swap shifts, request time off, and see who else is working.” Maybe not a big deal in itself, but Teams is huge as I previously noted. Apparently the largest Team is over 100,000 strong now and there are 50+ out there with 10,000 or more members.

Windows Virtual Desktop

This could be nothing, or it could be huge. I am working on the basis of a one-paragraph statement that promises “virtualized Windows and Office on Azure … the only cloud-based service that delivers a multi-user Windows 10 experience, is optimized for Office 365 Pro Plus … with Windows Virtual Desktop, customers can deploy and scale Windows and Office on Azure in minutes, with built-in security and compliance.”

Preview by the end of 2018 is targeted.

Virtual Windows desktops are already available on Azure, via partnership with Citrix or VMware Horizon, but Microsoft has held back from what is technically feasible in order to protect its Windows and Office licensing income. By the time you have paid for licenses for Windows Server, Remote Access per user, Office per user, and whatever third-party technology you are using, it gets expensive.

This is mainly about licensing rather than technology, since supporting multiple users running Office applications is now a light load for a modern server.

If Microsoft truly gets behind a pure first-party solution for hosted desktops on Azure at a reasonable cost, the take up would be considerable since it is a handy solution for many scenarios. This would not please its partners though, nor the many hosting companies which offer this.

On the other hand, Microsoft may want to compete more vigorously with Amazon Web Services and its Workspaces offering. Workspaces is still Windows, but of course integrates nicely with AWS solutions for storage, directory, email and so on, so there is a strategic aspect here.

Update: A little more on Windows Virtual Desktop here.

More details soon.

Microsoft Office 365 and Google G-Suite: why multi-factor authentication is now essential

Businesses using Office 365, Google G-Suite or other hosted environments (but especially Microsoft and Google) are vulnerable to phishing attacks that steal user credentials. Here is a recent example, which sailed through Microsoft’s spam and malware filters despite the AI and other techniques Microsoft uses to catch such messages.

[screenshot: the phishing email]

If a user clicks the link and signs in, the bad guys have their credentials. What are the consequences?

– at best, a bunch of spam sent out from the user’s account, causing embarrassment and a quick password reset.

– at worst, something much more serious. Once an unauthorised party has user credentials, there are all sorts of social engineering possibilities to escalate the attack, obtain other credentials, or see what interesting data can be found in collaborative document stores and shared applications.

– another risk is that the attacker discovers information about an organisation’s customers and contacts them to advise of new bank details, which of course direct payments to the attacker’s account.

The truth is there are many risks and it is worth every effort to prevent this happening in the first place.

However, it is hard to educate every user to the point where you can be confident they will never click a link in an email such as the one above, or reveal their password in some other way, such as reusing one that has already been leaked (check here to find out, for example).

Multi-factor authentication (MFA), which is now easy to set up on both Office 365 and G-Suite, helps matters by requiring users to enter a one-time code from their mobile, either via an authenticator app or a text message, before they can log in. It costs nothing extra, and now is the time to set it up if you have not already.
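For Office 365, you can switch MFA on per user in the admin portal, or script it. At the time of writing the MSOnline PowerShell module supports the pattern below; treat it as a sketch rather than a definitive recipe, and note that the user name is illustrative:

# Requires the MSOnline module and an admin sign-in
Connect-MsolService

# Build a requirement object that enables MFA for all relying parties
$mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$mfa.RelyingParty = "*"
$mfa.State = "Enabled"

# Apply it to a user (hypothetical address)
Set-MsolUser -UserPrincipalName "alice@example.com" -StrongAuthenticationRequirements @($mfa)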

It seems to me that in some ways the prevalence of a few big providers in hosted email and applications has made matters easier for the hackers. They know that a phishing attack simulating, say, Office 365 support will find many potential victims.

The more positive view is that even small businesses can now easily use Enterprise-grade security, if they choose to take advantage.

I do not think MFA is perfect. It usually depends on a mobile phone, and given that possession of a user’s phone also often enables you to reset the password, there is a risk that the mobile becomes the weak link. It is well known that social engineering against mobile providers can persuade them to cancel a SIM and issue a new one to an impostor.

That said, hijacking a phone is a lot more effort than sending out a million phishing emails, and on balance enabling MFA is well worth it.

Want to connect PowerBI to Dynamics 365 CRM on-premises? Good luck with the official documentation

Microsoft champions hybrid IT, that is, some IT on-premises, some in the cloud; but its cloud-first strategy means that on-premises customers sometimes have a hard time getting the most from their software.

I have posted before about Dynamics CRM, which is very expensive but in places oddly sloppy, as if Microsoft has quality control issues or just does not care about some of the details in the product.

I encountered another example of this when attempting to configure Power BI desktop to connect to an on-premises instance of Dynamics CRM. At one time this was not supported, but it is now possible using OAuth to authenticate (presuming you have an internet-facing CRM deployment, which is generally the case).

There is an official document explaining how to set this up here.

That said, it seems that whoever wrote the document did not follow through the steps to check that they work, because they do not.

The first error is in the documentation for enabling OAuth, which tells you to use ClaimsSettings in PowerShell:

[screenshot: the documentation’s ClaimsSettings PowerShell steps]

However this is not the right setting, and the steps given will give you an error. The correct setting is called OAuthClaimsSettings. It is disabled by default. Set it to enabled using similar steps to those above.
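For reference, what worked for me follows the same pattern as the documented steps, run from the Deployment PowerShell on the CRM server, but with the corrected setting name:

# Run on the CRM deployment server
Add-PSSnapin Microsoft.Crm.PowerShell
$settings = Get-CrmSetting -SettingType OAuthClaimsSettings
$settings.Enabled = $true
Set-CrmSetting -Setting $settings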

Second, the document tells you to run the Add-Adfsclient command “on the PC where you are running Power BI Desktop”. In fact this must be run on the server where ADFS is installed.

The command itself is not all that reassuring:

Add-AdfsClient -ClientId "a672d62c-fc7b-4e81-a576-e60dc46e951d" -Name "Microsoft Power BI" -RedirectUri @("https://de-users-preview.sqlazurelabs.com/account/reply/", "https://preview.powerbi.com/views/oauthredirect.html") -Description "ADFS OAuth 2.0 client for Microsoft Power BI"

Note the word “preview” that appears a couple of times in this mysterious command.

Even if you do all this, many people have struggled with connection issues. For myself, when I got this working on a test setup, I still got the error:

OData: The feed’s metadata document appears to be invalid. Error: The metadata document could not be read from the message content.

The fix in my case was to use "https://orgname.yourdomain/XRMServices/2011/Organizationdata.svc" for the feed, instead of "https://orgname.yourdomain/api/data/v8.2/". Then I was up and running.


Maybe someone just needs to tell Microsoft to fix its documentation? A good point, but Cobalt’s Chris Capistran pointed out the errors back in April and nothing has changed.

Of course this sort of thing is not all bad for Microsoft partners, who can come in with superior knowledge and get things working.

Windows Server 2019 Essentials may be Microsoft’s last server offering for small businesses

Microsoft’s Windows Server Team has posted about Windows Server 2019 Essentials, stating that:

“There is a strong possibility that this could be the last edition of Windows Server Essentials.”

Server Essentials is an edition aimed at small organisations that includes 25 Client Access Licenses (CALs). If you go beyond that you have to upgrade to Windows Server Standard at a much higher cost. There are some restrictions in the product, such as lack of support for Remote Desktop Services (other than for admin use).


Microsoft has already greatly reduced its server offering for small businesses. Small Business Server, the last version of which was Windows Small Business Server 2011, bundled Exchange, SharePoint and System Update Services, and supported up to 75 users.

“Capabilities that small businesses need, like file sharing and collaboration are best achieved with a cloud service like Microsoft 365,” says the team, though also observing that Server 2019 will be supported according to the normal timeline, which means you will get something like mainstream support until 2024 and extended support until 2029 or so.

Good decision? There are several ways to look at this. Microsoft’s desire for small businesses to adopt cloud is not without self-interest. The subscription model is great for vendors, giving them a consistent flow of income and a vehicle for upselling.

Cloud also has specific benefits for small businesses. Letting Microsoft manage your email server makes huge sense, for example. The cloud model has brought many enterprise-grade features to organisations which would otherwise lack them.

Despite that, I do not altogether buy the “cloud is always best” idea. From a technical point of view, running stuff locally is more efficient, and from a business point of view, it can be cheaper. Of course there is also a legacy factor, as many applications are designed to run on a server on the local network.

Businesses do have a choice though. Linux works well as a file and print server, and pretty well as a Windows domain controller.

Network attached storage (NAS) devices like those from Synology and Qnap are easy to manage and include a bunch of features which are small-business friendly, including directory services and even mail servers if you still want to do that.

A common problem though with small businesses and on-premises servers (whether Windows or Linux) is weak backup. It makes sense to use the cloud for that, if nothing else.

Although it is tempting to rail at Microsoft for pulling the rug from under small businesses with their own servers, the truth is that cloud does mostly make better sense for them, especially with the NAS fallback for local file sharing.

Should you convert your Visual Basic .NET project to C#? Why and why not…

When Microsoft first started talking about Roslyn, the .NET compiler platform, one of the features described was the ability to take some Visual Basic code and “paste as C#”, or vice versa.

Some years later, I wondered how easy it is to convert a VB project to C# using Roslyn. The SharpDevelop team has a nice tool for this, CodeConverter, which promises to “Convert code from C# to VB.NET and vice versa using Roslyn”. You can also find this on the Visual Studio marketplace. I installed it to try out.


Why would you do this though? There are several reasons, the foremost of which is cross-platform support. The Xamarin framework can use VB to some extent, but it is primarily a C# framework. .NET Core was developed first for C#. Microsoft has stated that “with regard to the cloud and mobile, development beyond Visual Studio on Windows and for non-Windows platforms, and bleeding edge technologies we are leading with C#.”

Note though that Visual Basic is still under active development and history suggests that your Windows VB.NET project will continue running almost forever (in IT terms that is). Even Visual Basic 6.0 applications still run, though you might find it convenient to keep an old version of Windows running for the IDE.

Still, if converting a project is just a right-click in Visual Studio, you might as well do it, right?


I tried it, on a moderately-sized VB DLL project. Based on my experience, I advise caution – though acknowledging that the converter does an amazing job, and is free and open source. There were thousands of errors which will take several days of effort to fix, and the generated code is not as elegant as code written for C#. In fact, I was surprised at how many things went wrong. Here are some of the issues:

The tool makes use of the Microsoft.VisualBasic namespace to simplify the conversion. This namespace provides handy VB features like DateDiff, which calculates the difference between two dates. The generated project failed to set a reference to this assembly, generating lots of errors about unknown objects called Information, Strings and so on. This is quick to fix. Less good is that statements using this assembly tend to be more convoluted, making maintenance harder. You can often simplify the code and remove the reference; but of course you might introduce a bug with careless typing. It is probably a good idea to remove this dependency, but it is not a problem if you want the quickest possible port.
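To illustrate (my own example, not the converter’s actual output): given two Date variables, VB’s DateDiff converts to a call into the compatibility assembly, where idiomatic C# would use a TimeSpan:

' VB original
Dim days = DateDiff(DateInterval.Day, startDate, endDate)

// Converted C#: works, but needs a reference to Microsoft.VisualBasic
// (the DateInterval enum also comes from that assembly)
long days = Microsoft.VisualBasic.DateAndTime.DateDiff(DateInterval.Day, startDate, endDate);

// Idiomatic C#: no extra dependency
double days = (endDate - startDate).TotalDays;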

Moving from a case-insensitive language to a case-sensitive language is a problem. Visual Studio does a good job of making your VB code mostly consistent with regard to case, but that is not a fix. The converter was unable to fix case-sensitivity issues, and introduced some of its own (Imports System.Text became using System.text and threw an error). There were problems with inheritance, and even subtle bugs. Consider the following, admittedly ugly and contrived, code:

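Something along these lines illustrates the problem (a sketch of the pattern; the original appeared only as a screenshot):

' VB: the field is Amount, the parameter is amount.
' VB is case-insensitive, so Amount in the body binds to the PARAMETER.
Public Class Widget
    Private Amount As Integer = 50

    Public Function Twice(ByVal amount As Integer) As Integer
        Return Amount * 2
    End Function
End Class

// The literal C# conversion also compiles, but Amount now binds to the FIELD:
public class Widget
{
    private int Amount = 50;

    public int Twice(int amount)
    {
        return Amount * 2;   // always 100, whatever the argument
    }
}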

Here, the VB coder has used different case for a parameter and for referencing the parameter in the body of the method. Unfortunately another variable with the different case is also accessible. The VB code and the converted C# code both compile but return different results. Incidentally, the VB editor will work very hard to prevent you writing this code! However it does illustrate the kind of thing that can go wrong and similar issues can arise in less contrived cases.

C# is stricter than VB, which causes errors in conversion. In most cases this is not a bad thing, but it can cause headaches. For example, VB will let you pass object members ByRef but C# will not. In fact, VB will let you pass anything ByRef, even literal values, which is a puzzle! So this compiles and runs:

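For example (my illustration of the pattern):

Module Demo
    Sub Increment(ByRef value As Integer)
        value += 1
    End Sub

    Sub Main()
        ' VB copies the literal into a hidden temporary, so this is legal;
        ' the C# equivalent, Increment(ref 42), will not compile
        Increment(42)
    End Sub
End Module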

Another example is that in VB you can use an existing variable as the iteration variable, but in C# foreach you cannot.
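That is, this VB pattern has no direct C# equivalent (sketch):

' VB lets you reuse a variable declared outside the loop
Dim values = {1, 2, 3}
Dim i As Integer
For Each i In values
    ' i is an existing variable, reused as the iteration variable
Next

// C# insists that foreach declares its own iteration variable
int[] values = { 1, 2, 3 };
foreach (int i in values)
{
    // ...
}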

Collections often go wrong. In VB you use an Item property to access the members of a collection like a DataReader. In C# this is omitted, but the converter does not pick this up.
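A typical case, sketched here with reader assumed to be an open SqlDataReader:

' VB: the default Item property, written explicitly or implicitly
Dim name As String = CStr(reader.Item("CustomerName"))
Dim city As String = CStr(reader("City"))

// C#: indexer syntax; there is no Item member to call
string name = (string)reader["CustomerName"];
string city = (string)reader["City"];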

Overloading sometimes goes wrong. The converter does not always successfully convert overloaded methods. Sometimes parameters get stripped away and a spurious “new” modifier is added.

Bitwise operators are not correctly converted.

VB allows indexed properties and properties with parameters. C# does not. The converter simply strips out the parameters so you need to fix this by hand. See https://stackoverflow.com/questions/2806894/why-c-sharp-doesnt-implement-indexed-properties if the language choices interest you.
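For example, a parameterised VB property has to be reworked by hand, typically as a method, or as an indexer if it was the class’s default property (a sketch, with a hypothetical _captions array field):

' VB: a property with a parameter
Public ReadOnly Property Caption(ByVal index As Integer) As String
    Get
        Return _captions(index)
    End Get
End Property

// C#: rewritten as a method...
public string GetCaption(int index) => _captions[index];

// ...or as an indexer, if Caption was the class's default property
public string this[int index] => _captions[index];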

There is more, but the above gives some idea about why this kind of conversion may not be straightforward.

It is probably true that the higher the standard of coding in the original project, the more straightforward the conversion is likely to be, the caveat being that more advanced language features are perhaps more likely to go wrong.

Null strings behave differently

Another oddity is that VB treats a String set to null (Nothing) as equivalent to an empty string:

Dim s As String = Nothing

If (s = String.Empty) Then ' TRUE in VB
    MsgBox("TRUE!")
End If

C# does not:

string s = null;

if (s == String.Empty) // FALSE in C#
{
    // won't run
}

Same code, different result, which can have unfortunate consequences.
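When fixing up converted code, the least surprising translation of the VB test is usually to make the intent explicit:

if (string.IsNullOrEmpty(s))
{
    // runs for both null and "", matching the VB comparison above
}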

Worth it?

So is it worth it? It depends on the rationale. If you do not need cross-platform, it is doubtful. The VB code will continue to work fine, and you can always add C# projects to a VB solution if you want to write most new code in C#.

If you do need to move outside Windows though, conversion is worthwhile, and automated conversion will save you a ton of manual work even if you have to fix up some errors.

There are two things to bear in mind though.

First, have lots of unit tests. Strange things can happen when you port from one language to another. Porting a project well covered by tests is much safer.

Second, be prepared for lots of refactoring after the conversion. Aim to get rid of the Microsoft.VisualBasic dependency, and use the stricter standards of C# as an opportunity to improve the code.

Microsoft’s Dynamics CRM 2016/365: part brilliant, part perplexing, part downright sloppy

I have just completed a test installation of Microsoft’s Dynamics CRM on-premises; it is now called Dynamics 365 but the name change is cosmetic, and in fact you begin by installing Dynamics CRM 2016 and it becomes Dynamics 365 after applying a downloaded update.

Microsoft’s Dynamics product has several characteristics:

1. It is fantastically useful if you need the features it offers

2. It is fantastically expensive for reasons I have never understood (other than, “because they can”)

3. It is tiresome to install and maintain

I wondered if the third characteristic had improved since I last did a Dynamics CRM installation, but I feel it has not much changed. Actually the installation went pretty much as planned, though it remains fiddly, but I wasted considerable time setting up email synchronization with Exchange (also on-premises). This is a newish feature called Server-Side Synchronization, which replaces the old Email Router (which still exists but is deprecated). I have little love for the Email Router, which, when anything goes wrong, fills the event log with huge numbers of identical errors such that you have to disable it before you can discover what is really going wrong.

Email is an important feature as automated emails are essential to most CRM systems. The way the Server-Side Synchronization works is that you configure it, but CRM mailboxes are disabled until you complete a “Test and Enable” step that sends and receives test emails. I kept getting failures. I tried every permutation I could think of:

  • Credentials set per-user
  • Credentials set in the server profile (uses Exchange Impersonation to operate on behalf of each user)
  • Windows authentication (only works with Impersonation)
  • Basic authentication enabled on Exchange Web Services (EWS)

All failed, the most common error being “Http server returned 401 Unauthorized exception.” The troubleshooting steps here say to check that the email address of the user matches that of the mailbox; of course it did.

An annoyance is that on my system the Test and Enable step does not always work (in other words, it is not even tried). If I click Test and Enable in the Mailbox configuration window, I get this dialog:

[screenshot: the Test and Enable dialog]

However if I click OK, nothing happens and the dialog stays. If I click Cancel nothing happens and the dialog stays. If I click X the dialog closes but the test is not carried out.

Fortunately, you can also access Test and Enable from the Mailbox list (select a mailbox and it appears in the ribbon). A slightly different dialog appears and it works.

I was about to give up. I set Windows authentication in the server profile, which is probably the best option for most on-premises setups, and tried the test one more time. It worked. I do not know what changed. As this tech note (which is about server-side synchronization using Exchange Online) remarks:

If you get it right, you will hear Microsoft Angels singing

But what’s this about sloppy? There is plenty of evidence. Things like the non-functioning dialog mentioned above. Things like the date which shows for a mailbox that has not been tested:

[screenshot: the date shown for a mailbox that has not been tested]

Or leaving aside the email configuration, things like the way you can upload Word templates for use in processes, but cannot easily download them (you can use a tool like the third-party XRMToolbox).

And the script error dialog which has not changed for a decade.

Or the warning you get when viewing a report in Microsoft Edge, that the browser is not supported:

[screenshot: the browser-not-supported warning in Edge]

so you click the link and it says Edge is supported.

Or even the fact that whenever you log on you get this pesky dialog:

[screenshot: the dialog which appears at every logon]

So you click Don’t show this again, but it always reappears.

It seems as if Microsoft does not care much about the fit and finish of Dynamics CRM.

So why do people persevere? In fact, the Dynamics business is growing for Microsoft, largely because of Dynamics 365 online and its integration with Office 365. The cloud is one reason, since it removes at least some of the admin burden. The other thing is that it brings together a set of features that make it invaluable to many businesses. You can use it not only for sales and marketing, but also for service case management, quotes, orders and invoices.

It is highly customizable, which is a mixed blessing as your CRM installation becomes increasingly non-standard, but does mean that most things can be done with sufficient effort.

In the end, it is all about automation, and it can work like magic with carefully designed custom processes.

With all those things to commend it, it would pay Microsoft to work at making the user interface less annoying and the administration less prone to perplexing errors.