Microsoft’s 82 Ignite announcements: what really matters

Microsoft’s PR team has helpfully summarised many of the announcements at the Ignite event, kicking off today in Orlando. I count 82, but you might make it fewer or many more, depending on what you call an announcement. And that is not including the business apps announcements made at the end of last week, most notably the arrival of the HoloLens-based Remote Assist in Dynamics 365.


Not all announcements are equal. Some, like the release of Windows Server 2019, are significant but not really news; we knew it was coming around now, and the preview has been around for ages. Others, like larger Azure managed disk sizes (8, 16 and 32TB), are cool if that is what you need, but hardly surprising; the specification of available cloud infrastructure is continually being enhanced.

Note that this post is based on what Microsoft chose to reveal to press ahead of the event, and there is more to come.

It is worth observing though that of these 82 announcements, only 3 or 4 are not cloud related:

  • SQL Server 2019 public preview
  • [Windows Server 2019 release] – I am bracketing this because many of the new features in Server 2019 are Azure-related, and it is listed under the heading Azure Infrastructure
  • Chemical Simulation Library for Microsoft Quantum
  • Surface Hub 2 release promised later this year

Microsoft’s journey from being an on-premises company, to being a service provider, is not yet complete, but it is absolutely the focus of almost everything new.

I will never forget an attendee at a previous Microsoft event a few years back telling me, “this cloud stuff is not relevant to us. We have our own datacenter.” I cannot help wondering how much Office 365 and/or Azure that person’s company is consuming now. Of course on-premises servers and applications remain important to Microsoft’s business, but it is hard to swim against the tide.

Ploughing through 82 announcements would be dull for me to write and you to read, so here are some things that caught my eye, aside from those already mentioned.

1. Azure confidential computing in public preview. A new series of VMs using Intel’s SGX technology lets you process data in a hardware-enforced trusted execution environment.

2. Cortana Skills Kit for Enterprise. Currently invite-only, this is intended to make it easier to write business bots “to improve workforce productivity” – or perhaps, an effort to reduce the burden on support staff. I recall examples of using conversational bots for common employee queries like “how much holiday allowance do I have remaining, and which days can I take off?”. What is really new here, I have yet to discover.

3. A Python SDK for Azure Machine Learning. Important given the popularity of Python in this space.

4. Unified search in Microsoft 365. Is anyone using Delve? Maybe not, which is why Microsoft is bringing a search box to every cloud application, which is meant to use Microsoft Graph, AI and Bing to search across all company data and bring you personalized results. Great if it works.

5. Azure Digital Twins. With public preview promised on October 15, this lets you build “comprehensive digital models of any physical environment”. Once you have the model, there are all sorts of possibilities for optimization and safe experimentation.

6. Azure IoT Hub to support the Android Things platform via the Java SDK. Another example of Microsoft saying, use what you want, we can support it.

7. Azure Data Box Edge appliance. The assumption behind Edge computing is both simple and compelling: it pays to process data locally so you can send only summary or interesting data to the cloud. This appliance is intended to simplify both local processing and data transfer to Azure.

8. Azure Functions 2.0 hits general availability. Supports .NET Core, Python.

9. Helm repositories in Azure Container Registry, now in public preview.

10. Windows Autopilot support extended to existing devices. This auto-configuration feature previously only worked with new devices. It requires the Windows 10 October Update, or an automated upgrade to it.

Office and Office 365

In the Office 365 space there are some announcements:

1. LinkedIn integration with Office 365. Co-author documents and send emails to LinkedIn contacts, and surface LinkedIn information in meeting invites.

2. Office Ideas. Suggestions as you work to improve the design of your document, or suggest trends and charts in Excel. Sounds good but I am sceptical.

3. OneDrive for Mac gets Files on Demand. A smarter way to use cloud storage, downloading only files that you need but showing all available documents in Mac Finder.

4. New staff scheduling tools in Teams. Coming in October. “With new schedule management tools, managers can now create and share schedules, employees can easily swap shifts, request time off, and see who else is working.” Maybe not a big deal in itself, but Teams is huge as I previously noted. Apparently the largest Team is over 100,000 strong now and there are 50+ out there with 10,000 or more members.

Windows Virtual Desktop

This could be nothing, or it could be huge. I am working on the basis of a one-paragraph statement that promises “virtualized Windows and Office on Azure … the only cloud-based service that delivers a multi-user Windows 10 experience, is optimized for Office 365 Pro Plus … with Windows Virtual Desktop, customers can deploy and scale Windows and Office on Azure in minutes, with built-in security and compliance.”

Preview by the end of 2018 is targeted.

Virtual Windows desktops are already available on Azure, via partnership with Citrix or VMWare Horizon, but Microsoft has held back from what is technically feasible in order to protect its Windows and Office licensing income. By the time you have paid for licenses for Windows Server, Remote Access per user, Office per user, and whatever third-party technology you are using, it gets expensive.

This is mainly about licensing rather than technology, since supporting multiple users running Office applications is now a light load for a modern server.

If Microsoft truly gets behind a pure first-party solution for hosted desktops on Azure at a reasonable cost, the take up would be considerable since it is a handy solution for many scenarios. This would not please its partners though, nor the many hosting companies which offer this.

On the other hand, Microsoft may want to compete more vigorously with Amazon Web Services and its Workspaces offering. Workspaces is still Windows, but of course integrates nicely with AWS solutions for storage, directory, email and so on, so there is a strategic aspect here.

Update: A little more on Windows Virtual Desktop here.

More details soon.

Microsoft Office 365 and Google G-Suite: why multi-factor authentication is now essential

Businesses using Office 365, Google G-Suite or other hosted environments (but especially Microsoft and Google) are vulnerable to phishing attacks that steal user credentials. Here is a recent example, which sailed through Microsoft’s spam and malware filters despite the company’s attempts to use AI and other techniques to catch such messages.

image

If a user clicks the link and signs in, the bad guys have their credentials. What are the consequences?

– at best, a bunch of spam sent out from the user’s account, causing embarrassment and a quick password reset.

– at worst, something much more serious. Once an unauthorised party has user credentials, there are all sorts of social engineering possibilities to escalate the attack, obtain other credentials, or see what interesting data can be found in collaborative document stores and shared applications.

– another risk is to discover information about an organisation’s customers and contact them to advise of new bank details which of course direct payments to the attacker’s account.

The truth is there are many risks and it is worth every effort to prevent this happening in the first place.

However, it is hard to educate every user to the extent that you can be confident they will never click a link in an email such as the one above, or reveal their password in some other way – such as reusing one that has already been leaked (check here to find out, for example).

Multi-factor authentication (MFA), which is now easy to set up on both Office 365 and G-Suite, helps matters by requiring users to enter a one-time code from their mobile, either via an authenticator app or a text message, before they can log in. It does not cost any extra and now is the time to set it up, if you have not already.

It seems to me that in some ways the prevalence of a few big providers in hosted email and applications has made matters easier for the hackers. They know that a phishing attack simulating, say, Office 365 support will find many potential victims.

The more positive view is that even small businesses can now easily use Enterprise-grade security, if they choose to take advantage.

I do not think MFA is perfect. It usually depends on a mobile phone, and given that possession of a user’s phone also often enables you to reset the password, there is a risk that the mobile becomes the weak link. It is well known that social engineering against mobile providers can persuade them to cancel a SIM and issue a new one to an impostor.

That said, hijacking a phone is a lot more effort than sending out a million phishing emails, and on balance enabling MFA is well worth it.

Want to connect PowerBI to Dynamics 365 CRM on-premises? Good luck with the official documentation

Microsoft champions hybrid IT, that is, some IT on-premises, some in the cloud; but its cloud-first strategy means that on-premises customers sometimes have a hard time getting the most from their software.

I have posted before about Dynamics CRM, which is very expensive but in places oddly sloppy, as if Microsoft has quality control issues or just does not care about some of the details in the product.

I encountered another example of this when attempting to configure Power BI desktop to connect to an on-premises instance of Dynamics CRM. At one time this was not supported, but it is now possible using OAuth to authenticate (presuming you have an internet-facing CRM deployment, which is generally the case).

There is an official document explaining how to set this up here.

That said, it seems that whoever wrote the document did not follow the steps through to check that they work, because they do not.

The first error is in the documentation for enabling OAuth, which tells you to use ClaimsSettings in PowerShell:

image

However this is not the right setting, and the steps given will give you an error. The correct setting is called OAuthClaimsSettings. It is disabled by default. Set it to enabled using similar steps to those above.

Second, the document tells you to run the Add-Adfsclient command “on the PC where you are running Power BI Desktop”. In fact this must be run on the server where ADFS is installed.

The command itself is not all that reassuring:

Add-AdfsClient -ClientId "a672d62c-fc7b-4e81-a576-e60dc46e951d" -Name "Microsoft Power BI" -RedirectUri @("https://de-users-preview.sqlazurelabs.com/account/reply/", "https://preview.powerbi.com/views/oauthredirect.html") -Description "ADFS OAuth 2.0 client for Microsoft Power BI"

Note the word “preview” that appears a couple of times in this mysterious command.

Even if you do all this, many people have struggled with connection issues. For myself, when I got this working on a test setup, I still got the error:

OData: The feed’s metadata document appears to be invalid. Error: The metadata document could not be read from the message content.

The fix in my case was to use “https://orgname.yourdomain/XRMServices/2011/Organizationdata.svc” for the feed, instead of “https://orgname.yourdomain/api/data/v8.2/”. Then I was up and running.


Maybe someone just needs to tell Microsoft to fix its documentation? A good point, but Cobalt’s Chris Capistran pointed out the errors back in April and nothing has changed.

Of course this sort of thing is not all bad for Microsoft partners, who can come in with superior knowledge and get things working.

Google Assistant was all over IFA in Berlin. What are the implications?

Last week I attended IFA in Berlin, perhaps Europe’s biggest consumer electronics event, and was struck by the ubiquity of Google Assistant. The company spent big on promoting its digital assistant both outside and inside the venue.

image
Mach mal, Google; or in English, Go Google.


On the stands and in press briefings I soon lost count of who was supporting Google’s voice assistant. A few examples:

image

JBL/Harman in its earbuds

image

Lenovo with its Home Control Solutions – Lenovo also uses its own cloud and will support Amazon Alexa

image

LG with audio, TV, kitchen, home automation and more

image 

Bang & Olufsen with its smart speakers. No logo, but it is using Google Assistant both as a feature in itself (voice search and so on) and to control other audio devices.

And Sony with its TVs and more. For example, the new AF9 and ZF9 series: “Using the Google Assistant with both the AF9 and ZF9 will be even easier. Both models have built-in microphones that will free the hands; now you simply talk to the TV to find what you quickly want, or to ask the Google Assistant to play TV shows, movies, and more.”

I was only at IFA for the pre-conference press days so this is just a snapshot of what I saw; there were many more Google Assistant integrations on display, and quite a few (though not as many) for Amazon Alexa.

It is fair to say then that Google is treating this as a high priority and having considerable success in getting vendors to sign up.

What is Google Assistant?

Google Assistant really only needs three things in order to work. A microphone, to hear you. An internet connection, to send your voice input to its internet service for voice to text transcription, and then to its AI/Search service to find a suitable response. And a speaker, to output the result. You can get it as a product called Google Home but it is the software and internet service that counts.


Vendors of smart devices – anything that has an internet connection – can develop integrations so that Google Assistant can control them. So you can say, “Hey Google, turn on the living room light” and it will be so. Cool.

Amazon Alexa has similar features and this is Google’s main competition. Alexa was first and ties in well with Amazon services such as shopping and media. However Google has the advantage of its search services, its control of Android, and its extensive personal data derived from search, Android, Google Maps and location services, GMail and more. This means Google can do better AI and richer personalisation.

Natural language UI

Back in March I attended an AI Assistant Summit in London organised by Re-Work. One of the speakers was Yariv Adan, a Product Lead at Google Assistant.


I attend lots of presentations but this one made a particular impact on me. Adan believes that natural language UI is the next big technological shift. The preceding ones he identified were the Internet in the nineties and smartphones in the early years of this century. Adan envisages an era in which we no longer constantly pull out devices.

“I believe the next revolution is happening now, powered by AI. I call it the paradigm switch to natural UI. Instead of humans adapting to machines, machines adapt to humans. What we’re trying to create is we interact with machines the same way we interact with each other, in a natural way. Meaning using natural language, showing things, pointing at things, assuming context, assuming a human-like memory, expecting personality, humour, opinion, some kind of an emotional connection, empathy.

[In future] it is not the device changing, it is the device disappearing. We are not going to interact with devices any more. We are starting to interact with this AI entity, an ambient entity that exists everywhere.”

Note: If you ever read Isaac Asimov’s science fiction novels, you will recognise this as very like his Multivac computer, which hears and responds to your questions wherever you are.

“Imagine now that everything is connected, that the entity follows you. That there is no more device that you need to take out, turn on, speak to it. It’s around you, it’s on the TV, it’s in the speakers, it’s in your headphones, it’s in the watch, it’s in the auto, it’s there. Internet of things, any connected device that only has a speaker you can actually start interacting with that thing,”

said Adan.

Adan gave a number of demonstrations. Incidentally, he never uttered the words “Hey Google”. Simply, he spoke into his phone, where I presume some special version of Google Assistant was running. In particular, he was keen to show how the AI is learning about context and memory. So he asked what is the largest castle in the UK where people live. Answer: Windsor Castle. Then, Who built it? When? Is it open now? How can I get there by public transport? What about food? In each case, the Assistant answered as a human would, understanding that the topic was Windsor Castle. “I found some restaurants within 0.4 miles,” said the Assistant, betraying a touch of computer-style logic.

“Thank you you’re awesome,” says Adan. “Not a problem”, responds the Assistant. This is an example of personality or emotion, key factors, said Adan, in making interaction natural.

Adan also talked about personalisation. “Show me my flight”. The Assistant knows he is away from home and also has access to his mailbox, from where it has parsed flight details. So it answers this generic question with specific details about tomorrow’s flight to Zurich.

“Where did I park my car?” In this case, Adan had taken a picture of his car after parking. The Assistant knew the location of the picture and was able to show both the image and its place on a map.

“I want to show how we use some of that power for the ecosystem that we have built … we’re trying to make that revolution to a place where you don’t need to think about the machine any more, where you just interact in a way that is natural. I am optimistic, I think the revolution is happening now.”

Implications and unintended consequences

An earlier speaker at the Re-Work event (sorry I forget who it was) noted that voice systems give simplified results compared to text-based searches. Often you only get one result. Back in the nineties, we used to talk about “10 blue links” as the typical result of a search. This meant that you had some sort of choice about where you clicked, and an easy way to get several different perspectives. Getting just one result is great if the answer is purely factual and is correct, but reinforces the winner-takes-all tendency. Instead of being on the first page of results, you have to be top. Or possibly pay for advertising; that aspect has not yet emerged in the voice assistant world.

If we get into the habit of shopping via voice assistants, it will be disruptive for brands. Maybe Amazon Basics will do well, if users simply say “get me some A4 paper” rather than specifying a brand. Maybe more and more decisions will be taken for you. “Get me a takeaway dinner”, perhaps, with the assistant knowing both what you like, and what you ate yesterday and the day before.

All this is speculation, but it is obvious that a shift from screens to voice for both transactions and information will have consequences for vendors and information providers; and that probably it will tend to reduce rather than increase diversity.

What about your personal data? This is a big question and one that the industry hates to talk about. I heard nothing about it at IFA. The assumption was that if you could turn on a light, or play some music, without leaving your chair, that must be a good thing. Yet, having a device or devices in your home listening to your every word (in case you might say “Hey Google”) is something that makes me uncomfortable. I do not want Google reading my emails or tracking my location, but it is becoming hard to avoid.

For most people, Google Assistant will just be a feature of their TV, or audio system, or a way to call up recipes in the kitchen.

From Google’s perspective though, it is safe to assume that the ability to collect data is a key reason for its strong promotion and drive behind Google Assistant. That data has enormous value. Targeted advertising is the start, but it also provides deep insight into how we live, trends in human behaviour, changing patterns of consumption, and much more. When things are going wrong with our health, our finances or our relationships, it is not implausible that Google may know before we do.

This is a lot of power to give a giant US corporation; and we should also note that in some scenarios, if the US government were to demand that data be handed over, a company like Google has no choice but to comply.

Personalisation can make our lives better, but also has the potential to harm us. An area of concern is that of shared risk, such as health insurance. Insurers may be reluctant to give policies to those people most likely to make a claim. Could Google’s data store somehow end up impacting our ability to insure, or its cost?

Personalisation is always a trade-off. The organisation gets my data; I get a benefit. I shop at a supermarket and this is fairly transparent. I use a loyalty card so the shop knows what I buy; in return I get discount points and special offers.

In the case of Google Assistant it is not so transparent. The EU’s GDPR legislation has helped, giving citizens the right to access their data and the right to be forgotten. However, we are still in the era of one-sided privacy policies and in many cases the binary choice of agree, or do not use our services. This becomes a problem if the service provider has anything close to a monopoly, which is true in Google’s case. Regulation, it seems to me, is exactly the right answer to the risks inherent in putting too much power in the hands of a business entity.

For myself, I am happy to cross the room and turn on the light, and to find my flight in my calendar. The trade-off is not worth it. But if Adan’s “ambient entity” comes to pass (which is actually most likely Google) I am not sure of the extent to which I will have a choice.

Adan’s work is terrific and the ability for machines to converse with humans in something close to a natural way is a huge technical achievement. I have nothing but respect for him and his team. It is part of a wider picture though, about data gathering, personalisation, and control of information and transactions, and it seems to me that this deserves more attention.

Windows Server 2019 Essentials may be Microsoft’s last server offering for small businesses

Microsoft’s Windows Server Team has posted about Windows Server 2019 Essentials, stating that:

“There is a strong possibility that this could be the last edition of Windows Server Essentials.”

Server Essentials is an edition aimed at small organisations that includes 25 Client Access Licenses (CALs). If you go beyond that you have to upgrade to Windows Server Standard at a much higher cost. There are some restrictions in the product, such as lack of support for Remote Desktop Services (other than for admin use).


Microsoft has already greatly reduced its server offering for small businesses. Small Business Server, the last version of which was Windows Small Business Server 2011, bundled Exchange, SharePoint and System Update Services, and supported up to 75 users.

“Capabilities that small businesses need, like file sharing and collaboration are best achieved with a cloud service like Microsoft 365,” says the team, though also observing that Server 2019 will be supported according to the normal timeline, which means you will get something like mainstream support until 2024 and extended support until 2029 or so.

Good decision? There are several ways to look at this. Microsoft’s desire for small businesses to adopt cloud is not without self-interest. The subscription model is great for vendors, giving them a consistent flow of income and a vehicle for upselling.

Cloud also has specific benefits for small businesses. Letting Microsoft manage your email server makes huge sense, for example. The cloud model has brought many enterprise-grade features to organisations which would otherwise lack them.

Despite that, I do not altogether buy the “cloud is always best” idea. From a technical point of view, running stuff locally is more efficient, and from a business point of view, it can be cheaper. Of course there is also a legacy factor, as many applications are designed to run on a server on the local network.

Businesses do have a choice though. Linux works well as a file and print server, and pretty well as a Windows domain controller.

Network attached storage (NAS) devices like those from Synology and Qnap are easy to manage and include a bunch of features which are small-business friendly, including directory services and even mail servers if you still want to do that.

A common problem though with small businesses and on-premises servers (whether Windows or Linux) is weak backup. It makes sense to use the cloud for that, if nothing else.

Although it is tempting to rail at Microsoft for pulling the rug from under small businesses with their own servers, the truth is that cloud does mostly make better sense for them, especially with the NAS fallback for local file sharing.

Where next for Windows Mixed Reality? Acer has an upgraded headset at IFA; Dell is showing Oculus Rift

It is classic Microsoft. Launch something before it is ready, then struggle to persuade the market to take a second look after it is fixed.

This may prove to be the Windows Mixed Reality story. At IFA in Berlin last year, all the major Windows PC vendors seemed to have headsets to show and talked it up in their press events. This year, Acer has a nice new generation headset, but Asus made no mention of upgrading its hardware. Dell is showing Oculus Rift on its stand, and apparently is having an internal debate about future Mixed Reality hardware.

I reviewed Acer’s first headset and the technology in general late last year. The main problem was lack of content. In particular, the Steam VR compatibility was in preview and not very good.

Today I tried the new headset briefly at the Acer booth.


The good news: it is a big improvement. It feels less bulky but well made, and has integrated headphones. It felt comfortable even over glasses.

On the software side, I played a short Halo demo. The demo begins with a promising encounter with visceral Halo aliens, but becomes a rather dull shooting game. Still, even the intro shows what is possible.

I was assured that Steam VR compatibility is now much improved, but would like to try for myself.

The big questions are twofold. Will VR really take off at all, and if it does, will anyone use Windows Mixed Reality?

Adobe announces extensibility for XD design and prototyping tool, integration with Microsoft Teams, Slack and Jira

Adobe XD (Experience Design) is a tool for prototyping apps and web applications. The full application runs on Windows and Mac, as part of Adobe’s Creative Cloud, and there are apps for iOS and Android that let you preview your designs on a device. Note that it is only a prototyping tool: you still have to re-implement the design in Android Studio, Xcode, Visual Studio or your preferred development tool. However the ability to create and share prototypes is a critical part of the workflow for many applications.


Adobe has now announced extensibility for XD via an API. This opens the way to third-party plugins, which will enable “adding new features, automating workflows and connecting XD to tools and services,” according to the press release.

There are also new integrations with collaboration tools including Microsoft Teams and Slack, and Jira (Atlassian’s software development management tool).

The release emphasises that Microsoft Teams is Adobe’s “preferred collaboration service”, showing that the company’s alliance with Microsoft is still on.

These are not the only tools which integrate with XD. Others were announced in January this year, including Dropbox and Sketch.

What do these integrations do? It is mainly a matter of rich preview within the tool, and the ability to receive notifications, such as when someone comments on an XD design.

Adobe has a generous free starter plan for XD. This includes:

  • Adobe XD
  • 1 active shared prototype
  • 1 active shared design spec
  • 2 GB cloud storage
  • Typekit Free (limited set of fonts)

You can get the free plan here, play around with the tool, and upgrade to the full plan (with unlimited prototypes) if you need to, at $9.99 per month.

SQLite with .NET: excellent but some oddities

I have been porting a C# application which uses an MDB database (the old Access/JET format) to one that uses SQLite. The process has been relatively smooth, but I encountered a few oddities.

One is puzzling and is described by another user here. If you have a column that normally stores string values, but insert a string that happens to be numeric such as “12345”, then you get an invalid cast exception from the GetString method of the SQLite DataReader. The odd thing is that the GetFieldType method correctly returns String. You can overcome this by using GetValue and casting the result to a string, or calling ToString() on the result, as in dr.GetValue(i).ToString().
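
For illustration, here is a minimal sketch of the workaround, assuming the System.Data.SQLite provider; the database file, table and column names are hypothetical:

using System;
using System.Data.SQLite;

class ReaderWorkaround
{
    static void Main()
    {
        using (var conn = new SQLiteConnection("Data Source=test.db"))
        {
            conn.Open();
            using (var cmd = new SQLiteCommand("SELECT Code FROM Customers", conn))
            using (var dr = cmd.ExecuteReader())
            {
                while (dr.Read())
                {
                    // dr.GetString(0) can throw an invalid cast exception when the stored
                    // text happens to look numeric; GetValue plus ToString is reliable
                    string code = dr.GetValue(0).ToString();
                    Console.WriteLine(code);
                }
            }
        }
    }
}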

Another strange one is date comparisons. In my case the application only stores dates, not times; but SQLite using the .NET provider stores the values as DateTime strings. The SQLite query engine returns false if you test whether “yyyy-mm-dd 00:00:00” is equal to “yyyy-mm-dd”. The solution is to use the date function: date(datefield) = date(datevalue) works as you would expect. Alternatively you can test for a value between two dates, such as more than yesterday and less than tomorrow.
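
A minimal sketch of the fix, again assuming the System.Data.SQLite provider and a hypothetical Orders table whose OrderDate column holds DateTime text:

using System.Data.SQLite;

static class DateQuery
{
    // Wrapping both sides in date() strips the "00:00:00" time portion
    // that otherwise makes the comparison fail
    public static SQLiteCommand Build(SQLiteConnection conn, string isoDate)
    {
        var cmd = new SQLiteCommand(
            "SELECT * FROM Orders WHERE date(OrderDate) = date(@d)", conn);
        cmd.Parameters.AddWithValue("@d", isoDate);   // e.g. "2018-09-24"
        return cmd;
    }
}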

Performance with SQLite is excellent. Unit tests of various parts of the application that make use of the database showed speed-ups of between 2 and 3 times compared to JET on average; one was 8 times faster. Note though that you must use transactions with SQLite (or disable synchronous operation) for bulk updates, otherwise database writes are very slow. The reason is that SQLite wraps every INSERT or UPDATE in its own transaction by default. So you get the effect described here:

Actually, SQLite will easily do 50,000 or more INSERT statements per second on an average desktop computer. But it will only do a few dozen transactions per second. Transaction speed is limited by the rotational speed of your disk drive. A transaction normally requires two complete rotations of the disk platter, which on a 7200RPM disk drive limits you to about 60 transactions per second.

Without a transaction, a unit test that does a bulk insert, for example, took 3 minutes, versus 6 seconds for JET. Refactoring into several transactions reduced the SQLite time to 3 seconds, while JET went down to 5 seconds.
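
As a reference point, here is a minimal sketch of a bulk insert wrapped in a single transaction, assuming the System.Data.SQLite provider; the Items table and its column are hypothetical:

using System.Collections.Generic;
using System.Data;
using System.Data.SQLite;

static class BulkInsert
{
    public static void Run(SQLiteConnection conn, IEnumerable<string> names)
    {
        using (var tx = conn.BeginTransaction())
        using (var cmd = new SQLiteCommand("INSERT INTO Items (Name) VALUES (@name)", conn, tx))
        {
            var p = cmd.Parameters.Add("@name", DbType.String);
            foreach (var name in names)
            {
                p.Value = name;
                cmd.ExecuteNonQuery();   // every insert shares the single enclosing transaction
            }
            tx.Commit();   // one commit, rather than one implicit transaction per INSERT
        }
    }
}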

Rumours of a new Mac Mini? It is about time.

Bloomberg is reporting rumours of a new Mac Mini in time for the back-to-school market this year. The source of the rumours is claimed to be “people familiar with the plans”.

Apple is also planning the first upgrade to the Mac mini in about four years. It’s a Mac desktop that doesn’t include a screen, keyboard, or mouse in the box and costs $500. The computer has been favored because of its lower price, and it’s popular with app developers, those running home media centers, and server farm managers. For this year’s model, Apple is focusing primarily on these pro users, and new storage and processor options are likely to make it more expensive than previous versions, the people said.

You can still buy a Mac Mini, but it has not been updated since 2014, making it particularly poor value. It is useful for developers since a Mac of some kind is required for iOS and of course Mac development. It is also handy for keeping up to date with macOS.

The latest rumours sound plausible though the prospect of being “more expensive than previous versions” will not go down well with some of the target market, who want to minimise the premium paid for Apple products. Another reason why the 2014 Mac Mini is unappealing is that additional RAM is factory-fit only, which again means extraordinarily high prices. Check out the iFixit teardown:

Unfortunately, the RAM is soldered to the logic board. This means that if you want to upgrade the RAM, you can only do so at time of purchase.

Will Apple do the same again? It seems likely. My guess is that the new Mac Mini (if it exists) will be even smaller than before, but just as hard to upgrade.

Should you convert your Visual Basic .NET project to C#? Why and why not…

When Microsoft first started talking about Roslyn, the .NET compiler platform, one of the features described was the ability to take some Visual Basic code and “paste as C#”, or vice versa.

Some years later, I wondered how easy it is to convert a VB project to C# using Roslyn. The SharpDevelop team has a nice tool for this, CodeConverter, which promises to “Convert code from C# to VB.NET and vice versa using Roslyn”. You can also find this on the Visual Studio marketplace. I installed it to try out.


Why would you do this though? There are several reasons, the foremost of which is cross-platform support. The Xamarin framework can use VB to some extent, but it is primarily a C# framework. .NET Core was developed first for C#. Microsoft has stated that “with regard to the cloud and mobile, development beyond Visual Studio on Windows and for non-Windows platforms, and bleeding edge technologies we are leading with C#.”

Note though that Visual Basic is still under active development and history suggests that your Windows VB.NET project will continue running almost forever (in IT terms that is). Even Visual Basic 6.0 applications still run, though you might find it convenient to keep an old version of Windows running for the IDE.

Still, if converting a project is just a right-click in Visual Studio, you might as well do it, right?


I tried it, on a moderately-sized VB DLL project. Based on my experience, I advise caution – though acknowledging that the converter does an amazing job, and is free and open source. There were thousands of errors which will take several days of effort to fix, and the generated code is not as elegant as code written for C#. In fact, I was surprised at how many things went wrong. Here are some of the issues:

The tool makes use of the Microsoft.VisualBasic namespace to simplify the conversion. This namespace provides handy VB features like DateDiff, which calculates the difference between two dates. The generated project failed to set a reference to this assembly, generating lots of errors about unknown objects called Information, Strings and so on. This is quick to fix. Less good is that statements using this assembly tend to be more convoluted, making maintenance harder. You can often simplify the code and remove the reference; but of course you might introduce a bug with careless typing. It is probably a good idea to remove this dependency, but it is not a problem if you want the quickest possible port.
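
For example, a converted DateDiff call can usually be rewritten in plain .NET along these lines – a hedged sketch, and the exact semantics (such as handling of the time-of-day portion) should be checked against the original VB behaviour:

using System;

static class DateHelpers
{
    // Roughly equivalent to the Microsoft.VisualBasic DateDiff(DateInterval.Day, start, end) call,
    // counting whole calendar days and ignoring the time portion
    public static int DaysBetween(DateTime start, DateTime end)
    {
        return (int)(end.Date - start.Date).TotalDays;
    }
}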

Moving from a case-insensitive language to a case-sensitive language is a problem. Visual Studio does a good job of making your VB code mostly consistent with regard to case, but that is not a fix. The converter was unable to fix case-sensitivity issues, and introduced some of its own (Imports System.Text became using System.text and threw an error). There were problems with inheritance, and even subtle bugs. Consider the following, admittedly ugly and contrived, code:

image

Here, the VB coder has used different case for a parameter and for referencing the parameter in the body of the method. Unfortunately another variable with the different case is also accessible. The VB code and the converted C# code both compile but return different results. Incidentally, the VB editor will work very hard to prevent you writing this code! However it does illustrate the kind of thing that can go wrong and similar issues can arise in less contrived cases.

C# is more strict than VB which causes errors in conversion. In most cases this is not a bad thing, but can cause headaches. For example, VB will let you pass object members ByRef but C# will not. In fact, VB will let you pass anything ByRef, even literal values, which is a puzzle! So this compiles and runs:

image

Another example is that in VB you can use an existing variable as the iteration variable, but in C# foreach you cannot.

Collections often go wrong. In VB you use an Item property to access the members of a collection like a DataReader. In C# this is omitted, but the converter does not pick this up.
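
The fix is straightforward once you spot it: in C# the Item property becomes the indexer. A hedged sketch, with a hypothetical column name:

using System.Data;

static class ReaderAccess
{
    public static string GetCustomerName(IDataReader reader)
    {
        // VB: reader.Item("CustomerName") – in C#, use the indexer instead
        return reader["CustomerName"] as string;
    }
}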

Overloading sometimes goes wrong. The converter does not always successfully convert overloaded methods. Sometimes parameters get stripped away and a spurious new modifier is added.

Bitwise operators are not correctly converted.

VB allows indexed properties and properties with parameters. C# does not. The converter simply strips out the parameters so you need to fix this by hand. See https://stackoverflow.com/questions/2806894/why-c-sharp-doesnt-implement-indexed-properties if the language choices interest you.

There is more, but the above gives some idea about why this kind of conversion may not be straightforward.

It is probably true that the higher the standard of coding in the original project, the more straightforward the conversion is likely to be, the caveat being that more advanced language features are perhaps more likely to go wrong.

Null strings behave differently

Another oddity is that VB treats a String set to null (Nothing) as equivalent to an empty string:

Dim s As String = Nothing

If (s = String.Empty) Then 'TRUE in VB
     MsgBox("TRUE!")
End If

C# does not:

String s = null;

   if (s == String.Empty) //FALSE in C#
    {
        //won't run
    }

Same code, different result, which can have unfortunate consequences.
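
When porting, a safer pattern is to use string.IsNullOrEmpty, which matches the VB behaviour for both null and empty strings – a minimal sketch:

using System;

class NullStringCheck
{
    static void Main()
    {
        string s = null;

        // IsNullOrEmpty returns true for both null and "", matching VB's s = String.Empty
        if (string.IsNullOrEmpty(s))
        {
            Console.WriteLine("TRUE!");
        }
    }
}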

Worth it?

So is it worth it? It depends on the rationale. If you do not need cross-platform, it is doubtful. The VB code will continue to work fine, and you can always add C# projects to a VB solution if you want to write most new code in C#.

If you do need to move outside Windows though, conversion is worthwhile, and automated conversion will save you a ton of manual work even if you have to fix up some errors.

There are two things to bear in mind though.

First, have lots of unit tests. Strange things can happen when you port from one language to another. Porting a project well covered by tests is much safer.

Second, be prepared for lots of refactoring after the conversion. Aim to get rid of the Microsoft.VisualBasic dependency, and use the stricter standards of C# as an opportunity to improve the code.
