Category Archives: Notes from the field

Fixing a .NET P/Invoke issue on Apple Silicon – including a tangle with Xcode

I have a bridge platform (yes, the game) written in C# which I am gradually improving. Like many bridge applications it makes use of the open source double dummy solver (DDS) by Bo Haglund and Soren Hein.

My own project started out on Windows and is deployed to Linux (on Azure) but I now develop it mostly on a Mac with Visual Studio Code. The DDS library is cross-platform and I have compiled it for Windows, Linux and Mac – I had some issues with a dependency, described here, which taught me a lot about the Linux app service on Azure, among other things.

Unfortunately, though, the library had never worked with C# on the Mac – until today, that is. I could compile it successfully with Xcode, and it worked with its own test application dtest, but not from C#. This weekend I decided to investigate and see if I could fix it.

I have an Xcode project which includes both dtest and the DDS library, the latter configured as a dynamic library. What I wanted to do was to debug the C++ code from my .NET application. For this purpose I did not use the ASP.NET bridge platform but a simple command-line wrapper for DDS which I wrote some time back as a utility; it uses the same .NET wrapper DLL for DDS as the bridge platform. The problem I had was that when the application called a function from the DDS native library, it printed “Memory::GetPtr 0 vs. 0” and then quit.

The error from my .NET wrapper

I am not all that familiar with Xcode and do not often code in C++ so debugging this was a bit of an adventure. In Xcode, I went to Product – Scheme – Edit Scheme, checked Debug executable under Info, and then selected the .NET application which is called ddscs.

Adding the .NET application as the executable for debugging.

I also had to add an argument under Arguments passed on Launch, so that my application would exercise the library.

Then I could go to Product – Run and, success, I could step through the C++ code called by my .NET application. I could see that the marshalling was working fine.

Stepping through the C++ code in Xcode

Now I could see where my error message came from:

The source of my error message.

So how to fix it? The problem was that DDS sets a limit on how much memory it allows itself to use, and this limit was set to zero. I looked at the dtest application and noticed this line of code:

SetResources(options.memoryMB, options.numThreads);

This is closely related to another DDS function called SetMaxThreads. I looked at the docs for DDS and found this remark:

The number of threads is automatically configured by DDS on Windows, taking into account the number of processor cores and available memory. The number of threads can be influenced by calling SetMaxThreads. This function should probably always be called on Linux/Mac, with a zero argument for auto-configuration.

“Probably” huh! So I added this to my C# wrapper, using something I have not used before, a static constructor. It just called SetMaxThreads(0) via P/Invoke.
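
For anyone wanting to do the same, here is a minimal sketch of the shape of it – not my exact wrapper, and the native library name (“dds”) and class name are my own illustrative choices, which will depend on how you built DDS:

using System.Runtime.InteropServices;

internal static class Dds
{
    // Assumed library name: .NET probes for libdds.dylib on macOS,
    // libdds.so on Linux and dds.dll on Windows.
    private const string LibName = "dds";

    // DDS export; passing zero asks the library to auto-configure
    // its thread count and memory limit.
    [DllImport(LibName, CallingConvention = CallingConvention.Cdecl)]
    public static extern void SetMaxThreads(int userThreads);

    // A static constructor runs once, before the first use of any member
    // of the class, so DDS is configured before any other call reaches it.
    static Dds()
    {
        SetMaxThreads(0);
    }
}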

Everything started working. Like so many programming issues, simple when one has figured out the problem!

Remote Desktop on Mac fails to connect with 0x207 error

I am setting up a new Mac and got this annoying error from the Microsoft Remote Desktop client.

Worse, while a number of people have complained about this error, there is a lot of useless advice out there – including the bad advice to disable NLA (Network Level Authentication) on the Windows PC. Don’t do that; it is bad for security.

One of the few helpful threads on the topic is this one, which points to this article on enabling integrated authentication on Mac and Linux using Kerberos. I followed the advice there and it worked. I am not sure whether the ALL CAPS is necessary for the domain, but I used it – and the connection only succeeded once I entered user@ALLCAPS as the RDP username as well.

HP unhinged, refuses to honour warranty on its defective laptop

My son bought an HP laptop. He is a student and bought it for his studies. It was an HP ENVY 13-aq0002na. It was expensive – approaching £1000 as I recall – and he took care of it, buying an official HP case. He bought it directly from HP’s site. A 3-year extended warranty was included. Here is how the HP Care Pack was described:

image

After a few months, the hinge of the laptop screen started coming away from the base of the laptop on one side, causing the base to start splitting away from the top part.

image

He was certain that it was a manufacturing defect. However, he did not make a claim immediately, because he needed the laptop for his studies and he knew he had a long warranty. (I think this was a mistake, but I understand his position.) The December vacation approached and he had time to have the laptop sorted, so he raised the claim.

HP closed the case. He had a confusing conversation with HP, who offered to put him through to sales for a paid repair; he agreed (another mistake) and ended up talking to someone who quoted hundreds of pounds for the repair, which he could not afford.

He attempted to follow up but had no success. He did engage with HP Support on Twitter, who seemed helpful at first, but the exchange ended with this:

image

At no point did HP even offer to look at the laptop.

My son replied:

“The problem is absolutely not accidental damage or customer-induced – I’ve never dropped the laptop, and have always taken very good care of it (there are no marks on the chassis, for example). Whenever I’ve moved it around, it’s been in an HP case designed for the laptop. The problem emerged during normal use within eight months of me purchasing the laptop, and others have reported the same issue – for example, see this post on the HP forum:

https://h30434.www3.hp.com/t5/Notebook-Hardware-and-Upgrade-Questions/HP-Envy-13-2019-Hinge-Issues/td-p/7590894

The problem is the sole result of HP’s hinge design being inadequate. It’s therefore a manufacturing fault and should be covered by the care pack.”

Follow that link and you find this:

image

It is the same model. The same problem at an earlier stage. And 10 people have clicked to say they “have the same question”. So HP’s claim that “nothing has been reported” concerning a similar defect is false.

See also here for another case https://h30434.www3.hp.com/t5/Notebook-Hardware-and-Upgrade-Questions/HP-Envy-13-hinge-issue/m-p/7911883

“I bought my HP Envy in 2017 for university, unfortunately that means it is now out of warranty. One of the hinges is loose and pops out of place every time I open the laptop”

and here for another https://h30434.www3.hp.com/t5/Notebook-Hardware-and-Upgrade-Questions/Hinge-Split-issue-HP-Envy-13-2019/m-p/7991805

“I purchased HP ENVY 13-aq0011ms Laptop in December 2019. After using about 1 year, (I treated it very gently all the time), the issue started: when the screen of the laptop is being moved to open/close, the chassis of the bottom case pops and split-opens”

and here https://h30434.www3.hp.com/t5/Business-Notebooks/Broken-hinge-in-HP-ENVY-13-Laptop/td-p/6749446

“Just after 13 months of buying my new HP Envy and spending 1400£ for it, the right hinge broke. There were no falls/accidents”

My son did attempt to follow up with customer service as suggested but got nowhere.

He knows he could take it further. He could get an independent inspection. He could raise a small claim – subject to finding the right entity to pursue as that isn’t necessarily easy. But he found the whole process exhausting. He raised a claim which was rejected, he escalated it and it was rejected again. He decided life was too short, he is still using the broken laptop for his studies, and when he starts work and has a bit of money he plans to buy a Mac.

Personally I’ve got a lot of respect for HP. I have an HP laptop myself (x360) which has lasted for years. I was happy for my son to buy an HP laptop. I did not believe the company would work so hard to avoid its warranty responsibilities. I guess those charged with minimising the cost to the company of warranty claims have done a good job. There is a hidden cost though. Why would he ever buy or recommend HP again? Why would I?

Note: HP Inc is the vendor who supplies PCs and printers, like my son’s laptop. HP Enterprise sells servers, storage and networking products and is a separate company. None of the above has any relevance to HP Enterprise.

PS I posted a link to this blog on the HP support forum, where others are complaining about this issue. I was immediately banned.

image

Odd behaviour from Azure App Service: production version sometimes serves app from staging slot

I am developing an application which is deployed to Azure App Service. It runs on .NET 5.0 on Linux. I have set up a simple DevOps process so that committing changes to GitHub runs an Azure DevOps pipeline that deploys the application to a staging slot on Azure App Service for Linux. Then I can use Swap in the Azure portal to update the production slot. Swap simply exchanges the content of the staging slot with that in the production slot, so there is a route back in the event of disaster. Swap also restarts the application and forces users to log back in.

Yesterday I fixed a bug, deployed the change to the staging slot, and performed a swap. I logged back into the application, but the bug was still there, though intermittent. That was the bit I could not figure out: what was causing the code to behave differently on different requests? I became suspicious that it was sometimes serving the old version, and proved this by refreshing a page that demonstrated the bug. My page has the application version in the footer, and I could see that when the bug appeared, the version was older.

image

image
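
Incidentally, the version in the footer is nothing clever – just a version stamped into the assembly at build time and rendered into the page. A minimal sketch of the idea (not my exact code; it assumes the version is set via <Version> in the project file):

using System.Reflection;

public static class AppVersion
{
    // Cached once per process: the informational version baked into the
    // entry assembly at build time, or "unknown" if none was stamped.
    public static string Current { get; } =
        Assembly.GetEntryAssembly()?
            .GetCustomAttribute<AssemblyInformationalVersionAttribute>()?
            .InformationalVersion ?? "unknown";
}

Rendering AppVersion.Current in the layout footer is enough to tell at a glance which build served any given request.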

Well this is odd. In the App Service Deployment slot settings I have traffic set to 100% for the production slot:

image

In general I tend to assume a bug in my code or an error in my configuration settings is more likely than an issue with the Azure App Service. This does look odd though: why, if traffic is going 100% to the production slot, does the application sometimes serve the old version?

The pragmatic fix was easy: a second deployment to the staging slot means both slots now have code that works. The bug no longer appears; but I have kept the version numbers different, and I can see that the issue is in fact still occurring.

I will update this post when I have more information, just in case anyone else hits this issue.

Debugging Safari on an old iPad

Someone was trying to use the bridge application I have in progress, using an iPad 2. There were a couple of interesting things about this. One was that I had to rethink the warning, based on Modernizr, which my application throws up when it detects an incompatible web browser. The problem (obvious when you think about it) is that if you use some potentially incompatible features in the same page where you are testing for them, then with an old web browser the JavaScript fails with a syntax error and the warning does not appear. The fix: I now show the warning by default, and the compatibility check hides it.

Still, I was interested in the Safari error and wanted to debug it, in case it was something I could fix. How do you debug Safari on an iPad? The way it is meant to work is this:

– On a Mac, enable the Safari Develop menu (in Safari preferences, Advanced, Show Develop menu).

– On iOS, enable Safari Web Inspector (Settings – Safari – Advanced – Web Inspector).

– Connect the iPad to the Mac via USB. You can now use Web Inspector on the Mac to debug the Safari iOS pages and scripts.

This did not work for me on my Catalina Mac. Safari on iOS did not show up in the Web Inspector on the Mac. I could get it to show briefly by switching Web Inspector on the iPad off and on again, but after that, no go. I tried a few things, but none of the proposed solutions I could find for this issue fixed it for me.

I have an older 2011 Mac Mini in a drawer, so I thought that might work, being a similar age to the iPad. I fired it up, marvelled at how old-fashioned the UI looked (I had reset it to OS X Lion), and connected the iPad. No go. Same problem as with Catalina.

Surprisingly, what did work were the instructions here (more or less) for debugging Safari iOS on Windows. This is based on the RemoteDebug iOS WebKit Adapter described here, a project which originated as an internal Microsoft experiment.

image

I did find it amusing that I could do this on Windows, having failed with the Mac.

The next generation of this tool is Inspect, which is in private beta; the GitHub page for RemoteDebug says the project has been superseded and recommends Inspect instead.

RemoteDebug worked for me, though.

Microsoft Office and privacy: happy to send what you type to the cloud for analysis?

I attempted to open a document from on-premises SharePoint recently and was greeted with an error asking me to check my privacy settings.

image

“The service required to use this feature is turned off” I was informed. Hmm, what service is that then? The solution turned out to be in the new Office privacy settings, just as the dialog suggested.

If you disable what Microsoft calls “Connected experiences” it appears to block access to SharePoint. Probably not what the user intended.

image 

This setting is not great for clarity. Privacy-conscious users like myself may disable it because it represents your agreement to “experiences that analyze your content”. Since this means uploading your content to the cloud for analysis, it sounds as if it might weaken both privacy and security. If you look at all the options though, it may be possible to allow access to online file storage without agreeing to content analysis:

image

It looks as if by unchecking “Let Office analyze your content” you might be able to stop Office uploading your stuff.

Is there anything to worry about? We need to know more about what happens to our data. There is a Learn More link that takes us here. This lists lots of features but does not tell us what we want to know. Maybe here? This tells us that:

Three types of information make up required service data.

  • Customer content, which is content you create using Office, such as text typed in a Word document, and is used in conjunction with the connected experience.

It is still not clear though what happens to our data, other than that it is “sent to Microsoft”. Even the massive Microsoft Privacy Statement is no more illuminating on this point. In fact, it is arguably rather alarming since it contains this statement:

Microsoft uses the data we collect to provide you with rich, interactive experiences. In particular, we use data to:

  • Provide our products, which includes updating, securing, and troubleshooting, as well as providing support. It also includes sharing data, when it is required to provide the service or carry out the transactions you request.
  • Improve and develop our products.
  • Personalize our products and make recommendations.
  • Advertise and market to you, which includes sending promotional communications, targeting advertising, and presenting you with relevant offers.

We also use the data to operate our business, which includes analyzing our performance, meeting our legal obligations, developing our workforce, and doing research.

In carrying out these purposes, we combine data we collect from different contexts (for example, from your use of two Microsoft products) or obtain from third parties to give you a more seamless, consistent, and personalized experience, to make informed business decisions, and for other legitimate purposes.

This suggests that Microsoft will profile me and send me advertising based on the data it collects. What I need to know is not only the fact that this happens, but also the mechanism, in order to make an informed judgement about whether it is sensible to enable these options. Of course it is also possible that the Office content analysis service does not do this. I am guessing.

What can go wrong? These risks are hard to quantify. If you are typing something confidential, it makes sense not to share it more than is necessary, as further sharing can only increase the risk. There are some interesting scenarios too, such as what happens if Microsoft receives a legal demand to have sight of the content of your documents. Microsoft may not want to give access to your content, but in some circumstances it might not have the choice. Then again, I doubt it retains content sent for the purpose of personalisation, beyond whatever factors the service determines are significant. However this is not stated here.

Is this any different from storing documents on a cloud service such as SharePoint / OneDrive online? It is a bit different since in the Office case you are permitting Microsoft to analyze as well as to store your content.

All of this is up for debate. I accept that the risks are probably small, and that the wider world has little or no interest in most of the content I type but do not choose to publish.

Nevertheless, there are a few things which seem to me reasonable requests.

– A clear statement concerning what happens to my content if I choose to let it be analyzed by Microsoft’s cloud service, to enable better informed decisions about whether or not to enable this feature. Dumping the user into an all-encompassing privacy policy is not good enough.

– Improved settings (and possibly some fixed bugs) so that privacy-conscious users do not inadvertently disable access to on-premises SharePoint, as in my example, or other unexpected outcomes.

– A simple way to exclude a specific document from the service, conceptually similar to “in-private” mode in a web browser though with more chance of actually protecting your privacy (in-private mode is not really very private).

In general, I do not think the solution to a customer’s reasonable concerns about privacy and security of personal information is to obscure how this data is handled.

Hands on with Windows Virtual Desktop

Microsoft’s Windows Virtual Desktop (WVD) is now in preview. This is virtual Windows desktops on Azure, and the first time Microsoft has come forward with a fully integrated first-party offering. There are also a few notable features:

– You can use a multi-session edition of Windows 10 Enterprise. Normally Windows 10 does not support concurrent sessions: if another user logs on, any existing session is terminated. This is an artificial restriction which has more to do with licensing than technology; there are hacks to get around it, but they are pointless presuming you want to be correctly licensed.

– You can use Windows 7 with free extended security updates to 2023. Standard support for Windows 7 ends in January 2020, and without Windows Virtual Desktop, extended security updates are a paid-for option.

– Running a VDI (Virtual Desktop Infrastructure) can be expensive, but pricing for Windows Virtual Desktop is reasonable. You have to pay for the Azure resources, but licensing comes at no extra cost for Microsoft 365 users. Microsoft 365 is a bundle of Office 365, Microsoft Intune and Windows 10 licenses, and starts at £15.10 or $20 per month; Office 365 Business Premium is £9.40 or $12.50 per month. These are small business plans limited to 300 users.

Windows Virtual Desktop supports both desktops and individual Windows applications. If you are familiar with Windows Server Remote Desktop Services, you will find many of the same features here, but packaged as an Azure service. You can publish both desktops and applications, and use either a client application or a web browser to access them.

What is the point of a virtual desktop when you can just use a laptop? It is great for manageability, security, and remote working with full access to internal resources without a VPN. There could even be a cost saving, since a cheap device like a Chromebook becomes a Windows desktop anywhere you have a decent internet connection.

Puzzling out the system requirements

I was determined to try out Windows Virtual Desktop before writing about it, so I went over to the product page and hit Getting Started, using a free trial of Azure. There is a complication, though: Windows Virtual Desktop VMs must be domain-joined, which means that simply having Azure Active Directory is not enough. You have a few options:

Azure Active Directory Domain Services (Azure ADDS). This is a paid-for Azure service that provides domain join and other services to VMs on an Azure virtual network. It costs from about £80.00 or $110.00 per month. If you use Azure ADDS you set up a separate domain from your on-premises domain, if you have one; however, you can combine it with Azure AD Connect to enable sign-on with the same credentials.

There is a certain amount of confusion over whether you can use WVD with just Azure ADDS and not AD Connect. The docs say you cannot, stating that “A Windows Server Active Directory in sync with Azure Active Directory” is required. However a user reports success without this; of course there may be snags yet to be revealed.

Azure Active Directory with AD Connect and a site-to-site VPN. In this scenario you create an Azure virtual network that is linked to your on-premises network via a site-to-site VPN. I went this route for my trial. I already had AD Connect running, but not the VPN. A VPN requires a VPN Gateway, which is a paid-for option. There is a Basic version, but it is considered legacy, so I used a VpnGw1, which costs around £100 or $140 per month.

Update: I have replaced the VPN Gateway with one using the Basic SKU (around £20.00 or $26.00 per month) and it still works fine. Microsoft does not recommend this for production, but for a very small deployment like mine, or for testing, it is much more cost effective.

This solution is working well for me, but note that in a production environment you would want to add some further infrastructure. The WVD VMs are domain-joined to the on-premises AD, which means constant network traffic across the VPN; and since AD integrates with DNS, you should also configure the virtual network to use on-premises DNS. The solution would be to add an Azure-hosted VM on the virtual network running a domain controller and DNS – a further cost, of course. Running just Azure ADDS and AD Connect is cheaper and simpler, if it is supported.

Incidentally, I use pfSense for my on-premises firewall, so this is the endpoint for my site-to-site VPN. Initially it did not work. I am not sure what fixed it, but it may have been the TCP MSS clamping referred to here; I set this to 1350 as suggested. I was happy to see the connection come up in pfSense.

image 

Setup options

There are a few different ways to set up WVD. You start by setting some permissions and creating a WVD Tenant as described here. This requires PowerShell but it was pretty easy.

image

The next step is to create a WVD host pool and this was less straightforward. The tutorial offers the option of using the Azure Portal and finding Windows Virtual Desktop – Provision a host pool in the Azure Marketplace. Or you can use an Azure Resource Manager template, or PowerShell.

I used the Azure Marketplace, thinking this would be easier. When I ran into issues, I tried using PowerShell, but had difficulty finding the special Windows 10 Enterprise Virtual Desktop edition via this route. So I went back to the portal and the Azure marketplace.

Provisioning the host pool

Once your tenant is created, and you have the system requirements in place, it is just a matter of running through a wizard to provision the host pool. You start by naming it and selecting a desktop type: Pooled for multi-session Windows 10, or Personal for a VM per user. I went for the Pooled option.

image

Next comes VM configuration, and I stumbled a bit here. Even if you specify just 10 (or even 1) users, the wizard recommends a fairly powerful VM, a D8s v3. I thought this would be OK for the trial, but it would not let me continue on the trial subscription as it is too expensive, so I ended up with a D4s v3. I also tried a D4 v3, but that failed to deploy because it does not support premium storage – so the “s” is important.

image

The next dialog has some potential snags.

image

This is where you choose an OS image; note that the default is Windows 10 Enterprise multi-session, for a pooled WVD. You also specify a user which becomes the default for all the VMs and is used to join them to the domain. These credentials are also used to create a local admin account on each VM, in case the domain join fails and you need to connect (I did need this).

Note also that the OU path must be specified as a full distinguished name, in the form OU=wvd,DC=yourdomain,DC=com (for example) – not just the name of an OU. Otherwise you will get errors on domain join.

Finally, take care with the virtual network selection. It is quite simple: if you are doing what I did and domain-joining to an on-premises domain, the virtual network and subnet need to have connectivity to your on-premises DCs and DNS.

The next dialog is pretty easy. Just make sure that you type in the tenant name that you created earlier.

image

Next you get a summary screen which validates your selections.

image

I suggest you do not take this validation too seriously. I found it happily validated a non-working configuration.

Hit OK and you can deploy your WVD host pool. This takes a few minutes, in my case around 10-15 minutes when it works. If it does not work, it can fail quickly or slowly depending on where in the process it fails.

My problem, after fixing issues like using the wrong type of OS image, was failure to join the VM to the domain. I could not see why this did not work. The displayed error may or may not be useful.

image

If the deployment got as far as creating the VM (or VMs), I found it helpful to connect to a VM and look at its event viewer. I could connect from my on-premises network thanks to the site-to-site VPN.

I discovered several issues before I got it working. One was simple: I had mistyped the name of the vmjoiner user when I created it, so naturally it could not authenticate. I was glad when it finally worked.

image

Connection

Once I got the host pool up and running, my trial WVD deployment was fine. I can connect via a special Remote Desktop client or a browser. The WVD session is fast and responsive, and the VPN to my office is rather handy.

image

Observations

I think WVD is a good strategic move from Microsoft and will probably be popular. I have to note though that setup is not as straightforward as I had hoped. It would benefit Microsoft to make the trial easier to get up and running and to improve the validation of the host pool deployment.

It also seems to me that for small businesses an option to deploy with only Azure ADDS and no dependency on an on-premises AD is essential.

As ever, careful network planning is a requirement and improved guidance for this would also be appreciated.

Update:

There seems to be a problem with Office licensing. I have an E3 license. Office installs but comes up with a licensing error. I presume this is a bug in the preview.

image

This was my mistake, as it turned out. You have to take some extra steps to install Office Pro Plus on a terminal server, as explained here. In my case, I just added the registry value SharedComputerLicensing, set to 1, under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\ClickToRun\Configuration. Now it runs fine. Thanks to https://twitter.com/getwired for the tip.
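
If you would rather script that change than edit the registry by hand, here is a hedged sketch in C# – the class and method names are mine, and it assumes an elevated 64-bit process (plus the Microsoft.Win32.Registry package on .NET Core); reg.exe or Group Policy work just as well:

using Microsoft.Win32;

internal static class OfficeSharedLicensing
{
    public static void Enable()
    {
        // Writing to HKLM requires elevation; note that a 32-bit process
        // would be redirected to the WOW6432Node hive.
        using (var key = Registry.LocalMachine.CreateSubKey(
            @"SOFTWARE\Microsoft\Office\ClickToRun\Configuration"))
        {
            // REG_SZ value "1" enables shared computer activation.
            key.SetValue("SharedComputerLicensing", "1", RegistryValueKind.String);
        }
    }
}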

Microsoft’s Dynamics CRM 2016/365: part brilliant, part perplexing, part downright sloppy

I have just completed a test installation of Microsoft’s Dynamics CRM on-premises; it is now called Dynamics 365 but the name change is cosmetic, and in fact you begin by installing Dynamics CRM 2016 and it becomes Dynamics 365 after applying a downloaded update.

Microsoft’s Dynamics product has several characteristics:

1. It is fantastically useful if you need the features it offers

2. It is fantastically expensive for reasons I have never understood (other than, “because they can”)

3. It is tiresome to install and maintain

I wondered if the third characteristic had improved since I last did a Dynamics CRM installation, but I feel it has not much changed. Actually the installation went pretty much as planned, though it remains fiddly; where I wasted considerable time was in setting up email synchronization with Exchange (also on-premises). This is a newish feature called Server-Side Synchronization, which replaces the old Email Router (which still exists but is deprecated). I have little love for the Email Router which, when anything goes wrong, fills the event log with huge numbers of identical errors, such that you have to disable it before you can discover what is really going wrong.

Email is an important feature as automated emails are essential to most CRM systems. The way the Server-Side Synchronization works is that you configure it, but CRM mailboxes are disabled until you complete a “Test and Enable” step that sends and receives test emails. I kept getting failures. I tried every permutation I could think of:

  • Credentials set per-user
  • Credentials set in the server profile (uses Exchange Impersonation to operate on behalf of each user)
  • Windows authentication (only works with Impersonation)
  • Basic authentication enabled on Exchange Web Services (EWS)

All failed, the most common error being “Http server returned 401 Unauthorized exception.” The troubleshooting steps here say to check that the email address of the user matches that of the mailbox; of course it did.

An annoyance is that on my system the Test and Enable step does not always work (in other words, it is not even tried). If I click Test and Enable in the Mailbox configuration window, I get this dialog:

image

However if I click OK, nothing happens and the dialog stays. If I click Cancel nothing happens and the dialog stays. If I click X the dialog closes but the test is not carried out.

Fortunately, you can also access Test and Enable from the Mailbox list (select a mailbox and it appears in the ribbon). A slightly different dialog appears and it works.

I was about to give up. I set Windows authentication in the server profile, which is probably the best option for most on-premises setups, and tried the test one more time. It worked. I do not know what changed. As this tech note (which is about server-side synchronization using Exchange Online) remarks:

If you get it right, you will hear Microsoft Angels singing

But what’s this about sloppy? There is plenty of evidence. Things like the non-functioning dialog mentioned above. Things like the date which shows for a mailbox that has not been tested:

image

Or, leaving aside the email configuration, things like the way you can upload Word templates for use in processes but cannot easily download them again (though you can use a third-party tool such as XrmToolBox).

And the script error dialog which has not changed for a decade.

Or the warning you get when viewing a report in Microsoft Edge, that the browser is not supported:

image

so you click the link and it says Edge is supported.

Or even the fact that whenever you log on you get this pesky dialog:

image

So you click Don’t show this again, but it always reappears.

It seems as if Microsoft does not care much about the fit and finish of Dynamics CRM.

So why do people persevere? In fact, the Dynamics business is growing for Microsoft, largely because of Dynamics 365 online and its integration with Office 365. The cloud is one reason, since it removes at least some of the admin burden. The other is that Dynamics does bring together a set of features that make it invaluable to many businesses: you can use it not only for sales and marketing, but also for service case management, quotes, orders and invoices.

It is highly customizable, which is a mixed blessing as your CRM installation becomes increasingly non-standard, but does mean that most things can be done with sufficient effort.

In the end, it is all about automation, and can work like magic with the right carefully designed custom processes.

With all those things to commend it, it would pay Microsoft to work at making the user interface less annoying and the administration less prone to perplexing errors.

Configuring the Android emulator for Hyper-V

Great news that the Android emulator now supports Hyper-V, but how do you enable it?

Pretty simple. First, you have to be running at least Windows 10 1803 (April 2018 Update). Then go into Control Panel – Programs – Turn Windows features on or off and enable both Hyper-V and the Windows Hypervisor Platform:

image

Note: this is not the same as just enabling Hyper-V. The Windows Hypervisor Platform, or WHPX, is an API for Hyper-V. Read about it here.

Reboot if necessary and run the emulator.

image

Troubleshooting? Try running the emulator from the command line.

emulator -list-avds

will list your AVDs.

emulator @avdname -qemu -enable-whpx

will run the AVD called avdname using WHPX (Windows Hypervisor Platform). If it fails, you may get a helpful error message.

Note: If you get a Qt library not found error, use the full path to the emulator executable. This should be the one in the emulator folder, not the one in the tools folder. The full command is:

[path-to-android-sdk]\emulator\emulator @[avdname] -qemu -enable-whpx

You can also use the emulator from Visual Studio, though you need Visual Studio 2017 version 15.8 Preview 1 or higher with the Xamarin tools installed. That said, I had some success with starting the Hyper-V emulator separately (using the command above) and then using it with a Xamarin project in Visual Studio 15.7.5.

image

Notes from the Field: dmesg error blocks MySQL install on Windows Subsystem for Linux

I enjoy Windows Subsystem for Linux (WSL) on Windows 10 and use it constantly. It does not patch itself, so from time to time I update it using apt-get. The latest update upgraded MySQL to version 5.7.22, but unfortunately the upgrade failed: dpkg could not configure it. I saw messages like:

invoke-rc.d: could not determine current runlevel

2002: Can’t connect to local MySQL server through socket ‘/var/run/mysqld/mysqld.sock’

After multiple efforts uninstalling and reinstalling I narrowed the problem down to a dmesg error:

dmesg: read kernel buffer failed: Function not implemented

It is true, dmesg does not work on WSL. However there is a workaround here that says if you write something to /dev/kmsg then at least calling dmesg does not return an error. So I did:

sudo echo foo > /dev/kmsg

I removed and reinstalled MySQL one more time, and it worked:

image

Apparently partial dmesg support in WSL is on the way, previewed in Build 17655.

Note: be cautious about fully uninstalling MySQL if you have data you want to preserve. Export/backup the databases first.