Category Archives: tech

The Future of PowerShell:

PowerShell, Microsoft’s scripting platform, is significant for several reasons. It is critical to Microsoft’s strategy of reducing dependence on a GUI in Windows Server. It is also a key piece in automating IT administration, which is fundamental to business agility.

The platform was invented by Microsoft’s Jeffrey Snover, now Technical Fellow and Lead Architect for the Enterprise Cloud Group. The evolution of PowerShell has gone hand in hand with the company’s broader strategy for Windows and Azure, guided by Snover as architect.

PowerShell is everywhere in Azure Stack, Microsoft’s packaged version of Azure for running on-premises, and presumably in the online version of Azure as well. An average of 472 cmdlet calls are made when a VM is created, according to Snover’s keynote at the recent PowerShell Europe conference in Hanover.

The PowerShell team is now apparently part of the Azure Management Team within Microsoft.

What is happening with PowerShell? The main thing to understand is that Microsoft has forked the platform. Windows PowerShell is the Windows-only version, while PowerShell Core is cross-platform on Windows, Mac and Linux. Windows PowerShell is based on the .NET Framework, while PowerShell Core is based on .NET Core.
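A quick way to check which edition a given session is running is the built-in $PSVersionTable variable; its PSEdition property reports “Desktop” for Windows PowerShell 5.1 and “Core” for PowerShell Core:

$PSVersionTable.PSEdition
# "Desktop" = Windows PowerShell, "Core" = PowerShell Core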

The situation is odd, in that Windows PowerShell is installed by default in Windows Server and is the one that most people use; but PowerShell Core is the one under active development. This is explained here. Snover emphasised in his keynote that Windows PowerShell is done:

image

Note though that PowerShell is modular, and although the Windows PowerShell engine is not being developed, new or enhanced modules will still appear. In fact, they are likely to run both on Windows PowerShell and on PowerShell Core. Like all forks, there will be some pain over compatibility versus using the latest features as the Core platform evolves.

If you want to try PowerShell Core you can download it here. However it is of limited use for day to day work unless you also install and activate a module called WindowsPSModulePath which you can get from the PowerShell Gallery. This lets you use all your current Windows PowerShell modules, subject of course to compatibility.
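A rough sketch of that setup, assuming PowerShell Core (pwsh) is already installed; Add-WindowsPSModulePath is the command the module provides, as I recall:

# run inside PowerShell Core (pwsh)
Install-Module -Name WindowsPSModulePath -Scope CurrentUser
Add-WindowsPSModulePath
# Windows PowerShell module locations are now on the module path; list what is visible
Get-Module -ListAvailable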

image

So what is next for PowerShell? Snover’s ambition for the platform, he said, is to manage any server or service, from any client, running on any cloud (or on-premises, any hypervisor).

Much of what is interesting is not so much new features in PowerShell itself, but additional modules or other utilities.

PSSwagger is helpful for creating modules: it will create a PowerShell module from a Swagger API (a popular standard for specifying RESTful APIs).
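A hedged sketch of how that looks; the cmdlet and parameter names below are from memory and the URL is made up, so check the PSSwagger README before relying on them:

Install-Module -Name PSSwagger -Scope CurrentUser
New-PSSwaggerModule -SpecificationUri 'https://example.com/api/swagger.json' `
    -Path 'C:\GeneratedModules' -Name 'MyApiClient'
# the generated module wraps each REST operation as a PowerShell cmdlet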

CloudShell is a command shell for Azure which you can run from a web browser.

Windows Admin Center, formerly known as Project Honolulu, is a browser application for managing servers (and some desktops). You can open a PowerShell session directly from the browser. In addition, it is PowerShell that enables much of the other functionality.

image

As for Microsoft’s plans for PowerShell Core, Snover refers to this site which sets out the company’s strategic investments. These include improvements to the help system, a GUI framework for the console (perhaps like Curses on Linux), a mechanism for PowerShell to prompt to install a module (as Bash does) when a command is not found but a module containing that command is known to exist, and Just Enough Administration (JEA) on Linux.

At the PowerShell conference Principal Software Engineer Steve Lee talked about a PowerShell Standard Library, which itself targets .NET Standard 2.0, for module authors to use when creating modules so they will work cross-platform.

PowerShell and Microsoft’s platform

One of the intriguing things about Microsoft’s evolution is its embrace of both Linux and cross-platform. PowerShell is one small part of this, but fits in with that strategy. We should no longer think of Microsoft’s platform as based on Windows, even though of course it mostly runs on Windows today. The OS is becoming less important as the company focuses on services and applications.

The further implication is that cross-platform support is not just a nice-to-have feature for pieces like .NET Core and PowerShell Core, but essential for Microsoft itself as it integrates multiple operating systems in its cloud platform.

While we tend to applaud cross-platform support as a good thing, it is not without pain. PowerShell is a case in point. Windows PowerShell is at the same time the current thing, and the thing that is no longer evolving.

PowerShell and IT admins

PowerShell is an essential skill for Windows IT admins. On Office 365, for example, there always seem to be things you can do in PowerShell that you cannot easily do through the GUI, and even where you can, it often pays to use PowerShell because you can script and automate common operations. The same is true for Azure.
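As a hedged illustration of the kind of bulk change that is tedious in the portal but trivial to script (this assumes an Exchange Online PowerShell session is already connected; the filter is made up for the example):

# enable the online archive for every mailbox that does not yet have one
Get-Mailbox -ResultSize Unlimited |
    Where-Object { $_.ArchiveStatus -eq 'None' } |
    ForEach-Object { Enable-Mailbox -Identity $_.Identity -Archive }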

Not everyone loves PowerShell as a language. Some complain of its verbosity. It can also be prickly to work with. It is not at all English-like, making it less accessible for beginners than most scripting languages.
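A small example of the style: finding the five largest files under the current directory reads nothing like English, but the verb-noun commands and the object pipeline are consistent and predictable:

Get-ChildItem -Recurse -File |
    Sort-Object -Property Length -Descending |
    Select-Object -First 5 -Property Name, Length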

It is however well suited to its purpose, which is what counts.

What is Azure Sphere?

Microsoft has announced Azure Sphere, and in a manner which I’m guessing many will find confusing.

It is obviously something to do with IoT (Internet of Things) and intended to make your IoT solutions more secure. It is obviously something to do with Azure, Microsoft’s cloud platform. But what is a “crossover class of MCU”? What is an “HLOS small enough for MCUs”? Where does the “Azure Sphere OS”, which is Microsoft’s new Linux, actually run?

image

Let’s start with MCU (Microcontroller Unit). The most informative description of what Azure Sphere is all about is this research paper [PDF]. The target of Azure Sphere is devices powered by microcontrollers – in other words, IoT devices that are more than just sensors and have their own processors, though with less capability than a full SoC (System on a Chip). It is obvious that such devices, if compromised, have considerable risks. A fire in your oven? A radiotherapy machine that kills rather than heals? Toys that spy on children? Not good.

Microsoft’s solution is to have those devices run on a new processor designed in partnership with MediaTek (a large Taiwanese system-on-chip manufacturer) and running the tiny Azure Sphere OS. Built-in features include hardware-based security (private keys in a hardware-protected vault), hardware-enforced compartmentalization, certificate-based authentication and failure reporting. The new processor is called Sopris in Microsoft’s paper.

image
The Sopris Microprocessor

These Azure Sphere devices communicate with Microsoft’s Azure Sphere service to receive both OS and application updates, and to process failure reports.

Azure Sphere does not determine how the production data from your IoT device is handled. You can deal with this as you like, using Azure, another cloud provider, or on-premises infrastructure.

A point of interest is that the Azure Sphere OS runs Microsoft’s own customised version of Linux. Why Linux? Microsoft must have concluded that there was insufficient advantage, and more friction, in using a version of Windows (though Windows IoT Core exists). Use of Linux in Microsoft can only increase; and remember, Linux is now built into Windows.

Why Subsystem for Linux in Windows 10 and Windows Server? And what are the implications?

Microsoft is busy improving Windows Subsystem for Linux (WSL), the compatibility layer that lets you run Linux on Windows. WSL is not an emulator: it runs actual Linux binaries, accesses the same file system, and lets you launch Windows applications from WSL and vice versa.
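A minimal sketch of that interop, assuming Windows 10 Fall Creators Update or later with a distribution installed:

# from a Windows command prompt or PowerShell, run Linux binaries via wsl.exe
wsl uname -a
wsl ls -la /mnt/c/Users
# ...and from inside a WSL session, Windows executables such as notepad.exe launch directly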

The latest announcements cover copy/paste between Linux and Windows, and a tabbed console. Both enhancements are in the skip-ahead insider version of Windows 10, which means they are unlikely to be in the one about to be released, currently known as Spring Creators Update (but rumoured to be getting a name change). In other words, you may have to wait around six months for this to be generally available.

image 

These are not huge changes, but overall WSL is a big deal. Why is Microsoft doing it? One Betanews commenter says:

I still can’t figure out who this whole "Linux-on-Windows" thing is meant for. Developers who work on both platforms maybe? I guess it would be handy for people who just want to try out Linux before migrating to it, but that’s the last thing Microsoft would want to promote.

Microsoft has in fact stated the primary purpose of WSL:

This is primarily a tool for developers — especially web developers and those who work on or with open source projects. This allows those who want/need to use Bash, common Linux tools (sed, awk, etc.) and many Linux-first tools (Ruby, Python, etc.) to use their toolchain on Windows.

There is a bit more to it. Developers are small in number relative to general users, but disproportionately influential, since they make the applications the rest of us run, and if the applications are not there or are inferior, the ecosystem starts to fail and the operating system declines.

I am not sure when it was that developers started to prefer Macs, but I noticed this trend many years ago, perhaps from the time that OS X moved to x86 (2006). This was not just about preferring the Mac user interface. In 2008 Apple opened up iOS, its mobile OS, to third-party applications, and a Mac was required for iOS development (this is still the case). It has long been relatively easy to run a Windows emulator on a Mac, but not vice versa, so for developers who want to support multiple target platforms from one computer, the Mac makes sense.

OS X / macOS is a Unix-like operating system, based on BSD (Berkeley Software Distribution). This means that moving between Linux and Mac is relatively smooth, from a developer perspective. The same tools are generally available. The internet runs mostly on Linux so the Mac has an advantage there as well.

In some cases this is more than just inconvenience. Windows has a long-standing issue with path lengths. MAX_PATH is defined as 260 characters. This limitation can be mostly removed if you have Windows 10 build 1607 or higher. Nevertheless, path issues have made Windows awkward for developing with Java, Node.js, and other languages or frameworks which typically use deeply nested directories. Open source developers perhaps did not care as much about these issues because they were mostly using Mac or Linux.
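For reference, the build 1607 opt-in mentioned above is a registry value; a sketch, run from an elevated PowerShell prompt (individual applications must also declare long-path awareness in their manifests):

Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' `
    -Name 'LongPathsEnabled' -Value 1 -Type DWord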

Microsoft has responded by improving Windows as a platform on which to develop applications. Visual Studio now targets Mac, iOS and Android as well as Windows. MAX_PATH has been alleviated as far as possible. WSL however goes much further. You can install and run Linux development tools and utilities such as gcc, perl, sed, awk, grep, wget, openssl and more. There is no MAX_PATH issue. You can run the Linux builds of Apache, PHP, MySQL and more. I used WSL to debug a PHP application and explained how here.

WSL is not perfect. Not everything is implemented. You can check the current issues here. Still, it is genuinely useful and mitigates the advantages of Mac or Linux for developers.

Microsoft has also added WSL to Windows Server. Why? The main focus here seems to be on administrators. There are times when it is handy to run a Linux command or script on Windows Server. It is not intended for production use as a server, though there is now support for background tasks; however it is still per-session so you would need to keep a user logged on in order to run, for example, a web server. More important, Microsoft has not designed WSL for production use as a server platform so it might not be as optimized or reliable as you require.

Implications of WSL

Where is this going? This is where it gets speculative. I will argue though that WSL is in part an admission of defeat. Windows remains an important development platform, but is now greatly outweighed by Unix-like platforms:

  • Web/Internet applications
  • iOS applications
  • Android applications

Where Windows support is needed, developers have many cross-platform options to choose from, a popular choice today being Electron, based on Chromium (the open source foundation of Google Chrome) and Node.js.

Today there seems little chance of Windows winning back market share as a mobile operating system, and the importance of desktop applications looks destined for long slow decline.

Windows Server remains a significant application platform, but Microsoft is focused more on driving developers to Azure cloud services than on Windows Server itself. SQL Server now runs on Linux, ASP.NET Core is cross-platform, and Azure has excellent support for Linux.

All of this leads me to think that WSL will continue to improve, perhaps to the point where production loads are supported on Windows Server, for example. Further, the ability to run Windows applications on Linux (which is more or less what happens in SQL Server for Linux) may become as important as the reverse.

What is ML? What is AI? Why does it matter?

ML (Machine Learning) and AI (Artificial Intelligence) are all the rage and changing the world, but what are they?

I was asked this recently, which made me realise that it is not obvious. So I will have a go at a quick explanation, based on my own perception of what is going on. I am not a data scientist so this will be a high-level take.

I will start with ML. The ingredients of ML are:

1. Data

2. Pattern recognition algorithms

Imagine that you want to identify pictures that contain images of people. The data is lots of images. What you want is an algorithm that automatically detects which images contain people. Rather than trying to code this on your own, you give the ML system a quantity of images that do contain people, and a quantity of images that do not. This is the training process. Once trained, the ML system will predict, when shown an image, whether or not it contains people. If your training has been successful, it will have a high success rate.

The combination of the algorithm and the parameters (these being the characteristics you want to identify) is called a model. There are many types of model and a number of different ML systems from open source (eg TensorFlow) to big brands like Amazon Machine Learning, Azure Machine Learning, and Google Machine Learning.

So what is AI? This is a more generic term, so we can say that ML is a form of AI. IBM describes its Watson service as AI – Watson is really a bunch of different services so that makes sense.

A quick way to think of AI is that it answers questions. Is this customer a good credit risk? Is this component good or faulty? Who is the person in this picture?

Another common form of AI is a chatbot or conversational UI. The key task here is artificial language understanding, possibly accompanied by speech to text transcription if you want voice input, and then a back-end service that will generate a response to what it thinks the language input means. I coded a simple one myself using Microsoft’s Bot Framework and LUIS (Language Understanding Intelligent Service). My bot just performed searches, so if you wrote or said “tell me about x”, it would do a search for x and return the results. Not much use; but you can see how the concept can work for things like travel bookings and customer service. The best chatbots understand context and remember previous conversations, and when combined with personal information like location and preferences, they can get a lot of things right by conjecture. “Get me a taxi” – “Is that to Charing Cross station as usual?”

Internet search has morphed into a kind of AI. If you type “What is the time?”, it comes up on the screen.

image

The more the search engines personalise the search results, the more assumptions they can make. In the example above, Bing has used what it thinks is my location to give me the time where I am.

AI can also take decisions. A self-driving car, like a human driver, takes decisions constantly, whether to stop, go, what speed, turn this way or that. It uses sensors and pattern recognition, as well as its programmed route, to make those decisions.

Why does AI matter? It feels like early days; but there are obvious commercial applications, now that using ML and AI is within reach of any developer.

Marketers and advertisers are enthusiastic because they love targeting – and consumers often prefer more relevant advertising (though they might prefer less advertising even more). Personalisation is the key, and as mentioned above, ML and AI are good at answering questions. The more data, the more personal the targeting. How much does this person earn? Male or female? Where are they? Single or in a relationship? Do they have children? Even answering these (and many more) questions somewhat inaccurately will greatly increase the ability of marketers to offer the right product or service at the right moment.

Of course there are privacy questions around this. There are other questions too. What about the commercial advantage this gives to those few entities that hold huge volumes of personal data, such as Google and Facebook? What about when showing people “more relevant content” becomes a threat to democracy, because individuals get a distorted view of the world, seen through a tunnel formed by their own preference to avoid competing views? Society is only just beginning to grasp the implications.

Another key area is automation. Amazon made a splash by opening a store where you do not have to check out: object recognition detects what you buy and charges your account automatically. Fewer staff needed, and more convenient for shoppers.

Detecting faulty goods on a production line is another common use. Instead of a human inspecting goods for flaws, AI can identify a high percentage of problems automatically. It may be just a case of recognizing patterns in images, as discussed above.

AI can go wrong though. An example was mentioned at an event I attended recently. I cannot vouch for the truth of the story, but it is kind-of plausible. The task was to help the military detect tanks hidden in trees. They took photos of trees with hidden tanks, and trees without hidden tanks, and used them for machine learning. The results were abysmal. The reason: all the photos which included tanks were taken on an overcast day, and those without tanks on a clear day. So the ML decided that tanks only hide on cloudy days.

ML is prone to this kind of mistake. What similar problems might occur when applied to people? Could ML make inappropriate inferences from characteristics such as beards, certain types of clothing, names, or other things about which we should be neutral? It is quite possible, which is another reason why applications of AI need an ethical framework as well as appropriate regulation. This will not happen smoothly or quickly, and will not be universally implemented, so humanity’s use of ML and AI is something of a social experiment, with potential for both good and bad outcomes.

QCon London: the Ethics track and the psychology of software

The most significant thing about the Ethics track at QCon London, a software development conference I attended last week, is that it existed. I can recall ethics being discussed at QCon in previous years (including a memorable appeal by Martin Fowler at Thoughtworks about rectifying the gender imbalance in IT) but not a specific track.

Why does ethics matter more today? Ethics has always mattered, but the power of software over our lives is increasing. It is possible that algorithms at Facebook, YouTube and Twitter influenced the result of the last US election and the UK’s Brexit referendum. Algorithms play a large role in influencing many of our choices: what to buy, where to eat, where to stay, which airline to book, which vendor to use.

Software also consumes more of our time than ever, as we constantly check our phones for notifications, play games or read online content.

The increasing importance of AI (Artificial Intelligence) also raises ethical questions. Last week I attended the Re-work AI Assistant Summit, also in London. One of the sessions concerned “Building an AI Friend”, presented by Artem Rodichev from Replika. The demos were impressive, showing how a bot can be engaging and help users to talk about what matters to them. I asked though if the company had thought about ethical issues, for example if a child became attached to a bot without realising it was non-human. The answer I got was in effect a blank look, followed by the statement “we have a minimum age limit of 7”. The company has no announced business model, but I would encourage it to form an ethical policy early as these things are hard to bolt on in retrospect – as Facebook is discovering today, in the aftermath of the exposure of how personal data it held has been misused by third parties.

AI is also poised to take over more jobs previously done by people. This could be a great liberator for humanity, or alternatively divide society even more deeply into haves and have-nots.

We need more ethics discussion then; but is it too late? Well, it is never too late to improve matters, but perhaps much harm could have been avoided if the industry had focused on this earlier.

I attended a talk by Alexander Steinhart (a technologist at ThoughtWorks) on the psychologist’s perspective on ethics in technology.

image

Steinhart talked about addiction. “We all want to unplug, but cannot”, he said.

“Now we are all connected. On average people are nearly three hours online every day. They check phones every 7 to 15 minutes. Many people have difficulties in finding the right balance.”

When is a habit an addiction? When it “gets into the way of your life and you can’t do anything else, and when you try to change behaviour you don’t manage,” said Steinhart, mentioning that “distraction” is identified as a risk by many people today, including teenagers.

Interruptions and distractions are detrimental to our productivity and also a source of stress, he said. Once you are distracted, it takes 20-25 minutes to recover your focus. “Take care that you are not connected all the time.”

Unfortunately we have also developed an “attention economy” where web sites and apps are rewarded for holding our attention and they have evolved to do that effectively.

A great way, apparently, to get us addicted is to have mechanisms that only occasionally reward us. We will try and try again in hope of reward. Lotteries are like this. So are slot machines. So too, says Steinhart, are things like notifications in apps, or the action of pulling down to refresh emails or other feeds. Most of the time we get nothing of value and we know that. But occasionally something really good arrives. The possibility keeps us hooked.

Another difficulty is that humans do not always cope well with abundance. When a previously scarce resource, food for example, becomes abundant, logically what should happen is that we become more discriminating, selecting only the best and discarding the rest. In practice though this is not the case, and we have seen the ascendance of junk food that does us harm.

We now have abundant information. Answering a question that might once have required a trip to the library or several phone calls can now be done in an instant. That is fantastic; but are we coping well? Somehow, instead of becoming more discriminating about the sources and value of available information, humanity is prone to consuming more and more information of low quality, whether that is banal time-wasting or actual falsehoods and information that is intended to deceive or mislead us.

Steinhart argues that we have moved into a new technological era but have not yet learned how to manage it. He draws an analogy with urbanisation; it took mankind a while to learn how to build cities that were agreeable places in which to live.

Human needs include some that are ill served by today’s technological landscape. We need to experience “all of the different senses, to smell, to taste.” We need privacy and solitude. “If you put managers alone for one hour in a room with nothing to do, they make better decisions the rest of the day,” claims Steinhart. We also need conversations, not just connections. “There is so much human interaction that you cannot digitise, like looking someone in the eye,” he said.

How does this translate to ethics in technology? We need positive computing and software design that is “aligned with human goals,” he said.

Free and open source software is helpful in this respect, because the goals of the software are aligned with our needs rather than profit.

What can software developers do? “It is not your fault that technology is distracting,” said Steinhart, “but it’s your responsibility to change something.”

image

It is interesting to imagine what software might look like if designed for human needs rather than business interests. Steinhart’s ideas are around making software quieter, designed to get out of our way rather than to interrupt us, smartphones that encourage us to leave them alone, and of course to avoid anti-patterns which feed addiction or deliberately try to trip us up.

I noticed this tweet today about how an Amazon app behaves when you try to cancel. The user clicks Cancel subscription and gets this:

image

The following screen reverses the button colouring so that if you trained yourself to tap the faint button, you actually do the opposite of what you intend:

image

Until the last screen (there’s another one?) where they switch again:

image

This was not in Steinhart’s talk, but it seems a good example of software designed for the business and not for the user.

I have seen a similar pattern in Amazon’s web checkout where you have to click carefully to avoid being signed up for Amazon’s Prime subscription by accident. Not good.

Ethics and technology

This post is long enough; but there is, I hope, much more to say on this subject.

Despite enjoying Steinhart’s talk and others in the Ethics track, I was not encouraged. We need, of course, regulation as well as more principled businesses, and we do not know what such regulation should look like, nor how to implement it.

One thing though is worth repeating: if as a software developer you are asked to do something that is ethically unacceptable, you should refuse. Professional standards include more than quality of coding.

A week of QCon: introduction

I attended QCon London last week and found it fascinating, but have not written as much about it as I intended because of various other deadlines. In order to address this I will do a quick daily post for the next week or so.

QCon is a software development conference run by InfoQ. It is vendor-neutral and focuses on large-scale enterprise development as well as future trends, language choices and changes, software architecture and more. If you delve into the history of the event it has championed techniques including Agile development, Service Oriented Architecture, Microservices, and now AI. The event has a culture and an ethos, which is something to do with human-centred software, team communications, taking the side of the user, aversion to unnecessary complexity, and constant exploration of emerging technology.

image
Laura Bell of SafeStack speaks at QCon London on Architecting a Culture of Secure Software.

QCon, like many other events, encourages attendees to give feedback on sessions they attend. At other events I have often seen forms with several categories and questions like “How well did the speaker know their subject” and “What was your biggest takeaway from this session”? While such questions are reasonable, the problem is that they are too difficult and time-consuming and therefore not many respond, or the responses are of low quality. The QCon organisers decided years ago that the only feedback system that works is to have attendees vote good, indifferent or poor as they leave. This used to be done with coloured paper and is now electronic. I mention this because it says something about the event culture: let’s prefer something that works and is not a burden, despite the seeming crudity of a 1-2-3 scoring system. And of course even such basic information is highly valuable in discerning which sessions were most appreciated.

The event prefers practitioners, engineers and team leads over evangelists, trainers and consultants. It attracts a particularly able audience:

image

Of course you can learn plenty outside the actual sessions by chatting to other attendees.

Up next: technical ethics at QCon London.

Kaspersky encrypted connection scanning breaks ADFS login, internet-facing Dynamics CRM

I was asked to look at a case where a user could not log in to Dynamics CRM. This is an internet-facing deployment which uses ADFS (Active Directory Federation Services). The user put in valid credentials but received “401 – Unauthorized: Access is denied due to invalid credentials”.

The odd thing from the user’s perspective is that everything worked fine on other PCs; but switching web browsers did not fix it.

I noticed that Kaspersky anti-virus was installed.

image

Pausing Kaspersky made no difference to the error. However I came back to this after eliminating some other possible problems. I noticed that if you looked at the certificate on the ADFS site it was not from the site itself, but a Kaspersky certificate.

image

The reason for this is that Kaspersky wants to inspect encrypted traffic for malware.

I understand the rationale but I dislike this behaviour. Your security software should not hide the SSL certificate of the web site you are visiting. Of course it is particularly dislikeable if it breaks stuff, as in this case. I found the setting in Kaspersky and disabled both this feature and another which injects script into web traffic for the sake of Kaspersky’s “URL Advisor” (though the latter proved not to be the culprit here).

Personally I feel that encrypted traffic should only be decrypted in the recipient application. Kaspersky’s feature is an SSL Man-in-the-Middle attack and to my mind reduces rather than increases the security of the PC. However you made the decision to trust your anti-virus vendor when you installed the software.

There are other anti-virus solutions that also do this, so Kaspersky is not alone. As to why it breaks ADFS I am not sure, but I regard the failure as a good thing, since the user’s SSL connection has been compromised.

image

As it turns out, it isn’t essential to disable the feature entirely. You can simply set an exclusion for the ADFS site by clicking Manage exclusions.

Posted here in case others hit this issue.

Your favourite article on The Register, and what that says about technology and the media

I’m at Mobile World Congress in Barcelona and meeting people new to me who ask, “Who do you write for?” I’ve been struck by several separate occasions when people say, after I mention The Register, “Oh yes, I loved that Apple article”.

The piece they mean (not one of mine), is this one by Kieren McCarthy. It recounts the Reg’s efforts to attend the iPhone 7 launch; or more precisely, efforts to get Apple PR to admit that the Reg is on a “don’t invite” list and would not be able to attend.

image

Why does everyone remember this piece? In short, because it is a breath of reality in a world of hype.

The piece also exposes hidden pressures that influence tech media. There are more people working in PR than in journalism, as I recall, and it is their job to manage media coverage so that it reflects as closely as possible the messaging that their customers, the tech companies, wish to put out.

Small tech companies and start-ups struggle to get any coverage and welcome almost any press interest. The giants though are in a more privileged position, none more so than Apple, for whom public interest in its news is intense. This means it can select who gets to attend its events and naturally chooses those it thinks will give the most on-message coverage.

I do not mean to imply that those favoured journalists are biased. I believe most people write what they really think. Still, consciously or unconsciously they know that if they drift too far from the vendor’s preferred account they might not get invited next time round, which is probably a bad career move.

Apple is in a class of its own, but you see similar pressures to a lesser extent with other big companies.

Another thing I’ve noticed over years of attending technology events is that the opportunities for open questioning of the most senior executives have diminished. They would rather have communication specialists answer the questions, and stay behind closed doors or give scripted presentations from a stage.

Here in Barcelona I’ve discovered the Placa de George Orwell for the first time:

image

Orwell knew as well as anyone the power of the media, even though he almost certainly did not say what is now often attributed to him, “Journalism is printing what someone else does not want printed: everything else is public relations.”

Still, as I move into a series of carefully-crafted presentations it is a thought worth keeping in front of mind.

Finally, let me note that I have never worked full-time for The Register though I have written a fair amount there over the years (the headlines by the way are usually not written by me). The more scurrilous aspect of some Reg pieces is not really me, but I absolutely identify with The Register’s willingness to allow writers to say what they think without worrying about what the vendor will think. 

Microsoft introduces new feedback system for technical documentation, will delete existing comments

Microsoft is introducing a new feedback system for https://docs.microsoft.com, used for its technical documentation.

The new system, which you can already see for certain topics such as the Visual Studio IDE, is based on GitHub issues. When you leave a comment, you can specify whether it concerns documentation or product functionality.

image

So far so good, but the downside is that all existing comments will be deleted:

image

The statement “Old comments will not be carried over. If content within a comment thread is important to you, please save a copy.” is unhelpful. Nobody knows what comments will be useful to them in future.

Few things sap enthusiasm for community participation more than having all the past contributions into which you have put effort suddenly zapped. Nor is this the first time, as user guibirow notes:

As much I like the new system idea, I hate the fact that this is happening over and over.
It used to be a Disqus comment system, then moved to LiveFyre, then moved now to this new system, what will be the next?
The worst part of this all is that MS does not care about past content lost on these discussions, so many times I found issues described in the docs that are gone now.
Please, pay attention to your previous mistakes, don’t let the information be lost again, at lest import them as closed issue in the new system.

Sometimes progress has a cost and that is understood. However it is not impossible to migrate content from one system to another. It just takes effort.

Update: Microsoft’s Rob Eisenberg has responded with an explanation and mitigation plan regarding existing comments. He says that a straight migration of the comments is impossible:

    • There is a lot of garbage, spam and even dangerous content within existing LiveFyre comments which would violate GitHub terms of usage and our open source code of conduct, as well as cause security problems.
    • There isn’t a good way to map LiveFyre users to GitHub users and using a bot account to anonymously add comments is questionable with respect to OSS practices and GitHub terms of use.
    • For legal and privacy reasons, we cannot move user-associated data from one system to another without consent from users (GDPR).
    • LiveFyre conversations are threaded, while GitHub issues are not.
    • Placing the old comments into the GitHub Issues system would derail the entire GitHub Issues workflow for both customers and employees and muddle the data.
    • It isn’t clear whether there is a way to invoke GitHub APIs for a migration of this scale such that it wouldn’t violate GitHub API terms of use.

He also has an archiving proposal:

We would take the comments from an article on docs.microsoft.com and then convert them into a Markdown file. During this process, we would strip all user info (remember GDPR). The Markdown file would then be committed to a GitHub repo. Finally, at the bottom of the feedback section, next to the link that says "View on GitHub" we would add a second link that said something like "View Comment Archive". This link would connect you directly to the Markdown comment file for that page.

This sounds positive. At the same time, it is a mess that illustrates some of the disadvantages of a “best of breed” approach to solving technical problems. If Microsoft could use its own technology to host a documentation and commenting system, and a source code management and issue tracking system for that matter, this issue would not occur, and users would not need multiple accounts, causing the legal issues mentioned above.

Microsoft in fact used to use its own platform for all the above but decided to shift to using third-party solutions because they worked better. That seemed to be a good thing, improving user experience and productivity, but becomes a problem when what seems to be the best third-party option changes.

Setting up PHP for development on Windows Subsystem for Linux in Windows 10

I have been working a little with PHP, for the first time for a while, and soon found it annoying not to have the convenience of instant application testing and line by line debugging. I have set up a PHP development environment before using XAMPP for Windows and Eclipse, but it was fiddly. I also prefer PHP on Linux, which is where my scripts will be running.

Since Windows 10 now has a Linux environment built-in, called Windows Subsystem for Linux (WSL), I decided to set this up to run Apache, PHP and MySQL and to try debugging my scripts there.

My PC is a recent installation and I had not yet installed WSL. To do so, you have to both download a Linux distribution from the Store (I chose Ubuntu), and enable WSL in Windows features. Then restart, launch Ubuntu, set a username and password, and you are up and running.

Note the Linux commands that follow should be run as root using sudo.

Before doing anything else, I got Ubuntu up to date:

apt-get update

apt-get upgrade

Then I installed the LAMP suite:

apt-get install lamp-server^

(the final ^ is intentional; see the guide here).

To check that everything is working, I created the file phpinfo.php in /var/www/html with the following contents:

<?php phpinfo(); ?>

and restarted Apache:

/etc/init.d/apache2 restart

Note: if you have IIS running in Windows, or another web server, Apache will not be able to listen on port 80. Change the port in /etc/apache2/ports.conf and in /etc/apache2/sites-enabled/000-default.conf
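For example, to move Apache to port 8080 (the directives on a default Ubuntu install look roughly like this; adjust to taste):

# /etc/apache2/ports.conf
Listen 8080

# /etc/apache2/sites-enabled/000-default.conf
<VirtualHost *:8080>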

Then I opened a web browser on the Windows side and browsed to localhost:

image

and

image

We are up and running, but not debugging PHP yet. Remember the basic rules of WSL:

  • you cannot change Linux files from Windows.
  • you can access Windows files from Linux.

We want to edit PHP from Windows, so we’ll define a site that uses Windows files. Windows files are under /mnt/c (or whatever drive letter you are using).

So if, for example, you have your PHP website in a folder called c:\websites\mysite, you can have Apache serve files from that folder.

The quickest way to get up and running is to create a symbolic link in the Apache home directory, in my case /var/www/html. Change to that directory and type:

ln -s /mnt/c/websites/mysite mysite

Now you can view the site at http://localhost/mysite/

This worked first time for me, complete with PHP running. You could also set up multiple virtual hosts in Apache, and use the hosts file in Windows to map other host names to localhost.
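A rough sketch of that approach, with made-up names: add a line such as “127.0.0.1 mysite.local” to C:\Windows\System32\drivers\etc\hosts, then define a matching virtual host on the Ubuntu side, for example in /etc/apache2/sites-available/mysite.conf:

<VirtualHost *:80>
    ServerName mysite.local
    DocumentRoot /mnt/c/websites/mysite
</VirtualHost>

Then enable the site and reload Apache:

a2ensite mysite

/etc/init.d/apache2 reload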

Next, you probably want PHP to show error messages. To do this, replace the default php.ini with the development version (or tweak it according to your own preferences). At the time of writing, on Ubuntu, the default PHP version is 7.0 and php.ini-development is located at /usr/lib/php/7.0/php.ini-development. So I backed up the php.ini at /etc/php/7.0/apache2, replaced it with the development version, and restarted Apache. My PHP form immediately showed me a non-fatal undefined index error, so it worked.

There is one small inconvenience. Apache in WSL will only run during the session. So before starting work, you have to open Ubuntu and type:

sudo apache2ctl start

Background task support is coming to WSL; in any case, I do not regard this as a big problem.

OK, this is cool, we can make changes in the PHP code in our favourite Windows editor, save, and view the results directly in the browser. But what about line-by-line debugging? For this, we are going to use Visual Studio Code with the PHP Debug extension:

image

Then on the Ubuntu side:

apt-get install php-xdebug

Restart Apache:

apache2ctl restart

Check that phpinfo.php now shows an Xdebug section. Then edit php.ini and add the following:

[XDebug]
xdebug.remote_enable = 1
xdebug.remote_autostart = 1

Restart Apache again and XDebug is ready to go.

Over in Visual Studio code there is a little more work to do. The problem is that although everything is running on localhost, the location of the files looks different to Linux than to Windows. We can fix this with a pathMappings setting. In Visual Studio code, open the PHP file you want to debug. Click the Debug icon and then the little gearwheel near top left; this will open launch.json. By default there are a couple of settings for XDebug. These are OK for a default setup, but we need to add path mapping so that the debugger knows where to find the files. For example:

image
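For reference, a minimal launch.json along those lines might look like the following; the server-side path assumes the symbolic link setup described above, and may need adjusting depending on how Xdebug reports file paths:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for XDebug",
            "type": "php",
            "request": "launch",
            "port": 9000,
            "pathMappings": {
                "/mnt/c/websites/mysite": "${workspaceFolder}"
            }
        }
    ]
}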

Now you can set a breakpoint, start debugging, and open the page in your browser:

image

More guidance on the PHP Debug extension by Felix Becker is here.

Final thoughts

This is cool; but is it better or worse than an old-style VM running Linux and PHP? The WSL solution is lightweight and convenient, but unlike a VM it is not isolated and you may hit issues that are unique to WSL, because not everything runs. I did happen to suffer crashes in Visual Studio and in Outlook while WSL was running; it may well be coincidence, but I cannot help wondering if WSL might be to blame.

Still, a great feature of WSL is that when you exit your session, it goes away, so it is not too intrusive. I plan to use it for PHP debugging and will see how it goes.