Category Archives: open source

Amazon Linux 2023 on Hyper-V

Amazon Linux 2023 came out in March 2023, somewhat late given that it was originally called Amazon Linux 2022. It took even longer to provide images for running it outside AWS; these did eventually arrive, but only for VMware and KVM, even though the older Amazon Linux 2 does have a Hyper-V image.

Update: Hyper-V is now officially supported, making this post obsolete – but it may still be of interest!

I wanted to try out AL 2023 and it makes sense to do that locally rather than spend money on EC2; but my server runs Windows Hyper-V. Migrating images between hypervisors is nothing new so I gave it a try.

  • I used the KVM image here (or the version that was available at the time).
  • I used the qemu disk image utility to convert the .qcow2 KVM disk image to .vhdx format (there is a sketch of the commands after this list). I installed qemu-img by installing QEMU for Windows, without enabling the hypervisor itself.
  • I used the seed.iso technique to initialise the VM with an ssh key and a user with sudo rights. I found it helpful to consult the cloud-init documentation linked from that page for this.
  • In Hyper-V I created a new Generation 1 VM with 4GB RAM, set it to boot from the converted drive, and put seed.iso in the virtual DVD drive. Started it up and it worked.
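As a sketch, the conversion is a single command, and seed.iso is built from two small cloud-init files (file names, user name and key here are illustrative; the ISO must have the volume label cidata for the NoCloud data source to find it):

   qemu-img convert -f qcow2 -O vhdx -o subformat=dynamic al2023-kvm.qcow2 al2023.vhdx

user-data:

   #cloud-config
   users:
     - name: admin
       sudo: ALL=(ALL) NOPASSWD:ALL
       ssh_authorized_keys:
         - ssh-ed25519 AAAA... admin@example

meta-data:

   instance-id: al2023-hyperv
   local-hostname: al2023

   genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data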
Amazon Linux 2023 running on Hyper-V

I guess I should add the warning that installing on Hyper-V is not supported by AWS; on the other hand, installing locally has official limitations anyway. Even if you install on KVM, the notes state that the KVM guest agent is not packaged or supported, VM hibernation is not supported, VM migration is not supported, passthrough of any device is not supported, and so on.

What about the Hyper-V integration drivers? Note that “Linux Integration Services has been added to the Linux kernel and is updated for new releases.” Running lsmod shows that the essentials are there:

The Hyper-V modules are in the kernel in Amazon Linux 2023
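A quick way to check is to filter the module list, which should turn up names along the lines of hv_netvsc (networking), hv_storvsc (storage) and hv_utils – exact names vary with the kernel build:

   lsmod | grep hv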

Networking worked for me without resorting to legacy network card emulation.

This exercise also taught me about the different philosophy in Amazon Linux 2023 versus Amazon Linux 2. That will be the subject of another post.

Desktop development: is Electron the answer, or a tragedy?

A few weeks ago InfoQ posted a session by Paul Betts on Desktop Applications in Electron. Betts worked on Slack Desktop, which he says was one of the first Electron apps after the Atom editor. There is a transcript as well as a video (which is great for text-oriented people like myself).

Electron, in case you missed it, is a framework for building desktop applications with Chromium, Google’s open source browser on which Chrome is based, and Node.js. In using web technology for desktop applications it is similar in concept to older frameworks like Apache Cordova/PhoneGap, though Electron targets only Windows, macOS and Linux, not mobile platforms, and is specific to a particular browser engine and JavaScript runtime.

Electron is popular as a quick route to cross-platform desktop applications. It is particularly attractive if you come from a web development background since you can use many of the same libraries and skills.

Betts says:

Electron is a way to build desktop applications that run on Mac and Linux and Windows PCs using web technologies. So we don’t have to use things like Cocoa or WPF or Windows Forms; these things from the 90s. We can use web technology and reuse a lot of the pieces we’ve used to build our websites, to build desktop applications. And that’s really cool because it means that we can do interesting desktop-y things like, open users’ files and documents and stuff like that, and show notifications and kind of do things that desktop apps can do. But we can do them in less than the bazillion years it will take you to write WPF and Cocoa apps. So that’s cool.

There are many helpful tips in this session, but the comment quoted above gave me pause for thought. You can get excellent results from Electron: look no further than Visual Studio Code, which in just a few years (first release was April 2015) has become one of the most popular development tools of all time.

At the same time, I am reluctant to dismiss native code desktop development as yesterday’s thing. John Gruber articulates the problem in his piece about Electron and the decline of native apps.

As un-Mac-like as Word 6 was, it was far more Mac-like then than Google Docs running inside a Chrome tab is today. Google Docs on Chrome is an un-Mac-like word processor running inside an ever-more-un-Mac-like web browser. What the Mac market flatly rejected as un-Mac-like in 1996 was better than what the Mac market tolerates, seemingly happily, today. Software no longer needs to be Mac-like to succeed on the Mac today. That’s a tragedy.

Unlike Gruber I am not a Mac person but even on Windows I love the performance and integration of native applications that look right, feel right, and take full advantage of the platform.

As a developer I also prefer C# to JavaScript but that is perhaps more incidental – though it shows how far-sighted C# inventor Anders Hejlsberg was when he shifted to work on TypeScript, another super popular open source project from Microsoft.

Running ASP.NET 5.0 on Nano Server preview

I have been trying out Microsoft’s Nano Server Preview and wrote up initial experiences for the Register. One of the things I mentioned is that I could not get an ASP.NET app successfully deployed. After a bit more effort, and help from a member of the team, I am glad to say that I have been successful.

What was the problem? First, a bit of background. Nano Server does not run the .NET Framework, presumably because it has too many dependencies on pieces of Windows which Microsoft wanted to omit from this cut-down deployment. Nano Server does support .NET Core, also known as Core CLR, the open source fork of the .NET Framework. This enables it to run PowerShell, although with a limited range of cmdlets, and my two main ways of interacting with Nano Server are PowerShell remoting, and Windows file sharing for copying files across.
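For example, interacting with the server from the development machine looks like this (IP address illustrative; a workgroup server needs adding to TrustedHosts first):

   Set-Item WSMan:\localhost\Client\TrustedHosts "192.168.1.50"
   Enter-PSSession -ComputerName 192.168.1.50 -Credential Administrator

and, for copying files across:

   net use Z: \\192.168.1.50\c$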

On your development machine, you need several pieces in order to code for ASP.NET 5.0. Just installing Visual Studio 2015 RC will do, except that there is currently an incompatibility between the version of the ASP.NET 5.0 .NET Core runtime shipped with Visual Studio and the version that works on Nano Server. This meant that my first effort, which was to build an empty ASP.NET 5.0 template app and publish it to the file system, failed on Nano Server with a NativeCommandError.

This meant I had to dig a bit more deeply into ASP.NET 5.0 running on .NET Core. Note that when you deploy one of these apps, you can include all the dependencies in the app directory. In other words, apps are self-hosting. The binary that enables this bit of magic is called DNX (.NET Execution Environment); it was formerly known as the K runtime.

Developers need to install the DNX SDK on their machines (Windows, Mac or Linux). There is currently a getting started guide here, though note that many of the topics in this promising documentation are as yet unwritten.

However, after installation you will be able to use several handy commands:

dnvm This is the .NET Version Manager. You can have several versions of the DNX runtime installed, and this utility lets you list them, set aliases to save typing full paths, and manage defaults.
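For example (the version number is illustrative of the beta builds current at the time, and switch names are as I recall from the beta tooling):

   dnvm list
   dnvm use 1.0.0-beta5-11701 -r coreclr -arch x64 -p
   dnvm alias default 1.0.0-beta5-11701

The -p switch makes the selection persistent; forgetting it caught me out, as the footnote at the end of this post explains.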

dnu This is the .NET Development Utility (formerly kpm) that builds and publishes .NET Core projects. The two commands I found myself using regularly are dnu restore, which downloads NuGet (.NET repository) packages, and dnu publish, which packages an app for deployment. Once published, you will find .cmd files in the output which you use to start the app.

dnx This is the binary which you call to run an app. On the development machine, you can use dnx . run to run the console app in the current directory and dnx . web to run the web app in the current directory.

Now, back to my deployment issues. The Visual Studio templates are all hooked to DNX beta 4, and I was informed that I needed DNX beta 5 for Nano Server. I played around with trying to get Visual Studio to target the updated DNX but ran into problems so decided to ignore Visual Studio and do everything from the command line. This should mean that it would all work on Mac and Linux as well.

I had a bit of trouble persuading DNX to update itself to the latest unstable builds; the main issue I recall is targeting the correct repository. Your NuGet sources must include (currently) https://www.myget.org/F/aspnetvnext/api/v2.
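In practice that means a NuGet.config along these lines (the second source is the standard public feed):

   <?xml version="1.0" encoding="utf-8"?>
   <configuration>
     <packageSources>
       <add key="AspNetVNext" value="https://www.myget.org/F/aspnetvnext/api/v2" />
       <add key="nuget.org" value="https://www.nuget.org/api/v2/" />
     </packageSources>
   </configuration>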

Since I was not using Visual Studio, I based my samples on these Hello World console, MVC and web apps that you can use for testing that everything works. My technique was to test on the development machine using dnx . web, then to use dnu publish and copy the output to Nano Server, where I could run ./web.cmd in a remote PowerShell session.
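For a minimal web app, the Startup class is just a few lines – this is roughly what the beta-era code looks like, though the API names were changing from release to release:

   using Microsoft.AspNet.Builder;
   using Microsoft.AspNet.Http;

   public class Startup
   {
       public void Configure(IApplicationBuilder app)
       {
           // Respond to every request with a plain-text greeting
           app.Run(async context =>
               await context.Response.WriteAsync("Hello from Nano Server"));
       }
   }

The round trip is then: dnu restore to pull down packages, dnx . web to test locally, dnu publish to package, copy bin\output to the server, and run .\web.cmd in the remote PowerShell session.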

Note that I found it necessary to specify the CoreClr 64-bit runtime in order to get dnu to publish the correct files. I tried to make this the default but for some reason* it reverted itself to x86:

dnu publish --runtime "c:\users\[USERNAME]\.dnx\runtime\dnx-coreclr-win-x64.1.0.0-beta5-11701"

Of course the exact runtime version to use will change soon.

If you run this command and look in the /bin/output folder you will find web.cmd, and running this should start the app. The port on which the app listens is set in project.json in the top level directory of the project source. I set this to 5001, opened that port in the Windows Firewall on the Nano Server, and got a started message on the command line. However I still could not browse to the app running on Nano Server; I got a 400 error. Even on the development machine it did not work; the browser just timed out.

It turned out that there were several issues here. On the development machine, which is running Windows 10 build 10074, I discovered to my annoyance that the web app worked fine with Internet Explorer, but not in Project Spartan, sorry Edge. I do not know why.

Support also gave me some tips to get this working on Nano Server. In order for the app to work across the network, you have to edit project.json so that localhost is replaced either with the IP address of the server, or with a *. I was also advised to add dnx.exe to the allowed apps in the firewall, but I do not think this is necessary if the port is open (it is a nuisance, since the location of dnx.exe changes for every app).
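Putting those tips together, the commands section of project.json ends up looking something like this (the hosting and server names are as I recall from the beta templates), and the netsh command opens the port:

   "commands": {
     "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener --server.urls http://*:5001"
   }

   netsh advfirewall firewall add rule name="ASPNET5001" dir=in action=allow protocol=TCP localport=5001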

Finally I was successful.

Final observations

It seems to me that ASP.NET vNext running on .NET Core has the characteristic of many open source projects: a few dedicated people who have little time for documentation and are so close to the project that their public communications assume a fair amount of prior knowledge. The site I referenced above does have helpful documentation though, for the few topics that are complete. Some other posts I found helpful are this series by Steve Perkins, and the troubleshooting suggestions here, especially David Fowler’s post.

I like the .NET Core initiative overall, since I like C# and ASP.NET MVC and now it is becoming a true cross-platform framework. That said, the code does seem to be in rapid flux and I doubt it will really be ready when Visual Studio 2015 ships. The danger I suppose is that developers will try it in the first release, find lots of problems, and never go back.

I also like the idea of running apps in Nano Server, a low-maintenance environment where you can get the isolation of a dedicated server for your app at low cost in terms of resources.

No doubt though, the lack of pieces that you expect to find on Windows Server will be an issue and I am not sure that the mainstream Microsoft developer ecosystem will take to it. Aidan Finn is not convinced, for example:

Am I really expected to deploy a headless OS onto hardware where the HCL certification has the value of a bucket with a hole in it? If I was to deploy Nano, even in cloud-scale installations, then I would need a super-HCL that stress tests all of the hardware enhancements. And I would want ALL of those hardware offloads turned OFF by default so that I can verify functionality for myself, because clearly, neither Microsoft’s HCL testers nor the OEMs are capable of even the most basic test right now.

Finn’s point is that if your headless server is having networking issues it is hard to troubleshoot, since of course remote tools will not work reliably. That said, I have personally run Hyper-V Server (which is essentially Server Core with just the Hyper-V role) with great success for several years; I started keeping notes on how to troubleshoot from the command line and found solutions to common problems. If networking fails with Nano Server then yes, you have a problem, but there is always something you can do, even if it means mounting the Nano Server VHD or VHDX on another VM. Windows Server admins have become accustomed to a local GUI though, and adjusting even to Server Core has not been easy.
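To give a flavour of the sort of thing I mean: basic networking on a headless server can be inspected and reset from a local command prompt with commands like these (interface name and addresses are examples):

   netsh interface ipv4 show config
   netsh interface ipv4 set address name="Ethernet" static 192.168.1.50 255.255.255.0 192.168.1.1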

*the reason was that I did not use the -p argument with dnvm use, which would have made it persistent

Mobile World Congress 2015 round-up: MediaTek Helio, Samsung Galaxy S6, Boyd smell sensor, Jolla Sailfish 2.0, Alcatel OneTouch devices, ZTE eye scanning, and Ford’s electric bike

Finding time to write everything up is a struggle, so rather than risk not doing so at all, here is a quick-fire reflection on the event.

Microsoft’s Windows 10 was part of it of course; I’ve covered this in a separate post.

I attended MediaTek’s press event. This Taiwanese SoC company announced the Helio X10 64-bit 8-core chip and had some neat imaging demos. Helio is its new brand name. I was impressed with the company’s presentation; it seems to be moving quickly and delivering high-performance chips.

Alcatel OneTouch showed me its latest range. The IDOL 3 smartphone includes a music mixing app which is good fun.

There is also a watch of course:

Despite using Android for its smartphones, Alcatel OneTouch says Android Wear is too heavyweight for its watches.

The Alcatel OneTouch range looks good value but availability in the UK is patchy. I was told in Barcelona that the company will address this with direct sales through its own ecommerce site, though currently this only sells accessories, and by trying to get more retail presence as opposed to relying on carrier deals.

I attended Samsung’s launch of the Galaxy S6. Samsung is a special case at MWC. It has the largest exhibits and the biggest press launch (many partners attend too). It is not just about mobile devices but has a significant enterprise pitch with its Knox security piece.

So to the launch, which took place in the huge Centre de Convencions Internacional, unfortunately on the other side of Barcelona from most of the other events.

The S5 was launched at the same venue last year, and while it was not exactly a flop, sales disappointed. Will the S6 fare better?

It’s a lovely phone, though there are a few things missing compared to the S5: no microSD slot, no replaceable battery, no water resistance. However the S6 is more powerful, with its 8-core processor and 1440×2560 screen versus the quad-core processor and 1920×1080 screen in the S5. Samsung has also gone for a metal case with tough Gorilla Glass front and back, versus the plastic and glass construction of the S5, and most observers feel this gives the newer smartphone a more premium feel.

I suspect that these details are unimportant relative to other factors. Samsung wants to compete with the iPhone, but it is hardly possible to do so, given the lock which the Apple brand and ecosystem holds on its customers. Samsung’s problem is that the cost of an excellent smartphone has come down and the perceived added value of a device at over £500 or $650 versus one for half the price is less than it was a couple of years ago. Although these prices get hidden to some extent in carrier deals, they still have an impact.

Of particular note at MWC were the signs that Samsung is falling out with Google. Evidence includes the fact that Samsung Knox, which Google and Samsung announced last year would be rolled into Android, is not in fact part of Android for Work, to the puzzlement of the Samsung folk I talked to on the stand. More evidence is that Samsung is bundling Microsoft’s Office 365 with Knox, not what Google wants to see when it is promoting Google Apps.

Google owns Android and intends it to pull users towards its own services; the tension between the company and its largest OEM partner will be interesting to watch.

At MWC I also met with Imagination, which I’ve covered here.

Jolla showed its crowdfunded tablet running Sailfish OS 2.0, which is based on the abandoned Nokia/Intel project called MeeGo. Most of its 128 employees are ex-Nokia.

Jolla’s purpose is not so much to sell a tablet and phone as to kick-start Sailfish, which the company hopes will become a “leading digital content and m-commerce platform”. It is targeting government officials, businesses and “privacy-aware consumers” with what it calls a “security strengthened mobile solution”. Its business model is not based on data collection, says the Jolla presentation, taking a swipe at Google, and it is both independent and European. Sailfish can run many Android apps thanks to Myriad’s Alien Dalvik runtime.

The tablet looks great and the project has merit, but what chance of success? The evidence, as far as I can tell, is that most users do not much object to their data being collected; or put another way, if they do care, it does not much affect their buying or app-using decisions. That means Sailfish will have a hard task winning customers.

China-based ZTE is differentiating its smartphones with eye-scanning technology. The Grand S3 smartphone lets you unlock the device with Eyeprint ID, based on a biometric solution from EyeVerify.

Senior Director Waiman Lam showed me the device. “It uses the retina characteristic of your eyes for authentication,” he said. “We believe eye-scanning technology is one of the most secure biometric ways. There are ways to get around fingerprint. It’s very very secure.”

Talking of sensors, I must also mention San Francisco-based startup Boyd Sense, which has a smell sensor. I met with CEO Bruno Thuillier. “The idea we have is to bring gas technology to the mobile phone,” he said. Boyd Sense is using technology developed by partner Alpha MOS.

The image below shows a demo in which a prototype sensor is placed into a jar smelling of orange, which is detected and shown on the connected smartphone.

What is the use of a smell sensor? What we think of as smell is actually the ability to detect tiny quantities of chemicals, so a smell sensor is a gas analyser. “You can measure your environment,” says Thuillier. “Think about air quality. You can measure food safety. You can measure beverage safety. You can also measure your breath and some types of medical condition. There are a lot of applications.”

Not all of these ideas will be implemented immediately. Measuring gas accurately is difficult, and vulnerable to the general environment. “The result depends on humidity, temperature, speed of diffusion, and many other things,” Thuillier told me.

Of course the first thing that comes to mind is testing your breath the morning after a heavy night out, to see if you are safe to drive. “This is not complicated, it is one gas which is ethanol,” says Thuillier. “This I can do easily”.

Analysing multiple gases is more complex, but necessary for advanced features like detecting medical conditions. Thuillier says more work needs to be done to make this work in a cheap mobile device, rather than the equipment available in a laboratory.

I had always assumed that sampling blood is the best way to get insight into what is happening in your body, but apparently some believe breath is as good or better, as well as being easier to get at.

For this to succeed, Boyd Sense needs to get the cost of the sensor low enough to appeal to smartphone vendors, and small enough not to spoil the design, as well as working on the analysis software.

It is an interesting idea though, and more innovative than most of what I saw on the MWC floor. Thuillier is hoping to bring something to the consumer market next year.

Finally, one of my favourite items at MWC this year was Ford’s electric bikes.

Ford showed two powered bicycles at the show, both prototypes and the outcome of an internal competition. The idea, I was told, is that bikes are ideal for the last part of a journey, especially in today’s urban environments where parking is difficult. You can put your destination into an app, get directions to the car park nearest your destination, and then dock your phone to the bike for turn-by-turn directions from the handlebars.

I also saw a prototype delivery van with three bikes in the back. Aimed at delivery companies, this would let the driver park at a convenient spot for the next three deliveries, and have bikers zip off to drop the parcels.

LibreOffice is four years old, plans Android version

Four years ago, on 28th September 2010, the open source LibreOffice productivity suite was created by forking OpenOffice. This Microsoft Office alternative offers a word processor, spreadsheet, presentation graphics, vector drawing package, and database manager. Its origins are in a German suite called StarOffice, which was acquired by Sun Microsystems in 1999. In an effort to disrupt Microsoft, Sun made StarOffice free and open source, creating OpenOffice.org. However, Sun was itself acquired by Oracle Corporation in 2010, and LibreOffice was created by a breakaway group of OpenOffice contributors who were wary of what might happen to the project under Oracle’s stewardship.

They probably need not have worried, since Oracle donated OpenOffice to the Apache Foundation in 2011. It is still performing its intended function as a Microsoft disruptor; see for example this report of the Italian city of Udine moving from Microsoft Windows and Office to Linux and OpenOffice.

A key motivation is that it is easier to keep free software up to date, and organisations like having all their users on the same version:

"Some of our PCs are stuck with pretty old software like Office 2000, which is no longer supported, as we haven’t had the resources to upgrade," Gabriele Giacomini, the innovation and economic development councillor for the municipality of Udine, told ZDNet.

"By switching to open source, we will have the chance to allow our employees to work with the latest version of the suite”

Microsoft, of course, wants to address this by persuading users to subscribe to Office rather than buying it outright; though this does not solve the problem of out of date Windows versions (but watch this space).

But what about LibreOffice? What is the point of having two major open source productivity suites based on essentially the same products?

Good question; but one possible differentiator is that LibreOffice is working on an Android port. The Document Foundation, which runs the LibreOffice project, is inviting tenders for implementation of the suite on Android, complete with a basic interface for integrating with the user’s “preferred cloud storage”.

Another point of interest is that the Foundation is asking for commercial tenders rather than hiring its own coders to work with the open source community.

That said, there is already an Android port of OpenOffice, called AndrOpen Office, though this is a fork and not an official Apache OpenOffice project.

Are these multiple forks healthy proliferation, or open source confusion? That depends on your point of view, though it does show the ability of the open source community to respond to obvious needs.

It seems to me though that the suite would be more attractive to businesses if LibreOffice and OpenOffice could merge, and develop an official Android version of the suite.

My guess is that productivity software on tablets (and phablets) will be a key battleground as users do an increasing proportion of their work on mobile devices rather than PCs or laptops. Microsoft already has an iOS version of Office, and one for Android in preparation. There is also a version of Office for the Windows 8 “Metro” personality in preparation.

Open source advocate Glyn Moody has posted about the LibreOffice project here.

Microsoft releases WinJS cross-browser JavaScript library – but why?

Microsoft has announced WinJS 3.0:

The Windows Library for JavaScript (WinJS) project is pleased to announce the general availability of its first release – WinJS 3.0 – since the open source project began at //BUILD 2014.

Much of WinJS will run on any modern browser but the browser support matrix has a number of gaps:

You can also see what runs where from this status table.

But what is WinJS? Note that it comes from the Windows apps team, not the web development team at Microsoft. WinJS was designed to enable app development for Windows 8 “Metro” (also known as the Windows Runtime) using JavaScript, CSS and HTML. Back in 2010, when Microsoft signalled the end of Silverlight and the rise of HTML 5 for browser-based applications, early versions of WinJS would already have been in preparation. Using WinJS you can share code across Windows 8 apps, web apps and, via an app packager like Apache Cordova, apps for Android and iOS as well.

Note that Cordova is now integrated into Visual Studio, using the catchy name Multi-Device Hybrid App:

If you want to know what kind of controls and components are on offer in WinJS, you can find out using the excellent demo site here. This is Firefox:

Quick summary then: WinJS lets you build apps that look like Windows 8 Store apps, but which run cross-browser and cross-platform. But who wants to do that?

Maybe Microsoft does. The messaging from the company, especially since CEO Satya Nadella took over from Windows guy Steve Ballmer, is “any device”, provided of course that they hook up to Microsoft’s services. That messaging is intended for developers outside the company too. Check out the current campaign for Microsoft Azure, which says “consume on any device”.

This could be a web application, or it could be a client app using Azure Mobile Services or an ASP.NET Web API application to connect to cloud data.

You do not have to use WinJS to consume Microsoft’s services of course. Why would developers want to use the look and feel of a rather unloved app platform, rather than the native look and feel of Android or iOS? That is an excellent question, and in most cases they will not. There could be cases though, for example for internal business apps where users care most about functionality. What is the current stock? What is the lead time? Show me this customer’s order history. A WinJS app might not look right for the platform, but the UI will be touch-friendly, and ease of rollout across the major mobile platforms could trump Apple’s design guidelines.

If you are writing a pure web application, users’ expectations concerning native look and feel are not so high. The touch-oriented design of WinJS is its main appeal, though other web frameworks like jQuery Mobile also offer this. The “Metro” design language is distinctive, and Microsoft will be making a renewed push for Windows Store apps, or Universal Apps, as part of the new wave of Windows called Windows 9 or “Threshold”. WinJS is the way to build apps for that platform using JavaScript and HTML, with the added bonus of easy porting to a broad range of devices.

This is a hard sell though. I am impressed by the effort Microsoft has put into making WinJS work cross-platform, but will be surprised to see much usage outside Windows Store apps (including Windows Phone). On the other hand, it does help to keep the code honest: this really is HTML and JavaScript, not just a wrapper for Windows Runtime APIs.

The UK government is adopting Open Document: some observations

The UK government is adopting the Open Document Format for Office Applications, for documents that are editable (read-only documents will be PDF or HTML). You can read Mike Bracken’s (Government Digital Service) blog on the subject here, and the details of the new requirements here. If you want to see the actual standards, they are on the OASIS site here.

I followed the XML document standards wars in some detail back in 2006-2008. The origins of ODF go back to Sun Microsystems (a staunch opponent of Microsoft), which acquired an Office suite called StarOffice, made it open source, and supported OpenOffice.org. My impression was that Sun’s intentions were in part to disrupt the market for Microsoft Office, and in part to promote a useful open standard out of conviction. OpenOffice eventually found its way to the Apache Foundation after Oracle’s acquisition of Sun. You can find it here.

During that time, Microsoft responded by shifting Office to use XML formats by default – these are the formats we know as .docx, .xlsx and so on. It also made the formats an open standard via ECMA and ISO, to the indignation of ODF advocates, who found every possible fault in the standards and the process. There were and are faults; but it has always seemed to me that an open XML standard for Microsoft Office documents was a real step forward from the wholly proprietary (but reverse engineered) binary formats.

The standards wars are to some extent a proxy for the effort to shift Microsoft from its dominance of business document authoring. Microsoft charges a lot for Office, particularly for businesses, and arguably this is an unnecessary burden. On the other hand, it is a good product which I personally prefer to the alternatives on Windows (on the Mac I am not so sure), and considering the amount of use Office gets during the working day even a small improvement in productivity is worth paying for.

As a further precaution, Microsoft added ODF support into its own Office suite. This was poor at first, though it has no doubt improved since 2007. However I would not advise anyone to set Microsoft Office to use ODF by default, unless mandated by some requirement such as government regulation. It is not the native format and I would expect a greater likelihood that something could go slightly wrong in formatting or metadata.

Bracken does not mention Microsoft Office in his blog; but as ever, the interesting part of this decision is how it will impact Office users in government, or working with government. If it is a matter of switching defaults in Office, that is no big deal, but if it means replacing Microsoft Office with OpenOffice or its fork, LibreOffice, that will have more impact.

The problem with abandoning Microsoft Office is not only that the alternatives may fall short, but also that the ecosystem around Microsoft Office and its document formats is richer – in other words, tools that consume or generate Office documents, add-ins for Office, and so on.

This also means that Microsoft Office documents are, in my experience, more interoperable (not less) than ODF documents.

That does not in itself make the UK government’s decision a bad one, because in making the decision it is helping to promote an alternative ecosystem. On the other hand, it does mean that the decision could be costly in constraining the choice of tools while the ODF ecosystem catches up (if it does).

How does the move towards cloud services like Office 365 and Google Docs impact on all this? Microsoft says it supports ODF in SharePoint; but for sure it is better to use Microsoft’s own formats there. For example, check the specifications for Office Online. You can edit docx in the browser, but not odt (Open Document Text); it is the same story with spreadsheets and presentations.

Google has recently added native support for the Microsoft formats to Google Docs.

Amazon’s Zocalo service, which I have just reviewed for the Register, can preview Microsoft’s formats in the browser, but while it also supports odt for preview, it does not support ods (Open Document Spreadsheet).

A good decision then by the UK government? Your answer may be partly ideological, but as a UK taxpayer, my feelings are mixed.

For more information on this and other government IT matters, I recommend Bryan Glick’s pieces over on Computer Weekly, like this one.

Microsoft’s new open source direction for C# and .NET (and native compilation too): Anders Hejlsberg explains

At the April 2014 Build conference Microsoft made some far-reaching announcements about its .NET platform and the C# programming language. Yes, there was talk of C# 6.0, the next version, but the real changes are more profound. Specifically:

C# and Visual Basic have a new compiler, itself written in C#, code-named Roslyn. Roslyn is not just a new compiler; Microsoft now calls it the “.NET Compiler Platform”.

There is a new commitment to open source for .NET projects. Microsoft formed the .NET Foundation to oversee existing open source projects, including ASP.NET, Entity Framework, the Azure .NET SDK, and now Roslyn as well. “When it comes to development projects we are going to operate from the premise that open source is the default. Unless there are reasons why it does not work,” said C# lead architect Anders Hejlsberg.

Note that open source does not mean chaos. It does mean that you can fork the project if you want – the Roslyn license is Apache 2.0 – but getting Microsoft to accept new features you have contributed will not be trivial. Hejlsberg makes the point that language features are easy to add, but impossible to take away, so extreme care is necessary.

Microsoft is also supporting cross-platform C# to a greater extent than it has done in the past. The most obvious sign of this is its cooperation with Xamarin, which provides C# compilers for iOS and Android. Xamarin’s Miguel de Icaza got a top billing at Build, and is also involved in the .NET Foundation.

There is more though. The idea of standardised C# is re-emerging:

“The last ECMA standard was C# 2.0. There wasn’t a lot of demand for it, but that demand has recently risen and we have re-engaged with the ECMA community to produce a standard for C# 5.0,” said Hejlsberg.

This bears some unpacking. Why was there little demand for ECMA C#? Partly, I would guess, from the assumption that C# was firmly in Microsoft’s grip, with Java the obvious choice for cross-platform development. The main interest was from the Mono folk (Miguel de Icaza again), who implemented .NET for Linux and the Mac with some success, but nothing to disturb Java’s momentum.

The focus now though is on mobile, and interest in C# is stronger, mainly from Microsoft-platform developers reaching beyond Windows. There is also Unity, which uses C# as a scripting language for developing games for multiple platforms, including iOS, Android, Windows, Mac, Linux, Xbox, PS3 and Wii – with PS4 support coming very soon.

Microsoft has now consciously embraced multiple platforms, as evidenced by Office for iOS as well as the Xamarin collaboration. “We want C# developers to build great applications across different form factors and different device platforms,” said Jay Schmelzer, Director of Program Management for Visual Studio.

You might observe that this position has been forced on the company by the rise of iOS and Android, a view which likely has some merit, but the impact it has on C# and .NET itself is still real.

I asked Hejlsberg to unpack the difference between the Roslyn project and C# 6.0, bearing in mind that both are covered on the Roslyn open source site; you can see the current status of C# 6.0 and the next Visual Basic here.

Roslyn is the name for the project that encompasses the new C# compiler and the new VB compiler and the new language services that they share. C# 6.0 is the name of the next version of the C# language, which will have a specification and which will have an implementation. We are implementing C# 6.0 on the Roslyn platform. We are not going to continue to evolve our old C++ C# compiler – the C# compiler was originally written in C++ and has been evolved up through C# 5.0. That is where we are going to retire that code base, and going forward versions of C# will be built on Roslyn and therefore will be built open source. Unlike previously where, boom, C# came down from the sky with a set of features, it is going to happen more organically now: people will submit pull requests, open up issues, and you will see us work on these features. You will see them from inception to fruition.

“The C# team, the Roslyn team, the VB team, their day to day workplace now is the open source site. That is where they check in code. It is a community in the making.

Even that is not all. At Build, Microsoft also announced .NET Native, which is a native compiler for C# and Visual Basic, now in preview for x64 Store apps. What is the difference between .NET Native and the existing NGen native compiler for .NET? Over to Hejlsberg:

NGen is the native feature that we currently support. NGen is really, “I’m going to JIT [Just in time compile] your code and then snapshot all the data structures and dump them in a file so that I can quickly rebuild that file later when you run this particular application”. But it is the same code generator and all the same features, and JIT is still there. NGen is really a way to pre-cache the JIT output and therefore get better performance, but it adds to the size of your app because you still have all the assemblies and metadata and then the NGen image as well.

.NET Native is a completely different approach. Instead of the JIT we use the backend from the C++ compiler. You can think of it as a linker that takes as input assemblies, and as output produces a PE [Portable Executable] executable. In the process this linker or code generator will analyse all the IL [Intermediate Language] that goes into the application and it will apply a thing known as tree-shaking where it eliminates all of the code that will never execute based on known execution roots.

In other words, the public static main of your program and also whatever pieces of your app that you designate as reflectable, they also become roots. Based on that we produce an optimised exe, and into that exe we link the pieces of the framework that you are referencing. We link in a garbage collector [GC], and it looks to the operating system just like an exe. When you run it, it runs a local GC in there and it is as efficient really as C++ code.

There are some restrictions associated with .NET Native, mainly that you can’t just willy-nilly reflect on the whole world. You can’t just generate new code and ask for that to be jitted because there may not be a JIT compiler. We are considering allowing you to link in a JIT compiler, but there are certain execution environments which don’t permit jitting, like Xbox. If you use reflection in your app you have to tell us what to keep reflectable, because otherwise we will optimise it away.
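In the current preview that declaration is made in a runtime directives file (rd.xml) included in the project. The default template keeps the application’s own assemblies fully reflectable, roughly as follows (quoted from memory of the preview, so treat as illustrative):

   <Directives xmlns="http://schemas.microsoft.com/netfx/2013/01/metadata">
     <Application>
       <Assembly Name="*Application*" Dynamic="Required All" />
     </Application>
   </Directives>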

According to Schmelzer:

The preview out today is scoped to Store apps, x64 and ARM. We haven’t run into any technical limitation that shows it can’t be done across the breadth; it is just a matter of request and need.

Open source, native code compilation, and an innovative compiler: it adds up to huge changes for C# and .NET, positive ones as far as I can tell.

The Xamarin connection is intriguing though. Developers in general admire the technology as far as I can tell, but it is expensive, and paying out for a Xamarin subscription on top of perhaps an MSDN subscription for Visual Studio is too much for some smaller organisations and does not encourage experimentation. Might Microsoft acquire Xamarin and build Visual Studio into an IDE targeting all the major mobile platforms, but with special hooks to Azure-hosted services?

That prospect makes sense to me, though it would be a shame if the energetic Xamarin culture became bogged down in big-company bureaucracy. Currently though: no news to report.

Microsoft Build 2014: what happened

It’s curious. Microsoft’s new CEO Satya Nadella has been in place for only a month which means that almost everything announced at Build, Microsoft’s developer conference which took place last week in San Francisco, must have been set before he was appointed; yet there was a sense of “all things new” at the event, as if he had overseen a wave of changes.

The wave began the previous week, with the simultaneous announcement and delivery of Office for iPad. The significance of this is threefold:

  • It demonstrated Microsoft’s decision to give first-class support to mobile platforms other than Windows
  • It demonstrated that Office can be redesigned to work nicely on a tablet
  • The quality of the product exceeded expectations, showing that in the right circumstances Microsoft can do excellent non-Windows software

Next came Build itself. It was a tale of two keynotes. The first was all about Windows client – both Phone and PC. The core news is the arrival of the Windows Runtime (WinRT, the engine behind Metro/Store apps) on Windows Phone 8.1. This means that WinRT is now the runtime that developers should target for apps that run across phone and desktop – and even, we were shown, Xbox One, which will support WinRT apps written in HTML and WinJS (Microsoft’s JavaScript library for Windows apps).

In support of this, Microsoft announced a new Universal App project for Visual Studio, which lets you share both visual and non-visual code across multiple targets. How much is shared is a developer choice.

There is more. A Universal App is now (kind of) a desktop app as well as a Store app, since in a future free update to Windows 8 it will run on the desktop within a window, as well as appearing in the Start menu on the desktop. We were even shown this; apparently it is a mock-up. This was the biggest surprise at Build.

What did Executive VP Terry Myerson say about this? Here is the exact quote:

We are going all in with this desktop experience, to make sure your applications can be accessed and loved by people that love the Windows desktop. We’re going to enable your Universal Windows applications to run in a window. We’re going to enable your users to find, discover and run your Windows applications with the new Start menu. We have Live Tiles coming together with the familiar experience customers are looking for to start and run their applications and we’ll be making this available to all Windows 8.1 users as an update. I think there will be a lot of happy people out there.

This is significant. When Myerson says, “we are going all in with this desktop experience”, he does not mean backtracking on Windows Store apps, to return to desktop windows apps (Win32 or WPF) as the future of Windows development. Rather, he means Windows Store apps integrated into the desktop.

There is a further twist to this. Windows Store apps are sandboxed and cannot communicate with each other or with the operating system other than via carefully designed and secured paths. This is in general a good thing, but restrictive for businesses designing line of business apps. It also means that legacy code cannot be carried over into a Store app, other than by full porting.

In the just-released Windows 8.1 Update this has changed. Side-loaded apps (in other words, not deployed from the Windows Store) can now escape the sandbox thanks to Brokered Windows Runtime Components. There are some limitations (32-bit only on the desktop side, for example) but this will make it possible to implement business applications as Store apps even if they need to interact with existing desktop applications or services.

There is still a huge blocker to Store apps from a business perspective, which is that you need Windows 8. Still, my guess is that once the update with the restored Start menu appears, most of the objections to Windows 8 will melt away.

We also saw Office for the Windows Runtime, which will run on both Phone and PC. It is written, I discovered later, in XAML, DirectX and C++ (“Blazingly fast”, we were told). Corporate VP Kirk Koenigsbauer introduced a preview of this, or at least PowerPoint.

No detail yet, and several references to “early code” suggest to me that this is a year or more away from full release (giving Office on iPad a big head start); but it will come. Koenigsbauer did not call it cut-down; in fact, it was cited as proof that WinRT is suitable for large-scale apps, so I would expect something more complete than Office on iPad; yet it is hard to imagine things like the VBA macro language appearing here in its current form (VBA is based on the ancient Visual Basic 6.0 runtime), so there will be some major differences.

We also saw Windows Phone 8.1, including the Cortana virtual personal assistant, who responds to voice input. For me other things in Windows Phone 8.1 are more significant, including a new swipe-style keyboard for fast text input, VPN, S/MIME secure email, and a new notification centre. Unlike touch Office, Windows Phone 8.1 is coming soon; Nokia’s Stephen Elop (soon to be in charge of Windows Phone at Microsoft) said that the first 8.1 Lumia devices could be out from May, depending on territory, and that all Lumia Windows Phone 8 devices will get the update in the summer.

On to day two, which was Cloud day, though we also got significant .NET developer news.

Executive VP Scott Guthrie introduced a new portal for Microsoft Azure, the cloud platform. This is not just a new look, but integrates with Visual Studio online so you can easily view and edit the code and track team projects. There are also new monitoring and analytics features so you can check page views, page load time, browser usage and more. Guthrie also announced integration with Puppet and Chef for deployment automation.

Language designer Anders Hejlsberg also came on stage. He announced the release version of TypeScript, a “typed superset of JavaScript” which is suitable for large applications. He also announced a new preview release of the compiler project code-named Roslyn, and on stage pushed the button that published it as open source. What is Roslyn? It is the next-generation compiler for C# and VB, and is itself written in C#. This enables compiler and workspace APIs, which in turn enable rich editor features:

The transition to compilers as platforms dramatically lowers the barrier to entry for creating code focused tools and applications. It creates many opportunities for innovation in areas such as meta-programming, code generation and transformation, interactive use of the C# and VB languages, and embedding of C# and VB in domain specific languages.
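To make that concrete, here is a minimal sketch of the kind of tool the new APIs allow – a console app, referencing the Roslyn packages, that parses a fragment of C# and lists the methods it declares:

   using System;
   using System.Linq;
   using Microsoft.CodeAnalysis.CSharp;
   using Microsoft.CodeAnalysis.CSharp.Syntax;

   class Program
   {
       static void Main()
       {
           // Parse source text into a syntax tree, then walk the tree
           var tree = CSharpSyntaxTree.ParseText(
               "class C { void M() { } int N() { return 1; } }");
           foreach (var method in tree.GetRoot()
                                      .DescendantNodes()
                                      .OfType<MethodDeclarationSyntax>())
               Console.WriteLine(method.Identifier.Text);
       }
   }

Analysers, refactorings and code generators all build on these same compiler and workspace APIs, which is the point of “compilers as platforms”.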

Roslyn will be fully released in the next version of Visual Studio, for which we do not yet have a date. Roslyn will be delivered alongside C# 6.0.

There is also a new .NET Foundation which will oversee open source projects for .NET, with backing from folk including Xamarin’s Miguel de Icaza and Umbraco’s Niels Hartvig. It is all a bit vague at the moment:

In the upcoming months, the .NET Foundation will be inviting many companies and community leaders to join the foundation, including its Board of Directors and will then finalize its operational details, including governance models for its open source initiatives, membership structure and industry and community engagement.

Another significant event in the .NET story is the arrival of true native code compilation for .NET, although currently only for 64-bit Store apps. More on this soon.

A couple of events during Build caught my eye. One was de Icaza’s session on using C# to build for iOS and Android, not so much for the content itself (though there was nothing wrong with it), but rather for the huge attendance it drew.

The session was moved to the Build keynote room, and while there were spare seats, the room felt well filled. This speaks loudly about the importance of those platforms even to Microsoft platform developers, as well as of Microsoft’s support of Xamarin’s work.

Another was the appearance of John Gruber, author of the Daring Fireball blog and an Apple enthusiast. He appeared in a video during the keynote, explaining how a project in which he is involved uses Azure for back-end services, and then in person at another session, interviewing journalist Ed Bott about what is changing at Microsoft.

Gruber seems to me representative of a group of smart observers who have not in general been impressed with Microsoft’s endeavours over the past few years; but he for one is now more positive on the subject. Windows Phone is much better than its market share suggests, he said. This alongside Azure and a new openness to supporting third-party clients has made him look more favourably on the company.

My summary is this. On the Windows client side, Microsoft is taking its unpopular Windows release and its minority Phone platform and making them better and more compatible with each other, making sense of the client platform in a way that should result in growth of the app ecosystem both on Phone and PC/Tablet. On the cloud side, the company is building Azure and Office 365 (two platforms united by Azure Active Directory) into a one-stop platform that is increasingly compelling. The result was a conference and a direction that was largely welcomed by those in attendance, as far as I could tell.

That does not mean that the PC will stop declining, or that iOS and Android will become less dominant in mobile. There is progress though, and more clarity about the direction of Microsoft’s platform than we have seen for some years.

For the official news from Build, see the Build Newsroom.

Visual C++ will implement all of C++ 11 and C++ 14, some of C99, says Microsoft

Microsoft’s Herb Sutter spoke at Microsoft Build in San Francisco on the future of C++.

Microsoft has been criticised for being slow to implement all the features of ISO C++ 11. Sutter says most features are now included in the public preview of Visual Studio 2013 – which has a “Go Live” license so you can use it in production – including the oft-requested variadic templates. The full list:

  • Explicit conversion operators
  • Raw string literals
  • Function template default arguments
  • Delegating constructors
  • Uniform initialization and initializer_lists
  • Variadic templates

More features are coming in the RTM (final release) of Visual Studio 2013 later this year:

  • Non-static member initializers
  • =default
  • =delete
  • ‘using’ aliases

A technical preview will then follow; Sutter listed possible features for it, of which a subset will be implemented. Full conformance will follow at an unspecified time.

Microsoft is also promising a full implementation of C++ 14, the next update to the standard, even though the exact specification is not yet fully agreed. Some C++ 14 features will be implemented ahead of C++ 11 features, if they are considered to add high value.

Two other points of interest.

Async/await (familiar to C# developers) will be implemented in the post-RTM CTP because it is such a useful feature for Windows Runtime app developers, even though it is not part of the ISO standard.

Finally, Microsoft will also implement several C99 features in the RTM of Visual Studio 2013:

  • Variable declarations
  • _Bool
  • Compound literals
  • Designated initializers

The reason for implementing these is that they are needed to compile popular open source libraries like FFmpeg.

I asked Sutter why Microsoft is not planning full conformance to C99. He said it was a matter of priorities and that work on C++ 11 and C++ 14 was more important. If there are particular additional features of C99 developers would like to see implemented, contacting Sutter with requests and rationale might eventually yield results.
