Category Archives: professional

HackerRank survey shows programming divides in more ways than one

Developer recruitment company HackerRank has published a survey of developer skills. The first place I look in any survey is who took part, and how many:

HackerRank conducted a study of developers to identify trends in developer education, skills and hiring practices. A total of 39,441 professional and student developers completed the online survey from October 16 to November 1, 2017. The survey was hosted by SurveyMonkey and HackerRank recruited respondents via email from their community of 3.2 million members and through social media sites.

I would like to see the professional and student responses shown separately. The world of work and the world of learning are different. This statement may also be incomplete, since several of the questions analyse what employers want, which suggests another source of data (not difficult to find for a recruitment company).

It is still a good read. It is notable for example that the youngest generation is learning to code later in life than those who are now over 35:

[Chart: the age at which respondents learned to code, by age group]

I am not sure how to interpret these figures, but can think of some factors. One is that the amount of stuff you can do with a computer without coding has risen. In the earliest days when computing became affordable for anyone (late seventies/early eighties), you could not do much without coding. This was the era of type-in listings for kids wanting to play games. That soon changed, but coding remained important to getting things done if you wanted to make a business database useful, or create a website. Today though you can do all kinds of business, leisure and internet computing without needing to see code, so the incentive to learn is lower. It has become a more specialist skill. It remains valuable though, so older people have reason to be grateful.

How do people learn to code? The most popular resource is Stack Overflow, followed by YouTube, with books coming in third. In truth the most popular resource must be Google search. Credit to Stack Overflow though: like Wikipedia, it offers a good browsing experience at a time when the web has become increasingly unpleasant to use, infected by pop-up surveys, autoplay videos and intrusive advertising, not to mention the actual malware out there.

No surprises in language popularity, though oddly the survey does not tell us directly which languages are most used or best known by the respondents. The most in-demand languages are apparently:

1. JavaScript
2. Java
3. Python
4. C++
5. C
6. C#
7. PHP
8. Ruby
9. Go
10. Swift

If you ask what languages developers plan to learn next, Go, Python and Scala head the list. And then there is a fascinating chart showing which languages developers prefer grouped by age. Swift, apparently, is loved by 75% of those over 55, but only by 15% of those under 25, the opposite of what I would expect (though I don’t know if this is a percentage of those who use the language, or includes those who do not know it at all).

Frameworks are another notable topic. Everyone loves Node.js; but two of the frameworks on offer are “.NET Core” and “ASP”. This is odd, since .NET Core is not really a framework, and ASP normally refers to the ancient “Active Server Pages” framework which nobody uses any longer; ASP.NET runs on .NET Core, so it is not an alternative to it.

This may be a clue that the HackerRank company or community is not well attuned to the Microsoft platform. That itself is of interest, but makes me question the validity of the survey results in that area.

C# and .NET: good news and bad as Python rises

Two pieces of .NET news recently:

Microsoft has published a .NET Core 2.1 roadmap and says:

We intend to start shipping .NET Core 2.1 previews on a monthly basis starting this month, leading to a final release in the first half of 2018.

.NET Core is the cross-platform, open source implementation of the .NET Framework. It provides a future for C# and .NET even if Windows declines.

Then again, StackOverflow has just published a report on the most sought-after programming languages in the UK and Ireland, based on the tags on job advertisements on its site. C# has declined to fourth place, now below Python and with around half the demand of JavaScript:

[Chart: most sought-after programming languages in UK and Ireland job advertisements]

To be fair, this is more about increased demand for Python, probably driven by interest in AI, than about decline in C#. If you look at traffic on the StackOverflow site, C# is steady, but Python is growing fast:

[Chart: Stack Overflow question traffic for C# and Python over time]

The point that interests me, though, is the extent to which Microsoft can establish .NET Core beyond the Microsoft-platform community. Personally I like C# and would like to see it have a strong future.

There is plenty of goodness in .NET Core. Performance seems to be better in many cases, and cross-platform support is a big advantage.

That said, there is plenty of confusion too. Microsoft has three major implementations of .NET: the .NET Framework for Windows, Xamarin/Mono for cross-platform, and .NET Core for, umm, cross-platform. If you want cross-platform ASP.NET you will use .NET Core. If you want cross-platform Windows/iOS/macOS/Android, then it’s Xamarin/Mono.

The official line is that by targeting a specification (a version of .NET Standard), you can write cross-platform code irrespective of the implementation. It’s still rather opaque:

The specification is not singular, but an incrementally growing and linearly versioned set of APIs. The first version of the standard establishes a baseline set of APIs. Subsequent versions add APIs and inherit APIs defined by previous versions. There is no established provision for removing APIs from the standard.

.NET Standard is not specific to any one .NET implementation, nor does it match the versioning scheme of any of those runtimes.

APIs added to any of the implementations (such as, .NET Framework, .NET Core and Mono) can be considered as candidates to add to the specification, particularly if they are thought to be fundamental in nature.

Microsoft also says that plenty of code is shared between the various implementations. True, but it still strikes me that having both Xamarin/Mono and .NET Core is one cross-platform implementation too many.

Server shipments decline as customers float towards cloud

Gartner reports that worldwide server shipments have declined by 4.2% in the first quarter of 2017.

Not a surprise considering the growth in cloud adoption, but there are several points of interest.

One is that although Hewlett Packard Enterprise (HPE) is still ahead in revenue (over $3 billion and a 24% market share), Dell EMC is catching up: still number two with a 19% share, but posting growth of 4.5% versus an 8.7% decline for HPE.

In unit shipments, Dell EMC is now fractionally ahead, with 17.9% market share and growth of 0.5% versus HPE at 16.8% and decline of 16.7%.

Clearly Dell is doing something right where HPE is not, possibly through synergy with its acquisition of storage vendor EMC (announced October 2015, completed September 2016).

The larger picture though is not great for server vendors. Businesses are buying fewer servers since cloud-hosted servers or services are a good alternative. For example, SMBs who in the past might have run Exchange are tending to migrate to Office 365 or perhaps G Suite (Google apps). Maybe there is still a local server for Active Directory and file server duties, or maybe just a NAS (Network Attached Storage).

It follows that the big cloud providers are buying more servers, but such is their size that they do not need to buy from Dell or HPE; they can go directly to ODMs (Original Design Manufacturers) and tailor the hardware to their exact needs.

Does that mean you should think twice before buying new servers? Well, it is always a good idea to think twice, but it is worth noting that going cloud is not always the best option. Local servers can be much cheaper than cloud VMs as well as giving you complete control over your environment. Doing the sums is not easy and there are plenty of “it depends”, but it is wrong to assume that cloud is always the right answer.
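To illustrate the kind of sums involved, here is a deliberately crude back-of-envelope comparison in Python. Every figure in it is a placeholder assumption for illustration only, not a real price, and it ignores plenty (licensing, admin time, redundancy, bandwidth) that a genuine comparison would have to include, which is rather the point.

# Back-of-envelope comparison of a cloud VM versus an on-premises server.
# All figures below are illustrative assumptions, not real prices.

CLOUD_VM_PER_MONTH = 150.0       # assumed monthly cost of a comparable cloud VM
SERVER_PURCHASE = 4000.0         # assumed up-front cost of a local server
SERVER_LIFETIME_MONTHS = 48      # assumed useful life of four years
SERVER_RUNNING_PER_MONTH = 60.0  # assumed power, hosting and maintenance per month

def monthly_cost_on_prem():
    # Amortise the purchase over its lifetime, then add running costs
    return SERVER_PURCHASE / SERVER_LIFETIME_MONTHS + SERVER_RUNNING_PER_MONTH

def monthly_cost_cloud():
    return CLOUD_VM_PER_MONTH

if __name__ == "__main__":
    print(f"On-premises: ~{monthly_cost_on_prem():.0f} per month")
    print(f"Cloud VM:    ~{monthly_cost_cloud():.0f} per month")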

How to use Vi: a minimalist guide

Occasionally you may find Vi is the only editor available. To use it to edit somefile.txt type:

vi somefile.txt

Now type:

i

This puts you into insert mode and you can type. Type something.

When you are done, press the Esc key to go back to command mode. Type:

ZZ

(must be upper case) to save the file and quit.

If you don’t want to save, type:

:q!

to quit without saving.

At this point I am tempted to add all sorts of other tips, like / to search, but then this would not be a minimalist guide!

QCon London 2017: IoT insecurity, serverless computing, predicting technical debt, and why .NET Core depends on a 36,000 line C++ file

I’m at the QCon event in London, a multi-vendor conference aimed primarily at enterprise developers and architects.

[Image: Adam Tornhill speaks at QCon London 2017]

A few notes on day one. Alasdair Allan gave a keynote on security and the internet of things; it was an entertaining and disturbing résumé of all that is wrong with the mad rush to connect everything to the internet, though it was short on answers. Our culture has to change so that organisations such as hotels, toy manufacturers, appliance vendors and even makers of medical equipment take security seriously, but it is not clear how this will come about unless so many bad things happen that customers start to insist on it.

Michael Feathers spoke on strategic code deletion, part of a track on “Dark code: the legacy/tech debt dilemma.” This was an excellent session; code is added to projects more often than it is removed, and lack of hygiene in this regard carries risks for security, reliability and performance. But discovering which code is safe to remove is not always trivial, and Feathers explored some of the nuances and suggested some techniques.

Steve Faulkner gave a session on serverless JavaScript, or more specifically, using Amazon Web Services (AWS) Lambda and API Gateway. Faulkner said that the API Gateway was the piece that made Lambda viable for them; he is Director of Platform Engineering at Bustle, a busy content site based in the USA. In a nutshell, moving from EC2 VMs to Lambda has yielded both financial savings and easier management. The only downside is performance; each call to a Lambda function takes a minimum of 100ms whereas the same function on a VM might take 20ms. In the end it is not critical as performance remains satisfactory.
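Faulkner’s examples were in JavaScript; purely to illustrate the shape of this setup, here is a minimal Lambda handler sketched in Python for API Gateway’s proxy integration. The event and response shapes follow AWS’s documented convention; the greeting logic is invented for illustration.

import json

# Minimal AWS Lambda handler for use behind API Gateway (proxy integration).
def handler(event, context):
    # Query string parameters arrive as a dict (or None) in proxy integration
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # API Gateway expects a statusCode/headers/body structure back,
    # with the body as a string
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }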

Faulkner said that AWS is ahead of its competitors (Microsoft, Google and IBM were mentioned) but when pressed said that both Microsoft and Google offered strong alternatives. Microsoft’s Azure Functions are spoilt by the need to specify a maximum scale, rather than scaling automatically, but its routing solution is in some ways ahead of AWS, he said. Google’s Functions will be great when out of beta.

Adam Tornhill spoke on A Crystal Ball to prioritise Technical Debt, another session in the dark code track. This was my favourite of the day. Tornhill presented a relatively simple way to discover what code you should refactor now in order to avoid future issues. His method is based on looking for files with many lines of code (a way of measuring complexity) and many commits (suggesting high importance and activity), the “hotspots” in your projects. For more detail and some utilities see Tornhill’s blog.
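Tornhill’s own tools go much further, but the core idea can be sketched in a few lines of Python run against a git repository: score each file by multiplying how often it has been committed by how big it currently is. This is my own rough approximation, not his code.

import os
import subprocess
from collections import Counter

# Rough hotspot sketch: rank files by (commits touching the file) multiplied
# by (current line count), approximating Tornhill's combination of complexity
# and change frequency.

def commit_counts(repo="."):
    # One file name per line, one block per commit
    log = subprocess.run(
        ["git", "-C", repo, "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line.strip() for line in log.splitlines() if line.strip())

def line_count(path):
    try:
        with open(path, errors="ignore") as f:
            return sum(1 for _ in f)
    except OSError:
        return 0  # file may have been deleted or renamed since those commits

def hotspots(repo=".", top=10):
    scored = []
    for path, commits in commit_counts(repo).items():
        lines = line_count(os.path.join(repo, path))
        scored.append((commits * lines, commits, lines, path))
    return sorted(scored, reverse=True)[:top]

if __name__ == "__main__":
    for score, commits, lines, path in hotspots():
        print(f"{score:10d}  {commits:4d} commits  {lines:6d} lines  {path}")

Run from the root of a repository, it prints the ten highest-scoring files; the real tools weight things more carefully, but even this crude score tends to surface the gc.cpp-style files.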

Why do we end up with bad or risky code in our software? Tornhill said that developers often mistake organisational problems for technical problems and try unsuccessfully to fix them with tools.

He also mentioned an example of high-risk code: the file gc.cpp, which performs garbage collection in .NET Core, the next generation of Microsoft’s .NET Framework. This file is over 36,000 lines and should be refactored. There is a discussion on the subject here, and it exactly bears out Tornhill’s point. A developer proposed a refactoring of the file back in March 2015. Microsoft’s Karel Zikmund defended the status quo:

Why it is this way? … Partly historical reasons (it is this way since the start). Partly because devs working on it didn’t feel the urge to refactor it. Partly because splitting of gc.cpp is non-trivial and risky and because it does not bring too big value (ramp up in the code base can be gained also in the combination of reading BOTR and debugging the code). Why it is staying this way? … Cost/benefit/risk ratio is IMO not in favor of a change here.

Few additional thoughts:
Am I happy that there is only 1 large file? No, but it doesn’t hurt me much either.
Do I see the disadvantages of large file? Yes, but I don’t think they are huge. More like minor annoyances with easy workarounds.
And to turn it around: Do you see the risk of any changes here? Do you see the cost of extra careful code reviews to mitigate the risk?

Strictly technically, we truly believe this is a formatting change. If it was simple to split it up and if it would be low risk and if it would be very easy to review, it might be worth the ‘minor’ improvements mentioned above … but I don’t see that combo happening (not on a noticeable scale in gc.cpp).
On a personal note: I also trust CLR team that if all these three things were true, the refactoring would have happened long time ago.

Note that some of this code goes back beyond .NET Core to the .NET Framework, the “historical reasons” that Zikmund mentions. We can see that the factors preventing change are as much organisational as technical.

Finally I attended a session on Microsoft’s Cognitive Services. Note this was in the “Sponsored solution track”. Microsoft also has a stand here focused on its Cognitive Services.

There is not much Microsoft Platform content at QCon and it seems under-represented, though many of the sessions are applicable to developers on any platform. I am not sure of all the reasons for this; there used to be an Advanced .NET track at QCon. It does reflect some overall development trends as well as the history and evolution of QCon itself. That said, there is a session on SQL Server on Linux so the company is not completely invisible here.

As for the session, it was a reasonable overview of Microsoft’s expanding Cognitive Services APIs, which cover things like image recognition, speech recognition and more. I would have liked more depth and would have preferred to hear from a practitioner, in other words, “we built an application on Cognitive Services and this is what we learned.” I am not altogether clear why the company is pushing this so hard, except that it is a driver for developers to use Azure. I asked how developers should deal with the problem of uncertainty*, in other words, that Cognitive Services does not deliver absolute results but rather draws conclusions with a confidence score – e.g. it might be pretty sure that an image contains a human face, fairly sure that it is male, and somewhat confident that the age of the person is mid-forties. When the speaker demoed speech recognition it went pretty well, except that “Start” was transcribed as “Stop.” This stuff is difficult.
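The practical consequence is that calling code has to work with confidence scores rather than yes/no answers. Here is a small Python sketch of that thresholding logic; the response structure is invented for illustration and is not the real Cognitive Services schema.

# Illustrative handling of a probabilistic vision-API result.
# The response below is a made-up example, not the actual Cognitive Services
# schema; the point is that the caller must choose thresholds and fallbacks
# rather than trusting a single answer.

example_response = {
    "tags": [
        {"name": "person", "confidence": 0.97},
        {"name": "male", "confidence": 0.81},
        {"name": "age:45", "confidence": 0.55},
    ]
}

def confident_tags(response, threshold=0.8):
    """Return only the tags the service is reasonably sure about."""
    return [t["name"] for t in response["tags"] if t["confidence"] >= threshold]

def describe(response):
    sure = confident_tags(response)
    unsure = [t["name"] for t in response["tags"] if t["name"] not in sure]
    return f"Confident about: {sure}; treat as hints only: {unsure}"

print(describe(example_response))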

Looking forward now to Day Two: Containers, Machine Learning, and more.

*More concisely expressed as “Systems are moving from the deterministic to the probabilistic” by Stephen Whitworth, who is now speaking on Machine Learning.

Microsoft improves Windows Subsystem for Linux: launch Windows apps from Linux and vice versa

The Windows 10 Anniversary Update introduced a major new feature: a subsystem for Linux. Microsoft marketing execs call this Bash on Windows; Ubuntu calls it Ubuntu on Windows; but Windows Subsystem for Linux is the most accurate description. You run a Linux binary and the subsystem redirects system calls so that it behaves like Linux.


The first implementation (which is designated Beta) has an obvious limitation. Linux works great, and Windows works great, but they do not interoperate, other than via the file system or networking. This means you cannot do what I used to do in the days of Services for Unix: type ls at the command prompt to get a directory listing.

That is now changing. Version 14951 introduces interop so that you can launch Windows executables from the Bash shell and vice versa. This is particularly helpful since the subsystem does not support GUI applications. One of the obvious use cases is launching a GUI editor from Bash, such as Visual Studio Code or Notepad++.

The nitty-gritty on how it works is here.


Limitations? A few. Environment variables are not shared so an executable that is on the Windows PATH may not be on the Linux PATH. The executable also needs to be on a filesystem compatible with DrvFs, which means NTFS or ReFS, not FAT32 or exFAT.

This is good stuff though. If you work on Windows, but love Linux utilities like grep, now you can use them seamlessly from Windows. And if you are developing Linux applications with say PHP or Node.js, now you can develop in the Linux environment but use Windows editors.
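For example, assuming you are working in a directory under /mnt/c that both worlds can see, you can open a file in Notepad from the Bash shell, or call a Linux utility from a Windows command prompt:

/mnt/c/Windows/System32/notepad.exe notes.txt

bash -c "ls -la"

The full Windows path avoids the PATH issue mentioned above; your own editor of choice will live somewhere else, so adjust accordingly.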

Note that this is all still in preview and I am not aware of an announced date for the first non-beta release.

Microsoft at Ignite: Building on Office 365, getting more like Google, Adobe mysteries and FPGA magic

I’m just back from Microsoft’s Ignite event in Atlanta, Georgia, where around 23,000 attendees mostly in IT admin roles assembled to learn about the company’s platform.

There are always many different aspects to this type of event. The keynotes (there were two) are for news and marketing hype, while there is lots of solid technical content in the sessions, of which of course you can only attend a small fraction. There was also an impressive Expo at Ignite, well supported both by third parties and by Microsoft, though getting to it was a long walk and I fear some will never find it. If you go to one of these events, I recommend the Microsoft stands because there are normally some core team members hanging around each one and you can get excellent answers to questions as well as a chance to give them some feedback.

The high level story from Ignite is that the company is doing OK. The event was sold out and Corporate VP Brad Anderson assured me that many more tickets could have been sold, had the venue been bigger. The vibe was positive and it looks like Microsoft’s cloud transition is working, despite having to compete with Amazon on IaaS (Infrastructure as a service) and with Google on productivity and collaboration.

My theory here is that Microsoft’s cloud advantage is based on Office 365, of which the core product is hosted Exchange and the Office suite of applications licensed by subscription. The dominance of Exchange in business made the switch to Office 365 the obvious solution for many companies; as I noted in 2011, the reality is that many organisations are not ready to give up Word and Excel, Outlook and Active Directory. The move away from on-premises Exchange is also compelling, since running your own mail server is no fun, and at the small business end Microsoft has made it an expensive option following the demise of Small Business Server. Microsoft has also made Office 365 the best value option for businesses licensing desktop Office; in fact, I spoke to one attendee who is purchasing a large volume of Office 365 licenses purely for this reason, while still running Exchange on-premises. Office 365 lets users install Office on up to 5 PCs, Macs and mobile devices.

Office 365 is only the starting point of course. Once users are on Office 365 they are also on Azure Active Directory, which becomes a hugely useful single sign-on for cloud applications. Microsoft is now building a sophisticated security story around Azure AD. The company can also take advantage of the Office 365 customer base to sell related cloud services such as Dynamics CRM online. Integrating with Office 365 and/or Azure AD has also become a great opportunity for developers. If I had any kind of cloud-delivered business application, I would be working hard to get it into the Office Store and try to win a place on the newly refreshed Office App Launcher.


Office 365 users have had to put up with a certain amount of pain, mainly around the interaction between SharePoint online/OneDrive for Business and their local PC. There are signs that this is improving, and a key announcement made at Ignite by Jeff Teper is that SharePoint (which includes Team Sites) will be supported by the new generation sync client, which I hope means goodbye to the ever-problematic Groove client and a bit less confusion over competing OneDrive icons in the notification area.

A quick shout-out too for Office 365 Groups, despite the confusing name (how many different kinds of groups are there in Office 365?). Groups are ad-hoc collections of users which you set up for a project, department or role. Groups then have an automatic email distribution list, shared inbox, calendar, file library, OneNote notebook (a kind of wiki) and a planning tool. Nothing you could not set up before, but packaged in a way that is easy to grasp. I was told that usage is soaring, which does not surprise me.

I do not mean to diminish the importance of Azure, the cloud platform. Despite a few embarrassing outages, Microsoft has evolved the features of the service rapidly as well as building the necessary global infrastructure to support it. At Ignite, there were several announcements including new, more powerful virtual machines, IPv6 support, general availability of Azure DNS, faster networking up to an amazing 25 Gbps powered by FPGAs, and the public preview of a Web Application Firewall; the details are here.

My overall take on Azure? Microsoft has the physical infrastructure to compete with AWS, though Amazon’s service is amazing, reliable and I suspect can be cheaper bearing in mind Amazon’s clever pricing options and lower price for application services like database management, message queuing, and so on. If you want to run Windows Server and SQL Server in the cloud, Azure will likely be better value. Value is not everything though, and Microsoft has done a great job on making Azure accessible; with a developer hat on I love how easy it is to fire up VMs or deploy web applications via Visual Studio. Microsoft of course is busy building hooks to Azure into its products so that if you have System Center on-premises, for example, you will be constantly pushed towards Azure services (though note that the company has also added support for other public clouds in places).

There are some distinctive features in Microsoft’s cloud platform, not least the forthcoming Azure Stack, private cloud as an appliance.

I put “getting more like Google” in my headline, why is that? A couple of reasons. One is that CEO Satya Nadella focused his keynote on artificial intelligence (AI), which he described as “the ability to reason over large amounts of data and convert that into intelligence,” and then, “How we infuse every application, Cortana, Office 365, Dynamics 365 with intelligence.” He went on to describe Cortana (that personal agent that gets a bit in the way in Windows 10) as “the third run time … it’s what helps mediate the human computer interaction.” Cortana, he added, “knows you deeply. It knows your context, your family, your work. It knows the world. It is unbounded. In other words, it’s about you, it’s not about any one device. It goes wherever you go.”

I have heard this kind of speech before, but from Google’s Eric Schmidt rather than from Microsoft. While on the consumer side Google is better at making this work, there is an opportunity in a business context for Microsoft based on Office 365 and perhaps the forthcoming LinkedIn acquisition; but clearly both companies are going down the track of mining data in order to deliver more helpful and customized experiences.

It is also noticeable that Office 365 is now delivering increasing numbers of features that cannot be replicated on-premises, or that may come to on-premises one day but Office 365 users get them first. Further, Microsoft is putting significant effort into improving the in-browser experience, rather than pushing users towards Windows applications as you might have expected a few years back. It is cloud customers who are now getting the best from Microsoft.

While Microsoft is getting more like Google, I do not mean to say that it is like Google. The business model is different, with Microsoft’s based on paid licenses versus Google’s primarily advertising model. Microsoft straddles cloud and on-premises whereas Google has something close to a pure cloud play – there is Android, but that drives advertising and cloud services rather than being a profit centre in itself. And so on.

There were a couple more notable events during Nadella’s keynote.

[Image: Distinguished Engineer Doug Burger and one of Microsoft’s custom FPGA boards]

One was Distinguished Engineer Doug Burger’s demonstration of the power of FPGA boards which have been added to Azure servers, sitting between the servers and the network so they can operate in part independently from their hosts (see my short interview with Burger here).

During the keynote, he gave what he called a “visual demo” of the impact of these FPGA accelerators on Azure’s processing power. First we saw accelerated image recognition. Then a translation example, using Tolstoy’s War and Peace as a demo:

[Image: translation demo comparing an FPGA-enabled server with a standard server]

The FPGA-enabled server consumed less power but performed the translation 8 times faster. The best was to come though. What about translating the whole of English Wikipedia? “I’ll show you what would happen if we were to throw most of our existing global deployment at it,” said Burger.


“Less than a tenth of a second” was the answer. Looking at that screen showing 1 Exa-op felt like being present at the beginning of a computing revolution. As the Top500 supercomputing site observes, “the fact the Microsoft has essentially built the world’s first exascale computer is quite an achievement.” Exascale is a billion billion operations per second.
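The arithmetic behind that screen is easy to unpack: one exa-op is 10^18 operations per second, so a tenth of a second at that rate gives around 10^17 operations for the job.

# Unpacking the exascale figure shown in the demo
ops_per_second = 1e18     # 1 exa-op: a billion billion operations per second
window_seconds = 0.1      # "less than a tenth of a second"
print(f"{ops_per_second * window_seconds:.0e} operations available")  # 1e+17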

However, did we see Wikipedia translated, or just an animation? Bearing in mind, first, that Burger spoke of “what would happen”; second, that the screen says “Estimated time”; and third, that the design of Azure’s FPGA network (as I understand it) means that utilising it could impact other users of the service (since all network traffic to the hosts goes through these boards), it seems that we saw a projected result and not an actual result – which means we should be sceptical about whether this would actually work as advertised, though it remains amazing.

One more puzzle before I wrap up. Adobe CEO Shantanu Narayen appeared on stage with Nadella, in the morning keynote, to announce that Adobe will make Azure its “preferred cloud.” This appears to include moving Adobe’s core cloud services from Amazon Web Services, where they currently run, to Azure. Narayen:

“we’re thrilled and excited to be announcing that we are going to be delivering all of our clouds, the Adobe Document Cloud, the Marketing Cloud and the Creative Cloud, on Azure, and it’s going to be our preferred way of bringing all of this innovation to market.”

Narayen said that Adobe’s decision was based on Microsoft’s work in machine learning and intelligence. He also looked forward to integrating with Dynamics CRM for “one unified and integrated sales and marketing service.”

This seems to me interesting in all sorts of ways, not only as a coup for Microsoft’s cloud platform versus AWS, but also as a case study in migrating cloud services from one public cloud to another. But what exactly is Adobe doing? I received the following statement from an AWS spokesperson:

“We have a significant, long-term relationship and agreement with Adobe that hasn’t changed. Their customers will want to use AWS, and they’re committed to continuing to make that easy.”

It does seem strange to me that Adobe would want to move such a significant cloud deployment, which as far as I know works well. I am trying to find out more.

Azure Stack on show at Microsoft Ignite

At the Expo here at Microsoft’s Ignite you can see Azure Stack – though behind glass.


Azure Stack is Microsoft’s on-premises edition of Azure, a private cloud in a box. Technical Preview 2 has just been released, with two new services: Azure Queue Storage and Azure Key Vault. You can try it out on a single server just to get a feel for it; the company calls this a “one node proof of concept”.

Azure Stack will be delivered as an appliance, hence the exhibition here. There are boxes from Dell, HP Enterprise and Lenovo on display. General availability is planned for mid-2017 according to the folk on the stand.

There is plenty of power in one of these small racks, but what if there is a fire or some other disaster? Microsoft recommends purchasing at least two, and locating them some miles from one another, so you can set up resilience just as you can between Azure regions.

Incidentally, the Expo at Ignite seems rather quiet; it is not on the way to anything other than itself, and I have to allow 10-15 minutes to walk there from the press room. I imagine the third party exhibitors may be disappointed by the attendance, though I may just have picked a quiet time. There is a huge section with Microsoft stands and this is a great way to meet some of the people on the various teams and get answers to your questions.

Reflections on QCon London 2016 – part one

I attended QCon in London last week. This is a software development conference focused on large-scale projects and with a tradition oriented towards Agile methodology. It is always one of the best events I get to attend, partly because it is vendor-neutral (it is organised by InfoQ), and partly because of the way it is structured. The schedule is divided into tracks, such as “Back to Java” or “Architecting for failure”, each of which has a track leader, and the track leader gets to choose who speaks on their track. This means you get a more diverse range of speakers than is typical; you also tend to hear from practitioners or academics rather than product managers or evangelists.


The 2016 event was well up to standard from my perspective – though bear in mind that with 6 tracks on each day I only got to attend a small fraction of the sessions.

This post is just to mention a few highlights, starting with the opening keynote from Adrian Colyer, who specialised in finding interesting IT-related research papers and writing them up on his blog. He seems to enjoy being contrarian and noted, for example, that you might be doing too much software testing – drawing I guess on this post about the art of testing less without sacrificing quality. The takeaway for me is that it is always worth analysing what you do and trying to avoid the point where the cost exceeds the benefit.

Next up was Gavin Stevenson on “love failure” – I wrote this up on the Reg – there is a perhaps obvious point here that until you break something, you don’t know its limitations.

On Monday evening we got a light-hearted (virtual) look at Babbage’s Analytical Engine (1837) which was never built but was interesting as a mechanical computer, and Ada Lovelace’s attempts to write code for it, thanks to John Graham-Cumming and illustrator Sydney Padua (author of The Thrilling Adventures of Lovelace and Babbage).


Tuesday and the BBC’s Stephen Godwin spoke on Microservices powering BBC iPlayer. This was a compelling talk for several reasons. The BBC is hooked on AWS (Amazon Web Services) apparently and stores 21TB daily into S3 (Simple Storage Service). This includes safety copies. iPlayer was rebuilt in 2013, Godwin told us, and the team of 25 developers achieves 34 live deployments per week on average; clearly the DevOps stuff is working here. Godwin advocates genuinely “micro” services. “How big should a microservice be? For us, about 600 Java statements,” he said.

Martin Thompson spoke on the characteristics of a good software engineer, though oddly the statement that has stayed with me is that an ORM (Object-Relational Mapping) “is the wrong abstraction for a database”, something that chimes with me even though I get the value of ORMs like Microsoft’s Entity Framework for rapid development where database performance is non-critical.

Then came another highlight: Google’s Micah Lemonik on Architecting Google Docs. This talk sadly was not recorded; a touch of paranoia from Google? It was fascinating from a historical perspective: Lemonik was involved in a small company called 2Web Technologies, which developed an Excel-like engine in 2003-4, and he joined Google (which acquired 2Web) in 2005 to work on Google Sheets. The big story here was how Google Sheets became collaborative, so that more than one person could work on a spreadsheet simultaneously. “Google didn’t like it initially,” said Lemonik. “They thought it was too weird.” The team persisted though, thinking about the editing process as “messages being transferred between collaborators” rather than as file updates; and it worked.

You can actually use today’s version in your own projects, with Google’s Realtime API, provided that you are happy to have your stuff on Google Drive.

I particularly enjoyed Lemonik’s question to the audience. Two people are working on a sheet, and one types “6” into a cell. Then the same person overtypes this with “7”. Then the collaborator overtypes the same cell with “8”. Next, the first person presses Ctrl-z for undo. What should be the result?

The audience split neatly into “6”, “7”, and just a few “8” (the rationale for “8” is that undo should only undo your own changes and not touch those made by others).

Google, incidentally, settled on “6”, maintaining a separate undo stack for each user. But there is no right answer.
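Purely as a toy illustration of the behaviour Lemonik described, and emphatically not Google’s actual implementation, here is how a per-user undo stack plays out for that example in Python:

# Toy model of per-user undo in a shared spreadsheet cell.
# Each user keeps their own undo stack of (cell, old_value) pairs;
# undo restores the value that user's last edit replaced, even if
# someone else has edited the cell since.

class SharedSheet:
    def __init__(self):
        self.cells = {}
        self.undo_stacks = {}   # user -> list of (cell, old_value)

    def set(self, user, cell, value):
        old = self.cells.get(cell)
        self.undo_stacks.setdefault(user, []).append((cell, old))
        self.cells[cell] = value

    def undo(self, user):
        stack = self.undo_stacks.get(user, [])
        if stack:
            cell, old = stack.pop()
            self.cells[cell] = old

sheet = SharedSheet()
sheet.set("alice", "A1", 6)
sheet.set("alice", "A1", 7)
sheet.set("bob", "A1", 8)
sheet.undo("alice")           # Alice undoes her "7"
print(sheet.cells["A1"])      # prints 6, matching Google's choice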

Lemonik also discussed the problem of consistency when there are large numbers of contributors. A hard problem. “There have to be bounds to the system in order for it to perform well,” he said. “The biggest takeaway for me in building the system is that you just can’t have it all. All of engineering is this trade-off.”


I have more to say about QCon so look out for part two shortly.

New Delphi and C++ Builder Roadmap promises Linux server support

Embarcadero has published a new roadmap explaining what to expect in forthcoming editions of its RAD Studio suite, including Delphi and C++ Builder.

The company has been acquired by IDERA, though the Embarcadero brand is to continue under the new ownership.

The roadmap covers two “development tracks”, though it is not completely clear what that means. One is described as the “Spring development track” which suggests a release in April, 12 months after RAD Studio XE8. However, the post adds that “The team is working the following features that will be included in 2016 releases,” raising the possibility that some features in this track may come later, perhaps in the scheduled summer update.

The Spring track, to be called “Berlin”, seems to be mainly a tidying-up exercise in any case, with features including Bluetooth LE support for Windows 10, DirectX 12 support, native support for Utf8String on all platforms (you mean it does not have this already?) and enhancements to the FireMonkey cross-platform framework.

“Spring” also offers C++ CLANG 3.3 on all platforms.

The second development track “will deliver a Fall release”, to be known as “Tokyo”, following the pattern of recent years where RAD Studio has two major updates every year. The Fall track is more interesting, and includes support for Delphi and C++ Builder on Linux Server, as well as “Linux platform support for console apps with IoT support.” I guess non-GUI Linux is the common thread here.

The IDE will remain on Windows, with cross-compilation for Linux. Initially supported distributions are Ubuntu Server and RedHat Enterprise, though further distributions will be added “as demand dictates”.

It is good to see Linux support back in Delphi. I remember Borland Kylix (2001-2003) well, but this was back in the days when desktop Linux looked like more of a thing.

The feature-list for Tokyo also includes Windows Centennial support. This is potentially big news. Centennial is a Microsoft project to deliver Windows desktop applications through the Windows Store, using application virtualisation based on the existing App-V technology to remove dependency issues. You can expect to hear more about Centennial at Microsoft’s Build conference at the end of March; it was covered at last year’s Build but we have not heard much more about it since.


Embarcadero is also promising a new installer for RAD Studio, based on its GetIt technology, which will reduce installation time and give more flexibility in selecting features. This would be welcome; I never look forward to installing RAD Studio as it tends to be a time-consuming process. It would also be good if it messed less with system environment variables, though I do not know if this is on the cards. The new installer will come in two phases: phase 1 in Berlin and phase 2 in Tokyo.

My own view is that two major releases a year is one too many, so I would prefer it if Embarcadero scrapped Berlin and went straight to Tokyo.