QCon London 2017: IoT insecurity, serverless computing, predicting technical debt, and why .NET Core depends on a 36,000 line C++ file

I’m at the QCon event in London, a multi-vendor conference aimed primarily at enterprise developers and architects.

[Image: Adam Tornhill speaks at QCon London 2017]

A few notes on day one. Alasdair Allan gave a keynote on security and the internet of things: an entertaining and disturbing résumé of all that is wrong with the mad rush to connect everything to the internet, though short on answers. Our culture has to change so that organisations such as hotels, toy manufacturers, appliance vendors and even makers of medical equipment take security seriously, but it is not clear how this will come about unless so many bad things happen that customers start to insist on it.

Michael Feathers spoke on strategic code deletion, part of a track on “Dark code: the legacy/tech debt dilemma.” This was an excellent session; code is added to projects more often than it is removed, and lack of hygiene in this regard carries risks to security, reliability and performance. But discovering which code is safe to remove is not always trivial, and Feathers explored some of the nuances and suggested some techniques.

Steve Faulkner gave a session on serverless JavaScript, or more specifically, using Amazon Web Services (AWS) Lambda and API Gateway. Faulkner said that the API Gateway was the piece that made Lambda viable for them; he is Director of Platform Engineering at Bustle, a busy content site based in the USA. In a nutshell, moving from EC2 VMs to Lambda has yielded both financial savings and easier management. The only downside is performance; each call to a Lambda function takes a minimum of 100ms whereas the same function on a VM might take 20ms. In the end it is not critical as performance remains satisfactory.
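
To give a flavour of the programming model: this is a minimal sketch of a function behind API Gateway, using the proxy integration’s event and response shape. Bustle’s code is JavaScript; the sketch below uses Python, which Lambda also supports, and a made-up greeting endpoint.

    # Minimal sketch of a Lambda handler behind API Gateway (proxy
    # integration). The greeting endpoint is invented for illustration.
    import json

    def handler(event, context):
        # API Gateway passes query string parameters in the event;
        # the key may be absent or None if no parameters were sent
        params = event.get("queryStringParameters") or {}
        name = params.get("name", "world")
        # Proxy integration expects statusCode/headers/body in the return
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"greeting": f"hello {name}"}),
        }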

Faulkner said that AWS is ahead of its competitors (Microsoft, Google and IBM were mentioned) but when pressed said that both Microsoft and Google offered strong alternatives. Microsoft’s Azure Functions are spoilt by the need to specify a maximum scale rather than scaling automatically, but its routing solution is in some ways ahead of AWS’s, he said. Google’s Cloud Functions will be great once out of beta.

Adam Tornhill spoke on A Crystal Ball to prioritise Technical Debt, another session in the dark code track. This was my favourite of the day. Tornhill presented a relatively simple way to discover what code you should refactor now in order to avoid future issues. His method is based on looking for files with many lines of code (a way of measuring complexity) and many commits (suggesting high importance and activity), the “hotspots” in your projects. For more detail and some utilities see Tornhill’s blog.
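
Tornhill has proper tooling for this (see his blog); purely to illustrate the principle, a crude version of the hotspot score can be computed by multiplying each file’s commit count from git history by its current line count. This sketch assumes it is run from the root of a git repository:

    # Rough sketch of the "hotspot" idea: rank files by
    # (number of commits) x (lines of code). Tornhill's own tools
    # are far more sophisticated; this just illustrates the principle.
    import subprocess
    from collections import Counter

    # List every file touched by every commit in the repository's history
    log = subprocess.run(
        ["git", "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True
    ).stdout
    commits = Counter(line for line in log.splitlines() if line)

    def line_count(path):
        try:
            with open(path, errors="ignore") as f:
                return sum(1 for _ in f)
        except OSError:
            return 0  # file may have been deleted or renamed since

    # Hotspot score: change frequency x a crude complexity proxy (LOC)
    hotspots = sorted(
        ((commits[p] * line_count(p), p) for p in commits),
        reverse=True
    )
    for score, path in hotspots[:10]:
        print(f"{score:8d}  {path}")

Real hotspot analysis weights recent changes more heavily, but even a crude score like this tends to surface files such as gc.cpp, mentioned below.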

Why do we end up with bad or risky code in our software? Tornhill said that developers often mistake organisational problems for technical problems and try unsuccessfully to fix them with tools.

He also mentioned an example of high-risk code: the file gc.cpp, which performs garbage collection in .NET Core, the next generation of Microsoft’s .NET Framework. This file is over 36,000 lines and should be refactored. There is a discussion on the subject here, and it exactly bears out Tornhill’s point. A developer proposed refactoring the file back in March 2015; Microsoft’s Karel Zikmund defended the status quo:

Why it is this way? … Partly historical reasons (it is this way since the start). Partly because devs working on it didn’t feel the urge to refactor it. Partly because splitting of gc.cpp is non-trivial and risky and because it does not bring too big value (ramp up in the code base can be gained also in the combination of reading BOTR and debugging the code). Why it is staying this way? … Cost/benefit/risk ratio is IMO not in favor of a change here.

Few additional thoughts:
Am I happy that there is only 1 large file? No, but it doesn’t hurt me much either.
Do I see the disadvantages of large file? Yes, but I don’t think they are huge. More like minor annoyances with easy workarounds.
And to turn it around: Do you see the risk of any changes here? Do you see the cost of extra careful code reviews to mitigate the risk?

Strictly technically, we truly believe this is a formatting change. If it was simple to split it up and if it would be low risk and if it would be very easy to review, it might be worth the ‘minor’ improvements mentioned above … but I don’t see that combo happening (not on a noticeable scale in gc.cpp).
On a personal note: I also trust CLR team that if all these three things were true, the refactoring would have happened long time ago.

Note that some of this code goes back beyond .NET Core to the .NET Framework, the “historical reasons” that Zikmund mentions. We can see that the factors preventing change are as much organisational as technical.

Finally I attended a session on Microsoft’s Cognitive Services; note that this was in the “Sponsored solution track”. Microsoft also has a stand here devoted to the same services.

There is not much Microsoft platform content at QCon; the company seems under-represented, though many of the sessions are applicable to developers on any platform. I am not sure of all the reasons for this; there used to be an Advanced .NET track at QCon. It does reflect some overall development trends as well as the history and evolution of QCon itself. That said, there is a session on SQL Server on Linux, so the company is not completely invisible here.

As for the session, it was a reasonable overview of Microsoft’s expanding Cognitive Services APIs, which cover things like image recognition, speech recognition and more. I would have liked more depth and would have preferred to hear from a practitioner, in other words, “we built an application on Cognitive Services and this is what we learned.” I am not altogether clear why the company is pushing this so hard, except that it is a driver for developers to use Azure. I asked how developers should deal with the problem of uncertainty*: Cognitive Services does not deliver absolute results but rather draws conclusions with a confidence score – e.g. it might be pretty sure that an image contains a human face, fairly sure that it is male, and somewhat confident that the age of the person is mid-forties. When the speaker demoed speech recognition it went pretty well, except that “Start” was transcribed as “Stop.” This stuff is difficult.
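
To make the uncertainty concrete: the response shape below is invented for illustration rather than the actual Cognitive Services schema, but it shows the decision that lands on the application developer – choosing a confidence threshold and deciding what to do beneath it.

    # Sketch of handling probabilistic results from a vision API.
    # The response shape and threshold are illustrative, not the exact
    # Cognitive Services schema; the point is that the application,
    # not the API, decides what confidence is "good enough".
    response = {
        "observations": [
            {"attribute": "face",    "confidence": 0.98},
            {"attribute": "male",    "confidence": 0.81},
            {"attribute": "age:45",  "confidence": 0.60},
        ]
    }

    CONFIDENCE_THRESHOLD = 0.75  # arbitrary; tune per use case

    for item in response["observations"]:
        if item["confidence"] >= CONFIDENCE_THRESHOLD:
            print(f"Accept {item['attribute']} ({item['confidence']:.0%})")
        else:
            # Below threshold: defer to a human, ask again, or ignore
            print(f"Uncertain about {item['attribute']}; needs review")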

Looking forward now to Day Two: Containers, Machine Learning, and more.

*More concisely expressed as “Systems are moving from the deterministic to the probabilistic” by Stephen Whitworth, who is now speaking on Machine Learning.

Reflections on QCon London 2016 – part one

I attended QCon in London last week. This is a software development conference focused on large-scale projects and with a tradition oriented towards Agile methodology. It is always one of the best events I get to attend, partly because it is vendor-neutral (it is organised by InfoQ), and partly because of the way it is structured. The schedule is divided into tracks, such as “Back to Java” or “Architecting for failure”, each of which has a track leader, and the track leader gets to choose who speaks on their track. This means you get a more diverse range of speakers than is typical; you also tend to hear from practitioners or academics rather than product managers or evangelists.


The 2016 event was well up to standard from my perspective – though bear in mind that with 6 tracks on each day I only got to attend a small fraction of the sessions.

This post is just to mention a few highlights, starting with the opening keynote from Adrian Colyer, who specialises in finding interesting IT-related research papers and writing them up on his blog. He seems to enjoy being contrarian and noted, for example, that you might be doing too much software testing – drawing, I guess, on this post about the art of testing less without sacrificing quality. The takeaway for me is that it is always worth analysing what you do and trying to avoid the point where the cost exceeds the benefit.

Next up was Gavin Stevenson on “love failure” – I wrote this up on the Reg – there is a perhaps obvious point here that until you break something, you don’t know its limitations.

On Monday evening we got a light-hearted (virtual) look at Babbage’s Analytical Engine (1837) which was never built but was interesting as a mechanical computer, and Ada Lovelace’s attempts to write code for it, thanks to John Graham-Cumming and illustrator Sydney Padua (author of The Thrilling Adventures of Lovelace and Babbage).


Tuesday, and the BBC’s Stephen Godwin spoke on Microservices powering BBC iPlayer. This was a compelling talk for several reasons. The BBC is apparently hooked on AWS (Amazon Web Services) and stores 21TB daily into S3 (Simple Storage Service), including safety copies. iPlayer was rebuilt in 2013, Godwin told us, and the team of 25 developers achieves 34 live deployments per week on average; clearly the DevOps stuff is working here. Godwin advocates genuinely “micro” services. “How big should a microservice be? For us, about 600 Java statements,” he said.

Martin Thompson spoke on the characteristics of a good software engineer, though oddly the statement that has stayed with me is that an ORM (Object-Relational Mapping) “is the wrong abstraction for a database”, something that chimes with me even though I get the value of ORMs like Microsoft’s Entity Framework for rapid development where database performance is non-critical.

Then came another highlight: Google’s Micah Lemonik on Architecting Google Docs. This talk sadly was not recorded; a touch of paranoia from Google? It was fascinating from a historical perspective: Lemonik was involved in a small company called 2Web Technologies, which developed an Excel-like engine in 2003-4, and he joined Google (which acquired 2Web) in 2005 to work on Google Sheets. The big story here was how Google Sheets became collaborative, so that more than one person could work on a spreadsheet simultaneously. “Google didn’t like it initially,” said Lemonik. “They thought it was too weird.” The team persisted though, thinking about the editing process as “messages being transferred between collaborators” rather than as file updates; and it worked.

You can actually use today’s version in your own projects, with Google’s Realtime API, provided that you are happy to have your stuff on Google Drive.

I particularly enjoyed Lemonik’s question to the audience. Two people are working on a sheet, and one types “6” into a cell. Then the same person overtypes this with “7”. Then the collaborator overtypes the same cell with “8”. Next, the first person presses Ctrl-z for undo. What should be the result?

The audience split neatly into “6”, “7”, and just a few “8” (the rationale for “8” is that undo should only undo your own changes and not touch those made by others).

Google, incidentally, settled on “6”, maintaining a separate undo stack for each user. But there is no right answer.
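
For the curious, here is a minimal sketch of the per-user undo idea, with each user’s stack recording the value their edit replaced. It is illustrative only; the real system works on streams of edit messages rather than a single in-memory cell.

    # Minimal sketch of per-user undo, the approach Lemonik described
    # Google settling on. Each user's stack records the value their
    # own edit replaced; undo restores that value regardless of later
    # edits by collaborators. Illustrative only.
    class Cell:
        def __init__(self):
            self.value = None
            self.undo_stacks = {}  # user -> list of replaced values

        def edit(self, user, value):
            self.undo_stacks.setdefault(user, []).append(self.value)
            self.value = value

        def undo(self, user):
            stack = self.undo_stacks.get(user, [])
            if stack:
                self.value = stack.pop()

    cell = Cell()
    cell.edit("alice", 6)   # Alice types 6
    cell.edit("alice", 7)   # Alice overtypes with 7
    cell.edit("bob", 8)     # Bob overtypes with 8
    cell.undo("alice")      # Alice presses Ctrl-z
    print(cell.value)       # 6: Alice's undo restores what *she* replaced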

Lemonik also discussed the problem of consistency when there are large numbers of contributors. A hard problem. “There have to be bounds to the system in order for it to perform well,” he said. “The biggest takeaway for me in building the system is that you just can’t have it all. All of engineering is this trade-off.”


I have more to say about QCon so look out for part two shortly.

What’s on at the QCon London software development conference (and a discount for readers)

The QCon London conference is on in early March (5-7). It is always a conference I look forward to since it is vendor-neutral, though with an agile flavour. Although it covers high-scale systems, it is not the place to go if you think heavyweight enterprise middleware from a big-name vendor will solve your problems. In fact, it was at QCon that I heard Martin Fowler and Jim Webber from ThoughtWorks expound on Does my Bus look big in this? (what a great title), in which they argue that whatever software need you have, an Enterprise Service Bus is not the best way to meet it. You are not going to hear this kind of disruptive address at vendor-driven events.

So what is on this year? There are 15 tracks, some covering the buzzwords of the day like Big Data Architecture, DevOps, Internet of Things (two different tracks on this) and Bleeding Edge HTML5 and JavaScript (with case studies from Netflix and the Financial Times). Next Gen Cloud intrigues me because it covers multi-cloud services.

Java is almost conspicuous by its absence (though it will no doubt show up throughout) but there is a track on Not Only Java which looks at performance tuning, garbage collection, lambdas and streams in Java 8, and more.

In the wake of Snowden’s revelations, Privacy and Security gets a track to itself, long overdue.

So how does QCon choose its tracks? It’s done by a committee of community experts, InfoQ CEO Floyd Marinescu told me.

Over 15 people came together, facilitated by QCon, and through an intensive multi-week process debated and voted down our tracks until we had these final 15.   We do place some constraints such as a desire to have a certain balance of tracks to cover areas of innovation of interest to different communities such as project managers, architects, engineers as well as operations people.

What really counts of course is the speakers, and this year they include Jafar Husain (technical lead at Netflix), Eva Andreasson who pioneered deterministic garbage collection, Graham Tackley, director of architecture at Guardian News and Media, Erik Meijer who created Microsoft’s LINQ, and Joe Armstrong, co-inventor of Erlang.

If you book with the code “itwriting50” you will get a £50 discount.

Microsoft’s platform nearly invisible at QCon London 2012

QCon London ended yesterday. It was the biggest London QCon yet, with around 1200 developers and a certain amount of room chaos, but still a friendly atmosphere and a great opportunity to catch up with developers, vendors, and industry trends.

Microsoft was near-invisible at QCon. There was a sparsely attended Azure session, mainly I would guess because QCon attendees do not see that Azure has any relevance to them. What does it offer that they cannot get from Amazon EC2, Google App Engine, Joyent or another niche provider, or from their own private clouds?

Mark Rendle at the Azure session did state that Node.js runs better on Windows (and Azure) than on Linux. However he did not have performance figures to hand. A quick search throws up these figures from Node.js inventor Ryan Dahl:

  benchmark                     v0.6.0 (Linux)   v0.6.0 (Windows)
  http_simple.js /bytes/1024    6263 r/s         5823 r/s
  io.js read                    26.63 mB/s       26.51 mB/s
  io.js write                   17.40 mB/s       33.58 mB/s
  startup.js                    49.6 ms          52.04 ms

These figures say “nothing to choose between them” rather than providing evidence of better performance on Windows, but since 0.6.0 was the first Windows release, it is possible that things have swung in Windows’s favour since. It is a decent showing for sure, but there are other more important factors when choosing a cloud platform: cost, resiliency, services available and so on. Amazon is charging ahead; why choose Azure?

My sense is that developers presume that Azure is mainly relevant to Microsoft platform businesses hosting Microsoft platform applications; and I suspect that a detailed analysis would bear out that presumption despite the encouraging figures above. That said, Azure seems to me a solid though somewhat expensive offering and one that the company has undersold.

I have focused on Azure because QCon tends to be more about the server than the client (though there was a good deal of mobile this year), and at enterprise scale. It beats me why Microsoft was not exhibiting there, as the attendees are an influential lot and exactly the target audience, if the company wants to move beyond its home crowd.

I heard little talk of Windows 8 or Windows Phone 7, though Nokia sponsored some of the catering and ran a hospitality suite which unfortunately I was not able to attend.

Nor did I get to Tomas Petricek’s talk on asynchronous programming in F#, though functional programming was hot at QCon last year and I would guess he drew a bigger audience than Azure managed.

Microsoft is coming from behind in cloud – Infrastructure as a Service and/or Platform as a Service – as well as in mobile.

I should add the company is, from what I hear, doing better with its Software as a Service cloud, Office 365; and of course I realise that there are plenty of Microsoft-platform folk who attend other events such as the company’s own BUILD, Tech Ed and so on.

Update:

This is the basis for the claim that node.js runs better on Windows:

IOCP supports Sockets, Pipes, and Regular Files.
That is, Windows has true async kernel file I/O.
(In Unix we have to fake it with a userspace thread pool.)

from Dahl’s presentation on the Node roadmap at NodeConf May 2011.

The most enduring software development techniques revealed at QCon London

I am in London for the QCon event, a vendor-neutral development conference which I have been fortunate to attend regularly over the last few years.


These events tend to have an underlying theme, which reflects the current thinking of developers and software architects. Each year I hear cogent and thoughtful explanations of why this or that approach will enable us to code better and please users more. Each year I also hear cogent and thoughtful explanations of why the fix proposed last year or the year before is actually a prime reason why projects fail.

Way back when it was SOA (Service Oriented Architecture) that was sweeping away the mistakes of the past. Next SOA itself was the mistake of the past and we got REST (Representational State Transfer). This year I am hearing how RPC is making a comeback, or at least not going away, for example because it can be more efficient when you want to transfer as little data as possible across the WAN.

Another example is enterprise Java. Enterprise Java Beans and J2EE were the fix, and then the problem, for scalable distributed applications. Rod Johnson came up with Spring, the lightweight alternative. Now I am hearing how Spring has become bloated and complicated and developers are looking for lightweight alternatives.

Test-driven development (TDD) brings fantastic benefits to software development, making it possible to change and improve your code while defending against the introduction of bugs. Yesterday, though, Dan North observed that TDD also has a cost, in that you write much more code. It is not uncommon for projects to have more test code than code that is active in production. If you did not write that code, you could be doing other productive work in the time made available.

Agile methodologies like Scrum were devised to promote or even create communication and agility in software teams. Now every big enterprise vendor says it does Scrum and runs courses, but the result is a long way from the agile (with a small a) original concept.

This year I have heard a lot about over-optimisation, or creating code for situations that in fact never arise. This is the problem to which the solution is YAGNI (You Ain’t Gonna Need It). Since they apply across all the methodologies, I suggest that YAGNI, and its cousin DRY (Don’t Repeat Yourself), and the even older KISS (Keep It Simple Stupid) are the most enduring software methodologies.

That said, even DRY took a beating yesterday. Greg Young in his evening keynote said that rigorous DRY advocates can end up merging code into a single block when the procedures were really only nearly the same. If your DRY functions are full of edge cases and special conditions, then maybe DRY has been taken to excess.
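
A hypothetical illustration of the problem (the functions and flags are invented): two “nearly the same” export routines merged in the name of DRY accumulate a flag and a branch for every difference, where two small functions would each stay simple.

    # Over-DRY: one function trying to be every report exporter at once.
    # Each "nearly the same" caller added another flag and another branch.
    def export_report(data, fmt, include_header=True, escape_html=False,
                      pad_columns=False, legacy_dates=False):
        ...  # a thicket of conditionals lives here

    # Often clearer: two small functions that share only what is
    # genuinely identical.
    def export_csv(data):
        return "\n".join(",".join(map(str, row)) for row in data)

    def export_html(data):
        rows = "".join(
            "<tr>" + "".join(f"<td>{cell}</td>" for cell in row) + "</tr>"
            for row in data
        )
        return f"<table>{rows}</table>"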

In the light of the above, I would therefore like to propose the first draft of my first theorem of software development:

There is no development methodology which will not become a burden when embraced rigidly

The other lesson I have learned from multiple QCons is that effective teams and smart developers count for much, much more than any specific tool or language or approach. There is no substitute.

Sold out QCon kicks off in London: big data, mobile, cloud, HTML 5

QCon London has just started, and I’m interested to see that it is both bigger than last year and also sold out. I should not be surprised, because it is usually the best conference I attend all year, being vendor-neutral (though with an Agile bias), wide-ranging and always thought-provoking.


A few more observations. One reason I attend is to watch industry trends, which are meaningful here because the agenda is driven by what currently concerns developers and software architects. Interesting then to see an entire track on cross-platform mobile, though one that is largely focused on HTML 5. In fact, mobile and cloud between them dominate here, with other tracks covering cloud architecture, big data, highly available systems, platform as a service, HTML 5 and JavaScript and more.

I also noticed that Adobe’s Christophe Coenraets is here this year as he was in 2011 – only this year he is talking not about Flex or AIR, but HTML, JavaScript and PhoneGap.

Guardian.co.uk enthuses about MongoDB, plans to ditch Oracle

The Guardian’s Mat Wall has spoken here at QCon London about why it is migrating its web site away from Oracle and towards MongoDB.

He also said there are moves towards cloud hosting, I think on Amazon’s hosted infrastructure, and that its own data centre can be used as a backup in case of cloud failure – an idea which makes some sense to me.

So what’s wrong with Oracle? The problem is the tight relationship between updates to the code that runs the site, and the Oracle database schema. Significant code updates tend to require schema updates too, which means pausing content updates while this takes place. Journalists on a major news site hate these pauses.

MongoDB by contrast is not a relational database. Rather, it stores documents in JSON (JavaScript Object Notation) format. This means that documents with new attributes can be added to the database at runtime.
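
A sketch of what that flexibility looks like in practice, using the PyMongo driver against a local MongoDB instance; the database, collection and field names are invented for illustration:

    # Sketch of the schema flexibility Wall described, using PyMongo.
    # Database, collection and field names are made up for illustration.
    from pymongo import MongoClient

    articles = MongoClient()["newsroom"]["articles"]

    # Today's document
    articles.insert_one({
        "headline": "QCon London kicks off",
        "body": "...",
    })

    # Later, new code can add attributes with no schema migration,
    # so content publishing need not pause for a database update
    articles.insert_one({
        "headline": "MongoDB at the Guardian",
        "body": "...",
        "tags": ["nosql", "mongodb"],   # new field, no ALTER TABLE
        "live_blog": True,              # another new field
    })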

Although this was the main motivation for change, the Guardian discovered other benefits. Developer productivity is significantly better with MongoDB and they are enjoying its API.

Currently both MongoDB and Oracle are in use. The Guardian has written its own API layer to wrap database access and handle the complexity of having two radically different data stores.

I enjoyed this talk, partly thanks to Wall’s clear presentation, and partly because I was glad to hear solid pragmatic reasons for moving to a NoSQL data store.

QCon London kicks off with call to rediscover Agile, use open source

I’m at the QCon developer conference in London – one of my favourite developer conferences of the year because of its breadth and energy.

The opening keynote was from Craig Larman, who spoke on doing lean and agile development – in particular, the Scrum methodology – with large multi-site teams. He means sizeable product groups of 500-1500 people, though he also remarked that development on this scale is really a bad idea and that a team of 10 smart folk is much better.

Still, I guess large teams are an inevitability, and Larman has written books on the subject. I am not going to summarise the talk exactly, interesting though it was, but I am going to pick out a couple of asides which interested me.

Agile methodology is really about promoting communication; and one of Larman’s themes is that if you do what seems obvious, that is to break down a project into components and give one to each small team, then you end up with numerous teams that do not communicate well with each other. Agile becomes something you do in name only.

Larman spent a bit of time on which collaboration tools to use. One of his points was not to use any commercial tool that describes itself as being for agile project management or similar. I can think of several. He says these tools are just the commercial tool vendors repackaging their old non-agile tools. Whiteboards, spreadsheets on Google Docs, wikis and other simple tools are his recommendation. For source code management he suggests Subversion, Git or other open source solutions. Never use Rational ClearCase, he added; it always causes problems.

In fact, he went on to say that commercial tools in general cause problems when multi-site development extends to teams in developing countries. They cannot afford the licences, he says, so avoid them.

It seems to me that the common theme here is how easily agile development intentions become non-agile in practice, especially in these large project groups.

QCon London 2010 report: fix your code, adopt simplicity, cool .NET things

I’m just back from QCon London, a software development conference with an agile flavour that I enjoy because it is not vendor-specific. Conferences like this are energising; they make you re-examine what you are doing and may kick you into a better place. Here’s what I noticed this year.

Robert C Martin from Object Mentor gave the opening keynote, on software craftsmanship. His point is that code should not just work; it should be good. He is delightfully opinionated. Certification, he says, provides value only to certification bodies. If you want to know whether someone has the skills you want, talk to them.

Martin also came up with a bunch of tips for how to write good code, things like not having more than two arguments to a function and never a boolean. I’ve written these up elsewhere.


Next I looked into the non-relational database track and heard Geir Magnusson explain why he needed Project Voldemort, a distributed key-value storage system, to get his ecommerce site to scale. Non-relational or NoSQL is a big theme these days; database managers like CouchDB and MongoDB are getting a lot of attention. I would like to have spent more time on this track, but there was too much else on – a problem with QCon.

I therefore headed for the functional programming track, where Don Syme from Microsoft Research gave an inspiring talk on F#, Microsoft’s new functional language. He has a series of hilarious slides showing F# code alongside its equivalent in C#. Here is an example:

[Image: slide showing F# code alongside its C# equivalent]

The white panel is the F# code; the rest of the slide is C#.

Seeing a slide like this makes you wonder why we use C# at all, though of course Syme has chosen tasks like asynchronous IO and concurrent programming for which F# is well suited. Syme also observed that F# is ideal for working with immutable data, which is common in internet programming. I grabbed a copy of Programming F# for further reading.

Over on the Architecture track, Andres Kütt spoke on Five Years as a Skype Architect. His main theme: most of a software architect’s job is communication, not poring over diagrams and devising code structures. This is a consistent theme at QCon and in the Agile movement; get the communication right and all else follows. I was also interested in the technical side though. Skype started with SOAP but switched to a REST model for web services. Kütt also told us about the languages Skype uses: PHP for the web site, C or C++ for heavy lifting and peer-to-peer networking; Delphi for the Windows interface; PostgreSQL for the database.

Day two of QCon was even better. I’ve written up Martin Fowler’s talk on the ethics of software development in a separate post. Following that, I heard Canonical’s Simon Wardley speak about cloud computing. Canonical is making a big push for Ubuntu’s cloud package, available both for private use and hosted on Amazon’s servers; attendees at the QCon CloudCamp later on were given a lavish, pointless cardboard box with promotional details. To be fair to Wardley, he did not talk much about Ubuntu’s cloud solution, though he did make the point that open source makes transitions between providers much cheaper.

Wardley’s most striking point, repeated perhaps too many times, is that we have no choice about whether to adopt cloud computing, since we will be too much disadvantaged if we reject it. He says it is now more a management issue than a technical one.

Dan North from ThoughtWorks gave a funny and excellent session on simplicity in architecture. He used pseudo-biblical language to describe the progress of software architecture for distributed systems, finishing with

On the seventh day God created REST

Very good; but his serious point is that the shortest, simplest route to solving a problem is often the best one, and that we constantly make the mistake of using over-generalised solutions which add a counter-productive burden of complexity.

North talked about techniques for lateral thinking, finding solutions from which we are mentally blocked, by chunking up, which means merging details into bigger ideas, ending up with “what is this thing for anyway”; and chunking down, the reverse process, which breaks a problem down into blocks small enough to comprehend. Another idea is to articulate a problem to a colleague, which exercises different parts of the brain and often stimulates a solution – one of the reasons pair programming can be effective.

A common mistake, he said, is to keep using the same old products or systems or architectures because we always do, or because the organisation is already heavily invested in it, meaning that better alternatives do not get considered. He also talked about simple tools: a whiteboard rather than a CASE tool, for example.

Much of North’s talk was a variant of YAGNI – you ain’t gonna need it – an agile principle of not implementing something until/unless you actually need it.

I’d like to put this together with something from later in the day, a talk on cool things in the .NET platform. One of these was Guerrilla SOA, though it is not really specific to .NET. To get the idea, read this blog post by Jim Webber, another from the ThoughtWorks team (yes, there are a lot of them at QCon). Here’s a couple of quotes:

Prior to our first project starting, that client had already undertaken some analysis of their future architecture (which needs scalability of 1 billion transactions per month) using a blue-chip consultancy. The conclusion from that consultancy was to deploy a bus to patch together the existing systems, and everything else would then come together. The upfront cost of the middleware was around £10 million. Not big money in the grand scheme of things, but this £10 million didn’t provide a working solution, it was just the first step in the process that would some day, perhaps, deliver value back to the business, with little empirical data to back up that assertion.

My (small) team … took the time to understand how to incrementally alter the enterprise architecture to release value early, and we proposed doing this using commodity HTTP servers at £0 cost for middleware. Importantly we backed up our architectural approach with numbers: we measured the throughput and latency characteristics of a representative spike (a piece of code used to answer a question) through our high level design, and showed that both HTTP and our chosen Web server were suitable for the volumes of traffic that the system would have to support … We performance tested the solution every single day to ensure that we would always be able to meet the SLAs imposed on us by the business. We were able to do that because we were not tightly coupled to some overarching middleware, and as a consequence we delivered our first service quickly and had great confidence in its ability to handle large loads. With middleware in the mix, we wouldn’t have been so successful at rapidly validating our service’s performance. Our performance testing would have been hampered by intricate installations, licensing, ops and admin, difficulties in starting from a clean state, to name but a few issues … The last I heard a few weeks back, the system as a whole was dealing with several hundred percent more transactions per second than before we started. But what’s particularly interesting, coming back to the cost of people versus cost of middleware argument, is this: we spent nothing on middleware. Instead we spent around £1 million on people, which compares favourably to the £10 million up front gamble originally proposed.

This strikes me as an example of the kind of approach North advocates.

You may be wondering what other cool .NET things were presented. The session, State of the Art .NET, was given by Amanda Laucher and Josh Graham. They offered a dozen items which they consider .NET folk should be using or learning about:

  1. F# (again)
  2. M – modelling/DSL language
  3. Boo – static Python for .NET
  4. NUnit – unit testing. Little regard for Microsoft’s test framework in Team System, which is seen as a wasted and inferior effort.
  5. RhinoMocks – mocking library
  6. Moq – another mocking library
  7. NHibernate – object-relational mapping
  8. Windsor – dependency injection, part of Castle project. Controversial; some attendees thought it too complex.
  9. NVelocity – .NET template engine
  10. Guerrilla SOA – see above
  11. Azure – Microsoft’s cloud platform – surprisingly good thanks to David Cutler’s involvement, we were told
  12. MEF – Managed Extensibility Framework as found in Visual Studio 2010, won high praise from those who have tried it

That was my last session (I missed Friday) though I did attend the first part of CloudCamp, an unconference for cloud early adopters. I am not sure there is much point in these now. The cloud is no longer subversive and the next new thing; all the big enterprise vendors are onto it. Look at the CloudCamp sponsor list if you doubt me. There are of course still plenty of issues to talk about, but maybe not like this; I stayed for the first hour but it was dull.

For more on QCon you might also want to read back through my Twitter feed or search the entire #qcon tag for what everyone else thought.

Martin Fowler on the ethics of software development – QCon report

Martin Fowler of ThoughtWorks gave what seemed an important session at QCon London, exploring the ethical dimension of software development with a talk called What are we programming for?. The room was small, since the organisers figured that a track on IT with a conscience would be a minority interest; but Fowler always attracts a large audience and the result was a predictable crush.

The topic itself has many facets, perhaps awkwardly commingled. Should you walk away if your customer or employer asks you to code the wrong thing – maybe enforcing poor practice, or creating algorithms to fleece your customer’s customers? What about coding for proprietary platforms owned by vendors with questionable business practices? Fowler mentioned that around 50% of ThoughtWorks solutions are built on Microsoft .NET, and that the efforts of Bill Gates to combat malaria were something that helped him make sense of that relationship.


Fowler also echoed some of Robert Martin’s theme from the day before: if you build poor software, you are in effect stealing from the future revenue of your customer.

Finally, he talked about the male/female imbalance in software development, rejecting the idea that it is the consequence of natural differences in aptitude or inclination. Rather, he said, we can’t pretend to have the best people for the job while the imbalance continues. I’d add that since communication is a major theme of Agile methodology, and females seem to have communication skills that males lack, fixing the imbalance could improve the development process in other ways.

Big questions; and Fowler had few answers. When asked for the takeaway, he said the main one is to discuss these issues more.