
QCon London: the Ethics track and the psychology of software

The most significant thing about the Ethics track at QCon London, a software development conference I attended last week, is that it existed. I can recall ethics being discussed at QCon in previous years (including a memorable appeal by Martin Fowler at Thoughtworks about rectifying the gender imbalance in IT) but not a specific track.

Why does ethics matter more today? Ethics has always mattered, but the power of software over our lives is increasing. It is possible that algorithms at Facebook, YouTube and Twitter influenced the result of the last US election and the UK’s Brexit referendum. Algorithms play a large role in influencing many of our choices: what to buy, where to eat, where to stay, which airline to book, which vendor to use.

Software also consumes more of our time than ever, as we constantly check our phones for notifications, play games or read online content.

The increasing importance of AI (Artificial Intelligence) also raises ethical questions. Last week I attended the Re-work AI Assistant Summit, also in London. One of the sessions concerned “Building an AI Friend”, presented by Artem Rodichev from Replika. The demos were impressive, showing how a bot can be engaging and help users to talk about what matters to them. I asked, though, whether the company had thought about ethical issues, for example if a child became attached to a bot without realising it was non-human. The answer I got was in effect a blank look, followed by the statement “we have a minimum age limit of 7”. The company has not announced a business model, but I would encourage it to form an ethical policy early, as these things are hard to bolt on in retrospect – as Facebook is discovering today, in the aftermath of the exposure of how the personal data it holds has been misused by third parties.

AI is also poised to take over more jobs previously done by people. This could be a great liberator for humanity, or alternatively divide society even more deeply into haves and have-nots.

We need more ethics discussion then; but is it too late? Well, it is never too late to improve matters, but perhaps much harm could have been avoided if the industry had focused on this earlier.

I attended a talk by Alexander Steinhart (a technologist at ThoughtWorks) on the psychologist’s perspective on ethics in technology.


Steinhart talked about addiction. “We all want to unplug, but cannot”, he said.

“Now we are all connected. On average people are nearly three hours online every day. They check phones every 7 to 15 minutes. Many people have difficulties in finding the right balance.”

When is a habit an addiction? When it “gets into the way of your life and you can’t do anything else, and when you try to change behaviour you don’t manage,” said Steinhart, mentioning that “distraction” is identified as a risk by many people today, including teenagers.

Interruptions and distractions are detrimental to our productivity and also a source of stress, he said. Once you are distracted, it takes 20-25 minutes to recover your focus. “Take care that you are not connected all the time.”

Unfortunately we have also developed an “attention economy” where web sites and apps are rewarded for holding our attention and they have evolved to do that effectively.

A great way, apparently, to get us addicted is to have mechanisms that only occasionally reward us. We will try and try again in hope of reward. Lotteries are like this. So are slot machines. So too, says Steinhart, are things like notifications in apps, or the action of pulling down to refresh emails or other feeds. Most of the time we get nothing of value and we know that. But occasionally something really good arrives. The possibility keeps us hooked.

Another difficulty is that humans do not always cope well with abundance. When a previously scarce resource, food for example, becomes abundant, logically what should happen is that we become more discriminating, selecting only the best and discarding the rest. In practice though this is not the case, and we have seen the ascendance of junk food that does us harm.

We now have abundant information. Answering a question that might once have required a trip to the library or several phone calls can now be done in an instant. That is fantastic; but are we coping well? Somehow, instead of becoming more discriminating about the sources and value of available information, humanity is prone to consuming more and more information of low quality, whether that is banal time-wasting or actual falsehoods and information intended to deceive or mislead us.

Steinhart argues that we have moved into a new technological era but have not yet learned how to manage it. He draws an analogy with urbanisation; it took mankind a while to learn how to build cities that were agreeable places in which to live.

Human needs include some that are ill served by today’s technological landscape. We need to experience “all of the different senses, to smell, to taste.” We need privacy and solitude. “If you put managers alone for one hour in a room with nothing to do, they make better decisions the rest of the day,” claims Steinhart. We also need conversations, not just connections. “There is so much human interaction that you cannot digitise, like looking someone in the eye,” he said.

How does this translate to ethics in technology? We need positive computing and software design that is “aligned with human goals,” he said.

Free and open source software is helpful in this respect, because the goals of the software are aligned with our needs rather than profit.

What can software developers do? “It is not your fault that technology is distracting,” said Steinhart, “but it’s your responsibility to change something.”


It is interesting to imagine what software might look like if designed for human needs rather than business interests. Steinhart’s ideas are around making software quieter, designed to get out of our way rather than to interrupt us, smartphones that encourage us to leave them alone, and of course to avoid anti-patterns which feed addiction or deliberately try to trip us up.

I noticed this tweet today about how an Amazon app behaves when you try to cancel. The user clicks Cancel subscription and gets this:

[Screenshot: the first cancellation screen]

The following screen reverses the button colouring so that if you trained yourself to tap the faint button, you actually do the opposite of what you intend:

[Screenshot: the following screen, with the button colouring reversed]

Until the last screen (there’s another one?) where they switch again:

[Screenshot: the final cancellation screen]

This was not in Steinhart’s talk, but it seems a good example of software designed for the business and not for the user.

I have seen a similar pattern in Amazon’s web checkout where you have to click carefully to avoid being signed up for Amazon’s Prime subscription by accident. Not good.

Ethics and technology

This post is long enough; but there is, I hope, much more to say on this subject.

Despite enjoying Steinhart’s talk and others in the Ethics track, I was not encouraged. We need, of course, regulation as well as more principled businesses, but we do not yet know what such regulation should look like, nor how to implement it.

One thing though is worth repeating: if as a software developer you are asked to do something that is ethically unacceptable, you should refuse. Professional standards include more than quality of coding.

A week of QCon: introduction

I attended QCon London last week and found it fascinating, but have not written as much about it as I intended because of various other deadlines. In order to address this I will do a quick daily post for the next week or so.

QCon is a software development conference run by InfoQ. It is vendor-neutral and focuses on large-scale enterprise development as well as future trends, language choices and changes, software architecture and more. If you delve into the history of the event, it has championed techniques including Agile development, Service Oriented Architecture, Microservices, and now AI. The event has a culture and an ethos, which is something to do with human-centred software, team communications, taking the side of the user, aversion to unnecessary complexity, and constant exploration of emerging technology.

Laura Bell of SafeStack speaks at QCon London on Architecting a Culture of Secure Software.

QCon, like many other events, encourages attendees to give feedback on sessions they attend. At other events I have often seen forms with several categories and questions like “How well did the speaker know their subject” and “What was your biggest takeaway from this session”? While such questions are reasonable, the problem is that they are too difficult and time-consuming and therefore not many respond, or the responses are of low quality. The QCon organisers decided years ago that the only feedback system that works is to have attendees vote good, indifferent or poor as they leave. This used to be done with coloured paper and is now electronic. I mention this because it says something about the event culture: let’s prefer something that works and is not a burden, despite the seeming crudity of a 1-2-3 scoring system. And of course even such basic information is highly valuable in discerning which sessions were most appreciated.

The event prefers practitioners, engineers and team leads over evangelists, trainers and consultants. It attracts a particularly able audience.

Of course you can learn plenty outside the actual sessions by chatting to other attendees.

Up next: technical ethics at QCon London.

QCon London 2017: IoT insecurity, serverless computing, predicting technical debt, and why .NET Core depends on a 36,000 line C++ file

I’m at the QCon event in London, a multi-vendor conference aimed primarily at enterprise developers and architects.

Adam Tornhill speaks at QCon London 2017

A few notes on day one. Alasdair Allan gave a keynote on security and the internet of things; it was an entertaining and disturbing résumé of all that is wrong with the mad rush to connect everything to the internet, though short on answers. Our culture has to change so that organisations such as hotels, toy manufacturers, appliance vendors and even makers of medical equipment take security seriously, but it is not clear how this will come about unless so many bad things happen that customers start to insist on it.

Michael Feathers spoke on strategic code deletion, part of a track on “Dark code: the legacy/tech debt dilemma.” This was an excellent session; code is added to projects more often than it is removed, and lack of hygiene in this regard has risks including security, reliability and performance. But discovering which code is safe to remove is not always trivial, and Feathers explored some of the nuances and suggested some techniques.

Steve Faulkner gave a session on serverless JavaScript, or more specifically, using Amazon Web Services (AWS) Lambda and API Gateway. Faulkner said that the API Gateway was the piece that made Lambda viable for them; he is Director of Platform Engineering at Bustle, a busy content site based in the USA. In a nutshell, moving from EC2 VMs to Lambda has yielded both financial savings and easier management. The only downside is performance; each call to a Lambda function takes a minimum of 100ms whereas the same function on a VM might take 20ms. In the end it is not critical as performance remains satisfactory.
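To give a flavour of the pattern Faulkner described – this is only a sketch, not Bustle’s code, and the route and response are invented – a function behind API Gateway’s Lambda proxy integration is little more than a handler that maps an HTTP event to an HTTP response:

```python
import json

def handler(event, context):
    # With the API Gateway proxy integration, the HTTP request arrives in
    # 'event' and the returned dict becomes the HTTP response. There is no
    # server to manage: AWS runs and scales instances of this function on
    # demand and bills per invocation.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The 100ms overhead Faulkner mentioned is paid on every invocation, which is why this model suits workloads where a few tens of milliseconds of extra latency do not matter.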

Faulkner said that AWS is ahead of its competitors (Microsoft, Google and IBM were mentioned) but when pressed said that both Microsoft and Google offered strong alternatives. Microsoft’s Azure Functions are spoilt by the need to specify a maximum scale, rather than scaling automatically, but its routing solution is in some ways ahead of AWS, he said. Google’s Functions will be great when out of beta.

Adam Tornhill spoke on A Crystal Ball to prioritise Technical Debt, another session in the dark code track. This was my favourite of the day. Tornhill presented a relatively simple way to discover what code you should refactor now in order to avoid future issues. His method is based on looking for files with many lines of code (a way of measuring complexity) and many commits (suggesting high importance and activity), the “hotspots” in your projects. For more detail and some utilities see Tornhill’s blog.
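The idea can be approximated with a few lines of scripting over a git repository: count how many commits have touched each file, weight by the file’s current size, and sort. The sketch below illustrates the principle only; Tornhill’s own tooling (see his blog) is considerably more sophisticated.

```python
import subprocess
from collections import Counter
from pathlib import Path

def hotspots(repo="."):
    # Change frequency: how many commits have touched each file.
    log = subprocess.run(
        ["git", "-C", repo, "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    churn = Counter(line.strip() for line in log.splitlines() if line.strip())

    # Crude complexity proxy: current line count. Hotspot score = churn x size.
    scored = []
    for path, commits in churn.items():
        f = Path(repo) / path
        if f.is_file():  # skip files that have since been deleted or renamed
            lines = sum(1 for _ in f.open(errors="ignore"))
            scored.append((commits * lines, commits, lines, path))
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    for score, commits, lines, path in hotspots()[:10]:
        print(f"{path}: {commits} commits, {lines} lines")
```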

Why do we end up with bad or risky code in our software? Tornhill said that developers often mistake organisational problems for technical problems and try unsuccessfully to fix them with tools.

He also mentioned an example of high-risk code, the file gc.cpp which performs garbage collection in .NET Core, the next generation of Microsoft’s .NET Framework. This file is over 36,000 lines and should be refactored. There is a discussion on the subject here. It exactly bears out Tornhill’s point. A developer proposed refactoring the file back in March 2015. Microsoft’s Karel Zikmund defends the status quo:

Why it is this way? … Partly historical reasons (it is this way since the start). Partly because devs working on it didn’t feel the urge to refactor it. Partly because splitting of gc.cpp is non-trivial and risky and because it does not bring too big value (ramp up in the code base can be gained also in the combination of reading BOTR and debugging the code). Why it is staying this way? … Cost/benefit/risk ratio is IMO not in favor of a change here.

Few additional thoughts:
Am I happy that there is only 1 large file? No, but it doesn’t hurt me much either.
Do I see the disadvantages of large file? Yes, but I don’t think they are huge. More like minor annoyances with easy workarounds.
And to turn it around: Do you see the risk of any changes here? Do you see the cost of extra careful code reviews to mitigate the risk?

Strictly technically, we truly believe this is a formatting change. If it was simple to split it up and if it would be low risk and if it would be very easy to review, it might be worth the ‘minor’ improvements mentioned above … but I don’t see that combo happening (not on a noticeable scale in gc.cpp).
On a personal note: I also trust CLR team that if all these three things were true, the refactoring would have happened long time ago.

Note that some of this code goes back beyond .NET Core to the .NET Framework, the “historical reasons” that Zikmund mentions. We can see that the factors preventing change are as much organisational as technical.

Finally I attended a session on Microsoft’s Cognitive Services. Note this was in the “Sponsored solution track”. Microsoft also has a stand here focused on its Cognitive Services.

There is not much Microsoft Platform content at QCon and it seems under-represented, though many of the sessions are applicable to developers on any platform. I am not sure of all the reasons for this; there used to be an Advanced .NET track at QCon. It does reflect some overall development trends as well as the history and evolution of QCon itself. That said, there is a session on SQL Server on Linux so the company is not completely invisible here.

As for the session, it was a reasonable overview of Microsoft’s expanding Cognitive Services APIs, which cover things like image recognition, speech recognition and more. I would have liked more depth and would have preferred to hear from a practitioner, in other words, “we built an application on Cognitive Services and this is what we learned.” I am not altogether clear why the company is pushing this so hard, except that it is a driver for developers to use Azure. I asked about how developers should deal with the problem of uncertainty*, in other words, that Cognitive Services does not deliver absolute results but rather draws conclusions with a confidence score – eg it might be pretty sure that an image contains a human face, fairly sure that it is male, and somewhat confident that the age of the person is mid forties. When the speaker demoed speech recognition it went pretty well except that “Start” was transcribed as “Stop.” This stuff is difficult.
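Dealing with that uncertainty largely comes down to making the confidence threshold an explicit, application-specific decision rather than trusting every result. The sketch below uses an invented response shape purely for illustration; it is not the real Cognitive Services schema.

```python
# Hypothetical image-analysis result; the field names are invented.
result = {
    "faces": [
        {"confidence": 0.96,
         "gender": {"value": "male", "confidence": 0.81},
         "age": {"value": 45, "confidence": 0.55}},
    ]
}

THRESHOLD = 0.8  # application-specific: how sure must we be before acting?

for face in result["faces"]:
    if face["confidence"] < THRESHOLD:
        continue  # too uncertain to treat as a face at all
    gender = face["gender"]["value"] if face["gender"]["confidence"] >= THRESHOLD else "unknown"
    age = face["age"]["value"] if face["age"]["confidence"] >= THRESHOLD else "uncertain"
    print(f"face detected: gender={gender}, age={age}")
```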

Looking forward now to Day Two: Containers, Machine Learning, and more.

*More concisely expressed as “Systems are moving from the deterministic to the probabilistic” by Stephen Whitworth, who is now speaking on Machine Learning.

Reflections on QCon London 2016 – part one

I attended QCon in London last week. This is a software development conference focused on large-scale projects and with a tradition oriented towards Agile methodology. It is always one of the best events I get to attend, partly because it is vendor-neutral (it is organised by InfoQ), and partly because of the way it is structured. The schedule is divided into tracks, such as “Back to Java” or “Architecting for failure”, each of which has a track leader, and the track leader gets to choose who speaks on their track. This means you get a more diverse range of speakers than is typical; you also tend to hear from practitioners or academics rather than product managers or evangelists.


The 2016 event was well up to standard from my perspective – though bear in mind that with 6 tracks on each day I only got to attend a small fraction of the sessions.

This post is just to mention a few highlights, starting with the opening keynote from Adrian Colyer, who specialises in finding interesting IT-related research papers and writing them up on his blog. He seems to enjoy being contrarian and noted, for example, that you might be doing too much software testing – drawing I guess on this post about the art of testing less without sacrificing quality. The takeaway for me is that it is always worth analysing what you do and trying to avoid the point where the cost exceeds the benefit.

Next up was Gavin Stevenson on “love failure” – I wrote this up on the Reg – there is a perhaps obvious point here that until you break something, you don’t know its limitations.

On Monday evening we got a light-hearted (virtual) look at Babbage’s Analytical Engine (1837) which was never built but was interesting as a mechanical computer, and Ada Lovelace’s attempts to write code for it, thanks to John Graham-Cumming and illustrator Sydney Padua (author of The Thrilling Adventures of Lovelace and Babbage).


Tuesday and the BBC’s Stephen Godwin spoke on Microservices powering BBC iPlayer. This was a compelling talk for several reasons. The BBC is hooked on AWS (Amazon Web Services) apparently and stores 21TB daily into S3 (Simple Storage Service). This includes safety copies. iPlayer was rebuilt in 2013, Godwin told us, and the team of 25 developers achieves 34 live deployments per week on average; clearly the DevOps stuff is working here. Godwin advocates genuinely “micro” services. “How big should a microservice be? For us, about 600 Java statements,” he said.

Martin Thompson spoke on the characteristics of a good software engineer, though oddly the statement that has stayed with me is that an ORM (Object-Relational Mapping) “is the wrong abstraction for a database”, something that chimes with me even though I get the value of ORMs like Microsoft’s Entity Framework for rapid development where database performance is non-critical.

Then came another highlight: Google’s Micah Lemonik on Architecting Google Docs. This talk sadly was not recorded; a touch of paranoia from Google? It was fascinating partly from a historical perspective: Lemonik was involved in a small company called 2Web Technologies, which developed an Excel-like engine in 2003-4, and he joined Google (which acquired 2Web) in 2005 to work on Google Sheets. The big story here was how Google Sheets became collaborative, so that more than one person could work on a spreadsheet simultaneously. “Google didn’t like it initially,” said Lemonik. “They thought it was too weird.” The team persisted though, thinking about the editing process as “messages being transferred between collaborators” rather than as file updates; and it worked.

You can actually use today’s version in your own projects, with Google’s Realtime API, provided that you are happy to have your stuff on Google Drive.

I particularly enjoyed Lemonik’s question to the audience. Two people are working on a sheet, and one types “6” into a cell. Then the same person overtypes this with “7”. Then the collaborator overtypes the same cell with “8”. Next, the first person presses Ctrl-z for undo. What should be the result?

The audience split neatly into “6”, “7”, and just a few “8” (the rationale for “8” is that undo should only undo your own changes and not touch those made by others).

Google, incidentally, settled on “6”, maintaining a separate undo stack for each user. But there is no right answer.
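A much-simplified sketch of what a per-user undo stack might look like (my illustration, not Google’s implementation; a real collaborative editor also has to transform operations against concurrent edits, which is ignored here):

```python
class SharedCell:
    """One shared value with a separate undo stack per user (simplified)."""

    def __init__(self, value=None):
        self.value = value
        self.undo_stacks = {}  # user -> values that user's own edits replaced

    def edit(self, user, new_value):
        self.undo_stacks.setdefault(user, []).append(self.value)
        self.value = new_value

    def undo(self, user):
        stack = self.undo_stacks.get(user, [])
        if stack:
            self.value = stack.pop()

# Lemonik's example: one user types 6 then 7, a collaborator types 8,
# then the first user presses undo.
cell = SharedCell()
cell.edit("alice", 6)
cell.edit("alice", 7)
cell.edit("bob", 8)
cell.undo("alice")
print(cell.value)  # 6 – undo restores what alice's own last edit replaced
```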

Lemonik also discussed the problem of consistency when there are large numbers of contributors. A hard problem. “There have to be bounds to the system in order for it to perform well,” he said. “The biggest takeaway for me in building the system is that you just can’t have it all. All of engineering is this trade-off.”


I have more to say about QCon so look out for part two shortly.

What’s on at the QCon London software development conference (and a discount for readers)

The QCon London conference is on in early March (5-7). It is always a conference I look forward to since it is vendor neutral, though with an agile flavour. Although it covers high scale systems it is not the place to go if you think heavyweight Enterprise middleware from a big name vendor will solve your problems. In fact, it was QCon where I heard Martin Fowler and Jim Webber from Thoughtworks expound on Does my Bus look big in this? (what a great title) in which they argue that whatever software need you have, an Enterprise Service Bus is not the best way to meet it. You are not going to hear this kind of disruptive address at vendor-driven events.

So what is on this year? There are 15 tracks, some covering the buzzwords of the day like Big Data Architecture, DevOps, Internet of Things (two different tracks on this) and Bleeding Edge HTML5 and JavaScript (with case studies from Netflix and the Financial Times). Next Gen Cloud intrigues me because it covers multi-cloud services.

Java is almost conspicuous by its absence (though it will no doubt show up throughout) but there is a track on Not Only Java which looks at performance tuning, garbage collection, lambdas and streams in Java 8, and more.

In the wake of Snowden’s revelations, Privacy and Security gets a track to itself, long overdue.

So how does QCon choose its tracks? It’s done by a committee of community experts, InfoQ CEO Floyd Marinescu told me.

Over 15 people came together, facilitated by QCon, and through an intensive multi-week process debated and voted down our tracks until we had these final 15.   We do place some constraints such as a desire to have a certain balance of tracks to cover areas of innovation of interest to different communities such as project managers, architects, engineers as well as operations people.

What really counts of course is the speakers, and this year they include Jafar Husain (technical lead at Netflix), Eva Andreasson who pioneered deterministic garbage collection, Graham Tackley, director of architecture at Guardian News and Media, Erik Meijer who created Microsoft’s LINQ, and Joe Armstrong, co-inventor of Erlang.

If you book with the code “itwriting50” you will get a £50 discount.

Microsoft’s platform nearly invisible at QCon London 2012

QCon London ended yesterday. It was the biggest London QCon yet, with around 1200 developers and a certain amount of room chaos, but still a friendly atmosphere and a great opportunity to catch up with developers, vendors, and industry trends.

Microsoft was near-invisible at QCon. There was a sparsely attended Azure session, mainly I would guess because QCon attendees do not see that Azure has any relevance to them. What does it offer that they cannot get from Amazon EC2, Google App Engine, Joyent or another niche provider, or from their own private clouds?

Mark Rendle at the Azure session did state that Node.js runs better on Windows (and Azure) than on Linux. However he did not have performance figures to hand. A quick search throws up these figures from Node.js inventor Ryan Dahl:

Benchmark                      v0.6.0 (Linux)    v0.6.0 (Windows)
http_simple.js /bytes/1024     6263 r/s          5823 r/s
io.js read                     26.63 mB/s        26.51 mB/s
io.js write                    17.40 mB/s        33.58 mB/s
startup.js                     49.6 ms           52.04 ms

These figures are more “nothing to choose between them” than evidence for better performance, but since 0.6.0 was the first Windows release it is possible that performance has swung in Windows’ favour since then. It is a decent showing for sure, but there are other more important factors when choosing a cloud platform: cost, resiliency, services available and so on. Amazon is charging ahead; why choose Azure?

My sense is that developers presume that Azure is mainly relevant to Microsoft platform businesses hosting Microsoft platform applications; and I suspect that a detailed analysis would bear out that presumption despite the encouraging figures above. That said, Azure seems to me a solid though somewhat expensive offering and one that the company has undersold.

I have focused on Azure because QCon tends to be more about the server than the client (though there was  a good deal of mobile this year), and at enterprise scale. It beats me why Microsoft was not exhibiting there, as the attendees are an influential lot and exactly the target audience, if the company wants to move beyond its home crowd.

I heard little talk of Windows 8 and little talk of Windows Phone 7,  though Nokia sponsored some of the catering and ran a hospitality suite which unfortunately I was not able to attend.

Nor did I get to Tomas Petricek’s talk on asynchronous programming in F#, though functional programming was hot at QCon last year and I would guess he drew a bigger audience than Azure managed.

Microsoft is coming from behind in cloud -  Infrastructure as a Service and/or Platform as a Service – as well as in mobile.

I should add the company is, from what I hear, doing better with its Software as a Service cloud, Office 365; and of course I realise that there are plenty of Microsoft-platform folk who attend other events such as the company’s own BUILD, Tech Ed and so on.

Update:

This is the basis for the claim that node.js runs better on Windows:

IOCP supports Sockets, Pipes, and Regular Files.
That is, Windows has true async kernel file I/O.
(In Unix we have to fake it with a userspace thread pool.)

from Dahl’s presentation on the Node roadmap at NodeConf May 2011.

The most enduring software development techniques revealed at QCon London

I am in London for the QCon event, a vendor-neutral development conference which I have been fortunate to attend regularly over the last few years.


These events tend to have an underlying theme, which reflects the current thinking of developers and software architects. Each year I hear cogent and thoughtful explanations of why this or that approach will enable us to code better and please users more. Each year I also hear cogent and thoughtful explanations of why the fix proposed last year or the year before is actually a prime reason why projects fail.

Way back when it was SOA (Service Oriented Architecture) that was sweeping away the mistakes of the past. Next SOA itself was the mistake of the past and we got REST (Representational State Transfer). This year I am hearing how RPC is making a comeback, or at least not going away, for example because it can be more efficient when you want to transfer as little data as possible across the WAN.

Another example is enterprise Java. Enterprise Java Beans and J2EE were the fix, and then the problem, for scalable distributed applications. Rod Johnson came up with Spring, the lightweight alternative. Now I am hearing how Spring has become bloated and complicated and developers are looking for lightweight alternatives.

Test-driven development (TDD) brings fantastic benefits to software development, making it possible to change and improve your code while defending against the introduction of bugs. Yesterday though Dan North observed that TDD also has a cost, in that you write much more code. It is not uncommon for projects to have more test code than code that is active in production. If you did not write that code, you could be doing other productive work in the time made available. 

Agile methodologies like Scrum were devised to promote or even create communication and agility in software teams. Now every big enterprise vendor says it does Scrum and runs courses, but the result is a long way from the agile (with a small a) original concept.

This year I have heard a lot about over-optimisation, or creating code for situations that in fact never arise. This is the problem to which the solution is YAGNI (You Ain’t Gonna Need It). Since they apply across all the methodologies, I suggest that YAGNI, and its cousin DRY (Don’t Repeat Yourself), and the even older KISS (Keep It Simple Stupid) are the most enduring software development techniques.

That said, even DRY took a beating yesterday. Greg Young in his evening keynote said that rigorous DRY advocates can end up merging code into single blocks where really the procedures were only nearly the same. If your DRY functions are full of edge cases and special conditions, then maybe DRY has been taken to excess.
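A trivial illustration of the trap (my example, not Young’s): merging two operations that are only nearly the same produces one “shared” function whose flags and special cases every caller now has to understand.

```python
# Over-DRY: one function, several flags, interacting special cases.
def format_price(amount, vat=False, trade=False, export=False):
    if export:
        vat = False        # exports are zero-rated
    if trade and not export:
        amount *= 0.9      # trade discount, but not on exports
    if vat:
        amount *= 1.2
    return f"£{amount:.2f}"

# Sometimes clearer: two small functions that merely look similar.
def retail_price(amount):
    return f"£{amount * 1.2:.2f}"   # VAT included

def export_price(amount):
    return f"£{amount:.2f}"         # zero-rated, no discount logic
```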

In the light of the above, I would therefore like to propose the first draft of my first theorem of software development:

There is no development methodology which will not become a burden when embraced rigidly

The other lesson I have learned from multiple QCons is that effective teams and smart developers count for much, much more than any specific tool or language or approach. There is no substitute.

Sold out QCon kicks off in London: big data, mobile, cloud, HTML 5

QCon London has just started in London, and I’m interested to see that it is both bigger than last year and also sold out. I should not be surprised, because it is usually the best conference I attend all year, being vendor-neutral (though with an Agile bias), wide-ranging and always thought-provoking.


A few more observations. One reason I attend is to watch industry trends, which are meaningful here because the agenda is driven by what currently concerns developers and software architects. Interesting then to see an entire track on cross-platform mobile, though one that is largely focused on HTML 5. In fact, mobile and cloud between them dominate here, with other tracks covering cloud architecture, big data, highly available systems, platform as a service, HTML 5 and JavaScript and more.

I also noticed that Adobe’s Christophe Coenraets is here this year as he was in 2011 – only this year he is talking not about Flex or AIR, but HTML, JavaScript and PhoneGap.

Guardian.co.uk enthuses about MongoDB, plans to ditch Oracle

The Guardian’s Mat Wall has spoken here at QCon London about why it is migrating its web site away from Oracle and towards MongoDB.

He also said there are moves towards cloud hosting, I think on Amazon’s hosted infrastructure, and that its own data centre can be used as a backup in case of cloud failure – an idea which makes some sense to me.

So what’s wrong with Oracle? The problem is the tight relationship between updates to the code that runs the site, and the Oracle database schema. Significant code updates tend to require schema updates too, which means pausing content updates while this takes place. Journalists on a major news site hate these pauses.

MongoDB by contrast is not a relational database. Rather, it stores documents in JSON (JavaScript Object Notation) format. This means that documents with new attributes can be added to the database at runtime.
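In practice this means a new deployment can start writing documents with extra attributes straight away, with no schema migration and no pause in publishing. A minimal sketch using the Python driver (pymongo) – not the Guardian’s code, and the field names are invented:

```python
from pymongo import MongoClient

# Connection details are placeholders, not the Guardian's setup.
articles = MongoClient("mongodb://localhost:27017")["cms"]["articles"]

# Content written by the current version of the site...
articles.insert_one({"headline": "Budget statement", "body": "..."})

# ...and content written after a new deploy, with an attribute ("liveblog",
# invented here) that earlier documents simply do not have. No ALTER TABLE,
# no schema migration, no pause for the journalists.
articles.insert_one({
    "headline": "Election night",
    "body": "...",
    "liveblog": True,
    "updates": [{"time": "22:00", "text": "Polls close"}],
})

for doc in articles.find({"liveblog": True}):
    print(doc["headline"])
```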

Although this was the main motivation for change, the Guardian discovered other benefits. Developer productivity is significantly better with MongoDB and they are enjoying its API.

Currently both MongoDB and Oracle are in use. The Guardian has written its own API layer to wrap database access and handle the complexity of having two radically different data stores.

I enjoyed this talk, partly thanks to Wall’s clear presentation, and partly because I was glad to hear solid pragmatic reasons for moving to a NoSQL data store.

QCon London kicks off with call to rediscover Agile, use open source

I’m at the QCon developer conference in London – one of my favourite developer conferences of the year because of its breadth and energy.

The opening keynote was from Craig Larman who spoke on doing lean and agile development – in particular, the Scrum methodology – with large multi-site teams. He means sizeable product groups of 500-1500 persons, though he also remarked that development on this scale is really a bad idea and that a team of 10 smart folk is much better.

Still, I guess large teams are an inevitability, and Larman has written books on the subject. I am not going to summarise the talk exactly, interesting though it was, but I am going to pick out a couple of asides which interested me.

Agile methodology is really about promoting communication; and one of Larman’s themes is that if you do what seems obvious, that is to break down a project into components and give one to each small team, then you end up with numerous teams that do not communicate well with each other. Agile becomes something you do in name only.

Larman spent a bit of time on which collaboration tools to use. One of his points was not to use any commercial tool that describes itself as being for agile project management or similar. I can think of several. He says these tools are just the commercial tool vendors repackaging their old non-agile tools. Whiteboards, spreadsheets on Google docs, wikis and other simple tools are his recommendation. For source code management he suggests Subversion, Git or other open source solutions. Never use Rational Clearcase, he added, it always causes problems.

In fact, he went on to say that any commercial tools cause problems when multi-site development extends to teams in developing countries. They cannot afford the licences, he says, so avoid them.

It seems to me that the common theme here is how easily agile development intentions become non-agile in practice, especially in these large project groups.