Category Archives: agile

Sold out QCon kicks off in London: big data, mobile, cloud, HTML 5

QCon London has just started, and I’m interested to see that it is both bigger than last year and sold out. I should not be surprised, because it is usually the best conference I attend all year: vendor-neutral (though with an Agile bias), wide-ranging and always thought-provoking.


A few more observations. One reason I attend is to watch industry trends, which are meaningful here because the agenda is driven by what currently concerns developers and software architects. Interesting, then, to see an entire track on cross-platform mobile, though one largely focused on HTML 5. In fact, mobile and cloud between them dominate here, with other tracks covering cloud architecture, big data, highly available systems, platform as a service, HTML 5 and JavaScript, and more.

I also noticed that Adobe’s Christophe Coenraets is here this year as he was in 2011 – only this year he is talking not about Flex or AIR, but about HTML, JavaScript and PhoneGap.

Review: Continuous Delivery by Jez Humble and David Farley

I like this book. I know I like it because I find myself wanting to quote from it frequently. It is a book that almost every software developer should read, even if you disagree with parts of it – which is likely, because it is opinionated. The authors always give reasons for their opinions though, which means that if you disagree, you need to articulate why that is; or they may even change your mind. In consequence you find yourself learning as you read.

The authors are software theoreticians, but they are also practitioners; in fact they are practitioners first and theoreticians afterwards. This means they are pragmatic rather than dogmatic. Here is an example. Chapter 13 discusses software dependencies, and page 372 covers circular dependencies, “probably the nastiest dependency problem.” A circular dependency is when component A depends on component B, and component B also depends on component A.
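To make the problem concrete, here is a minimal sketch of my own – in Java, with invented package and class names – of a circular dependency between two components. Neither component can be built first, because each needs the other’s compiled output:

    // customers/Customer.java – component "customers" depends on "billing"
    package customers;

    import java.util.List;
    import billing.Invoice;

    public class Customer {
        private final List<Invoice> invoices;   // needs billing to compile
        public Customer(List<Invoice> invoices) { this.invoices = invoices; }
    }

    // billing/Invoice.java – and component "billing" depends back on "customers"
    package billing;

    import customers.Customer;

    public class Invoice {
        private final Customer owner;           // needs customers to compile
        public Invoice(Customer owner) { this.owner = owner; }
    }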

A bad idea; but the authors write:

Surprisingly, we have seen successful projects with circular dependencies in their build systems. You may argue with our definition of “successful” in this case, but there was working code in production, which is enough for us.

As an aside, this kind of dry humour is characteristic, as also evident in remarks like this:

We are certain that, occasionally, manually intensive releases work smoothly. We may well have been unlucky in having mostly seen the bad ones.

The subject of the book is Continuous Delivery. So what is that? Well, if Continuous Integration is about ensuring that your software always builds, then Continuous Delivery is about ensuring that your software always deploys. The final form, as it were, of Continuous Delivery is Continuous Deployment, where you are so confident of your automated build and deploy process that any checked-in code that passes its tests can be deployed immediately. I was confused about the difference between Continuous Delivery and Continuous Deployment so I wrote a post about it; it turns out that there is not much difference.
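As a rough illustration of what that means in practice – my sketch, not the authors’ code, and the build commands are invented – a deployment pipeline runs every check-in through the same automated stages, and a change that survives them all is deployable by definition:

    import java.util.List;
    import java.util.function.BooleanSupplier;

    // Minimal sketch of a deployment pipeline: a red stage stops the line.
    public class Pipeline {
        record Stage(String name, BooleanSupplier run) {}

        public static void main(String[] args) {
            List<Stage> stages = List.of(
                new Stage("compile",          () -> exec("./gradlew assemble")),
                new Stage("unit tests",       () -> exec("./gradlew test")),
                new Stage("acceptance tests", () -> exec("./gradlew acceptanceTest")),
                new Stage("deploy",           () -> exec("./deploy.sh staging")));
            for (Stage stage : stages) {
                if (!stage.run().getAsBoolean()) {
                    System.out.println("Pipeline stopped at: " + stage.name());
                    return;                     // nothing later runs, including deploy
                }
            }
            System.out.println("Green: this build is deployable.");
        }

        // Placeholder: a real pipeline would shell out to the build tool here.
        static boolean exec(String command) {
            System.out.println("  $ " + command);
            return true;
        }
    }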

The principle behind Continuous Delivery is that software is not done until it is released. If the release process is long, arduous and infrequent, then you are not really doing Agile development. A section of chapter 1 is devoted to release anti-patterns, and these form an excellent rationale for taking an interest in Continuous Delivery.

My guess is that anyone who has been involved in professional software development will wince a little while reading through these anti-patterns, thinking “that is what we used to do” or even “that is what we do”.

That said, Humble and Farley do not fall into the trap of merely writing about how not to do it. Rather, they address in some detail the kinds of problems you will face if you decide to embrace the Continuous Delivery methodology. The key ingredient in Continuous Delivery is that pretty much everything must be automated, otherwise it is too difficult to do. But how do you automate something like Acceptance Testing? That is the subject of chapter 8. How do you automate a deployment at all? That is the subject of chapter 6. The authors are not on a higher plane than the rest of us, and much of the advice is straightforward, even at the level of “Always use relative paths,” which is a tip in chapter 6.
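The relative-paths tip is easy to illustrate with a sketch (mine, with invented file names): resolve everything from the deployment root the artifact is unpacked into, so the same binary works unchanged in every environment:

    import java.nio.file.Path;

    // Sketch: no absolute paths baked in; everything hangs off the deploy root.
    public class AppPaths {
        private final Path root;

        public AppPaths(Path deploymentRoot) { this.root = deploymentRoot; }

        public Path config() { return root.resolve("conf/app.properties"); }
        public Path logs()   { return root.resolve("logs"); }

        public static void main(String[] args) {
            AppPaths paths = new AppPaths(Path.of(args.length > 0 ? args[0] : "."));
            System.out.println("Config lives at: " + paths.config());
        }
    }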

The authors talk a lot about testing, as you would expect, but there is also extensive discussion of software configuration management, describing different approaches such as centralised and distributed version control and even specific tools. The chapter on Advanced Version Control is a particularly good read. Humble and Farley articulate the point that branching and merging is antithetical to Continuous Integration and therefore Continuous Delivery:

If different members of the team are working on separate branches or streams then by definition they’re not continuously integrating (p 390)

Does this mean branches are a bad idea? Not always, say the authors, but they also state:

Our strong recommendation is to create long-lived branches only on release … new work is always committed to the trunk (p 392)

The reason is not only to enable Continuous Integration, but also because merging is complex and error-prone.

Software configuration management is not easy, but it is a relatively mature aspect of software development. This is less true of what you might call infrastructure configuration management; yet infrastructure dependencies such as versions and configurations of the operating system or web server are a common reason for deployment failures. Several chapters discuss this problem in detail. In principle, the authors say:

The desired state of your infrastructure should be specified through version-controlled configuration.

This leads to some thoughtful discussion of how to achieve this.
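To give a flavour of the desired-state idea – again my sketch, not an excerpt from the book; real tools such as Puppet or CFEngine do this properly – a version-controlled specification is repeatedly compared with the machine’s actual state, and the machine is converged towards it:

    import java.util.Map;

    // Sketch: a declarative, version-controlled spec; a tool converges the
    // machine towards it rather than scripting imperative install steps.
    public class DesiredState {
        public static void main(String[] args) {
            Map<String, String> desired = Map.of(
                "package:openjdk",    "installed",
                "service:httpd",      "running",
                "file:/etc/app.conf", "checksum:…");   // content is in version control
            desired.forEach((resource, state) -> {
                if (!state.equals(actualStateOf(resource))) {
                    converge(resource, state);
                }
            });
        }

        static String actualStateOf(String resource) { return "unknown"; } // stub
        static void converge(String resource, String state) {
            System.out.println("Converging " + resource + " -> " + state);
        }
    }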

Another theme, as you would expect, is that development and operations people need to be working together and not in isolation. To some extent this is a DevOps book.

A great book then; but there are flaws. One is that there is some repetition because of the way the book is organised. This is good if you are inclined to read chapters in isolation, but not so good if you are reading straight through. In practice I did not find it too annoying, but it is there.

Another issue is that while the authors do cover Microsoft .NET to some extent, this is usually in the form of a brief mention and there is more focus on Java. This may be in part because of their preference for open source. It is still a good read for .NET developers, because the principles are platform-agnostic, but Microsoft platform developers may find it irritating at times. Team Foundation Server, say the authors, is “essentially an inferior knock-off of Perforce” (p 386).

The discussion of specific tools is a strength but also a weakness, in that the tools will change over time and the book will become dated.

This is not the last word on Continuous Delivery, but it is an enjoyable and thought-provoking read. Recommended.


QCon London kicks off with call to rediscover Agile, use open source

I’m at the QCon developer conference in London – one of my favourite developer conferences of the year because of its breadth and energy.

The opening keynote was from Craig Larman, who spoke on doing lean and agile development – in particular, the Scrum methodology – with large multi-site teams. He means sizeable product groups of 500-1500 people, though he also remarked that development on this scale is really a bad idea and that a team of 10 smart folk is much better.

Still, I guess large teams are an inevitability, and Larman has written books on the subject. I am not going to summarise the talk, interesting though it was, but I will pick out a couple of asides that caught my attention.

Agile methodology is really about promoting communication; and one of Larman’s themes is that if you do what seems obvious – break a project down into components and give one to each small team – you end up with numerous teams that do not communicate well with each other. Agile becomes something you do in name only.

Larman spent a bit of time on which collaboration tools to use. One of his points was not to use any commercial tool that describes itself as being for agile project management or similar – I can think of several. These tools, he says, are just the commercial vendors repackaging their old non-agile tools. Whiteboards, spreadsheets on Google Docs, wikis and other simple tools are his recommendation. For source code management he suggests Subversion, Git or other open source solutions. Never use Rational ClearCase, he added; it always causes problems.

In fact, he went on to say that any commercial tool causes problems when multi-site development extends to teams in developing countries. They cannot afford the licences, he says, so avoid them.

It seems to me that the common theme here is how easily agile development intentions become non-agile in practice, especially in these large project groups.

QCon London 2010 report: fix your code, adopt simplicity, cool .NET things

I’m just back from QCon London, a software development conference with an agile flavour that I enjoy because it is not vendor-specific. Conferences like this are energising; they make you re-examine what you are doing and may kick you into a better place. Here’s what I noticed this year.

Robert C Martin from Object Mentor gave the opening keynote, on software craftsmanship. His point is that code should not just work; it should be good. He is delightfully opinionated. Certification, he says, provides value only to certification bodies. If you want to know whether someone has the skills you want, talk to them.

Martin also came up with a bunch of tips for writing good code, such as never having more than two arguments to a function, and never a boolean argument. I’ve written these up elsewhere.


Next I looked into the non-relational database track and heard Geir Magnusson explain why he needed Project Voldemort, a distributed key-value storage system, to get his ecommerce site to scale. Non-relational storage, or NoSQL, is a big theme these days; database managers like CouchDB and MongoDB are getting a lot of attention. I would like to have spent more time on this track, but there was too much else on – a perennial problem with QCon.
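For anyone new to the idea, the appeal of the key-value model is its deliberately narrow interface. Here is a minimal in-memory sketch of my own; Voldemort itself adds partitioning, replication and versioning behind a broadly similar surface:

    import java.util.Optional;
    import java.util.concurrent.ConcurrentHashMap;

    // Sketch of the key-value model: no schema, no joins, just get and put.
    public class KeyValueStore<K, V> {
        private final ConcurrentHashMap<K, V> data = new ConcurrentHashMap<>();

        public Optional<V> get(K key)   { return Optional.ofNullable(data.get(key)); }
        public void put(K key, V value) { data.put(key, value); }
        public void delete(K key)       { data.remove(key); }

        public static void main(String[] args) {
            KeyValueStore<String, String> store = new KeyValueStore<>();
            store.put("cart:42", "{\"items\": 3}");
            System.out.println(store.get("cart:42").orElse("miss"));
        }
    }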

I therefore headed for the functional programming track, where Don Syme from Microsoft Research gave an inspiring talk on F#, Microsoft’s new functional language. He showed a series of hilarious slides with F# code alongside its equivalent in C#. Here is an example:

[Slide: a few lines of F# alongside the much longer equivalent C#]

The white panel is the F# code; the rest of the slide is C#.

Seeing a slide like this makes you wonder why we use C# at all, though of course Syme chose tasks like asynchronous IO and concurrent programming for which F# is well suited. Syme also observed that F# is ideal for working with immutable data, which is common in internet programming. I grabbed a copy of Programming F# for further reading.

Over on the Architecture track, Andres Kütt spoke on Five Years as a Skype Architect. His main theme: most of a software architect’s job is communication, not poring over diagrams and devising code structures. This is a consistent theme at QCon and in the Agile movement; get the communication right and all else follows. I was also interested in the technical side though. Skype started with SOAP but switched to a REST model for web services. Kütt also told us about the languages Skype uses: PHP for the web site; C or C++ for heavy lifting and peer-to-peer networking; Delphi for the Windows interface; and PostgreSQL for the database.

Day two of QCon was even better. I’ve written up Martin Fowler’s talk on the ethics of software development in a separate post. Following that, I heard Canonical’s Simon Wardley speak about cloud computing. Canonical is making a big push for Ubuntu’s cloud package, available both for private use and hosted on Amazon’s servers; attendees at the QCon CloudCamp later on were given a lavish, pointless cardboard box with promotional details. To be fair to Wardley, he did not talk much about Ubuntu’s cloud solution, though he did make the point that open source makes transitions between providers much cheaper.

Wardley’s most striking point, repeated perhaps too many times, is that we have no choice about whether to adopt cloud computing, since we will be too much disadvantaged if we reject it. He says it is now more a management issue than a technical one.

Dan North from ThoughtWorks gave a funny and excellent session on simplicity in architecture. He used pseudo-biblical language to describe the progress of software architecture for distributed systems, finishing with

On the seventh day God created REST

Very good; but his serious point is that the shortest, simplest route to solving a problem is often the best one, and that we constantly make the mistake of using over-generalised solutions which add a counter-productive burden of complexity.

North talked about techniques for lateral thinking, finding solutions from which we are mentally blocked, by chunking up, which means merging details into bigger ideas, ending up with “what is this thing for anyway”; and chunking down, the reverse process, which breaks a problem down into blocks small enough to comprehend. Another idea is to articulate a problem to a colleague, which exercises different parts of the brain and often stimulates a solution – one of the reasons pair programming can be effective.

A common mistake, he said, is to keep using the same old products or systems or architectures because we always do, or because the organisation is already heavily invested in it, meaning that better alternatives do not get considered. He also talked about simple tools: a whiteboard rather than a CASE tool, for example.

Much of North’s talk was a variant of YAGNI – you ain’t gonna need it – an agile principle of not implementing something until/unless you actually need it.

I’d like to put this together with something from later in the day, a talk on cool things in the .NET platform. One of these was Guerrilla SOA, though it is not really specific to .NET. To get the idea, read this blog post by Jim Webber, another from the ThoughtWorks team (yes, there are a lot of them at QCon). Here are a couple of quotes:

Prior to our first project starting, that client had already undertaken some analysis of their future architecture (which needs scalability of 1 billion transactions per month) using a blue-chip consultancy. The conclusion from that consultancy was to deploy a bus to patch together the existing systems, and everything else would then come together. The upfront cost of the middleware was around £10 million. Not big money in the grand scheme of things, but this £10 million didn’t provide a working solution, it was just the first step in the process that would some day, perhaps, deliver value back to the business, with little empirical data to back up that assertion.

My (small) team … took the time to understand how to incrementally alter the enterprise architecture to release value early, and we proposed doing this using commodity HTTP servers at £0 cost for middleware. Importantly we backed up our architectural approach with numbers: we measured the throughput and latency characteristics of a representative spike (a piece of code used to answer a question) through our high level design, and showed that both HTTP and our chosen Web server were suitable for the volumes of traffic that the system would have to support … We performance tested the solution every single day to ensure that we would always be able to meet the SLAs imposed on us by the business. We were able to do that because we were not tightly coupled to some overarching middleware, and as a consequence we delivered our first service quickly and had great confidence in its ability to handle large loads. With middleware in the mix, we wouldn’t have been so successful at rapidly validating our service’s performance. Our performance testing would have been hampered by intricate installations, licensing, ops and admin, difficulties in starting from a clean state, to name but a few issues … The last I heard a few weeks back, the system as a whole was dealing with several hundred percent more transactions per second than before we started. But what’s particularly interesting, coming back to the cost of people versus cost of middleware argument, is this: we spent nothing on middleware. Instead we spent around £1 million on people, which compares favourably to the £10 million up front gamble originally proposed.

This strikes me as an example of the kind of approach North advocates.
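Webber’s “representative spike” is worth making concrete. Here is a minimal sketch of my own of the sort of daily measurement he describes – the URL and request count are invented, and a real test would also run concurrent clients and record latency percentiles:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Sketch: back architectural claims with numbers, measured against a
    // commodity HTTP server rather than taken on trust from a vendor.
    public class LatencySpike {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://localhost:8080/orders/123")).build();
            int requests = 1000;
            long start = System.nanoTime();
            for (int i = 0; i < requests; i++) {
                client.send(request, HttpResponse.BodyHandlers.discarding());
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("%d requests in %d ms (%.1f req/s)%n",
                    requests, elapsedMs, requests * 1000.0 / elapsedMs);
        }
    }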

You may be wondering what other cool .NET things were presented. This session, given by Amanda Laucher and Josh Graham, was called State of the Art .NET. They offered a dozen items that they consider .NET folk should be using or learning about:

  1. F# (again)
  2. M – modelling/DSL language
  3. Boo – static Python for .NET
  4. NUnit – unit testing. Little regard for Microsoft’s test framework in Team System, which is seen as a wasted and inferior effort.
  5. RhinoMocks – mocking library
  6. Moq – another mocking library
  7. NHibernate – object-relational mapping
  8. Windsor – dependency injection, part of Castle project. Controversial; some attendees thought it too complex.
  9. NVelocity – .NET template engine
  10. Guerrilla SOA – see above
  11. Azure – Microsoft’s cloud platform – surprisingly good thanks to David Cutler’s involvement, we were told
  12. MEF – Managed Extensibility Framework as found in Visual Studio 2010, won high praise from those who have tried it

That was my last session (I missed Friday), though I did attend the first part of CloudCamp, an unconference for cloud early adopters. I am not sure there is much point in these now. The cloud is no longer subversive, no longer the next new thing; all the big enterprise vendors are onto it. Look at the CloudCamp sponsor list if you doubt me. There are of course still plenty of issues to talk about, but maybe not like this; I stayed for the first hour but it was dull.

For more on QCon you might also want to read back through my Twitter feed or search the entire #qcon tag for what everyone else thought.

Technology trends: Silverlight, Flex little use says Thoughtworks as it Goes Google

Today Martin Fowler at Thoughtworks tweeted a link to the just-published Thoughtworks Technology Radar [pdf] paper, which aims to “help decision makers understand emerging technologies and trends that affect the market today”.

It is a good read, as you would expect from Thoughtworks, a software development company with a bias towards Agile methodology and a formidable reputation.

The authors divide technology into four segments, from Hold – which means steer clear for the time being – to Adopt, ready for prime time. In between are Assess and Trial.

I was interested to see that Thoughtworks is ready to stop supporting IE6 and that ASP.NET MVC is regarded as ready to use now. So is Apple iPhone as a client platform, with Android not far behind (Trial).

Thoughtworks is also now contemplating Java language end of life (Assess), but remains enthusiastic about the JVM as a platform (Adopt), and about JavaScript as a first class language (also Adopt). C# 4.0 wins praise for its new dynamic features and its pace of development in general.

Losers? I was struck by how cool Thoughtworks is towards Rich Internet Applications (Adobe Flash and Microsoft Silverlight):

Our position on Rich Internet Applications has changed over the past year. Experience has shown that platforms such as Silverlight, Flex and JavaFX may be useful for rich visualizations of data but provide few benefits over simpler web applications.

The team has even less interest in Microsoft’s Internet Explorer – even IE8 is a concern with regard to web standards – whereas Firefox lies at the heart of the Adopt bullet.

In the tools area, Thoughtworks is moving away from Subversion and towards distributed version control systems (Git, Mercurial).

Finally, Thoughtworks is Going Google:

At the start of October, ThoughtWorks became a customer of Google Apps. Although we have heard a wide range of opinions about the user experience offered by Google Mail, Calendar and Documents, the general consensus is that our largely consultant workforce is happy with the move. The next step that we as a company are looking to embrace is Google as a corporate platform beyond the standard Google Apps; in particular we are evaluating the use of Google App Engine for a number of internal systems initiatives.

A thought-provoking paper which makes more sense to me than the innumerable Gartner Magic Quadrants; I’d encourage you to read the whole paper (only 8 pages) and not to be content with my highlights.

Adobe’s chameleon Flash shows its enterprise colours

Duane Nickull is Senior Technical Evangelist at Adobe and co-author of Web 2.0 Architectures, which I reviewed recently. He is also Duane Chaos of grunge band 22nd Century, and entertained us at the Adobe MAX party last night in Los Angeles.

Duane Chaos at Adobe MAX bash in LA

It’s appropriate that he works for Adobe, whose Flash runtime has similarly chameleon-like characteristics. Most of the time it is delivering annoying ads, games or silly videos; but it also turns up as a flexible cross-platform client runtime for Enterprise applications.

We saw this demonstrated yesterday in an excellent session on scaling Flex for a large trading application, given by the developers of Morgan Stanley’s Matrix application, about which I have written before. This session was far more informative than the earlier online briefing, and a fascinating case study in how to create Enterprise-grade software.

Matrix was built by a team of around 30 Flex developers over a period of between 18 months and two years. It uses a REST-based service layer which talks to a variety of Java and .NET back-end servers – we didn’t hear much about these – and delivers XML to the Flex client. The team did not use the Flash-optimised AMF protocol because the app uses Lightstreamer, which did not support it at the time, though we were told that AMF would be advantageous and may be used in future. LiveCycle Data Services were ruled out because of lack of support for edge server deployment; again, this has apparently been fixed in the latest LiveCycle, so migrating in that direction is possible.

Matrix uses the Cairngorm 3 architecture, which specifies best-practice design patterns for Flex, implemented using the Parsley Application Framework. The application is modular, and we heard a lot about how rigorous module encapsulation makes a large application like this – 600,000 lines of Flex code – manageable, reliable, flexible and testable. One module cannot access the implementation details of another, and a message bus handles communication.
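The encapsulation-plus-message-bus idea is easy to sketch. This is my own illustration, in Java rather than ActionScript and with invented topic names, not Matrix code; the point is that modules publish and subscribe without ever referencing each other’s implementations:

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.Consumer;

    // Sketch of a message bus: modules communicate by topic, never directly.
    public class MessageBus {
        private final Map<String, List<Consumer<Object>>> subscribers = new ConcurrentHashMap<>();

        public void subscribe(String topic, Consumer<Object> handler) {
            subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
        }

        public void publish(String topic, Object payload) {
            subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
        }

        public static void main(String[] args) {
            MessageBus bus = new MessageBus();
            // The "blotter" module reacts to trades without knowing who sent them.
            bus.subscribe("trade.executed", t -> System.out.println("Blotter shows: " + t));
            // The "orders" module publishes without knowing who is listening.
            bus.publish("trade.executed", "AAPL x100 @ 192.30");
        }
    }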

I was also impressed by the attention given to performance. Another advantage of using modules is that they are loaded on demand, reducing the load time and memory footprint. Each module is profiled separately. The team also found that a big factor in Flex performance is efficiency in managing redraw regions – apparently Flash can easily be sloppy about this and redraw regions that have not actually changed. The team patched the UIMovieClip component to overcome problems in this area.

A model-view-controller architecture is used for the user interface, and this enables better testability. The team uses continuous integration to maintain quality.
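Why does model-view-controller help testability? Because the logic behind the screen can be exercised with no real UI present. A tiny sketch of mine, again in Java with an invented domain:

    // The view is an interface, so a test can substitute a fake in one line.
    interface PositionView { void showTotal(double value); }

    class PositionController {
        private final PositionView view;
        PositionController(PositionView view) { this.view = view; }

        void positionChanged(double quantity, double price) {
            view.showTotal(quantity * price);   // UI logic under test, no widgets
        }
    }

    public class PositionControllerTest {
        public static void main(String[] args) {
            final double[] shown = new double[1];
            PositionController controller =
                new PositionController(total -> shown[0] = total);
            controller.positionChanged(100, 192.30);
            System.out.println(Math.abs(shown[0] - 19230.0) < 1e-9 ? "pass" : "fail");
        }
    }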

According to the session presenters, the result is an application that has the high performance required of a financial trading application, and can run for extended periods without issues.

Although I had the impression that developing Matrix has been bleeding edge at times, with the team using beta software to get access to new features, there was also evidence that Adobe was responding to issues and using this as an opportunity to improve its platform.

This makes a great case study for anyone sceptical about whether the Flash runtime is really capable of powering Enterprise clients – and indeed for any Flex developer.

Beck on Agile: it’s all about the team

Kent Beck is really a relationship consultant – or should that be counsellor? This is not a bad thing. Beck gave a keynote this morning here at QCon and talked a bit about techie topics like frequent deployment (he claims that Flickr deploys every half an hour) and creating more tests more often, but the main focus of his talk was relationships within the development team, and between the team and the business people (if they regard themselves as separate).

Beck says that the ubiquity of computing is changing the typical characteristics of a programmer. When only geeks had computers, programmers were inevitably geeky – and for whatever reason, that often meant something of a social misfit. Today everyone grows up with computers, which he says makes programming more accessible to non-geeks, who have better social skills.

Reflecting on this, I’m not quite convinced. Yes, everyone grows up with computers, but few have any inclination to understand how they work. A nation of car-drivers does not make a nation of engineers.

Still, that doesn’t affect his main point, which is that characteristics like trustworthiness, transparency, honesty, accountability and the ability to get on well with others are critical to successful development:

I focus on what developers can do to have better social skills and be better business partners.

In an aside on accountability, Beck makes a point about Windows and the “beginning of the end of the Microsoft monopoly.” He says that people are realising that they don’t have to put up with computers that are unreliable or require frequent restarts:

How many hours are spent worldwide waiting for Windows to restart, do the maths. Software needs to be effective and needs to work; increasingly there are alternatives.

Windows can work pretty well in the right circumstances; but it’s a fair point nonetheless. I recall the effort it took to set up a laptop recently. Microsoft’s fault, or third-party problems? Both; but the user doesn’t care whose fault it is – they just want a better experience.

Incidentally, the team theme came up again when Peter Goodliffe spoke on good and bad application design. He observed that bad design is damaging to teams; uncertainty about what the code does or where new code should go stresses relationships, and working with a bad design damages morale. My reflection was that the team is primary, not the design. A bad team will never come up with a good design. A good team could still find itself working with a bad design though, so focus on design is never wasted.