Tag Archives: agile

Is Ron Jeffries right about the shortcomings of Agile?

A post from InfoQ alerted me to this post by Agile Manifesto signatory Ron Jeffries with the rather extreme title “Developers should abandon Agile”.

If you read the post, you discover that what Jeffries really objects to is the assimilation of Agile methodology into the old order of enterprise software development, complete with expensive consultancy, expensive software that claims to manage Agile for you, and the usual top-down management.

All this goes to show that it is possible to do Agile badly; or more precisely, to adopt something that you call Agile but in reality is not. Jeffries concludes:

Other than perhaps a self-chosen orientation to the ideas of Extreme Programming — as an idea space rather than a method — I really am coming to think that software developers of all stripes should have no adherence to any “Agile” method of any kind. As those methods manifest on the ground, they are far too commonly the enemy of good software development rather than its friend.

However, the values and principles of the Manifesto for Agile Software Development still offer the best way I know to build software, and based on my long and varied experience, I’d follow those values and principles no matter what method the larger organization used.

I enjoyed a discussion on the subject of Agile with some of the editors and writers at InfoQ during the last London QCon event. Why is it, I asked, that Agile is no longer at the forefront of QCon, when a few years back it was at the heart of these events?

The answer, broadly, was that the key concepts behind Agile are now taken for granted so that there are more interesting things to discuss.

While this makes sense, it is also true (as Jeffries observes) that large organizations will tend to absorb these ideas in name only, and continue with dark methods if that is in their culture.

The core ideas in Extreme Programming are (it seems to me) sound. Working in small chunks, forming a team that includes the customer, releasing frequently and delivering tangible benefits, automated tests and continuous refactoring, planning future releases as you go rather than in one all-encompassing plan at the beginning of a project: these are fantastic principles, and revolutionary when you first come across them. See here for Jeffries’ account of what Extreme Programming is.

These ideas have everything to do with how the team works and little to do with specific tools (though it is obvious that things like a test framework, DevOps strategy and so on are needed).

Equally, you can have all the best tools but if the team is not functioning as envisaged, the methodology will fail. This is why software development methodology and the psychology of human relationships are intimately linked.

Real change is hard, and it is easy to slip back into bad practices, which is why we need to rediscover Agile, or something like it, repeatedly. Maybe the Agile word itself is not so helpful now; but the ideas are as strong as ever.

Microsoft and mediocrity in programming

A post by Ahmet Alp Balkan on working as a developer at Microsoft has stimulated much discussion. Balkan says he joined Microsoft 8 months ago (or two years ago if you count when he started as an intern) and tells a depressing tale (couched in odd language) of poor programming practice. Specifically:

  • Lack of documentation and communication. “There are certain people, if they got hit by a bus, nobody can pick up their work or code.”
  • Inability to improve the codebase. “Nobody will appreciate you for fixing styling or architectural issues in their core, in fact they may get offended.”
  • Lack of enthusiasm. “Writing better code is not a priority for the most”
  • Lack of productivity. “I spend most of my time trying to figure out how others’ uncommented/undocumented code work, debugging strange things and attending daily meetings.”
  • Lack of contribution to the community. “Everybody loves finding Stack Overflow answers on search results, but nobody contributes those answers.”
  • Lack of awareness of the competition. “No one I met in Windows Azure team heard about Heroku or Rackspace.”
  • Working by the book. “Nobody cares what sort of mess you created. As long as that functionality is ready, it is okay and can always be fixed later.”
  • Clipboard inheritance. “I’ve seen source files copy pasted across projects. As long as it gets shit done (described above) no one cares if you produced unmaintainable code.”
  • Using old tools. “Almost 90% of my colleagues use older versions of Office, Windows, Visual Studio and .NET Framework.”
  • Crippling management hierarchy. “At the end, you are working for your manager’s and their managers’ paychecks.”

There are a couple of points to emphasize. This is one person in one team within a very large corporation, and should not be taken as descriptive of Microsoft programming culture as a whole. Balkan’s team is in “the test org”, he says, and not making product decisions. Further, many commenters observe that they have seen similar practices at other organisations.

Nevertheless, some of the points chime with other things I have seen. Take this post by Ian Smith, formerly a Microsoft-platform developer, on trying to buy a Surface Pro at Microsoft’s online store. From what he describes, the software behind the store is of dreadful quality. Currently, there is a broken image link on the home page.


This is not how you beat the iPad.

Another piece of evidence is in the bundled apps for Windows 8. The more I have reflected on this, the more I feel that supplying poor apps with Windows 8 was one of the worst launch mistakes. Apps like Mail, Calendar and Contacts on the Metro-style side have the look of waterfall development (though I have no inside knowledge of this). They look like what you would get from having a series of meetings about what the apps should do, and handing the specification over to a development team. They just about do the job, but without flair, without the benefit of an iterative cycle of improvements based on real user experience.

When the Mail app was launched, it lacked the ability to see the URL behind a hyperlink before tapping it, making phishing attempts hard to spot. This has since been fixed in an update, but how did that slip through? Details matter.

A lot is known about how to deliver high quality, secure and robust applications. Microsoft itself has contributed excellent insights, in books like Steve McConnell’s Code Complete and Michael Howard’s Writing Secure Code. The Agile movement has shown the importance of iterative development, and strong communication between all project stakeholders. Departing from these principles is almost always a mistake.

The WinRT platform needed a start-up culture. “We’re up against iPad and Android, we have to do something special.” Microsoft can do this; in fact, Windows Phone 7 demonstrated some of that in its refreshing new user interface (though the 2010 launch was botched in other ways).

Another piece of evidence: when I open a Word document from the SkyDrive client and work on it for a while, typing starts to slow down and I have to save the document locally in order to continue. I am not alone in experiencing this bug. Something is broken in the way Office talks to SkyDrive. It has been that way for many months. This is not how you beat Dropbox.

In other words, I do think Microsoft has a problem, though equally I am sure it does not apply everywhere. Look, for example, at Hyper-V and how that team has gone all-out to compete with VMware and delivered strong releases.

Unfortunately mediocrity, where it does exist, is a typical side-effect of monopoly profits and complacency. Microsoft cannot afford for it to continue, if it ever could.

The most enduring software development techniques revealed at QCon London

I am in London for the QCon event, a vendor-neutral development conference which I have been fortunate to attend regularly over the last few years.


These events tend to have an underlying theme, which reflects the current thinking of developers and software architects. Each year I hear cogent and thoughtful explanations of why this or that approach will enable us to code better and please users more. Each year I also hear cogent and thoughtful explanations of why the fix proposed last year or the year before is actually a prime reason why projects fail.

Way back when it was SOA (Service Oriented Architecture) that was sweeping away the mistakes of the past. Next SOA itself was the mistake of the past and we got REST (Representational State Transfer). This year I am hearing how RPC is making a comeback, or at least not going away, for example because it can be more efficient when you want to transfer as little data as possible across the WAN.

Another example is enterprise Java. Enterprise Java Beans and J2EE were the fix, and then the problem, for scalable distributed applications. Rod Johnson came up with Spring, the lightweight alternative. Now I am hearing how Spring has become bloated and complicated and developers are looking for lightweight alternatives.

Test-driven development (TDD) brings fantastic benefits to software development, making it possible to change and improve your code while defending against the introduction of bugs. Yesterday though Dan North observed that TDD also has a cost, in that you write much more code. It is not uncommon for projects to have more test code than code that is active in production. If you did not write that code, you could be doing other productive work in the time made available. 
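
North’s point is easy to see in miniature. Here is a sketch in Python (my own illustration, not an example from his talk): four lines of production logic attract five tests, and every future change must keep all of them passing.

```python
import unittest

def add_vat(net: float, rate: float = 0.2) -> float:
    """Return the gross price: net plus VAT at the given rate."""
    if net < 0:
        raise ValueError("net price cannot be negative")
    if rate < 0:
        raise ValueError("VAT rate cannot be negative")
    return round(net * (1 + rate), 2)

class AddVatTests(unittest.TestCase):
    def test_standard_rate(self):
        self.assertEqual(add_vat(100.0), 120.0)

    def test_zero_rate(self):
        self.assertEqual(add_vat(100.0, rate=0.0), 100.0)

    def test_rounds_to_pennies(self):
        self.assertEqual(add_vat(0.99), 1.19)

    def test_negative_net_rejected(self):
        with self.assertRaises(ValueError):
            add_vat(-1.0)

    def test_negative_rate_rejected(self):
        with self.assertRaises(ValueError):
            add_vat(100.0, rate=-0.1)

if __name__ == "__main__":
    unittest.main()
```

The tests are valuable, but they are also code to write, read and maintain; that is the cost North was weighing.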

Agile methodologies like Scrum were devised to promote or even create communication and agility in software teams. Now every big enterprise vendor says it does Scrum and runs courses, but the result is a long way from the agile (with a small a) original concept.

This year I have heard a lot about over-optimisation, or creating code for situations that in fact never arise. This is the problem to which the solution is YAGNI (You Ain’t Gonna Need It). Since they apply across all the methodologies, I suggest that YAGNI, its cousin DRY (Don’t Repeat Yourself), and the even older KISS (Keep It Simple Stupid) are the most enduring software development techniques.

That said, even DRY took a beating yesterday. Greg Young said in his evening keynote that rigorous DRY advocates can end up merging nearly-identical procedures into a single block of code that then has to handle every variation. If your DRY functions are full of edge cases and special conditions, then maybe DRY has been taken to excess.
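
A hypothetical sketch of the failure mode Young described (my example, not his): two output formats that were only nearly the same get merged into one DRY function, which then sprouts flags and special cases, where two plain functions would be easier to read and change independently.

```python
# Over-DRY: one function with flags, because the two cases were "nearly" the same.
def render_report(rows, csv=False, header=True, quote_strings=False):
    out = []
    if header:
        out.append("id,name" if csv else "ID | Name")
    for row_id, name in rows:
        if csv and quote_strings:
            out.append(f'{row_id},"{name}"')
        elif csv:
            out.append(f"{row_id},{name}")
        else:
            out.append(f"{row_id} | {name}")
    return "\n".join(out)

# Less DRY, but each function says exactly what it does and can evolve alone.
def render_csv(rows):
    return "\n".join(["id,name"] + [f'{i},"{n}"' for i, n in rows])

def render_table(rows):
    return "\n".join(["ID | Name"] + [f"{i} | {n}" for i, n in rows])
```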

In the light of the above, I would therefore like to propose the first draft of my first theorem of software development:

There is no development methodology which will not become a burden when embraced rigidly

The other lesson I have learned from multiple QCons is that effective teams and smart developers count for much, much more than any specific tool or language or approach. There is no substitute.

Sold out QCon kicks off in London: big data, mobile, cloud, HTML 5

QCon London has just started in London, and I’m interested to see that it is both bigger than last year and also sold out. I should not be surprised, because it is usually the best conference I attend all year, being vendor-neutral (though with an Agile bias), wide-ranging and always thought-provoking.


A few more observations. One reason I attend is to watch industry trends, which are meaningful here because the agenda is driven by what currently concerns developers and software architects. Interesting then to see an entire track on cross-platform mobile, though one that is largely focused on HTML 5. In fact, mobile and cloud between them dominate here, with other tracks covering cloud architecture, big data, highly available systems, platform as a service, HTML 5 and JavaScript and more.

I also noticed that Adobe’s Christophe Coenraets is here this year as he was in 2011 – only this year he is talking not about Flex or AIR, but HTML, JavaScript and PhoneGap.

Continuous Integration vs Continuous Delivery vs Continuous Deployment: what is the difference?

I am reading the excellent book Continuous Delivery by Jez Humble and David Farley. But what is Continuous Delivery and how does it differ from the other “continuous” development methodologies?

It helps to understand that all these methodologies spring from the Agile software development movement, and the expression Continuous Delivery is a quote from the Agile Manifesto:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Now, the starting assumption is that most software projects integrate a number of smaller projects, whether from third-parties or from team members. Since these pieces are developed to some extent independently there is a risk that changes made to one piece will require modifications to another piece; hence according to Humble and Farley:

Most software developed by large teams spends a significant proportion of its development time in an unusable state.

The business of getting all the parts to work together is called integration, and if this involves serious work you need to have an integration phase where this is the sole objective. This is a bad idea for all sorts of reasons, slowing development and preventing proper testing other than at the end of these integration phases.

The solution is called Continuous Integration (CI). You have a frequent automated build that assembles all the pieces from all the teams into a working application. If the build fails, or if automated tests run against the build fail, then this is a bug that should be fixed immediately, not later in some separate phase of development.
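
Dedicated CI servers add scheduling, build history and notifications, but the core loop is simple enough to sketch. A minimal illustration in Python (the build and test commands here are placeholders, not a real CI product):

```python
import subprocess
import sys

def run(cmd):
    """Run one pipeline step; any non-zero exit breaks the build immediately."""
    print("==>", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        print("BUILD BROKEN at:", " ".join(cmd), file=sys.stderr)
        sys.exit(1)  # fix it now, not in a later integration phase

if __name__ == "__main__":
    run(["git", "pull", "--ff-only"])              # pick up everyone's commits
    run(["python", "-m", "compileall", "."])       # stand-in for the real build
    run(["python", "-m", "unittest", "discover"])  # run the automated tests
    print("Green: all the pieces build together and pass the tests.")
```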

Tools for CI include CruiseControl, Hudson, TeamCity and others; .NET developers can also configure Visual Studio Team Foundation Server for CI.

The problem with CI alone is that the development environment is not the same as the production environment. What if the CI build works and tests pass, but once deployed the application breaks or performs badly? Perhaps the development environment runs a multi-tier application with all the tiers on a single box, but when deployed onto actual multiple machines or VMs, something goes wrong. Permission problems are another common source of errors.

Continuous Delivery means that you not only build the software, but also deploy it frequently. This usually means provisioning servers, which you can automate using a tool like Puppet for Unix-like servers, or with Virtual Lab Management in a Visual Studio environment. Automation is pretty much essential for this to work. The more closely the test environment matches the production environment, the better.
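
In script form, Continuous Delivery adds an automated deployment step to the end of the CI pipeline. A sketch, in which the host, paths and service name are hypothetical, and scp/ssh stand in for whatever provisioning tool you use:

```python
import subprocess

TEST_HOST = "test-env.example.com"  # hypothetical production-like server

def deploy_to_test(artifact: str) -> None:
    """Push the built artifact to the test environment, restart, then smoke test."""
    subprocess.run(["scp", artifact, f"deploy@{TEST_HOST}:/opt/app/"], check=True)
    subprocess.run(["ssh", f"deploy@{TEST_HOST}", "sudo systemctl restart app"], check=True)
    # Smoke tests run against the deployed instance, not the developer's machine.
    subprocess.run(["python", "-m", "unittest", "tests.smoke"], check=True)
```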

Generally though, Continuous Delivery means deployment to a test environment. What about taking the next step, and deploying continuously to production? That is the methodology called Continuous Deployment. It sounds risky; but if you have a very extensive and thorough set of automated tests, then the risks are mitigated, especially as the extent of the changes in any one deployment is reduced.

Other suggestions for reducing risk include deploying to a small subset of users first, called “canary testing”; and making rollback easy.
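
Canary testing can be as simple as routing a small, fixed fraction of users to the new version and comparing its error rate with the old one before going further. A sketch (the percentages, version names and threshold are illustrative):

```python
import hashlib

CANARY_PERCENT = 5  # expose the new build to 5% of users first

def serving_version(user_id: str) -> str:
    """Bucket users deterministically so each one always sees the same version."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "v2-canary" if bucket < CANARY_PERCENT else "v1-stable"

def should_roll_back(canary_error_rate: float, stable_error_rate: float) -> bool:
    """Roll back if the canary is doing meaningfully worse than stable."""
    return canary_error_rate > stable_error_rate * 1.5
```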

That said, to judge by the Humble/Farley book the distinction between Continuous Delivery and Continuous Deployment is just a little blurred. The authors acknowledge that continuous deployment into production is not always a good idea. They also imply that Continuous Deployment might mean only that your application is always ready and easy to deploy into production, not that you necessarily deploy it constantly:

Your implementation should make it possible to deploy any version of your application that has made it past the automated tests into any of your environments at the push of a button, given the correct credentials.

Compliance and security are also factors that may rightly make it impossible to automate deployment to production completely.

Client-installed applications present some special difficulties which Humble and Farley discuss.

In summary then:

Continuous integration: your application always builds and passes its tests, including all the pieces from different sub-teams.

Continuous delivery: your application always builds and deploys to a test environment and passes its tests.

Continuous deployment: your application is always ready to deploy to production through a largely automated process.

Update: I received an email from Martin Fowler about this post. He refers to Jez Humble’s post on Continuous Delivery vs Continuous Deployment and adds:

– I would use your definition of Continuous Deployment for Continuous Delivery

– I would change the definition of Continuous Deployment to say something like "every good build is released to production"

However, I clarified with him that if you are building for a test environment but are confident that any build that passes would be OK to deploy to production, then you are still doing Continuous Delivery. In the end, while I am sure you should use Fowler and Humble’s definitions rather than mine, it seems to me a fine distinction, and that if you are doing Continuous Delivery properly then the transition to Continuous Deployment is largely a matter of policy.

QCon London kicks off with call to rediscover Agile, use open source

I’m at the QCon developer conference in London – one of my favourite developer conferences of the year because of its breadth and energy.

The opening keynote was from Craig Larman who spoke on doing lean and agile development – in particular, the Scrum methodology – with large multi-site teams. He means sizeable product groups of 500-1500 persons, though he also remarked that development on this scale is really a bad idea and that a team of 10 smart folk is much better.

Still, I guess large teams are an inevitability, and Larman has written books on the subject. I am not going to summarise the talk exactly, interesting though it was, but I am going to pick out a couple of asides which interested me.

Agile methodology is really about promoting communication; and one of Larman’s themes is that if you do what seems obvious, that is to break down a project into components and give one to each small team, then you end up with numerous teams that do not communicate well with each other. Agile becomes something you do in name only.

Larman spent a bit of time on which collaboration tools to use. One of his points was not to use any commercial tool that describes itself as being for agile project management or similar. I can think of several. He says these tools are just the commercial tool vendors repackaging their old non-agile tools. Whiteboards, spreadsheets on Google Docs, wikis and other simple tools are his recommendation. For source code management he suggests Subversion, Git or other open source solutions. Never use Rational ClearCase, he added; it always causes problems.

In fact, he went on to say that any commercial tool causes problems when multi-site development extends to teams in developing countries. They cannot afford the licences, he says, so avoid them.

It seems to me that the common theme here is how easily agile development intentions become non-agile in practice, especially in these large project groups.

QCon London 2010 report: fix your code, adopt simplicity, cool .NET things

I’m just back from QCon London, a software development conference with an agile flavour that I enjoy because it is not vendor-specific. Conferences like this are energising; they make you re-examine what you are doing and may kick you into a better place. Here’s what I noticed this year.

Robert C Martin from Object Mentor gave the opening keynote, on software craftsmanship. His point is that code should not just work; it should be good. He is delightfully opinionated. Certification, he says, provides value only to certification bodies. If you want to know whether someone has the skills you want, talk to them.

Martin also came up with a bunch of tips for how to write good code, things like not having more than two arguments to a function and never a boolean. I’ve written these up elsewhere.
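
To give a flavour of the boolean tip: a flag argument means the function does two things, and the call site tells the reader nothing. A sketch of the before and after (my example, not Martin’s):

```python
# Before: save(record, True) is opaque at the call site, and the function does two jobs.
def save(record, validate):
    if validate:
        record.check()
    record.write()

# After: two functions, each doing one thing, self-describing where called.
def save_record(record):
    record.write()

def validate_and_save(record):
    record.check()
    record.write()
```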


Next I looked into the non-relational database track and heard Geir Magnusson explain why he needed Project Voldemort, a distributed key-value storage system, to get his ecommerce site to scale. Non-relational or NoSQL is a big theme these days; database managers like CouchDB and MongoDB are getting a lot of attention. I would like to have spent more time on this track, but there was too much else on – a problem with QCon.

I therefore headed for the functional programming track, where Don Syme from Microsoft Research gave an inspiring talk on F#, Microsoft’s new functional language. He showed a series of hilarious slides with F# code alongside its equivalent in C#. Here is an example:

[Slide: a small white panel of F# code alongside the much longer equivalent C#.]

Seeing a slide like this makes you wonder why we use C# at all, though of course Syme has chosen tasks like asynchronous IO and concurrent programming for which F# is well suited. Syme also observed that F# is ideal for working with immutable data, which is common in internet programming. I grabbed a copy of Programming F# for further reading.

Over on the Architecture track, Andres Kütt spoke on Five Years as a Skype Architect. His main theme: most of a software architect’s job is communication, not poring over diagrams and devising code structures. This is a consistent theme at QCon and in the Agile movement; get the communication right and all else follows. I was also interested in the technical side though. Skype started with SOAP but switched to a REST model for web services. Kütt also told us about the languages Skype uses: PHP for the web site; C or C++ for heavy lifting and peer-to-peer networking; Delphi for the Windows interface; PostgreSQL for the database.

Day two of QCon was even better. I’ve written up Martin Fowler’s talk on the ethics of software development in a separate post. Following that, I heard Canonical’s Simon Wardley speak about cloud computing. Canonical is making a big push for Ubuntu’s cloud package, available both for private use and hosted on Amazon’s servers; and attendees at the QCon CloudCamp later on were given a lavish, pointless cardboard box with promotional details. To be fair to Wardley, he did not talk much about Ubuntu’s cloud solution, though he did make the point that open source makes transitions between providers much cheaper.

Wardley’s most striking point, repeated perhaps too many times, is that we have no choice about whether to adopt cloud computing, since we will be too much disadvantaged if we reject it. He says it is now more a management issue than a technical one.

Dan North from ThoughtWorks gave a funny and excellent session on simplicity in architecture. He used pseudo-biblical language to describe the progress of software architecture for distributed systems, finishing with

On the seventh day God created REST

Very good; but his serious point is that the shortest, simplest route to solving a problem is often the best one, and that we constantly make the mistake of using over-generalised solutions which add a counter-productive burden of complexity.

North talked about techniques for lateral thinking, finding solutions from which we are mentally blocked, by chunking up, which means merging details into bigger ideas, ending up with “what is this thing for anyway”; and chunking down, the reverse process, which breaks a problem down into blocks small enough to comprehend. Another idea is to articulate a problem to a colleague, which exercises different parts of the brain and often stimulates a solution – one of the reasons pair programming can be effective.

A common mistake, he said, is to keep using the same old products or systems or architectures because we always do, or because the organisation is already heavily invested in it, meaning that better alternatives do not get considered. He also talked about simple tools: a whiteboard rather than a CASE tool, for example.

Much of North’s talk was a variant of YAGNI – you ain’t gonna need it – an agile principle of not implementing something until/unless you actually need it.

I’d like to put this together with something from later in the day, a talk on cool things in the .NET platform. One of these was Guerrilla SOA, though it is not really specific to .NET. To get the idea, read this blog post by Jim Webber, another from the ThoughtWorks team (yes, there are a lot of them at QCon). Here’s a couple of quotes:

Prior to our first project starting, that client had already undertaken some analysis of their future architecture (which needs scalability of 1 billion transactions per month) using a blue-chip consultancy. The conclusion from that consultancy was to deploy a bus to patch together the existing systems, and everything else would then come together. The upfront cost of the middleware was around £10 million. Not big money in the grand scheme of things, but this £10 million didn’t provide a working solution, it was just the first step in the process that would some day, perhaps, deliver value back to the business, with little empirical data to back up that assertion.

My (small) team … took the time to understand how to incrementally alter the enterprise architecture to release value early, and we proposed doing this using commodity HTTP servers at £0 cost for middleware. Importantly we backed up our architectural approach with numbers: we measured the throughput and latency characteristics of a representative spike (a piece of code used to answer a question) through our high level design, and showed that both HTTP and our chosen Web server were suitable for the volumes of traffic that the system would have to support … We performance tested the solution every single day to ensure that we would always be able to meet the SLAs imposed on us by the business. We were able to do that because we were not tightly coupled to some overarching middleware, and as a consequence we delivered our first service quickly and had great confidence in its ability to handle large loads. With middleware in the mix, we wouldn’t have been so successful at rapidly validating our service’s performance. Our performance testing would have been hampered by intricate installations, licensing, ops and admin, difficulties in starting from a clean state, to name but a few issues … The last I heard a few weeks back, the system as a whole was dealing with several hundred percent more transactions per second than before we started. But what’s particularly interesting, coming back to the cost of people versus cost of middleware argument, is this: we spent nothing on middleware. Instead we spent around £1 million on people, which compares favourably to the £10 million up front gamble originally proposed.

This strikes me as an example of the kind of approach North advocates.

You may be wondering what other cool .NET things were presented. This session, called State of the Art .NET, was given by Amanda Laucher and Josh Graham. They offered a dozen items which they considered .NET folk should be using or learning about:

  1. F# (again)
  2. M – modelling/DSL language
  3. Boo – static Python for .NET
  4. NUnit – unit testing. Little regard for Microsoft’s test framework in Team System, which is seen as a wasted and inferior effort.
  5. RhinoMocks – mocking library
  6. Moq – another mocking library
  7. NHibernate – object-relational mapping
  8. Windsor – dependency injection, part of Castle project. Controversial; some attendees thought it too complex.
  9. NVelocity – .NET template engine
  10. Guerrilla SOA – see above
  11. Azure – Microsoft’s cloud platform – surprisingly good thanks to David Cutler’s involvement, we were told
  12. MEF – Managed Extensibility Framework as found in Visual Studio 2010, won high praise from those who have tried it

That was my last session (I missed Friday) though I did attend the first part of CloudCamp, an unconference for cloud early adopters. I am not sure there is much point in these now. The cloud is no longer subversive and the next new thing; all the big enterprise vendors are onto it. Look at the CloudCamp sponsor list if you doubt me. There are of course still plenty of issues to talk about, but maybe not like this; I stayed for the first hour but it was dull.

For more on QCon you might also want to read back through my Twitter feed or search the entire #qcon tag for what everyone else thought.