Tag Archives: software development

Developers like coding in the dark

Many developers prefer to code against dark backgrounds, according to this post by Monty Hammontree, Director of User Experience in Microsoft’s developer tools division.

Many of you have expressed a preference for coding within a dark editor. For example, dark editor themes dominate the list of all-time favorites at web sites such as http://studiostyl.es/ which serve as a repository for different Visual Studio styles.

Chief among the reasons many of you have expressed for preferring dark backgrounds is the reduced strain placed on the eyes when staring at the screen for many hours. Many developers state that light text on a dark background is easier to read over longer periods of time than dark text on a light background.


Personally I am not in this group. A white-ish background works well for me, and if it is too bright, simply reducing the monitor brightness is an effective fix.

Interesting post though, if only for the snippets of information about the new Visual Studio. Apparently it has around 6000 icons used in 28,000 locations. Another little fact:

Visual Studio’s UI is a mix of WPF, Windows Forms, Win32, HTML, and other UI technologies which made scrollbar theming a challenging project.

If you will be using Visual Studio 2012, are you on the dark side?

The most enduring software development techniques revealed at QCon London

I am in London for the QCon event, a vendor-neutral development conference which I have been fortunate to attend regularly over the last few years.


These events tend to have an underlying theme, which reflects the current thinking of developers and software architects. Each year I hear cogent and thoughtful explanations of why this or that approach will enable us to code better and please users more. Each year I also hear cogent and thoughtful explanations of why the fix proposed last year or the year before is actually a prime reason why projects fail.

Way back when, it was SOA (Service Oriented Architecture) that was sweeping away the mistakes of the past. Next, SOA itself was the mistake of the past and we got REST (Representational State Transfer). This year I am hearing how RPC is making a comeback, or at least not going away, for example because it can be more efficient when you want to transfer as little data as possible across the WAN.

Another example is enterprise Java. Enterprise Java Beans and J2EE were the fix, and then the problem, for scalable distributed applications. Rod Johnson came up with Spring, the lightweight alternative. Now I am hearing how Spring has become bloated and complicated and developers are looking for lightweight alternatives.

Test-driven development (TDD) brings fantastic benefits to software development, making it possible to change and improve your code while defending against the introduction of bugs. Yesterday though Dan North observed that TDD also has a cost, in that you write much more code. It is not uncommon for projects to have more test code than code that is active in production. If you did not write that code, you could be doing other productive work in the time made available. 

Agile methodologies like Scrum were devised to promote or even create communication and agility in software teams. Now every big enterprise vendor says it does Scrum and runs courses, but the result is a long way from the agile (with a small a) original concept.

This year I have heard a lot about over-optimisation, or creating code for situations that in fact never arise. This is the problem to which the solution is YAGNI (You Ain’t Gonna Need It). Since they apply across all the methodologies, I suggest that YAGNI, its cousin DRY (Don’t Repeat Yourself), and the even older KISS (Keep It Simple, Stupid) are the most enduring software development techniques.

That said, even DRY took a beating yesterday. In his evening keynote, Greg Young said that rigorous DRY advocates can end up merging procedures that were really only nearly the same into a single block of code. If your DRY functions are full of edge cases and special conditions, then maybe DRY has been taken to excess.
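As a contrived C# sketch of the problem (the types, names and discount rules are all my own invention, not Young’s example): two nearly identical calculations are merged into one shared method, which then sprouts a flag or special case for every difference the callers turn out to have.

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical domain type, for illustration only.
public class Line
{
    public decimal Price;
    public int Quantity;
}

public static class Totals
{
    // Before: two similar methods. A little duplication, but each is simple.
    public static decimal InvoiceTotal(IEnumerable<Line> lines)
    {
        return lines.Sum(l => l.Price * l.Quantity);
    }

    public static decimal QuoteTotal(IEnumerable<Line> lines)
    {
        return lines.Sum(l => l.Price * l.Quantity) * 0.9m; // quotes get 10% off
    }

    // After over-zealous DRY: one shared method that has grown a flag or
    // special condition for every difference between its callers.
    public static decimal Total(IEnumerable<Line> lines, bool isQuote,
                                bool vatExempt, decimal? discountOverride)
    {
        decimal total = lines.Sum(l => l.Price * l.Quantity);
        if (isQuote)
            total *= discountOverride ?? 0.9m; // one caller's requirement
        if (vatExempt)
            total /= 1.2m;                     // another caller's edge case
        return total;
    }
}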

In the light of the above, I would therefore like to propose the first draft of my first theorem of software development:

There is no development methodology which will not become a burden when embraced rigidly

The other lesson I have learned from multiple QCons is that effective teams and smart developers count for much, much more than any specific tool or language or approach. There is no substitute.

How to brew better software: The Monki Gras in London

I attended The Monki Gras in London yesterday, a distinctive developer event arranged by the analyst firm RedMonk.

This was not only a developer event, with the likes of Andre Charland and Dave Johnson from the PhoneGap team at Adobe, Mike Milinkovich, executive director of the Eclipse Foundation, and Jason Hoffman with Bryan Cantrill from cloud services (and Node.js sponsors) Joyent. It was also a serious beer event, complete with a range of craft beers, a beer tasting competition with nine brews to try, and a talk plus a free book from beer expert Melissa Cole. An unusual blend of flavours.


In charge of the proceedings was RedMonk co-founder and all-round impresario James Governor. I am a big fan of RedMonk and its developer-focused approach; it has been a fresh and heady brew in the dry world of IT analysts.


The Monki Gras did seem like an attempt by a regular IT conference sufferer to fix problems often encountered. The Wi-Fi worked, the food was fresh, unusual and delicious, the coffee was superb; though brewing good coffee takes time so the queues were long. Not everything scales. Fortunately this was a small event, and a rare treat for the couple of hundred or so who attended.

That said, there were frustrations. The sessions were short, which in general is a good thing, but left me wanting more depth and more details in some cases; we did not learn much about PhoneGap other than a brief overview, for example.

Nevertheless there was serious content. RedMonk’s Stephen O’Grady made the point succinctly: IT decision makers are ignorant about what developers actually use and what they want to use, which is one reason why there is so much dysfunction in this industry. Part of the answer is to pay more attention, and several sessions covered different aspects of analytics: Matt LeMay from bitly on what users click on the Web; Matt Biddulph (ex BBC, Dopplr, Nokia) gave a mind-stretching talk on social network analysis which, contrary to what some think, was not invented by Facebook but predates the Internet; and O’Grady shared some insights from developer analytics at RedMonk.

I had not noticed before that GitHub now gets nearly double the number of commits that Google Code does. That is partly because developers like Git, but it may also say something about Google’s loss of kudos in the open source developer community.

Kohsuke Kawaguchi, lead for Jenkins Continuous Integration and an architect at CloudBees, spoke on building a developer community. His context was how Jenkins attracted developers, but his main point has almost limitless application: “Make everything easy, relentlessly.”

Something I see frequently is how big companies (the bigger the worse) place obstacles in front of developers or users who have an interest in their products or services. Examples are enforced registration, multiple clicks through several complex pages to get to the download you want, complex installs, and confusing information. It all adds friction. If the target is sufficiently compelling, like apps on Apple’s app store, developers will get there anyway; but if you are not Apple, that friction can be fatal.

The Joyent guys did not speak about Node.js, sadly, but rather on the distinction between a VP of engineering and a Chief Technology Officer. Sounds dry and abstruse? I thought so too, but the delivery was so energetic that they were soon forgiven. Hoffman and Cantrill moved on to talk about management antipatterns in the software industry, prompting many wry nods of recognition from the audience. “It is very hard for middle management to add value,” said Cantrill.

Milinkovich made the point that the most valued open source projects generally make their way to a software foundation; PhoneGap to Apache is a recent example. He then gave the talk he really wanted to give, noting that as new software stacks emerge they have a tendency to re-implement CORBA, a middleware specification from the Nineties that tackled problems including remote objects, language independence, and transactions across the Internet. CORBA is remembered for drowning in complexity, but Milinkovich’s point is that the creators of exciting new stacks like Node.js should at least research and learn from past experience.

Milinkovich also found time to proclaim that “Flash is dead, Silverlight is dead, browser plugins are dead.” Perhaps premature; but I did not hear many dissenting voices.

I tweeted the conference extensively yesterday (losing at least one follower but gaining several more). Look out also for a couple of follow-up posts on topics of particular importance.

Appcelerator CEO on EMEA expansion, Titanium vs PhoneGap, and how WebKit drives HTML5 standards

I spoke to Appcelerator CEO Jeff Haynie yesterday, just before today’s announcement of the opening of an EMEA headquarters in Reading. It has only 4 or 5 staff at the moment, mostly sales and marketing, but will expand into professional services and training.

Appcelerator’s product is a cross-platform (though see below) development platform for both desktop and mobile applications. The mobile aspect makes this a hot market to be in, and the company says it has annual growth of several hundred percent. “We’re not profitable yet, but we’ve got about 1300 customers now,” Haynie told me. “On the developer numbers side, we’ve got about 235,000 mobile developers and about 35,000 apps that have been built.”

Jeff Haynie, Appcelerator

In November 2011, Red Hat invested in Appcelerator and announced a partnership based on using Titanium with OpenShift, Red Hat’s cloud platform.

Another cross-platform mobile toolkit is PhoneGap, which has received lots of attention following Adobe’s acquisition of Nitobi, the company which built PhoneGap, and the donation of PhoneGap to the Apache Foundation. I asked Haynie to explain how Titanium’s approach differs from that of PhoneGap.

Technically what we do and what PhoneGap does is a lot different. PhoneGap is about how you take HTML, wrap it in a web browser, put it into a native container and expose some of the basic APIs. Titanium is really about how you expose JavaScript as an API for native capabilities, and how you build a real native application or an HTML5 application. We offer both a true native application – I mean the UI is native and you get full access to all the API as if you had written it native, but you are writing it in JavaScript. We have also now got an HTML5 product where that same codebase can be deployed into an HTML5 web-driven interface. We think that is wildly different technically and delivers a much better application.

Haynie agrees that cross-platform tools can compromise performance and design, and even resists placing Titanium in the cross-platform category:

Titanium is a real native UI. When you’re in an iPhone TableView it’s actually a real native TableView, not an HTML5 table that happens to look like a TableView. You get the best of both worlds. You get a JavaScript-driven, web-driven API, but when you actually create the app you get a real app. Then we have an open extensible API so it’s really easy if you want to expose additional capabilities or bring in third-party libraries, very similar to what you do in Java with JNI [the Java Native Interface].

The category has got a bit of a bad rap. We wouldn’t really describe ourselves as cross-platform. We’re really an API that allows you to target multiple different devices. It’s not write-once-run-anywhere, it’s really API driven.

80% of our core APIs are meant to be portable. Filesystems, threads, things like that. Even some of the UI layer, basic views and buttons and things like that. But then you have a Titanium iOS namespace [for example] which allows you to access all the iOS-specific APIs, that aren’t portable.

I asked Haynie for his perspective on the mobile platform wars. Apple and Android dominate, but what about the others?

RIM and Microsoft are fighting for third place. I would go long on Microsoft. Look at Xbox, look at the impact of long-term endeavours, they have the sustainability and the investment power to play the long game, especially in the enterprise. We’ll see Microsoft make significant strides in Windows 8 and beyond.

Even within Android, there are going to be a lot of different types of Android that will be both complementary and competitive with Google. They will continue to take the lion’s share of the market. Apple will be a smaller but highly profitable and vertically integrated ecosystem. In my opinion Microsoft is a bit of a bridge between the two. They’re more open than Apple, and more vertically integrated than Google, with tighter standardisation and stacks.

I wouldn’t quite count RIM out. They still have a decent market share, especially in certain parts of the world and certain types of application. But they’ve got a long way to go with their new platform.

So will Titanium support Windows 8 “Metro” apps, running on the new WinRT runtime?

Yes, we don’t have a date or anything to announce, but yes.

I was also interested in his thoughts on Adobe, particularly as there is some flow of employees from Adobe to Appcelerator. Is he seeing migration of developers from Flex, Flash and AIR to Titanium?

Adobe has had a tremendously successful product in Flash – the web wouldn’t be the web today if it wasn’t for Flash – but the advent of HTML5 is encroaching on that. How do they move to the next big thing? I don’t know if they have a next big thing. And they’re dealing in an ecosystem that’s not necessarily level ground. That’s churning lots of dissenting and different opinions inside Adobe, is what we’re hearing.

We’re seeing a large degree of people that are Flash, ActionScript oriented that are migrating. We’ve hired a number of people from Adobe. Quite a lot of people in our QA group actually came out of the Adobe AIR group. Adobe is a fantastic company, the question is what’s their future and what’s their plan?

Finally, we discussed web standards. With a product that depends on web technology, does Appcelerator get involved in the HTML5 standards process? The question prompted an intriguing response with regard to WebKit, the open source browser engine.

We’re heavily involved in the Eclipse Foundation, but not in the W3C today. I spent about three and a half years on the W3C in my last company, so I’m familiar with the process and the people. The W3C process is largely driven – and I know the PhoneGap people have tried to get involved – by the WHATWG and the HTML5 working group, which ultimately are driven by the browser manufacturers … it’s a largely vendor-oriented, fragmented space right now, that’s the challenge. We still haven’t managed to get a royalty-free, IPR-free codec for video.

I’d also say that one of the biggest factors pushing HTML5 is less the standardisation itself and more WebKit. WebKit has become the de facto [standard], which has really been driven by Apple and Google and against Microsoft. That’s driving HTML5 forward as much as the working group itself.

The mystery of unexpected expiring sessions in ASP.NET

This is one of those posts that will not interest you unless you have a similar problem. That said, it does illustrate one general truth, that in software problems are often not what they first appear to be, and solving them can be like one of those adventure games where you think your quest is for the magic gem, but when you find the magic gem you discover that you also need the enchanted ring, and so on.

Recently I have been troubleshooting a session problem on an ASP.NET application running on a shared host (IIS 7.0).

This particular application has a form with some lengthy text fields. Users complete the form and then hit save. The problem: sometimes they would take too long thinking, and when they hit save they would lose their work and be redirected to a login page. It is the kind of thing most of us have experienced once in a while on a discussion forum.

The solution seems easy though: just increase the session timeout. However, this had already been done, and the sessions still seemed to time out too early. Failure one.

My next thought was to introduce a workaround, especially as this is a shared host where we cannot control exactly how the server is configured. I set up a simple AJAX script that ran in the background and called a page in the application from time to time, just to keep the session alive. I also had it write a log for each ping, in order to track the behaviour.
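Here is a minimal sketch of that kind of keep-alive script (the page name KeepAlive.aspx, the five-minute interval and the exact logging arrangement are my own choices for illustration):

<script type="text/javascript">
    // Ping the server every five minutes so the session stays alive.
    // The server-side KeepAlive.aspx page can write a log entry per request.
    window.setInterval(function () {
        var xhr = new XMLHttpRequest();
        // The timestamp in the query string discourages caching; the pinged
        // page should also disable output caching, as described below.
        xhr.open("GET", "KeepAlive.aspx?ts=" + new Date().getTime(), true);
        xhr.send(null);
    }, 5 * 60 * 1000);
</script>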

By the way, if you do this, make sure that you disable caching on the page you are pinging. Just pop this at the top of the .aspx page:

<%@ OutputCache Duration="1" Location="None" VaryByParam="None"%>

It turned out though that the session still died. One moment it was alive, next moment gone. Failure two.

This pretty much proved that session timeout as such was not the issue. I suspected that the application pool was being recycled – and after checking with the ISP, who checked the event log, this turned out to be the case. Check this post for why this might happen, as well as the discussion here. If the application pool is recycled, then your application restarts, wiping any session values. On a shared host, it might be someone else’s badly-behaved application that triggers this.

The solution then is to change the way the application stores session variables. ASP.NET offers three storage modes for session state. The default is InProc, which is fast but not resilient, and for obvious reasons not suitable for apps which run on multiple servers. If you change this to StateServer, then session values are stored by the ASP.NET State Service instead. Note that this service is not running by default, but you can easily enable it, and our helpful ISP arranged this. The third option is SQLServer, which is suitable for web farms. Storing session state outside the application process means that it survives pool recycling.
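For reference, the web.config change looks something like this (the timeout value is illustrative; 42424 is the State Service’s default port):

<sessionState mode="StateServer"
              stateConnectionString="tcpip=localhost:42424"
              timeout="60" />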

Note the small print though. Once you move away from InProc, session variables are serialized, not just held in memory. This means that classes stored in session state must be marked with the [Serializable] attribute. Note also that objects might emerge from serialization and deserialization a little different from how they went in, if they hold state that is more complex than simple properties. The constructor is not called, for example. Further, some properties cannot sensibly be serialized. See this article for more information, and why you might need to do custom serialization for some classes.
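A minimal sketch of what that means in practice (the class and its fields are invented for illustration):

using System;

[Serializable]
public class DraftForm
{
    public string Title;
    public string Body;

    // A field marked NonSerialized does not survive the round trip to the
    // State Service: it is null after deserialization and must be re-created,
    // since the constructor is not called either.
    [NonSerialized]
    private object cachedValidator;
}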

After tweaking the application to work with the State Service though, the outcome was depressing. The session still died. Failure three.

Why might a session die when the pool recycles, even if you are not using InProc mode? The answer seems to be that the new pool generates a new machine key by default. The machine key is used to encrypt and decrypt the session values, so if the key changes, your existing session values are invalid.

The solution was to specify the machine key in web.config. See here for how to configure the machine key.
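The entry takes this general form (the key values here are placeholders – generate your own keys rather than copying an example):

<machineKey validationKey="[your validation key]"
            decryptionKey="[your decryption key]"
            validation="SHA1"
            decryption="AES" />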

Everything worked. Success at last.

Holiday season free giveaway: a must-read for developers

Among my top books for 2011 is Continuous Delivery by Jez Humble and David Farley. I reviewed it here, and it also sparked some discussion of the difference between the various continuous software development and deployment models.

I have a spare copy of this book to give away. All you need to do is comment on this post with a valid email address – this will not be posted or used for any other purpose, but I will use it to request your postal address if you win. Please do not include a URL, as it risks being dumped in the spam bucket!

On January 6th I will select a winner at random. I will post to anywhere in the world.

Update: I have selected a winner. To do so, I used Java’s Random class to generate a number between zero and one less than the number of comments. The number it came up with was 4, so the winner is Ian Smith, the 5th person to comment. Congratulations!


Not allowed in Windows 8 Metro: porn, ads in live tiles, bugs, or opt-out data collection

Microsoft’s newly published Certification Requirements for the forthcoming Windows 8 Store include some notable points. Here are a few that caught my eye.

2.3 Your app must not use tiles or notifications for ads

No complaints about that one.

3.2 Your app must not stop responding, end unexpectedly, or contain programming errors

Hmm, this could be a tough one.

3.3 Your app must provide the same user experience on all processor types

OK, no “Intel-only” features. However, you could by implication submit an “Intel-only” version of your app, as long as it is called something different from the ARM version.

3.7 Your app must not use an interaction gesture in a way that is different from how Windows uses the gesture

This is interesting as an example of enforcing application style guidelines. The intent is a consistent user experience, but is this heavy-handed?

4.1 Your app must obtain opt-in or equivalent consent to publish personal information

No stealthy personal data collection. A good thing; though if opt-in means “Hand over your data or you cannot run the app” it can still be difficult for users to avoid.

4.4 Your app must not be designed or marketed to perform, instruct, or encourage tasks that could cause physical harm to a customer or any other person

What a relief!

5.1 Your app must not contain adult content

Windows Metro a porn-free zone? This could be troublesome though: does it rule out games rated above PEGI 16? This is a preliminary document and it would not surprise me if there is some change here; maybe this is a restriction for the beta period only.

Subversion 1.7 released: just one .svn directory per working copy

Yesterday saw the 1.7 release of Subversion, the widely used open source version control system. It is a significant release with many new features, bug-fixes and performance improvements, and I suggest reading the release notes or complete change log. One thing to highlight is that the default working copy metadata storage is now a single SQLite database per working copy, rather than a .svn directory containing metadata in every sub-directory.

I upgraded my TortoiseSVN, which is already updated to 1.7, and tried upgrading one of my own projects. Here is the .svn folder before the upgrade:

[screenshot: .svn folder contents before the upgrade]

and after:

[screenshot: .svn folder contents after the upgrade]

Those pesky .svn folders can be a nuisance so this is a welcome change, although there is a downside as the release notes warn:

It is not safe to copy an SQLite file while it’s being accessed via the SQLite libraries. Consequently, duplicating a working copy (using tar, cp, or rsync) that is being accessed by a Subversion process is not supported for Subversion 1.7 working copies, and may cause the duplicate (new) working copy to be created corrupted.

Subversion has become less fashionable since the advent of distributed version control systems like Git and Mercurial; though for corporate development Subversion remains popular, because a centralised system is easier to control.

WANdisco’s Jessica Thornsby has a helpful post on the new 1.7 features, with more details on the benefits of the new working copy metadata management system.

Review: Continuous Delivery by Jez Humble and David Farley

I like this book. I know I like it because I find myself wanting to quote from it frequently. It is a book that almost every software developer should read, even if you disagree with parts of it – which is likely, because it is opinionated. The authors always give reasons for their opinions though, which means that if you disagree, you need to articulate why that is; or they may even change your mind. In consequence you find yourself learning as you read.

The authors are software theoreticians, but they are also practitioners; in fact they are practitioners first and theoreticians afterwards. This means they are pragmatic rather than dogmatic. Here is an example. Chapter 13 discusses software dependencies, and page 372 covers circular dependencies, “probably the nastiest dependency problem.” A circular dependency is when component A depends on component B, and component B also depends on component A.

A bad idea; but the authors write:

Surprisingly, we have seen successful projects with circular dependencies in their build systems. You may argue with our definition of “successful” in this case, but there was working code in production, which is enough for us.

As an aside, this kind of dry humour is characteristic, as also evident in remarks like this:

We are certain that, occasionally, manually intensive releases work smoothly. We may well have been unlucky in having mostly seen the bad ones.

The subject of the book is Continuous Delivery. So what is that? Well, if Continuous Integration is about ensuring that your software always builds, then Continuous Delivery is about ensuring that your software always deploys. The final form, as it were, of Continuous Delivery is Continuous Deployment, where you are so confident of your automated build and deploy process that any checked-in code that passes its tests can be deployed immediately. I was confused about the difference between Continuous Delivery and Continuous Deployment so I wrote a post about it; it turns out that there is not much difference.

The principle behind Continuous Delivery is that software is not done until it is released. If the release process is long, arduous and infrequent, then you are not really doing Agile development. A section of chapter 1 is devoted to release anti-patterns, and these form an excellent rationale for taking an interest in Continuous Delivery.

My guess is that anyone who has been involved in professional software development will wince a little while reading through these anti-patterns, thinking “that is what we used to do” or even “that is what we do”.

That said, Humble and Farley do not fall into the trap of merely writing about how not to do it. Rather, they address in some detail the kinds of problems you will face if you decide to embrace the Continuous Delivery methodology. The key ingredient in Continuous Delivery is that pretty much everything must be automated, otherwise it is too difficult to do. But how do you automate something like Acceptance Testing? That is the subject of chapter 8. How do you automate a deployment at all? That is the subject of chapter 6. The authors are not on a higher plane than the rest of us, and much of the advice is straightforward, even at the level of “Always use relative paths,” which is a tip in chapter 6.

The authors talk a lot about testing, as you would expect, but there is also extensive discussion of software configuration management, describing different approaches such as centralised and distributed version control and even specific tools. The chapter on Advanced Version Control is a particularly good read. Humble and Farley articulate the point that branching and merging is antithetical to Continuous Integration and therefore Continuous Delivery:

If different members of the team are working on separate branches or streams then by definition they’re not continuously integrating (p 390)

Does this mean branches are a bad idea? Not always, say the authors, but they also state:

Our strong recommendation is to create long-lived branches only on release … new work is always committed to the trunk (p 392)

The reason is not only to enable Continuous Integration, but also because merging is complex and error-prone.

Software configuration management is not easy, but it is a relatively mature aspect of software development. This is less true of what you might call infrastructure configuration management; yet infrastructure dependencies such as versions and configurations of the operating system or web server are a common reason for deployment failures. Several chapters discuss this problem in detail. In principle, the authors say:

The desired state of your infrastructure should be specified through version-controlled configuration.

This leads to some thoughtful discussion of how to achieve this.

Another theme, as you would expect, is that development and operations people need to be working together and not in isolation. To some extent this is a DevOps book.

A great book then; but there are flaws. One is that there is some repetition because of the way the book is organised. This is good if you are inclined to read chapters in isolation, but not so good if you are reading straight through. In practice I did not find it too annoying, but it is there.

Another issue is that while the authors do cover Microsoft .NET to some extent, this is usually in the form of a brief mention and there is more focus on Java. This may be in part because of their preference for open source. It is still a good read for .NET developers, because the principles are platform-agnostic, but Microsoft platform developers may find it irritating at times. Team Foundation Server, say the authors, is “essentially an inferior knock-off of Perforce” (p 386).

The discussion of specific tools is a strength but also a weakness, in that the tools will change over time and the book will become dated.

This is not the last word on Continuous Delivery, but it is an enjoyable and thought-provoking read. Recommended.


C++ 11 is approved by ISO: a big day for native code development

Herb Sutter reports that C++0x, which will be called C++ 11, has been unanimously approved by the ISO C++ committee. The “11” in the name refers to the year of approval, 2011. The current standard is C++ 98, though amended as C++ 03, so it has taken 8 or 13 years to update it, depending on how you count it.

This means that compiler makers can get on with implementing the full C++ 11 standard. Most current compilers implement some of the features already. This Apache wiki shows the current status. A quick glance suggests that the open source GCC is ahead of the pack, followed by Intel C++ and then perhaps Microsoft Visual C++.

C++ 11 is pretty much compatible with C++ 03 so existing code should still work. However there are many new features, enough for Bjarne Stroustrup to say in his feature summary:

Surprisingly, C++0x feels like a new language: The pieces just fit together better than they used to and I find a higher-level style of programming more natural than before and as efficient as ever. If you timidly approach C++ as just a better C or as an object-oriented language, you are going to miss the point. The abstractions are simply more flexible and affordable than before. Rely on the old mantra: If you think of it as a separate idea or object, represent it directly in the program; model real-world objects, and abstractions directly in code. It’s easier now.

Concurrent programming is better supported in C++ 11, important for getting the best performance from modern hardware.

It is curious how the programming landscape has changed in recent years. A few years back, you might have foreseen a day when most programming would be .NET, Java or JavaScript: all varieties of managed code. While those languages still dominate, native code has come more to the fore, thanks to factors like Apple’s focus on Objective-C, and signs of internal conflict at Microsoft over the best language for coding Windows applications.

That said, C++ 11 remains a demanding language to learn and use. As Stroustrup notes, since C++ 11 is a superset of C++ 98 it is technically harder to learn all of it, though new libraries and abstractions should help beginners. The reasons for using or not using C++ are not going to change significantly with this new standard.