Category Archives: software development

Quick thoughts from QCon

I’m at QCon in London – a conference aimed at the “technical team lead, architect, and project manager”, according to the little printed guide, and notable for having tracks on .NET as well as Java, though in reality this is more of a Java crowd.

Good session from Martin Fowler and others from ThoughtWorks on “Modifiability: or is there Design in Agility?” This is about the distinction between agility and chaos; Fowler referenced a remark he attributed to Kent Beck about the difference between the simplest thing that will work (good) and the stupidest thing that will work (…). Just common sense nicely articulated: take most care over the decisions that are the least reversible.

I also enjoyed the comments on test-driven development, noting that a spin-off benefit of TDD is that it enforces modular design, since without modular design you cannot easily create tests.
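To make that point concrete, here is a minimal illustration of my own (not from the session; the names and the 17.5% VAT rate are invented for the example). To write the test at all, the calculation has to live in a unit you can call in isolation, away from UI or database code – the pressure of testability is what produces the modular design.

```csharp
// Illustrative sketch: a pure function is trivially testable precisely
// because it has been factored out into a self-contained module.
using NUnit.Framework;

public static class Vat
{
    // Pure function: easy to test because it depends on nothing external.
    public static decimal Add(decimal net, decimal rate)
    {
        return net + (net * rate);
    }
}

[TestFixture]
public class VatTests
{
    [Test]
    public void AddsVatAtStandardRate()
    {
        Assert.AreEqual(117.50m, Vat.Add(100m, 0.175m));
    }
}
```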

I sat in briefly on Christian Weyer’s introduction to WCF (Windows Communication Foundation). What I learnt is that outside a niche of advanced Microsoft platform developers few people have any clue what WCF is; Weyer’s presentation didn’t change this much as few attended, which is a shame. Incidentally I’m seeing quite a bit of WCF misinformation floating about, for example that it is just a wrapper around old stuff like .NET remoting, or that it can only use SOAP and therefore must be slow. Neither is correct. Microsoft has a tricky PR job on its hands to get attention for this; the same applies to WPF (Windows Presentation Foundation) and the other .NET Framework 3.0 technologies.
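On the SOAP point, here is a minimal sketch of why the claim is wrong (the service name and address are invented for the example): the same WCF contract can be exposed over the binary, sockets-based NetTcpBinding as easily as over an HTTP/SOAP binding, because the transport is an endpoint choice rather than a property of the service.

```csharp
// Minimal WCF sketch: one contract, exposed over binary TCP rather than
// text SOAP over HTTP. Names and the address are illustrative only.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IGreeter
{
    [OperationContract]
    string Greet(string name);
}

public class Greeter : IGreeter
{
    public string Greet(string name) { return "Hello, " + name; }
}

class Host
{
    static void Main()
    {
        using (ServiceHost host = new ServiceHost(typeof(Greeter)))
        {
            // Binary message encoding over TCP – no text SOAP-over-HTTP here.
            host.AddServiceEndpoint(typeof(IGreeter),
                new NetTcpBinding(), "net.tcp://localhost:9000/greeter");
            // The same contract could equally be exposed over BasicHttpBinding.
            host.Open();
            Console.WriteLine("Listening... press Enter to stop");
            Console.ReadLine();
        }
    }
}
```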

I won’t say more about Weyer’s session as I had to leave early to talk to Amazon’s Werner Vogels about its platform services like S3 and EC2 (internet storage and on-demand servers). I asked Vogels why Amazon offers no SLA (Service Level Agreement) on these services; he said it was early days and to watch this space. Ironically, he mentioned that Amazon attaches great importance to SLAs internally, so at least it understands the need. He added that Amazon is committed to maintaining its current strategy of relatively low pricing. It was a good chat and I’ll try to find time to write more about it shortly.

That’s it for now; next up for me is Larry Constantine’s keynote on usability; tomorrow Erik Meijer is speaking on LINQ (Language Integrated Query), which he helped create. More later, but not from the conference centre as the internet connection is injuriously expensive (£6.00 for 30 minutes). Ouch.

A virtual conference for Delphi 2007, Delphi for PHP, JBuilder

Starting today, you can attend the CodeRage 2007 developer conference. It’s free, entirely virtual, and has some promising sessions for anyone wanting to keep up with what’s new for Delphi, Delphi for PHP and JBuilder. For some reason there are also sessions on Ruby; looks like CodeGear (a wholly-owned Borland subsidiary) is cooking something up here.

I like this idea. Conferences are part of IT culture, and I guess pros will always want to get together for real conferences, if only for the networking opportunities, and for the chance to collar the people who actually have the answers and grill them with burning questions or complaints.

Even so, there is huge logic behind virtualizing conferences, especially bearing in mind the environmental cost of travel. The vendor gets access to a larger potential audience, and delegates have more flexibility over what content they view.

This one looks rather good.

Update

I’m seeing reports of connection problems, video breaking up and so on. Perhaps that’s the major downside of virtual conferences. On the other hand, this stuff ought to work by now. If CodeGear can’t scale its conference servers, that’s not a good advertisement for its technology.


Software architects cautious about SOA; London Underground makes it work

SOA (Service Oriented Architecture) once seemed to promise a new world of smooth cross-platform and cross-language interoperability, high software reuse, easier maintenance of complex systems, and clean wrapping of legacy systems. Has it delivered? I found the recent Microsoft Architecture Insight Conference surprisingly downbeat. Sam Lowe from Capgemini gave a brisk overview of where SOA is valuable, emphasizing that it is not always applicable, that its value is hard to prove, and that it often does not live up to its hype. “You need to think across business and IT”, he said, making the point that project roadmaps should not be technology-centric. It is no good rewarding people simply for creating services; if you do, you end up with lots of services for no clear reason. Too many services may be worse than too few.

All sound stuff. Conchango’s James Saull continued the theme in his “real world” session. It’s “very very rare” to see SOA success stories, he told us, following up with “I have never seen a business case for doing a service-oriented engagement.” One delegate immediately came up with one; but there was a fair amount of head-nodding as well. The supposed reusability of SOA services also got bashed. “I haven’t seen anyone to date really getting reuse,” said another delegate. Versioning and dependency issues were mentioned. The takeaway was not that SOA is useless, but rather that resources have been wasted in a mistaken belief in SOA as a solution to everything.

It took a case study to bring relief from these depressing assessments. This was from the London Underground, the same WPF (Windows Presentation Foundation) application that I commented on earlier, but with a little more detail. I was not the only person impressed by this application; apparently the governor of the Bank of England, Mervyn King, paid the developers a visit to find out more about it. Although the project is only a proof of concept, there is great enthusiasm for it and a production version is actively being planned, though it will take until Q3 2007 before the 20,000 London Underground desktops are powerful enough to run it (.NET Framework 3.0 is required). Passengers may actually see station displays running WPF.

The London Underground application brings three or four systems together into one visualization. You can think of it as a kind of enterprise mash-up. Although it is the user interface that catches the eye, it would not be possible without an existing investment in SOA, going back several years. It therefore appears that London Underground is getting reusability and other SOA benefits that are eluding most others. I asked what the secret was. The answer was a little vague. “We’re fortunate that we had the right services in the right place at the right time,” said developer Keith Walker. Peter Goss expanded on this. “We have four or five main services we use, but each of our large applications has an interface exposed which we can consume from if necessary. It’s an ongoing process.” In other words, every application was designed to be part of a platform, not just to work in isolation. The services had the right level of granularity, and matched the business well.

Here at last is an example of SOA yielding perhaps unexpected benefits, presuming that the proof of concept does translate successfully into a deployed application. For more background on this case study, download the presentation referenced in my earlier post.

So what does it take to be successful with SOA? It’s hard to discern whether London Underground is just a particularly good fit for this kind of architecture, or whether it happens to be using development principles that would work equally well in other contexts.


Visual Studio 2005: still needs admin rights on Vista?

It was good to see – at last – the release of Visual Studio 2005 Service Pack 1 Update for Vista.

I was hoping this update would remove the need to run Visual Studio 2005 with administrator rights on Vista. Unfortunately I don’t think it does. It’s hard to be sure; in fact, I can’t find any clear statement about what the “Update for Vista” actually does. Following the “more information” links on the download page is like playing the original Adventure game – a maze of twisty little passages, all alike – none of which tells you what you want to know.

Still, I note that the list of bad things which happen when you run with normal permissions still exists; so I’m presuming Microsoft still recommends using “Run as administrator” for Visual Studio.

I dislike doing this. I don’t develop on Linux as root, nor on the Mac – why should it be needed on Windows? I realise that some things need local admin rights for good reasons – registering a COM DLL, for example – but I don’t see why I should have to run the entire IDE as admin just for the sake of those few activities.

How dangerous is it? I presume it’s no worse than running as admin on XP – but that was bad enough. For example, I checked what happens with online help if you use “Run as administrator” to start Visual Studio. Help opens in a separate application called Document Explorer, which embeds Internet Explorer to render the online documentation. As I expected, if you open this from Visual Studio’s Help menu it runs with elevated rights. Naturally, the docs include links to external web sites. What if you right-click one of these and choose “Open link in external window”? The site will open in IE, but take a look at the bottom right: “Protected mode off”. In fact, IE is now running with a high integrity level, just like Visual Studio. There is nothing to stop you browsing the web from here, probably without realising you are more at risk than usual.

It’s crazy to be reading documentation and browsing the web with full admin rights, just to keep Visual Studio happy.

I intend to try running Visual Studio as a normal user and see how it goes. I reckon it will work for some projects at least.

Note: if you want to see the integrity level of the processes on your system, download the latest Process Explorer. You’ll need to select the Integrity Level column. The ins and outs of UAC and the extent to which it protects you are discussed in Mark Russinovich’s blog entry on the subject.
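If you would rather query this programmatically, here is a minimal C# sketch of my own (not from Russinovich or Microsoft documentation) that reads the integrity level of the current process through the Win32 token APIs – the same information Process Explorer displays. Error handling is kept to a minimum.

```csharp
// Sketch: read the current process's integrity level from its token.
using System;
using System.Runtime.InteropServices;

class IntegrityCheck
{
    const int TokenIntegrityLevel = 25; // TOKEN_INFORMATION_CLASS value
    const uint TOKEN_QUERY = 0x0008;

    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool OpenProcessToken(IntPtr process, uint access, out IntPtr token);

    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool GetTokenInformation(IntPtr token, int infoClass,
        IntPtr info, uint infoLength, out uint returnLength);

    [DllImport("advapi32.dll")]
    static extern IntPtr GetSidSubAuthority(IntPtr sid, uint index);

    [DllImport("advapi32.dll")]
    static extern IntPtr GetSidSubAuthorityCount(IntPtr sid);

    static void Main()
    {
        IntPtr token;
        if (!OpenProcessToken(System.Diagnostics.Process.GetCurrentProcess().Handle,
                              TOKEN_QUERY, out token))
            throw new System.ComponentModel.Win32Exception();

        // First call reports the buffer size needed; second call fills it.
        uint length;
        GetTokenInformation(token, TokenIntegrityLevel, IntPtr.Zero, 0, out length);
        IntPtr buffer = Marshal.AllocHGlobal((int)length);
        try
        {
            if (!GetTokenInformation(token, TokenIntegrityLevel, buffer, length, out length))
                throw new System.ComponentModel.Win32Exception();

            // Buffer holds a TOKEN_MANDATORY_LABEL; its first field is a PSID.
            IntPtr sid = Marshal.ReadIntPtr(buffer);
            byte count = Marshal.ReadByte(GetSidSubAuthorityCount(sid));
            uint rid = (uint)Marshal.ReadInt32(GetSidSubAuthority(sid, (uint)(count - 1)));

            // 0x1000 = low, 0x2000 = medium, 0x3000 = high (elevated)
            string level = rid >= 0x3000 ? "High (elevated)" :
                           rid >= 0x2000 ? "Medium (normal)" : "Low";
            Console.WriteLine("Integrity level: {0} (RID 0x{1:X})", level, rid);
        }
        finally
        {
            Marshal.FreeHGlobal(buffer);
        }
    }
}
```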


What would the young Bill Gates make of today’s Microsoft?

He would be hacking (in a good way) with the crowd at the Future of Web Apps conference I attended two weeks ago, not here with a bunch of senior software architects discussing the failures and successes of SOA (Service Oriented Architecture). I’m at the Microsoft Architecture Insight Conference in Wales, where I’ve been hearing a lot about old-fashioned ideas like requirements analysis, making the business case for change, being realistic about software reuse, and other sound, sensible, but unexciting software development principles.

That’s not to say this is a bad conference, far from it. I had an excellent chat with Jack Greenfield, the Microsoft architect who is putting together the next generation of Microsoft’s modeling and enterprise development tools for Visual Studio. “Software factories” is the buzzword – see here for more background. There is also good stuff on identity management within and beyond the firewall, and sessions on using development methodologies in Visual Studio Team System. Amigo Ivar Jacobson is here talking up his Essential Unified Process (though “process” is last year’s word; we do “practices” now); and there are a number of case studies, including one on visualizing the London Underground network which I’m looking forward to later today – this is the amazing WPF application which was shown off at one of the Vista launches.

It’s easy to find fault with products like Vista or Office 2007; yet you have to give Microsoft credit for establishing .NET as a major platform for enterprise development against considerable JEE momentum.

That said, let’s go back to the young Bill Gates. There is a track here on SaaS (Software as a service), which seems to mean hosted, on-demand applications versus traditional premises-based development. We heard some research on disruptive technology which Microsoft is sponsoring in conjunction with the Manchester Business School, including a look at Siebel vs Salesforce.com for CRM (Customer Relationship Management). Here’s one facet that stuck in my mind. According to Dr Steven Moxley of the MBS, Marc Benioff’s first customers were not SMEs or start-ups, but groups within large enterprises that were frustrated by the shortcomings or inflexibility of their existing software. It was a kind of stealth adoption. Salesforce.com was able to sell to such groups because its software is zero-install, pay as you go.

I immediately thought of the times I’ve had phone calls that go, “Could you send that attachment to my Gmail account. Our email is playing up today.”

Gmail may be less feature-rich than Exchange; but it tends to just work.

In other words, you could as easily do Microsoft vs Google as Siebel vs Salesforce.com. Why is Microsoft sponsoring studies that articulate its own vulnerability? Officially, this is about helping its partners to grow their own disruptive solutions using Microsoft technology; but I also see it as evidence that Microsoft has abundant understanding of the difficulties it faces. What it lacks is any coherent strategy for overcoming them, though there are always hints that some such strategy will emerge sometime “soon”.

I think it might. Gates disrupted IBM; he didn’t topple it. But there is going to be some pain.

Postscript: See also this pertinent post from Zoli Erdos, who is looking forward to ditching his desktop software, subject to finding a solution to a couple of unsolved problems:

My bet is on Google or Zoho to get there first. As soon as it happens, I’m going 100% on-demand.


WordPress hacked: where do we go from here?

WordPress founder Matt Mullenweg reports the bad news:

Long story short: If you downloaded WordPress 2.1.1 within the past 3-4 days, your files may include a security exploit that was added by a cracker, and you should upgrade all of your files to 2.1.2 immediately.

This is truly painful and highlights the inherent risk of frequent patching. I haven’t seen any estimates of how many websites installed the hacked code, but I’d guess it is in the thousands; the number of WordPress blogs out there is in the hundreds of thousands. Ironically, it is the most conscientiously administered installations that have been at risk. Personally, I’d glanced at the 2.1.1 release when it was announced, noted that it did not mention any critical security fixes, and decided to postpone the update for a few days. I’m glad I did.

Keeping up-to-date with the latest patches is risky because the patches themselves may be broken or, as in this case, tampered with. On the other hand, not patching means exposure to known security flaws. There’s no safe way here, other than perhaps multi-layered security. All the main operating systems – Windows, OS X, Linux distributions – have automatic or semi-automatic patching systems in place. Applications do this as well. We have to trust in the security of the source servers and the process by which they are updated.

Having said that, there are a few things which can be done to reduce the risk. One is code signing. Have a look at the Apache download site – note the PGP and MD5 links to the right of each download. These let you verify that the download has not been tampered with. Why doesn’t WordPress sign its downloads?*
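For what it’s worth, checking a checksum is trivial to automate. Here is a minimal C# sketch (the file name and published hash are placeholders, not real values) that computes the MD5 of a download and compares it with the value copied from the project’s site – bearing in mind, as discussed below, that this mainly guards against corruption unless the checksum itself is served securely.

```csharp
// Sketch: verify a download against a published MD5 checksum.
using System;
using System.IO;
using System.Security.Cryptography;

class VerifyDownload
{
    static void Main()
    {
        string path = "wordpress-2.1.2.tar.gz"; // the file you downloaded
        string published = "placeholder-hash";  // checksum copied from the site

        using (FileStream fs = File.OpenRead(path))
        {
            byte[] hash = MD5.Create().ComputeHash(fs);
            string actual = BitConverter.ToString(hash)
                                        .Replace("-", "").ToLowerInvariant();
            Console.WriteLine(actual == published
                ? "Checksum matches"
                : "Checksum MISMATCH - do not install");
        }
    }
}
```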

Next question, of course, is how WordPress allowed its site to be hacked. Was it through one of the other known insecurities in the WordPress code, perhaps?

I’m also reminded of recent comments by Rasmus Lerdorf on how PHP does not spoonfeed security. There is a ton of insecure PHP code around; it’s an obvious target for hackers in search of web servers to host their content or send out spam.

*Update: See Mullenweg’s comment to this post. I looked at the download page which does not show the MD5 checksums. If you look at the release archive you can see MD5 links. Apologies. Having said that, why couldn’t the cracker just update the MD5 checksum as well? This is mainly a check for corrupt rather than hacked files. The PGP key used by Apache is better in that it links to the public key of the Apache developers. See here for an explanation.

Perhaps this is a good moment to add that the reaction of the WordPress folk has been impeccable in my view. They’ve acknowledged the problem, fixed it promptly, and are taking steps to prevent a repeat. Nobody should lose confidence in WordPress because of this.


Jitters about Adobe becoming “Microsoft of the web”

Ted Leung is bothered about Adobe becoming too successful with its Flash/Flex/Apollo technology:

Flash has a great cross platform story. One runtime, any platform. Penetration of the Flash Player is basically the same as penetration of browsers capable of supporting big AJAX apps. There are nice development tools. This is highly appealing.

What is not appealing is going back to a technology which is single sourced and controlled by a single vendor. If web applications liberated us from the domination of a single company on the desktop, why would we be eager to be dominated by a different company on the web?

These are valid concerns, though arguably premature – we’ve not seen widespread adoption of Flex yet, let alone Apollo, which is not yet released. But is Adobe’s potential monopoly as dangerous as what we’ve seen on the desktop? My instinct is that it is not, though I don’t pretend to have thought through all the implications, and I don’t like those proprietary Adobe protocols like Action Message Format (AMF) and Real Time Messaging Protocol (RTMP). I also think it will be healthy for the industry if Microsoft gains some momentum with WPF and WPF/E, and if Java stays alive as a client-side platform, simply because competition is our best protection against vendor greed. And as Leung notes, there is also OpenLaszlo.


Who’s coding the Linux OS?

LWN.net has an article (subscriber only until March 1st) on who wrote the current release of the Linux kernel, 2.6.20. The author analyzes the code repository to see who submitted changes and what company they work for. Here are the conclusions:

The end result of all this is that a number of the widely-expressed opinions about kernel development turn out to be true. There really are thousands of developers – at least, almost 2,000 who put in at least one patch over the course of the last year. Linus Torvalds is directly responsible for a very small portion of the code which makes it into the kernel. Contemporary kernel development is spread out among a broad group of people, most of whom are paid for the work they do. Overall, the picture is of a broad-based and well-supported development community.

The top contributing companies are:

Unknown: 19%
Red Hat: 12.8%
None: 11.0%
IBM: 7.3%

Other stats that caught my eye: Novell 3.4%, Intel 3.4%, Sony 2.4%, Nokia 1.6%.

The figures should not be relied on too much (note the large “Unknown” category) but it is still interesting. Contrary to a myth still sometimes peddled, Linux is not primarily the work of hobbyists in back bedrooms or students pulling all-nighters; but nor is it wholly taken over by the usual commercial suspects. I think these are healthy indicators.

Don Dodge has more extracts and commentary.


Can CodeGear make sense of PHP development on Windows?

I had a chat with CodeGear’s David Intersimone and Jason Vokes about Delphi for PHP, following which I wrote a short article for The Register.

I do have reservations about the CodeGear product, though I’ve not seen it yet. My main concerns are first, that CodeGear will find it difficult to work alongside PHP’s open source community; second, that Delphi for PHP will have an unexciting feature set in its first release; and third, that over-reliance on data-binding frameworks may get in the way of lean, fast PHP development. I am not a great enthusiast for data binding, which can all too easily be inefficient, hard to debug, and restrictive in terms of database drivers. I also think the name is silly, and that long-term it makes no sense for Delphi for PHP to have its own IDE, as opposed to using Borland Developer Studio or Eclipse.

Drag-and-drop form building is hardly an exciting feature these days. I’m more interested in aspects like how easily developers and designers can collaborate, or how the IDE helps developers create secure applications, profile performance, or refactor existing spaghetti PHP into something resembling a well-structured application.

Then again, PHP is poorly served by IDEs right now, so there must be an opportunity here. One of the reasons is that setting up to test and debug PHP on Windows is awkward, posing a problem for those who develop on Windows but deploy to Linux web servers. It is an ugly mismatch. Will you use Apache on Windows, or try to get IIS working well with PHP? Presumably you want MySQL as well? Or perhaps run one of those combined installers like XAMPP and hope that all this stuff is being installed in a secure manner and won’t break IIS, ASP.NET, or anything else.

This is before you start thinking about the IDE. Will it be the Zend/Eclipse PHP Development Tools? Or the less official PHPEclipse? Something else? And not forgetting Dreamweaver, which is great for designers but less good for code unless you are happy with the built-in wizards.

It appears that folk often run into difficulties simply getting debugging working sensibly in their PHP setups.

Delphi for PHP will not necessarily be any better. In the past, Borland has not been shy about installing lots of miscellaneous bits onto your system unless you are careful what you click; it may be no different from XAMPP in that respect. Yet if it can pull off a clean installation with a half-decent PHP editor, smooth debugging, and no conflict with our existing Visual Studio / ASP.NET / IIS setups, then that alone will make it a worthwhile proposition.


Got Paint.NET?

I am late with this; Paint.NET 3.0 was released at the end of last month. It deserves more publicity, since it is of high quality. If you have .NET Framework 2.0, download it here.

The application is fine for general use; I may switch to it from my old favourite Paint Shop Pro, for trimming and touching up screen captures. One feature I like is the way it handles multiple documents. A thumbnail of each open document appears at top right, in a fat toolbar; click a thumbnail to switch to that document.

Paint.NET is particularly interesting for developers. It is written in C#, and started out as a design project; as I understand it, one of the intentions was to discover whether Microsoft’s .NET Framework was up to the task, given that image applications do a lot of intensive number-crunching. Most of the code is C# but not quite all. There is a shell extension written in C++ and some use of PInvoke and COM interop. I get the impression that the chief developer Rick Brewster is now more interested in creating an excellent application than in proving a point about .NET.

One point of interest is the use of multi-threading for optimized performance on multicore processors. Brewster has recently posted his performance tests on various processors from two to eight cores:

The 8-core system is frightfully fast, and it’s very clear that having rendering code optimized for multiple threads is a big win. However, I will be honest and state that the performance scaling is not at the level I was hoping for: we’re already seeing diminishing returns at this point! In general, I am seeing gains of about 3.0x on a quad-core system, and 5.1x on an 8-core system (compared to running with only 1 thread). Unfortunately, I do not have an 8-core Opteron system to compare against which might provide some more meaty information to chew on (does it scale better? worse?).

I take his point, though a 5.1x gain on an 8-core system strikes me as decent. I recommend downloading the source code and taking a look; it is well commented and has workarounds for various System.Windows.Forms annoyances. Before you ask the obvious question, Brewster recently commented in the Paint.NET forum that he has not yet looked at WPF (Windows Presentation Foundation).
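The underlying pattern is straightforward to sketch. The following is a minimal illustration of my own of the band-splitting approach (not Paint.NET’s actual code; the invert filter and single-byte pixel buffer are stand-ins): each worker thread gets a disjoint slice of rows, so no locking is needed, and ideal scaling would be linear in the number of cores – contention for memory bandwidth is one reason real numbers fall short of that.

```csharp
// Sketch: split the rows of an image buffer among one thread per core.
using System;
using System.Threading;

class ParallelRender
{
    const int Width = 4096, Height = 4096;
    static byte[] pixels = new byte[Width * Height]; // one byte per pixel, for simplicity

    static void Main()
    {
        int threads = Environment.ProcessorCount;
        int rowsPerThread = Height / threads;
        Thread[] workers = new Thread[threads];

        for (int i = 0; i < threads; i++)
        {
            int start = i * rowsPerThread;
            // The last thread picks up any leftover rows.
            int end = (i == threads - 1) ? Height : start + rowsPerThread;
            workers[i] = new Thread(delegate() { InvertRows(start, end); });
            workers[i].Start();
        }
        foreach (Thread t in workers) t.Join();
        Console.WriteLine("Rendered with {0} threads", threads);
    }

    // Each thread touches a disjoint band of rows, so no locking is needed.
    static void InvertRows(int startRow, int endRow)
    {
        for (int y = startRow; y < endRow; y++)
            for (int x = 0; x < Width; x++)
                pixels[y * Width + x] ^= 0xFF;
    }
}
```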
