Tag Archives: .net

QCon London 2010 report: fix your code, adopt simplicity, cool .NET things

I’m just back from QCon London, a software development conference with an agile flavour that I enjoy because it is not vendor-specific. Conferences like this are energising; they make you re-examine what you are doing and may kick you into a better place. Here’s what I noticed this year.

Robert C Martin from Object Mentor gave the opening keynote, on software craftsmanship. His point is that code should not just work; it should be good. He is delightfully opinionated. Certification, he says, provides value only to certification bodies. If you want to know whether someone has the skills you want, talk to them.

Martin also came up with a bunch of tips on how to write good code, things like never having more than two arguments to a function and never a boolean argument. I’ve written these up elsewhere.
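
By way of illustration, here is a hypothetical sketch of the boolean-argument tip (my example, not Martin’s; the Document and DocumentStore types are made up): a flag argument hides intent at the call site, while two well-named methods make it obvious.

```csharp
public class Document { public string Text = ""; }

public class DocumentStore
{
    // Before: callers must pass a cryptic true/false - Save(doc, true) tells you nothing.
    public void Save(Document doc, bool asDraft)
    {
        if (asDraft) { /* save without publishing */ }
        else { /* save and make live */ }
    }

    // After: two intention-revealing methods, each with a single argument.
    public void SaveDraft(Document doc) { /* save without publishing */ }
    public void Publish(Document doc)   { /* save and make live */ }
}
```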

Next I looked into the non-relational database track and heard Geir Magnusson explain why he needed Project Voldemort, a distributed key-value storage system, to get his ecommerce site to scale. Non-relational storage, or NoSQL, is a big theme these days; database managers like CouchDB and MongoDB are getting a lot of attention. I would like to have spent more time on this track, but there was too much else on – a perennial problem with QCon.

I therefore headed for the functional programming track, where Don Syme from Microsoft Research gave an inspiring talk on F#, Microsoft’s new functional language. He has a series of hilarious slides showing F# code alongside its equivalent in C#. Here is an example:

[slide image]

The white panel is the F# code; the rest of the slide is C#.

Seeing a slide like this makes you wonder why we use C# at all, though of course Syme has chosen tasks like asynchronous IO and concurrent programming for which F# is well suited. Syme also observed that F# is ideal for working with immutable data, which is common in internet programming. I grabbed a copy of Programming F# for further reading.

Over on the Architecture track, Andres Kütt spoke on Five Years as a Skype Architect. His main theme: most of a software architect’s job is communication, not poring over diagrams and devising code structures. This is a consistent theme at QCon and in the Agile movement; get the communication right and all else follows. I was also interested in the technical side though. Skype started with SOAP but switched to a REST model for web services. Kütt also told us about the languages Skype uses: PHP for the web site, C or C++ for heavy lifting and peer-to-peer networking; Delphi for the Windows interface; PostgreSQL for the database.

Day two of QCon was even better. I’ve written up Martin Fowler’s talk on the ethics of software development in a separate post. Following that, I heard Canonical’s Simon Wardley speak about cloud computing. Canonical is making a big push for Ubuntu’s cloud package, available either for private use or hosted on Amazon’s servers; attendees at the QCon CloudCamp later on were given a lavish, pointless cardboard box with promotional details. To be fair, Wardley did not talk much about Ubuntu’s cloud solution, though he did make the point that open source makes transitions between providers much cheaper.

Wardley’s most striking point, repeated perhaps too many times, is that we have no choice about whether to adopt cloud computing, since we will be too much disadvantaged if we reject it. He says it is now more a management issue than a technical one.

Dan North from ThoughtWorks gave a funny and excellent session on simplicity in architecture. He used pseudo-biblical language to describe the progress of software architecture for distributed systems, finishing with

On the seventh day God created REST

Very good; but his serious point is that the shortest, simplest route to solving a problem is often the best one, and that we constantly make the mistake of using over-generalised solutions which add a counter-productive burden of complexity.

North talked about techniques for lateral thinking – finding solutions from which we are mentally blocked – such as chunking up, which means merging details into bigger ideas, ending up with “what is this thing for anyway?”; and chunking down, the reverse process, which breaks a problem into blocks small enough to comprehend. Another idea is to articulate a problem to a colleague, which exercises different parts of the brain and often stimulates a solution – one of the reasons pair programming can be effective.

A common mistake, he said, is to keep using the same old products or systems or architectures because we always have, or because the organisation is already heavily invested in them, meaning that better alternatives never get considered. He also talked about simple tools: a whiteboard rather than a CASE tool, for example.

Much of North’s talk was a variant of YAGNI – you ain’t gonna need it – an agile principle of not implementing something until/unless you actually need it.

I’d like to put this together with something from later in the day, a talk on cool things in the .NET platform. One of these was Guerrilla SOA, though it is not really specific to .NET. To get the idea, read this blog post by Jim Webber, another from the ThoughtWorks team (yes, there are a lot of them at QCon). Here are a couple of quotes:

Prior to our first project starting, that client had already undertaken some analysis of their future architecture (which needs scalability of 1 billion transactions per month) using a blue-chip consultancy. The conclusion from that consultancy was to deploy a bus to patch together the existing systems, and everything else would then come together. The upfront cost of the middleware was around £10 million. Not big money in the grand scheme of things, but this £10 million didn’t provide a working solution, it was just the first step in the process that would some day, perhaps, deliver value back to the business, with little empirical data to back up that assertion.

My (small) team … took the time to understand how to incrementally alter the enterprise architecture to release value early, and we proposed doing this using commodity HTTP servers at £0 cost for middleware. Importantly we backed up our architectural approach with numbers: we measured the throughput and latency characteristics of a representative spike (a piece of code used to answer a question) through our high level design, and showed that both HTTP and our chosen Web server were suitable for the volumes of traffic that the system would have to support … We performance tested the solution every single day to ensure that we would always be able to meet the SLAs imposed on us by the business. We were able to do that because we were not tightly coupled to some overarching middleware, and as a consequence we delivered our first service quickly and had great confidence in its ability to handle large loads. With middleware in the mix, we wouldn’t have been so successful at rapidly validating our service’s performance. Our performance testing would have been hampered by intricate installations, licensing, ops and admin, difficulties in starting from a clean state, to name but a few issues … The last I heard a few weeks back, the system as a whole was dealing with several hundred percent more transactions per second than before we started. But what’s particularly interesting, coming back to the cost of people versus cost of middleware argument, is this: we spent nothing on middleware. Instead we spent around £1 million on people, which compares favourably to the £10 million up front gamble originally proposed.

This strikes me as an example of the kind of approach North advocates.
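
To make the “commodity HTTP servers” idea concrete, here is a minimal sketch of a plain HTTP endpoint built with nothing beyond the .NET base class library. This is my own illustration of the style, not code from Webber’s project; the URL and the order-status payload are invented.

```csharp
using System;
using System.Net;
using System.Text;

class OrderStatusService
{
    static void Main()
    {
        // A plain HTTP endpoint: no service bus, no middleware licence, just the BCL.
        // (On Windows this prefix may need admin rights or a urlacl reservation.)
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/orders/");
        listener.Start();
        Console.WriteLine("Listening on http://localhost:8080/orders/");

        while (true)
        {
            // Blocks until a request arrives; a real service would handle requests concurrently.
            HttpListenerContext context = listener.GetContext();
            byte[] body = Encoding.UTF8.GetBytes(
                "{ \"resource\": \"" + context.Request.Url.AbsolutePath + "\", \"status\": \"shipped\" }");
            context.Response.ContentType = "application/json";
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.Close();
        }
    }
}
```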

You may be wondering what other cool .NET things were presented. This session, called State of the Art .NET, was given by Amanda Laucher and Josh Graham. They offered a dozen items which they consider .NET folk should be using or learning about:

  1. F# (again)
  2. M – modelling/DSL language
  3. Boo – statically typed, Python-like language for .NET
  4. NUnit – unit testing (see the sketch after this list). Little regard for Microsoft’s test framework in Team System, which is seen as a wasted and inferior effort.
  5. RhinoMocks – mocking library
  6. Moq – another mocking library
  7. NHibernate – object-relational mapping
  8. Windsor – dependency injection, part of Castle project. Controversial; some attendees thought it too complex.
  9. NVelocity – .NET template engine
  10. Guerrilla SOA – see above
  11. Azure – Microsoft’s cloud platform – surprisingly good thanks to David Cutler’s involvement, we were told
  12. MEF – Managed Extensibility Framework as found in Visual Studio 2010, won high praise from those who have tried it
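
As a flavour of items 4 and 6, here is a minimal sketch of an NUnit test that uses Moq to stub out a dependency; the IPriceFeed interface and Portfolio class are invented purely for illustration.

```csharp
using Moq;
using NUnit.Framework;

public interface IPriceFeed
{
    decimal GetPrice(string symbol);
}

public class Portfolio
{
    private readonly IPriceFeed feed;
    public Portfolio(IPriceFeed feed) { this.feed = feed; }
    public decimal Value(string symbol, int quantity) { return feed.GetPrice(symbol) * quantity; }
}

[TestFixture]
public class PortfolioTests
{
    [Test]
    public void Value_Multiplies_Price_By_Quantity()
    {
        // Moq supplies a fake price feed, so the test needs no real market data.
        var feed = new Mock<IPriceFeed>();
        feed.Setup(f => f.GetPrice("MSFT")).Returns(30m);

        var portfolio = new Portfolio(feed.Object);

        Assert.AreEqual(300m, portfolio.Value("MSFT", 10));
    }
}
```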

That was my last session (I missed Friday) though I did attend the first part of CloudCamp, an unconference for cloud early adopters. I am not sure there is much point in these now. The cloud is no longer subversive and the next new thing; all the big enterprise vendors are onto it. Look at the CloudCamp sponsor list if you doubt me. There are of course still plenty of issues to talk about, but maybe not like this; I stayed for the first hour but it was dull.

For more on QCon you might also want to read back through my Twitter feed or search the entire #qcon tag for what everyone else thought.

Windows Phone 7 incompatibility may drive developers elsewhere

Microsoft’s Charlie Kindel has blogged about the Windows Phone 7 development platform.

As widely leaked, the new mobile device supports Silverlight and XNA; Kindel also mentions .NET, but since both Silverlight and XNA are .NET platforms, that might not mean anything additional.

The big story is about compatibility:

To deliver what developers expect in the developer platform we’ve had to change how phone apps were written. One result of this is previous Windows mobile applications will not run on Windows Phone 7 Series.

This puts Microsoft in an awkward position. Support for custom business apps has been one of the better aspects of Windows Mobile. What Microsoft should do is provide some way of continuing to run those old apps on the new devices. Instead, Kindel adds:

To be clear, we will continue to work with our partners to deliver new devices based on Windows Mobile 6.5 and will support those products for many years to come, so it’s not as though one line ends as soon as the other begins.

I would not take much account of this. No doubt there will be some devices, but demand for Windows Mobile will dive through the floor (if it has not already) once Phone 7 is available, making it an unattractive proposition for hardware partners.

The danger for Microsoft is that after this let-down, those with existing Windows Mobile apps that are now forced to choose a new development platform might choose one from a competitor.

The mitigation is that apps which use the Compact Framework will likely be easier to port to Windows Phone 7, because the language is the same. Native code apps are a different matter. Of course it will be technically possible to write native code apps for Windows Phone 7, but probably locked down and restricted to special cases, such as perhaps the Adobe Flash runtime (I am speculating here).

PS – I see that developer Thomas Amberg has articulated exactly these concerns in a comment to Kindel’s post:

Platform continuity was the single most important feature of Windows Mobile. Being able to run code from 2003 on a current phone is more important to our customers than a fancy UI (which Microsoft seems not able to get right anyway). Further, the ability to access hardware specific APIs through P/Invoke has been vital in many of our projects (e.g. to use Bluetooth in the early days). Those advantages have now gone. You just rendered useless years of development work and many thousands of lines of code.

"we will continue to work with our partners to deliver new devices based on Windows Mobile 6.5 and will support those products for many years to come"

You will, I bet. But which device manufacturer will produce such "dead-end" devices?

Time to switch to another mobile OS.

Microsoft .NET gotchas revealed by Visual Studio team

The Visual Studio Blog makes great reading for .NET developers, and not only because of the product it describes. Visual Studio 2010 is one of the few Microsoft products to have made the transition from native C++ code to .NET managed code – the transition is partial, in that parts of Visual Studio remain native, but it does include the shell and the editor, two of the core components. Visual Studio is also a complex application, and one that is extensible by third parties. Overall, the development team has put the .NET platform under stress, which is good for the rest of us, because the developers are in a strong position both to understand problems and to get them fixed, even if that means changes to the .NET Framework.

Two recent posts interested me. One is Marshal.ReleaseComObject Considered Dangerous. I have some familiarity with this obscure-sounding topic, thanks to work on embedding Internet Explorer components. It relates to a fundamental feature of .NET: the ability to interact with the older COM component model, which is still widely used. In fact, Microsoft still uses COM for new Windows 7 APIs; but I digress. A strong feature of .NET from its first release is that it can easily consume COM objects, and also expose .NET objects to COM.

The .NET platform manages memory using garbage collection, where the runtime detects objects that are no longer referenced by active code and deletes them. COM on the other hand uses reference counting, maintaining a count of the number of references to an object and deleting the object when it reaches zero.

Visual Studio 2008 and earlier have lots of COM APIs. Some of these were called from .NET code which, for the sake of efficiency, called the method mentioned above, Marshal.ReleaseComObject, to reduce the reference count immediately so that the COM object would be deleted.

Now here comes Visual Studio 2010, and some of those COM APIs are re-implemented as .NET code. For compatibility with existing code, the new .NET code is also exposed as a COM API. Some of that existing code is actually .NET code which wraps the COM API as .NET code. Yes, we have .NET to COM to .NET, a double wrapper. Everything still works though, until you call Marshal.ReleaseComObject on the doubly-wrapped object. At this point the .NET runtime throws up its hands and says it cannot decrement the reference count, because it isn’t really a COM object. Oops.

The post goes on to observe that Marshal.ReleaseComObject is dangerous in any case, because it leaves you with an invalid .NET wrapper. This means you should only call it when the .NET instance is definitely not going to be used again. Obvious, really.
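
Here is a minimal sketch of the defensive pattern: create the COM object, use it and release it in one place, and check Marshal.IsComObject first so that a managed object hiding behind a .NET-implemented API does not blow up. The Shell.Application ProgID is just a familiar example, not anything from the Visual Studio post.

```csharp
using System;
using System.Runtime.InteropServices;

class ComReleaseExample
{
    static void Main()
    {
        // Create a COM object from its ProgID; "Shell.Application" is purely illustrative.
        Type shellType = Type.GetTypeFromProgID("Shell.Application");
        object shell = Activator.CreateInstance(shellType);
        try
        {
            // ... use the object, via late binding or an interop interface ...
        }
        finally
        {
            // Only safe because the wrapper is created, used and released in one place,
            // and no other .NET code will touch this instance afterwards.
            // The IsComObject check guards against the "double wrapper" case described above.
            if (shell != null && Marshal.IsComObject(shell))
            {
                Marshal.ReleaseComObject(shell);
            }
        }
    }
}
```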

Once you’ve digested that, try this illuminating post on WPF in Visual Studio 2010 – Part 2 : Performance tuning. WPF, or Windows Presentation Foundation, is the .NET API for rich graphical user interfaces on desktop Windows applications. Here is an example of why you should read the post, if you work with WPF. Many of us frequently use Remote Desktop to run applications on remote PCs or PCs that do not have a screen and keyboard attached. This is really a bad scenario for WPF, which is designed to take advantage of local accelerated graphics. Here’s the key statement:

Over a remote desktop connection, all WPF content is rendered as a bitmap. This is in contrast to GDI rendering, where primitives such as rectangles and text are sent over the wire for reconstruction on the client.

It’s a bad scenario, but mitigated if you use graphics that are amenable to compression, like solid colours. There are also some tweaks introduced in WPF 4.0, like the ability to scroll an area on the remote client, which saves having to re-send the entire bitmap if it has moved.
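
If you want your WPF code to adapt, the framework exposes enough information to detect the situation at runtime. A small sketch of my own, not from the Visual Studio team’s post:

```csharp
using System.Windows;
using System.Windows.Media;

public static class RenderingInfo
{
    public static string Describe()
    {
        // True when the application is running in a Remote Desktop session,
        // where WPF content is shipped to the client as bitmaps.
        bool remote = SystemParameters.IsRemoteSession;

        // RenderCapability.Tier stores the tier in the high word: 0 means no
        // hardware acceleration, so prefer simple, compressible visuals.
        int tier = RenderCapability.Tier >> 16;

        return string.Format("Remote session: {0}, rendering tier: {1}", remote, tier);
    }
}
```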

Mono Tools for Visual Studio: code on Windows, run on Linux

I have just come across Mono Tools, a Novell add-in for Visual Studio that lets you test Mono compatibility. It adds a Mono menu which has options to run locally or remotely in Mono, analyze for compatibility issues, and create deployment packages. No sign of Mac support, which is a missed opportunity, but understandable given that Novell owns SUSE Linux.

For those few still unfamiliar with Mono, it is an open source implementation of Microsoft’s .NET Framework, enabling your .NET applications to run on other platforms. One compelling use is to have your ASP.NET web applications run on the free Apache web server, rather than Microsoft’s IIS.
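
As an aside, code that needs to behave slightly differently on Mono can use the well-known runtime check below: the Mono.Runtime type exists only in Mono’s class library, so looking it up is the conventional way to branch at runtime. A tiny sketch:

```csharp
using System;

static class RuntimeInfo
{
    // Mono's corlib defines a Mono.Runtime type; Microsoft's .NET Framework does not.
    public static bool IsMono()
    {
        return Type.GetType("Mono.Runtime") != null;
    }

    static void Main()
    {
        Console.WriteLine(IsMono() ? "Running on Mono" : "Running on Microsoft .NET");
    }
}
```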

Mono Tools works with both Windows Forms and web projects.

This is just the sort of thing Mono needs to move it further into the mainstream, though another less welcome sign of business acceptance is that this is a commercial product, currently costing $99.00 for an individual or $249.00 per seat in an organization. There is also an Ultimate edition at $2,499, which comes with a commercial non-LGPL license to redistribute Mono.

The Mono Tools team is now looking for testers for its 1.1 edition, which supports Visual Studio 2010.

Visual Studio 2010 RC arrives with go-live license

Microsoft has made the Release Candidate of Visual Studio 2010 available for download to MSDN subscribers. From tomorrow (10th February) the same release will be available to everyone. There is a go-live license so you can use this in production if you wish, though if the full release comes in April as planned, it hardly seems worth it in most scenarios.

What’s new since the beta? Jason Zander says mainly performance. Note that the Chief Architect of Visual Studio is Rico Mariani, formerly Microsoft’s .NET performance guru, which is encouraging in this respect.

The blow-by-blow account of issues with the RC is here.

Whatever your views on the direction and future of Microsoft’s platform, there’s no doubting the huge scope of this release, though in my view the company has not communicated this particularly well: it says too much about things like SharePoint development – top of its list of walkthroughs, but still an ugly business – and not enough about features such as IntelliTrace debugging, or the new ability to float windows out of the IDE and onto a second display, which will have a more immediate impact on developers. Note that the Visual Studio IDE has been re-built using WPF (Windows Presentation Foundation), and that it comes with the first completely new version of the .NET Framework since 2005.

Silverlight 4.0 is another area of interest, though I understand that it will not be complete in time for this release. Visual Studio 2010 will have Silverlight 3.0 out of the box, with the ability to install the 4.0 preview release and eventually the final release as an add-on. I’ve also heard that Silverlight 4.0 is not yet supported at all in the RC, so be cautious if this is your area of work – you may need to stick with the last beta for the moment.

New is not always better, of course. I’m interested in hearing from developers working with Visual Studio 2010 – whether performance and stability issues have been overcome, and what you think of it overall.

What’s new in Visual Studio 2010 – more than you may realise

I’m beginning to think Microsoft has under-sold Visual Studio 2010. Of course it is a huge product, as I observed back in October, especially since it includes a major new release of the .NET Framework as well as updated tools, but I thought I had discovered most of the significant new features. Still, when I sat down recently to write up an extended review, I found a lot that I had missed.

One of my reflections on this is that Microsoft has done a poor job of communicating what is new. I attended the Professional Developers Conference in 2008 and 2009. The developer-focused keynote on the second day last November should have hyped the best of what is new; instead we got Steven Sinofsky on Windows 7 quality control – hardly the most exciting of topics – a sneak preview of IE 9, an unconvincing tour of SharePoint and Office 2010, and Scott Guthrie on Silverlight 4. Guthrie was fantastic, leading us blow by blow through Silverlight’s new capabilities, but much else was neglected.

It doesn’t help that Microsoft’s home page for Visual Studio 2010 has meaningless headlines. “Set your ideas free”, “Simplicity through integration”, “Quality tools help ensure quality results.” Pure fluff, which saps your will to read further.

Here are a few things that I found interesting – nothing like comprehensive, just features that perhaps have not had the attention they deserve.

Microsoft F# – a new language from Microsoft Research, integrated into Visual Studio with remarkable speed. The people I’ve spoken to who have taken the time to discover what it does are truly enthusiastic. Some of its strengths are parallelism, asynchronous programming, graphics manipulations, and maths. You probably won’t write a complete application in F#, but it will be great for assembling libraries.

Windows Workflow Foundation 4.0 – potentially a new and effective approach to visual programming and long-running state management. Flow charts are often used to teach programming, since they express common concepts like if conditions visually. WF lets you draw a process as a flow chart – or there are other types of chart – using the nice new WPF design tools, and then execute it in the runtime, which is part of the “Dublin” extensions to IIS, now known as Windows Server AppFabric (I have no clue why this confusing name was chosen). To get the idea, I suggest reading David Chappell’s Workflow Way. For applications that fit this kind of model, it is a compelling approach, and integrates well with Windows Communication Foundation for messaging.
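
To give a flavour of the programming model, here is a minimal sketch that declares a trivial workflow in code and runs it in-process; real workflows would more likely be drawn in the designer as XAML and hosted in IIS/AppFabric. The workflow content itself is invented.

```csharp
using System.Activities;
using System.Activities.Statements;

class WorkflowDemo
{
    static void Main()
    {
        // A trivial two-step workflow declared in code rather than in the WPF designer.
        Activity workflow = new Sequence
        {
            Activities =
            {
                new WriteLine { Text = "Order received" },
                new WriteLine { Text = "Order approved" }
            }
        };

        // WorkflowInvoker runs the workflow synchronously in the current process;
        // long-running, persisted workflows would use WorkflowApplication or AppFabric hosting.
        WorkflowInvoker.Invoke(workflow);
    }
}
```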

Dotfuscator – I know this is a third-party thing, but this is no longer just a tool for obscuring your .NET assemblies in the hope of preventing decompilation. The new Dotfuscator does runtime analytics, and can report back to a portal when your application runs, what features you use, what operating system it is on, whether it crashed, and so on. It also supports application expiry, known as “shelf life”, and can detect if assemblies have been tampered with. Some of this sails close to the spyware wind, but this is a matter of getting informed user consent. These are interesting features for Windows desktop developers, if there are any left, and even the free edition is quite capable.

Test and Lab management – a challenge to set up and configure, but when it works, amazing. Lab Management uses Visual Studio, Hyper-V and System Center Virtual Machine Manager to automate deploying an application over one or more VMs, so you can run tests against it. This hooks into Team System so you can file a bug report with a link that actually shows the bug happening at runtime, with a snapshot of the virtual environment.

Step backwards through code – IntelliTrace is a new feature of the Visual Studio debugger. Configure it to collect IntelliTrace events and call information, and you can then step backwards as well as forwards from a breakpoint, examining variable values as they change.

Team Foundation Server Basic – what this means is that even a solo Visual Studio developer can have TFS running locally or on a networked machine for source code management, issue tracking and so on. It’s worth considering because of the way it integrates with the IDE. I admit, I still like Subversion which I have on a remotely hosted server, since it acts as an effective off-site backup, but I’d much rather use TFS Basic than nothing.

UML – Microsoft has finally done what it should have done years ago, and implemented a wide range of up-to-date UML diagram tools. Nothing revolutionary, just useful.

Not everything is wonderful in the new Visual Studio. Deploying to Azure remains clunky in Beta 2 – when is this going to get better? SharePoint is another one; I appreciate the value of F5 debugging, but you still need SharePoint installed locally, with great potential for mucking up IIS, and the whole thing feels unwieldy.

The end of Code Access Security in Microsoft .NET

In the early days of .NET I remember being hugely impressed by Code Access Security. It gave administrators total control over what .NET code was permitted to run. It’s true that the configuration tool was a little intimidating, but there were even wizards to adjust .NET security, trust an assembly, or fix an application – great idea, that last one.

Well, now the truth is out. Code Access Security was too complex for humans to configure. Buried deep in the documentation for .NET Framework 4.0 you can find Microsoft’s confession, under the heading Security Policy Simplification:

In the .NET Framework 4 Beta 2, the common language runtime (CLR) is moving away from providing security policy for computers. Historically, the .NET Framework has provided code access security (CAS) policy as a mechanism to tightly control and configure the capabilities of managed code. Although CAS policy is powerful, it can be complicated and restrictive. Furthermore, CAS policy does not apply to native applications, so its security guarantees are limited. System administrators should look to operating system-level solutions such as Windows Software Restriction Policies (SRP) as a replacement for CAS policy, because SRP policies provide simple trust mechanisms that apply to both managed and native code. As a security policy solution, SRP is simpler and provides better security guarantees than CAS.

The section below, headed Obsolete Permission Requests, is even more damning of the old system:

Runtime support has been removed for enforcing the Deny, RequestMinimum, RequestOptional, and RequestRefuse permission requests. In general, these requests were not well understood and presented the potential for security vulnerabilities when they were not used properly.

It goes on to explain why they did not work, with explanations like this one for RequestOptional:

RequestOptional was confusing and often used incorrectly with unexpected results. Developers could easily omit permissions from the list without realizing that doing so implicitly refused the omitted permissions.

The new .NET Framework 4.0 no longer enforces these obsolete permissions.
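
For context, here is roughly what the now-obsolete declarative requests looked like: assembly-level attributes, compiled into the manifest, that the version 4 runtime no longer enforces. The file path is just an example.

```csharp
using System.Security.Permissions;

// Pre-.NET 4.0 style declarative security. RequestMinimum meant "refuse to load
// unless these permissions are granted"; RequestOptional implicitly refused every
// permission not listed, which is the trap the documentation describes.
[assembly: SecurityPermission(SecurityAction.RequestMinimum, Execution = true)]
[assembly: FileIOPermission(SecurityAction.RequestOptional, Read = @"C:\ExampleData")]
```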

Microsoft is right. As far as I’m aware, few used the .NET Configuration tool, and I cannot even find it in Windows 7, even though Visual Studio and all the versions of the .NET Framework are installed. Developers feared, with justification, that tinkering with the settings would simply cause mysterious exceptions that were hard to resolve.

I recall though that Code Access Security was considered a highly strategic feature when .NET was first released. One of the promises of .NET was that applications would be more secure and malware less prevalent. The fine-grained permissions were a selling point versus Java.

The painful lesson is that simplicity is a feature. Of course some things are inherently complex; but technology succeeds when it simplifies rather than complicates the tasks that we face.

Silverlight 3 is out

Microsoft has released Silverlight 3, though some pieces of the platform are still not done – it seems there is always something to wait for.

There are links to the tools developers and designers need to install here:

http://silverlight.net/GetStarted/

Note that Expression Blend and Sketchflow are still at Release Candidate stage.

The .NET RIA services, a server-side piece that simplifies authentication and database operations, is available in a new July 2009 preview:

http://www.microsoft.com/DOWNLOADS/details.aspx?FamilyID=76bb3a07-3846-4564-b0c3-27972bcaabce&displaylang=en#filelist

See this excellent post by Nikhil Kothari for more on RIA Services – it’s from March but does a good job of explaining what they are about.

Using Silverlight 3, or planning to? I’d love to hear from you, along with your views on what is best and what is worst about Microsoft’s RIA efforts.