Category Archives: database


Adobe news: Flash Builder 4, Creative Suite 5, quarterly results

Lots of Adobe news this week.

First, the release of Flash Builder 4. It seems a long time ago that I was looking at the first preview of the product code-named Gumbo; it’s good to see this finally released. Since it is Eclipse-based, it looks similar to Flex Builder 3.0; but under the covers there is the new Flex 4 SDK with the Spark component architecture. The design tools have been revamped, and a time-saving feature is that you can now generate an event handler with one click. Flash Builder 4 also has built-in unit testing with FlexUnit, which is a big deal for those enlightened folk who do test-driven development.

Adobe has also worked hard on database connectivity. Flash Builder 4 will generate wrapper code for a variety of data sources, including HTTP and REST, PHP, SOAP, and Adobe’s LiveCycle Data Services middleware.


There is a new data/services panel that shows all the available sources, with drag-and-drop data binding, batch updates, and other handy features.

There are a few downsides to Flash Builder 4. ActionScript feels dated if you have been playing with something like C# 4.0, soon to be released as part of Microsoft’s Visual Studio 2010. I’ve also heard complaints that projects built with Flex are larger than equivalents built with the Flash IDE. The naming is puzzling; we now have to distinguish between the Flash IDE and the Flash Builder IDE, which are completely different products, but the SDK for code-centric development is still called Flex. There is no support yet for AIR 2.0, the latest version of the desktop runtime; nor for the much-hyped iPhone app development. Patience is called for, I guess.

More information on Flex and Flash Builder here.

The next big product launch from Adobe will be Creative Suite 5, for which a launch date of Monday, 12th April has been announced. You can sign up for an online launch event and see some sneak peek videos here.

Finally, Adobe released quarterly financial figures today. The company says they are strong results; revenue is 9.1% higher than last year and GAAP earnings are positive (unlike the last quarter). However, looking at the investor datasheet [PDF] I noticed that new analytics acquisition Omniture now accounts for 10% of revenue; if you deduct that from the increase it does not look so good. Still, a profit is a profit, and the quarter before a major update to CS 5.0 may be under par as users wait for the new release,  so overall it does not look too bad.  The Q1 Earnings Call is worth looking at if only for its nice indexing; I wish all online videos worked like this.

One questioner asked about HTML 5 – “how quickly can you provide support when it comes”? An intriguing question. I suspect it reflects more on the publicity around Flash vs HTML than on the progress of the HTML 5 standard itself, which is coming in fits and starts. “The reality is that it’s a fragmented standard, but we will continue to support it”, was the answer from CEO Shantanu Narayen, though he added a plug for the “benefits of our runtime, which is Flash”.

MySQL comes to Amazon’s cloud. Anyone for Quadruple Extra Large?

Amazon has announced the Amazon Relational Database Service:

Amazon RDS gives you access to the full capabilities of a familiar MySQL database. This means the code, applications, and tools you already use today with your existing MySQL databases work seamlessly with Amazon RDS. Amazon RDS automatically patches the database software and backs up your database, storing the backups for a user-defined retention period. You also benefit from the flexibility of being able to scale the compute resources or storage capacity associated with your relational database instance via a single API call. As with all Amazon Web Services, there are no up-front investments required, and you pay only for the resources you use.

The cost starts at $0.11 per hour for a small database instance (1.7GB RAM, 1 virtual core, 64-bit), increasing in stages as more power is required. The engagingly-titled “Quadruple Extra Large DB Instance” offers 68GB RAM and 8 virtual cores, at $3.10 per hour.

In addition, you pay for database storage at $0.10 per GB-month, $0.10 per 1 million I/O requests, $0.10 per GB transferred in, and $0.17 per GB transferred out.

That’s a worrying collection of charges, but the usual logic applies: if you need a hefty database server for a defined period, say to cover a special event, this will work out more cost-effective than installing your own servers.
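To see how that collection of charges adds up in practice, here is a back-of-envelope cost calculator in Python, using the prices quoted above. The workload figures in the example (storage, I/O, transfer) are invented purely for illustration:

```python
# Rough monthly cost estimate for an Amazon RDS instance, summing the
# separate charges listed above: instance-hours, storage, I/O requests,
# and data transfer. Workload numbers below are illustrative only.

HOURS_PER_MONTH = 730  # average hours in a month

def rds_monthly_cost(hourly_rate, storage_gb, million_io_requests,
                     gb_in, gb_out):
    """Combine the separate RDS charges into one monthly figure."""
    return (hourly_rate * HOURS_PER_MONTH      # instance-hours
            + 0.10 * storage_gb                # storage, per GB-month
            + 0.10 * million_io_requests       # per 1 million I/O requests
            + 0.10 * gb_in                     # transfer in, per GB
            + 0.17 * gb_out)                   # transfer out, per GB

# A small instance ($0.11/hour) with 20GB storage, 50 million I/O
# requests, 5GB in and 20GB out comes to roughly $91 for the month.
cost = rds_monthly_cost(0.11, 20, 50, 5, 20)
print(round(cost, 2))
```

Run the same sums for the Quadruple Extra Large instance and the instance-hours alone come to over $2,200 a month, which is why the “defined period” point matters.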

You can also install MySQL or other database servers on general-purpose Amazon EC2 instances, but this pre-built solution looks attractive.

Microsoft started its cloud database initiative with a preview of SQL Server Data Services, offering a limited database API more like Amazon SimpleDB. Then Microsoft decided to offer full SQL Server on its Azure cloud. However, Azure is still a Community Tech Preview, and during the interim period Amazon has come up with its own fully relational solution.


Why the EU should not worry about Oracle and MySQL

The European Commission is examining Oracle’s acquisition of Sun and has concerns about the implications for MySQL:

Competition Commissioner Neelie Kroes said: “The Commission has to examine very carefully the effects on competition in Europe when the world’s leading proprietary database company proposes to take over the world’s leading open source database company. In particular, the Commission has an obligation to ensure that customers would not face reduced choice or higher prices as a result of this takeover. Databases are a key element of company IT systems. In the current economic context, all companies are looking for cost-effective IT solutions, and systems based on open-source software are increasingly emerging as viable alternatives to proprietary solutions. The Commission has to ensure that such alternatives would continue to be available”.

The most remarkable thing about this investigation is that it exists. One of the supposed benefits of open source is that, come what may, your product cannot be abandoned at the whim of some commercial giant; you have the code, and as long as a viable community of users and developers exists, its future is in your hands. So why is the EU worried?

The issue, I suppose, is that while Oracle cannot remove code from the community, it would have the power to disrupt MySQL – arguably the uncertainty is doing that already. It could refuse to invest in further development, and encourage customers with support agreements to move to the latest Oracle solution instead. I am not saying that is likely; I have no idea what Oracle plans, and it already owns Innobase, which supplies the most widely-used transactional engine for MySQL, without obvious adverse effects.

Still, it is important to think clearly about the case. I’ve just been talking to Simon Cattlin at Ingres, who is using the opportunity to mention that worried MySQL customers are making enquiries at his company. He also argues that the EU’s intervention proves the increasing importance of open source technology.

That latter point is true; but there is some doublethink going on here. There are two sides to MySQL. On one side it’s powering a zillion mostly non-critical web applications for free, while on the other it is a serious business contender covered by support contracts. It is all the free users that make it “the world’s leading open source database company”, not the relatively small number of commercial licensees; and it was Sun’s failure to shift users from one to the other that accounted (among other things) for its decline.

So which of these groups is the EU concerned about? If it’s the free users, I don’t think it should worry too much. The existing product works, the community will maintain it, and forks are already appearing, not least MariaDB from a company started by MySQL creator Monty Widenius.

On the other hand, if it is the Enterprise users, I don’t think the EU should worry either, because it is not a big enough deal to warrant anti-competitive concerns. Cattlin told me that Ingres actually had higher revenue than MySQL at the time of the Sun takeover.

It makes no sense to conflate the free and commercial users into one, and use the number of free users to justify action which mainly concerns the commercial users.

That said, it’s true that having an open source product owned and mainly developed by a commercial company is always somewhat uncomfortable. One of the reasons the Apache web server succeeds is because it belongs to an independent foundation. There is rarely a clean separation between what is commercial and what is open source though: the money has to come from somewhere, and entities like Apache and Eclipse survive on staff and funds contributed by profit-making companies.


SQLite C# port raises hopes for a Silverlight local database manager

Yesterday programmer Noah Hart announced a port of SQLite to C#:

I am pleased to announce that the C# port is done to the point where others can look at it.

Unfortunately the code was taken offline almost immediately afterwards, thanks to the intervention of the author of SQLite, D Richard Hipp:

Noah, you are welcomed, even encouraged, to take the source code to SQLite and translate it in any way you want and do whatever you want with it. But you need to make it abundantly clear to everyone on your site and in the comments of your source code that your code is not the original SQLite … SQLite is a registered trade mark. If I don’t defend the trademark, then I could lose it. So, I really do need to insist that you not use the name "SQLite" for your product.

The reason given is that Dr Hipp does not want to receive support requests for the port, though the intervention is a little surprising since there are other third-party adaptations out there that do use the SQLite name; these generally modify or wrap the original code rather than porting it completely.

Still, Hart has taken it in his stride and it looks as if the code may be back soon under the name sqlsharp – a Google code project with that name has been created. I hope this is the name since I suggested it, though it is rather an obvious one and I might not have been the first.

Why the interest? First, it’s always interesting to compare languages. Currently, Hart says his executable compiles to 528kb vs 506kb for the native version, and performs 3-5 times more slowly (results in rows per second):

Test      SQLite3 C#   SQLite3
Inserts   300K         1300K
Selects   1500K        8450K
Updates   60K          300K
Deletes   250K         700K

Although that may seem disappointing, SQLite is remarkably fast so even 5 times slower is still acceptable in many contexts; and there are no doubt many possibilities for optimisation.
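Figures like those in the table boil down to timing a batch of statements and dividing row count by elapsed time. As a rough illustration of the method (using Python’s built-in SQLite binding here, not Hart’s C# harness, so the numbers will not match his):

```python
# Minimal sketch of a rows-per-second micro-benchmark against SQLite,
# via Python's standard sqlite3 module and an in-memory database.
import sqlite3
import time

def rows_per_second(n=10_000):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

    # Time n inserts in one transaction, as a bulk-insert test would.
    start = time.perf_counter()
    con.executemany("INSERT INTO t (val) VALUES (?)",
                    (("x",) for _ in range(n)))
    con.commit()
    insert_rate = n / (time.perf_counter() - start)

    # Time reading every row back.
    start = time.perf_counter()
    rows = con.execute("SELECT id, val FROM t").fetchall()
    select_rate = len(rows) / (time.perf_counter() - start)

    con.close()
    return insert_rate, select_rate

ins, sel = rows_per_second()
print(f"inserts: {ins:,.0f}/s, selects: {sel:,.0f}/s")
```

Results from this sort of harness are sensitive to transaction boundaries and caching, which is one reason port comparisons need identical test scripts on both sides.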

What’s the point? Hart says it was a C# learning exercise, which is fair enough. Others are hopeful for a local database manager for Microsoft Silverlight, writing to isolated storage. Competitor Adobe AIR includes SQLite in the runtime, as does Google Gears.

Silverlight may be a stretch for Hart’s port. Silverlight does not allow platform invoke or code marked as unsafe; and while there are apparently only a few p/invoke calls, I’m guessing there may be many unsafe sections, since the original SQLite makes heavy use of pointers.*

Although Silverlight is an implementation of the .NET Framework, it does not include the System.Data namespace. It does include System.Linq.

There are a few other efforts at creating a local database manager for Silverlight, including McObject’s Perst, db4o (work in progress), and Silverlight Database which works by persisting XML.

*Update: the project has now been published as csharp-sqlite, which is an excellent name; it looks as if Hipp relented to some extent. Now that I’ve seen the code I find I’m wrong about unsafe sections. In fact, I added C#-Sqlite to a Silverlight project and it failed to compile with a mere 53 errors, many of them related to file locking – possibly less necessary in isolated storage? A Silverlight port looks feasible.

The end of the Borland story: acquired by Micro Focus

It is not unexpected, but still sad to see loss-making Borland acquired by Micro Focus for a knock-down price of $75m. Borland’s release says little beyond the financial details. Micro Focus, which is also acquiring Compuware’s ASQ (Automated Software Quality) tools (such as QADirector, DevPartner and Optimal Trace, I presume) says:

Acquiring Borland and the Compuware Testing and ASQ Business will give Micro Focus a leading market position in the highly complementary Application Testing / ASQ market. This market is estimated to be worth c.US$2 billion a year and is logically adjacent to Micro Focus’ core application management and modernization business.  The move into the ASQ market is consistent with Micro Focus’ stated strategy of extending in logically adjacent segments to expand its addressable market.

Why sad? Well, if you were around in the eighties and nineties you will remember a bold company which came up with a series of excellent products: Turbo Pascal, Borland C/C++, Quattro Pro, Paradox, and of course the incomparable Windows development tool Delphi. The visual development model in Delphi was successfully transitioned to Java in the JBuilder product, which in its early versions used a Delphi-compiled IDE.

These developer-focused products live on, of course, mostly in the hands of Embarcadero. The Borland that has been acquired is what was left when, in my developer-centric opinion, the best parts had already left.

What went wrong at Borland? It is mostly the victim of changes in the industry, made worse today by the economic downturn. It was a tools company, and the tools market was hit by the double blow of excellent open-source competition on one side (Eclipse, GCC) and vendor-subsidised tools on the other (Visual Studio).

Still, there were some spectacular own goals along the way. The 1991 acquisition of Ashton-Tate, at the time the market leader in PC database managers, was one, mainly because dBASE IV was not very good and did nothing to help Borland transition to Windows; in any case, Borland already had a better product in the form of Paradox.

Talking of Paradox, Paradox for Windows was another disaster. Wonderful product, but mostly incompatible with its DOS predecessor, and probably a tad too complex as well. It also had to compete with Microsoft Access, which was both cheaper and part of the impregnable Microsoft Office suite.

The company made up for it with Delphi; but even that under-performed relative to its quality. Enterprises felt safer with Microsoft’s Visual Basic. JBuilder did well at first; but its market share diminished rapidly in the face of competition from Eclipse and NetBeans. In retrospect, Borland should have made its core Java IDE free much earlier, to build a community round it, though competing with free is never easy.

Since it was so hard making money out of compilers and IDEs, Borland changed tack in order to target Enterprise ALM (Application Lifecycle Management). It could have worked, but it wasn’t actually a great fit with the independent developers who formed a large part of its customer base, and who tended to ignore large, complex and expensive supplementary tools in favour of just getting on with coding.

The nadir was 1998 when Borland changed its name to Inprise, to reflect its Enterprise focus. “Many thought Borland had gone out of business”, says Wikipedia. It was changed back to Borland in 2001.

Another mis-step was the way Borland (then Inprise) handled InterBase, its client-server database. In 2000, with a burst of community enthusiasm, the product was made open source. A couple of years later, the company changed its mind and continued to develop InterBase as a proprietary product; but by then Firebird had been born, based on the open source code.

Thought for the day: Borland paid more for TogetherSoft in 2002 (around $185m, including $82.5m cash), than Micro Focus is paying now for Borland.

A Silverlight database application with image upload

I’ve been amusing myself creating a simple online database application using Silverlight. I had this mostly working a while back, but needed to finish off some pieces in order to get it fully functional.

This is created using Silverlight 2.0 and demonstrates the following:

  • A bound DataGrid (as you can see, work is still needed to get the dates formatted sensibly).
  • Integration with ASP.NET authentication. You have to log in to see the data, and you have to log in with admin rights to be able to update it.
  • Create, Retrieve, Update, Delete using ASP.NET web services.
  • Image upload using Silverlight and an ASP.NET handler.
  • Filter a DataGrid (idea taken from here).
  • Written in Visual Studio 2008, and hosted on this site, which runs Debian Linux, hence Mono and MySQL. Would you have known if I had not told you?

You can try it here. I’ll post the code eventually, but it will be a couple of months as it links in with another article.

MVP Ken Cox notes in a comment to Jesse Liberty’s blog:

Hundreds of us are scouring the Internet for a realistic (but manageable and not over-engineered) sample of manipulating data (CRUD operations) in a Silverlight 2 application. There are promising pieces of the puzzle scattered all over the place. Unfortunately, after investing time in a sample, we discover it lacks a key element – like actually saving changed data back to the database.

I can safely say that mine is not over-engineered, and that yes, it does write data.

SQLite developer argues for quick bug disclosure and fixes, despite egg on face

SQLite developer D Richard Hipp has posted to his mailing list to announce a third release in the space of a few days, to fix bugs discovered in version 3.6.10:

Some concern has been expressed that we are releasing too frequently. (Three releases in one week is a lot!) The concern is that this creates the impression of volatility and unreliability. We have been told that we should delay releases in order to create the impression of stability. But the SQLite developers feel that truth is more important than perception, not the other way around. We think it is important to make the highest quality and most stable version of SQLite available to users at all times. This week has seen two important bugs being discovered shortly after a major release, and so we have issued two emergency patch releases after the regularly scheduled major release. This makes us look bad. This puts "egg on our face." We do not like that. But, three releases also ensures that the best quality SQLite code base is available to you at all times.

He goes on to say that an extended beta period would be unlikely to reduce the risk of bugs found on release, because most bugs in SQLite are found by internal testing rather than by external users. He also argues against withholding releases until “testing is finished”:

The fallacy there is that we never finish testing. We are constantly writing new test cases for SQLite and thinking of new ways to stress and potentially break the code. This is a continuous, never-ending, and on-going process. All existing tests pass before each release. But we will always be writing new tests the day after a release, regardless of how long we delay that release. And sometimes those new tests will uncover new problems.

Anyone who has ever developed an application will know that sinking feeling when problems are discovered in code that has been distributed. Thoroughly implemented unit testing, as in SQLite, improves quality greatly. When bugs are found though, full disclosure and prompt fixes are the best possible response, so I agree with Hipp’s general approach here.
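Part of what makes this discipline work is that every bug, once found, typically becomes a permanent test case, so it cannot quietly return in a later release. A minimal sketch of that pattern in Python, with an invented bug scenario:

```python
# Sketch of the "every bug becomes a test" discipline: a regression
# test written after a (hypothetical) duplicate-row bug stays in the
# suite forever. Uses Python's sqlite3 module; names are invented.
import sqlite3

def distinct_names(con):
    # Hypothetical function under test: sorted, de-duplicated names.
    rows = con.execute("SELECT DISTINCT name FROM people ORDER BY name")
    return [r[0] for r in rows]

def test_distinct_names_handles_duplicates():
    # Added after the imagined bug report; asserts the fixed behaviour.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE people (name TEXT)")
    con.executemany("INSERT INTO people VALUES (?)",
                    [("ann",), ("bob",), ("ann",)])
    assert distinct_names(con) == ["ann", "bob"]

test_distinct_names_handles_duplicates()
print("ok")
```

The suite only ever grows, which is exactly Hipp’s point: testing is never “finished”, it just accumulates.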

SQL Server 2008 is done

Microsoft has announced that SQL Server 2008 is released to manufacturing – ie. the bits are done, even if you can’t buy it yet. MSDN subscribers can download it now.

This is the product that was “launched” back in February; it’s been a long delay but I get the impression that the SQL team likes to wait until its release really is ready.

SQL Server 2008 is more like a suite of products than a single product now. It has a large range of editions from Compact to Enterprise, and product areas like Analysis Services and Reporting Services are distinct from the core engine.

The pieces that interest me most are the spatial data types, sparse columns, FILESTREAM data type, and the various object-relational layers including LINQ, Entity Framework, ADO.NET Data Services, and the ongoing work with SQL Server Data Services (which is far from done yet).

DBAs will likely have a very different view of what is important, as will Business Intelligence specialists.

SQL Server has prospered by being cheaper than the likes of Oracle and DB2, and by integrating smoothly with Windows and Active Directory. I wonder if it will feel pressure from even more cost-effective open source offerings like MySQL, as they become more Enterprise-ready?

Where is your SQL Server CE Database?

Maybe not where you think. Now, I admit I am three years late with this bug (or strange feature) of Visual Studio, but it wasted some of my time today so it is still worth reporting.

I’ve been writing about creating database applications in Visual Studio. Specifically, I was looking at what happens if you download Visual Basic Express and take the quickest, easiest route to knocking together a database application.

The default local database engine these days is SQL Server Compact 3.5:

When you create applications, the preferred local database is SQL Server Compact 3.5.

says MSDN.

OK, so you add a new database to your project and accept various defaults. The wizard then asks you whether you would like to “copy the file to your project and modify the connection”?

Sounds reasonable, if you can figure out what it means. Default is Yes, so OK do it.

Mistake. Don’t do that. Not, at least, without reading and understanding this document. But I digress. Next up, you are asked another question:

Storing connection strings in your application configuration file eases maintenance and deployment … do you want to save the connection string to the application configuration file?

It’s another option that sounds good. OK, do it.

Now you set up a little table or two, add some data-bound controls, and off you go. Run the app, enter some data, save it. Run the app again … and all your data has disappeared. Why?

Well, it has to be either that the updates are silently failing, or that the database file is getting overwritten. It’s the latter. It turns out that VB is treating your database like any other resource, and copying it to bin/debug when you run the app. This is the copy you are connecting to; you update it, but next time you build and run, it gets overwritten.

None of this is obvious, because when you look at the connection string in the application settings (which VB hides by default, sigh), it shows the database file in the root of your project folder. Click Test Connection, all is fine. The only warning sign is that the connection string looks like this:

Data Source=|DataDirectory|\test.sdf

So where is |DataDirectory| set? That’s not obvious either. Read here for the answer. It’s an application property that is not visible anywhere, that gets set to different values depending on how you deploy the app. I can see why someone thought this was a smart idea; but the implementation is horrible. It gives you the illusion of having one database file, when in fact you have multiple copies (source, debug, release etc) overwriting one another, and during testing you are never editing the correct one.

Once you have worked this out you can fix it, of course. But here’s another problem. You are the single user of a database. You insert a record and save it, using all the generated data-bound stuff that Visual Studio provides. Works fine. Then you edit the record you just inserted, and save again. Boom. Concurrency exception. Why?

It is all to do with a limitation of SQL Server Compact 3.5: it cannot handle multiple SQL statements in a single command. This means that a feature of the ADO.NET TableAdapter, called Refresh the Data Table in the configuration wizard, is not available. This option kicks in when you have an identity column that auto-increments, which is by far the easiest way to create a primary key. In this scenario, the actual value of the identity column is not known until after you make the insert, because it is generated by the database engine. Normally, the TableAdapter would retrieve it with a Select statement immediately after the Insert statement. However, with SQL Server Compact 3.5 that does not work.

The result is that saving a record works fine, but next time around the row has an incorrect primary key in the DataSet. No wonder you get a concurrency exception.
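The round trip the TableAdapter is trying to make can be sketched with Python’s sqlite3 module, where the engine-generated key comes back via lastrowid rather than a batched Select; the table and values here are invented:

```python
# The pattern at issue: insert a row, then immediately read back the
# engine-generated primary key so the in-memory copy matches the
# database. Sketched with SQLite, which hands back the key directly.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY AUTOINCREMENT,"
            " item TEXT)")

cur = con.execute("INSERT INTO orders (item) VALUES (?)", ("widget",))
new_id = cur.lastrowid  # the refresh step SQL Server Compact 3.5 skips

# With the real key in hand, a follow-up update targets the right row.
# Without it, the client's stale guess at the key makes the update
# miss, surfacing as a concurrency exception.
cur = con.execute("UPDATE orders SET item = ? WHERE id = ?",
                  ("gadget", new_id))
assert cur.rowcount == 1  # the update matched exactly one row
print(new_id)
```

In the VB scenario, the equivalent fix is code that fetches the identity value in a second command after the insert, which is the workaround described below.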

You can work around this in code, of course. But what surprises me is just how hard Microsoft has made all this for the kind of newbie programmer who might pick up VB Express. In fact, easy database programming in VB has marched backwards since Visual Basic 3 back in 1993.

By the way, I also dislike the way VB adds so much database gunk to your main form, again by default. What if you add another form to your app? What if you want to delete the first form? It all gets messy fast.

Look at Ruby on Rails. It has simple database handling that works. OK, you are going to have to modify that code eventually; and I accept that database apps have an inherent complexity that no amount of wizards, O/R layers or even “Convention over Configuration” can remove. I still think that simple, single table, single user apps should be, well, simple. Not in VB, unfortunately.

Bet on Entity Framework, not LINQ to SQL

So says Roger Jennings in his post Is the ADO.NET team abandoning LINQ to SQL? His main points in favour of the ADO.NET Entity Framework:

  • It is the focus of more energetic development
  • It already has richer features
  • It supports multiple database engines, not just SQL Server

As Andres Aguiar, software architect at Infragistics, notes in a comment, this has a lot to do with internal politics at Microsoft:

The Data Programmability Team never owned LinQ to SQL, it was owned by the C# team. That’s why we have two O/R mappers, both teams wanted to ship theirs. The C# team looks to be thinking about functional programming now. The Data Programmability will always be thinking about data. That’s why the EF [Entity Framework] is the safe choice.

Although LINQ to SQL is now (apparently) owned by the SQL Server team, it still doesn’t seem plausible or sensible that both will get equal attention. We also learn from Matt Warren that LINQ to SQL was deliberately tied to SQL Server only:

LINQ to SQL was actually designed to be host to more types of back-ends than just SQL server. It had a provider model targeted for RTM, but was disabled before the release. Don’t ask me why. Be satisfied to know that it was not a technical reason.

Note that this wasn’t necessarily a plot in favour of SQL Server world dominance; keeping the entire stack as a Microsoft stack no doubt makes support easier. That said, to me this is the big weakness of LINQ to SQL.

I was impressed by Astoria when I first saw it at the European Tech Ed in 2007. I am not surprised it is gaining ground.