Tag Archives: sql server

Point-in-time restore: a handy built-in feature in Azure SQL

I am working on a project that is hosted in Azure and I made a mistake, running a SQL script that was dependent on another SQL script that I had forgotten to run. It messed up the foreign keys and I would have to restore a backup … but my most recent backup was from the day before. Annoying.

But wait. Looking at the Azure portal I saw this:

image

This is a plain Azure SQL instance with no extras, but look, you can restore the database from 6 minutes ago.

I did it; it restored to a second database. I deleted the bad one, renamed the restored one, ran my scripts in the right order, and all was well.

I recommend you do not run scripts in the wrong order … but if you do, or make some other error, this is a handy feature of Azure SQL which I was not aware of before.

Generating code for simple SQL Server data access without Entity Framework, works with .NET Core

I realise that Microsoft’s Entity Framework is the most common approach for data access in the .NET world, but I have also always had good results from a simple manual approach using DbConnection, DbCommand and DataReader objects, and like the fact that I can see and control exactly what SQL gets executed. If you prefer using Entity Framework or another abstraction that is fine and please stop reading now!

One snag with this more manual approach is that you have to write tedious code building SQL statements. I figured that someone must have written a utility application to generate this code, but could not find one quickly, so I wrote my own. It supports both C# and Visual Basic. The utility connects to a database and lets you generate a class for each table along with code for retrieving and saving these objects, ready for modification. Here you can see a generated class:

image

and here is an example of the generated data access code:

image

This is NOT complete code (otherwise I would be perilously close to writing my own ORM) but simply automates creating SQL parameters and SQL statements.

One of my thoughts was that this code should work well with .NET Core. The SqlClient library (System.Data.SqlClient) implements the required classes. Here is my code for retrieving an author object, mostly generated by my utility:

// requires: using System.Data; using System.Data.SqlClient;
public static ClsAuthor GetAuthor(string authorID)
{
    SqlConnection conn = new SqlConnection(ConnectString);
    SqlCommand cmd = new SqlCommand();
    SqlDataReader dr;
    ClsAuthor TheAuthor = new ClsAuthor();
    try
    {
        // Parameterised query: the utility generates one parameter per key column
        cmd.CommandText = "Select * from Authors where au_id = @auid";
        cmd.Parameters.Add("@auid", SqlDbType.Char);
        cmd.Parameters[0].Value = authorID;
        cmd.Connection = conn;
        cmd.Connection.Open();

        dr = cmd.ExecuteReader();

        if (dr.Read())
        {
            // Map each column to a property using null-safe helper functions
            TheAuthor.Auid = GetSafeDbString(dr, "au_id");
            TheAuthor.Aulname = GetSafeDbString(dr, "au_lname");
            TheAuthor.Aufname = GetSafeDbString(dr, "au_fname");
            TheAuthor.Phone = GetSafeDbString(dr, "phone");
            TheAuthor.Address = GetSafeDbString(dr, "address");
            TheAuthor.City = GetSafeDbString(dr, "city");
            TheAuthor.State = GetSafeDbString(dr, "state");
            TheAuthor.Zip = GetSafeDbString(dr, "zip");
            TheAuthor.Contract = GetSafeDbBool(dr, "contract");
        }
    }
    finally
    {
        // Closing the connection also closes the open reader
        conn.Close();
        conn.Dispose();
    }

    return TheAuthor;
}
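The saving side follows the same pattern. As a sketch, here is roughly what the generated save code looks like for the same class – I have trimmed it to a few columns for brevity, and the exact output of the utility will differ:

public static void SaveAuthor(ClsAuthor TheAuthor)
{
    SqlConnection conn = new SqlConnection(ConnectString);
    SqlCommand cmd = new SqlCommand();
    try
    {
        // Illustrative subset of columns; the generated code covers them all
        cmd.CommandText = "Update Authors set au_lname = @aulname, au_fname = @aufname, phone = @phone where au_id = @auid";
        cmd.Parameters.AddWithValue("@aulname", TheAuthor.Aulname);
        cmd.Parameters.AddWithValue("@aufname", TheAuthor.Aufname);
        cmd.Parameters.AddWithValue("@phone", TheAuthor.Phone);
        cmd.Parameters.AddWithValue("@auid", TheAuthor.Auid);
        cmd.Connection = conn;
        conn.Open();
        cmd.ExecuteNonQuery();
    }
    finally
    {
        conn.Close();
        conn.Dispose();
    }
}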

Everything worked perfectly and I soon had a table showing the authors, using ASP.NET MVC.

In order to verify that it really does work with .NET Core I moved the project to Visual Studio for Mac and ran it there:

image

I may be unusual, but I am reassured that I have a relatively painless way to write a database application for .NET Core without using Entity Framework.

Microsoft SQL Server is coming to Linux. What are the implications for Windows Server?

Microsoft is porting SQL Server, its popular database manager, to Linux. According to Executive VP Scott Guthrie:

Today I’m excited to announce our plans to bring SQL Server to Linux as well. This will enable SQL Server to deliver a consistent data platform across Windows Server and Linux, as well as on-premises and cloud. We are bringing the core relational database capabilities to preview today, and are targeting availability in mid-2017.

Why do this? The short answer is that like any other software company, Microsoft wants to sell more licenses, and porting its premier (and excellent) database manager to Linux extends its market and helps it compete more directly with the likes of Oracle and even MySQL.

image

However that raises a second question: why has Microsoft not done this before? After all, SQL Server has been around forever. The first release was in 1989, jointly with Ashton-Tate and Sybase, and was for OS/2. The first Windows release was in 1993. There was a significant leap forward in SQL Server 7.0, in 1998, which I think of as the beginning of the product as we know it today.

Microsoft in the nineties and in the first decade of the new millennium was all about Windows. Windows was dominant on the desktop, and the idea was to build synergies between Windows desktop and Windows server so that running server applications like Active Directory, Exchange and SQL Server was the obvious choice. The Visual Studio development environment pushed developers towards SQL Server in subtle and not-so-subtle ways. Some programming language innovations like LINQ to SQL (a form of Language Integrated Query) only worked with SQL Server. It was not quite lock-in – it was always possible to use a different database engine – but SQL Server was always the default, used in all the examples and documentation, and the best understood when you needed support.

Today Microsoft’s circle of dominance is breaking down. Windows still has desktop dominance, but the importance of the desktop is less, thanks to mobile devices which mostly do not run Windows, and a move away from desktop applications towards web applications that do not care which operating system you use. Active Directory is still important, but cloud computing giants like Google and Amazon are encroaching on that space.

“Only on Windows Server” has become a liability rather than the key to keeping customers locked to Microsoft’s platform.

You can see this in the company’s development strategy, which is migrating towards a cross-platform implementation of .NET as well as embracing iOS and Android via the recently announced Xamarin acquisition. You can also see it in the Azure cloud platform, and Microsoft’s partnership with Red Hat for Linux on Azure. The company is happy to take your money whatever operating system you choose.

It is early days though, and Microsoft is still a Windows-centric company. SQL Server on Linux, expected sometime next year, will probably not be feature-complete compared to SQL Server on Windows – I am guessing, but things like .NET Stored Procedures may be tricky to get right, as well as features like in-memory databases that are tightly integrated with the operating system.

It is worth noting that cross-platform is actually a burden as well as a strength and may involve compromises. It will be fascinating to see how performance compares on equivalent hardware.

Microsoft is now betting that opening up new markets for SQL Server is more important than keeping customers hooked on Windows Server – especially as the latter strategy is failing in the cloud computing era.

Finally, there is the question I posed in the title of this post. How does moving key server applications to Linux impact the appeal of Windows Server? After all, Linux licenses are generally cheaper than Windows Server and in some cases free. The answer is that it is one less reason to buy Windows Server, presuming SQL Server works properly on Linux.

You can see this as a process of commoditizing the operating system so that in time expensive server operating system licenses are a thing of the past. This is probably not a good trend for Microsoft. It can still prosper though if you rent your virtual infrastructure from the company and use its cloud services, like Azure and Office 365.

Another way of looking at this is that there is more pressure on Windows Server architect Jeffrey Snover and his team to make Windows Server better than Linux, so that you want to run it because of its merits, not because it is the only way to run SQL Server or Exchange.

Hands on: SQL Server 2014 with data files in Azure Blob Storage

One intriguing new feature in Microsoft’s SQL Server 2014 is the ability to create or attach databases whose files are in Azure blob storage. This sounds like something that would not work at all well: why would you want a database engine to mount files located hundreds or thousands of miles away? However, the feature is apparently baked deeply into SQL Server, according to this white paper (which is essential reading if you want to know more):

SQL Server 2014 integration with Windows Azure blob storage occurs at a deep level, directly into the SQL Server Storage Engine; SQL Server Data Files in Windows Azure is more than a simple adapter mechanism built on top of an existing software layer.

  • The Manager Layer includes a new component called XFCB Credential Manager, which manages the security credentials necessary to access the Windows Azure blob containers and provides the necessary security interface; secrets are maintained encrypted and secured in the SQL Server built-in security repository in the master system database.

  • The File Control Layer contains a new object called XFCB, which is the Windows Azure extension to the file control block (FCB) used to manage I/O against each single SQL Server data or log file on the NTFS file system; it implements all the APIs that are required for I/O against Windows Azure blob storage.

  • At the Storage Layer, the SQL Server I/O Manager is now able to natively generate REST API calls to Windows Azure blob storage with minimal overhead and great efficiency; in addition, this component can generate information about performance counters and extended events (xEvents).

It also seems that the main target usage is SQL Server running on Azure VMs in the same region as the blob storage, removing latency concerns, though the wording of the explanation is curious, implying almost that an on-premise connection is supported but should not be used:

Although it is theoretically possible and officially supported, using an on-premises SQL Server 2014 installation and database files in Windows Azure blob storage is not recommended due to high network latency, which would hurt performance; for this reason, the main target scenario for this white paper is SQL Server 2014 installed in Windows Azure Virtual Machines (IaaS). This scenario provides immediate benefits for performance, data movement and portability, data virtualization, high availability and disaster recovery, and scalability limits.

If you use blob storage in this way on an Azure VM, then I/O goes through the Virtual Network Driver, whereas an Azure data disk uses the Virtual Disk Driver. This nicety may be the main reason to consider the feature.

I tried both scenarios: on-premise and from an Azure VM. I had some difficulty getting started, despite this seemingly exhaustive tutorial. I followed it, I thought, to the letter, but got either the error:

Unable to open the physical file "https://myaccount.blob.core.windows.net/sqldata/azuredb.mdf". Operating system error 86: "86(The specified network password is not correct.)".

or else

CREATE FILE encountered operating system error 1117(The request could not be performed because of an I/O device error.) while attempting to open or create the physical file https://myaccount.blob.core.windows.net/sqldata/azuredb.mdf

The problem turned out to relate to the Shared Access Signature required. The supposedly exhaustive tutorial merely refers you to the CloudBlobContainer.GetSharedAccessSignature method in the Azure SDK and offers an incomplete code snippet. I wrote C# code for this and was able to generate a Shared Access Signature but it did not work (see above). I found myself in the depths of the Azure SDK, wondering if I should use version 2.1 or 3.0, and whether I should use Microsoft.WindowsAzure.StorageClient.CloudBlobClient or Microsoft.WindowsAzure.Storage.Blob.CloudBlobClient. The tutorial is also not clear about exactly which part of the Shared Access Signature you should store in the SQL Server Credential Manager; it is a multipart string separated by ampersands.

I have still not fully worked it out, but discovered the very helpful Azure Storage Explorer on CodePlex. If you follow the instructions in the white paper referenced above, and use the Azure Storage Explorer to generate the Shared Access Signature, then it works. The project is open source, so with a little effort it should be possible to find and document the exact requirements.
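For reference, here is roughly what the code looks like once those requirements are known – a sketch using the Microsoft.WindowsAzure.Storage client library, with account name, key and container as placeholders. As far as I can make out, the two essentials are that the SAS is generated from a stored access policy on the container, and that the part you store as the credential secret is the token without its leading question mark:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class GenerateSas
{
    static void Main()
    {
        // Placeholder storage account credentials
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>");
        CloudBlobContainer container =
            account.CreateCloudBlobClient().GetContainerReference("sqldata");

        // Define a stored access policy on the container...
        SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read |
                          SharedAccessBlobPermissions.Write |
                          SharedAccessBlobPermissions.Delete |
                          SharedAccessBlobPermissions.List,
            SharedAccessExpiryTime = DateTime.UtcNow.AddYears(1)
        };
        BlobContainerPermissions perms = container.GetPermissions();
        perms.SharedAccessPolicies["sqlpolicy"] = policy;
        container.SetPermissions(perms);

        // ...then generate the SAS from the policy. The part to store as the
        // SECRET of the SQL Server credential is everything after the leading '?'
        string sas = container.GetSharedAccessSignature(null, "sqlpolicy");
        Console.WriteLine(sas.TrimStart('?'));
    }
}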

image

I tried creating and using a database from my on-premise SQL Server 2014 and found the performance remarkably good, considering. There is no doubt some smart caching going on under the covers. Selecting 1000 rows from a table took 11 seconds the first time, and was instant the second time. It seems viable, on my brief look, though I am not sure why you would want to do this. However it is a good demonstration of how cloud and on-premise are coming ever closer.

image

Running from an Azure VM in the same region is a different case, though I would suggest detailed and intensive testing before going into production.

SQL Server 2014 is done: Hekaton, Azure integration

Microsoft has released SQL Server 2014 to manufacturing (an odd phrase in these diskless days), which signifies that it is code complete for the initial release. General availability is April 1st.

What do you do if hardware trends enable you to stuff vast amounts of RAM into your server, along with many CPU cores? The answer is that you optimize applications to work mostly in RAM, with disk important as a persistence layer. This contrasts to the approach when you have large amounts of disk space and little RAM, when you focus on loading only as much data into memory as you absolutely need.

The implications for a database server are profound. Instead of logic that goes something like “read from disk, do something, write to disk” you can address the data directly; it is just a memory pointer.

Now combine that with stored procedures compiled to native code. Performance leaps up, and by much more than you get simply by caching data in RAM, or using fast SSD storage, but still using the old disk-based approach in the database engine.

This is the reasoning behind “Hekaton”, properly known as In-Memory OLTP (online transaction processing), which is a new in-memory database engine that comes with SQL Server 2014.

It is fully integrated. You just have to add a filegroup to a SQL Server database with the clause CONTAINS MEMORY_OPTIMIZED_DATA, and then create a table with the clause WITH (MEMORY_OPTIMIZED=ON). And for the stored procedures, use WITH NATIVE_COMPILATION.
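To make that concrete, here is a minimal sketch, run from C# for consistency with other examples on this site; the database, filegroup, table and procedure names are invented for illustration, and note that a natively compiled procedure also requires SCHEMABINDING, EXECUTE AS and an ATOMIC block:

// Placeholder connection string
string connectString = @"Server=.\SQL2014;Database=SessionDB;Integrated Security=true";
string[] hekatonDdl =
{
    // Add a memory-optimized filegroup, plus a container directory for it
    "ALTER DATABASE SessionDB ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA",
    @"ALTER DATABASE SessionDB ADD FILE (NAME='imoltp_data',
        FILENAME='C:\Data\imoltp_data') TO FILEGROUP imoltp_fg",

    // A memory-optimized table must have at least one index; here a hash index
    @"CREATE TABLE dbo.Sessions (
        SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Payload NVARCHAR(4000))
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)",

    // A natively compiled stored procedure, with the extra required options
    @"CREATE PROCEDURE dbo.GetSession @SessionId INT
        WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
        AS BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
            SELECT SessionId, Payload FROM dbo.Sessions WHERE SessionId = @SessionId
        END"
};

using (SqlConnection conn = new SqlConnection(connectString))
{
    conn.Open();
    foreach (string statement in hekatonDdl)
        using (SqlCommand cmd = new SqlCommand(statement, conn))
            cmd.ExecuteNonQuery();
}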

The speed-up is as great as you would expect. I have seen demonstrations of 30x or more performance increases, like this one, based on a demo from the SQL PASS conference, which I ran for myself in one of Microsoft’s “Hands On Labs”:

image

In another demo, on an Azure VM, I got a speed-up of 7x. Only seven times faster! Still, hard to complain about those sorts of numbers.

Unfortunately, in-memory OLTP is spoilt by some rather severe limitations in this release. The first problem is that a combination of the need to support native compilation of stored procedures, and other limitations, means that only a subset of T-SQL (the query and management language of SQL Server) is supported. You can see the list of what is not supported here, and it is depressing reading, with lots of keywords that you likely use at the moment; even IDENTITY is on the list of what does not work.

Another issue is that the ability of In-Memory OLTP to take advantage of hardware is not as extensive as you might hope. Lead program manager Kevin Liu told me at a recent press workshop that the team recommends restricting total data size to 256GB, and that the recommended number of CPU sockets is two. You can get servers today with much more memory and more sockets. It gets complicated though: in a multi-socket server memory has processor affinity, and there is a thing called NUMA (Non-Uniform Memory Access) that describes the way memory is shared between processors.

According to Liu, Microsoft expects to lift these limitations in future releases, as well as improving T-SQL support, but things like this remind you that it is a version one release.

What else is in SQL Server 2014? There is some neat Azure integration, including a managed backup tool that makes backing up your data to Azure storage almost one click; a brilliant facility for small businesses. You can also use Azure for high availability, creating always-on replicas in Azure VMs.

Data warehouse users will like the new clustered columnstore indexes, which allow you to use a column-oriented table structure for much faster processing of typical report and analysis queries. Columnstore indexes first appeared in SQL Server 2012 but were not updateable. Now they are.
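Creating one is a single statement. A quick sketch, with an invented fact table name and placeholder connection string:

string connectString = "Server=.;Database=SalesDW;Integrated Security=true"; // placeholder
using (SqlConnection conn = new SqlConnection(connectString))
using (SqlCommand cmd = new SqlCommand(
    // Converts the table to column-oriented storage; in SQL Server 2014
    // the table remains updateable
    "CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales ON dbo.FactSales", conn))
{
    conn.Open();
    cmd.ExecuteNonQuery();
}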

SQL Server is well liked, licensing hassles aside; and even on licensing, Microsoft can always point at Oracle and claim, rightly, to be cheaper and less complex. It has earned a reputation for solid performance. SQL Server 2014 looks as good as ever, even if the management tools now look rather dated – the shell for SQL Server Management Studio uses an old version of Visual Studio, which is one reason for that. I also suspect the SQL Server team lacks a dialog designer, but doubt that the average database admin cares one jot.

That said, it is difficult to describe this as a must-have upgrade, unless you can make good use of “Hekaton” in-memory OLTP. The porting effort will be worth it, presuming you can get it to work. One of the good fits for the technology is managing web app session data, or, as in the example above, rapid processing to display recommendations or customisations on a web site.

I can imagine though that many users will look at Hekaton and decide that it is too much work or too immature for immediate use. What is left for them, apart from some nice Azure integration?

Not a huge amount, it seems to me, making this a transitional release.

Are you planning to upgrade? I would be interested to know your reasons why or why not.

Microsoft financials: record revenue, consumer sales declining in drift towards Enterprise

Microsoft has announced record revenue for its second financial quarter, October-December 2013. Revenue was bumped up by the launch of Xbox One (3.9 million sold) and new Surface hardware. The real stars though were the server products:

  • SQL Server continued to gain market share with revenue growing double-digits.

  • System Center showed continued strength with double-digit revenue growth.

  • Commercial cloud services revenue more than doubled.

  • Office 365 commercial seats and Azure customers both grew triple-digits.

says the press release.

Another plus point is Bing, which Microsoft says now has 18.2% market share in the USA. Search advertising revenue is up 34%.

It is not all good news. While Microsoft is doing fine in server and cloud, the consumer market is not going well, leaving aside the expected boost from a new Xbox launch:

  • Windows OEM non-pro revenue down 20% year on year (that’s consumer PCs)
  • Office consumer revenue down 24% year on year – partly attributed to the shift towards subscription sales of Office 365 Home Premium

As usual, I have put the results into a quick table for easier viewing:

Quarter ending December 31st 2013 vs quarter ending December 31st 2012, $millions

Segment | Revenue | Change | Gross margin | Change
Devices and Consumer Licensing | 5384 | -319 | 4978 | -153
Devices and Consumer Hardware | 4729 | +1921 | 411 | -351
Devices and Consumer Other | 1793 | -206 | 431 | -455
Commercial Licensing | 10888 | +753 | 10077 | +751
Commercial Other | 1780 | +391 | 415 | +199

The categories are opaque so here is a quick summary:

Devices and Consumer Licensing: non-volume and non-subscription licensing of Windows, Office, Windows Phone, and “related patent licensing; and certain other patent licensing revenue” – all those Android royalties?

Devices and Consumer Hardware: the Xbox 360 and Xbox One consoles, Xbox Live subscriptions, Surface, and Microsoft PC accessories.

Devices and Consumer Other: Resale, including Windows Store, Xbox Live transactions (other than subscriptions), Windows Phone Marketplace; search advertising; display advertising; Office 365 Home Premium subscriptions; Microsoft Studios (games), retail stores.

Commercial Licensing: server products, including Windows Server, Microsoft SQL Server, Visual Studio, System Center, and Windows Embedded; volume licensing of Windows, Office, Exchange, SharePoint, and Lync; Microsoft Dynamics business solutions, excluding Dynamics CRM Online; Skype.

Commercial Other: Enterprise Services, including support and consulting; Office 365 (excluding Office 365 Home Premium), other Microsoft Office online offerings, and Dynamics CRM Online; Windows Azure.

Here is what is notable. Looking at these figures, Microsoft’s cash cow is obvious: licensing server products, Windows and Office to businesses, which is profitable almost to the point of disgrace – a gross margin of $10,077 million on sales of $10,888 million. Microsoft breaks this down a little. Hyper-V has gained 5 points of share, it says, and Windows volume licensing is up 10%.

Cloud (Office 365, Azure, Dynamics CRM online) may be growing strongly, but it is a sideshow relative to the on-premises licensing.

How do we reconcile yet another bumper quarter with the Microsoft/Windows is dead meme? The answer is that it is not dead yet, but the shift away from the consumer market and the deep dependency on on-premises licensing are long-term concerns. Microsoft remains vulnerable to disruption from cheap and easy to maintain clients like Google’s Chromebook, tied to non-Microsoft cloud services.

Nevertheless, these figures do show that, for the moment at least, Microsoft can continue to thrive despite the declining PC market, more so than most of its hardware partners.

Postscript: Microsoft’s segments disguise the reality of its gross margins. The cost of “licensing” is small but it is obvious from its figures that Microsoft is not including all the costs of creating and maintaining the products being licensed. If we look at the figures from a year ago, for example, Microsoft reported a gross margin of $2,121 million on revenue of $5,186 million for Server and Tools. That information is no longer provided and as far as I can tell, we can only guess at the cost per segment of its software products. However, looking at the income statements, you can see that overall Microsoft spent $2,748 million on Research and Development, $4,283 million on Sales and Marketing, and $1,235 million on General and Administrative in the quarter.

Visual Studio 2012 hits and misses

A few quick reflections after writing a rather large review of Visual Studio 2012, Microsoft’s development tool for everything Windows.

Several things impressed me. The Graphics Diagnostics tooling for Direct3D, for example, is amazing; you can capture a frame, select a pixel, and drill down into why it is the colour it is. See Amit Mohindra’s blog post here. Though admittedly I got “Unable to start the experiment session” on my first go with this; make sure the Debugger Type is set to Native Only.

image

Graphics is not really my area; but web development is more like it, and I continue to be impressed by what Microsoft has done with Windows Azure. You can go from hitting New Project in Visual Studio to an ASP.NET MVC 4.0 web site up and running on Windows Azure in moments. Even if you add an Azure SQL database into the mix it is not much harder. The experience is slightly spoiled by the fact that the new Azure portal for web sites etc is still in preview and seems to be a bit unreliable; I sometimes get an error when logging in and have to refresh before it works. That is a minor detail though; the actual deployed web sites and applications seem to work fine.

That said, it was also apparent to me that Microsoft’s Azure story has become a little confusing. Want an ASP.NET web app on Azure? Choose between a Web Role, a stateful VM, or a web site. Both web roles and web sites can be scaled quite effectively, web sites thanks to their built-in load balancer. Still, choice is good, as long as the differences are understood. It appears that Microsoft is still backing the web role approach as the most architecturally sound; but web sites are so easy to use and understand that it would not surprise me if they are more successful.

Another hit with me is the SQL Server Data Tools (SSDT), once I understood the difference between DACPACs and BACPACs – a DACPAC encapsulates the schema and logins etc for a database in a single file that you can import elsewhere, whereas a BACPAC also stores the data. The approach in SSDT is that a database schema is just a bunch of SQL scripts, and that if you manage it that way it makes a lot of sense in terms of version control, schema comparisons, deployment and maintenance.
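The same operations are scriptable via the DacFx API (Microsoft.SqlServer.Dac) that underlies SSDT. A rough sketch, with placeholder paths and database names:

using Microsoft.SqlServer.Dac;

class DacPacDemo
{
    static void Main()
    {
        // Placeholder connection string for the target server
        DacServices services = new DacServices(
            "Server=.;Integrated Security=true");

        // Deploy a DACPAC (schema, logins etc), creating or upgrading the database
        using (DacPackage package = DacPackage.Load(@"C:\Temp\MyDb.dacpac"))
        {
            services.Deploy(package, "MyDb", upgradeExisting: true);
        }

        // Export a BACPAC, which also stores the data
        services.ExportBacpac(@"C:\Temp\MyDb.bacpac", "MyDb");
    }
}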

On the ALM side I am impressed with Team Foundation Server Express, which is genuinely easy to install and lights up most of the key features of Microsoft’s ALM platform, though it does not handle the full SharePoint team portal or all the reporting features. Cloud hosting is also an option for TFS and that also worked OK for me, though with occasional delays as my code crossed the internet.

I hesitate slightly with TFS though. It feels like a heavyweight solution, whereas most developers like lightweight solutions. I know that open source tools like git and subversion only do a fraction of what you can do with TFS; but they also never keep me waiting.

I also wonder whether TFS simply creates too many artefacts. Work items seem to multiply rapidly as you use the system, which is good for traceability but could also become a bit of a bureaucratic nightmare if you have team members who make every action into a thing that needs comments and deadlines and links to source code and so on.

I guess it is like any other tool; it will work well for a team that already works well, but will not solve problems for a team that is already a bit dysfunctional.

I had not looked at Microsoft’s modelling features for a while; I was interested to discover new code generation features in the UML diagramming tools.

I am slowly beginning to understand what Microsoft is doing with apps for SharePoint. Get this: Microsoft is moving SharePoint 2013 towards being more of a service than a platform; you do not build apps on SharePoint; you build apps that use SharePoint services. This means you can at last develop SharePoint apps without having SharePoint installed on the same box as Visual Studio (thank goodness). And you can build SharePoint apps with PHP or Java as well as ASP.NET, because they are calling SharePoint, not running on it. Makes sense.

So what is not to like? There are a few puzzles, like the way Visual C++ has fallen behind in standards compliance despite the presence of Herb Sutter at Microsoft.

I also still find the whole XAML/Visual Studio/Blend thing a bit of a struggle. One day I will open up Blend, Microsoft’s XAML designer, and it will all fall into place as a natural and quick way to build a user interface, but it has not happened yet. I have also heard that developers should find the designer in Visual Studio enough; but at best it is rather slow and a little unpredictable.

Otherwise the tools for Windows Store apps seem decent to me, though I have heard that advanced developers are finding some issues; not surprising considering how new a platform it is. It is a distinctive platform, and my sense is that while there is a lot to like, developers need time to get the best from it, and there is also scope for Microsoft to improve it, maybe with a few refinements in Windows 8 SP1?

Considering its scope, Visual Studio was relatively stable in my tests, though I did once get it into a state where it froze every time I tried to debug an application; fixed by a reboot (sigh).

I do not mind the monochrome user interface; I do not like it especially but it is something you get used to and do not notice after a short time.

Overall? Few people will use everything that is in Visual Studio and of course I have missed out most of it in the above, but it is a mighty achievement and still an asset to Microsoft’s platform.

Microsoft SQL Azure versus SQL Server on Amazon AWS

Amazon RDS for Microsoft SQL Server offers cloud instances of SQL Server. Amazon’s offering even supports “License Mobility”, Microsoft jargon that lets volume licensing customers use an existing SQL Server license for an Amazon instance. But how does Amazon’s cloud SQL Server compare with Microsoft’s own offering, SQL Database running on Azure?

Peter Marriott has posted on the subject here (registration required). The key point: despite the obvious similarity (both are SQL Server), these two offerings are radically different. Amazon’s RDS SQL is more IaaS (infrastructure as a service) than PaaS (platform as a service). You choose an edition of SQL Server and rent one or more instances. The advantage is that you get full SQL Server, just like the on-premise editions but hosted by Amazon.

Microsoft’s Azure-hosted SQL on the other hand is more abstracted. You do not rent a SQL Server instance; you rent a database. Under the covers Microsoft provides multiple redundant copies of the data, and if traffic increases, it should scale automatically, though the database size is limited to 150GB. The downside is that not all features of SQL Server are available, as I discovered when migrating data.

Marriott adds that SQL Azure supports encrypted connections and has a more usable administration interface.

A further twist: you can also install SQL Server on an Azure Virtual Machine, which would get you something more like the Amazon approach though I suspect the cost will work out higher.

Moving a database from on-premise SQL Server to SQL Azure: some hassle

I am impressed with the new Windows Azure platform, but when I moved a simple app from my local machine to Azure I had some hassle copying the SQL Server database.

The good news is that you can connect to SQL Azure using SQL Server Management Studio. You need to do two things. First, check the server location and username. You should already know the password, which you set when the database was created. You can get this information by going to the Azure portal, selecting the database, and clicking Show connection strings on the dashboard.

Second, open the SQL firewall for the IP number of your client. There is a link for this in the same connection string dialog.

Now you can connect in SQL Server Management Studio. However, you have limited access compared to what you get as an admin on your local SQL Server.

Here is the Tasks menu for an on-premise SQL Server database:

image

and here it is for a SQL Server Azure database:

image

Still, you can start Export Data or Copy Database from your on-premise connection and enter the Azure connection as the target. However, you should create the destination table first, since the Export Data wizard will not recreate indexes. In fact, SQL Azure will reject data imported into a table without at least one clustered index.

I tried to script a table definition and then run it against the SQL Azure database. You can generate the script from the “Script Table as” menu.

image

However, even the simplest table will fail. I got:

Msg 40514, Level 16, State 1, Line 2
‘Filegroup reference and partitioning scheme’ is not supported in this version of SQL Server.

when attempting to run the script on SQL Azure.

The explanation is here.

Windows Azure SQL Database supports a subset of the Transact-SQL language. You must modify the generated script to only include supported Transact-SQL statements before you deploy the database to SQL Database.
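To illustrate the kind of modification needed: the likely culprit is the trailing ON [PRIMARY] filegroup reference that the scripting tool emits by default. Remove it, and make sure the table has a clustered index (SQL Azure requires one before you can import data, as noted above). A hypothetical example, shown here as a string ready to run over the Azure connection:

// Hypothetical table, edited for SQL Azure. The on-premise script ends
// with ') ON [PRIMARY]', which SQL Azure rejects with error 40514.
string createTableForAzure =
    @"CREATE TABLE [dbo].[Customers](
        [CustomerID] [int] NOT NULL,
        [Name] [nvarchar](100) NOT NULL,
        CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ([CustomerID] ASC))";

// Placeholder values from the Azure portal's connection strings dialog
string azureConnectString =
    "Server=tcp:myserver.database.windows.net;Database=mydb;" +
    "User ID=admin@myserver;Password=<password>;Encrypt=true";

using (SqlConnection conn = new SqlConnection(azureConnectString))
using (SqlCommand cmd = new SqlCommand(createTableForAzure, conn))
{
    conn.Open();
    cmd.ExecuteNonQuery();
}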

Fortunately there is an easier way. Right-click the table and choose Generate Scripts. In the wizard, click the Advanced button for Set Scripting Options.

image

Find “Script for the database engine type” and choose SQL Azure:

image

You may want to change some of the other options too. This generates a SQL script that works with SQL Azure. Then I was able to use the Export Data wizard using the new table as the target. If you use Identity columns, don’t forget “Enable identity insert” in Edit Mappings.

image

Microsoft LocalDB: another option for local databases

Microsoft is launching SQL Server 2012 on March 7th 2012. In Microsoft’s world “launches” do not always coincide with the availability of release code, which may come before or after, but they are usually not far apart.

The big news in SQL Server 2012 is in new BI (Business Intelligence) features and the ability to import and export from the open source Hadoop framework. Microsoft is also supporting Hadoop on Windows Server and Windows Azure. Robert Sheldon has an excellent article on TechTarget which describes the Hadoop integration.

At the other end of the scale though there is a new approach to local databases, which interests me as this is the kind of thing an application developer might use for local applications. SQL Express LocalDB uses the full SQL Server Express engine but does not require a SQL Server service to be running or even installed. In summary:

  • The LocalDB binaries can be installed with a separate installer or as part of SQL Server Express.
  • LocalDB instances are isolated to the user.
  • The LocalDB system databases are buried deep in AppData in the user profile. The default location for user databases is the root of the user profile.
  • The old SQL Server User Instances are now deprecated.

A driver for LocalDB has to know how to fire up the SQL Server binaries if they are not running, which means that old drivers will not work. Microsoft has patched System.Data.SqlClient in .NET 4 to work with LocalDB.
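Assuming the patched client, connecting is then just a matter of the connection string. A minimal sketch – the instance name v11.0 is the SQL Server 2012 default, and the database path is a placeholder:

using System;
using System.Data.SqlClient;

class LocalDbDemo
{
    static void Main()
    {
        // The driver starts the LocalDB instance on demand and attaches the
        // .mdf directly; no SQL Server service needs to be installed or running
        string connectString =
            @"Data Source=(localdb)\v11.0;Integrated Security=true;" +
            @"AttachDbFilename=C:\Users\fred\MyNotes.mdf";

        using (SqlConnection conn = new SqlConnection(connectString))
        {
            conn.Open();
            Console.WriteLine(conn.ServerVersion);
        }
    }
}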

LocalDB Pros and cons

The advantage of LocalDB over the cut-down Compact Edition is that you get full access to SQL Server features including transactions, stored procedures, geographical data types and so on. It is meant to improve on the old user instances by simplifying matters for the user: no need to run a service, and management of SQL Server completely hidden.

The disadvantage is that your app still has the overhead of SQL Server running in a separate process. A SQL Server LocalDB install also takes around 140MB, which bumps up the download size if your app is distributed on the web.

If you need a local database, it seems to me that Microsoft still has nothing that quite matches SQLite, which runs in-process, is lightning fast, and which does not require any hidden system databases.

On the other hand, it might make sense to use SQL Server if you want to integrate with a server database, or if you are familiar with coding for SQL Server.

I would like to see Windows ship with a local database engine documented as something developers can rely on being there, as with Core Data on the Mac. It would also help if the SQL Server team got together with the Office team and worked out how to get Access and SQL Server Express to use the same database engine – yes, I know Access can use SQL Server data, but it still defaults to its own .ACCDB format and the JET-derived ACE database engine.