Tag Archives: c++

Developing software for playing bridge

I am a duplicate bridge player in my spare time and used to enjoy playing at my local club once or twice a week. That was before COVID-19 and then, in March this year, lockdown. Bridge clubs were no longer able to meet. There are more important things in the world; but bridge is both a lot of fun and a welcome distraction from weightier matters, and my thoughts soon turned to what we could do to continue playing in these new circumstances.

The answer was to play online; but while there are plenty of ways to play bridge online, the existing systems were not designed as a way for bridge clubs to meet in a new context. If anything, the reverse is true: online bridge sites were designed for people who could not easily get to a club or wanted to play at any time with whoever else happened to be available. Clubs like my own, by contrast, wanted to replicate their face-to-face meetings with an online equivalent. A further complication back in March was that the biggest online bridge site, Bridgebase, was immediately overloaded and declared itself unwilling to allow new people to qualify as directors – the people allowed to run online bridge sessions.

My immediate instinct was to build a new site for playing bridge. I was not quite starting from scratch. Back in the early days of Windows 8, I started work on a bridge game for Microsoft’s new and, as it turned out, ill-fated platform. I had got some way with it; I had created a bridge engine that understood about cards and hands and tricks and shuffling and scoring and all the various elements that go into playing bridge. It was written in C# and what is now UWP XAML. It was designed, of course, for a solo player. Here is the bidding screen:

[Screenshot: the bidding screen]

and the play screen:

[Screenshot: the play screen]

This is how it looks on Windows 10; it looked a bit better on Windows 8, though it would not win any prizes for design. My software could play bridge though; the reason I never finished it was that I never cracked getting the AI working. But for human-to-human play that did not matter. A weekend or two of coding, I thought, and I could have a website up and running so our club could play bridge online. I made an immediate start, registering the domain name YourBridgeClubOnline.co.uk.

Well, three months later and here we are.

[Screenshots: the new web application]

It is, I have to say, still under development. But it works and we have been able to play bridge again, as a club.

What took you so long? Ha! Much of my old bridge engine code remains untouched and has proved useful; it all runs fine on .NET Core. Even the (useless) AI has been handy, as I can test the mechanics of play without involving others. But I had, of course, wildly underestimated the problem of converting a game for solo play on Windows into a multi-player web application. There is much to think about:

The UI. I am not a designer (I am sure you can tell) but spent ages puzzling over how to get a workable user interface in the browser for everything from tablets to desktops. Not smartphones yet, but that is coming. I decided early on to take a view on compatibility: no Internet Explorer, and the JavaScript fetch API is required. When time is against you, it is easier to say “just use another browser” than to waste too much time supporting old browsers.

Messaging – both the API kind and the chat kind. I am using C#, ASP.NET Core and SignalR. In general it works well. SignalR uses WebSockets as its first preference, but falls back to Server-Sent Events or long polling where necessary. In my first experiments I did my own polling, and switching to SignalR was a great relief.
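For flavour, here is a minimal sketch of the SignalR hub pattern involved – the hub, method and group names are illustrative, not the actual project code:

using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

public class TableHub : Hub
{
    // Add the caller to the group for their table when they sit down
    public Task JoinTable(string tableId)
        => Groups.AddToGroupAsync(Context.ConnectionId, tableId);

    // Broadcast a chat message to everyone seated at the same table
    public Task SendChat(string tableId, string user, string message)
        => Clients.Group(tableId).SendAsync("ReceiveChat", user, message);
}

A client that has registered a handler for “ReceiveChat” receives the message over whichever transport SignalR negotiated.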

Registration and login. I am using the stuff that comes in the box, ASP.NET Core Identity. It has saved me a ton of work. It’s a bit annoying and not too well documented. I don’t really like using GUIDs for the primary key, for example, and I believe there is a way to avoid it, but it isn’t top priority when you are going for a Minimum Viable Product.
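For what it is worth, Identity does let you swap the key type by deriving from IdentityUser<TKey>; a minimal sketch, with class names invented for illustration:

using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;

// Replace the default string (GUID) key with an int key
public class AppUser : IdentityUser<int> { }

public class AppDbContext : IdentityDbContext<AppUser, IdentityRole<int>, int>
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
}

// Then, in Startup.ConfigureServices:
// services.AddIdentity<AppUser, IdentityRole<int>>()
//         .AddEntityFrameworkStores<AppDbContext>();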

JavaScript. I’ve written tons of it and I don’t even like the language. I have a new respect for it though. The thing is, it is very fast and there is nothing you cannot do. The worst thing is the friction of doing some debugging in the browser, and some in Visual Studio. I am thinking of switching to VS Code for development since it works nicely with ASP.NET Core and is better for JavaScript than Visual Studio.

Scoring. My Windows software could score a hand of bridge. But duplicate is different; you have to compare the scores with others who played the same hands and work out the percentages, then export the results to standard formats for display on club websites and submission to the English Bridge Union. It was more work than I had expected and I am not done yet; the system only understands Pairs at the moment, not Teams (a different way of scoring).
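To give an idea of the Pairs comparison, here is a sketch of matchpoint scoring using the usual convention of two points for each pair you beat on a board and one for each tie – my illustration, not the production code:

static class Matchpoints
{
    // Given the score every pair achieved on one board, return each
    // pair's matchpoint percentage
    public static double[] MatchpointPercentages(int[] boardScores)
    {
        int n = boardScores.Length;
        var percentages = new double[n];
        double top = 2.0 * (n - 1); // maximum matchpoints available on the board
        for (int i = 0; i < n; i++)
        {
            double mp = 0;
            for (int j = 0; j < n; j++)
            {
                if (j == i) continue;
                if (boardScores[i] > boardScores[j]) mp += 2;       // beat them
                else if (boardScores[i] == boardScores[j]) mp += 1; // tied
            }
            percentages[i] = 100.0 * mp / top;
        }
        return percentages;
    }
}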

Directing. Someone has to manage an online bridge session, settle any arguments, and fix errors like cards played by accident. It all needs coding and there was nothing like it in the Windows version.

Movements. Imagine you have 28 people playing bridge (or 14 pairs). They need to all play the same hands, but never play the same hand twice, and it has to be so arranged that each pair plays against other pairs in a defined sequence so it is balanced and fair. We call this the movement. Online, you have a bit more flexibility because you don’t need to share physical cards: everyone can play the same hand at the same time if you like. It is still quite fiddly though, and I did not do any of this in the old Windows version. I saved some time by writing an import function to enable re-use of movements made for EBUScore, a widely used scoring and bridge session management application. There is more to do though.

Claims. This is where, halfway through the hand, a player says, “There’s no point in playing on, I’m obviously going to win all the remaining tricks.” (A trick is a sequence of four cards played one from each hand, which is won by one of the pairs.) This statement is called a claim, and has to be agreed by the other players. Getting this working was more difficult than I had expected, because built into my bridge engine was the idea that you could score by counting the tricks each side had won. But claimed tricks are never played. With hindsight, I should have allowed for this from the beginning.
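The fix, roughly, is to track awarded tricks separately from played ones, so the score no longer depends on every trick having been played out. A simplified sketch of the idea:

// Simplified illustration: claimed tricks are recorded, not played
class HandResult
{
    public int DeclarerTricksPlayed;  // tricks actually won at the table
    public int DeclarerTricksClaimed; // tricks awarded by an agreed claim

    public int DeclarerTotal => DeclarerTricksPlayed + DeclarerTricksClaimed;
}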

Database. Every detail of play has to be stored on the server. I am using Dapper and SQL Server currently, though it is possible that PostgreSQL would work just as well. I started with Entity Framework Core, still there as it is used by ASP.NET Core Identity, but I am happier with Dapper.
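The appeal of Dapper is that you write plain SQL with named parameters and it handles the mapping; a sketch, with invented table and class names:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading.Tasks;
using Dapper;

public class PlayedCard
{
    public int Seat { get; set; }
    public string Suit { get; set; }
    public string Rank { get; set; }
    public int PlayOrder { get; set; }
}

public static class BoardQueries
{
    public static async Task<IEnumerable<PlayedCard>> CardsForBoard(
        string connectionString, int boardId)
    {
        using var conn = new SqlConnection(connectionString);
        // Dapper maps the rows to PlayedCard and @BoardId to the parameter
        return await conn.QueryAsync<PlayedCard>(
            "SELECT Seat, Suit, Rank, PlayOrder FROM PlayedCards " +
            "WHERE BoardId = @BoardId ORDER BY PlayOrder",
            new { BoardId = boardId });
    }
}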

Things that worked well

Three months is longer than I had thought it would take to get to a playable system, but I suppose as a spare-time project it is not too bad. It would not be possible without the likes of ASP.NET Core and Dapper and SignalR doing so much for you. C# is a delight for coding. I am also using an Azure App Service for all this testing and development, and that has worked well. I am deploying to a Linux container of course; but the nice thing about App Service is that it will scale to a considerable extent without the hassle of Kubernetes. If the project succeeds and needs to scale up, there is an Azure SignalR service ready and waiting. I was nevertheless interested to see that AWS now offers .NET Core on Elastic Beanstalk, complete with some nice Visual Studio integration. Trying it there would be an interesting experiment, though I’m not sure AWS is so savvy about SignalR.

Open Source?

Could this have been done quicker by making it open source and seeking collaborators early on? Will it become open source? I need help for sure, though I also feel the code needs some cleaning up before it is fit to share more widely. You will recall though that I had started out thinking that it would be a small matter to convert my solo bridge game to an online multiplayer web application. I figured it would be better to get something working and then ask for help. But I am open to offers! Note: this is not a commercial project.

Rewarding

Most of the software projects I have been involved in have been business applications. Bridge is a lot more fun. I do see software development as a creative act. I recall starting work on the bridge game back in 2011 (I think); starting a new blank project in Visual Studio and thinking, hmm, I had better write a class to represent a pack of cards. From that beginning I ended up with an application that could play bridge, after a fashion, and now one that multiple people can play concurrently. It is rewarding and I will not regret the time spent on it, irrespective of how much actual use it gets.

Hands On ASP.NET Core

I’ve been putting together a quick web application (well, I thought it would be quick) in my spare time (hah!) and I picked ASP.NET Core on Linux as a sensible option given that I like working in C#. Overall it has been a reasonable experience so far and I still love the language. This is the most extensive work I have done so far with ASP.NET Core though and I have a few observations.

It is not a difficult framework to work with, but I believe it could be made more approachable. This is largely a matter of documentation, though another point of confusion is the transition Microsoft has been making from ASP.NET MVC to Razor Pages. These two frameworks are similar but different: they share a lot of technology, but some things work in one and not the other, and sometimes it is not clear whether what you are reading applies just to ASP.NET MVC, or just to Razor Pages, or to both, or to both but with a little tweaking to account for differences. I started with MVC because I am more familiar with it, but have shifted to Razor Pages because that seems to be the preferred direction; really I am equally happy with either.

If you are thinking of getting started with ASP.NET Core I recommend you start not with the framework, but with making sure that you are familiar with the following topics:

Dependency injection. If you are puzzling about something in the framework, the answer may be “add it to the constructor and it magically works.” This is obvious if you are familiar with the pattern but not otherwise (there is a minimal sketch after this list).

Anonymous types. These seem to crop up quite a lot.

Lambdas and the arrow operator =>

LINQ queries
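To illustrate the first of these, here is a minimal sketch of constructor injection – the service is one of the framework’s own, the class is my invention:

using Microsoft.Extensions.Logging;

public class ScoreService
{
    private readonly ILogger<ScoreService> _logger;

    // The container sees the constructor parameter and supplies
    // the registered ILogger<T> automatically
    public ScoreService(ILogger<ScoreService> logger)
    {
        _logger = logger;
    }

    public void RecordScore(int score)
        => _logger.LogInformation("Recorded score {Score}", score);
}

Register the class itself (services.AddScoped<ScoreService>();) and it can in turn be injected wherever it is needed.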

Now, the documentation. Unless you have found a good and up-to-date book, you will probably start here.

[Screenshot: the ASP.NET Core documentation landing page]

Now, I do think there are lots of good things about docs.microsoft.com, the fact that it is all on GitHub and open for comment and improvement, the fact that it performs well, and the obvious effort that has gone into many of the topics.

That said, I do not much like this page. My biggest problem with it is that there is no simple link to a comprehensive reference. It is a bunch of little tutorials which may or may not tell you what you need to know. It gets better if you click into one of the topics and I like this page, for example, much better, with the hierarchical list of topics on the left.

[Screenshot: a documentation topic with the hierarchical list of topics on the left]

It is still not great though. There is a big emphasis on tutorials, and while I agree that learning through doing is a great way to learn, the problem with the tutorials is that they tend to leave you with lots of questions and no obvious route to answers.

I will give you an example. I decided to use the ASP.NET Identity system in my application, because it saves a ton of tedious work doing registration, password reset, login, and so on, plus it is security-critical code that I would likely get wrong if I did it myself.

The problem you will immediately hit though is that you want to store additional data about users. This could be any kind of data but let’s call it additional profile data. For example, you want to let users upload an image which is then displayed in the application. There are some heavy articles about customizing identity but there is also this one on adding custom user data to an ASP.NET Core web app. It’s great but it does not actually tell you how to retrieve the custom user data in your application. Eventually I figured out a way of doing it. You just have to use dependency injection to get an instance of the UserManager class. So you pop this in the constructor for one of your classes:

UserManager<YourCustomUser> userManager

and store it in a private field (here called _userManager). Then you can do:

// Blocking call, needed because the calling method is synchronous;
// in an async method you would write: var myUser = await _userManager.GetUserAsync(User);
var myTask = _userManager.GetUserAsync(User);
myTask.Wait();
var myUser = myTask.Result;

or something similar, and it just works (the Wait and Result are only needed because the calling method is synchronous).
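Put together, and using async as you normally would in a controller, the pattern looks something like this – controller and action invented for illustration:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;

// Your custom user class derived from IdentityUser
public class YourCustomUser : IdentityUser { }

public class ProfileController : Controller
{
    private readonly UserManager<YourCustomUser> _userManager;

    // Supplied by dependency injection, as described above
    public ProfileController(UserManager<YourCustomUser> userManager)
    {
        _userManager = userManager;
    }

    public async Task<IActionResult> Index()
    {
        // User is the ClaimsPrincipal for the current request
        var myUser = await _userManager.GetUserAsync(User);
        return View(myUser);
    }
}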

Let me add something else. The actual API reference for ASP.NET Core is almost useless. It faithfully documents each class and method while often saying nothing about how or why to use it.

Data access

My application is really forms over data as so many are, so data access plays a big role. There seem to be plenty of tutorials on data access in the ASP.NET Core documentation but I don’t much like them. The problem is Entity Framework. Most of the documentation assumes it. It is not that Entity Framework is bad; it does seem to work well and while there is debate about how well it performs, in many cases it does not matter, and in other cases you can fine-tune it. My problem rather is that what Microsoft calls a “complex data model” is actually the normal case, where you have many-to-many relationships, and dealing with this in Entity Framework soon gets fiddly. I am guilty of lacking patience, but being familiar with SQL it is easier for me just to write the SQL and to know exactly what data is being saved and what data is being retrieved. I have left Entity Framework in place because the Identity system uses it (and it looks non-trivial to replace) but for the rest I have migrated to Dapper which seems ideal. It is not a full-featured ORM and it expects you to write the SQL but does a lot that saves time. My only complaint about Dapper is that (again) the documentation isn’t great but I’ve found it much simpler to grok than the more advanced aspects of Entity Framework.

One thing I do like about Entity Framework is data migrations. Like most developers I have a local database and another one online and code-first data migrations save a lot of work creating database tables and keeping the schema in sync. Dapper does not have this.
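If you have not seen one, a code-first migration is just generated C#; a hand-written sketch of the shape, with an invented table:

using Microsoft.EntityFrameworkCore.Migrations;

public partial class AddScoresTable : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.CreateTable(
            name: "Scores",
            columns: table => new
            {
                Id = table.Column<int>(nullable: false),
                BoardNumber = table.Column<int>(nullable: false),
                Score = table.Column<int>(nullable: false)
            },
            constraints: table => table.PrimaryKey("PK_Scores", x => x.Id));
    }

    protected override void Down(MigrationBuilder migrationBuilder)
        => migrationBuilder.DropTable(name: "Scores");
}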

StackOverflow

Of course it is true that no matter what your question is, someone has asked it before, and often the best place to find the answer is on StackOverflow. Big appreciation for the folk who take the time to answer questions there, though I’d add that it is not a place from which to copy code; it is a place to understand a solution. Out-of-date information is a problem, as it is in Microsoft’s own documentation.

Finally

I think ASP.NET Core is a great framework (or frameworks) but not as approachable as it could be. Documenting it in the best way is not an easy problem to solve, and every developer comes with different skills and requirements. Perhaps Microsoft could get someone suitable to write a nice book aimed at intermediate coders, and one that does not assume you want to use Entity Framework. Then offer it as a free download and/or publish it online as part of the documentation, and keep it up to date as new versions appear.

Should you convert your Visual Basic .NET project to C#? Why and why not…

When Microsoft first started talking about Roslyn, the .NET compiler platform, one of the features described was the ability to take some Visual Basic code and “paste as C#”, or vice versa.

Some years later, I wondered how easy it is to convert a VB project to C# using Roslyn. The SharpDevelop team has a nice tool for this, CodeConverter, which promises to “Convert code from C# to VB.NET and vice versa using Roslyn”. You can also find this on the Visual Studio marketplace. I installed it to try out.

[Screenshot: the CodeConverter extension]

Why would you do this though? There are several reasons, the foremost of which is cross-platform support. The Xamarin framework can use VB to some extent, but it is primarily a C# framework. .NET Core was developed first for C#. Microsoft has stated that “with regard to the cloud and mobile, development beyond Visual Studio on Windows and for non-Windows platforms, and bleeding edge technologies we are leading with C#.”

Note though that Visual Basic is still under active development and history suggests that your Windows VB.NET project will continue running almost forever (in IT terms that is). Even Visual Basic 6.0 applications still run, though you might find it convenient to keep an old version of Windows running for the IDE.

Still, if converting a project is just a right-click in Visual Studio, you might as well do it, right?

[Screenshot: the convert command in Visual Studio]

I tried it, on a moderately-sized VB DLL project. Based on my experience, I advise caution – though acknowledging that the converter does an amazing job, and is free and open source. There were thousands of errors which will take several days of effort to fix, and the generated code is not as elegant as code written for C#. In fact, I was surprised at how many things went wrong. Here are some of the issues:

The tool makes use of the Microsoft.VisualBasic namespace to simplify the conversion. This namespace provides handy VB features like DateDiff, which calculates the difference between two dates. The generated project failed to set a reference to this assembly, generating lots of errors about unknown objects called Information, Strings and so on. This is quick to fix. Less good is that statements using this assembly tend to be more convoluted, making maintenance harder. You can often simplify the code and remove the reference; but of course you might introduce a bug with careless typing. It is probably a good idea to remove this dependency, but it is not a problem if you want the quickest possible port.

Moving from a case-insensitive language to a case-sensitive language is a problem. Visual Studio does a good job of making your VB code mostly consistent with regard to case, but that is not a fix. The converter was unable to fix case-sensitivity issues, and introduced some of its own (Imports System.Text became using System.text and threw an error). There were problems with inheritance, and even subtle bugs. Consider the following, admittedly ugly and contrived, code:

[Code sample (image): a VB method where a parameter and another variable in scope differ only in case]

Here, the VB coder has used a different case for the parameter and for references to it in the body of the method. Unfortunately another variable, differing only in case, is also accessible. The VB code and the converted C# code both compile but return different results. Incidentally, the VB editor works very hard to prevent you writing this code! It does illustrate the kind of thing that can go wrong, though, and similar issues can arise in less contrived cases.
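As a hypothetical reconstruction of the same trap in its converted form, consider a field and a parameter differing only in case:

public class Totals
{
    private int total = 5; // also in scope, differing only in case

    // In the VB original the parameter was "Total" and the body referenced
    // "total"; VB is case-insensitive, so both named the parameter. In the
    // converted C#, "total" binds to the field instead: this still compiles,
    // but returns a different result.
    public int AddOne(int Total)
    {
        return total + 1;
    }
}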

C# is stricter than VB, which causes errors in conversion. In most cases this is not a bad thing, but it can cause headaches. For example, VB will let you pass object members ByRef but C# will not. In fact, VB will let you pass anything ByRef, even literal values, which is a puzzle! So this compiles and runs:

[Code sample (image): VB passing a literal value ByRef]

Another example is that in VB you can use an existing variable as the iteration variable, but in C# foreach you cannot.

Collections often go wrong. In VB you use an Item property to access the members of a collection such as a DataReader. In C# the Item property surfaces as an indexer and the name is omitted, but the converter does not pick this up.
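For example (my illustration): VB writes the default Item property out explicitly, but in C# the member name has to disappear:

using System.Data;

static class ReaderExample
{
    static string ReadName(IDataReader reader)
    {
        // VB: reader.Item("Name") – Item is the default property.
        // A conversion that keeps ".Item" will not compile in C#;
        // the correct form uses the indexer:
        return (string)reader["Name"];
    }
}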

Overloading sometimes goes wrong. The converter does not always successfully convert overloaded methods. Sometimes parameters get stripped away and a spurious new modifier is added.

Bitwise operators are not correctly converted.

VB allows indexed properties and properties with parameters. C# does not. The converter simply strips out the parameters so you need to fix this by hand. See https://stackoverflow.com/questions/2806894/why-c-sharp-doesnt-implement-indexed-properties if the language choices interest you.

There is more, but the above gives some idea about why this kind of conversion may not be straightforward.

It is probably true that the higher the standard of coding in the original project, the more straightforward the conversion is likely to be, the caveat being that more advanced language features are perhaps more likely to go wrong.

Null strings behave differently

Another oddity is that VB treats a String set to null (Nothing) as equivalent to an empty string:

Dim s As String = Nothing

If (s = String.Empty) Then 'TRUE in VB
    MsgBox("TRUE!")
End If

C# does not:

String s = null;

if (s == String.Empty) //FALSE in C#
{
    //won't run
}

Same code, different result, which can have unfortunate consequences.

Worth it?

So is it worth it? It depends on the rationale. If you do not need cross-platform, it is doubtful. The VB code will continue to work fine, and you can always add C# projects to a VB solution if you want to write most new code in C#.

If you do need to move outside Windows though, conversion is worthwhile, and automated conversion will save you a ton of manual work even if you have to fix up some errors.

There are two things to bear in mind though.

First, have lots of unit tests. Strange things can happen when you port from one language to another. Porting a project well covered by tests is much safer.

Second, be prepared for lots of refactoring after the conversion. Aim to get rid of the Microsoft.VisualBasic dependency, and use the stricter standards of C# as an opportunity to improve the code.

What the Blazor! After Silverlight, .NET in the browser reappears by another route

Silverlight, Microsoft’s browser plug-in which included a cut-down .NET runtime, once seemed full of promise for developers looking for an end-to-end .NET solution, cross-platform on Windows and Mac, and with support for “out of browser” applications for a native-like experience.

Silverlight was killed by various factors, including the industry’s rejection of old-style browser plug-ins, and warring factions at Microsoft which resulted in Silverlight on Windows Phone, but not on Windows 8. The Windows 8 model won, with what became the Universal Windows Platform (UWP) in Windows 10, but this is quite a different thing with no cross-platform support. Or there is Xamarin which is cross-platform .NET, and one day perhaps Microsoft will figure out what to do about having both UWP and Xamarin.

Yesterday though Microsoft announced (though it was already known to those paying attention) Blazor, an experimental project for hosting the .NET Runtime in the browser via WebAssembly. The name derives from “Browser + Razor”, Razor being the syntax used by ASP.NET to combine HTML and C# in a web application. C# in Razor executes on the server, whereas in Blazor it executes on the client.

Blazor is enabled by work the Xamarin team has done to compile the Mono runtime to WebAssembly. Although this sounds like a relatively large download, the team is hoping that a combination of smart linking (to strip out unnecessary code in both applications and the runtime) with caching and HTTP compression will make this acceptable.

This post by Steve Sanderson is a good technical overview. Some key points:

– you can run applications either as interpreted .NET IL (intermediate language) or pre-compiled

– Blazor is an SPA (Single Page Application) framework with solutions for routing, state management, dependency injection, unit testing and more

– UI components use HTML and CSS

– there will be a browser API which you can call from C# code

– you will be able to interop with JavaScript libraries

– Microsoft will provide ASP.NET libraries that integrate with Blazor, but you can use Blazor with any server-side technology

What version of .NET will be supported? This is where it gets messy. Sanderson says Blazor will support .NET Standard 2.0 or higher, but not completely, in that some functions will throw a PlatformNotSupportedException. The reason is that not all functions make sense in the context of a Blazor application.

Blazor sounds promising, though the demo application on Azure currently gives me a 403 error, so there is this video from NDC Oslo instead.

The other question is whether Blazor has a future or will join Silverlight and other failed attempts to create a new application platform that works. Microsoft demands much patience from its .NET community.

C# and .NET: good news and bad as Python rises

Two pieces of .NET news recently:

Microsoft has published a .NET Core 2.1 roadmap and says:

We intend to start shipping .NET Core 2.1 previews on a monthly basis starting this month, leading to a final release in the first half of 2018.

.NET Core is the cross-platform, open source implementation of the .NET Framework. It provides a future for C# and .NET even if Windows declines.

Then again, StackOverflow has just published a report on the most sought-after programming languages in the UK and Ireland, based on the tags on job advertisements on its site. C# has declined to fourth place, now below Python and with about half the demand of JavaScript:

[Chart: most sought-after languages in UK and Ireland job advertisements]

To be fair, this is more about increased demand for Python, probably driven by interest in AI, than about decline in C#. If you look at traffic on the StackOverflow site, C# is steady, but Python is growing fast:

[Chart: StackOverflow question traffic for C# and Python]

The point that interests me, though, is the extent to which Microsoft can establish .NET Core beyond the Microsoft-platform community. Personally I like C# and would like to see it have a strong future.

There is plenty of goodness in .NET Core. Performance seems to be better in many cases, and cross-platform support is a big advantage.

That said, there is plenty of confusion too. Microsoft has three major implementations of .NET: the .NET Framework for Windows, Xamarin/Mono for cross-platform, and .NET Core for, umm, cross-platform. If you want cross-platform ASP.NET you will use .NET Core. If you want cross-platform Windows/iOS/macOS/Android, then it’s Xamarin/Mono.

The official line is that by targeting a specification (a version of .NET Standard), you can get cross-platform irrespective of the implementation. It’s still rather opaque:

The specification is not singular, but an incrementally growing and linearly versioned set of APIs. The first version of the standard establishes a baseline set of APIs. Subsequent versions add APIs and inherit APIs defined by previous versions. There is no established provision for removing APIs from the standard.

.NET Standard is not specific to any one .NET implementation, nor does it match the versioning scheme of any of those runtimes.

APIs added to any of the implementations (such as, .NET Framework, .NET Core and Mono) can be considered as candidates to add to the specification, particularly if they are thought to be fundamental in nature.

Microsoft also says that plenty of code is shared between the various implementations. True, but it still strikes me that having both Xamarin/Mono and .NET Core is one cross-platform implementation too many.

Time for another look at “pure .NET”

Back in the Nineties there was a lot of fuss about “pure Java”. This meant Java code without any native code invocations that tie the application to a specific operating system.

It is possible to write cross-platform Java code that invokes native code, but it adds to the complexity. If it is an operating system API, you need conditional code so that the right API is called on each platform. If it is a custom library, it will have to be compiled separately for each platform.

Over on the Microsoft .NET side, developers have tended to have a more casual approach. After all, in the great majority of cases the code would only ever run on Windows. Further, Microsoft tended to steer developers towards Windows-only dependencies like SQL Server. After all, that is the value of owning a developer platform.

Times change. Microsoft has got the cross-platform bug, with its business strategy based on attracting businesses to its cloud properties (Office 365 and Azure) rather than Windows. The .NET Framework has been forked to create .NET Core, which runs on Mac and Linux as well as Windows. SQL Server is coming to Linux.

Another issue is porting applications from 32-bit to 64-bit, as I was reminded recently when migrating some ASP.NET applications to a new site. If your .NET code avoids P/Invoke (Platform Invoke) then you can compile for “Any CPU” and 64-bit will just work. If you use P/Invoke and want to support both 32-bit and 64-bit, it requires more care. IntPtr, used frequently in P/Invoke calls, is a different size. If you have custom native libraries, you need to compile them separately for each platform. The lazy solution is always to run as 32-bit, but that is a shame.
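As a reminder of what is involved, here is a typical declaration – a standard Win32 call, shown purely as illustration:

using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // IntPtr is 4 bytes in a 32-bit process and 8 in a 64-bit one; the
    // marshaller copes with that, but any custom native DLL referenced
    // this way must be built separately for each platform
    [DllImport("user32.dll")]
    public static extern int GetWindowTextLength(IntPtr hWnd);
}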

What this means is that P/Invoke should only be used as a last resort. Arguably this has always been true, but the reasons are stronger today.

This is also an issue for libraries and components intended for general use, whether open source or commercial. It is early days for .NET Core support, but any native code dependencies will be a problem.

Breaking the P/Invoke habit will not be easy but “Pure .NET” is the way to go whenever possible.

Xamarin announces large round of funding, plans international expansion

It is a case of “right time, right place” for Xamarin, as it scoops up Windows developers who need either to transition to iOS and Android, or to add mobile support to existing applications. You can also port applications to the Mac with its cross-platform development framework based on C#; no bad thing as Mac sales continue to boom.


Xamarin also fits with Microsoft’s new strategy, as I understand it, which is to provide strong support for iOS and Android for applications such as Microsoft Office, and services such as those hosted on Microsoft Azure.

Now the company has announced an additional $54 million of funding, which CEO Nat Friedman tells me is “the largest round of financing achieved by any mobile platform company ever”.

The financing comes from “new and existing investors, including Lead Edge Capital, Insight Venture Partners, Charles River Ventures, Ignition Partners, and Floodgate.”

What will the money be spent on? “Two things,” says Friedman. “We’re planning to expand our sales and marketing into Europe. We’re opening a sales office in London in the Fall. We did a roadshow with Microsoft in Europe and it was extremely successful. Second, we’re going to invest in improving the quality of our platforms.”

Friedman notes that mobile should not be considered a development niche. “Our view is that in the future all software will be mobile software in some way or another, when you build an application it will have to have some kind of mobile surface area.”

A few other points to note. One is that Xamarin Forms, recently introduced, has been a big hit with developers. “The Xamarin Forms forum has been our most popular forum,” says Friedman. “We’ve been really surprised.”

The company used to promote the idea of avoiding cross-platform code for the user interface, but then introduced Xamarin Forms as a cross-platform GUI framework, arguing that because it uses only native controls, it avoids the main drawbacks of the idea.

Some of the funding then will go into improving Xamarin Forms and tools to work with the framework.

Another key area is Visual Studio integration. The acquisition of the Visual Studio integration team from Clarius Consulting, in May 2014, is also significant here, since Clarius had strong expertise in this area.

Might Microsoft try to acquire Xamarin? Interesting question, and one which Friedman is not in a position to discuss; I am not a financial expert, but would guess that Xamarin’s expansion increases its ability to remain independent, though investors may be hoping to reap the rewards of an acquisition; who knows?

RemObjects previews native Apple Mac IDE for C#, .NET, Oxygene

RemObjects is previewing a new native Mac IDE for its Oxygene and C# compilers. Oxygene is a Delphi-like language (in other words, a variant of Object Pascal) which targets iOS, Mac, Android, Windows Phone and Windows. RemObjects C# shares the same targets. Both can compile to .NET assemblies for Windows, or to Mono for cross-platform .NET, or to a Mac or iOS executable (using the LLVM compiler), or to Java bytecode for the Android Dalvik runtime. You can get both Oxygene and RemObjects C# bundled in a product called Elements.

In the past, RemObjects has used Visual Studio as its IDE. While this is a natural choice for Windows users, much development today is done on the Mac. Requiring Mac users to develop in a Windows Virtual Machine adds friction, so RemObjects is now working on a native IDE for the Mac codenamed Fire.

[Screenshot: the Fire IDE]

I gave Fire the briefest of looks. Here are some of the options for a new .NET application:

[Screenshot: options for a new .NET application in Fire]

Note the appearance of ASP.NET MVC 4, and even Silverlight.

Here are the options for a new Cocoa application:

[Screenshot: options for a new Cocoa application in Fire]

If you are developing for Cocoa, you can edit the resource file in Apple’s Xcode and use it in your application. I started a new C# Cocoa app, made a few changes and then ran it from the IDE:

[Screenshot: the C# Cocoa app running]

I imagine Microsoft will be keeping an eye on tools like this – if it is not, it should be – since they fit with the strategy of supporting Microsoft services on multiple devices. Visual Studio is a fine tool, but if Microsoft is serious about cross-platform, it needs strong Mac-native development tools. Xamarin came up with Xamarin Studio, which is cross-platform for Windows and Mac, but the RemObjects approach also looks worth investigating.

PS The first release of RemObjects C# lacked full generic support, for which failing Xamarin and Mono founder Miguel de Icaza took RemObjects to task on Twitter. I was amused to see this in the changelog for April 2014:


65764 Full support for Generics on Cocoa, as requested by Miguel

For more details on Fire, see here.

Notes from the field: putting Azure Blob storage into practice

I rashly agreed to create a small web application that uploads files into Azure storage. Azure Blob storage is Microsoft’s equivalent to Amazon’s S3 (Simple Storage Service), a cloud service for storing files of up to 200GB.

File upload performance can be an issue, though if you want to test how fast your application can go, try it from an Azure VM: performance is fantastic, as you would expect from an Azure to Azure connection in the same region.

I am using ASP.NET MVC and thought a sample like this official one, Uploading large files using ASP.NET Web API and Azure Blob Storage, would be all I needed. It is a start, but the method used only works for small files. What it does is:

1. Receive a file via HTTP Post.

2. Once the file has been received by the web server, call CloudBlob.UploadFile to upload the file to Azure blob storage.

What’s the problem? Leaving aside the fact that CloudBlob is deprecated (you are meant to use CloudBlockBlob), there are obvious problems with files that are more than a few MB in size. The expectation today is that users see some sort of progress bar when uploading, and a well-written application will be resilient to brief connection breaks. Many users have asymmetric internet connections (such as ADSL) with slow upload speeds; large files will take a long time and something can easily go wrong. The sample is not resilient at all.

Another issue is that web servers do not appreciate receiving huge files in one operation. Imagine you are uploading the ISO for a DVD, perhaps a 3GB file. The simple approach of posting the file and having the web server upload it to Azure blob storage introduces obvious strain and probably will not work, even if you do mess around with maxRequestLength and maxAllowedContentLength in ASP.NET and IIS. I would not mind so much if the sample were not called “Uploading large files”; the author perhaps has a different idea of what is a large file.

Worth noting too that one developer hit a bug with blobs greater than 5.5MB when uploaded over HTTPS, which most real-world businesses will require.

What then are you meant to do? The correct approach, as far as I can tell, is to send your large files in small chunks called blocks. These are uploaded to Azure using CloudBlockBlob.PutBlock. You identify each block with an ID string, and when all the blocks are uploaded, call CloudBlockBlob.PutBlockList with a list of IDs in the correct order.
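In outline, with the storage client library of the time, the chunked upload looks something like this – my sketch, with an arbitrary block size and ID scheme, and error handling omitted:

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage.Blob;

static class BlockUploader
{
    public static void UploadInBlocks(CloudBlockBlob blob, string path,
                                      int blockSize = 1024 * 1024)
    {
        var blockIds = new List<string>();
        using (var fs = File.OpenRead(path))
        {
            var buffer = new byte[blockSize];
            int bytesRead, index = 0;
            while ((bytesRead = fs.Read(buffer, 0, blockSize)) > 0)
            {
                // Block IDs must be base64 strings of equal length
                string id = Convert.ToBase64String(
                    Encoding.UTF8.GetBytes(index++.ToString("d6")));
                blob.PutBlock(id, new MemoryStream(buffer, 0, bytesRead), null);
                blockIds.Add(id);
            }
        }
        blob.PutBlockList(blockIds); // commits the blocks in this order
    }
}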

This is the approach taken by Suprotim Agarwal in his example of uploading big files, which works and is a great deal better than the Microsoft sample. It even has a progress bar and some retry logic. I tried this approach, with a few tweaks. Using a 35MB file, I got about 80 KB/s with my ADSL broadband, a bit worse than the performance I usually get with FTP.

Can performance be improved? I wondered what benefit you get from uploading blocks in parallel. Azure Storage does not mind what order the blocks are uploaded. I adapted Agarwal’s sample to use multiple AJAX calls each uploading a block, experimenting with up to 8 simultaneous uploads from the browser.

The initial results were disappointing. Eventually I figured out that I was not actually achieving parallel uploads at all. The reason is that the application uses ASP.NET session state, and IIS will block multiple connections in the same session unless you mark your ASP.NET MVC controller class with [SessionState(SessionStateBehavior.ReadOnly)].

I fixed that, and now I do get multiple parallel uploads. Performance improved to around 105 KB/s, worthwhile though not dramatic.

What about using a Windows desktop application to upload large files? I was surprised to find little improvement. But can parallel uploading help here too? The answer is that it should happen anyway, handled by the .NET client library, according to this document:

If you are writing a block blob that is no more than 64 MB in size, you can upload it in its entirety with a single write operation. Storage clients default to a 32 MB maximum single block upload, settable using the SingleBlobUploadThresholdInBytes property. When a block blob upload is larger than the value in this property, storage clients break the file into blocks. You can set the number of threads used to upload the blocks in parallel using the ParallelOperationThreadCount property.
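That suggests you can steer the library with a couple of properties rather than hand-rolling the chunking; a sketch, assuming the same client library (the exact method signatures varied a little between versions):

using Microsoft.WindowsAzure.Storage.Blob;

static class ParallelUploadExample
{
    public static void Upload(CloudBlockBlob blob, string path)
    {
        var options = new BlobRequestOptions
        {
            // Anything larger than this is split into blocks...
            SingleBlobUploadThresholdInBytes = 4 * 1024 * 1024,
            // ...and up to this many blocks are uploaded in parallel
            ParallelOperationThreadCount = 4
        };
        blob.UploadFromFile(path, null, options, null);
    }
}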

It sounds as if there is little advantage in writing your own chunking code, except that if you just call the UploadFromFile or UploadFromStream methods of CloudBlockBlob, you do not get any progress notification event (though you can get a retry notification from an OperationContext object passed to the method). Therefore I looked around for a sample using parallel uploads, and found this one from Microsoft MVP Tyler Doerksen, using C#’s Parallel.For.

Be warned: it does not work! Doerksen’s approach is to upload the entire file into memory (not great, but not as bad as on a web server), send it in chunks using CloudBlockBlob.PutBlock, adding the block ID to a collection at the same time, and then to call CloudBlockBlob.PutBlockList. The reason it does not work is that the order of the loops in Parallel.For is indeterminate, so the block IDs are unlikely to be in the right order.

I fixed this, it tested OK, and then I decided to further improve it by reading each chunk from the file within the loop, rather than loading the entire file into memory. I then puzzled over why my code was broken. The files uploaded, but they were corrupt. I worked it out. In the following code, fs is a FileStream object:

fs.Position = x * blockLength;
bytesread = fs.Read(chunk, 0, currentLength);

Spot the problem? Since fs is a variable declared outside the loop, other threads were setting its position during the read operation, with random results. I fixed it like this:

lock (fs)
{
    fs.Position = x * blockLength;
    bytesread = fs.Read(chunk, 0, currentLength);
}

and the file corruption disappeared.

I am not sure why, but the manually coded parallel uploads seem to slightly but not dramatically improve performance, to around 100-105 KB/s, almost exactly what my ASP.NET MVC application achieves over my broadband connection.


There is another approach worth mentioning. It is possible to bypass the web server and upload directly from the browser to Azure storage. To do this, you need to allow cross-origin resource sharing (CORS) as explained here. You also need to issue a Shared Access Signature, a temporary key that allows read-write access to Azure storage. A guy called Blair Chen seems to have this all figured out, as you can see from his Azure speed test and jazure JavaScript library, which makes it easy to upload a blob from the browser.

I was contemplating going that route, but it seems that performance is no better (judging by the Test Upload Big Files section of Chen’s speed test), so I should probably be content with the parallel JavaScript upload solution, which avoids fiddling with CORS.

Overall, has my experience with the Blob storage API been good? I have not found any issues with the service itself so far, but the documentation and samples could be better. This page should be the jumping off point for all you need to know for a basic application like mine, but I did not find it easy to find good samples or documentation for what I thought would be a common scenario, uploading large files with ASP.NET MVC.

Update: since writing this post I have come across this post by Rob Gillen which addresses the performance issue in detail (and links to working Parallel.For code); however I suspect that since the post is four years old the conclusions are no longer valid, because of improvements to the Azure storage client library.

Embarcadero AppMethod: another route to cross-platform mobile, now with C++ support

Embarcadero has updated AppMethod, its IDE for cross-platform mobile and desktop applications. The IDE now supports C++, and as a special offer, you can develop for Android phones “free forever”, according to the web site.

AppMethod is none other than our old friend Delphi, combined with the FireMonkey cross-platform framework. The difference between AppMethod and the older RAD Studio product line (current version is XE6) is twofold:

1. AppMethod does not include the VCL, the Delphi framework for Windows applications. It does let you develop for Windows or Mac OS X using FireMonkey.

2. You can buy RAD Studio outright with a perpetual license, from £1342.00 plus VAT for a new user (RAD Studio Professional). AppMethod is only available on subscription.

AppMethod pricing is per developer per platform per year. Currently this is £179.83 plus VAT for individuals (very small businesses up to a maximum of 5 employees in the entire organisation) or £600 for larger businesses (a rather large premium).

C++ support is new in AppMethod 1.14 and supports all target platforms except the iOS Simulator (an annoying limitation). It supports ARC (Automatic Reference Counting) on Android as well as iOS. Mac OS X is supported from 10.8 (Mountain Lion) and up.

There are also a few changes in FireMonkey. You can load HTML into the TWebBrowser component using LoadFromStrings. There is a new date picker component.

Another new feature is in the RTL (run time library). Called App Tethering, it lets applications communicate with each other, for example using TCP. These can be apps on the same device or remote apps. Once paired, apps can run remote actions and share standard data types and streams.

There are also updates to push notifications for iOS and Android, Google Glass support, updated OpenGL and DirectX support on Windows, and more: see here for the complete documentation of what is new.

A Quick Hands-on

I installed the latest AppMethod on Windows 8. The install warns that AppMethod cannot co-exist with RAD Studio XE6, presumably because it is essentially the same thing re-wrapped. The product name is relatively new, but there is plenty of old stuff under the covers: AppMethod still has a dependency on JSharp, Microsoft’s Java implementation for .NET – Java code in the IDE dating back to who knows when.


There is a 10-field dialog confirming paths for the Android tools, which is a reminder of how many moving parts there are here. It is more complex than most Android development environments because it uses the NDK (Native Development Kit) as well as the usual SDK.

[Screenshot: the Android tools path dialog]

Once up and running, you can start a new project such as a FireMonkey mobile application:

[Screenshot: the new project dialog]

and then you are in an IDE which would not be entirely unfamiliar to a Delphi user in 1995 (or I suppose, a C++ Builder user in 1997) – I am not saying this is a bad thing, though the IDE feels dated in comparison to Microsoft’s Visual Studio.

[Screenshot: the AppMethod IDE]

After coming from a spell of development with XAML it feels odd to have a form builder that defaults to xy layout, but layout managers are available:

[Screenshot: the form designer]

Compile and run, and after the usual slow initialization of the Android emulator, the app appeared.

[Screenshot: the app running in the Android emulator]

Why AppMethod?

In the crowded world of cross-platform mobile development, why use AppMethod?

Embarcadero makes a big play of its native development, though it is “native” in respect of code execution but not in GUI fidelity since by default visual controls are custom-drawn by the framework. This is in contrast to Xamarin (the obvious alternative for developers from a Windows background) which does no custom drawing but only uses native controls; however for raw performance AppMethod may have the edge (I have not done comparisons).

Delphi developers should also look at RemObjects Oxygene which also uses a Delphi-like language but is hosted in Visual Studio and, like Xamarin, uses native UI components.

The AppMethod approach does make sense if you prioritise maximum code-sharing over getting exactly the right look and feel for each supported platform, and need better performance or more capability than HTML and JavaScript can get you. There is no support for Windows Phone though; if that is in your plans, Xamarin or HTML and JavaScript development is a better fit.