Category Archives: professional

Book review: Professional ASP.NET MVC 5. Is this the way to learn ASP.NET MVC?

This book caught my eye because while I like ASP.NET MVC, Microsoft’s modern web application framework, it seems to be badly documented. Even the word “badly” is not quite right; there is lots of documentation, some of high quality, but finding your way around it is challenging, thanks to the many different pieces involved. When I completed an ASP.NET MVC project recently, I found it frustrating thanks to over-reliance on sample projects (hey, here is an application we did that works, see if you can figure out how we did it), many out-of-date articles relating to old versions, and the opposite: posts and samples which include preview software that does not seem wise to use in production.


In my experience ASP.NET MVC is both cleaner and faster than ASP.NET Web Forms, the older .NET web framework, but there is more to learn before you can go ahead and write an application.

Professional ASP.NET MVC 5 gives you nearly 600 pages on the subject. It is aimed at a broad readership: the introduction states:

Professional ASP.NET MVC 5 is designed to teach ASP.NET MVC, from a beginner level through advanced topics.

Perhaps that is too broad, though the idea is that the first six chapters (about 150 pages) cover the basics, and that the later chapters are more advanced, so if you are not a beginner you can start at chapter 7.

The main author is Jon Galloway who is a Technical Evangelist at Microsoft. The other authors are Brad Wilson, formerly at Microsoft and now at CenturyLink Cloud; K Scott Allen at OdeToCode; David Matson, who is on the ASP.NET MVC team at Microsoft; and Phil Haack, formerly at Microsoft and now at GitHub. I get the impression that Haack wrote several chapters in an earlier edition of the book, but did not work directly on this one; Galloway brought his chapters up to date.

Be in no doubt: there are plenty of well-informed ASP.NET MVC people on this team.

The earlier part of the book uses a sample Music Store application, a version of which is publicly available here. You can also download a tutorial, based on the sample, written by Galloway. The public tutorial however dates from 2011 and is based on ASP.NET MVC 3 and Visual Studio 2010. The book uses Visual Studio 2013.

Chapters 1 to 6, the beginner section, do a decent job of talking you through how to build a first application. There are chapters on Controllers, Views, Models, Forms and HTML Helpers, and finally Data Annotations and Validation. It’s a good basic introduction, but if you are like me you will come out with many questions, such as: what is an ActionResult (the return type of most controller action methods)? You have to wait until chapter 16 for a full description.
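To give a flavour of what these early chapters have you writing, here is a minimal controller sketch of my own; it is illustrative only, not code from the book or its Music Store sample.

using System.Web.Mvc;

public class AlbumsController : Controller
{
    // ActionResult is the abstract return type; View() actually returns a ViewResult
    public ActionResult Index()
    {
        ViewBag.Message = "Hello from ASP.NET MVC 5";
        return View();
    }
}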

Chapter 7 is on Membership, Authorization and Security. That is too much for one chapter. It is mostly on security, and inadequate on membership. One of my disappointments with this book is that Azure Active Directory hardly gets a mention; yet to my mind integration of web applications with Office 365 (which uses Azure AD) is a huge feature for Microsoft.

On security though, this is a useful chapter, with handy coverage of Cross-Site Request Forgery and other common vulnerabilities.
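To illustrate the kind of protection covered (a minimal sketch of my own, not the book’s code): you pair the hidden token that Html.AntiForgeryToken() emits in the Razor form with an attribute on the action that receives the post, so requests forged from another site are rejected.

using System.Web.Mvc;

public class ProfileViewModel
{
    public string DisplayName { get; set; }
}

public class AccountController : Controller
{
    // The corresponding Razor form must call @Html.AntiForgeryToken()
    [HttpPost]
    [ValidateAntiForgeryToken] // rejects posts that lack a matching anti-forgery token
    public ActionResult UpdateProfile(ProfileViewModel model)
    {
        if (!ModelState.IsValid)
        {
            return View(model);
        }
        // save the changes here, then redirect
        return RedirectToAction("Index");
    }
}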

Next comes a chapter on AJAX with a little bit on jQuery, client-side validation, and Ajax ActionLinks. Here is the dilemma though. Does it make sense to cover jQuery in detail, when this very popular open source library is widely documented elsewhere? On the other hand, does it make sense not to cover jQuery in detail, when it is usually a vital part of your ASP.NET MVC application?
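For what it is worth, an Ajax ActionLink in a Razor view looks something like the sketch below (the action and element names are my own); it depends on the jquery.unobtrusive-ajax script being referenced, which is part of why jQuery matters to a typical ASP.NET MVC application.

@Ajax.ActionLink(
    "Show latest orders",            // link text
    "LatestOrders",                  // action invoked asynchronously
    new AjaxOptions
    {
        UpdateTargetId = "orders",   // element whose contents are replaced with the response
        InsertionMode = InsertionMode.Replace,
        HttpMethod = "GET"
    })
<div id="orders"></div>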

I would add that this title is poor on design aspects of a web application. That said, I was not expecting much on the design side; but what would help would be coverage of how to work with designers: what is safe to hand over to designers, and how does a typical designer/developer workflow play out with ASP.NET MVC?

I would also like to see more coverage of how to work with Bootstrap, the CSS framework which is integrated with ASP.NET MVC 5 in Visual Studio. I found it a challenge, for example, to discover the best way to change the default fonts and colours used, which is rather basic.

Chapter 9 is on routing: dry but essential background. Chapter 10 covers NuGet, the Visual Studio package manager; a good chapter, given how important NuGet now is for most Visual Studio work.

Incidentally, many of the samples for the book can be installed via NuGet. It’s not completely obvious how to do this. I found the best way is to go to http://www.nuget.org and search for Wrox.ProMvc5 – here is the link to the search results. This lists all the packages available; note the package names. Then open the NuGet Package Manager Console and type:

install-package [packagename]

to get the sample.

Chapter 11, on the Web API, is too brief. I would like to see more on this, maybe even a walk through a complete application with clients for, say, Windows Phone and a web application – though the following chapter does present a client example using AngularJS.

Chapter 13 is a somewhat theoretical look at dependency injection and inversion of control; handy as Microsoft developers talk a lot about this.

Next comes a very brief introduction to unit testing, intended I think only as a starting point.

For me, the next two chapters are the most valuable. Chapter 15 concerns extending MVC: you learn about extending models with value providers and model binders; validating models; writing HTML helpers and Razor (the view engine in ASP.NET MVC) helpers; and authentication filters and authorization filters. Chapter 16 on advanced topics looks in more detail at Razor, routing, templates, ActionResult and a few other things.
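To illustrate the sort of thing chapter 15 covers: a custom HTML helper is just an extension method on HtmlHelper. This trivial example is my own, not the book’s.

using System.Web.Mvc;

public static class BadgeHelpers
{
    // usage in a Razor view: @Html.Badge("New")
    public static MvcHtmlString Badge(this HtmlHelper html, string text)
    {
        var span = new TagBuilder("span");
        span.AddCssClass("badge");
        span.SetInnerText(text); // HTML-encodes the text
        return MvcHtmlString.Create(span.ToString());
    }
}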

Finally, we get a look at how the Nuget.org application was put together, and an appendix covering some miscellaneous details like what is new in ASP.NET MVC 5.1.

Conclusions

I find this one hard to summarise. There is too much missing to give this an unreserved recommendation. I would like more on topics including ASP.NET Identity, Azure AD integration, Entity Framework, Bootstrap, and more. Trying to cover every developer from beginner to advanced is too much; removing some of the introductory material would have left more room for the more interesting sections. The book is also rather weighted towards theory rather than hands-on coding. At some points it felt more like an explanation from the ASP.NET MVC team on “why we did it this way”, than a developer tutorial.

That said, having those insights from the team is valuable in itself. As someone who has only recently engaged with ASP.NET MVC in a real application, I did find the book useful and will come back to some of those explanations in future.

Looking at what else is available, it seems to me that there is a shortage of books on this subject and that a “what you need to know” title aimed at professional developers would be widely welcomed. It would pay Microsoft to sponsor it, since my sense is that some developers stick with ASP.NET Web Forms not because it is better, but because it is more approachable.

 

Microsoft introduces a new 2D graphics API for the Windows Runtime

Microsoft has announced Win2D, a Windows Runtime API that wraps Direct2D (part of DirectX), for accelerated graphics in Windows Store apps.

The new API is described here and you can download the current binary here. It is in its early stages, but already supports basic drawing, bitmap loading and some image effects, and includes a vector and matrix math library. Here is some sample code:

void canvasControl_Draw(CanvasControl sender, CanvasDrawEventArgs args)
{
    args.DrawingSession.Clear(Colors.CornflowerBlue);
    args.DrawingSession.DrawEllipse(190, 125, 140, 40, Colors.Black, 6);
    args.DrawingSession.DrawText("Hello, world!", 100, 100, Colors.Yellow);
}

Although this hardly looks exciting, it is important because it enables accelerated custom drawing from languages other than C++, and without needing to learn Direct2D itself. It will be easier to make rich custom controls, or casual 2D games.

That said, there are already alternative C# wrappers for DirectX in Windows Runtime apps, such as SharpDX.

Some of the comments on the MSDN post are sceptical:

Managed DirectX and XNA were however cancelled despite the frustration from the community which in response created open source alternatives to save the projects and customers that had invested in technology Microsoft introduced.

I understand that the future is "uncertain", but is this technology something that we should dare invest in or will it see the same fate as it’s earlier incarnations?

Microsoft’s Shawn Hargreaves assures:

Win2D is absolutely not a side project or some kind of stop gap that will later be replaced by anything different.

The target here is universal apps, so not just Windows Store apps but also Windows Phone. Despite the hesitant reception for the Windows Runtime in Windows 8, it looks as if Microsoft is still committed to the platform and that it will remain centre stage in Windows vNext.

Embarcadero RAD Studio XE7 (Delphi, C++Builder): is seven the magic number?

Embarcadero has released version 7 of its XE programming suite. The main products included are Delphi and C++ Builder, RAD development tools that share the same underlying libraries and visual designers but give developers a choice of language. Delphi uses an object-oriented evolution of Pascal.


Delphi is best known as a Windows programming tool – it used to be the main competition for Visual Basic – but over the last few years Embarcadero has added cross-platform Mac and mobile development with native compilers for OS X, iOS and Android. The IDE runs only on Windows but can compile for the Mac or for iOS. New versions have come thick and fast – XE6 was released in April 2014 – so if you want to stay up to date, prepare for frequent upgrades or buy with a support and maintenance agreement. You can buy Delphi or C++ Builder separately if you do not require the suite.

The full RAD Studio also includes HTML 5 Builder, which supports mobile app development using Cordova (the open source version of PhoneGap). There seems to be little new in HTML 5 Builder. An earlier PHP tool variously called Delphi for PHP and RadPHP was dropped some time back. I get the impression that Embarcadero is now more focused on its core strengths.


So what’s new? Making effective cross-platform development tools is not easy, with trade-offs between productivity (share more code) and writing the best app for each platform (share less code). This edition introduces a new approach to designing the user interface, called the Multi-Device Designer. It is based on a kind of inheritance. You build your base UI in a master form and write most of the event-handling code there. This master form is automatically adapted, to some extent, to other platforms. You can see how your form looks on these other platforms by dropping down a list.


When you select the form for a specific platform, you can modify it for that platform. There is still only one form, but the platform-specific views override properties set in the master form. If you then further modify the master, the changes will flow down to the platform-specific forms unless properties have already been overridden.


My impression after a five-minute play is that you will indeed have to make modifications to get each form looking right; the automatically generated versions were not too good. There is still good productivity potential here, presuming the designer proves to be robust.

A common criticism of Embarcadero’s approach is that visual controls are custom-drawn on each platform, rather than using true native controls. That does not matter at all, Embarcadero always assured me. It does matter though; and now in XE7 we have the beginning of a solution. There are a couple of optional Platform Native Controls, TEdit and TCalendar for iOS, which do use native controls. I suspect this will be popular and hope that more platform native controls arrive in due course.

App Tethering is a feature/library that lets you easily set up connectivity between Delphi/C++ Builder apps on a local network. The first version only supported Ethernet/Wi-Fi, but Bluetooth support has now been added, including Bluetooth LE on Windows 8 and recent Android devices.

On Android, a new tool called Java2OP lets you generate Object Pascal interfaces for Java Android classes, which sounds handy.

Aside: the naming of this tool suggests that the language is now called Object Pascal again, rather than Delphi, which became the official name some years back. Object Pascal makes more sense to me.

The System.Threading library now includes a new parallel programming library, including Parallel For, task scheduling, and futures. Futures are a way of creating code that will run at an indeterminate time. You associate a variable with a function that calculates its value. That function will run when you access the value, or before that if a background thread is available.

The IDE now has limited Git support (local repository only).

Another new piece in XE7 is Enterprise Mobility Services, a REST-based middleware stack that runs as an ISAPI DLL in Microsoft’s IIS web server. This includes database connectivity (using the FireDAC library), user management (though not Active Directory integration as yet, as I understand it) and usage analytics.

If you are using IIS, why would you not use ASP.NET and the Web API? The answer is that with EMS you can do end-to-end Delphi/C++ Builder as well as getting the performance of native code on the server.

Challenges for Embarcadero and RAD Studio

In the nineties it was Delphi versus Visual Basic, and although most developers who gave Delphi serious attention discovered that it was superior in most ways to Microsoft’s tool, the big-company backing and integration with Microsoft’s overall platform meant that VB was not much disrupted (though we may have Delphi to thank for the appearance of native code compilation in VB).

Today Embarcadero is up against Xamarin, which is similar in that it gives Microsoft platform developers a route to cross-platform development for Mac, iOS and Android.

From what I hear, cross-platform support in RAD Studio has been successful in reinvigorating the product within its niche, but it is Xamarin that has grown explosively, thanks to a combination of the C# language, Visual Studio integration, and a degree of official endorsement from Microsoft. Whereas Xamarin fits with Microsoft’s Universal App concept, shared C# code across all platforms, RAD Studio takes its own path, avoiding .NET in favour of native executables.

[I realise that there is endless debate about what native means, and that while RAD Studio has a good claim to native code, it is weak when it comes to native controls as noted above].

Unlike Xamarin, which has its own cross-platform IDE for Windows and Mac, RAD Studio requires Mac developers to use a PC or a Windows VM.

Embarcadero chose not to support Windows 8 “Metro” or Store apps, a decision which now looks wise, though it could yet work against them if Universal Apps are more compelling in Windows vNext. Another omission is Windows Phone; perhaps this does not matter greatly given its small market share, but within the Microsoft platform community it is a bigger lack than simple market share implies.

The advantage of the RAD Studio approach is that it is less dependent on Microsoft’s constant changes of direction, and performance is generally good. I have always been a fan of Delphi. There were some quality concerns when the FireMonkey cross-platform UI library was first adopted, but now in RAD Studio XE7 there is reasonable hope that the library is mature enough.

RAD Studio is the obvious route for long-time Delphi or C++ developers migrating to mobile; it is a viable niche, but I question whether it can ever move beyond it to grab a share of the wider mobile development market.

More information here.

Bing Developer Assistant adds code samples to Visual Studio IntelliSense, with mixed results

Microsoft has updated its Bing Developer Assistant Beta, a Visual Studio 2013 add-in which hooks into IntelliSense so that you get code samples as well as brief documentation. For example, in an Entity Framework project, if you select dbContext.SaveChanges, you get a code sample which uses that method.


There is no guarantee of course that the sample is relevant to what you are trying to accomplish. You can hit Search More though and get a selection of code snippets and sample projects, drawn from sites including MSDN, Stack Overflow and CodeProject.


Developer beware though. Looking at the code samples, the top one is from a 2011 blog post relating to CTP (Community Tech Preview) 5 of Entity Framework 4.1. If you hit the link, you get this:


“The information in this post is out of date”, it says, followed by a link to what is in fairness a rather helpful article on using SaveChanges.

Hmm, maybe Bing Developer Assistant should try filtering the search to eliminate samples on preview or obsolete APIs? A snag here though is that on occasion the blogs and samples on preview frameworks are all you can get, because by the time the thing is actually released, the developer evangelists have moved on to blog about the next up-and-coming cool thing.

If you choose an object member for which Bing finds no code sample, you are prompted to add one of your own:


This takes you to the Developer Network sample upload page:


This form is quite a lot of work, but lets you add a code snippet or sample project together with title and comments explaining what it does.

The Bing Developer Assistant also searches for sample projects:


Again it is a case of picking and choosing what is really relevant; but developers are experts and are expected to use common sense.

A drawback with Bing Developer Assistant is that only one add-on can extend IntelliSense, so if you use Resharper or another tool which also does this, you have to choose which one to allow.

In the end, this is all about integrating web search into the IDE. Is that a good idea, or is it better simply to have your web browser open, perhaps on another display, and type “dbContext SaveChanges EF6” or some such into your favourite search engine?

There is some merit in a search engine that automatically filters to show only code samples – hey, that is what Google’s popular Code Search did, until it was mysteriously shut down – though I’m not sure how much I like the idea of possibly obsolete and deprecated samples showing up in Visual Studio as you are coding.

Still, the truth is that web search is critical to software development today and it is good to see that recognised.

Microsoft StorSimple brings hybrid cloud storage to the enterprise, but what about the rest of us?

Microsoft has released details of its StorSimple 8000 Series, the first major new release since it acquired the hybrid cloud storage appliance business back in late 2012.

I first came across StorSimple at what proved to be the last MMS (Microsoft Management Summit) event last year. The concept is brilliant: present the network with infinitely expandable storage (in reality limited to 100TB – 500TB depending on model), storing the new and hot data locally for fast performance, and seamlessly migrating cold (i.e. rarely used) data to cloud storage. The appliance includes SSD as well as hard drive storage so you get a magical combination of low latency and huge capacity. Storage is presented using iSCSI. Data deduplication and compression increase effective capacity, and cloud connectivity also enables value-add services including cloud snapshots and disaster recovery.


The two new models are the 8100 and the 8600:

                                            8100        8600
Usable local capacity                       15TB        40TB
Usable SSD capacity                         800GB       2TB
Effective local capacity                    15-75TB     40-200TB
Maximum capacity including cloud storage    200TB       500TB
Price                                       $100,000    $170,000

Of course there is more to the new models than bumped-up specs. The earlier StorSimple models supported both Amazon S3 (Simple Storage Service) and Microsoft Azure; the new models support only Azure blob storage. VMware VAAI (vStorage APIs for Array Integration) is still supported.

On the positive side, StorSimple is now backed by additional Azure services – note that these only work with the new 8000 series models, not with existing appliances.

The Azure StorSimple Manager lets you manage any number of StorSimple appliances from the Azure portal – note this is in the old Azure portal, not the new preview portal, which intrigues me.


Backup snapshots mean you can go back in time in the event of corrupted or mistakenly deleted data.


The Azure StorSimple Virtual Appliance has several roles. You can use it as a kind of reverse StorSimple; the virtual device is created in Azure at which point you can use it on-premise in the same way as other StorSimple-backed storage. Data is uploaded to Azure automatically. An advantage of this approach is if the on-premise StorSimple becomes unavailable, you can recreate the disk volume based on the same virtual device and point an application at it for near-instant recovery. Only a 5MB file needs to be downloaded to make all the data available; the actual data is then downloaded on demand. This is faster than other forms of recovery which rely on recovering all the data before applications can resume.


The alarming check box “I understand that Microsoft can access the data stored on my virtual device” was explained by Microsoft technical product manager Megan Liese as meaning simply that data is in Azure rather than on-premise but I have not seen similar warnings for other Azure data services, which is odd. Further to this topic, another journalist asked Marc Farley, also on the StorSimple team, whether you can mark data in standard StorSimple volumes not to be copied to Azure, for compliance or security reasons. “Not right now” was the answer, though it sounds as if this is under consideration. I am not sure how this would work within a volume, since it would break backup and data recovery, but it would make sense to be able to specify volumes that must remain always on-premise.

All data transfer between Azure and on-premise is encrypted, and the data is also encrypted at rest, using a service data encryption key which according to Farley is not stored or accessible by Microsoft.


Another way to use a virtual appliance is to make a clone of on-premise data available, for tasks such as analysing historical data. The clone volume is based on the backup snapshot you select, and is disconnected from the live volume on which it is based.


StorSimple uses Azure blob storage but the pricing structure is different from that of standard blob storage; unfortunately I do not have details of this. You can access the data only through StorSimple volumes, since the data is stored using internal data objects that are StorSimple-specific. Data stored in Azure is made redundant using the usual Azure “three copies” principle; I believe this includes geo-redundancy though this may be a customer option.

StorSimple appliances are made by Xyratex (which is being acquired by Seagate) and you can find specifications and price details on the Seagate StorSimple site, though we were also told that customers should contact their Microsoft account manager for details of complete packages. I also recommend the semi-official blog by a Microsoft technical solutions professional based in Sydney which has a ton of detailed information here.

StorSimple makes huge sense, but with six-figure pricing this is an enterprise-only solution. How would it be, I muse, if the StorSimple software were adapted to run as a Windows service rather than only in an appliance, so that you could create volumes in Windows Server that use similar techniques to offer local storage that expands seamlessly into Azure? That also makes sense to me, though when I asked at a Microsoft Azure workshop about the possibility I was rewarded with blank looks; but who knows, they may know more than is currently being revealed.

Notes from the field: putting Azure Blob storage into practice

I rashly agreed to create a small web application that uploads files into Azure storage. Azure Blob storage is Microsoft’s equivalent to Amazon’s S3 (Simple Storage Service), a cloud service for storing files of up to 200GB.

File upload performance can be an issue, though if you want to test how fast your application can go, try it from an Azure VM: performance is fantastic, as you would expect from an Azure to Azure connection in the same region.

I am using ASP.NET MVC and thought a sample like this official one, Uploading large files using ASP.NET Web API and Azure Blob Storage, would be all I needed. It is a start, but the method used only works for small files. What it does is:

1. Receives a file via HTTP POST.

2. Once the file has been received by the web server, calls CloudBlob.UploadFile to upload the file to Azure blob storage.

What’s the problem? Leaving aside the fact that CloudBlob is deprecated (you are meant to use CloudBlockBlob), there are obvious problems with files that are more than a few MB in size. The expectation today is that users see some sort of progress bar when uploading, and a well-written application will be resistant to brief connection breaks. Many users have asymmetric internet connections (such as ADSL) with slow upload speeds; large files will take a long time and something can easily go wrong. The sample is not resilient at all.

Another issue is that web servers do not appreciate receiving huge files in one operation. Imagine you are uploading the ISO for a DVD, perhaps a 3GB file. The simple approach of posting the file and having the web server upload it to Azure blob storage introduces obvious strain and probably will not work, even if you do mess around with maxRequestLength and maxAllowedContentLength in ASP.NET and IIS. I would not mind so much if the sample were not called “Uploading large files”; the author perhaps has a different idea of what is a large file.

Worth noting too that one developer hit a bug with blobs greater than 5.5MB when uploaded over HTTPS, which most real-world businesses will require.

What then are you meant to do? The correct approach, as far as I can tell, is to send your large files in small chunks called blocks. These are uploaded to Azure using CloudBlockBlob.PutBlock. You identify each block with an ID string, and when all the blocks are uploaded, call CloudBlockBlob.PutBlockList with a list of IDs in the correct order.
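A minimal sketch of that pattern, assuming the Microsoft.WindowsAzure.Storage client library and leaving out the retry and progress reporting a real application needs:

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage.Blob;

class BlockUploader
{
    public static void UploadInBlocks(CloudBlockBlob blob, string filePath, int blockSize = 1024 * 1024)
    {
        var blockIds = new List<string>();
        var buffer = new byte[blockSize];
        int blockNumber = 0;
        int bytesRead;

        using (var fs = File.OpenRead(filePath))
        {
            while ((bytesRead = fs.Read(buffer, 0, blockSize)) > 0)
            {
                // Block IDs must be Base64 strings of equal length within one blob
                string blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(blockNumber.ToString("d6")));
                using (var ms = new MemoryStream(buffer, 0, bytesRead))
                {
                    blob.PutBlock(blockId, ms, null); // upload one block
                }
                blockIds.Add(blockId);
                blockNumber++;
            }
        }

        blob.PutBlockList(blockIds); // commit the blocks in order to assemble the blob
    }
}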

This is the approach taken by Suprotim Agarwal in his example of uploading big files, which works and is a great deal better than the Microsoft sample. It even has a progress bar and some retry logic. I tried this approach, with a few tweaks. Using a 35MB file, I got about 80 KB/s with my ADSL broadband, a bit worse than the performance I usually get with FTP.

Can performance be improved? I wondered what benefit you get from uploading blocks in parallel. Azure Storage does not mind what order the blocks are uploaded. I adapted Agarwal’s sample to use multiple AJAX calls each uploading a block, experimenting with up to 8 simultaneous uploads from the browser.

The initial results were disappointing. Eventually I figured out that I was not actually achieving parallel uploads at all. The reason is that the application uses ASP.NET session state, and IIS will block multiple connections in the same session unless you mark your ASP.NET MVC controller class with a SessionState attribute specifying SessionStateBehavior.ReadOnly.
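In code, that means decorating the controller along these lines (a sketch; the controller and action names are placeholders for my upload handler):

using System.Web.Mvc;
using System.Web.SessionState;

[SessionState(SessionStateBehavior.ReadOnly)] // lets IIS run requests from the same session concurrently
public class UploadController : Controller
{
    [HttpPost]
    public ActionResult PutBlock(string blockId)
    {
        // read the block from Request.InputStream and push it to Azure blob storage here
        return new HttpStatusCodeResult(200);
    }
}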

I fixed that, and now I do get multiple parallel uploads. Performance improved to around 105 KB/s, worthwhile though not dramatic.

What about using a Windows desktop application to upload large files? I was surprised to find little improvement. But can parallel uploading help here too? The answer is that it should happen anyway, handled by the .NET client library, according to this document:

If you are writing a block blob that is no more than 64 MB in size, you can upload it in its entirety with a single write operation. Storage clients default to a 32 MB maximum single block upload, settable using the SingleBlobUploadThresholdInBytes property. When a block blob upload is larger than the value in this property, storage clients break the file into blocks. You can set the number of threads used to upload the blocks in parallel using the ParallelOperationThreadCount property.
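Those two settings are properties on BlobRequestOptions. A sketch of letting the client library handle the chunking and parallelism, with blob and filePath as before:

var options = new BlobRequestOptions
{
    SingleBlobUploadThresholdInBytes = 4 * 1024 * 1024, // blobs larger than this are sent as blocks
    ParallelOperationThreadCount = 4                    // upload up to four blocks at a time
};

using (var fs = File.OpenRead(filePath))
{
    blob.UploadFromStream(fs, accessCondition: null, options: options, operationContext: null);
}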

It sounds as if there is little advantage in writing your own chunking code, except that if you just call the UploadFromFile or UploadFromStream methods of CloudBlockBlob, you do not get any progress notification event (though you can get a retry notification from an OperationContext object passed to the method). Therefore I looked around for a sample using parallel uploads, and found this one from Microsoft MVP Tyler Doerksen, using C#’s Parallel.For.

Be warned: it does not work! Doerksen’s approach is to upload the entire file into memory (not great, but not as bad as on a web server), send it in chunks using CloudBlockBlob.PutBlock, adding the block ID to a collection at the same time, and then to call CloudBlockBlob.PutBlockList. The reason it does not work is that the order in which Parallel.For executes its iterations is indeterminate, so the block IDs are unlikely to end up in the right order.
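One way to fix the ordering problem (my own sketch, keeping Doerksen’s read-the-whole-file-into-memory approach) is to index the block IDs by the loop variable, so the committed list is in order no matter which iteration finishes first:

using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

class ParallelBlockUploader
{
    public static void Upload(CloudBlockBlob blob, string filePath, int blockLength = 1024 * 1024)
    {
        byte[] data = File.ReadAllBytes(filePath); // whole file in memory, as in the original sample
        int blockCount = (int)Math.Ceiling((double)data.Length / blockLength);
        var blockIds = new string[blockCount];

        Parallel.For(0, blockCount, x =>
        {
            string blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(x.ToString("d6")));
            int offset = x * blockLength;
            int length = Math.Min(blockLength, data.Length - offset);
            using (var ms = new MemoryStream(data, offset, length))
            {
                blob.PutBlock(blockId, ms, null); // upload one block
            }
            blockIds[x] = blockId; // the array position preserves the order
        });

        blob.PutBlockList(blockIds);
    }
}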

I fixed this, it tested OK, and then I decided to further improve it by reading each chunk from the file within the loop, rather than loading the entire file into memory. I then puzzled over why my code was broken. The files uploaded, but they were corrupt. I worked it out. In the following code, fs is a FileStream object:

fs.Position = x * blockLength;
bytesread = fs.Read(chunk, 0, currentLength);

Spot the problem? Since fs is a variable declared outside the loop, other threads were setting its position during the read operation, with random results. I fixed it like this:

lock (fs)
{
    fs.Position = x * blockLength;
    bytesread = fs.Read(chunk, 0, currentLength);
}

and the file corruption disappeared.

I am not sure why, but the manually coded parallel uploads seem to improve performance slightly rather than dramatically, to around 100-105 KB/s, almost exactly what my ASP.NET MVC application achieves over my broadband connection.


There is another approach worth mentioning. It is possible to bypass the web server and upload directly from the browser to Azure storage. To do this, you need to allow cross-origin resource sharing (CORS) as explained here. You also need to issue a Shared Access Signature, a temporary key that allows read-write access to Azure storage. A guy called Blair Chen seems to have this all figured out, as you can see from his Azure speed test and jazure JavaScript library, which makes it easy to upload a blob from the browser.
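For reference, a sketch of the server-side pieces for the direct-to-storage route, again assuming the same storage client library; the allowed origin, container and blob names are placeholders:

using System;
using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;

class DirectUploadSetup
{
    // One-off: allow a browser origin to PUT directly against the blob service
    public static void EnableCors(CloudBlobClient client, string allowedOrigin)
    {
        ServiceProperties properties = client.GetServiceProperties();
        properties.Cors.CorsRules.Add(new CorsRule
        {
            AllowedOrigins = new List<string> { allowedOrigin },
            AllowedMethods = CorsHttpMethods.Put,
            AllowedHeaders = new List<string> { "*" },
            ExposedHeaders = new List<string> { "*" },
            MaxAgeInSeconds = 3600
        });
        client.SetServiceProperties(properties);
    }

    // Per upload: hand the browser a blob URI with a short-lived, write-only Shared Access Signature appended
    public static string GetUploadUri(CloudBlobClient client, string containerName, string blobName)
    {
        CloudBlockBlob blob = client.GetContainerReference(containerName).GetBlockBlobReference(blobName);
        string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
        });
        return blob.Uri + sas;
    }
}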

I was contemplating going that route, but it seems that performance is no better (judging by the Test Upload Big Files section of Chen’s speed test), so I should probably be content with the parallel JavaScript upload solution, which avoids fiddling with CORS.

Overall, has my experience with the Blob storage API been good? I have not found any issues with the service itself so far, but the documentation and samples could be better. This page should be the jumping off point for all you need to know for a basic application like mine, but I did not find it easy to find good samples or documentation for what I thought would be a common scenario, uploading large files with ASP.NET MVC.

Update: since writing this post I have come across this post by Rob Gillen which addresses the performance issue in detail (and links to working Parallel.For code); however I suspect that since the post is four years old the conclusions are no longer valid, because of improvements to the Azure storage client library.

Embarcadero AppMethod: another route to cross-platform mobile, now with C++ support

Embarcadero has updated AppMethod, its IDE for cross-platform mobile and desktop applications. The IDE now supports C++, and as a special offer, you can develop for the Android phone “free forever”, according to the web site.

AppMethod is none other than our old friend Delphi, combined with the FireMonkey cross-platform framework. The difference between AppMethod and the older RAD Studio product line (current version is XE6) is twofold:

1. AppMethod does not include the VCL, the Delphi framework for Windows applications. It does let you develop for Windows or Mac OS X using FireMonkey.

2. You can buy RAD Studio outright with a perpetual license, from £1342.00 plus VAT for a new user (RAD Studio Professional). AppMethod is only available on subscription.

AppMethod pricing is per developer per platform per year. Currently this is £179.83 plus VAT for individuals (very small businesses up to a maximum of 5 employees in the entire organisation) or £600 for larger businesses (a rather large premium).

C++ support is new in AppMethod 1.14 and supports all target platforms except the iOS Simulator (an annoying limitation). It supports ARC (Automatic Reference Counting) on Android as well as iOS. Mac OS X is supported from 10.8 (Mountain Lion) and up.

There are also a few changes in FireMonkey. You can load HTML into the TWebBrowser component using LoadFromStrings. There is a new date picker component.

Another new feature is in the RTL (run time library). Called App Tethering, it lets applications communicate with each other, for example using TCP. These can be apps on the same device or remote apps. Once paired, apps can run remote actions and share standard data types and streams.

There are also updates to push notifications for iOS and Android, Google Glass support, updated OpenGL and DirectX support on Windows, and more: see here for the complete documentation of what is new.

A Quick Hands-on

I installed the latest AppMethod on Windows 8. The install warns that AppMethod cannot co-exist with RAD Studio XE6, presumably because it is essentially the same thing re-wrapped. The product name is relatively new, but there is plenty of old stuff under the covers. AppMethod still has a dependency on J#, Microsoft’s Java implementation for .NET – Java code in the IDE dating back to who knows when?


There is a 10-field dialog confirming paths for Android tools, which is a reminder of how many moving parts there are here. It is more complex than most Android development environments because it uses the NDK (Native Development Kit) as well as the usual SDK.


Once up and running, you can start a new project such as a FireMonkey mobile application:


and then you are in an IDE which would not be entirely unfamiliar to a Delphi user in 1995 (or I suppose, a C++ Builder user in 1997) – I am not saying this is a bad thing, though the IDE feels dated in comparison to Microsoft’s Visual Studio.


After coming from a spell of development with XAML it feels odd to have a form builder that defaults to xy layout, but layout managers are available:


Compile and run, and after the usual slow initialization of the Android emulator, the app appeared.


Why AppMethod?

In the crowded world of cross-platform mobile development, why use AppMethod?

Embarcadero makes a big play of its native development, though it is “native” in respect of code execution but not in GUI fidelity since by default visual controls are custom-drawn by the framework. This is in contrast to Xamarin (the obvious alternative for developers from a Windows background) which does no custom drawing but only uses native controls; however for raw performance AppMethod may have the edge (I have not done comparisons).

Delphi developers should also look at RemObjects Oxygene which also uses a Delphi-like language but is hosted in Visual Studio and, like Xamarin, uses native UI components.

The AppMethod approach does make sense if you prioritise maximum code-sharing over getting exactly the right look and feel for each supported platform, and need better performance or more capability than HTML and JavaScript can get you. There is no support for Windows Phone though; if that is in your plans, Xamarin or HTML and JavaScript development is a better fit.

Microsoft Azure: growing but still has image problems

I attended a Microsoft Cloud Day in London organised by the Azure User Group; I booked this when Technical Fellow Mark Russinovich was set to attend, but regrettably he cancelled at a late stage. I skipped the substitute keynote by UK Microsoftie Dave Coplin as I heard the very same talk earlier this month, so arrived mid-morning at the venue in Whitechapel; not that easy to find amid the stalls of Whitechapel Market (well, not quite), but if you seek out the Whitechapel branch of the Foxcroft and Ginger cafe (not known to Here Maps on Windows Phone, incidentally) then you will find premises upstairs with logos for Barclays Accelerator and Microsoft Ventures; something to do with assisting the flow of cash from corporate giants desperate for community engagement to business start-ups desperate for cash.

Giving technical presentations is hard, and while I admired Richard Conway’s efforts at showing how, with some PowerShell, he could transform some large dataset into rows of numbers using the magic of Azure HDInsight I didn’t think it quite worked. Beat Schwegler dived into code to explain the how and why of Azure Notification Hubs, a service which delivers push notifications to mobile apps; useful material, but could have been compressed. Then there was Richard Astbury at software development company two10degrees who talked about Project Orleans, high scale applications via “an Actor Model framework of programmable in-memory objects”; we learned about grains and silos (or software equivalents) in a session that was mostly new to me.

At the break I chatted with a somewhat bemused attendee who had come in the hope of learning about whether he should migrate some or all of his small company’s server requirements to Azure. I explained about Office 365 and Azure Active Directory which he said was more relevant to him than the intricacies of software development. It turns out that the Azure User Group is really about software development using Azure services, which is only one perspective on Microsoft’s cloud platform.

For me the most intriguing presentation was from Michael Delaney at ElevateDirect, a young business which has a web application to assist businesses in finding employees directly rather than via recruitment agencies. His company picked Amazon Web Services (AWS) over Azure two and a half years ago, but is now moving to Microsoft’s cloud.

[Image: Michael Delaney, CTO and co-founder, ElevateDirect]

Why did he pick AWS? He is not a typical Microsoft-platform person, preferring open source products including Linux, Apache Solr, Python and MySQL. When he chose AWS, Azure was not a suitable platform for a mainly Linux-based application. However, he does prefer C# to Java. According to Delaney, AWS is a Java-first platform and he found this getting in the way of development.

Azure today, says Delaney, has the first-class support for Linux that it lacked a few years back, and is a better platform for C# applications than AWS even though AWS does support Windows servers.

Migrating the application was relatively straightforward, he said, with the biggest issue being the move from Amazon S3 (Simple Storage Service) to Azure Storage, though he overcame this by abstracting the storage API behind his own wrapper code.

Azure is not all the way there though. Delaney is disappointed with the relational database options on offer, essentially SQL Server or third-party managed MySQL from ClearDB. He would like to see options for PostgreSQL and others. He would also like the open source Elastic Search to be offered as an Azure service.

There was a panel discussion later at which the question of Azure’s market perception was discussed. Most businesses, according to one attendee, think of AWS as the only option for cloud, even if they are Microsoft-platform businesses for whom Azure might be more suitable. It is a branding problem caused by the AWS first-mover advantage and market dominance, said Microsoft’s Steve Plank.

I would add that Azure is relatively new, at least in its new incarnation offering full IaaS (infrastructure as a service). AWS is also ahead on the number and variety of services on offer, and has not really messed up, which means there is little incentive for existing users to move unless, like Delaney, they find some aspect of Microsoft’s platform (in his case C#) particularly compelling.

This leads me back to the bemused attendee. It seems to me that Azure’s biggest advantage is Azure Active Directory and seamless integration with Office 365. Having said that, it is not difficult to host an application on AWS that uses Azure Active Directory, but there may be some advantage in working with a single cloud provider (and you can expect fast low-latency networking between Azure and Office 365).

Visual Studio “14” announced, preview available with “Roslyn” open source compiler

Microsoft’s Soma Somasegar has announced the next version of Visual Studio, currently known as Visual Studio 14, but likely to be fully released in 2015 (and, I am guessing, likely to be called Visual Studio 2015).

This is a major release. It includes a new VB and C# compiler which is itself written in managed code, codenamed Roslyn. The open source Roslyn project provides new APIs that enable more powerful IDE features. Visual Basic is getting refactoring support for the first time.

The preview also includes a major update to ASP.NET that unifies ASP.NET MVC and the ASP.NET Web API, and has a new deployment model and developer experience:

Thanks to the Roslyn compiler, if you change ".cs” files or project.json file and want to see the change in the browser, you don’t need to build the project any more. Just refresh the browser.

There is no IIS express, nor IIS involved when you run from the command line. It means that you can publish your website to a USB drive, and run it by double clicking the web.cmd file!

On the C++ side, there is improved C++ 11 support and more features from C++ 14:

The Visual Studio "14" CTP includes support for user-defined literals, noexcept, alignof and alignas, and inheriting constructors from C++11, generalized lambda capture, auto function return type deduction, and generic lambdas from C++14, as well as many more new C++ features.

says Somasegar. There is also a refactored C Runtime (CRT):

msvcr140.dll no longer exists. It is replaced by a trio of DLLs: vcruntime140.dll, appcrt140.dll, and desktopcrt140.dll.

If you install the CTP (mine is downloading) use a spare machine or VM; it is an early preview that does not work side-by-side with other versions and the only uninstall may be to flatten the machine:

Installing a CTP release will place a computer in an unsupported state. For that reason, we recommend only installing CTP releases in a virtual machine, or on a computer that is available for reformatting.

Apple’s Swift programming language: easy coding for OS X and iOS at last?

Apple has announced a new programming language, called Swift. (There was already a language called Swift, used for parallel scripting, but Apple links to the other Swift in case you land on the wrong page. So far it looks like the other Swift has not returned the favour).

For as long as I can remember, serious Apple developers have had to use Objective-C, an object-oriented C that is not like C++. I have only dabbled in Objective-C but when I last tried it I was pleasantly surprised: memory management was no hassle and I found it productive. Nevertheless it is an intimidating language if you come from a background of, say, JavaScript or Microsoft .NET. Apple’s focus on Objective-C has left a gap for easier to use alternatives, though the main reason developers use something other than Objective-C, as far as I am aware, is for cross-platform projects. Companies such as Xamarin and Embarcadero (with Delphi) have had some success, and of course Adobe PhoneGap (or the open source Cordova) has had significant take-up for cross-platform code based on HTML and JavaScript.

I should mention that RAD (Rapid Application Development) on OS X has long been possible using the wholly-owned Filemaker, a database manager with a powerful scripting language, but this is not suitable for general-purpose apps.

Overall, it is fair to say that coding for OS X and iOS has a higher bar than for Windows because Apple has not provided anything like Microsoft’s C# or Visual Basic, type-safe languages with easy form builders that let you snap together an application in a short time, while still being powerful enough for almost any purpose. This has been a differentiator for Windows. Visual Basic is almost as old as Windows itself, and C# was introduced in 2000.

Now Apple has come up with its own equivalent. I am new to Swift as are most people outside Apple, but took a quick look at the book, The Swift Programming Language, along with the announcement details. A few highlights:

  • Swift is a type-safe language that compiles to native code using LLVM.
  • The IDE for Swift is Xcode. It supports Cocoa development (Apple’s user interface framework) via import of the existing Objective-C frameworks, which become Swift APIs via the import keyword:

import UIKit

  • You can mix Swift and Objective-C in a single project. In Objective-C you can use #import to make Swift code visible and usable.
  • Swift is a C-family language and you will find familiar features like curly braces and semi-colons to terminate statements (though semi-colons are optional).
  • Swift uses reference counting for automatic memory management. There is a rather complex section in the book about weak references and unowned references, to solve some of the problems inherent in reference counting.
  • Type inference is the preferred approach to declaring the type of a variable, but you can state the type if required. You can also declare constants.
  • Swift supports single inheritance for classes and multiple inheritance for protocols (protocols are more or less equivalent to interfaces in other languages).
  • There are advanced features including closures, generics, tuples, and variadic parameters. (I am not sure if “advanced” is the right word, but other languages such as C# and Java took a while to get these).
  • Swift has something like destructors which it calls deinitializers.
  • There is an interesting feature called Extensions which lets you add methods to any existing type. For example, you could extend Int with a prettyprint method and then call 3.prettyprint().
  • Swift variables are not normally nullable; they must have a value. However you can declare optional types (add a ?, such as Int?) that can be set to nil. You can also declare implicitly unwrapped optionals which can be nil, but once assigned a value cannot be nil thereafter.
  • Swift includes the AnyObject type, which can represent an instance of any class type.

Swift seems to me to have similar goals to Microsoft’s C#: easier and safer than C or C++, but intended for any use right up to large and complex applications. One of the best things about it is the smooth interoperability with Objective-C; this also saves Apple from having to write native Swift frameworks for its entire stack.

A smart move? I think so, though Swift is different enough from any other language that developers have some learning to do.

What difference will Swift make? Initially, not that much. Objective-C developers now have a choice and some will move over or start mixing and matching, but Swift is still single-platform and will not change the developer landscape. That said, Swift may make Apple’s platform more attractive to business developers, for whom C# or Java is currently more productive; and perhaps Apple could find ways of using Swift in places where previously you would have to use AppleScript, extending its usefulness.

If Apple developers were tempted towards Xamarin or Delphi for productivity, as opposed to cross-platform, they will probably now use Swift; but I doubt there were all that many in that particular group.

I would be interested to hear from developers though: what do you think of Swift?