Tag Archives: asp.net

Visual Studio, TypeScript, WebPack and ASP.NET Core: somewhat awkward

It is always good to learn a new language, so I took advantage of the holiday season to look more closely at TypeScript. At least, that was the original intent. So far I have spent longer configuring stuff to work than I have on actual coding. I think of it as time invested rather than wasted.

As long-term readers will know, I am working on a bridge (the card game) website which has been used successfully over the lockdown period. I put this together quickly in the first half of 2020, reusing an unfinished Windows project and taking advantage of everything I could get without having to code it myself, like the ASP.NET identity system. So it is C#, ASP.NET Core, SignalR, runs on Linux on Azure App Service, and mostly coded in Visual Studio, with a few detours into Visual Studio Code.

Of course there is a ton of JavaScript involved and, since the user interface for a bridge-playing game is fairly custom, I did not use a JavaScript framework, unless you count jQuery and Bootstrap. I wrote a separate JavaScript file for each page (possibly a mistake). I also started using the AWS Chime SDK for JavaScript, which means referencing a huge 680K JavaScript file.

I therefore had several goals in mind. One was to code in TypeScript rather than JavaScript in order to take advantage of its features and catch more mistakes at compile time. Second, I wanted to optimize the JavaScript better, with automatic minification. Third, I wanted to align my project more closely with the JavaScript ecosystem. The AWS SDK, for example, is written in TypeScript using modules, but I have been using some demo code provided to compile a single JavaScript file. Maybe I can get better optimization by coding my own project in TypeScript, and importing only the modules I need.

Visual Studio is not well aligned with the modern JavaScript ecosystem, as you can tell if you read this article on bundling and minification of static assets. “ASP.NET Core doesn’t provide a native bundling and minification solution,” it says, and refers developers to the WebOptimizer project or other tools such as Gulp and Webpack.

I did want to start with TypeScript though, and to begin with this looked easy. All you have to do is to add the TypeScript NuGet package, do some minimal configuration by creating and editing tsconfig.json, and you can write TypeScript and have it transpiled to JavaScript in your preferred target directory whenever the project is built. I moved a bunch of my JavaScript files to a directory of TypeScript files, renamed them from .js to .ts, and set to work making the TypeScript compiler happy.
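For reference, here is roughly the kind of tsconfig.json I mean – a minimal sketch, where the target, module setting and directory names are my illustration rather than a recommendation:

{
  "compilerOptions": {
    "target": "es2017",       // pick a target your browsers support
    "module": "es6",          // emit native import/export statements
    "outDir": "wwwroot/js",   // illustrative output directory
    "strict": false,          // temporary expedient, as noted below
    "sourceMap": true
  },
  "include": ["Scripts/**/*"] // illustrative source directory
}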

When you do this you discover that the TypeScript compiler considers all .ts files that are not modules to be in the same scope. So if you have two JavaScript files that both contain a function called DoSomething(), the compiler throws a duplicate function error, even if you will never reference them both from the same web page. You can fix this by making them modules – it feels like TypeScript is designed on this basis – but now you have the opposite problem: if JavaScript file A references functions or variables in JavaScript file B, they have to be exported and imported. A good thing in principle, but now you have import statements in the code. The TypeScript compiler does not transpile these for compatibility with browsers that do not support import, and in addition, you now have to use type="module" on script references in HTML. I also ran into issues with the libraries I use, primarily SignalR and the AWS Chime SDK. You can either npm install these and import them in the proper way, if the developers have provided TypeScript definition files (with a .d.ts extension), or find a type library via DefinitelyTyped, which provides only the types; you still need to reference the library separately. There is an obvious potential version issue if you go the DefinitelyTyped route.
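To illustrate the shape of the change, here is a minimal sketch (file and function names invented for the example): a function must be exported from one module, imported in another, and the page script referenced as a module:

// fileB.ts
export function doSomething(): void {
    console.log("doing something");
}

// fileA.ts – note the .js extension, which the browser needs after transpilation
import { doSomething } from "./fileB.js";
doSomething();

<!-- in the page -->
<script type="module" src="js/fileA.js"></script>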

In other words, what starts out as the simple idea of writing TypeScript instead of JavaScript soon becomes a complete refactor of the code to be modular and use imports and exports. Again, this is not a bad thing, but it is more work and not quite the incremental transition that I had in mind. I had over 1000 errors reported by the TypeScript compiler but gradually whittled them down (and this is with TypeScript’s strict option off, intended as a temporary expedient).

So I did all that but had a problem with these import statements when it came to using them in the browser. It seemed that WebPack could fix this for me, plus I could configure it to do tree-shaking to reduce code size and to use a minifier (it uses terser by default). There is a slight issue though, since modern JavaScript tools like WebPack and terser are geared towards bundling all your JavaScript into a single file, and/or having a single-page application, which is not how my bridge site works. Still, it looked like it could be configured to work for me so I started down the track, using a post-build step in Visual Studio to run WebPack.

I am sure this is obvious to people familiar with WebPack, but I still had problems getting my HTML pages to talk to the JavaScript. By default terser will mangle and shorten all the function names, but that is easily configured. The HTML still could not call any JavaScript functions: function not defined. Eventually I discovered that you have to configure WebPack to output a library. So if you have an HTML button that called a JavaScript function in its onclick event handler, there are several things you need to do. First, export the function in the JavaScript (or TypeScript) code. Second, add a library name and preferably type ‘umd’ in webpack.config.js. Third, add the library name as a prefix to the function you are calling from HTML, for example mylibrary.myfunction.
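Putting those three steps together, a rough sketch – the library name, file names and function are my own examples, and this is webpack 5 syntax (webpack 4 uses output.library plus output.libraryTarget instead):

// webpack.config.js (abbreviated – loaders and entry resolution omitted)
module.exports = {
    entry: { play: "./src/play.ts" },
    output: {
        filename: "[name].bundle.js",
        library: { name: "mylibrary", type: "umd" }
    }
};

// play.ts
export function onBidClick(): void {
    // handle the bid
}

<!-- in the HTML: note the library prefix -->
<button onclick="mylibrary.onBidClick()">Bid</button>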

I also had issues with code splitting – essential to avoid bloated JavaScript bundles. This is done in WebPack by configuring SplitChunks. When I set it to ‘all’, my library exports stopped working. After much trial and error, I found a fix. First, set chunkLoading to ‘jsonp’. Second, if your library variable is set to “undefined” at runtime, there is a problem with one of the bundled JavaScript files. Unfortunately this was not reported as an error in the browser console – that is, the undefined library variable was reported, but not the reason for it. I tracked it down to a call to document.readyState or possibly document.addEventListener; using jQuery instead fixed it.
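The combination that eventually worked for me looked roughly like this – again a sketch in webpack 5 syntax, and your configuration may differ:

// webpack.config.js (abbreviated)
module.exports = {
    output: {
        chunkLoading: "jsonp"     // the fix for the broken library exports
    },
    optimization: {
        splitChunks: { chunks: "all" }
    }
};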

Another tip: do not call any JavaScript code directly from cshtml, other than via event handlers. It might try to run before the JavaScript is loaded. I found it easiest to put initialization code in a function and call it from JavaScript. You can put the function declaration into index.d.ts to keep TypeScript happy, since it is external.
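As an illustration of my reading of that pattern (the function name and its contents are invented): the cshtml defines a function but does not run anything, and the declaration in index.d.ts lets the TypeScript side call it:

<!-- in the .cshtml: define, do not execute -->
<script>
    function getPageConfig() {
        return { tableId: "@Model.TableId" };
    }
</script>

// index.d.ts – declares the external function for TypeScript
declare function getPageConfig(): { tableId: string };

// in the bundle, which runs once the script has loaded
const config = getPageConfig();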

Is it worth it? Watch this space. It has pushed me into refactoring which is improving the structure of my code – but I have also added complexity, with a build process that compiles TypeScript to JavaScript (using ts-loader), then merges and splits the files with WebPack, minifying and mangling them with terser along the way. Yes, I have encountered unexpected behaviour, partly thanks to my inexperience, but the interactions between jQuery, WebPack and library exports, for example, are quite complex. I have spent time and energy wrestling with WebPack instead of coding my application. There is a lot to be said for my old approach, where you code in JavaScript and it runs as JavaScript and it is easier to trace what is going on.

Still, it is working and I have achieved some of my goals – but the AWS Chime SDK file is still huge, just 30K smaller than before, which is disappointing. Perhaps there is something I have missed. I will be coding in TypeScript now and look forward to further refactoring as I get to know the language.

Update: I have abandoned this experiment. There were niggling problems with the WebPack bundles and I came to the conclusion that it is unsuitable for a multi-page application. A shame, as I wanted to use WebPack; but for the time being I am just using TypeScript with terser. This means I am using native ES modules in the browser and I intend to write up the experience soon.

A UI lesson: do not ask users to choose between Register and Login

I am developing a web site for playing bridge, a project which kicked off in March when lockdown caused bridge clubs everywhere to close. There are lots of sites where you can play bridge online, but not many options (particularly back in March) for clubs that wanted to run their own online sessions.

It’s going OK, with a number of clubs now using it every week, though it is still in development. I have learned a painful lesson though. In order to proceed as quickly as possible, I started my project with the Visual Studio template for an ASP.NET Core application with ASP.NET Core Identity – the latter an easy decision since it handles all the complications of registration, password reset and so on. (I did end up having to re-plumb it to use int rather than GUID for the primary key, but that is another story.)

The default home page the template generates looks like this, with options in the menu to Register or Login.

[screenshot: the template’s default home page, with Register and Login in the menu]

Registration and login are fundamental concepts that have been part of the web forever. It’s simple for a developer to understand: you register to create an account, you login if you already have an account.

The painful discovery is that this is not obvious to everyone, particularly to an older demographic that did not grow up with computers. Another factor is that cookies, browser-saved passwords and single sign-on with Google/Facebook etc. mean that this whole area is a bit of a mess, and there are people who just kinda expect web sites to know who they are (which in one way horrifies me, but I do see the massive convenience).

The consequence is that a surprising (to me) number of people had difficulty knowing whether Registration or Login was what they needed. They would Register, then return to the site and hit Register again. Why is this site asking for my details again? Maybe a security thing? Oh no, why does it now say username not available?

This is because asking the user to make this choice is not good design. Registration is rare, login is common. Further, Register is a confusing word. We sometimes use the word register when accounts already exist. Create Account is better. And a better UI is just Login – the user’s thought is simply “I need to access this website”. Then, underneath the username/password request, an option that says “I need to create an account”. The two options should not be equally prominent; and if you look at how many prominent sites design this, that is what they do:

[screenshots: sign-in pages from two prominent sites, with account creation as a secondary link beneath the login form]

Lesson learned; but I wish this had occurred to me sooner!

Hands On ASP.NET Core

I’ve been putting together a quick web application (well, I thought it would be quick) in my spare time (hah!) and I picked ASP.NET Core on Linux as a sensible option given that I like working in C#. Overall it has been a reasonable experience so far and I still love the language. This is the most extensive work I have done so far with ASP.NET Core though and I have a few observations.

It is not a difficult framework to work with but I believe it could be made more approachable. This is largely a matter of documentation though another point of confusion is the transition Microsoft has been making from ASP.NET MVC to Razor Pages. These two frameworks are similar but different, they share a lot of technology but some things work in one but not the other, and sometimes it is not clear whether what you are reading applies just to ASP.NET MVC, or just to Razor Pages, or to both, or to both but with a little tweaking to account for differences. I started with MVC because I am more familiar with it but have shifted to Razor Pages because that seems to be the preferred direction; really I am equally happy with either.

If you are thinking of getting started with ASP.NET Core I recommend you start not with the framework, but with making sure that you are familiar with the following topics (a short sketch after the list illustrates all four):

Dependency injection. If you are puzzling about something in the framework the answer may be “add it to the constructor and it magically works.” This is obvious if you are familiar with it but not otherwise.

Anonymous types. These seem to crop up quite a lot.

Lambdas and the arrow operator =>

LINQ queries
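A contrived C# fragment just to show how those four crop up together – the page model, service and Player class here are invented for the example:

using System.Linq;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class Player { public string Name; public bool IsActive; public int Rating; }
public interface IPlayerService { IQueryable<Player> All(); }   // hypothetical service

public class PlayersModel : PageModel
{
    private readonly IPlayerService _players;

    // dependency injection: ask for it in the constructor and it "magically works"
    public PlayersModel(IPlayerService players) => _players = players;

    public void OnGet()
    {
        // a lambda inside a LINQ query, projecting to an anonymous type
        var active = _players.All()
            .Where(p => p.IsActive)
            .Select(p => new { p.Name, p.Rating });
    }
}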

Now, the documentation. Unless you have found a good and up-to-date book, you will probably start here.

[screenshot: the ASP.NET Core documentation landing page]

Now, I do think there are lots of good things about docs.microsoft.com, the fact that it is all on GitHub and open for comment and improvement, the fact that it performs well, and the obvious effort that has gone into many of the topics.

That said, I do not much like this page. My biggest problem with it is that there is no simple link to a comprehensive reference. It is a bunch of little tutorials which may or may not tell you what you need to know. It gets better if you click into one of the topics and I like this page, for example, much better, with the hierarchical list of topics on the left.

[screenshot: a documentation topic page, with the hierarchical list of topics on the left]

It is still not great though. There is a big emphasis on tutorials, and while I agree that learning through doing is a great way to learn, the problem with the tutorials is that they tend to leave you with lots of questions and no obvious route to answers.

I will give you an example. I decided to use the ASP.NET Identity system in my application, because it saves a ton of tedious work doing registration, password reset, login, and so on, plus it is security-critical code that I would likely get wrong if I did it myself.

The problem you will immediately hit though is that you want to store additional data about users. This could be any kind of data but let’s call it additional profile data. For example, you want to let users upload an image which is then displayed in the application. There are some heavy articles about customizing identity but there is also this one on adding custom user data to an ASP.NET Core web app. It’s great but it does not actually tell you how to retrieve the custom user data in your application. Eventually I figured out a way of doing it. You just have to use dependency injection to get an instance of the UserManager class. So you pop this in the constructor for one of your classes:

UserManager<YourCustomUser> userManager

and store it in a private field (say _userManager). Then you can do:

var MyTask = _userManager.GetUserAsync(User);  // User is the ClaimsPrincipal for the current request
MyTask.Wait();                                 // block until the task completes
var MyUser = MyTask.Result;                    // MyUser is your custom user class

or something similar if you are calling from a synchronous method, and it just works.
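Better, if you are in an async handler anyway, skip the blocking Wait and await it, which is the more idiomatic route:

public async Task OnGetAsync()
{
    // the extra profile data comes back on your custom user class
    var myUser = await _userManager.GetUserAsync(User);
}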

Let me add something else. The actual API reference for ASP.NET Core is almost useless. It faithfully documents each class and method while often saying nothing about how or why to use it.

Data access

My application is really forms over data, as so many are, so data access plays a big role. There seem to be plenty of tutorials on data access in the ASP.NET Core documentation but I don’t much like them. The problem is Entity Framework. Most of the documentation assumes it. It is not that Entity Framework is bad; it does seem to work well, and while there is debate about how well it performs, in many cases it does not matter, and in other cases you can fine-tune it. My problem rather is that what Microsoft calls a “complex data model” is actually the normal case, where you have many-to-many relationships, and dealing with this in Entity Framework soon gets fiddly. I am guilty of lacking patience, but being familiar with SQL it is easier for me just to write the SQL and to know exactly what data is being saved and what data is being retrieved. I have left Entity Framework in place because the Identity system uses it (and it looks non-trivial to replace) but for the rest I have migrated to Dapper, which seems ideal. It is not a full-featured ORM and it expects you to write the SQL, but it does a lot that saves time. My only complaint about Dapper is that (again) the documentation isn’t great, but I’ve found it much simpler to grok than the more advanced aspects of Entity Framework.
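To give a flavour of why Dapper suits me – a minimal sketch, with the connection string, table and class invented for the example:

using System.Data.SqlClient;
using Dapper;

public class Hand { public int Id { get; set; } public int SessionId { get; set; } }

using (var conn = new SqlConnection(connectionString))
{
    // Dapper runs the SQL you wrote and maps the rows straight onto Hand objects
    var hands = conn.Query<Hand>(
        "select * from Hands where SessionId = @SessionId",
        new { SessionId = sessionId });
}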

One thing I do like about Entity Framework is data migrations. Like most developers I have a local database and another one online and code-first data migrations save a lot of work creating database tables and keeping the schema in sync. Dapper does not have this.

StackOverflow

Of course it is true that no matter what your question is, someone has asked it before, and often the best place to find the answer is on StackOverflow. Big appreciation for the folk who take the time to answer questions there, though I’d add that it is not a place from which to copy code, it is a place to understand a solution. Out of date information is a problem, as it is in Microsoft’s own documentation.

Finally

I think ASP.NET Core is a great framework (or frameworks) but not as approachable as it could be. Documenting it in the best way is not an easy problem to solve, and every developer comes with different skills and requirements. Perhaps Microsoft could get someone suitable to write a nice book aimed at intermediate coders, and one that does not assume you want to use Entity Framework. Then offer it as a free download and/or publish it online as part of the documentation, and keep it up to date as new versions appear.

Microsoft updates the .NET stack with .NET Core 2.0 and updated Visual Studio. Should you use it?

Microsoft has released .NET Core 2.0, a major update to its open source, cross-platform version of the .NET runtime and C# language.

New features include implementation of .NET Standard 2.0 (a way of targeting code to run under multiple .NET platforms), new platform support including Debian Stretch, macOS High Sierra and Suse Linux Enterprise Server 12 SP2. There is preview support for both Linux and Windows on ARM32.

.NET Core 2.0 now supports Visual Basic as well as C# and F#. The version of C# has been bumped to 7.1, including async Main method support, inferred tuple names and default expressions.
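Async Main, for example, removes a familiar bit of boilerplate. A quick sketch (note that you need to opt in to LangVersion 7.1 in the project file):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    // C# 7.1: Main can be async Task – no more .GetAwaiter().GetResult()
    static async Task Main(string[] args)
    {
        using (var client = new HttpClient())
        {
            var page = await client.GetStringAsync("https://example.com");
            Console.WriteLine(page.Length);
        }
    }
}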

Microsoft has also released Visual Studio 2017 15.3, which is required if you want to use .NET Core 2.0. New Visual Studio features include Azure Stack support, C# 7.1 support, .NET Framework 4.7 support, and other new features and fixes.

I updated Visual Studio and downloaded the new .NET Core 2.0 SDK and was soon up and running.

[screenshot: the .NET Core 2.0 SDK installer, including the “This product collects usage data” notice]

Note the statement about “This product collects usage data” of which more below.


The sample ASP.NET MVC application worked first time.

[screenshot: the sample ASP.NET MVC application running]

How is .NET Core doing? The whole .NET picture is desperately confusing and I get the impression that most .NET developers, while they may have paid some attention to what is happening, have concluded that the safe path is to continue with the Windows-only .NET Framework.

At the same time, .NET Core is strategically important to Microsoft. Cross-platform support means that C# has a life on the Mac and on Linux, which is vital to its health considering the popularity of the Mac amongst developers, and of Linux as a deployment platform for web applications. Visual Studio for Mac has also been updated and supports .NET Core 2.0 in the new version.

Another key piece is the container trend. .NET Core is ideal for container deployment, and the only version of .NET supported in Windows Nano Server. If you want to embrace microservices running in containers, while still developing with C#, .NET Core and Nano Server is the optimum solution.

Why not use .NET Core, especially since it is faster than ASP.NET? In these comparisons, .NET Core comes out as substantially faster than .NET Framework for various algorithms – 600 times faster in one case.

The main issue is compatibility. .NET Core is a subset of the .NET Framework, and being a relative newcomer, it lacks the same level of third-party support.

Another factor is that there is no support for desktop applications, though some solutions have been devised. Microsoft does have a cross-platform GUI story, in Xamarin Forms, which is now in preview for macOS alongside iOS, Android, Windows and Tizen. If Xamarin used .NET Core that would be a great solution, but it does not (though it does support .NET Standard 2.0).

One of the pieces that most concerns developers is data access. If you use .NET Core you are strongly guided towards Entity Framework Core, a fork of Microsoft’s ORM (Object-Relational Mapping) framework. Someone asked on this page, is EF Core usable? Here’s an answer from one user (11 days ago):

Answering 4 months later but people should know: Definitely not, it is still not usable unless you are doing something very trivial and/or have very small DB.
I don’t understand how it is possible for MS to ship it, act like it’s OK and sparsely here and there provide shallow information about its limitations like in this article without warning clearly and explicitly about the serious issues this “v1 product” has.

Someone may jump in and say no, it is fine; but there are undoubtedly missing pieces and I would suggest caution.

You can also access data using the Connection/Command/DataReader approach which avoids EF, and although this is more work, this is what I would be inclined to do personally since you get the best performance and flexibility. Here is an example for SQL Server.
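For anyone unfamiliar with it, the approach looks roughly like this (System.Data.SqlClient; the query and variables are illustrative):

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("select Id, Name from Players", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            var id = reader.GetInt32(0);     // column 0: Id
            var name = reader.GetString(1);  // column 1: Name
        }
    }
}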

Who is using .NET Core? Controversially, Microsoft gathers telemetry from your use of the command-line tools though you can opt out by setting an environment variable. This means we have some data on .NET Core usage, though unfortunately it excludes Visual Studio usage. I downloaded the most recent dataset and imported it into a database. Here are the figures for OS family:

Total rows 5,036,981
Windows 3,841,922 (76.27%)
Linux 887,750 (17.62%)
Mac 307,309 (6.1%)


Given that this excludes Visual Studio users, who are also on Windows, we can conclude that the great majority of .NET Core developers use Windows, and only a tiny minority Mac (I do not know if Visual Studio for Mac usage is included). This is evidence that .NET Core has so far failed in its goal of persuading Mac-using developers to adopt .NET. It does show interest in deploying .NET applications to Linux, which is an obvious win in licensing costs as well as performance.

I would be interested in comments from developers on whether or not they use .NET Core and why.

A note on Azure storage and downloading large files

I have written a simple ASP.NET MVC application for upload and download of files to/from Azure storage.

Getting large file upload to work was the first exercise, described here. That is working well; but what about download?

If your files in Azure storage are public, you can simply serve a URL to the file. If a file is not public though, you have a couple of choices:

1. Download the file under application control, by writing to Response.OutputStream or using a FileResult action.

2. Issue a Shared Access Signature (SAS) to the client which enables it to retrieve the file directly from Azure storage. The SAS is sent as an URL argument which tells Azure storage that the request is authorised. The browser downloads the file directly, so it makes no difference to your web application if the file is large.

Note that if you use the first option, it will not work with large files if you simply call DownloadToStream or similar:

container.GetBlockBlobReference(FileName).DownloadToStream(Response.OutputStream);

Why not? Well, the way this code works is that it downloads the large file to the web server, then sends it to the browser. What if your large file is 5GB? The browser will wait a long time for the first byte to be served (giving the user an unresponsive page); but before that happens, the web application will probably throw an exception because it does not like downloading such a large file.

This means the SAS option is a good one, though note that you have to specify an expiry time which could cause problems for users on a slow connection.
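Generating the SAS with the storage client library looks something like this – a sketch, with the expiry and permissions as examples only:

var blob = container.GetBlockBlobReference(fileName);
var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(2)  // choose with slow connections in mind
});
// the browser fetches directly from Azure storage
return Redirect(blob.Uri + sas);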

Another option is to serve the file in chunks. Use CloudBlockBlob.DownloadRangeToStream to write to Response.OutputStream in a loop until the download is complete. Call Response.Flush() after each chunk to send the chunk to the browser immediately.
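A sketch of that loop – the 4MB chunk size is an arbitrary choice and error handling is omitted:

const int chunkSize = 4 * 1024 * 1024;
var blob = container.GetBlockBlobReference(fileName);
blob.FetchAttributes();                  // populates blob.Properties.Length
long offset = 0, remaining = blob.Properties.Length;
Response.BufferOutput = false;           // stream rather than buffer the response
while (remaining > 0)
{
    long length = Math.Min(chunkSize, remaining);
    blob.DownloadRangeToStream(Response.OutputStream, offset, length);
    Response.Flush();                    // send this chunk to the browser now
    offset += length;
    remaining -= length;
}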

This gives the user a nice responsive download experience complete with a cancel option as provided by the browser, and does not crash the application on the server. It seems to me a reasonable approach if the web application is also hosted on Azure and therefore has a fast connection to Azure storage.

What about resuming a failed download? The SAS approach should work, as Azure supports it. You could also support this in your app with some additional work, since resuming means reading the Range header in a GET request. I have not tried doing this but you might find some clues here.

Microsoft open sources further ASP.NET Frameworks, publishes code with Git

Microsoft has released two further ASP.NET frameworks as open source, joining ASP.NET MVC which was already open source. These are published on CodePlex, Microsoft’s open source repository site, using the newly added Git support. You can find the code here.

The two additional frameworks are ASP.NET Web API and ASP.NET Web Pages. Just to recap, ASP.NET supports several frameworks:

ASP.NET Web Forms: the original framework shipped with .NET 1.0 and greatly enhanced since then. Excellent for quickly assembling a dynamic web site but somewhat heavyweight with its ViewState field and complex page lifecycle. Designed in pre-Ajax days.

ASP.NET MVC: A more elegant framework with separation of content from code, amenable to test-driven development, based on controllers and routing.

ASP.NET Web Pages formerly known as Razor: An alternative view engine designed to work with ASP.NET MVC. Uses .cshtml or .vbhtml extension in place of .aspx. A declarative language with codewords like @foreach and @if – though Microsoft’s Scott Guthrie says it is not a language but rather a template markup syntax.

ASP.NET Web API: formerly known as WCF Web API, this is a framework for building REST services. A key framework if you have a cloud + mobile target in mind. Now gets installed with ASP.NET MVC.

So why is ASP.NET Web Forms not open source? According to Microsoft’s Scott Hanselman:

The components that are being open sourced at this time are all components that are shipped independently of the core .NET framework, which means no OS components take dependencies on them. Web Forms is a part of System.Web.dll which parts of the Windows Server platform take a dependency on. Because of this dependency this code can’t easily be replaced with newer versions except when updates to the .NET framework or the OS ships.

though it is not clear why this prevents the code being published.

Hanselman adds that Microsoft is not only publishing the code, but also taking contributions:

Today we continue to push forward and now ASP.NET MVC, Web API, Web Pages will take contributions from the community.

Why is Microsoft doing this? Within Microsoft, there have always seemed to be open source advocates like Hanselman, and others who pull back. One answer is that the open source folk are winning more arguments now.

Another take is that this is the outcome of industry-wide changes. Microsoft’s platform is less dominant than it was; it still reigns on the desktop, but Macs, tablets and smartphones are eroding its position on the client, and on the web Netcraft’s figures show steady decline since June 2010:

[chart: Netcraft figures showing the steady decline in IIS web server share since June 2010]

Most of the competition is open source and it is possible that this is a factor behind the latest moves. Microsoft is not open sourcing its IIS web server yet, though Hanselman does make the point that ASP.NET MVC runs well on Mono, the open source implementation of the .NET Framework, which is often used with Apache.

The mystery of unexpected expiring sessions in ASP.NET

This is one of those posts that will not interest you unless you have a similar problem. That said, it does illustrate one general truth, that in software problems are often not what they first appear to be, and solving them can be like one of those adventure games where you think your quest is for the magic gem, but when you find the magic gem you discover that you also need the enchanted ring, and so on.

Recently I have been troubleshooting a session problem on an ASP.NET application running on a shared host (IIS 7.0).

This particular application has a form with some lengthy text fields. Users complete the form and then hit save. The problem: sometimes they would take too long thinking, and when they hit save they would lose their work and be redirected to a login page. It is the kind of thing most of us have experienced once in a while on a discussion forum.

The solution seems easy though: just increase the session timeout. However, this had already been done, and the sessions still seemed to time out too early. Failure one.

My next thought was to introduce a workaround, especially as this is a shared host where we cannot control exactly how the server is configured. I set up a simple AJAX script that ran in the background and called a page in the application from time to time, just to keep the session alive. I also had it write a log for each ping, in order to track the behaviour.

By the way, if you do this, make sure that you disable caching on the page you are pinging. Just pop this at the top of the .aspx page:

<%@ OutputCache Duration="1" Location="None" VaryByParam="None"%>

It turned out though that the session still died. One moment it was alive, next moment gone. Failure two.

This pretty much proved that session timeout as such was not the issue. I suspected that the application pool was being recycled – and after checking with the ISP, who checked the event log, this turned out to be the case. Check this post for why this might happen, as well as the discussion here. If the application pool is recycled, then your application restarts, wiping any session values. On a shared host, it might be someone else’s badly-behaved application that triggers this.

The solution then is to change the way the application stores session variables. ASP.NET has three session modes. The default is InProc, which is fast but not resilient, and for obvious reasons not suitable for apps which run on multiple servers. If you change this to StateServer, then session values are stored by the ASP.NET State Service instead. Note that this service is not running by default, but you can easily enable it, and our helpful ISP arranged this. The third option is to use SQLServer, which is suitable for web farms. Storing session state outside the application process means that it survives pool recycling.
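The switch itself is a small change in web.config (42424 is the State Service’s default port; the timeout shown is an example):

<system.web>
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=127.0.0.1:42424"
                timeout="60" />
</system.web>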

Note the small print though. Once you move away from InProc, session variables are serialized, not just held in memory. This means that classes must be marked with the Serializable attribute. Note also that objects might emerge from serialization and deserialization a little different from how they went in, if they hold state that is more complex than simple properties. The constructor is not called, for example. Further, some properties cannot sensibly be serialized. See this article for more information, and why you might need to do custom serialization for some classes.
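So, at minimum, a class destined for session state needs the attribute (the class shown is invented for illustration):

[Serializable]
public class BasketItem
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}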

After tweaking the application to work with the State Service though, the outcome was depressing. The session still died. Failure three.

Why might a session die when the pool recycles, even if you are not using InProc mode? The answer seems to be that the new pool generates a new machine key by default. The machine key is used to encrypt and decrypt the session values, so if the key changes, your existing session values are invalid.

The solution was to specify the machine key in web.config. See here for how to configure the machine key.
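The element looks like this; never copy keys from an example, generate your own:

<system.web>
  <machineKey validationKey="[generate your own]"
              decryptionKey="[generate your own]"
              validation="SHA1"
              decryption="AES" />
</system.web>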

Everything worked. Success at last.

Microsoft’s Scott Guthrie moving to Windows Azure

According to an internal memo leaked to ZDNet’s Mary Jo Foley, Microsoft’s Scott Guthrie who is currently Corporate VP of the .NET Developer Platform is moving to lead the Azure Application Platform team. This means he will report to Ted Kummert who is in charge of the Business Platform Division, instead of S Somasegar who runs the Developer Division; however both divisions are part of the overall Server and Tools Division. Server and Tools is the division from which Bob Muglia was ousted as president in January; the reason for this is still not clear to me, though I would guess at some significant strategy disagreement with CEO Steve Ballmer.

Guthrie was co-inventor of ASP.NET and is one of the most approachable of senior Microsoft execs; he is popular and respected by developers and his blog is one of the first places I look for in-depth and hands-on explanations of new features in Microsoft’s developer platform, such as ASP.NET MVC and Entity Framework.

I have spent a lot of time researching and using Visual Studio 2010, and while not perfect it is among the most impressive developer products I know, from the detail of the editor and debug features right through to ALM (Application Lifecycle Management) aspects like Team Foundation Server, testing in various forms, and build management. Some of that quality is likely due to Guthrie’s influence. The successful evolution of ASP.NET from web forms towards the leaner and more flexible ASP.NET MVC is another achievement in which I am sure he played a significant role.

Is it wise to take Guthrie away from his first love and over to the Azure platform? Only Microsoft can answer that, and of course he will still be responsible for an ASP.NET platform. I’d guess that we will see further improvement in the Visual Studio tools for Azure as well.

Still, it is a bold move and one that underlines the importance of Azure to the company. In my own research I have gained increasing respect for Azure and I would expect Guthrie’s arrival there to be successful in winning attention from the Microsoft platform developer community.

Microsoft WebMatrix released: a simple editor for ASP.NET Razor and more, but who is the target user?

Microsoft has released WebMatrix, a free tool for creating web sites for Microsoft’s web server. It uses the Web Platform Installer and installed smoothly on my Windows 7 64-bit box. What you get is a cleanly-designed tool which lets you start web sites from templates or from standard installs of popular applications including WordPress, Drupal and Moodle.

[screenshot: WebMatrix, showing the gallery of applications including WordPress, Drupal and Moodle]

Yes, you can use PHP and MySQL as well as .NET web applications, though the common factor is that all are configured for IIS, Microsoft’s web server.

With many ISPs already offering instant installs of apps like WordPress, it is more interesting to look at the site templates in WebMatrix, though the selection is smaller.

[screenshot: the WebMatrix site templates]

What is interesting about these is that they create sites based on Razor, an alternative view engine for ASP.NET. Microsoft VP Scott Guthrie describes Razor here. It is odd though: Razor is a feature of ASP.NET MVC 3, currently in release candidate phase, but you cannot create ASP.NET MVC sites in Web Matrix.

Once a site is created, you can modify it in the WebMatrix editor.

[screenshot: editing a site in WebMatrix]

You can run the site on IIS Express with one click. WebMatrix will show you all the requests as you run, which could be handy for tracing problems. There is also a database management workspace which uses SQL Server Compact Edition, a reporting workspace which will analyse your site for problems, and the ability to publish a site using FTP or Microsoft’s Web Deploy.

I like the clean look of WebMatrix, and that it is lightweight and fast; but who is the target user? It appears to be aimed at non-professionals; but this is a techie product that will not appeal to users looking for an easy to use web site builder. There is no visual editor; users are just chucked in at the deep end editing raw HTML and C#. There is not even any IntelliSense code completion. Clicking Online Help just brings up a Microsoft search form. There is no debugger to speak of; you are expected to upgrade to Visual Studio. Which raises the question, why not just get Visual Web Developer 2010 Express, which is also free, and has a better editor and debugging features? Of course you could use the two together; but WebMatrix is not adding much value. Features like the SEO analysis seem to be based on the existing Search Engine Optimization Toolkit, which you can install without WebMatrix.

WebMatrix has been available in beta for six months, but its forum is relatively quiet.

Still, if nothing else Web Matrix is a handy way to take a look at Razor, which deserves attention. Shay Friedman has a technical introduction here.

Guthrie has a detailed look at the WebMatrix beta here.

ASP.NET Padding Oracle fix released, time to patch for Windows administrators

Scott Guthrie’s blog reports that a fix is now available for the Padding Oracle attack, which enables successful attackers to break the security of ASP.NET applications. There are a few points of interest.

First, there is not one patch but several, and which ones you need depend both on the version of Windows and the version of .NET. Multiple versions of .NET may be installed on a single server.

Second, the exploit is rated “important” in Microsoft security-speak, rather than “critical”. This is apparently because in itself the vulnerability merely discloses information. However, Microsoft is treating it with a high priority because the vulnerability is likely to reveal information that would let the attacker go on to more severe actions, such as taking over a server. Confusing, but to my mind it is as critical as they come.

Third, Guthrie’s blog notes:

We’d like to thank Juliano Rizzo and Thai Duong, who discovered that their previous research worked against ASP.NET, for not releasing their POET tool publicly before our update was ready.

The implication is that the POET tool may be publicly available soon – so if you are responsible for an affected machine, get patching! In fact, in the webcast on the subject Microsoft stated that “The potential for exploit is very high during the next 30 days.”

Fourth, the update works by “additionally signing all data that is encrypted by ASP.NET.”

Update: Marc Brooks has investigated and it looks like there is a bit more to it than that.

Finally, the update will be included in Windows Update but not immediately. Your choice is whether to risk a hack in the period before the automatic update appears, or endure the hassle of the manual downloads. Microsoft advises to do it as soon as possible for servers on the public internet.

I am not sure what percentage of systems are likely to be patched soon, but I’d guess that plenty of vulnerable systems will remain online and that we have not heard the last of this bug.