Office, Azure Active Directory, and mobile: the three pillars of Microsoft’s cloud

When Microsoft first announced Azure, at its PDC Conference in October 2008, I was not impressed. Here is the press release, if you fancy a look back. It was not so much the technology – though with hindsight Microsoft’s failure to offer plain old Windows VMs from the beginning was a mistake – but rather, the body language that was all wrong. After all, here is a company whose fortunes are built on supplying server and client operating systems and applications to businesses, and on a partner ecosystem that has grown up around reselling, installing and servicing those systems. How can it transition to a cloud model without cannibalising its own business and disrupting its own partners? In 2008 the message I heard was, “we’re doing this cloud thing because it is expected of us, but really we’d like you to keep buying Windows Server, SQL Server, Office and all the rest.”

Take-up was small, as far as anyone could tell, and the scene was set for Microsoft to be outflanked by Amazon for IaaS (Infrastructure as a Service) and Google for cloud-based email and documents.

Those companies are formidable competitors; but Microsoft’s cloud story is working out better than I had expected. Although Azure sputtered in its early years, the company had some success with BPOS (Business Productivity Online Suite), which launched in the UK in 2009: hosted Exchange and SharePoint, mainly aimed at education and small businesses. In 2011 BPOS was reshaped into Office 365 and marketed strongly. Anyone who has managed Exchange, SharePoint and Active Directory knows how arduous it can be, thanks to complex installation, occasional tricky problems, and the challenge of backup and recovery in the event of disaster. Office 365 makes huge sense for many organisations, and is growing fast – “the fastest growing business in the history of the company,” according to Corporate VP of Windows Server and System Center Brad Anderson, speaking to the press last week.

[Image: Brad Anderson, Corporate VP for Windows Server and System Center]

The attraction of Office 365 is that you can move users from on-premises Exchange almost seamlessly.

Then Azure changed. I date this from May 2011, when Scott Guthrie and others moved to work on Azure, which a year later offered a new user-friendly portal written in HTML5, along with Windows Azure VMs and web sites. From that moment in 2012, Azure became a real competitor in cloud computing.

That is only two years ago, but Microsoft’s progress has been remarkable. Azure has been adding features almost as fast as Amazon Web Services (AWS) – I have not attempted to count them – and although it is still behind AWS in some areas, it compensates with its excellent portal and integration with Visual Studio.

Now at TechEd Microsoft has made another wave of Azure announcements. A quick summary of the main ones:

  • Azure Files: SMB shared storage for Azure VMs, also accessible over the internet via a REST API. Think of it as a shared folder for VMs, simplifying things like having multiple web servers serve the same web site. Based on Azure storage (see the code sketch after this list).
  • Azure Site Recovery: based on Hyper-V Recovery Manager, which orchestrates replication and recovery across two datacenters, the new service adds the rather important feature of letting you use Azure itself as your spare datacenter. This means anyone could use it, from small businesses to the big guys, provided all your servers are virtualised.
  • Azure RemoteApp: Remote Desktop Services in Azure, though currently only for individual apps, not full desktops
  • Antimalware for Azure: System Center Endpoint Protection for Azure VMs. There is also a partnership with Trend Micro for protecting Azure services.
  • Public IPs for individual VMs. If you are happy to handle the firewall aspect, you can now give a VM a public IP and access it without setting up an Azure endpoint.
  • IP Reservations: you get up to five IP addresses per subscription to assign to Azure services, ensuring that they stay the same even if you delete a service and add a new one back.
  • MSDN subscribers can use Windows 7 or 8.1 on Azure VMs for development and test, the first time Microsoft has allowed client Windows on Azure.
  • General availability of ExpressRoute: fast network link to Azure without going over the internet
  • General availability of multiple site-to-site virtual network links, and inter-region virtual networks.
  • General availability of compute-intensive VMs, up to 16 cores and 112GB RAM
  • General availability of import/export service (ship data on physical storage to and from Azure)
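To give a flavour of the REST-accessible side of Azure Files, here is a minimal C# sketch. It assumes the preview Azure Storage client library (Microsoft.WindowsAzure.Storage) with its file share support; the account details and the “webcontent” share name are hypothetical, and the same share could be mounted over SMB from an Azure VM.

using System;
using Microsoft.WindowsAzure.Storage;        // preview storage client with Azure Files support (assumed)
using Microsoft.WindowsAzure.Storage.File;

class AzureFilesSketch
{
    static void Main()
    {
        // Hypothetical storage account credentials
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>");

        var client = account.CreateCloudFileClient();
        var share = client.GetShareReference("webcontent");   // hypothetical share name
        share.CreateIfNotExists();

        // Write a file via REST; VMs mounting the same share over SMB see it as a normal file
        var root = share.GetRootDirectoryReference();
        var file = root.GetFileReference("hello.txt");
        file.UploadText("Hello from the REST side of Azure Files");

        Console.WriteLine(file.DownloadText());
    }
}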

There is more though. Those above are just a bunch of features, not a strategy. The strategy is based around Azure Active Directory (which everyone gets if they use Office 365, or you can set up separately), Office, and mobile.

Here is how this works. Azure Active Directory (AD), typically synchronised with on-premises Active Directory, is Microsoft’s cloud identity system, which you can use for single sign-on and as a single point of control for Office 365, applications running on Azure, and cloud apps run by third parties. Over 1,200 software-as-a-service apps support Azure AD, including Dropbox, Salesforce, Box, and even Google Apps.
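To illustrate what single sign-on against Azure AD looks like from a developer’s point of view, here is a minimal sketch using the Active Directory Authentication Library (ADAL) for .NET. The tenant, client ID and redirect URI are hypothetical placeholders that you would register in your own directory; the call prompts the user to sign in and returns a token that any service trusting Azure AD can accept.

using System;
using Microsoft.IdentityModel.Clients.ActiveDirectory;   // ADAL for .NET

class AzureAdSignInSketch
{
    static void Main()
    {
        // Hypothetical Azure AD tenant (directory)
        var authContext = new AuthenticationContext(
            "https://login.windows.net/contoso.onmicrosoft.com");

        // Interactive sign-in: hypothetical client ID and redirect URI registered in Azure AD,
        // requesting a token for the Azure AD Graph API as the protected resource
        AuthenticationResult result = authContext.AcquireToken(
            "https://graph.windows.net",                      // resource
            "11111111-2222-3333-4444-555555555555",           // client ID
            new Uri("http://localhost/myclient"));            // redirect URI

        // The access token can then be sent as a bearer token to any service that trusts Azure AD
        Console.WriteLine("Token acquired: " + result.AccessToken.Substring(0, 16) + "...");
    }
}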

Azure AD is one of three components in what Microsoft calls its Enterprise Mobility Suite. The other two are Intune, cloud-based PC and device management, and Azure Rights Management.

Intune first. This is stepping up a gear in mobile device management, gaining the ability to deploy managed apps. A managed app is an app that is wrapped so that it supports policy, such as the requirement that data can only be saved to a specified secure location. Think of it as a mobile container. iOS and Android will be supported first, with Office managed apps including Word, Excel, PowerPoint and Mobile OWA (a kind of Outlook for iOS and Android, based on Outlook Web Access but delivered as a native app with offline support).

Businesses will be able to wrap their own applications as managed apps.

Microsoft is also adding Cordova support to Visual Studio. Cordova is the open source part of PhoneGap, for wrapping HTML and JavaScript apps as native apps. In other words, Visual Studio is now a cross-platform development tool, even without Xamarin. I have not seen details yet, but I imagine the WinJS library, also used for Windows 8 apps, will be part of the support; yes, it works on other platforms.

Next, Azure Rights Management (RMS). This is a service which lets you encrypt and control usage of documents based on Azure AD users. It is not foolproof, but since the protection travels in the document itself, it offers some protection against data leaking out of the company when it finds its way onto mobile devices or pen drives and the like. Only a few applications are fully “enlightened”, which means they have native support for Azure RMS, but apparently 70% or more of business documents are Office or PDF files, which means that if you cover those formats, you have good coverage already. Office for iOS is not yet “enlightened”, but apparently will be soon.

This gives Microsoft a three-point plan for mobile device management, covering the device, the applications, and the files themselves.

Which devices? iOS, Android and Windows; and my sense is that Microsoft is now serious about full support for iOS and Android (it has little choice).

Another announcement at TechEd today concerns SharePoint in Office 365 and OneDrive for Business (the client), which is getting file encryption.

What does this add up to? For businesses happy to continue in the Microsoft world, it seems to me a compelling offering for cloud and mobile.

Acer’s R7: the most twisty Windows 8 tablet/laptop yet

What is the name for a laptop that is also a tablet? A tabtop? Or perhaps a tabletop, which is a good way to describe Acer’s R7. The hinge (called “Ezel”) swings up to become a stalk supporting the tablet, raising it above the table.

What is the value of this configuration? Maybe you can think of something?

The thinking here (if I have it right) is that you can get the screen closer to your eyes than would be possible with a normal laptop. In other words, the flat table-top position is not the normal use; rather, the screen is tilted towards you but raised above the keyboard, if you see what I mean.

Alternatively, there is a backwards configuration which, I was told, is for presentations. The keyboard is on your side of the machine, while the screen points into the room.

Of course, you won’t actually be able to see the screen yourself but that is a small detail versus the great view afforded to your audience. I am being a little unfair – the idea I think is that you sit screen-side too, and control it by touch.

You can also fold the screen flat to make a standard fat tablet configuration.

Failing that, there is a keyboard-only mode – note, no trackpad on view.

This is to avoid hitting the trackpad by mistake, apparently. Or if you really want a trackpad, you can have it, but note that it is behind the keyboard:

A bit odd? Let’s just say, different.

Close the lid, and it looks like just another laptop.

For more information on the R7 see here.

Review: Power Cover for Microsoft Surface tablets

I took advantage of a recent US trip to purchase a Surface Power Cover, at the Microsoft Store in Bellevue, near Seattle.

The concept is simple: you get an external battery integrated into a Surface keyboard cover. The keyboard is similar to the second version of the Type Cover, though curiously without backlighting other than a caps lock indicator. The keys are mechanical, which for most people means you can type faster than on the alternative Touch Cover, though it is less elegant when considered as a cover rather than as a keyboard.

The trackpad is the same on all three second edition covers, which is to say, not good. The problem is not the trackpad itself, but the mouse buttons, which are NOT mechanical keys (they were on the first edition Type Cover). Given that you need to press and hold a mouse key for some operations, having a physical click on the trackpad buttons is particularly useful and much missed. Another annoyance is that you cannot disable tap to click, which means some mis-clicks are inevitable, though on the flip side it is easier to tap to click than to use the fiddly mouse buttons.

Having said that it is the same, I have noticed that the trackpad on the Power Cover seems a bit smoother and better behaved than the one on the Type Cover 2. This could be sample variation, or because it is new, or because Microsoft has slightly tweaked the internal design.

As you would expect, the Power Cover is heavier and more substantial than the Type Cover, though I find you notice the weight more than the bulk. Even with the Power Cover, it is still smaller and neater than a laptop. The extra rigidity is a benefit in some scenarios, such as when the keyboard protrudes over the edge of a table. The fabric hinge, which is a weak point in the design of all the Surface covers, seems to be the same on the Power Cover and I fear this may cause problems as the device wears, since the extra weight will put more strain on this hinge.

As with the other keyboard covers, if you fold it back under the tablet, the keys are disabled. In this mode the Power Cover is purely an external battery.

I used the cover with the original Surface Pro (it is compatible with all the models other than the original Surface RT). I understand that a firmware update is needed for the Power Cover to work; if so, it installed seamlessly, though I did need to restart after connecting the keyboard for the first time. Everything worked as expected. If you click the battery icon in the notification area you can see the status of both batteries and which is charging, if you are plugged in; generally only one charges at a time.

I boarded my flight and noticed that the Surface is smart enough to use the external battery first, and then the internal, presumably on the basis that you might want to remove the keyboard and use the Surface in pure tablet mode.

It is impossible to be precise about how much extra time you get from the Power Cover, since it depends how you use the machine. It is a big benefit on the original Surface Pro which has rather poor battery life; extended battery life is perhaps the biggest real-world difference between the Surface Pro and the Surface Pro 2. Subjectively I have doubled the battery life on my year-old Surface Pro, which for me makes the difference between running out of battery fairly often, and hardly ever.

The Power Cover costs $199, which is expensive considering that you can get an entire spare Android tablet or Amazon Kindle Fire for less; but put in the context of the equally over-priced Type Cover, which costs $129, you can argue that it is not that much extra to pay. Prices from third-party sites will likely be lower once availability improves.

If you need it, you need it; and this must be the best way to extend the battery life of a Surface tablet.

The Surface keyboard covers are not perfect, and I still sometimes see an annoying fault where the mouse pointer or keys stop responding and you have to jiggle the connection or tap the screen a few times to get it back (I am sure this is a driver issue rather than a poor physical connection). Still, I put up with a few irritations because the Surface gives me full Windows in a more convenient and portable form factor than a laptop, and there is more right than wrong with the overall design.

Summary:

  • If you already have a keyboard and your Surface lasts as long as you need – forget it.
  • If you have a Surface that runs out of power with annoying frequency (probably a Surface Pro 1), this is worth it despite the high price.
  • If you don’t have a keyboard (for example, you are buying a new Surface) then this is worth the extra cost over the Type keyboard.

Review: Nuance Dragon Dictate 4 for the Mac

There is something liberating about working without a keyboard – and I do not mean stabbing hopefully at a touch screen. Voice control means you can sit back, easily refer to books or papers, and input text more quickly and naturally than is possible using a keyboard. Some conditions, including RSI (Repetitive Strain Injury), may make dictation a necessity. I use dictation for transcribing interviews and for rapid text input generally. I do not often use dictation for controlling a computer, as opposed to entering and editing text, but this is also a key feature.

Nuance has the best voice recognition system available as far as I can tell, though my experience is mainly with Nuance Dragon NaturallySpeaking on Windows. But what about Mac users? For them, Nuance provides Dragon Dictate, which has recently been updated to version 4. It is not a port of Dragon NaturallySpeaking, but rather has its own distinctive features, though it is less comprehensive, and a glance at the Nuance forums suggests that Mac users feel a bit neglected.

Does Dragon Dictate 4 change that? The good news is that the voice recognition engine in Dragon Dictate appears to be just as good as the one in Dragon NaturallySpeaking. The accuracy is superb, though you still have to be realistic. Some recognition problems are just very difficult and the software is bound to make mistakes, especially in specialist fields – mine is programming, and a specialist phrase like “JIT compiler” is bound to cause an error (Dragon thinks I want “Jet compiler”). Similarly, “pull request” became “full request”. Over time you can build up a custom vocabulary, but recognition will never be 100%, so a dictation system has to handle corrections as well as original input.

Setting up Dragon Dictate involves installing the software and then letting it create a profile and doing some training so that Dragon can learn the characteristics of your voice. I highly recommend using a good quality headset, since without one you cannot expect accurate recognition. I found the setup process quick and painless and was soon up and running.

Dragon Dictate has five modes:

  • Dictation Mode is what you use most of the time.
  • Spelling Mode is for spelling out problematic words. You can speak the letters naturally or use the International Radio Alphabet (Alpha Bravo Charlie etc). It is a nice feature since if you know Dragon is likely to get something wrong, you can switch to Spelling Mode, enter the difficult word, and then go back to Dictation Mode.
  • Numbers Mode is for typing numbers.
  • Command Mode is for non-dictation commands. However, commands also work in Dictation Mode. The advantage of Command Mode is that Dragon will not misinterpret your commands as text input; but there is no way to configure Dictation Mode to prevent it from interpreting speech intended as text as commands. The manual suggests that you use unnatural pauses for this. For example, if you are reviewing Dragon Dictate and want to type “Command Mode”, you can say “Command [pause] Mode” and get what you want.
  • Sleep Mode puts Dragon in a resting state, so for example you can take a telephone call without Dragon trying to transcribe it.

Switching mode is easy: just speak the mode you want. If Dragon is in Sleep Mode, you can say “Wake up”.

My initial experience with Dragon Dictate 4 was not too good. The problems were not with recognition but rather with navigating and correcting existing text, which I found harder than in Dragon NaturallySpeaking on Windows. In fact my attempts to make corrections all too often ended up with more and more errors as a correction went wrong and I would be trying to correct the correction, getting increasingly frustrated.

Using Microsoft Word 2011, I experienced unexpected behaviour. For example, if I put the cursor in between two words and dictated a word to insert, sometimes the word appeared elsewhere in the text.

Another odd thing: I dictated “for example”, and Dragon recognised it as “one example”. No problem: Dragon has a Recognition Window which lists alternatives when you say “correct” followed by the word you want to amend. I said “correct one” and the Recognition Window appeared offering “for example” as one of the choices. I selected it, but Dragon then entered “for example example” in the text. I was not offered the word “for” on its own.

Dragon Dictate 4 was rescued from a terrible review when I studied the manual. Towards the end is a section entitled “The Cache and the Golden Rule”. This explains that you should not combine the use of keyboard and mouse with dictation when editing a document. If you do, Dragon gets confused about the contents of the document and you see unexpected results. You can fix this with a special command, “Cache Document”, which tells the software to clear and rebuild the cache for the entire document.

If you are not aware of this issue, then you are likely to make increasing use of keyboard and mouse as Dragon gets it wrong, making the issue worse. That is exactly what had happened to me.

Another key point is the difference between training and correcting. If you use the Recognition Window to make a change that is not in fact a recognition fault – such as changing “good” to “excellent” – then you will confuse the voice training. Rather, you should say “Select good”, to select the word you want to change, and then say “excellent” to overtype it.

After studying the manual, I got much better results, though Dragon Dictate still occasionally seems to have a mind of its own.

Nevertheless, this fussiness is a weakness in the software. The best software works the way you want it to, rather than making the user do things a certain way. Why can’t Dragon do its cache repairs automatically in the background?

Still, what Dragon offers is of high value, and in this case if you want the best results you have to do the homework.

There are a few other things to mention. Nuance offers a free app for the iPhone that lets you use it as a remote microphone. Personally I find a headset more convenient, but I guess there are scenarios where this is useful.

There are also features in Dragon Dictate aimed at general system control. I tried the MouseGrid, which overlays a grid over the entire screen and lets you zoom into the area of interest for accurate mouse control. You can also move the mouse using Up, Down and so on under voice control, and perform single, double or triple clicks.

Conclusion? The software does not feel as complete or as polished as Dragon NaturallySpeaking, but the excellent voice recognition means that this is the best available for the Mac. Recommended, but with reservations.

Microsoft’s new open source direction for C# and .NET (and native compilation too): Anders Hejlsberg explains

At the April 2014 Build conference Microsoft made some far-reaching announcements about its .NET platform and the C# programming language. Yes, there was talk of C# 6.0, the next version, but the real changes are more profound. Specifically:

C# and Visual Basic have a new compiler, itself written in C#, code-named Roslyn. Roslyn is not just a new compiler; Microsoft now calls it the “.NET Compiler Platform”.
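To give a sense of what “compiler platform” means in practice, here is a minimal sketch based on the public Roslyn preview APIs: it parses a fragment of C# and walks the resulting syntax tree. The Greeter class is just made-up sample input.

using System;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class RoslynSketch
{
    static void Main()
    {
        // Parse source text into a syntax tree using the .NET Compiler Platform
        SyntaxTree tree = CSharpSyntaxTree.ParseText(@"
            class Greeter
            {
                void Hello() { System.Console.WriteLine(""Hello""); }
                void Goodbye() { }
            }");

        // Query the tree for method declarations, using the same object model the compiler itself uses
        var methods = tree.GetRoot()
                          .DescendantNodes()
                          .OfType<MethodDeclarationSyntax>();

        foreach (var method in methods)
            Console.WriteLine(method.Identifier.Text);   // prints Hello, then Goodbye
    }
}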

There is a new commitment to open source for .NET projects. Microsoft formed the .NET Foundation to oversee existing open source projects, including ASP.NET, Entity Framework, the Azure .NET SDK, and now Roslyn as well. “When it comes to development projects we are going to operate from the premise that open source is the default, unless there are reasons why it does not work,” said C# lead architect Anders Hejlsberg.

Note that open source does not mean chaos. It does mean that you can fork the project if you want – the Roslyn license is Apache 2.0 – but getting Microsoft to accept new features you have contributed will not be trivial. Hejlsberg makes the point that language features are easy to add, but impossible to take away, so extreme care is necessary.

Microsoft is also supporting cross-platform C# to a greater extent than it has done in the past. The most obvious sign of this is its cooperation with Xamarin, which provides C# compilers for iOS and Android. Xamarin’s Miguel de Icaza got top billing at Build, and is also involved in the .NET Foundation.

There is more though. The idea of standardised C# is re-emerging:

“The last ECMA standard was C# 2.0. There wasn’t a lot of demand for it, but that demand has recently risen and we have re-engaged with the ECMA community to produce a standard for C# 5.0,” said Hejlsberg.

This bears some unpacking. Why was there little demand for ECMA C#? Partly, I would guess, from the assumption that C# was firmly in Microsoft’s grip, with Java the obvious choice for cross-platform development. The main interest was from the Mono folk (Miguel de Icaza again), who implemented .NET for Linux and the Mac with some success, but nothing to disturb Java’s momentum.

The focus now though is on mobile, and interest in C# is stronger, mainly from Microsoft-platform developers reaching beyond Windows. There is also Unity, which uses C# as a scripting language for developing games for multiple platforms, including iOS, Android, Windows, Mac, Linux, Xbox, PS3 and Wii – PS4 is coming very soon.

Microsoft has now consciously embraced multiple platforms, as evidenced by Office for iOS as well as the Xamarin collaboration. “We want C# developers to build great applications across different form factors and different device platforms,” said Jay Schmelzer, Director of Program Management for Visual Studio.

You might observe that this position has been forced on the company by the rise of iOS and Android, a view which likely has some merit, but the impact it has on C# and .NET itself is still real.

I asked Hejlsberg to unpack the difference between the Roslyn project and C# 6.0, bearing in mind that both are covered on the Roslyn open source site; you can see the current status of C# 6.0 and the next Visual Basic here.

“Roslyn is the name for the project that encompasses the new C# compiler and the new VB compiler and the new language services that they share. C# 6.0 is the name of the next version of the C# language, which will have a specification and which will have an implementation. We are implementing C# 6.0 on the Roslyn platform. We are not going to continue to evolve our old C++ C# compiler – the C# compiler was originally written in C++ and has been evolved up through C# 5.0. That is where we are going to retire that code base, and going forward versions of C# will be built on Roslyn and therefore will be built open source. Unlike previously where, boom, C# came down from the sky with a set of features, it is going to happen more organically now: people will submit pull requests, open up issues, and you will see us work on these features. You will see them from inception to fruition.

“The C# team, the Roslyn team, the VB team, their day-to-day workplace now is the open source site. That is where they check in code. It is a community in the making.”

Even that is not all. At Build, Microsoft also announced .NET Native, which is a native compiler for C# and Visual Basic, now in preview for x64 Store apps. What is the difference between .NET Native and the existing NGen native compiler for .NET? Over to Hejlsberg:

NGen is the native feature that we currently support. NGen is really, “I’m going to JIT [Just in time compile] your code and then snapshot all the data structures and dump them in a file so that I can quickly rebuild that file later when you run this particular application”. But it is the same code generator and all the same features, and JIT is still there. NGen is really a way to pre-cache the JIT output and therefore get better performance, but it adds to the size of your app because you still have all the assemblies and metadata and then the NGen image as well.

.NET Native is a completely different approach. Instead of the JIT we use the backend from the C++ compiler. You can think of it as a linker that takes as input assemblies, and as output produces a PE [Portable Executable] executable. In the process this linker or code generator will analyse all the IL [Intermediate Language] that goes into the application and it will apply a thing known as tree-shaking where it eliminates all of the code that will never execute based on known execution roots.

In other words, the public static main of your program and also whatever pieces of your app that you designate as reflectable, they also become roots. Based on that we produce an optimised exe, and into that exe we link the pieces of the framework that you are referencing. We link in a garbage collector [GC], and it looks to the operating system just like an exe. When you run it, it runs a local GC in there and it is as efficient really as C++ code.

There are some restrictions associated with .NET Native, mainly that you can’t just willy-nilly reflect on the whole world. You can’t just generate new code and ask for that to be jitted because there may not be a JIT compiler. We are considering allowing you to link in a JIT compiler, but there are certain execution environments which don’t permit jitting, like Xbox. If you use reflection in your app you have to tell us what to keep reflectable, because otherwise we will optimise it away.
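To make the reflection restriction concrete, here is a hypothetical C# sketch (my own, not Microsoft’s) showing the difference between code the tree-shaker can see and code reached only via reflection, which has to be declared reflectable (for example through runtime directives) or it may be optimised away.

using System;

static class Reports
{
    // Reached directly from Main, so static analysis keeps it
    public static void Print()
    {
        Console.WriteLine("kept: reachable from a known root");
    }
}

static class HiddenHelper
{
    // Reached only via reflection; under .NET Native tree-shaking this type
    // must be marked reflectable or its metadata and code can be removed
    public static void Run()
    {
        Console.WriteLine("kept only if declared reflectable");
    }
}

class Program
{
    static void Main()
    {
        Reports.Print();

        // The linker cannot see this dependency statically
        Type helper = Type.GetType("HiddenHelper");
        if (helper != null)
        {
            helper.GetMethod("Run").Invoke(null, null);
        }
    }
}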

According to Schmelzer:

The preview out today is scoped to Store app x64 and ARM. We haven’t run into any technical limitation that shows it can’t be done across the breadth, it is just a matter of request and need.

Open source, native code compilation, and an innovative compiler: it adds up to huge changes for C# and .NET, positive ones as far as I can tell.

The Xamarin connection is intriguing though. Developers in general admire the technology as far as I can tell, but it is expensive, and paying out for a Xamarin subscription on top of maybe MSDN for Visual Studio is too much for some smaller organisations and does not encourage experimentation. Might Microsoft acquire Xamarin and build Visual Studio into an IDE targeting all the major mobile platforms, but with special hooks to Azure-hosted services?

That prospect makes sense to me, though it would be a shame if the energetic Xamarin culture became bogged down in big-company bureaucracy. Currently though: no news to report.

Brief reflections on 50 years of BASIC

Beginner’s All-Purpose Symbolic Instruction Code (BASIC) has turned fifty, as reported on The Reg and by Jack Schofield on ZDNet. A great moment in computer history, or would we have been better off without it?

My first computer (a Commodore PET) ran Basic from ROM, and without it you could do nothing, though developers bypassed it by using the POKE command to write low-level instructions into memory. The language is meant to be forgiving (as far as a computer language can be) and English-like, at the expense of being a little more verbose. It is case-insensitive and does not require braces or semi-colons to indicate blocks or lines of code, which makes programming look less intimidating for beginners.

I graduated onto an Atari ST, for which there was an excellent Basic implementation called GFA Basic, fast and capable. This was great for writing utilities, though serious programming tended to use one of several strong C compilers: Lattice C, Mark Williams C and HiSoft C come to mind.

Basic also had a role, even on the ST, as a macro language for applications. For example, the Superbase database manager used a version of Basic.

The company most strongly associated with Basic though is Microsoft. A version of Basic came with MS-DOS.

Microsoft also supported Basic for professional development. Microsoft Basic Professional Development System 7.x was a well-regarded development tool for business applications, though commercial shrink-wrap software tended to be written in C or C++.

That trend followed through to the Windows graphical environment. Visual Basic (VB), which made it easy to code Windows applications, was perhaps the most significant Basic release in terms of its impact, especially when it reached version 3.0 with full database support. Its popularity was such that many developers felt wounded when Microsoft discontinued Visual Basic 6.0, a direct successor, in favour of Visual Basic .NET, which is incompatible and substantially different.

Further, VB 6.0 or something very like it lives on today, in the form of Visual Basic for Applications as found in all recent versions of Microsoft Office.

Despite this, Basic is in decline. Most of the professional developers I meet at events like Build use C# in preference to Visual Basic, there being little reason not to. C# is the premier language of .NET, and Visual Basic gets in the way if you want to keep up with latest .NET developments. Xamarin, which lets you code in .NET for iOS and Android, supports C# but not Visual Basic. Once you come to terms with semi-colons, braces and case-sensitivity, there is no real advantage to Visual Basic and C# is no more difficult.

I do see Visual Basic still used in education though, as well as by some developers who either prefer the language or are so used to it that they see no need to change; and to be fair, Xamarin aside, there is little if anything you can do in C# that you cannot also do in VB and the output is more or less the same.

The Roslyn project, which underpins the next version of C# and will probably appear in the next release of Visual Studio, lets you paste C# code as VB and vice versa.

Nevertheless, I believe we will see further decline in Basic usage, especially as it is little used outside Microsoft’s platform.

Would it have been better if Microsoft had not adopted Basic so wholeheartedly? There are some problems with Basic, though it is possible to write excellent code in Basic just as you can write poor code in C#, Python, C, or other more fashionable languages. Some issues:

  • Early versions of Basic encouraged badly structured programming with keywords like GOTO and GOSUB resulting in intricate loops that were hard to follow or debug.
  • Basic abstracts how software works to such an extent that you do not learn some important programming concepts such as pointers, addresses, memory allocation.
  • There is no natural progression from Basic to the C-like languages which dominate computing (C, C++, JavaScript, C#).
  • Visual Basic encourages developers to mix GUI code and business logic in the same files, as well as building user interfaces that tend not to scale well.
  • Small and declining professional use means that Basic is less useful than many other languages in the job market.

That said, Basic powers many excellent business applications as well as introducing many to the wonders of programming, and deserves our respect.