Tag Archives: linux

Why Subsystem for Linux in Windows 10 and Windows Server? And what are the implications?

Microsoft is busy improving Windows Subsystem for Linux (WSL), the compatibility layer that lets you run Linux on Windows. WSL is not an emulator: it runs actual Linux binaries, accesses the same file system, and lets you launch Windows applications from WSL and vice versa.

The latest announcements cover copy/paste between Linux and Windows, and a tabbed console. Both enhancements are in the skip-ahead insider version of Windows 10, which means they are unlikely to be in the one about to be released, currently known as Spring Creators Update (but rumoured to be getting a name change). In other words, you may have to wait around six months for this to be generally available.


These are not huge changes, but overall WSL is a big deal. Why is Microsoft doing it? One Betanews commenter says:

I still can’t figure out who this whole "Linux-on-Windows" thing is meant for. Developers who work on both platforms maybe? I guess it would be handy for people who just want to try out Linux before migrating to it, but that’s the last thing Microsoft would want to promote.

Microsoft has in fact stated the primary purpose of WSL:

This is primarily a tool for developers — especially web developers and those who work on or with open source projects. This allows those who want/need to use Bash, common Linux tools (sed, awk, etc.) and many Linux-first tools (Ruby, Python, etc.) to use their toolchain on Windows.

There is a bit more to it. Developers are small in number relative to general users, but disproportionately influential, since they make the applications the rest of us run, and if the applications are not there or are inferior, the ecosystem starts to fail and the operating system declines.

I am not sure when it was that developers started to prefer Macs, but I noticed this trend many years ago, perhaps from the time that OS X moved to x86 (2006). This was not just about preferring the Mac user interface. In 2008 Apple opened up iOS, its mobile OS, to third-party applications, and a Mac was required for iOS development (this is still the case). It has long been relatively easy to run a Windows emulator on a Mac, but not vice versa, so for developers who want to support multiple target platforms from one computer, the Mac makes sense.

OS X / macOS is a Unix-like operating system, based on BSD (Berkeley Software Distribution). This means that moving between Linux and Mac is relatively smooth, from a developer perspective. The same tools are generally available. The internet runs mostly on Linux so the Mac has an advantage there as well.

In some cases this is more than just inconvenience. Windows has a long-standing issue with path lengths. MAX_PATH is defined as 260 characters. This limitation can be mostly removed if you have Windows 10 build 1607 or higher. Nevertheless, path issues have made Windows awkward for developing with Java, Node.js, and other languages or frameworks which typically use deeply nested directories. Open source developers perhaps did not care as much about these issues because they were mostly using Mac or Linux.
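For what it is worth, the opt-in on build 1607 and later is a registry value, plus a longPathAware declaration in the application manifest for desktop applications; a minimal sketch, run from an elevated command prompt, looks like this:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f

Individual applications still have to opt in, which is why the limitation is only mostly removed.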

Microsoft has responded by improving Windows as a platform on which to develop applications. Visual Studio now targets Mac, iOS and Android as well as Windows. MAX_PATH has been alleviated as far as possible. WSL however goes much further. You can install and run Linux development tools and utilities such as gcc, perl, sed, awk, grep, wget, openssl and more. There is no MAX_PATH issue. You can run the Linux build of Apache, PHP, MySQL and more. I used WSL to debug a PHP application and explained how here.

WSL is not perfect. Not everything is implemented. You can check the current issues here. Still, it is genuinely useful and mitigates the advantages of Mac or Linux for developers.

Microsoft has also added WSL to Windows Server. Why? The main focus here seems to be on administrators. There are times when it is handy to run a Linux command or script on Windows Server. It is not intended for production use as a server. There is now support for background tasks, but it is still per-session, so you would need to keep a user logged on in order to run, for example, a web server. More importantly, Microsoft has not designed WSL as a production server platform, so it might not be as optimized or reliable as you require.

Implications of WSL

Where is this going? This is where it gets speculative. I will argue though that WSL is in part an admission of defeat. Windows remains an important development platform, but is now greatly outweighed by Unix-like platforms:

  • Web/Internet applications
  • iOS applications
  • Android applications

Where Windows support is needed, developers have many cross-platform options to choose from, a popular choice today being Electron, based on Chromium (the open source foundation of Google Chrome) and Node.js.

Today there seems little chance of Windows winning back market share as a mobile operating system, and the importance of desktop applications looks destined for long slow decline.

Windows Server remains a significant application platform, but Microsoft is focused more on driving developers to Azure cloud services than on Windows Server itself. SQL Server now runs on Linux, ASP.NET Core is cross-platform, and Azure has excellent support for Linux.

All of this leads me to think that WSL will continue to improve, perhaps to the point where production loads are supported on Windows Server, for example. Further, the ability to run Windows applications on Linux (which is more or less what happens in SQL Server for Linux) may become as important as the reverse.

Setting up PHP for development on Windows Subsystem for Linux in Windows 10

I have been working a little with PHP, for the first time for a while, and soon found it annoying not to have the convenience of instant application testing and line by line debugging. I have set up a PHP development environment before using XAMPP for Windows and Eclipse, but it was fiddly. I also prefer PHP on Linux, which is where my scripts will be running.

Since Windows 10 now has a Linux environment built-in, called Windows Subsystem for Linux (WSL), I decided to set this up to run Apache, PHP and MySQL and to try debugging my scripts there.

My PC is a recent installation and I had not yet installed WSL. To do so, you have to both download a Linux distribution from the Store (I chose Ubuntu), and enable WSL in Windows features. Then restart, launch Ubuntu, set a username and password, and you are up and running.
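If you prefer the command line, the WSL feature can also be enabled from an elevated PowerShell prompt (you still download the distribution from the Store, and a restart is still required):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux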

Note that the Linux commands that follow should be run as root, using sudo.

Before doing anything else, I got Ubuntu up to date:

apt-get update

apt-get upgrade

Then I installed the LAMP suite:

apt-get install lamp-server^

(the final ^ is intentional; see the guide here).

To check that everything is working, I created the file phpinfo.php in /var/www/html with the following contents:

<?php phpinfo(); ?>

and restarted Apache:

/etc/init.d/apache2 restart

Note: if you have IIS running in Windows, or another web server, Apache will not be able to listen on port 80. Change the port in /etc/apache2/ports.conf and in /etc/apache2/sites-enabled/000-default.conf.
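For example, to move Apache to port 8080 (an arbitrary choice), change the Listen directive and the default virtual host, then restart Apache and browse to localhost:8080 instead:

# in /etc/apache2/ports.conf
Listen 8080

# in /etc/apache2/sites-enabled/000-default.conf
<VirtualHost *:8080>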

Then I opened a web browser on the Windows side and browsed to localhost, and then to localhost/phpinfo.php, to confirm that Apache and PHP were both working.


We are up and running, but not debugging PHP yet. Remember the basic rules of WSL:

  • you cannot change Linux files from Windows.
  • you can access Windows files from Linux.

We want to edit PHP from Windows, so we’ll define a site that uses Windows files. Windows files are under /mnt/c (or whatever drive letter you are using).

So if, for example, you have your PHP website in a folder called c:\websites\mysite, you can have Apache serve files from that folder.

The quickest way to get up and running is to create a symbolic link in the Apache home directory, in my case /var/www/html. Change to that directory and type:

ln -s /mnt/c/websites/mysite mysite

Now you can view the site at http://localhost/mysite/

This worked first time for me, complete with PHP running. You could also set up multiple virtual hosts in Apache, and use the hosts file in Windows to map other host names to localhost.
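As a sketch, a virtual host for the example site might look like this (the hostname mysite.local is purely illustrative, and the Directory block is needed because the folder is outside the default /var/www tree):

<VirtualHost *:80>
    ServerName mysite.local
    DocumentRoot /mnt/c/websites/mysite
    <Directory /mnt/c/websites/mysite>
        Require all granted
    </Directory>
</VirtualHost>

Save it as a .conf file in /etc/apache2/sites-available, enable it with a2ensite, reload Apache, and add a matching line such as 127.0.0.1 mysite.local to C:\Windows\System32\drivers\etc\hosts on the Windows side.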

Next, you probably want PHP to show error messages. To do this, replace the default php.ini with the development version (or tweak it according to your own preferences). At the time of writing, on Ubuntu, the default PHP version is 7.0 and php.ini-development is located at /usr/lib/php/7.0/php.ini-development. So I backed up the ini file in /etc/php/7.0/apache2, replaced it with the development version, and restarted Apache. My PHP form immediately showed me a non-fatal undefined index error, so it worked.
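In shell terms the swap looks something like this (the 7.0 in the paths will vary with the PHP version on your system):

cd /etc/php/7.0/apache2
cp php.ini php.ini.bak
cp /usr/lib/php/7.0/php.ini-development php.ini
/etc/init.d/apache2 restart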

There is one small inconvenience. Apache in WSL will only run during the session. So before starting work, you have to open Ubuntu and type:

sudo apache2ctl start

Background task support is coming to WSL in any case, and I do not regard this as a big problem.

OK, this is cool, we can make changes in the PHP code in our favourite Windows editor, save, and view the results directly in the browser. But what about line-by-line debugging? For this, we are going to use Visual Studio Code with the PHP Debug extension.


Then on the Ubuntu side:

apt-get install php-xdebug

Restart Apache:

apache2ctl restart

Check that phpinfo.php now shows an Xdebug section. Then edit php.ini and add the following:

[XDebug]
xdebug.remote_enable = 1
xdebug.remote_autostart = 1

Restart Apache again and XDebug is ready to go.

Over in Visual Studio Code there is a little more work to do. The problem is that although everything is running on localhost, the location of the files looks different to Linux than to Windows. We can fix this with a pathMappings setting. In Visual Studio Code, open the PHP file you want to debug. Click the Debug icon and then the little gearwheel near top left; this will open launch.json. By default there are a couple of settings for XDebug. These are OK for a default setup, but we need to add path mapping so that the debugger knows where to find the files. For example:
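Here is a rough sketch of the relevant launch.json entry, assuming the site lives at c:\websites\mysite as above and that Xdebug reports paths under /mnt/c; if your setup reports the symlinked path instead, use /var/www/html/mysite as the server-side key:

{
    "name": "Listen for XDebug",
    "type": "php",
    "request": "launch",
    "port": 9000,
    "pathMappings": {
        "/mnt/c/websites/mysite": "c:\\websites\\mysite"
    }
}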


Now you can set a breakpoint, start debugging, and open the page in your browser; execution pauses at the breakpoint in Visual Studio Code.


More guidance on the PHP Debug extension by Felix Becker is here.

Final thoughts

This is cool; but is it better or worse than an old-style VM running Linux and PHP? The WSL solution is lightweight and convenient, but unlike a VM it is not isolated and you may hit issues that are unique to WSL, because not everything runs. I did happen to suffer crashes in Visual Studio and in Outlook while WSL was running; it may well be coincidence, but I cannot help wondering if WSL might be to blame.

Still, a great feature of WSL is that when you exit your session, it goes away, so it is not too intrusive. I plan to use it for PHP debugging and will see how it goes.

Using Strongswan as a VPN client – and a Windows Firewall gotcha

How do you monitor a Windows server over the internet? This one is not in Azure but an actual server, running Hyper-V of course, and the requirement is to monitor both the Hyper-V host and the VMs for things like free memory, disk space and CPU usage.

There is a nice solution called Cacti which does this, using SNMP. You just have to enable SNMP in Windows Server, install Cacti on some other server, and make sure the two can communicate on UDP port 161 (or you can configure another port).

The target server is behind a Linux firewall which has a VPN endpoint, so a good solution is to have a VPN connection between Cacti on-premises and the firewall to enable SNMP traffic over a secure tunnel. This VPN endpoint is already used from Windows with the excellent Shrew Soft VPN client, so it was just a matter of finding a suitable Linux VPN client for the VM on which I installed Cacti.

I had installed Debian Linux on a VM to run Cacti, without any GUI (I mean, who needs a GUI on a server?), so I looked for a suitable command-line VPN client. I soon gathered that the usual choice used to be Racoon but is now strongSwan – note that both are more often used to set up a VPN endpoint on a server than as clients, though they work fine in either role.

I am sure that someone with more experience than myself in Linux VPNs and networking would have had this up and running in no time, but for me it was somewhat arduous. There are two aspects to a VPN connection: creating the secure tunnel, and the networking that routes traffic through it. strongSwan will do most of this on your behalf, but you do need to get the configuration right in /etc/ipsec.conf, and I chased down several false trails before getting it working.

One issue was that I am using XAuth authentication, and despite strongSwan supporting this (by default, I thought), I got the error “no XAuth method found.” What worked for me was to install libstrongswan-extra-plugins and then make sure that xauth-generic.conf is set to load the xauth-generic plugin.
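On a Debian-style install that file normally lives under /etc/strongswan.d/charon/ (the exact path may vary by distribution), and loading the plugin amounts to something like:

xauth-generic {
    load = yes
}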

Next, it was not obvious to me what to put in the strongSwan left and leftsubnet settings. I thought the left subnet should be the subnet of my local network (192.168.255.0/24) but in fact I needed the subnet that was configured for VPN clients, in my case 192.168.40.0/24. Until I figured this out I was getting “no matching CHILD_SA config found” and “HASH N(INVAL_ID)” errors when trying to connect.

I fixed that but it still did not work. After trying various things I hit upon left=%any in ipsec.conf and got a successful connection at last.

I had a tunnel, but traffic did not pass. Now, there are two things I did to get this working. One was to put auto=route in ipsec.conf. The docs say “route loads a connection and installs kernel traps.” Note that the networking configuration is done not by modifying iptables rules, but through xfrm policy; to see the current policy you type:

ip xfrm policy

in the shell. It was still not quite right.

The final step was to change left=%any to left=%defaultroute in ipsec.conf. With this last piece of magic in place, everything works.
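For reference, the working connection ended up along these lines; treat it as a sketch rather than a recipe, since the connection name, gateway address and remote subnet are placeholders and your authentication settings (here a pre-shared key plus XAuth) may differ:

conn cacti-tunnel
    keyexchange=ikev1
    authby=xauthpsk
    xauth=client
    left=%defaultroute
    leftsubnet=192.168.40.0/24
    right=gateway.example.com
    rightsubnet=192.168.1.0/24
    auto=route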

It was not (for me) quick and easy to configure, but the result is excellent. Just type:

ipsec up [connectionname]

and the tunnel comes up almost instantly. Using snmpwalk I can verify that traffic is flowing:
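For example, a walk of the system subtree along these lines (the community string and target address are placeholders) returns its values promptly over the tunnel:

snmpwalk -v 2c -c public 192.168.1.10 system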


That said, now is the time to mention a little gotcha with the Windows Firewall for SNMP. When you install the service, Windows creates a firewall rule that opens the SNMP port (normally UDP 161) for incoming traffic, for both private and public profiles.


Note there is a separate rule for Domain profiles, which is a clue that something is different. That difference is the scope of the rule. By default, the rule for private and public profiles is scoped only to the local subnet, making it in effect disabled.


The idea I guess is to encourage you to restrict traffic to specified IPs if you access the SNMP service from outside the domain, which is good security advice. You can also configure this in the SNMP service properties. But if you are wondering why the service is not responding, this is one thing to check.
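If you want to keep the rule but widen its scope to just your monitoring host, one way is from an elevated PowerShell prompt, assuming the English display group name “SNMP Service” and with 203.0.113.10 standing in for the Cacti server’s address:

Get-NetFirewallRule -DisplayGroup "SNMP Service" | Set-NetFirewallRule -RemoteAddress 203.0.113.10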

Microsoft SQL Server is coming to Linux. What are the implications for Windows Server?

Microsoft is porting SQL Server, its popular database manager, to Linux. According to Executive VP Scott Guthrie:

Today I’m excited to announce our plans to bring SQL Server to Linux as well. This will enable SQL Server to deliver a consistent data platform across Windows Server and Linux, as well as on-premises and cloud. We are bringing the core relational database capabilities to preview today, and are targeting availability in mid-2017.

Why do this? The short answer is that like any other software company, Microsoft wants to sell more licenses, and porting its premier (and excellent) database manager to Linux extends its market and helps it compete more directly with the likes of Oracle and even MySQL.


However that raises a second question, which is why has Microsoft not done this before? After all, SQL Server has been around forever. The first release was in 1989, jointly with Ashton-Tate and Sybase, and was for OS/2. The first Windows release was 1993. There was a significant leap forward in SQL Server 7.0, in 1998, which I think of as the beginning of the product as we know it today.

Microsoft in the nineties and in the first decade of the new millennium was all about Windows. Dominant on the desktop, the idea was to build synergies between Windows desktop and Windows server so that running server applications like Active Directory, Exchange and SQL Server was the obvious choice. The Visual Studio development environment pushed developers towards SQL Server in subtle and not-so-subtle ways. Some programming language innovations like LINQ to SQL (a form of Language Integrated Query) only worked with SQL Server. It was not quite lock-in; it was always possible to use a different database engine, but SQL Server was always the default, used in all the examples and documentation, and the best understood when you needed support.

Today Microsoft’s circle of dominance is breaking down. Windows still has desktop dominance, but the importance of the desktop is less, thanks to mobile devices which mostly do not run Windows, and a move away from desktop applications towards web applications that do not care which operating system you use. Active Directory is still important, but cloud computing giants like Google and Amazon are encroaching on that space.

“Only on Windows Server” has become a liability rather than the key to keeping customers locked to Microsoft’s platform.

You can see this in the company’s development strategy, which is migrating towards a cross-platform implementation of .NET as well as embracing iOS and Android via the recently announced Xamarin acquisition. You can also see it in the Azure cloud platform, and Microsoft’s partnership with Red Hat for Linux on Azure. The company is happy to take your money whatever operating system you choose.

It is early days though, and Microsoft is still a Windows-centric company. SQL Server on Linux, expected sometime next year, will probably not be feature-complete compared to SQL Server on Windows – I am guessing, but things like .NET Stored Procedures may be tricky to get right, as well as features like in-memory databases that are tightly integrated with the operating system.

It is worth noting that cross-platform is actually a burden as well as a strength and may involve compromises. It will be fascinating to see how performance compares on equivalent hardware.

Microsoft is now betting that opening up new markets for SQL Server is more important than keeping customers hooked on Windows Server – especially as that last strategy is failing in the cloud computing era.

Finally, there is the question I posed in the title of this post. How does moving key server applications to Linux impact the appeal of Windows Server? After all, Linux licenses are generally cheaper than Windows Server and in some cases free. The answer is that it is one less reason to buy Windows Server, presuming SQL Server works properly on Linux.

You can see this as a process of commoditizing the operating system so that in time expensive server operating system licenses are a thing of the past. This is probably not a good trend for Microsoft. It can still prosper though if you rent your virtual infrastructure from the company and use its cloud services, like Azure and Office 365.

Another way of looking at this is that there is more pressure on Windows Server architect Jeffrey Snover and his team to make Windows Server better than Linux, so that you want to run it because of its merits, not because it is the only way to run SQL Server or Exchange.

New Delphi and C++ Builder Roadmap promises Linux server support

Embarcadero has published a new roadmap explaining what to expect in forthcoming editions of its RAD Studio suite, including Delphi and C++ Builder.

The company has been acquired by IDERA though the Embarcadero brand is to continue under the new ownership.

The roadmap covers two “development tracks”, though it is not completely clear what that means. One is described as the “Spring development track” which suggests a release in April, 12 months after RAD Studio XE8. However, the post adds that “The team is working the following features that will be included in 2016 releases,” raising the possibility that some features in this track may come later, perhaps in the scheduled summer update.

The Spring track, to be called “Berlin”, seems to be mainly a tidying-up exercise in any case, with features including Bluetooth LE support for Windows 10, DirectX 12 support, native support for Utf8String on all platforms (you mean it does not have this already?) and enhancements to the FireMonkey cross-platform framework.

“Spring” also offers C++ CLANG 3.3 on all platforms.

The second development track “will deliver a Fall release”, to be known as “Tokyo”, following the pattern of recent years where RAD Studio has two major updates every year. The Fall track is more interesting, and includes support for Delphi and C++ Builder on Linux Server, as well as “Linux platform support for console apps with IoT support.” I guess non-GUI Linux is the common thread here.

The IDE will remain on Windows, with cross-compilation for Linux. Initially supported distributions are Ubuntu Server and RedHat Enterprise, though further distributions will be added “as demand dictates”.

It is good to see Linux support back in Delphi. I remember Borland Kylix (2001-2003) well, but this was back in the days when desktop Linux looked like more of a thing.

The feature-list for Tokyo also includes Windows Centennial support. This is potentially big news. Centennial is a Microsoft project to deliver Windows desktop applications through the Windows Store, using application virtualisation based on the existing App-V technology to remove dependency issues. You can expect to hear more about Centennial at Microsoft’s Build conference at the end of March; it was covered at last year’s Build but we have not heard much more about it since.


Embarcadero is also promising a new installer for RAD Studio, based on its GetIt technology, which will reduce installation time and give more flexibility in selecting features. This would be welcome; I never look forward to installing RAD Studio as it tends to be a time-consuming process. It would also be good if it messed less with system environment variables, though I do not know if this is on the cards. The new installer will come in two phases, phase 1 in Berlin and phase 2 in Tokyo.

My own view is that two major releases a year is one too many, so I would prefer it if Embarcadero scrapped Berlin and went straight to Tokyo.

Do you need the new Raspberry Pi B+?

An updated Raspberry Pi board was released earlier this month, and the kind folk at Element 14 sent me one to review.


The Raspberry Pi is a complete low-power computer which needs only a case, an SD card, and a standard USB power source to start doing real work. It is ideal for learning projects, home automation, practical applications like running a media server or client, or anything you can think of.

It is a little over two years since the first Pi was shipped in April 2012. The model progression is a little confusing: the first model was the B, followed in early 2013 by the A, a cut-down model with a single USB port and no Ethernet; the new B+ is a revision of the B.


The new model has the same Broadcom BCM2835 SoC as all the other Pi models. The CPU is a 700 MHz ARM1176JZF-S.

So what is new? The highlights:

  • 4 USB 2.0 ports
  • The dedicated composite video port has been removed and is now shared with the audio jack, requiring an adaptor
  • The power draw is now 600 mA, up to 1.8 A at 5 V, making it both lower power and higher power (when necessary) than the model B (750 mA, up to 1.2 A at 5 V). The USB ports can supply a little more power, making most self-powered external hard drives usable, for example.
  • The SD card slot has been replaced by a micro SD card slot, a good move (all my SD cards are in fact micro SD cards with adaptors, which is common).
  • The GPIO (General Purpose Input Output) connector now has 40 pins rather than 26. The first 26 pins are the same as before, for compatibility.
  • The price is the same as for the B

There are a few other changes which I noticed. One is that the LEDs have been moved. On the B, there are 5 LEDs which are together on the bottom right corner of the board: ACT (SD card access), PWR, FDX (Duplex LAN), LNK (Activity LAN) and 100 (100Mbit LAN connected). The B+ has two LEDs in the opposite corner, ACT and PWR, and two more LEDs on the LAN port itself. Personally I prefer the old arrangement.

The audio output is improved, according to Pi inventor Eben Upton, thanks to a “dedicated low-noise power supply.” Raspberry Pi Engineer jdb adds that “The output impedance and buffering for the audio port has been improved and the maximum output amplitude has been increased (~1.25V pk-pk).” However one blogger measured the output and considered it no better (or slightly worse).

Since the layout of the board has changed, a B+ Pi will not fit in your old model B case. I bought a new case, but I don’t recommend the one I chose.


This is a push-fit case and even though the board is held down by tabs, it moves and rattles slightly. I also worry about the case tabs breaking if you open it repeatedly. The tab that you need to press to open the case is sited by the micro SD slot, and that is another mistake, since it presses against the board making it hard to reopen after the Pi is fitted. There is also too much space below the card slot, so you can easily post your card into the case rather than into the slot if you are careless. Finally, I don’t like the way the top of the case slopes down, reducing the space above the GPIO at its shallowest point.

I wish I had seen this Cyntech case which looks miles better, for a similar low price, though I haven’t actually tried it. I do like the idea of an optional spacer which lets you increase the case height to fit add-on boards.

Finally, a few notes on operation. If you have existing micro SD cards running on the B, they might or might not work on the B+. I use piCorePlayer as a streaming audio client, for which it is excellent, but my existing image would not boot on the B+.  Following a tip elsewhere, I installed the latest piCorePlayer download on the B, updated it to version 1.16A using the web UI, and it then worked on the B+.


I had no such problems with the standard Raspbian distro which worked fine on the B+.


So do you need the B+? If you have not yet tried a Pi, give it a try, it is fabulous. If you already have a B, then you will find some nice improvements but nothing dramatic – though the extra USB ports in particular are most welcome.

More information is on the Element14 site or of course the official site.

What does Xamarin’s success say about open source versus proprietary? Miguel de Icaza says he has never been happier


Yesterday Xamarin, which offers tools for targeting iOS, Android and Mac with C#, announced a partnership with Microsoft, an announcement which I wrote up on The Register. It drew a few comments, several complaining about the cost:

So it cost more then Visual Studio Pro.

And that is for 1 target platform?

or

Not so useful for little indie developers at those prices.

or

From open source to $999 per developer per year. Monetising Mono seems to have worked, so perhaps PCL being open sourced won’t be such a bargain either.

If you check Xamarin’s pricing you will see that the tools are not cheap for casual users; of course, if you are selling thousands of apps or developing corporate apps at normal rates the tools soon pay for themselves.

Xamarin is doing well as far as I am aware; CEO Nat Friedman told me of rapid growth in the number of customers and I have seen for myself the high interest in the tools at events like Microsoft BUILD earlier this year in San Francisco.

This gives me pause for reflection. What does the success of Xamarin, and the relative lack of success of Mono (the open source C# compiler and .NET Framework on which Xamarin is based) say about how well the open source business model works in the real world?

I was reminded of a conversation I had with Miguel de Icaza, creator of Mono and co-founder of Xamarin, and with Friedman, back in February of this year, when Xamarin 2.0 was launched. I asked de Icaza whether the new company publishes the source code for all its products.

“No. Our company does proprietary tools for iOS and Android apps. The entire iOS and Android support is proprietary as well as our commercial Mac support. All those three pieces are proprietary while the IDE and the Mono runtime are open source. Whether the code is open source or not depends on whether it is part of core Mono or core MonoDevelop. Otherwise it tends to end up as proprietary.”

Friedman added: “Mono has a thriving open source community around it, and Xamarin has a thriving community of developers who are building commercial mobile apps. We have 12,000 customers, many of them have never heard of Mono. They came to us because they had a problem to solve, they were C# developers and they wanted to get an iOS or Android app built. We solved that problem and that was worth money to them. The reason we have a business is that Microsoft developers do pay for tools, unlike Web developers for example. It’s been a great market for us. It allows us to invest.”

I asked de Icaza if he gets any grief from the open source community for having proprietary code in his company.

“Actually no. We started doing the proprietary bit at Novell. In fact we’ve been doing proprietary for a long time, even before we were acquired by Novell, at Ximian. We didn’t get a lot of grief from people. I can tell you though that when I was working in the Linux world, they were very stressful days for me, because people constantly complain about a “secret conspiracy” and that thing just went out of control. There are some advocates in the Linux world that don’t like anything that has the label Microsoft on.

“Ever since we did Xamarin which meant we focused on Mac and Windows, all that stress is gone, I don’t think I have ever been happier. In the past I was enduring this constant barrage of senseless attacks, and now I never hear about this.

“One thing that happened in the Linux world is that I was very proud of the four or five big apps that were built with Mono. F-spot that we built, Banshee, and a couple of others. Now with Xamarin I can’t keep track of them any more because they are measured in the thousands. There are thousands of very large apps, over a million lines of code, that people send us. It’s a very different world, it’s just so much larger than all the work we did in Linux days back then.”

Fixing lack of output in AWStats after Debian Linux upgrade

I use AWStats to analyse logs on several web sites that I manage. After a recent upgrade to Debian 7.0 “Wheezy” I was puzzled to find that my web stats were no longer being updated.

I verified that the Cron job which runs the update script was running. I verified that if I ran the same command from the console, it ran correctly. I verified this even using sudo to run with the same permissions as Apache. I also noted that the update button on the stats pages worked correctly. An odd problem.

This is how it rested for a while, and I manually updated the stats. It was annoying though, so I took a closer look.

First, I amended one of the Cron jobs so that its output was redirected to a file. Reading the file after the next failed update, I could see the error message:

Error: LogFile parameter is not defined in config/domain file
Setup file, web server or permissions) may be wrong.

I knew the config file was fine, but checked anyway, and of course the LogFile was specified OK.

It was a clue though. Eventually I came across this bug report by Simone Capra:

Hi all, i’ve found a problem:
When run from another perl program, it finds a config file that doesn’t exist!

I applied the suggested fix in awstats.pl, changing:

if (open( CONFIG, "$SiteConfig" ) ) {

to

if ($SiteConfig=~ /^[\\/]/ && open( CONFIG, "$SiteConfig" ) ) {

Presto, everything is running OK.

Not just a four-horse race: three new mobile operating systems joining the fray

Some have declared the mobile OS battle over, won by Apple and Google Android between them. Microsoft and RIM Blackberry will fight it out for third and fourth place.

Maybe, but I doubt it will be so simple. There are not one, not two, but three further open source mobile operating systems which have significant backing.

Tizen is supported by companies including Intel, Samsung, Orange, Vodafone, Huawei, and NTT Docomo, and managed by the Linux Foundation.


It is based on what used to be MeeGo (which itself came out of Intel Moblin, Nokia Maemo and so on). Tizen is intended to work on smartphones, tablets, and in embedded devices such as TVs and in-vehicle entertainment.

Firefox OS is a new project from Mozilla, whose Firefox browser is under threat from WebKit-based browsers such as Google Chrome.


Mozilla promises that:

Using HTML5 and the new Mozilla-proposed standard APIs, developers everywhere will be able to create amazing experiences and apps. Developers will no longer need to learn and develop against platform-specific native APIs.

Ubuntu also offers a mobile OS, along with an interesting add-on that lets you run the Ubuntu desktop from a smartphone when docked (this can also be added to Android smartphones).


All will be interesting to watch. Tizen is particularly interesting. Samsung is the largest Android vendor and the largest smartphone vendor. While this is currently a win for Android, it is possible that Samsung may want to steer its customers towards a non-Google operating system in future.

Equally, logic says that the open source world would be better getting behind a single Android alternative, rather than three.

Valve announces Steam-powered apps beyond games as well as embracing Linux

Steam maker Valve has announced that it is expanding beyond games, to sell software titles that “range from creativity to productivity”.


The Steam software is more than just a store. The platform handles updates, digital rights management, and supports multiplayer gaming. It also forms a chat network. The Steam overlay lets users access Steam features while playing a full-screen game.

Users can install a Steam title on multiple computers, but can only play while logged in, and can only be logged in on one device at a time.

Steam launched first on Windows, but also has clients for the Mac and, via Wine compatibility, for Linux. There are also mobile clients for Android and iOS, and some support for PlayStation 3, though these have limited features. The mobile clients do not let you buy and run games for the mobile device itself.

With Apple, Google and now Microsoft backing their own app stores for their respective platforms, Valve has some tricky manoeuvring ahead if it is to avoid being squeezed out. Valve founder Gabe Newell made headlines recently by calling Windows 8 a “catastrophe”, though he is hardly a disinterested party. Note that he should not worry too much about Windows 8 in the short term, since Microsoft’s store does not support desktop titles other than by links to third-party sites, including Steam. However the general trend towards locked-down platforms with software installed only through an official store must be a concern to Newell.

Valve is turning towards Linux as a possible solution. It is talking at the Siggraph conference this week in Los Angeles about its work on OpenGL and Linux, and it seems that a native Linux Steam client is in prospect.

Could Windows gamers, or others disillusioned with Windows 8, turn to Linux in significant numbers as an alternative? While this is possible, it seems more likely that the Mac would benefit. You would also imagine that skilled gamers will be smart enough to operate the Windows 8 Start screen and discover that most of their stuff still runs fine on the new desktop.

The Steam platform is a strong one though, and with Microsoft not supporting desktop apps through its own Store, Valve has a good opportunity to extend its reach.

According to its own stats, Steam has peaked at over 4 million concurrent users this month.
