Amazon offering Linux desktops as a service in WorkSpaces

Amazon Web Services now offers Linux desktops as part of its WorkSpaces desktop-as-a-service offering.

The distribution is called Amazon Linux 2 and includes the MATE desktop environment.

image

Most virtual desktops run Windows, because most of the applications people want to run from virtual desktops are Windows applications. A virtual desktop plugs the gap between what you can do on the device you have in front of you (whether a laptop, Chromebook, iPad or whatever) and what you can do in an office with a desktop PC.

It seems that Amazon have developers in mind to some extent. Evangelist Jeff Barr (from whom I have borrowed the screenshot above) notes:

The combination of Amazon Linux WorkSpaces and Amazon Linux 2 makes for a great development environment. You get all of the AWS SDKs and tools, plus developer favorites such as gcc, Mono, and Java. You can build and test applications in your Amazon Linux WorkSpace and then deploy them to Amazon Linux 2 running on-premises or in the cloud.

Still, any user can happily run productivity applications on it; it works out a bit cheaper than Windows thanks to the absence of Microsoft licensing costs. Ideal for frustrated Google Chromebook users who want access to a less locked-down OS.

Notes from the field: Windows Time Service interrupts email delivery

A business with Exchange Server noticed that email was not flowing. The internet connection was fine, and all the servers were up and running, including Exchange 2016. Email had been flowing just a few hours earlier. What was wrong?

The answer, or the beginning of the answer, was in the Event Viewer on the Exchange Server. Event ID 1035, only a warning:

Inbound authentication failed with error UnexpectedExchangeAuthBlobCheckForClockSkew for Receive connector Default Mailbox Delivery

Hmm. A clock problem, right? It turned out that the PDC for the domain was five minutes fast. This is enough to trigger Kerberos authentication failures. Result: no email. We fixed the time, restarted Exchange, and everything worked.
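The five-minute figure is no coincidence: Kerberos's default "maximum tolerance for computer clock synchronization" in Windows domain policy is exactly five minutes, so a PDC drifting past that mark takes authentication down with it. As a rough sketch of the rule (the threshold is a configurable Group Policy default, so treat the constant as an assumption rather than a guarantee):

```python
from datetime import timedelta

# Kerberos's default "maximum tolerance for computer clock
# synchronization" in Windows domain policy is five minutes.
KERBEROS_MAX_SKEW = timedelta(minutes=5)

def auth_would_fail(offset: timedelta) -> bool:
    """Return True if the clock offset between two hosts exceeds the
    Kerberos tolerance, so ticket validation would be rejected."""
    return abs(offset) > KERBEROS_MAX_SKEW

print(auth_would_fail(timedelta(minutes=6)))   # True: tickets rejected, email stops
print(auth_would_fail(timedelta(minutes=2)))   # False: within tolerance
```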

Why was the PDC running fast? The PDC was configured to get time from an external source, apparently, and all other servers to get their time from the PDC. Foolproof?

Not so. If you typed:

w32tm /query /status

at a command prompt on the PDC (not the Exchange Server, note), it reported:

Source: Free-running System Clock

Oops. Despite efforts to do the right thing in the registry, setting the Type key in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters to NTP and entering a suitable list of time servers in the NtpServer key, it was actually getting its time from the server clock. This being a Hyper-V VM, that meant the clock on the host server, which – no surprise – was five minutes fast.
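That Source line is the telltale, and if you are checking several servers a small script can flag it for you. Here is a sketch in Python; the sample output fragments are illustrative rather than captured from a real server, and the list of "bad" source names is my assumption based on what w32tm typically reports:

```python
def time_source_ok(w32tm_status: str) -> bool:
    """Inspect the output of `w32tm /query /status` and return False
    when the machine is falling back to its own clock (or the
    hypervisor's) instead of a real NTP source."""
    for line in w32tm_status.splitlines():
        if line.strip().lower().startswith("source:"):
            source = line.split(":", 1)[1].strip()
            # These source names mean no external time server is in use.
            return source not in ("Free-running System Clock",
                                  "Local CMOS Clock",
                                  "VM IC Time Synchronization Provider")
    return False  # no Source line at all: treat as misconfigured

# Illustrative fragments, not captured from a real server:
bad = "Leap Indicator: 3\nSource: Free-running System Clock\nPoll Interval: 10"
good = "Leap Indicator: 0\nSource: 0.pool.ntp.org,0x8\nPoll Interval: 10"
print(time_source_ok(bad))   # False
print(time_source_ok(good))  # True
```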

You can check for this error by typing:

w32tm /resync

at the command prompt. If it says:

The computer did not resync because no time data was available.

then something is wrong with the configuration. If it succeeds, check the status as above and verify that it is querying an internet time server. If it is not querying a time server, run a command like this:

w32tm /config /update /manualpeerlist:"0.pool.ntp.org,0x8 1.pool.ntp.org,0x8 2.pool.ntp.org,0x8 3.pool.ntp.org,0x8" /syncfromflags:MANUAL

until you have it right.

Note this is ONLY for the server with the PDC Emulator FSMO role. Other servers should be configured to get time from the PDC.

Time server problems seem to be common on Windows networks, despite the existence of lots of documentation. There are also various opinions on the best way to configure Hyper-V, which has its own time synchronization service. There is a piece by Eric Siron here on the subject, and I reckon his approach is a safe one (Hyper-V Synchronization Service OFF for the PDC Emulator, ON for every other VM).

I love his closing remarks:

The Windows Time service has a track record of occasionally displaying erratic behavior. It is possible that some of my findings are not entirely accurate. It is also possible that my findings are 100% accurate but that not everyone will be able to duplicate them with 100% precision. If working with any time sensitive servers or applications, always take the time to verify that everything is working as expected.

Inside Azure Cosmos DB: Microsoft’s preferred database manager for its own high-scale applications

At Microsoft’s Build event in May this year I interviewed Dharma Shukla, Technical Fellow for the Azure Data group, about Cosmos DB. I enjoyed the interview but have not made use of the material until now, so even though Build was some time back I wanted to share some of his remarks.

Cosmos DB is Microsoft’s cloud-hosted NoSQL database. It began life as DocumentDB, and was re-launched as Cosmos DB at Build 2017. There are several things I did not appreciate at the time. One was how much use Microsoft itself makes of Cosmos DB, including for Azure Active Directory, the identity provider behind Office 365. Another was how low Cosmos DB sits in the overall Azure cloud system. It is a foundational piece, as Shukla explains below.

image

There were several Cosmos DB announcements at Build. What’s new?

“Multi-master is one of the capabilities that we announced yesterday. It allows developers to scale writes all around the world. Until yesterday Cosmos DB allowed you to scale writes in a single region but reads all around the world. Now we allow developers to scale reads and writes homogeneously all round the world. This is a huge deal for apps like IoT, connected cars, sensors, wearables. The amount of writes are far more than the amount of reads.

“The second thing is that now you get single-digit millisecond write latencies at the 99th percentile, not just in one region.

“And the third piece is what falls out of this: high availability. The window of failover, the time it takes to fail over from one region to the other when a disaster happens, has shrunk significantly.

“It’s the only system I know of that has married the high consistency models that we have exposed with multi-master capability as well. It had to reach a certain level of maturity, testing it with first-party Microsoft applications at scale and then with a select set of external customers. That’s why it took us a long time.

“We also announced the ability to have your Cosmos DB database in your own VNet (virtual network). It’s a huge deal for enterprises where they want to make sure that no data leaks out of that VNet. To do it for a globally distributed database is especially hard because you have to close all the transitive networking dependencies.”

image
Technical Fellow Dharma Shukla

Does Cosmos DB work on Azure Stack?

“We are in the process of going to Azure Stack. Azure Stack is one of the top customer asks. A lot of customers want a hybrid Cosmos DB on Azure Stack as well as in Azure and then have active-active. One of the design considerations for multi-master is for edge devices. Right now Azure has about 50 regions. Azure’s going to expand to let’s say 200 regions. So a customer’s single Cosmos DB table spanning all these regions is one level of scalability. But the architecture is such that if you directly attach lots of Azure Stack devices, or you have sensors and edge devices, they can also pretend to be replicas. They can also pretend to be an Azure region. So you can attach billions of endpoints to your table. Some of those endpoints could be Azure regions, some of them could be instances of Azure Stack, or IoT hub, or edge devices. This kind of scalability is core to the system.”

Have customers asked for any additional APIs into Cosmos DB?

“There is a list of APIs: HBase, richer SQL, there are a number of such API requests. The good news is that the system has been built in a way that adding new APIs is relatively easy. So depending on the demand we continue to add APIs.”

Can you tell me anything about how you’ve implemented Cosmos DB? I know you use Service Fabric. Do you use other Azure services?

“We have dedicated clusters of compute machines. Cosmos DB is a Ring 0 service, so any time Azure opens a new region, Cosmos DB clusters are provisioned by default. Just like compute and storage, Cosmos DB is one of the Ring 0 services, which is the bottommost layer. Azure Active Directory, for example, depends on Cosmos DB, so Cosmos DB cannot take a dependency on Active Directory.

“The dependency that we have is our own clusters and machines, on which we put Service Fabric. For deployment of Cosmos DB code itself, we use Service Fabric. For some of the load balancing aspects we use Service Fabric. The partition management, global distribution, replication, is our own. So Cosmos DB is layered on top of Service Fabric, it is a Service Fabric application. But then it takes over. Once the Cosmos DB bits are laid out on the machine then its replication and partition management and distribution pieces take over. So that is the layering.

“Other than that there is no dependency on Azure. And that is why one of the salient aspects of this is that you can take the system and host it easily in places like Azure Stack. The dependencies are very small.

“We don’t use Azure Storage because of that dependency. So we store the data locally and then replicate it. And all of that data is also encrypted at rest.”

So when you say it is not currently in Azure Stack, it’s there underneath, but you haven’t surfaced it?

“It is in a defunct mode. We have to do a lot of work to light it up. When we light it up on on-prem or private cloud devices, we want to enable this active-active pathway. So you are replicating your data and that is getting synchronized with the cloud, and Azure Stack is one of the sockets.”

Microsoft itself is using Cosmos DB. How far back does this go? Azure AD is quite old now. Was it always on Cosmos DB / DocumentDB?

“Over the years Office 365, Xbox, Skype, Bing, and more and more of Azure services, have started moving. Now it has almost become ubiquitous. Because it’s at the bottom of the stack, taking a dependency on it is very easy.

“Azure Active Directory consists of a set of microservices. So they progressively have moved to Cosmos DB. Same situation with Dynamics, and our slew of such applications. Skype is by and large on Cosmos DB now. There are still some fragments of the past. Xbox and the Microsoft Store and others are running on it.”

Do you think your customers are good at making the right choices over which database technology to use? I do pick up some uncertainty about this.

“We are working on making sure that we provide that clarity. Postgres and MySQL and MariaDB and SQL Server, Azure SQL and elastic pools, managed instances, there is a whole slew of relational offerings. Then we have Cosmos DB and then lots of analytical offerings as well.

“If you are a relational app, and if you are using a relational database, and you are migrating from on-prem to Azure, then we recommend the relational family. It comes with this fundamental scale caveat, which is that it goes up to 4TB. Most of those customers are settled because they have designed the app around those sorts of scalability limitations.

“A subset of those customers, and a whole bunch of brand new customers, are willing to re-write the app. They know that they want to come to cloud for scale. So then we pitch Cosmos DB.

“Then there are customers who want to do massive scale offline analytical processing. So there is, Databricks, Spark, HD Insight, and that set of services.

“We realise there are grey lines between these offerings. We’re tightening up the guidance, it’s valid feedback.”

Any numbers to flesh out the idea that this is a fast-growing service for Microsoft?

“I can tell you that the number of new clusters we provision every week is far more than the total number of clusters we had in the first month. The growth is staggering.”

TalkTalk’s new Sagemcom FAST 5364 Router and WiFi Hub

TalkTalk has a new router available to its 4 million broadband customers in the UK. The router is made by Sagemcom and called the FAST 5364. The company will sell you one for £120 here but it comes for free if you get the Faster Fibre Broadband package; or for £30 with the Fast Broadband package.

TalkTalk’s previous router was the Huawei HG633 or for some luckier customers the HG635, or perhaps a DLINK DSL3782. The HG633 is a poor product with slow WiFi performance and 100 Mbps Ethernet ports. The FAST 5364 looks like an effort to put things right. It is not worth £120 (you can get a better 3rd-party router for that money) but it is well worth £30 as an upgrade.

The router comes in a smart box with a big emphasis on the step-by-step guide to getting started.

image

The router itself has a perforated plastic case with a flip-out stand. On the back are four Gigabit Ethernet ports, a WAN port, a VDSL/ADSL Broadband port, a WPS button and an on-off switch. There is also a recessed Reset button.

image

A handy feature is that the WiFi details are on a removable panel. The router admin password is on the back label but not on the removable panel – better for security.

image

Getting started

Presuming you are a TalkTalk customer, it should just be a matter of connecting the cables and turning on. In my case it took a little longer as I am not a TalkTalk consumer customer. I connected up, then logged into the admin at http://192.168.1.1 to enter my username and password for the internet connection, following which I was online. An LED on the front turns from amber to white to confirm.

There is an oddity though. The FAST 5364 has a red Ethernet port marked WAN. This should be suitable for connecting to a cable modem or any internet connection via Ethernet. However when I tried to use this it did not work, but kept on trying to connect via ADSL/VDSL. Either this is deliberately disabled, or this is a firmware bug.

Performance and specification

The good news is that performance on the FAST 5364 is good. Here is the spec:

Antennas: 4×4 5GHz and 3×3 2.4GHz

WiFi: 2.4GHz Wi-Fi (802.11 b/g/n) and MU-MIMO 5GHz Wi-Fi (802.11 a/n/ac)

Broadband: ADSL2+ & VDSL2

A point of interest here is that the WiFi supports a technology called Beamforming. This uses an array of antennas to optimise the signal. It is called Beamforming because it shapes the beam according to the location of the client.

In addition, MU-MIMO (Multi-User, Multiple-Input, Multiple-Output) means that multiple WiFi streams are available, so multiple users can each have a dedicated stream. This means better performance when you have many users. TalkTalk claims up to 50 devices can connect with high quality.

Features

The FAST 5364 is managed through a web browser. Like many devices, it has a simplified dashboard along with “Advanced settings”.

From the simple dashboard, you can view status, change WiFi network name and password, and not much else.

If you click Manage my devices and then Manage advanced settings, you get to another dashboard.

Then you can click Access Control, where you get to manage the firewall, and set the admin password for the router.

Or you can click TalkTalk WI-Fi Hub, where you get more detailed status information, and can manage DHCP, Light control (literally whether the LED lights up or not), DNS (this sets the DNS server which connected clients use), DynDNS (which supports several dynamic DNS providers, not just DynDNS), Route for adding static routes, and Maintenance for firmware updates, logs, and setting an NTP server (so your router knows the time and date).

image

Or you can click Internet Connectivity so you can set a DNS server to be used on the WAN side as well as username, password and other settings if you cannot connect automatically.

Firewall and port forwarding

The firewall in your router is critically important for security. Further, users often want to configure port forwarding to enable multi-user online gaming or other services to work.

Dealing with this can be fiddly so most modern routers support a feature called UPnP which lets devices on your network request port forwarding automatically.

Personally I dislike UPnP because it is a security risk if an insecure device is present on your network (cheap security cameras are a notorious example). I like to control which ports are forwarded manually. That said, UPnP is better in some ways since it allows the same port to be forwarded to different devices depending on what is in use. It is a trade-off. Ideally you should be able to specify which devices are allowed to use UPnP but that level of control is not available here. Instead, you can turn UPnP on or off.
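If you want to verify that UPnP really is off, rather than trusting the settings page, you can send an SSDP discovery probe from any machine on the LAN and see whether the router answers. Here is a sketch using only Python's standard library; it assumes the router implements the usual InternetGatewayDevice profile, and the function and constant names are mine, not part of any API:

```python
import socket

# SSDP multicast address and port used for UPnP discovery.
SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_msearch(search_target: str = "urn:schemas-upnp-org:device:InternetGatewayDevice:1",
                  mx: int = 2) -> bytes:
    """Build an SSDP M-SEARCH request asking any Internet Gateway
    Device (i.e. a UPnP-capable router) to announce itself."""
    return ("M-SEARCH * HTTP/1.1\r\n"
            f"HOST: {SSDP_ADDR}:{SSDP_PORT}\r\n"
            'MAN: "ssdp:discover"\r\n'
            f"MX: {mx}\r\n"
            f"ST: {search_target}\r\n"
            "\r\n").encode("ascii")

def upnp_enabled(timeout: float = 2.0) -> bool:
    """Return True if a device on the LAN answers the discovery probe,
    i.e. UPnP has not been switched off on the router."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(build_msearch(), (SSDP_ADDR, SSDP_PORT))
        try:
            data, _addr = sock.recvfrom(65507)
            return data.startswith(b"HTTP/1.1 200 OK")
        except socket.timeout:
            return False
```

If `upnp_enabled()` still returns True after you have turned UPnP off in the router's admin pages, the setting is not doing what it claims.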

image

On the Port Forwarding screen, you can add rules manually, or select Games and Applications, which automatically sets the rules for the selected item if you specify its IP address on the network.

image

You can get to this same screen via Connected Devices, in which case the IP address of the selected device is pre-populated.

The Firewall management gives you four levels:

Low: Allow all traffic both LAN->WAN and WAN->LAN. Not recommended, but not quite as bad as it sounds since NAT will give you some protection.

Medium: Allow all traffic LAN->WAN. Block NETBIOS traffic WAN->LAN. This is the default. More relaxed than I would like, presuming it means that all other traffic WAN->LAN is allowed, which is the obvious interpretation.

image

High: Allow common protocols LAN->WAN. Block all traffic WAN->LAN. A good secure setting but could be annoying since you will not be able to connect to non-standard ports and will probably find some web sites or applications not working as they should.

image

Custom: This seems to be the High setting but shown as custom rules, with the ability to add new rules. Thus with some effort you could set a rule to allow all traffic LAN->WAN, and block all traffic WAN->LAN except where you add a custom rule. To my mind this should be the default.

Most home users will never find this screen so it seems that TalkTalk is opening up its customers to a rather insecure setup by default, especially if there are bugs discovered in the router firmware.

I am asking TalkTalk about this and will let you know the response.

Missing features

The most obvious missing feature, compared to previous TalkTalk routers, is the lack of any USB port to attach local storage. This can be useful for media sharing. It is no great loss though, as you would be better off getting a proper NAS device and attaching it to one of the wired Ethernet ports.

Next, there is no provision for VPN connections. Of course you can set up a VPN endpoint on another device and configure the firewall to allow the traffic.

I cannot see a specific option to set a DHCP reservation, though I suspect this happens automatically. This is important when publishing services or even games, as the internal IP must not change.

There is no option to set a guest WiFi network, with access to the internet but not the local network.

Overall I would describe the router and firewall features as basic but adequate.

TalkTalk vs third party routers

Should you use a TalkTalk-supplied router, or get your own? There are really only a couple of reasons to use the TalkTalk one. First, it comes free or at a low price with your broadband bundle. Second, if you need support, the TalkTalk router is both understood and manageable by TalkTalk staff. Yes, TalkTalk can access your router, via the TR-069 protocol designed for this purpose (and which you cannot disable, as far as I can tell). If you want an easy life with as much as possible automatically configured, it makes sense to use a TalkTalk router.

That said, if you get a third-party router you can make sure it has all the features you need and configure it exactly as you want. These routers will not be accessible by TalkTalk staff. I would recommend this approach if you have anything beyond basic connectivity needs, and if you want the most secure setup. Keep a TalkTalk router handy in case you need to connect it for the sake of a support incident.

Final remarks

TalkTalk users are saying that the new router performs much better than the old ones (though this is not a high bar). For example:

“this is a very very good router with strong stable wifi. It is a massive upgrade to any of the routers supplied currently and its not just the wifi that is better. I get 16 meg upload now was 14 before”

That sounds good, and really this is a much better device than the previous TalkTalk offerings.

My main quibble is over the questionable default firewall settings. The browser UI is not great but may well improve over time. Inability to use the WAN port with a cable modem is annoying, and it would be good to see a more comprehensive range of features, though given that most users just want to plug in and go, a wide range of features is not the most important thing.

Ian Hunter talks to BBC Radio 1’s Johnnie Walker

Ian Hunter is in the UK for a Mott the Hoople reunion gig and did an interview with long-time BBC DJ Johnnie Walker, on the nostalgia show Sounds of the Seventies. If you are in the UK you can listen to it here for a limited time. The show is two hours long but the actual interview only around fifteen minutes (excluding the music).

image

Hunter does a few interviews and I find them somewhat frustrating in general, because he always tends to get asked the same questions, and especially about the time when David Bowie gave Mott the Hoople a song (All the Young Dudes) to revive their career. Hunter is always patient but I wish he would be quizzed more often about the rest of his long career. Still, he is promoting a Mott the Hoople reunion so I guess it was not inappropriate on this occasion.

The Ian Hunter section opened with Wizzard’s See My Baby Jive, a hit single in May 1973 and chosen by Hunter. Why? “It was at a time when there wasn’t too much good stuff about,” he said. “I was getting disenchanted when all of a sudden that came out, it was brilliant, absolutely brilliant.” You can certainly hear the influence in songs like The Golden Age of Rock ‘n Roll on Mott the Hoople’s 1974 album.

Walker asked about Hunter’s early years, when he won a talent competition in Butlin’s holiday camp, which kicked off a spell in a band called the Apex Group in the fifties. Then Hunter mentions performing in Hamburg with Freddie Lee, who told him he might have a future as a songwriter but “don’t ever sing ‘em”. Ha ha.

Then Bob Dylan came along, says Walker. “Bob was like the character singer,” said Hunter, “if it hadn’t been for him a lot of people like myself would never have got a shot. It was like a personality way of singing.”

We move on to the beginnings of Mott the Hoople and how Guy Stevens chose Ian Hunter as the singer of a band he was signing to Island Records, in place of Stan Tippins who became tour manager. “Guy was amazing. He was frustrated because he couldn’t do it himself, but he had the taste.”

Skipping a few years, we move on to Bowie and how Mott turned down Suffragette City, then went to hear All the Young Dudes. “David sat on the floor and he played All the Young Dudes on acoustic guitar”. Why did Bowie give away such a great song? Apparently he had been tinkering with it and it was not quite working. “He kinda got fed up with it,” said Hunter, “it needed new blood.”

A chat about the new reunion with Ariel Bender and Morgan Fisher follows. “We got together twice before but it was the original band. This is the second part of the band and they never got a shot to play on those two reunions. I always felt it was a shame, so now they get their moment in the sun,” said Hunter.

On Face Unlock

Face unlock is a common feature on premium (and even mid-range) devices today. Notable examples are Apple with the iPhone X, Microsoft with Windows Hello (when fully implemented with a depth-sensing camera like Intel RealSense), and Android phones including the Samsung Galaxy S9, OnePlus 6, Huawei P20, Honor View 10 and Honor 10 AI.

I’ve been trying the Honor 10 AI and naturally enabled Face Unlock, passing warnings that it was less secure than a PIN or password. Why less secure? It is not stated, but a typical issue is being able to log in with a picture of the normal user (this would not work with Windows Hello).

Security is an issue, but I was also interested in how desirable this is as a feature. So far I am not convinced. Technically it works reasonably well. It is not 100% effective, especially in either bright sunlight or dim light, but most of the time it successfully unlocks the Honor phone. It is all the more impressive because I sometimes wear glasses, and it works whether or not I am wearing them.

image

I enjoyed face unlock at first, since it removes a bit of friction in day to day use. Then I came across annoyances. Sometimes the face recognition takes longer than a PIN, if the lighting conditions are not optimal, and occasionally it fails. It has introduced a touch of uncertainty to the unlock process, whereas the PIN is fully reliable and controllable. I tried the optional “wake on pick up” feature and again had a mixed experience; sometimes the phone would light up and unlock when I did not need it.

Conclusion? It is something I can easily live without so a low priority when choosing a new phone. Whereas fingerprint unlock, now that the technology has matured to the point of high reliability, is something I still enjoy.

Manage your privacy online through cookie settings? You must be joking.

Since the deadline passed for the enforcement of the EU’s GDPR (General Data Protection Regulation), most major web sites have revamped their privacy settings with new privacy policies and more options for controlling how your personal data is used. Unfortunately, the options offered are in many cases too obscure, too complex and too time-consuming to be of any practical value.

Recital 32 of the GDPR says:

Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement to the processing of personal data … this could include ticking a box when visiting an internet website … silence, pre-ticked boxes or inactivity should not indicate consent.

I am sure the controls on offer via major web properties are the outcome of legal advice; at the same time, as a non-legal person I struggle on occasion to see how they meet the requirements or the spirit of the legislation. For example, another part of Recital 32 says:

… the request must be clear, concise, and not unnecessarily disruptive to the use of the service for which it is provided.

This post describes what I get if I go to technology news site zdnet.com and it detects that I have not agreed to its cookie management.

Note: before I continue, let me emphasize that there is lots of great content on zdnet, some written by people I know; the site as far as I know is doing its best to make business sense of providing such content, in what has become a hostile environment for professional journalism. I would like to see fundamental change in this environment but that is just wishful thinking.

That said, this is one of the worst experiences I have found for privacy-seeking users. Here is the initial banner:

image

Naturally I click Manage Settings.

Now I get a scrolling dialog from CBS Interactive, with a scroll gadget that indicates that this is a loooong document:

image

There is also some puzzling news. There are a bunch of third-parties whose cookies are apparently necessary for “our sites, products and services to function correctly.” These include cookies for analytics and also for Google ad-serving. I am not clear why these third-parties perform functions which are necessary to read a technical news site, but there we are.

I scroll down and reach a button that lets me opt out of being tracked by the third party advertisers using zdnet.com, or so it seems:

image

I want to opt out, so I click. Some of the options below are unchecked, but not many. Most of the options say “Opt out through company”.

It also seems pretty technical to me. Am I meant to understand what a “Demand Side Platform” is?

image

I counted the number of links that say “opt out through company”. There are 63 of them.

I click the first one, called Adform. Naturally, the first thing I see is a request to agree (or at least click OK to) their Cookie Policy.

image

I click to read the policy (remember this is only the first of 63 sites I have to visit). I am not offered any sort of settings, but invited to visit youronlinechoices or aboutads.info.

image

Well, I don’t want anything to do with Adform and don’t intend to return to the site. Maybe I can ignore the Adform Cookie Policy and just focus on the opt-out button above it.

image

Currently I am “Opted-in”. This is a lie, I have never opted in. Rather, I have failed to opt out, until I click the button. Opting out will in fact set a cookie, so that Adform knows I have opted out. I am also reminded that this opt out only applies to this particular browser on this particular device. On all other browsers and/or devices, I will still be “opted in”.

OK, one down, 62 to go. However scrolling further down the list I get some bad news:

image

In some cases, it seems, “this partner does not provide a cookie opt-out”. The best I can do is to “visit their privacy policy for more information”. This will require a search, since the link is not clickable.

How to control your privacy

What should you do if you do not want to be tracked? Attempting to follow the industry-provided opt-outs is just hopeless. It is mostly PR and attempting to tick legal boxes.

If you do not want to be tracked, use a VPN, use ad blockers, and delete all cookies at the end of each browsing session. This will be tedious though, since your browsing experience will be one of constant “I agree” dialogs, some of which you may be able to ignore, while for others you have to click I Agree or endure a myriad of semi-functional links and settings.

Maybe the EU GDPR legislation is unreasonable. Maybe we have been backed into this corner by allowing the internet to be dominated by a few giant companies. All we can state for sure is that the current situation is hopelessly broken, from a privacy and usability perspective.

Is Ron Jeffries right about the shortcomings of Agile?

A post from InfoQ alerted me to this post by Agile Manifesto signatory Ron Jeffries with the rather extreme title “Developers should abandon Agile”.

If you read the post, you discover that what Jeffries really objects to is the assimilation of Agile methodology into the old order of enterprise software development, complete with expensive consultancy, expensive software that claims to manage Agile for you, and the usual top-down management.

All this goes to show that it is possible to do Agile badly; or more precisely, to adopt something that you call Agile but in reality is not. Jeffries concludes:

Other than perhaps a self-chosen orientation to the ideas of Extreme Programming — as an idea space rather than a method — I really am coming to think that software developers of all stripes should have no adherence to any “Agile” method of any kind. As those methods manifest on the ground, they are far too commonly the enemy of good software development rather than its friend.

However, the values and principles of the Manifesto for Agile Software Development still offer the best way I know to build software, and based on my long and varied experience, I’d follow those values and principles no matter what method the larger organization used.

I enjoyed a discussion on the subject of Agile with some of the editors and writers at InfoQ during the last London QCon event. Why is it, I asked, that Agile is no longer at the forefront of QCon, when a few years back it was at the heart of these events?

The answer, broadly, was that the key concepts behind Agile are now taken for granted so that there are more interesting things to discuss.

While this makes sense, it is also true (as Jeffries observes) that large organizations will tend to absorb these ideas in name only, and continue with dark methods if that is in their culture.

The core ideas in Extreme Programming are (it seems to me) sound. Working in small chunks, forming a team that includes the customer, releasing frequently and delivering tangible benefits, automated tests and continuous refactoring, planning future releases as you go rather than in one all-encompassing plan at the beginning of a project: these are fantastic principles, and revolutionary when you first come across them. See here for Jeffries’ account of what Extreme Programming is.
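Of those practices, automated testing is the easiest to make concrete. A minimal sketch in Python (the function and test names are invented for illustration) of the test-first rhythm: write a small test, make it pass, then refactor freely because the test guards the behaviour:

```python
import unittest

def total_price(items):
    """Return the total cost of an order.

    items is a list of (unit_price, quantity) pairs. A first version might
    use an explicit loop; this refactored one-liner keeps the same behaviour,
    which the tests below verify.
    """
    return sum(price * qty for price, qty in items)

class TotalPriceTest(unittest.TestCase):
    # In XP these tests are written before (or alongside) the code they
    # exercise, so later refactoring can proceed with confidence.
    def test_empty_order_costs_nothing(self):
        self.assertEqual(total_price([]), 0)

    def test_totals_price_times_quantity(self):
        self.assertEqual(total_price([(2.50, 2), (1.00, 3)]), 8.00)

if __name__ == "__main__":
    unittest.main()
```

The point is not the trivial arithmetic but the loop: each small release ships with tests that make the next round of refactoring safe.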

These ideas have everything to do with how the team works and little to do with specific tools (though it is obvious that things like a test framework, DevOps strategy and so on are needed).

Equally, you can have all the best tools but if the team is not functioning as envisaged, the methodology will fail. This is why software development methodology and the psychology of human relationships are intimately linked.

Real change is hard, and it is easy to slip back into bad practices, which is why we need to rediscover Agile, or something like it, repeatedly. Maybe the Agile word itself is not so helpful now; but the ideas are as strong as ever.

Microsoft announces Visual Studio 2019, but pleasing developers is a tough challenge

Microsoft’s John Montgomery has announced Visual Studio 2019, in a post which is short on any details of what might be in the product, other than to continue evolving features that we already know about, such as Live Share, AI-powered IntelliCode, more refactorings and so on.

The acquisition of GitHub is bound to impact both Visual Studio and Visual Studio Team Services, but Montgomery does not talk about this.

Note there is already a Visual Studio roadmap which gives some clues about what is coming. A common theme is integration with Azure services such as Azure Key Vault (for app secrets), Azure Functions, and Azure Container Service (Kubernetes).

It is more illuminating to read the comments on Montgomery’s post. Montgomery says that Visual Studio 2017 is “our most popular Visual Studio release ever,” which I presume is a count of how many times it has been downloaded or installed. It is not the most reliable, though; one comment says “2017 has been buggier than all of the bugs 2015 and 2013 had combined.” I imagine every Visual Studio developer, myself included, has to exit and reload the IDE from time to time to fix odd behaviour. Other comments include:

– Reporting components have to be added per project rather than being integrated into the toolbox

– SQL Server Data Tools (SSDT) lagged behind the 2017 release and still have issues

– the XAML designer has performance and behaviour issues and the new XAML designer in preview is missing many features

In general, Microsoft struggles to keep Visual Studio up to date with its constantly-changing developer platform while also working well with the older technologies that are still widely used. The transition from .NET Framework to .NET Core is a tricky issue for the team to solve.

User Benjamin Callister says this:

I have been developing professionally with VS for 20 years now. honestly, the experience seems to get worse with each new release. the amount of time wasted in my day working with XAML alone makes me more than frustrated. The feeling is mutual among my peers as well – and it has been for years now. VS Code is such a fresh breath of air because of its speed. VS full has become so bloated, working with UWP/XAML so slow, and build times so slow. Also, imo profiling tools should be turned OFF by default, with a simple button to toggle them back on when needed. As a developer, I don’t want them on all the time – rather, just when I want to profile.

The mention of Visual Studio Code is an interesting one. Code is cross-platform, has a growing number of extensions, and will be an increasingly popular choice for developers who can live without the vast range of features in Visual Studio.

Asus Project Precog dual-screen laptop: innovation in PC hardware, but missing the keyboard may be too high a price

Asus has announced Project Precog at Computex in Taiwan. This is a dual-screen laptop with a 360° hinge and no keyboard.

image

The name suggests a focus on AI, but how much AI is actually baked into this device? Not that much. It features “Intelligent Touch”, which will change the virtual interface automatically, adjusting the keyboard location or switching to stylus mode. It includes Cortana and Amazon Alexa for voice control. And the press release remarks optimistically that “The dual-screen design of Project Precog lets users keep their main tasks in full view while virtual assistants process other tasks on the second screen,” whatever that means – not much is my guess, since it is the CPU that processes tasks, not the screen.

image

Even so, kudos to Asus for innovation. The company has a long history of bold product launches; some fail, some, like the inexpensive 2007 Eee PC which ran Linux, have been significant. The Eee PC was both a lot of fun and helped to raise awareness of alternatives to Windows.

The notable feature of Project Precog of course is not so much the AI, but the fact that it has two screens and no keyboard. Instead, if you want to type, you get an on-screen keyboard. The trade-off is extra screen space at the cost of convenient typing.

I am not sure about this one. I like dual screens, and like many people much prefer using two screens for desktop work. That said, I am also a keyboard addict. After many experiments with on-screen keyboards on iPads, Windows and Android tablets, I am convinced that the lack of tactile feedback and give on a virtual keyboard makes them more tiring to work on and therefore less productive.

Still, not everyone works in the same way as I do; and until we get to try a Project Precog device (no date announced), we will not know how well it works or how useful the second screen turns out to be.