
Adobe announces extensibility for XD design and prototyping tool, integration with Microsoft Teams, Slack and Jira

Adobe XD (Experience Design) is a tool for prototyping apps and web applications. The full application runs on Windows and Mac, as part of Adobe’s Creative Cloud, and there are apps for iOS and Android that let you preview your designs on a device. Note that it is only a prototyping tool: you still have to re-implement the design in Android Studio, Xcode, Visual Studio or your preferred development tool. However the ability to create and share prototypes is a critical part of the workflow for many applications.

image

Adobe has now announced extensibility for XD via an API. This opens the way to third-party plugins, which will enable “adding new features, automating workflows and connecting XD to tools and services,” according to the press release.

There are also new integrations with collaboration tools including Microsoft Teams and Slack, and Jira (Atlassian’s software development management tool).

The release emphasises that Microsoft Teams is Adobe’s “preferred collaboration service”, showing that the company’s alliance with Microsoft is still on.

These are not the only tools which integrate with XD. Others were announced in January this year, including Dropbox and Sketch.

What do these integrations do? It is mainly a matter of rich preview within the tool, and the ability to receive notifications, such as when someone comments on an XD design.

Adobe has a generous free starter plan for XD. This includes:

  • Adobe XD
  • 1 active shared prototype
  • 1 active shared design spec
  • 2 GB cloud storage
  • Typekit Free (limited set of fonts)

You can get the free plan here, play around with the tool, and upgrade to the full plan (with unlimited prototypes) if you need to, at $9.99 per month.

SQLite with .NET: excellent but some oddities

I have been porting a C# application which uses an MDB database (the old Access/JET format) to one that uses SQLite. The process has been relatively smooth, but I encountered a few oddities.

One is puzzling and is described by another user here. If you have a column that normally stores string values, but insert a string that happens to be numeric such as “12345”, then you get an invalid cast exception from the GetString method of the SQLite DataReader. The odd thing is that the GetFieldType method correctly returns String. You can overcome this by using GetValue and casting the result to a string, or calling ToString() on the result as in dr.GetValue().ToString().
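As a minimal sketch of the workaround (dr here is an open SQLiteDataReader over a query with a text column at ordinal 0; names are illustrative):

// dr.GetFieldType(0) correctly reports System.String, but this line can throw
// an InvalidCastException when the stored text looks numeric, such as "12345":
// string s = dr.GetString(0);

// Workaround: fetch the raw value and convert it
string s = dr.GetValue(0).ToString();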

Another strange one is date comparisons. In my case the application only stores dates, not times; but SQLite using the .NET provider stores the values as DateTime strings. The SQLite query engine returns false if you test whether “yyyy-mm-dd 00:00:00” is equal to “yyyy-mm-dd”. The solution is to use the date function: date(datefield) = date(datevalue) works as you would expect. Alternatively you can test for a value between two dates, such as greater than yesterday and less than tomorrow.
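As a sketch, the query ends up looking something like this (table and column names are invented), with the date() function applied to both sides:

// Matches rows whether the stored value is "2018-07-31" or "2018-07-31 00:00:00"
using (var cmd = new SQLiteCommand(
    "SELECT * FROM Orders WHERE date(OrderDate) = date(@d)", connection))
{
    cmd.Parameters.AddWithValue("@d", new DateTime(2018, 7, 31));
    using (var dr = cmd.ExecuteReader())
    {
        while (dr.Read())
        {
            // process matching rows
        }
    }
}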

Performance with SQLite is excellent. Unit tests of various parts of the application that make use of the database ran between 2 and 3 times faster than with JET on average; one was 8 times faster. Note though that you must use transactions with SQLite (or disable synchronous operation) for bulk updates, otherwise database writes are very slow. The reason is that SQLite wraps every INSERT or UPDATE in a transaction by default. So you get the effect described here:

Actually, SQLite will easily do 50,000 or more INSERT statements per second on an average desktop computer. But it will only do a few dozen transactions per second. Transaction speed is limited by the rotational speed of your disk drive. A transaction normally requires two complete rotations of the disk platter, which on a 7200RPM disk drive limits you to about 60 transactions per second.

Without a transaction, a unit test that does a bulk insert, for example, took 3 minutes, versus 6 seconds for JET. Refactoring into several transactions reduced the SQLite time to 3 seconds, while JET went down to 5 seconds.
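For what it is worth, this is a minimal sketch of the pattern that made the difference, with invented table and column names:

// Requires: using System.Data; using System.Data.SQLite;
// Wrapping the bulk INSERT in one explicit transaction avoids a commit
// (and a disk sync) after every single statement.
using (var tx = connection.BeginTransaction())
using (var cmd = new SQLiteCommand(
    "INSERT INTO Items (Name, Price) VALUES (@name, @price)", connection, tx))
{
    var name = cmd.Parameters.Add("@name", DbType.String);
    var price = cmd.Parameters.Add("@price", DbType.Double);

    foreach (var item in items)   // items: whatever collection you are importing
    {
        name.Value = item.Name;
        price.Value = item.Price;
        cmd.ExecuteNonQuery();    // cheap: nothing is committed yet
    }

    tx.Commit();                  // one commit for the whole batch
}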

Rumours of a new Mac Mini? It is about time.

Bloomberg is reporting rumours of a new Mac Mini in time for the back-to-school market this year. The source of the rumours is claimed to be “people familiar with the plans”.

Apple is also planning the first upgrade to the Mac mini in about four years. It’s a Mac desktop that doesn’t include a screen, keyboard, or mouse in the box and costs $500. The computer has been favored because of its lower price, and it’s popular with app developers, those running home media centers, and server farm managers. For this year’s model, Apple is focusing primarily on these pro users, and new storage and processor options are likely to make it more expensive than previous versions, the people said.

You can still buy a Mac Mini, but it has not been updated since 2014, making it particularly poor value. It is useful for developers since a Mac of some kind is required for iOS and of course Mac development. It is also handy for keeping up to date with macOS.

The latest rumours sound plausible though the prospect of being “more expensive than previous versions” will not go down well with some of the target market, who want to minimise the premium paid for Apple products. Another reason why the 2014 Mac Mini is unappealing is that additional RAM is factory-fit only, which again means extraordinarily high prices. Check out the iFixit teardown:

Unfortunately, the RAM is soldered to the logic board. This means that if you want to upgrade the RAM, you can only do so at time of purchase.

Will Apple do the same again? It seems likely. My guess is that the new Mac Mini (if it exists) will be even smaller than before, but just as hard to upgrade.

Should you convert your Visual Basic .NET project to C#? Why and why not…

When Microsoft first started talking about Roslyn, the .NET compiler platform, one of the features described was the ability to take some Visual Basic code and “paste as C#”, or vice versa.

Some years later, I wondered how easy it is to convert a VB project to C# using Roslyn. The SharpDevelop team has a nice tool for this, CodeConverter, which promises to “Convert code from C# to VB.NET and vice versa using Roslyn”. You can also find this on the Visual Studio marketplace. I installed it to try it out.

image

Why would you do this though? There are several reasons, the foremost of which is cross-platform support. The Xamarin framework can use VB to some extent, but it is primarily a C# framework. .NET Core was developed first for C#. Microsoft has stated that “with regard to the cloud and mobile, development beyond Visual Studio on Windows and for non-Windows platforms, and bleeding edge technologies we are leading with C#.”

Note though that Visual Basic is still under active development and history suggests that your Windows VB.NET project will continue running almost forever (in IT terms that is). Even Visual Basic 6.0 applications still run, though you might find it convenient to keep an old version of Windows running for the IDE.

Still, if converting a project is just a right-click in Visual Studio, you might as well do it, right?

image

I tried it, on a moderately-sized VB DLL project. Based on my experience, I advise caution – though acknowledging that the converter does an amazing job, and is free and open source. There were thousands of errors which will take several days of effort to fix, and the generated code is not as elegant as code written for C#. In fact, I was surprised at how many things went wrong. Here are some of the issues:

The tool makes use of the Microsoft.VisualBasic namespace to simplify the conversion. This namespace provides handy VB features like DateDiff, which calculates the difference between two dates. The generated project failed to set a reference to this assembly, generating lots of errors about unknown objects called Information, Strings and so on. This is quick to fix. Less good is that statements using this assembly tend to be more convoluted, making maintenance harder. You can often simplify the code and remove the reference; but of course you might introduce a bug with careless typing. It is probably a good idea to remove this dependency, but it is not a problem if you want the quickest possible port.
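As an illustration (variable names invented), this is the style of call the converter tends to emit, alongside the plain .NET equivalent you end up with after removing the dependency:

// As converted: relies on the VB runtime (requires: using Microsoft.VisualBasic;)
long days = DateAndTime.DateDiff(DateInterval.Day, startDate, DateTime.Today);

// Refactored: no Microsoft.VisualBasic reference needed
int daysRefactored = (DateTime.Today - startDate).Days;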

Moving from a case-insensitive language to a case-sensitive language is a problem. Visual Studio does a good job of making your VB code mostly consistent with regard to case, but that is not a fix. The converter was unable to fix case-sensitivity issues, and introduced some of its own (Imports System.Text became using System.text and threw an error). There were problems with inheritance, and even subtle bugs. Consider the following, admittedly ugly and contrived, code:

image

Here, the VB coder has used different case for a parameter and for referencing the parameter in the body of the method. Unfortunately another variable with the different case is also accessible. The VB code and the converted C# code both compile but return different results. Incidentally, the VB editor will work very hard to prevent you writing this code! However it does illustrate the kind of thing that can go wrong and similar issues can arise in less contrived cases.
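Something along these lines (contrived, with invented names) captures the problem: in the VB original the body’s Total resolves to the parameter, because VB is case-insensitive and the parameter shadows the field; in the converted C# it binds to the field instead, so both versions compile but return different results.

class Calculator
{
    private double Total = 100;          // a member with the "other" case, also in scope

    public double AddVat(double total)   // parameter differs from the field only by case
    {
        // The VB coder meant the parameter; in C# this resolves to the field
        return Total * 1.2;
    }
}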

C# is stricter than VB, which causes errors in conversion. In most cases this is not a bad thing, but it can cause headaches. For example, VB will let you pass object members ByRef but C# will not. In fact, VB will let you pass anything ByRef, even literal values, which is a puzzle! So this compiles and runs:

image
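Where the VB code passed an object member ByRef, the usual manual fix after conversion is a temporary local, along these lines (Order and Quantity are invented names):

// C# will not pass a property by ref, so copy it out, pass the local, copy it back
var qty = order.Quantity;
AdjustQuantity(ref qty);
order.Quantity = qty;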

Another example is that in VB you can use an existing variable as the iteration variable, but in C# foreach you cannot.

Collections often go wrong. In VB you use an Item property to access the members of a collection like a DataReader. In C# this is omitted, but the converter does not pick this up.
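The manual fix is straightforward once spotted; for example (column name invented):

// VB:  Dim name As String = CStr(dr.Item("CustomerName"))
// C#:  the Item property becomes the indexer
string name = (string)dr["CustomerName"];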

Overloading sometimes goes wrong. The converter does not always successfully convert overloaded methods. Sometimes parameters get stripped away and a spurious new modifier is added.

Bitwise operators are not correctly converted.

VB allows indexed properties and properties with parameters. C# does not. The converter simply strips out the parameters so you need to fix this by hand. See https://stackoverflow.com/questions/2806894/why-c-sharp-doesnt-implement-indexed-properties if the language choices interest you.
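As a sketch (invented names), a VB property with a parameter usually ends up as a pair of methods, or as an indexer if it was the class’s default property:

// VB:  Public Property Setting(ByVal key As String) As String
// One common manual translation is a get/set method pair backed by a dictionary
// (requires: using System.Collections.Generic;)
private readonly Dictionary<string, string> _settings = new Dictionary<string, string>();

public string GetSetting(string key) => _settings[key];
public void SetSetting(string key, string value) => _settings[key] = value;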

There is more, but the above gives some idea about why this kind of conversion may not be straightforward.

It is probably true that the higher the standard of coding in the original project, the more straightforward the conversion is likely to be, the caveat being that more advanced language features are perhaps more likely to go wrong.

Null strings behave differently

Another oddity is that VB treats a String set to null (Nothing) as equivalent to an empty string:

Dim s As String = Nothing

If (s = String.Empty) Then 'TRUE in VB
    MsgBox("TRUE!")
End If

C# does not:

String s = null;

if (s == String.Empty) //FALSE in C#
{
    //won't run
}

Same code, different result, which can have unfortunate consequences.
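If the VB behaviour is what the code relies on, the safest translation is an explicit test that gives the same result in both languages:

// Treats null (Nothing) and "" the same way, matching the VB semantics
if (string.IsNullOrEmpty(s))
{
    // runs for both null and empty strings
}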

Worth it?

So is it worth it? It depends on the rationale. If you do not need cross-platform, it is doubtful. The VB code will continue to work fine, and you can always add C# projects to a VB solution if you want to write most new code in C#.

If you do need to move outside Windows though, conversion is worthwhile, and automated conversion will save you a ton of manual work even if you have to fix up some errors.

There are two things to bear in mind though.

First, have lots of unit tests. Strange things can happen when you port from one language to another. Porting a project well covered by tests is much safer.

Second, be prepared for lots of refactoring after the conversion. Aim to get rid of the Microsoft.VisualBasic dependency, and use the stricter standards of C# as an opportunity to improve the code.

SQLite adds support for .NET Core 2.0 and .NET Standard 2.0

image

The open source SQLite database engine goes from strength to strength, largely by not changing that much: it remains small, fast, reliable, cross-platform, and completely free. The engine is written in C but there are many wrappers for different languages, a recent addition being .NET Core 2.0 and .NET Standard 2.0:

1.0.109.0: Add preliminary support for .NET Core 2.0 and the .NET Standard 2.0. Pursuant to [5c89cecd1b].

.NET developers using SQLite are fortunate in that System.Data.SQLite, the .NET provider, is supported by the SQLite team and has its own sub-site on sqlite.org. “The SQLite team is committed to supporting System.Data.SQLite long-term,” states the home page.

The addition of .NET Core 2.0 support is valuable, in part because .NET Core is where Microsoft’s energy is now focused, and it will make it easier to write cross-platform code. There is a snag though: there is no official cross-platform GUI framework for .NET Core, which would be useful for SQLite given that it is a local database engine. However, Microsoft’s Xamarin framework, which is cross-platform, does support .NET Standard 2.0, so this should work, though I have not tried it.
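As a minimal sketch (file and table names invented) of using the provider from a .NET Core console project; it assumes the System.Data.SQLite.Core NuGet package:

using System;
using System.Data.SQLite;

class Program
{
    static void Main()
    {
        using (var connection = new SQLiteConnection("Data Source=test.db"))
        {
            connection.Open();

            using (var create = new SQLiteCommand(
                "CREATE TABLE IF NOT EXISTS Notes (Id INTEGER PRIMARY KEY, Text TEXT)", connection))
            {
                create.ExecuteNonQuery();
            }

            using (var insert = new SQLiteCommand(
                "INSERT INTO Notes (Text) VALUES (@t)", connection))
            {
                insert.Parameters.AddWithValue("@t", "Hello from .NET Core");
                insert.ExecuteNonQuery();
            }

            using (var select = new SQLiteCommand("SELECT Text FROM Notes", connection))
            using (var dr = select.ExecuteReader())
            {
                while (dr.Read())
                    Console.WriteLine(dr.GetString(0));
            }
        }
    }
}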

The truth is that almost any framework can be made to work with SQLite. I did some work myself on a wrapper for Delphi (Object Pascal) which still has some users today.

Back in 2007 I interviewed SQLite’s creator, Dr Richard Hipp, for Guardian Technology. Worth a read if you are wondering why SQLite, unlike most open source projects, has no licence: it is simply public domain:

“I looked at all of the licences,” Hipp says, “and I thought, why not just put it in the public domain? Why have these restrictions on it? I never expected to make one penny. I just wanted to make it available to other people to solve their problem.”

Audirvana Plus for Windows review: a music player which combines convenience and no-compromise audio

Audirvana Plus, an audiophile music player for the Mac, has now been released for Windows.

image

Audirvana was developed in France by Damien Plisson, originally as an open source project (you can still get this here but it has not been updated since 2012). The description there still applies: “No equalizer, no trendy special effects, just the music”.  Both Mac and Windows come with music players bundled with the operating system – in Apple’s case the mighty iTunes – but the issue which Audirvana addresses is that these players are about convenience and features as well as sound quality.

Another problem is that the sound system in a modern operating system is complex and needs to support every kind of application while from the user’s perspective it should “just work”; and this can mean compromises, such as resampling or normalizing the audio. This does not matter in most circumstances, but if you want the best possible sound and spend money on high-res downloads or streaming, for example, you want bit-perfect sound.

This perhaps is a good reason not to play music directly from a PC or Mac; but the counter-argument is that using your existing computer reduces the box-count (and expense) of streaming, and that the flexibility and processing power of a desktop computer is handy too.

So what does Audirvana offer? The Windows version is still to some extent a work in progress and not yet as full-featured as the Mac version, though the developers are promising to add the missing pieces later. Even so, the product is already a capable player with the following key features:

1. Wide range of supported formats including AIFF, WAVE, AAC, MP3, FLAC, Monkey Audio APE, WavPack, Apple Lossless, DSD (DSDIFF including DST compressed, DSF, and SACD ISO images).

DSD support works whether or not you have a DSD DAC. If you have a DSD DAC, you get full native DSD. If you do not, Audirvana will convert to hi-res PCM and it still sounds good. You can control how the DSD is converted in settings, such as the amount of gain to apply (without it, DSD files will sound quiet).

image

Here is a DSD file playing on a non-DSD DAC:

image

2. MQA unfolding whether or not you have an MQA DAC. The way this works is similar to DSD. If you have an MQA DAC, the decoding will take place in hardware. If you do not, Audirvana will process the MQA track in software. For example, I have a 16-bit, 44.1 kHz MQA-encoded FLAC, downloaded from here, that plays in Foobar 2000 as a 16/44 file. In Audirvana though, the same file claims to be a 24-bit/352.8 kHz track.

image

That resolution is not genuine; but what matters is that MQA decoding is taking place. If the file is played through an MQA-capable DAC like the Meridian Explorer 2, I get a green light indicating MQA decoding on the DAC. If I play the “original resolution” version, I get a blue light indicating “MQA Studio”.

3. WASAPI and ASIO support. WASAPI is the native Windows standard which enables bit-perfect output; ASIO is a standard with similar features, developed by Steinberg and aimed at professional audio applications.

4. A library manager which performs well with large numbers of tracks. I tried it with over 50,000 tracks and it was perfectly responsive. It uses the open source SQLite database engine.

5. Hi-res streaming via Qobuz, HIRESAUDIO or Tidal. There is no support for the likes of Spotify or Apple Music; I guess these are not the target market because they use lossy compression.

Not available yet, but coming, are a remote app for iOS (iPhone, iPad and iPod Touch), audio effects via VST plugins, and kernel streaming output.

The Audirvana User Interface

Audirvana is delivered as a download; it is a ClickOnce application, which means it updates semi-automatically, prompting you when an update is available. The user interface is, from the point of view of a Windows user, rather quirky. There is no menu or ribbon, but by clicking around you can find what you need. Some of the settings are accessed by clicking a gearwheel icon, others (such as the per-device options shown in the illustration above) by clicking an arrow to the right of the device name. There is also a compact view, obtained by clicking a symbol at top left, designed for playback once you have lined up the tracks you want.

image

The current version seems unreliable when it comes to showing cover art in the library. Sometimes cover art shows up in the mini view, but not the full view.

image

Searching the library is quick, but because the user interface is fairly blocky, you do not see many results on a page. An option just to show details in a list would be good (or perhaps it exists but I have not clicked in the right place yet).

I can forgive all this since, despite a few annoyances, the user interface is responsive, the search is fast, and playback itself works well.

Sound quality

How much impact does the music player have on sound quality? This is difficult to answer definitively. On the one hand, the amount of distortion introduced by a sub-optimal player should be negligible compared to other sources of distortion. On the other hand, if you have gone to the trouble and expense of investing in hi-res downloads, streaming or DSD, it must be worth ensuring that every link in the chain does justice to those sources.

It is true that on Windows, with its enthusiastic technical audiophile community, most of what Audirvana does can be achieved with free players such as Foobar 2000 or VLC. There is also the excellent JRiver as an alternative paid-for player, though this lacks software MQA decoding (appreciating that not everyone likes or needs this).

That said, the uncomplicated user interface of Audirvana Plus is great for audio enthusiasts who would rather not spend too much time fiddling with settings or plugins. Support for the iOS remote app is an unfortunate missing piece at present, and Android users miss out too.

The Windows version needs a bit more work, then (I also encountered some unpleasant noises when trying to adjust the volume within the application), but it does enough right to justify its relatively modest cost, and the bugs will be fixed. Head over to the Audirvana site for a free trial.

Microsoft’s Dynamics CRM 2016/365: part brilliant, part perplexing, part downright sloppy

I have just completed a test installation of Microsoft’s Dynamics CRM on-premises; it is now called Dynamics 365 but the name change is cosmetic, and in fact you begin by installing Dynamics CRM 2016 and it becomes Dynamics 365 after applying a downloaded update.

Microsoft’s Dynamics product has several characteristics:

1. It is fantastically useful if you need the features it offers

2. It is fantastically expensive for reasons I have never understood (other than, “because they can”)

3. It is tiresome to install and maintain

I wondered if the third characteristic had improved since I last did a Dynamics CRM installation, but I feel it has not much changed. Actually the installation went pretty much as planned, though it remains fiddly, but I wasted considerable time setting up email synchronization with Exchange (also on-premises). This is a newish feature called Server-Side Synchronization, which replaces the old Email Router (which still exists but is deprecated). I have little love for the Email Router which, when anything goes wrong, fills the event log with huge numbers of identical errors, such that you have to disable it before you can discover what is really going wrong.

Email is an important feature as automated emails are essential to most CRM systems. The way the Server-Side Synchronization works is that you configure it, but CRM mailboxes are disabled until you complete a “Test and Enable” step that sends and receives test emails. I kept getting failures. I tried every permutation I could think of:

  • Credentials set per-user
  • Credentials set in the server profile (uses Exchange Impersonation to operate on behalf of each user)
  • Windows authentication (only works with Impersonation)
  • Basic authentication enabled on Exchange Web Services (EWS)

All failed, the most common error being “Http server returned 401 Unauthorized exception.” The troubleshooting steps here say to check that the email address of the user matches that of the mailbox; of course it did.

An annoyance is that on my system the Test and Enable step does not always work (in other words, it is not even tried). If I click Test and Enable in the Mailbox configuration window, I get this dialog:

image

However if I click OK, nothing happens and the dialog stays. If I click Cancel nothing happens and the dialog stays. If I click X the dialog closes but the test is not carried out.

Fortunately, you can also access Test and Enable from the Mailbox list (select a mailbox and it appears in the ribbon). A slightly different dialog appears and it works.

I was about to give up. I set Windows authentication in the server profile, which is probably the best option for most on-premises setups, and tried the test one more time. It worked. I do not know what changed. As this tech note (which is about server-side synchronization using Exchange Online) remarks:

If you get it right, you will hear Microsoft Angels singing

But what’s this about sloppy? There is plenty of evidence. Things like the non-functioning dialog mentioned above. Things like the date which shows for a mailbox that has not been tested:

image

Or leaving aside the email configuration, things like the way you can upload Word templates for use in processes, but cannot easily download them (you can use a tool like the third-party XRMToolbox).

And the script error dialog which has not changed for a decade.

Or the warning you get when viewing a report in Microsoft Edge, that the browser is not supported:

image

so you click the link and it says Edge is supported.

Or even the fact that whenever you log on you get this pesky dialog:

image

So you click Don’t show this again, but it always reappears.

It seems as if Microsoft does not care much about the fit and finish of Dynamics CRM.

So why do people persevere? In fact, the Dynamics business is growing for Microsoft, largely because of Dynamics 365 online and its integration with Office 365. The cloud is one reason, since it removes at least some of the admin burden. The other thing, though, is that it does bring together a set of features that make it invaluable to many businesses. You can use it not only for sales and marketing, but also for service case management, quotes, orders and invoices.

It is highly customizable, which is a mixed blessing as your CRM installation becomes increasingly non-standard, but does mean that most things can be done with sufficient effort.

In the end, it is all about automation, and it can work like magic with carefully designed custom processes.

With all those things to commend it, it would pay Microsoft to work at making the user interface less annoying and the administration less prone to perplexing errors.

Mozilla Firefox and a DNS security dilemma

Mozilla is proposing to make DNS over HTTPS default in Firefox. The feature is called Trusted Recursive Resolver, and currently it is available but off by default:

image

DNS is critical to security but not well understood by the general public. Put simply, it resolves web addresses to IP addresses, so if you type in the web address of your bank, a DNS server tells the browser where to go. DNS hijacking makes phishing attacks easier since users put the right address in their browser (or get it from a search engine) but may arrive at a site controlled by attackers. DNS is also a plain-text protocol, so DNS requests may be intercepted giving attackers a record of which sites you visit. The setting for which DNS server you use is usually automatically acquired from your current internet connection, so on a business network it is set by your network administrator, on broadband by your broadband provider, and on wifi by the wifi provider.

DNS is therefore quite vulnerable. Use wifi in a café, for example, and you are trusting the café wifi not to have allowed the DNS to be compromised. That said, there are further protections, such as SSL certificates (though you might not notice if you were redirected to a secure site that was a slightly misspelled version of your banking site, for example). There is also a standard called DNSSEC which authenticates the response from DNS servers.

Mozilla’s solution is to have the browser handle the DNS. Trusted Recursive Resolver not only uses a secure connection to the DNS server, but also provides a DNS server for you to use, operated by Cloudflare. You can replace this with other DNS servers though they need to support DNS over HTTPS. Google operates a popular DNS service on 8.8.8.8 which does support DNS over HTTPS as well as DNSSEC. 
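For anyone who wants to experiment now, the feature is controlled from about:config; the key preferences, as far as I can tell (check Mozilla’s current documentation for the exact values), are:

  • network.trr.mode – 0 is off, 2 uses DNS over HTTPS but falls back to normal DNS on failure, 3 uses DNS over HTTPS only
  • network.trr.uri – the DNS over HTTPS endpoint to use; by default this points to Cloudflare at https://mozilla.cloudflare-dns.com/dns-query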

While using a secure connection to DNS is a good thing, using a DNS server set by your web browser has pros and cons. The advantage is that it is much less likely to be compromised than a random public wifi network. The disadvantage is that you are trusting that third-party with a record of which sites you visit. It is personal data that potentially could be mined for marketing or other reasons.

On a business network, having the browser use a third-party DNS server could well cause problems. Some networks use split DNS, where an address resolves to an internal address when on the internal network, and an external address otherwise. Using a third-party DNS server would break such schemes.

Few will use this Firefox feature unless it is on by default – but that is the plan:

You can enable DNS over HTTPS in Firefox today, and we encourage you to.

We’d like to turn this on as the default for all of our users. We believe that every one of our users deserves this privacy and security, no matter if they understand DNS leaks or not.

But it’s a big change and we need to test it out first. That’s why we’re conducting a study. We’re asking half of our Firefox Nightly users to help us collect data on performance.

We’ll use the default resolver, as we do now, but we’ll also send the request to Cloudflare’s DoH resolver. Then we’ll compare the two to make sure that everything is working as we expect.

For participants in the study, the Cloudflare DNS response won’t be used yet. We’re simply checking that everything works, and then throwing away the Cloudflare response.

Personally I feel this should be opt-in rather than on by default, though it probably is a good thing for most users. The security risk from DNS hijacking is greater than the privacy risk of using Cloudflare or Google for DNS. It is worth noting too that Google DNS is already widely used so you may already be using a big US company for most of your DNS resolving, but probably without the benefit of a secure connection.

All the way from 1997: Compaq PC Companion C140 still works, but as badly as it did on launch

I am having a clear-out which is bringing back memories and unearthing some intriguing items. One is this Compaq C140 PC Companion, running Windows CE, which launched in December 1997.

image

The beauty of this device is that it takes two AA batteries. I stuck in some new ones and found that it started up fine, not bad after more than 20 years. Most more recent devices have a non-replaceable rechargeable battery which usually fails long before the rest of the electronics, rendering the entire device useless (at least without surgery).

The C140 runs Windows CE 1.0 and has a monochrome touch screen designed to be used mainly with a stylus. It has 4MB RAM and 4MB storage, and comes with versions of Word, Excel, Calendar, Contacts and Tasks. There is also a calculator and a world clock. It is expandable with PCMCIA cards (though not many have drivers). The idea is that you link it to your PC with the supplied serial cable and synch with Outlook, hence PC Companion.

The odd thing is, looking at this device I still find it superficially compelling. A pocketable device running Word and Excel, with a full QWERTY keyboard, stylus holder so you do not lose it, what’s not to like?

A lot, unfortunately. The biggest problem is the screen. There is a backlight and a contrast dial, but the display is faint and hard to read in most lighting conditions, so you constantly fiddle with the contrast.

The next issue is the keyboard. It is too cramped to type comfortably. And the format, though it looks reassuringly like a small laptop, is actually awkward to use. It works on a desk, which seems to miss the point, but handheld it is useless. You need three hands, one for the device, one for the stylus, and a third for typing. The design is just wrong and has not been thought through.

I have searched for years for small portable devices with fast text input. I suppose a smartphone with a Swype keyboard or similar comes closest, but I am still more productive with a laptop; in practice the biggest improvement for me is that laptops have become lighter, with longer battery life.

Spare a thought though for Microsoft (and its partners) with its long history of trying to make mobile work. You can argue back and forth about whether it was right to abandon Windows Phone, but whatever your views, it is a shame that decades of effort ended that way.

Another good quarter for Apple, but Huawei growth and Samsung decline is the real Smartphone story

Apple has reported its “best June quarter ever” with revenue up 17% year on year. iPhone unit sales were flat, but higher average prices bumped up revenue.

More significant though is the rise of Huawei, now number two in unit sales after Samsung and ahead of Apple. Here are the latest unit sales for the top ten vendors according to preliminary figures from IHS Markit:

Global smartphone shipments by OEM (million units)

Rank  Company    Q2’18   Market Share   YoY       Q1’18   Q2’17
1     Samsung     70.8      20.6%       -10.8%     78.0    79.4
2     Huawei      54.2      15.7%        41.0%     39.3    38.5
3     Apple       41.3      12.0%         0.7%     52.2    41.0
4     Xiaomi      33.7       9.8%        45.6%     28.4    23.2
5     Oppo        31.9       9.3%         4.5%     25.9    30.5
6     Vivo        28.6       8.3%        20.3%     21.2    23.8
7     LG          11.2       3.3%       -15.5%     11.3    13.3
8     Motorola    10.0       2.9%        41.5%      8.7     7.1
      Others      62.8      18.1%       -33.3%     80.4    94.2
      Total      344.6     100.0%        -1.8%    345.5   350.9

Source: IHS Markit, Smartphone Intelligence Service, 2018.

What is notable is that the number one vendor Samsung suffered a 10% year on year decline, but Huawei grew units by an amazing 41% to become number two ahead of Apple, by volume.

image
Huawei P20 Pro

Note that Apple has not declined as such. This is about Huawei winning sales both from Samsung and from other vendors. If the trend continues, Huawei is on track to overtake Samsung in another few quarters.

Samsung remains the premium Android brand though it has struggled to come up with compelling reasons to keep upgrading its high end devices. A new Galaxy Note is on the way and may be the distinctive new model that the company needs.

That said, it will take more than that to disrupt Huawei. In one sense, there is nothing very complicated about Huawei’s success: through both its Huawei and Honor brands, it has delivered devices that are well made and offer the best value proposition on the market. That does not make them the best in absolute terms (I would rather have a Samsung), but that is not the most important thing. Chatting to a Three salesperson in a shop recently confirmed this: they sell more Huawei/Honor than any other brand, because customers look at what they get for their money.

It is logical that, as Android devices have become thoroughly commoditised, Chinese vendors can achieve better value than their competition, thanks to the cost-effective manufacturing capacity available in their own country.

Xiaomi, another Chinese company, confirms this trend, with its units up over 45%, growing faster than Huawei.