HP business breakdown and why a PC spin-off could backfire

I had a look at HP’s latest financials, following last night’s triple blast of news from the computer giant. It is ceasing webOS operations, acquiring enterprise knowledge management company Autonomy, and considering (though only considering) a spin-off or other major change to its PC division, the Personal Systems Group. Here is what HP said:

As part of the transformation, HP announced that its board of directors has authorized the exploration of strategic alternatives for the company’s Personal Systems Group. HP will consider a broad range of options that may include, among others, a full or partial separation of PSG from HP through a spin-off or other transaction. (See accompanying press release.)

Looking at the results for the second quarter 2011, here is how the main pieces break down:

$millions

Segment | Revenue | % of revenue | Earnings | % of total earnings
Services | 9,089 | 28.5% | 1,225 | 33.8%
Servers, storage and networking | 5,396 | 16.9% | 699 | 19.3%
HP Software | 780 | 2.4% | 151 | 4.2%
Personal Systems Group | 9,592 | 30.1% | 567 | 15.7%
Imaging and Printing | 6,087 | 19.1% | 892 | 24.6%
Financial Services | 932 | 2.9% | 88 | 2.4%

Note that “Earnings” is earnings from operations; HP actually made less money than that, because various other corporate costs have to be deducted. But it gives an idea of where HP’s profit comes from.

So what do these groups do? PSG is notebooks, desktops, workstations and other, where “other” I’d guess will include the webOS mobile devices. Within PSG, notebooks account for 54% of revenue and desktops for 38%. Virtually all of these machines run Windows.

In servers, storage and networking, 61% is from what HP calls “Industry standard servers”. This is code for Windows server.

Under services, the three big businesses are Infrastructure Technology Outsourcing (42%), Technology Services (30%) and Application Services (19%). The first of these is clear-cut (have HP run your infrastructure), but the other two are both consulting services and, on a brief look, seem to overlap.

Autonomy, by the way, reported revenue of $247 million in the three months ending June 30 2011 – pretty tiny relative to HP.

A few comments then. It’s worth noting that PSG is the biggest single segment for revenue, but not so for profit, though it is still making a useful contribution.

Imaging and Printing is the most profitable of the larger segments as a proportion of revenue (HP Software has a higher margin, but on far smaller revenue). I do not know how much of that comes from absurdly overpriced ink cartridges!

If you take PSG together with Industry Standard Servers, you find that around 40% of HP’s revenue comes from boxes running Windows: PSG is 30.1% of revenue, and industry standard servers at 61% of the servers, storage and networking segment add roughly another 10.3%. If you then consider what its printers, network and storage systems attach to, and that a proportion of HP’s consulting business concerns Windows systems and applications, it is obvious that HP’s fortunes are deeply entwined with Microsoft.

If HP removes PSG that will still be true, though less so. But why would HP want to remove PSG? I would guess two main reasons. One is that it is less profitable than the other segments; the other is that HP foresees the business declining under the force of various well-documented pressures: Apple, mobile, cloud.

It still makes little sense to me. I can understand why HP might want to get out of consumer desktops and laptops, but supplying corporate PCs seems to me to fit snugly with the rest of HP’s business and to have beneficial side-effects. After all, PCs, printers and servers do all plug together, both physically and conceptually. Getting rid of PSG might have a negative effect on other parts of HP’s business.

In the SMB market, by the way, resellers like HP because unlike Dell it does not mainly sell direct. HP boxes generally work as advertised in my experience, though I rate the laptops less highly than the servers and desktops.

HP discontinues webOS, considers PC spin-off. Should have stuck with Microsoft

Oh yes, and buys Autonomy, a fast-growing specialist in enterprise knowledge management.

Here’s the news from HP’s announcement:

As part of the transformation, HP announced that its board of directors has authorized the exploration of strategic alternatives for the company’s Personal Systems Group. HP will consider a broad range of options that may include, among others, a full or partial separation of PSG from HP through a spin-off or other transaction. (See accompanying press release.)

HP will discontinue operations for webOS devices, specifically the TouchPad and webOS phones. The devices have not met internal milestones and financial targets. HP will continue to explore options to optimize the value of webOS software going forward.

In addition, HP announced the terms of a recommended transaction for all of the outstanding shares of Autonomy Corporation plc for £25.50 ($42.11) per share in cash.

A few quick comments. First, the failure of webOS does not surprise me. There is not much wrong with webOS as such; in pure technical terms it deserves better. Its focus on adapting web technologies for local mobile applications is far-sighted; it is a more interesting operating system than Android and in some ways it is surprising that it went to HP and not to Google, which is a web technology specialist.

The problem is that HP, despite its size, is not big enough to make a success of webOS on its own. This was my comment from just over a year ago:

Mobile platforms stand (or fall) on several pillars: hardware, software, mobile operator partners, and apps. Apple is powering ahead with all of these. Google Android is as well, and has become the obvious choice for vendors (other than HP) who want to ride the wave of a successful platform. Windows Phone 7 faces obvious challenges, but at least in theory Microsoft can make it work through integration with Windows and by offering developers a familiar set of tools, as I’ve noted here.

It is obvious that not all these platforms can succeed. If we accept that Apple and Android will occupy the top two rungs of the ladder when it comes to attracting app developers, that means HP webOS cannot do better than third; and I’d speculate that it will be some way lower down than that.

Frankly, if HP did not want to do Android, it should have stuck with Microsoft. But this is where the webOS news ties in with the announcement about the Personal Systems Group. HP fell out with Microsoft last year, as I noted in my 2010 retrospective. I said the two companies should make up; but it looks as if HP is more inclined to give up on PCs and pursue other lines that have better margins – like enterprise software.

I am puzzled though by the PSG announcement. It is always curious when a company announces that it might or might not do something, and the fact that HP says it is considering a spin-off of its PC division will be enough to make its customers uncertain about the long-term future of HP PCs; some of them will buy elsewhere as a result. It would have paid HP either to say nothing, or to be more definite and aim for a speedy transition.

All this, on the eve of Microsoft’s detailed unveiling of Windows 8. What are the implications? More than I can put into a single post; but like Gartner’s reports of dramatically declining PC sales in Western Europe presented earlier this week, this is a sign of structural change in the industry.

Microsoft will be glad of one thing: it no longer has this major partner promoting a rival mobile and tablet operating system. Note that HP still is a major partner: even if it sells the Personal Systems Group, its server and services business will still be deeply entwined with Windows.

Review: Audéo Perfect Fit earphones

Audéo Perfect Fit earphones are designed to replace the set you got bundled with your smartphone or music player. The earphone set includes a microphone and a standard multi-function button, so that on an iPhone or many other phones you can answer or decline calls, pause and resume music, or skip to the next track.

image

There are a few unusual features. One is the shape of the earbuds, which have a distinctive “leg”. In order to fit them you first attach one of a range of silicone or foam ear tips. Then you place them in your ear with the legs pointing up and forward, and the cable draped over the back of the ears. It sounds fiddly, but it is easy enough in practice, and gets you a secure and comfortable fit.

image

The supplied manual does an excellent job of explaining fitting. There is also an optional ear guide which adds a shaped cable clip that hooks over your ears. This was not supplied with my review package, the PFE 02x, but does come with the more expensive PFE 12x or can be purchased separately. I found the fit was fine even without the clip.

The extra accessories, including the audio filters described below, are a point of confusion, as the manual in the PFE 02x lists them under “Package contents” even though they are not supplied. No doubt some customers complain that parts are missing; I would have done the same, except that I checked the product web site and the external packaging, which correctly show that the only accessories in the PFE 02x pack are the silicone ear tips.

The next special feature is that each earbud is fitted with a passive audio filter, which can be changed according to preference. The PFE 02x comes with a single green filter, which you can see in the picture above, while the PFE 12x comes with gray and black filters and a fitting tool.

The colours are significant. The black filters are said to amplify bass and high frequencies (what audiophiles call boom and tizz). The gray filters are meant to emphasize mid-range frequencies, while the green are described as offering “perfect bass”.

According to Audeo:

In-house studies have shown that, when headphones exactly reproduce the response curve of the unobstructed ear, most people hear the sound as being very aggressive.

The response curve of Audéo PFE in-ear earphones is a compromise between a frequency range that compensates for the curve of the unobstructed ear and one that emphasizes bass and high-frequency sounds. This is what most people prefer.

In order to cover the widest possible range of user preferences we offer three audio filters.

Unfortunately the only filter I have tried is the green one supplied with the PFE 02x. However I am a little doubtful about the above explanation. The goal of hi-fi reproduction is neutrality, so that you hear whatever the musicians and engineers who created the sound intended. I appreciate though that when it comes to earbuds used on the move in all sorts of noisy environments, it does not make sense to be purist about such things. Further, it is not realistic to expect earbuds to deliver the kind of bass you can get from full-range loudspeakers or even from high quality over-the-ear headphones, and indeed the Audéo does not. Still, what you care about is not the theory but the sound. How is it?

I carried out extensive listening tests with the Audéo earphones, comparing them to a set of high quality Shure earbuds as well as to a standard Apple set. My first observation is that the Audéo earphones, when fitted correctly, fit more snugly and securely than either of the others I tried, and that this close fit goes a long way towards obtaining a better and more consistent sound.

Second, I soon identified a certain character to the Audéo sound. In comparison to the Shure, the Perfect Fit earphones are slightly softer and less bright. On some music this was a good thing. I played My Jamaican Guy by Grace Jones, which has a funky beat and bright percussion. On the Shure the track was a little harsh, whereas the Audéo tamed the brightness while still letting you hear every detail. With Love over Gold by Dire Straits though, which is already a mellow track, I preferred the Shure which delivered beautiful clarity and separation, whereas the Audéo (while still sounding good) was less crisp. Daniel Barenboim playing solo piano sounded delightful though with slightly rolled off treble.

I did feel that both the Audéo and the Shure improved substantially on the Apple-supplied earphones, as they should considering their price, though even the bundled earphones are not that bad.

The strength of the Perfect Fit earphones is that they never sound bright or harsh; I found them consistently smooth and enjoyable. The sound is also clean and well extended, considering that they are earbuds. Isolation from external sounds is excellent, which is important if you are a frequent traveller.

The weakness is that they do in my opinion slightly soften and recess the sound.

That said, it may be that the other filters give the earphones a different character, and if you have the pack with a choice of filters it would be worth trying the variations to see which you prefer.

I may have been imagining it, but I felt that the earphones sounded particularly good with Apple’s iPhone.

Conclusion: a good choice, especially if you like a slightly mellow and polite presentation. If possible I recommend that you get the more expensive packs that include a case as well as alternative filters and the optional ear clips.


Reports of 19% decline in Western European PC market show structural change

As if we needed telling, a new Gartner report shows a steep decline in the PC market in Western Europe. A “PC” in this context includes Macs but excludes smartphones and what Gartner called “media tablets”, mostly Apple iPads. A few figures comparing shipments in the second quarter 2011 with the same period in 2010:

  • Total PC sales down 18.9%
  • Netbook sales down 53%
  • Desktop PCs down 15.4%
  • Apple up 0.5%
  • Consumer PC market down 27%

What interests me here is not so much the normal ebbing and flowing of the PC market, but structural change indicating a switch away from PCs and laptops to more lightweight mobile devices. I believe this is evidence of that, though the economy is weak and extending the life of existing PCs is an obvious saving both for businesses and consumers.

Still, the dramatic decline in netbook sales suggests that consumers really are buying the more expensive iPad in preference. If you believe that consumers are to some extent ahead of business in their technology choices, then we can expect more of the same in the corporate market too.

No doubt alarm bells have been ringing in Microsoft’s Redmond headquarters for some time. The company is betting on Windows 8 to rescue its operating system from permanent decline, which is why next month’s BUILD conference is so critical. Nevertheless, it will be a year or so before we get new-style tablets running Windows 8, so will it be too late? I tend to think not, just because of the strength of Microsoft in the business world and the importance of Windows for existing applications, but it is interesting to speculate.

One factor which you can argue either way, in terms of Microsoft’s prospects, is that non-iPad tablets seem to be struggling. HP’s TouchPad and RIM’s PlayBook seem to be selling poorly. Google Android looks more hopeful though overshadowed by legal concerns from multiple sources. In Australia and parts of Europe Apple has successfully barred or delayed sales of Samsung’s Galaxy Tab 10.1, though the latest news is that the ban has been lifted outside Germany.

See also: Fumbling tablet computing – Microsoft’s biggest mistake?

Google is now a hardware company as it announces acquisition of Motorola Mobility and its patents

Google is to acquire Motorola Mobility, a major manufacturer of Android handsets. Why? I believe this is the key statement:

We recently explained how companies including Microsoft and Apple are banding together in anti-competitive patent attacks on Android. The U.S. Department of Justice had to intervene in the results of one recent patent auction to “protect competition and innovation in the open source software community” and it is currently looking into the results of the Nortel auction. Our acquisition of Motorola will increase competition by strengthening Google’s patent portfolio, which will enable us to better protect Android from anti-competitive threats from Microsoft, Apple and other companies.

What are the implications? This will assist Google in the patent wars and perhaps give it some of the benefits of vertical integration that Apple enjoys with iOS; though this last point is tricky: the more Google invests in Google Motorola, the more it will upset other Android partners. Google CEO Larry Page says:

This acquisition will not change our commitment to run Android as an open platform. Motorola will remain a licensee of Android and Android will remain open. We will run Motorola as a separate business.

It is unlikely to be so simple; and the main winner I foresee from today’s announcement is Microsoft. Nokia’s decision to embrace Windows Phone rather than Android looks smarter today, since for all its faults Microsoft has a history of working with multiple hardware vendors. The faltering launches of HP’s TouchPad and RIM’s PlayBook have also worked in Microsoft’s favour. I do not mean to understate Microsoft’s challenge in competing with Apple and Android, but I believe it has a better chance than either HP or RIM, thanks to its size and existing market penetration with Windows.

Microsoft will be clarifying its mobile and slate strategy next month at the BUILD conference.

Today’s announcement is also a sign that Google takes Android’s patent problems seriously, as indeed it should. The company’s policy of act first, seek forgiveness later seems to be unravelling. Oracle has a lawsuit against Google with respect to use of Java in Android that looks like it will run and run. FOSS patent expert Florian Mueller argues today that Android also infringes the Linux license, and that this is a problem that cannot easily be fixed. Samsung’s latest Galaxy Tab has been barred from the EU; not entirely a Google issue, but it runs Android.

Note of clarification: Google is acquiring Motorola Mobility, not the whole of Motorola. In January 2011 Motorola split into two businesses. Motorola Mobility is one, revenue in second quarter 2011 around $3.3 billion. The other is Motorola Solutions, revenue in second quarter 2011 around $2 billion.

Kingston Wi-Drive: portable storage expansion for iPad and iPhone

Kingston has announced availability of the Wi-Drive. This product addresses an annoying limitation of the Apple iPhone and iPad: no USB port for external storage devices.

The Wi-Drive overcomes this by connecting wirelessly. It offers 16GB or 32GB of solid-state storage, with USB for charging and for access to the files from a PC or Mac. When you are on the go, you can put the Wi-Drive in your pocket. A free app on the iPhone, iPad or iPod touch lets you access the files. Network bridging means you can still reach the internet while connected to the drive. Battery life is said to be up to 4 hours, so I hope you can switch it off when not needed. You can also share the drive with up to three other users.

Example prices are £89.99 for the 16GB or £124.98 for the 32GB version.

It is a clever solution. That said, I have a couple of reservations. One is that the price is high compared to a simple USB device of the same capacity. That is not unreasonable given the extra technology needed, but it means it will only sell to users who really need it.

And do you need it? If you are on the internet, you could use a file synchronization service like Dropbox, or Apple’s own iDisk or forthcoming iCloud, to extend storage instead.

A second problem is that iOS does not expose its file system to the user. This means that external storage is less convenient on iOS than on other systems. Want to save a Pages document from iOS to the Wi-Drive? You probably cannot do so directly; there is no way to save directly to Dropbox either.

The Wi-Drive only exists because of Apple’s desire to control and supposedly simplify the operating system. It is a workaround, but not a perfect one, although that is not the fault of Kingston.

That said, I have not yet tried a Wi-Drive; I hope to bring you a proper review in due course.

Adobe Muse: so what is wrong with Dreamweaver?

Adobe has released a preview of Muse, a new web site design tool.

My first reaction was one of be-musement. What is wrong with Dreamweaver, the excellent web design tool included in Creative Suite? Bear in mind that there is also a simplified Dreamweaver aimed at less technical business users, called Contribute.

Here are some distinctive features of Muse:

1. It is aimed at non-coders. The catch phrase is “Design and publish HTML websites without writing code”. Muse actually hides the code. I installed Muse on a Mac, and one of the first things I looked for was View Source. I could not find any such feature. You have to preview the page in the browser, and view the source there. That is in contrast to Dreamweaver, where the split view shows the HTML code and the visual designer side by side, and you can edit freely in either.

2. It is an Adobe AIR application. I discovered this in a bad way. It would not install for me on Windows:

image

A curious error. Luckily I am also working on a Mac right now, and there it worked fine.

image

3. It will be sold by subscription only. The FAQ answer is worth quoting in full, as it describes one of the key advantages of cloud computing:

Muse will be sold only by subscription because it will allow the Muse team to improve the product more quickly and be more responsive to your needs. Traditionally Adobe builds up a collection of new features over 12, 18 or 24 months, then makes those changes available as a major upgrade. It is anticipated that new updates of Muse will be released much more frequently, probably quarterly. New features will be made available when they’re ready, not held to be part of an annual or biannual major upgrade. This will enable us to stay on top of browser and device compatibility issues and web design trends, as well as enabling us to respond to feature requests and market changes in a much more timely fashion.

I am reminded of Project Rome, a cancelled project which was also intended to be subscription only. Rome was for desktop publishing, Muse is for web design; otherwise there are plenty of parallels.

4. Muse promotes Adobe hosting via Business Catalyst, and if you select Publish this is the sole option:

image

Of course you can also Export as HTML. Still, it looks as if Muse is intended as part of a wider initiative which will include hosting and web analytics.

5. Muse is not a Flash authoring tool. Check out the Features page. The word Flash does not appear. Nor did any hidden Flash content appear when I exported a page as HTML. My guess: there is a quiet Flash crisis at Adobe, and the company is hastening to make its tools less Flash-centric, in favour of something more cloud and HTML 5 based. I do not mean that Flash is now unimportant. It is still critical to Adobe, and after all Muse itself runs on Flash. However it is being repositioned.

A few comments. Unfortunately I’ve not yet spoken to Adobe about Muse, but the obvious question is reflected in my heading: what is wrong with Dreamweaver? To answer my own question, I can see that Dreamweaver is a demanding tool, and that Muse, while still aimed at professionals, should be easier to learn.

On the other hand, I recall many early web design tools that tried to hide the mechanics of web pages, some more successful than others, and that in the end Dreamweaver triumphed partly thanks to its easy access to the code. Some still miss HomeSite, an even more code-centric tool. What has changed now?

Needless to say, Dreamweaver is not going away, but there is clearly overlap between the two tools.

Of course non-coders do need to be involved in web site authoring, but the trend has been towards smart content management tools, such as WordPress or Drupal, which let designers and coders develop themes while making content authoring easy for contributors. Muse is taking a different line.

Watch this space though. Even on the briefest of looks, this is an impressive AIR application, and it will be interesting to see how it fits into Adobe’s evolving business strategy.

Update: Elliot Jay Stocks blogs about the code generated by Muse, which he says is poor, and gives his opinion that it is too print-oriented:

warning signs are present in this public beta that suggest Muse is very much a step in the wrong direction.

Google Native Client: browser apps unleashed, or misconceived and likely to fail?

Last week Google integrated Native Client into the beta of Chrome 14. Native Client lets you compile C/C++ code to run in the browser. It depends on a new plug-in API called Pepper. Both are open source projects sponsored by Google and implemented in the Chrome browser, and therefore also likely to turn up in Chrome OS, an operating system in which all apps run in the browser.

Native Client is cool. For example, NaClBox lets you run old DOS games in the browser by porting DOSBox to Native Client.

image

Another project is Qt for Google Native Client, a project currently in development. Qt is an excellent and popular GUI and application framework which would speed development of Native Client apps as well as enabling many existing applications to be ported.

It is also worth mentioning that Native Client provides another way to run .NET code in the browser, via Mono with NaCl support.

Why Native Client? Google’s vision, or at least the part of it that focuses on Chrome OS rather than Android, is that everything runs on the Internet and in the browser, making the local operating system unimportant and easily replaced. Native Client removes the performance compromises of managed languages such as JavaScript, ActionScript or Java, as well as easing migration for businesses with existing C/C++ code.

Writing native code for the browser is nothing new. Both Microsoft’s ActiveX and the NPAPI plug-in API used by non-Microsoft browsers let you extend the browser with native code. However Native Client is seamless for the user; you do not have to install any additional plug-in. The main limitation is that Native Client applets do not have access to the local operating system, for security reasons.
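
To give a concrete flavour of the programming model, here is a minimal sketch along the lines of the hello-world samples in Google’s NaCl SDK, using the Pepper C++ API; treat it as illustrative rather than a definitive recipe, since the exact headers and build steps come from the SDK. The module never touches the local file system or OS; it simply exchanges messages with the page’s JavaScript.

```cpp
// Minimal Native Client module (sketch, based on the NaCl SDK C++ API).
// The instance receives messages posted from JavaScript and echoes a reply.
#include "ppapi/cpp/instance.h"
#include "ppapi/cpp/module.h"
#include "ppapi/cpp/var.h"

class EchoInstance : public pp::Instance {
 public:
  explicit EchoInstance(PP_Instance instance) : pp::Instance(instance) {}

  // Called when the page's JavaScript calls embed.postMessage(...).
  virtual void HandleMessage(const pp::Var& message) {
    if (message.is_string()) {
      // Reply back to the page; there is no direct access to the local OS.
      PostMessage(pp::Var("Native Client says: " + message.AsString()));
    }
  }
};

class EchoModule : public pp::Module {
 public:
  virtual pp::Instance* CreateInstance(PP_Instance instance) {
    return new EchoInstance(instance);
  }
};

namespace pp {
// Factory function the browser calls when the module is loaded.
Module* CreateModule() { return new EchoModule(); }
}  // namespace pp
```

On the page side, the compiled module is referenced from an embed element and driven from JavaScript with postMessage, which is part of what makes it feel seamless to the user.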

It is also worth noting that Native Client apps are not altogether cross-platform. They must be recompiled for different CPU instruction sets; the current implementation supports x86 and ARM, but you have to build a separate binary for each. Google says it will support LLVM output to enable cross-platform binaries, though this will impact performance.

But is Native Client secure? That is an open question. Google was aware of the security challenge from the beginning of the project. Unlike the plug-in mechanisms which rely mainly on trust in developer competence and signed code to verify the origin of the plug-in or ActiveX control, Native Client inspects the actual code for unsafe instructions before allowing it to run. There is also an “outer sandbox” which intercepts system calls.

However, adding any new way for code to run makes the browser less secure. Google ran a Native Client Security Contest to help identify vulnerabilities, and the contestants had no trouble finding security flaws. Of course the flaws that were discovered will have been fixed, but there are likely to be others.

And is Native Client necessary? The latest JIT-compiled JavaScript engines are fast enough to enable most types of application to run at a satisfactory speed. This is not just about performance though; it is about reusing existing skills, libraries and applications. There is no doubt that Native Client is nice to have; whether its benefits outweigh the risks is harder to judge.

The last question, which may prove the most significant, is political. Google has forged ahead on its own with Native Client, saying as vendors always do that it hopes it will become a web standard. In the early days of the project, it looked like a Native Client plug-in might enable the feature in other browsers, but abandoning NPAPI for Pepper makes this difficult. Will other browser vendors support Native Client?

Here is a comment from Google’s Ian Ni-Lewis that I find remarkable:

As you probably know, the rule in Web standards is "implementation wins." So we’re concentrating on getting a good quality implementation out the door. We’re doing that in Chrome. That doesn’t mean that NaCl is intended to be "Chrome only," just that we have to start somewhere.

So Native Client is non-standard, and therefore less interesting than HTML 5 until either Google has a Microsoft-Office-like de facto monopoly of web browsers, or it persuades Mozilla, Microsoft and Apple to support it.

That said, you can think of Chrome as an installable runtime in the same way as the Java Virtual Machine or Adobe Flash, just a potentially more intrusive one. Here is our app, you have to install the free Chrome browser to use it. If this happens to any great extent, I can foresee other browser makers hastening to support it.

C++ 11 is approved by ISO: a big day for native code development

Herb Sutter reports that C++ 0x, which will be called C++ 11, has been unanimously approved by the ISO C++ committee. The “11” in the name refers to the year of approval, 2011. The current standard is C++ 98, though amended as C++ 03, so it has taken 8 or 13 years to update it depending on how you count it.

This means that compiler makers can get on with implementing the full C++ 11 standard. Most current compilers implement some of the features already. This Apache wiki shows the current status. A quick glance suggests that the open source GCC is ahead of the pack, followed by Intel C++ and then perhaps Microsoft Visual C++.

C++ 11 is pretty much compatible with C++ 03 so existing code should still work. However there are many new features, enough for Bjarne Stroustrup to say in his feature summary:

Surprisingly, C++0x feels like a new language: The pieces just fit together better than they used to and I find a higher-level style of programming more natural than before and as efficient as ever. If you timidly approach C++ as just a better C or as an object-oriented language, you are going to miss the point. The abstractions are simply more flexible and affordable than before. Rely on the old mantra: If you think of it as a separate idea or object, represent it directly in the program; model real-world objects, and abstractions directly in code. It’s easier now.
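
To illustrate that higher-level style, here is a small sketch of my own (not from Stroustrup’s summary) using a handful of the new features: initializer lists, auto, range-based for, lambdas and unique_ptr.

```cpp
// A few C++11 features in one place: initializer lists, auto,
// range-based for, lambdas and std::unique_ptr (sketch for illustration).
#include <algorithm>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> names{"Stroustrup", "Sutter", "Meyers"};

    // Sort by length using a lambda instead of a separate functor class.
    std::sort(names.begin(), names.end(),
              [](const std::string& a, const std::string& b) {
                  return a.size() < b.size();
              });

    // Range-based for with auto: no iterator boilerplate.
    for (const auto& name : names) {
        std::cout << name << '\n';
    }

    // unique_ptr expresses single ownership; no manual delete required.
    std::unique_ptr<std::string> greeting(new std::string("Hello, C++11"));
    std::cout << *greeting << '\n';
    return 0;
}
```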

Concurrent programming is better supported in C++ 11, important for getting the best performance from modern hardware.
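
As a hedged example of what that support looks like, here is a minimal sketch using std::async and std::future from the new standard library to split a computation across threads without touching any platform-specific API.

```cpp
// C++11 concurrency sketch: std::async runs work on other threads and
// std::future collects the results, with no OS-specific threading calls.
#include <cstddef>
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

long long sum_range(const std::vector<int>& v, std::size_t begin, std::size_t end) {
    return std::accumulate(v.begin() + begin, v.begin() + end, 0LL);
}

int main() {
    std::vector<int> data(1000000, 1);
    std::size_t half = data.size() / 2;

    // std::launch::async asks for each task to run on its own thread.
    auto first  = std::async(std::launch::async, sum_range, std::cref(data), 0, half);
    auto second = std::async(std::launch::async, sum_range, std::cref(data), half, data.size());

    std::cout << "Total: " << first.get() + second.get() << '\n';
    return 0;
}
```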

It is curious how the programming landscape has changed in recent years. A few years back, you might have foreseen a day when most programming would be .NET, Java or JavaScript: all varieties of managed code. While those languages still dominate, native code has come more to the fore, thanks to factors like Apple’s focus on Objective-C, and signs of internal conflict at Microsoft over the best language for coding Windows applications.

That said, C++ 11 remains a demanding language to learn and use. As Stroustrup notes, since C++ 11 is a superset of C++ 98 it is technically harder to learn all of it, though new libraries and abstractions should help beginners. The reasons for using or not using C++ are not going to change significantly with this new standard.

An iOS security tip: tap and hold links in emails to preview them

Today I was using an iPad and received a fake email designed to look as if it were from Facebook. It was a good imitation of the Facebook style.

image

In particular, the links for sign in look OK.

Outlook on Windows displays the actual URL when you hover the mouse pointer over the link. As you can see, in this case it is nothing to do with Facebook:

image

How do you do this on iOS? There is no mouse hover (though it could be done with a proximity sensor), but if you tap and hold on the link, iOS pops up a dialog revealing the scam:

image

This is worth mentioning, as tapping and holding a link to inspect it is not obvious, and some users may not be aware of the feature.

The iPad is still worse than Outlook for email security. Outlook does not download images by default, because downloading an image can tell the spammer that you have opened the message:

image

The iPad mail client downloads all images.

image

In mitigation, most malware on web sites will not run on iOS. However you could still give away your password or other information if you are tricked by a deceptive web page or fake login.

Hiding links is a feature built into HTML. The designers of HTML figured out that we would rather see a friendly plain English link than a long URL. Unfortunately this feature, and related ones like the ability to make an image a link, play into the hands of the scammers and it is necessary to look at the real link before you follow it.
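
As a rough illustration of the point, and not part of the original tip, here is a sketch of the kind of check Outlook’s hover and the iOS tap-and-hold perform for you: pulling each anchor out of an HTML email and showing the friendly text alongside the real destination. The HTML string and the deliberately naive regular expression are assumptions for demonstration only.

```cpp
// Sketch: list the visible text of each HTML link next to its real href,
// so a mismatch like "facebook.com" pointing at another domain stands out.
// The regex is naive and only suitable for a demonstration, not real parsing.
#include <iostream>
#include <regex>
#include <string>

int main() {
    std::string html =
        "<p>Sign in to continue: "
        "<a href=\"http://evil.example.net/login\">facebook.com/login</a></p>";

    std::regex anchor("<a[^>]*href=\"([^\"]*)\"[^>]*>([^<]*)</a>",
                      std::regex::icase);

    for (std::sregex_iterator it(html.begin(), html.end(), anchor), end;
         it != end; ++it) {
        std::cout << "Visible text: " << (*it)[2] << '\n'
                  << "Actual link:  " << (*it)[1] << '\n';
    }
    return 0;
}
```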

A better solution would be authenticated email, so that fake Facebook emails would be detected before they are displayed. Unfortunately we are still a long way from using authenticated emails as the norm.