All posts by onlyconnect

Developers quick to adopt .NET 2.0, slow to leave Visual C++ 6.0

The Code Project is a popular resource site for Windows developers. It has polled its users on what programming language they use; see here for the details. Three points to note:

  • Visual C++ 6.0 still has high usage – nearly on a par with Visual C++ 2003 and 2005 combined: 19.78% vs 20.34% at the time of writing. I wonder if C runtime issues are a factor here. Visual C++ 6.0 is the last version that links to the standard msvcrt.dll; see Visual Studio 2005 DLL Hell for more details. That’s why I still have it installed on my machine. If that’s not it, I’d be interested to know why so many are still using this old product.
  • By contrast, there has been rapid C# 2.0 adoption. 18.93% C# 1.x versus 44.32% C# 2.0. I can understand this; .NET 2.0 is considerably improved over 1.x and there is little reason not to switch.
  • Finally, there is a decent showing for Delphi at 24.54%. No surprise here; it’s a fantastic tool for Win32 coding. I guess the problem for Borland is that many are still using Delphi 7.x or earlier versions.

Note that the percentages add to more than 100% because programmers use multiple tools; and that this is not a reliable snapshot of anything other than Code Project’s community.

Reinventing HTML: it may be too late

The Director of the W3C, official guardian of web standards, says HTML will be reinvented. In his blog entry, Tim Berners-Lee says the W3C has failed to drive adoption of XML, the well-formed web:

The attempt to get the world to switch to XML, including quotes around attribute values and slashes in empty tags and namespaces all at once didn’t work. The large HTML-generating public did not move, largely because the browsers didn’t complain.

I applaud his honesty; yet at the same time this is a huge admission of failure. The W3C’s HTML strategy for the last seven years at least has been based around shifting the web to XHTML, an XML specification. HTML 4.x was frozen and has not been touched since December 1999. If you have heard talk of HTML 5.0, it is because of the work of the WHATWG, not the W3C.

A W3C-approved HTML 5.0 would likely have had significant impact back in, say, 2002. No doubt IE7, Firefox 2.0 and Safari would all now support it.

But now? I’m not convinced this will make much impact. As Joe Clark says:

HTML is a topic of interest. But it isn’t an outright fiasco. HTML, in large part, works fine right now.

The reason it works fine is that the world is moving on. Interesting things on web pages now happen outside HTML. The big story in web design over the last couple of years is about (separately) Flash and JavaScript/AJAX – though AJAX does use the HTML DOM (Document Object Model). Now we are watching Microsoft to see if it can pull off Windows Presentation Foundation/Everywhere, its answer to Flash. HTML is becoming a container for other types of content.

Another question: if the W3C has failed to achieve XHTML adoption, why will HTML 5.0 be any different? Berners-Lee suggests that it will be different because the process will be better:

Some things are very clear. It is really important to have real developers on the ground involved with the development of HTML. It is also really important to have browser makers intimately involved and committed. And also all the other stakeholders, including users and user companies and makers of related products.

Fair enough; and Daniel Glazman for one is buying it. I’m not sure. Will the process really be so different? The key question is what the de facto powers of the web will do, the likes of Microsoft, Adobe, Google, and Mozilla. Without their support, HTML 5.0 is nothing – and I don’t mean just the token person on the committee.

The W3C doesn’t need to reinvent HTML. It needs to reinvent itself.


IE7: 22 hours to catch a phish

It is now 24 hours since I received an obvious phishing email in my inbox and reported it through both IE7 and Firefox 2.0. Two hours ago, IE7 still said, “This is not a reported phishing website”. Now it’s finally made it:

If this is typical, then the IE7 phishing filter is little use. Phishing sites don’t last long, usually only a few days. Most victims will click that link the moment it turns up in their inbox, not a day later. Speed is of the essence. After 22 hours, most of the damage will already have been done. 

Actually, the IE7 phishing filter could be worse than useless. The message, “This is not a reported phishing website” imparts a false sense of security, making it more likely that someone will tap in their personal information.

Checking again in Firefox, it now catches the phish on its downloaded-list setting, which is the default. Using the dynamic query option in Firefox caught it earlier, but even that won’t catch a brand new phish.

Let me add that anyone clicking one of these links is ignoring plentiful advice from banks and from the media; and in this case the lack of an SSL connection is another sure-fire indication that this is a forgery. But some phishing attempts are cleverly phrased, making you think that someone has placed an order in your name, or hacked your PayPal account, or damaged your eBay reputation. In the heat of the moment, it is easy to make mistakes.

Conclusion: Don’t rely on phishing filters to protect you; and if you want to use the one in Firefox, turn on dynamic queries (which means sending a record of your browsing activity to Google).


Phishing part 2: Firefox gets there first

It’s three hours since I reported a phishing site to both IE7 and Firefox (Google). I revisited the site in both browsers. At first, Firefox displayed the site as before; but then I switched it to query Google dynamically. Presto! This appeared:

Note that the dynamic query setting is not the default, presumably because of its privacy implications. However, it is clearly more effective than the default downloaded list.

At the time of writing, IE7 is still saying “this is not a reported phishing site”; even though I reported it several hours ago.

This research is not bullet-proof. For all I know, someone else reported the site yesterday. Still, it’s an indication.

I’m still not clear why these browsers can’t figure this out for themselves: the page looks like a banking site, it’s asking for a password, and it’s not an SSL connection – perhaps the user should be alerted. That doesn’t strike me as particularly advanced analysis.
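To make the point concrete, the heuristic I have in mind amounts to a couple of lines of logic. Here is a rough Python sketch (my illustration only – the function name and the crude string match are mine, and a real browser would inspect the parsed DOM rather than raw markup):

```python
from urllib.parse import urlparse

def looks_like_phish(url, html):
    """Crude heuristic from the post: warn when a page asks for a
    password but is not served over an SSL connection."""
    plain_http = urlparse(url).scheme != "https"
    wants_password = 'type="password"' in html.lower()
    return plain_http and wants_password
```

A blacklist needs someone to report the site first; a check like this fires on day zero, which is exactly when the damage is done.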

See here for an update.

Firefox 2.0, IE7 both fail phishing test

I’m not in the habit of visiting these sites, but when an email apparently from Bank of America plopped into my inbox a few minutes ago, it seemed the ideal moment to test out my brand new browsers – release versions of IE7 and Firefox 2.0.

The score is tied at zero for both browsers. Here’s the site in IE7:

Looks good, doesn’t it? No little padlock; so just to be sure I clicked Tools – Phishing filter – Check this website:

Personally I think this dialog is overly reassuring. Further, it strikes me that most sites where you suspect phishing are probably aping a site that uses SSL, so the dialog could usefully alert me to this. Never mind, let’s try Firefox 2.0:

No better, sadly. I tried both the options in the security section, including the scary one that sends all your web activity to Google, but still Firefox failed to warn me that I was about to give away precious financial secrets.

Luckily I don’t have an account with Bank of America. Still, the lesson here is that neither browser is magic. There’s a delay between the appearance of a phishing site and its blacklisting. It’s the same problem with anti-virus signatures: default permit is a broken security model. You have been warned.

Incidentally I reported the sites in both browsers. No instant change; but I’ll try the URL again later.

PS: see here and here to see how quickly IE7 and Firefox started detecting this fraudulent site.

On deceptive error messages

If error messages told you what was really wrong, developer and admin productivity would soar.

I lost hours of my life over a problem with ntbackup. The error message was “C is not a valid drive or you do not have access”. Three different Microsoft support engineers gave it their attention, but we never identified the true problem. The drive was valid, of course, and the user had full local admin rights.

More recently I was working on my Common Feed List blogreader and hit this unusual error:

Hmm, “Listbox has too many items” – yet the error fired on the 8th item being added. After scratching my head for a few minutes, I figured out the problem: a blog with items that have an empty title element. It’s an Atom feed, and the XML looks like this:

<title mode="escaped" type="text/html"/>

The IE7 RSS Platform API converts this to a null value in the item’s Title property. I was trying to render this as a string in the list box. Poof.

Digression: should the RSS Platform treat the empty element as null, or as an empty string? XML is not good at making this distinction. Since the title element in this case is present, but empty, I tend to the view that it should be an empty string; but others more expert may disagree.

So I fixed the code to check for null and convert it to an empty string and all was well. No thanks to the error message.
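The null-versus-empty distinction is easy to reproduce outside the RSS Platform. Here is a small Python sketch (my illustration, not the original code) showing how a present-but-empty element comes back as null, and the coerce-to-empty-string fix:

```python
import xml.etree.ElementTree as ET

# The offending Atom fragment: the title element is present but empty.
elem = ET.fromstring('<title mode="escaped" type="text/html"/>')

# ElementTree, like the RSS Platform, reports the content as null (None).
assert elem.text is None

# The fix: coerce null to an empty string before handing it to the UI.
title = elem.text or ""
```

Whichever way you come down on the null-versus-empty-string question, defensive code on the consuming side is clearly needed.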


The Rickie Lee Jones MP3 store

I’ve just come across the Rickie Lee Jones MP3 store. Rickie Lee Jones is the singer/songwriter who brought us the sexy, sublime Chuck E’s in Love back in 1979. Now she’s the biggest artist on sale at Great Big Island, where you can buy both CDs and MP3s. The percentages are not stated, but it seems that the musicians get a much larger slice of the profits than they would from iTunes or other legal download sites.

I like the idea, especially as there is no troublesome DRM to contend with. However I would much prefer files without the lossy compression of MP3, especially since these are encoded at a barely adequate 128Kbps. Robert Fripp’s download store can do it; why not Great Big Island?


RSS in IE7: not too good

I’m now 24 hours into my attempt to use IE7 in place of my previous dedicated blog reader. It’s tolerable, but only just.

On the positive side, feeds are neatly presented and work well with IE7 tabs. If you want to read the full text or comments for a post, right-click the header and choose Open in New Tab. This is particularly handy for slow pages; you can carry on reading the feed while all the ads and stuff on sites like news.com open in the background in the new tab.

So what’s wrong with it? The biggest problem is that IE7 has no real concept of a feed item. It must be there internally, but it isn’t exposed. This messes up the management of read/unread items. You cannot mark an item as read or unread; you can only mark a feed as read. For example, say you select a busy feed like Engadget and there are 6 unread items with those large shiny images scrolling well out of sight down the page. The top item catches your eye, so you click it to read. IE7 now considers all the other items as read as well – unless you remember to unselect “Mark feed as read” every time. As a result, you are very likely to miss some items if you use IE7 for feed reading.

Next snag: there’s no way to search feeds. This turns out to be a problem with the underlying RSS platform. Unless I’ve missed it, there are no methods for searching feeds; you have to iterate through each item and search in your own code. I presume that means that the centralized RSS store has no full text index, which is a shame. Anyway, IE7 has no such feature, so if you think to yourself, “I saw that in a blog this morning…”, but can’t remember which, then you have to turn to Google or Technorati.
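Lacking a search method or an index, the only option is a brute-force walk over every stored item. A rough Python sketch of the idea (the feed/item dictionaries here are invented for illustration; the real RSS Platform is a COM API with its own object model):

```python
def search_feeds(feeds, term):
    """Walk every item of every feed, matching on title and description.
    With no full-text index, cost grows with the total number of items."""
    term = term.lower()
    hits = []
    for feed in feeds:
        for item in feed["items"]:
            # Titles can be null (see the empty-title Atom bug), so coerce.
            text = (item.get("title") or "") + " " + (item.get("description") or "")
            if term in text.lower():
                hits.append((feed["name"], item.get("title")))
    return hits
```

Fine for a handful of feeds; painful once the store holds months of items, which is presumably why a built-in index would matter.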

Third, you cannot get a single view of all unread items. This is silly, as it is almost a defining feature of an offline blog reader: “Show me my unread items”. Instead, feeds with unread items are bolded, and you have to click each one to read. Lots of mouse clicks, not nice.

Fourth, it’s difficult to organize your feeds. Feeds sort themselves alphabetically, though sometimes you have to exit and restart IE7 to refresh the sort order. You can drag-and-drop feeds in the list, except you can’t, because although IE7 draws a horizontal bar showing where the feed will be dropped, it doesn’t drop there at all. It goes to the bottom of the list, and re-sorts alphabetically when you next restart. You can create subfolders, but you can’t select a group of feeds and move them between folders; you have to do them one at a time.

Maybe Microsoft doesn’t really want you to read RSS feeds in IE7. Perhaps the idea is that you buy Outlook 2007, which also uses the RSS platform.

The only bright spot is the API. I was so annoyed about the folder management that I ran up VB6 and wrote some code to move all the items in one folder to another. It worked sweetly. Perhaps I will write my own blog reader; I am sure the community will soon come up with some handy RSS platform readers and managers – maybe there are some already?
