Category Archives: internet


Amazon S3 sample update

When I added background threading to my Delphi S3 sample, I inadvertently broke the ability to connect over SSL. I’ve fixed the problem and included the necessary OpenSSL DLLs in the download, so you can run it even if you don’t have Delphi. I use it to back up my own files.

Amazon S3 is a web service for storing files on the internet. It works well and is good value compared to most online storage services.
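For anyone curious what talking to S3 involves at the wire level, here is a rough sketch, in Python rather than Delphi, of the request signing used by S3’s original REST API: an HMAC-SHA1 over a handful of request headers, Base64-encoded into the Authorization header. The credentials, bucket and file names are placeholders, and the sketch ignores the optional x-amz- headers.

```python
import base64
import hashlib
import hmac

def sign_s3_request(secret_key: str, verb: str, content_md5: str,
                    content_type: str, date: str, resource: str) -> str:
    """Build the Base64-encoded HMAC-SHA1 signature used by the
    original (circa 2006) S3 REST authentication scheme."""
    string_to_sign = "\n".join([verb, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Placeholder credentials and request, for illustration only.
signature = sign_s3_request(
    secret_key="EXAMPLE_SECRET_KEY",
    verb="PUT",
    content_md5="",
    content_type="application/octet-stream",
    date="Tue, 27 Mar 2007 21:15:45 +0000",
    resource="/mybucket/backup.zip",
)
auth_header = f"AWS EXAMPLE_ACCESS_KEY:{signature}"
```

The same string-to-sign construction works for GET and DELETE; only the verb and headers change.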

The distinctive features of this sample are, first, that it is written in Delphi and, second, that it is native Win32. Most of the samples out there are for Java, .NET or scripting languages.


Google ranks MSN search top

This amused me. After reading on Slashdot how Google “claim the top ad position for searches relevant to its own products” I tried a few tests. The first one I tried was for the word “search”:

I noted that in my results Google was not claiming the top ad spot; what amused me more was the place of MSN search in the result list: no. 1.

My hunch is that MSN gets a boost from having the word “search” in the URL. An impressive lack of bias from Google.

Note that your results may (will) vary. It’s dangerous to draw any general conclusions about Google ranking from your own searches, because the search engine takes into account both your location and your previous search history. Potentially it knows even more than that about your browsing habits, if you use a product like Google Toolbar or the phishing filter that sends a record of every page visited back to the mothership, though I’m not sure how much, if any, of this data is used to optimize searches.

Think of it like Amazon. You go there, and all your favourite music or books are there on the front page. That’s just your history being echoed back at you, not a reliable indication of what Amazon is promoting.

As far as the Slashdot piece goes, all I can say is: case not proven.



The death of SVG

This is not news; but I’ve just come across Adobe’s end-of-life notice for its SVG viewer. Adobe was a key supporter of SVG, which is the W3C standard for vector graphics and animation embedded in web pages, until it acquired Macromedia and with it the rival but proprietary Flash technology. The demise of the Adobe viewer is a shame for SVG supporters since it was the best available. All very predictable, though I’m not impressed by the reason given in Adobe’s FAQ on the subject [pdf]:

There are a number of other third-party SVG viewer implementations in the marketplace, including native support for SVG in many Web browsers. The SVG language and its adoption in the marketplace have both matured to the point where it is no longer necessary for Adobe to provide an SVG viewer.

In this context “matured” must mean “critically ill”, with Adobe’s announcement the killer blow (though let’s acknowledge that SVG was making limited headway even before the merger). The real reason comes a little further down:

You may also want to consider converting your SVG application to an Adobe Flex® application.

It’s easy to understand Adobe’s decision, though let me close with a question. How much harder would it be for Microsoft to establish WPF/E, if the industry had settled on a W3C standard rather than the proprietary Flash?


A simple blog reader for the IE7 common feed list

Readers of this blog will know of my dissatisfaction with both the IE7 feed reader and the RSS integration in Outlook 2007.

I’ve now posted the (VB.NET) code for my quick-and-dirty solution, the Hands On Common Feed List Reader.


What problems does this solve? Mainly:

  • It allows me to browse through blogs by item and not by feed
  • It reads the feed list directly, bypassing Outlook’s misguided synchronization efforts
  • It gives me a quick view of all unread items
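A rough sketch of the item-centric view described above, in Python rather than the VB.NET of the actual reader (which talks to the Windows RSS Platform API): flatten the items from several feeds into one list sorted newest-first. The two inline feeds are stand-ins for real subscriptions.

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def merge_feed_items(feed_xml_docs):
    """Flatten items from several RSS 2.0 documents into one
    newest-first list, so you browse by item rather than by feed."""
    items = []
    for doc in feed_xml_docs:
        root = ET.fromstring(doc)
        feed_title = root.findtext("channel/title", default="")
        for item in root.iter("item"):
            items.append({
                "feed": feed_title,
                "title": item.findtext("title", default=""),
                "date": parsedate_to_datetime(item.findtext("pubDate")),
            })
    items.sort(key=lambda i: i["date"], reverse=True)
    return items

# Two stand-in feeds; a real reader would load these from the feed list.
feed_a = ("<rss version='2.0'><channel><title>Blog A</title>"
          "<item><title>Older post</title>"
          "<pubDate>Mon, 01 Jan 2007 10:00:00 GMT</pubDate></item>"
          "</channel></rss>")
feed_b = ("<rss version='2.0'><channel><title>Blog B</title>"
          "<item><title>Newer post</title>"
          "<pubDate>Tue, 02 Jan 2007 10:00:00 GMT</pubDate></item>"
          "</channel></rss>")
merged = merge_feed_items([feed_a, feed_b])
```

An unread-items view is then just a filter over the merged list against whatever read-state flag the feed store exposes.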

Just to be clear, this is a reader for the IE7 common feed list. You still need to subscribe and unsubscribe using IE7. Lots of features could be added, but for now this works for me; however, fixes and improvements are welcome.

Download the code here.

More on how this is put together in the February 2007 issue of Personal Computer World.

If anyone would like just the executable, let me know and I’ll make a quick setup. Requires .NET 2.0.

 WS-* is dead

Today I spoke to Adam Gross, Vice President of Developer Marketing at His company has recently announced that during its third quarter, API transactions (driven by web services) surpassed CRM page views on its service for the first time. I asked Gross whether would move towards the WS-* standards as it evolves its API.

“We’re very big advocates of SOAP and WSDL,” he told me. “We’re probably the largest users of SOAP and WSDL in any business anywhere in the world. That said, my sense is that WS-* is dead. There is not a lot happening in WS-* that is being driven by customers and use cases, and there is not a lot that is being informed by what we’ve learned from ‘Web 2.0’. I question its relevance.”

How then will solve problems like those which WS-* addresses, such as reliable messaging? “If any part of WS-* has promise it’s reliable messaging, but I’ve been part of the web services technical community since late 2000. We’ve been talking about reliable messaging standards since then. That’s close to seven years. You have to wonder if the WS-* process is going to reach a meaningful conclusion.

“Instead, we’re going to see more organic innovation and best practices. There is no standard for AJAX. There is no standards body. It’s community-driven rather than vendor-driven, and that’s been very successful. I keep an open mind, but I think that WS-* and the people who are working on it need to start showing their relevance.” for developers

I attended two sessions today given by Danny Thorpe, formerly of Borland, on the developer API for, Microsoft’s attempt to match Google as a Web 2.0 platform. Even the business model is Google-style: everything is free, supported by advertising, with the possibility of revenue sharing for users. Unlike ASP.NET, the API is cross-platform on both client and server – presumably itself runs on .NET, but that is not a requirement for users of the various gadgets and services.

It’s strange to hear Thorpe talking about doing clever stuff with JavaScript – quite a change from Delphi’s native code compiler – but he describes it as just another way to write libraries for Microsoft’s platform. He is an enthusiast for client-side aggregation (mash-ups), explaining that it improves scalability by reducing the amount of processing needed on web servers.

A key theme is how to build social applications that draw on the vast userbase of Hotmail and Windows Messenger, but without compromising privacy.

Interesting stuff, and I don’t doubt Microsoft’s commitment to, even though it is not centre-stage here at Tech-Ed. At the same time I am picking up a lack of cohesion in the overall platform strategy. Microsoft has endeavoured to create an internal startup culture, and while this is clearly generating some energy it comes at a price. There seem to be a number of different sub-organizations which do not work closely together. The Office Live initiative, which provides web hosting and cloud-based applications aimed at small businesses, is apparently separate from The ASP.NET AJAX libraries are different from the JavaScript libraries, even though there is overlap in the problems they address. Danny Thorpe is aware of these issues and says the company is working on internal collaboration, but it seems to me that fragmentation will be a growing problem as the various groups evolve.

Reinventing HTML: it may be too late

The Director of the W3C, official guardian of web standards, says HTML will be reinvented. In his blog entry, Tim Berners-Lee says the W3C has failed to drive adoption of XML, the well-formed web:

The attempt to get the world to switch to XML, including quotes around attribute values and slashes in empty tags and namespaces all at once didn’t work. The large HTML-generating public did not move, largely because the browsers didn’t complain.
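The rules Berners-Lee lists are exactly what separate a strict XML parser from a forgiving browser. A quick Python sketch makes the point: typical hand-written HTML fails an XML parse, while the XHTML equivalent passes.

```python
import xml.etree.ElementTree as ET

# Typical hand-written HTML: unquoted attribute value, unclosed empty tag.
html_style = "<p align=center>line one<br>line two</p>"
# The XHTML equivalent: quoted value, self-closing empty tag.
xhtml_style = '<p align="center">line one<br/>line two</p>'

def is_well_formed(markup: str) -> bool:
    """True if a strict XML parser accepts the markup as-is."""
    try:
        ET.fromstring(markup)
        return True
    except ET.ParseError:
        return False
```

Browsers render both versions identically, which is precisely why, as Berners-Lee says, the HTML-generating public never felt any pressure to move.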

I applaud his honesty; yet at the same time this is a huge admission of failure. The W3C’s HTML strategy for at least the last seven years has been based around shifting the web to XHTML, an XML specification. HTML 4.x was frozen and has not been touched since December 1999. If you have heard talk of HTML 5.0, it is because of the work of WHATWG, not the W3C.

A W3C-approved HTML 5.0 would likely have had significant impact back in, say, 2002. No doubt IE7, Firefox 2.0 and Safari would all now support it.

But now? I’m not convinced this will make much impact. As Joe Clark says:

HTML is a topic of interest. But it isn’t an outright fiasco. HTML, in large part, works fine right now.

The reason it works fine is that the world is moving on. Interesting things on web pages now happen outside HTML. The big story in web design over the last couple of years is about (separately) Flash and JavaScript/AJAX – though AJAX does use the HTML DOM (Document Object Model). Now we are watching Microsoft to see if it can pull off Windows Presentation Foundation/Everywhere, its answer to Flash. HTML is becoming a container for other types of content.

Another question: if the W3C has failed to achieve XHTML adoption, why will HTML 5.0 be any different? Berners-Lee suggests that it will be different because the process will be better:

Some things are very clear. It is really important to have real developers on the ground involved with the development of HTML. It is also really important to have browser makers intimately involved and committed. And also all the other stakeholders, including users and user companies and makers of related products.

Fair enough; and Daniel Glazman for one is buying it. I’m not sure. Will the process really be so different? The key question is what the de facto powers of the web will do, the likes of Microsoft, Adobe, Google, and Mozilla. Without their support, HTML 5.0 is nothing – and I don’t mean just the token person on the committee.

The W3C doesn’t need to reinvent HTML. It needs to reinvent itself.


IE7: 22 hours to catch a phish

It is now 24 hours since I received an obvious phishing email in my inbox and reported it through both IE7 and Firefox 2.0. Two hours ago, IE7 still said, “This is not a reported phishing website”. Now it’s finally made it:

If this is typical, then the IE7 phishing filter is of little use. Phishing sites don’t last long, usually only a few days. Most victims will click that link the moment it turns up in their inbox, not a day later. Speed is of the essence. After 22 hours, most of the damage will already have been done.

Actually, the IE7 phishing filter could be worse than useless. The message, “This is not a reported phishing website” imparts a false sense of security, making it more likely that someone will tap in their personal information.

Checking again in Firefox, I find it now catches the phish using its downloaded-list setting, which is the default. The dynamic query option in Firefox caught it earlier, but even that won’t catch a brand new phish.
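The trade-off between the two modes can be put as a toy model – purely illustrative, not how either browser is implemented: a downloaded list misses anything reported after the last refresh, while a live query stays current at the cost of reporting every visited URL to the service operator. Neither catches a phish that nobody has reported yet.

```python
# Toy model of the two blacklist lookup modes.
snapshot_blacklist = set()  # the list as downloaded this morning
live_blacklist = {"http://fake-bank.example/login"}  # reported since then

def downloaded_list_check(url: str) -> bool:
    """Consult only the periodically refreshed local snapshot:
    private, but blind to anything reported after the refresh."""
    return url in snapshot_blacklist

def dynamic_query_check(url: str) -> bool:
    """Ask the service at click time: always current, but it means
    sending every visited URL to whoever runs the service."""
    return url in live_blacklist

fresh_phish = "http://fake-bank.example/login"
```

In this model the freshly reported phish slips straight past the snapshot check while the live query flags it, which is exactly the behaviour observed above.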

Let me add that anyone clicking one of these links is ignoring plentiful advice from banks and from the media; and in this case the lack of an SSL connection is another sure-fire indication that this is a forgery. But some phishing attempts are cleverly phrased, making you think that someone has placed an order in your name, or hacked your PayPal account, or damaged your eBay reputation. In the heat of the moment, it is easy to make mistakes.

Conclusion: Don’t rely on phishing filters to protect you; and if you want to use the one in Firefox, turn on dynamic queries (which means sending a record of your browsing activity to Google).


Phishing part 2: Firefox gets there first

It’s three hours since I reported a phishing site to both IE7 and Firefox (Google). I revisited the site in both browsers. At first, Firefox displayed the site as before; but then I switched it to query Google dynamically. Presto! This appeared:

Note that the dynamic query setting is not the default, presumably because of its privacy implications. However, it is clearly more effective than the default downloaded list.

At the time of writing, IE7 is still saying “this is not a reported phishing site”; even though I reported it several hours ago.

This research is not bullet-proof. For all I know, someone else reported the site yesterday. Still, it’s an indication.

I’m still not clear why these browsers can’t reason as follows: this looks like a banking site, it’s asking for a password, but it’s not an SSL connection – perhaps we should alert the user. That doesn’t strike me as particularly advanced analysis.
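That heuristic is simple enough to sketch in Python; the keyword list and function names here are my own invention for illustration, not anything either browser actually implements.

```python
from urllib.parse import urlparse

# Hypothetical keyword list, for illustration only.
BANK_KEYWORDS = {"bank", "account", "paypal", "ebay", "sign in"}

def looks_risky(url: str, page_text: str, has_password_field: bool) -> bool:
    """Warn when a page resembles a financial login form but is not
    served over SSL -- the simple check suggested above."""
    is_ssl = urlparse(url).scheme == "https"
    mentions_banking = any(k in page_text.lower() for k in BANK_KEYWORDS)
    return has_password_field and mentions_banking and not is_ssl

# A fake banking page over plain HTTP trips the warning...
risky = looks_risky("http://signin.example-bank.fake/",
                    "Bank of America Online Banking Sign In", True)
# ...while the same form over SSL does not.
safe = looks_risky("https://www.bankofamerica.com/",
                   "Bank of America Online Banking Sign In", True)
```

A real filter would need to be smarter about false positives, but unlike a blacklist this kind of check works the moment a new phish appears.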

See here for an update.

Firefox 2.0, IE7 both fail phishing test

I’m not in the habit of visiting these sites, but when an email apparently from Bank of America plopped into my inbox a few minutes ago, it seemed the ideal moment to test out my brand new browsers – release versions of IE7 and Firefox 2.0.

The score is tied at zero for both browsers. Here’s the site in IE7:

Looks good, doesn’t it? No little padlock; so just to be sure I clicked Tools – Phishing filter – Check this website:

Personally I think this dialog is overly reassuring. Further, it strikes me that most sites where you suspect phishing are probably aping a site that uses SSL, so the dialog could usefully alert me to this. Never mind, let’s try Firefox 2.0:

No better, sadly. I tried both the options in the security section, including the scary one that sends all your web activity to Google, but still Firefox failed to warn me that I was about to give away precious financial secrets.

Luckily I don’t have an account with Bank of America. Still, the lesson here is that neither browser is magic. There’s a delay between the appearance of a phishing site and its blacklisting. It’s the same problem with anti-virus signatures: default permit is a broken security model. You have been warned.

Incidentally, I reported the sites in both browsers. No instant change; but I’ll try the URL again later.

PS: see here and here to see how quickly IE7 and Firefox started detecting this fraudulent site.