Bye Seth

Spot on:

Lately, the noise seems to be increasing and the signal is fading in comparison. Too much spam, too many posts, too little insight leaking through.

Exactly the reason why I am unsubscribing. Bye.

The Way the .mobi Died

As we all know, this did not work. I hope this will, and that sanity prevails.

Increasingly, you can browse the real web on a phone and have a high quality experience. There is less and less need for a special dumbed-down version of the web just for mobile devices; instead we can have a single device-independent web that’s presented in the best possible way on a variety of devices.

I like how Apple solves problems by turning them upside-down. Or maybe just by choosing the right problem to solve.

Beta bag of bugs?

So, IE7 beta is available. Sort of, because it is not for everyone. Faruk shares his impressions and points to those of others, Dave Shea is not that impressed, and Molly shares some wisdom.

I, however, seriously doubt that this beta will improve much on the way to production. There is a sad “I told you so!” feeling to it, but it was PPK who told us two years ago:

Why is Microsoft unwilling to fix the CSS bugs that everyone’s been asking it to fix for ages? I think it’s not unwilling but unable to do so. Explorer’s code engine cannot be updated any more.

Faruk also comments:

IE7 builds upon existing (and outdated) technology. A complete rewrite of the engine would pose the gigantic risk of breaking 90% of the existing web as we know it. That’s not worth improving Standards-support to anyone, I would say.

If this is true, then either the rendering engine of IE7 will not be touched that much and will go on carrying around the same old bugs (in that case it would be better not to have IE7 at all, and just let that old browser dIE), or the engine will be modified, and new bugs will inevitably be introduced.

And this is the saddest thing: I catch myself thinking of IE7 not as a new browser, but as a collection of bugs and deficiencies with which I will have to deal as a web developer.

That makes me a pessimist, I guess…

[2005-07-30] Update: I may be wrong… I wish I were.

Posted in web

Google does not follow

Yes, this is yet another post about Google’s “Preventing comment spam”. So Google ran a little level-of-optimism test three days ago. The results?

Well, the good news is — there are optimists.

The bad news is — it will have zero (like this — 0) effect on the amount of comment spam. Or multiply zero by three, if you count MSN Search and Yahoo! too.

I think the title of the Google post has had a little bit bitten off it; the full version might read, say, “Preventing comment spam from influencing PageRank”. Because that is what the rel="nofollow" attribute does. It does not catch spammers and boil them in hot oil, nor does it put them in jail, nor does it make their web servers crash and burn — it simply leaves such links out of account when calculating PageRank.
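For illustration, the whole mechanism amounts to a single attribute in the markup (the link below is a made-up example):

```html
<!-- A comment link as rewritten by the blog software.
     rel="nofollow" tells the crawler not to count this link
     when calculating PageRank. The link itself still works:
     a human reader can click it exactly as before. -->
<a href="http://cheap-viagra.example.com/" rel="nofollow">great site!</a>
```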

Yes this is great, because now people googling for “cheap Viagra” or “enlarge your johnson” will get results not skewed (hopefully) by comment spam. Good for them. How often do you google for stuff like that? Good for you too.

The fly in the ointment is that bloggers seem to overestimate the importance of PR for spammers. More than a year ago Movable Type and Blogger offered redirects as a way to fight comment spam. Did that help? You wish…

The links in the e-mail spam I get are never counted towards PageRank. Despite server-side spam filters and client-side spam filters, I still get spam e-mails. Why is that? Because spam works this way — distribute links as widely as you can, hoping that some people will click them and some of those who click will be tempted to try some cheap-stuff-long-penis product. The percentage of those who will is extremely small, but a huge number of links distributed, multiplied by even a tiny percentage of interested readers, is still very well worth spamming.

OK, this move might lower PR for spammer sites. So it may affect SERPs, and spammers will get a bit less traffic from the search engines. So what? They will still have links on popular blogs read by many people. And if there is a link, one can click it, right? And “solutions” involving redirects make this even easier — I will not click a link to “online-poker-bla-bla.com”, but if there is some redirection involved, I will not know where the link leads until I click it.

The sad point is that “solutions” like these are not solutions at all; they just mark another victory for the spammers.

You start crippling and hiding your e-mail address — you are defeated; you cripple your comments system, making it more difficult for normal people to use, make real links invisible via redirects, force people to solve graphical puzzles, make links in their comments weightless by adding some fast-baked attribute — you are defeated again.

Spammers will try to inject their links everywhere they can (referrer spam, anyone?), and as long as they succeed at it, we can add as many attributes as we like — that is not a solution.

And I don’t know a solution yet. Do you?

Posted in web

Who cares about ampersands?

It is about time for another corollary of Godwin’s law:

As an online discussion about validation grows longer, the probability of mentioning unencoded ampersands approaches one.

No kidding! The reason is that such ampersands are easily the most common validation error. I have heard “What does that damn validator want from me?” more than once. Some know the answer already, but don’t really care. Either it seems so innocent and unimportant that it is not worth wasting time on, or code production is so out of control that trying to fix it might bring the entire company down.

So who cares about ampersands? Only two? Roger says that unencoded ampersands can be a problem. Inspired by him, I wrote a little demo to show how it works.

Nothing too complex — I simply try to pass 14 parameters to my PHP script, which displays their values. The first link has the names of the parameters separated by unencoded ampersands; the second link has a properly encoded href attribute. Try clicking them. What do we see? Instead of 14 values we get only two (more if you are using Opera — it’s browser-dependent), and they look weird…

Now, can this behaviour break your application? I’ll leave that for you to decide.

One more point to add — valid pages can also behave like this. This is because the validator barks not at ampersands in a URL as such — if an ampersand is followed by a known entity name, the validator will be happy. It is an unrecognized entity that produces the validation error.

You can check this here. All I did was remove the parameter dummy from the href. All the remaining parameters (except for id, which is not preceded by an ampersand) have corresponding entities, so the validator will remain silent. However, the results produced by the script should make a programmer cry out loud.
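The mechanism is easy to reproduce outside a browser. Here is a minimal Python sketch — Python merely stands in for what the browser and the server-side script do, and the URL is a made-up miniature of my demo. The browser entity-decodes the href attribute before requesting the URL, and html.unescape applies the same HTML5 entity table, where legacy entities like &copy and &sect need no trailing semicolon:

```python
from html import unescape
from urllib.parse import parse_qs

# The href as typed into the HTML source, ampersands left unencoded.
# "copy" and "sect" happen to be HTML entity names; "dummy" is not.
raw = "script.php?id=1&copy=2&sect=3&dummy=4"

# Entity-decode the attribute value, as a browser would:
decoded = unescape(raw)
print(decoded)
# script.php?id=1©=2§=3&dummy=4

# The query string the script actually receives is mangled —
# two parameters instead of four, and the first looks weird:
query = decoded.split("?", 1)[1]
print(parse_qs(query))
# {'id': ['1©=2§=3'], 'dummy': ['4']}

# With properly encoded ampersands, everything survives intact:
good = unescape("script.php?id=1&amp;copy=2&amp;sect=3&amp;dummy=4")
print(parse_qs(good.split("?", 1)[1]))
# {'id': ['1'], 'copy': ['2'], 'sect': ['3'], 'dummy': ['4']}
```

Note that &dummy passes through untouched — exactly why removing it from the demo URL makes the page validate while the script still gets garbage.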

So what can we do? There are some options:

  • Do nothing.
  • Encode them.
  • Avoid ampersands in our hrefs — especially if we pass parameters for the script to extract some content.
    Here is more on that.
  • Avoid ampersands by using a different separator. We may use the semicolon (;) for that purpose, as encouraged by the W3C.
    If you are using PHP, take a look at the arg_separator.input and arg_separator.output settings in your php.ini file.
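The PHP settings above do this on the server side; by way of illustration, here is the same idea sketched in Python, whose urllib.parse.parse_qs accepts an explicit separator argument (in reasonably recent versions, e.g. 3.9.2+). With semicolons there is simply nothing left to encode:

```python
from urllib.parse import parse_qs

# A semicolon-separated query string: no ampersands, so the href
# can go into the HTML source as-is — no &amp; encoding, and no
# risk of accidental entity names like &copy or &sect.
href = "script.php?id=1;copy=2;sect=3"
query = href.split("?", 1)[1]

# Tell the parser which separator the URL uses:
print(parse_qs(query, separator=";"))
# {'id': ['1'], 'copy': ['2'], 'sect': ['3']}
```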
Posted in web

Invalid Standards

An alternative title for this post could be “Crime and Punishment”.

So Mike did it again. Alongside his talent for design and coding he has a talent for igniting flame wars on standards and validation.

Only this time it rolled out on a much more cheerful occasion — the redesign of a different major site than the previous one, on which I will not comment right now.

The news was great, the news was heard, comments rolled here and there. Then Ethan came and stirred it a bit more. Keith picked it up and asked a very good question.

The problem is that Keith did not specify what he considers to be a “Web Standard”. Some (including me), also known as the “wrong gang”, think that this means HTML, XHTML, CSS, DOM — you name it. Others call “web standards” what I call “best practices” — clean, semantic code, separation of content and presentation, CSS layouts — do I really have to list everything you have known for a long time already?

In the first case, whatever Mike has to say, I have news for you — straight from the W3C:

A Strictly Conforming XHTML Document is an XML document that
requires only the facilities described as mandatory in this
specification. Such a document must meet all of the following criteria:

  1. It must conform to the constraints expressed in one of the three DTDs found in DTDs and in Appendix B.

  2. (…)

How wrong am I in supposing that validation errors mean the code does not exactly follow the DTD rules?
So, the question is answered. Don’t believe me — believe the W3C.

Now, before you start kicking me and trying to dig out some invalid document on my site, read this: invalid code may be OK, there are many factors that can make validity difficult to achieve, and invalid code can be better than valid code — and often is.

Only do me a favour — do not say that invalid code conforms to standards. Whatever difficulties you experience with processes, third-party CMSes, uncontrollable content, ads and the like — they make me feel sorry and compassionate, but they do not make the code any more standards compliant.

Actually, problems with complex systems, third-party software and hard-to-control content were mentioned quite often in the discussion, alongside the “big picture” and the inability to see it. It is not exactly clear how that works, but my wild guess is that sitting on a huge pile of crappy software, wild content and a messy CM process widens your horizons quite a bit.

I consider myself experienced enough to see the devil in the details of that big picture. Have you ever thought about how that same broken CMS came to life? And why it produces such horrible code?

Right: because nobody cared. Everyone was busy with more important issues — deadlines, team management, stakeholders, you name it. No one had the time or will to care whether the code was valid or invalid, let alone about unencoded ampersands. There are always more important things to do.

So let’s not complain — we are using software produced with exactly the same attitude towards standards as our own.
Sometimes I am not sure whether we are marching towards a better web or just spinning in a vicious circle.

However Keith is absolutely right on this:

I just wish people would recognize that when a large corporate site (Which frankly has much bigger fish to fry than Web standards. Period.) like ABC News makes even the slightest move toward Web standards it should be hailed as a victory for standards and nothing less.

Still, I’d like to hear less advocacy for invalid code from big names. Yes, we all know how the real work in the real world gets done. Let it be our little secret that sometimes it is OK to be not 100% OK. Big names should not forget that thousands of inexperienced web developers are listening to them. Humans are lazy, and coders are especially lazy, so whatever can be used as an excuse will be used, and we will never break out of the vicious circle.

We should know the rules extremely well to be sure when it is OK to bend or break them.

Update: I assume it is resolved now.

Posted in web

Got a blog?

Today I once again realised that I needed a blog — badly. So this 10-minute installation of WordPress will serve me for a while. Well, OK, it took a bit more than 10 minutes, but that’s because I intend to run this site in two languages, and I had to fiddle around to get content negotiation going.

My Lithuanian version was born dead in February. Now the English version is rolling out, and I risk ending up with two corpses in my closet, but I’ll try my best to avoid that horror.

Quite a mess lies ahead, and interesting times to come. Be prepared for lots of dust and falling bricks around here while I try to shake things down. Some things will come and go, some will stay, others will not come at all. The text of posts may change a bit at first, too, without any special mark-up or notice, but I will try to avoid that as much as possible. This very post has changed heavily from the original one-liner.

My English is far from perfect, but I expect you to understand what I mean :). Hopes are high that this site will help me improve my English as well as my general writing skills. I’ll appreciate any good soul who helps me with my spelling and grammar — and if such a soul appears, and it happens to be in a body able to use a keyboard, I’d ask it to write me an e-mail. Yes, I mean e-mail, not comments. Thank you.