Harold's Home

All articles in Webdesign

oh opera
The day has just started but this review of Opera's new browser has already made my day.

Opera Software reinvents complete irrelevance

Adding forms in MCMS

I'm working in MCMS and trying to add a form to pages I create. Unfortunately I cannot add a real form, as the entire page is already one and nested forms are not allowed in HTML. Normally when things are not allowed you can sneak around and do them anyway, with a reasonable expectation of getting them to work, as browsers are very tolerant of errors and whatnot.
Not so in this case!

I did manage to find a way to work around it, though, by just inserting the needed form elements without <form> tags and then using a very small bit of custom javascript. This custom script rewrites the normal action url of the resident form and overwrites a few of the default values with nothing. This could probably be handled a bit better, but I'm unsure whether javascript supports a function like PHP's unset().
function doForm() {
    // grab the page's resident form (named Form1 in MCMS pages)
    var theform = document.Form1;
    // strip surplus unwanted values that reside by default in MCMS pages
    theform.unwanted.value = "";
    theform.unwanted2.value = "";
}

And then the search button can have an onClick event as follows:
<input onclick="doForm();" type="submit" value="Search" name="cmdSearchSubmit" />

Unfortunately I cannot add javascript to my pages as they're deleted right away, so I'm stuck in limbo with nothing to show for my efforts except an angry post.

Update: good friend Jurjan (of Virtual Pet Rock fame) emailed me to suggest that if Microsoft's crappy management system deletes script tags I might be able to put the javascript into the onclick handler of the submit button. This works like a charm! So instead of calling a function I just put the entire javascript there like so:
<input onclick="javascript:theform=document.Form1; theform.method='get'; theform.action='' +theform.linktype.value+ '&SF='+theform.SF.value+ '&ST=' +theform.ST.value+'&WW=' +theform.WW.value; theform.__EVENTARGUMENT.value = ''; theform.__VIEWSTATE.value = ''; theform.__EVENTTARGET.value = '';theform.submit();" type="submit" value="Zoek" name="cmdSearchSubmit" />

It may be a bit clunky but it works.

User generated content gone wrong
A little over a year ago I wrote with horror about my inclusion in a web 2.0 workgroup.

I'm happy to report that in exactly 1 year and 1 week nothing at all has happened. Like most web 2.0 ventures this one was dead on arrival. No emails were sent, no meetings were set up, no teamsite was made to discuss stuff. Nothing. Zip. Nada.
Which is probably for the best, as there are serious problems with the thinking that web 2.0 is somehow of immense value.

There's a huge problem with the whole web 2.0 thing of user-generated content, and that problem was brought to my attention last week when I joined Hyves, a social networking site with a primarily Dutch base. (It's like Orkut or Facebook but for the Dutch.) The problem is that the masses do not really have a clue about how to generate content. And web 2.0 websites generally make it too easy for people to just add anything and everything. They exert no control, trusting that the super-intelligence of the web will take care of things. This is a fallacy. And a pretty big one.

Take a look at the screenshot below. It's part of the interface where you can enter your favorite authors; a similar screenshot could be made of the bands you like, the composers, the whiskies, the food, your scientific heroes, etc.

I'm trying to enter the fact that I like Michael Marshall Smith's books. There are some checkboxes for popular entries and a field you can use to enter other ones; a drop-down lists hints based on the letters typed so far.
Only which Michael Marshall Smith do I actually like?

For the record: they all mean the same author. Some are spelled wrong, one mysteriously lists a book after a comma, and one tries to hint at the fact that Michael Marshall is the same person as Michael Marshall Smith (writing in a different genre, much like Iain Banks and Iain M. Banks are the same author). Even allowing for these oddities it's obvious something is wrong: there are two entries with exactly the same spelling error and four entries that are spelled correctly. *

User-generated stuff like this is all well and good but there have to be controls. And it's up to the designer of a website to set up restrictions.
In the library business this problem was realized hundreds of years ago, which is where thesauri and subject classification come in: controlled lists from which a cataloguer must choose when entering new objects. Not everyone can enter new terms; write-access to these lists is very restricted and often has to go through a review process. While this may not work on something like a social networking site, we can clearly see some method of deduplication wouldn't go amiss and would definitely help a lot in getting everyone on the same page.

Web 2.0 has a long way to go before basic stuff like this is sorted out, not just in this one isolated case but all over the board. When it is, maybe it will be taken more seriously.

*) By the way, the capitalisation of The Never Ending Story (a children's book by Michael Ende I enjoyed a lot when young) is wrong as well. The horror!

A while ago I wrote about going back to iCab as my primary browser. Since iCab 3 is based on WebKit, the Safari engine, its failures in the areas of performance, look-and-feel and standards compliance have all but disappeared. True, the old iCab had some distinct advantages, such as rendering nested <q> elements differently, but overall the improvements have been far, far greater than the few disadvantages one hardly ever notices. In fact this site is the only one I know of that actually employs nested quote elements, and it's not too hard to give that up. iCab does have an alternate rendering mode to revert to the old ways, but according to the developer it comes at a performance hit so I haven't even tried it. iCab also benefits from Safari goodies such as the element inspector, greatly increased site compatibility and the like. Old favorites like HTML/CSS error reporting and automatic reloading of (locally saved) changed pages are all still there of course.

In the latest beta versions iCab has rapidly added a few extra features that are really interesting, ones that are a prime example of the vision of iCab's developer.
One of them is so offbeat that, to my knowledge, no one else has implemented it: saving webpages as stand-alone applications.

Quite what the use of this is I had no idea at first, though while writing about it I realised it might be extremely useful for doing presentations about websites in places where you're not sure you'll have a working internet connection. Simply go to your website beforehand, save the site as an application, and everything will work as expected even if the wireless craps out. Forms won't work, but basic navigation and the like will. You can set url filters to choose which kinds of paths and files you want to capture (including CSS, Flash, PDF etc.) in your application. Interesting stuff; perhaps too marginal to be copied instantly, but something that might make the browser stand out in certain areas. The applications will only work on a Mac, but that is hardly a problem for such a niche feature.

The other new thing is truly a thing of beauty and one that all webdevelopers will instantly fall in love with.
Browsers have had the ability to cache webpages since the dawn of the web. This reduces network load by saving files on the harddisk and, when such files are needed again, referencing these instead of getting new ones from the web.
Browsers like Netscape have implemented ways to view this cache via the obscure about:cache protocol, but no one ever really used that, as you were inundated with hundreds of pages of boring things to wade through in the hopes of finding that one elusive image file you were after.
iCab's latest betas have gone a step further and projected a kind of Spotlight view over this. See the following image for what I mean.

Clicky for a bigger version.

I have opened the Cache browser, available from the Tools menu, and entered the term "web" in the filter bar. I have also chosen to show only CSS files. This allows me to instantly drill down to the files I'm interested in, namely those from a federated search engine I'm working on for my library. Note that I could have been even more specific and typed "webfeat" to see only the relevant results; or been less specific and not entered any filter term at all, and still I would have been able to drill down to the desired files, as I can sort on last visited and url amongst other things.
This feature alone, I think, is worth the price of admission to the club, and it has already helped me in my job just a few days after it was introduced.

iCab is shareware now and to be honest I have no idea which features are included in the standard version and which in the paid version, but I'm guessing this will only be available once you've registered (plus you get all these extra insights into new developments ahead of the crowd!). Still, 20 euros isn't that much if you've already spent several hundred on webdesign bundles from Adobe and a good text editor like BBEdit. Don't forget iCab integrates with BBEdit, letting you view HTML source in BBEdit instead of in the browser itself if you like.

I get the feeling other browser makers will soon rip off this feature as it's extremely powerful and, I am guessing here, not that hard to develop. But like so many other features (*cough* ad filters *cough*) iCab is first to the scene with a workable implementation.

If you work the web, iCab is well worth the admission price.

On resetting browser styles
NO CSS RESET pretty much sums up why I never use reset stylesheets.

The problem I've had with these resets is that I then find myself declaring much more than I ever needed to just to get browsers back to rendering things the way I want. As it turns out, I'm perfectly happy with how a number of elements render by default. I like lists to have bullets and strong elements to have bolded text.

And I'm okay if the various browsers show things slightly differently.

Like the author, I'm ok with different browsers behaving a bit differently on pages I make. I am dismayed at the thought of having to specifically re-enable things that are the same everywhere, like italics for <em> and bold for <strong> elements.
I can see the attraction and in many cases I do override standard behaviour; but to reset everything? Nah, I like my CSS to be clean and maintainable. Not because I think most of my sites will be taken over by someone else, not because I don't have the time to do this, because I could make the time. It's simply that I think my time can be better spent elsewhere.
On most projects a lot of my time goes into managing copywriters, who are often not professional writers but merely students or faculty staff. And often that's a bigger challenge than managing different layouts in different browsers.

In a rather surprising move that has many webstandards advocates pleasantly surprised, Microsoft today reversed its decision to require idiotic superfluous tags to make IE behave.

We've decided that IE8 will, by default, interpret web content in the most standards compliant way it can. This decision is a change from what we’ve posted previously.
Developers who want their pages shown using IE8's 'IE7 Standards mode' will need to request that explicitly (using the http header/meta tag approach described here).

From: Microsoft's Interoperability Principles and IE8.

So incompetent webauthors still have the option to force IE 8 to behave like a backwards browser* (IE 7), while the rest of the forward-looking web can enjoy better pages with more support for standards and no useless code. At least in theory: Microsoft promised huge leaps in standards support for IE 7 and failed to deliver, so who knows what IE 8 will actually bring.
It's a good thing that Microsoft reversed its misplaced "solution", but forgive me if I'm a bit skeptical of the end results until I actually see them in a shipping product.

Roger Johansson has a nice summary and the comments are worth a read as well.

*) I'm betting that a lot of CMS's will default to this state of affairs.

Breaking the web
After this week's A List Apart article by Jeremy Keith little more needs to be said about Microsoft's decision to keep the web from breaking (again) when they release a new version of Explorer.

They shoot browsers don't they?
The proposed default behavior for version targeting in Internet Explorer solves the problem of “breaking the web” in much the same way that decapitation solves the problem of headaches. In its current state, version targeting is a cure that will kill the patient. Version targeting could have been an opportunity for Microsoft to demonstrate innovation. Instead, the proposed default behavior demonstrates a fundamental misunderstanding of the World Wide Web.

It's little surprise to me that Microsoft doesn't understand the web. They have been struggling with the whole interwebs concept since Windows 95 and it doesn't look like they know what they're doing now. Sure, IE 7 fixed a lot of stuff, but it didn't go far enough. So they're now saying 'conform to this new and arbitrary standard we implemented because we can't be arsed to fix things or else...'. Bollocks to that, I say. I'll be damned if I'm going to implement needless code because Microsoft's Internet Explorer team can't be bothered to fix their pathetic broken browser. Let my sites look different in each and every version of their backwards browser. Soon people will stop using it. (Well, I can dream, can't I?)

Blogging friends
Hooray, looks like good friend Harold (k) has finally redesigned his site and even added a weblog thing.
Welcome to the 21st century, now we can shamelessly link back and forth.

Harder than you think
Pop quiz: How many HTML elements can you name in 5 minutes?
I got to 65, with 26 remaining. I forgot the font tag ;).

The bee's knees
Roger Johansson asks: does advanced search sound too advanced?
This got me thinking, and before I knew it I decided this deserves a bit more than a reply on his site, so instead I'm ranting here at you; bear with me.
I work in a library and as such deal a lot with librarians who think advanced search is the bee's knees, even if they don't use it themselves 90% of the time when dealing with everyday requests and they're using Google or Lexis Nexis or Science Direct or whatnot. The term is so widespread that changing it on individual websites makes little sense. I agree that offering an advanced form on a first page is of little use, but I do take exception to the statement brought up in one of the comments that "Google has spoiled us".
They haven't; they've just shown us that a very basic, simple search form can perform admirably well given the right backend. If only suppliers of databases had taken that message to heart six years or so ago, I wouldn't have to struggle so hard now to get our library to implement a federated search technology.

So the suppliers of deep knowledge haven't taken this message to heart yet. Instead we're stuck with a plethora of idiotic database suppliers who ask thousands of euros each year (each, and we have about 20 or 30 of those) for the material they hold in their databases. All of these come with an advanced search form as the default.
Why? Is this because these search-tools are requested by end-users? Students? Faculty? No!

These tools are highly regarded by librarians: people who cringe when they see someone use a simple search form. These librarians are the ones making the decision to purchase access to a database; they are the ones that should be educated, the ones that should be told to open their eyes. To be fair, the younger generation of librarians is coming to see that advanced search forms are a hindrance and not the be-all and end-all, but there's an uphill struggle going on in library land every single day to convince people that the way normal people search is the one our tools should be designed for. Especially when a single database can cost an educational institution as much as 1 euro per student per year (whether these individuals use the database or not!).
This may not sound like much, but in education money is tight, and if you have 10 databases that each cost a euro a year per student* you've just added a large overhead cost to the overall running of your institution: with our 35,000 students that works out to 350,000 euros a year. Remember: these databases are not searchable by Google; they are password or IP address restricted. Some require licensing costs on individual document retrieval; for most the fee is based on the number of users a library has.
*) For reference our institution has about 35.000 students and 500 faculty.

It may seem like I'm laying the blame here on librarians. To be fair, they are to blame for much of this, but I also blame the vendors. If you have the manpower to build a database that holds twenty million highly annotated and enriched records, you have the manpower to build a simple search solution that works, and you have the manpower to educate your buyers on the goodness that is your simple form. Database vendors are all too happy to sit back and do whatever makes the most money; they are reactive, not pro-active, they will not embrace new opportunities or technologies, and they will not educate librarians on how simple their interface could be.

In recent years things have gotten so bad that a lot of us have decided it might be better not to get new databases, because they will not be utilised enough to be cost-effective. So most libraries are looking at federated search technologies. Which is a fancy way of saying we're looking for a search engine that will search across multiple (specific, high-value) databases using a simple search form (and an advanced one, of course, but that should be an option). To be fair, even if database suppliers took a page from the dominance of Google (Google commands something like 90% of the general web search market in Europe) we might still be looking at a federated search engine. But the way things are now, with each database having its own site-specific form, with its own syntax for boolean operators, keyword options, searchable fields, quirks and obscenely specific queries, we are forced to implement one. We cannot continue to expose our students, who have not completed a 4-year graduate course in information retrieval, to the backwardness of database vendors.
This was apparent 10 years ago, and as the costs of databases go up, even in the face (or maybe because) of consortia that buy large-scale access for multiple educational institutions at once, and as the demands of small groups for highly selective databases go up, we cannot ignore the failings of our current systems. Which is good news for vendors like ExLibris, Webfeat, Aquabrowser (popular in Dutch public libraries, built on Webfeat I understand) and Infor, who appear to be the big players in Holland.

Anyway, we're currently looking at getting one of these in our library and are talking with other higher education institutions about their choices and considerations in this respect. I'm hoping to cut to the chase some time soon and end the suffering our users have been going through. And really, you have no idea what suffering this is if all you do is search the web for information about your interests and hobbies, and never have to put up with the horribly obscure interfaces you must endure when you're looking for scientifically valid information that goes deeper than a press release.

So, to get back to the question "does advanced search sound too advanced?" I'd answer with a resounding NO.
Advanced search is so ingrained in our thinking about the web that replacing the term would be silly. We might as well create an online commerce site that doesn't use the "shopping basket" metaphor Amazon so successfully popularised. To suddenly call it something else would be self-defeating. Call it advanced search by all means, but do not present it to the user first thing. Hide it in the search results, give it another tab for all I care, but do make it available; it's a very useful tool and shouldn't be hidden too much. (Librarians can teach you how to get the most out of advanced search tools and are for the most part happy to do so.) But give users the option to start simple. This is what they expect. This is what they want. This is what they deserve!
They are, after all, paying our salaries.

The blinking! Make it stop!
OMG, I upgraded to Mac OS X 10.5 (Leopard) and PithHelmet stopped working and now I'm forced to see ads everywhere on the interwebs.

Things are blinking and flashing and SCREAMING for my attention and I hates it.

I'm aware there is a small section of the world making a living by displaying ads on their websites. I am aware these people consider me a leech for not wishing to view their gratuitous commercialism. To those people I say: blow me! I make websites for a living as well and I chose to work in a noncommercial section of the web. I will look at your simple text-based ads like the Google thing but I will not put up with your idiotic seizure inducing blinking and flashing and SCREAMING. I hope there will be some kind of hack to make Safari ad-less again soon, otherwise I'm switching to another browser.



viewing your code.

Via Slashdot: As you may know, you can view the HTML code with a standard browser. We do not permit you to view such code since we consider it to be our intellectual property protected by the copyright laws.

Yep, I really really want to steal the following bit: <meta name="KEYWORDS" content="keywords go in here">

Just... lol.

I'm suffering from a mild case of technophobia right now, so I thought I'd procrastinate ever so slightly. I've played 3 games of MacSolitaire so far and won 1 of them, which is pretty good as I win about 15% of my games (the game calculates this for me; I don't actually keep a record, I'm not that sad).

My problem is the new CMS we are going to use for the library websites soonish. I say soonish as we currently have around a zillion pages made in Dreamweaver (and BBEdit of course) and we have to transfer the contents to the Microsoft CMS (MCMS from now on) some bigwigs decided was the bee's knees. When this project will be finished is anyone's guess, though we're optimistically hoping for some time in December (which means around Christmas at the soonest).

I've worked in big CMSes before and the experience was painful to say the least, but all of them at least allowed me to get started pretty quickly and create a few pages. I have also worked with Microsoft Sharepoint, and although I abhor the code it produces, at least the editing environment allows you to quickly create and fill pages. I was kind of hoping MCMS would be like that, but my hopes have been dashed on the rocky shores of bad UI design. (Here be dragons!)

On logging in I am presented with a design so horrifyingly stark I am sure I'm missing something. Because I logged in using Firefox, and the page is styled using MS proprietary stuff, I log back in with Internet Explorer; the page is still the same bland list of links. Some digging reveals the page I am supposed to start with, and on clicking Edit I am confronted by a page overflowing with textareas and weird edit controls, all arranged more or less haphazardly, as if whoever designed the backend took his 3-year-old kid to the office and went away to get a coffee while the kid finished the page (imagine a kid with a new box of crayons and a big white empty sheet and you will begin to appreciate the amount of pastel background-colours on the page).

About the only thing I can quickly recognise is the box where I can fill in metadata, which I jump on. Having done that I delve into the manual which some kind soul has written for us web-slaves, but which doesn't in fact tell me anything other than: this is what we use. How to use it is left as an exercise for the reader; so it looks like I'll be spending some quality time in the cellar of failed Microsoft UI design in the near future.

Hence my procrastinating. I am sure I can get that 15% up somehow...

Sortable datatables
The sortable table described in the following page is so darn easy to implement and nifty it makes me all warm and tingly inside.

I'll be using this a LOT in future.
sorttable: Make all your tables sortable
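
The library itself does all the heavy lifting, but the core idea can be sketched in a few lines. Note this is my own illustration of the principle, not sorttable's actual code; the sortRows helper and the rows-as-arrays format are assumptions made for the example.

```javascript
// Sketch of the idea behind a sortable table: re-order rows by one column.
// Rows are modelled as arrays of cell strings; col picks the sort key.
function sortRows(rows, col) {
  return rows.slice().sort(function (a, b) {
    var x = a[col], y = b[col];
    var nx = parseFloat(x), ny = parseFloat(y);
    // compare numerically when both cells parse as numbers, else as strings
    if (!isNaN(nx) && !isNaN(ny)) return nx - ny;
    return x < y ? -1 : x > y ? 1 : 0;
  });
}
```

With the real library you just include the script and mark up your table (with a class on the table element, if I remember correctly); clicking a column header then reorders the rows much like this.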

Pretty print JavaScript
Debugging a webapplication that only allows Explorer or Firefox to log in is a bitch. Especially if the javascript has all whitespace removed in order to obfuscate the code and make it 0.000001% faster to download.
So off I went to look for a code formatter for JavaScript, which turns out to be hard to find. There are lots of "pretty printers", as those tools are called, for the various versions of C and Java, but none did js files.

Until I stumbled across the online tool which does what it promises. Although it has some oddities it did allow me to reformat the code nicely and when that was done I discovered the secret braindead code that ignored everything but Mozilla and IE browsers.
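
To illustrate what such a pretty printer does, here's a deliberately naive sketch that re-inserts line breaks and indentation into whitespace-stripped code. A real formatter tokenizes the source properly; this toy version ignores strings and comments entirely, so treat it as an illustration only.

```javascript
// Naive pretty printer: break lines after ';' and around braces, indenting
// two spaces per nesting level. Strings and comments are NOT handled.
function prettyPrint(src) {
  var out = '';
  var depth = 0;
  for (var i = 0; i < src.length; i++) {
    var ch = src[i];
    if (ch === '{') {
      depth++;
      out += '{\n' + '  '.repeat(depth);
    } else if (ch === '}') {
      depth--;
      out += '\n' + '  '.repeat(depth) + '}';
    } else if (ch === ';') {
      out += ';\n' + '  '.repeat(depth);
    } else {
      out += ch;
    }
  }
  return out;
}
```

Feed it a minified one-liner and you get something readable enough to spot browser-sniffing logic.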

A nice resource for the toolbox.

Roger Johansson's rant against the three column CSS article on A List Apart expresses pretty much anything I'd like to say about the subject.
Yes, there are some valid uses for this (a navigation block springs to mind), but I'm afraid most of the time we'll be forced to read awkward page designs that haven't accounted for our particular screen size.

On another note I've got to get me one of those puzzles:
Found via Unshelved.

content linking
Ah, the age-old debate on embedding content from different servers rears its ugly head again in this piece on Slashdot. I wasn't able to read the original site, because it appears to be down, so I'm forced to just read the comments, which are amusing as heck. The number of clueless people continues to amaze me. Most of them seem to comment on Slashdot.

Here's the only really insightful comment I came across before I got bored.

Don't make me think!
God help us all.

I'm looking at a shitload of text, trying to snip what isn't needed before it gets published. While doing so I'm trying to avoid the egos of two different people: people who have criticised each other's professional attitude regarding copy-editing but apparently have difficulty when yet another person steps in to do another pass.

In the immortal words of Steve Krug: Omit needless words. Now if only I can get them to understand this. <sigh />

One of the problems with valid, structural XHTML and CSS is forms. Often we resort to tables to get the labels and entry fields nicely lined up.
Today I "discovered" a nice way to keep things clean.

Consider the following form:

Here is the XHTML code:
<form method="get" action="/guildtool.php">
<fieldset>
    <input type="hidden" name="action" value="searchparser" />
    <span class="formlabel">Charactername:</span> <input type="text" name="charactername" value="" /><br />
    <span class="formlabel">Profession:</span> <select name="profession">
        <option value=""> </option>
        <option value="Alchemy">Alchemy</option>
        <option value="Blacksmithing">Blacksmithing</option>
        <option value="Enchanting">Enchanting</option>
        <option value="Engineering">Engineering</option>
        <option value="Herbology">Herbology</option>
        <option value="Leatherworking">Leatherworking</option>
        <option value="Mining">Mining</option>
        <option value="Skinning">Skinning</option>
        <option value="Tailoring">Tailoring</option>
    </select><br />
    <input type="submit" value=" make it so " />
</fieldset>
</form>

And here is the CSS that displays the fieldnames in a column:
fieldset { width: 300px; }
.formlabel {
    width: 120px;
    display: inline-block;
}

The magic is in the display: inline-block; there. In Internet Explorer 6 and Safari 2.0 this works like a charm; other browsers just display the fields next to each other, which is not that big a deal in most instances, and if it is you can just revert to the old way. I think I'll be using the new method from now on.

Security tips for web developers contains a well-written introduction to Cross-site scripting (XSS).
Summary: never trust user-input. htmlentities() is your friend.
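
(htmlentities() is PHP; the same idea can be sketched in JavaScript. The escapeHtml helper below is my own illustration, not a built-in or anything from the linked article.)

```javascript
// Escape the HTML-significant characters in untrusted input before echoing
// it back into a page, so injected markup renders as inert text.
// Note: '&' must be replaced first, or the later entities get double-escaped.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```
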

Via: Simon Willison.

Content MiSmanagement
Great article on CMS Do's and Don'ts.
Via: 456 Berea Street.

There’s nothing worse than spending months of your organization’s time and a small fortune building a website that your customers can’t use.

As my employer is probably moving to a new CMS this year I wonder what will go wrong this time*. After all, we (meaning my employer, I wasn't involved) selected a crappy system less than three years ago.
Boy am I glad I didn't jump on that bandwagon at the time and continued using Dreamweaver to manage my content.

*Seeing as the choice seems to be between a solution by Microsoft and a solution by an entity that's not Microsoft it's a pretty safe bet that M$ will win this one.

Marvellous technology
There's an interesting discussion going on in the past few days on the Web4Lib mailing list. The thread is called Generation shifts and technology.

Here's a sample that struck me because it describes exactly what I went through when using the web for the first time.
Bernie Sloan wrote: I still remember a demo where people ooohed and aaahed when somebody used Mosaic to bring up a picture of a dinosaur. :-)
I, too, ooohed and aaahed when I saw that for the first time.

Bernie goes on. In some ways I think those who "grew up" with computers have an advantage over those who never knew a time when there weren't computers everywhere.
I think I agree. People who have always lived with computers may take to the technology somewhat more easily, but they lack the feeling we had when first seeing those images and reading those texts (and that was a powerful motivator). What I saw then transformed my way of viewing the world, even if I only realised that later. In fact it transformed my view of the world so much that some years later, when I got the chance, I jumped in blindly, both feet first, and learned HTML just to share that joy with the world.

Those of us that "work the web" often forget these kinds of things, but it is worth remembering. This wonder is what makes us tick. This feeling is what we strive to recreate, even in a small way, for others. For those that weren't there that first time. If at some point in an academic year I overhear a student saying to another: "but that's on the web, look at the library site!" my effort has paid off. (Even if I sometimes forget that.)

Technology gone wrong
I've just tried to get my site indexed at Gigablast so I can try out their new free search engine.
When technology goes wrong you get things like this:

I know I have a visual impairment (my left eye is pretty bad) but it appears my pattern recognition is also failing. After seven tries I just gave up. Sorry Gigablast, nice try separating the bots from the humans, I guess I just failed your Turing test.

Remind me to read "The Doggone Highly Scientific Door" by R.A. Lafferty again, in which an amusement park refuses to let someone in on the grounds that he is not a person but in fact a dog. Bittersweet melancholy ensues as the man befriends all kinds of dogs and even buys them some dog biscuits (which are apparently pretty good eating).

Small update: the technique Gigablast uses is called a CAPTCHA. To read more on why CAPTCHA is bad, read Matt May's article: Matt Presents: Escape from CAPTCHA.

Full disclosure
One of the things a webmaster has to deal with is hacking and defacement.
Until today I was a virgin in that respect. But no longer!

This morning one of the webservers I use was hacked and defaced. After I called this in to our IT department they quickly took down the machine in order to restore, secure and analyse. The system is still down as I'm writing this.

This is a major annoyance to me.
Luckily Ambrosia has just released the updated OS X compatible version of Apeiron so I can play my frustration away.

But before I indulge I decided to take a look at my own logfiles to see how my system is doing, as it is online 24/7.
My system log makes for some interesting reading today:
Nov 30 13:26:30 oook xinetd[293]: service ssh, IPV6_ADDRFORM setsockopt() failed: Protocol not available (errno = 42)
Nov 30 13:26:30 oook xinetd[293]: START: ssh pid=14337 from=
Nov 30 13:26:30 oook sshd[14337]: reverse mapping checking getaddrinfo for failed - POSSIBLE BREAKIN ATTEMPT!
Nov 30 18:20:53 oook xinetd[293]: service ssh, IPV6_ADDRFORM setsockopt() failed: Protocol not available (errno = 42)
Nov 30 18:20:53 oook xinetd[293]: START: ssh pid=1482 from=
Nov 30 18:20:56 oook sshd[1482]: Illegal user patrick from
Nov 30 18:20:56 oook sshd[1482]: reverse mapping checking getaddrinfo for failed - POSSIBLE BREAKIN ATTEMPT!

The attempts began around midday today and so far I have logged over 80 of them.
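For the curious, tallying these is only a few lines of code. Here's a sketch (countBreakinAttempts is a made-up name for illustration) that counts the sshd warning lines in a log excerpt like the one above:

```javascript
// Sketch: count sshd probe warnings in a syslog excerpt.
// The marker string is taken from the quoted log lines above.
function countBreakinAttempts(log) {
  return log.split("\n")
    .filter(function (line) {
      return line.indexOf("POSSIBLE BREAKIN ATTEMPT") !== -1;
    }).length;
}

var sample = [
  "Nov 30 13:26:30 oook sshd[14337]: reverse mapping checking getaddrinfo for failed - POSSIBLE BREAKIN ATTEMPT!",
  "Nov 30 18:20:56 oook sshd[1482]: Illegal user patrick from",
  "Nov 30 18:20:56 oook sshd[1482]: reverse mapping checking getaddrinfo for failed - POSSIBLE BREAKIN ATTEMPT!"
].join("\n");

console.log(countBreakinAttempts(sample)); // 2
```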

Now if you'll excuse me I've got some bugs to squash.

[Update: 22:04] Hahaha! Eat this, suckers. Game 6:

Ambrosia Highscores site.

Funny how things change
Today when I got back from work and checked my e-mail I got a request to do a shopping cart extension for Dreamweaver (paid).

Two or three years ago I would have jumped at the opportunity to do something like that, but nowadays I'm drifting further and further from Dreamweaver. I still like the product immensely, but now if I want to automate something I switch to BBEdit, because of its glossary, scriptability, excellent grep support and the new text factories.
Besides, I'm much more into serverside technology this decade.

Anyway, if you're a Dreamweaver extension developer contact me and I'll send your address on to the askee.

In the meantime here's a link to the oud-utrecht site, for which I recently adapted my newsscript so it now functions as a bare-bones forum. Whoda thunk that would not only be possible but also a better fit than a real forum?

On fonts
Every webdesigner needs to know a bit about fonts, even though the number of fonts you can count on being installed is pitifully small.

If you ever wanted to know the difference between kerning and tracking, or never heard of those but now want to know more, this is a good introduction. Via typographica.

In other font news, via Research Buzz, comes The International Type Index. Its aim: to build a searchable database of all fonts from all foundries, from all over the world.

Web professionals of the world rejoice, your job is secure
I think I promised some friends the link to the most godawful "webdesign" business site ever.

Enjoy the pain. (Warning: lots of flashing strobe effects, probably not safe for epileptics.)

substr() bug in Internet Explorer
I haven't written anything this last week as I'm fresh out of good things to write about. Work is hectic and filled with minor annoyances.

Consider the following: substr() bug in IE.

I'm sure this is such a basic bug that I'm not the first to notice this, however I failed to turn up anything using some relevant terms in search engines so here it is, documented for your (and my) convenience.
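I can't be certain which substr() quirk the linked page documents, but the best-known one is that IE's JScript ignores a negative start index: "hello".substr(-3) returns the whole string instead of "llo". A hedged sketch of a feature-tested patch:

```javascript
// Patch String.prototype.substr only where negative start indexes are
// broken (as in IE's JScript); browsers that get it right are untouched.
if ("ab".substr(-1) !== "b") {
  var originalSubstr = String.prototype.substr;
  String.prototype.substr = function (start, length) {
    if (start < 0) {
      start = Math.max(this.length + start, 0);
    }
    return originalSubstr.call(this, start, length);
  };
}

console.log("hello".substr(-3)); // "llo", patched or not
```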

Unwanted visitors
I have never understood the absurd idea of hiding referrers.
Case in point, here's what I often see when checking my site's statistics:

Tell me, gentle reader, am I overreacting when I say I don't need this kind of visitor?

Am I being weird when I say I feel like outright banning these people or redirecting them to a special page?

"Since you are so concerned about your privacy that I'm not allowed to know how you came to my site I'm not even going to let you see my content. It might offend you. Or your searchterm might offend me. Either way you're not welcome here. Move along please. Nothing to see."

If it's none of my business to know how someone came to my site why are they here? What are they doing outside of http://localhost/?

While we're on the topic of webdesign (about which I haven't posted in quite a while and I apologise for that) we might as well tackle the tricky subject that's captured my heart lately.

It's the text-shadow property of CSS2. This is a tricky subject because, as far as I know, the only browser that handles this is Safari.
However, you can still use this effect to generate some content that's quite good-looking and doesn't break in less compliant browsers, as long as you choose your colours right.

Here's the CSS2 code I use for my site's headline:
h1 {
    font-size: 150%;
    color: #CCC;
    text-shadow: #300 2px 2px 1px;
}

h1 span.glow {
    font-size: 80%;
    position: relative;
    top: 10px;
    color: #ADADAD;
    text-shadow: #000 0px 0px 2px;
}

Here's the HTML at this moment (the "speaking of bananas..." part is generated dynamically and might change as soon as I think of a new cool tagline):
<h1>Harold's Home<span class="glow"> Speaking of bananas...</span></h1>

And here's how it looks in Safari:
text-shadow as applied on my H1 element in Safari
Here's how it looks in Firefox:
text-shadow as applied on my H1 element in Firefox

The text-shadow property in CSS is a bit odd.
It accepts a colour, two offsets (the first moves the shadow to the right, the second moves it down; use negative values to go left and/or up) and a blur radius. Set both offsets to zero and the blur acts a bit like a Gaussian blur on an inverse selection in Photoshop.

I'll be the first to admit that it looks slightly uninspiring in non-compliant browsers, but then I don't make my money designing this website. For clients I am much more conservative. (I work in Dutch education, where IE 6 for Windows is the lowest common denominator, although I did use the effect a few weeks ago for a Mac-only tutorial: Help Viewer uses the same rendering engine as Safari.)

In browsers that do not understand the text-shadow property (like Firefox 0.8, IE 5 Mac and IE 6 Windows) the property is simply ignored, so be careful which text colour you choose, as poor contrast with the background can make the entire text disappear.
In the meantime, if you use this effect to enhance your website and choose your colours well you can give Safari users a better-looking experience and be ready for the next browser that supports this.

Here's the official W3C documentation: CSS Text shadows.

Automating the web
I created a photo gallery for the web today. It's not quite ready for prime-time yet but herewith some of my observations on tools and tricks you can use to make life easier.

Automation rocks!
Every tool I used can be automated.
I used some AppleScripts (for the Finder), some Photoshop actions, some batch find-and-replace actions using grep in BBEdit, and I customised a web contact sheet from the default install of Photoshop.

First things first. I had about 30 digital photographs, with stupid names like IMG_1012.JPG. I used one of Apple's script menu scripts to replace the IMG_10 in every filename with another name, let's say Abc_Def.

screenshot of the script menu with the Replace Text in Item Names item selected
Apple seems to have temporarily misplaced the download for these scripts but here's my copy in the meantime. (It could be that these are now part of the default install by the way.)

I then replaced .JPG with .jpg seeing as I'm anal about uppercase filenames.
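In case you want the same renaming outside the Finder, here's a sketch of both passes as plain string operations (renamePhoto is a made-up name; I've used Abc_Def_ so the result matches the filenames in the grep patterns further down):

```javascript
// Sketch of the two renaming passes: swap the IMG_10 prefix for the
// new name, then lowercase the extension.
function renamePhoto(name) {
  return name
    .replace("IMG_10", "Abc_Def_")   // IMG_1012.JPG -> Abc_Def_12.JPG
    .replace(/\.JPG$/, ".jpg");      // no uppercase filenames, thanks
}

console.log(renamePhoto("IMG_1012.JPG")); // "Abc_Def_12.jpg"
```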

Following this I opened one of these files in Photoshop and recorded an action wherein I resized the image and changed the curves (Curves rocks) to brighten the image a bit.
When the action was to my liking I stopped recording and closed the file (not saving any changes). I then Batched this action across the entire folder of renamed images. (File:Automate:Batch...)
After all the files had been changed I opened them three at a time in Photoshop and applied a lighting effect. As the preview of this effect is quite small I often immediately had to undo the action. Normally when you choose to redo an effect the dialog will not remember your previous choices. To bypass this hold down the option key as well when choosing the effect, or hit command+option+F (control+alt+F on Windows I think) to open the dialog with your previous choices. You can then easily tweak these. This works with almost all tools that have a dialog, including the previously mentioned Curves (command+option+M versus plain command+M).

After all the files were more or less to my liking I decided to automate my life still further. First I used the default "Simple" template of Photoshop's web-gallery action (File:Automate:Web Photo Gallery...), however this created some horrible HTML.
I remembered digging in the Photoshop support files once and making some changes there to create cleaner HTML. Unfortunately I did this on an old computer at work and not at home. There was nothing for it but to start over.

Photoshop uses four partial HTML files to construct a web gallery. Here's where mine are: "/Applications/Adobe Photoshop 7/Presets/WebContactSheet/Simple". On Windows they're probably in "C:\Program Files\Adobe Photoshop 7\Presets\WebContactSheet\Simple".

I opened up all four files and changed them to write XHTML 1.0 Transitional, with a link to an external stylesheet, properly quoting some attributes and replacing outdated <CENTER> tags with divs. I plain trashed the <FONT> tags and body attributes. We're well and truly into the century of the fruitbat now anyway.
They may not be of much use to you but if you want to see what I did here's my new "Simple" contact-sheet.

After making these changes the created web gallery was almost ready to validate; note that Photoshop will still put in some uppercase <TR> tags, which you will need to change later in a web editor.
In BBEdit I applied source formatting over all the files and replaced some alt texts using GREP to preserve the photonumber.
Find: alt="Abc_Def_([0-9]+).jpg"
Replace: alt="Abc Def \01.jpg"
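If BBEdit's grep syntax isn't your thing, the same find-and-replace translates directly to a JavaScript regex (the \01 backreference becomes $1):

```javascript
// The alt-text replacement from above as a JavaScript regex:
// keep the photo number, swap underscores for spaces in the alt text.
var html = '<img src="Abc_Def_07.jpg" alt="Abc_Def_07.jpg" />';
var fixed = html.replace(/alt="Abc_Def_([0-9]+)\.jpg"/g, 'alt="Abc Def $1.jpg"');

console.log(fixed); // <img src="Abc_Def_07.jpg" alt="Abc Def 07.jpg" />
```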

I also later batch-added accesskey attributes to the previous and next images and stripped away the "top" link to the thumbnails as I had no use for these. (I should simply change that in the Photoshop template, I now think.)
Find: <a href="Abc_Def_([0-9]+).html"><img src="../images/next.gif"
Replace: <a href="Abc_Def_\01.html" accesskey="n"><img src="../images/next.gif"

Note that if you're using BBEdit you can either record this action on one file and then use the resulting AppleScript on all your files, or change a whole folder of pages in Batch mode, saving some time. If you're using Dreamweaver you can also record commands and replay them on other files, though Dreamweaver uses its own method and doesn't create an easily modifiable AppleScript.

There's still some work to do, as some of the images are not really to my liking at the moment, but for today this is enough. Automation can make your life that much easier, letting you focus on the more artistic work, although figuring out how to automate some of this stuff can be quite challenging and fun too.

The hitherto unquestioned reputation of the Internet for accuracy and timely updates was thrown into disarray this week. Surfers were amazed to find that a Web page had been set up with promises of regular updates but had not been changed for weeks.
Police intervene after webpage lies dormant 'for weeks'.

The above article from The Rockall Times is so apt that it's almost not funny.

Blame Canada
Accessibility guru Joe Clark and Craig Saila reviewed Canadian election sites.

It contains one of the best summaries I've ever read about why web standards matter.

Even if you're not Canadian or particularly interested in their politics it's worth a read. At the very least everyone who does anything with websites should read the points about "What's the issue", "Why does it matter?", "Who suffers" and "Accessibility".

Validate Me
A while ago I pointed to a pretty nifty bookmarklet that would allow you to easily validate the currently active page from within your browser.

Today I got annoyed with the fact that it replaces the current document with the validation page, so I adapted it to open the results in a new window, which is pretty cool.

Drag the following link to your bookmarks toolbar and go clickhappy: validate me.
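The bookmarklet itself isn't visible in this text version of the page, but the idea is simple enough to sketch. Assuming the W3C markup validator's uri query parameter, something along these lines opens the results in a new window:

```javascript
// Build the validator URL for a given page address. In a bookmark the
// body would be: javascript:void(window.open(validatorUrl(location.href)));
function validatorUrl(pageUrl) {
  return "http://validator.w3.org/check?uri=" + encodeURIComponent(pageUrl);
}

console.log(validatorUrl("http://www.example.com/page.html"));
// http://validator.w3.org/check?uri=http%3A%2F%2Fwww.example.com%2Fpage.html
```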

speed of light variable
Matthew Thomas doesn't get it: When semantic markup goes bad. Matt May does: Understanding semantics.

I won't repeat Matt's arguments but here's a small addition that may amuse you.
Here's what Matthew writes about the proper use of <var>: E = mc².
Here's his code: <var>E</var> = <var>m</var> <var>c</var><sup>2</sup>.

Last time I checked the speed of light (c) wasn't variable but a constant.
To be fair to Matthew, there appears to be some discussion on this (do a google search for the headline) but current thinking is that the speed of light is indeed a constant.

Of course it is likely that Matthew is confused and merely wants to show off his Mad Semantic HTML Skillz by using the <var> element, but then why are E, m and c variables while 'squared' is not? This simply makes no sense at all.

Which reminds me: I really should read up on MathML but that will have to wait for another day.

The death of the semantic web pt. II
A while ago I wrote the first part of this series and examined a proper use of the non-standard <blink> tag.

I promised a follow up then and here it is. Rejoice oh fellow coders!

There is a problem facing the semantic web and in my perception it's this: there are loads and loads of tags and attributes to tags that are hardly used. Modern webdesign is moving at a fairly rapid rate, hindered somewhat by Internet Explorer's inadequacies, sure, but moving along anyway. Now to be sure the semantic web initiative focuses almost entirely on technologies such as RDF, OWL and other XML type stuff, but XHTML is part of that family too and is designed to be an intermediate stage.

There is a disturbing trend however. Instead of deeply nested tables we now seem to be faced with deeply nested <div> tags. Instead of using more appropriate tags we see stuff like <span class="abbreviation">QT</span> and so on. Why not just use the <abbr> tag? It's what it's there for after all: <abbr title="QuickTime">QT</abbr>. Here's how it renders: QT. This will work in any browser, although some may not visually display anything, while others show a tooltip or something like it with the full expansion.
There are loads more examples to give, but I'll leave it at this one for now, for fear of foaming at the mouth.

Modern webdesign is not using the tags and attributes that are there and have been there for years. This was illustrated to me a few weeks ago when someone, pointing to a list of tags like <strike>, <abbr>, <samp>, <code> and the like, said: "no-one's using these any more are they, we can ignore them".

The fact is that hardly anyone is using these perfectly valid tags; instead people are cobbling together hacks and workarounds. This saddens me, but I won't take it lying down. If we are ever going to get to the semantic web we are going to have to use the stuff that's already there.

Just because not all browsers do anything with a tag or attribute doesn't mean they're useless. Some day these browsers will catch on. Just because Safari didn't do anything with the <fieldset> tag when first released didn't mean it was useless and that I didn't use it.
Just because the current version of Safari doesn't do anything with the <label> tag doesn't mean it's useless.

I'm in for the long haul. Some day browsers will catch up to the HTML specs and support the tags and attributes that have been there for years. They will do so because they make sense. They will do so because they convey meaning and structure, as well as enhanced usability.

Herewith a new one I encountered just this week and have immediately adopted:
<input type="text" name="username" value="Current username" readonly="readonly" />
Paste this into a form and notice how you can't edit the textfield. It will get submitted when posting the form though, unlike when using the disabled attribute.
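To see why that matters, here's a sketch of the rule a browser follows when serialising a form: disabled controls are dropped from the submission, readonly ones are kept. The serialize function and the field objects are stand-ins for illustration, not a real DOM API:

```javascript
// Stand-in for a browser's form serialisation: disabled controls are
// never submitted, readonly ones are submitted as-is.
function serialize(fields) {
  return fields
    .filter(function (f) { return !f.disabled; })
    .map(function (f) {
      return f.name + "=" + encodeURIComponent(f.value);
    })
    .join("&");
}

var fields = [
  { name: "username", value: "Current username", readonly: true },
  { name: "secret", value: "hidden", disabled: true }
];

console.log(serialize(fields)); // username=Current%20username
```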

There's loads more examples of under-used HTML I could give but I'll leave them for another day.

Squeaky clean
Most of you have probably already seen this, but what the heck: the topic of FrontPage came up yesterday in a conversation, and just now I got an e-mail from a designer friend asking whether there were any good reasons NOT to use FrontPage, so I'll throw in the following link for your amusement.

Homework for today:
Try to figure out what's wrong with the ad and don't look at the comments on the linked page as it gives it away.

The death of the semantic web pt. I
This one was new to me: Proper use of <blink>.

Actually I disagree that this would be the only proper use of the blink tag (stepping neatly over the fact that <blink> is a made-up tag). Here's a sample of another use I would deem appropriate (if there were such a tag as <blink>):

Last login: Tue Mar 9 10:00:20 on ttyp1<br>
Welcome to Angua, running on Darwin!<br>
[angua:~] harold% <blink>_</blink>

That's right: the above piece of HTML code shows you how my Terminal prompt looks.

"Why are you rambling about all this," I hear someone at the back of the crowd shouting. Patience grasshopper, we'll get there, but it'll have to wait until later.

Collapse I tell you!
Say, these navigation bars (as featured in a site I'm building right now), built from just unordered lists with CSS to style the links, are fun.

If you're wondering why Internet Explorer 6 messes things up though, you'll want to take a look at the following page: Juicy Studio.

On an unrelated note the following PNG demo is pretty cool: PNG Alpha Transparancy demo.

Biting sarcasm
There are many good points in Bruce Tognazzini's article on online shopping but this one is simply outstanding:

[...] if your site has won a web graphics design award, you are likely in serious need of a redesign. You are likely featuring something useless but pretty or you wouldn't have won it.

... a sea of site mismanagement
An interesting article about the effectiveness of Content Management Systems: Perls of wisdom in a sea of site mismanagement.

The great surprise of the past five years of content management is that, despite all the hundreds of systems, no clear winners have emerged. Instead, there's a growing dissatisfaction with the ongoing technical burden that such systems impose.

Some influential voices are starting to argue that many sites should, in effect, wait out this immature phase of website management. For the moment, they should content themselves with limited automation.

Managers should be forced to read this before committing to spending hundreds of thousands of dollars based on the vague promises of vendors.

Issues with sites
Remember I wrote about this god-awful site containing one HTML page and about a gazillion gratuitous javascripts?
I've redesigned the site and it's all completely valid XHTML/CSS. I finished the job in time, but at the cost of leaving two tables in there: one for a complicated image slice to which I don't have the source, and one for a headachey two-column nested address-listing thingy. Marvel at it here:

On a headachey note, I'm coming down with a nasty bug, so the oddness some of you are seeing on these pages (notably with Firebird and Opera) will stay here for a while; at the moment I can't bear looking at all these nested tables and divs.

Running multiple versions of IE on Windows

Running multiple versions of Internet Explorer on Windows at the same time:
Multiple IE's in Windows.

Being able to run four versions simultaneously doesn't make IE a less crappy browser, but at least you can check your site in multiple versions of the app.

I have just tested the downloads near the end of the article on Windows XP Professional and all run without problems. Enjoy!

Javascript galore
Good lord. Some designers should just be taken out and shot, their bodies dumped in a recycle bin (always remember to separate your trash).

I'm in the process of updating a site to a manageable system (i.e. based on Dreamweaver templates). Currently the site consists of two HTML pages and about 45 javascripts. Depending on url parameters different javascripts get loaded. Technically it's pretty nifty stuff. Practically it's a load of shit. The code that's generated is pretty crappy too, with a mish-mash of font tags and empty tags sprinkled generously around.

Sometimes a site is just so plain bad there's not much to be done with it except recreate the entire thing.

I said the site consists of two pages; one of those is actually a redirect page that does some weird calculations with the screen size and then, no matter what the outcome, redirects you to the other page.

Sometimes a site is so bad you just have to remove it from the web to make this world a better place. That's why I will only bill 36 hours for the entire job, though I can see now that it will take much longer.

You can marvel at the old site here (it's in Dutch but the code is crap in any language): Recht in Utrecht.

Accessibility resources
There's a (fairly) new resource site on accessibility online: it uses the fairly intuitive DMOZ style.

Cross-browser webdesign
I always wonder about the people who come to my site searching for crossbrowser design using Internet Explorer. What do these people want? A link to a better browser perhaps? Here you go buddies, as you're all on Windows I'll give you Mozilla Firebird.

Looking at the above Google link though... how desperate do you have to be to click through to the 171st result in Google's listings? You must need it pretty darn badly. Here, I'll throw in a link to the webdevelopment extension. You'll like it.

This concludes today's public service announcements.

Redirect 301
Drew McLellan writes about HTTP status codes in a post entitled 404 - Error Badly Assigned. I'll not reiterate my comment on his post but write about another wonderful thing HTTP status codes give us, the developers and maintainers of sites. This article assumes you're using Apache, though the idea will likely carry over to other server platforms.

For some reason the HTTP status messages have always fascinated me. Ever since I realised that the server I use at work allowed me to redirect users who had mistyped a url to a custom error page I was hooked. Later on I realised that this could benefit not only users (by providing a friendly error message (warning: mostly in Dutch)) but could also notify me by sending an e-mail message if certain conditions were met. I have even written a short article with sample PHP code about that.

Even later on I carefully read the specifications for HTTP status codes and discovered there was a code for a resource that was moved. Possibly because a document was moved from one directory to another. The code in question is 301, moved permanently. Here's how it works:
A user requests an outdated URL. The server looks it up but finds that the page has moved (see the code sample later on) and issues an HTTP 301 status code, telling the user's browser that the page has moved. It also tells the user's browser what the correct url is, and the browser should redirect to that location.
Here's how to implement it in your .htaccess file (*):
Redirect 301 /med_pix

That's all on one line. It will tell the server to redirect all requests for the directory /med_pix to the directory /hvu-pix on the server.
See it in action.
The system is so good in fact that a request for will redirect to
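For the curious, here's a sketch in JavaScript of the mapping that Redirect directive performs (Apache does all of this for you; the hostname below is a placeholder):

```javascript
// Sketch of what "Redirect 301 /med_pix ..." does: any path under
// /med_pix gets a 301 answer with a Location header for the same
// path under /hvu-pix. The hostname is a placeholder.
function redirect301(path) {
  var prefix = "/med_pix";
  if (path === prefix || path.indexOf(prefix + "/") === 0) {
    return {
      status: 301,
      location: "http://www.example.com/hvu-pix" + path.slice(prefix.length)
    };
  }
  return { status: 200, location: null };
}

console.log(redirect301("/med_pix/photo1.html").location);
// http://www.example.com/hvu-pix/photo1.html
```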

(*) Note: this will only work if your system administrator has enabled this override in the server configuration; this is not always the case, so mail your sysadmin if you're unable to get this to work.
You can also do this kind of stuff in your server config file, but I assume most people don't have access to that. Heck, I haven't got access to that on this server!

There's a lot more that can be done with the Apache Redirect directive, especially if you use RedirectMatch, which allows you to use regular expressions in redirects. Remind me to write about that another time.

Gifbuilder for OS X
If you're on a Mac OS X machine and you do anything for the web chances are you've been waiting for this: GifBuilder.

An update on the continuing saga of the struggle for crossbrowser, crossplatform webdesign
I have written before about my struggles to get the website of the college I work for more friendly to anything but IE/Win.

Here's a small update.
In my talks with the interim webmanager I heard that there were some problems about exactly which version of HTML the CMS would generate.
Today I did some digging and found out that while the site reports itself as HTML 4.01 Transitional, it does this by supplying an incomplete doctype. This means that browsers like IE and Netscape/Mozilla go into quirks mode, effectively meaning they render in bugwards-compatible mode. This is needed as the site uses a mish-mash of HTML 4 elements like divs, HTML 3 elements like the font tag, and non-existent HTML like table height.
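The exact doctype the CMS emits isn't quoted here, but the pattern is well known; a representative pair for HTML 4.01 Transitional (omitting the system identifier URI is what trips browsers into quirks mode):

```html
<!-- Incomplete doctype: IE 6 and Netscape/Mozilla fall back to quirks mode -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">

<!-- Complete doctype, with the system identifier: standards mode -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
  "http://www.w3.org/TR/html4/loose.dtd">
```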

I reported this to our webmanager and he'll use this to leverage the supplier for better output from the system.

There's some light at the end of the tunnel, let's hope it's not a train...

In unrelated news our IT people are moving us all to Windows XP. This migration should have been completed some time ago for all the computers in the library where I work, so of course there are enormous hiccups and most systems fail to start up. Luckily my friend Jurjan, the programmer of Virtual Pet Rock, is on holiday and I have his Titanium PowerBook, so I've had the pleasure of listening to the epic Hollenthon in iTunes while coding in BBEdit.
I've also had the pleasure of downloading the monolithic Microsoft Office X update at 400kb/sec. Sometimes life is very, very good.

Moving to XHTML
As I mentioned earlier I moved the site to XHTML this week.
I thought I'd mention how the process went so that perhaps others can learn from it. I know I learned a lot and will be reviewing the other sites I maintain so as to make the process easier in future.

The old site was entirely valid HTML 4.01 Transitional, and while I always took care to write all tags in lowercase and close them even when the specs didn't require it, I still had some problems. In fact almost all pages had some kind of problem and some still do; I'm still updating pages and I expect the entire process will take me some time, as all this is done in my holidays, I have a stack of books to read too, and holidays are apparently also for having fun.

So what were the biggest problems I encountered?
Well, unclosed tags for one. I know, I said I took care to close all tags, but some of the pages on this site date from way back (HTML 4 was just being thought up then) and while the layout may have changed since then, most of the content hasn't been updated as the pages are considered a done job. This means that there were a lot of unclosed paragraph tags.
Another problem turned out to be unquoted attributes, especially in PHP forms.

As I wrote in a comment on the previous post, I whipped up a grep pattern to XHTMLify my image tags. The same process was useful for horizontal rules. However, due to the complexity of HTML this process couldn't be applied to tags like P. Another problem was the fact that a lot of the pages on this site use PHP tags, and quoted attributes must be escaped there, so just running a batch job on those wouldn't work.
I suppose I could have looked into Perl or some such tool to do the job, but that would require me to really learn the language and I lack the inclination right now. Besides: I'd rather revisit all my old pages and correct a few typos as well in the process.
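The grep pattern for image tags lives in a comment on the previous post, so it isn't reproduced here; as a sketch, the same idea expressed as a JavaScript regex would be:

```javascript
// Close any <img ...> that isn't already self-closed. A rough sketch:
// real HTML can defeat a regex like this, but it covers the common case.
function xhtmlifyImgs(html) {
  return html.replace(/<img([^>]*[^\/])>/g, "<img$1 />");
}

console.log(xhtmlifyImgs('<img src="a.jpg" alt="a">'));
// <img src="a.jpg" alt="a" />
```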

So what did I learn exactly:
- Quote all your attributes
- Close all P tags
- Valid XHTML can still work with table layouts
- A good texteditor is worth the money (as if I didn't know already)

Validation is really easy, just add the following bookmarklet to your toolbar: ValidateMe
Visit a page, click the bookmarklet and start your editor and FTP client.

If you're on the web now and do all your code yourself it's highly likely you too will want to make the move to XHTML or whatever follows it. Taking a good look at what you do now and how you work will save you a lot of time later on.

Standards compliant webdesign
Aaah, lovely outrage.

Drew McLellan writes about his frustrating visit to an online retailer's site that failed to work properly in anything but Internet Explorer.
In fact, it's not like it would even cost any more to get their sites right if they'd bothered to think about it from the outset. [...] If the agency you're using can't deliver a site that will allow its visitors to use the site to its full, then umm .. change your agency. If your content management system prevents you from achieving this goal, then what the hell is it actually achieving for you?

I was immediately reminded of my own experiences lately where I've been fighting the good fight for standards compliance, cross-browser/platform development.

So what's up with that? Well, just before I started my holiday I had a fairly constructive talk with the interim webmanager of the college I work for. It seems it is kind of hard to decide who is actually responsible for most of the crap that appears on the pages. Partly it's bad design and partly it's just plain stupidity, both on the part of the designers and on the part of the CMS developers. I can't help but be suspicious of the brief they were given by the management though (more on that later). We agreed that the current situation is less than ideal (only IE/Win users can view the site as intended) but differed on the urgency of this.

If one would care to look at the source code of some pages it is clear that whoever wrote that hates the web.
It's bad enough that the site relies on quirks mode by providing an incomplete doctype so that it can force a table height of 100% in Netscape and the like, but what really frustrates me no end is the fact that the site relies almost completely on javascript, and not the most elegant at that. For instance, why the f*ck would one want to render a non-animated, non-interactive header image in Flash (it is the name of the faculty you're currently viewing, in white text)? A plain gif file would do nicely, and I think it's highly unlikely that that would take 2325 characters. See a screenshot of the character count.

It's things like this that make me despair. What's most worrying is that apparently cross-browser/platform design is not high on the agenda. It also appears that someone has convinced the management that an extra effort has to be made to support additional browsers and that this will cost. Of course it will cost now: you'll have to retroactively decontaminate your entire site.
Jeffrey Zeldman said it best in an interview over at applematters.
It takes the same amount of time and money to design with standards as it does to design for IE/Win only.
If you design with standards, your site will work in Windows, Mac OS, and Linux/Unix. It will work in IE, IE/Mac (which is a different beast), Safari, Mozilla, Opera, Omniweb and Konqueror. It will likely work in Palm Pilots, in text browsers, in web-enabled cell phones, and in screen readers used by people with disabilities. If you spend the same amount of time and money designing for IE/Win only, your site will only work in IE/Win.
It's the same amount of effort either way. One way you reach everyone. The other way you only reach IE/Win users.
The decision is obvious.

It might be obvious to Zeldman, McLellan and me, and to most of the readers of this site, but it clearly is not obvious to a lot of web managers. Somehow they are getting talked into the idea that developing for all browsers is difficult.

Case in point: I was recently contacted by someone from the central organisation about which browsers (and versions) the web interface of our new library system should support (we're evaluating systems now). I made the case that supporting a particular browser or version was pointless; instead the demand should be that the vendor supplies us with an out-of-the-box standards-compliant public interface in valid HTML and provides us with an easy method to access and change the output to our own design and wishes. In the end that made it into the questions put forward to the vendors, but somehow the desire to explicitly name Internet Explorer and Netscape made it in there as well, which kind of defeats the whole point. Standards-compliant output does not and should not require us to name a particular browser to be supported; it should work in all browsers and fail gracefully when there's no alternative (for instance when javascript or images are off).
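To illustrate what I mean by failing gracefully, here's a sketch (the URL and field names are my own invention) of a search form that needs no javascript at all. Script can enhance it later, but the plain HTML does the job even with scripting switched off:

```html
<!-- Hypothetical catalogue search: works in every browser,
     including text browsers and screen readers -->
<form action="/search" method="get">
  <label for="q">Search the catalogue</label>
  <input type="text" id="q" name="q" />
  <input type="submit" value="Search" />
</form>
```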

It's an uphill struggle in Dutch education, this much is obvious. More (undoubtedly) later.

Building Accessible Websites
Joe Clark's book Building Accessible Websites is available online.

I've just read the first two chapters and it's pretty good.

Things holding me back - part 2

Last week I wrote part 1 of this essay, essentially comparing iCab and Mozilla Firebird.
In the end iCab came out on top. Without Firebird's plug-ins there would be no contest. Comparing just the browsers, iCab is vastly superior for web authoring, even though its lack of CSS support is starting to get annoying, to say the least. There are some quite innovative plug-ins available for Firebird. There are two sets of web authoring toolbars that will do loads of things, like run your page through a validator or disable images/javascript/CSS on a page. iCab can do all that, but it's slightly more work. On the other hand iCab has an inbuilt HTML checker, validating pages on the fly, no clicks needed. It also offers contextual menu integration with BBEdit, the king of text editors.

The lack of CSS support is getting to me though. In all fairness it's not that big a deal for my work. At the moment, if I design a website to work in iCab I can be reasonably sure it will work in all browsers on all platforms. If I stay off the javascript and plug-ins like Flash (an easy thing to do as I loathe them), the sites I make will even work in Lynx and other text-based browsers such as those used by many disabled people. Backwards compatibility and suchlike.

Yet there is a problem with this, and it is support for standards and going forward. Currently I design almost exclusively using table-based layouts. There's loads of things I do using CSS, but no positioning. But the more I learn about CSS, the more I become impressed and the less I do in tables. Now I can accept that some things just plain don't work in a browser, at least in a visual way. As long as all the content is there it should be OK. If the code validates there's really nothing to do except hope for a new browser version, or else it's a safe bet that the users of these challenged browsers are quite used to odd-looking websites and they won't care too much about yet another one. It may be annoying, but there's only so much you can do as a developer.
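As a rough sketch of what CSS buys you here (the ids are made up for the example), this is the common pattern that replaces a two-cell layout table with two divs and a float, while the content stays in plain, linear HTML for browsers that ignore the stylesheet:

```html
<style type="text/css">
  #menu    { float: left; width: 10em; } /* takes the place of the left table cell */
  #content { margin-left: 12em; }        /* main column clears the menu */
</style>

<div id="menu">...navigation links...</div>
<div id="content">...main text...</div>
```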

In this light it is extremely important for a company like iCab to move forward on its promise of CSS support in iCab 3. If they deliver, they will be, beyond doubt, the best single browser available for the Mac, one that will deliver the modern web to users of aging technology in the form of the millions of pre-OSX Macs still out there, doing a day's work and being useful to those with less money to burn.
I believe they can do it. They have shown they know what they're doing. iCab still has the best support for HTML4 of any browser, on any platform. It puts Safari to shame in its rendering of pure HTML. iCab's comprehensive preferences and filter manager make Mozilla/Netscape curl up in a corner and weep. Sure, some of its extreme configurability takes some getting used to, but this is the power a user gets when she downloads the 2.4 megabytes of an iCab install. You don't have to use it all, just the parts you need.

So what does this mean for the state of the web in the nearby future? iCab will get there, and if it takes a while its users will cope. If we design for the future, who will be left out? In the last few years I've learned that if I write valid code I will minimize problems. I make some safe assumptions though: I use tables for complex layouts, and I use loads of tags like abbr, acronym, legend, fieldset and label, even though there are some browsers out there that don't render these well, or at all. In the end I always make sure the important content stays visible, even if the browser software doesn't do anything with things like title attributes in certain tags. So be it. I don't worry too much about the Mac side of things anymore: there's loads of good browsers for Mac users, they have choice, and now that Apple has released WebKit, the rendering engine of Safari, making your own browser is fairly trivial. I know, I've got one here on disk that a friend of mine made in a few minutes; it may be bare bones, but I can tell you it's fast.
Moving forward means supporting and writing according to specifications laid out years ago. Such is the state of browser development. I feel the browser market as a whole is moving forward at last. There are some cross-platform Mozilla-based browsers that not only deliver on standards support and rendering quality, they also deliver on usability in the form of intuitive user interfaces and speed (start-up speed, that is; who has time to use a browser that takes a minute to load?). Opera's offerings for *nix and Windows are impressive. Now all I need is for Microsoft to move forward in a significant way. Not because I use their browser but because my visitors do, and it is for them that I work the web.
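For those who haven't used these tags, here's a small, hypothetical example of the kind of markup I mean. Browsers that don't understand fieldset, legend or abbr simply show the content anyway, so nothing important is lost:

```html
<form action="/subscribe" method="post">
  <fieldset>
    <legend>Newsletter</legend>
    <!-- label ties the text to the field for screen readers -->
    <label for="email">Your e-mail address</label>
    <input type="text" id="email" name="email" />
    <input type="submit" value="Subscribe" />
  </fieldset>
</form>

<p>Written in <abbr title="HyperText Markup Language">HTML</abbr>,
as specified by the <acronym title="World Wide Web Consortium">W3C</acronym>.</p>
```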

Next week, read more about the problems all of us may face with Microsoft's winning of the browser wars. I may also rant about quirks mode versus standards mode if the mood strikes me.

Forgive them father Berners-Lee for they know not what they do.

Things holding me back - part 1
Today a poster at the iCab mailing list posted the following link:
More praise for iCab; & how to read Google News.

While I really like iCab and use it over 90% of the time when I'm on my Mac at home, I felt obliged to post the following:
  1. Almost all browsers nowadays feature an "open window in background" command. As far as I know iCab was the first to implement this though.
  2. Both Safari and loads of Mozilla clones nowadays allow the blocking of pop-ups that are not requested by the user; again, I believe iCab was the first browser to do this.
  3. There is now a plug-in for Mozilla Firebird that blocks banner ads. iCab's configuration is more flexible and also blocks iframes and allows one to set up filters for javascript and display options as well, but it does so at the cost of complexity. The plug-in for Firebird renders pages much, much nicer though, cleanly removing the offending images instead of leaving placeholders.
  4. The ability to switch images on or off in a webpage may be easier in iCab out of the box, but again there is a plug-in for Firebird that does this easily; it's in one of the web authoring tools.
Don't get me wrong: iCab is still vastly superior, but most of its good features are being copied over to other browsers, either built in or as plug-ins. It's not the features alone that make iCab great, it's the combination of them all, along with the reliability and ease of use and the ease with which you can switch iCab's behaviour (let's disregard the complexity of the filter manager, because even without going deep into it you can achieve a lot). Coupled with its integrated HTML error checking and standards compliance, it is one awesome browser.

The fact that I would even post something like this illustrates a fairly significant point. I have long been an avid supporter of iCab, as anyone who knows me will testify and as a search on the iCab discussion group will confirm. However, lately I've come to like another browser almost as much, and this post shows it. It's Mozilla Firebird.
The main reason I like it so much is the fact that it's by far the best browser available on the Windows side of the world (I have to use Windows at work) and it does almost everything iCab does for me on the Mac side.

In and of itself Firebird wouldn't really interest me that much. It's not much better than Netscape 7, to tell you the truth, but through some innovative plug-ins it becomes much more. There's an adblock plug-in, there are two different web authoring plug-ins that complement each other nicely (though there's some overlap), there's a plug-in for viewing HTTP headers, and there's a plug-in that allows stylesheet switching.

With all these plug-ins Firebird is an awesome browser that not only equals iCab but bests it. Or does it?

Not quite. For one thing: iCab's filter management is vastly superior to the relatively simple ad blocker in Firebird. After all, iCab filters not only ads; it also handles javascript, font and image display preferences, user-agent selection et al. on a site-by-site basis. iCab may not support much of CSS right now, but overall its performance is better: it feels faster, and its user interface gets things right. It's a Mac application and you know it. Things respond the way you expect them to, launching the app doesn't take half a minute, the bookmarks system "just gets it", and the interface doesn't get in your way or look like a baboon on a hormone cure. You know what I mean.

All in all I feel that Firebird is getting there though, and iCab had better improve its CSS rendering soon. Which leads me nicely to chapter two of this article (tune in next week).

Lovely typography
The song's quite nice too (though a bit odd): JPEG Baby.

On a related note:
Ban Comic Sans - you know it makes sense
Join the fight
