The W3C process may be slow, but browser vendors are slower

Every once in a while when someone gets frustrated by the lack of browser support for standards such as HTML and CSS (mostly CSS), the W3C is yelled at for being too slow. I think it’s a little unfair.

Sure, the process of writing a W3C specification may in fact be too slow. It can (and does) take years. It would be nice if that could be sped up a bit. But I don’t think that is the main reason you can’t use all of the features that have already been specified.

The way I see it, the main reason is that browser vendors are not spending enough resources on implementing the specifications that do exist. Some argue that you can’t implement any specification until it is a W3C recommendation, but just look at Opera, Safari, and Firefox. They have all implemented some very useful parts of CSS 3. Unfortunately their implementations don’t fully overlap, but at least they have started. So it clearly is not impossible or too risky.

Imagine if all browsers supported CSS3 Backgrounds and Borders (including multiple background images and rounded corners), CSS Advanced Layout, CSS3 Multi-column layout, CSS3 Color (where the opacity property is defined), CSS3 Selectors, and CSS Media Queries fully and interoperably. Imagine how much easier it would be to turn those pretty design comps of yours into lean, semantic, accessible HTML and CSS.
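To make that concrete, here is a sketch of what such a stylesheet could look like, using properties from the CSS3 drafts mentioned above. The syntax follows the working drafts as written at the time, so treat it as illustrative rather than final:

```css
/* Rounded corners and multiple background images
   (CSS3 Backgrounds and Borders draft) */
.panel {
  border-radius: 8px;
  background: url(top.png) no-repeat top left,
              url(bottom.png) no-repeat bottom left;
}

/* Translucent color (CSS3 Color draft) */
.overlay {
  background-color: rgba(0, 0, 0, 0.5);
  opacity: 0.9;
}

/* Multi-column text (CSS3 Multi-column Layout draft) */
.article {
  column-count: 3;
  column-gap: 1.5em;
}

/* Media query (CSS3 Media Queries draft) */
@media screen and (max-width: 480px) {
  .article { column-count: 1; }
}
```

No hacks, no extra wrapper markup: the presentational complexity lives entirely in the stylesheet.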

So, next time you bang your head against a browser not supporting a CSS property that would make your life that much easier, don’t put all the blame on the W3C. Also blame whoever develops the browser that lacks support for that specific feature. Tell them you want them to support it. If one or two browsers have implemented the feature you want to use, let the vendors who have not implemented it know that they are falling behind their competition. Some of them might care about that. Others won’t, but at least you’ll have tried.

Posted on November 6, 2007 in Browsers, Web Standards


  1. We all know where the blame actually lies. :)

    Even when all the major vendors release a browser that fully supports CSS3, it will be a few years before we can actually take advantage of it (look at IE6).

    It would be nice if vendors came out with the ability to update layout engines in the background.

  2. Let’s not forget the slowest, most gigantic one of the bunch. To this day, IE does not support the XHTML MIME type (which is probably most responsible for the death of XHTML), it still misses much of CSS 2.1, its hasLayout is directly at odds with the CSS specifications regarding floats, it doesn’t support generated content or most pseudo-selectors, etc. etc.

  3. Wow! In my haste to leave a quick note, I’ve committed numerous grammatical and syntax errors. Please pardon my sloppiness. Hopefully you still get the gist.

  4. re: comment 2, I think draconian error handling was the death of XHTML, not IE’s support or lack thereof.

    That being said, IE is definitely the laggiest of the major browsers WRT support for existing standards.

  5. If you could point to complete test suites for the features that you would like to see implemented, that would greatly help implementers. Especially if they also test how the new features interoperate with existing features, et cetera. Getting interoperable implementations of even a few features may require thousands of tests, which the W3C typically doesn’t provide. (This is one of the reasons CSS 2.1 is taking so long.)

  6. Disclaimer: I work for a browser vendor, Opera Software ASA.

  7. I didn’t know anyone was blaming the W3C. Clearly, progress stalled after IE6, and only recently started back up as the IE team was rebuilt.

    I agree with Anne, though; clear test suites would help vendors immensely. Vendors actually do care about interop, and we have better interop in areas with clear test suites. I worked on XML interop a lot, and the story is much rosier.

    Finally, an anecdote about rushing W3C. When we shipped IE5, we included support for pure XML and XSLT, which pre-dated the other browsers by years. We kept delaying the browser to wait for the spec to be signed off (all agreed it was almost there), and finally shipped since we were sure the spec wouldn’t change and we couldn’t stall the product any further. The spec did change, and a lot. We then got yelled at for being non-standard (though we implemented the wd spec exactly), and it caused the whole wd-xsl fiasco. So vendors are wary to ship without a signed-off rec, and if this impacts product ship cycles they might complain.

  8. November 7, 2007 by K. Tanga

    Nobody cares about standards and superior formats, even in stupid games that have little impact. (Google it: PNG and XHTML are dead.)

  9. I do agree with you.

    I recently read an interview with Bert Bos (W3C CSS3 Team member), which talked about this matter:

    In order to avoid unfixable bugs in the specs (as has occurred in the past), the W3C decided not to promote a Candidate Recommendation to a Recommendation until the spec has been implemented by at least two software vendors. Since software vendors are hesitant to implement unfinished specs, there are no implementations; thus no real-world testing by end users; thus no way to tell whether a spec is stable or not; thus no Recommendation…

    Apparently things are moving, since Mozilla, Opera and Webkit are starting to implement parts of CSS3 for everybody to test.

    This article also made me understand why they all implemented these features with prefixes (such as -moz-). It says: ‘We implemented an experimental feature. You are encouraged to play with it (and stress it), but you shouldn’t rely on it as is.’
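    In practice, that prefix convention looks like the following. This is a sketch of the experimental rounded-corner properties as vendors shipped them around that time; the unprefixed declaration is the eventual standard property from the draft:

    ```css
    /* Experimental, prefixed implementations of the same draft feature.
       Each vendor prefix marks the property as unstable and subject to change. */
    .box {
      -moz-border-radius: 6px;     /* Gecko (Firefox) */
      -webkit-border-radius: 6px;  /* WebKit (Safari) */
      border-radius: 6px;          /* the property as specified in the draft */
    }
    ```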

  10. I don’t blame the W3C.

    Today, we must clear the mind of past practices. Web enlightenment has been achieved thanks to the tireless efforts of folk like the W3C, WaSP and the major browser creators. — Dave Shea, CSS Zen Garden

    After all, where would we be without them?

    I just wish certain browser vendors would more quickly support the standards that are out there, even if they’re not official per se. But it’s just not high on their list of priorities.

    This is a tired argument though, and I can’t help feel like I’m just beating a dead horse.

    In the end, maybe we can help. Anne’s suggestion of test suites is a good one. But it’s no easy task. And I’m somewhat surprised the W3C doesn’t always provide something of the sort.

  11. CSS 3?!? I’d be happy if they’d just implement a majority of the User Agent Accessibility Guidelines from 2002.

  12. Blame them all…

  13. W3 holds about 70% of the blame for the disaster which is Web design today.

    Their CSS layout model is simply BAD — and blatantly so to any designer who spends 2 hours trying to use it.

    How could this have happened?? How could they have haughtily condemned “tag soup” (which took their ridiculously limited HTML code and turned it into the modern graphical commerce Web), and then released a CSS standard without — it really seems — having first even tried to duplicate 10 random actual Web sites with it?

    They appear to live in a philosophical ivory tower utterly unconcerned with the real world — most certainly the real world of a graphical, English-dominated Web — as long as their abstract theory is “correct” according to some impenetrable PhD dissertation on information architecture.

    Even when it became perfectly clear about 5 years ago that CSS just doesn’t work unless you want to turn the Web into a set of 2-column blog clones, W3 again failed the world by not taking immediate and radical emergency measures to correct the calamity they had created. They just weren’t particularly concerned.

    (Wait a second — even 2-column blog clones aren’t possible without the hack of misusing box floats!)

    There should have been a total and radical re-invention of the CSS positioning model completed and published in 2003, which the browser companies could be implementing in releases today.

    Instead in 2007 we find hints of some upcoming “grid CSS” tucked away at W3, in prominence somewhere behind new standards for deaf-mute Mongolian Phags Pa script users.

  14. Testify sister, testify!

  15. Having recently tried learning some Flash, I actually became reasonably thankful for the W3C’s general slowness of development :)

    There are Flash tutorials littered all over the place, but finding useful stuff for the right version can be a nightmare, and when trying to learn some Actionscript, most of what I could find about AS3 related to people moving from AS2….

    But yes, browsers not being quick enough in supporting features is certainly a bigger deal than standards moving on. First MS decided it didn’t really care about the actual specs and developed based on its own interpretation of them. Then they effectively halted development on the browser altogether, so clearly it’s going to take them some time to catch up with the likes of FF and Opera.

    They’ve quite rightly taken a lot of flak for that, despite saying that they are now working hard to make improvements.

  16. November 7, 2007 by Brad Abrahams

    IE to this day does not support the XHTML MIME type (which is probably most responsible for the death of XHTML)

    @ John Lascurettes: Care to back up this outlandish statement? Not wanting to sound like an XHTML fanboy, but I think that it is far from dead. XHTML has many benefits even when served up as text/html.

  17. The thing I’m concerned about with the slowness at the W3C is that there won’t be anything to implement when the browsers (ahem, more like browser, singular) finally catch up. And it’s clear that Opera, Mozilla, and WebKit would be implementing these specs if they were clear and finalized.

    As far as encouraging browsers to implement the specs, I think one thing we can do is actually start using what is supported. Using Andy Clarke’s spirit of transcendence we can start playing with some of the newer properties while still guaranteeing a usable experience in all browsers. It’s a matter of letting go of the assumption that display has to be identical in all browsers. And then, when those browser makers start to noticeably fall behind they may get their bums in gear and support those specs.
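    A small sketch of that approach, assuming nothing beyond the CSS3 Color draft: declare a widely supported fallback first, then the newer value, and let the cascade do the rest. Browsers that don’t understand the newer value simply keep the fallback:

    ```css
    /* "Use what is supported": progressive enhancement via the cascade.
       Older browsers ignore the rgba() declaration they can't parse and
       keep the solid rgb() fallback; newer ones override it. */
    .callout {
      background-color: rgb(200, 200, 200);        /* fallback */
      background-color: rgba(200, 200, 200, 0.7);  /* CSS3 Color */
    }
    ```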

  18. My issue has never been the speed at which standards are documented and finalized, it is the lack of natural language contained within that documentation that irks me so much. Whenever someone tells me to look at the specification, I just laugh. I don’t want a dissertation paper, just tell me how you think a Web technology should work in a human readable format.

  19. As someone who has recently started to teach students an intro to web design and development with XHTML and CSS, I’ve become aware that the principles, methods and hacks that I’ve gotten used to (as many of us have) when applying CSS to a page really don’t make much sense to a newcomer.

    My students struggle to understand what a float really does and why creating two, three or four columns is so damned unintuitive unlike tables, and I struggle to explain it. I find myself agreeing with john above (#13) that the entire foundation of CSS layout could use some serious examination.

    If there is a grid solution on the horizon, I would expect this to be the top priority for the W3C and vendors alike since it addresses shortcomings with the very basics of how CSS is used to create a layout. Opacity properties can wait a bit longer.

  20. November 8, 2007 by Henry

    @ John Lascurettes: Care to back up this outlandish statement? Not wanting to sound like an XHTML fanboy, but I think that it is far from dead. XHTML has many benefits even when served up as text/html.

    Care to explain what benefits you have experienced with XHTML over plain old HTML?

  21. @Brad

    XHTML that is not served with the proper MIME type is seen as HTML by browsers. Period. So, when you serve up XHTML as text/html, you are serving up HTML and might as well code it as such.

    I agree, there are (or would be) advantages to XHTML if we could serve it up as such. One of them is the “draconian” error handling that someone else pointed out as a drawback of XHTML. I beg to differ on that. Draconian error handling would force developers to make sure that their code was well formed and syntactically correct. The advantages of that?

    • More efficient browsers. The rules of XHTML are strict and therefore UAs (browsers) could render to spec without having to interpret tag soup ambiguities.
    • More efficient developers. Before they ever would have to test cross-browser compatibility (if they had to at all), they’d have to get the page to pass a valid XML parse.
    • Better code means better accessibility (in general).
    • The ability to also use other XML flavors within the same document (e.g., MathML).
    • And an overall improvement of the quality of websites by default.

    But as I said before. If you’re serving XHTML as text/html, it’s not XHTML. It’s HTML. Simple as that. If you don’t believe me do a little search (I’d link to a Google search, but this site won’t let me, but paste this into Google: “xhtml html tag soup”) and see the folks of authority that are saying the same thing.

    Before you tell me that the elements are different between XHTML and HTML, they are not. The legal elements and attributes are identical between HTML 4 transitional and XHTML Transitional, as they are between HTML 4 Strict and XHTML 1.0 Strict.

    The one advantage of doing XHTML as text/html that I’ve found? I’ve used it as social engineering to get old school developers to use better semantics, to separate content and style and to validate their code.
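    For anyone curious what “serving it up as such” involves, here is a minimal sketch for Apache httpd, using the standard mod_mime directive. Real sites would have to content-negotiate as well, because IE does not accept this MIME type and would offer the file for download instead of rendering it:

    ```apache
    # Serve .xhtml files with the XHTML media type instead of text/html.
    # Browsers receiving application/xhtml+xml parse the page as XML,
    # with draconian error handling.
    AddType application/xhtml+xml .xhtml
    ```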

  22. November 8, 2007 by Henry

    @John Lascurettes

    Excellent points. I am sick of developers poo-pooing my code just because it is not XHTML, even though it is valid, semantic HTML 4.01 Strict. I don’t get what this supposed correlation is between accessibility and XHTML. I can make accessible markup in regular HTML just as easily as someone can make inaccessible tag soup with XHTML (which just gets served as text/html anyway).

    To me, if it doesn’t offer any benefits, why should I use it? Plus, I think HTML looks neater without all the self-closing slash nonsense.

  23. November 8, 2007 by Brad Abrahams

    @John & Henry: I guess that my arguments for XHTML are much the same as the ones you’ve described in your reply - strict error handling, semantic markup, etc. For example, you can do some pretty cool stuff if your XHTML markup is properly thought out, such as cross-site Microformat mash-ups and the like. However, you make a perfectly valid point in that it is basically HTML 4.01 when served up as text/html - I’d never really thought about it before. It would seem that everything I like about XHTML is achievable using HTML. So yes, I can see that there probably are very few advantages to using XHTML over well-formed HTML 4.01.

    My question to you is: what are the advantages of using HTML4.01 over XHTML? Or is it pretty much even?

  24. Really Brad? Are you saying that a page marked up with XHTML can never be un-semantic? It’s a magical feature of closing tags with /> that ensures absolute, perfect semantics? RLY?

    Roger, I love your blog, but the peanut gallery commenting on every single post makes me want to cry.

  25. And hell with markdown, that should have been:

    “closing tags with />”

  26. November 8, 2007 by Brad Abrahams

    @Montoya: Not at all. Of course you can still write tag soup with XHTML, as many developers demonstrate on a daily basis - just not the way I do it! I suppose I’m so attached to XHTML because it served as my introduction to the world of proper, lean, semantic markup separated from presentation and behaviour, etc. - which certainly isn’t a bad thing.

    In fact, I fail to see what advantages HTML4.01 has over XHTML - people have only been able to explain to me why XHTML isn’t any better than HTML4.01, which I accept as a perfectly valid point. If somebody can tell me why I should be using HTML4.01 instead of XHTML, then I would definitely be interested in learning more.

  27. @Montoya:

    A page marked up as XHTML could not be tag soup if it was actually served as the proper mime type. It would fail in the browser (and as I described above, that’s a good thing). If it’s served as text/html, then yes, nothing prevents the same tag soup practices of old from creeping into XHTML.


    Neither XHTML served as text/html nor HTML has a distinct advantage over the other. There are arguments to be made about the confusion caused by calling it XHTML and serving it as HTML that would lead one to conclude that sticking to plain old semantic HTML saves on the ambiguities.

    If you’re comfortable marking it up to an XHTML doctype, there’s nothing stopping you. If it helps you personally “code better,” more power to you. Just know that you could be doing the exact same thing with an HTML doctype.

    Depending on how the whole HTML5 thing plays out, there could be a distinct DISadvantage to coding to an XHTML doctype and then having to convert it back to HTML. But it’s too early to worry about old code - scripts, expressions, and utilities abound to convert between HTML and XHTML.

    Actual XHTML often has slight rendering differences, and DOM manipulation through scripting can be profoundly different. If you want to test some local files (on a Mac, anyway), you can simply rename an HTML file to have an .xhtml extension (but don’t upload it to a server, or the typical HTTP MIME assignment (text/html) will override the file extension’s implied MIME type).

  28. November 9, 2007 by Matthew Babbs


    There is actually one point in favour of XHTML, even when served as text/html. It’s that even HTML 4.01 Strict doesn’t require various block-level elements (<p>, for instance) to be closed. If you can guarantee valid XHTML, you can count on closed elements; with valid HTML Strict, you can’t. That can make a huge difference when manipulating the DOM, whether server-side or client-side.
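    A quick illustration of the point about optional end tags. Both fragments below mean the same thing, but only the second spells the tree out explicitly:

    ```html
    <!-- Valid HTML 4.01 Strict: the </p> end tag is optional, so the
         parser has to infer where each paragraph ends. DOM-walking
         scripts then depend on the parser's inference. -->
    <p>First paragraph
    <p>Second paragraph

    <!-- In XHTML the end tags are mandatory, so the tree is unambiguous. -->
    <p>First paragraph</p>
    <p>Second paragraph</p>
    ```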

  29. November 9, 2007 by Daniel S

    I admit, I once was in love with XHTML, but since then, I have learned so much.

    Today I can say: XHTML has only one advantage: you can use it with XML tools (though for 90% of users that probably just means the validator).

    I think this single advantage is not a great one. XHTML has so many disadvantages: parsing rules, stylesheets, DOM representations (createElement working normally in Firefox is a bug). And guess what: XHTML 1.0, 1.1 and 2 are all incompatible with each other. XHTML is not the future, in my opinion. HTML 5 is!

    HTML 5, defining its own syntax, will be able to be checked for errors in a document just like XML. But HTML 5 won’t make users see a “validation error” and nothing else.

    I’d stick with HTML 4.01 Strict (though you are allowed to use the start and value attributes, imho - that’s what’s great about HTML). Browser vendors and many experts seem to agree.

    Even “experts” using XHTML seem to agree, since most of the time their sites can’t be viewed as XHTML. Haha.

  30. Wow talk about digression.

    I have to agree with Brian that, whilst the speed, or lack thereof, may be frustrating, the real issue is the ridiculously complex language they use.

    Perhaps the speed of implementation of standards is governed by the amount of time it takes to translate those documents into a human-readable format.

  31. November 12, 2007 by Stevie D

    In fact, I fail to see what advantages HTML4.01 has over XHTML - people have only been able to explain to me why XHTML isn’t any better than HTML4.01, which I accept as a perfectly valid point.

    HTML 4.01 is understood by all current browsers; XHTML is only understood by a small minority. If you’re serving pages written in XHTML as HTML, you’re going to extra effort for absolutely no reason.

    XHTML can only be understood by most browsers when written with a hack - the space in a self-closing tag is not supposed to be there, but without it, older browsers will not parse the code. HTML can be understood as it is written.
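    The compatibility spacing described above comes from XHTML 1.0’s HTML-compatibility guidelines (Appendix C), which recommend the space so that legacy HTML parsers cope with the stray slash:

    ```html
    <br />  <!-- empty element with a space before the slash (Appendix C style) -->
    <br/>   <!-- equally well-formed XML, but tripped up some old HTML parsers -->
    <br>    <!-- plain HTML 4.01: no slash, no hack needed -->
    ```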

  32. November 12, 2007 by Stevie D

    A page marked up as XHTML could not be tag soup if it was actually served as the proper mime type. It would fail in the browser (and as I described above, that’s a good thing). If it’s served as text/html, then yes, nothing prevents the same tag soup practices of old from creeping into XHTML.

    Maybe we understand different things by the term “tag soup”. I would say it describes any mess of unsemantic code, eg

    <table id="wholepage">
      <table id="header">
        <tr>
          <td id="logo">
            <img src="logo.png" alt="Company logo - blue swirl with red letters" />
          </td>
          <td id="strapline">
            <p class="heading1">… etc.

    It can pass XHTML validation, but is still a pile of steaming mess.

  33. Thanks Roger - Great post. Good perspective.

Comments are disabled for this post (read why), but if you have spotted an error or have additional info that you think should be in this post, feel free to contact me.