Are accessibility evaluation tools useless?

It is understandable for designers, developers, and content producers to want accessibility checking to be as easy as validating HTML. Either it validates or it doesn’t, and you can let a free, automated tool do the work for you.

Unfortunately it isn’t that easy to evaluate accessibility. Since Web accessibility is meant to make sure that humans can use websites, a machine cannot (yet) be trusted to determine how accessible a site is.

But what can the accessibility evaluation tools that do exist be used for? There’s got to be something you can trust in the reports they generate, right? Yes, there is - but not much without first manually evaluating the results. Karl Dawson has taken a closer look at this and presents his findings in How useful are accessibility evaluation tools? (also available at Accessites.org). In short: they have some use, but rarely enough to warrant paying for them.

In the article, Karl points out something that I for some reason haven’t paid enough attention to. WCAG 1.0 Checkpoint 11.2 is called “Avoid deprecated features of W3C technologies”. In practice this means that many sites that do the bare minimum to pass validation - throw in a 4.01 Transitional doctype and only fix what the HTML validator complains about - will fail an accessibility evaluation. (No, validity does not equal accessibility, but a fairly common misconception is that it does.)
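To make the distinction concrete, here is a hypothetical snippet that passes the HTML validator under a Transitional doctype yet still fails checkpoint 11.2, because border, align, and font are all deprecated:

```html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
  "http://www.w3.org/TR/html4/loose.dtd">
<html lang="en">
<head><title>Valid, but fails checkpoint 11.2</title></head>
<body>
  <!-- All valid in 4.01 Transitional, yet every feature below is deprecated -->
  <img src="logo.gif" alt="Company logo" border="0" align="left">
  <font color="red">Sale!</font>
</body>
</html>
```

So a page can get a clean bill of health from the validator while still using exactly the features WCAG 1.0 tells you to avoid.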

This is good since it catches a lot of sites - mainly in the public sector - that only pay lip service to web standards and falsely claim to meet accessibility requirements. I am so sick of reviewing sites that claim to fully comply with all WAI and national guidelines, yet are peppered with inline styles, inline JavaScript, deprecated attributes like border, align, height, and language, and any number of other bad coding practices.

Not that any of that necessarily affects accessibility in a negative way, but it still says a lot about the developers’ general attitude towards front-end coding.

Posted on September 8, 2006 in Accessibility, Quicklinks

Comments

  1. Is it really that bad to use inline styles?

    I try to stay away from them in order to keep my HTML clean, but sometimes there’s just this one header, anchor, or whatever that needs just a little more padding than the rest.

    In some cases it’s just not worth it to add a unique id to that specific element.

    Don’t mean to go off-topic, by the way ;)

  2. First, to answer Harmen: putting inline styles into the document does mean you have ‘how it will be displayed’ in amongst ‘what is to be displayed’. I’d consider it always better to separate content from how it is presented, and the same goes for behaviour. It’s a core principle.

    To carry on the conversation, Roger: I worked ever so briefly on a government site here with that very issue - but the minimum fell short of validation because the techs on the project who pushed web standards felt ‘no automated tool is worth a crock, including the HTML validator’. The result is a site which is better than its FrontPage predecessor but really, IMO, fails dismally [I can’t post the link for future work reasons though]. It’s a battle I hope to return to.

    I think it’s because of the number of people and managers all pushing on the project from different perspectives - money, time, experience, expected outcomes - instead of leaving it to the techs. Bureaucracy.

    It would be nice if external experts they do respect - like WaSP - could send them such a statement somehow to give it a push. Maybe it would work, maybe not.

  3. I don’t envision myself ever paying for an accessibility checker - unless suddenly all the free ones disappear from the face of the Earth. The reason is exactly what you mentioned — none of them are wholly accurate alone, or even in combination.

    If I run a site through Bobby, chances are good I’ll get slightly different results than if I run it past Cynthia. I run it through all the verifiers I normally use, then I check everything as plain text and use my developer toolbar (the FF plugin) to shut off different features in the site individually to simulate different environments, plus I run it through tests on different browsers/readers and OS’s. Granted, after a while you learn which kinds of things you don’t have to check, and you get in the habit of coding it properly the first time to save yourself having to do all that checking later - but it can still be tedious at times. Sometimes it’s a royal pain in the …

    It’s just not as simple as a single online verifier yet. I really wish it were, but it’s not. I think there might be ways to improve the online checks we have, but I doubt it will ever be fully automated.

    Sometimes I will use inline CSS too, especially if I’m pretty sure it’s a one-time instance or something really quite exclusive. I do understand why I shouldn’t, though. Maybe I’m just lazy at times. I do use them for content image placement on my blog for the simple reason that I set a precedent long ago, and even though I know better, I have simply opted for consistency.

    I’m surprised you didn’t mention Accessites, Roger, since that was what Karl wrote the article for ;-)

  5. September 9, 2006 by Roger Johansson (Author comment)

    Harmen: Occasional use of the odd inline style is generally not a major problem, though the goal should always be to avoid using them. What I’m talking about here are sites whose layout is almost entirely based on inline styles. I’m not naming names, but Google for “kommun” (Swedish for municipality) and you should find a couple of examples on the first page.

    nortypig: Yes, internal “fighting” in project teams is never good for the end result.

    Mike: I was going to but ended up linking to Karl’s site since that’s where the comments are, and then forgot about the crosspost. I’ve added a link to Accessites as well now :-).

    Agreed: a thorough accessibility analysis cannot yet be performed by a machine. This is mainly because accessibility depends heavily on content, which is difficult for a machine to evaluate.

    But this is not a black and white issue. Of course there is value in having automated tools do some basic analysis.

    How applications are developed is changing. Smaller teams working with agile methods and modern frameworks (e.g. Rails) require that individual developers are generalists. Thus, they need to make sure that their work at least reaches a basic level of accessibility. Basic accessibility evaluation has to go right into the unit testing framework.

    This is where tools are useful. Developers of these tools try to put some of their knowledge into these tools and make them available to others. In that way both developers and customers can assert basic accessibility without having spent an entire career in the field.
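A minimal sketch of that idea - an accessibility check living alongside ordinary unit tests - might look like this in Python, where render_page is a hypothetical stand-in for whatever your framework actually produces:

```python
import re
import unittest

def render_page():
    # Hypothetical stand-in for a page rendered by your framework.
    return '<html><body><img src="logo.gif" alt="Company logo"></body></html>'

IMG_TAG = re.compile(r'<img\b[^>]*>', re.IGNORECASE)

def images_missing_alt(html):
    """Return every <img> tag in the markup that lacks an alt attribute."""
    return [tag for tag in IMG_TAG.findall(html) if 'alt=' not in tag.lower()]

class BasicAccessibilityTest(unittest.TestCase):
    def test_all_images_have_alt_text(self):
        # Fails the build if any rendered image lacks alternative text.
        self.assertEqual(images_missing_alt(render_page()), [])
```

Run with python -m unittest; the point is simply that a basic machine-verifiable check can fail the build like any other regression, leaving the genuinely human judgements for review.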

    The W3C validator is commonly confused with being an accessibility evaluation tool. The validator only measures one aspect of code quality and very few aspects of accessibility.

    It is also easy to confuse bad coding practices with accessibility issues. Things like inline styles, JavaScript, and most deprecated attributes have very little impact on accessibility. My guess is that the intention of checkpoint 11.2 is to avoid elements like font where a semantic alternative should be used instead.

    I think I’ll have to expand on this thought in a separate forum. Thank you for your time.

  7. September 9, 2006 by Colin Meerveld

    I work as an accessibility tester/evaluator in the Netherlands. We don’t use automatic evaluation tools. Automatic evaluation tools are good for giving web developers a first impression, but not more than that. Notice that I use the word automatic. I work on a tool that helps with evaluating accessibility. The tool can - for example - spider a website and find all the pages with a data table. You then have to check manually whether each table is correct. Tools of this kind are, in my opinion, very useful.

  8. September 9, 2006 by Roger Johansson (Author comment)

    Peter: Of course automated tools have their use when the results are evaluated by a human being. I was comparing them to the “yes or no” answer the HTML validator gives you since that is unfortunately how people tend to use them.

    Things like inline styles, javascript, and most deprecated attributes have very little impact on accessibility.

    I’d say it depends on which elements and attributes as well as the extent of their use (downloading 50 KB of unnecessary inline javascript and CSS for every page is a problem for modem users). However if you claim to adhere to WCAG 1.0, checkpoint 11.2 forces you to avoid at least some bad coding practices, which is a good thing :-).

    • Inline styles may interfere with the user stylesheets some people with reading difficulties or vision impairments require.
    • JavaScript is not well supported in screen readers, and some PCs have it disabled due to security worries. Dependence on JavaScript can often make parts of a web service impossible to access.
    • Attributes like target="_blank" can cause new windows to be spawned without warning. This is at best irritating and can cause disorientation in some devices, making parts of the website less easy to access.

    Also, deprecated attributes like these, which merge presentation and behaviour directly into the markup, make maintaining the website more difficult. They are deprecated for good reason, and new websites should not be using them.

  9. Ben:

    • Style rules may interfere regardless of where you have placed them. Very few users apply their own stylesheet in the browser (if you require high contrast or zoom, this is typically done at the OS level).

    • Most screen readers work on top of a browser (typically IE) and use the DOM as IE sees it. Whether the JavaScript is inline or not does not matter to the screen reader. I agree that dependence on JavaScript is bad, but that is a different issue from JavaScript being inline or not.

    • I said “most deprecated attributes”. I agree that target="_blank" can be bad from an accessibility standpoint.

    Roger: I agree, avoiding bad coding practices is good. But mostly for other reasons than accessibility.

    If every page contains 50 KB of unnecessary code, that impacts accessibility. But a large page size is a topic of its own and often has causes other than inline script and style data.

  10. Thanks Roger!

  11. “I work on a tool that helps evaluating accessibility. The tool can - for example - spider a website and find al the pages with a data table. You have to look manually if the table is correct. This kind off tools are in my opinion, very useful.”

    Keep at it. I think there’s a gap in the market for a great program to help with accessibility testing. (I’m not positive on this, as I’ve only got familiarity with the very disappointing Bobby.)

    Any tool can do two things:

    1. Make a process quicker
    2. Make a process better

    Any tool that made accessibility testing quicker would be a great boon to accessibility in general, because it would make testing cheaper, meaning it would be done more. Any tool that made testing better would be similarly great, as fewer accessibility issues would find their way into sites.

    Not all problems can be made instantaneous and perfect via a tool. HTML validation pretty much can, because it’s entirely machine-verifiable, and machines are much better at avoiding errors when validating documents than humans.

    The great accessibility tool I’m envisaging would do great yes/no evaluation of every guideline that can be machine-verified, e.g. validity (a WCAG 1.0 Priority 2 requirement), colour-contrast, JavaScript links, lack of heading cells in tables, lack of alt attributes, etc.

    It would then list all the guidelines that need human verification, and let the tester write notes for each one. It would exclude guidelines that don’t apply to a page (e.g. table guidelines where the page contains no table).
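The machine-verifiable half of such a tool can be sketched in a few lines. This toy Python checker (imaginary, and ignoring complications like nested tables) flags two of the checks mentioned above:

```python
from html.parser import HTMLParser

class MachineChecks(HTMLParser):
    """Collects two machine-verifiable problems: <img> tags without
    an alt attribute, and <table> elements without any <th> header cells."""

    def __init__(self):
        super().__init__()
        self.problems = []
        self.table_has_th = False

    def handle_starttag(self, tag, attrs):
        if tag == 'img' and 'alt' not in dict(attrs):
            self.problems.append('img without alt attribute')
        elif tag == 'table':
            self.table_has_th = False
        elif tag == 'th':
            self.table_has_th = True

    def handle_endtag(self, tag):
        if tag == 'table' and not self.table_has_th:
            self.problems.append('table without header cells')

def check(html):
    """Run the checks over an HTML string and return problem descriptions."""
    checker = MachineChecks()
    checker.feed(html)
    return checker.problems
```

The guidelines that need a human - is the alt text meaningful, are the headers the right ones - would then be presented as a checklist with room for the tester’s notes, as described above.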

    For real bonus points, it’d recognise common elements across pages (e.g. script files, stylesheets, maybe even allow identification of masthead and navigation elements that don’t change between pages).

    I think a great, usable, reliable tool like that would be well worth spending money on.

  12. I think part of the problem is the constantly shifting state of what is considered best practice. Take using a language attribute as an example. Perhaps you ran a site through an accessibility validator and it came back telling you that you needed a language attribute. So you ran around madly adding them. Then somewhere down the line having a language attribute becomes the wrong thing to do, and you are faced with finished products, no budget for changes, and no longer the best practice in place. Maybe, like doctors and lawyers, web designers should be forced to take a two week refresher course in what’s new every year, just to stay in business.

    I’ve recently been working on the development of a multilingual website, an experience which made me realize how few government websites, for example, take heed of WCAG 1.0 checkpoint 4.1, “Clearly identify changes in the natural language of a document’s text”. Since that is a priority 1 checkpoint, failure means failure to reach even the minimum WCAG A level. So here’s an example of a critical review requirement that automated checkers routinely miss. To see just how bad things can be, take a look at the nightmare European Union webpage, if you can. All is not without hope though, as this much-improved page demonstrates - a page which has been fixed since I complained about it.
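Marking a language change is straightforward in the markup itself; a minimal illustration (hypothetical text):

```html
<!-- Primary language declared once on the root element -->
<html lang="en">
<head><title>Example</title></head>
<body>
  <!-- WCAG 1.0 checkpoint 4.1: mark inline changes of natural language -->
  <p>As the French say, <span lang="fr">c’est la vie</span>.</p>
</body>
</html>
```

The hard part for a tool is not the markup but detecting where such markers are missing, which is where language-detection heuristics would come in.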

    It seems to me that it might be a good idea for an accessibility checking tool developer to offer a feature that helps to spot changes of language. I suspect that if we were to do a little research we could find that something has been published on heuristics for this.

    I notice that a number of accessibility review tools already read human-language text, for example to check for stupid “click here” link text. Hard-coding assumptions about the natural language in this way is of course evil (smile), because, surprisingly, there are parts of the web that are not written in English. But there is the seed of a good idea there, just badly implemented. An example of a checker that uses this kind of assertion is the interesting Toronto University ATRC Accessibility Checker, which is still very much under development but well worth watching and an excellent initiative.

    Anyway, I’d be interested in hearing about any pointers to further reading.

    (BTW, I wrote <a href="http://www.utoronto.ca/atrc/research.html">Toronto University <abbr title="Adaptive Technology Resource Center" lang="en-ca">ATRC</abbr> Accessibility Checker</a> back there, as I was feeling exceptionally righteous. But the engine ate my hard work. Sob.)

  14. September 13, 2006 by Cecil Ward

    Following on from pauldwaite’s comment earlier. I’ve been working on a tool which uses XSLT to generate XHTML and not only carries out accessibility checks as part of the build process, but also generates inherently correct markup so that it simply is not possible to make certain kinds of mistakes in the first place.

    Of course this kind of tool doesn’t prevent complete stupidity and it doesn’t remove the need for human review.

    But I have found that being in control of your own tools, or at least having tools that are extensible, is worth a lot, because if you make a mistake once (like the stupid click here) then you can add an assertion which at least means you will never be able to make that precise mistake again. That’s better than nothing, and sometimes you will be able to do better and generalise from that one particular error.

    In every case though, tools that make errors impossible are better than tools that allow errors to be made first and then flag them up later. And checks built into the build process are good because you can not forget to run them.

    I am in the happy position of being able to design my own tools and control the development process, a luxury which others may not have. However, designers with a small amount of XSLT knowledge can soon knock up a few assertions which can act as simple in-house QA tools and can cover for deficiencies in other publicly available tools. There are also a number of open sources where you can get lists of assertions to get you started. XSLT 1.0 and XPath may be frightening, ugly and clunky, but the combination of XSLT and XHTML is powerful and is one of the (few) reasons for using XHTML as opposed to HTML 4.

    For example, I recently added a test for the checkpoint

    If two links contain the same text, they should not point to different targets.
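Outside XSLT, the same assertion can be sketched in a few lines of Python (illustrative only, not the actual tool described here; it only handles simple links with no nested markup):

```python
import re
from collections import defaultdict

# Naive link matcher: href in double quotes, plain text content.
LINK = re.compile(r'<a\s[^>]*href="([^"]*)"[^>]*>(.*?)</a>',
                  re.IGNORECASE | re.DOTALL)

def ambiguous_links(html):
    """Return link texts that point to more than one distinct target."""
    targets = defaultdict(set)
    for href, text in LINK.findall(html):
        # Normalize whitespace and case so "Read  More" == "read more".
        targets[' '.join(text.split()).lower()].add(href)
    return sorted(text for text, hrefs in targets.items() if len(hrefs) > 1)
```

So ambiguous_links('&lt;a href="/a"&gt;Read more&lt;/a&gt; &lt;a href="/b"&gt;Read more&lt;/a&gt;') flags “read more”, while two identical links to the same target pass.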

    So, the manifesto: if the tools out there are useless, consider rolling your own, or contribute to an existing project.

    And open-ended tools are at least capable of continuous improvement.

  15. A requirement for valid HTML trumps a requirement to avoid deprecated features, a questionable concept in its own right. Some deprecated features are valid HTML and have no bearing on accessibility (e.g., align on img).


Comments are disabled for this post (read why), but if you have spotted an error or have additional info that you think should be in this post, feel free to contact me.