Are accessibility evaluation tools useless?
It is understandable for designers, developers, and content producers to want accessibility checking to be as easy as validating HTML. Either it validates or it doesn't, and you can let a free, automated tool do the work for you.
Unfortunately, evaluating accessibility isn't that easy. Since Web accessibility is about making sure that humans can use websites, a machine cannot (yet) be trusted to determine how accessible a site is.
So what can the accessibility evaluation tools that do exist be used for? There's got to be something you can trust in the reports they generate, right? Yes, there is, but not much unless you first evaluate the results manually. Karl Dawson has taken a closer look at this and presents his findings in How useful are accessibility evaluation tools? (also available at Accessites.org). In short: they have some use, but rarely enough to warrant paying for them.
In the article, Karl points out something that I for some reason haven't paid enough attention to. WCAG 1.0 Checkpoint 11.2 is called "Avoid deprecated features of W3C technologies". In practice this means that many sites that do the bare minimum to pass validation (throw in an HTML 4.01 Transitional doctype and fix only what the HTML validator complains about) will fail an accessibility evaluation. (No, validity does not equal accessibility, but a fairly common misconception is that it does.)
Such sites also tend to be full of tables used for layout, deprecated presentational markup, no declared document language, and any number of other bad coding practices. Not that any of that necessarily affects accessibility in a negative way, but it still says a lot about the developers' general attitude towards front-end coding.
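To make the point concrete, here is a minimal, hypothetical fragment of the kind of page being described. It passes HTML 4.01 Transitional validation without a single error, yet every presentational feature it uses is deprecated in HTML 4.01, so it would fail WCAG 1.0 Checkpoint 11.2:

```html
<!-- Valid HTML 4.01 Transitional, but bgcolor, center, and font
     are all deprecated, and no document language is declared. -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
  "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head><title>Valid, but not accessible</title></head>
<body bgcolor="#ffffff">
  <center>
    <font face="Arial" size="2" color="#333333">
      The Transitional DTD still permits this markup,
      so the validator has nothing to complain about.
    </font>
  </center>
</body>
</html>
```

The same document run against a Strict doctype would produce validation errors for each deprecated element and attribute, which is one reason a Strict doctype gets you closer to passing Checkpoint 11.2 for free.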