MAGENTA (Multi-Analysis of Guidelines by an ENhanced Tool for Accessibility) is our tool supporting the inspection-based evaluation of accessibility and usability guidelines.
The increasing need to check Web site accessibility has stimulated interest in tools supporting the various activities involved in accessibility evaluation. Although some tools for this purpose already exist, we believe that such tools have to become more flexible.
In particular, there is often a need to validate multiple sets of guidelines, repair Web pages, and provide better reports for the evaluators. On this page, we discuss these issues and how they have been addressed in the design of MAGENTA.
The importance of tool support in Web site accessibility evaluation is gaining increasing acceptance because many countries have adopted legislation that imposes some level of compliance with accessibility guidelines. The goal of tool support is not to replace human evaluators and designers, but to help them manage the complexity of numerous Web sites, apply evaluation criteria consistently, and make their work more efficient.
In general, making a Web site more accessible and usable requires considerable effort by developers in handling Web page code and many specific design guidelines: they have to decide which principles to use for the specific case, how to apply them, and when. Evaluating Web pages requires a lot of effort as well. In order to support developers' work during the evaluation and repair process based on multiple guideline sets, we have developed a semi-automatic tool, MAGENTA, which supports various sets of guidelines.
Currently, MAGENTA supports three sets of guidelines:
First, the evaluator selects the guideline set to be considered through a radio button. Then, through a list of checkboxes, the evaluator decides which guidelines of the selected set need to be checked.
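The two-step selection described above can be sketched as follows. This is an illustrative Python sketch, not MAGENTA's actual code: the guideline set names, identifiers, and the `select_guidelines` function are all hypothetical.

```python
# Hypothetical data: each guideline set maps to the identifiers of its
# guidelines (the real sets and IDs are MAGENTA's, not shown here).
GUIDELINE_SETS = {
    "set-1": ["1.1", "1.2", "2.1", "3.4"],
    "set-2": ["A", "B", "C"],
}

def select_guidelines(selected_set, checked_ids):
    """Return the guidelines to evaluate: the radio-button choice picks
    one set, the checkbox choices narrow it to a subset."""
    available = GUIDELINE_SETS[selected_set]           # radio-button choice
    return [g for g in available if g in checked_ids]  # checkbox choices

print(select_guidelines("set-1", {"1.1", "2.1"}))  # ['1.1', '2.1']
```

The point of keeping selection separate from checking is that the same evaluation engine can run over any subset of any supported guideline set.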
The tool not only checks whether the guidelines are satisfied but, in case of failure, it also helps to modify the code in order to make the resulting Web site more usable and accessible. Thus, when an error is detected, the tool determines which parts of the code are problematic and provides support for corrections, indicating the elements that have to be modified or added.
The process is not completely automatic because in some cases the tool requires the designers to provide information that cannot be generated automatically. The tool knows which parts must be corrected (i.e., tags or attributes), and the developers supply those parts or their values (e.g., proper link contents or proper names for frames).
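The division of labour described above can be illustrated with a small sketch: the tool can locate the faulty element automatically (here, an `img` without an `alt` attribute), but the corrective value itself has to come from the developer. This uses Python's standard `html.parser` and is only a sketch of the idea, not MAGENTA's implementation.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Detect <img> tags lacking an alt attribute (one example of a
    guideline check whose repair needs developer-supplied text)."""
    def __init__(self):
        super().__init__()
        self.problems = []  # src attributes of the offending images

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.problems.append(attrs.get("src", "<unknown>"))

checker = MissingAltChecker()
checker.feed('<img src="logo.png"><img src="photo.jpg" alt="A photo">')
print(checker.problems)  # ['logo.png']
```

The checker can pinpoint `logo.png` as the image needing repair, but only the developer knows what alternative text actually describes that image, which is why the process stays semi-automatic.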
When an evaluation or analysis is carried out, the resulting report plays a very important role. The report needs to be clear and easy to understand: when many pages and several guidelines are evaluated simultaneously, the resulting report can be long and difficult to read. The main purpose of an automatic evaluation is to provide developers with support in detecting and repairing potential problems with limited effort and in a short time. Furthermore, evaluators may have limited knowledge of how to handle the page code or the specific guidelines. Thus, the evaluation report should address all such aspects. In our new tool, MAGENTA, the report is designed with the above-mentioned drawbacks in mind: the simple report showing just the main checking results of a single-page evaluation has been replaced with a more structured and flexible one.
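One way to picture the move from a flat report to a structured one is to group the raw results per page and per guideline, so that a long multi-page, multi-guideline report stays navigable. The following Python sketch is illustrative only; the data, field names, and grouping strategy are assumptions, not MAGENTA's actual report format.

```python
from collections import defaultdict

def structure_report(results):
    """Group flat (page, guideline, message) tuples into a nested
    report: page -> guideline -> list of messages."""
    report = defaultdict(lambda: defaultdict(list))
    for page, guideline, message in results:
        report[page][guideline].append(message)
    return report

# Hypothetical flat results from evaluating two pages:
flat = [
    ("index.html", "1.1", "img missing alt"),
    ("index.html", "12.1", "frame missing title"),
    ("about.html", "1.1", "img missing alt"),
]
report = structure_report(flat)
print(sorted(report))  # ['about.html', 'index.html']
```

With this shape an evaluator can jump straight to one page, or to all violations of one guideline, instead of scanning a single long list.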
A more recent tool for accessibility and usability evaluation is MAUVE.