The MARIA Environment provides a novel solution that exploits task models (represented in the ConcurTaskTrees notation) and user interface models (in the MARIA language) for the design and development of interactive applications based on Web services, targeting various types of platforms (desktop, smartphone, vocal, multimodal, ...). In this process, the tool automatically imports service and annotation descriptions and supports the interactive association of basic system tasks with Web service operations. A number of semi-automatic transformations then exploit the information in these service and annotation descriptions to derive usable multi-device service front ends.
To generate an abstract user interface from the task model, remember to select an appropriate value for the type attribute of the interactive tasks, and to select the type Visualise for system tasks that require modifications at the presentation level.
The new multimodal user interface generator of MARIA produces HTML implementations structured into two parts: one for the graphical interface and one for the vocal interface.
Each user interface element is annotated with a specific CSS class during the generation phase, according to the indications of the CARE properties. If an element has a vocal part, the class tts is added to output elements and to the prompt part of interaction elements, while the class asr is added to the input parts of interaction elements only when that part is equivalent (both graphical and vocal) or vocal-only.
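To illustrate, the following is a hypothetical fragment of generated markup using the class names described above; the element ids, labels, and content are purely illustrative:

```html
<!-- Output element with a vocal part: its text is read aloud via TTS -->
<span id="welcome_msg" class="tts">Welcome to the booking service</span>

<!-- Interaction element: vocal prompt (tts on the label)
     plus an equivalent vocal input (asr on the input part) -->
<label for="city" class="tts">Destination city</label>
<input id="city" type="text" class="asr"/>
```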
The generated HTML elements are marked with these classes because the script injected into the page uses them to identify all the graphical elements that have an associated vocal part. The included script exploits the Web Speech API (W3C Web Speech API Specification), which Chrome supports since version 31 on the desktop platform and since version 38 on the mobile platform. The CanIUse site provides a list of browsers supporting the Web Speech API.
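The class-based scan can be sketched as follows. This is a minimal illustration, not the actual injected script: the function name is hypothetical, and the mapping onto the Web Speech API calls is shown in comments, since speech synthesis and recognition only run in a supporting browser.

```javascript
// Decide which vocal behaviours an element needs from its CSS class list.
// The class names tts/asr come from the generator described above;
// the function itself is an illustrative sketch.
function vocalRoles(classList) {
  return {
    speak: classList.includes("tts"),  // element's text should be synthesised
    listen: classList.includes("asr"), // element accepts vocal input
  };
}

// In the browser, the injected script could walk the page and map the
// roles onto the Web Speech API, roughly:
//
//   document.querySelectorAll(".tts, .asr").forEach((el) => {
//     const roles = vocalRoles(Array.from(el.classList));
//     if (roles.speak) {
//       speechSynthesis.speak(new SpeechSynthesisUtterance(el.textContent));
//     }
//     if (roles.listen) {
//       const rec = new webkitSpeechRecognition(); // prefixed in Chrome
//       rec.onresult = (e) => { el.value = e.results[0][0].transcript; };
//       rec.start();
//     }
//   });
```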