7/25/2003 - IBM has announced the general availability of a toolkit for creating multimodal applications; application testing is performed in a simulator based on Opera 7 for Windows.
In conjunction with the toolkit, the WebSphere Everyplace Multimodal Environment for Embedix, based on Opera technology, lets developers take advantage of IBM's industry-leading Embedded ViaVoice advanced speech recognition (ASR) and text-to-speech (TTS) engines on a single device. Together, IBM's Multimodal Toolkit for WebSphere Studio and the WebSphere Everyplace Multimodal Environment for Embedix mini-browser allow developers to write and deploy multimodal applications on the Linux-based Sharp Zaurus 5600.
Embedix is a version of Linux tailored for set-top boxes, personal digital assistants and other small devices. The resulting interface enables users to obtain and manage information as the situation dictates — whether spoken or visual — anytime, anyplace. Combined with the Multimodal Toolkit for WebSphere Studio, it lets a developer deliver a graphical interface and speech recognition in a single application.
Using the Multimodal Toolkit for WebSphere Studio, which includes an Integrated Development Environment (IDE) built on the Eclipse framework, developers can use existing skills instead of learning a completely new language, cutting down on overall development time.
The first developer environment based on the XHTML+Voice (X+V) specification, the Multimodal Toolkit for WebSphere Studio is designed for creating multimodal user interfaces. It also allows developers to rapidly convert voice-only and Web-only applications into multimodal applications. The X+V specification combines XHTML and VoiceXML, the most commonly used languages for Web and speech development, and was jointly submitted to the W3C by IBM, Motorola and Opera.
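To illustrate how X+V combines the two languages, here is a minimal sketch of an X+V page: VoiceXML fragments live in the XHTML head under their own namespace, and XML Events attributes bind a voice dialog to a visual form field. The field names, grammar file, and element ids are illustrative assumptions, not taken from IBM's toolkit.

```xml
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:vxml="http://www.w3.org/2001/vxml"
      xmlns:ev="http://www.w3.org/2001/xml-events">
  <head>
    <title>City lookup (illustrative X+V sketch)</title>
    <!-- A VoiceXML form embedded in the XHTML head -->
    <vxml:form id="voice_city">
      <vxml:field name="city">
        <vxml:prompt>Say a city name.</vxml:prompt>
        <!-- "cities.grxml" is a hypothetical speech grammar file -->
        <vxml:grammar src="cities.grxml" type="application/srgs+xml"/>
        <vxml:filled>
          <!-- Copy the recognized value into the visual input field -->
          <vxml:assign name="document.getElementById('city').value"
                       expr="city"/>
        </vxml:filled>
      </vxml:field>
    </vxml:form>
  </head>
  <body>
    <form>
      <p>City:
        <!-- XML Events: focusing the field activates the voice dialog -->
        <input type="text" id="city"
               ev:event="focus" ev:handler="#voice_city"/>
      </p>
    </form>
  </body>
</html>
```

The same page degrades gracefully: a visual-only browser simply ignores the voice markup, while a multimodal browser lets the user either type or speak the city name.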