Human-Computer Interaction Experts to Gather as Carnegie Mellon Hosts First IEEE International Conference on Multimodal Interfaces

October 15, 2002

Multimodal interfaces combine technologies such as eye tracking, speech and gesture recognition, and lip-reading to make computers and robots more accommodating to human needs.

The conference brings together researchers, developers and end users of these technologies to present, demonstrate and discuss their latest work.

The conference will take place at the Pittsburgh Renaissance Hotel and will include a bus trip to the Carnegie Mellon campus Oct. 15 for demonstrations of some of the latest multimodal technologies. Carnegie Mellon researchers have been working in this area for more than a decade.

“Multimodal interfaces represent an emerging, interdisciplinary direction in research involving spoken language understanding, natural language understanding, image processing, computer vision, pattern recognition and experimental psychology,” said conference chairman Alex Waibel, a professor in the School of Computer Science. “The IEEE has recognized it as an important new direction in research. We’re elevating the field from workshops and interest groups to a major field of scientific endeavor.”

Waibel noted that numerous groups around the world are now developing user interfaces that are increasingly aware of their surroundings and context. “The possible impact is enormous for smarter and less intrusive robots, meeting rooms and offices,” he said.

Keynote speakers at the conference include:

Hiroshi Ishii, Tangible Media Group, Massachusetts Institute of Technology, whose work seeks to give tangible form to digital information. His research focuses on the design of seamless interfaces between humans, digital information and the physical environment.

Clifford Nass, professor of communication, Stanford University, with appointments in computer science; science, technology and society; sociology; and cognitive science. He is the author of “The Media Equation” (with Byron Reeves) and the forthcoming “Voice Activated: The Psychology and Design of Interfaces that Talk or Listen.” He has consulted on the design of more than 100 consumer products and services for companies including Microsoft, Hewlett-Packard and IBM.

Lucas Parra, technology leader for adaptive image signal processing, Sarnoff Corp., and adjunct professor of biomedical engineering, Columbia University. His current research aims to demonstrate that human-computer interfaces can augment human performance. He asks the question: Is it possible to communicate without speaking, writing, pointing or typing, but instead, by reading information directly from the brain via brain-computer interfaces?

For the conference agenda and other information, see: www.is.cs.cmu.edu/icmi/

Demonstrations of multimodal interfaces will take place at Carnegie Mellon from 6:30 to 9 p.m., Tuesday, Oct. 15. Reporters may board buses for campus at the Pittsburgh Renaissance Hotel starting at 6:15 p.m. The buses will depart every 15 minutes. Demos will be held in Rooms 2602 and 2613 of Newell Simon Hall, with additional demonstrations in the Perlis Atrium on the third (street) level of the building. The last bus returning to the Renaissance will leave campus at 9:15 p.m. For more information on the demonstrations, see:

www.is.cs.cmu.edu/icmi/demos.html

Contact:

Anne Watzman

412-268-3830