Multimodal and Ubiquitous Technology for Healthcare
My work on Human-Centered Computing is situated at the intersection of computer science, cognitive science, and the health sciences. I am a computer scientist who investigates tools, techniques, and infrastructure supporting the deployment of innovative interactive multimodal and tangible devices in context, and an ethnographer who uses novel methods to study and quantify the cognitive consequences of introducing this technology into everyday life.
My main interests range from software engineering to human-computer interaction, with a particular focus on mobile health, computer-supported cooperative work, medical informatics, and mobile and ubiquitous computing.
We are investigating a variety of ubiquitous and multimodal technologies, such as digital pens, augmented reality glasses (Google Glass), and depth cameras (Microsoft Kinect), to support users living with specific health conditions such as aphasia, colorblindness, and bruxism.