Achintya Bhowmik
Bio:
Dr. Achintya Bhowmik serves on the faculty of Stanford University as an adjunct professor at the Stanford School of Medicine, where he advises research and lectures on sensory augmentation, computational perception, cognitive neuroscience, and intelligent systems. He is also an affiliate faculty member of the Stanford Institute for Human-Centered Artificial Intelligence, the Wu Tsai Neurosciences Institute, and the Human Performance Alliance.
Dr. Bhowmik is the chief technology officer and executive vice president of engineering at Starkey Hearing Technologies, a privately held medical devices company with over 5,000 employees and operations in over 100 countries worldwide. In this role, he is responsible for the company’s technology strategy; leads its research and development, engineering, and program management departments; and drives the transformation of hearing aids into multifunctional wearable health and communication devices with advanced sensors and artificial intelligence.
Previously, Dr. Bhowmik was the vice president and general manager of the Perceptual Computing Group at Intel Corporation, where he was responsible for R&D, engineering, operations, and business in the areas of 3D sensing and interactive computing, computer vision and artificial intelligence, autonomous robots and drones, and immersive virtual and merged reality devices.
Dr. Bhowmik is a member of the Forbes Technology Council, the board of trustees of the National Captioning Institute, the board of directors of OpenCV, the board of advisors of the Fung Institute for Engineering Leadership at the University of California, Berkeley, and the industry advisory board of the Institute for Engineering in Medicine and Biomedical Engineering at the University of Minnesota. He also serves on the boards of directors and advisors of several technology startup companies.
He has also held adjunct and guest professor positions at the University of California, Berkeley, the Liquid Crystal Institute at Kent State University, Kyung Hee University in Seoul, and the Indian Institute of Technology, Gandhinagar. He received his Bachelor of Technology from the Indian Institute of Technology, Kanpur, earned his PhD at Auburn University, and attended the Executive Program at Stanford University. He has authored over 200 publications, including two books, and holds over 80 granted patents.
Awards:
His awards and honors include Fellow of the Institute of Electrical and Electronics Engineers (IEEE), Fellow of the Asia-Pacific Artificial Intelligence Association (AAIA), President and Fellow of the Society for Information Display (SID), Top 25 Healthcare Technology CTOs by the Healthcare Technology Report, Notable Leaders in Healthcare by Twin Cities Business, Healthcare Heroes award from the Business Journals, Industrial Distinguished Leader award from the Asia-Pacific Signal and Information Processing Association, IEEE Distinguished Industry Speaker, TIME’s Best Inventions, Red Dot Design award, MUSE Design award, and the Artificial Intelligence Excellence award.
Abstracts:
Transforming Hearing Aids into Multifunctional Health and Communication Devices with Artificial Intelligence
With over 1.5 billion people suffering from hearing loss globally according to the World Health Organization, hearing aids are critically important wearable medical devices. Untreated hearing loss has been linked to increased risks of social isolation, depression, dementia, fall injuries, and other health issues. However, partly due to the historical stigma associated with assistive devices and their traditionally single-function nature, only a small fraction of the people who need help with hearing have actually adopted these devices.
In this talk, we will present a new class of multifunctional in-ear devices with embedded sensors and artificial intelligence. Powered by an extremely energy-efficient embedded deep neural network accelerator, these devices continuously classify sounds and enhance speech with advanced machine learning algorithms, monitor physical and cognitive activities, automatically detect falls and issue alerts, and act as a cloud-connected personal assistant. Furthermore, these devices stream phone calls and music with all-day battery life, translate languages, transcribe speech, and remind the wearer of medication and other tasks.
Rapid progress in sensors and artificial intelligence is bringing an amazing array of new devices, applications, and user benefits to the world. We will discuss how these technologies are transforming traditional hearing aids into multipurpose devices that help people not only hear better but also live better lives in many more ways.
Augmenting Human Senses: Enhancing Perception with Technology and Bioscience
This lecture will introduce the neuroscience of human sensory perception (hearing, balance, vision, smell, taste, touch) and explore avenues by which technology and bioscience can enhance and augment these human senses. Employing artificial intelligence, emerging devices with embedded sensors may afford perceptual and cognitive abilities beyond the limits of our biological systems. We will consider emerging multifunctional devices whose capabilities extend beyond their sensory functions through connection to an ecosystem of technologies that characterize activities (e.g., physical, social), enhance safety (e.g., fall alerts, balance improvement), track health (e.g., multi-sensory biometric monitoring), enhance communication (e.g., speech enhancement, language translation, virtual assistant), augment cognition (e.g., memory, understanding), and monitor emotional wellbeing (e.g., sentiment, depression). We will also review the simulation of multisensory stimuli toward achieving immersive experiences with virtual and augmented reality technologies.