Program

Schedule

The conference sessions will be held online; links to the sessions will be published as they become available. Please note that times are shown in Eastern European Time (EET, UTC+2), matching the conference's original location, Rovaniemi, and additionally in Japan Standard Time (JST, UTC+9). If you registered for the conference but did not receive the login details for the streaming platform, contact us here: [email protected]
All sessions are streamed live on YouTube: https://www.youtube.com/watch?v=uij0jRwgbe8.

Monday, 22nd of February 2021

Time (EET) Time (JST) Talk
Doctoral Consortium

Tuesday, 23rd of February 2021

Time (EET) Time (JST) Talk
Opening
08:45 - 09:00 15:45 - 16:00 Conference Opening
09:00 - 10:00 16:00 - 17:00 Opening Keynote
Mel Slater
10:00 - 10:30 17:00 - 17:30 Break
Session 1: Remixed Bodies
Session Chair: Yamen Saraiji
10:30 - 10:45 17:30 - 17:45 MultiSoma: Distributed Embodiment with Synchronized Behavior and Perception
Reiji Miura, Shunichi Kasahara, Michiteru Kitazaki, Adrien Verhulst, Masahiko Inami, and Maki Sugimoto
10:45 - 11:00 17:45 - 18:00 Dynamic Shared Limbs: An Adaptive Shared Body Control Method Using EMG Sensors
Ryo Takizawa, Takayoshi Hagiwara, Adrien Verhulst, Masaaki Fukuoka, Michiteru Kitazaki and Maki Sugimoto
11:00 - 11:15 18:00 - 18:15 Independent Control of Supernumerary Appendages Exploiting Upper Limb Redundancy
Hideki Shimobayashi, Tomoya Sasaki, Arata Horie, Riku Arakawa, Zendai Kashino and Masahiko Inami
11:15 - 11:30 18:15 - 18:30 Research on the transcendence of bodily differences, using sport and human augmentation medium
Ryoichi Ando, Isao Uebayashi, Hayato Sato, Hayato Ohbayashi, Shota Katagiri, Shuhei Hayakawa and Kouta Minamizawa
11:30 - 11:45 18:30 - 18:45 Ubiquitous Body: Effect of Spatial Arrangement of Task’s View on Managing Multiple Tasks
Yukiko Iwasaki and Hiroyasu Iwata
11:45 - 12:45 18:45 - 19:45 Lunch Break and Social Activities
Session 2: Augmented Cameras
Session Chair: Niels Henze
12:45 - 13:00 19:45 - 20:00 Deep Learning-Based Scene Simplification for Bionic Vision
Nicole Han, Sudhanshu Srivastava, Aiwen Xu, Devi Klein and Michael Beyeler
13:00 - 13:15 20:00 - 20:15 FaceRecGlasses: A Wearable System for Recognizing Self Facial Expression Using Compact Wearable Cameras
Hiroaki Aoki, Ayumi Ohnishi, Naoya Isoyama, Tsutomu Terada and Masahiko Tsukamoto
13:15 - 13:30 20:15 - 20:30 CircadianVisor: Image Presentation With an Optical See-Through Display in Consideration of Circadian Illuminance
Takumi Tochimoto, Yuichi Hiroi and Yuta Itoh
13:30 - 13:45 20:30 - 20:45 Advantage and Misuse of Vision Augmentation - Exploring User Perceptions and Attitudes using a Zoom Prototype
Chloe Eghtebas, Francisco Kiss, Marion Koelle and Paweł Woźniak
13:45 - 14:15 20:45 - 21:15 Coffee Break
Session 3: Future of Speech Interfaces
Session Chair: Valentin Schwind
14:15 - 14:30 21:15 - 21:30 SilentMask: Mask-type Silent Speech Interface with Measurement of Mouth Movement
Hirotaka Hiraki and Jun Rekimoto
14:30 - 14:45 21:30 - 21:45 Derma: Silent Speech Interaction Using Transcutaneous Motion Sensing
Jun Rekimoto and Yui Nishimura
14:45 - 15:00 21:45 - 22:00 Conversational Partner’s Perception of Subtle Display Use for Monitoring Notifications
Jacob Logas, Kelsie Belan, Thad Starner and Blue Lin
Session 4: Wearables Beyond the Wrist
Session Chair: Kai Kunze
15:30 - 15:45 22:30 - 22:45 Detecting Episodes of Increased Cough Using Kinetic Earables
Tobias Röddiger, Michael Beigl, Michael Hefenbrock, Daniel Wolffram and Erik Pescara
15:45 - 16:00 22:45 - 23:00 Portable 3D Human Pose Estimation for Human-Human Interaction using a Chest-Mounted Fisheye Camera
Kohei Aso, Dong-Hyun Hwang and Hideki Koike
16:00 - 16:15 23:00 - 23:15 CapGlasses: Untethered Capacitive Sensing with Smart Glasses
Denys J.C. Matthies, Chamod Weerasinghe, Bodo Urban and Suranga Nanayakkara
16:15 - 16:30 23:15 - 23:30 Exploratory Design of a Hands-free Video Game Controller for a Quadriplegic Individual
Atieh Taheri and Misha Sra
16:30 - 16:45 23:30 - 23:45 Virtual Whiskers: Spatial Directional Guidance using Cheek Haptic Stimulation in a Virtual Environment
Fumihiko Nakamura, Adrien Verhulst, Kuniharu Sakurada and Maki Sugimoto
16:45 - open end 23:45 - open end Social Activities on Discord

Wednesday, 24th of February 2021

Time (EET) Time (JST) Talk
Session 5: Physical Interfaces for Movement
Session Chair: Pawel W. Wozniak
09:00 - 09:15 16:00 - 16:15 SmartAidView Jacket: Providing visual aid to lower the underestimation of assistive forces
Swagata Das, Velika Wongchadakul and Yuichi Kurita
09:15 - 09:30 16:15 - 16:30 Virtual Physical Task Training: Comparing Shared Body, Shared View and Verbal Task Explanation
Jens Reinhardt, Marco Kurzweg and Katrin Wolf
09:30 - 09:45 16:30 - 16:45 Hippocampal Cognitive Prosthesis, Memory and Identity: Case Study for the Role of Ethics Guidelines for Human Enhancement
Yasemin J. Erden and Philip Brey
09:45 - 10:00 16:45 - 17:00 CV-Based Analysis for Microscopic Gauze Suturing Training
Mikihito Matsuura, Shio Miyafuji, Erwin Wu, Satoshi Kiyofuji, Taichi Kin, Takeo Igarashi and Hideki Koike
10:00 - 10:30 17:00 - 17:30 Coffee Break
Session 6: Augmented Vision
Session Chair: Yuta Itoh
10:30 - 10:45 17:30 - 17:45 A Machine Learning Model Perceiving Brightness Optical Illusions: Quantitative Evaluation with Psychophysical Data
Yuki Kubota, Atsushi Hiyama and Masahiko Inami
10:45 - 11:00 17:45 - 18:00 POV Display and Interaction Methods extending Smartphone
Yura Tamai, Maho Oki and Koji Tsukada
11:00 - 11:15 18:00 - 18:15 From Strangers to Friends: Augmenting Face-to-face Interactions with Faceted Digital Self-Presentations
Mikko Kytö, Ilyena Hirskyj-Douglas and David McGookin
11:15 - 11:30 18:15 - 18:30 Interactive Eye Aberration Correction for Holographic Near-Eye Display
Kenta Yamamoto, Ippei Suzuki, Kosaku Namikawa, Kaisei Sato and Yoichi Ochiai
11:30 - 12:00 18:30 - 19:00 Break
Posters, Demos, and Design Exhibition
12:00 - 13:00 19:00 - 20:00 Posters, Demos, and Design Exhibition
  • REFLECTIONS ON AIR: An Interactive Mirror for the Multisensory Perception of Air (Jessica Broscheit, Susanne Draheim, Kai von Luck and Qi Wang)
  • PAL: Wearable and Personalized Habit-support Interventions in Egocentric Visual and Physiological Contexts (Mina Khan, Glenn Fernandes and Pattie Maes)
  • Jumple: Interactive Contents for the Virtual Physical Education Classroom in the Pandemic Era (Soohyun Shin, Jaekyung Cho and Seong-Woo Kim)
  • Sparkle: A Detachable and Versatile Wearable Sensing Platform in a Sustainable Casing (Adarsh Ravi and Hsin-Liu Cindy Kao)
  • Demo: Towards Universal User Interfaces for Mobile Robots (Dávid Rozenberszki and Gábor Sörös)
  • GemiN' I: Seamless Skin Interfaces Aiding Communication through Unconscious Behaviors (Shuyi Sun, Neha Deshmukh, Xin Chen, Hao-Chuan Wang and Katia Vega)
  • Design and Implementation of an Input Interface for Wearable Devices using Pulse Wave Control by Compressing the Upper Arm (Yuma Akimoto and Kazuya Murao)
  • DualBreath: Input Method Using Nasal and Mouth Breathing (Ryoya Onishi, Tao Morisaki, Shun Suzuki, Saya Mizutani, Takaaki Kamigaki, Masahiro Fujiwara, Yasutoshi Makino and Hiroyuki Shinoda)
  • EarRecorder: A Multi-Device Earable Data Collection Toolkit (Likun Fang, Tobias Röddiger, Felix Schmid and Michael Beigl)
  • Reducing Muscle Activity when Playing Tremolo by Using Electrical Muscle Stimulation (Arinobu Niijima, Toki Takeda, Ryosuke Aoki and Yukio Koike)
  • Tranquillity at Home: Designing Plant-mediated Interaction for Better Fatigue Assessment (Michi Kanda and Kai Kunze)
  • Cough Activated Dynamic Face Visor (Timo Luukkonen, Ashley Colley, Tapio Seppänen and Jonna Häkkilä)
  • Exploring a Dynamic Change of Muscle Appearance in VR, Based on Muscle Activity and/or Skeleton Position (Edouard Ferrand, Adrien Verhulst, Masahiko Inami and Maki Sugimoto)
  • Boiling Mind - A Dataset of Physiological Signals during an Exploratory Dance Performance (Zhuoqi Fu, Jiawen Han, Dingding Zheng, Moe Sugawa, Taichi Furukawa, Chernyshov George, Hynds Danny, Padovani Marcelo, Marky Karola, Kouta Minamizawa, Jamie A Ward and Kai Kunze)
  • Heavenly Hunt: Using Body Tracking for Involving Museum Visitors in Digital Storytelling (Caglar Genc and Jonna Hakkila)
  • Designing Socially Acceptable Light Therapy Glasses for Self-managing Seasonal Affective Disorder (Christian Nordstrøm Rasmussen, Minna Pakanen and Marianne Graves Petersen)
  • Towards Immersive Virtual Reality Simulations of Bionic Vision (Justin Kasowski, Nathan Wu and Michael Beyeler)
  • Moving Visual Stimuli on Smart Glasses Affects the Performance of Subsequent Tasks (Eiichi Hasegawa, Naoya Isoyama, Nobuchika Sakata and Kiyoshi Kiyokawa)
  • Text Summary Augmentation for Intelligent Reading Assistant (Pramod Vadiraja, Andreas Dengel and Shoya Ishimaru)
13:00 - 14:00 20:00 - 21:00 Lunch Break and Get-Together in Gather.Town
Session 7: Augmentations from Head to Toes
Session Chair: Bruno Fruchard
14:00 - 14:15 21:00 - 21:15 Wearable System for Promoting Salivation
Kai Washino, Ayumi Ohnishi, Tsutomu Terada and Masahiko Tsukamoto
14:15 - 14:30 21:15 - 21:30 HemodynamicVR - Adapting the User's Field Of View during Virtual Reality Locomotion Tasks using Wearable Functional Near-Infrared Spectroscopy
Hiroo Yamamura, Holger Baldauf and Kai Kunze
14:30 - 14:45 21:30 - 21:45 Augmented Foot: A Comprehensive Survey of Foot Augmentation Devices
Don Samitha Elvitigala, Jochen Huber and Suranga Nanayakkara
14:45 - 15:00 21:45 - 22:00 Motion-specific Browsing method by mapping to a circle for personal video Observation with Head-Mounted Displays
Natsuki Hamanishi and Jun Rekimoto
15:00 - 15:15 22:00 - 22:15 Exploring Pseudo Hand-Eye Interaction on the Head-Mounted Display
Myung Jin Kim and Andrea Bianchi
15:15 - 16:00 22:15 - 23:00 Coffee Break
Closing
16:00 - 16:30 23:00 - 23:30 Awards Ceremony
16:30 - 17:00 23:30 - 00:00 Conference Closing
17:00 - 18:00 00:00 - 01:00 Coffee Break and Socializing
18:00 - 19:00 01:00 - 02:00 Closing Keynote
Pattie Maes
Keynotes

Mel Slater

Mel Slater is a Distinguished Investigator at the University of Barcelona and co-Director of the Event Lab (Experimental Virtual Environments for Neuroscience and Technology). He was previously Professor of Virtual Environments in the Department of Computer Science at University College London. He has been involved in virtual reality research since the early 1990s and has been first supervisor of 40 PhDs in graphics and virtual reality since 1989. He held a European Research Council Advanced Grant, TRAVERSE (2009-2015), and recently started a second Advanced Grant, MoTIVE (2018-2022). He is Field Editor of Frontiers in Virtual Reality and Chief Editor of its Human Behaviour in Virtual Reality section. His publications can be seen at http://publicationslist.org/melslater.

Pattie Maes

Pattie Maes is a professor in MIT's Program in Media Arts and Sciences and until recently served as academic head. She runs the Media Lab's Fluid Interfaces research group, which aims to radically reinvent the human-machine experience. Coming from a background in artificial intelligence and human-computer interaction, she is particularly interested in the topic of cognitive enhancement, or how immersive and wearable systems can actively assist people with memory, attention, learning, decision making, communication, and wellbeing.
