Program

Program Schedule

The schedule of the individual sessions is listed directly below.

Session Overview

Opening Keynote

Session 1: Wearables and Sensors

April 4th 11:00 - 11:15 WhisperMask: a noise suppressive mask-type microphone for whisper speech

Hirotaka Hiraki, Shusuke Kanazawa, Takahiro Miura, Manabu Yoshida, Masaaki Mochimaru and Jun Rekimoto

April 4th 11:15 - 11:30 Synthetic Visual Sensations: Augmenting Human Spatial Awareness with a Wearable Retinal Electric Stimulation Device

Valdemar Munch Danry, Laura Chicos, Matheus Fonseca, Ishraki Kazi and Pattie Maes

April 4th 11:30 - 11:45 Looking From a Different Angle: Placing Head-Worn Displays Near the Nose

Yukun Song, Parth Arora, Srikanth T. Varadharajan, Rajandeep Singh, Malcolm Haynes and Thad Starner

April 4th 11:45 - 12:00 Future So Bright, Gotta Wear Shades: Lens Tint May Affect Social Perception of Head-Worn Displays

Sofia Vempala, Joseph Mushyakov, Srikanth Tindivanam Varadharajan and Thad Starner

Session 2: Extended Reality

April 4th 14:00 - 14:15 Everyday Life Challenges and Augmented Realities: Exploring Use-Cases For, and User Perspectives on, an Augmented Everyday Life

Florian Mathis

April 4th 14:15 - 14:30 GestureMark: Shortcut Input Technique using Smartwatch Touch Gestures for XR Glasses

Juyoung Lee, Minju Baeck, Hui-Shyong Yeo, Thad Starner and Woontack Woo

April 4th 14:30 - 14:45 Holistic Patient Assessment System using Digital Twin for XR Medical Teleconsultation

Taeyeon Kim, Hyunsong Kwon, Kyunghyun Cho and Woontack Woo

April 4th 14:45 - 15:00 Exploring the Kuroko Paradigm: The Effect of Enhancing Virtual Humans with Reality Actuators in Augmented Reality

Émilie Fabre and Yuta Itoh

April 4th 15:00 - 15:15 Real-time Slow-motion: A Framework for Slow-motion Without Deviating from Real-Time

Goki Muramoto, Hiroto Saito, Sohei Wakisaka and Masahiko Inami

Session 3: Wearables and Sensors 2

April 4th 16:30 - 16:45 Identifying Hand-based Input Preference Based on Wearable EEG

Kaining Zhang, Zehong Cao, Xianglin Zheng and Mark Billinghurst

April 4th 16:45 - 17:00 Personal Identification and Authentication Method Using Ear Images Acquired with a Camera-Equipped Hearable Device

Yurina Mizuho, Yohei Kawasaki, Takashi Amesaka and Yuta Sugiura

April 4th 17:00 - 17:15 iFace: Hand-Over-Face Gesture Recognition Leveraging Impedance Sensing

Mengxi Liu, Hymalai Bello, Bo Zhou, Paul Lukowicz and Jakob Karolus

April 4th 17:15 - 17:30 Auditory Interface for Empathetic Synchronization of Facial Expressions between People with Visual Impairment and the Interlocutors

Takayuki Komoda, Hisham Elser Bilal Salih, Tadashi Ebihara, Naoto Wakatsuki and Keiichi Zempo

Session 4: Augmenting in VR

April 5th 09:00 - 09:15 Techniques using Parallel Views for Asynchronous VR Search Tasks

Theophilus Teo, Kuniharu Sakurada, Maki Sugimoto, Gun Lee and Mark Billinghurst

April 5th 09:15 - 09:30 Social Simon Effect in Virtual Reality: Investigating the Impact of Coactor Avatars' Visual Representation

Xiaotong Li, Yuji Hatada and Takuji Narumi

April 5th 09:30 - 09:45 Multiplexed VR: Individualized Multiplexing Virtual Environment to Facilitate Switches for Group Ideation Creativity

Masahiro Kamihira, Juro Hosoi, Yuki Ban and Shin Ichi Warisawa

April 5th 09:45 - 10:00 VR Remote Tourism System with Natural Gaze Induction Without Causing User Discomfort

Shogo Aoyagi, Takayoshi Yamada, Kelvin Cheng, Soh Masuko and Keiichi Zempo

April 5th 10:00 - 10:15 GUI Presentation Method based on Binocular Rivalry for Non-overlay Information Recognition in Visual Scenes

Kai Guo, Yuki Shimomura, Juro Hosoi, Yuki Ban and Shin Ichi Warisawa

Session 5: Learning Augmentation

April 5th 14:00 - 14:15 FastPerson: Enhancing Video Learning through Effective Video Summarization that Preserves Linguistic and Visual Contexts

Kazuki Kawamura and Jun Rekimoto

April 5th 14:15 - 14:30 SkillsInterpreter: A Case Study of Automatic Annotation of Flowcharts to Support Browsing Instructional Videos in Modern Martial Arts using Large Language Models

Kotaro Oomori, Yoshio Ishiguro and Jun Rekimoto

April 5th 14:30 - 14:45 Kavy: Fostering Language Speaking Skills and Self-Confidence Through Conversational AI

Sankha Cooray, Chathuranga Hettiarachchi, Vishaka Nanayakkara, Denys Matthies, Yasith Samaradivakara and Suranga Nanayakkara

April 5th 14:45 - 15:00 Serendipity Wall: A Discussion Support System Using Real-time Speech Recognition and Large Language Model

Shota Imamura, Hirotaka Hiraki and Jun Rekimoto

Closing Keynote

Accepted Demos

1): Demonstrating VabricBeads - Jefferson Pardomuan, Shio Miyafuji, Nobuhiro Takahashi and Hideki Koike

2): Metacognition-EnGauge: Real-time Augmentation of Self-and-Group Engagement Levels Understanding by Gauge Interface in Online Meetings - Ko Watanabe, Andreas Dengel and Shoya Ishimaru

3): RadioMe: Adaptive Radio with Music Intervention and Reminder System for People with Dementia in Their Own Home - Patrizia Di Campli San Vito, Xiaochen Yang, James Ross, Gözel Shakeri, Stephen Brewster, Satvik Venkatesh, Alex Street, Jörg Fachner, Paul Fernie, Leonardo Muller-Rodriguez, Ming Hung Hsu, Helen Odell-Miller, Hari Shaji, Paulo Vitor Itaborai, Nicolas Farina, Sube Banerjee, Alexis Kirke and Eduardo Miranda

4): E-Scooter Dynamics: Unveiling Rider Behaviours and Interactions with Road Users through Multi-Modal Data Analysis - Hiruni Kegalle, Danula Hettiachchi, Jeffrey Chan, Flora Salim and Mark Sanderson

5): PairPlayVR: Shared Hand Control for Virtual Games - Hongyu Zhou, Pamuditha Somarathne, Treshan Ayesh Peirispulle, Chenyu Fan, Zhanna Sarsenbayeva and Anusha Withana

6): Tuning Infill Characteristics to Fabricate Customizable 3D Printed Pressure Sensors - Jiakun Yu, Praneeth Perera and Anusha Withana

Accepted Posters

1): How do people control the "self" for motion? Investigation of the effects of pneumatic gel muscle intervention on motion self-control and ego depletion in emotional self-control - Chiaki Raima and Yuichi Kurita

2): Exploring relationship between EMG, confusion and smoothness of work progress in assembly tasks - Tzu-Yang Wang, Suyeong Rhie, Mai Otsuki, Hideaki Kuzuoka and Takaya Yuizono

3): Put Our Mind Together: Iterative Exploration for Collaborative Mind Mapping - Ying Yang, Tim Dwyer, Zachari Swiecki, Benjamin Lee, Michael Wybrow, Maxime Cordeil, Teresa Wulandari, Bruce H Thomas and Mark Billinghurst

4): In-the-Wild Exploration of the Impact of the Lunar Cycle on Sleep in a University Cohort with Oura Rings - Shota Arai, Andrew Vargo, Benjamin Tag and Koichi Kise

5): Personalizing Augmented Flashcards Towards Long-Term Vocabulary Learning - Yuichiro Iwashita, Andrew Vargo, Motoi Iwata and Koichi Kise

6): Lumbopelvic ratio based screening tool for Lumbar health assessment - Gunarajulu Renganathan and Yuichi Kurita

7): Spatial feature optimization through a genetic algorithm in a sensory-association-based brain-machine interface - Hikaru Tsunekawa, Yasuhisa Maruyama, Laura Alejandra Martinez-Tejada, Kazutoshi Hatakeyama, Tomohiro Suda, Chizu Wada, Takumi Inomata, Kimio Saito, Yuji Kasukawa, Naohisa Miyakoshi and Natsue Yoshimura

8): Owl-Vision: Augmentation of Visual Field by Virtual Amplification of Head Rotation - Michiteru Kitazaki, Ryu Onodera, Junya Kataoka, Yasuyuki Inoue, Yukiko Iwasaki and Gowrishankar Ganesh

9): Augmenting Sleep Behavior with a Wearable: Can Self-Reflection Help? - Hannah Nolasco, Andrew Vargo, Marc Moreeuw, Toma Hara and Koichi Kise

10): Motor Enhancement through an Individually Optimized Imperceptible Vibration Stimulation - Takashi Suzuki

11): Creating viewpoint-dependent display on Edible Cookies - Takumi Yamamoto, Biyon Fernando, Takashi Amesaka, Anusha Withana and Yuta Sugiura

12): QA-FastPerson: Extending Video Platform Search Capabilities by Creating Summary Videos in Response to User Queries - Kazuki Kawamura and Jun Rekimoto

13): Aged Eyes: Optically Simulating Presbyopia Using Tunable Lenses - Qing Zhang, Yoshihito Kondoh, Yuta Itoh and Jun Rekimoto
