Augmented Reality in Medicine

In ERmed, we enter the realm of augmented and mixed reality, combining multiple input and output modalities.

Mixed reality refers to the merging of real and virtual worlds to produce new environments and visualisations where physical and digital objects co-exist and interact in real time.

Situation awareness meets mutual knowledge for augmented cognition.

ERmed Modalities

- World Innovation 1: We synchronise mobile eye trackers with optical see-through head-mounted displays and register the environment.
- World Innovation 2: We detect and track objects in a terminator view in real time for augmented cognition.
- World Innovation 3: We use a binocular eye tracker to register the augmented virtuality with the sight of the user.
See our first domain-independent application.
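The gaze-to-display registration in World Innovations 1 and 3 can be sketched as a calibration step that maps eye-tracker gaze coordinates onto HMD display pixels. The following is a minimal sketch, assuming normalised gaze coordinates and an independent linear map per axis; a full system would fit a homography from more calibration fixations, and all names and values here are illustrative, not the ERmed implementation:

```python
def calibrate(gaze_pts, disp_pts):
    """Fit a per-axis linear map (x' = a*x + b) from two calibration
    fixations; a minimal stand-in for homography-based registration."""
    (gx0, gy0), (gx1, gy1) = gaze_pts
    (dx0, dy0), (dx1, dy1) = disp_pts
    ax = (dx1 - dx0) / (gx1 - gx0)   # horizontal scale
    bx = dx0 - ax * gx0              # horizontal offset
    ay = (dy1 - dy0) / (gy1 - gy0)   # vertical scale
    by = dy0 - ay * gy0              # vertical offset
    return lambda g: (ax * g[0] + bx, ay * g[1] + by)

# Two hypothetical calibration fixations: normalised gaze -> display pixels.
to_display = calibrate([(0.2, 0.3), (0.8, 0.7)], [(128, 144), (512, 336)])
centre = to_display((0.5, 0.5))  # gaze at the scene centre
```

In practice such a map must be re-estimated whenever the HMD shifts on the user's head, which is what makes self-calibration attractive.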

ERmed is a DFKI project based on RadSpeech.
Principal investigator: Dr. Daniel Sonntag (DFKI)
Associated researchers and developers: Markus Weber (DFKI), Takumi Toyama (DFKI),
Mikhail Blinov (Russian Academy of Arts), Christian Schulz (DFKI), Alexander Prange (DFKI),
Kirill Afanasev (DFKI), Tigran Mkrtchyan (DFKI), Nikolai Zhukov (DFKI), Jason Orlosky (Osaka University)
Medical consultants: Prof. Dr. Alexander Cavallaro (ISI Erlangen), Dr. Matthias Hammon (ISI Erlangen)

In order to provide a new foundation for managing dynamic content and to improve the usability of optical see-through HMD and mobile eye tracker systems, we implement a salience-based activity recognition system and combine it with an intelligent multimodal management system for text and image movement.
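One way to read this combination: the recognised focus of attention decides where augmented text is allowed to appear. Below is a minimal sketch, assuming normalised display coordinates and a fixed set of hypothetical label slots; the slot names and the farthest-slot policy are illustrative assumptions, not the ERmed system itself:

```python
import math

# Hypothetical anchor slots (normalised display coordinates) where the
# HMD could render a text label.
SLOTS = [(0.15, 0.15), (0.85, 0.15), (0.15, 0.85), (0.85, 0.85)]

def place_label(gaze, slots=SLOTS):
    """Move the label to the slot farthest from the current fixation,
    so rendered text never occludes the attended (salient) region."""
    return max(slots, key=lambda s: math.dist(s, gaze))

slot = place_label((0.2, 0.2))  # fixation top-left, label goes bottom-right
```

A real text movement manager would additionally smooth over fixations to avoid labels jumping on every saccade.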


[Figure: towards self-calibration, gaze-guided object recognition]
[Figure: text movement management]
[Figure: medical application example: ERmed scenario]
[Figure: ERmed architecture]
[Figure: ERmed HMD and eye tracker combination]