FamiliAR

Introduction · Features · Setup · Build · Usage · Acknowledgments · License

Introduction

FamiliAR introduces fundamental operation and interaction mechanisms in Augmented Reality (AR) on the Microsoft HoloLens 2. It originated within a research project aiming to explore and improve the mental accessibility of instructional content in AR on head-mounted displays (HMDs), and was created to familiarize people who have never experienced AR on an HMD with the underlying technology.

Features

Supported languages:

  • German
  • English

The following features are showcased:

  • Hologram interaction
    • Triggering buttons
    • Manipulating objects
  • Hand-tracking
    • Visualization of virtual hands
  • Eye-tracking
    • Playback of video on gaze (see the sketch after this list)
  • Spatial mapping
    • Visualization
    • Interplay between spatial mesh and objects
  • Speech recognition
    • Global speech commands
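
As a rough illustration of the "playback of video on gaze" feature, the sketch below shows one way such behaviour could be wired up in Unity with the XR Interaction Toolkit that MRTK3 builds on. It is hypothetical and not taken from this repository: the class and field names (`GazeVideoPlayer`, `interactable`, `video`) are made up, and since eye gaze is only one of several hover sources in MRTK3, a real implementation might additionally filter by interactor type.

```csharp
// Hypothetical sketch (not from this repository): plays a video while the
// referenced interactable is hovered, e.g. by the MRTK3 gaze interactor.
using UnityEngine;
using UnityEngine.Video;
using UnityEngine.XR.Interaction.Toolkit;

[RequireComponent(typeof(VideoPlayer))]
public class GazeVideoPlayer : MonoBehaviour
{
    [SerializeField] private XRBaseInteractable interactable; // e.g. an MRTK3 interactable on the station
    private VideoPlayer video;

    private void Awake()
    {
        video = GetComponent<VideoPlayer>();
        interactable.hoverEntered.AddListener(OnHoverEntered);
        interactable.hoverExited.AddListener(OnHoverExited);
    }

    private void OnHoverEntered(HoverEnterEventArgs _) => video.Play();
    private void OnHoverExited(HoverExitEventArgs _) => video.Pause();

    private void OnDestroy()
    {
        interactable.hoverEntered.RemoveListener(OnHoverEntered);
        interactable.hoverExited.RemoveListener(OnHoverExited);
    }
}
```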

Setup

  1. Clone this repository (or download and unpack the .zip)
  2. Add the project to Unity Hub, select the specified Unity version (used: 2022.3.9f1), and set UWP as the target platform
  3. Open the project. This will take some time, as the necessary packages are installed automatically

Build

  1. Open the project build settings (File -> Build Settings)
  2. Switch the platform to Universal Windows Platform if it is not already selected
  3. Set Architecture to ARM 64-bit
  4. Set Build and Run to Local Machine or Remote Device, depending on your preference
  5. Set Build Configuration to Master
  6. Build (and run) the project on the HoloLens 2
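
The same settings can also be applied from a Unity editor script. The sketch below is hypothetical and not part of this repository: the scene path and output folder are placeholders, and the Master configuration plus the Local Machine / Remote Device deployment target are still chosen in the Build Settings window or the generated Visual Studio solution, as described above.

```csharp
// Hypothetical editor script (not from this repository) mirroring the manual
// build steps above. Place it in an Editor/ folder inside the project.
using UnityEditor;

public static class HoloLensBuild
{
    [MenuItem("Build/Build UWP for HoloLens 2")]
    public static void Build()
    {
        // Steps 2-3: Universal Windows Platform with an ARM 64-bit architecture.
        EditorUserBuildSettings.wsaArchitecture = "ARM64";

        var options = new BuildPlayerOptions
        {
            scenes = new[] { "Assets/Scenes/Main.unity" }, // placeholder scene path
            locationPathName = "Builds/UWP",               // placeholder output folder
            target = BuildTarget.WSAPlayer,
            options = BuildOptions.None
        };

        // Steps 4-6: build the UWP solution; deploy target and the Master
        // configuration are selected when building/deploying that solution.
        BuildPipeline.BuildPlayer(options);
    }
}
```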

Usage

The user is greeted by a start dialog that stays in view until the start button is tapped. By default, German is set as the application language. To switch to English, raise your left hand into view with the palm facing towards you. A hand menu will appear for toggling the language setting.

After start is tapped, the demo environment is spawned in front of the user. It consists of four main stations:

  1. Piano (to the right)
  2. Interactable cubes (center)
  3. Buttons (left)
  4. Speech recognition (far left)

For each station, a video demonstrating the basic underlying principle is played on gaze. To reposition the AR environment in front of the user again, simply say "position". Otherwise, have fun exploring :)
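
As a rough illustration of such a global speech command, the sketch below shows how a "position" keyword could be handled with Unity's built-in KeywordRecognizer on UWP. It is hypothetical and not necessarily how FamiliAR implements it; the `environmentRoot` field is an assumed parent object of the demo stations.

```csharp
// Hypothetical sketch (not from this repository): a global "position" voice
// command that moves the demo environment back in front of the user.
using UnityEngine;
using UnityEngine.Windows.Speech;

public class RepositionOnVoiceCommand : MonoBehaviour
{
    [SerializeField] private Transform environmentRoot; // assumed root of the demo stations
    [SerializeField] private float distance = 1.5f;     // metres in front of the user

    private KeywordRecognizer recognizer;

    private void Start()
    {
        recognizer = new KeywordRecognizer(new[] { "position" });
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        if (args.text != "position") return;

        // Place the environment in front of the HMD, facing the user.
        Transform head = Camera.main.transform;
        Vector3 forwardFlat = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        environmentRoot.position = head.position + forwardFlat * distance;
        environmentRoot.rotation = Quaternion.LookRotation(forwardFlat, Vector3.up);
    }

    private void OnDestroy()
    {
        if (recognizer != null)
        {
            recognizer.Dispose();
        }
    }
}
```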

Acknowledgments

This project uses elements from different scenes included in the Unity MRTK3 Developer Template. For reference, see: MRTK Dev Template

License

This project is licensed under the MIT License. See LICENSE.md for further details.