From the Blog

Posted by gduhamel on May 22 at 8:36 pm

Natural user interaction with a virtual environment is another subject I planned to focus on. Today, touch screens and motion-sensing devices bring us closer to information, making it tangible.
The display, however, remains the same: we are still captive to a two-dimensional screen.

When information is layered on top of the user's perception, it seems logical to use the user's point of view as the basis for interaction.

The objective in coding this application was to enable interaction with augmented reality through video eyewear devices. Using a camera to sense the user's actions, the system lets the user interact with the virtual display.

Point of reference

I used the shape recognition algorithm I wrote last month to detect interaction attempts.
Its main advantage is a fair tolerance for imprecision: the hand may be angled slightly to one side or the other, tilted up or down, or appear longer or shorter, and it will still be detected.

This makes the method well suited to multiple users: the only requirement is to repeat the same predefined gesture.
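To make the idea concrete, here is a minimal Python sketch of tolerance-based matching. The descriptor names, reference values, and tolerances are illustrative assumptions, not the actual algorithm: a detected shape is accepted when each of its descriptors falls within a tolerance band around the reference gesture.

```python
def matches_gesture(shape, reference, tolerances):
    """Return True if every descriptor in `reference` is matched by
    `shape` within the corresponding absolute tolerance."""
    return all(
        key in shape and abs(shape[key] - reference[key]) <= tolerances[key]
        for key in reference
    )

# Hypothetical reference gesture and per-descriptor tolerance bands:
reference = {"aspect_ratio": 2.0, "angle_deg": 0.0}
tolerances = {"aspect_ratio": 0.4, "angle_deg": 15.0}

# A slightly tilted, slightly longer hand still matches:
matches_gesture({"aspect_ratio": 2.2, "angle_deg": 10.0}, reference, tolerances)
# A very different shape does not:
matches_gesture({"aspect_ratio": 1.0, "angle_deg": 50.0}, reference, tolerances)
```

Because each descriptor has its own tolerance band, the matcher degrades gracefully: a hand that is a bit off in one dimension is still accepted, while a shape that is wrong everywhere is rejected.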

Picking

The tricky part, though the shortest.
Each virtual element displayed is known to the system as an array of coordinates. Given the position of the point of reference, we know which area is being “touched”. What remains is a basic geometry exercise:

  • Calculating the interaction direction (a vector)
  • Looking for intersections with known surfaces
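The two steps above amount to ray picking. Here is a hedged sketch, under the simplifying assumption (mine, not the post's) that each item's surface is an axis-aligned rectangle at a fixed depth: cast a ray from the viewpoint through the reference point and keep the nearest intersected item.

```python
def pick(origin, direction, items):
    """Return the nearest item whose rectangle the ray hits, or None.
    `items` maps name -> (xmin, ymin, xmax, ymax, z)."""
    hit, best_t = None, float("inf")
    ox, oy, oz = origin
    dx, dy, dz = direction
    for name, (xmin, ymin, xmax, ymax, z) in items.items():
        if dz == 0:                # ray parallel to the surface plane
            continue
        t = (z - oz) / dz          # ray parameter at the item's depth
        if t <= 0 or t >= best_t:  # behind the viewer, or farther than a hit
            continue
        x, y = ox + t * dx, oy + t * dy
        if xmin <= x <= xmax and ymin <= y <= ymax:
            hit, best_t = name, t
    return hit

# Hypothetical scene: a small button in front of a larger panel.
items = {"button": (-1, -1, 1, 1, 5), "panel": (-3, -3, 3, 3, 10)}
pick((0, 0, 0), (0, 0, 1), items)   # hits the nearer "button"
pick((2, 2, 0), (0, 0, 1), items)   # misses the button, hits the "panel"
```

Real surfaces would need general ray-polygon intersection, but the structure of the search (compute the ray, test each known surface, keep the closest hit) stays the same.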

Interaction types

I wanted at least two possible interactions with virtual items: the simplest, a “click”-like selection, and a simplified drag-and-drop, both relying on the same gesture.

Weight matters

Unlike a mouse click, visual shape recognition is imprecise. False positives can occur and mislead the system about the user's intentions. That is why I introduced a “weight” mechanic that gates every interaction.

  • When an interaction is detected, the virtual item gains weight.
  • When no interaction is detected, the virtual item loses weight.

=> An item is considered clicked when it reaches a defined weight.
=> In a drag-and-drop context, the weight determines how strongly the user's movement influences the targeted item.
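The weight mechanic can be sketched in a few lines of Python. This is a minimal illustration with made-up gain, decay, and threshold values, not the post's actual implementation: each frame the item gains weight while an interaction is detected and loses weight otherwise, and a click fires once the weight crosses a threshold.

```python
class WeightedItem:
    def __init__(self, gain=1.0, decay=0.5, click_threshold=5.0):
        self.weight = 0.0
        self.gain = gain                      # weight added per detected frame
        self.decay = decay                    # weight removed per idle frame
        self.click_threshold = click_threshold

    def update(self, interaction_detected):
        """Advance one frame; return True if the item counts as clicked."""
        if interaction_detected:
            self.weight += self.gain
        else:
            self.weight = max(0.0, self.weight - self.decay)
        return self.weight >= self.click_threshold
```

A brief false positive adds a little weight that decays away before the threshold is reached, while a sustained gesture accumulates enough weight to trigger the click; for drag-and-drop, the same `weight` value could scale how strongly the item follows the hand.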

Demo

In conclusion, letting someone “touch” virtual space is quite simple. The method described above can recognize more than just a hand, as well as multiple gestures.
It is an easy way to bring natural user interaction into augmented reality.