Understanding and Adapting Bezel-to-Bezel Interactions for Circular Smartwatches in Mobile and Encumbered Scenarios

Bradley Rey, Kening Zhu, Simon T. Perrault, Sandra Bardot, Ali Neshati, Pourang Irani

Published in MobileHCI, 2022

Abstract

Supporting eyes-free interaction, mobility and encumbrance, while providing a broad set of commands on a smartwatch display is a difficult, yet important, task. Bezel-to-bezel (B2B) gestures are valuable for rapid command invocation during eyes-free operation; however, we lack knowledge regarding B2B interactions on circular devices during common usage scenarios. We aim to improve our understanding of the dynamics of B2B interactions in these scenarios by conducting two studies and a third analysis: first, we explore the performance of B2B in a seated position; second, we explore the effect of mobility and encumbrance on the B2B interaction; finally, we improve on the B2B accuracies by calculating features and utilizing machine learning. With the limited interaction capabilities on smartwatches and the importance of the scenario of use, we conclude with applications and design guidelines for improved utilization of B2B that enable effective smartwatch control in common, mobile, and eyes-free scenarios.

In Summary

Bezel gestures show promise: they can be performed eyes-free, make use of typically untouched space at the edge of the screen, and do not conflict with other common touch interactions. As such, we explored the use of bezel-to-bezel (B2B) gestures (slide gestures that start in a segment of the bezel, slide into the screen, and finish by sliding back into the bezel). We explored and algorithmically adapted B2B gestures on a circular smartwatch, both while mobile and while mobile and encumbered. Including these usage scenarios allows us to better promote the use of B2B during everyday smartwatch interaction.
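The paper does not include implementation code for segment detection; as a minimal sketch in Python, assuming a circular display centred at (cx, cy) with segments numbered clockwise from the 12 o'clock position (both assumptions for illustration, not from the paper), a touch point can be mapped to a bezel segment as follows:

```python
import math

def bezel_segment(x, y, cx, cy, n_segments):
    """Map a touch point (x, y) to one of n_segments equal bezel arcs.

    Assumes a circular display centred at (cx, cy), segment 0 centred
    at 12 o'clock, and segments numbered clockwise.
    """
    # atan2(dx, -dy) yields 0 degrees at the top of the screen, growing
    # clockwise (screen y increases downward); wrap into [0, 360).
    angle = math.degrees(math.atan2(x - cx, cy - y)) % 360.0
    arc = 360.0 / n_segments
    # Offset by half an arc so segment 0 is centred on, not starting at, the top.
    return int(((angle + arc / 2.0) % 360.0) // arc)

# Example: on a hypothetical 396 px display centred at (198, 198),
# a touch near the top edge falls in segment 0 of a 4-segment layout.
assert bezel_segment(198, 10, 198, 198, 4) == 0
```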

Key Findings

  • While in a seated position, we found accuracies of 97.2%, 87.5%, and 74.0% for the 4-, 6-, and 8-Bezel Segment conditions, respectively.
  • As expected, mobility and encumbrance significantly decreased accuracy relative to the seated position.
  • While accuracies were reduced, the underlying characteristics of the B2B gesture itself remained consistent across our studies and conditions.
    • In particular, the angle of direction change (i.e., the angle of the turn made near the center of the screen when sliding from the start bezel segment toward the end bezel segment) was consistent and often correct for the intended start and end segments.
    • Using this angle of direction change as the sole classification metric yields up to a 20% improvement in accuracy; a sketch of this computation follows this list.
  • Given these similarities in gesture characteristics, machine learning models can be used to improve the accuracy of B2B gestures; a hedged example also follows below.
  • While 4 segments remain optimal, using a model allows us to accommodate up to 6 segments.
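To illustrate the direction-change feature referenced above, here is a minimal Python sketch. It assumes, purely for illustration, that the turning point of a B2B stroke can be approximated by the touch sample closest to the screen centre; the paper's actual feature extraction may differ.

```python
import math

def direction_change_angle(path, cx, cy):
    """Signed angle (degrees) between the inbound and outbound strokes
    of a B2B gesture.

    `path` is a list of (x, y) touch samples from touch-down to lift-off.
    The turning point is approximated here by the sample nearest the
    screen centre (cx, cy) -- a simplifying assumption for this sketch.
    """
    turn = min(range(len(path)),
               key=lambda i: math.hypot(path[i][0] - cx, path[i][1] - cy))
    (sx, sy), (tx, ty), (ex, ey) = path[0], path[turn], path[-1]
    inbound = math.atan2(ty - sy, tx - sx)
    outbound = math.atan2(ey - ty, ex - tx)
    # Normalise the difference into [-180, 180) degrees.
    return (math.degrees(outbound - inbound) + 180.0) % 360.0 - 180.0
```

Using this single angle, together with the detected start segment, to infer the intended end segment is the idea behind the accuracy improvement reported above.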
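Likewise, the machine-learning improvement can be sketched with a lightweight classifier. The feature set and model below (start angle, direction-change angle, path length; a random forest) are hypothetical stand-ins, not the paper's actual pipeline, which is detailed in the full text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-gesture features: [start_angle, turn_angle, path_length].
# Labels are the intended end bezel segments. Replace the placeholder
# rows below with logged gesture data.
X_train = np.array([[10.0,  95.0, 180.0],
                    [100.0, -88.0, 175.0],
                    [12.0,  93.0, 182.0],
                    [98.0, -90.0, 170.0]])
y_train = np.array([1, 2, 1, 2])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Infer the intended end segment for a newly observed gesture.
predicted_segment = clf.predict([[11.0, 94.0, 179.0]])[0]
print(predicted_segment)  # expected: 1 for this placeholder data
```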

In More Detail

Please review our full paper (linked above) for study details, methodologies, and complete results.