  • Publication
    NeighboAR: Efficient Object Retrieval using Proximity- and Gaze-based Object Grouping with an AR System
    (ACM, 2024-05-28)
    Aleksandar Slavuljica
    Humans only recognize a few items in a scene at once and memorize three to seven items in the short term. Such limitations can be mitigated using cognitive offloading (e.g., sticky notes, digital reminders). We studied whether a gaze-enabled Augmented Reality (AR) system could facilitate cognitive offloading and improve object retrieval performance. To this end, we developed NeighboAR, which detects objects in a user's surroundings and generates a graph that stores object proximity relationships and the user's gaze dwell time for each object. In a controlled experiment, we asked N=17 participants to inspect randomly distributed objects and later recall the position of a given target object. Our results show that displaying the target together with the proximity object with the longest user gaze dwell time helps in recalling the position of the target. Specifically, NeighboAR significantly reduces retrieval time by 33%, the number of errors by 71%, and perceived workload by 10%.
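The NeighboAR abstract above describes a graph that stores object proximity relationships and per-object gaze dwell times, and that cues retrieval by showing the neighbor the user looked at longest. Below is a minimal sketch of such a structure; the class and method names (ProximityGazeGraph, add_observation, best_cue_for) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the kind of proximity/gaze graph
# the NeighboAR abstract describes: nodes are detected objects, edges connect
# objects observed near each other, and each node accumulates gaze dwell time.
from collections import defaultdict

class ProximityGazeGraph:
    def __init__(self):
        self.dwell = defaultdict(float)     # object id -> total gaze dwell time (s)
        self.neighbors = defaultdict(set)   # object id -> ids observed nearby

    def add_observation(self, obj, nearby_objs, dwell_s):
        """Record one detection: accumulate dwell time and proximity edges."""
        self.dwell[obj] += dwell_s
        for other in nearby_objs:
            if other != obj:
                self.neighbors[obj].add(other)
                self.neighbors[other].add(obj)

    def best_cue_for(self, target):
        """Return the proximity neighbor with the longest gaze dwell time,
        i.e. the object shown next to the target as a retrieval cue."""
        candidates = self.neighbors.get(target, set())
        return max(candidates, key=lambda o: self.dwell[o], default=None)

graph = ProximityGazeGraph()
graph.add_observation("mug", nearby_objs=["laptop", "stapler"], dwell_s=0.4)
graph.add_observation("laptop", nearby_objs=["mug"], dwell_s=2.1)
print(graph.best_cue_for("mug"))   # -> "laptop"
```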
  • Publication
    Gaze-enabled activity recognition for augmented reality feedback
    (2024-03-16)
    Andrew Duchowski; Krzysztof Krejtz
    Head-mounted Augmented Reality (AR) displays overlay digital information on physical objects. Through eye tracking, they provide insights into user attention, intentions, and activities, and allow novel interaction methods based on this information. However, in physical environments, the implications of using gaze-enabled AR for human activity recognition have not been explored in detail. In an experimental study with the Microsoft HoloLens 2, we collected gaze data from 20 users while they performed three activities: Reading a text, Inspecting a device, and Searching for an object. We trained machine learning models (SVM, Random Forest, Extremely Randomized Trees) with extracted features and achieved up to 89.6% activity-recognition accuracy. Based on the recognized activity, our system—GEAR—then provides users with relevant AR feedback. Due to the sensitivity of the personal (gaze) data GEAR collects, the system further incorporates a novel solution based on the Solid specification for giving users fine-grained control over the sharing of their data. The provided code and anonymized datasets may be used to reproduce and extend our findings, and as teaching material.
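The abstract above mentions training SVM, Random Forest, and Extremely Randomized Trees classifiers on features extracted from gaze data. The following is a hedged sketch of that classification step using scikit-learn; the synthetic stand-in features and labels are assumptions for demonstration only, not the released dataset.

```python
# Illustrative sketch (not the released GEAR code) of the classification step:
# per-window gaze features are fed to an SVM, a Random Forest, and Extra Trees.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# hypothetical per-window features, e.g. mean fixation duration, saccade amplitude, ...
X = rng.normal(size=(600, 12))
y = rng.integers(0, 3, size=600)        # 0 = read, 1 = inspect, 2 = search

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Random Forest", RandomForestClassifier(n_estimators=200)),
                  ("Extra Trees", ExtraTreesClassifier(n_estimators=200))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```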
  • Publication
    GEAR: Gaze-enabled augmented reality for human activity recognition
    (ACM, 2023-05-30)
    Hermann, Jonas; Jenss, Kay Erik; Soler, Marc Elias
    Head-mounted Augmented Reality (AR) displays overlay digital information on physical objects. Through eye tracking, they allow novel interaction methods and provide insights into user attention, intentions, and activities. However, only a few studies have used gaze-enabled AR displays for human activity recognition (HAR). In an experimental study, we collected gaze data from 10 users on a HoloLens 2 (HL2) while they performed three activities (i.e., read, inspect, search). We trained machine learning models (SVM, Random Forest, Extremely Randomized Trees) with extracted features and achieved up to 98.7% activity-recognition accuracy. On the HL2, we provided users with AR feedback relevant to their current activity. We present the components of our system (GEAR), including a novel solution to enable the controlled sharing of collected data. We provide the scripts and anonymized datasets, which can be used as teaching material in graduate courses or for reproducing our findings.
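Complementing the classifier sketch above, here is a minimal example of the kind of per-window gaze feature extraction that could feed such models; the chosen features are assumptions for illustration and are not taken from the released GEAR scripts.

```python
# A minimal sketch, under assumptions, of summarizing one window of gaze samples
# (x, y, t, plus an on-target flag) into a feature vector for the classifiers above.
import numpy as np

def window_features(x, y, t, on_target):
    """Summarize one time window of gaze samples into a feature vector."""
    dx, dy = np.diff(x), np.diff(y)
    step = np.hypot(dx, dy)                   # sample-to-sample gaze shift
    duration = t[-1] - t[0]
    return np.array([
        step.mean(), step.std(), step.max(),  # movement statistics
        on_target.mean(),                     # fraction of samples on the object
        duration,
    ])

# usage with fake samples
t = np.linspace(0.0, 1.0, 60)
x, y = np.cumsum(np.random.randn(60)), np.cumsum(np.random.randn(60))
on_target = np.random.rand(60) > 0.5
print(window_features(x, y, t, on_target))
```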
  • Publication
    AuctentionAR - Auctioning Off Visual Attention in Mixed Reality
    Mixed Reality technologies are increasingly interwoven with our everyday lives. A variety of powerful Head Mounted Displays have recently entered consumer electronics markets, and more are under development, opening new dimensions for spatial computing. This development will likely not stop at the advertising industry either, as first forays into this area have already been made. We present AuctentionAR, which allows users to sell off their visual attention to interested parties. It consists of a HoloLens 2, a remote server executing the auctioning logic, the YOLOv7 model for image recognition of products that may induce an advertising intent, and several bidders interested in advertising their products. As this system comes with substantial privacy implications, we discuss what needs to be considered in future implementations so as to make this system a basis for a privacy-preserving MR advertising future.
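The AuctentionAR abstract outlines a remote server that runs auctioning logic over products recognized by YOLOv7, with several bidders competing for the user's attention. A small sketch of one plausible auction step follows; the second-price rule and the Bid structure are assumptions, as the abstract does not specify the mechanism.

```python
# Hedged sketch of an auctioning step like the one the abstract outlines:
# a recognized product label is put up for auction and the highest bidder wins,
# here paying the second-highest bid (an assumed, not stated, pricing rule).
from dataclasses import dataclass

@dataclass
class Bid:
    bidder: str
    product_label: str     # e.g. a YOLOv7 class such as "bottle"
    amount: float          # willingness to pay for one impression

def run_auction(detected_label, bids):
    """Return (winner, price) for one detected object, or None if nobody bids."""
    relevant = sorted((b for b in bids if b.product_label == detected_label),
                      key=lambda b: b.amount, reverse=True)
    if not relevant:
        return None
    winner = relevant[0]
    price = relevant[1].amount if len(relevant) > 1 else winner.amount
    return winner.bidder, price

bids = [Bid("brand_a", "bottle", 0.12), Bid("brand_b", "bottle", 0.09)]
print(run_auction("bottle", bids))    # -> ('brand_a', 0.09)
```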
  • Publication
    ShoppingCoach: Using Diminished Reality to Prevent Unhealthy Food Choices in an Offline Supermarket Scenario
    Non-communicable diseases, such as obesity and diabetes, have a significant global impact on health outcomes. While governments worldwide focus on promoting healthy eating, individuals still struggle to follow dietary recommendations. Augmented Reality (AR) might be a useful tool to emphasize specific food products at the point of purchase. However, AR may also add visual clutter to an already complex supermarket environment. Instead, reducing the visual prevalence of unhealthy food products through Diminished Reality (DR) could be a viable alternative: We present ShoppingCoach, a DR prototype that identifies supermarket food products and visually diminishes them depending on the deviation of the target product's composition from dietary recommendations. In a study with 12 participants, we found that ShoppingCoach increased compliance with dietary recommendations from 75% to 100% and reduced decision time by 41%. These results demonstrate the promising potential of DR in promoting healthier food choices and thus enhancing public health.
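ShoppingCoach, as described above, diminishes a product's visual prominence depending on how far its composition deviates from dietary recommendations. Below is a minimal sketch of one way such a diminish factor could be computed, assuming a single nutrient (sugar per 100 g) and illustrative thresholds; the actual rules used by the prototype are not reproduced here.

```python
# Illustrative sketch (assumptions, not the ShoppingCoach implementation) of scaling
# a product's visual prominence by its deviation from a dietary recommendation.
def diminish_factor(sugar_g_per_100g, recommended_max=5.0, full_diminish_at=30.0):
    """Return an opacity factor in [0, 1]: 1 = unchanged, 0 = fully diminished."""
    if sugar_g_per_100g <= recommended_max:
        return 1.0
    excess = sugar_g_per_100g - recommended_max
    span = full_diminish_at - recommended_max
    return max(0.0, 1.0 - excess / span)

for product, sugar in [("water", 0.0), ("muesli", 14.0), ("soda", 11.0), ("candy", 60.0)]:
    print(product, round(diminish_factor(sugar), 2))
```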
  • Publication
    GlassBoARd: A Gaze-Enabled AR Interface for Collaborative Work
    Recent research on remote collaboration focuses on improving the sense of co-presence and mutual understanding among the collaborators, whereas there is limited research on using non-verbal cues such as gaze or head direction alongside their main communication channel. Our system – GlassBoARd – permits collaborators to see each other’s gaze behavior and even make eye contact while communicating verbally and in writing. GlassBoARd features a transparent shared Augmented Reality interface that is situated in-between two users, allowing face-to-face collaboration. From the perspective of each user, the remote collaborator is represented as an avatar that is located behind the GlassBoARd and whose eye movements are contingent on the remote collaborator’s instant eye movements. In three iterations, we improved the design of GlassBoARd and tested it with two use cases. Our preliminary evaluations showed that GlassBoARd facilitates an environment for conducting future user experiments to study the effect of sharing eye gaze on the communication bandwidth.