Kenan Bektas
Title: Dr.
Last Name: Bektas
First Name: Kenan
Email: kenan.bektas@unisg.ch
Phone: +41 71 224 27 63
Publications (showing 1–10 of 11)
Publication: MR Object Identification and Interaction: Fusing Object Situation Information from Heterogeneous Sources (2023-09)
Authors: Khakim Akhunov; Federico Carbone; Kasim Sinan Yildirim
The increasing number of objects in ubiquitous computing environments creates a need for effective object detection and identification mechanisms that permit users to intuitively initiate interactions with these objects. While multiple approaches to such object detection are available, including visual object detection, fiducial markers, relative localization, and absolute spatial referencing, each of these suffers from drawbacks that limit its applicability. In this paper, we propose ODIF, an architecture that permits the fusion of object situation information from such heterogeneous sources and that remains vertically and horizontally modular to allow extending and upgrading systems that are constructed accordingly. We furthermore present BLEARVIS, a prototype system that builds on the proposed architecture and integrates computer-vision (CV) based object detection with radio-frequency (RF) angle of arrival (AoA) estimation to identify BLE-tagged objects. In our system, the front camera of a Mixed Reality (MR) head-mounted display (HMD) provides a live image stream to a vision-based object detection module, while an antenna array that is mounted on the HMD collects AoA information from ambient devices. In this way, BLEARVIS is able to differentiate between visually identical objects in the same environment and can provide an MR overlay of information (data and controls) that relates to them. We include experimental evaluations of both the CV-based object detection and the RF-based AoA estimation, and discuss the applicability of the combined RF and CV pipelines in different ubiquitous computing scenarios. This research can serve as a starting point for integrating diverse object detection, identification, and interaction approaches that function across the electromagnetic spectrum, and beyond.
CCS Concepts: • Human-centered computing → Mixed / augmented reality; Ubiquitous and mobile computing systems and tools; • Hardware → Radio frequency and wireless interconnect.
Type: journal article
Journal: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)
Volume: 7
Issue: 3
DOI: 10.1145/3610879
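As an illustration of the core idea behind BLEARVIS, the following is a minimal sketch (not code from the paper) of fusing vision and RF cues: each visual detection carries a horizontal bearing in the camera frame, each BLE tag an AoA estimate, and tags are matched greedily to the angularly closest detection. The class names, the matching strategy, and the 10° tolerance are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A vision-based detection: object label and horizontal bearing (degrees)."""
    label: str
    bearing_deg: float

@dataclass
class BleTag:
    """A BLE tag with an angle-of-arrival estimate from the HMD antenna array."""
    tag_id: str
    aoa_deg: float

def fuse(detections, tags, max_angle_diff=10.0):
    """Greedily pair each tag with the unused detection whose bearing is
    angularly closest, within a tolerance. Returns {tag_id: (label, index)}."""
    pairs, used = {}, set()
    for tag in tags:
        best, best_diff = None, max_angle_diff
        for i, det in enumerate(detections):
            diff = abs(det.bearing_deg - tag.aoa_deg)
            if i not in used and diff < best_diff:
                best, best_diff = i, diff
        if best is not None:
            used.add(best)
            pairs[tag.tag_id] = (detections[best].label, best)
    return pairs

# Two visually identical mugs are told apart by their tags' AoA readings.
dets = [Detection("mug", -20.0), Detection("mug", 15.0)]
tags = [BleTag("tag-A", -18.5), BleTag("tag-B", 14.0)]
print(fuse(dets, tags))  # {'tag-A': ('mug', 0), 'tag-B': ('mug', 1)}
```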
Publication: The systematic evaluation of an embodied control interface for virtual reality (PLOS ONE, 2021-12-07)
Authors: Thrash, Tyler; van Raai, Mark A; Künzler, Patrik; Hahnloser, Richard; Lee, YunJu
Type: journal article
Publication: Telelife: The Future of Remote Living (Frontiers, 2021-11-29)
Authors: Orlosky, Jason; Sra, Misha; Peng, Huaishu; Kim, Jeeeun; Kos'myna, Nataliya; Höllerer, Tobias; Steed, Anthony; Kiyokawa, Kiyoshi; Akşit, Kaan; Steinicke, Frank
In recent years, everyday activities such as work and socialization have steadily shifted to more remote and virtual settings. With the COVID-19 pandemic, the switch from physical to virtual has been accelerated, which has substantially affected almost all aspects of our lives, including business, education, commerce, healthcare, and personal life. This rapid and large-scale switch from in-person to remote interactions has exacerbated the fact that our current technologies lack functionality and are limited in their ability to recreate interpersonal interactions. To help address these limitations in the future, we introduce "Telelife," a vision for the near and far future that depicts the potential means to improve remote living and better align it with how we interact, live, and work in the physical world. Telelife encompasses novel synergies of technologies and concepts such as digital twins, virtual/physical rapid prototyping, and attention- and context-aware user interfaces with innovative hardware that can support ultrarealistic graphics and haptic feedback, user state detection, and more. These ideas will guide the transformation of our daily lives and routines in the near future, targeting the year 2035. In addition, we identify opportunities across high-impact applications in domains related to this vision of Telelife. Along with a recent survey of relevant fields such as human-computer interaction, pervasive computing, and virtual reality, we provide a meta-synthesis in this paper that will guide future research on remote living.
Type: journal article
Publication: GEAR: Gaze-enabled augmented reality for human activity recognition (ACM, 2023-05-30)
Authors: Hermann, Jonas; Jenss, Kay Erik; Soler, Marc Elias
Head-mounted Augmented Reality (AR) displays overlay digital information on physical objects. Through eye tracking, they allow novel interaction methods and provide insights into user attention, intentions, and activities. However, only a few studies have used gaze-enabled AR displays for human activity recognition (HAR). In an experimental study, we collected gaze data from 10 users on a HoloLens 2 (HL2) while they performed three activities (i.e., read, inspect, search). We trained machine learning models (SVM, Random Forest, Extremely Randomized Trees) with extracted features and achieved up to 98.7% activity-recognition accuracy. On the HL2, we provided users with AR feedback that is relevant to their current activity. We present the components of our system (GEAR), including a novel solution to enable the controlled sharing of collected data. We provide the scripts and anonymized datasets, which can be used as teaching material in graduate courses or for reproducing our findings.
Type: conference paper
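The classification step described above can be illustrated with a short sketch: an Extremely Randomized Trees classifier, one of the three model families named in the abstract, cross-validated on per-window gaze features. The feature set, window count, and random data below are placeholders, not the paper's dataset or exact pipeline.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

# Placeholder gaze features per time window, e.g., mean fixation duration,
# fixation rate, mean saccade amplitude, saccade rate (assumed, not the
# paper's exact feature set).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))      # 300 windows x 4 gaze features
y = rng.integers(0, 3, size=300)   # activity labels: 0=read, 1=inspect, 2=search

clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.3f}")
```

On real gaze features the same pipeline would be expected to perform far above chance; with random placeholder data it hovers around 1/3.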
Publication: EToS-1: Eye Tracking on Shopfloors for User Engagement with Automation (CEUR Workshop Proceedings, 2022-04-30)
Authors: Stolze, Markus
Mixed Reality (MR) is becoming an integral part of many context-aware industrial applications. In maintenance and remote support operations, the individual steps of computer-supported (cooperative) work can be defined and presented to human operators through MR headsets. Tracking of eye movements can provide valuable insights into a user's decision-making and interaction processes. Thus, our overarching goal is to better understand the visual inspection behavior of machine operators on shopfloors and to find ways to provide them with attention-aware and context-aware assistance through MR headsets that increasingly come with eye tracking (ET) as a default feature. Toward this goal, in two industrial scenarios, we used two mobile eye tracking devices and systematically compared the visual inspection behavior of novice and expert operators. In this paper, we present our preliminary findings and lessons learned.
Type: conference paper
Publication: SOCRAR: Semantic OCR through Augmented Reality (ACM, 2022-11-11)
To enable people to interact more efficiently with virtual and physical services in their surroundings, it would be beneficial if information could be passed more fluently across digital and non-digital spaces. To this end, we propose to combine semantic technologies with Optical Character Recognition on an Augmented Reality (AR) interface to enable the semantic integration of (written) information located in our everyday environments with Internet of Things devices. We hence present SOCRAR, a system that is able to detect written information from a user's physical environment while contextualizing this data through a semantic backend. The SOCRAR system enables in-band semantic translation on an AR interface, permits semantic filtering and selection of appropriate device interfaces, and provides cognitive offloading by enabling users to store information for later use. We demonstrate the feasibility of SOCRAR through the implementation of three concrete scenarios.
Type: conference paper
Journal: Proceedings of the 12th International Conference on the Internet of Things
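A toy sketch of SOCRAR's central step, contextualizing OCR'd text through a semantic backend, might look as follows. Here a plain dictionary stands in for the semantic backend (e.g., a triple store queried via SPARQL), and the OCR string, labels, and URIs are all invented for illustration.

```python
# OCR'd text from the AR camera view (placeholder string).
OCR_OUTPUT = "Room 2.114 Thermostat"

# Hypothetical knowledge base mapping recognized labels to IoT device records;
# a real system would query a semantic backend instead of a local dict.
KNOWLEDGE_BASE = {
    "thermostat": {"uri": "http://example.org/dev/42", "controls": ["setTemperature"]},
    "projector": {"uri": "http://example.org/dev/7", "controls": ["powerOn", "powerOff"]},
}

def contextualize(text):
    """Return device records whose label occurs in the OCR'd text."""
    tokens = {token.strip(".,").lower() for token in text.split()}
    return [record for label, record in KNOWLEDGE_BASE.items() if label in tokens]

for device in contextualize(OCR_OUTPUT):
    print(device["uri"], device["controls"])  # http://example.org/dev/42 ['setTemperature']
```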
Publication: Semantic Knowledge for Autonomous Smart Farming (2022-09-14)
Authors: Burattini, Samuele
Type: book section
Publication: Pupillometry for Measuring User Response to Movement of an Industrial Robot (2023-05-30)
Authors: Damian Hostettler
Interactive systems can adapt to individual users to increase productivity, safety, or acceptance. Previous research focused on different factors, such as cognitive workload (CWL), to better understand and improve human-computer or human-robot interaction (HRI). We present results of an HRI experiment that uses pupillometry to measure users' responses to robot movements. Our results demonstrate a significant change in pupil dilation, indicating higher CWL, as a result of increased movement speed of an articulated robot arm. This might permit improved interaction ergonomics by adapting the behavior of robots or other devices to individual users at run time.
CCS Concepts: • Human-centered computing → Ubiquitous and mobile computing systems and tools.
Type: conference contribution
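The kind of analysis this abstract implies can be sketched as a paired comparison of baseline-corrected pupil dilation under slow versus fast robot movement. The data below is synthetic and the effect size is assumed; the paper's actual measures and statistics may differ.

```python
import numpy as np
from scipy import stats

# Synthetic baseline-corrected pupil dilation (mm) for 20 hypothetical
# participants, each observed under slow and fast robot movement.
rng = np.random.default_rng(1)
slow = rng.normal(0.05, 0.08, size=20)
fast = slow + rng.normal(0.12, 0.06, size=20)  # assumed extra dilation at speed

# Within-participant (paired) comparison of the two speed conditions.
t, p = stats.ttest_rel(fast, slow)
print(f"mean dilation: slow={slow.mean():.3f} mm, fast={fast.mean():.3f} mm")
print(f"paired t-test: t={t:.2f}, p={p:.4f}")
```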
Publication: Sharing Personalized Mixed Reality Experiences
Nowadays, people encounter personalized services predominantly on the Web, using personal computers or mobile devices. The increasing capabilities and pervasiveness of Mixed Reality (MR) devices, however, prepare the ground for personalization possibilities that are increasingly interwoven with our physical reality, extending beyond these traditional devices. Such ubiquitous, personalized MR experiences have the potential to make our lives and interactions with our environments more convenient, intuitive, and safe. However, these experiences will also be prone to amplify the known beneficial and, notably, harmful implications of personalization. For instance, the loss of shared world objects or the nourishing of "real-world filter bubbles" might have serious social and societal consequences, as they could lead to increasingly isolated experienced realities. In this work, we envision different modes for the sharing of personalized MR environments to counteract these potential harms of ubiquitous personalization. We furthermore illustrate the different modes with use cases and list open questions towards this vision.
Type: conference contribution