2024

WEAR: An Outdoor Sports Dataset for Wearable and Egocentric Activity Recognition, Marius Bock and Hilde Kuehne and Kristof Van Laerhoven and Michael Moeller, In Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. (IMWUT), vol.8(4), 2024. [abstractResearch has shown the complementarity of camera- and inertial-based data for modeling human activities, yet datasets with both egocentric video and inertial-based sensor data remain scarce. In this paper, we introduce WEAR, an outdoor sports dataset for both vision- and inertial-based human activity recognition (HAR). Data from 22 participants performing a total of 18 different workout activities was collected with synchronized inertial (acceleration) and camera (egocentric video) data recorded at 11 different outside locations. WEAR provides a challenging prediction scenario in changing outdoor environments using a sensor placement in line with recent trends in real-world applications. Benchmark results show that through our sensor placement, each modality interestingly offers complementary strengths and weaknesses in their prediction performance. Further, in light of the recent success of single-stage Temporal Action Localization (TAL) models, we demonstrate their versatility: they can be trained not only on visual data but also on raw inertial data, and can fuse both modalities by means of simple concatenation. The dataset and code to reproduce experiments are publicly available via: mariusbock.github.io/wear/.][pdf][scholar][bibtex]
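
For readers curious how "fusing both modalities by means of simple concatenation" can look in practice, here is a minimal, hedged sketch (not the released WEAR code; array names and feature dimensions are assumptions) that aligns per-timestep visual and inertial feature vectors and concatenates them along the feature axis before passing them to a single-stage TAL model:

```python
# Illustrative only: simple concatenation fusion of aligned feature timelines.
import numpy as np

def concat_fusion(visual_feats: np.ndarray, inertial_feats: np.ndarray) -> np.ndarray:
    """visual_feats: (T, Dv), inertial_feats: (T, Di) -> fused features (T, Dv + Di)."""
    assert visual_feats.shape[0] == inertial_feats.shape[0], "timelines must be aligned"
    return np.concatenate([visual_feats, inertial_feats], axis=1)

# Example: 1,000 timesteps of 2048-d video features and 128-d inertial features.
fused = concat_fusion(np.random.rand(1000, 2048), np.random.rand(1000, 128))
print(fused.shape)  # (1000, 2176) -- input to a single-stage TAL model
```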

Temporal Action Localization for Inertial-based Human Activity Recognition, Marius Bock and Michael Moeller and Kristof Van Laerhoven, In Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. (IMWUT), vol.8(4), 2024. [abstractAs of today, state-of-the-art activity recognition from wearable sensors relies on algorithms being trained to classify fixed windows of data. In contrast, video-based Human Activity Recognition, known as Temporal Action Localization (TAL), has followed a segment-based prediction approach, localizing activity segments in a timeline of arbitrary length. This paper is the first to systematically demonstrate the applicability of state-of-the-art TAL models for both offline and near-online Human Activity Recognition (HAR) using raw inertial data as well as pre-extracted latent features as input. Offline prediction results show that TAL models are able to outperform popular inertial models on a multitude of HAR benchmark datasets, with improvements reaching as much as 26% in F1-score. We show that by analyzing timelines as a whole, TAL models can produce more coherent segments and achieve higher NULL-class accuracy across all datasets. We demonstrate that TAL is less suited for the immediate classification of small-sized windows of data, yet offers an interesting perspective on inertial-based HAR -- alleviating the need for fixed-size windows and enabling algorithms to recognize activities of arbitrary length. With design choices and training concepts yet to be explored, we argue that TAL architectures could be of significant value to the inertial-based HAR community. The code and data to reproduce experiments are publicly available via github.com/mariusbock/tal_for_har.][pdf][scholar][bibtex]
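
The segment-based view that distinguishes TAL from fixed-window classification can be illustrated with a small, purely illustrative sketch (not part of the released code): converting a per-timestep label sequence into (start, end, label) segments over a timeline of arbitrary length.

```python
# Illustrative only: collapse per-timestep predictions into activity segments.
def labels_to_segments(labels, null_label=0):
    """labels: per-timestep class labels -> list of (start_idx, end_idx, label) segments."""
    segments, start, current = [], None, None
    for i, lab in enumerate(labels):
        if start is None and lab != null_label:
            start, current = i, lab                     # open a new segment
        elif start is not None and lab != current:
            segments.append((start, i, current))        # close the running segment
            start, current = (i, lab) if lab != null_label else (None, None)
    if start is not None:
        segments.append((start, len(labels), current))
    return segments

print(labels_to_segments([0, 1, 1, 1, 0, 0, 2, 2]))     # [(1, 4, 1), (6, 8, 2)]
```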

Users' Perception on Appropriateness of Robotic Coaching Assistant's Disclosure Behaviors, Atikkhan Faridkhan Nilgar and Manuel Dietrich and Kristof Van Laerhoven, arXiv 2410.10550, 2024. [abstractSocial robots have emerged as valuable contributors to individuals' well-being coaching. Notably, their integration into long-term human coaching trials shows particular promise, emphasizing a complementary role alongside human coaches rather than outright replacement. In this context, robots serve as supportive entities during coaching sessions, offering insights based on their knowledge about users' well-being and activity. Traditionally, such insights have been gathered through methods like written self-reports or wearable data visualizations. However, the disclosure of people's information by a robot raises concerns regarding privacy, appropriateness, and trust. To address this, we conducted an initial study with [n = 22] participants to quantify their perceptions of privacy regarding disclosures made by a robot coaching assistant. The study was conducted online, presenting participants with six prerecorded scenarios illustrating various types of information disclosure and the robot's role, ranging from active on-demand to proactive communication conditions.][pdf][scholar]

A Survey of LoRaWAN-integrated Wearable Sensor Networks for Human Activity Recognition: Applications, Challenges and Possible Solutions, Nahshon Mokua Obiri and Kristof Van Laerhoven, In IEEE Open Journal of the Communications Society, vol.5, p.6713-6735, 2024. [abstractLong-Range Wide Area Networks (LoRaWAN), a prominent technology within Low-Power Wide Area Networks (LPWANs), have gained traction in remote monitoring due to their long-range communication, scalability, and low energy consumption. Compared to other LPWANs like Sigfox, Ingenu Random Phase Multiple Access (Ingenu-RPMA), Long-Term Evolution for Machines (LTE-M), and Narrowband Internet of Things (NB-IoT), LoRaWAN offers superior adaptability in diverse environments. This adaptability makes it particularly effective for Human Activity Recognition (HAR) systems. These systems utilize wearable sensors to collect data for applications in healthcare, elderly care, sports, and environmental monitoring. Integrating LoRaWAN with edge computing and Internet of Things (IoT) frameworks enhances data processing and transmission efficiency. However, challenges such as sensor wearability, data payload constraints, energy efficiency, and security must be addressed to deploy LoRaWAN-based HAR systems in real-world applications effectively. This survey explores the integration of LoRaWAN technology with wearable sensors for HAR, highlighting its suitability for remote monitoring applications such as Activities of Daily Living (ADL), tracking and localization, healthcare, and safety. We categorize state-of-the-art LoRaWAN-integrated wearable systems into body-worn, hybrid, object-mounted, and ambient sensors. We then discuss their applications and challenges, including energy efficiency, sensor scalability, data constraints, and security. Potential solutions such as advanced edge processing algorithms and secure communication protocols are proposed to enhance system performance and user comfort. The survey also outlines specific future research directions to advance this evolving field.][pdf][scholar][bibtex]

Historiographer: An Efficient Long Term Recording of Real Time Data on Wearable Microcontrollers, Brilka, Michael and Van Laerhoven, Kristof, In Companion of the 2024 on ACM International Joint Conference on Pervasive and Ubiquitous Computing, p.934–938, 2024. [abstractData collection is a core principle in the scientific and medical environment. To record study participants in daily life situations, wearables can be used. These should be small enough not to disrupt the lifestyle of the participants, while delivering sensor data in an accurate and efficient way that ensures a long recording time for these battery-powered devices. Currently purchasable wearable devices would lend themselves well to wearable studies. Simpler devices, however, have drawbacks such as a low sampling rate (chosen for energy efficiency) and little support, while more advanced devices sample sensor data at high frequency but come at a higher price and with a limited support time. Our work introduces an open-source app for cost-effective, high-frequency, and long-term recording of sensor data. We based the development on the Bangle.js 2, a prevalent open-source smartwatch. The code has been optimised for efficiency, using sensor-specific properties to store sensor data in a compressed, lossless, and time-stamped form on the local NAND storage. Our experiments show that we can record PPG data at 50 Hz for at least half a day; with other configurations, we can record multiple sensors at a high-frequency update interval for a full day.][pdf][scholar][bibtex]

OpenWearables 2024: 1st International Workshop on Open Wearable Computers, Röddiger, Tobias and Beigl, Michael and Van Laerhoven, Kristof and Vega, Katia, In Companion of the 2024 on ACM International Joint Conference on Pervasive and Ubiquitous Computing, p.968–971, 2024. [abstractOpen hardware such as Arduino are an accelerator for research in ubiquitous and wearable computing. In recent years, an increasing number of open-source wearable devices is emerging. In this workshop, we seek to create a dedicated forum and venue for publication for topics around open wearable computers. The whole day workshop includes a keynote speech, paper presentations, demo sessions, group discussions, and networking opportunities. Through our activities, we hope to create a future where wearable technologies are accessible, interoperable and impactful across applications and industries.][pdf][scholar][bibtex]

Weak-Annotation of HAR Datasets using Vision Foundation Models, Bock, Marius and Van Laerhoven, Kristof and Moeller, Michael, In Proceedings of the 2024 ACM International Symposium on Wearable Computers, p.55–62, 2024. [abstractAs wearable-based data annotation remains, to date, a tedious, time-consuming task requiring researchers to dedicate substantial time, benchmark datasets within the field of Human Activity Recognition lack richness and size compared to datasets available within related fields. Recently, vision foundation models such as CLIP have gained significant attention, helping the vision community advance in finding robust, generalizable feature representations. With the majority of researchers within the wearable community relying on vision modalities to overcome the limited expressiveness of wearable data and accurately label their to-be-released benchmark datasets offline, we propose a novel, clustering-based annotation pipeline to significantly reduce the amount of data that needs to be annotated by a human annotator. We show that using our approach, the annotation of centroid clips suffices to achieve average labelling accuracies close to 90% across three publicly available HAR benchmark datasets. Using the weakly annotated datasets, we further demonstrate that we can match the accuracy scores of fully-supervised deep learning classifiers across all three benchmark datasets. Code as well as supplementary figures and results are publicly downloadable via github.com/mariusbock/weak_har.][pdf][scholar][bibtex]
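
A minimal sketch of the general idea behind such a clustering-based weak annotation (illustrative only; the released code at github.com/mariusbock/weak_har is the authoritative reference, and the feature source, cluster count, and label_fn below are assumptions): cluster clip-level feature vectors, have a human label only the clip closest to each centroid, and propagate that label to the whole cluster.

```python
# Illustrative only: centroid-based weak annotation of clustered clip features.
import numpy as np
from sklearn.cluster import KMeans

def weak_annotate(clip_features: np.ndarray, n_clusters: int, label_fn) -> np.ndarray:
    """clip_features: (N, D) feature vectors; label_fn(index) returns a human label for one clip."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(clip_features)
    labels = np.empty(len(clip_features), dtype=object)
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(clip_features[members] - km.cluster_centers_[c], axis=1)
        centroid_clip = members[np.argmin(dists)]    # the only clip shown to the annotator
        labels[members] = label_fn(centroid_clip)    # propagate its label to the cluster
    return labels
```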

Accessible Operation Methods of Coffee Machines, Wendt, Thomas M. and Fischer-Janzen, Anke and Ponomarjova, Katrin-Misel and Zaehringer, Kim and Schnebel, Andi and Van Laerhoven, Kristof, In Proceedings of the 17th International Conference on PErvasive Technologies Related to Assistive Environments, p.680–681, 2024. [abstractThis work presents an extension for a coffee-machine that is intended to facilitate its use by people with disabilities. For this purpose, a control method was developed using three wireless buttons and a user interface that allows the selection of several coffee specialties. This selection is translated by a Python script into stepper motor movements fixed to the coffee-machine. With this setup, it is possible to incorporate multiple input modalities such as eye tracking and voice control. Detailed instructions can be found in [1].][pdf][scholar][bibtex]

Raising the Bar(ometer): Identifying a User's Stair and Lift Usage Through Wearable Sensor Data Analysis, Hrishikesh Balkrishna Karande and Ravikiran Arasur Thippeswamy Shivalingappa and Abdelhafid Nassim Yaici and Iman Haghbin and Niravkumar Bavadiya and Robin Burchard and Kristof Van Laerhoven, arXiv 2410.02790, 2024. [abstractMany users are confronted multiple times daily with the choice of whether to take the stairs or the elevator. Whereas taking the stairs could be beneficial for cardiovascular health and wellness, taking the elevator might be more convenient but it also consumes energy. By precisely tracking and boosting users' stair and elevator usage through their wearable, users might gain health insights and motivation, encouraging a healthy lifestyle and lowering the risk of sedentary-related health problems. This research describes a new exploratory dataset to examine the patterns and behaviors related to using stairs and lifts. We collected data from 20 participants while climbing and descending stairs and taking a lift in a variety of scenarios. The aim is to provide insights and demonstrate the practicality of using wearable sensor data for such a scenario. Our collected dataset was used to train and test a Random Forest machine learning model, and the results show that our method is highly accurate at classifying stair and lift operations with an accuracy of 87.61% and a multi-class weighted F1-score of 87.56% over 8-second time windows. Furthermore, we investigate the effect of various types of sensors and data attributes on the model's performance. Our findings show that combining inertial and pressure sensors yields a viable solution for real-time activity detection.][pdf][scholar]
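
To make the described setup concrete, the following hedged sketch shows windowed feature extraction and Random Forest classification over 8-second windows; the sampling rate, column names, and feature set are assumptions rather than the study's exact configuration.

```python
# Illustrative only: 8-second windows over IMU + pressure data, simple features, Random Forest.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

FS = 50                       # assumed sampling rate in Hz
WIN = 8 * FS                  # 8-second windows, as in the paper

def featurize(df: pd.DataFrame, cols=("acc_x", "acc_y", "acc_z", "pressure")):
    X, y = [], []
    for start in range(0, len(df) - WIN, WIN):
        w = df.iloc[start:start + WIN]
        feats = []
        for c in cols:
            feats += [w[c].mean(), w[c].std(), w[c].min(), w[c].max()]
        X.append(feats)
        y.append(w["label"].mode()[0])           # majority label of the window
    return np.array(X), np.array(y)

# X, y = featurize(recordings_df)
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
# clf = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)
# print(f1_score(y_te, clf.predict(X_te), average="weighted"))
```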

Multi-modal Atmospheric Sensing to Augment Wearable IMU-Based Hand Washing Detection, Robin Burchard and Kristof Van Laerhoven, arXiv 2410.03549, 2024. [abstractHand washing is a crucial part of personal hygiene. Hand washing detection is a relevant topic for wearable sensing with applications in the medical and professional fields. Hand washing detection can be used to aid workers in complying with hygiene rules. Hand washing detection using body-worn IMU-based sensor systems has been shown to be a feasible approach, although, for some reported results, the specificity of the detection was low, leading to a high rate of false positives. In this work, we present a novel, open-source prototype device that additionally includes a humidity, temperature, and barometric sensor. We contribute a benchmark dataset of 10 participants and 43 hand-washing events and perform an evaluation of the sensors' benefits. Added to that, we outline the usefulness of the additional sensor in both the annotation pipeline and the machine learning models. By visual inspection, we show that especially the humidity sensor registers a strong increase in the relative humidity during a hand-washing activity. A machine learning analysis of our data shows that distinct features benefiting from such relative humidity patterns remain to be identified.][pdf][scholar]

From One to Many, from Onsite to Remote: Control Rooms as Diverse Contexts of Use, Tilo Mentler and Philippe Palanque and Kristof Van Laerhoven and Margareta Holtensdotter Lützhöft and Nadine Flegel, In LNCS vol 14535, INTERACT 2023, Design for Equality and Justice, p.273--278, 2024. [abstractIn many contexts, control rooms are safety-relevant and, from the point of view of HCI research, complex socio-technical systems. This article first summarizes the contributions of the IFIP WG 13.5 Workshop at INTERACT 2023 entitled “On Land, at Sea, and in the Air: Human-Computer Interaction in Safety-Critical Spaces of Control”. The process and results of a group work phase during the workshop will then be discussed. A variety of examples (e.g. offshore operation centers, traffic light control rooms) and characteristics (e.g. level of automation, number of operators) in connection with control rooms were identified. Finally, it is pointed out that the diversity of usage contexts should not tempt us to lose sight of cross-domain perspectives, but rather to integrate them through appropriate levels of consideration.][pdf][scholar][bibtex]

Using multiple linear regression for biochemical oxygen demand prediction in water, Isaiah Kiprono Mutai and Kristof Van Laerhoven and Nancy Wangechi Karuri and Robert Kimutai Tewo, In Applied Computing and Intelligence, vol.4(2), p.125-137, 2024. [abstractBiochemical oxygen demand (BOD) is an important water quality measurement but takes five days or more to obtain. This may result in delays in taking corrective action in water treatment. Our goal was to develop a BOD predictive model that uses other water quality measurements that are quicker than BOD to obtain; namely pH, temperature, nitrogen, conductivity, dissolved oxygen, fecal coliform, and total coliform. Principal component analysis showed that the data spread was in the direction of the BOD eigenvector. The vectors for pH, temperature, and fecal coliform contributed the greatest to data variation, and dissolved oxygen negatively correlated to BOD. K-means clustering suggested three clusters, and t-distributed stochastic neighbor embedding showed that BOD had a strong influence on variation in the data. Pearson correlation coefficients indicated that the strongest positive correlations were between BOD and fecal and total coliform, as well as nitrogen. The largest negative correlation was between dissolved oxygen and BOD. Multiple linear regression (MLR) using fecal and total coliform, dissolved oxygen, and nitrogen to predict BOD, and training/test data of 80%/20% and 90%/10% had performance indices of RMSE = 2.21 mg/L, r = 0.48 and accuracy of 50.1%, and RMSE = 2.18 mg/L, r = 0.54 and an accuracy of 55.5%, respectively. BOD prediction was better than previous MLR models. Increasing the percentage of the training set above 80% improved the model accuracy but did not significantly impact its prediction. Thus, MLR can be used successfully to estimate BOD in water using other water quality measurements that are quicker to obtain.][pdf][scholar][bibtex]
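
A minimal sketch of such an MLR setup with an 80/20 train/test split and the reported metrics (RMSE in mg/L and Pearson's r); column names are assumptions and this is not the authors' code.

```python
# Illustrative only: predict BOD from four faster-to-obtain water quality measurements.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def fit_bod_model(df: pd.DataFrame):
    predictors = ["fecal_coliform", "total_coliform", "dissolved_oxygen", "nitrogen"]
    X_tr, X_te, y_tr, y_te = train_test_split(df[predictors], df["bod"],
                                              test_size=0.2, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))   # reported in mg/L
    r = np.corrcoef(y_te, pred)[0, 1]                # Pearson's r
    return model, rmse, r
```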

Towards a Pattern Language for Scalable Interaction Design in Control Rooms as Human-Centered Pervasive Computing Environments, Nadine Flegel and Jonas Poehler and Kristof Van Laerhoven and Tilo Mentler, In LNCS vol 14535, INTERACT 2023, Design for Equality and Justice, p.279--291, 2024. [abstractControl rooms are central for well-being and safety of people in many domains (e.g., emergency response, ship bridge, public utilities). In most of these domains, demands on operators are increasing. At the same time, the tasks, goals, and well-being of the operators are rarely proactively supported. New technology solutions are often domain-specific and focus on specific functionalities. What is urgently needed to meet the increasing demands are reusable solutions. We develop a cross-domain pattern language for control rooms as pervasive computing environments within a human-centered design process. The pattern language consists of eight hierarchical levels, which combine the perspectives of human computer interaction (HCI) and pervasive computing environments. It will be made available for the public through a web-based pattern platform with feedback and comment functions. This research will contribute to a better understanding of suitable interaction paradigms for control rooms and safety-critical pervasive computing environments.][pdf][scholar][bibtex]

Equimetrics -- Applying HAR principles to equestrian activities, Jonas Poehler and Kristof Van Laerhoven, arXiv 2409.11989, 2024. [abstractThis paper presents the Equimetrics data capture system. The primary objective is to apply HAR principles to enhance the understanding and optimization of equestrian performance. By integrating data from strategically placed sensors on the rider's body and the horse's limbs, the system provides a comprehensive view of their interactions. Preliminary data collection has demonstrated the system's ability to accurately classify various equestrian activities, such as walking, trotting, cantering, and jumping, while also detecting subtle changes in rider posture and horse movement. The system leverages open-source hardware and software to offer a cost-effective alternative to traditional motion capture technologies, making it accessible for researchers and trainers. The Equimetrics system represents a significant advancement in equestrian performance analysis, providing objective, data-driven insights that can be used to enhance training and competition outcomes.][pdf][scholar]

What is a control room? In search of a definition using Word Embeddings, Poehler, Jonas and Flegel, Nadine and Mentler, Tilo and Van Laerhoven, Kristof, In Mensch und Computer 2024 - Workshopband, 2024. [abstractDefinitions are hard. But they provide valuable insight into the subject at hand. This paper presents a comprehensive analysis of the term "control room" using word embeddings to provide a nuanced definition. The study leverages the ACM Digital Library to compile a corpus of texts related to control rooms, generating word embeddings to capture semantic relationships. Key aspects identified include the physical space, ergonomic design, multimodal information presentation, warning systems, contextual awareness, and command functions of control rooms. The analysis also explores adjacent clusters, highlighting tactile interfaces, interactive surfaces, and the use of large displays to enhance operator performance and situational awareness. The findings underscore the technological, ergonomic, and operational dimensions critical to the definition and functionality of control rooms.][pdf][scholar][bibtex]
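
As an illustration of the word-embedding approach (a sketch under assumed preprocessing, not the study's pipeline), one could train embeddings on the compiled corpus and inspect the nearest neighbours of the term of interest:

```python
# Illustrative only: train word embeddings on a control-room corpus and query the target term.
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

def embed_corpus(documents):
    """documents: iterable of raw text strings compiled from the digital library."""
    sentences = [simple_preprocess(doc.lower().replace("control room", "control_room"))
                 for doc in documents]
    model = Word2Vec(sentences, vector_size=200, window=5, min_count=5, workers=4)
    # Terms whose vectors lie closest to "control_room" hint at its defining aspects.
    return model.wv.most_similar("control_room", topn=20)
```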

A matter of annotation: an empirical study on in situ and self-recall activity annotations from wearable sensors, Hoelzemann, Alexander and Van Laerhoven, Kristof, In Frontiers in Computer Science, vol.6, 2024. [abstractResearch into the detection of human activities from wearable sensors is a highly active field, benefiting numerous applications, from ambulatory monitoring of healthcare patients via fitness coaching to streamlining manual work processes. We present an empirical study that evaluates and contrasts four commonly employed annotation methods in user studies focused on in-the-wild data collection. For both the user-driven, in situ annotations, where participants annotate their activities during the actual recording process, and the recall methods, where participants retrospectively annotate their data at the end of each day, the participants had the flexibility to select their own set of activity classes and corresponding labels. Our study illustrates that different labeling methodologies directly impact the annotations' quality, as well as the capabilities of a deep learning classifier trained with the data. We noticed that in situ methods produce fewer but more precise labels than recall methods. Furthermore, we combined an activity diary with a visualization tool that enables the participant to inspect and label their activity data. Due to the introduction of such a tool, we were able to decrease missing annotations and increase the annotation consistency, and therefore the F1-Score of the deep learning model by up to 8% (ranging between 82.1 and 90.4% F1-Score). Furthermore, we discuss the advantages and disadvantages of the methods compared in our study, the biases they could introduce, and the consequences of their usage on human activity recognition studies as well as possible solutions.][pdf][scholar][bibtex]

A scoping review of gaze and eye tracking-based control methods for assistive robotic arms, Anke Fischer-Janzen and Thomas M. Wendt and Kristof Van Laerhoven, In Frontiers in Robotics and AI, vol.11, 2024. [abstractBackground: Assistive Robotic Arms (ARAs) are designed to assist physically disabled people with daily activities. Existing joysticks and head controls are not applicable for severely disabled people such as people with Locked-in Syndrome. Therefore, eye tracking control is part of ongoing research. The related literature spans many disciplines, creating a heterogeneous field that makes it difficult to gain an overview. Objectives: This work focuses on ARAs that are controlled by gaze and eye movements. By answering the research questions, this paper provides details on the design of the systems, a comparison of input modalities, methods for measuring the performance of these controls, and an outlook on research areas that gained interest in recent years. Methods: This review was conducted as outlined in the PRISMA 2020 Statement. After identifying a wide range of approaches in use, the authors decided to use the PRISMA-ScR extension for a scoping review to present the results. The identification process was carried out by screening three databases. After the screening process, a snowball search was conducted. Results: 39 articles and 6 reviews were included in this article. Characteristics related to the system and study design were extracted and presented divided into three groups based on the use of eye tracking. Conclusion: This paper aims to provide an overview for researchers new to the field by offering insight into eye tracking based robot controllers. We have identified open questions that need to be answered in order to provide people with severe motor function loss with systems that are highly usable and accessible.][pdf][scholar][bibtex]

Beyond the Smartphone, Kristof Van Laerhoven, In Mobile Sensing in Psychology, Methods and Applications, 2024. [abstractWearable sensors hold multiple advantages: Having sensors that are closer, on the skin, to the human user allows for more information and, when those sensors are worn throughout the day (and, in many cases, even night and day), the data these sensors produce tend to cover many aspects of the user’s life. This chapter attempts to predict what type of wearable sensors—for there are many—and to what extent wearables will be particularly attractive as mobile sensors in psychology. As is the case with predictions, this only makes sense when looking at the prevailing and current trends in research in the area of wearable sensing, in order to be able to extrapolate what might become feasible in the coming years and decades. Technology-wise, this does not only depend on the sensors themselves: Other key components that have become common in wearables, such as wireless communication and energy demands, are equally important in this picture. Furthermore, the concept of wearable sensing does not depend only on the used hardware components: The information that is generated from the sensor signals, and where and how this information is analyzed, abstracted, and interpreted, are equally important. From these analyses of what will become technically possible in wearable sensing, a set of promising applications is extracted and presented in the final section of this chapter.][pdf][scholar]

Evaluation of Video-Assisted Annotation of Human IMU Data Across Expertise, Datasets, and Tools, Hoelzemann, Alexander and Bock, Marius and Van Laerhoven, Kristof, In 2024 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), p.1--6, 2024. [abstractDespite the simplicity of labels and extensive study protocols provided during data collection, the majority of researchers in sensor-based technologies tend to rely on annotations provided by a combination of field experts and researchers themselves. This paper presents a comprehensive study on the quality of annotations provided by expert versus novice annotators for inertial-based activity benchmark datasets. We consider multiple parameters such as the nature of the activities to be labeled, and the annotation tool, to quantify the annotation quality and time needed. 15 participants were tasked to annotate a total of 40 minutes of data from two publicly available benchmark datasets for inertial activity recognition, while being shown both video and accelerometer data simultaneously during annotation. We compare the resulting labels with the ground truth provided by the original dataset authors. Our participants annotated the data using two representative tools. Metrics like F1-Score and Cohen’s Kappa showed that experience did not ensure better labels. While experts were more accurate on the complex Wetlab dataset (51% vs 46%), novices had 96% F1 on the simple WEAR dataset versus 92% for experts. Comparable Kappa scores (0.96 and 0.94 for WEAR, 0.53 and 0.59 for Wetlab) indicated similar quality for both groups, revealing differences in dataset complexity. Furthermore, experts annotated faster regardless of the tool. Given proven success across research, our findings suggest crowdsourcing wearable dataset annotation to non-experts warrants exploration as a valuable yet underinvestigated approach, up to a complexity level beyond which quality may suffer.][pdf][scholar][bibtex]
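
For reference, the two agreement metrics mentioned above can be computed per participant with a few lines of scikit-learn; variable names here are illustrative and this is not the study's evaluation code.

```python
# Illustrative only: label quality of one participant against the dataset's ground truth.
from sklearn.metrics import cohen_kappa_score, f1_score

def annotation_quality(ground_truth, participant_labels):
    """Both inputs: per-sample activity labels over the annotated 40 minutes of data."""
    return {
        "f1_weighted": f1_score(ground_truth, participant_labels, average="weighted"),
        "cohens_kappa": cohen_kappa_score(ground_truth, participant_labels),
    }
```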

Requirements of People with Disabilities and Caregivers for Robotics: A Case Study, Fischer-Janzen, Anke and Gapp, Markus and Götten, Marcus and Ponomarjova, Katrin-Misel and Blöchle, Jennifer J. and Wendt, Thomas M. and Van Laerhoven, Kristof and Bartscherer, Thomas, In HCI in Business, Government and Organizations, p.289--301, 2024. [abstractRobotics offers new solutions for digital customer interaction. Social robots can be used in applications such as customer support, guiding people to a location on company premises, or entertainment and education. An emerging area of research is the application in community facilities for people with disabilities. Such facilities face a shortage of skilled workers that could be addressed by robotics. In this work, the application of social and collaborative robots in care facilities and workshops for the disabled is presented by providing a requirements analysis. The use of the humanoid robot Pepper in assisted living was tested and subsequently evaluated in interviews with caregivers who initiated and observed the interaction between the group and the robot. Additionally, robotic applications in assisted work were assessed, resulting in a divergence from the industrial use of robots. A comparative overview with recent literature is presented. The connection between the community home and the workshop raised the question of whether the use of different robots in both places could lead to conflicts.][pdf][scholar][bibtex]

Exploring the Potential of Large Language Models in Adaptive Machine Translation for Generic Text and Subtitles, Soudi, Abdelhadi and Hannani, Mohamed and Van Laerhoven, Kristof and Avramidis, Eleftherios, In Proceedings of the 17th Workshop on Building and Using Comparable Corpora (BUCC) @ LREC-COLING 2024, p.51--58, 2024. [abstractThis paper investigates the potential of contextual learning for adaptive real-time machine translation (MT) using Large Language Models (LLMs) in the context of subtitles and generic text with fuzzy matches. By using a strategy based on prompt composition and dynamic retrieval of fuzzy matches, we achieved improvements in the translation quality compared to previous work. Unlike static selection, which may not adequately meet all request sentences, our enhanced methodology allows for dynamic adaptation based on user input. It was also shown that LLMs and Encoder-Decoder models achieve better results with generic texts than with subtitles for the language pairs English-to-Arabic (En→Ar) and English-to-French (En→Fr). Experiments on datasets with different sizes for En→Ar subtitles indicate that bigger is not necessarily better. Our experiments on subtitles support results from previous work on generic text: LLMs are capable of adapting via in-context learning with few-shot examples, outperforming Encoder-Decoder MT models, and the combination of LLMs and Encoder-Decoder models improves translation quality.][pdf][scholar]

Artifacts Guide: Evaluation of Video-Assisted Annotation of Human IMU Data, Hoelzemann, Alexander and Bock, Marius and Van Laerhoven, Kristof, In 2024 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), p.11--12, 2024. [abstractThis artifact guide outlines the supplementary material for our paper Evaluation of Video-Assisted Annotation of Human IMU Data Across Expertise, Datasets, and Tools. The supplementary material comprises Python code that replicates our findings, along with instructions for downloading and deploying this code, and a list of hardware requirements for successful execution. Additionally, we present detailed results of our study including per-participant NASA-TLX questionnaire results and per-participant evaluation metrics. Lastly, we provide in this guide all necessary material and offer supplementary insights into the annotation tools employed in our study so that the study could be replicated by other researchers with additional participants. The code, data and supplementary information can be publicly downloaded from the following repository: https://github.com/mariusbock/video_assisted_annotation][pdf][scholar][bibtex]

2023

Whereables? Examining Personal Technology Adoption in Contemporary Control Rooms, Flegel, Nadine and Poehler, Jonas and Mentler, Tilo and Van Laerhoven, Kristof, In IEEE Pervasive Computing, vol.22(2), p.49--53, 2023. [abstractEarly work in wearables research has often proposed visions in which wearable computers are introduced to support human operators in critical environments such as control rooms, ship bridges, cockpits, or operating rooms. Wearable assistants could for instance present critical task-relevant information to users regardless of their location, help in avoiding procedural errors, or enhance collaborations between multiple operators. In reality, however, such visions have not materialized: What happened? And could operators’ attitudes and misgivings toward wearables be responsible? The rise of personal wearables in the past years has led to fitness trackers, smartwatches, and other consumer devices being worn by a larger audience, likely also among control room operators. We report in this article on findings from a series of onsite interviews and workshops with professional control room operators to gain insight into their attitude toward wearables, and opinions and current views on the use and adoption of wearable and pervasive technologies in their work environment.][pdf][scholar][bibtex]

Hang-Time HAR: A Benchmark Dataset for Basketball Activity Recognition Using Wrist-Worn Inertial Sensors, Hoelzemann, Alexander and Romero, Julia Lee and Bock, Marius and Laerhoven, Kristof Van and Lv, Qin, In Sensors, vol.23(13), 2023. [abstractWe present a benchmark dataset for evaluating physical human activity recognition methods from wrist-worn sensors, for the specific setting of basketball training, drills, and games. Basketball activities lend themselves well for measurement by wrist-worn inertial sensors, and systems that are able to detect such sport-relevant activities could be used in applications of game analysis, guided training, and personal physical activity tracking. The dataset was recorded from two teams in separate countries (USA and Germany) with a total of 24 players who wore an inertial sensor on their wrist, during both a repetitive basketball training session and a game. Particular features of this dataset include an inherent variance through cultural differences in game rules and styles as the data was recorded in two countries, as well as different sport skill levels since the participants were heterogeneous in terms of prior basketball experience. We illustrate the dataset's features in several time-series analyses and report on a baseline classification performance study with two state-of-the-art deep learning architectures.][pdf][scholar][bibtex]

On Land, at Sea, and in the Air: Human-Computer Interaction in Safety-Critical Spaces of Control, Mentler, Tilo and Palanque, Philippe and Van Laerhoven, Kristof and Lützhöft, Margareta Holtensdotter and Flegel, Nadine, In Human-Computer Interaction -- INTERACT 2023, p.657--661, 2023. [abstractIn many areas, successfully deploying interfaces with high usability and user experience (UX) is crucial for people's safety and well-being. These include, for example, control rooms for emergency services and energy suppliers, aircraft cockpits, ship bridges, surgery rooms, and intensive care units. Information and cooperation needs are not limited to the users' immediate environment but often involve numerous actors in other places (regulators, field workers, shift supervisors, remote assistance, etc.). The specific aspects of Human-Computer Interaction (HCI) in such spaces of control are the subject of this workshop. This includes understanding and modeling routine and emergency operations, alarm management, human-machine task allocation and automation concepts, interaction design beyond graphical user interfaces, laboratory and field evaluations, and training approaches. In addition to addressing domain-specific issues (e.g., in healthcare, in aviation), cross-domain challenges and solutions will be identified and discussed (e.g., more flexible and cooperative ways of working with the aid of wearable and mobile devices). This workshop is organized by the IFIP WG 13.5 on Human Error, Resilience, Reliability and Safety in System Development.][pdf][scholar][bibtex]

A Data-Driven Study on the Hawthorne Effect in Sensor-Based Human Activity Recognition, Hoelzemann, Alexander and Bock, Marius and Valladares Bastías, Ericka Andrea and El Ouazzani Touhami, Salma and Nassiri, Kenza and Van Laerhoven, Kristof, In UbiComp/ISWC ’23 Adjunct Proceedings, 2023. [abstractKnown as the Hawthorne Effect, studies have shown that participants alter their behavior and execution of activities in response to being observed. With researchers from a multitude of human-centered studies knowing of the existence of the said effect, quantitative studies investigating the neutrality and quality of data gathered in monitored versus unmonitored setups, particularly in the context of Human Activity Recognition (HAR), remain largely underexplored. With the development of tracking devices providing the possibility of carrying out less invasive observation of participants’ conduct, this study provides a data-driven approach to measure the effects of observation on participants’ execution of five workout-based activities. Using both classical feature analysis and deep learning-based methods we analyze the accelerometer data of 10 participants, showing that a different degree of observation only marginally influences captured patterns and predictive performance of classification algorithms. Although our findings do not dismiss the existence of the Hawthorne Effect, they do challenge the prevailing notion of the applicability of laboratory compared to in-the-wild recorded data. The dataset and code to reproduce our experiments are available via https://github.com/mariusbock/hawthorne_har.][pdf][scholar][bibtex]

Do predictability factors towards signing avatars hold across cultures?, Abdelhadi Soudi and Manal El Hakkaoui and Kristof Van Laerhoven, arXiv 2307.02103, 2023. [abstractAvatar technology can offer accessibility possibilities and improve Deaf and Hard-of-Hearing sign language users' access to communication, education and services, such as the healthcare system. However, sign language users' acceptance of signing avatars as well as their attitudes towards them vary and depend on many factors. Furthermore, research on avatar technology is mostly done by researchers who are not Deaf. The study examines the extent to which intrinsic or extrinsic factors contribute to predicting the attitude towards avatars across cultures. Intrinsic factors include the characteristics of the avatar, such as appearance, movements and facial expressions. Extrinsic factors include users' technology experience, their hearing status, age and their sign language fluency. This work attempts to answer questions such as: if lower attitude ratings are related to poor technology experience with ASL users, for example, is that also true for Moroccan Sign Language (MSL) users? For the purposes of the study, we designed a questionnaire to understand MSL users' attitude towards avatars. Three groups of participants were surveyed: Deaf (57), Hearing (20) and Hard-of-Hearing (3). The results of our study were then compared with those reported in other relevant studies.][pdf][scholar]

WEAR: An Outdoor Sports Dataset for Wearable and Egocentric Activity Recognition, Marius Bock and Hilde Kuehne and Kristof Van Laerhoven and Michael Moeller, In arXiv:2304.05088, 2023. [abstractThough research has shown the complementarity of camera- and inertial-based data, datasets which offer both egocentric video and inertial-based sensor data remain scarce. In this paper, we introduce WEAR, an outdoor sports dataset for both vision- and inertial-based human activity recognition (HAR). The dataset comprises data from 18 participants performing a total of 18 different workout activities with untrimmed inertial (acceleration) and camera (egocentric video) data recorded at 10 different outside locations. Unlike previous egocentric datasets, WEAR provides a challenging prediction scenario marked by purposely introduced activity variations as well as an overall small information overlap across modalities. Benchmark results obtained using each modality separately show that each modality interestingly offers complementary strengths and weaknesses in their prediction performance. Further, in light of the recent success of temporal action localization models following the architecture design of the ActionFormer, we demonstrate their versatility by applying them in a plain fashion using vision, inertial and combined (vision + inertial) features as input. Results demonstrate both the applicability of vision-based temporal action localization models for inertial data and fusing both modalities by means of simple concatenation, with the combined approach (vision + inertial features) being able to produce the highest mean average precision and close-to-best F1-score. The dataset and code to reproduce experiments are publicly available via: https://mariusbock.github.io/wear/.][pdf][scholar]

Hang-Time HAR: A Benchmark Dataset for Basketball Activity Recognition using Wrist-worn Inertial Sensors, Alexander Hoelzemann and Julia Lee Romero and Marius Bock and Kristof Van Laerhoven and Qin Lv, 2023. [abstractWe present a benchmark dataset for evaluating physical human activity recognition methods from wrist-worn sensors, for the specific setting of basketball training, drills, and games. Basketball activities lend themselves well for measurement by wrist-worn inertial sensors, and systems that are able to detect such sport-relevant activities could be used in applications toward game analysis, guided training, and personal physical activity tracking. The dataset was recorded for two teams from separate countries (USA and Germany) with a total of 24 players who wore an inertial sensor on their wrist, during both repetitive basketball training sessions and full games. Particular features of this dataset include an inherent variance through cultural differences in game rules and styles as the data was recorded in two countries, as well as different sport skill levels, since the participants were heterogeneous in terms of prior basketball experience. We illustrate the dataset's features in several time-series analyses and report on a baseline classification performance study with two state-of-the-art deep learning architectures.][pdf][scholar][bibtex]

A Matter of Annotation: An Empirical Study on In Situ and Self-Recall Activity Annotations from Wearable Sensors, Alexander Hoelzemann and Kristof Van Laerhoven, 2023. [abstractResearch into the detection of human activities from wearable sensors is a highly active field, benefiting numerous applications, from ambulatory monitoring of healthcare patients via fitness coaching to streamlining manual work processes. We present an empirical study that compares 4 different commonly used annotation methods utilized in user studies that focus on in-the-wild data. These methods can be grouped into user-driven, in situ annotations - which are performed before or during the activity is recorded - and recall methods - where participants annotate their data in hindsight at the end of the day. Our study illustrates that different labeling methodologies directly impact the annotations' quality, as well as the capabilities of a deep learning classifier trained with the data. We noticed that in situ methods produce fewer but more precise labels than recall methods. Furthermore, we combined an activity diary with a visualization tool that enables the participant to inspect and label their activity data. Due to the introduction of such a tool, we were able to decrease missing annotations and increase the annotation consistency, and therefore the F1-score of the deep learning model by up to 8% (ranging between 82.1 and 90.4% F1-score). Furthermore, we discuss the advantages and disadvantages of the methods compared in our study, the biases they could introduce, and the consequences of their usage on human activity recognition studies, as well as possible solutions.][pdf][scholar][bibtex]

UbiComp/ISWC '23 Adjunct: Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2023 ACM International Symposium on Wearable Computing, Monica Tentori and Nadir Weibel and Kristof Van Laerhoven and Zhongyi Zhou, 2023. [pdf][scholar][bibtex]

Autonomy and Safety: A Quantitative Study with Control Room Operators on Affinity for Technology Interaction and Wish for Pervasive Computing Solutions, Nadine Flegel and Daniel Wessel and Jonas Poehler and Kristof Van Laerhoven and Tilo Mentler, In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, p.1–10, 2023. [abstractControl rooms are central to the well-being of many people. In terms of human computer interaction (HCI), they are characterized by complex IT infrastructures providing numerous graphical user interfaces. More modern approaches have been researched for decades. However, they are rarely used. What role does the attitude of operators towards novel solutions play? In one of the first quantitative cross-domain studies in safety-related HCI research (N = 155), we gained insight into affinity for technology interaction (ATI) and wish for pervasive computing solutions of operators in three domains (emergency response, public utilities, maritime traffic). Results show that ATI values were rather high, with broader range only in maritime traffic operators. Furthermore, the assessment of autonomy is more strongly related to the desire for novel solutions than perceived added safety value. These findings can provide guidance for the design of pervasive computing solutions, not only but especially for users in safety-critical contexts.][pdf][scholar][bibtex]

EyesOnMe: Investigating Haptic and Visual User Guidance for Near-Eye Positioning of Mobile Phones for Self-Eye-Examinations, Luca Maxim Meinhardt and Kristof Van Laerhoven and David Dobbelstein, In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, p.1–10, 2023. [abstractThe scarcity of professional ophthalmic equipment in rural areas and during exceptional situations such as the COVID-19 pandemic highlights the need for tele-ophthalmology. This late-breaking work presents a novel method for guiding users to a specific pose (3D position and 3D orientation) near the eye for mobile self-eye examinations using a smartphone. The user guidance is implemented utilizing haptic and visual modalities to guide the user and subsequently capture a close-up photo of the user’s eyes. In a within-subject user study (n=24), the required time, success rate, and perceived demand for the visual and haptic feedback conditions were examined. The results indicate that haptic feedback was the most efficient and least cognitively demanding in the positioning task near the eye, whereas relying on only visual feedback can be more difficult due to the near focus point or refractive errors.][pdf][scholar][bibtex]

Investigating Cognitive Load in Emergency Control Room Simulations, Poehler, Jonas and Vitt, Antonia and Flegel, Nadine and Mentler, Tilo and Van Laerhoven, Kristof, 2023. [abstractWe propose a novel approach to measure cognitive load in emergency control room operators using their breathing patterns. By using LstSim, a community-driven emergency control room simulator, we aim to recreate the work environment of a dispatcher, induce a cognitive load, and measure the response in the user’s breathing. Participants were monitored and recorded through wearable sensors, depth cameras below the screens, and simulation-internal parameters and interactions. The participants’ breathing patterns were analyzed to identify changes in breathing amplitude in response to varying levels of cognitive load. The results of our study provide compelling evidence that a simulated control room environment is successful in inducing cognitive load on participants, as shown by a significant increase in NASA TLX scores as well as a 13% increase in breathing amplitude. Despite the challenges posed by individual variability, our findings also highlight the potential of using breathing as a real-time, noninvasive measure of cognition in control rooms. This has significant implications for the design and operation of emergency control rooms, potentially leading to the development of more responsive systems that adapt to the operator’s cognitive load, thereby enhancing performance and effectiveness.][pdf][scholar][bibtex]

Evaluation of precision, accuracy and threshold for the design of vibrotactile feedback in eye tracking applications, Fischer, A. and Wendt, T. M. and Stiglmeier, L. and Gawron, P. and Van Laerhoven, K., In Journal of Sensors and Sensor Systems, vol.12(1), p.103--109, 2023. [abstractNovel approaches for the design of assistive technology controls propose the usage of eye tracking devices such as for smart wheelchairs and robotic arms. The advantages of artificial feedback, especially vibrotactile feedback, as opposed to their use in prostheses, have not been sufficiently explored. Vibrotactile feedback reduces the cognitive load on the visual and auditory channel. It provides tactile sensation, resulting in better use of assistive technologies. In this study, the impact of vibration on the precision and accuracy of a head-worn eye tracking device is investigated. The presented system is suitable for further research in the field of artificial feedback. Vibration was perceivable for all participants, yet it did not produce any significant deviations in precision and accuracy.][pdf][scholar][bibtex]

Does Automation Scale? Notes From an HCI Perspective, Tilo Mentler and Nadine Flegel and Jonas Pöhler and Kristof Van Laerhoven, In AutomationXP23: Intervening, Teaming, Delegating - Creating Engaging Automation Experiences, 2023. [abstractComputer-based interactive systems increasingly shape all areas of life. Scalability with respect to human computer interaction (HCI) is an umbrella term under which various developments in this regard are discussed (e.g., growing numbers of users, variety of devices). However, scalability relates not only to user interface and interaction design but to automation. Whether and how automation scales or can be scaled regarding HCI has hardly been addressed by previous research. By introducing a one-day user journey focusing on automation experience, we discuss characteristics and research directions for safe and satisfying scaled automation experiences.][pdf][scholar]

An Embedded and Real-Time Pupil Detection Pipeline, Ankur Raj and Diwas Bhattarai and Kristof Van Laerhoven, 2023. [abstractWearable pupil detection systems often offload the analysis of the captured wearer's eye images to wirelessly-tethered back-end systems. We argue in this paper that investigating hardware-software co-designs would bring along opportunities to make such systems smaller and more efficient. We introduce an open-source embedded system for wearable, non-invasive pupil detection in real-time, on the wearable, embedded platform itself. Our system consists of a head-mounted eye tracker prototype, which combines two miniature camera systems with a Raspberry Pi-based embedded system. Apart from the hardware design, we also contribute a pupil detection pipeline that operates using edge analysis natively on the embedded system at 30 fps, with a run-time of 54 ms at 480x640 and 23 ms at 240x320. An average cumulative error of 5.3368 px is found on the LPW dataset for a detection rate of 51.9% with our detection pipeline. For evaluation on our hardware-specific camera frames, we also contribute a dataset of 35,000 images from 20 participants.][pdf][scholar][bibtex]

Evaluation of a Depth Camera as e-Health Sensor for Contactless Respiration Monitoring, Brinkmann, Steffen and Kempfle, Jochen and Van Laerhoven, Kristof and Pöhler, Jonas, In 2023 IEEE International Conference on Pervasive Computing and Communications Workshop: TELMED 2023: Second Workshop on Telemedicine and e-Health evolution in the new era of social distancing, p.136--141, 2023. [abstractWe evaluate a contact-free method to observe the breathing behavior of persons seated in front of a desktop environment, with an RGB-D camera attached to the screen. Our system monitors the breathing-induced movement of the user's chest, delivering a respiration curve from the camera depth stream by mean- or median-based averaging of single distances to pixels in the target body region over time. The system was evaluated in an experiment on 8 study participants. The mean-based respiratory rate estimation presented fewer errors, and the system works best at close proximity. At 1 m distance, it presents a correlation of 0.94 to a respiration belt as ground truth and an absolute error of 0.04 bpm. From our data, no influence of gender or differing respiration rates on the performance of the system was found. The final system does not require extensive knowledge to set up and operate. The approach allows users to monitor their breathing rate while working, opening up new e-health application areas. They range from self-care and healthcare to managing operators in safety-critical systems such as control rooms, since the respiratory rate is closely linked to a person's state of attention.][pdf][scholar][bibtex]
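
A hedged sketch of the described respiration-curve extraction (ROI coordinates, frame format, and depth units are assumptions, not the evaluated system's parameters): average the depth values inside a chest region of interest for every frame of the depth stream.

```python
# Illustrative only: chest-ROI depth averaging yields a breathing-induced motion curve.
import numpy as np

def respiration_curve(depth_frames, roi=(200, 140, 80, 80), use_median=False):
    """depth_frames: iterable of (H, W) depth images in mm; roi = (x, y, w, h)."""
    x, y, w, h = roi
    curve = []
    for frame in depth_frames:
        chest = frame[y:y + h, x:x + w].astype(float)
        chest = chest[chest > 0]                  # ignore invalid (zero) depth pixels
        curve.append(np.median(chest) if use_median else np.mean(chest))
    return np.array(curve)                        # chest distance over time
```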

2022

Investigating (re)current state-of-the-art in human activity recognition datasets, Marius Bock and Alexander Hoelzemann and Michael Moeller and Kristof Van Laerhoven, In Frontiers in Computer Science, vol.4, 2022. [abstractMany human activities consist of physical gestures that tend to be performed in certain sequences. Wearable inertial sensor data have as a consequence been employed to automatically detect human activities, lately predominantly with deep learning methods. This article focuses on the necessity of recurrent layers—more specifically Long Short-Term Memory (LSTM) layers—in common Deep Learning architectures for Human Activity Recognition (HAR). Our experimental pipeline investigates the effects of employing none, one, or two LSTM layers, as well as different layer sizes, within the popular DeepConvLSTM architecture. We evaluate the architecture's performance on five well-known activity recognition datasets and provide an in-depth analysis of the per-class results, showing trends in which types of activities or datasets profit the most from the removal of LSTM layers. For 4 out of 5 datasets, an altered architecture with one LSTM layer produces the best prediction results. In our previous work we already investigated the impact of a 2-layered LSTM when dealing with sequential activity data. Extending upon this, we now propose a metric, rGP, which aims to measure the effectiveness of learned temporal patterns for a dataset and can be used as a decision metric for whether to include recurrent layers in a network at all. Even for datasets including activities without explicit temporal processes, the rGP can be high, suggesting that temporal patterns were learned, and consequently convolutional networks are being outperformed by networks including recurrent layers. We conclude this article by putting forward the question of to what degree popular HAR datasets contain unwanted temporal dependencies, which, if not taken care of, can benefit networks in achieving high benchmark scores and give a false sense of overall generalizability to a real-world setting.][pdf][scholar][bibtex]

OpenIBC: Open-Source Wake-Up Receiver for Capacitive Intra-Body Communication, Wolling, Florian and Hauck, Florian and Schröder, Guenter and Van Laerhoven, Kristof, In 19th International Conference on Embedded Wireless Systems and Networks (EWSN 2022), 2022. [abstractIntra-Body Communication (IBC) uses the human body as a part of the physical transmission channel for a more efficient and secure on-body communication. Since its introduction in 1995, it has evolved into an alternative to traditional wired and wireless techniques, and was eventually included as human body communication (HBC) in the IEEE 802.15.6 standard for wireless body area networks (WBAN). In contrast to the ubiquitous radio-frequency identification (RFID) and near-field communication (NFC), IBC has, however, not reached the market yet, and possible applications remain underinvestigated. We present the OpenIBC project and a first open-source IBC receiver that is based on a repurposed off-the-shelf low-power RFID wake-up receiver front-end. In the evaluation, the prototype achieved a data rate of 4096 bit/s with a packet error rate of 320.0E-6 at a low power of 7.4 µW in listening mode and 8.4 µW when receiving data. The design files and software are made available to encourage researchers to replicate and improve on our work, and to explore potential applications that benefit from IBC.][pdf][scholar][bibtex]

“Man’s (and Sheep’s) Best Friend”: Towards a Shepherding-Based Metaphor for Human-Computer Cooperation in Process Control, Mentler, Tilo and Flegel, Nadine and Poehler, Jonas and Van Laerhoven, Kristof, In Proceedings of the 33rd European Conference on Cognitive Ergonomics, 2022. [abstractMetaphors can be helpful in human-computer interaction in various ways, e.g., for user-appropriate design of interfaces, naming of functions, or visualization. In the field of process control, paradigm shifts are imminent under terms such as smart control rooms or pervasive computing environments. However, there is a lack of suitable metaphors to accompany this development. This paper examines the extent to which sheep herding and the relationships between humans, dogs, and sheep can be a suitable model for shaping human-computer cooperation in process control. Even though sheep herding has already been discussed in various relationships to HCI, a systematic discussion of the factors that underlie successful human-animal cooperation is lacking. Such a discussion is introduced in this paper, based on an expert interview and literature research. Based on this, it is discussed to what extent the success factors can be transferred to the design of technical systems and human-computer cooperation.][pdf][scholar][bibtex]

A chat with Dr. Jekyll and Mr. Hyde - Intent in chatbot communication, Poehler, Jonas and Flegel, Nadine and Mentler, Tilo and Van Laerhoven, Kristof, In 2022 10th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), p.1--3, 2022. [abstractWhat would happen if a chatbot tried to manipulate your emotions? Can a modern language model output stable, sentiment-oriented conversations that can manipulate the user? We propose a framework to explore chatbots which have hidden intentions in their interaction with the user.][pdf][scholar][bibtex]

Data quality evaluation in wearable monitoring, Sebastian Boettcher and Solveig Vieluf and Elisa Bruno and Boney Joseph and Nino Epitashvili and Andrea Biondi and Nicolas Zabler and Martin Glasstetter and Matthias Duempelmann and Kristof Van Laerhoven and Mona Nasseri and Benjamin H. Brinkmann and Mark P. Richardson and Andreas Schulze-Bonhage and Tobias Loddenkemper, In Scientific Reports, vol.12(1)2022. [abstractWearable recordings of neurophysiological signals captured from the wrist offer enormous potential for seizure monitoring. Yet, data quality remains one of the most challenging factors that impact data reliability. We suggest a combined data quality assessment tool for the evaluation of multimodal wearable data. We analyzed data from patients with epilepsy from four epilepsy centers. Patients wore wristbands recording accelerometry, electrodermal activity, blood volume pulse, and skin temperature. We calculated data completeness and assessed the time the device was worn (on-body), and modality-specific signal quality scores. We included 37,166 h from 632 patients in the inpatient and 90,776 h from 39 patients in the outpatient setting. All modalities were affected by artifacts. Data loss was higher when using data streaming (up to 49% among inpatient cohorts, averaged across respective recordings) as compared to onboard device recording and storage (up to 9%). On-body scores, estimating the percentage of time a device was worn on the body, were consistently high across cohorts (more than 80%). Signal quality of some modalities, based on established indices, was higher at night than during the day. A uniformly reported data quality and multimodal signal quality index is feasible, makes study results more comparable, and contributes to the development of devices and evaluation routines necessary for seizure monitoring.][pdf][scholar][bibtex]

Scalable Human Computer Interaction in Control Rooms as Pervasive Computing Environments, Flegel, Nadine and Van Laerhoven, Kristof and Mentler, Tilo, In Proceedings of the 33rd European Conference on Cognitive Ergonomics, 2022. [abstractPrivate and professional life contexts must be viewed as pervasive computing environments, with the aim to address people’s needs and tasks as well as cooperation and communication issues. However, there is a risk that this results in handling a growing number of devices and complex interactions. The question arises to what extent interaction concepts that were designed for single or a few devices can be transferred to such environments. This can be seen as a scaling problem in terms of cognitive ergonomics. This is also an important issue in safety-critical domains, where control rooms serve as central units, as the demands on operators are increasing. Support is needed in decision-making, communication and collaboration. This paper describes the research questions and methodology of the development of design principles for scalable interaction design in control rooms from a cognitive ergonomics perspective. The expected outcome is a set of concepts that are specifically suited for safety-critical pervasive computing environments.][pdf][scholar][bibtex]

IBSync: Intra-body synchronization and implicit contextualization of wearable devices using artificial ECG landmarks, Wolling, Florian and Van Laerhoven, Kristof, In Frontiers in Computer Science, vol.4, 2022. [pdf][scholar][bibtex]

WashSpot: Real-Time Spotting and Detection of Enacted Compulsive Hand Washing with Wearable Devices, Burchard, Robin and Scholl, Philipp M. and Lieb, Roselind and Van Laerhoven, Kristof and Wahl, Karina, In UbiComp/ISWC ’22 Adjunct Proceedings, 2022. [abstractThe automatic detection of hand washing has numerous applications in work and medical environments. Checking the compliance with hygiene standards in hospitals, or personal hygiene support are examples thereof. However, hand-washing can also become pathological and is a symptom of the obsessive-compulsive disorder (OCD) spectrum. Individuals suffering from OCD are compelled to wash their hands often to the extent of harming themselves. Automatically spotting compulsive hand-washing throughout the day can assist therapeutic interventions by augmenting the ongoing monitoring of compulsions. Based on this, the therapist can gauge the efficacy of the chosen interventions. We present WashSpot, a neural-network based method to spot (compulsive) hand-washing on commercially available smartwatches using inertial motion sensor data.][pdf][scholar][bibtex]

Use Cases and Design of a Virtual Cross-Domain Control Room Simulator, Mentler, Tilo and Flegel, Nadine and Pöhler, Jonas and Van Laerhoven, Kristof, In Mensch und Computer 2022 - Workshopband, 2022. [abstractControl rooms are facilities of central importance in many safety critical domains (e.g., rescue services, traffic management, power supply). At the same time, they are characterized by a complex IT infrastructure that can only be integrated and adapted to a limited extent for research activities in the area of human-computer interaction. Previous work has already shown that virtual reality simulators of control rooms can be a suitable tool for these purposes. However, the solutions to date are very domain-specific, which makes it difficult to transfer knowledge and also to test aspects that are not primarily domain-specific (e.g., multimodal forms of interaction). This paper presents the concept of a domain independent control room simulator and the development status regarding two use cases (build mode, simulation mode). Finally, further development and use of this approach are discussed.][pdf][scholar][bibtex]

Evaluation of Precision, Accuracy and Threshold for the Design of Vibrotactile Feedback in Eye Tracking Applications, Fischer, Anke and Wendt, Thomas and Gawron, Philipp and Stiglmeier, Lukas and Van Laerhoven, Kristof, In Sensors and Measuring Systems; 21st ITG/GMA-Symposium, p.1--6, 2022. [abstractA novel solution for controlling assistive technology, such as smart wheelchairs and robotic arms, is the use of eye tracking devices. In this context, usage-supporting methods such as artificial feedback are not well explored. Vibrotactile feedback has been shown to be helpful in decreasing the cognitive load on the visual and auditory channels and can provide a perception of touch. People with severe limitations of motor functions could benefit from eye tracking controls supported with vibrotactile feedback. In this study, fundamental results are presented on the design of an appropriate vibrotactile feedback system for eye tracking applications. We show that a perceivable vibrotactile stimulus has no significant effect on the accuracy and precision of a head-worn eye tracking device. It is anticipated that the results of this paper will lead to new insights into the design of vibrotactile feedback for eye tracking applications and eye tracking controls.][pdf][scholar]

Activity tracker-based intervention to increase physical activity in patients with type 2 diabetes and healthy individuals: study protocol for a randomized controlled trial, M. Maehs and J. S. Pithan and I. Bergmann and L. Gabrys and J. Graf and A. Hoelzemann and K. Van Laerhoven and S. Otto-Hagemann and M. L. Popescu and L. Schwermann and B. Wenz and I. Pahmeier and A. Teti, In Trials, vol.23(1)2022. [abstractBackground: One relevant strategy to prevent the onset and progression of type 2 diabetes mellitus (T2DM) focuses on increasing physical activity. The use of activity trackers by patients could enable objective measurement of their regular physical activity in daily life and promote physical activity through the use of a tracker-based intervention. This trial aims to answer three research questions: (1) Is the use of activity trackers suitable for longitudinal assessment of physical activity in everyday life? (2) Does the use of a tracker-based intervention lead to sustainable improvements in the physical activity of healthy individuals and in people with T2DM? (3) Does the accompanying digital motivational intervention lead to sustainable improvements in physical activity for participants using the tracker-based device? Methods: The planned study is a randomized controlled trial focused on 1642 participants with and without T2DM for 9 months with regard to their physical activity behavior. Subjects allocated to an intervention group will wear an activity tracker. Half of the subjects in the intervention group will also receive an additional digital motivational intervention. Subjects allocated to the control group will not receive any intervention. The primary outcome is the amount of moderate and vigorous physical activity in minutes and the number of steps per week measured continuously with the activity tracker and assessed by questionnaires at four time points. Secondary endpoints are medical parameters measured at the same four time points. The collected data will be analyzed using inferential statistics and explorative data mining techniques. Discussion: The trial uses an interdisciplinary approach with a team including sports psychologists, sports scientists, health scientists, health care professionals, physicians, and computer scientists. It also involves the processing and analysis of large amounts of data collected with activity trackers. These factors represent particular strengths as well as challenges in the study. Trial Registration: The trial is registered at the World Health Organization International Clinical Trials Registry Platform via the German Clinical Studies Trial Register (DRKS), DRKS00027064. Registered on 11 November 2021.][pdf][scholar][bibtex]

"I Want My Control Room To Be...": On the Aesthetics of Interaction in a Safety-Critical Working Environment, Flegel, Nadine and Poehler, Jonas and Van Laerhoven, Kristof and Mentler, Tilo, In Mensch und Computer 2022 (MuC ’22), p.7, 2022. [abstractControl rooms are safety-relevant working environments characterized by complex IT infrastructure. With regard to the interaction of operators with control room systems, usability has been the major criteria for decades. However, there is increasing discussion about the extent to which the concept of user experience (UX) also plays a role in such safety-critical contexts. What is still largely missing is the application of concrete UX-specific methods in the context of control rooms. This paper explains how and with what results 9 operators used an interaction vocabulary focusing on pragmatic and hedonic qualities to complete the sentence “I want my control room to be... ”. Results first suggest that pragmatic, i.e., usability-oriented, attributions are of greater importance to operators. However, especially the more UX-specific terms of the interaction vocabulary, which were initially not found to be so relevant, yielded many valuable hints and inspiration for the future design of control room workplaces. By reflecting on the process of discussing the aesthetics of interactions in such a safety-critical working environment, recommendations are provided for considering UX in safety.][pdf][scholar][bibtex]

A Public Repository to Improve Replicability and Collaboration in Deep Learning for HAR*, Pellatt, Lloyd and Bock, Marius and Roggen, Daniel and Van Laerhoven, Kristof, In 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), p.54--57, 2022. [abstractDeep learning methods have become an almost default choice of machine learning approach for human activity recognition (HAR) systems that operate on time series data, such as those of wearable sensors. However, the implementations of such methods suffer from complex package dependencies, obsolescence, and subtleties in the implementation which are sometimes not well documented. In order to accelerate research and minimise any discrepancies between (re-)implementations, we introduce a curated, open-source repository which (1) contains complete data loading and preprocessing pipelines for 6 well-established HAR datasets, (2) supports several popular HAR deep learning architectures, and (3) provides necessary functionalities to train and evaluate said models. We welcome contributions from fellow researchers to this repository, made available through: https://github.com/STRCSussex-UbiCompSiegen/dl_har_public][pdf][scholar][bibtex]

WetTouch: Touching Ground in the Wearable Detection of Hand-Washing Using Capacitive Sensing, Florian Wolling, Jonas Bilal, Philipp M. Scholl, Benjamin Völker, Kristof Van Laerhoven, In WristSense 2022: Workshop on Sensing Systems and Applications Using Wrist Worn Smart Devices., p.769--774, 2022. [abstractThe detection of hand-washing is not only of interest since the emergence of the COVID-19 pandemic. Obsessive-compulsive disorder (OCD) often manifests itself in terms of hand-washing compulsions. Detecting these compulsions can potentially improve the effectiveness of treatments. Therapists can offer additional just-in-time mobile interventions, improved momentary assessment, and interactive exposure and reaction prevention (ERP) training. This, however, requires reliable and ambulatory detection of obsessive hand-washing. We present a novel technique which enables hand-washing detection by means of a wrist-worn, capacitive sensing device. It relies on the effect that touching running tap water yields a strong change in the capacitance between the wearer and the environment. The WetTouch system exploits this effect and we present first findings on the feasibility of such detection. For this, a set of seven pertinent activities with and without touching water was measured, and we found that hand-washing is clearly identifiable for two different subjects. The technique hence paves the path towards reliable and unobtrusive hand-washing detection in ambulatory applications with capacitive sensing.][pdf][scholar][bibtex] honorable mention award

Intra- and Inter-Subject Perspectives on the Detection of Focal Onset Motor Seizures in Epilepsy Patients, Böttcher, Sebastian and Bruno, Elisa and Epitashvili, Nino and Dümpelmann, Matthias and Zabler, Nicolas and Glasstetter, Martin and Ticcinelli, Valentina and Thorpe, Sarah and Lees, Simon and Van Laerhoven, Kristof and Richardson, Mark P. and Schulze-Bonhage, Andreas, In Sensors, vol.22(9)2022. [abstractFocal onset epileptic seizures are highly heterogeneous in their clinical manifestations, and a robust seizure detection across patient cohorts has to date not been achieved. Here, we assess and discuss the potential of supervised machine learning models for the detection of focal onset motor seizures by means of a wrist-worn wearable device, both in a personalized context as well as across patients. Wearable data were recorded in-hospital from patients with epilepsy at two epilepsy centers. Accelerometry, electrodermal activity, and blood volume pulse data were processed and features for each of the biosignal modalities were calculated. Following a leave-one-out approach, a gradient tree boosting machine learning model was optimized and tested in an intra-subject and inter-subject evaluation. In total, 20 seizures from 9 patients were included and we report sensitivities of 67% to 100% and false alarm rates of down to 0.85 per 24 h in the individualized assessment. Conversely, for an inter-subject seizure detection methodology tested on an out-of-sample data set, an optimized model could only achieve a sensitivity of 75% at a false alarm rate of 13.4 per 24 h. We demonstrate that robustly detecting focal onset motor seizures with tonic or clonic movements from wearable data may be possible for individuals, depending on specific seizure manifestations.][pdf][scholar][bibtex]

Towards Control Rooms as Human-Centered Pervasive Computing Environments, Nadine Flegel and Jonas Poehler and Kristof Van Laerhoven and Tilo Mentler, In Lecture Notes in Computer Science, volume 13198, p.329--344, 2022. [abstractState-of-the-art control rooms are equipped with a variety of input and output devices in terms of single-user workstations, shared public screens, and multimodal alarm systems. However, operators are bound to and sitting at their respective workstations for the most part of their shifts. Therefore, cooperation efforts are hampered, and physical activity is limited for several hours. Incorporating mobile devices, wearables and sensor technologies could improve on the current mode of operation but must be considered a paradigm shift from control rooms as a collection of technically networked but stationary workstations to control rooms as pervasive computing environments being aware of people and processes. However, based on the reviewed literature, systematic approaches to this paradigm shift taking usability and user experience into account are rare. In this work, we describe a root concept for control rooms as human-centered pervasive computing environments and introduce a framework for developing a wearable assistant as one of the central and novel components. Furthermore, we describe design challenges from a socio-technical perspective based on 9 expert interviews important for further research on pervasive computing environments in safety-critical domains.][pdf][scholar][bibtex]

Validation of an open-source ambulatory assessment system in support of replicable activity studies, Kristof Van Laerhoven and Alexander Hoelzemann and Iris Pahmeier and Andrea Teti and Lars Gabrys, In German Journal of Exercise and Sport Research, 2022. [abstractPurpose: Inertial-based trackers have become a common tool in data capture for ambulatory studies that aim at characterizing physical activity. Many systems that perform remote recording of accelerometer data use commercial trackers and black-box aggregation algorithms, often resulting in data that are locked into proprietary formats and metrics that make later replication or comparison difficult. Methods: The primary purpose of this manuscript is to validate an open-source ambulatory assessment system that consists of the hardware devices, algorithms, and software components of our approach. We report on two validation experiments, one lab-based treadmill study on a convenience sample of 16 volunteers and one ’in vivo’ study with 28 volunteers suffering from diabetes or cardiovascular disease. Results: A comparison between data from ActiGraph GT9X trackers and our proposed system reveals that the original inertial sensor signals at the wrist strongly correlate (Pearson correlation coefficients for raw inertial sensor signals of 0.97 in the controlled treadmill-walking setting) and that estimated steps from an open-source wrist-based detection approach correlate with the hip-worn ActiGraph output (average Pearson correlation coefficients of 0.81 for minute-wise comparisons of detected steps) in day-long ambulatory data. Conclusion: Recording inertial sensor data in a standardized form and relying on open-source algorithms on these data form a promising methodology that ensures that datasets can be replicated or enriched long after the wearable trackers have been decommissioned.][pdf][scholar][bibtex]
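
The correlation-based comparison described in this validation can be illustrated with a small sketch (assumptions, not the study's analysis scripts): per-step timestamps from two pipelines are binned into minute-wise counts and compared via Pearson correlation; the demo timestamps below are synthetic.

```python
# Illustrative sketch only: Pearson correlation between two trackers on
# minute-wise step counts. The per-step timestamps are hypothetical demo data.
import numpy as np

def minute_wise_steps(step_times_s, duration_s):
    """Bin per-step timestamps (in seconds) into steps per minute."""
    minutes = int(np.ceil(duration_s / 60.0))
    counts, _ = np.histogram(step_times_s, bins=minutes, range=(0, minutes * 60))
    return counts

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(0)
ref_steps = np.sort(rng.uniform(0, 3600, 5000))             # reference device, 1 h of walking
test_steps = ref_steps[rng.random(ref_steps.size) > 0.05]   # second pipeline misses ~5% of steps
print(pearson(minute_wise_steps(ref_steps, 3600),
              minute_wise_steps(test_steps, 3600)))
```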

LstSim-Extended: Towards Monitoring Interaction and Beyond in Web-Based Control Room Simulations, Jonas Poehler and Nadine Flegel and Tilo Mentler and Kristof Van Laerhoven, In Lecture Notes in Computer Science, volume 13198, p.345--356, 2022. [abstractControl room operators rely on a range of technologies to communicate crucial information and dependably coordinate a disparate collection of tasks and procedures. Tools that are capable to design, to implement, and to evaluate interactive systems that can assist the tasks of control room operators in these environments therefore play an important role. This paper offers a framework that facilitates the early research steps into evaluating work flows, interfaces, and wearable sensors in the context of an emergency dispatch center. It entails a primarily web-based, quick-to-deploy, and scalable method that specifically targets preliminary studies in which large-scale and situated deployments are not feasible. By using open-source and affordable wrist-worn sensors, it furthermore enables investigating any relationships between interaction design in control rooms and operators’ physiological data. Our evaluation on a preliminary study with 5 participants shows that basic scenarios are able to induce differences which can be measured by reaction times in the interactions as well as in the data from the smartwatch.][pdf][scholar][bibtex]

Open-Source Data Collection for Activity Studies at Scale, Alexander Hoelzemann and Jana Pithan and Kristof Van Laerhoven, In Sensor- and Video-Based Activity and Behavior Computing, p.27--38, 2022. [abstractActivity studies range from detecting key indicators such as steps, active minutes, or sedentary bouts, to the recognition of physical activities such as specific fitness exercises. Such types of activity recognition rely on large amounts of data from multiple persons, especially with deep learning. However, current benchmark datasets rarely have more than a dozen participants. Once wearable devices are phased out, closed algorithms that operate on the sensor data are hard to reproduce and devices supply raw data. We present an open-source and cost-effective framework that is able to capture daily activities and routines and which uses publicly available algorithms, while avoiding any device-specific implementations. In a feasibility study, we were able to test our system in production mode. For this purpose, we distributed the Bangle.js smartwatch as well as our app to 12 study participants, who started the watches at a time of individual choice every day. The collected data was then transferred to the server at the end of each day.][pdf][scholar][bibtex]

Control Rooms from a Human-Computer Interaction Perspective, Tilo Mentler and Philippe Palanque and Michael D. Harrison and Kristof Van Laerhoven and Paolo Masci, In Lecture Notes in Computer Science, volume 13198, p.281--289, 2022. [abstractAs defined in Paper 2 presented at the workshop, whose presentations and discussions are introduced in this paper, “control rooms are work spaces that serve the purpose of managing and operating physically dispersed systems, services and staff”.][pdf][scholar][bibtex]

2021

Improving Deep Learning for HAR with Shallow LSTMs, Marius Bock and Alexander Hoelzemann and Michael Moeller and Kristof Van Laerhoven, In Proceedings of the 2021 International Symposium on Wearable Computers, ISWC 2021, September 21-26, 2021, 2021. [abstractRecent studies in Human Activity Recognition (HAR) have shown that Deep Learning methods are able to outperform classical Machine Learning algorithms. One popular Deep Learning architecture in HAR is the DeepConvLSTM. In this paper we propose to alter the DeepConvLSTM architecture to employ a 1-layered instead of a 2-layered LSTM. We validate our architecture change on 5 publicly available HAR datasets by comparing the predictive performance with and without the change employing varying hidden units within the LSTM layer(s). Results show that across all datasets, our architecture consistently improves on the original one: Recognition performance increases up to 11.7% for the F1-score, and our architecture significantly decreases the number of learnable parameters. This improvement over DeepConvLSTM decreases training time by as much as 48%. Our results stand in contrast to the belief that one needs at least a 2-layered LSTM when dealing with sequential data. Based on our results we argue that said claim might not be applicable to sensor-based HAR.][pdf][scholar][bibtex] best paper
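
A minimal sketch of the kind of architecture change discussed here, assuming PyTorch and simplified layer sizes (this is not the authors' released code): a DeepConvLSTM-style network in which the number of LSTM layers is a constructor argument, so the 1-layer variant can be compared against a 2-layer one.

```python
# Hedged sketch of a DeepConvLSTM-style model with a configurable LSTM depth.
import torch
import torch.nn as nn

class DeepConvLSTM(nn.Module):
    def __init__(self, n_channels, n_classes, n_lstm_layers=1, n_hidden=128):
        super().__init__()
        self.conv = nn.Sequential(                 # four conv blocks over the time axis
            nn.Conv1d(n_channels, 64, 5), nn.ReLU(),
            nn.Conv1d(64, 64, 5), nn.ReLU(),
            nn.Conv1d(64, 64, 5), nn.ReLU(),
            nn.Conv1d(64, 64, 5), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, n_hidden, num_layers=n_lstm_layers, batch_first=True)
        self.classifier = nn.Linear(n_hidden, n_classes)

    def forward(self, x):                          # x: (batch, window, channels)
        x = self.conv(x.transpose(1, 2))           # -> (batch, 64, time)
        x, _ = self.lstm(x.transpose(1, 2))        # -> (batch, time, hidden)
        return self.classifier(x[:, -1])           # classify the last time step

model = DeepConvLSTM(n_channels=9, n_classes=6, n_lstm_layers=1)
print(model(torch.randn(8, 24, 9)).shape)          # -> torch.Size([8, 6])
```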

Breathing In-Depth: A Parametrization Study on RGB-D Respiration Extraction Methods, Kempfle, Jochen and Van Laerhoven, Kristof, In Frontiers in Computer Science, vol.3, p.109, 2021. [abstractAs depth cameras have gotten smaller, more affordable, and more precise, they have also emerged as a promising sensor in ubiquitous systems, particularly for detecting objects, scenes, and persons. This article sets out to systematically evaluate how suitable depth data can be for picking up users’ respiration, from small distance changes across the torso over time. We contribute a large public dataset of depth data over time from 19 persons taken in a large variety of circumstances. On this data, we evaluate and compare different state-of-the-art methods and show that their individual performance significantly depends on a range of conditions and parameters. We investigate the influence of the observed torso region (e.g., the chest), the user posture and activity, the distance to the depth camera, the respiratory rate, the gender, and user specific peculiarities. Best results hereby are obtained from the chest whereas the abdomen is least suited for detecting the user’s breathing. In terms of accuracy and signal quality, the largest differences are observed on different user postures and activities. All methods can maintain a mean accuracy of above 92% when users are sitting, but half of the observed methods only achieve a mean accuracy of 51% while standing. When users are standing and additionally move their arms in front of their upper body, mean accuracy values between the worst and best performing methods range from 21 to 87%. Increasing the distance to the depth camera furthermore results in lower signal quality and decreased accuracy on all methods. Optimal results can be obtained at distances of 1–2 m. Different users have been found to deliver varying qualities of breathing signals. Causes range from clothing, over long hair, to movement. Other parameters have shown to play a minor role in the detection of users’ breathing.][pdf][scholar][bibtex]

Optimal Preprocessing of Raw Signals from Reflective Mode Photoplethysmography in Wearable Devices, Wolling, Florian and Wasala, Sudam Maduranga and Van Laerhoven, Kristof, In 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), p.1157--1163, 2021. [abstractThe optical measurement principle photoplethysmography has emerged in today’s wearable devices as the standard to monitor the wearer’s heart rate in everyday life. This cost-effective and easy-to-integrate technique has transformed from the original transmission mode pulse oximetry for clinical settings to the reflective mode of modern ambulatory, wrist-worn devices. Numerous proposed algorithms aim at the efficient heart rate measurement and accurate detection of the consecutive pulses for the derivation of secondary features from the heart rate variability. Most, however, have been evaluated either on their own, closed recordings or on public datasets that often stem from clinical pulse oximeters in transmission instead of wearables’ reflective mode. Signals tend furthermore to be preprocessed with filters, which are rarely documented and unintentionally fitted to the available and applied signals. We investigate the influence of preprocessing on the peak positions and present the benchmark of two cutting-edge pulse detection algorithms on actual raw measurements from reflective mode photoplethysmography. Based on 21806 pulse labels, our evaluation shows that the most suitable but still universal filter passband is located at 0.5 to 15.0 Hz since it preserves the required harmonics to shape the peak positions.][pdf][scholar][bibtex]
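
The recommended passband can be illustrated with a short, hedged sketch (filter order, sampling rate, and the synthetic signal are assumptions of this example, not part of the paper): a zero-phase Butterworth band-pass at 0.5 to 15.0 Hz applied to a raw reflective-mode PPG trace.

```python
# Hedged sketch: band-pass preprocessing of a raw PPG signal at 0.5-15.0 Hz.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_ppg(raw, fs, low_hz=0.5, high_hz=15.0, order=4):
    """Zero-phase band-pass that keeps the pulse wave and its harmonics."""
    b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, raw)

# hypothetical raw PPG: 1.2 Hz pulse + baseline wander + high-frequency noise
fs = 100.0
t = np.arange(0, 30, 1 / fs)
raw = (np.sin(2 * np.pi * 1.2 * t)
       + 2.0 * np.sin(2 * np.pi * 0.05 * t)
       + 0.1 * np.random.randn(len(t)))
filtered = bandpass_ppg(raw, fs)
print(filtered.shape)
```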

On-site Online Condition Monitoring of Medium-Voltage Switchgear Units, Christina Nicolaou and Ahmad Mansour and Kristof Van Laerhoven, In Proceedings of the 11th International Conference on the Internet of Things, 2021. [abstractOur electricity networks highly rely on switchgear to control and safeguard electrical power infrastructure. It is therefore not surprising that distributed monitoring of switchgear through local sensors, signal processing and analysis in real-time, has emerged as a promising research field. Of particular interest are the non-invasive detection of switching operations, their differentiation and ageing, which can be monitored by tracking acoustic emissions generated during a switching operation using small microelectromechanical system (MEMS) based sensors. This paper presents a novel and computationally efficient method that allows on-site feature selection and online classification of switchgear actions. Process- and design-specific features can be learned locally on the sensor system without the need of prior offline training. This avoids the high effort associated with adapting the model for other use cases offsite (e.g., analysis, feature selection, implementation). Besides, it offers the possibility to re-train the model, which may be required due to changes in the structure of the concerned application (e.g., replacement of components, ageing or changes in sensor position). Furthermore, the method is independent of the application, thus making it generic to other application areas. We evaluate our method as well as the MEMS sensors (acoustic and vibration) using datasets of switchgear measurements to differentiate between different switching operations. We furthermore show that the features selected by our method can be used to track changes in switching processes due to ageing effects.][pdf][scholar][bibtex] best paper

Tutorial on Deep Learning for Human Activity Recognition, Marius Bock and Alexander Hoelzemann and Michael Moeller and Kristof Van Laerhoven, arXiv 2110.06663, 2021. [abstractActivity recognition systems that are capable of estimating human activities from wearable inertial sensors have come a long way in the past decades. Not only have state-of-the-art methods moved away from feature engineering and have fully adopted end-to-end deep learning approaches, best practices for setting up experiments, preparing datasets, and validating activity recognition approaches have similarly evolved. This tutorial was first held at the 2021 ACM International Symposium on Wearable Computers (ISWC'21) and International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp'21). The tutorial, after a short introduction in the research field of activity recognition, provides a hands-on and interactive walk-through of the most important steps in the data pipeline for the deep learning of human activities. All presentation slides shown during the tutorial, which also contain links to all code exercises, as well as the link of the GitHub page of the tutorial can be found on: https://mariusbock.github.io/dl-for-har/][pdf][scholar]

On-site Multi-Class Feature Selection for Online Classification of Switchgear Actuations in the Distribution Grid, Christina Nicolaou and Ahmad Mansour and Kristof Van Laerhoven, In Mikrosystemtechnik Kongress 2021 (MST 2021), 2021. [abstractThe electrical grid is highly dependent on switchgear to maintain a safe and reliable power transmission. For this reason, the interest in on-site, non-invasive monitoring solutions including the detection of switch operations, their differentiation and ageing has significantly increased in the last years. Thereby, the research field of tracking acoustic emissions generated during the switching using low-cost micro-electro-mechanical system (MEMS) based sensors is emerging. This paper presents a computationally efficient method for selecting process- and design-specific features on-site (on a sensor system or gateway) to eliminate the need of prior offline training. This ensures generalized usability for different switch types and sensor positions without high re-training effort. The selected features are further used for online multi-class classification of switching processes. The proposed self-learning method, as well as the use of the MEMS sensors (acoustic and vibration), are both evaluated for classification performance on switchgear measurements during twelve different processes, leading to a robust classification with an accuracy of over 95% on average.][pdf][scholar]

IBSync: Intra-body Synchronization of Wearable Devices Using Artificial ECG Landmarks, Florian Wolling and Cong Dat Huynh and Kristof Van Laerhoven, In Proceedings of the 2021 International Symposium on Wearable Computers, ISWC 2021, September 21-26, 2021, 2021. [abstractThe synchronization of wearable devices in distributed, multi-device systems is a persistent challenge. Particularly machine learning approaches suffer from the devices’ inaccurate clock sources and unmatched time. While the online synchronization based on radio transmission is energy-intensive, offline approaches originating in activity recognition suffer from inaccurate motion patterns. In recent years, intra-body communication emerged as a promising technique that uses the human body as a limited and hence more efficient medium. Due to the absence of commercial platforms, applications are rare and underinvestigated. To boost their development and to enable the precise synchronization, we introduce IBSync and propose to repurpose the ECG sensor in commercial wearable devices to detect artificial signals induced into the skin. The short-time Fourier transform and Pearson’s normalized cross-correlation are used to detect, precisely locate, and assign synchronization landmarks within the measurements. Based on a total of 105 min of recordings, we evaluated the concept and demonstrate its general feasibility with a promising accuracy of 0.203 ± 1.633 samples (1.587 ± 12.755 ms) in typical proximity to the transmitter.][pdf][scholar][bibtex]
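
One ingredient named in the abstract, locating a known landmark waveform by Pearson's normalized cross-correlation, can be sketched as follows; the template shape, the signal, and the injection point are synthetic placeholders rather than the paper's actual landmarks.

```python
# Hedged sketch: locate a known landmark template via normalized cross-correlation.
import numpy as np

def normalized_xcorr(signal, template):
    """Pearson correlation of the template with every same-length window."""
    n = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    scores = np.empty(len(signal) - n + 1)
    for i in range(len(scores)):
        w = signal[i:i + n]
        scores[i] = np.dot((w - w.mean()) / (w.std() + 1e-12), t) / n
    return scores

rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 4 * np.pi, 200))            # assumed artificial landmark
signal = rng.normal(0, 0.3, 2000)
signal[750:950] += template                                   # inject the landmark at sample 750
print(int(np.argmax(normalized_xcorr(signal, template))))     # recovers ~750
```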

Detecting Tonic-Clonic Seizures in Multimodal Biosignal Data from Wearables: Methodology Design and Validation, Boettcher, Sebastian and Bruno, Elisa and Manyakov, Nikolay V and Epitashvili, Nino and Claes, Kasper and Glasstetter, Martin and Thorpe, Sarah and Lees, Simon and Dümpelmann, Matthias and Van Laerhoven, Kristof and Richardson, Mark P and Schulze-Bonhage, Andreas, In JMIR Mhealth Uhealth, vol.9(11)p.e27674, 2021. [abstractBackground: Video electroencephalography recordings, routinely used in epilepsy monitoring units, are the gold standard for monitoring epileptic seizures. However, monitoring is also needed in the day-to-day lives of people with epilepsy, where video electroencephalography is not feasible. Wearables could fill this gap by providing patients with an accurate log of their seizures. Objective: Although there are already systems available that provide promising results for the detection of tonic-clonic seizures (TCSs), research in this area is often limited to detection from 1 biosignal modality or only during the night when the patient is in bed. The aim of this study is to provide evidence that supervised machine learning can detect TCSs from multimodal data in a new data set during daytime and nighttime. Methods: An extensive data set of biosignals from a multimodal watch worn by people with epilepsy was recorded during their stay in the epilepsy monitoring unit at 2 European clinical sites. From a larger data set of 243 enrolled participants, those who had data recorded during TCSs were selected, amounting to 10 participants with 21 TCSs. Accelerometry and electrodermal activity recorded by the wearable device were used for analysis, and seizure manifestation was annotated in detail by clinical experts. Ten accelerometry and 3 electrodermal activity features were calculated for sliding windows of variable size across the data. A gradient tree boosting algorithm was used for seizure detection, and the optimal parameter combination was determined in a leave-one-participant-out cross-validation on a training set of 10 seizures from 8 participants. The model was then evaluated on an out-of-sample test set of 11 seizures from the remaining 2 participants. To assess specificity, we additionally analyzed data from up to 29 participants without TCSs during the model evaluation. Results: In the leave-one-participant-out cross-validation, the model optimized for sensitivity could detect all 10 seizures with a false alarm rate of 0.46 per day in 17.3 days of data. In a test set of 11 out-of-sample TCSs, amounting to 8.3 days of data, the model could detect 10 seizures and produced no false positives. Increasing the test set to include data from 28 more participants without additional TCSs resulted in a false alarm rate of 0.19 per day in 78 days of wearable data. Conclusions: We show that a gradient tree boosting machine can robustly detect TCSs from multimodal wearable data in an original data set and that even with very limited training data, supervised machine learning can achieve a high sensitivity and low false-positive rate. This methodology may offer a promising way to approach wearable-based nonconvulsive seizure detection.][pdf][scholar][bibtex]
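
In the spirit of the pipeline described above, the following hedged sketch computes simple sliding-window features from accelerometry magnitude and electrodermal activity and fits a gradient tree boosting classifier; the feature set, window length, the scikit-learn model, and the synthetic recording are all assumptions of this example.

```python
# Hedged sketch: windowed accelerometry/EDA features fed to gradient tree boosting.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def window_features(acc_mag, eda, fs, win_s=10, step_s=5):
    win, step, rows = int(win_s * fs), int(step_s * fs), []
    for start in range(0, len(acc_mag) - win + 1, step):
        a, e = acc_mag[start:start + win], eda[start:start + win]
        rows.append([a.mean(), a.std(), np.abs(np.diff(a)).sum(),   # motion intensity
                     e.mean(), e.std(), e[-1] - e[0]])              # EDA level and drift
    return np.array(rows)

# hypothetical recording: the second half contains convulsive movement and an EDA rise
fs = 32
acc = np.concatenate([np.random.randn(fs * 300) * 0.05, np.random.randn(fs * 300) * 1.5])
eda = np.concatenate([np.ones(fs * 300) * 0.4, np.linspace(0.4, 2.0, fs * 300)])
X = window_features(acc, eda, fs)
y = (np.arange(len(X)) >= len(X) // 2).astype(int)            # 0 = normal, 1 = seizure-like
clf = GradientBoostingClassifier().fit(X, y)
print(clf.score(X, y))
```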

fastSW: Efficient Piecewise Linear Approximation of Quaternion-Based Orientation Sensor Signals for Motion Capturing with Wearable IMUs, Grützmacher, Florian and Kempfle, Jochen and Van Laerhoven, Kristof and Haubelt, Christian, In Sensors, vol.21(15)2021. [abstractIn the past decade, inertial measurement sensors have found their way into many wearable devices where they are used in a broad range of applications, including fitness tracking, step counting, navigation, activity recognition, or motion capturing. One of their key features that is widely used in motion capturing applications is their capability of estimating the orientation of the device and, thus, the orientation of the limb it is attached to. However, tracking a human's motion at reasonable sampling rates comes with the drawback that a substantial amount of data needs to be transmitted between devices or to an end point where all device data is fused into the overall body pose. The communication typically happens wirelessly, which severely drains battery capacity and limits the use time. In this paper, we introduce fastSW, a novel piecewise linear approximation technique that efficiently reduces the amount of data required to be transmitted between devices. It takes advantage of the fact that, during motion, not all limbs are being moved at the same time or at the same speed, and only those devices need to transmit data that actually are being moved or that exceed a certain approximation error threshold. Our technique is efficient in computation time and memory utilization on embedded platforms, with a maximum of 210 instructions on an ARM Cortex-M4 microcontroller. Furthermore, in contrast to similar techniques, our algorithm does not affect the device orientation estimates to deviate from a unit quaternion. In our experiments on a publicly available dataset, our technique is able to compress the data to 10% of its original size, while achieving an average angular deviation of approximately 2° and a maximum angular deviation below 9°.][pdf][scholar][bibtex]
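
The underlying idea, transmitting data only from devices whose orientation changes beyond an error threshold so that idle limbs send little data, can be illustrated with a simplified sketch. Note that this is not the fastSW algorithm itself, merely a threshold-based sample-dropping illustration with assumed parameters.

```python
# Simplified illustration (not fastSW): transmit a quaternion sample only when
# its angular deviation from the last transmitted orientation exceeds a threshold.
import numpy as np

def angular_deviation(q1, q2):
    """Angle in radians between two unit quaternions."""
    return 2.0 * np.arccos(np.clip(abs(np.dot(q1, q2)), -1.0, 1.0))

def threshold_stream(quaternions, max_error_rad=np.radians(2.0)):
    kept = [0]                                    # always transmit the first sample
    for i, q in enumerate(quaternions[1:], start=1):
        if angular_deviation(quaternions[kept[-1]], q) > max_error_rad:
            kept.append(i)
    return kept

# hypothetical stream: a slow rotation about the z-axis, one sample per frame
angles = np.linspace(0, np.pi / 8, 500)
quats = np.stack([np.cos(angles / 2), np.zeros_like(angles),
                  np.zeros_like(angles), np.sin(angles / 2)], axis=1)
sent = threshold_stream(quats)
print(f"transmitted {len(sent)} of {len(quats)} samples")
```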

Wearable devices for seizure detection: Practical experiences and recommendations from the Wearables for Epilepsy And Research (WEAR) International Study Group, Elisa Bruno and Sebastian Böttcher and Pedro F. Viana and Marta Amengual-Gual and Boney Joseph and Nino Epitashvili and Matthias Dümpelmann and Martin Glasstetter and Andrea Biondi and Kristof Van Laerhoven and Tobias Loddenkemper and Mark P. Richardson and Andreas Schulze-Bonhage and Benjamin H. Brinkmann, In Epilepsia, p.1--15, 2021. [abstractThe Wearables for Epilepsy And Research (WEAR) International Study Group identified a set of methodology standards to guide research on wearable devices for seizure detection. We formed an international consortium of experts from clinical research, engineering, computer science, and data analytics at the beginning of 2020. The study protocols and practical experience acquired during the development of wearable research studies were discussed and analyzed during bi-weekly virtual meetings to highlight commonalities, strengths, and weaknesses, and to formulate recommendations. Seven major essential components of the experimental design were identified, and recommendations were formulated about: (1) description of study aims, (2) policies and agreements, (3) study population, (4) data collection and technical infrastructure, (5) devices, (6) reporting results, and (7) data sharing. Introducing a framework of methodology standards promotes optimal, accurate, and consistent data collection. It also guarantees that studies are generalizable and comparable, and that results can be replicated, validated, and shared.][pdf][scholar][bibtex]

Control Rooms in Safety Critical Contexts: Design, Engineering and Evaluation Issues, Tilo Mentler and Philippe Palanque and Susanne Boll and Chris Johnson and Kristof Van Laerhoven, In C. Ardito et al. (Eds.): INTERACT 2021, LNCS 12936, p.1--6, 2021. [abstractHuman-Computer Interaction (HCI) research has been focussing on the design of new interaction techniques and the understanding of people and the way they interact with computing devices and new technologies. The ways in which the work is performed with these interactive technologies have arguably been less of a focus. This workshop aims at addressing this specific aspect of Human-Computer Interaction in the control rooms domain. Control rooms are crucial elements of safety-critical infrastructures (e.g., crisis management, emergency medical services, fire services, power supply, or traffic management). They have been studied in terms of Human-Computer Interaction with respect to routine and emergency operations, human-machine task allocation, interaction design and evaluation approaches for more than 30 years. However, they are dynamic and evolving environments with, for instance, the gradual introduction of higher levels of automation/autonomy. While state-of-the-art control rooms are still characterized by stationary workstations with several smaller screens and large wall-mounted displays, introducing mobile and wearable devices as well as IoT solutions could enable more flexible and cooperative ways of working. The workshop aims at understanding how recent technologies in HCI could change the way control rooms are designed, engineered and operated. This workshop is organized by the IFIP WG 13.5 on Human Error, Resilience, Reliability and Safety in System Development.][pdf][scholar][bibtex]

Towards Control Rooms as Human-Centered Pervasive Computing Environments, Nadine Flegel and Jonas Poehler and Kristof Van Laerhoven and Tilo Mentler, In INTERACT 2021 Workshop on Control Rooms in Safety Critical Contexts: Design, Engineering and Evaluation Issues, 2021. [abstractState-of-the-art control rooms are equipped with a variety of input and output devices in terms of single-user workstations, shared public screens, and multimodal alarm systems. However, operators are bound to and sitting at their respective workstations for the most part of their shifts. Therefore, cooperation efforts are hampered, and physical activity is limited for several hours. Incorporating mobile devices, wearables and sensor technologies could improve on the current mode of operation but must be considered a paradigm shift from control rooms as a collection of technically networked but stationary workstations to control rooms as pervasive computing environments being aware of people and processes. However, based on the reviewed literature, systematic approaches to this paradigm shift taking usability and user experience into account are rare. In this work, we describe a root concept for control rooms as human-centered pervasive computing environments and introduce a framework for developing a wearable assistant as one of the central and novel components. Furthermore, we describe design challenges from a socio-technical perspective based on 9 expert interviews important for further research on pervasive computing environments in safety-critical domains.][pdf][scholar]

LstSim-Extended: Towards Monitoring Interaction and beyond in Web-Based Control Room Simulations, Jonas Poehler and Nadine Flegel and Tilo Mentler and Kristof Van Laerhoven, In INTERACT 2021 Workshop on Control Rooms in Safety Critical Contexts: Design, Engineering and Evaluation Issues, 2021. [abstractControl room operators rely on a range of technologies to communicate crucial information and dependably coordinate a disparate collection of tasks and procedures. Tools that are capable to design, to implement, and to evaluate interactive systems that can assist the tasks of control room operators in these environments therefore play an important role. This paper offers a framework that facilitates the early research steps into evaluating work flows, interfaces, and wearable sensors in the context of an emergency dispatch center. It entails a primarily web-based, quick-to-deploy, and scalable method that specifically targets preliminary studies in which large-scale and situated deployments are not feasible. By using open-source and affordable wrist-worn sensors, it furthermore enables investigating any relationships between interaction design in control rooms and operators’ physiological data. Our evaluation on a preliminary study with 5 participants shows that basic scenarios are able to induce differences which can be measured by reaction times in the interactions as well as in the data from the wrist-worn sensor.][pdf][scholar]

25 Years of ISWC, Tom Martin and Thad Starner and Dan Siewiorek and Kai Kunze and Kristof Van Laerhoven, In IEEE Pervasive Computing, vol.20(3)p.72--78, 2021. [abstractMuch has changed in the landscape of wearables research since the first International Symposium on Wearable Computers (ISWC) was organized in 1997. The authors, many of whom have been active in this community since the beginning, reflect now 25 years later on the role of the conference, emerging research methods, the devices, and ideas that have stood the test of time—such as fitness/health sensors or augmented reality devices—as well as the ones that can be expected still to come, like everyday head-worn displays.][pdf][scholar][bibtex]

ISWC 2020, Thomas Plötz and Jennifer Healey and Kristof Van Laerhoven, In IEEE Pervasive Computing, vol.20(1)p.45--49, 2021. [abstractThe International Symposium on Wearable Computers (ISWC) is the flagship conference on wearable computing focusing on design, algorithmic foundations, and deployments. It is the ideal venue to present and learn about the latest research in the field. The authors share their observations from the most recent gathering, held online in September 2020.][pdf][scholar][bibtex]

The Three A’s of Wearable and Ubiquitous Computing: Activity, Affect, and Attention, Van Laerhoven, Kristof, In Frontiers in Computer Science, Section Mobile and Ubiquitous Computing, vol.3, p.57, 2021. [abstractA long lasting challenge in wearable and ubiquitous computing has been to bridge the interaction gap between users and their computers. We can easily perceive and interpret contextual information, such as picking up whether someone is bored, stressed, busy, or fascinated in face-to-face interactions, which is still largely unsolved for computers in everyday life. The first message is that much of the research of the past decades aiming to alleviate this context gap between computers and their users, has clustered into three fields. They aim to model human users in different observable categories: Activity, Affect, and Attention. A second point is that the research fields aiming for machine recognition of these three A’s, thus far have had only a limited amount of overlap, but are bound to converge in terms of methodology and from a systems perspective. A final point then concludes with the following call to action: A consequence of such a possible merger between the three A’s is the need for a more consolidated way of performing solid, reproducible research studies. These fields can learn from each other’s best practices, their interaction can both lead to the creation of overarching benchmarks, as well as establish common data pipelines.][pdf][scholar][bibtex]

Intelligent, sensor-based condition monitoring of transformer stations in the distribution network, Christina Nicolaou and Ahmad Mansour and Philipp Jung and Max Schellenberg and Andre Wurde and Alexander Walukiewicz and Jannis Nikolas Kahlen and Marius Shekow and Kristof Van Laerhoven, In 2021 Smart Systems Integration (SSI), 2021. [abstractToday’s maintenance and renewal planning in transformer stations of energy distribution networks is mainly based on expert knowledge, experience gained from historical data as well as the knowledge gathered from regular on-site inspections. This approach is already reaching its limits due to insufficient databases and almost no information about the stations’ condition being gathered between inspection intervals. A condition-based strategy that requires more maintenance for equipment with a high probability of failure is needed. Great potential is promised by intelligent sensor-based diagnostics, where objective comparability can be achieved by condition monitoring of the station fleet. Cost-effective micro-electromechanical (MEMS)-based sensor systems promise to provide a suitable solution for network operators and enable a widespread use. In our paper, we present a MEMS-based sensor system that can be used to gain information about network transparency, station safety as well as maintenance and renewal planning. Moreover, we propose an intelligent measurement scheme which adaptively selects relevant data and avoids unneeded redundancy (Smart Data instead of Big Data).][pdf][scholar][bibtex]

On-site Online Feature Selection for Classification of Switchgear Actuations, Christina Nicolaou and Ahmad Mansour and Kristof Van Laerhoven, In CoRR, vol.abs/2105.13639, arXiv 2105.13639, 2021. [abstractAs connected sensors continue to evolve, interest in low-voltage monitoring solutions is increasing. This also applies in the area of switchgear monitoring, where the detection of switch actions, their differentiation and aging are of fundamental interest. In particular, the universal applicability for various types of construction plays a major role. Methods in which design-specific features are learned in an offline training are therefore less suitable for assessing the condition of switchgears. A new computational efficient method for intelligent online feature selection is presented, which can be used to train a model for the addressed use cases on-site. Process- and design-specific features can be learned locally (e.g. on a sensor system) without the need of prior offline training. The proposed method is evaluated on four datasets of switchgear measurements, which were recorded using microelectromechanical system (MEMS) based sensors (acoustic and vibration). Furthermore, we show that the features selected by our method can be used to track changes in switching processes due to aging effects.][pdf][scholar]

Toward Fault Detection in Industrial Welding Processes with Deep Learning and Data Augmentation, Jibinraj Antony and Dr. Florian Schlather and Georgij Safronov and Markus Schmitz and Kristof Van Laerhoven, In CoRR, vol.abs/2106.10160, arXiv 2106.10160, 2021. [abstractWith the rise of deep learning models in the field of computer vision, new possibilities for their application in industrial processes promise great benefits. Nevertheless, the actual fit of machine learning for highly standardised industrial processes is still under debate. This paper addresses the challenges of the industrial realization of AI tools, considering the use case of Laser Beam Welding quality control as an example. We use object detection algorithms from the TensorFlow object detection API and adapt them to our use case using transfer learning. The baseline models we develop are used as benchmarks and evaluated and compared to models that undergo dataset scaling and hyperparameter tuning. We find that moderate scaling of the dataset via image augmentation leads to improvements in intersection over union (IoU) and recall, whereas high levels of augmentation and scaling may lead to deterioration of results. Finally, we put our results into the perspective of the underlying use case and evaluate their fit.][pdf][scholar]

Keynote: Long-term Affect and Activity Assessment on the Wrist, Kristof Van Laerhoven, In WristSense 2021: 7th Workshop on Sensing Systems and Applications using Wrist Worn Smart Devices, p.604, 2021. [abstractAs inertial sensors have matured and sensors that pick up the wearer's vital signs are steadily improving, I will focus in this talk on the underlying information that can be deduced from the vast array of sensor signals that is currently embedded in the latest generations of smartwatches. Especially the fact that smartwatches allow the long-term and continuous monitoring of their wearer’s state opens up a plethora of medical applications. Through examples from recent studies, where wristwatches can spot epileptic seizures, stress, smoking episodes, sleep, and activity, I will try to illustrate why more future users will not want to take their watches off.][pdf][scholar][bibtex]

PulSync: The Heart Rate Variability as a Unique Fingerprint for the Alignment of Sensor Data Across Multiple Wearable Devices, Florian Wolling and Kristof Van Laerhoven and Pekka Siirtola and Juha Roening, In PerHealth 2021: 5th IEEE PerCom Workshop on Pervasive Health Technologies, p.188--193, 2021. [abstractMost off-the-shelf wearable devices do not provide reliable synchronization interfaces, causing multi-device sensing and machine learning approaches, e.g. for activity recognition, still to suffer from inaccurate clock sources and unmatched time. Instead of using active online synchronization techniques, such as those based on bidirectional wireless communication, we propose in this work to use the human heartbeat as a reference signal that is continuously and ubiquitously available throughout the entire body surface. We introduce PulSync, a novel approach that enables the alignment of sensor data across multiple devices utilizing the unique fingerprint-like character of the heart rate variability interval function. In an evaluation on a dataset from 25 subjects, we demonstrate the reliable alignment of independent ECG recordings with a mean accuracy of −0.71 ± 3.44 samples, respectively −2.86 ± 11.43 ms at 250 Hz sampling rate.][pdf][scholar][bibtex]
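
A hedged sketch of the underlying idea (not the published PulSync code): the sequence of inter-beat intervals acts like a fingerprint, so the beat offset between two recordings can be recovered by cross-correlating their interval sequences, and the sample offset then follows from the beat timestamps. The RR intervals and the 250 Hz rate below are synthetic assumptions.

```python
# Hedged sketch: align two recordings via their inter-beat-interval "fingerprint".
import numpy as np

def best_beat_lag(rr_a, rr_b):
    """Beat-index lag of rr_b within rr_a with maximal Pearson correlation."""
    n = len(rr_b)
    scores = [np.corrcoef(rr_a[lag:lag + n], rr_b)[0, 1]
              for lag in range(len(rr_a) - n + 1)]
    return int(np.argmax(scores))

fs = 250                                            # assumed ECG sampling rate in Hz
rng = np.random.default_rng(2)
rr = rng.normal(0.8, 0.05, 400)                     # one subject's RR intervals in seconds
beats_a = np.cumsum(rr)                             # beat times recorded by device A
offset_s = beats_a[57]                              # device B starts recording at beat 57
beats_b = beats_a[58:300] - offset_s                # device B's beat times on its own clock

lag = best_beat_lag(np.diff(beats_a), np.diff(beats_b))
recovered = round((beats_a[lag] - beats_b[0]) * fs)
print(lag, recovered, round(offset_s * fs))         # recovered vs. true offset in samples
```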

Simulation and evaluation of imaging for electrical impedance tomography using artificial intelligence methods, Christian Gibas and Jonas Pöhler and Rainer Brück and Kristof Van Laerhoven, In Medical Imaging 2021: Biomedical Applications in Molecular, Structural, and Functional Imaging, vol.11600, p.438 -- 453, 2021. [abstractCurrent medical imaging uses MRI or CT images to diagnose tissue injuries. In addition to this classic procedure, there are also alternative technologies that have advantages over MRI or CT. These include electrical impedance tomography (EIT). With the help of EIT it is possible to obtain an initial screening of the body quickly and without a lot of hardware. Classical software-based methods of imaging reconstruction use a linear back projection or iterative approaches, such as Gauss-Newton algorithm. This paper introduces innovative approaches of artificial intelligence (AI) for imaging. For this purpose, extensive AI-based simulations with a Generative Adversarial Network (GAN) are performed and the approaches are transferred to a gelatine phantom and to the human body within a small study.][pdf][scholar][bibtex]

Quaterni-On: Calibration-free Matching of Wearable IMU Data to Joint Estimates of Ambient Cameras, Jochen Kempfle and Kristof Van Laerhoven, In WristSense 2021: 7th Workshop on Sensing Systems and Applications using Wrist Worn Smart Devices, p.611--616, 2021. [abstractInertial measurement units, such as those embedded in many smartwatches, make it possible to track the orientation of the body joint to which the wearable is attached. Tracking the user's entire body would thus require many wearables, which tends to be difficult to achieve in daily life. This paper presents a calibration-free method to match the data stream from a body-worn Inertial Measurement Unit to any body joint estimates that are recognized and located from a camera in the environment. This allows networked cameras to associate detected body joints from nearby humans with an orientation stream from a nearby wearable. This wearable’s orientation stream can be transformed and complemented with its user’s full body pose estimates. Results show that in multi-user environments where users try to perform actions synchronously, out of 42 joint candidates, our calibration-free method obtains a perfect match within 83 to 155 samples (2.8 to 5.2 seconds), depending on the scenario. This would facilitate a seamless combination of wearable- and vision-based tracking for robust, full-body user tracking.][pdf][scholar][bibtex] best paper

Data Augmentation Strategies for Human Activity Data Using Generative Adversarial Neural Networks, Alexander Hoelzemann and Nimish Sorathiya and Kristof Van Laerhoven, In CoMoRea 2021: 17th Workshop on Context and Activity Modeling and Recognition, p.8--13, 2021. [abstractPrevious studies have shown that available benchmark datasets from the field of Human Activity Recognition are of limited use for Deep Learning applications. This can be traced back to issues in the quality, the scope, as well as in the variability of the datasets. These limitations often lead to overfitting of networks and thus to results that are only conditionally generalizable. One way to counteract this problem is to extend the data by using data augmentation techniques. This paper presents an algorithm and compares two augmentation strategies: (1) userwise augmentation and (2) fold-wise augmentation, to extend the size of a dataset (shown here on the PAMAP2 dataset) by an arbitrary number of synthetic samples. These synthesized data resemble the user- and activity-specific characteristics and fit seamlessly into the dataset. They are created by a recurrent Generative Adversarial Network, with both the generator and discriminator modeled by a set of LSTM cells to produce the synthetic time-series data. In our evaluation, we trained four DeepConvLSTM models with supervised learning, three of them with a LOSO cross-validation: one baseline model and two with additional data from the different augmentation strategies, as well as one model without cross-validation that monitors the synthesized data quality. The compared augmentation strategies demonstrate the impact as well as the generalized nature of the augmented data. By increasing the size of the dataset by a factor of 5, we improved the F1-Score by 11.0% with strategy (1) and 5.1% with strategy (2).][pdf][scholar][bibtex]

Activity tracker-based intervention to increase physical activity in patients with type 2 diabetes and healthy individuals: study protocol for a randomized controlled trial, Mareike Mähs and Jana Sabrina Pithan and Isabell Bergmann and Lars Gabrys and Jonathan Graf and Alexander Hoelzemann and Kristof Van Laerhoven and Silke Otto-Hagemann and Maria Loredana Popescu and Lisa Schwermann and Benjamin Wenz and Iris Pahmeier and Andrea Teti, 2022. [abstractBackground One relevant strategy to prevent the onset and progression of type 2 diabetes mellitus (T2DM) focuses on increasing physical activity. The use of activity trackers by patients could enable objective measurement of their regular physical activity in daily life and promote physical activity through the use of a tracker-based intervention. This trial aims to answer three research questions: (1) Is the use of activity trackers suitable for longitudinal assessment of physical activity in everyday life? (2) Does the use of a tracker-based intervention lead to sustainable improvements in the physical activity of healthy individuals and in people with T2DM? (3) Does the accompanying digital motivational intervention lead to sustainable improvements in physical activity for participants using the tracker-based device? Methods The planned study is a randomized controlled trial focused on 1,642 participants with and without T2DM for 9 months with regard to their physical activity behavior. Subjects allocated to an intervention group will wear an activity tracker. Half of the subjects in the intervention group will also receive an additional digital motivational intervention. Subjects allocated to the control group will not receive any intervention. The primary outcome is the amount of moderate and vigorous physical activity in minutes and the number of steps per week measured continuously with the activity tracker and assessed by questionnaires at four time points. Secondary endpoints are medical parameters measured at the same four time points. The collected data will be analyzed using inferential statistics and explorative data-mining techniques. Discussion The trial uses an interdisciplinary approach with a team including sports psychologists, sports scientists, health scientists, health care professionals, physicians, and computer scientists. It also involves the processing and analysis of large amounts of data collected with activity trackers. These factors represent particular strengths as well as challenges in the study. Trial Registration The trial is registered at the World Health Organization International Clinical Trials Registry Platform via the German Clinical Studies Trial Register (DRKS), DRKS00027064. Registered on 11 November 2021.][pdf][scholar][bibtex]

2020

Smartphone-Based Monitoring of Parkinson Disease: Quasi-Experimental Study to Quantify Hand Tremor Severity and Medication Effectiveness, Elina Kuosmanen, Florian Wolling, Julio Vega, Valerii Kan, Yuuki Nishiyama, Simon Harper, Kristof Van Laerhoven, Simo Hosio, and Denzil Ferreira, In JMIR Mhealth Uhealth, vol.8(11)p.e21543, 2020. [abstractBackground: Hand tremor typically has a negative impact on a person's ability to complete many common daily activities. Previous research has investigated how to quantify hand tremor with smartphones and wearable sensors, mainly under controlled data collection conditions. Solutions for daily real-life settings remain largely underexplored. Objective: Our objective was to monitor and assess hand tremor severity in patients with Parkinson disease (PD), and to better understand the effects of PD medications in a naturalistic environment. Methods: Using the Welch method, we generated periodograms of accelerometer data and computed signal features to compare patients with varying degrees of PD symptoms. Results: We introduced and empirically evaluated the tremor intensity parameter (TIP), an accelerometer-based metric to quantify hand tremor severity in PD using smartphones. There was a statistically significant correlation between the TIP and self-assessed Unified Parkinson Disease Rating Scale (UPDRS) II tremor scores (Kendall rank correlation test: z=30.521, P<.001, $\tau$=0.5367379; n=11). An analysis of the ``before'' and ``after'' medication intake conditions identified a significant difference in accelerometer signal characteristics among participants with different levels of rigidity and bradykinesia (Wilcoxon rank sum test, P<.05). Conclusions: Our work demonstrates the potential use of smartphone inertial sensors as a systematic symptom severity assessment mechanism to monitor PD symptoms and to assess medication effectiveness remotely. Our smartphone-based monitoring app may also be relevant for other conditions where hand tremor is a prevalent symptom.][pdf][scholar][bibtex]
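
The tremor intensity parameter (TIP) itself is defined in the paper; as a hedged illustration of the Welch-periodogram step mentioned in the abstract, the Python sketch below computes the Welch power spectral density of an accelerometer signal and sums the power in a 4-7 Hz band. The band limits, window length, and toy data are assumptions of this sketch, not the paper's TIP definition.

    import numpy as np
    from scipy.signal import welch

    def tremor_band_power(acc_signal, fs, band=(4.0, 7.0)):
        """Welch periodogram of an accelerometer signal and the power summed
        inside an assumed tremor frequency band (a simple intensity proxy)."""
        freqs, psd = welch(acc_signal, fs=fs, nperseg=min(256, len(acc_signal)))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.trapz(psd[mask], freqs[mask])

    # toy example: 10 s of 50 Hz data with a weak 5 Hz "tremor" component
    fs = 50.0
    t = np.arange(0, 10, 1 / fs)
    acc = 0.05 * np.sin(2 * np.pi * 5 * t) + 0.01 * np.random.randn(t.size)
    print(tremor_band_power(acc, fs))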

Digging Deeper: Towards a better Understanding of Transfer Learning for Human Activity Recognition, Alexander Hoelzemann and Van Laerhoven, Kristof, In Proceedings of the 2020 International Symposium on Wearable Computers, ISWC 2020, September 12-16, 2020, 2020. [abstractTransfer Learning is becoming increasingly important to the Human Activity Recognition community, as it enables algorithms to reuse what has already been learned from models. It promises shortened training times and increased classification results for new datasets and activity classes. However, the question of what exactly is transferred is not dealt with in detail in many of the recent publications, and it is furthermore often difficult to reproduce the presented results. Therefore we would like to contribute with this paper to the understanding of transfer learning for sensor-based human activity recognition. In our experiment, we use weight transfer to transfer models between two datasets, as well as between sensors from the same dataset. PAMAP2 and Skoda Mini Checkpoint are used as source and target datasets. The utilized network architecture is based on a DeepConvLSTM. The result of our investigation shows that transfer learning has to be considered in a very differentiated way, since the desired positive effects of applying the method depend very much on the data and also on the architecture used.][pdf][scholar][bibtex]
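
The paper transfers weights between DeepConvLSTM models; the PyTorch sketch below only shows a generic form of such weight transfer, copying parameters from a source state dict into a target model wherever name and shape match. The placeholder models and layer shapes are assumptions of this sketch, not the architecture used in the paper.

    import torch
    import torch.nn as nn

    def transfer_matching_weights(source_state, target_model):
        """Copy parameters from a source state_dict into target_model wherever
        name and shape match; everything else keeps its fresh initialization."""
        target_state = target_model.state_dict()
        copied = {k: v for k, v in source_state.items()
                  if k in target_state and v.shape == target_state[k].shape}
        target_state.update(copied)
        target_model.load_state_dict(target_state)
        return list(copied)

    # toy models (never run, only their state dicts are used for the transfer)
    source = nn.Sequential(nn.Conv1d(3, 16, 5), nn.ReLU(), nn.Linear(16, 12))
    target = nn.Sequential(nn.Conv1d(3, 16, 5), nn.ReLU(), nn.Linear(16, 8))  # new class count
    print(transfer_matching_weights(source.state_dict(), target))  # only the conv layer transfers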

The Quest for Raw Signals: A Quality Review of Publicly Available Photoplethysmography Datasets, Wolling, Florian and Van Laerhoven, Kristof, In DATA'20: Proceedings of the 3rd Workshop on Data Acquisition To Analysis, DATA 2020, Virtual Event, Japan, November 2020, p.6, 2020. [abstractPhotoplethysmography is an optical measurement principle which is present in most modern wearable devices such as fitness trackers and smartwatches. As the analysis of physiological signals requires reliable but energy-efficient algorithms, suitable datasets are essential for their development, evaluation, and benchmark. A broad variety of clinical datasets is available with recordings from medical pulse oximeters which traditionally apply transmission mode photoplethysmography at the fingertip or earlobe. However, only a few publicly available datasets utilize recent reflective mode sensors which are typically worn at the wrist and whose signals show different characteristics. Moreover, the recordings are often advertised as raw, but then turn out to be preprocessed and filtered while the applied parameters are not stated. In this way, the heart rate and its variability can be extracted, but interesting secondary information from the non-stationary signal is often lost. Consequently, testing novel signal processing approaches for wearable devices usually implies gathering one's own data or using inappropriate data. In this paper, we present a multi-varied method to analyze the suitability and applicability of presumably raw photoplethysmography signals. We present an analytical tool which applies 7 decision metrics to characterize 10 publicly available datasets with a focus on less or ideally unfiltered, raw signals. Besides the review, we provide a guideline for future datasets to be suited to and applicable in digital signal processing, supporting the development and evaluation of algorithms for resource-limited wearable devices.][pdf][scholar][bibtex]

A Low-Cost Prototyping Framework for Human-Robot Desk Interaction, Odoemelem, Henry and Van Laerhoven, Kristof, In Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers (UbiComp/ISWC ’20 Adjunct), 2020. [abstractMany current human-robot interactive systems tend to use accurate and fast -- but also costly -- actuators and tracking systems to establish working prototypes that are safe to use and deploy for user studies. This paper presents an embedded framework to build a desktop space for human-robot interaction, using an open-source robot arm, as well as two RGB cameras connected to a Raspberry Pi-based controller that allow a fast yet low-cost object tracking and manipulation in 3D. We show in our evaluations that this facilitates prototyping a number of systems in which user and robot arm can commonly interact with physical objects.][pdf][scholar][bibtex]

ISWC'20: 2020 ACM International Symposium on Wearable Computers, Virtual Event, Mexico, September 12-17, 2020, Kristof Van Laerhoven and Monica Tentori and Nadir Weibel and Jennifer Healey and Thomas Ploetz, 2020. [pdf][scholar][bibtex]

UbiComp/ISWC '20: 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and 2020 ACM International Symposium on Wearable Computers, Virtual Event, Mexico, September 12-17, 2020, Monica Tentori and Nadir Weibel and Kristof Van Laerhoven and Gregory D. Abowd and Flora D. Salim, 2020. [pdf][scholar][bibtex]

Towards Breathing as a Sensing Modality in Depth-Based Activity Recognition, Jochen Kempfle and Kristof Van Laerhoven, In Sensors, vol.20(14)2020. [abstractDepth imaging has, through recent technological advances, become ubiquitous as products become smaller, more affordable, and more precise. Depth cameras have also emerged as a promising modality for activity recognition as they allow detection of users’ body joints and postures. Increased resolutions have now enabled a novel use of depth cameras that facilitate more fine-grained activity descriptors: The remote detection of a person’s breathing by picking up the small distance changes from the user’s chest over time. We propose in this work a novel method to model chest elevation to robustly monitor a user’s respiration, whenever users are sitting or standing, and facing the camera. The method is robust to users occasionally blocking their torso region and is able to provide meaningful breathing features to allow classification in activity recognition tasks. We illustrate that with this method, with specific activities such as paced-breathing meditating, performing breathing exercises, or post-exercise recovery, our model delivers a breathing accuracy that matches that of a commercial respiration chest monitor belt. Results show that the breathing rate can be detected with our method at an accuracy of 92 to 97% from a distance of two metres, outperforming state-of-the-art depth imaging methods especially for non-sedentary persons, and allowing separation of activities in respiration-derived feature space.][pdf][scholar][bibtex]
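
The chest-elevation model of the paper is not reproduced here; the Python sketch below only illustrates the basic principle of deriving a breathing rate from depth data, by averaging the depth inside a chest region per frame and picking the dominant frequency in a plausible respiration band. The region-of-interest selection, the band limits, and the toy frames are assumptions of this sketch.

    import numpy as np

    def breathing_rate_bpm(depth_frames, chest_roi, fps, band=(0.1, 0.7)):
        """Mean depth inside a chest bounding box per frame, then the dominant
        frequency in an assumed respiration band, in breaths per minute.
        depth_frames: (T, H, W) array, chest_roi: (y0, y1, x0, x1)."""
        y0, y1, x0, x1 = chest_roi
        signal = depth_frames[:, y0:y1, x0:x1].mean(axis=(1, 2))
        signal = signal - signal.mean()
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
        spectrum = np.abs(np.fft.rfft(signal))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return 60.0 * freqs[mask][np.argmax(spectrum[mask])]

    # toy example: 40 s at 30 fps, chest depth oscillating at 0.25 Hz (15 breaths/min)
    fps = 30
    n = 40 * fps
    t = np.arange(n) / fps
    frames = np.full((n, 24, 32), 2000.0)
    frames[:, 6:18, 10:22] += 5.0 * np.sin(2 * np.pi * 0.25 * t)[:, None, None]
    print(breathing_rate_bpm(frames, (6, 18, 10, 22), fps))   # ~15.0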

2019

Deep PPG: Large-Scale Heart Rate Estimation with Convolutional Neural Networks, Attila Reiss and Philip Schmidt and Ina Indlekofer and Kristof Van Laerhoven, In Sensors, vol.19(14)2019. [abstractPPG-based continuous heart rate monitoring is essential in a number of domains, e.g. for healthcare or fitness applications. Recently, methods based on time-frequency spectra emerged to address the challenges of motion artefact compensation. However, existing approaches are highly parametrised and optimised for specific scenarios of small public datasets. We address this fragmentation by contributing research into the robustness and generalisation capabilities of PPG-based heart rate estimation approaches. First, we introduce a novel large-scale dataset (called PPG-DaLiA), including a wide range of activities performed under close to real-life conditions. Second, we extend a state-of-the-art algorithm, significantly improving its performance on several datasets. Third, we introduce deep learning to this domain, and investigate various convolutional neural network architectures. Our end-to-end learning approach takes the time-frequency spectra of synchronised PPG- and accelerometer-signals as input, and provides the estimated heart rate as output. Finally, we compare the novel deep learning approach to classical methods, performing evaluation on four public datasets. We show that on large datasets the deep learning model significantly outperforms other methods: The mean absolute error could be reduced by 31% on the new dataset PPG-DaLiA, and by 21% on the dataset WESAD.][pdf][scholar][bibtex]

Multi-target Affect Detection in the Wild: An Exploratory Study, Philip Schmidt and Robert Duerichen and Attila Reiss and Thomas Ploetz and Van Laerhoven, Kristof, In Proceedings of the 2019 ACM International Symposium on Wearable Computers, ISWC 2019, London, UK, September 9-13, 2019, 2019. [abstractAffective computing aims to detect a person's affective state (e.g. emotion) based on observables. The link between affective states and biophysical data, collected in lab settings, has been established successfully. However, the number of realistic studies targeting affect detection in the wild is still limited. In this paper we present an exploratory field study, using physiological data of 11 healthy subjects. We aim to classify arousal, State-Trait Anxiety Inventory (STAI), stress, and valence self-reports, utilizing feature-based and CNN methods. In addition, we extend the CNNs to multi-task CNNs, classifying all labels of interest simultaneously. Comparing the F1 score averaged over the different tasks and classifiers, the CNNs reach a 1.8% higher score than the classical methods. However, the F1 scores barely exceed 45%. In the light of these results, we discuss pitfalls and challenges for physiology-based affective computing in the wild.][pdf][scholar][bibtex]

Bayesian Estimation of Recurrent Changepoints for Signal Segmentation and Anomaly Detection, Christian Reich and Christina Nicolaou and Ahmad Mansour and Kristof van Laerhoven, In 2019 27th European Signal Processing Conference (EUSIPCO), 2019. [abstractSignal segmentation is a generic task in many time series applications. We propose approaching it via Bayesian changepoint algorithms, i.e., by assigning segments between changepoints. When successive signals show a recurrent changepoint pattern, estimating changepoint recurrence is beneficial for two reasons: While recurrent changepoints yield more robust signal segment estimates, non-recurrent changepoints bear valuable information for unsupervised anomaly detection. This study introduces the changepoint recurrence distribution (CPRD) as an empirical estimate of the recurrent behavior of observed changepoints. Two generic methods for incorporating the estimated CPRD into the process of assessing recurrence of future changepoints are suggested. The knowledge of non-recurrent changepoints arising from one of these methods allows additional unsupervised anomaly detection. The quality both of changepoint recurrence estimation via CPRD and of changepoint-related signal segmentation and unsupervised anomaly detection are verified in a proof-of-concept study for two exemplary machine tool monitoring tasks.][pdf][scholar][bibtex]

Wearable-Based Affect Recognition - A Review, Philip Schmidt and Attila Reiss and Robert Duerichen and Kristof Van Laerhoven, In Sensors, vol.19(19)p.4079, 2019. [abstractAffect recognition is an interdisciplinary research field bringing together researchers from natural and social sciences. Affect recognition research aims to detect the affective state of a person based on observables, with the goal to, for example, provide reasoning for the person’s decision making or to support mental wellbeing (e.g., stress monitoring). Recently, besides approaches based on audio, visual or text information, solutions relying on wearable sensors as observables, recording mainly physiological and inertial parameters, have received increasing attention. Wearable systems provide an ideal platform for long-term affect recognition applications due to their rich functionality and form factor, while providing valuable insights during everyday life through integrated sensors. However, existing literature surveys lack a comprehensive overview of state-of-the-art research in wearable-based affect recognition. Therefore, the aim of this paper is to provide a broad overview and in-depth understanding of the theoretical background, methods and best practices of wearable affect and stress recognition. Following a summary of different psychological models, we detail the influence of affective states on the human physiology and the sensors commonly employed to measure physiological changes. Then, we outline lab protocols eliciting affective states and provide guidelines for ground truth generation in field studies. We also describe the standard data processing chain and review common approaches related to the preprocessing, feature extraction and classification steps. By providing a comprehensive summary of the state-of-the-art and guidelines to various aspects, we would like to enable other researchers in the field to conduct and evaluate user studies and develop wearable systems.][pdf][scholar][bibtex]

Collecting Labels for Rare Anomalies via Direct Human Feedback - An Industrial Application Study, Reich, Christian and Mansour, Ahmad and Van Laerhoven, Kristof, In Informatics, vol.6(3)2019. [abstractMany systems rely on the expertise from human operators, who have acquired their knowledge through practical experience over the course of many years. For the detection of anomalies in industrial settings, sensor units have been introduced to predict and classify such anomalous events, but these critically rely on annotated data for training. Lengthy data collection campaigns are needed, which tend to be combined with domain expert annotations of the data afterwards, resulting in a costly and slow process. This work presents an alternative by studying live annotation of rare anomalous events in sensor streams in a real-world manufacturing setting by experienced human operators that can also observe the machinery itself. A prototype for visualization and in situ annotation of sensor signals is developed with embedded unsupervised anomaly detection algorithms that propose signals for annotation and allow the operators to give feedback on the detection and classify anomalous events. This prototype allowed assembling a corpus of several weeks of sensor data measured in a real manufacturing environment, which was annotated by domain experts as an evaluation basis for this study. The evaluation of live annotations reveals high user motivation after getting accustomed to the labeling prototype. After this initial period, clear anomalies with characteristic signal patterns are detected reliably in visualized envelope signals. More subtle signal deviations were less likely to be confirmed as an anomaly, due to either insufficient visibility in the envelope signals or the absence of characteristic signal patterns.][pdf][scholar][bibtex]

Human Activity Sensing: Corpus and Applications, Kawaguchi, Nobuo and Nishio, Nobuhiko and Roggen, Daniel and Inoue, Sozo and Pirttikangas, Susanna and Van Laerhoven, Kristof, p.248, 2019. [abstractActivity recognition has emerged as a challenging and high-impact research field, as over the past years smaller and more powerful sensors have been introduced in wide-spread consumer devices. Validation of techniques and algorithms requires large-scale human activity corpuses and improved methods to recognize activities and the contexts in which they occur. This book deals with the challenges of designing valid and reproducible experiments, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating activity recognition systems in the real world with real users.][pdf][scholar][bibtex]

A Multi-media Exchange Format for Time-Series Dataset Curation, Philipp M. Scholl and Benjamin Voelker and Bernd Becker and Kristof Van Laerhoven, In Human Activity Sensing: Corpus and Applications, p.111--119, 2019. [abstractExchanging data as character-separated values (CSV) is slow, cumbersome and error-prone. Especially for time-series data, which is common in Activity Recognition, synchronizing several independently recorded sensors is challenging. Adding second level evidence, like video recordings from multiple angles and time-coded annotations, further complicates the matter of curating such data. A possible alternative is to make use of standardized multi-media formats. Sensor data can be encoded in audio format, and time-coded information, like annotations, as subtitles. Video data can be added easily. All this media can be merged into a single container file, which makes the issue of synchronization explicit. The incurred performance overhead by this encoding is shown to be negligible and compression can be applied to optimize storage and transmission overhead.][pdf][scholar][bibtex]
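
As a minimal illustration of the encoding idea described above (sensor streams stored in an audio format), the Python sketch below writes a three-axis accelerometer stream into a multi-channel WAV file whose sample rate equals the sensor rate. The scaling range is an assumption of this sketch, and the subsequent merging with video and subtitle tracks into a single container is not shown.

    import numpy as np
    from scipy.io import wavfile

    # toy 3-axis accelerometer stream, 100 Hz, 10 s, in units of g
    fs = 100
    t = np.arange(0, 10, 1 / fs)
    acc = np.stack([np.sin(2 * np.pi * 1.0 * t),
                    0.5 * np.cos(2 * np.pi * 0.5 * t),
                    np.ones_like(t)], axis=1)

    # map an assumed +/- 8 g range onto the full 16-bit integer range
    scaled = np.clip(acc / 8.0, -1.0, 1.0)
    pcm = (scaled * 32767).astype(np.int16)

    # one WAV "audio" file with three channels, sample rate = sensor rate
    wavfile.write("accelerometer.wav", fs, pcm)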

Identifying Sensors via Statistical Analysis of Body-Worn Inertial Sensor Data, Philipp M. Scholl and Kristof Van Laerhoven, In Human Activity Sensing: Corpus and Applications, p.17--28, 2019. [abstractEvery benchmark dataset that contains inertial data (acceleration, rate-of-turn, magnetic flux) requires a thorough description of the dataset itself. This description often tends to be unstructured and is supplied to researchers as a conventional description, and in many cases crucial details are not available anymore. In this chapter, we argue that each sensor modality exhibits particular statistical properties that allow reconstructing the modality solely from the sensor data itself. In order to investigate this, tri-axial inertial sensor data from five publicly available datasets are analysed. We found that in particular three statistical properties, the mode, the kurtosis, and the number of modes tend to be sufficient for classification of sensor modality - requiring as the only assumption that the sampling rate and sample format are known, and the fact that acceleration and magnetometer data is present in the dataset. With those assumptions in place, we found that 98% of all 1003 data points were successfully classified.][pdf][scholar][bibtex]
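
A rough Python sketch of the statistics named in the abstract above (mode, kurtosis, and number of modes, computed per channel) is given below; the histogram-based mode estimate, the smoothing step, and the toy data are assumptions of this sketch, not the chapter's exact procedure or classifier.

    import numpy as np
    from scipy import stats

    def modality_features(samples, bins=50):
        """Per-channel statistics that can help tell sensor modalities apart:
        the (histogram) mode, the kurtosis, and a crude count of histogram modes."""
        feats = []
        for channel in np.atleast_2d(samples.T):
            hist, edges = np.histogram(channel, bins=bins)
            mode = edges[np.argmax(hist)]
            kurt = stats.kurtosis(channel)
            # local maxima of the smoothed histogram as a rough "number of modes"
            smooth = np.convolve(hist, np.ones(3) / 3, mode="same")
            n_modes = int(np.sum((smooth[1:-1] > smooth[:-2]) & (smooth[1:-1] > smooth[2:])))
            feats.append((mode, kurt, n_modes))
        return feats

    # toy example: accelerometer-like data (gravity offset on one axis) vs. gyro-like noise
    rng = np.random.default_rng(0)
    acc = rng.normal([0.0, 0.0, 9.81], 0.5, size=(1000, 3))
    gyro = rng.normal(0.0, 0.2, size=(1000, 3))
    print(modality_features(acc))
    print(modality_features(gyro))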

Using an in-ear wearable to annotate activity data across multiple inertial sensors, Alexander Hölzemann and Henry Odoemelem and Kristof Van Laerhoven, In EarComp’19, September 9, 2019, London, United Kingdom, 2019. [abstractWearable activity recognition research needs benchmark data, which rely heavily on synchronizing and annotating the inertial sensor data, in order to validate the activity classifiers. Such validation studies become challenging when recording outside the lab, over longer stretches of time. This paper presents a method that uses an inconspicuous, ear-worn device that allows the wearer to annotate his or her activities as the recording takes place. Since the ear-worn device has integrated inertial sensors, we use cross-correlation over all wearable inertial signals to propagate the annotations over all sensor streams. In a feasibility study with 7 participants performing 6 different physical activities, we show that our algorithm is able to synchronize signals between sensors worn on the body using cross-correlation, typically within a second. A comfort rating scale study has shown that attachment is critical. Button presses can thus define markers in synchronized activity data, resulting in a fast, comfortable, and reliable annotation method.][pdf][scholar][bibtex]

Using Multimodal Biosignal Data from Wearables to Detect Focal Motor Seizures in Individual Epilepsy Patients, Sebastian Boettcher and Nikolay V. Manyakov and Nino Epitashvili and Amos Folarin and Mark Richardson and Matthias Duempelmann and Andreas Schulze-Bonhage and Kristof Van Laerhoven, In Proceedings of the 6th International Workshop on Sensor-based Activity Recognition and Interaction, 2019. [abstractEpilepsy seizure detection with wearable devices is an emerging research field. As opposed to the gold standard that is currently practiced, consisting of simultaneous video and EEG monitoring of patients, wearables have the advantage that they put a lower burden on epilepsy patients. This paper reports on the first stages in a research effort that is dedicated to the development of a multimodal seizure detection system specifically for focal onset epileptic seizures. In a focused analysis on data from three in-hospital patients with each having six to nine seizures recorded, we show that this type of seizures can manifest very differently and thus significantly impact classification. Using a Random Forest model on a rich set of features, we have obtained overall precision and recall scores of up to 0.92 and 0.72 respectively. These results show that the approach has validity, but we identify the type of focal seizure to be a critical factor for the classification performance.][pdf][scholar][bibtex]

AfricaSign -- A Crowd-sourcing Platform for the Documentation of STEM Vocabulary in African Sign Languages, Soudi, Abdelhadi and Van Laerhoven, Kristof and Bou-Souf, Elmostafa, In The 21st International ACM SIGACCESS Conference on Computers and Accessibility, p.658--660, 2019. [abstractResearch in sign languages, in general, is still a relatively new topic of study when compared to research into spoken languages. Most of African sign languages are endangered and severely under-studied [11]. In an attempt to (lexically) document as many endangered sign languages in Africa as possible, we have developed a low-barrier, online crowd-sourcing platform (AfricaSign) that enables the African deaf communities to document their sign languages. AfricaSign offers to users multiple input modes, accommodates regional variation in multiple sign languages and allows the use of avatar technology to describe signs. It is likely that this research will uncover typological features exhibited by African sign languages. Documentation of STEM vocabulary will also help facilitate access to education for the Deaf community.][pdf][scholar][bibtex]

Using the eSense Wearable Earbud as a Light-Weight Robot Arm Controller, Henry Odoemelem and Alexander Hölzemann and Kristof Van Laerhoven, In EarComp’19, September 9, 2019, London, United Kingdom, 2019. [abstractHead motion-based interfaces for controlling robot arms in real time have been presented in both medical-oriented research as well as human-robot interaction. We present an especially minimal and low-cost solution that uses the eSense [1] ear-worn prototype as a small head-worn controller, enabling direct control of an inexpensive robot arm in the environment. We report on the hardware and software setup, as well as the experiment design and early results.][pdf][scholar][bibtex]

3rd EAI International Conference on IoT in Urban Space, Rui José and Kristof Van Laerhoven and Helena Rodrigues, 2020. [abstractWe are delighted to introduce the proceedings of the third edition of the 2018 European Alliance for Innovation (EAI) International Conference on IoT in Urban Space (Urb-IoT), co-located with the Smart City 360° Summit 2018, which took place in Guimarães, Portugal. This conference has brought together researchers, developers, and practitioners around the world who are exploring the urban space and its dynamics within the scope of the Internet of Things (IoT) and the new science of cities.][pdf][scholar][bibtex]

Bit-Shift-Based Accelerator for CNNs with Selectable Accuracy and Throughput, Sebastian Vogel and Rajatha B. Raghunath and Andre Guntoro and Kristof Van Laerhoven and Gerd Ascheid, In 2019 22nd Euromicro Conference on Digital System Design (DSD), 2019. [abstractHardware accelerators for compute intensive algorithms such as convolutional neural networks benefit from number representations with reduced precision. In this paper, we evaluate and extend a number representation based on power-of-two quantization enabling bit-shift-based processing of multiplications. We found that weights of a neural network can either be represented by a single 4 bit power-of-two value or with two 4 bit values depending on accuracy requirements. We evaluate the classification accuracy of VGG-16 and ResNet50 on the ImageNet dataset with weights represented in our novel number format. To include a more complex task, we additionally evaluate the format on two networks for semantic segmentation. In addition, we design a novel processing element based on bit-shifts which is configurable in terms of throughput (4 bit mode) and accuracy (8 bit mode). We evaluate this processing element in an FPGA implementation of a dedicated accelerator for neural networks incorporating a 32-by-64 processing array running at 250 MHz with 1 TOp/s peak throughput in 8 bit mode. The accelerator is capable of processing regular convolutional layers and dilated convolutions in combination with pooling and upsampling. For a semantic segmentation network with 108.5 GOp/frame, our FPGA implementation achieves a throughput of 7.0 FPS in the 8 bit accurate mode and up to 11.2 FPS in the 4 bit mode corresponding to 760.1 GOp/s and 1,218 GOp/s effective throughput, respectively. Finally, we compare the novel design to classical multiplier-based approaches in terms of FPGA utilization and power consumption. Our novel multiply-accumulate engines designed for the optimized number representation use 9% fewer logic elements while allowing double the throughput compared to a classical implementation. Moreover, a measurement shows 25% reduction of power consumption at the same throughput. Therefore, our flexible design offers a solution to the trade-off between energy efficiency, accuracy, and high throughput.][pdf][scholar][bibtex]
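
The processing element and number format of the paper are not modeled here; the Python sketch below only illustrates the general principle of power-of-two quantization, mapping a weight to a signed power of two so that the multiplication reduces to an arithmetic shift on an integer activation. The exponent range is an illustrative assumption, not the paper's 4/8-bit format.

    import numpy as np

    def quantize_pow2(w, min_exp=-7, max_exp=0):
        """Map a float weight to sign * 2**exponent, with the exponent clipped
        to an assumed range (not the paper's exact format)."""
        if w == 0.0:
            return 0, min_exp
        exp = int(np.clip(np.round(np.log2(abs(w))), min_exp, max_exp))
        return int(np.sign(w)), exp

    def shift_multiply(activation_int, sign, exp):
        """Multiply an integer activation by sign * 2**exp using shifts only."""
        if sign == 0:
            return 0
        shifted = activation_int << exp if exp >= 0 else activation_int >> -exp
        return sign * shifted

    # toy example: weight 0.23 is quantized to 2**-2, activation 96
    sign, exp = quantize_pow2(0.23)
    print(sign, exp)                      # 1, -2
    print(shift_multiply(96, sign, exp))  # 24  (an exact multiply would give ~22.1)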

Unity in Diversity: Sampling Strategies in Wearable Photoplethysmography, Florian Wolling and Simon Heimes and Van Laerhoven, Kristof, In IEEE Pervasive Computing, vol.18(3)p.63 -- 69, 2019. [abstractPhotoplethysmography optically measures the pulsating blood volume flow in the human skin so that primary vital signs such as the heart rate can be determined. This sensing principle is best known from the mysterious illumination that can sometimes be seen at the back of fitness trackers and smartwatches. Since powering this high-intensity illumination puts a large dent in a wearable's energy budget, this paper delves into the sampling schemes and strategies used by current off-the-shelf wearables to save energy and yet obtain good readings. As it turns out, the devices follow very different approaches.][pdf][scholar][bibtex]

Mobiles Anfallsmonitoring bei Epilepsiepatienten, Schulze-Bonhage, A. and Boettcher, S. and Glasstetter, M. and Epitashvili, N. and Bruno, E. and Richardson, M. and v. Laerhoven, K. and Duempelmann, M., In Der Nervenarzt, 2019. [abstractWearables are met with great interest by epilepsy patients and their treating physicians for capturing seizure frequency and warning of seizures, and beyond that for detecting seizure-associated risks to patients, for the differential-diagnostic assessment of rare seizure types, and for predicting periods of increased seizure probability. Accelerometry, electromyography, heart rate measurements, and further methods for measuring autonomic parameters are employed here to capture clinical seizure symptoms. At present, clinical use for capturing nocturnal motor seizures is becoming feasible. This overview critically presents and discusses available devices, data on their performance in documenting seizures, current fields of application, and developments in data analysis.][pdf][scholar][bibtex]

The Role of Self-Control and the Presence of Enactment Models on Sugar-Sweetened Beverage Consumption: A Pilot Study, Wenzel, Mario and Geelen, Anouk and Wolters, Maike and Hebestreit, Antje and Van Laerhoven, Kristof and Lakerveld, Jeroen and Andersen, Lene Frost and van't Veer, Pieter and Kubiak, Thomas, In Frontiers in Psychology, vol.10, p.1511, 2019. [abstractThe objective of the present research was to investigate associations of dispositional and momentary self-control and the presence of other individuals consuming SSBs with the consumption frequency of sugar-sweetened beverages (SSBs) in a multi-country pilot study. We conducted an Ambulatory Assessment in which 75 university students (52 females) from four study sites carried smartphones and received prompts six times a day in their everyday environments to capture information regarding momentary self-control and the presence of other individuals consuming SSBs. Multilevel models revealed a statistically significant negative association between dispositional self-control and SSB consumption. Moreover, having more self-control than usual was only beneficial in regard to lower SSB consumption frequency, when other individuals consuming SSBs were not present but not when they were present. The findings support the hypothesis that self-control is an important factor regarding SSB consumption. This early evidence highlights self-control as a candidate to design interventions to promote healthier drinking through improved self-control.][pdf][scholar][bibtex]

Detection of Machine Tool Anomalies from Bayesian Changepoint Recurrence Estimation, C. Reich and C. Nicolaou and A. Mansour and K. Van Laerhoven, In 2019 IEEE 17th International Conference on Industrial Informatics (INDIN), vol.1, p.1297--1302, 2019. [abstractIn this study, we consider the problem of detecting process-related anomalies for machine tools. The similar shape of successive sensor signals, which arises due to the same process step sequence applied to each workpiece, suggests extracting shape-related features. In recent years, shapelets dominated the field of shape-related features. Unfortunately, they involve a high computational burden due to hyperparameter optimization. We introduce alternative shape-related features relying on abrupt signal changes (changepoints) reflecting the changes of process steps. During normal operation, changepoints follow a highly recurrent pattern, i.e., appear at similar locations. Thus, being able to distinguish regular, recurrent from abnormal, non-recurrent changepoints allows detecting process anomalies. For changepoint recurrence estimation, we extend the Bayesian Online Changepoint Detection (BOCPD) method. The extension allows distinguishing normal and abnormal changepoints relying on empirical estimates of the changepoint recurrence distribution. Subsequently, changepoint-related features are introduced and compared to shapelets and wavelet-based features in a case study comprising real-world machine tool data. Qualitative results verify changepoint locations being comparable to shapelet locations found by the FLAG shapelet approach. Furthermore, quantitative results suggest superior classification performance both to shapelets and wavelet-based features.][pdf][scholar][bibtex]

2018

Introducing WESAD, a Multimodal Dataset for Wearable Stress and Affect Detection, Philip Schmidt and Attila Reiss and Robert Duerichen and Claus Marberger and Kristof Van Laerhoven, In 20th ACM International Conference on Multimodal Interaction (ICMI ’18), 2018. [abstractAffect recognition aims to detect a person’s affective state based on observables, with the goal to e.g. improve human-computer interaction. Long-term stress is known to have severe implications on wellbeing, which call for continuous and automated stress monitoring systems. However, the affective computing community lacks commonly used standard datasets for wearable stress detection which a) provide multimodal high-quality data, and b) include multiple affective states. Therefore, we introduce WESAD, a new publicly available dataset for wearable stress and affect detection. This multimodal dataset features physiological and motion data, recorded from both a wrist- and a chest-worn device, of 15 subjects during a lab study. The following sensor modalities are included: blood volume pulse, electrocardiogram, electrodermal activity, electromyogram, respiration, body temperature, and three-axis acceleration. Moreover, the dataset bridges the gap between previous lab studies on stress and emotions, by containing three different affective states (neutral, stress, amusement). In addition, self-reports of the subjects, which were obtained using several established questionnaires, are contained in the dataset. Furthermore, a benchmark is created on the dataset, using well-known features and standard machine learning methods. Considering the three-class classification problem (baseline vs. stress vs. amusement), we achieved classification accuracies of up to 80%. In the binary case (stress vs. non-stress), accuracies of up to 93% were reached. Finally, we provide a detailed analysis and comparison of the two device locations (chest vs. wrist) as well as the different sensor modalities.][pdf][scholar][bibtex]

Fewer Samples for a Longer Life Span: Towards Long-Term Wearable PPG Analysis, Florian Wolling and Kristof Van Laerhoven, In Proceedings of the 5th International Workshop on Sensor-based Activity Recognition and Interaction, 2018. [abstractPhotoplethysmography (PPG) sensors have become a prevalent feature included in current wearables, as the cost and size of current PPG modules have dropped significantly. Research in the analysis of PPG data has recently expanded beyond the fast and accurate characterization of heart rate, into the adaptive handling of artifacts within the signal and even the capturing of respiration rate. In this paper, we instead explore using state-of-the-art PPG sensor modules for long-term wearable deployment and the observation of trends over minutes, rather than seconds. By focusing specifically on lowering the sampling rate and via analysis of the spectrum of frequencies alone, our approach minimizes the costly illumination-based sensing and can be used to detect the dominant frequencies of heart rate and respiration rate, but also enables inferring the activity of the sympathetic nervous system. We show in two experiments that such detections and measurements can still be achieved at low sampling rates down to 10 Hz, within a power-efficient platform. This approach enables miniature sensor designs that monitor average heart rate, respiration rate, and sympathetic nerve activity over longer stretches of time.][pdf][scholar][bibtex]

Wearable affect and stress recognition: A review, Philip Schmidt and Attila Reiss and Robert Duerichen and Kristof Van Laerhoven, In arXiv:1811.08854, 2018. [abstractAffect recognition aims to detect a person’s affective state based on observables, with the goal to e.g. provide reasoning for decision making or support mental wellbeing. Recently, besides approaches based on audio, visual or text information, solutions relying on wearable sensors as observables (recording mainly physiological and inertial parameters) have received increasing attention. Wearable systems offer an ideal platform for long-term affect recognition applications due to their rich functionality and form factor. However, existing literature lacks a comprehensive overview of state-of-the-art research in wearable-based affect recognition. Therefore, the aim of this paper is to provide a broad overview and in-depth understanding of the theoretical background, methods, and best practices of wearable affect and stress recognition. We summarise psychological models, and detail affect-related physiological changes and their measurement with wearables. We outline lab protocols eliciting affective states, and provide guidelines for ground truth generation in field studies. We also describe the standard data processing chain, and review common approaches to preprocessing, feature extraction, and classification. By providing a comprehensive summary of the state-of-the-art and guidelines to various aspects, we would like to enable other researchers in the field of affect recognition to conduct and evaluate user studies and develop wearable systems.][pdf][scholar]

Using Wrist-Worn Activity Recognition for Basketball Game Analysis, Alexander Hoelzemann and Kristof Van Laerhoven, In Proceedings of the 5th International Workshop on Sensor-based Activity Recognition and Interaction, 2018. [abstractGame play in the sport of basketball tends to combine highly dynamic phases in which the teams strategically move across the field, with specific actions made by individual players. Analysis of basketball games usually focuses on the locations of players at particular points in the game, whereas the capture of what actions the players were performing remains underrepresented. In this paper, we present an approach that allows to monitor players' actions during a game, such as dribbling, shooting, blocking, or passing, with wrist-worn inertial sensors. In a feasibility study, inertial data from a sensor worn on the wrist were recorded during training and game sessions from three players. We illustrate that common features and classifiers are able to recognize short actions, with overall accuracy performances around 83.6% (k-Nearest-Neighbor) and 87.5% (Random Forest). Some actions, such as jump shots, performed well (around 95% accuracy), whereas some types of dribbling achieved low recall (around 44%).][pdf][scholar][bibtex]
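
The exact features and classifier settings of the study are not reproduced here; the Python sketch below shows a generic wrist-worn activity recognition baseline of the kind the abstract refers to, extracting simple sliding-window statistics from accelerometer data and training a scikit-learn Random Forest. Window length, feature set, and toy data are assumptions of this sketch.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(acc, win=100, step=50):
        """Slide a window over a (T, 3) accelerometer stream and compute
        per-axis mean, standard deviation, and range (a common baseline set)."""
        feats = []
        for start in range(0, len(acc) - win + 1, step):
            w = acc[start:start + win]
            feats.append(np.concatenate([w.mean(0), w.std(0), w.max(0) - w.min(0)]))
        return np.array(feats)

    # toy example: two synthetic "actions" with different dynamics
    rng = np.random.default_rng(0)
    dribble = rng.normal(0.0, 2.0, size=(1000, 3))   # high-variance movement
    stand = rng.normal(0.0, 0.1, size=(1000, 3))     # near-static
    X_a, X_b = window_features(dribble), window_features(stand)
    X = np.vstack([X_a, X_b])
    y = np.array([0] * len(X_a) + [1] * len(X_b))
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.score(X, y))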

Respiration Rate Estimation with Depth Cameras: An Evaluation of Parameters, Jochen Kempfle and Kristof Van Laerhoven, In Proceedings of the 5th International Workshop on Sensor-based Activity Recognition and Interaction, 2018. [abstractDepth cameras have been known to be capable of picking up the small changes in distance from users' torsos, to estimate respiration rate. Several studies have shown that under certain conditions, the respiration rate from a non-mobile user facing the camera can be accurately estimated from parts of the depth data. It is however to date not clear, what factors might hinder the application of this technology in any setting, what areas of the torso need to be observed, and how readings are affected for persons at larger distances from the RGB-D camera. In this paper, we present a benchmark dataset that consists of the point cloud data from a depth camera, which monitors 7 volunteers at variable distances, for variable methods to pin-point the person's torso, and at variable breathing rates. Our findings show that the respiration signal's signal-to-noise ratio becomes debilitating as the distance to the person approaches 4 metres, and that bigger windows over the person's chest work particularly well. The sampling rate of the depth camera was also found to impact the signal's quality significantly.][pdf][scholar][bibtex]

Labelling Affective States "in the wild": Practical Guidelines and Lessons Learned, Philip Schmidt and Attila Reiss and Robert Duerichen and Kristof Van Laerhoven, In UbiComp/ISWC ’18 Adjunct, 2018. [abstractIn affective computing (AC) field studies it is impossible to obtain an objective ground truth. Hence, self-reports in form of ecological momentary assessments (EMAs) are frequently used in lieu of ground truth. Based on four paradigms, we formulate practical guidelines to increase the accuracy of labels generated via EMAs. In addition, we detail how these guidelines were implemented in a recent AC field study of ours. During our field study, 1081 EMAs were collected from 10 subjects over a duration of 148 days. Based on these EMAs, we perform a qualitative analysis of the effectiveness of our proposed guidelines. Furthermore, we present insights and lessons learned from the field study.][pdf][scholar][bibtex]

PPG-based Heart Rate Estimation with Time-Frequency Spectra: A Deep Learning Approach, Attila Reiss and Ina Indlekofer and Philip Schmidt and Kristof Van Laerhoven, In UbiComp/ISWC ’18 Adjunct, 2018. [abstractPPG-based continuous heart rate estimation is challenging due to the effects of physical activity. Recently, methods based on time-frequency spectra emerged to compensate motion artefacts. However, existing approaches are highly parametrised and optimised for specific scenarios. In this paper, we first argue that cross-validation schemes should be adapted to this topic, and show that the generalisation capabilities of current approaches are limited. We then introduce deep learning, specifically CNN-models, to this domain. We investigate different CNN-architectures (e.g. the number of convolutional layers, applying batch normalisation, or ensemble prediction), and report insights based on our systematic evaluation on two publicly available datasets. Finally, we show that our CNN-based approach performs comparably to classical methods.][pdf][scholar][bibtex]

Real-Time and Embedded Detection of Hand Gestures with an IMU-Based Glove, Mummadi, Chaithanya Kumar and Leo, Frederic Philips Peter and Verma, Keshav Deep and Kasireddy, Shivaji and Scholl, Philipp M. and Kempfle, Jochen and Laerhoven, Kristof Van, In Informatics, vol.5(2)2018. [abstractThis article focuses on the use of data gloves for human-computer interaction concepts, where external sensors cannot always fully observe the user's hand. A good concept hereby allows to intuitively switch the interaction context on demand by using different hand gestures. The recognition of various, possibly complex hand gestures, however, introduces unintentional overhead to the system. Consequently, we present a data glove prototype comprising a glove-embedded gesture classifier utilizing data from Inertial Measurement Units (IMUs) in the fingertips. In an extensive set of experiments with 57 participants, our system was tested with 22 hand gestures, all taken from the French Sign Language (LSF) alphabet. Results show that our system is capable of detecting the LSF alphabet with a mean accuracy score of 92% and an F1 score of 91%, using complementary filter with a gyroscope-to-accelerometer ratio of 93%. Our approach has also been compared to the local fusion algorithm on an IMU motion sensor, showing faster settling times and less delays after gesture changes. Real-time performance of the recognition is shown to occur within 63 milliseconds, allowing fluent use of the gestures via Bluetooth-connected systems.][pdf][scholar][bibtex]
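
The glove's firmware and quaternion handling are not reproduced here; the Python sketch below illustrates a one-axis complementary filter of the kind mentioned in the abstract above, using the 0.93 gyroscope-to-accelerometer ratio reported there. The toy signals and the single-angle formulation are assumptions of this sketch.

    import numpy as np

    def complementary_filter(gyro_rate, acc_angle, dt, ratio=0.93):
        """Fuse a gyroscope rate (deg/s) with an accelerometer-derived tilt
        angle (deg) for one axis: integrate the gyro each step, then pull the
        estimate towards the accelerometer angle with weight (1 - ratio)."""
        angle = acc_angle[0]
        estimates = []
        for rate, acc in zip(gyro_rate, acc_angle):
            angle = ratio * (angle + rate * dt) + (1.0 - ratio) * acc
            estimates.append(angle)
        return np.array(estimates)

    # toy example: a finger segment rotating from 0 to 45 degrees over 1 s at 100 Hz
    rng = np.random.default_rng(0)
    dt, n = 0.01, 100
    true_angle = np.linspace(0, 45, n)
    gyro = np.gradient(true_angle, dt) + rng.normal(0, 5, n)   # noisy rate (deg/s)
    acc = true_angle + rng.normal(0, 2, n)                     # noisy but drift-free angle
    print(complementary_filter(gyro, acc, dt)[-1])             # close to 45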

Detecting Transitions in Manual Tasks from Wearables: An Unsupervised Labeling Approach, Boettcher, Sebastian and Scholl, Philipp M and Van Laerhoven, Kristof, In Informatics, vol.5(2)2018. [abstractAuthoring protocols for manual tasks such as following recipes, manufacturing processes or laboratory experiments requires significant effort. This paper presents a system that estimates individual procedure transitions from the user's physical movement and gestures recorded with inertial motion sensors. Combined with egocentric or external video recordings, this facilitates efficient review and annotation of video databases. We investigate different clustering algorithms on wearable inertial sensor data recorded on par with video data, to automatically create transition marks between task steps. The goal is to match these marks to the transitions given in a description of the workflow, thus creating navigation cues to browse video repositories of manual work. To evaluate the performance of unsupervised algorithms, the automatically-generated marks are compared to human expert-created labels on two publicly-available datasets. Additionally, we tested the approach on a novel dataset in a manufacturing lab environment, describing an existing sequential manufacturing process. The results from selected clustering methods are also compared to some supervised methods.][pdf][scholar][bibtex]

Evaluation of an Impact Spring-Coil-Magnet System with 3D-Printed Setup, P. Mehne and P. Scholl and A. Rudmann and M. Kroener and K. Van Laerhoven and P. Woias, In Journal of Physics: Conference Series, vol.1052, p.012091, 2018. [pdf][scholar][bibtex]

Passive Link Quality Estimation for Accurate and Stable Parent Selection in Dense 6TiSCH Networks, Hermeto, Rodrigo Teles and Gallais, Antoine and Laerhoven, Kristof Van and Theoleyre, Fabrice, In Proceedings of the 2018 International Conference on Embedded Wireless Systems and Networks (EWSN 2018), p.114--125, 2018. [abstractIndustrial applications are increasingly demanding more low-power operations, deterministic communications and end-to-end reliability that approaches 100%. Recent standardization efforts focus on exploiting slow channel hopping while properly scheduling the transmissions to provide a strict Quality of Service for the Industrial Internet of Things (IIoT). By keeping nodes time-synchronized and by employing a channel hopping approach, IEEE 802.15.4-TSCH (Time-Slotted Channel Hopping) aims at providing high-level network reliability. For this, however, we need to construct an accurate schedule, able to exploit reliable paths. In particular, radio links with high Packet Error Rate should not be exploited since they are less energy-efficient (more retransmissions are required) and they negatively impact the reliability. In this work, we take advantage of the advertisement packets continuously transmitted by the nodes to infer the link quality. We argue that the reception rate of broadcast packets provides a means to estimate the link quality to different neighbors, even when the data packets use different, collision-free transmission opportunities. Our experiments on a large-scale platform highlight that our approach improves the convergence delay, identifying the best routes to the sink during the bootstrapping (or reconverging) phase without adding any extra control packet.][pdf][scholar]

PresentPostures: A Wrist and Body Capture Approach for Augmenting Presentations, Kempfle, Jochen and Van Laerhoven, Kristof, In Pervasive Computing and Communications Workshops (PerCom Workshops), 2017 IEEE International Conference on, 2018. [abstractCapturing and digitizing all nuances during presentations is notoriously difficult. At best, digital slides tend to be combined with audio, while video footage of the presenter's body language often turns out to be either too sensitive, occluded, or hard to achieve for common lighting conditions. If presentations require capturing what is written on the whiteboard, more expensive setups are usually needed. In this paper, we present an approach that complements the data from a wrist-worn inertial sensor with depth camera footage, to obtain an accurate posture representation of the presenter. A wearable inertial measurement unit complements the depth footage by providing more accurate arm rotations and wrist postures when the depth images are occluded, whereas the depth images provide an accurate full-body posture for indoor environments. In an experiment with 10 volunteers, we show that posture estimates from depth images and inertial sensors complement each other well, resulting in far less occlusions and tracking of the wrist with an accuracy that supports capturing sketches.][pdf][scholar][bibtex]

Embedding Intelligent Features for Vibration-Based Machine Condition Monitoring, Christian Reich and Ahmad Mansour and Kristof van Laerhoven, In 2018 26th European Signal Processing Conference (EUSIPCO), 2018. [abstractToday’s demands regarding workpiece quality in cutting machine tool processing require automated monitoring of both machine condition and the cutting process. Currently, best-performing monitoring approaches rely on high-frequency acoustic emission (AE) sensor data and definition of advanced features, which involve complex computations. This approach is challenging for machine monitoring via embedded sensor systems with constrained computational power and energy budget. To cope with constrained energy, we rely on data recording with microelectromechanical system (MEMS) vibration sensors, which rely on lower-frequency sampling. To clarify whether these lower-frequency signals bear information for typical machine monitoring prediction tasks, we evaluate data for the most generic machine monitoring task of tool condition monitoring (TCM). To cope with computational complexity of advanced features, we introduce two intelligent preprocessing algorithms. First, we split non-stationary signals of recurrent structure into similar segments. Then, we identify most discriminative spectral differences in the segmented signals that allow for best separation of classes for the given TCM task. Subsequent feature extraction only in most relevant signal segments and spectral regions enables high expressiveness even for simple features. Extensive evaluation of the outlined approach on multiple data sets of different combinations of cutting machine tools, tool types and workpieces confirms its sensibility. Intelligent preprocessing enables reliable identification of stationary segments and most discriminative frequency bands. With subsequent extraction of simple but tailor-made features in these spectral-temporal regions of interest (RoIs), TCM typically framed as multi feature classification problem can be converted to a single feature threshold comparison problem with an average F1 score of 97.89%.][pdf][scholar][bibtex]

2017

Combining Capacitive Coupling with Conductive Clothes: Towards Resource-Efficient Wearable Communication, Wolling, Florian and Scholl, Philipp M. and Reindl, Leonhard M. and Van Laerhoven, Kristof, In Proceedings of the 2017 ACM International Symposium on Wearable Computers, ISWC 2017, Maui, Hawaii, September 11-15, 2017, 2017. [abstractTraditional intra-body communication approaches mostly rely on either fixed cable joints embedded in clothing, or on wireless radio transmission that tends to reach beyond the body. Situated between these approaches is body-coupled communication, a promising yet less-explored method that transmits information across the user's skin. We propose a novel body-coupled communication approach that simplifies the physical layer of data transmission via capacitive coupling between wearable systems with conductive fabrics: This layer provides a stable reference potential for the feedback path in proximity to the attached wearables on the human body, to cancel the erratic dependency on the environmental ground, and to increase the communications' reliability. Evaluation of our prototype shows significant increases in signal quality, due to reduced attenuation and noise. Requirements on hardware and, subsequently, energy consumption, cost, and implementation effort are reduced as well.][pdf][scholar][bibtex]

On the Statistical Properties of Body-Worn Inertial Motion Sensor Data for Identifying Sensor Modality, Scholl, Philipp M. and Van Laerhoven, Kristof, In Proceedings of the 2017 ACM International Symposium on Wearable Computers, ISWC 2017, Maui, Hawaii, September 11-15, 2017, 2017. [abstractInterpreting datasets containing inertial data (acceleration, rate-of-turn, magnetic flux) requires a description of the dataset itself. Often this description is unstructured, stored as a convention or simply not available anymore. In this note, we argue that each modality exhibits particular statistical properties, which allow it to be reconstructed solely from the sensor's data. To investigate this, tri-axial inertial sensor data from five publicly available datasets were analysed. Three statistical properties (mode, kurtosis, and number of modes) are shown to be sufficient for classification - assuming the sampling rate and sample format are known, and that both acceleration and magnetometer data is present. While those assumptions hold, 98% of all 1003 data points were correctly classified.][pdf][scholar][bibtex]
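The three statistics named in the abstract are straightforward to compute per channel; the toy decision rule below is only meant to illustrate how they could separate the modalities and does not reproduce the classifier or thresholds evaluated in the note.

    import numpy as np
    from scipy.stats import kurtosis

    def modality_features(signal, bins=100):
        """Mode, kurtosis, and number of modes of one sensor channel."""
        hist, edges = np.histogram(signal, bins=bins)
        mode_value = edges[np.argmax(hist)]                    # left edge of the most populated bin
        n_modes = sum(1 for i in range(1, bins - 1)            # local maxima of the histogram
                      if hist[i] > hist[i - 1] and hist[i] > hist[i + 1])
        return mode_value, kurtosis(signal), n_modes

    def guess_modality(signal):
        """Toy rule of thumb; thresholds are illustrative assumptions."""
        mode_value, k, n_modes = modality_features(signal)
        if k > 3.0:
            return "acceleration"      # gravity makes resting accelerometer data strongly peaked
        if n_modes > 1:
            return "magnetic flux"
        return "rate-of-turn"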

Human Posture Capture and Editing from Heterogeneous Modalities, Kempfle, Jochen and Van Laerhoven, Kristof, In Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia, p.489--494, 2017. [abstractEstimating human pose can be done with a variety of technologies. Models are typically derived from a motion capturing system that aims at generating a skeleton model as accurately and quickly as possible. To date, a large variety of solutions is available to obtain capture data, often either from the environment, such as from depth cameras, or person-based, such as by body-worn inertial sensors. There are, however, few frameworks that are able to combine different types of human motion data into one environment. To this end, we present a modular and open architecture that specifically allows different capture sources and different filters to be integrated to capture, process, and fuse these, in real time, into a human posture model. We show how our proposed solution leads to a fast and flexible way to deal with multiple motion capture modalities, and how such a framework can be used for editing and combining postures.][pdf][scholar][bibtex]

Lessons Learned From Designing an Instrumented Lighter for Assessing Smoking Status, Scholl, Philipp M. and Van Laerhoven, Kristof, In Ubicomp 2017 Adjunct Proceedings, p.1016--1021, 2017. [abstractAssessing smoking status from wearable sensors can potentially create novel cessation therapies for the global smoking epidemic, without requiring users to regularly fill in questionnaires to obtain their smoking data. In this paper we discuss several design iterations of an instrumented cigarette lighter that records when it is used, to provide the ground-truth for detection from sensor signals measured at the human body, or to provide an alternative low-delay detection mechanism for smoking.][pdf][scholar][bibtex]

5th Int. workshop on human activity sensing corpus and applications (HASCA): towards open-ended context awareness, Nobuo Kawaguchi and Nobuhiko Nishio and Daniel Roggen and Sozo Inoue and Susanna Pirttikangas and Kristof Van Laerhoven, In Adjunct Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers, UbiComp/ISWC 2017, Maui, HI, USA, September 11-15, 2017, p.530--536, 2017. [abstractTechnological advances enable the inclusion of miniature sensors (e.g., accelerometers, gyroscopes) on a variety of wearable/portable information devices. Most current devices utilize these sensors for simple orientation and gesture recognition only. However, in the future the recognition of more complex and subtle human behaviors from these sensors will enable next-generation human-oriented computing in scenarios of high societal value (e.g., dementia care). This will require large-scale human activity corpuses and much improved methods to recognize activities and the context in which they occur. This workshop deals with the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world. As a special topic this year, we wish to reflect on the challenges and possible approaches to recognize situations, events or activities outside of a statically pre-defined pool, which is the current state of the art, and instead adopt an open-ended view on activity and context awareness. Following the huge success of previous years, we are further planning to share these experiences of current research on human activity corpus and their applications among the researchers and the practitioners and to have a deep discussion on the future of activity sensing, in particular towards open-ended contextual intelligence.][pdf][scholar][bibtex]

Experiences from a Wearable-Mobile Acquisition System for Ambulatory Assessment of Diet and Activity, Van Laerhoven, Kristof and Wenzel, Mario and Geelen, Anouk and Huebel, Christopher and Wolters, Maike and Hebestreit, Antje and Andersen, Lene Frost and van't Veer, Pieter and Kubiak, Thomas, In Proceedings of the 4th International Workshop on Sensor-based Activity Recognition and Interaction, 2017. [abstractPublic health trends are currently monitored and diagnosed based on large studies that often rely on pen-and-paper data methods that tend to require a large collection campaign. With the pervasiveness of smart-phones and -watches throughout the general population, we argue in this paper that such devices and their built-in sensors can be used to capture such data more accurately with less effort. We present a system that targets a pan-European and harmonised architecture, using smartphones and wrist-worn activity loggers to enable the collection of data to estimate sedentary behavior and physical activity, plus the consumption of sugar-sweetened beverages. We report on a unified pilot study across three countries and four cities (with different languages, locale formats, and data security and privacy laws) in which 83 volunteers were asked to log beverage consumption along with a series of surveys and longitudinal accelerometer data. Our system is evaluated in terms of compliance, obtained data, and first analyses.][pdf][scholar][bibtex]

Real-time Embedded Recognition of Sign Language Alphabet Fingerspelling in an IMU-Based Glove, Kumar Mummadi, Chaithanya and Philips Peter Leo, Frederic and Deep Verma, Keshav and Kasireddy, Shivaji and Scholl, Philipp Marcel and Van Laerhoven, Kristof, In Proceedings of the 4th International Workshop on Sensor-based Activity Recognition and Interaction, 2017. [abstractData gloves have numerous applications, including enabling novel human-computer interaction and automated recognition of large sets of gestures, such as those used for sign language. For most of these applications, it is important to build mobile and self-contained applications that run without the need for frequent communication with additional services on a back-end server. We present in this paper a data glove prototype, based on multiple small Inertial Measurement Units (IMUs), with a glove-embedded classifier for French Sign Language. In an extensive set of experiments with 57 participants, our system was tested by repeatedly fingerspelling the French Sign Language (LSF) alphabet. Results show that our system is capable of detecting the LSF alphabet with a mean accuracy score of 92% and an F1 score of 91%, with all detections performed on the glove within 63 milliseconds.][pdf][scholar][bibtex]

Detecting Process Transitions from Wearable Sensors: An Unsupervised Labeling Approach, Böttcher, Sebastian and Scholl, Philipp Marcel and Van Laerhoven, Kristof, In Proceedings of the 4th International Workshop on Sensor-based Activity Recognition and Interaction, 2017. [abstractAuthoring protocols for manual tasks such as following recipes, manufacturing processes, or laboratory experiments requires a significant effort. This paper presents a system that estimates individual procedure transitions from the user's physical movement and gestures recorded with inertial motion sensors. This is combined with egocentric or external video recordings, allowing for an efficient review of video archives. We investigate different clustering algorithms on wearable inertial sensor data recorded alongside video data, to automatically create transition marks between task steps. The goal is to match these marks to the transitions given in a description of the workflow, thus creating navigation cues to browse video repositories of manual work. To evaluate the performance of unsupervised clustering algorithms, the automatically generated marks are compared to human-expert created labels on publicly available datasets. Additionally, we tested the approach on a novel data set in a manufacturing lab environment, describing an existing sequential manufacturing process.][pdf][scholar][bibtex] best paper runner-up

What Will We Wear After Smartphones?, Amft, Oliver and Van Laerhoven, Kristof, In IEEE Pervasive Computing, vol.16(4)p.80 -- 85, 2017. [abstractWith wearable computing research recently passing the 20-year mark, this survey looks back at how the field developed and explores where it’s headed. According to the authors, wearable computing is entering its most exciting phase yet, as it transitions from demonstrations to the creation of sustained markets and industries, which in turn should drive future research and innovation.][pdf][scholar][bibtex]

2016

Predicting Grasps with a Wearable Inertial and EMG Sensing Unit for Low-Power Detection of In-Hand Objects, Theiss, Marian and Scholl, Philipp M. and Van Laerhoven, Kristof, In Proceedings of the 7th Augmented Human International Conference 2016, p.4:1--4:8, 2016. [abstractDetecting the task at hand can often be improved when it is also known what object the user is holding. Several sensing modalities have been suggested to identify handheld objects, from wrist-worn RFID readers to cameras. A critical obstacle to using such sensors, however, is that they tend to be too power hungry for continuous usage. This paper proposes a system that detects grasping using first inertial sensors and then Electromyography (EMG) on the forearm, to then selectively activate the object identification sensors. This three-tiered approach would therefore only attempt to identify in-hand objects once it is known that a grasp has occurred. Our experiments show that high recall can be obtained for grasp detection, 95% on average across participants, with the grasping of lighter and smaller objects clearly being more difficult.][pdf][scholar][bibtex]
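The three-tiered activation logic can be summarized in a short sketch; function arguments and the EMG threshold are illustrative assumptions, and only the staged structure reflects the approach described above.

    def tiered_object_detection(imu_is_moving, emg_grasp_score, identify_object, emg_threshold=0.7):
        """Only wake the power-hungry object-identification sensor after cheaper checks pass."""
        if not imu_is_moving():                    # tier 1: inertial sensors, always on
            return None
        if emg_grasp_score() < emg_threshold:      # tier 2: forearm EMG confirms a grasp
            return None
        return identify_object()                   # tier 3: e.g. RFID reader or camera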

An Ad-Hoc Capture System for Augmenting Non-Digital Water Meters, Schwabe, Nils and Scholl, Philipp M. and Van Laerhoven, Kristof, In Proceedings of the 6th International Conference on the Internet of Things, p.25--33, 2016. [abstractDeriving more detailed insights into one's ecological footprint is a prerequisite for reducing one's individual environmental impact. Personal water consumption contributes significantly to this impact, but remains hard to quantify individually unless digital meters are installed. In this paper, we present a dual-sensing approach to retro-fit common water meters with a wireless sensor unit that is able to capture an individual's water usage, and digitally forward it over the home's WiFi network. Utilizing active infrared distance sensing or sensing magnetic flux, it is possible to measure water consumption with an accuracy below 0.1l on commonly installed meters. With a continuous power consumption (assuming a daily water consumption of 2 hours) of less than 20 mW, the system can provide real-time feedback to home-owners, office workers and people sharing such a retro-fitted water supply.][pdf][scholar][bibtex]

Remind: Towards a Personal Remembrance Search Engine for Motion Augmented Multi-media Recordings, Scholl, Philipp M. and van Laerhoven, Kristof, In Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia, p.363--364, 2016. [abstractA searchable database of multi-media recordings provides a way to augment one's memory. This database might contain video, audio and motion data, which is indexed to allow for quick searches. Ultimately, queries for similarity on each recorded modality would be supported. For example, video sequences showing similar objects or comparable sequences of gestures can be retrieved. An important aspect of this challenge is how to encode such multi-modal data, and how to make it searchable. One approach, based on a multi-media container format, is proposed in this paper together with an architecture to allow for similarity queries on multiple modalities.][pdf][scholar][bibtex]

Mobile Interactions Augmented by Wearable Computing: A Design Space and Vision, Schneegass, Stefan and Thomas Olsson and Sven Mayer and Kristof van Laerhoven, In IJMHCI, vol.8(4)p.104--114, 2016. [abstractWearable computing has a huge potential to shape the way we interact with mobile devices in the future. Interaction with mobile devices is still mainly limited to visual output and tactile finger-based input. Despite the visions of next-generation mobile interaction, the hand-held form factor hinders new interaction techniques becoming commonplace. In contrast, wearable devices and sensors are intended for more continuous and close-to-body use. This makes it possible to design novel wearable-augmented mobile interaction methods – both explicit and implicit. For example, the EEG signal from a wearable breast strap could be used to identify user status and change the device state accordingly (implicit) and the optical tracking with a head-mounted camera could be used to recognize gestural input (explicit). In this paper, the authors outline the design space for how the existing and envisioned wearable devices and sensors could augment mobile interaction techniques. Based on designs and discussions in a recently organized workshop on the topic as well as other related work, the authors present an overview of this design space and highlight some use cases that underline the potential therein.][pdf][scholar][bibtex]

Reflect Yourself!, Dietrich, Manuel and van Laerhoven, Kristof, In Lifelogging: Digital self-tracking and Lifelogging - between disruptive technology and cultural transformation, p.213--233, 2016. [abstractIn this paper we introduce an interdisciplinary investigation into the technology of wearable activity recognition and its applications for self-tracking and lifelogging. Wearable activity recognition refers to computer systems capable of automatically detecting human actions. Using these devices for self-tracking provides the users with a new perspective on their actions. Thus people can reflect on their actions in a new way. We work on the topic of wearable activity recognition in an interdisciplinary way, both with a theoretical analytic direction and a concrete system design perspective. The theoretical part of this article is about understanding how people relate to their actions using an activity recognition lifelogging device. It is based on the philosophical theory of action. For the concrete design perspective of wearable activity recognition, we introduce two cases from our current design practice. We bridge the theoretical thoughts and the practical perspective by introducing the (critical) design theory. Based on that, opportunities and limits for self-tracking and self-reflection are the results of the interdisciplinary approach.][pdf][scholar][bibtex]

4th Workshop on Human Activity Sensing Corpus and Applications: Towards Open-ended Context Awareness, Kawaguchi, Nobuo and Nishio, Nobuhiko and Roggen, Daniel and Inoue, Sozo and Pirttikangas, Susanna and van Laerhoven, Kristof, In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, p.690--695, 2016. [pdf][scholar][bibtex]

A Multi-Media Exchange Format for Time-Series Dataset Curation, Scholl, Philipp M. and Van Laerhoven, Kristof, In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, 2016. [abstractExchanging data as comma-separated values (CSV) is slow, cumbersome and error-prone. Especially for time-series data, which is common in Activity Recognition, synchronizing several independently recorded sensors is challenging. Adding second level evidence, like video recordings from multiple angles and time-coded annotations, further complicates the matter of curating such data. A possible alternative is to make use of standardized multi-media formats. Sensor data can be encoded in audio format, and time-coded information, like annotations, as subtitles. Video data can be added easily. All this media can be merged into a single container file, which makes the issue of synchronization explicit. The incurred performance overhead by this encoding is shown to be negligible and compression can be applied to optimize storage and transmission overhead.][pdf][scholar][bibtex]
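As a rough illustration of the "sensor data as audio" idea, a tri-axial stream can be written as a multi-channel WAV file with the Python standard library; the int16 scaling and the WAV container are illustrative choices, and annotations would additionally be stored as a subtitle track before merging everything into one container.

    import wave
    import numpy as np

    def sensor_to_wav(samples, rate_hz, path):
        """Store an (N, 3) array of sensor samples as a 3-channel 16-bit audio file."""
        samples = np.asarray(samples, dtype=np.float64)
        peak = np.abs(samples).max() or 1.0                   # avoid division by zero
        scaled = np.int16(np.clip(samples / peak, -1.0, 1.0) * 32767)
        with wave.open(path, "wb") as wav:
            wav.setnchannels(scaled.shape[1])                 # one audio channel per axis
            wav.setsampwidth(2)                               # 16-bit samples
            wav.setframerate(int(rate_hz))
            wav.writeframes(scaled.tobytes())                 # interleaved frames

    # usage (illustrative): sensor_to_wav(acc_xyz, 100, "acc.wav")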

Interrupts Become Features: Using On-Sensor Intelligence for Recognition Tasks, Van Laerhoven, Kristof and Scholl, Philipp M., In Embedded Engineering Education, p.171--185, 2016. [abstractWearable sensors have traditionally been designed around a micro controller that periodically reads values from attached sensor chips, before analyzing and forwarding data. As many off-the-shelf sensor chips have become smaller and widespread in consumer appliances, the way they are interfaced has become digital and more potent. This paper investigates the impact of using such chips that are not only smaller and cheaper than their predecessors, but also come with an arsenal of extra processing and detection capabilities, built in the sensor package. A case study with accompanying experiments using two MEMS accelerometers shows that using these capabilities can cause significant reductions in resources for data acquisition, and could even support basic recognition tasks.][pdf][scholar][bibtex]

2015

Wearables in the Wet Lab: A Laboratory System for Capturing and Guiding Experiments, Scholl, Philipp M. and Wille, Matthias and Van Laerhoven, Kristof, In The 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2015), 2015. [abstractWet Laboratories are highly dynamic, shared environments full of tubes, racks, compounds, and dedicated machinery. The recording of experiments, despite the fact that several ubiquitous computing systems have been suggested in the past decades, still relies predominantly on hand-written notes. Similarly, the information retrieval capabilities inside a laboratory are limited to traditional computing interfaces, which due to safety regulations are sometimes not usable at all. In this paper, Google Glass is combined with a wrist-worn gesture sensor to support Wetlab experimenters. Taking "in-situ" documentation while an experiment is performed, as well as contextualizing the protocol at hand can be implemented on top of the proposed system. After an analysis of current practices and needs through a series of explorative deployments in wet labs, we motivate the need for a wearable hands-free system, and introduce our specific design to guide experimenters. Finally, using a study with 22 participants evaluating the system on a benchmark DNA extraction experiment, we explore the use of gesture recognition for enabling the system to track where the user might be in the experiment.][pdf][scholar][bibtex]

Proceedings of the 2015 ACM International Symposium on Wearable Computers, ISWC 2015, Osaka, Japan, September 7-11, 2015, Kenji Mase and Marc Langheinrich and Daniel Gatica-Perez and Kristof Van Laerhoven and Tsutomu Terada, 2015. [abstractWelcome to UbiComp/ISWC 2015, the premier international forum for pervasive, ubiquitous, and wearable computing, which takes place from September 9-11, 2015 in Osaka, Japan. The 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2015) is the third installment of the merged format between "Pervasive" and "Ubicomp", the two most renowned conferences in the field. As in previous years, it is co-located with the 19th International Symposium on Wearable Computers (ISWC 2015) - the premier forum to present cutting-edge research results in the fields of wearable computing and on-body mobile technologies. ISWC and UbiComp have been co-located with big success since 2013 in Zurich. This year, the two conferences feature two independent technical programs, but offer a single adjunct track, featuring joint posters, demos, workshops, tutorials, and a common doctoral school. As in previous years, ISWC and UbiComp operate as a single event, i.e., attendees are free to attend events from both conferences interchangeably.][pdf][scholar]

Optimized multi-attribute co-design for maximizing efficiency in Wireless Sensor Networks, Vinay Sachidananda and David Noack and Abdelmajid Khelil and Kristof Van Laerhoven and Philipp M. Scholl, In Tenth IEEE International Conference on Intelligent Sensors, Sensor Networks and Information Processing, ISSNIP 2015, Singapore, April 7-9, 2015, p.1--6, 2015. [abstractA key task in Wireless Sensor Networks (WSNs) is to deliver specific information about a spatial phenomenon of interest. However, in WSNs the operating conditions and/or user requirements are often desired to be evolvable, whether driven by changes of the monitored parameters or WSN properties. To this end, a few sensor nodes sample the phenomenon and transmit the acquired samples, typically multihop, to the application through a gateway called sink. Accurately representing the physical phenomenon and delivering the user-required information reliably and in a timely manner come at the cost of higher energy, as additional messages are required. This work proposes a tunable co-design for network optimization to avoid under- or over-provisioning of information and to account for the interaction of the attributes and their effects on each other. We validate the approach's viability through analytical modeling and simulations for a range of requirements.][pdf][scholar][bibtex]

Assessing Activity Recognition Feedback in Long-term Psychology Trials, Manuel Dietrich and Eugen Berlin and Kristof Van Laerhoven, In 14th International Conference on Mobile and Ubiquitous Multimedia, p.121--130, 2015. [abstractThe physical activities we perform throughout our daily lives tell a great deal about our goals, routines, and behavior, and as such, have been known for a while to be a key indicator for psychiatric disorders. This paper focuses on the use of a wrist-watch with integrated inertial sensors. The algorithms that deal with the data from these sensors can automatically detect the activities that the patient performed from characteristic motion patterns. Such a system can be deployed for several weeks continuously and can thus provide the consulting psychiatrist an insight into their patient's behavior and changes thereof. Since these algorithms will never be flawless, however, a remaining question is how we can support the psychiatrist in assigning confidence to these automatic detections. To this end, we present a study where visualizations at three levels from a detection algorithm are used as feedback, and examine which of these are the most helpful in conveying what activities the patient has performed. Results show that just visualizing the classifier's output performs the best, but that users' confidence in these automated predictions can be boosted significantly by visualizing earlier pre-processing steps.][pdf][scholar][bibtex] honorable mention award

RFID-based Compound Identification in Wet Laboratories with Google Glass, Scholl, Philipp M. and Schultes, Tobias and Van Laerhoven, Kristof, In Proceedings of the 2nd International Workshop on Sensor-based Activity Recognition and Interaction, p.13:1--13:5, 2015. [abstractExperimentation in Wet Laboratories requires tracking and identification of small containers like test tubes, flasks, and bottles. The current practice involves colored adhesive markers, waterproof hand-writing, QR- and Barcodes, or RFID-Tags. These markers are often not self-descriptive and require a lookup table on paper or some digitally stored counterpart. Furthermore they are subject to harsh environmental conditions (e.g. samples are kept in a freezer), and can be hard to share with other lab workers for lack of a consistent annotation system. Increasing their durability, as well as providing a central tracking system for these containers, is therefore of great interest. In this paper we present a system for the implicit tracking of RFID-augmented containers with a wrist-worn reader unit, and a voice-interaction scheme based on a head-mounted display.][pdf][scholar][bibtex]

From Mobile to Wearable: Using Wearable Devices to Enrich Mobile Interaction, Schneegass, Stefan and Mayer, Sven and Olsson, Thomas and Van Laerhoven, Kristof, In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct, p.936--939, 2015. [abstractIn the last decades, mobile phones have turned into sensor-rich devices that use different built-in sensors such as accelerometers or gyroscopes. The sensors have enriched the interaction possibilities, allowing, for example, gestural interaction. With the prevalence of wearable devices and peripherals, such as fitness bracelets and breast straps, the input and output possibilities can be further extended with both new sensors and actuators. Current applications could benefit from them, and entirely new applications could be designed. The design space for new applications needs to be identified, which will again drive advances in mobile and wearable computing. This workshop sets focus on wearable devices as means to enrich smartphones and their interaction capabilities. We will discuss the new design space and generate ideas of new applications. Furthermore, we will provide sensors and actuators allowing the participants to implement rapid prototypes of their novel application ideas.][pdf][scholar][bibtex]

@migo: A Comprehensive Middleware Solution for Participatory Sensing Applications, Rafael Bachiller and Nelson Matthys and Javier Del Cid and Wouter Joosen and Danny Hughes and Kristof Van Laerhoven, In The 14th IEEE International Symposium on Network Computing and Applications, p.1--8, 2015. [abstractIn the participatory sensing model, humans may serve as opportunistic sensors and flexible actuators while also consuming sensing services. Integrating humans into sensing systems has the potential to increase scale and reduce costs. However, contemporary participatory sensing software provides poor consideration of user dynamism, which includes: mobility across networks, mobility across devices and context-awareness. To address these limitations we propose the User Component and User Bindings. The former represents the user as a first-class reconfigurable element of evolving and shared participatory sensing platforms. The latter allows the middleware to support multiple communications channels including Online Social Networks (OSN) to connect users with sensing applications. Our approach increases user participation, reduces out-of-context interactions and only consumes a limited amount of energy by sharing context information between applications. We support these claims by evaluating our approach in a two-week experiment in which three participants take part in three concurrent participatory applications.][pdf][scholar][bibtex]

An Interdisciplinary Approach on the Mediating Character of Technologies for Recognizing Human Activity, Dietrich, Manuel and Van Laerhoven, Kristof, In Philosophies, vol.1(1)p.55, 2015. [abstractIn this paper, we introduce a research project on investigating the relation of computers and humans in the field of wearable activity recognition. We use an interdisciplinary approach, combining general philosophical assumptions on the mediating character of technology with the current computer science design practice. Wearable activity recognition is about computer systems which automatically detect human actions. Of special relevance for our research project are applications using wearable activity recognition for self-tracking and self-reflection, for instance by tracking personal activity data like sports. We assume that activity recognition is providing a new perspective on human actions; this perspective is mediated by the recognition process, which includes the recognition models and algorithms chosen by the designer, and the visualization to the user. We analyze this mediating character with two concepts which are both based on phenomenological thoughts namely first Peter-Paul Verbeek’s theory on human-technology relations and second the ideas of embodied interaction. Embedded in the concepts is a direction which leads to the role of technical design in how technology mediates. Regarding this direction, we discuss two case studies, both in the possible using practice of self-tracking and the design practice. This paper ends with prospects towards a better design, how the technologies should be designed to support self-reflection in a valuable and responsible way.][pdf][scholar][bibtex]

A Typology of Wearable Activity Recognition and Interaction, Dietrich, Manuel and van Laerhoven, Kristof, In Proceedings of the 2nd International Workshop on Sensor-based Activity Recognition and Interaction, p.1:1--1:8, 2015. [abstractIn this paper, we will provide a typology of sensor-based activity recognition and interaction, which we call wearable activity recognition. The typology will focus on a conceptual level regarding the relation between persons and computing systems. Two paradigms, first the activity based seamless and obtrusive interaction and second activity-tracking for reflection, are seen as predominant. The conceptual approach will lead to the key term of this technology research, which is currently underexposed in a wider and conceptual understanding: human action/activity. Modeling human action as a topic for human-computer interaction (HCI) in general has existed since its beginning. We will apply two classic theories which are influential in the HCI research to the application of wearable activity recognition. It is both a survey and a critical reflection on these concepts. As a further goal of our approach, we argue for the relevance and the benefits this typology can have. Besides practical consequences, a typology of the human-computer relation and the discussion of the key term activity can be a medium for exchange with other disciplines. Especially when applications become more serious, for example in health care, a typology including a wider mutual understanding can be useful for cooperation with non-technical practitioners, e.g., doctors or psychologists.][pdf][scholar][bibtex]

Wear is Your Mobile? Investigating Phone Carrying and Use Habits with a Wearable Device, Van Laerhoven, Kristof and Borazio, Marko and Burdinski, Jan Hendrik, In Frontiers in ICT, vol.2(10)2015. [abstractThis article explores properties and suitability of mobile and wearable platforms for continuous activity recognition and monitoring. Mobile phones have become generic computing platforms, and even though they might not always be with the user, they are increasingly easy to develop for and have an unmatched variety of on-board sensors. Wearable units in contrast tend to be purpose-built, and require a certain degree of user adaptation, but they are increasingly used to do continuous sensing. We explore the trade-offs for both device types in a study that compares their sensor data and that explicitly examines how often these devices are being worn by the user. To this end, we have recorded a dataset from 51 participants, who were given a wrist-worn sensor and an app to be used on their Smartphone for two weeks continuously, totalling 638 days (or over 15300 hours) of wearable and mobile data. Results confirm findings of previous studies from North America and show that Smartphones are, on average, on their user less than 23% of the time, mostly during working hours. Just as noteworthy is the high variance in Smartphone use (in carrying, interacting with, and charging the phone) among participants.][pdf][scholar][bibtex]

DUKE: A Solution for Discovering Neighborhood Patterns in Ego Networks, Agha Muhammad and Van Laerhoven, Kristof, In The 9th International AAAI Conference on Web and Social Media (ICWSM), p.268--277, 2015. [abstractGiven the rapid growth of social media websites and the ease of aggregating ever-richer social data, an inevitable research question that can be expected to emerge is whether different interaction patterns of individuals and their meaningful interpretation can be captured for social network analysis. In this work, we present a novel solution that discovers occurrences of prototypical 'ego network' patterns from social media and mobile-phone networks, to provide a data-driven instrument to be used in behavioral sciences for graph interpretations. We analyze nine datasets gathered from social media websites and mobile phones, together with 13 network measures, and three unsupervised clustering algorithms. Further, we use an unsupervised feature similarity technique to reduce redundancy and extract compact features from the data. The reduced feature subsets are then used to discover ego patterns using various clustering techniques. By cluster analysis, we discover that eight distinct ego neighborhood patterns or ego graphs have emerged. This categorization allows concise analysis of users’ data as they change over time. We provide fine-grained analysis for the validity and quality of clustering results. We perform clustering verification based on the following three intuitions: i) analyzing the clustering patterns for the same set of users crawled from three social media networks, ii) associating metadata information with the clusters and evaluating their performance on real networks, iii) studying selected participants over an extended period to analyze their behavior.][pdf][scholar] oral, 19% acceptance rate

Low-power Lessons from Designing a Wearable Logger for Long-term Deployments, Berlin, Eugen and Martin Zittel and Michael Bräunlein and Van Laerhoven, Kristof, In 2015 IEEE Sensors Applications Symposium (SAS 2015), 2015. [abstractThe advent of a range of wearable products for monitoring one’s healthcare and fitness has pushed decades of research into the market over the past years. These units record motion and detect common physical activities to assist the wearer in monitoring fitness, general state of health, and sleeping trends. Most of the detection algorithms on board of these devices however are closed-source and the devices do not allow the recording of raw inertial data. This paper presents a project that, faced by these limitations of commercial wearable products, set out to create an open-source recording platform for activity recognition research that (1) is sufficiently power-efficient, and (2) remains small and comfortable enough to wear, to be able to record raw inertial data for extended periods of time. We study especially, via high-resolution power profiling, several trade-offs present in the choice for the basic hardware components of our prototype, and contribute with three key design areas that have had a significant impact on our prototype design.][pdf][scholar][bibtex]

MyHealthAssistant: An Event-driven Middleware for Multiple Medical Applications on a Smartphone-Mediated Body Sensor Network, Christian Seeger and Van Laerhoven, Kristof and Buchmann, Alejandro, In IEEE J. Biomedical and Health Informatics, vol.19(2)p.752--760, 2015. [abstractAn ever-growing range of wireless sensors for medical monitoring has shown that there is significant interest in monitoring patients in their everyday surroundings. It however remains a challenge to merge information from several wireless sensors, and applications are commonly built from scratch. This paper presents a middleware targeted for medical applications on smartphone-like platforms that relies on an event-based design to enable flexible coupling with changing sets of wireless sensor units, while posing only a minor overhead on the resources and battery capacity of the interconnected devices. We illustrate the requirements for such middleware with three different healthcare applications that were deployed with our middleware solution, and characterize the performance with energy consumption, overhead caused for the smartphone, and processing time under real-world circumstances. Results show that with sensing-intensive applications, our solution only minimally impacts the phone's resources, with an added CPU utilization of 3% and a memory usage under 7 MB. Furthermore, for a minimum message delivery ratio of 99.9%, up to 12 sensor readings per second are guaranteed to be handled, regardless of the number of applications using our middleware.][pdf][scholar][bibtex]
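The event-driven coupling between sensors and applications rests on a publish/subscribe core, which can be sketched in a few lines; channel names and the callback interface below are assumptions for illustration, not the middleware's actual API.

    from collections import defaultdict

    class EventBus:
        """Minimal publish/subscribe fan-out from sensor events to interested applications."""
        def __init__(self):
            self.subscribers = defaultdict(list)      # channel name -> list of callbacks

        def subscribe(self, channel, callback):
            self.subscribers[channel].append(callback)

        def publish(self, channel, event):
            for callback in self.subscribers[channel]:
                callback(event)

    # usage (illustrative):
    # bus = EventBus()
    # bus.subscribe("heart_rate", lambda e: print("app A received", e))
    # bus.publish("heart_rate", {"bpm": 72, "timestamp": 1234567890})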

2014

Towards Benchmarked Sleep Detection with Inertial Wrist-worn Sensing Units, Borazio, Marko and Berlin, Eugen and Kücükyildiz, Nagihan and Scholl, Philipp M. and Van Laerhoven, Kristof, In ICHI 2014, 2014. [abstractThe monitoring of sleep by quantifying sleeping time and quality is pivotal in many preventive health care scenarios. A substantial number of wearable sensing products have been introduced to the market for just this reason, detecting whether the user is either sleeping or awake. Assessing these devices for their accuracy in estimating sleep is a daunting task, as their hardware design tends to be different and many are closed-source systems that have not been clinically tested. In this paper, we present a challenging benchmark dataset from an open source wrist-worn data logger that contains relatively high-frequency (100 Hz) 3D inertial data from 42 sleep lab patients, along with their data from clinical polysomnography. We analyse this dataset with two traditional approaches for detecting sleep and wake states and propose a new algorithm specifically for 3D acceleration data, Estimation of Stationary Sleep-segments (ESS). Results show that all three methods generally over-estimate sleep, with our method performing slightly better (74% overall mean accuracy) than the traditional activity count-based methods.][pdf][scholar][bibtex]
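The general idea of finding stationary wrist segments in 3D acceleration can be sketched as below; epoch length, thresholds, and the merging rule are illustrative assumptions and do not reproduce the ESS parameters evaluated in the paper.

    import numpy as np

    def stationary_segments(acc, fs=100, epoch_s=30, std_thresh=0.02, min_epochs=20):
        """Return (start_s, end_s) runs of low-variance epochs as sleep candidates."""
        mag = np.linalg.norm(acc, axis=1)                       # acceleration magnitude per sample
        epoch_len = int(fs * epoch_s)
        n_epochs = len(mag) // epoch_len
        stds = mag[:n_epochs * epoch_len].reshape(n_epochs, epoch_len).std(axis=1)
        still = stds < std_thresh                               # epoch counts as stationary

        segments, start = [], None
        for i, flag in enumerate(list(still) + [False]):        # sentinel closes a trailing run
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                if i - start >= min_epochs:                     # keep only sufficiently long runs
                    segments.append((start * epoch_s, i * epoch_s))
                start = None
        return segments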

How does Wearable Activity Recognition Influence Users' Actions? A Computer Science and Philosophy Interdisciplinary Investigation, Dietrich, Manuel and Van Laerhoven, Kristof, In IACAP 2014, 2014. [pdf][scholar]

Comparing Google Glass with Tablet-PC as Guidance System for Assembling Tasks, Matthias Wille and Sascha Wischniewski and Philipp M. Scholl and Van Laerhoven, Kristof, In Glass & Eyewear Computers (GEC), 2014. [abstractHead mounted displays (HMDs) can be used as a guidance system for manual assembling tasks: contrary to using a Tablet-PC, instructions are always shown in the field of view while hands are kept free for the task. This is believed to be one of the major advantages of using HMDs. In the study reported here, performance, visual fatigue, and subjective strain were measured in a dual task paradigm. Participants were asked to follow toy car assembly instructions while monitoring a virtual gauge. Both tasks had to be executed in parallel either while wearing Google Glass or using a Tablet-PC. Results show slower performance on the HMD but no difference in subjective strain.][pdf][scholar][bibtex]

Diary-Like Long-Term Activity Recognition: Touch or Voice Interaction?, Philipp M. Scholl and Borazio, Marko and Martin Jänsch and Van Laerhoven, Kristof, In Glass & Eyewear Computers (GEC), 2014. [abstractThe experience sampling methodology is a well-known tool in psychology to assess a subject's condition. Regularly, or whenever an important event happens, the subject stops whatever he is currently involved in and jots down his current perceptions, experience, and activities, which in turn form the basis of these diary studies. Such methods are also widely in use for gathering labelled data for wearable long-term activity recognition, where subjects are asked to note conducted activities. We present the design of a personal electronic diary for daily activities, including user interfaces on a PC, Smartphone, and Google Glass. A 23-participant structured in-field study covering seven different activities highlights the differences between mobile touch interaction and ubiquitous voice recognition for tracking activities.][pdf][scholar][bibtex]

Workshop on Smart Garments: Sensing, Actuation, Interaction, and Applications in Garments, Schneegass, Stefan and Cheng, Jingyuan and Van Laerhoven, Kristof and Amft, Oliver, In Proceedings of the 2014 ACM International Symposium on Wearable Computers: Adjunct Program, p.225--229, 2014. [abstractOver the last years, different wearable electronic devices, technically similar to smart phones, have become available in the form factor of watches and glasses. However, including wearable sensing, actuation, and communication technologies directly into garments is still a great challenge. Clothes offer the chance to unobtrusively integrate new functionalities. Nevertheless, it is essential to take into account that garments and clothes are fundamentally different from electronic devices. Manufacturing processes for fabrics and clothes, drivers for fashion, and user expectations with regard to comfort and durability are not comparable to classical electronic devices. In smart watches and glasses, applications resemble common smart phone functionality (e.g., picture taking, (instant) messaging, voice communication, presentation of reminders) with new input and output channels. In contrast to this, new possibilities for sensing, actuation, and interaction are opening entirely new applications on garments. These new applications need to be identified and will in turn drive the advances in smart garments. In this workshop, we focus on novel applications for garments. We discuss underlying abstraction layers that allow developers to create applications that are independent from a specific garment and that can be used with different garments. Furthermore, we invite research contributions and position statements on sensing and actuation as the basic mechanisms for smart garments. Overall the workshop aims at improving our understanding of the fundamental challenges when wearable computing moves beyond accessories into garments.][pdf][scholar][bibtex]

Wearable Digitization of Life Science Experiments, Scholl, Philipp M. and Van Laerhoven, Kristof, In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, p.1381--1388, 2014. [abstractExperimental work in Life Sciences is done with protective garments to contain harmful agents and to avoid contaminations. This limits the amount of documentation that can be done during experimentation, since pen'n'paper and other equipment is hardly allowed in those environments. Relying on her memory, the scientist has to reconstruct the important details of her experiment later on. Wearable computers, like Google Glass or wrist-worn Smartwatches, can enhance the scientist's ability to record key information while conducting her experiment. Especially the possibility of hands-free and implicit interaction with the wearable system creates new possibilities for augmenting the scientist's memory.][pdf][scholar][bibtex]

Integrating Wireless Sensor Nodes in the Robot Operating System, Scholl, Philipp M. and Brachmann, Martina and Santini, Silvia and Van Laerhoven, Kristof, In Cooperative Robots and Sensor Networks 2014, p.141--157, 2014. [abstractThe Robot Operating System (ROS) is a popular middleware that eases the design, implementation, and maintenance of robot systems. In particular, ROS enables the integration of a large number of heterogeneous devices in a single system. To allow these devices to communicate and cooperate, ROS requires device-specific interfaces to be available. This restricts the number of devices that can effectively be integrated in a ROS-based system. In this work we present the design, implementation, and evaluation of a ROS middleware client that allows constrained devices like wireless sensor nodes to be integrated in a ROS-based system. Wireless sensor nodes embedded in the environment in which a robot system is operating can indeed help robots in navigating and interacting with the environment. The client has been implemented for devices running the Contiki operating system but its design can be readily extended to other systems like, e.g., TinyOS. Our evaluation shows that: in-buffer processing of ROS messages without relying on dynamic memory allocation is possible; message contents can be accessed conveniently using well-known concepts of the C language (structs) with negligible processing overhead with respect to a C++-based client; and that ROS' message-passing abstraction facilitates the integration of devices running event-based systems like Contiki.][pdf][scholar][bibtex]

ISWC 2013-Wearables Are Here to Stay., Daniel Roggen and Daniel Gatica-Perez and Masaaki Fukumoto and Van Laerhoven, Kristof, In IEEE Pervasive Computing, vol.13(1)p.14--18, 2014. [abstractThe 17th edition of the IEEE International Symposium on Wearable Computers (ISWC) was a tremendous success. With 17 years of history, it was the perfect place to get a sense of what has been achieved, where our community stands today, and what lies ahead.][pdf][scholar][bibtex]

2013

A Site Properties Assessment Framework for Wireless Sensor Networks, Gurov, Iliya and Guerrero, Pablo and Brachmann, Martina and Silvia Santini and Van Laerhoven, Kristof and Buchmann, Alejandro, In The 11th ACM Conference on Embedded Networked Sensor Systems (SenSys 2013), 2013. [abstractComparing experimental results obtained on different wireless sensor network (WSN) deployments is typically very cumbersome and in most cases unfeasible. This is due to the lack of a methodology to describe the properties of WSN deployments and the experimental conditions under which experiments have been run. Our work focuses on the design and development of a site properties assessment framework, called SiteWork, that aims at providing a means to quickly, automatically and accurately quantify the properties of a WSN. This poster abstract describes the preliminary design and evaluation of the basic site properties assessment mechanisms provided by SiteWork.][pdf][scholar][bibtex]

Using Time Use with Mobile Sensor Data: A Road to Practical Mobile Activity Recognition?, Borazio, Marko and Van Laerhoven, Kristof, In 12th International Conference on Mobile and Ubiquitous Multimedia, 2013. [abstractHaving mobile devices that are capable of finding out what activity the user is doing has been suggested as an attractive way to alleviate interaction with these platforms, and has been identified as a promising instrument in, for instance, medical monitoring. Although results of preliminary studies are promising, researchers tend to use high sampling rates in order to obtain adequate recognition rates with a variety of sensors. What has not been fully examined yet are ways to integrate information that does not come from sensors, but lies in vast databases such as time use surveys. We examine using such statistical information combined with mobile acceleration data to determine 11 activities. We show how sensor and time survey information can be merged, and we evaluate our approach on continuous day-and-night activity data from 17 different users over 14 days each, resulting in a data set of 228 days. We conclude with a series of observations, including the types of activities for which the use of statistical data has particular benefits.][pdf][scholar][bibtex]
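One plausible way to merge the two information sources is a naive-Bayes style combination of the classifier's output with an hour-of-day activity prior taken from the time use survey; the function below is a sketch under that assumption, not the exact fusion scheme evaluated in the paper.

    def fuse_with_time_use(sensor_probs, time_use_prior, hour):
        """Combine per-activity classifier probabilities with a survey-based prior for one hour."""
        prior = time_use_prior[hour]                             # activity -> population probability
        fused = {a: p * prior.get(a, 1e-6) for a, p in sensor_probs.items()}
        total = sum(fused.values())
        return {a: p / total for a, p in fused.items()}

    # usage (illustrative numbers):
    # fuse_with_time_use({"sleeping": 0.4, "eating": 0.6},
    #                    {23: {"sleeping": 0.80, "eating": 0.05}}, hour=23)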

A Publish/Subscribe Middleware for Body and Ambient Sensor Networks that Mediates between Sensors and Applications, Christian Seeger and Van Laerhoven, Kristof and Sauer, Jens and Buchmann, Alejandro, In IEEE International Conference on Healthcare Informatics (ICHI 2013), 2013. [abstractContinuing development of an increasing variety of sensors has led to a vast increase in sensor-based telemedicine solutions. A growing range of modular sensors, and the need of having several applications working with those sensors, has led to an equally extensive increase in efforts for system development. In this paper, we present an event-driven middleware for on-body and ambient sensor networks that allows multiple applications to define information types of their interest in a publish/subscribe manner. Incoming sensor data is hereby transformed into the desired data representation, which lifts the burden of adapting the application with respect to the connected sensors off the developers' shoulders. Furthermore, an unsupervised on-the-fly reloading of transformation rules from a remote server allows the system's adaptation to future applications and sensors at run-time. Application-specific event channels provide tailor-made information retrieval as well as control over the dissemination of critical information. The system is evaluated based on an Android implementation, with transformation rules implemented as OSGi bundles that are retrieved from a remote web server. Evaluation shows a low impact of running the transformation rules on a phone and highlights the reduced energy consumption by having fewer sensors serving multiple applications. It also points out the behavior and limits of the application-specific event channels and the event transformation with respect to CPU utilization, delivery ratio, and memory usage.][pdf][scholar][bibtex] best paper award

When Do You Light a Fire? Capturing Tobacco Use with Situated, Wearable Sensors, Philipp M. Scholl and Nagihan Kücükyildiz and Van Laerhoven, Kristof, In First Workshop on Human Factors and Activity Recognition in Healthcare, Wellness and Assisted Living (Recognize2Interact), 2013. [abstractAn important step towards assessing smoking behavior is to detect and log smoking episodes in an unobtrusive way. Detailed information on an individual's consumption can then be used to highlight potential health risks and behavioral statistics to increase the smoker's awareness, and might be applied in smoking cessation programs. In this paper, we present an evaluation of two different monitoring prototypes which detect a user's smoking behavior, based on augmenting a smoking accessory. Both prototypes capture and record instances when the user smokes, and are sufficiently robust and power efficient to allow deployments of several weeks. A real-world feasibility study with 11 frequently-smoking participants investigates the deployment and adoption of the system, showing that smokers are generally unaware of their daily smoking patterns, and tend to overestimate their consumption.][pdf][scholar][bibtex]

Bridging the Last Gap: LedTX - Optical Data Transmission of Sensor Data for Web-Services, Philipp M. Scholl and Nagihan Kücükyildiz and Van Laerhoven, Kristof, 2013. [abstractData transmission from small-scale data loggers such as human activity recognition sensors is an inherent system design challenge. Interfaces based on USB or Bluetooth still require platform-dependent code on the retrieval computer system, and therefore require a large maintenance effort. In this paper, we present LedTX, a system that is able to transmit wirelessly through LEDs and the camera included in most users' hardware. This system runs completely in modern browsers and presents a uni-directional, platform-independent communication channel. We illustrate this system on the UbiLighter, an instrumented lighter that tracks one's smoking behaviour.][pdf][scholar][bibtex]

Connecting Wireless Sensor Networks to the Robot Operating System, Philipp M. Scholl and Brahim El Mahjoub and Silvia Santini and Van Laerhoven, Kristof, In The 8th International Symposium on Intelligent Systems Techniques for Ad hoc and Wireless Sensor Networks, 2013. [pdf][scholar][bibtex]

Localization in Wireless Networks via Laser Scanning and Bayesian Compressed Sensing, Sofia Nikitaki and Philipp M. Scholl and Van Laerhoven, Kristof and Panagiotis Tsakalides, In IEEE 14th Workshop on Signal Processing Advances in Wireless Communications (SPAWC 2013), 2013. [abstractWiFi indoor localization has seen a renaissance with the introduction of RSSI-based approaches. However, manual fingerprinting techniques that split the indoor environment into pre-defined grids are implicitly bounding the maximum achievable localization accuracy. WoLF, our proposed Wireless localization and Laser-scanner assisted Fingerprinting system, solves this problem by automating the way indoor fingerprint maps are generated. We furthermore show that WiFi localization on the generated high resolution maps can be performed by sparse reconstruction, which exploits the peculiarities imposed by the physical characteristics of indoor environments. Particularly, we propose a Bayesian Compressed Sensing (BCS) approach in order to find the position of the mobile user and dynamically determine the sufficient number of APs required for accurate positioning. BCS employs a Bayesian formalism in order to reconstruct a sparse signal using an underdetermined system of equations. Experimental results with real data collected in a university building validate WoLF in terms of localization accuracy under actual environmental conditions.][pdf][scholar][bibtex]

Quantitative Analysis of Community Detection Methods for Longitudinal Mobile Data, Agha Muhammad and Van Laerhoven, Kristof, In International Conference on Social Intelligence and Technology (SOCIETY) 2013, 2013. [abstractMobile phones are now equipped with an increasingly large number of built-in sensors that can be utilized to collect long-term socio-temporal data of social interactions. Moreover, the data from different built-in sensors can be combined to predict social interactions. In this paper, we perform a quantitative analysis of 6 community detection algorithms to uncover the community structure from the mobile data. We use Bluetooth, WLAN, GPS, and contact data for analysis, where each modality is modelled as an undirected weighted graph. We evaluate community detection algorithms across 6 inter-modality pairs, and use well-known partition evaluation features to measure clustering similarity between the pairs. We compare the performance of different methods based on the delivered partitions, and analyse the graphs at different times to examine community stability.][pdf][scholar][bibtex]

Allowing Early Inspection of Activity Data from a Highly Distributed Bodynet with a Hierarchical-Clustering-of-Segments Approach, Kreil, Matthias and Van Laerhoven, Kristof and Lukowicz, Paul, In 10th Annual Body Sensor Networks Conference 2013, 2013. [abstractThe output delivered by body-wide inertial sensing systems has proven to contain sufficient information to distinguish between a large number of complex physical activities. The bottlenecks in these systems are in particular the parts of such systems that calculate and select features, as the high dimensionality of the raw sensor signals with the large set of possible features tends to increase rapidly. This paper presents a novel method using a hierarchical clustering method on raw trajectory and angular segments from inertial data to detect and analyze the data from such a distributed set of inertial sensors. We illustrate on a public dataset, how this novel way of modeling can be of assistance in the process of designing a fitting activity recognition system. We show that our method is capable of highlighting class-representative modalities in such high-dimensional data and can be applied to pinpoint target classes that might be problematic to classify at an early stage.][pdf][scholar][bibtex]

Sensor Networks for Railway Monitoring: Detecting Trains from their Distributed Vibration Footprints, Berlin, Eugen and Van Laerhoven, Kristof, In 9th IEEE International Conference on Distributed Computing in Sensor Systems (DCOSS 2013), p.80--87, 2013. [abstractWe report in this paper on a wireless sensor network deployment at railway tracks to monitor and analyze the vibration patterns caused by trains passing by. We investigate in particular a system that relies on having a distributed network of nodes that individually contain efficient feature extraction algorithms and classifiers that fit the restricted hardware resources, rather than using few complex and specialized sensors. A feasibility study is described on the raw data obtained from a real-world deployment on one of Europe's busiest railroad sections, which was annotated with the help of video footage and contains vibration patterns of 186 trains. These trains were classified into 6 types by various methods, the best performing at an accuracy of 97\%. The trains' length in wagons was estimated with a mean-squared error of 3.98. Visual inspection of the data shows further opportunities in the estimation of train speed and detection of worn-down cargo wheels.][pdf][scholar][bibtex]

Discovering Common Structures in Mobile Call Data: An Efficient Way to Clustering Ego Graphs, Agha Muhammad and Van Laerhoven, Kristof, In Third conference on the Analysis of Mobile Phone Datasets (NetMob2013), Special session on the D4D challenge, 2013. [abstractThis paper conducts a study on a mobile call data set from 5000 individuals in order to examine what type of prototypical calling behaviors tend to occur for individual users. By representing the call data with methods from graph analysis, several features are suggested to characterize the shape and type of neighborhood graph around each mobile phone user. By cluster analysis of these features for all mobile users, we show that the data set contains seven distinct types of so-called neighborhood graphs or ego graphs. This categorization allows concise analysis of users’ call data as they change over time and might be used as a sociological tool to detect rhythms and outliers in call behavior.][pdf][scholar]

Quantitative Analysis of Community Detection Methods for Longitudinal Mobile Data, Agha Muhammad and Van Laerhoven, Kristof, In Third conference on the Analysis of Mobile Phone Datasets (NetMob2013), 2013. [abstractMobile phones are now equipped with an increasingly large number of built-in sensors that can be utilized to collect long-term socio-temporal data of social interactions. Moreover, the data from different built-in sensors can be combined to predict social interactions. In this paper, we perform a quantitative analysis of 6 well-established community detection algorithms to uncover the community structure from the mobile data. We use Bluetooth, WLAN, GPS, and contact data for analysis, where each modality is modelled as an undirected weighted graph. We evaluate community detection algorithms across 6 inter-modality pairs, and use well-known partition evaluation features to measure clustering similarity between the pairs. We compare the performance of different methods based on the delivered partitions.][pdf][scholar][bibtex]

Improving Activity Recognition without Sensor Data: A Comparison Study of Time Use Surveys, Borazio, Marko and Van Laerhoven, Kristof, In 4th International Augmented Human Conference, 2013. [abstractWearable sensing systems, through their proximity with their user, can be used to automatically infer the wearer's activity to obtain detailed information on availability, behavioural patterns, and health. For this purpose, classifiers need to be designed and evaluated with sufficient training data from these sensors and from a representative set of users, which requires starting this procedure from scratch for every new sensing system and set of activities. To alleviate this procedure and optimize classification performance, the use of time use surveys has been suggested: These large databases typically contain several days' worth of detailed activity information from a large population of hundreds of thousands of participants. This paper uses a strategy first suggested by Partridge in 2008 that utilizes time use diaries in an activity recognition method. We offer a comparison of the aforementioned North-American data with a large European database, showing that although there are several cultural differences, certain important features are shared between both regions. By cross-validating across the 5160 households in this new data, with activity data of 13798 individuals, time and the participant's location turn out to be especially distinctive features.][pdf][scholar][bibtex]

Already Up? Using Mobile Phones to Track & Share Sleep Behavior, Shirazi, Alireza Sahami and James Clawson and Yashar Hassanpour and Mohammad J. Tourian and Ed Chi and Borazio, Marko and Schmidt, Albrecht and Van Laerhoven, Kristof, In International Journal of Human-Computer Studies, 2013. [abstractUsers share a lot of personal information with friends, family members, and colleagues via social networks. Surprisingly, some users choose to share their sleeping patterns, perhaps both for awareness as well as a sense of connection to others. Indeed, sharing basic sleep data, whether a person has gone to bed or woken up, not only informs others about one’s sleeping routines, but also indicates physical state and reflects a sense of wellness. We present Somnometer, a social alarm clock for mobile phones that helps users to capture and share their sleep patterns. While the sleep rating is obtained from explicit user input, the sleep duration is estimated based on monitoring a user’s interactions with the app. Observing that many individuals currently utilize their mobile phone as an alarm clock revealed behavioral patterns that we were able to leverage when designing the app. We assess whether it is possible to reliably monitor one’s sleep duration using such apps. We further investigate whether providing users with the ability to track their sleep behavior over a long time period can empower them to engage in healthier sleep habits. We hypothesize that sharing sleep information with social networks impacts awareness and connectedness among friends. The result from a controlled study reveals that it is feasible to monitor a user’s sleep duration based just on her interactions with an alarm clock app on the mobile phone. The results from both an in-the-wild study and a controlled experiment suggest that providing a way for users to track their sleep behaviors increased user awareness of sleep patterns and induced healthier habits. However, we also found that, given the current broadcast nature of existing social networks, users were concerned with sharing their sleep patterns indiscriminately.][pdf][scholar][bibtex]

2012

Diagnosing the Weakest Link in WSN Testbeds: A Reliability and Cost Analysis of the USB Backchannel, Guerrero, Pablo and Gurov, Iliya and Buchmann, Alejandro and Van Laerhoven, Kristof, In 7th IEEE International Workshop on Practical Issues in Building Sensor Network Applications (SenseApp 2012), 2012. [abstractThis paper highlights and characterizes the main obstacle to deploying a robust wireless sensor network testbed: the USB connections that link each of the nodes via ethernet gateways to the central server. Unfortunately, these connections are also the components that, when properly installed, can reduce testbed costs by attaching multiple nodes per gateway. After illustrating how unreliable current solutions can become (regardless of the used sensor nodes, USB cabling, or gateway setup), a set of experiments led to a list of dos and don'ts in testbed deployment. Furthermore, a simple and cost-effective suggestion is presented that makes it possible to bypass current USB backchannel issues, leading to a more robust testbed that avoids manual maintenance of individual nodes.][pdf][scholar][bibtex] oral, 20\% acceptance rate

Are You in Bed with Technology?, Schmidt, Albrecht and Shirazi, Alireza Sahami and Van Laerhoven, Kristof, In IEEE Pervasive Computing, vol.11(4)p.4--7, 2012. [abstractGood sleep can positively affect our performance, while a lack of sleep can affect our memory, immune system, cognitive functioning, learning abilities, and alertness. Researchers have thus investigated numerous technologies and approaches to monitor individuals' sleep behavior at home. One goal of such technologies is to collect information that can increase users' awareness of their sleep habits to persuade them to adopt healthier routines. This department looks at new products in this space, ranging from smart alarm clocks to advanced monitoring devices.][pdf][scholar][bibtex]

Detecting Leisure Activities with Dense Motif Discovery, Berlin, Eugen and Van Laerhoven, Kristof, In 14th ACM International Conference on Ubiquitous Computing (UbiComp 2012), p.250--259, 2012. [abstractThis paper proposes an activity inference system that has been designed for deployment in mood disorder research, which aims at accurately and efficiently recognizing selected leisure activities in week-long continuous data. The approach to achieve this relies on an unobtrusive and wrist-worn data logger, in combination with a custom data mining tool that performs early data abstraction and dense motif discovery to collect evidence for activities. After presenting the system design, a feasibility study on weeks of continuous inertial data from 6 participants investigates both accuracy and execution speed of each of the abstraction and detection steps. Results show that our method is able to detect target activities in a large data set with a comparable precision and recall to more conventional approaches, in approximately the time it takes to download and visualize the logs from the sensor.][pdf][scholar][bibtex] oral, 19.3\% acceptance rate
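
To make the motif-discovery idea above concrete, here is a much-simplified sketch (not the paper's dense motif discovery): a 1-D acceleration signal is abstracted into a symbol string and the most frequently repeated subsequence is reported as candidate evidence for an activity; the signal, bin count, and motif length are arbitrary choices for illustration.

```python
# Illustrative sketch only: abstract a signal into coarse symbols, then count
# densely repeated subsequences (motifs) as evidence for a target activity.
import numpy as np
from collections import Counter

def symbolise(signal, n_bins=4):
    """Map a 1-D signal to a symbol string using equal-width amplitude bins."""
    edges = np.linspace(signal.min(), signal.max(), n_bins + 1)[1:-1]
    return "".join("abcd"[i] for i in np.digitize(signal, edges))

def densest_motif(symbols, length=8):
    """Return the most frequent subsequence of the given length and its count."""
    counts = Counter(symbols[i:i + length] for i in range(len(symbols) - length + 1))
    return counts.most_common(1)[0]

t = np.linspace(0, 60, 3000)
signal = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.default_rng(2).normal(size=t.size)
motif, count = densest_motif(symbolise(signal))
print(f"densest motif {motif!r} occurs {count} times")
```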

Trainspotting: Combining Fast Features to Enable Detection on Resource-constrained Sensing Devices, Berlin, Eugen and Van Laerhoven, Kristof, In The Ninth International Conference on Networked Sensing Systems (INSS 2012), p.1--8, 2012. [abstractThis paper focuses on spotting and classifying complex and sporadic phenomena directly on a sensor node, whereby a relatively long sequence of sensor samples needs to be considered at a time. Using fast feature extraction from streaming data that can be implemented on the sensor nodes, we show that on-sensor event classification can be achieved. This approach is of particular interest for wireless sensor networks as it promises to reduce wireless traffic significantly, as only events need to be transmitted instead of potentially large chunks of inertial data. The presented approach characterizes the essence of an event's signal by combining several simple features on low-cost MEMS inertial data. Using a scenario and real data from vibration signatures generated by passing trains, we show how with this approach the classification and the estimation of length for passing trains are possible on miniature nodes placed near the railroad tracks. Experiments show that, at the cost of slightly more local processing, the chosen features produce good train type classification with up to 90\% of trains correctly identified.][pdf][scholar][bibtex]

jNode: a Sensor Network Platform that Supports Distributed Inertial Kinematic Monitoring, Philipp M. Scholl and Matthias Berning and Van Laerhoven, Kristof and Markus Scholz and Dawud Gordon, In The Ninth International Conference on Networked Sensing Systems (INSS 2012), 2012. [abstractBecause of the intrinsic advantages of wireless inertial motion tracking, standalone devices that integrate inertial motion units with wireless networking capabilities have gained much interest in recent years. Several platforms, both commercially available and academic, have been proposed to balance the challenges of a small form-factor, power consumption, accuracy and processing speed. Applications include ambulatory monitoring to support healthcare, sport activity analysis, recognizing human group behaviour, navigation support for humans, robots and unmanned vehicles, but also in structural monitoring of large buildings. This paper provides an analysis of the current state-of-the-art platforms in wireless inertial motion tracking and presents a novel hybrid tracking platform that is extensible, low-power, flexible enough to be used for both short- and long-term monitoring and based on a firmware that allows it to be easily adapted after being deployed. After describing the architecture, the design choices in both hardware and software, and arguing why the jNode platform is different from previous work, a performance characterization is given of a fully functional prototype.][pdf][scholar][bibtex]

Fast Indoor Radio-Map Building for RSSI-based Localization Systems, Philipp M. Scholl and Stefan Kohlbrecher and Vinay Sachidananda and Van Laerhoven, Kristof, In The Ninth International Conference on Networked Sensing Systems (INSS 2012), 2012. [abstractWireless indoor localization systems based on RSSI values typically consist of an offline training phase and an online position determination phase. During the offline phase, georeferenced RSSI measurements, called fingerprints, are recorded to build a radiomap of the building. This radiomap is then searched during the position determination phase to estimate another node's location. Usually the radiomap is built manually, either by users pin-pointing their location on a ready-made floorplan or by moving in pre-specified patterns while scanning the network for RSSI values. This cumbersome process leads to inaccuracies in the radiomap. Here, we propose a system to build the indoor- and radio-map simultaneously by using a handheld mapping system employing a laser scanner in an IEEE802.15.4-compatible network. This makes indoor- and radio-mapping for wireless localization less cumbersome, faster and more reliable.][pdf][scholar][bibtex]

Discovery of User Groups within Mobile Data, Agha Muhammad and Van Laerhoven, Kristof, In Nokia Mobile Data Challenge (MDC 2012), 2012. [pdf][scholar]

TUDμNet, a Metropolitan-Scale Federation of Wireless Sensor Network Testbeds, Guerrero, Pablo and Buchmann, Alejandro and Khelil, Abdelmajid and Van Laerhoven, Kristof, In 9th European Conference on Wireless Sensor Networks (EWSN 2012), 2012. [abstractTo address the real-world challenges in sensor network evaluation, testbeds have been proposed to enable experimentation without taking the typical deployment hurdles of robustly mounting the hardware, installing batteries, and instrumenting sensor nodes for data collection. In the recent past, several research institutions across Europe proposed to federate their testbeds. However, providing scalability and transparency despite the high heterogeneity in hardware and software between sites proves to be a tough problem. In this paper we introduce TUDμNet, a metropolitan-scale federation of sensor network testbeds that spans several buildings within a city. We describe its architecture, the current sites and the control infrastructure as a solution for managing experiments at metropolitan scale.][pdf][scholar]

Combining Wearable and Environmental Sensing into an Unobtrusive Tool for Long-Term Sleep Studies, Borazio, Marko and Van Laerhoven, Kristof, In 2nd ACM SIGHIT International Health Informatics Symposium (IHI 2012), 2012. [abstractLong-term sleep monitoring of patients has been identified as a useful tool to observe how sleep trends manifest themselves over weeks or months for use in behavioral studies. In practice, this has been limited to coarse-grained methods such as actigraphy, for which the levels of activity are logged, and which provide some insight but have simultaneously been found to lack the accuracy needed to study sleeping disorders. This paper presents a method to automatically detect the user's sleep at home on a long-term basis. Inertial, ambient light, and time data are tracked from a wrist-worn sensor, and additional night vision footage is used for later expert inspection. An evaluation on over 4400 hours of data from a focus group of test subjects demonstrates high-recall night segment detection, obtaining an average recall of 94\%. Further, a clustering to visualize reoccurring sleep patterns is presented, and a myoclonic twitch detection is introduced, which exhibits a precision of 74\%. The results indicate that long-term sleep pattern detections are feasible.][pdf][scholar][bibtex] oral, 18\% acceptance rate

An Event-based BSN Middleware that supports Seamless Switching between Sensor Configurations, Christian Seeger and Buchmann, Alejandro and Van Laerhoven, Kristof, In 2nd ACM SIGHIT International Health Informatics Symposium (IHI 2012), 2012. [abstractRecent advances in wearable sensors have led to a surge of novel fitness and preventive health care systems that measure step counts, activity levels, and performed exercises with inertial sensors, enabling users to monitor their condition and day-to-day lifestyle. This paper presents a middleware designed for a smartphone unit to support health monitoring applications. Its event-driven architecture enables modular system design and seamless switching between sets of embedded sensors. The strengths of the middleware are highlighted in a deployed feasibility study where daily and gym activities are recognized through an interchangeable set of wireless sensors. The study demonstrates that the setup is suitable for daily usage with minimal impact on the phone's resources.][pdf][scholar][bibtex] oral, 18\% acceptance rate

A Feasibility Study of Wrist-Worn Accelerometer Based Detection of Smoking Habits, Philipp M. Scholl and Van Laerhoven, Kristof, In esIoT 2012, 2012. [abstractCigarette smoking is one of the major causes of lung cancer, and has been linked to a large number of other cancer types and diseases. Smoking cessation, the only means of avoiding these serious risks, is hindered by how easily these risks can be ignored in day-to-day life. In this paper we present a feasibility study with smokers wearing an accelerometer device on their wrist over the course of a week to detect their smoking habits, based on detecting typical gestures carried out while smoking a cigarette. We provide a basic detection method that identifies when the user is smoking, with the goal of building a system that provides an individualized risk estimation to increase awareness and motivate smoking cessation. Our basic method detects typical smoking gestures with a precision of 51.2\% and a user-specific recall of over 70\% - creating evidence that an unobtrusive wrist-worn sensor can detect smoking.][pdf][scholar][bibtex]

Constructing Ambient Intelligence, Wichert, Reiner and Van Laerhoven, Kristof and Gelissen, Jean, vol.277, p.235, 2012. [abstractThis book constitutes the refereed proceedings of the AmI 2011 Workshops, held in Amsterdam, The Netherlands, in November 2011. The 55 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on aesthetic intelligence: designing smart and beautiful architectural spaces; ambient intelligence in future lighting systems; interactive human behavior analysis in open or public spaces; user interaction methods for elderly, people with dementia; empowering and integrating senior citizens with virtual coaching; integration of AMI and AAL platforms in the future internet (FI) platform initiative; ambient gaming; human behavior understanding: inducing behavioral change; privacy, trust and interaction in the internet of things; doctoral colloquium.][pdf][scholar][bibtex]

A Metropolitan-Scale Testbed for Heterogeneous Wireless Sensor Networks to Support CO2 Reduction, Guerrero, Pablo and Mühlhäuser, Max and Strufe, Thorsten and Schneckenburger, Stefan and Hegger, Manfred and Kretzschmar, Birgitt and Buchmann, Alejandro and Van Laerhoven, Kristof and Schweizer, Immanuel, In The Second International Conference on Green Communications and Networking, 2012. [abstractNetwork technology can contribute to reducing CO2 levels in two major ways: by reducing the energy consumption of the network itself, and by supporting application areas that reduce CO2 levels. The impact of the latter is potentially higher. Therefore, we present TUDμNet, a testbed for a metropolitan-scale heterogeneous sensor network with hundreds of nodes that help monitor and control CO2 levels in urban areas. Our testbed has four major application domains where it is being applied: TU Darmstadt's award-winning solar house, where temperature and CO2 levels are monitored; an 80-year-old building in which a WSN is deployed to measure ambient parameters that contribute to future energy-saving remodeling; mobile sensors mounted on the streetcars of the public tramway system to measure location-specific CO2 levels that are collected in a publicly accessible database to obtain CO2 profiles; and a hybrid sensor network in TUD's botanical garden to measure humidity, CO2 levels and soil properties to improve the management of urban parks. In this paper we present the concepts behind the design of our testbed, its design challenges and our solutions, and potential applications of such metropolitan-scale sensor networks.][pdf][scholar][bibtex]

2011

Wireless Sensor Networks in the Wild: Three Practical Issues after a Middleware Deployment, Christian Seeger and Buchmann, Alejandro and Van Laerhoven, Kristof, In the Sixth International Workshop on Middleware Tools, Services and Run-time Support for Networked Embedded Systems (MidSens 2011), 2011. [abstractThis paper reflects on experiences in deploying middleware for a body sensor network, using commercial biosensors. Three types of issues are highlighted that arose during the deployment, which impact middleware design in particular: 1) How can the architecture cope with different levels of data fidelity and propagate those levels to the applications? 2) What is the optimal way to handle temporary disconnections from sensors? and 3) How should the middleware implement sensor-specific peculiarities? Although these issues are described using a specific and demanding health care scenario, we argue that the underlying causes tend to be archetypal for a generic set of sensor network middleware. Awareness of these problem categories and possible solutions are therefore generally relevant for other researchers working on middleware designs for all kinds of sensor networks.][pdf][scholar][bibtex]

myHealthAssistant: A Phone-based Body Sensor Network that Captures the Wearer's Exercises throughout the Day, Christian Seeger and Buchmann, Alejandro and Van Laerhoven, Kristof, In The 6th International Conference on Body Area Networks, 2011. [abstractThis paper presents a novel fitness and preventive health care system with a flexible and easy-to-deploy platform. By using embedded wearable sensors in combination with a smartphone as an aggregator, both daily activities as well as specific gym exercises and their counts are recognized and logged. The detection is achieved with minimal impact on the system's resources through the use of customized 3D inertial sensors embedded in fitness accessories with built-in pre-processing of the initial 100Hz data. It provides a flexible re-training of the classifiers on the phone which allows deploying the system swiftly. A set of evaluations shows a classification performance that is comparable to that of state-of-the-art activity recognition, and that the whole setup is suitable for daily usage with minimal impact on the phone's resources.][pdf][scholar] Best Paper Award

A Feature Set Evaluation for Activity Recognition with Body-Worn Inertial Sensors, Agha Muhammad and Niklas Klein and Van Laerhoven, Kristof and Klaus David, In Constructing Ambient Intelligence, vol.277, p.101--109, 2011. [abstractThe automatic and unobtrusive identification of user activities is a challenging goal in human behavior analysis. The physical activity that a user exhibits can be used as contextual data, which can inform applications that reside in public spaces. In this paper, we focus on wearable inertial sensors to recognize physical activities. Feature set evaluation for 5 typical activities is performed by measuring accuracy for combinations of 6 often-used features on a set of 11 well-known classifiers. To verify the significance of this analysis, a t-test evaluation was performed for every combination of these feature subsets. We identify an easy-to-compute feature set, which gives significant results while utilizing a minimum of resources.][pdf][scholar][bibtex]

Predicting Sleeping Behaviors in Long-Term Studies with Wrist-Worn Sensor Data, Borazio, Marko and Van Laerhoven, Kristof, In International Joint Conference on Ambient Intelligence (AmI-11), vol.LNCS 7040, p.151--156, 2011. [abstractThis paper conducts a preliminary study in which sleeping behavior is predicted using long-term activity data from a wearable sensor. For this purpose, two scenarios are scrutinized: The first predicts sleeping behavior using a day-of-the-week model. In a second scenario typical sleep patterns for either working or weekend days are modeled. In a continuous experiment over 141 days (6 months), sleeping behavior is characterized by four main features: the amount of motion detected by the sensor during sleep, the duration of sleep, and the falling asleep and waking up times. Prediction of these values can be used in behavioral sleep analysis and beyond, as a component in healthcare systems.][pdf][scholar][bibtex]
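
A minimal sketch of the day-of-week model mentioned above, on made-up numbers: nightly features are grouped by weekday, and the per-weekday means act as the prediction for the next matching night.

```python
# Hypothetical data: one row per night with the kind of features named in the
# abstract above; the per-weekday mean serves as a simple predictive model.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
nights = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=28, freq="D"),
    "sleep_duration_h": rng.normal(7.5, 0.7, 28),
    "motion_per_hour": rng.normal(12.0, 3.0, 28),
})
nights["weekday"] = nights["date"].dt.day_name()

model = nights.groupby("weekday")[["sleep_duration_h", "motion_per_hour"]].mean()
print(model.loc["Saturday"])   # predicted features for the next Saturday
```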

Poster Abstract: Adaptive Gym Exercise Counting for myHealthAssistant, Christian Seeger and Buchmann, Alejandro and Van Laerhoven, Kristof, In Body Area Networks (BodyNets), 2011. [pdf][scholar][bibtex]

ISWC 2010: The Latest in Wearable Computing Research, Van Laerhoven, Kristof, In IEEE Pervasive Computing, vol.10(1)2011. [abstractAt the 14th IEEE International Symposium on Wearable Computers (ISWC 2010), researchers and practitioners from all corners of the world met in Seoul’s impressive Coex Center to review recent advances in e-textiles, on-body sensors, and wearable activity-recognition and mobile technologies. The rich program followed the conference’s tradition of a technical program consisting of long and short papers, posters, and demonstrations, flanked by three tutorials, a design contest, and a PhD forum. The technical program reflected aspects of wearable computing ranging from fabric-based integrated-circuit design to novel microvibration activity sensors.][pdf][scholar][bibtex]

2010

How to Log Sleeping Trends? A Case Study on the Long-Term Capturing of User Data, Holger Becker and Borazio, Marko and Van Laerhoven, Kristof, vol.Proceedings of EuroSSC 2010, LNCS 6446, p.15--27, 2010. [abstractDesigning and installing long-term monitoring equipment in the users’ home sphere often presents challenges in terms of reliability, privacy, and deployment. Taking the logging of sleeping postures as an example, this study examines data from two very different modalities, high-fidelity video footage and logged wrist acceleration, that were chosen for their ease of setting up and deployability for a sustained period. An analysis shows the deployment challenges of both, as well as what can be achieved in terms of detection accuracy and privacy. Finally, we evaluate the benefits that a combination of both modalities would bring.][pdf][scholar][bibtex]

Characterizing Sleeping Trends from Postures, Borazio, Marko and Blanke, Ulf and Van Laerhoven, Kristof, In Proceedings of the 14th IEEE International Symposium on Wearable Computers (ISWC 2010), p.167--168, 2010. [abstractWe present an approach to model sleeping trends, using a light-weight setup to be deployed over longer time-spans and with a minimum of maintenance by the user. Instead of characterizing sleep with traditional signals such as EEG and EMG, we propose to use sensor data that is a lot weaker, but also less invasive and that can be deployed unobtrusively for longer periods. By recording wrist-worn accelerometer data during a 4-week-long study, we explore in this poster how sleeping trends can be characterized over long periods of time by using sleeping postures only.][pdf][scholar][bibtex] 21\% acceptance rate

Towards Declarative Query Scoping in Sensor Networks, Jacobi, Daniel and Pablo Ezequiel Guerrero and Khalid Nawaz and Christian Seeger and Herzog, Arthur and Van Laerhoven, Kristof and Ilia Petrov, In From Active Data Management to Event-Based Systems and More, vol.6462, p.281--292, 2010. [pdf][scholar][bibtex]

An On-Line Piecewise Linear Approximation Technique for Wireless Sensor Networks, Berlin, Eugen and Van Laerhoven, Kristof, In 5th IEEE International Workshop on Practical Issues in Building Sensor Network Applications (SenseApp 2010), p.921--928, 2010. [abstractMany sensor network applications observe trends over an area by regularly sampling slow-moving values such as humidity or air pressure (for example in habitat monitoring). Another well-published type of application aims at spotting sporadic events, such as sudden rises in temperature or the presence of methane, which are tackled by detection on the individual nodes. This paper focuses on a zone between these two types of applications, where phenomena that cannot be detected on the nodes need to be observed by relatively long sequences of sensor samples. An algorithm that stems from data mining is proposed that abstracts the raw sensor data on the node into smaller packet sizes, thereby minimizing the network traffic and keeping the essence of the information embedded in the data. Experiments show that, at the cost of slightly more processing power on the node, our algorithm performs a shape abstraction of the sensed time series which, depending on the nature of the data, can extensively reduce network traffic and nodes' power consumption.][pdf][scholar][bibtex]
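
The following is a simplified sketch of on-line piecewise linear approximation in the spirit of the abstract above (a sliding-window segmenter, not the published algorithm): a segment grows while the straight line between its endpoints stays within an error bound, so only compact segment tuples would need to be transmitted instead of raw samples.

```python
# Illustrative segmenter: emit (start, end, value_start, value_end) tuples
# whenever the endpoint-to-endpoint line no longer fits the samples.
import numpy as np

def pla_stream(samples, max_error=0.5):
    segments, start = [], 0
    for end in range(1, len(samples)):
        # interpolate the candidate segment and check its worst-case deviation
        xs = np.arange(start, end + 1)
        line = np.interp(xs, [start, end], [samples[start], samples[end]])
        if np.max(np.abs(samples[start:end + 1] - line)) > max_error:
            segments.append((start, end - 1, samples[start], samples[end - 1]))
            start = end - 1
    segments.append((start, len(samples) - 1, samples[start], samples[-1]))
    return segments

data = np.concatenate([np.linspace(0, 5, 50), np.full(50, 5.0), np.linspace(5, 0, 50)])
print(len(pla_stream(data)), "segments instead of", len(data), "samples")
```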

Standing on the Shoulders of Other Researchers - A Position Statement, Blanke, Ulf and Larlus, Diane and Van Laerhoven, Kristof and Schiele, Bernt, In Proc. of the Workshop "How to do good activity recognition research? Experimental methodologies, evaluation metrics, and reproducibility issues" (Pervasive 2010), 2010. [abstractActivity Recognition has made significant progress in the past years. We strongly believe however that we could make far greater progress if we build more systematically on each other’s work. Comparing the activity recognition community with other more mature communities (e.g., those of computer vision and speech recognition) there appear to be two key ingredients that are missing in ours. First, the more mature communities have established a set of well-defined or accepted research problems, and second, the communities have a tradition to compare their algorithms on established and shared benchmark datasets. Establishing both of these ingredients and evolving them over time in a more explicit manner should enable us to progress our field more rapidly.][pdf][scholar]

A Method for Context Recognition Using Peak Values of Sensors, Kazuya Murao and Van Laerhoven, Kristof and Tsutomu Terada and Shojiro Nishio, In Transactions of Information Processing Society of Japan, vol.51(3)p.1068--1077, 2010. [pdf][scholar]

Coming to Grips with the Objects We Grasp: Detecting Interactions with Efficient Wrist-Worn Sensors, Berlin, Eugen and Liu, Jun and Van Laerhoven, Kristof and Schiele, Bernt, In International Conference on Tangible and Embedded Interaction (TEI 2010), p.57--64, 2010. [pdf][scholar][bibtex]

1999-2009

Demo Abstract: Whac-A-Bee -- A Sensor Network Game, Berlin, Eugen and Guerrero, Pablo and Herzog, Arthur and Jacobi, Daniel and Van Laerhoven, Kristof and Schiele, Bernt and Buchmann, Alejandro, In Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems (SenSys 2009), p.333--334, 2009. [abstractThis paper illustrates both challenges and benefits found in expanding a traditional game concept to a situated environment with a distributed set of wireless sensing modules. Our pervasive game equivalent of the Whac-A-Mole game, Whac-A-Bee, retains the find-and-seek aspects of the original game while extending the location, the number of players, and the time-span in which it can be played. We discuss the obstacles met during this work, and specifically address challenges in making the game robust and flexible enough for large and long-term deployments in unknown territory.][pdf][scholar][bibtex]

When Else Did This Happen? Efficient Subsequence Representation and Matching for Wearable Activity Data, Van Laerhoven, Kristof and Berlin, Eugen, In Proceedings of the 13th International Symposium on Wearable Computers (ISWC 2009), p.69--77, 2009. [pdf][scholar][bibtex]

Enabling Efficient Time Series Analysis for Wearable Activity Data, Van Laerhoven, Kristof and Berlin, Eugen and Schiele, Bernt, In Proceedings of the 8th International Conference on Machine Learning and Applications (ICMLA 2009), p.392--397, 2009. [pdf][scholar][bibtex]

Exploring Semi-Supervised and Active Learning for Activity Recognition, Stikic, Maja and Van Laerhoven, Kristof and Schiele, Bernt, In Proceedings of the 12th International Symposium on Wearable Computers (ISWC 2008), p.81--90, 2008. [pdf][scholar][bibtex]

ADL Recognition Based on the Combination of RFID and Accelerometer Sensing, Stikic, Maja and Huynh, Tam and Van Laerhoven, Kristof and Schiele, Bernt, In Proceedings of the 2nd International Conference on Pervasive Computing Technologies for Healthcare (Pervasive Health 2008), p.258--263, 2008. [pdf][scholar][bibtex]

Sustained Logging and Discrimination of Sleep Postures with Low-Level, Wrist-Worn Sensors, Van Laerhoven, Kristof and Borazio, Marko and Kilian, David and Schiele, Bernt, In Proceedings of the 12th International Symposium on Wearable Computers (ISWC 2008), p.69--77, 2008. [abstractWe present a study which evaluates the use of simple low-power sensors for a long-term, coarse-grained detection of sleep postures. In contrast to the information-rich but complex recording methods used in sleep studies, we follow a paradigm closer to that of actigraphy by using a wrist-worn device that continuously logs and processes data from the user. Experiments show that it is feasible to detect nightly sleep periods with a combination of light and simple motion and posture sensors, and to detect within these segments what basic sleeping postures the user assumes. These findings can be of value in several domains, such as monitoring of sleep apnea disorders, and support the feasibility of a continuous home-monitoring of sleeping trends where users wear the sensor device uninterruptedly for weeks to months in a row.][pdf][scholar][bibtex]

Using Rhythm Awareness in Long-Term Activity Recognition, Van Laerhoven, Kristof and Kilian, David and Schiele, Bernt, In Proceedings of the 12th International Symposium on Wearable Computers (ISWC 2008), p.63--68, 2008. [abstractThis paper reports on research where users' activities are logged for extended periods by wrist-worn sensors. These devices operated for up to 27 consecutive days, day and night, while logging features from motion, light, and temperature. This data, labeled via 24-hour self-recall annotation, is explored for occurrences of daily activities. An evaluation shows that using a model of the users' rhythms can improve recognition of daily activities significantly within the logged data, compared to models that exclusively use the sensor data for activity recognition.][pdf][scholar][bibtex]

Gath-Geva Specification and Genetic Generalization of Takagi-Sugeno-Kang Fuzzy Models, Berchtold, Martin and Riedel, Till and Decker, Christian and Van Laerhoven, Kristof, In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC 2008), 2008. [pdf][scholar][bibtex]

Recording Housekeeping Activities with Situated Tags and Wrist-Worn Sensors: Experiment Setup and Issues Encountered, Stikic, Maja and Van Laerhoven, Kristof, In Proceedings of the 1st International Workshop on Wireless Sensor Networks for Health Care (WSNHC 2007), 2007. [abstractRFID tag readers and accelerometers are two sensing technologies that have recently dropped in both size and cost. Assuming that key household items can easily be tagged, one could legitimately imagine a wrist-worn sensor which incorporates both, to infer Activities of Daily Living (ADL). This paper presents initial challenges in research towards this scenario by describing critical choices made in a series of data recording experiments, in which we intend to capture realistic sensor data in a lab setting, using a set of housekeeping activities as targets.][pdf][scholar]

BadIdeas for Usability and Design of Medicine and Healthcare Sensors, Silva, Paula Alexandra and Van Laerhoven, Kristof, In Proceedings of the 3rd Human-computer interaction and usability engineering of the Austrian computer society conference on HCI and usability for medicine and health care (USAB'07), vol.4799, p.105--112, 2007. [abstractThis paper describes the use of a technique to improve design, develop new uses, and improve the usability of user interfaces. As a case study, we focus on the design and usability of a research prototype of an actigraph - an electronic activity and sleep study device - the Porcupine. The proposed BadIdeas technique was introduced to a team of students who work with this sensor, and the existing design was analysed using this technique. The study found that the BadIdeas technique has promising characteristics that might make it an ideal tool in the prototyping and design of usability-critical appliances.][pdf][scholar][bibtex]

Toward Recognition of Short and Non-repetitive Activities from Wearable Sensors, Zinnen, Andreas and Van Laerhoven, Kristof and Schiele, Bernt, In European Conference on Ambient Intelligence (AmI 2007), vol.4794, p.142--158, 2007. [abstractActivity recognition has gained a lot of interest in recent years due to its potential and usefulness for context-aware computing. Most approaches for activity recognition focus on repetitive or long time patterns within the data. There is however high interest in recognizing very short activities as well, such as pushing and pulling an oil stick or opening an oil container as sub-tasks of checking the oil level in a car. This paper presents a method for the latter type of activity recognition using start and end postures (short fixed positions of the wrist) in order to identify segments of interest in a continuous data stream. Experiments show high discriminative power for using postures to recognize short activities in continuous recordings. Additionally, classifications using postures and HMMs for recognition are combined.][pdf][scholar][bibtex]

Memorizing What You Did Last Week: Towards Detailed Actigraphy With A Wearable Sensor, Van Laerhoven, Kristof and Aronsen, Andre, In 27th International Conference on Distributed Computing Systems Workshops (ICDCSW 2007), p.47, 2007. [abstractWith sensors becoming smaller and more power efficient, wearable sensors that anyone could wear are becoming a feasible concept. We demonstrate a small lightweight module, called Porcupine, which aims at continuously monitoring human activities as long as possible, and as fine-grained as possible. We present an initial analysis of a set of abstraction algorithms that combine and process raw accelerometer data and tilt switch states, to get descriptors of the user's motion-based activities. The algorithms are running locally, and the information they produce is stored in on-board memory for later analysis.][pdf][scholar][bibtex]

Fair Dice: A Tilt and Motion-Aware Cube with a Conscience., Van Laerhoven, Kristof and Hans-Werner Gellersen, In 26th International Conference on Distributed Computing Systems Workshops (ICDCSW 2006), p.66, 2006. [abstractAs an example of sensory augmentation of a tiny object, a small cube-sized die is presented that perceives rolls and records what face it lands on. It is thus able to detect a bias towards unfair behaviour due to its physical imperfections. On a deeper level, this case study demonstrates the integration of energy-efficient sensor fusion, combination of classifiers, and a wireless interface to adaptive classification heuristics.][pdf][scholar][bibtex]
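
A tiny worked example of the fairness check implied above (hypothetical counts, not data from the actual die): a chi-square goodness-of-fit test against a uniform distribution flags physical bias in the logged landing faces.

```python
# Given logged landing-face counts, test whether the observed distribution
# deviates from a fair 1/6 per face (chisquare defaults to uniform expectation).
from scipy.stats import chisquare

face_counts = [18, 22, 19, 21, 45, 15]          # hypothetical roll log
statistic, p_value = chisquare(face_counts)
print(f"chi2={statistic:.1f}, p={p_value:.4f}, biased={p_value < 0.05}")
```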

Long-Term Activity Monitoring with a Wearable Sensor Node, Van Laerhoven, Kristof and Hans-Werner Gellersen and Malliaris, Yanni G., In International Workshop on Wearable and Implantable Body Sensor Networks (BSN 2006), p.171--174, 2006. [abstractThis paper introduces an encapsulated sensor node that is devised to monitor and record motion patterns over long, quotidian periods of time with potential application in psychological studies. Its design fuses different sensing modalities to allow efficient capturing of tilt and acceleration stimuli, as well as embedded algorithms that abstract from the raw sensory data to indicative features. By combining tilt switches and accelerometers with customized processing techniques, it is argued that a power-efficient yet information-rich approach is reached for the observation and logging of human motion-based activity.][pdf][scholar][bibtex]

Embedded Perception - Concept Recognition by Learning and Combining Sensory Data, Van Laerhoven, Kristof, vol.Ph.D., 2006. [pdf][scholar]

Real-Time Analysis of Correlations Between On-Body Sensor Nodes, Van Laerhoven, Kristof and Berchtold, Martin, In Proceedings of the 2nd International Workshop on Wearable and Implantable Body Sensor Networks (BSN 2005), p.27--32, 2005. [abstractThe topology of a body sensor network has, until recently, often been overlooked; either because the layout of the network is deemed to be sufficiently static ("we always know well enough where the sensors are") or because the location of the sensor is not inherently required ("as long as the node stays where it is, we do not need its location, just its data"). We argue in this paper that, especially as the sensor nodes become more numerous and densely interconnected, an analysis of the correlations between the data streams can be valuable for a variety of purposes. Two systems illustrate how a mapping of the network's sensor data to a topology of the sensor nodes' correlations can be applied to reveal more about the physical structure of body sensor networks.][pdf][scholar][bibtex]

The Pendle: A Personal Mediator for Mixed Initiative Environments, Nicolas Villar and Kortuem, Gerd and Van Laerhoven, Kristof and Schmidt, Albrecht, In IEE International Workshop on Intelligent Environments, 2005. [abstractIn this paper we propose a novel interaction model for augmented environments based on the concept of mixed initiative interaction, and describe the design of the Pendle, a gesture-based wearable device. By splitting control between user and environment, the interaction model combines the advantages of explicit, direct manipulation with the power of sensor-based proactive environments while avoiding the lack of user control and personalization usually associated with the latter. The Pendle is a personalizable wearable device with the capability to recognize hand gestures. It acts as a mediator between user and environment, and provides a simple, natural interface that lends itself to casual interaction. Experiences with two concrete examples, the MusicPendle and NewsPendle, demonstrate the advantages of the personalized user experience and the flexibility of the device architecture.][pdf][scholar][bibtex]

Medical Healthcare Monitoring with Wearable and Implantable Sensors, Van Laerhoven, Kristof and Benny PL Lo and Jason WP Ng and Surapa Thiemjarus and Rachel King and Simon Kwan and Hans-Werner Gellersen and Morris Sloman and Oliver Wells and Phil Needham and Nick Peters and Ara Darzi and Chris Toumazou and Guang-Zhong Yang, In International Workshop on Ubiquitous Computing for Pervasive Healthcare Applications (UbiHealth), 2004. [abstractThe last decade has witnessed a surge of interest in new sensing and monitoring devices for healthcare, with implantable in vivo monitoring and intervention devices being key developments in this area. Permanent implants combined with wearable monitoring devices could provide continuous assessment of critical physiological parameters for identifying precursors of major adverse events. Open research issues in this area are predominantly related to novel sensor interface design, practical and reliable distributed computing environments for multi-sensory data fusion, but the concept itself is deemed to have further impact in many other areas. This position paper describes a scenario from the UbiMon [1] project, which is aimed at investigating healthcare delivery by combining wearable and implantable sensors.][pdf][scholar]

Spine versus Porcupine: A Study in Distributed Wearable Activity Recognition, Van Laerhoven, Kristof and Hans-Werner Gellersen, In 8th International Symposium on Wearable Computers (ISWC 2004), p.142--149, 2004. [abstractThis paper seeks to explore an alternative and more embedded-oriented approach to the recognition of a person's motion and pose, using sensor types that can easily be distributed in clothing. A large proportion of this type of research so far has been carried out with carefully positioned accelerometers, resulting in fairly good recognition rates. An alternative approach targets a more pervasive sensing vision where the clothing is saturated with small, embedded sensors. By increasing the quantity of sensors, while decreasing their individual information quality, a preliminary comparative study between the two approaches looks at the pros, cons, and differences in algorithm requirements.][pdf][scholar][bibtex]

The Pervasive Sensor - Invited Talk, Van Laerhoven, Kristof, In Ubiquitous Computing Systems, Second International Symposium, UCS 2004, p.1--9, 2004. [abstractForget processing power, memory, or the size of computers for a moment: Sensors, and the data they provide, are as important as any of these factors in realising ubiquitous and pervasive computing. Sensors have already become influential components in newer applications, but their data needs to be used more intelligently if we want to unlock their true potential. This requires improved ways to design and integrate sensors in computer systems, and interpret their signals.][pdf][scholar][bibtex]

Augmenting Collections of Everyday Objects: A Case Study of Clothes Hangers As an Information Display, Tara Matthews and Hans-Werner Gellersen and Van Laerhoven, Kristof and Dey, Anind K., In Pervasive Computing, Second International Conference, vol.3001, p.340--344, 2004. [abstractThough the common conception of human-computer interfaces is one of screens and keyboards, the emergence of ubiquitous computing envisions interfaces that will spread from the desktop into our environments. This gives rise to the development of novel interaction devices and the augmentation of common everyday objects to serve as interfaces between the physical and the virtual. Previous work has provided exemplars of such everyday objects augmented with interactive behaviour. We propose that richer opportunities arise when collections of everyday objects are considered as substrate for interfaces. In an initial case study we have taken clothes hangers as an example and augmented them to collectively function as an information display.][pdf][scholar][bibtex] oral, 12\% acceptance rate

A Physical Notice Board with Digital Logic and Display, Nicolas Villar and Van Laerhoven, Kristof and Hans-Werner Gellersen, In Adjunct Proceedings of the European Symposium on Ambient Intelligence 2004, p.207--217, 2004. [abstractA physical notice board is augmented with digital capabilities to provide additional functionality. The notice board retains its form factor, and when used with interactive pins with controllable lights it can be used to signal to the user information about the state of documents posted on the board. Both board and augmented pins are easy to deploy and cheap to produce.][pdf][scholar][bibtex]

Towards an Inertial Sensor Network, Kern, Nicky and Van Laerhoven, Kristof and Hans-Werner Gellersen and Schiele, Bernt, In IEE EuroWearable, 2003. [abstractWearable inertial sensors have become an inexpensive option to measure the movements and positions of a person. Other techniques that use environmental sensors such as ultrasound trackers or vision-based methods need full line of sight or a local setup, and it is complicated to access this data from a wearable computer's perspective. However, a body-centric approach, where sensor data is acquired and processed locally, requires appropriate algorithms that have to operate under restricted resources. The objective of this paper is to give an overview of algorithms that abstract inertial data from body-worn sensors, illustrated using data from state-of-the-art wearable multi-accelerometer prototypes.][pdf][scholar][bibtex]

Exploring Cube Affordance: Towards A Classification Of Non-Verbal Dynamics Of Physical Interfaces For Wearable Computing, Jennifer Sheridan and Ben Short and Kortuem, Gerd and Van Laerhoven, Kristof and Nicolas Villar, In IEE EuroWearable 2003, p.113--118, 2003. [abstractCurrent input technologies for wearable computers are difficult to use and learn and can be unreliable. Physical interfaces offer an alternative to traditional input methods. In this paper we propose that developing a well-designed physical interface requires an exploration of the psychological idea of affordance. We present our findings from a design study in which we explore the natural affordance of a cube and suggest possible requirements for the design of graspable cube-shaped physical interfaces as an alternative rich-action input device. We expect that such a framework will enhance the precision and usability of devices for wearable and mobile computing.][pdf][scholar][bibtex]

A Layered Approach to Wearable Textile Networks, Van Laerhoven, Kristof and Nicolas Villar and Hans-Werner Gellersen, In IEE EuroWearable 2003, p.61--67, 2003. [abstractThe integration of digital components into clothing is becoming an increasingly important segment in wearable computing research. The first indications for this trend are the incorporation of existing mobile technologies, such as personal digital assistants (PDAs) or mobile phones, into jackets via flexible textile circuits. In the long term, other components could also be envisioned that are embedded in apparel, using a flexible bus-type network that links all the devices together. This paper introduces a functioning prototype of such a flexible network that not only allows communication between wearable components, but is also able to supply power to them. We propose an arrangement of layered textiles as opposed to the more traditional routed circuitry layout, which results in a novel approach towards the concept of a flexible clothing network.][pdf][scholar][bibtex]

Using an autonomous cube for basic navigation and input, Van Laerhoven, Kristof and Nicolas Villar and Schmidt, Albrecht and Kortuem, Gerd and Hans-Werner Gellersen, In Proceedings of the 5th International Conference on Multimodal Interfaces, ICMI 2003, p.203--210, 2003. [abstractThis paper presents a low-cost and practical approach to achieve basic input using a tactile cube-shaped object, augmented with a set of sensors, processor, batteries and wireless communication. The algorithm we propose combines a finite state machine model incorporating prior knowledge about the symmetrical structure of the cube, with maximum likelihood estimation using multivariate Gaussians. The claim that the presented solution is cheap, fast and requires few resources, is demonstrated by implementation in a small-sized, microcontroller-driven hardware configuration with inexpensive sensors. We conclude with a few prototyped applications that aim at characterizing how the familiar and elementary shape of the cube allows it to be used as an interaction device.][pdf][scholar][bibtex]
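
As a rough sketch of the maximum-likelihood part described above (synthetic readings, simplified to static face detection): one multivariate Gaussian is fitted per cube face and a new accelerometer sample is assigned to the face with the highest likelihood; the finite-state machine over face transitions is omitted here.

```python
# Illustrative sketch: per-face multivariate Gaussians fitted to (hypothetical)
# labelled accelerometer readings, then maximum-likelihood face assignment.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)
face_means = np.array([[0, 0, 1], [0, 0, -1], [0, 1, 0],
                       [0, -1, 0], [1, 0, 0], [-1, 0, 0]], dtype=float)  # in g

models = []
for mean in face_means:
    samples = mean + rng.normal(0, 0.05, size=(100, 3))   # synthetic training data
    models.append(multivariate_normal(mean=samples.mean(0), cov=np.cov(samples.T)))

reading = np.array([0.04, -0.96, 0.07])                    # new sample
predicted_face = int(np.argmax([m.logpdf(reading) for m in models]))
print("most likely face:", predicted_face)                 # expect face 3
```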

Pin&Play: The Surface as Network Medium, Van Laerhoven, Kristof and Nicolas Villar and Schmidt, Albrecht and Hans-Werner Gellersen and Holmquist, Lars Erik, In IEEE Communications, vol.41(4)p.90--96, 2003. [abstractIntegrating appliances in the home through a wired network often proves to be impractical: routing cables is usually difficult, changing the network structure afterward even more so, and portable devices can only be connected at fixed connection points. Wireless networks are not the answer either: batteries have to be regularly replaced or changed, and what they add to the device's size and weight might be disproportionate for smaller appliances. In Pin&Play, we explore a design space in between typical wired and wireless networks, investigating the use of surfaces to network objects that are attached to them. This article gives an overview of the network model, and describes functioning prototypes that were built as a proof of concept.][pdf][scholar][bibtex]

Multi-Sensor Context Aware Clothing, Van Laerhoven, Kristof and Schmidt, Albrecht and Hans-Werner Gellersen, In 6th International Symposium on Wearable Computers (ISWC 2002), p.49--56, 2002. [abstractInspired by perception in biological systems, distribution of a massive amount of simple sensing devices is gaining more support in detection applications. A focus on fusion of sensor signals instead of strong analysis algorithms, and a scheme to distribute sensors, results in new issues. Especially in wearable computing, where sensor data continuously changes, and clothing provides an ideal supporting structure for simple sensors, this approach may prove to be favourable. Experiments with a body-distributed sensor system investigate the influence of two factors that affect classification of what has been sensed: an increase in sensors enhances recognition, while adding new classes or contexts depreciates the results. Finally, a wearable computing related scenario is discussed that exploits the presence of many sensors.][pdf][scholar][bibtex] oral, 19\% acceptance rate

Pin&Play: Networking Objects through Pins, Van Laerhoven, Kristof and Schmidt, Albrecht and Hans-Werner Gellersen, In UbiComp 2002: Ubiquitous Computing, 4th International Conference, vol.2498, p.219--228, 2002. [abstractWe introduce a new concept of networking objects in everyday environments. The basic idea is to build on the familiar use of surfaces such as walls and boards for attachment of mundane objects such as light controls, pictures, and notes. Hence our networking concept entails augmentation of such surfaces with conductive material to enable them as communication medium. It further incorporates the use of simple pushpin-connectors through which objects can be attached to network-enabled surfaces. Thereby users are provided with a highly familiar mechanism for adding objects ad hoc to the bus network, hence its name Pin&Play. This paper describes the architecture and principles of Pin&Play, as well as the design and implementation of a smart notice-board as proof of concept.][pdf][scholar][bibtex] oral, 18\% acceptance rate

Context Acquisition Based on Load Sensing, Schmidt, Albrecht and Strohbach, Martin and Van Laerhoven, Kristof and Adrian Friday and Hans-Werner Gellersen, In UbiComp 2002: Ubiquitous Computing, 4th International Conference, vol.2498, p.333--350, 2002. [abstractLoad sensing is a mature and robust technology widely applied in process control. In this paper we consider the use of load sensing in everyday environments as an approach to acquisition of contextual information in ubiquitous computing applications. Since weight is an intrinsic property of all physical objects, load sensing is an intriguing concept on the physical-virtual boundary, enabling the inclusive use of arbitrary objects in ubiquitous applications. In this paper we aim to demonstrate that load sensing is a versatile source of contextual information. Using a series of illustrative experiments we show that using load sensing techniques we can obtain not just weight information, but object position and interaction events on a given surface. We describe the incorporation of load-sensing in the furniture and the floor of a living laboratory environment, and report on a number of applications that use context information derived from load sensing.][pdf][scholar][bibtex] oral, 18\% acceptance rate
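
A small worked example of deriving object position from load sensing as described above (all readings hypothetical): with a load cell in each table corner, the added load and its weighted centroid over the corner coordinates give the object's weight and approximate position.

```python
# Position-from-load sketch: the object's position is approximated by the
# load-weighted centroid of the corner coordinates, using the change in
# readings relative to the empty-surface baseline.
import numpy as np

corners = np.array([[0.0, 0.0], [1.2, 0.0], [1.2, 0.8], [0.0, 0.8]])  # metres
baseline = np.array([5.1, 5.0, 4.9, 5.0])       # empty-table readings (kg)
loaded   = np.array([6.6, 5.5, 5.2, 5.7])       # readings with the object

delta = loaded - baseline                        # load added by the object
position = (delta[:, None] * corners).sum(axis=0) / delta.sum()
print(f"object weighs {delta.sum():.2f} kg at x={position[0]:.2f} m, y={position[1]:.2f} m")
```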

Ubiquitous Interaction - Using Surfaces in Everyday Environments as Pointing Devices, Schmidt, Albrecht and Strohbach, Martin and Van Laerhoven, Kristof and Hans-Werner Gellersen, In Universal Access: Theoretical Perspectives, Practice, and Experience, 7th ERCIM International Workshop on User Interfaces for All, vol.2615, p.263--279, 2002. [abstractAugmenting everyday environments as an interface to computing may lead to more accessible and inclusive user interfaces, exploiting affordances that exist in the physical world for interaction with digital functionality. A major challenge for such interfaces is to preserve accustomed uses while providing unobtrusive access to new services. In this paper we discuss the augmentation of common surfaces such as tables as a generic pointing device. The basic concept is to sense the load, the load changes, and the patterns of change observed on a surface using embedded load sensors. We describe the interaction model used to derive pointing actions from basic sensor observations, and detail the technical augmentation of two ordinary tables that we used for our experiments. The technology effectively emulates a serial mouse, and our implementation and use experience show that it is unobtrusive, robust, and both intuitively and reliably usable.][pdf][scholar][bibtex]
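As an illustration of how load changes on a surface could be turned into pointer events, the following Python sketch maps successive centre-of-pressure estimates to relative dx/dy movements plus a pressure-based button press. The gain and press threshold are invented values; this is not the paper's mouse-emulation code.

    # Illustrative sketch (assumed parameters, not the paper's implementation):
    # turning successive centre-of-pressure estimates on a load-sensed table
    # into relative pointer movements, the way a mouse reports dx/dy.
    GAIN = 800          # pixels per metre of movement across the surface (assumed)
    PRESS_DELTA = 1.5   # extra load in kg interpreted as a "button press" (assumed)

    class SurfacePointer:
        def __init__(self):
            self.last_pos = None    # previous centre of pressure (x, y) in metres
            self.rest_load = None   # baseline load when the hand first rests

        def update(self, pos, total_load):
            """Return a (dx, dy, pressed) report for one new sensor reading."""
            if self.rest_load is None:
                self.rest_load = total_load
            pressed = (total_load - self.rest_load) > PRESS_DELTA
            if pos is None or self.last_pos is None:
                self.last_pos = pos
                return 0, 0, pressed
            dx = int(GAIN * (pos[0] - self.last_pos[0]))
            dy = int(GAIN * (pos[1] - self.last_pos[1]))
            self.last_pos = pos
            return dx, dy, pressed

    pointer = SurfacePointer()
    pointer.update((0.50, 0.40), 3.0)          # first contact: no movement yet
    print(pointer.update((0.52, 0.38), 3.2))   # -> (16, -16, False)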

Pin&Play: Bringing Power and Networking to Wall-Mounted Appliances, Van Laerhoven, Kristof and Nicolas Villar and Maria Hakansson and Hans-Werner Gellersen, In Proceedings of the 5th IEE International Workshop on Networked Appliances, Liverpool, UK, p.131--137, 2002. [pdf][scholar][bibtex]

Context Awareness in Systems with Limited Resources, Ozan Cakmakci and Joelle Coutaz and Van Laerhoven, Kristof and Hans-Werner Gellersen, In Proc. of the third workshop on Artificial Intelligence in Mobile Systems (AIMS), ECAI 2002, p.21--29, 2002. [abstractMobile embedded systems often have strong limitations regarding available resources. In this paper we propose a statistical approach to modeling simple contexts from raw sensor data that can scale down to microcontrollers with scarce resources. As a case study, two experiments are provided in which statistical modeling techniques were applied to learn and recognize different contexts based on accelerometer data. We furthermore point out applications that utilize contextual information for power savings in mobile embedded systems.][pdf][scholar]
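To make the resource argument concrete, here is a minimal Python sketch of the kind of lightweight statistical classifier that fits scarce resources: per-window mean and variance of the accelerometer axes compared against per-context centroids. The feature choice and nearest-centroid rule are assumptions for illustration, not the paper's exact models.

    # Minimal sketch of a resource-friendly statistical context classifier:
    # per-window mean and variance of three accelerometer axes, compared
    # against per-context centroids. All names and numbers are illustrative.
    import math

    def features(window):
        """window: list of (ax, ay, az) samples -> [means..., variances...]."""
        n = len(window)
        means = [sum(s[i] for s in window) / n for i in range(3)]
        vars_ = [sum((s[i] - means[i]) ** 2 for s in window) / n for i in range(3)]
        return means + vars_

    def train(labelled_windows):
        """labelled_windows: {context: [window, ...]} -> per-context centroids."""
        centroids = {}
        for ctx, windows in labelled_windows.items():
            feats = [features(w) for w in windows]
            centroids[ctx] = [sum(f[i] for f in feats) / len(feats)
                              for i in range(len(feats[0]))]
        return centroids

    def classify(window, centroids):
        """Assign the context whose centroid is closest in feature space."""
        f = features(window)
        return min(centroids, key=lambda ctx: math.dist(f, centroids[ctx]))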

How to Build Smart Appliances?, Schmidt, Albrecht and Van Laerhoven, Kristof, In IEEE Personal Communications, vol.8(4)p.66--71, 2001. [abstractIn this article smart appliances are characterized as devices that are attentive to their environment. We introduce a terminology for situation, sensor data, context, and context-aware applications, because it is important to gain a thorough understanding of these concepts to successfully build such artifacts. In the article the relation between a real-world situation and the data read by sensors is discussed; furthermore, an analysis of available sensing technology is given. We then introduce an architecture that supports the transformation from sensor data to cues and then to contexts as a foundation for building context-aware applications. The article suggests a method to build context-aware devices; the method starts from situation analysis, offers a structured way to select sensors, and finally suggests steps to determine recognition and abstraction methods. In the final part of the article the question of how this influences applications is raised, and the areas of user interfaces, communication, and proactive application scheduling are identified. We conclude with the description of a case study in which a mobile phone was made aware of its environment using different sensors. The profile settings of the phone (ringing mode etc.) are automatically selected according to the real-world situation the phone is used in.][pdf][scholar][bibtex]
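The sensor-to-cue-to-context layering the article describes can be sketched as three small functions, ending in the phone-profile case study. The particular cues, thresholds, and profile names below are hypothetical, chosen only to show the shape of the pipeline.

    # Sketch of the sensor -> cue -> context layering; the cue extractors,
    # thresholds, and profiles below are made up for illustration only.
    def cues_from_sensors(sample):
        """sample: raw readings, e.g. {'light': 0..1023, 'accel_var': float}."""
        return {
            'dark':   sample['light'] < 100,
            'moving': sample['accel_var'] > 0.2,
        }

    def context_from_cues(cues):
        """Map a combination of cues to a symbolic context (toy rules)."""
        if cues['moving']:
            return 'in_hand'
        if cues['dark']:
            return 'in_pocket'
        return 'on_table'

    def profile_for_context(context):
        """E.g. a phone choosing its ringing profile from the current context."""
        return {'in_pocket': 'loud', 'in_hand': 'vibrate', 'on_table': 'normal'}[context]

    print(profile_for_context(context_from_cues(
        cues_from_sensors({'light': 40, 'accel_var': 0.05}))))   # -> loud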

Combining the Self-Organizing Map and K-Means Clustering for On-Line Classification of Sensor Data, Van Laerhoven, Kristof, In Artificial Neural Networks - ICANN 2001, International Conference, vol.2130, p.464--469, 2001. [abstractMany devices, like mobile phones, use contextual profiles like "in the car" or "in a meeting" to quickly switch between behaviors. Achieving automatic context detection, usually by analysis of small hardware sensors, is a fundamental problem in human-computer interaction. However, mapping the sensor data to a context is a difficult problem involving near real-time classification and training of patterns out of noisy sensor signals. This paper proposes an adaptive approach that uses a Kohonen Self-Organizing Map, augmented with on-line k-means clustering for classification of the incoming sensor data. Overwriting of prototypes on the map, especially during the untangling phase of the Self-Organizing Map, is avoided by a refined k-means clustering of labeled input vectors.][pdf][scholar][bibtex]
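A compact sketch of this kind of pipeline is given below: an on-line Self-Organizing Map clusters incoming sensor vectors, winning units are tagged with labels from user-supplied examples, and later samples are classified by the label of their best-matching unit. Map size, learning rate, and the labelling rule are illustrative simplifications, and the paper's on-line k-means refinement of labelled vectors is omitted here.

    # Compact on-line SOM sketch: incoming sensor vectors train the map, the
    # winning unit of labelled examples is tagged with that context, and new
    # samples are classified by the label of their best-matching unit.
    # Map size, rates, and the labelling rule are illustrative choices.
    import numpy as np

    class OnlineSOM:
        def __init__(self, rows=8, cols=8, dim=6, lr=0.3, radius=2.0, seed=0):
            rng = np.random.default_rng(seed)
            self.w = rng.random((rows, cols, dim))   # prototype vectors
            self.lr, self.radius = lr, radius
            self.labels = {}                         # (row, col) -> context label

        def best_unit(self, x):
            d = np.linalg.norm(self.w - np.asarray(x), axis=2)
            return np.unravel_index(np.argmin(d), d.shape)

        def train_step(self, x, label=None):
            x = np.asarray(x, dtype=float)
            br, bc = self.best_unit(x)
            rows, cols, _ = self.w.shape
            for r in range(rows):
                for c in range(cols):
                    dist2 = (r - br) ** 2 + (c - bc) ** 2
                    h = np.exp(-dist2 / (2 * self.radius ** 2))   # neighbourhood
                    self.w[r, c] += self.lr * h * (x - self.w[r, c])
            if label is not None:        # supervised sample: tag the winner
                self.labels[(br, bc)] = label
            return br, bc

        def classify(self, x):
            return self.labels.get(tuple(self.best_unit(x)), 'unknown')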

Real-time Analysis of Data from Many Sensors with Neural Networks, Van Laerhoven, Kristof and Kofi A Aidoo and Steven Lowette, In 5th International Symposium on Wearable Computers (ISWC 2001), p.115--122, 2001. [abstractMuch research has been conducted that uses sensor-based modules with dedicated software to automatically distinguish the user's situation or context. The best results were obtained when powerful sensors (such as cameras or GPS systems) and/or sensor-specific algorithms (like sound analysis) were applied. A somewhat new approach is to replace the one smart sensor with many simple sensors. We argue that neural networks are ideal algorithms for analyzing the data coming from these sensors and, by giving an overview of several requirements, describe how we arrived at one specific algorithm that gives good results. Finally, wearable implementations are given to show the feasibility and benefits of this approach and its implications.][pdf][scholar][bibtex]

Teaching Context to Applications, Van Laerhoven, Kristof and Kofi A Aidoo, In Personal and Ubiquitous Computing, vol.5(1)p.46--49, 2001. [abstractAlthough mobile devices keep getting smaller and more powerful, their interface with the user is still based on that of the regular desktop computer. This implies that interaction is usually tedious, while interrupting the user is not really desired in ubiquitous computing. We propose adding an array of hardware sensors to the system that, together with machine learning techniques, make the device aware of its context while it is being used. The goal is to make it learn the context-descriptions from its user on the spot, while minimising user-interaction and maximising reliability.][pdf][scholar][bibtex]

What Shall We Teach Our Pants?, Van Laerhoven, Kristof and Ozan Cakmakci, In Proceedings of the 4th IEEE International Symposium on Wearable Computers (ISWC), p.77, 2000. [abstractIf a wearable device can register what the wearer is currently doing, it can anticipate and adjust its behavior to avoid redundant interaction with the user. However, the relevance and properties of the activities that should be recognized depend on both the application and the user. This requires an adaptive recognition of the activities, where the user, instead of the designer, can teach the device what he/she is doing. As a case study we connected a pair of pants with accelerometers to a laptop to interpret the raw sensor data. Using a combination of machine learning techniques such as Kohonen maps and probabilistic models, we built a system that is able to learn activities while requiring minimal user attention. This approach to context awareness is more universal since it requires no a priori knowledge about the contexts or the user.][pdf][scholar][bibtex]

On-line adaptive context awareness starting from low-level sensors, Van Laerhoven, Kristof, 1999. [abstractAlong with the sales figures and popularity of wearable and portable devices, the importance of their usability and functionality is increasing as well. Most of these devices need to change their behavior according to the context they are currently in. Inadequate knowledge about the context results in a lack of user-friendliness. Mobile phones, for example, don't know when and how to disturb their users when a call arrives. The solution would be to add various sensors and thus give the host device more knowledge about its context. However, the way humans describe contexts is not by giving the complete inventory of their sensations. Often, either unusual elements ("It is cold.") or more abstract properties ("I am home.", "I am in a dark room.") are noticed and expressed. If the device needs to function in a transparent way for its user or if the user needs to train the device, it needs to recognize contexts in a similar way. This thesis will focus on the transformation of a multitude of sensory information into a short context description, supplied by the user, in an adaptive and on-line way. The approach is to use an adaptive hybrid system consisting of a connectionist layer, followed by a symbolic layer. Sensor outputs will be periodically sent to a self-organizing artificial neural network architecture, which is responsible for the initial processing and clustering. A symbolic layer, implemented as a Markov chain, provides a predictive component that enables interaction with the user and ensures enhanced recognition.][pdf][scholar]
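The symbolic layer outlined here can be illustrated with a first-order Markov chain over the context labels coming out of the clustering layer: transition counts are accumulated on-line and used to smooth implausible single-step context changes. The decision rule and threshold below are assumptions, not the thesis' actual mechanism.

    # Rough sketch of the second, symbolic layer: a first-order Markov chain
    # over context labels from the clustering layer. Transition counts are
    # learned on-line and used to reject implausible single-step context
    # changes. Threshold and decision rule are assumptions for illustration.
    from collections import defaultdict

    class ContextMarkovChain:
        def __init__(self):
            self.counts = defaultdict(lambda: defaultdict(int))

        def observe(self, prev_ctx, ctx):
            """Record one observed transition between confirmed contexts."""
            self.counts[prev_ctx][ctx] += 1

        def prob(self, prev_ctx, ctx):
            """Empirical transition probability P(ctx | prev_ctx)."""
            total = sum(self.counts[prev_ctx].values())
            return self.counts[prev_ctx][ctx] / total if total else 0.0

        def smooth(self, prev_ctx, proposed_ctx, min_prob=0.05):
            """Keep the previous context if the proposed jump is too unlikely."""
            if prev_ctx is None or self.prob(prev_ctx, proposed_ctx) >= min_prob:
                return proposed_ctx
            return prev_ctx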

Advanced Interaction in Context, Schmidt, Albrecht and Kofi A Aidoo and Antti Takaluoma and Urpo Tuomela and Van Laerhoven, Kristof and Walter Van de Velde, In The First International Symposium on Handheld and Ubiquitous Computing (HUC'99), p.89--101, 1999. [abstractMobile information appliances are increasingly used in numerous different situations and locations, setting new requirements for their interaction methods. When the user's situation, place or activity changes, the functionality of the device should adapt to these changes. In this work we propose a layered real-time architecture for this kind of context-aware adaptation based on redundant collections of low-level sensors. Two kinds of sensors are distinguished: physical and logical sensors, which give cues from environment parameters and host information. A prototype board that consists of eight sensors was built for experimentation. The contexts are derived from cues using real-time recognition software, which was constructed after experiments with Kohonen's Self-Organizing Map and its variants. A personal digital assistant (PDA) and a mobile phone were used with the prototype to demonstrate situational awareness. On the PDA, font size and backlight were changed depending on the demonstrated contexts, while on the mobile phone the active user profile was changed. The experiments have shown that it is feasible to recognize contexts using sensors and that context information can be used to create new interaction metaphors.][pdf][scholar][bibtex]