TY - CONF
AU - Riechmann, Malte
AU - Kirsch, André
AU - König, Matthias
AU - Rexilius, Jan
ID - 4306
TI - Virtual Borders in 3D: Defining a Drone’s Movement Space Using Augmented Reality
ER -

TY - CONF
AB - In the modern household, autonomous service robots are becoming increasingly popular. Changing the behavioral patterns of robots working in 3D spaces can pose a challenge for non-expert users. We propose to use augmented reality as an interface for modifying the robot’s behavior using virtual artifacts. Our contribution is an AR application that acts as an interface to interactively plan a drone’s path, including generating and modifying it. This also includes two interaction methods for changing the position of virtual objects in 3D space. The first method is based on pan gestures that are commonly used in 2D mobile applications, while the second one implements a grasp-and-release mechanism based on the device’s motion. As a visual aid, we employ a simulated drone. A user study was performed to evaluate and compare the two interaction methods. The study involved 18 participants who had to interactively plan a path. The majority of participants preferred the motion-based interaction method, as it was more comfortable to use. However, the touch interaction is more intuitive without any additional information. The resulting paths can be divided into three distinct clusters, which can be used in future work to improve the automatically generated paths.
AU - Riechmann, Malte
AU - Kirsch, André
AU - König, Matthias
ID - 4307
T2 - 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)
TI - Augmented Reality for Interactive Path Planning in 3D
ER -

TY - CONF
AB - Micro aerial vehicles (MAVs) are often limited by weight or cost constraints. This results in low sensor variety and sometimes even low sensor quality. For example, many MAVs only offer a single RGB camera to capture the environment, apart from simple distance sensors. On the other hand, maps of complex environments are typically captured using depth sensors like Lidar, which are not found on such drones. For MAVs to still benefit from and use these maps, it is necessary to implement a connection layer that enables the localization of the MAV in these maps. In this paper, we propose to use fiducial markers that can be recorded by an assisting device, e.g., a mobile phone or tablet, responsible for map creation. These fiducial markers have a known pose in the map and can be detected by a drone's RGB camera to localize itself. We show that the markers are localized in the map creation process with high precision and that the drone is able to determine its pose based on detected markers. Furthermore, we present a ROS 2 based drone controller for a Ryze Tello EDU MAV that uses an occupancy voxel map for navigation.
AU - Kirsch, André
AU - Riechmann, Malte
AU - König, Matthias
ID - 3656
T2 - 2023 European Conference on Mobile Robots (ECMR)
TI - Assisted Localization of MAVs for Navigation in Indoor Environments Using Fiducial Markers
ER -

TY - CONF
AB - In this work, we have developed a novel intelligent system capable of detecting and managing dynamic hazards in intelligent buildings. Our calculation of escape route strategies, numerical analysis, and visualization of evacuations makes it possible to realistically investigate and evaluate hazards. For this purpose, we translated a real building into a static 3D model based on a building plan. For the analysis of evacuation scenarios, dynamic hazards were developed, which can also propagate dynamically over time. The computation of the escape route strategies is performed using the Deep Reinforcement Learning (DRL) method Proximal Policy Optimization (PPO). This work demonstrates that dynamic hazards have a great impact on the evacuation strategy in the building and can be analyzed using this approach. Compared to traditional AI frameworks, scenarios can be created and analyzed both numerically and visually. As a result, the behavior of agents during training and evacuation can be examined for natural behavior.
AU - Wächter, Tim
AU - Rexilius, Jan
AU - König, Matthias
ID - 3450
T2 - 2023 19th International Conference on Intelligent Environments (IE)
TI - Escape Route Strategies in Complex Emergency Situations using Deep Reinforcement Learning
ER -

TY - CONF
AU - Viertel, Philipp
AU - König, Matthias
AU - Rexilius, Jan
ID - 2292
SN - 978-989-758-626-2
T2 - Proceedings of the 12th International Conference on Pattern Recognition Applications and Methods - ICPRAM
TI - Metric-Based Few-Shot Learning for Pollen Grain Image Classification
ER -

TY - CONF
AB - Robots are constantly becoming more integral in the day-to-day lives of humanity. To do this, they have to accomplish tasks autonomously in dynamically changing environments. Dynamic objects often need to be handled differently than static objects because they cause changes in the environment. To solve this problem, we propose a 3D-Multi-Layer-Multi-Representation map. The overall map consists of multiple layers with custom semantics and custom representation types. A static layer models the static environment using an octomap. A second layer models generic dynamic objects as bounding boxes. Semantic segmentation is used to decide which measurement belongs to a dynamic object; these are all objects of predefined classes. This allows customized update strategies for both types of objects. The experiments show that this increases the accuracy and efficiency of the overall map, as well as of the individual layers. A third layer, the human layer, which stores the poses of all persons, is added to the map. This makes it possible to see precisely what a human is doing at any given moment. Using different representation types, the overall map not only has higher accuracy and efficiency, but also provides more in-depth knowledge of the scene.
AU - Riechmann, Malte
AU - König, Matthias
AU - Rexilius, Jan
ID - 1986
KW - Robotics
KW - Mapping
KW - Multi-Layer-Map
TI - 3D-Multi-Layer-Multi-Representation-Maps for Short- and Long-term Mapping and Navigation
ER -

TY - JOUR
AB - Intelligent buildings offer a great opportunity to evacuate people more efficiently and safely by using dynamic evacuation guidance systems instead of static escape route signs. Static systems are unable to react to temporary events; in the worst case, their signs might guide the occupants directly to the hazard. A dynamic system, on the other hand, determines the hazard position and calculates and displays an alternative escape route that avoids it. In this work, we present a detailed study of current research approaches and introduce two algorithms developed for building evacuation. The first algorithm demonstrates static evacuation and leads directly to the nearest exit. The second algorithm reacts dynamically to temporary events such as fire. A comparison of these algorithms shows that the dynamic system is more efficient. In order to test the dynamic approach in a real environment, a device to display the evacuation route is required. We propose a novel interactive evacuation approach using mixed reality glasses (HoloLens) for user guidance. A user study shows the advantages of the HoloLens and verifies our findings from the simulation in a real environment.
AU - Wächter, Tim
AU - Rexilius, Jan
AU - König, Matthias
ID - 2084
JF - Journal of Smart Cities and Society
SN - 27723577
TI - Interactive evacuation in intelligent buildings assisted by mixed reality
ER -

TY - CONF
AU - Wächter, Tim
AU - Rexilius, Jan
AU - Hoffmann, Martin
AU - König, Matthias
ID - 1988
T2 - 2022 18th International Conference on Intelligent Environments (IE)
TI - Intelligent Building Evacuation under Consideration of Temporary Events and Dynamic Fire Propagation
ER -

TY - CONF
AU - Günter, Andrei
AU - König, Matthias
ED - Günes, Mesut
ED - Zug, Sebastian
ED - König, Matthias
ID - 1969
T2 - 2nd Workshop on Tools and Concepts for Communication and Networked Systems
TI - Improved Edge Computing for IoT Devices via Optimized Semantic Models
ER -

TY - CONF
AB - Point cloud registration is often used in fields like SLAM, where the overlap of two consecutive point clouds is large. But in fields like multi-sensor fusion of point clouds and LiDAR-based localization, there is a high chance of registering non-overlapping point cloud pairs. Since the result in such cases will always be a wrong transformation, it is useful to evaluate the alignability of the point cloud pairs prior to registration. In this paper, an algorithm is presented that predicts the alignability of two point clouds based on the minimum distances of descriptors. It calculates statistical measures describing the minimum distances and classifies the point cloud pairs. The paper shows that it is possible to predict the alignability and evaluates the runtime compared to registration algorithms, as well as the effect of ignoring the largest minimum distances.
AU - Kirsch, André
AU - Günter, Andrei
AU - König, Matthias
ID - 1966
KW - alignability prediction
KW - point cloud registration
KW - overlap metric
KW - descriptors
T2 - 12th International Conference on Pattern Recognition Systems
TI - Predicting Alignability of Point Cloud Pairs for Point Cloud Registration Using Features
ER -

TY - CONF
AU - Viertel, Philipp
AU - König, Matthias
AU - Rexilius, Jan
ID - 1548
SN - 978-3-88579-711-1
T2 - Referate der 42. GIL-Jahrestagung
TI - Pollen detection from honey sediments via Region-Based Convolutional Neural Networks
ER -

TY - JOUR
AB - In a large number of scientific areas, such as immunology, forensics, paleoecology, and archeology, the study of pollen, i.e., palynology, plays an important role: from tracking climate changes and studying allergies to forensic investigations and honey origin analysis. In the mid-nineties of the last century, the idea of an automated solution to the problem of pollen identification and classification was formulated, and since then several attempts and proposals have been made and presented, based on different technologies, in particular in the field of Computer Vision. However, as of 2021, microscopic analyses are still performed mainly manually by highly trained specialists, although the capabilities of artificial intelligence, especially Deep Neural Networks, are steadily increasing. In this work, we analyzed various state-of-the-art research works concerning pollen detection and classification and compared their methods and results. Problems such as data accessibility, different methods of Machine Learning, and the intended applicability of the proposed solutions are explored. We also identified crucial issues that require further work and research. Our work provides a thorough view of the current state of the art, its issues, and possibilities for the future.
AU - Viertel, Philipp
AU - König, Matthias
ID - 1533
JF - Machine Vision and Applications
TI - Pattern Recognition Methodologies for Pollen Grain Image Classification: A Survey
VL - 33
ER -

TY - CONF
AU - Günter, Andrei
AU - Schwarzer, Christopher
AU - König, Matthias
ID - 1288
T2 - Workshop on Tools and Concepts for Communication and Networked Systems
TI - An Information Abstraction Layer for IoT Middleware
ER -

TY - CONF
AU - König, Matthias
AU - Rasch, Robin
ID - 1420
T2 - 2021 ACM/IEEE Workshop on Computer Architecture Education (WCAE)
TI - Digital Teaching an Embedded Systems Course by Using Simulators
ER -

TY - CONF
AU - Viertel, Philipp
AU - König, Matthias
AU - Rexilius, Jan
ID - 1532
T2 - 20th IEEE International Conference on Machine Learning and Applications (ICMLA)
TI - PollenGAN: Synthetic Pollen Grain Image Generation for Data Augmentation
ER -

TY - CONF
AU - Wächter, Tim
AU - Rexilius, Jan
AU - König, Matthias
AU - Hoffmann, Martin
ID - 1668
T2 - 2021 International Conference on Information and Communication Technologies for Disaster Management (ICT-DM)
TI - Dynamic Evacuation System for the Intelligent Building Based on Beacons and Handheld Devices
ER -

TY - CONF
AU - Schwarzer, Christopher
AU - Günter, Andrei
AU - König, Matthias
ID - 1551
T2 - 2021 17th International Conference on Intelligent Environments (IE)
TI - IAL: Information Abstraction Layer to Include Multimedia in Building Automation Systems
ER -

TY - CONF
AU - Viertel, Philipp
AU - König, Matthias
ID - 1531
SN - 978-3-88579-703-6
T2 - Referate der 41. GIL-Jahrestagung
TI - Deep Learning in palynology – A use case for automated visual classification of pollen grains from honey samples
ER -

TY - BOOK
ED - Güneş, Mesut
ED - Zug, Sebastian
ED - König, Matthias
ID - 1290
SN - 1617-5468
TI - Tools and Concepts for Communication and Networked Systems – Or: How to build resilient IoT Systems?
VL - 307
ER -

TY - CONF
AU - Viertel, Philipp
AU - König, Matthias
ID - 1289
T2 - GIL-Jahrestagung - Fokus: Informations- und Kommunikationstechnologien in kritischen Zeiten
TI - Deep Learning in palynology
ER -