Special Sessions Fusion 2022

We are happy to announce that 16 special sessions have been accepted. When uploading your paper, you can select one of the special sessions if you find it appropriate. The choice of a special session does not influence the review process, which is completely decoupled, but it may influence which session your paper is scheduled for.

SS1: Context-based Information Fusion
Jesus Garcia, Lauro Snidaro, Jose M Molina, Ingrid Visentini

The goal of the proposed session is to discuss approaches to context-based information fusion. It will cover the design and development of information fusion solutions that integrate sensor data with contextual knowledge.

The development of IF systems that incorporate contextual factors and information offers an opportunity to improve the quality of the fused output, provide solutions adapted to the application requirements, and enhance tailored responses to user queries. Challenges for context-based strategies include selecting the appropriate representations, exploitations, and instantiations. Context can be represented as knowledge bases, ontologies, geographical maps, etc., and forms a powerful tool for improving adaptability and system performance. Example applications include context-aided tracking and classification, situational reasoning, and ontology building and updating.

Therefore, the session covers both representation and exploitation mechanisms so that contextual knowledge can be efficiently integrated into the fusion process and enable adaptation mechanisms.


SS2: Advanced Nonlinear Filtering
Uwe D. Hanebeck, Ondrej Straka, Jindrich Dunik, Jordi Vila-Valls, Victor Elvira, Fred Daum, Daniel Frisch

Methods for Bayesian inference with nonlinear systems are of fundamental interest in the information fusion community. Great efforts have been made to develop state estimation methods whose estimates come ever closer to the true state. Further objectives are to increase their efficiency, reduce their requirements and assumptions, and allow their application in more general settings.

Areas such as target tracking, guidance, positioning, navigation, sensor fusion, and decision-making usually require linear or nonlinear state estimation methods and are therefore of broad interest to the information fusion community. These methods provide an estimate of the state of a dynamic system, which in general is not directly measurable, from a set of noisy measurements. The development of state estimation started in the sixties with the appearance of the well-known Kalman filter (KF) and the use of simple linearization approaches to deal with nonlinear dynamic systems. Satisfactory performance of these legacy KF-based methods was limited to system models with mild nonlinearities, together with perfect knowledge of the system, that is, of the system functions, the noise distributions, and their respective parameters.
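
As a minimal sketch of the linearization idea mentioned above, the following Python fragment shows one predict/update cycle of an extended Kalman filter for a generic nonlinear state-space model. The function and argument names are illustrative assumptions for this sketch, not part of the session description.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One extended Kalman filter cycle: linearize f and h around the
    current estimate, then apply the standard KF predict/update equations."""
    # Predict: propagate the mean through the nonlinear dynamics f
    x_pred = f(x, u)
    F = F_jac(x, u)                      # Jacobian of f at the current estimate
    P_pred = F @ P @ F.T + Q

    # Update: linearize the measurement model h around the prediction
    z_pred = h(x_pred)
    H = H_jac(x_pred)                    # Jacobian of h at the prediction
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - z_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

As the session description notes, such linearization-based filters work well only for mild nonlinearities and accurate models, which motivates the more advanced nonlinear filters the session is devoted to.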


SS3: Real-time critical perception tasks in the context of automated driving
Frank Bieder, Sven Richter, Haoaho Hu, Wei Tian

In recent years, huge progress has been made in the development of algorithmic solutions for automated vehicles. In this context, perceiving and modeling the current state of the ego-vehicle as well as of the surrounding traffic scene is one of the core elements. In safety-critical applications such as automated driving, it is crucial to incorporate heterogeneous data from multiple sensors to obtain redundancy and maximize the amount of information. With recent developments in remote sensing and processing technologies, the variety and quantity of sensor data is rapidly increasing. This demands equal advancements in information fusion systems, making them a key component for the successful deployment of automated vehicles. The fact that automated vehicles operate in highly dynamic environments additionally places severe constraints on execution time in order to achieve human-like reaction times.

In this special session, we welcome submissions on recent advances in solving real-time critical perception tasks in the context of automated driving. These tasks may include various steps towards holistic scene understanding in urban scenarios, such as environmental modeling, multi-sensor fusion systems, extended object detection and tracking, and semantic scene classification. We want to discuss novel data fusion strategies and bring together researchers from academia and industry to push forward perception for automated driving. A focus on the fusion of heterogeneous sensor data is encouraged, yet not required.


SS4: Data Fusion for Industry 4.0
Claudio M. de Farias, Jose F. B. Brancalion

The Internet of Things (IoT) is a novel paradigm grounded in Information and Communication Technologies (ICT). Recently, IoT has been gaining traction in areas such as logistics, manufacturing, retailing, and pharmaceutics, transforming typical industrial spaces into Smart Spaces. This leads to a novel paradigm called Industry 4.0. Since IoT data is usually dynamic and heterogeneous, it becomes important to investigate techniques for understanding and resolving issues of data fusion in Industry 4.0. Data fusion algorithms are useful for revealing trends in the sampled data, uncovering new patterns in monitored variables, and making predictions, thus improving the decision-making process, reducing decision response times, and enabling more intelligent and immediate situation awareness.


SS5: Intelligent Sensing and AI for Healthcare Technologies
Mohsen Naqvi, Lyudmila Mihaylova

Intelligent multimodal sensors continually measure many parameters of our lives and environment, e.g., CCTV cameras and the sensors in smart devices. The key focus of this special session is to present interdisciplinary (mainly engineering- and medicine-led) collaborative data and approaches for healthcare applications, e.g., assisted living, early diagnosis of mental health conditions, and detection of the condition of muscles and nerve cells. These are very challenging problems that require engineering and medical professionals to exchange interdisciplinary knowledge.


SS6: Advances in Motion Estimation using Inertial Sensors
Manon Kok, Gustaf Hendeby

Accelerometers and gyroscopes (inertial sensors) measure the movement of the sensor in terms of its acceleration and angular velocity. These sensors are nowadays widely available in smartphones and VR/AR headsets, but also in dedicated sensor units (inertial measurement units). Due to their small form factor, they can be placed non-intrusively on people and devices. Measurements from mobile sensors carried by or placed on people, vehicles, and robots can be used to track or classify their movements. Thanks to technological advances, both the availability and the accuracy of these sensors have steadily increased over recent years, opening up many exciting applications. Since inertial measurements only give accurate position and orientation information on a limited time scale, inertial sensors are typically combined with, for instance, additional sensors or motion models. Challenges lie both in obtaining accurate (sensor and motion) models and in the choice and development of algorithms.
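
As a minimal sketch of why inertial measurements alone are only accurate on a limited time scale, the example below integrates a simulated one-dimensional gyroscope signal corrupted by a small constant bias; the bias integrates into a heading drift that grows with time. The sample rate, bias, and noise level are illustrative assumptions.

```python
import numpy as np

# Simulated 1-D gyroscope: true angular rate plus a small constant bias and noise
dt, T = 0.01, 60.0                      # 100 Hz for one minute
t = np.arange(0.0, T, dt)
true_rate = 0.1 * np.sin(0.5 * t)       # rad/s, arbitrary true motion
bias, noise_std = 0.002, 0.01           # rad/s, illustrative sensor errors
gyro = true_rate + bias + noise_std * np.random.randn(t.size)

# Dead reckoning: integrate the measured rate to obtain heading
true_heading = np.cumsum(true_rate) * dt
est_heading = np.cumsum(gyro) * dt

# The bias integrates into a drift of roughly bias * T, which is why inertial
# sensors are typically fused with additional sensors or motion models.
print(f"heading error after {T:.0f} s: {est_heading[-1] - true_heading[-1]:.3f} rad")
```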

This Special Session on Advances in Motion Estimation using Inertial Sensors features contributions describing recent developments in the use of inertial sensors, with a focus on localisation, calibration, and biomedical applications. New requirements in applications call for advances in motion estimation using inertial sensors, hence deserving a forum at FUSION 2022.


SS7: Multiple Sensor Data Processing for Tracking, Classification and Intentionality Prediction
Lyudmila Mihaylova, Wenwu Wang, Simon Godsill, Nidhal Bouaynaya

The focus of this special session is on current challenges, recent progress, and the outlook on multi-sensor data processing methods for tracking, classification, and intentionality prediction. Work with different methods is welcome, from Bayesian methods (including particle filters, Markov chain Monte Carlo methods, and Gaussian process methods) to deep learning, showing their potential to solve challenging problems involving various uncertainties. Characterisation of the impact of uncertainties on the proposed solutions is welcome, and demonstration of the robustness of the methods is a valued aspect.


SS8: Wearable technology for sensing and perception
Jesper Jensen, Martin Skoglund

The quest for more ecologically valid research is motivated by the need to bridge the gap between laboratory-controlled experiments and real-life scenarios. Recent advances in sensor miniaturization and computational efficiency open up novel applications for wearable perception and assistive systems. Multimodal sensor integration enables scene- and motion-aware systems, which are particularly powerful when combined with modern estimation techniques. These technologies are already necessary in AR applications and could play an equally important role in future personalized and intention-controlled assistive devices.

In this special session the focus is on wearable/portable sensor technology, such as microphone arrays, cameras, inertial sensors, EEG, eye-tracking, pulse sensors, and more. We are highly interested in research on applications, platforms, algorithms, theory, perception, assistive support, objective assessment, and paradigms that may be applicable in more ecologically valid scenarios, going beyond traditional laboratory and clinical testing.


SS9: Estimation and Fusion for Navigation
Jindrich Dunik, Pau Closas, Zak M. Kassas, Michael S. Braasch

The development of modern navigation algorithms is closely tied to the advent of state estimation and data fusion methods. State estimation methods provide a valuable tool to infer time-varying, unknown navigation quantities (e.g., in the form of the position, velocity, or attitude and heading of a moving object) from a set of indirectly related and noisy measurements, as well as a priori information on the dynamics of the object, all of which are tied together through a state-space model formulation. Data fusion methods can be seen as a further extension of state estimation methods, where multiple estimates or sources of information are merged to obtain a “global” estimate with superior performance.
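
As a minimal sketch of merging multiple estimates into a “global” one, the example below fuses two independent Gaussian estimates of the same state by information-weighted averaging. This is one common fusion rule chosen for illustration; the numbers and the interpretation of the two sources are assumptions of the sketch.

```python
import numpy as np

def fuse_gaussian_estimates(x1, P1, x2, P2):
    """Fuse two independent Gaussian estimates (x1, P1) and (x2, P2)
    of the same state by adding their information (inverse covariances)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)              # fused covariance shrinks
    x = P @ (I1 @ x1 + I2 @ x2)             # information-weighted mean
    return x, P

# Two position estimates (e.g., from GNSS and odometry) with different accuracy
x_a, P_a = np.array([10.2, 4.9]), np.diag([4.0, 4.0])
x_b, P_b = np.array([ 9.8, 5.3]), np.diag([1.0, 1.0])
x_f, P_f = fuse_gaussian_estimates(x_a, P_a, x_b, P_b)
print(x_f, np.diag(P_f))   # fused estimate leans toward the more accurate source
```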

Navigation algorithms are core components in a wide range of applications and devices in today’s society, including (autonomous) transportation, wearables, robotics, and space exploration, to name a few. As such, current and envisioned navigation algorithms are required to process measurements from a broad variety of heterogeneous sensors (including, for instance, inertial sensors, satellite navigation, signals of opportunity, altimeters, LiDARs, star trackers, and terrain and other maps) to provide high-quality navigation estimates with predefined levels of accuracy, integrity, availability, and continuity. To fulfill the stringent requirements on these estimates, novel state estimation, data fusion, and system identification methods must be designed and employed. In parallel, methods should be kept computationally feasible so as to process (nearly optimally) all the available information in (possibly) real time, for which model assumptions are typically made to deal with the complexity associated with the underlying nonlinear/non-Gaussian models. In this context, advanced state estimation methods can take advantage of the steady and fast developments in the areas of machine learning and artificial intelligence.


SS10: Directional Estimation
Florian Pfaff, Kailai Li, Uwe Hanebeck

Many estimation problems of practical relevance include the problem of estimating directional quantities, for example, angular values or orientations. However, conventional filters like the Kalman filter assume Gaussian distributions defined on R^n. This assumption neglects the inherent periodicity present in directional quantities. Consequently, more sophisticated approaches are required to accurately describe the circular setting.
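
A minimal sketch of the periodicity issue: averaging two headings near the ±π boundary as if they lived on the real line gives a direction that points the wrong way, whereas the circular mean computed via unit vectors does not. The angles are illustrative and assumed to be in radians.

```python
import numpy as np

angles = np.array([np.pi - 0.1, -np.pi + 0.1])     # two headings ~0.2 rad apart

naive_mean = angles.mean()                          # 0.0, i.e. the opposite direction
circular_mean = np.arctan2(np.sin(angles).mean(),
                           np.cos(angles).mean())   # ~pi, the correct average heading

print(naive_mean, circular_mean)
```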

This Special Session addresses fundamental techniques, recent developments, and future research directions in the field of estimation involving directional and periodic data. It is our goal to bridge the gap between theoreticians and practitioners. Thus, we include both applied and theoretical contributions to this topic.


SS11: Extended Object and Group Tracking
Kolja Thormann, Marcus Baum, Johannes Reuter, Antonio Zea, Uwe D. Hanebeck, Tim Baur

Traditional object tracking algorithms assume that the target object can be modeled as a single point without a spatial extent. However, there are many scenarios in which this assumption is not justified. For example, when the resolution of the sensor device is high relative to the spatial extent of the object, a varying number of measurements can be received, originating from points on the entire surface or contour or from spatially distributed reflection centers. Furthermore, a collectively moving group of point objects can be seen as a single extended object because of the interdependency of the group members.
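
As a minimal sketch of the measurement situation described above, the example below generates a varying (Poisson-distributed) number of noisy detections from points scattered over an elliptical object extent, which is what an extended object tracker has to process per scan. The ellipse parameters, detection rate, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_extended_object_measurements(center, axes, angle, noise_std, mean_count=8):
    """Draw a varying number of detections from points spread over an
    elliptical extent, each corrupted by additive sensor noise."""
    n = rng.poisson(mean_count)                      # number of detections this scan
    r = np.sqrt(rng.uniform(size=n))                 # scatter points over the ellipse
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    pts = np.column_stack([axes[0] * r * np.cos(theta),
                           axes[1] * r * np.sin(theta)])
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])                  # rotate extent into world frame
    return center + pts @ R.T + noise_std * rng.standard_normal((n, 2))

Z = sample_extended_object_measurements(center=np.array([50.0, 20.0]),
                                        axes=(4.0, 2.0), angle=0.3, noise_std=0.5)
print(Z.shape)   # (n, 2): one scan may contain several measurements of one object
```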

This Special Session addresses fundamental techniques, recent developments, and future research directions in the field of extended object and group tracking. This Special Session has been organized annually at the FUSION conference since 2009 in Seattle.


SS12: Localization for Autonomous Systems
Gustaf Hendeby, Magnus Oskarsson, and Henk Wymeersch

Autonomous systems are currently a hot topic with numerous applications. A strong driving force is the automotive industry and its goal of putting self-driving cars on our roads. However, this is only the tip of the iceberg: across industries there is a push to replace monotonous, dangerous, or simply expensive manual labor with autonomous systems wherever possible. The challenges are many and differ between applications; however, a common key component is situational awareness, which an autonomous system needs in order to interact with its surroundings. To achieve this, localization is crucial. For a long time, global navigation satellite systems (GNSS) have been the preferred solution; however, GNSS solutions can be expensive or even fail to deliver the required accuracy due to unfavorable environments such as urban canyons, complete signal blockage, or intentional interference (e.g., jamming and spoofing). Alternatives to GNSS exist, but they are both less commonly used and less well developed, and further research is needed to provide the localization needed for autonomous systems to reach their full potential.

This special session highlights the need for autonomous systems to be able to localize without GNSS, and it presents research on alternative localization solutions suitable for autonomous systems. This proposal features five committed papers describing research on methods utilizing different sensor modalities that can all be used as (part of) GNSS-free localization solutions. The hope is to inspire the research and development of robust, scalable, and accurate localization solutions suitable for autonomous systems.


SS13: Quantum Algorithms for Data Fusion and Resources Management
Wolfgang Koch, Fred Daum

Quantum algorithms for data fusion may become game changers as soon as quantum processing kernels embedded in hybrid processing architectures with classical processors exist. While emerging quantum technologies directly apply quantum physics, quantum algorithms do not exploit quantum physical phenomena as such, but rather use the sophisticated framework of quantum physics to deal with “uncertainty”. Although the link between mathematical statistics and quantum physics has long been known, the potential of physics-inspired algorithms for data fusion has only just begun to be realized. While the implementation of quantum algorithms is to be considered on classical as well as on quantum computers, the latter are anticipated to be well-adapted “analog computers” for solving data fusion and resource management problems at unprecedented speed. While the development of quantum computers cannot be taken for granted, their potential is nonetheless real and has to be considered by the international information fusion community.


SS14: Sensor Models and Calibration Techniques
Jannik Springer,  Marc Oispuu,  Wolfgang Koch

Modern fusion algorithms process vast amounts of data from numerous different active and passive sensors. The sensor model, linking the physical phenomenon to the output signal of the sensor, is of utmost importance. Fusion algorithms often attempt to account for sensor errors that originate from oversimplified models or wrongly calibrated sensors. Naturally, simple models cannot fully capture the complex sensor response, and there is a trade-off between the performance and the complexity of a model. However, with increasing complexity there tend to be more parameters, and it is the purpose of calibration to determine any unknown model parameters, reducing the mismatch between the actual and the modeled response of a sensor. The burden of calibrating a sensor upfront can be tremendous, and self-calibration techniques that mitigate model mismatches during sensor operation are highly desirable. The eminent importance of adequate sensor models and (self-)calibration techniques should be considered by the international information fusion community.
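
As a minimal sketch of what estimating unknown model parameters can look like, the example below fits a simple affine sensor model (scale and bias) by least squares from reference measurements and then inverts it to reduce the model mismatch. The affine model and the data are illustrative assumptions; real sensor models are usually far richer.

```python
import numpy as np

# Hypothetical calibration data: known reference stimuli and raw sensor readings
reference = np.array([0.0, 5.0, 10.0, 15.0, 20.0])   # ground-truth input
raw = np.array([0.4, 5.6, 10.9, 16.1, 21.3])         # measured output

# Fit the affine sensor model raw ≈ scale * reference + bias by least squares
A = np.column_stack([reference, np.ones_like(reference)])
(scale, bias), *_ = np.linalg.lstsq(A, raw, rcond=None)

# Calibrated output: invert the fitted model to correct future readings
calibrated = (raw - bias) / scale
print(scale, bias)
print(np.abs(calibrated - reference).max())   # residual error after calibration
```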


SS15: Evaluation of Technologies for Uncertainty Reasoning
Paulo Costa, Kathryn Laskey, Anne-Laure Jousselme, Erik Blasch, Pieter DeVilliers, Gregor Pavlin, Juergen Ziegler, Claire Laudy

The ETUR Session is intended to report the latest results of ISIF’s ETURWG, which aims to bring together advances and developments in the area of evaluation of uncertainty representation. The ETURWG special sessions started at Fusion 2010 and have consistently attracted between 30 and 50 attendees. While most attendees are ETURWG participants, new researchers and practitioners interested in uncertainty evaluation have been attending the sessions as well.


SS16: Intelligence for situation understanding and sense-making
Lauro Snidaro, Jesus Garcia, Kellyn Rein

The exploitation of all relevant information originating from a growing mass of heterogeneous sources, both device-based (sensors, video, etc.) and human-generated (text, voice, etc.), is a key factor for producing a timely, comprehensive, and accurate description of a situation or phenomenon in order to make informed decisions. Even when exploiting multiple sources, most fusion systems are developed for combining just one type of data (e.g. positional data) in order to achieve a certain goal (e.g. accurate target tracking), without considering other relevant information (e.g. current situation status) from other abstraction levels.

The goal of seamlessly combining information from diverse sources including HUMINT, OSINT, and so on exists only in a few narrowly specialized and limited areas. In other words, there is no unified, holistic solution to this problem.

Processes at different levels generally work on data and information of different nature. For example, low-level processes could deal with device-generated data (e.g. images, tracks, etc.), while high-level processes might exploit human-generated knowledge (e.g. text, ontologies, etc.). The overall objective is to enhance sense-making from the information collected from multiple heterogeneous sources and processes, with the goal of improved situational awareness and intelligence, including topics such as sense-making of patterns of behaviour, global interactions and information quality, and the integration of sources of data, information, and contextual knowledge.

The proposed special session will bring together researchers working on fusion techniques and algorithms often considered to be different and disjoint. The objective is thus to foster the discussion of and proposals