Plenary Speaker, Tuesday July 5, 09.00-10.00:
Kathryn Blackmond Laskey



Title: High Level Information Fusion: Past, Present, and Future

The term “high-level fusion” had its origin in the mid-1980s with the model introduced by the Data Fusion Subpanel of the Joint Directors of Laboratories (JDL). Lower levels of the JDL model address detecting, identifying, and tracking individual objects. Higher levels are concerned with characterizing situations involving multiple objects and their interrelationships, understanding causal relationships and complex behavior patterns among objects, assessing the intentions of actors and how they shape behavior and the evolution of situations over time, and managing resources, sensors, and user interaction.

While low-level fusion is a mature discipline with well-established theory and methods, the story is different for high-level fusion, where researchers and practitioners face a host of thorny theoretical and practical challenges. For starters, there is no universally accepted definition of what the term “high-level fusion” encompasses. The original JDL model has been extended and revised by different groups in different ways, drawing on divergent theories, paradigms, and representations. The problems addressed by high-level fusion are intrinsically more challenging than those of the lower levels. As a consequence, many high-level fusion products, such as situation displays, automated decision support, and predictive analysis, still rely heavily on human cognition. How to identify which aspects are amenable to automation, how to integrate smoothly with human operators, how to produce products that decision-makers can understand and use, and how to represent domain semantics to support automation and interoperation are all open research issues.

A great deal of progress has been made since the early days, but much of it has come in domain-specific applications that employ idiosyncratic methods and representations. Interoperation among systems with diverse representations and reasoning methods is another major challenge, and a key question is the appropriate level of standardization. Given the complexity of the problems and the differing requirements of diverse applications, no overarching “one size fits all” theory of high-level fusion is likely to gain acceptance. Nevertheless, there are important classes of problems and methods that have been and will continue to be widely employed. Thus, an ecosystem of high-level fusion theories and methods may be a better vision for the future than a single approach or theory. This talk looks back over the decades of research and applications in high-level fusion, reviews the progress and open questions, and discusses future directions in the field.
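For orientation, the sketch below enumerates the JDL levels as they are commonly presented; the exact numbering and wording vary across revisions of the model, so the descriptions are illustrative rather than authoritative.

    # JDL fusion levels as commonly enumerated; names and numbering
    # vary across revisions of the model, so treat these as illustrative.
    JDL_LEVELS = {
        0: "Sub-object assessment: signal- and feature-level processing",
        1: "Object assessment: detecting, identifying, tracking individual objects",
        2: "Situation assessment: relationships among multiple objects",
        3: "Impact assessment: intentions and projected evolution of situations",
        4: "Process refinement: management of sensing and fusion resources",
        5: "User refinement: management of human-system interaction",
    }

    # The talk's notion of "high-level fusion" corresponds roughly
    # to level 2 and above.
    def is_high_level(level: int) -> bool:
        """Return True if a JDL level falls in the high-level fusion range."""
        return level >= 2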

Bio:

Kathryn Blackmond Laskey is Professor in the Systems Engineering and Operations Research Department, Director of the Center for Resilient and Sustainable Communities, and Associate Director of the C4I and Cyber Center at George Mason University. She teaches and performs research in information fusion, decision-theoretic knowledge representation and reasoning methodology, Bayesian statistics, decision support, and semantically rich probabilistic knowledge representation. A major focus of her research has been knowledge representation and inference for higher-level multi-source fusion to support situation awareness and decision support. She developed multi-entity Bayesian networks (MEBN), a language and logic that extends classical first-order logic to support probability. She was a key contributor to the development of the PR-OWL language for representing uncertainty in OWL ontologies. She co-chaired the W3C’s Uncertainty Reasoning for the World Wide Web Incubator Group (URW3-XG), which investigated aspects of uncertainty that need to be standardized for web-based applications. She was chair of the Board of Directors of the Association for Uncertainty in Artificial Intelligence and serves on the Board of the Washington Metropolitan Area chapter of INCOSE. She currently serves on the Board of Directors of ISIF, is a regular contributor to the Fusion conference, and co-chaired the Fusion 2015 conference. She has organized numerous workshops and conferences, and has served on boards and committees of the National Academy of Sciences.


Plenary Speaker, Wednesday July 6, 09.00-10.00:
Thomas Lunner



Title: Sensor fusion in future AR glasses

An Augmented Reality (AR) platform is a system of interdependent technologies (e.g., audio, eye-tracking, computer vision), which enable digital objects to be placed in our real-world surroundings. These digital objects may provide assistance by overlaying enhancements on natural auditory objects in the scene, but the classic hearing-device problems of estimating listener effort and identifying signals of interest remain. An AR platform in the form of glasses could support a large number of widely spaced microphones, forward- and eye-facing cameras, inertial measurement units and other motion-tracking hardware, and many other sensors. These sensors could be used to shed light on which sounds a listener wishes to hear, and whether they are having difficulty hearing them, but only if this information is optimally combined with a deeper understanding of natural conversation behavior.

To this end, our team has taken advantage of an AR glasses platform to create a number of egocentric datasets capturing conversation in difficult listening situations, using the types of data that future AR hearing devices could be able to capture. In a recent study, we used this approach to examine the effects of noise level and hearing loss on communication behaviors. Communicators with and without hearing loss were recruited in groups (i.e., they were familiar with one another) and participated in a one-hour conversation in a mock restaurant space while background noise levels varied randomly. A glasses research device, Aria, collected egocentric data with a variety of sensors (microphones, forward-facing cameras, eye-tracking cameras, inertial measurement units), combined with close-talk microphones. Hypotheses were established a priori about how behavior would change with increases in noise level and/or hearing loss, and concerned metrics derived from voice activity, head motion and position, and eye gaze. The data are being analyzed using human and automated annotations, combined with statistical and machine-learning approaches, with the eventual goal of leveraging these statistics to better understand which signals listeners wish to hear and how much difficulty they are having during conversations. (Thomas Lunner, W. Owen Brimijoin, Nava Balsam, Christi Miller)
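As an illustration of the kind of fusion involved, here is a minimal, hypothetical sketch (not the study's actual pipeline; all names and weights are invented for illustration) of combining per-frame voice-activity and eye-gaze streams into a per-talker attention score:

    import numpy as np

    def attention_scores(vad, gaze_hits, w_vad=0.5, w_gaze=0.5):
        """Score which talker a listener is attending to (illustrative only).

        vad:       (frames, talkers) array, 1 where a talker is speaking
        gaze_hits: (frames, talkers) array, 1 where gaze falls on a talker
        Returns a per-talker score in [0, 1]; higher = more likely attended.
        """
        vad = np.asarray(vad, dtype=float)
        gaze_hits = np.asarray(gaze_hits, dtype=float)
        # Fraction of frames each talker is speaking / being looked at.
        return w_vad * vad.mean(axis=0) + w_gaze * gaze_hits.mean(axis=0)

    # Example: three talkers over four frames; talker 0 scores highest.
    vad = [[1, 0, 0], [1, 0, 1], [0, 0, 1], [1, 0, 0]]
    gaze = [[1, 0, 0], [1, 0, 0], [0, 1, 0], [1, 0, 0]]
    print(attention_scores(vad, gaze))  # [0.75, 0.125, 0.25]

A real system would need to weight and time-align many more streams (head motion, turn-taking, acoustic difficulty), which is exactly the fusion problem these datasets are designed to support.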

Bio:

Dr Thomas Lunner is a research scientist and research manager at Meta Reality Labs Research, where he leads research on superhuman hearing for Augmented Reality and Virtual Reality.

Previously, Dr Lunner led research at the hearing aid manufacturer Demant/Oticon and its research centre Eriksholm, as well as being an adjunct professor in automatic control at Linköping University. Furthermore, he was an adjunct professor in Hearing Technologies at the Technical University of Denmark. His PhD in the 1990s laid the groundwork for the first commercial digital hearing aid; his patented technology was sold to the world-leading hearing aid company Demant/Oticon. In 1995 the patented digital signal processing algorithms led to the first digital hearing aid, Digifocus. The DSP was used in several successive hearing aid models from 1995 until 2013, fitted to millions of hearing aid users worldwide. Two of the models were awarded the European Union's prestigious IST Grand Prize, in 1996 and in 2003.

Dr Lunner helped initiate the new research field of cognitive hearing science together with researchers at Linköping University. The cognitive hearing science group at Linköping University was awarded a 10-year research grant from the Swedish Research Council (2008-2018). Dr Lunner also received three prestigious European Union research program grants, the most notable being the Cognitive Control of a Hearing Aid (COCOHA) project, a four-year program gathering five European institutions. In this project, sensor fusion was introduced to hearing aids, with both eye movements, measured by electro-oculography, and electrical brain activity, measured by EEG, used to control signal processing in the hearing aids.

Thomas was named alumnus of the year 2016 at Linköping University and in 2017 was named the first Fellow at Demant for outstanding performance and contributions in research and development. His h-index is 48.


Plenary Speaker, Thursday July 7, 09.00-10.00:
Magnus Egerstedt

Title: Assured Autonomy, Self-Driving Cars, and the Robotarium

Long-duration autonomy, where robots are deployed over longer time scales outside of carefully curated labs, is fundamentally different from its “short-duration” counterpart in that whatever might go wrong sooner or later will go wrong. This means that stronger performance guarantees are needed. For instance, in the US a road fatality occurs roughly every 100 million miles, which means that for an autonomous vehicle to live up to its promise of being safer than human-driven vehicles, that is the benchmark against which it must be compared. But a lot of strange and unpredictable things happen on the road over a 100-million-mile journey, i.e., rare events are suddenly not so rare, and the tails of the distributions must be accounted for by the perception and planning algorithms. The resulting notion of “assured autonomy” has implications for how goals and objectives should be combined, how information should be managed and fused, and how learning processes should be endowed with safety guarantees. In this talk, we will discuss these issues and the current state of the autonomy landscape, instantiated on the Robotarium, a remotely accessible swarm robotics lab that has been in (almost) continuous operation for over three years and has hosted over 5,000 autonomy missions.
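A back-of-envelope calculation shows why this benchmark is so demanding; the sketch below assumes, purely for illustration, that fatalities follow a Poisson process at the human-driver rate:

    import math

    RATE = 1 / 100_000_000  # approx. human-driver fatality rate per mile

    def p_zero_fatalities(miles, rate=RATE):
        """Probability of observing zero fatalities in `miles` of driving."""
        return math.exp(-rate * miles)

    # A fleet exactly as safe as human drivers completes 100 million
    # fatality-free miles about 37% of the time, so such a record is
    # weak evidence of superiority ...
    print(p_zero_fatalities(100e6))  # ~0.37
    # ... and even 300 million fatality-free miles happens ~5% of the
    # time by chance, which is why rare events dominate the problem.
    print(p_zero_fatalities(300e6))  # ~0.05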

Bio:

Dr. Magnus Egerstedt is the Dean of Engineering and a Professor in the Department of Electrical Engineering and Computer Science at the University of California, Irvine. Prior to joining UCI, Egerstedt was on the faculty at the Georgia Institute of Technology. He received the M.S. degree in Engineering Physics and the Ph.D. degree in Applied Mathematics from the Royal Institute of Technology, Stockholm, Sweden, and the B.A. degree in Philosophy from Stockholm University, and was a Postdoctoral Scholar at Harvard University. Dr. Egerstedt conducts research in the areas of control theory and robotics, with particular focus on the control and coordination of multi-robot systems. He is the President-Elect of the IEEE Control Systems Society, a Fellow of IEEE, IFAC, and AAIA, and a Foreign Member of the Royal Swedish Academy of Engineering Sciences. He has received a number of teaching and research awards, including the Ragazzini Award, the O. Hugo Schuck Best Paper Award, the Georgia Tech Outstanding Doctoral Advisor Award, and the Alumni of the Year Award from the Royal Institute of Technology.