
Workshops & Tutorials

As part of the pre-conference program on June 10 and 11, a series of tutorials, workshops, creative events, and a doctoral consortium will take place. Some of these events are also part of the HHAI Summer School. A description of each event can be found below. Check the websites of the workshops for the respective Calls for Papers.

All morning sessions take place from 09.00 to 13.00, with a coffee break at 10.30. All afternoon sessions take place from 14.00 to 18.00, with a coffee break at 15.30.

Tutorials
Interactive Concept-based Search

Organizer(s): Amiram Moshaiov

Description: In Concept-based Search (CbS), the term conceptual solution, or in short concept, refers to a pre-defined subset of feasible solutions that is meaningful to the decision-makers. In CbS problems, predefined meaningful subsets of particular solutions, i.e., predefined concepts, are explored to reach informative results at both the conceptual level and the level of particular solutions. Interactive CbS (I-CbS) forms a synergy between humans' tacit knowledge and cognitive abilities at the conceptual level and the search capabilities of computers at the detailed level of particular solutions.

Most studies on CbS and I-CbS have focused on multi-objective problems and multi-criteria decision-making. In such a context, the term interactivity commonly refers to biasing the search according to interactive articulations of objective preferences. In contrast, multi-criteria I-CbS is not restricted to such a bias; interactivity in I-CbS could also be implemented with respect to preferences towards concepts and sub-concepts, i.e., preferences regarding the decision space.

This tutorial will provide insight into the unique features of I-CbS and its potential real-life applications. It aims to introduce the HHAI community of researchers and practitioners to past and current research on CbS and I-CbS and to provide an overview of potential future research directions concerning I-CbS.

Website: https://www.eng.tau.ac.il/~moshaiov/ 

Human-AI mutual promotion for emotion and cognition understanding

Organizer(s): Haoyu Chen, Andy Nguyen, Yang Liu and Sanna Järvelä

Description: This tutorial at HHAI 2024 explores the synergistic potential between AI and human understanding of emotion and cognition. It aims to reveal how AI can enhance our understanding of human emotional and cognitive behaviors and how these insights can, in turn, guide the creation of more nuanced AI systems.

Participants will engage with a mix of theoretical foundations, practical AI applications, and interactive exercises, highlighting the interdisciplinary nature of this field. We will invite experts in related fields to offer new perspectives on integrating AI with human emotional and cognitive studies, especially machine learning and the learning sciences, fostering innovative solutions for empathetic and intelligent human-AI interactions.

Website: https://cv-ac.github.io/HAECU-HHAI2024/

An Introduction to Computational Argumentation

Organizer(s): Elfia Bezou-Vrakatseli, Daphne Odekerken and Andreas Xydis

Description: When faced with incomplete or inconsistent information, humans reason by using argumentation: by providing arguments for and against a topic and examining the relationships between these arguments, one can decide which of them are acceptable. Within artificial intelligence (AI), the field of computational argumentation refers to the use of computational methods and tools to construct, analyze, and evaluate arguments in domains such as law, politics, and healthcare, notably for aiding transparent and interactive decision-making. Thanks to its logical foundations and rule-governed mechanisms, argumentation provides appropriate support for computational reasoning engines, while the dialectical nature of argumentation and its similarity to common-sense reasoning makes its concepts easier for users to understand.
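As a minimal, illustrative sketch (not part of the tutorial materials), the idea of deciding which arguments are acceptable can be captured by computing the grounded extension of an abstract argumentation framework in the sense of Dung: starting from the unattacked arguments, one repeatedly adds every argument that the current set defends.

```python
# Sketch: grounded extension of an abstract argumentation framework.
# An argument is defended by a set S if S attacks every attacker of it;
# iterating this "characteristic function" from the empty set yields its
# least fixpoint, the grounded extension.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of (arguments, attacks)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    s = set()
    while True:
        # arguments all of whose attackers are attacked by some member of s
        defended = {a for a in arguments
                    if all(any((d, b) in attacks for d in s)
                           for b in attackers[a])}
        if defended == s:
            return s
        s = defended

# Toy example: a attacks b, b attacks c. Then a is unattacked (accepted),
# b is defeated by a, and c is defended by a (also accepted).
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']
```

The argument names and the example attack relation here are hypothetical; the tutorial itself covers a broader range of semantics and structured argumentation formalisms.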

This tutorial delves into the domain of computational argumentation. Designed as a half-day session, the tutorial will be highly interactive, blending traditional lectures with engaging elements like games and demonstrations. Attendees will gain practical insights into how computational argumentation enhances the capabilities of AI systems. Our tutorial is targeted at students, researchers, and practitioners with an interest in human-centered AI, with no prior knowledge of computational argumentation required.

Website: https://ohaai.github.io/tutorial.html

Collaborative HAI-Learning through Conceptual Exploration

Organizer(s): Bernhard Ganter, Tom Hanika, Johannes Hirth and Sergei Obiedkov

Description: There are various approaches towards synergetic human-AI cooperation with different objectives and frameworks. A particularly interesting purpose in science and beyond is the (structured) exploration of domain knowledge. In order to unravel its relational properties and the governing rules, various methods have been proposed in the literature.

In this tutorial we give an introduction to a very successful example of such a method: Conceptual Exploration. This is an AI-orchestrated knowledge acquisition process that can be performed by a human, an AI, or a combination of both. It enables a group of humans and AI systems to
i) discover previously unknown knowledge,
ii) identify common knowledge among the members of the group, and
iii) pinpoint contradictory knowledge.
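To give a rough feel for the process (this is an illustrative sketch, not the tutorial's implementation), conceptual exploration can be seen as a loop in which the system proposes implications that hold in the examples collected so far, and an expert, human or AI, either confirms each implication or supplies a counterexample. Real systems use the next-closure algorithm over arbitrary premises; the toy below only checks single-premise implications, and the oracle and attribute names are hypothetical.

```python
# Toy sketch of the attribute-exploration loop behind Conceptual Exploration.
from itertools import permutations

def holds(context, premise, conclusion):
    """An implication holds if every object with the premise has the conclusion."""
    return all(conclusion in attrs
               for attrs in context.values() if premise in attrs)

def explore(context, attributes, oracle):
    """Propose implications valid in the current examples; the oracle either
    confirms them (True) or returns a counterexample (name, attributes)."""
    accepted = []
    for x, y in permutations(attributes, 2):
        while holds(context, x, y):
            verdict = oracle(x, y)
            if verdict is True:
                accepted.append((x, y))     # expert confirms: add to the theory
                break
            name, attrs = verdict           # expert refutes: extend the examples
            context[name] = attrs
    return accepted

# Hypothetical domain: numbers with two attributes. "divisible_by_4 -> even"
# is valid; "even -> divisible_by_4" is refuted by the counterexample 2.
def oracle(x, y):
    if (x, y) == ("divisible_by_4", "even"):
        return True
    return ("2", {"even"})

context = {"4": {"even", "divisible_by_4"}}
theory = explore(context, ["even", "divisible_by_4"], oracle)
print(theory)           # [('divisible_by_4', 'even')]
print("2" in context)   # True: the counterexample joined the example set
```

The outcome illustrates points i)–iii) above: a confirmed implication (shared knowledge), and a counterexample that corrects an over-general hypothesis.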

Among other things, we will present the necessary mathematical foundations, discuss the algorithmic properties and complexities, and show various examples of successful applications. As a special highlight, we will try to implement the method together with the participants during the tutorial, using the knowledge graph Wikidata.

Website: https://www.kde.cs.uni-kassel.de/conexp2024/

Knowledge Engineering for Hybrid Intelligence (KE4HI)

Organizer(s): Ilaria Tiddi, Victor de Boer and Stefan Schlobach

Description: Knowledge Engineering is a methodology that was developed for Expert Systems to allow the elicitation, structuring, formalization, and operationalization of the information, knowledge, and tasks involved in knowledge-intensive applications. Methods such as CommonKADS supported engineers in clarifying the structure of complex applications.

Website: TBA

Tutorial on Contextualizing and Executing Robot Manipulation Plans Using Web Knowledge

Organizer(s): Michael Beetz, Philipp Cimiano, Michaela Kümpel, Jan-Philipp Töberg, Ilaria Tiddi and Enrico Motta

Description: One of the visions of AI-based robotics is household robots that can autonomously handle a variety of meal-preparation tasks to support humans in their daily chores. Based on this scenario, we present a best-practice tutorial on how to create actionable knowledge graphs that a robot can use for the execution of task variations, in the example domain of cutting actions. We implemented a solution for this task that integrates all necessary software components in the framework of the robot control process. In this tutorial, we focus on knowledge acquisition, knowledge representation and reasoning, and simulating robot action execution, bringing these components together into a learning environment that – in the extended version – introduces the whole control process of Cognitive Robotics.

In particular, the tutorial will detail the concepts a knowledge graph should include for robot action execution, how web knowledge can be automatically acquired for the domain of cutting fruits, and how the created knowledge graph can be used to let robots execute tasks like slicing a cucumber or quartering an apple. The learning environment follows an immersive approach, using a physics-based simulation environment for visualization purposes that helps to illustrate the concepts taught in the tutorial. Using Jupyter Notebooks in a Docker environment, our learning environment is easily accessible without having to install different software packages and is independent of the learners’ technical setup.

Website: https://kr3-workshop.net/tutorial-program/

Introduction to Human Hybrid AI

Organizer(s): John Shawe-Taylor, Frank van Harmelen, Virginia Dignum, Frank Dignum

Description: The concept of Human-Centric Artificial Intelligence (HCAI) has received a great deal of attention recently and with it has emerged a vision of a bright future for Artificial Intelligence within human society. There is, however, a danger that such an evocative vision may create unrealistic expectations that are not consistent with what the technology can be expected to deliver, quite apart from the risk of different stakeholders having different understandings of what is intended in the first place. We use the term Human-Centric AI for a new class of AI systems that are intended to collaborate with people. This tutorial aims to make the concept sufficiently precise so that we can agree about the overarching goals and align around a research agenda that we propose for bringing the HCAI agenda to fruition.
To this end, the tutorial focuses on what HCAI systems could, and in some cases should, be expected to do, rather than how they will achieve any particular functionality. The tutorial argues for the integration of different disciplines that study humans and society, and remains neutral on the implementation question.
Using an evocative concept such as Human-Centric AI hopefully inspires creative thinking. One way to stimulate this creative thinking is to try to make the concept precise without resorting to a binary definition. In the spirit of keeping our definition open-ended, we will present two metaphors that highlight two important facets of HCAI. The first is the team metaphor: consider a team of humans collaborating on a project and imagine that one member of the team is in fact an AI system. The second is the body metaphor: our bodies contain many subsystems that are at our disposal for acting in the world.
Although perhaps not entirely easy to reconcile, and not intended as prescriptive or exhaustive, both metaphors provide useful inspiration on different aspects of HCAI systems. A future theory of HCAI systems will have to capture this rich variety of functionalities.

Website: TBA

Creative Events
The What-is-HI Competition

Organizer(s): Davide Dell’Anna, Bernd Dudzik, Davide Grossi, Catholijn Jonker, Pradeep Kumar Murukannaiah and Pinar Yolum

Description: Hybrid Intelligence (HI) is an emerging system design paradigm in which artificial intelligence (AI) augments, as opposed to replacing, human intelligence. Although there is an increasing emphasis on the idea of HI in the AI literature, there is a lack of systematic methods and metrics for developing HI systems. We propose a creative event, in the form of a competition, The What-is-HI Competition (HI Comp), aimed at supporting the development of high-quality HI (Human-AI) teams by exploring the possible benefits, risks, and consequences of collaboration between humans and AI systems.

The main task for the participants of the competition is to creatively formulate HI scenarios that yield the best- and worst-quality HI teams. HI Comp aims to push the state of the art in Hybrid Intelligence design and to generate a first repository of scenarios for researchers and practitioners to guide the development and evaluation of HI teams.

Website: https://hybrid-intelligence-competition.github.io/HI-Comp-2024-HHAI/

Workshops
AI in Africa and SDGs: Bridging Networks and Fostering Climate Action

Organizer(s): John Shawe-Taylor, Davor Orlic and Essa Mohamedali

Description: The “AI in Africa & SDGs: Bridging Networks and Fostering Climate Action” workshop, hosted by Naixus, aims to harness Artificial Intelligence (AI) for advancing Sustainable Development Goals (SDGs) with a focus on climate action in Africa. This event features a Half-Day Workshop exploring AI’s role in climate resilience, sustainable agriculture, and policy support for SDGs, alongside a Discovery Workshop facilitating brief, impactful presentations on AI innovations for SDGs. Designed to stimulate discussion, encourage collaboration, and showcase AI solutions, the workshop seeks to connect researchers, practitioners, policymakers, and communities across Africa. By highlighting successful AI projects and fostering knowledge exchange, this initiative endeavors to strengthen networks for AI and SDGs in Africa, outline actionable strategies for leveraging AI in climate action, and contribute to the achievement of SDGs through innovative technological applications. Open to a wide range of participants, the workshop is a step towards creating a sustainable future through the intersection of AI technology and sustainable development in Africa.

Website: https://naixus.net/

Stimulating cognitive engagement in hybrid decision-making: friction, reliance and biases

Organizer(s): Chiara Natali, Brett Frischmann and Federico Cabitza

Description: This workshop is intended to be the first of its kind in its discussion of Frictional AI, a novel concept that points to the development of strategies for more thoughtful, intentionally slower interactions between humans and AI. Through presentations and a roundtable, we will critically examine the trend of pursuing increasingly rapid and effortless interaction with AI, challenging the traditional view that human over-reliance on AI stems solely from inherent and unavoidable cognitive biases. Instead, we highlight the crucial role of designers and programmers in fostering user empowerment, skill enhancement, and responsibility.

Our approach advocates for a thoughtful balance in Human-AI interaction, harmonizing operational efficiency with the necessity for effective, ethical human knowledge work. At the heart of our discourse is the notion of ‘programmed inefficiencies’ or ‘frictional protocols’ in AI systems. These are intentionally integrated to engage users cognitively, fostering interactions that are mindful, even if they might be slower.

We welcome a diverse and interdisciplinary range of contributions: from innovative design principles and case studies that strike a balance between efficiency and cognitive engagement, to methodologies and governance solutions for assessing and reducing both over-reliance and under-reliance on AI systems.

Website: https://sites.google.com/view/frictional-ai/

Hybrid Intelligence for Health Care

Organizer(s): Chenxu Hao, Mark Neerincx, Myrthe Tielman, Jasper van der Waa and Maaike de Boer

Description: This full-day workshop on the topic of Hybrid Intelligence (HI) for health care aims to build an interdisciplinary research community for people who are interested in developing HI systems for health care and well-being.

The workshop will include two keynotes, lightning talks, and plenary discussions to address the challenges and requirements for HI for health care.

Website: https://ii.tudelft.nl/HI4HealthCare/web/

Imagine the AI Landscape after the AI Act

Organizer(s): Francesca Pratesi, Desara Dushi and Francesca Naretto

Description: The AI Act (AIA) is a landmark EU legislation to regulate Artificial Intelligence based on its capacity to cause harm. Like the EU’s General Data Protection Regulation (GDPR), the AIA is set to become the global benchmark for regulating Artificial Intelligence.
This workshop aims to analyze how this new regulation will shape the AI technologies of the future and their impact on our lives. We will cover issues such as the ability of the AIA requirements to be operationalized, privacy, fairness, and explainability by design, individual rights and AIA, generative AI, AI risk assessment, and much more.

The workshop will bring together legal experts, tech experts, and other interested stakeholders for constructive discussions. Its main goal is to help the community understand and reason about the implications of an AI regulation: what problems it solves, what problems it does not solve, and what problems it causes. We will also discuss the newly proposed amendments to the text of the AI Act and propose new approaches that may not have been tackled yet.

Website: http://iail2024.isti.cnr.it/


Communication in Human-AI Interaction: A Multi-Perspective Approach (CHAI)

Organizer(s): Jennifer Renoux, Jasmin Grosinger, Marta Romeo, Kiran M. Sabu, Kim Baraka and Victor Kaptelinin

Description: As Artificially Intelligent systems are becoming more and more present in our surroundings, our ways of interacting with them are also changing. From commercial chatbots to home assistants and robot companions, machines are progressively taking up the role of “communicators”, provided with their own agency, and able to interact with their human counterparts in new ways.

This workshop aims at gathering experts in fields relevant to the study of AI systems as communicators, including but not limited to Human-Computer Interaction, Artificial Intelligence, and Human-Robot and Human-AI Interaction. It will be organised as an interactive working group, where discussion will be prompted by poster presentations, networking sessions and a collaborative design activity.

Website: https://chai-workshop.github.io/


MULTITTRUST – Multidisciplinary Perspectives on Human-AI Team Trust

Organizer(s): Myrthe Tielman, Morgan Bailey, Francesco Frattolillo and Andre Meyer-Vital

Description: This workshop originates from the need to create a multidisciplinary research community of people who study the different perspectives and layers of trust dynamics in teams consisting of both humans and AI agents. Human-agent teamwork is no longer a topic of the future. With the increasing prominence of human-agent interaction in hybrid teams across diverse industries, several challenges arise that need to be addressed carefully. Within human teams, team members rely on trust to make decisions and to be willing to depend on their team. In addition, the multi-agent systems (MAS) community has been adopting trust mechanisms to support the decision-making of agents. Finally, in the last couple of years, researchers have been focusing on how humans trust AI agents and how such systems can be trustworthy.
But when we think of a team composed of both humans and agents, with recurrent interactions or not, how do all of these come together? Currently, we are missing approaches that integrate the prior literature on trust in teams across different disciplines. In this 3rd edition of the MultiTTrust workshop, we want to motivate the conversation across the different fields and domains.

Website: https://multittrust.github.io/3ed/

Workshop on Responsible Applied Artificial InTelligence (RAAIT)

Organizer(s): Maaike Harbers, Stefan Leijen, Pascal Wiggers, Marieke Peeters, Saskia Robben, Roland van Dierendonck, Fabian Kok, Sophie Horsman and Tiwánee van der Horst

Description: Artificial Intelligence (AI) increasingly affects the way people work, live, and interact. Over the past years, many high-level principles and guidelines for ‘responsible’ or ‘ethical’ AI have been developed, and a lot of theoretical research on responsible AI has been done. However, this work often fails to address the challenges that arise when applying AI in practice.

In this one-day workshop on Responsible Applied Artificial InTelligence (RAAIT), we aim to connect and share experiences with fellow researchers and AI practitioners who bring Responsible AI to practice. Our vision of RAAIT includes the design, development, and deployment of Responsible AI applications in a practical context, while considering ethical, societal and ecological aspects. After keynote presentations from both research and industry, and participants’ paper presentations, we will use the afternoon to collectively discuss Responsible AI in practice, focusing especially on research methodologies. To guide the session, we will utilize and refine the Responsible Applied AI framework developed by the RAAIT research community. We invite case studies, position papers, and research papers that address elements of a Responsible Applied AI practice.

Website: https://www.raait.nl/hhai2024

Exploring Tangible AI with Theory Instruments

Organizer(s): Jacob Buur, Mette Gislev Kjærsgaard, Ona Pirol and Jessica Sorenson

Description: This interactive half-day workshop aims to explore what role anthropological theories may play in designing AI beyond the computer screen, such as assistive devices and self-driving vehicles. Understanding how humans react to AI devices is a substantial challenge, but this is crucial knowledge for designers. Theories from the social sciences can help. Because theory discussions may be too ‘theoretical’ for fast-paced designers in industry, we have developed a set of Theory Instruments that turn discussions into playful, collaborative activities. The workshop investigates cases of Tangible AI that react and propose actions.

Read more about this workshop

An Ecology of AI – Reflections for Researchers 2nd Workshop

Organizer(s): Retno Larasati, Venetia Brown, Tracie Farrell, Soraya Kouadri Mostefaoui and Syed Mustafa Ali

Description: In the inaugural workshop on Ecology of Artificial Intelligence (EcAI) at HHAI 2023, we delved into the fluid nature of power within AI ecosystems. We scrutinized its measurement, its impact on diverse stakeholders, and its entanglement with data ownership. Our discussions underscored the urgency of understanding the nexus between power dynamics and data in AI systems. Rather than moral judgments, our focus shifted to how AI redistributes power. This perspective, advocated by Pratyusha Kalluri of the Radical AI Network, emphasizes the agency of those potentially affected by AI in shaping its trajectory. Particularly crucial is the examination of AI’s role in perpetuating societal inequalities, notably through industrialized racial capitalism.

In this second workshop, we aim to sustain this critical dialogue. Our objective remains unchanged: to foster an inclusive environment where stakeholders from diverse backgrounds can engage in candid discussions about the intersections of money, power, and influence in the AI landscape. Building on the success of our inaugural workshop, we aspire to convene a diverse cohort of researchers, educators, and practitioners to navigate the ethical complexities and power dynamics inherent in AI. Join us as we continue to probe the profound societal implications of AI technology.

Website: https://sites.google.com/view/ecai2024

RLHF, what is it good for?

Organizer(s): Adam Dahlgren Lindström, Petter Ericson, Leila Methnani, Roel Dobbe, Lea Krause, Dimitri Coelho Mollo, Karin Danielsson and Íñigo Martínez de Rituerto de Troya

Description: Reinforcement Learning with Human Feedback (RLHF) is used to fine-tune Large Language Models (LLMs) to ‘align to human values and preferences’ and improve the ‘harmlessness, helpfulness, and honesty’ of such models. RLHF has been credited for the successes seen in OpenAI’s ChatGPT, Anthropic’s Claude 2, and Meta’s Llama 2, to name a few.
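As a minimal numeric sketch of one ingredient of the RLHF pipeline (not any particular lab's implementation, and with illustrative reward values only), the reward model behind RLHF is typically trained on pairwise human preferences with a Bradley-Terry style objective: the loss is small when the model scores the human-preferred completion higher than the rejected one.

```python
# Sketch of the pairwise preference loss used to train RLHF reward models:
#   loss = -log sigmoid(r_chosen - r_rejected)
# where r_chosen and r_rejected are the scalar rewards the model assigns to
# the human-preferred and dispreferred completions of the same prompt.
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Negative log-likelihood that the chosen completion beats the rejected one."""
    gap = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-gap)))

# The loss shrinks as the reward gap favours the chosen completion:
print(round(preference_loss(2.0, 0.0), 4))  # 0.1269 (model agrees with label)
print(round(preference_loss(0.0, 2.0), 4))  # 2.1269 (model disagrees)
```

In the full pipeline this loss trains the reward model, which then provides the reward signal for reinforcement-learning fine-tuning of the language model; the workshop's critical questions concern what such aggregated pairwise judgments can and cannot capture.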

While RLHF has led to technical achievements in model performance, it is still early days; we lack a proper understanding of the importance of RLHF, and of feedback more broadly, in improving language technology. There is an increasing body of work criticising RLHF (see e.g. Casper et al., 2023; Hosking et al., 2023; Wei et al., 2023), suggesting that many of the non-technical issues, such as harmlessness, cannot be solved with this type of feedback, especially not on a global scale. As the scope widens to invite more critical perspectives from across several disciplines, the oversimplification of what the technique actually produces becomes more evident. Studying the method in an interdisciplinary fashion may allow us as researchers to course-correct and consider where RL from Feedback (RLF) can be applied such that it becomes truly useful.

This workshop aims to bring together researchers, industry practitioners, and policy makers to look at the ethical, legal, and societal aspects of how RLF is developed and deployed. We would like to open up a broader debate, both critical and more imaginative, of what forms of feedback we need to safeguard the development and use of LLMs, and what open requirements we can expect to see from advances in RLHF.

Website: https://rlhf-huh-wiigf.github.io/