
Workshops & Tutorials

As part of the pre-conference program on June 9 and 10, a series of tutorials, workshops, creative events, and a doctoral consortium will take place. Some of these events are also part of the HHAI Summer School. A description of each event can be found below. Check the websites of the workshops for the respective Calls for Papers.

All morning sessions take place from 09.00 to 13.00, with a coffee break at 10.30. All afternoon sessions take place from 14.00 to 18.00, with a coffee break at 15.30.

Room | Seats | Time | Day 1 (June 9) | Day 2 (June 10)
Aula Magna Storica | 100 | 9-13 | IAIL: The 4th Workshop on Imagining the AI Landscape after the AI Act and the DSA | AI, Labour and Society
Aula Magna Nuova | 150 | 9-13 | LLM-powered Simulations of Social Media Environments | Trustworthy and Collaborative AI
Aula Magna Nuova | 150 | 14-18 | Assessing the Impact of AI-driven Recommenders on Human-AI Ecosystems | The Intersection of Conversational AI and Human-Centered Design: A Practical Guide
Aula IV | 49 | 9-13 | CounseLLMe | 2nd Workshop on Law, Society, and Artificial Intelligence (LSAI): Interdisciplinary Perspectives on AI Safety
Aula V | 36 | – | – | –
Aula VI | 60 | 9-13 | Informing ML with Knowledge Engineering for Hybrid Intelligent Systems | Methods4HHAI – Characterizing HHAI as a Research Field: An Interactive Workshop
Aula VI | 60 | 14-18 | Data Storytelling with Python and AI | –

Room | Seats | Time | Day 1 (June 9) | Day 2 (June 10)
SNS 1 | 60 | 9-13 | SYNERGY – Designing and Building Hybrid Human–AI Systems | Supporting Human Behaviors using AI Technology: State of the Art, Challenges and Research Agenda
SNS 1 | 60 | 14-18 | Multimodal Interaction Analytics for Hybrid Intelligence | –
SNS 2 | 60 | 9-13 | MULTITTRUST – Multidisciplinary Perspectives on Human-AI Team Trust | Second Workshop on Stimulating Cognitive Engagement in Hybrid Decision-Making: Friction, Reliance and Biases
SNS 3 | 60 | 9-13 | Mind the AI-GAP 2025: Co-Designing Socio-Technical Systems | WoRTH_AI – Workshop on Responsible Technology and Human-Centered AI Engineering
SNS 4 | 60 | 9-13 | RKEHAC 2025: Second International Workshop on Reciprocal Knowledge Elicitation for Human-Agent Collaboration | Doctoral Consortium
Workshops
AI, Labour and Society

Description: The rise of AI, particularly Generative AI, is transforming labor markets, industries, and economies worldwide, raising urgent questions about inequality, job displacement, algorithmic management, and corporate power. This workshop fosters an interdisciplinary dialogue between social scientists, economists, and AI experts to examine AI’s impact on labor and society and explore pathways for more equitable and sustainable technological development. We welcome contributions that investigate AI as both a tool for studying its societal effects and as an object of study in itself, namely, something that can be proactively shaped to influence the future trajectory of the digital economy and society.

Website – Submission

CounseLLMe

Description: The “CounseLLMe” workshop aims to explore the dynamics of interactions between humans and Large Language Models (LLMs) by employing methodologies from Natural Language Processing (NLP), text analysis, and complex network theory. As LLMs become increasingly integrated into various applications, understanding the nuances of these interactions is crucial for enhancing communication efficacy, ensuring ethical standards, and improving user experience. This half-day workshop at HHAI2025 will serve as a platform for researchers and practitioners to present findings, share methodologies, and discuss challenges related to analyzing human-LLM dialogues.

Website – Submission

Mind the AI-GAP 2025: Co-Designing Socio-Technical Systems

Description: The Mind the AI-GAP 2025 workshop critically addresses unwanted bias and discrimination in AI technologies by integrating fairness and inclusivity into the design process from the outset, fostering social and structural change. The workshop explores how Participatory AI can shape solutions that better reflect community values, needs, and preferences, and brings together diverse stakeholders, including researchers, practitioners, NGOs, civil society, and designers. Through a combination of talks, roundtables, and hands-on activities, participants will collectively discuss participatory approaches and develop actionable outputs, such as guidelines or a white paper, to advance Participatory AI as a tool for equitable, transparent, and impactful systems.

Website – Submission

SYNERGY – Designing and Building Hybrid Human–AI Systems

Description: In this workshop, we explore the design and implementation of truly synergistic human-AI systems. While AI systems already work alongside people in tasks from visual analytics to code generation, we are still learning how to create flexible collaborative systems that leverage the complementary abilities of humans and AI. The workshop brings together researchers with diverse backgrounds – from intelligent systems design to human-human collaboration – to advance this emerging field. The workshop is supported by the TANGO project, a €10 million European initiative focused on “a synergistic approach to human-machine decision making,” and welcomes submissions on interactive decision-making systems, adaptive collaboration frameworks, evaluation methods, and implementation architectures for human-AI collaboration.

Website – Submission

2nd Workshop on Law, Society and Artificial Intelligence: Interdisciplinary Perspectives on AI Safety

Description: The 2nd edition of the LSAI – Workshop on Law, Society, and Artificial Intelligence will focus on interdisciplinary perspectives on AI Safety, exploring its legal, ethical, and socio-technical dimensions. Recognizing AI Safety as a holistic field, the workshop aims to foster dialogue between researchers and practitioners from diverse disciplines, including law, ethics, social sciences, and information and communication technology (ICT). This edition will address key themes such as technical robustness, governance frameworks, accountability, risk assessment, fairness, and transparency. Discussions will explore both theoretical and practical approaches to ensuring AI systems are not only reliable but also aligned with societal values and regulatory requirements. The event will emphasize the critical role of AI Safety in high-stakes domains such as healthcare, finance, and governance, highlighting the necessity of cross-disciplinary collaboration to develop effective solutions. The workshop will feature a full-day schedule with paper presentations, roundtable discussions, and interactive sessions to encourage knowledge exchange. Submissions are invited in three tracks (full papers, short papers, and extended abstracts), ensuring diverse contributions. Building on the success of its first edition, LSAI aims to further strengthen interdisciplinary engagement, fostering a deeper, more nuanced understanding of AI Safety and its implications.

Website – Submission

Second Workshop on Stimulating Cognitive Engagement in Hybrid Decision-Making: Friction, Reliance and Biases

Description: This full-day workshop explores over-reliance and biases in Human-AI Interaction. Central to this discussion are approaches that deliberately introduce cognitive effort and reflection into AI interactions to prevent passive or automatic reliance. While conventional AI design prioritises efficiency and seamlessness, this workshop invites participants to examine AI systems that strategically slow down decision-making when necessary to mitigate automation bias, cognitive offloading, and over-trust, ultimately fostering accuracy, responsibility, and human oversight.
The program features keynote presentations from leading experts in academia and industry, author presentations, and interactive discussions to advance the discourse on cognitively engaging and responsible AI design across AI and HCI research, cognitive science, law, philosophy, design and beyond.

Website – Submission

RKEHAC 2025: The Second International Workshop on Reciprocal Knowledge Elicitation for Human-Agent Collaboration

Description: The RKEHAC 2025 workshop focuses on (reciprocal) knowledge elicitation in the context of human-agent collaboration. Recent research in human-agent interaction centers on human-agent collaboration, where teams of humans and intelligent agents are formed to achieve a shared goal. We refer to the beneficial application of mutual knowledge elicitation and knowledge provision between humans and AI agents as reciprocal knowledge elicitation. Traditional knowledge elicitation techniques (e.g., those described by Cooke, 1994) cover only human-human knowledge elicitation and must be re-evaluated in light of this paradigm shift. RKEHAC aims to gather researchers and practitioners from all levels and fields to capture the contexts and techniques of human-agent knowledge elicitation. The workshop’s overall goal is to provide a community-driven web platform where researchers and practitioners can search, identify, filter, and author appropriate knowledge elicitation techniques in the context of human-agent collaboration.

Website – Submission

Supporting Human Behaviors using AI Technology: State of the Art, Challenges and Research Agenda

Description: A prominent area of research within the Hybrid Intelligence domain pertains to Artificial Intelligence systems that support individuals in voluntarily adapting their behaviour. Such technological support can be relevant in domains like health, sustainability, and justice, e.g., to help people adopt healthier lifestyle patterns, support people in making more sustainable choices, empower people in managing their chronic disease, or support victims of crime in their healing process. When developing effective behaviour support technologies, it is important to understand why people do what they do (by learning about their motivations, habits, capabilities, and needs), so that the offered support is timely and targeted at a pivotal mechanism. Effectiveness, however, is not the only relevant consideration when designing behaviour change support systems. Key to developing responsible behaviour support technologies is identifying and implementing strategies for supporting individuals in their behaviour change trajectory in ways that align with core societal values such as liberty, autonomy, and (social) justice. The aim of this workshop is to develop a research agenda for the next 5 years for the field of AI-supported behaviour change.

Website – Submission

IAIL 2025 – Imagining the AI Landscape after the AI Act

Description: The regulation of Artificial Intelligence is at an important stage, with the European Union taking the lead through key legislative frameworks such as the AI Act (AIA) and the Digital Services Act (DSA). These regulations aim to create a safer and more accountable digital environment while safeguarding fundamental rights. However, while the European Union has outlined a clear regulatory direction, the practical implementation of these frameworks remains an open challenge. In particular, the interaction between the two frameworks requires closer examination to understand how they address the risks AI poses to fundamental rights, such as privacy, non-discrimination, and freedom of expression. This workshop aims to analyze how these new regulations will shape the AI technologies of the future and their impact on our lives. We will cover issues such as the operationalization of AIA and DSA requirements; privacy, fairness, and explainability by design; individual rights and the AIA; generative AI; AI risk assessment; and much more.

Website – Submission

WoRTH_AI – Workshop on Responsible Technology and Human-Centered AI Engineering

Description: New methods and practices for building, maintaining, and continuously evolving HAI-based systems should ensure that such systems complement human abilities. This highlights the importance of adaptive, collaborative, responsible, interactive, and human-centered intelligence, and strengthens the need for Responsible AI Engineering. Both Software Engineering for Human-Artificial Intelligence (SE4HAI) and Human-Artificial Intelligence for Software Engineering (HAI4SE) should be grounded in solid principles of fairness, reliability, privacy, transparency, sustainability, accountability, and explainability. High-quality HAI-based systems include one or more AI modules or components that responsibly improve and enhance the user experience, so that human and artificial intelligence continuously coevolve through their interactions. This workshop tackles that gap by discussing technical papers on how (i) to redesign software development practices to create AI systems that are responsible and user-focused, (ii) to develop AI tools and models that work alongside humans fairly and ethically, and (iii) to tackle key challenges, such as managing AI systems responsibly, ensuring quality, and minimizing environmental impact. We intend to foster debate among researchers and technical professionals on strategies to (re)shape and (re)think SE4HAI and HAI4SE practices.

Website – Submission

TCAI – Trustworthy and Collaborative Artificial Intelligence Workshop 2025

Description:

The Trustworthy and Collaborative Artificial Intelligence workshop aims to explore the dynamic interplay between humans and AI systems, emphasizing the principles and practices that foster trustworthy and effective human-AI collaboration. As AI systems increasingly permeate various aspects of our lives, their design and deployment must align with human values to ensure these AI systems are ethical, trustworthy, and effective.

We seek contributions bridging the gap between machine intelligence and human understanding, e.g., through explainable AI techniques, and how machine learning paradigms – e.g., selective prediction, active learning, and learning to defer – can optimize shared decision-making. We also welcome solutions integrating human-AI monitoring protocols and interactive machine learning. Finally, we encourage insights from user studies and the design of collaborative frameworks that enhance trustworthiness and robustness in human-AI interaction. In brief, our goal is to promote discussion and development of hybrid systems that adapt to evolving contexts while maintaining transparency and trust, augmenting human capabilities and respecting human agency.
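For readers unfamiliar with these paradigms, selective prediction is perhaps the simplest to illustrate: the model answers only when it is sufficiently confident and defers low-confidence cases to a human decision-maker. A minimal sketch (the function name, threshold, and probabilities are illustrative, not part of the workshop programme):

```python
import numpy as np

def selective_predict(probs, threshold=0.8):
    """Predict where the model is confident; mark deferrals to a human with -1."""
    probs = np.asarray(probs)
    confidence = probs.max(axis=1)   # top-class probability per example
    preds = probs.argmax(axis=1)     # most likely class per example
    return np.where(confidence >= threshold, preds, -1)

# Hypothetical softmax outputs for three examples
probs = [[0.95, 0.05],   # confident -> class 0
         [0.55, 0.45],   # uncertain -> defer to human
         [0.10, 0.90]]   # confident -> class 1
print(selective_predict(probs))  # -> [ 0 -1  1]
```

Raising the threshold trades coverage for accuracy: the model answers fewer cases itself, but the cases it does answer are ones it is more confident about, which is exactly the shared decision-making trade-off the workshop targets.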

Website – Submission

Multimodal Interaction Analytics for Hybrid Intelligence (Interactive Event)

Description:

The primary goal of this workshop is to provide participants with a deep understanding of multimodal interaction analytics and its potential to enhance hybrid intelligence systems. Multimodal interaction analytics (MIA) provides a powerful framework for analyzing and optimizing human-AI interactions by capturing and integrating data from various sources, such as speech, gesture, eye movement, and physiological responses. This workshop is designed to explore the role of MIA in supporting hybrid human-AI systems, with a focus on enhancing educational and collaborative environments. Participants will be introduced to the latest methodologies in MIA, including the design and implementation of interaction analytics approaches and techniques, as well as the ethical considerations related to multimodal data use. The workshop will also feature hands-on sessions where participants will engage with multimodal data collection and analysis, allowing them to apply MIA in practical scenarios.

Website – Submission

1st Workshop on Informing ML with Knowledge Engineering for Hybrid Intelligent Systems (HHAI-KEML)

Description:

Integrating Knowledge Engineering (KE) with Machine Learning (ML) offers a promising approach to building trustworthy AI systems. This integration combines the strengths of data-driven learning with formal, structured reasoning, enabling AI models to be both highly accurate and explainable. By leveraging structured knowledge—such as electronic health records in healthcare, scientific axioms, or legal guidelines—AI systems gain the ability to perform commonsense reasoning, enhancing their reliability and making them more knowledge-aware. Although using symbolic methods for knowledge representation and reasoning can sometimes limit scalability, their ability to provide verifiable, human-understandable explanations makes them especially valuable in mission-critical applications.

The workshop hosted by HHAI 2025 seeks to bridge the gap between KE and ML by exploring the synergies between these fields. A key focus is on developing hybrid human-AI systems that utilize multimodal approaches, incorporating various forms of data including text, speech, images, and video. This collaborative forum will bring together researchers and practitioners from academia and industry to discuss cutting-edge research and innovative strategies for integrating KE and ML. Ultimately, the goal is to advance the development of AI systems that are not only robust and efficient but also transparent and human-centric, addressing both the challenges and benefits of merging symbolic reasoning with data-driven techniques.

Website – Submission

MULTITTRUST – Multidisciplinary Perspectives on Human-AI Team Trust

Description:

With the increasing prominence of human-agent interaction in hybrid teams in diverse industries, human-agent teamwork is no longer a topic of the future, but of the present. However, several challenges arise that still need to be addressed carefully. One of these challenges is understanding how trust is defined and how it functions in human-agent teams. Psychological literature suggests that within human teams, team members rely on trust to make decisions and to be willing to rely on their team. Moreover, the multi-agent systems (MAS) community has been adopting trust mechanisms to support decision-making of the agents regarding their peers and for delegating tasks to agents. Finally, in the last couple of years, researchers have been focusing on how humans trust AI agents and how such systems can be trustworthy. However, bringing this knowledge on teams and trust together in an HI setting brings its own unique perspectives. When we think of a team composed of both humans and agents, with recurrent (or not) interactions, how do these all come together? Currently, we are missing approaches that integrate the prior literature on trust in teams in these different disciplines. In particular, when looking at dyadic or team-level trust relationships in such a team, we also need to look at how an AI should trust a human teammate. In this context, trust, or rather the factors that influence it, must be formally defined so that the AI can evaluate them, rather than being assessed with questionnaires at the end of a task, as is usual in psychology. Furthermore, a human’s trust in an artificial team member, and vice versa, will change over time, affecting the trust dynamics. In this workshop, we want to motivate the conversation across the different fields and domains. Together, we intend to shape the road to address these questions to guarantee a successful and trustworthy human-AI agent teamwork.
With these premises, we are organizing the MULTITTRUST 4.0 workshop, the fourth edition of the original MULTITRUST workshop, as part of the HHAI 2025 conference.

Website – Submission

Tutorials
Introduction to Human Hybrid AI

Description: The concept of Human-Centric Artificial Intelligence (HCAI) has received a great deal of attention recently and with it has emerged a vision of a bright future for Artificial Intelligence within human society. There is, however, a danger that such an evocative vision may create unrealistic expectations that are not consistent with what the technology can be expected to deliver, quite apart from the risk of different stakeholders having different understandings of what is intended in the first place. We use the term Human-Centric AI for a new class of AI systems that are intended to collaborate with people. This tutorial aims to make the concept sufficiently precise so that we can agree about the overarching goals and align around a research agenda that we propose for bringing the HCAI agenda to fruition.
To this end, the tutorial focuses on what HCAI systems could, and in some cases should, be expected to do, rather than how they will achieve any particular functionality. The tutorial argues for the need to integrate the different disciplines that study humans and society, and remains neutral on the implementation question.
Using an evocative concept such as Human-Centric AI hopefully inspires creative thinking. One way to stimulate this creative thinking is to try and make the concept precise without resorting to a binary definition. In the spirit of keeping our definition open-ended we will present two metaphors that highlight two facets of HCAI that are both important. The first is the team metaphor: consider a team of humans collaborating on a project and imagine that one member of the team is in fact an AI system. The second is the body metaphor: Our bodies contain many subsystems that are at our disposal for acting in the world.
Although perhaps not entirely easy to reconcile, and not intended as prescriptive or exhaustive, both provide useful inspiration on different aspects of HCAI systems. A future theory of HCAI systems will have to capture this rich variety of functionalities.

Data Storytelling with Python and AI

Description: This tutorial will introduce participants to data storytelling using Python’s Altair library and Generative AI tools. Attendees will learn how to craft compelling data narratives using the Data, Information, Knowledge, and Wisdom (DIKW) pyramid framework. The session will provide hands-on experience in creating visualizations, utilizing AI-driven text and image generation tools like ChatGPT and DALL-E. By the end of the tutorial, participants will be equipped with the skills to transform raw data into impactful visual stories that drive actionable insights.

Website

Assessing the impact of AI-driven recommenders on human-AI ecosystems

Description: Measuring the impact of recommender systems in socio-technical ecosystems is a rapidly growing topic, encompassing urban, social media, retail, and conversational domains heavily influenced by AI technologies. Despite this surge, there is still a lack of systematic reflection on methodologies, observed outcomes, and cross-ecosystem connections. In this tutorial, we offer a systematic view of the field, categorizing methodologies, standardizing terminologies, dissecting outcome measurement levels (individual, item, and system), and proposing new research directions. This survey serves scholars and practitioners seeking insights across diverse ecosystems, policymakers grappling with societal issues impacted by recommenders, and tech companies keen on a systematic view of how to enhance profitability while fostering social good.

LLM-powered Simulations of Social Media Environments

Description: This tutorial introduces the HHAI community to Y Social, an innovative framework that employs large language models (LLMs) for simulating social media environments. Unlike traditional agent-based or opinion dynamic models, Y Social leverages LLMs to replicate human-like interactions and algorithmic curation within controlled experimental settings. Through a blend of lecture-style presentations, live demonstrations, and interactive discussions, participants will (i) learn the significance of LLM-based simulations as a complement to traditional data-driven research; (ii) acquire practical skills for designing and analyzing controlled simulations using Y Social’s zero-code tools; (iii) engage in interdisciplinary discussions on the ethical and technical challenges of using LLMs to model social interactions. This half-day tutorial caters to an interdisciplinary audience, offering value to AI researchers and social scientists by bridging computational methods and social science insights.

The Intersection of Conversational AI and Human-Centered Design: A Practical Guide

Description: Conversational AI is revolutionizing human-computer interactions, enabling more natural and intuitive engagements across industries such as customer service, healthcare, and education. However, ensuring that these AI-driven conversations align with human needs, ethical considerations, and usability principles is a growing challenge. This tutorial introduces participants to key concepts in Conversational AI and Human-Centered AI, focusing on designing intelligent systems that prioritize user experience, inclusivity, and fairness.
Participants will explore key methodologies in conversational AI, such as dialogue management, intent recognition, and natural language understanding. The tutorial will also address Human-Centered AI principles, emphasizing transparency, accountability, and ethical decision-making in AI interactions. Discussions will include best practices, real-world case studies, and challenges in deploying AI-driven communication systems.
Participants will engage in hands-on exercises using industry-standard tools to prototype and evaluate conversational AI systems. The session will also address ethical concerns, bias mitigation, and responsible AI practices. By the end of the tutorial, attendees will acquire practical skills to develop user-friendly, ethical, and effective conversational AI solutions.

Website

Methods4HHAI Tutorial

Description: Hybrid Intelligence (HI) is a rapidly growing field aiming at creating collaborative systems where humans and intelligent machines synergistically cooperate in mixed teams towards shared goals. A clear characterisation of the field is still missing, affecting not only standardisation of vocabularies and reuse of design choices, but also how the overall community can identify itself.

In this workshop, we will work toward gathering data to characterise Hybrid Human-Artificial Intelligence (HHAI) as a research field. In a hands-on session, participants will collaboratively analyse HHAI literature to identify common disciplinary backgrounds, existing and novel methodologies, and theoretical frameworks used in the community. Additionally, they will discuss these findings and identify interesting directions for the field in a concept-mapping session.

Website

SoBigData.it receives funding from European Union – NextGenerationEU – National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR) – Project: “SoBigData.it – Strengthening the Italian RI for Social Mining and Big Data Analytics” – Prot. IR0000013 – Avviso n. 3264 del 28/12/2021.