Registration is open!
Please go to the registration page for details (discount deadline: June 2). For registration and visa questions, please contact firstname.lastname@example.org. For financial assistance, please check the Humane AI Travel Awards. The call for student volunteers is now open.
The increased exposure of social media and Internet users to diversity has not been accompanied by new instruments and skills to deal with it, despite the profound impact that diversity has on human lives. This workshop seeks to promote richer and deeper social interactions by enabling the Internet of Us, a Hybrid Human-Artificial Intelligence online platform where human social relations are mediated and empowered by diversity-aware Artificial Intelligence.
Link to the workshop page: https://www.internetofus.eu/diversity-aware-hybrid-human-artificial-intelligence-dhhai/
Fausto Giunchiglia, Loizos Michael and Jessica Heesen
As artificial intelligence (AI) technologies are playing more important roles in our daily lives than ever before, designing intelligent systems which can work with humans effectively (instead of replacing them) is becoming a central research theme, giving rise to hybrid intelligence (HI). Knowledge representation (KR) has a key potential for contributing to the development of HI systems, because it naturally brings human understanding and formal semantics (the understanding of machines) together. With this idea in mind, we welcome a wide array of works that use a KR formalism (or develop one) in an HI scenario.
Link to the workshop page: https://sites.google.com/view/kr4hi-2023/home
Erman Acar, Ana Ozaki, Rafael Penaloza and Stefan Schlobach
To be able to consider the broader, world-systems thinking around harm and benefit, in the short-, medium- and long-term, we propose a frame of Ecology of Artificial Intelligence. This approach will allow us to consider the impact of AI technology on organisms, populations, communities, ecosystems and the biosphere (good and bad, and everything in between). More specifically, in this workshop, we will discuss the subject of Critical Ecology with expert Dr. Suzanne Pierre (Tentative), founder of the Critical Ecology Lab, and map out some of the considerations and reflections that are necessary for using the frame of ecology to consider the broader impacts of AI.
Link to the workshop page: https://events.kmi.open.ac.uk/ecai-2023/
Retno Larasati, Venetia Brown, Tracie Farrell, Syed Mustafa Ali and Soraya Kouadri Mostéfaoui
This workshop arises from the need to create a multidisciplinary research community of people who study the different perspectives and layers of trust dynamics in human-AI teams.
Link to the workshop page: https://multittrust.github.io/
Carolina Centeio Jorge and Anna-Sophie Ulfert-Blank
Applications of hybrid intelligence systems in which humans and machines collaborate must behave responsibly in order to promote synergy and prevent unwanted or even harmful effects. For instance, the agents in a hybrid intelligence system (both human and automated) should communicate in a correct and reliable way, they should be able to provide their reasons for having a belief or opinion, and they should be able to explain their actions in terms of the values they apply. The aim of this workshop is to collect research and research ideas aimed at designing responsible hybrid intelligence systems.
Link to the workshop page: https://rhi2023.ai.rug.nl/
Bart Verheij, Cor Steging and Ludi van Leeuwen
The AI Act (AIA) is landmark EU legislation to regulate Artificial Intelligence based on its capacity to cause harm. Like the EU’s General Data Protection Regulation (GDPR), the AIA could become a global standard, determining to what extent AI can have an effect on our lives wherever we might be. The AI Act is already making waves internationally: in late September, Brazil’s Congress passed a bill that creates a legal framework for artificial intelligence. The AIA adopts a risk-based approach that bans certain technologies, proposes strict regulations for “high risk” ones, and imposes stringent transparency criteria for others. The first draft of the AIA has been heavily criticized, and several amendments have been proposed by various stakeholder groups, mainly focusing on high-risk systems and the obligations of their developers. It seems there is still a long way to go before the final text is ready for approval. A crucial question is to what extent the requirements of this regulation can be enforced.
Link to the workshop page: iail2023.isti.cnr.it
Francesca Naretto, Francesca Pratesi and Desara Dushi
This tutorial brings together researchers, engineers, and practitioners from the disciplines of Ethics, Artificial Intelligence, and Computer Science to engage in cross-disciplinary dialogues about responsible AI. Ethicists frequently argue in favor of principled approaches and optimizing for certain values, but the complex real world of AI design tends to be a place of trade-offs, bargaining, satisficing, and cost-efficiency, where values for society at large are not necessarily attributed a central role. On the other hand, computer scientists and designers usually only have a surface-level understanding of ethical theory, and as a result often neglect engaging in a thorough, sustained discussion of the ethical issues important in AI design. This tutorial will help bridge this gap through cross-disciplinary dialogue and reflection. Ultimately, we hope to operationalize responsible, ethically informed design such that it can be realistically implemented in AI development.
Link to tutorial page: https://sites.google.com/view/responsibletutorial/
Michael Dale, Paulan Korenhof and Catholijn Jonker
This tutorial deals with assessing the level of human reliance on decision support systems in professional settings. The discourse on automation bias and algorithmic appreciation on the one hand, and algorithmic aversion on the other, calls for the development of metrics to gauge the desirability of AI intervention for specific Human-AI Collaboration Protocols. Presenting an overview of reliance patterns and their related biases and effects, this tutorial aims to raise participants’ awareness of AI reliance and offer them the tools to evaluate and address detrimental reliance patterns.
Link to the tutorial page: https://mudilab.wixsite.com/ai-reliance-tutorial
Federico Cabitza, Andrea Campagner and Chiara Natali
As more and more multi-agent systems are deployed in the real world, it becomes imperative to study these systems with real humans to avoid unexpected negative consequences during deployment. Yet, this can be challenging for researchers with more experience designing algorithms and less experience running human participant experiments. In this tutorial, we will discuss the state of the human-agent interaction field, emphasizing (i) incorporating humans into multi-agent systems, including reinforcement learning systems, (ii) investigating when to rely on human vs. AI strengths, and (iii) designing human-AI studies to evaluate algorithms with real humans.
Link to the tutorial page: https://sites.google.com/umich.edu/humanaitutorial
Elizabeth Bondi-Kelly, Krishnamurthy (Dj) Dvijotham, and Matthew E. Taylor
We are embarking on a new citizen science project called “Beta Catchers” and ran a pilot study showing that the crowd-based pathological analysis of digitized whole slide images of brain tissue is feasible. However, the wisdom of crowd methods developed for this new project requires about 50 people to analyze the same whole slide image. In this hackathon, we seek to engage clever people to help us improve the efficiency of our wisdom of crowd algorithm for Beta Catchers.
This isn’t our first rodeo. We are citizen science developers who have previously created successful projects that crowd-analyze Alzheimer’s research data with accuracy exceeding that of trained laboratory technicians. We have built a community of over 50,000 volunteers worldwide who have contributed over 12 million annotations, resulting in several top-tier journal publications and fueling a machine learning challenge with 700 participants and 90 teams that produced new ML models with unprecedented accuracy.
Whether you are a coder, a mathematician, an artist, or an anthropologist – we need you. Our approach values diversity in thinking styles. What is going to make the biggest difference – an alteration to the user interface, a new algorithm for combining many mediocre answers into a great answer, some insight into human nature that we missed? Whatever it turns out to be, we aim to tackle this mystery together, eat pizza, and have fun along the way!
Link to the hackathon page: https://humancomputation.org/hybrid-intelligence-hackathon-for-alzheimers-research/
Pietro Michelucci, Libuše Vepřek, Lisa Gusman and Margaret Lane
Demonstrating the societal relevance and impact of innovations is an increasingly important part of scientific research. This raises questions like:
- How can scientific knowledge be turned into practical applications?
- How can we create innovation that solves problems and improves human life?
In our workshop, we will facilitate your path to impact by activating your entrepreneurial mindset, using design-thinking methods, and exploring market opportunities with you.
What you will experience in our bootcamp:
- work with an actual Machine Learning startup founder on your business idea
- learn fundamentals of how to commercialize tech
- work alongside serial entrepreneurs and experienced scientists
- learn from world-class mentors how to assess and build up an impact and venture project
- explore next steps towards a future (co-founder) role
- connect to the international tech startup ecosystem
Participants will gain an entrepreneurial mindset, an understanding of impact and innovation success drivers, as well as insights into opportunities in the GE innovation ecosystem.
Participants will be awarded the “Science4impact” seal and get a fast-track application to the micro-project call by the Humane AI Network.
Organizers: Andreas Keilhacker, Sabine Wiesmüller, and Delayne Schwarz
You’ve got skills that others dream of! Join hundreds of other bright minds at “Co-Founder Match” to explore your (co-founder) impact and role.
Pitch your idea, get direct feedback from fellow innovators and startup enthusiasts, and find your future co-founders.
Present yourself or listen to other pitches, get inspired, and join someone else’s mission. Everything is possible!
Organizers: Andreas Keilhacker, Sabine Wiesmüller, and Delayne Schwarz
Munich is home to many research institutes with a focus on AI technologies, including universities such as LMU Munich and the Technical University of Munich (TUM), which collaborate through joint initiatives such as the Munich Center for Machine Learning (MCML).
On Tuesday, 27th of June, we invite you to take part in a tour of LMU’s media informatics lab, demonstrating current research projects from associated institutions. The event will start at 6 pm and take place on the 3rd and 4th floors at Frauenlobstr. 7a. We will provide some light snacks.
This is the official welcome reception of the conference. You’ll get a chance to meet and mingle with other AI researchers while having some finger food.