Keynotes

Jonathan Stray

Center for Human-Compatible AI at UC Berkeley
How AI Might Make Human Conflict Better or Worse

AI intersects with intergroup conflict in several ways that are not yet well explored. Social media systems — which are increasingly AI driven — may amplify divisive or hateful narratives. LLMs may exacerbate conflict, especially if they give different answers to people on different sides. Yet LLMs have also proven able to facilitate conflict resolution in several ways, including finding common ground. I’ll discuss the key ways that AI might harm or help international peacebuilding, present recent experimental work showing that modified social media algorithms can reduce polarization on real platforms, and propose a practical definition of AI neutrality that conflict participants might accept.

Chair: Chiara Boldrini

Bio

Jonathan Stray is a Senior Scientist at the Center for Human-Compatible AI at UC Berkeley, where he works on the design of AI-driven media with a particular interest in well-being and conflict. Previously, he taught in the dual master's degree program in computer science and journalism at Columbia University, worked as an editor at the Associated Press, and built document mining software for investigative journalism.

Stuart Russell

Professor of Computer Science, UC Berkeley
What If We Succeed?

Many experts claim that recent advances in AI put artificial general intelligence (AGI) within reach. Is this true? If so, is that a good thing? Alan Turing predicted that AGI would result in the machines taking control. I will argue that Turing was right to express concern but wrong to think that doom is inevitable. Instead, we need to develop a new kind of AI that is provably beneficial to humans. Unfortunately, we are heading in the opposite direction, and we need to take steps to correct this. Even so, questions remain about whether human flourishing is compatible with AGI.

Chair: Dino Pedreschi

Bio

Stuart Russell is a Distinguished Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI and the Kavli Center for Ethics, Science, and the Public. He is a member of the National Academy of Engineering, a Fellow of the Royal Society, a recipient of the IJCAI Computers and Thought Award, the IJCAI Research Excellence Award, and the ACM Allen Newell Award, and a former holder of the Chaire Blaise Pascal in Paris. In 2021 he received the OBE from Her Majesty Queen Elizabeth and gave the BBC Reith Lectures. He is an Honorary Fellow of Wadham College, Oxford, and a Fellow of AAAI, ACM, and AAAS. His book “Artificial Intelligence: A Modern Approach” (with Peter Norvig) is the standard text in AI, used in over 1,500 universities in 135 countries. His current work focuses on the long-term future of artificial intelligence and its relation to humanity. He has developed a new global seismic monitoring system for the nuclear-test-ban treaty and is working to ban lethal autonomous weapons.

John Shawe-Taylor

Director of the Centre for Computational Statistics and Machine Learning at University College London (UK)
Human AI Collaboration: Perspectives and Directions

The Hybrid Human AI conference has established a nascent research community. This talk takes stock of its research agenda, identifying key research questions and some initial steps towards addressing them, and highlighting the significance both of the steps already taken and of the longer-term goals.

Chair: Shenghui Wang

Bio

John Shawe-Taylor is Professor of Computational Statistics and Machine Learning at University College London and Director of the International Research Centre on Artificial Intelligence (IRCAI), under the auspices of UNESCO, at the Jozef Stefan Institute in Slovenia. He has published over 300 papers and two books that have attracted over 84,000 citations.

He has assembled a series of influential European Networks of Excellence. The scientific coordination of these projects has influenced a generation of researchers and promoted the widespread uptake of machine learning that we are currently witnessing in both science and industry. He was appointed UNESCO Chair of Artificial Intelligence in November 2018.

Ricardo Baeza-Yates

Barcelona Supercomputing Center
The Limitations of Data, Machine Learning & Us

Machine learning (ML), particularly deep learning, is being used everywhere. However, it is not always used well, ethically, or scientifically. In this talk, we first do a deep dive into the limitations of supervised ML and of data, its key input. We cover small data, datification, all types of biases, predictive optimization issues, evaluating success instead of harm, and pseudoscience, among other problems. The second part is about our own limitations in using ML, including different types of human incompetence: cognitive biases, unethical applications, lack of administrative competence, misinformation, and the impact on mental health. In the final part we discuss regulation of the use of AI and the responsible AI principles that can mitigate the problems outlined above.

Chair: Luca Pappalardo

Bio

Ricardo Baeza-Yates is the Founding Director of the AI Institute at the Barcelona Supercomputing Center. Previously, he was Director of Research at the Institute for Experiential AI at Northeastern University (2021-25), CTO of NTENT (2016-20), and VP of Research at Yahoo Labs (2006-16), first based in Barcelona, Spain, and later in Sunnyvale, California. He is co-author of the best-selling textbook Modern Information Retrieval, published by Addison-Wesley in 1999 and 2011 (2nd ed.), which won the ASIST 2012 Book of the Year award. In 2009 he was elevated to ACM Fellow and in 2011 to IEEE Fellow. He has won national scientific awards in Chile and Spain, among other accolades and distinctions. He obtained a Ph.D. in CS from the University of Waterloo, Canada. His areas of expertise are responsible AI, bias in algorithmic systems, web search, and data mining, as well as data science and algorithms in general.