The aim of this multidisciplinary workshop is to address and explore various dimensions of what we may now call “the age of algorithms”. From large language models already proven capable of completing tasks of human-level complexity, to smart devices capable of making expert decisions in specialized fields like medicine, to social media algorithms that filter every bit of information and shape our preferences online, algorithms have become an indispensable part of our world – with the good, but also with the bad, the risks, and the uncertainty.

In 2014, Douglas Hofstadter, a pioneer of artificial intelligence research and author of the famous book Gödel, Escher, Bach: An Eternal Golden Braid, lamented at a high-level meeting at Google that he was terrified by the new developments in artificial intelligence (AI) research, while most of the Google engineers present in the room looked at him with surprise and even incredulity. They did not understand what Hofstadter was so terrified about. This was, at the time, just the latest in a long series of such warnings, as many Silicon Valley gurus (such as Ray Kurzweil, Elon Musk, and Eliezer Yudkowsky) have proclaimed an “existential threat” posed to humanity by AI.

Besides being colorful, such episodes in the recent history of artificial intelligence illustrate that it is not easy to discern what “the age of algorithms” is bringing about, or how to welcome it – with high hopes, with reserve, or perhaps with fear?

The contributions gathered in this workshop are intended to shed light on our expectations from different perspectives, by tackling questions like: How do currently deployed algorithms work? What are their most pervasive biases? What engineering and theoretical problems do they raise, and what are their implications when used in large and diverse social contexts? Some presentations will inquire into possible unethical uses (or misuses) of algorithms and discuss issues like misinformation and the erosion of trust, given that most current AI-based technologies lack transparency and human-level interpretability.

The workshop is open to a wide audience, from AI theoreticians and engineers to anthropologists, sociologists, ethics experts, and policy makers.

The workshop will be divided into two sections.

One section will be dedicated to conceptual and theoretical aspects of AI, with tentative themes like:

  1. Types of probabilistic errors and the problem of rule following in generative AI models – can large language models follow rules?
  2. What counts as “learning” in generative AI models (or more generally in machine learning)?
  3. Is there such a thing as “explainable AI”?
  4. Can there be such a thing as responsible and trustworthy AI? 

The second section will approach technology from a wider angle, primarily from an ethical and anthropological perspective and in a manner that is sensitive to human, societal, and cultural differences, with tentative themes like:

  1. Is technology becoming a part of an “extended mind” of humans?
  2. Do technologies discriminate against minority, marginalized, and under-represented groups?
  3. Can algorithms be sensitive to and address social and cultural differences?

The workshop will close with a round table at which the participants will reflect on, and draw conclusions from, the themes debated throughout the day.