Artificial intelligence can make our lives easier in many ways, but the technology also harbors many dangers. Legal scholar Florent Thouvenin is working with academic partners from across the globe to develop ideas about how AI could best be regulated.
Author: Roger Nickl, UZH
When US firm OpenAI launched its chatbot ChatGPT in late 2022, it took the world by storm. Many people were surprised at what is possible with artificial intelligence. The chatbot can be used to generate texts of varying levels of sophistication, summarize scientific papers, write code and translate it into another programming language. But the initial euphoria about the technology’s potential to make work easier was soon followed by alarm bells: while the chatbot can simulate intelligent behavior, it sometimes comes out with utter nonsense.
Given the rapid advancement of artificial intelligence and the societal risks associated with this powerful technology, an open letter from the US-based Future of Life Institute called for a six-month pause in the training of AI systems more powerful than GPT-4, arguing that the time should be used to make the software more transparent and trustworthy. Signatories of the public statement include prominent figures such as Israeli historian and author Yuval Harari and entrepreneur Elon Musk.
Chatbots can’t think
One person who didn’t sign is Florent Thouvenin. The UZH legal scholar has spent many years studying the impact of algorithmic systems and artificial intelligence on society and the challenges they pose for the legal system. Thouvenin is professor of information and communications law and heads the UZH Center for Information Technology, Society and Law (ITSL). He is skeptical about the pause called for in the open letter. “AI is no wonder tool,” says the legal scholar. “Yes, chatbots such as ChatGPT can process a great deal of information very quickly – but they can’t understand or think, and they don’t have a will of their own.”
Above all, Thouvenin sees the many opportunities the new technology offers. He believes it is important that artificial intelligence applications be regulated so that these opportunities can be harnessed and the risks minimized. He and his colleagues already gave the matter some thought in a position paper published by the UZH Digital Society Initiative (DSI) in 2021. He is now working on the AI Policy Project with partners in Japan, Brazil, Australia and Israel to analyze how different legal systems are responding to the major advances in AI development. The project examines countries that, like Switzerland, need to think carefully about how to position themselves in relation to the regulatory superpowers of the EU and US in order to promote the development of the technology while protecting their own citizens from its downsides.
This article appeared on UZH News on 14 December 2023.
It also refers to the position paper “A Legal Framework for Artificial Intelligence” by the Digital Society Initiative.