Foundation Metaverse Europe Position Paper on Metaverse and Mental Self-Regulation by Prof. Dr. Thomas Metzinger (Original in German, automatically translated version)
Altered states of consciousness in the technological transition to the coming metaverse
A central problem for a contemporary culture of consciousness is the new "attention-extraction economy".
Powered by self-learning AI and advanced, self-improving algorithms, it extracts attention from human brains and turns it into money. Today, information and entertainment are available almost everywhere for free. Attention, on the other hand, is the new scarce resource that is systematically exploited because it has become a currency, a means of payment that can be traded.
What is actually being monetized in the new "attention economy", however, is the destruction of mental autonomy, and thus of our ability to control our own attention in a deliberate and self-determined way. But we need mental autonomy if we are to maintain and defend our democracies as responsible citizens. We also need it to be able to look the planetary crisis in the face and say: I accept the challenge.
Measures to maintain mental self-control in the metaverse
The capacity for mental self-determination is also critical to our overall quality of life. Today, it is no longer a question of whether a human or a machine is the world champion in Go or in chess. The game the systems are playing against us right now is a very different one: who controls the scarce resource of attention generated by our biological brains – we ourselves, or an American tech corporation? Questions like these are absolutely central to the culture of consciousness of the future.
In the metaverse, such questions will once again arise in a different, more acute form than in the "old" social media. And not because the metaverse is something evil, where professional attention thieves and highwaymen lie in wait for the hapless user. No, the metaverse might simply become so good and so attractive that it weakens social cohesion in the so-called real world in unexpected ways.
- What ethical and legal principles should guide us in the attention economy of the future?
- Are there media environments and business models that should be illegal?
- How can we ethically develop better algorithms for social networks and new, much more complex virtual realities?
- How can artificial intelligence actually serve the common good, and how can new technologies of consciousness increase our mental autonomy rather than destroy it?

More generally:
- In an open and free society, should every citizen have the right to manipulate their own brain as they wish?
- Or are there states and spaces of consciousness that should be “taboo”, for instance because they destroy the autonomy of the experiencer (and indirectly that of others)?
Culture of consciousness and the general education system
The practical philosophy of mind is not only concerned with what we had better not do, but also with what should be promoted proactively, that is, purposefully and with foresight. One example: around the world, many evidence-based initiatives are already testing different ways to integrate a secularized meditation practice into our education systems. Against the backdrop of the attention economy briefly outlined above (which will play an even more relevant role in the metaverse) and the escalating drug problem, this seems to me to make extraordinary sense, especially for children and young people.
Which states of consciousness do we want to actively promote and integrate into our society?
Given the promising results in modern meditation research, should we integrate secular forms of meditation instruction into certain or even all levels of our educational systems? However, the rapidly changing environment raises the very fundamental question of what “education” means today. In the face of so many new possibilities for action, prohibitions on thinking do not help us. Instead, we need courage, imagination and a culture of trial and error.
Society is sometimes far ahead of its political institutions; it has long been searching and groping for a new culture of consciousness in promising initiatives and experiments. The problem with this is that for every social movement and every cultural innovation there is an optimal time window in which its most important findings must be formally integrated if one does not want to run the risk of it being forgotten and lost to society again.
Education enables young people to become active members of society and culture and thus also politically mature citizens. What areas of the phenomenal state space should every young person learn about before adulthood? What would a “phenomenological educational canon” look like that can really do justice to our current state of knowledge and our rapidly changing situation? What would be the indispensable educational core of a modern culture of consciousness?
Debate on AI and Metaverse
It is time to take the ongoing public debate about Artificial Intelligence (AI) and its implementation in building the Metaverse to our political institutions. Many experts believe that in the next decade we will face a turning point in the history of social platforms, and that the window of opportunity to work out truly substantive applied ethics for AI is slowly closing. Political institutions must therefore devise and then promptly implement a minimal but sufficient set of adaptable ethical ground rules and legal constraints for public good-oriented use and future development of AI in the metaverse. They must also create space for a rational, evidence-based process of critical discussion aimed at continually updating, improving, and whenever necessary, revising this first set of normative constraints. In the current situation, the values that guide further AI development are likely to be set by a very small number of people working in large private companies and military installations. Therefore, one goal is to proactively include as many perspectives as possible – and to do so in a timely manner.
Need for high safety standards
We need to develop and implement global security standards for AI research and its implementation in the metaverse. A global charter for AI is necessary because such safety standards can only be effective if all countries participating in and investing in the relevant type of research and development make a binding commitment to certain rules. However, the chances of this happening are currently quite poor, which is why democratic resilience and digital sovereignty for the Federal Republic of Germany are also at stake. Given the current economic and military competition, the security of AI research will most likely be reduced in favor of faster progress and lower costs by moving it to countries with low security standards and low political transparency (an obvious and strong analogy is the problem of tax evasion by corporations and trusts). If international cooperation and coordination were successful, a "race to the bottom" in security standards (through the relocation of scientific and industrial AI research) could in principle be avoided. If – as seems likely – this does not succeed, the issue is proactive protection for the Federal Republic of Germany, looking far into the future.
Risks for social cohesion
But AI poses many other risks to social cohesion, for example through privately operated and autonomously controlled social media that aim to collect and “package” human attention for further use by their customers, or through “engineering” political will formation through Big Nudging strategies and AI-driven decision-making architectures that are not transparent to the individual citizens whose behavior is controlled in this way. Future AI technology will be extremely good at modeling and predicting human behavior – for example, through positive reinforcement and indirect suggestions that make adherence to certain norms or the emergence of “motives” and decision outcomes seem completely spontaneous and unforced. Combined with Big Nudging and predictive user control, smart surveillance technology could also increase global risks by helping to stabilize authoritarian regimes locally in an efficient manner. Again, most of these risks to social cohesion are most likely unknown at this time and we may only discover them by accident. Policy makers must therefore also be aware that any technology that can specifically optimize the understandability of its own actions for human users can, in principle, also be optimized for deception. Great care must therefore be taken to ensure that the reward function of an AI is not randomly, or even intentionally, set in a way that indirectly harms the common good.
AI technology is currently a private good. However, AI-driven social networks and the metaverse are now part of the critical infrastructure. It is the task of democratic political institutions to transform large parts of it into a well-protected commons, something that belongs to all humanity and is subject to democratic control. As with the tragedy of the commons, everyone can often see what is coming, but if there are no mechanisms to counteract it effectively, the tragedy will unfold anyway, for example in decentralized situations. The EU should proactively develop such preventive mechanisms.
Adaptation of governance structures
Finally, I would like to point out that adapting governance structures is itself part of the problem landscape: to close, or at least minimize, the pacing gap over time, we need to invest resources in changing the structure of governance approaches themselves. “Meta-governance” means just that: Governance of governance that addresses the risks and potential benefits of explosive growth in specific areas of technological development.
A GCC for AI could act as an “issue manager” for a particular rapidly emerging technology, an information clearinghouse, early warning system, analytical and monitoring tool, and international best practice evaluator, as well as an independent and trusted focal point for ethicists, media, academics, and interested stakeholders.
Of course, many other strategies and governance approaches are conceivable. The point here is simply that we can only meet the challenge of the rapid development of AI and autonomous systems if we put the issue of meta-governance at the top of our agenda right from the start. In Europe, of course, the main obstacle to achieving this goal is “soft corruption” by the big-tech industry lobby in Brussels: there are strong financial incentives and key players interested in maintaining the time lag as long as possible.
Prof. Dr. Thomas Metzinger, Member Board of Experts at Foundation Metaverse Europe
Prof. Dr. Thomas Metzinger has been working as a philosopher at the interface between philosophy of mind and cognitive neuroscience for many years; he is also concerned with the ethical, anthropological, and sociocultural consequences of advances in neuroscience and artificial intelligence. As an adjunct fellow, he was head of the MIND Group at the Frankfurt Institute for Advanced Studies (https://open-mind.net/about). From 2005 to 2007 he was president of the German Cognitive Science Society, from 2009 to 2011 president of the Association for the Scientific Study of Consciousness, and from April 2014 to 2019 a Fellow at the Gutenberg Research College (https://www.gfk.uni-mainz.de/prof-dr-thomas-metzinger). A generally accessible book is Der Ego-Tunnel (2014; Munich: Piper); a freely available collection of texts is Open MIND (2015; www.open-mind.net), with the follow-up projects Philosophy and Predictive Processing (2017; www.predictive-mind.net) and Radical Disruptions of Self-Consciousness (2020). In 2018, Metzinger was appointed to the High-Level Expert Group on Artificial Intelligence convened by the European Commission. In 2019, he resigned his C4 professorship, after which he was awarded a senior research professorship by the Rhineland-Palatinate Ministry of Science, Further Education and Culture. In 2022, he was elected to the National Academy of Sciences (Leopoldina).
Research focus
- Analytic philosophy of mind
- Philosophy of cognitive science, philosophy of neuroscience and AI
- Interdisciplinary cross-connections between ethics, anthropology and philosophy of mind
- Applied ethics of brain research and cognitive science
Prizes and awards
- 1998 Fellowship “Hanse-Wissenschaftskolleg” (18 months)
- 2000 Fellowship “McDonnell Project in Philosophy and the Neurosciences” (5 years)
- 2004 Foerster Lecture at UC Berkeley (October 18)
- 2006 Leibniz Lectures at the University of Hannover (June)
- 2007 Fellowship “Embodied Communication Research Group” (ZiF Bielefeld, 1 year)
- 2008 Fellowship “Wissenschaftskolleg zu Berlin” (Understanding the Brain) (2008-2009)
- 2014 Fellowship "Gutenberg Forschungskolleg" (2014-2019; €1,000,000)
- 2021 Pufendorf Medal and The Pufendorf Lectures 2021 (Lund University, Sweden)
Prof. Dr. Metzinger (born 1958) studied philosophy, religious studies and ethnology in Frankfurt am Main, where he received his doctorate in 1985 with a thesis on the recent debate on the mind-body problem. In 1992 he habilitated at the Center for Philosophy and Foundations of Science at the University of Giessen, and in 1998 he spent a year at the University of California, San Diego, from where he was appointed to a professorship in Philosophy of Cognition (C3) at the University of Osnabrück in 1999. Only half a year later he moved to a professorship for theoretical philosophy with emphasis on the 19th and 20th centuries (C4) at Johannes Gutenberg University Mainz, where he held a senior research professorship from 2019 to 2022.
Paragraphs 1-3: Thomas Metzinger, Culture of Consciousness: Spirituality, Intellectual Probity, and the Planetary Crisis
Paragraphs 4-7: Thomas Metzinger, "Towards a Global Artificial Intelligence Charter," in The Cambridge Handbook of Responsible Artificial Intelligence
Photo: Veysel Çelik | AVA Arthouse Studio / Piper Verlag