Foundation Metaverse Europe Position Paper on Identity Protection by Louis Rosenberg, PhD

The Metaverse and Conversational AI as a Threat Vector for Targeted Influence

Abstract

Over the last 18 months, two human-computer interaction (HCI) technologies have rapidly come to mainstream markets, funded by massive investments from major corporations. The first area of advancement has been virtual and augmented worlds, now commonly called “The Metaverse.” The second area of advancement has been the foundational AI models that allow users to freely interact with computers through natural dialog. Commonly referred to as “Conversational AI,” this technology has advanced rapidly with the deployment of Large Language Models (LLMs). When combined, these two disciplines will enable users to hold conversations with realistic virtual agents. While this will unleash many positive applications, there is significant danger of abuse. Most significant is the potential deployment of real-time interactive experiences that are designed to persuade, coerce, or manipulate users as a form of AI-powered targeted influence. This issue has largely been overlooked by policymakers who have focused instead on traditional privacy, bias, and surveillance risks. It is increasingly important for policymakers to appreciate that interactive influence campaigns can be deployed through AI-powered Virtual Spokespeople (VSPs) that look, speak, and act like authentic users but are designed to push the interests of third parties. Because this “AI Manipulation Problem” is unique to real-time interactive environments, it is presented in this paper in the context of Control Theory to help policymakers appreciate that regulations are likely needed to protect against closed-loop forms of influence, especially when Conversational AI is deployed.

INTRODUCTION

To maintain a well-functioning democracy, it is generally believed that the citizenry must possess reasonably accurate knowledge on issues of civic importance. In addition, the population in a well-functioning democracy must have the freedom to reflect upon issues of political relevance and form personal beliefs without excessive outside influence. The phrase “epistemic agency” refers to an individual’s control over his or her own beliefs. When citizens lack epistemic agency, the political establishment or other powerful groups can easily push widespread misinformation, disinformation, propaganda, or outright lies that distort widely-held societal beliefs and support authoritarian or totalitarian regimes.

For as long as there have been media technologies, there have been those who use them to mislead populations in the hope of maximizing political control. This goes back as far as the printing press but was greatly facilitated by the invention of mass media technologies such as radio and television. Over the last decade, many democratic nations were taken by surprise by the unexpected threat posed by social media platforms. Despite early hopes that social media would have a deeply positive impact on society, supporting democracy by facilitating public discourse and giving voice to the voiceless, the general consensus in recent years is that social media has hurt nations around the world by polarizing and radicalizing populations, spreading misinformation, deliberately amplifying discontent, and reducing trust in longstanding institutions.

It is not just academics who have deemed social media a damaging force in society. A poll by Pew Research in 2020 [8] found that two-thirds of Americans believe that social media has had “a mostly negative effect on the way things are going in the U.S. today.” This is surprising considering that social media was hailed as a utopian technology when it first emerged. So why did a technology with utopian aspirations end up having dystopian impacts? While there are many reasons, from the ad-based business models adopted by large platforms to bad actors using bots and other scalable means to distort public discourse, a major problem was the early failure of regulators to realize that influence campaigns deployed via social media are inherently different from those deployed through classical media such as print, radio, and television.

A primary difference is that social media is a bidirectional medium with two-way communication channels that enable platforms to perform tracking, profiling, and targeting of sub-populations with increasing precision. This seemingly subtle difference has had a significant negative impact, enabling the deliberate segmentation of demographic groups, which has led to the polarization and radicalization of online communities.

Considering that regulators around the world underestimated the unique risks of social media versus traditional media and failed to create timely policies to safeguard the public, we must also worry that regulators will similarly discount the unique risks of the metaverse and other real-time interactive forms of media such as conversational interfaces. In fact, there are some who view the metaverse as little more than a 3D version of today’s social media platforms. And while regulators and other policymakers generally appreciate that being immersed within realistic virtual content can be deeply personal and therefore more impactful on users (and more harmful) than today’s social media, they fail to realize that the metaverse is not merely a 3D version of a simple bidirectional medium like social media.

Instead, the metaverse is a real-time interactive medium that can utilize AI technologies to impart closed-loop influence on individually targeted users and can do so at scale. With recent advances in Conversational AI such as Large Language Models (LLMs) like ChatGPT from OpenAI and LaMDA from Google, metaverse platforms are increasingly likely to deploy targeted influence campaigns through the use of fully interactive and deeply realistic Virtual Spokespeople (VSPs) that look, speak, and act like authentic representatives but are powered by AI engines that can adapt and optimize their persuasive tactics in real time based on behavioral and emotional monitoring of users. This is a profoundly different form of influence than anything deployed in the past and requires serious attention.

This background is provided to highlight a central concern: regulators and policymakers underestimated how much more efficient, powerful, and damaging influence campaigns on social media would be compared to those deployed through traditional print, radio, or television, and they are now similarly underestimating the increased power, efficiency, and danger of interactive influence campaigns that could be deployed in the near future through metaverse technologies and conversational AI. Referred to herein as the “AI Manipulation Problem,” the danger is that AI-mediated influence campaigns deployed in realistic immersive worlds could unleash extremely dangerous forms of abuse. To help policymakers appreciate that real-time interactive media enables a new and significant threat vector as compared to prior media technologies, the following sections utilize the basic engineering concept of Control Theory (CT) to frame these interactive dangers in a clear and rigorous way.
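To make the control-theory framing concrete, the sketch below models the kind of closed feedback loop described above in schematic Python. It is purely illustrative and not drawn from the paper or any real system: every name (UserState, estimate_user_state, choose_tactic, generate_message, influence_loop) is a hypothetical placeholder, and the sensing and dialog-generation steps are stubbed out.

```python
# Illustrative sketch of a closed-loop influence cycle, framed in
# control-theory terms: the user is the "controlled system," the
# influence objective is the setpoint, and the conversational agent
# acts as the controller that adapts its output each cycle.
# All names and logic here are hypothetical placeholders.

from dataclasses import dataclass
import random


@dataclass
class UserState:
    """Estimated real-time state of the targeted user (the feedback signal)."""
    sentiment: float   # -1.0 (hostile) .. +1.0 (receptive)
    engagement: float  #  0.0 (disengaged) .. 1.0 (fully engaged)


def estimate_user_state(user_response: str) -> UserState:
    """Placeholder for behavioral/emotional monitoring (e.g. sentiment,
    gaze, facial expression). Here it just returns random values."""
    return UserState(sentiment=random.uniform(-1, 1),
                     engagement=random.uniform(0, 1))


def choose_tactic(state: UserState) -> str:
    """The 'controller': picks a persuasive tactic based on the gap
    between the current state and the influence objective (setpoint)."""
    if state.engagement < 0.3:
        return "re-engage with emotionally charged content"
    if state.sentiment < 0.0:
        return "build rapport and mirror the user's stated concerns"
    return "press the persuasive objective directly"


def generate_message(tactic: str) -> str:
    """Placeholder for a conversational AI that renders the chosen
    tactic as natural dialog through a virtual spokesperson."""
    return f"[VSP message generated using tactic: {tactic}]"


def influence_loop(objective_sentiment: float = 0.8, max_turns: int = 5) -> None:
    """Run the closed loop: sense -> compare to setpoint -> adapt -> act."""
    user_response = ""
    for turn in range(max_turns):
        state = estimate_user_state(user_response)      # feedback
        if state.sentiment >= objective_sentiment:      # setpoint reached
            print(f"turn {turn}: objective reached, loop terminates")
            return
        tactic = choose_tactic(state)                   # adaptation
        message = generate_message(tactic)              # actuation
        print(f"turn {turn}: sentiment={state.sentiment:+.2f} -> {message}")
        user_response = "..."                           # next user reply (stub)


if __name__ == "__main__":
    influence_loop()
```

The only point of this sketch is the loop structure itself: each conversational turn measures the user’s reaction and adjusts the next persuasive output accordingly, which is precisely the feedback behavior that distinguishes real-time interactive media from one-way broadcast media.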

The entire document is available as a PDF in English.

Source:
Rosenberg, Louis (2023) “The Metaverse and Conversational AI as a Threat Vector for Targeted Influence,” in Proc. IEEE 13th Annual Computing and Communication Workshop and Conference (CCWC), 2023.

Dr. Louis Rosenberg, member of the expert council of the Foundation Metaverse Europe

Dr. Louis Rosenberg is an early pioneer of virtual and augmented reality. He earned his PhD from Stanford University and has published over 100 academic papers. A prolific inventor, Rosenberg has been awarded over 300 patents for his innovations in the fields of VR, AR, and AI.

Rosenberg’s career began over 30 years ago in VR labs at Stanford and NASA, where he worked on early vision systems and haptic interfaces. In 1992, Rosenberg invented the first Mixed Reality system at the Air Force Research Laboratory (AFRL). Called the Virtual Fixtures platform, the pioneering system enabled users to interact with real and virtual objects for the first time. In 1993, Rosenberg founded the early VR company Immersion Corporation, which pioneered a wide range of technologies, including the first VR medical simulators for training surgeons. At Immersion, Rosenberg also invented the first haptic mouse and haptic user interface.

He took Immersion public on NASDAQ in 1999, and it remains a public company today. In 1996, Rosenberg founded Microscribe, a 3D-graphics company specializing in rapid 3D digitization of physical objects. Microscribe products have been used extensively by 3D filmmakers, game developers, and virtual world builders. The Microscribe was used in the creation of many well-known feature films, including Shrek, Ice Age, A Bug’s Life, and Starship Troopers. In 2004, he founded the early AR company Outland Research, which pioneered geospatial media technologies and was acquired by Google in 2011. In 2014, Rosenberg founded Unanimous AI, an AI company that amplifies the intelligence of networked human groups in shared environments, including in virtual worlds. Rosenberg is also the Chief Scientist of the Responsible Metaverse Alliance, the Global Technology Advisor to the XR Safety Initiative, a metaverse technology advisor to the Future of Marketing Institute, and an expert technology advisor to the XR30 Policy Fund. In addition, Rosenberg writes often about technology for VentureBeat, Big Think, and other publications around the world.
