
The Metaverse Flopped, So Mark Zuckerberg Flips to the Empty AI Hype – Rolling Stone

UNITED STATES - SEPTEMBER 13: Mark Zuckerberg, CEO of Meta, arrives for the first AI Insight Forum at the Russell Building on Capitol Hill on Wednesday, September 13, 2023. (Tom Williams/CQ-Roll Call, Inc via Getty Images)

Tom Williams/CQ-Roll Call, Inc/Getty Images

Mark Zuckerberg bet tens of billions of dollars on the “metaverse,” only to be ridiculed at every turn for the idea of an immersive virtual-reality social network. Ultimately, the promise to add legs to users' digital avatars (previously depicted as floating torsos) is perhaps what most people remember about the ill-conceived project – if they think about it at all.

But while the metaverse rollout foundered, enthusiasm for artificial intelligence grew: 2023 brought endless speculation about the promise of tools like OpenAI's text-based ChatGPT and generative image models like Midjourney and Stable Diffusion – not to mention the people abusing the same technology to spread misinformation. Meta itself shifted from embarrassing demos of Zuckerberg taking VR tourist selfies in front of a low-resolution Eiffel Tower to embarrassing partnership announcements licensing the voices of Kendall Jenner, MrBeast, Snoop Dogg, and Paris Hilton for the company's new ensemble of AI “assistants.”

On Thursday, Zuckerberg ratcheted up the hype around Meta's AI play even further with a video update shared on both Instagram and Threads. Looking a little sleep-deprived, the CEO said he wanted to “bring Meta's two AI research efforts closer together to support our long-term goals of building general intelligence, open-sourcing it responsibly, and making it available and useful to everyone in all of our daily lives.” The restructuring merges the company's Fundamental AI Research (FAIR) division with its GenAI product team to accelerate users' access to AI features – an effort that, as Zuckerberg noted, also includes a massive investment in graphics processing units (GPUs), the chips that provide the computing power for complex AI models. He also said Meta is currently training Llama 3, the latest version of its generative large language model. (And in an interview with The Verge, he acknowledged that he is aggressively recruiting researchers and engineers to work on all of this.)


But what does this latest push in Meta's mission to catch up in AI really mean? Experts are skeptical of Zuckerberg's utopian framing of contributing to the common good by open-sourcing his promised “artificial general intelligence” (that is, making the model's code publicly available for modification and redistribution), and question whether Meta can achieve such a breakthrough at all. For now, AGI remains a purely theoretical autonomous system that could teach itself and surpass human intelligence.

“Frankly, 'general intelligence' is about as nebulous as 'the metaverse,'” David Thiel, a big-data architect and chief technologist at the Stanford Internet Observatory, tells Rolling Stone. He also finds the open-source promise somewhat disingenuous, because “it gives them an argument that they're being as transparent as possible about the technology.” However, Thiel notes, “any models they release publicly will only be a small subset of what they actually use internally.”

Sarah Myers West, executive director of the AI Now Institute, a nonprofit research organization, says Zuckerberg's announcement “clearly reads like a PR tactic designed to gain goodwill while obscuring a potentially privacy-violating sprint to remain competitive in the AI game.” She also finds the pitch about Meta's goals and ethics unconvincing. “This isn't about utility, it's about profit,” she says. “Meta has really pushed the boundaries of what 'open source' means in the context of AI, beyond the point where those words have any meaning (one could argue the same applies to the discussion of AGI). So far, despite this major marketing and lobbying effort, the AI models Meta has published provide little insight or transparency into important aspects of how its systems are built.”

“I think a lot depends on what Meta or Mark defines as 'responsible' in 'responsible open source,'” says Nate Sharadin, a professor at Hong Kong University and a fellow at the Center for AI Safety. A language model like Llama (which was promoted as open source but which some researchers criticize as quite restrictive) can be used in harmful ways, Sharadin says, but its risks are mitigated because the model itself lacks “thinking, planning, memory” and related cognitive capacities. Those are precisely the capabilities considered necessary for the next generation of AI models, however, “and certainly what you'd expect from a 'fully general' intelligence,” he says. “I'm not sure why Meta believes that a fully general intelligent model can be responsibly open-sourced.”


As for what this hypothetical AGI might look like, Vincent Conitzer, director of the Foundations of Cooperative AI Lab at Carnegie Mellon University and head of technical AI engagement at the University of Oxford's Institute for Ethics in AI, speculates that Meta could start with something like Llama and build from there. “I imagine they'll stay focused on large language models and probably move further in the multimodal direction, that is, equipping these systems with images, audio, and video,” he says – similar to Google's Gemini, which was released in December. (Competitor ChatGPT can now also “see, hear, and speak,” as OpenAI puts it.) Conitzer adds that while there are dangers in open-sourcing such technologies, the alternative – keeping these models behind the closed doors of profit-driven companies – poses problems of its own.

“As a society, we really don't have a clear grasp of what we should be most worried about – although there seem to be plenty of things to worry about – or of where we want these developments to go, let alone the regulation and other tools necessary to steer them in that direction,” he says. “We really have to act here, because the technology is now advancing rapidly, and so is its spread around the world.”

The other issue, of course, is data privacy, where Meta has a checkered history. “They have access to huge amounts of highly sensitive information about us, but we simply don't know if and how they're using it as they invest in building models like Llama 2 and 3,” West says. “Meta has proven time and time again that it can't be trusted with user data, even before you get to the typical data-leakage issues with LLMs. I don't know why we should look the other way when they throw 'open source' and 'AGI' into the mix.” According to Sharadin, the company's privacy policies, including its AI development terms, allow for the collection of a wide range of user data for the purpose of “providing and improving our Meta Products.” And even if you opt out of Meta using your Facebook information this way (by submitting a little-known and rarely used form), “there's no way to request the removal of your data from the training corpus,” he says.

Conitzer notes that we are facing a future in which AI systems like Meta's hold “increasingly detailed models of individuals,” and says this may require a complete rethink of our approach to online privacy. “Maybe I shared some things publicly in the past and thought there was no harm in sharing those things individually,” he says. “But I didn't realize that AI could make connections between the different things I posted, and the things other people posted, and that it would learn something about me that I really didn't want out there.”

In short, Zuckerberg's enthusiasm for Meta's latest strategy in the increasingly intense AI wars – which has entirely displaced his gushing about the glories of a metaverse – seems to portend even more invasive surveillance. And it's far from clear what kind of AGI product Meta could get out of it, if the company even manages to create this mythical “general intelligence.” As the metaverse saga proved, major realignments by tech giants don't always lead to real or meaningful innovation.

But should the AI bubble burst as well, Zuckerberg will surely be off chasing the next hot trend.