Google and OpenAI are Walmarts under siege by fruit stands

Photo credit: Tim Boyle/Getty Images

OpenAI may now be synonymous with machine learning and Google is doing its best to pick itself back up off the floor, but both could soon face a new threat: rapidly proliferating open-source projects that push the state of the art and leave the deep-pocketed but lumbering corporations in their dust. This Zerg-like threat may not be existential, but it will certainly keep the dominant players on the defensive.

The thought is far from new – in the fast-moving AI community, this kind of disruption is expected on a weekly basis – but the situation has been put into perspective by a widely shared document purportedly internal to Google. “We have no moat, and neither does OpenAI,” the memo reads.

I won’t burden the reader with a long summary of this perfectly readable and interesting piece, but the gist is that while GPT-4 and other proprietary models have received the lion’s share of attention and, indeed, revenue, the head start they gained from funding and infrastructure looks slimmer by the day.

While the pace of OpenAI’s releases may seem blistering by the standards of ordinary major software, GPT-3, ChatGPT, and GPT-4 were certainly hot on each other’s heels compared to versions of iOS or Photoshop. But they still arrive on the order of months and years.

The memo points out that in March, a base language model from Meta called LLaMA leaked in fairly rough form. Within weeks, people tinkering on laptops and penny-a-minute servers had added core features like instruction tuning, multiple modalities, and reinforcement learning from human feedback. OpenAI and Google were probably poking around in the code too, but they didn’t – couldn’t – replicate the level of collaboration and experimentation happening in subreddits and Discords.

Could it really be that the titanic compute problem that seemed to pose an insurmountable obstacle – a moat – to challengers is already a relic of another era of AI development?

Sam Altman has already noted that we should expect diminishing returns from throwing parameters at the problem. Bigger isn’t always better, sure – but few would have guessed that smaller was instead.

GPT-4 is a Walmart, and nobody really likes Walmart

The business paradigm currently being pursued by OpenAI and others is a direct descendant of the SaaS model. You have some high-value piece of software or service and you offer carefully gated access to it via an API or the like. It’s a straightforward and proven approach that makes perfect sense when you’ve invested hundreds of millions into developing a single monolithic yet versatile product like a large language model.
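To make that pattern concrete, here is a minimal sketch of the gated-API model in Python, using the pre-1.0 OpenAI client library roughly as it worked at the time of writing; the model name, prompt, and environment variable are placeholder assumptions, not a recommendation.

```python
# A minimal sketch of the AI-as-a-service pattern: all the heavy lifting
# happens on the vendor's infrastructure, behind an API key.
# Assumes the pre-1.0 `openai` Python package; details vary by version.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # access is gated by the key

response = openai.ChatCompletion.create(
    model="gpt-4",  # the monolithic, general-purpose model
    messages=[
        {"role": "user", "content": "Does this clause match standard indemnification language?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```

The point is less the three lines of code than where the computation happens: entirely on someone else’s hardware, metered per token.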

If GPT-4 generalizes well to answering questions about precedent in contract law, great – never mind that much of its “intellect” is devoted to imitating the style of every author who has ever published a work in the English language. GPT-4 is like a Walmart. Nobody really wants to go there, so the company makes damn sure there’s no other option.

But customers are starting to ask: why am I walking through 50 aisles of junk just to buy a few apples? Why am I hiring the largest, most general-purpose AI model ever created when all I want to do is apply a little intelligence to matching the language of this contract against a few hundred others? At the risk of torturing the metaphor (not to mention the reader), if GPT-4 is the Walmart you go to for apples, what happens when a fruit stand opens in the parking lot?

In the AI world, it didn’t take long for a large language model – in highly truncated form, of course – to be running on (fittingly) a Raspberry Pi. For a company like OpenAI, its jockey Microsoft, Google, or anyone else in the AI-as-a-service world, this effectively undercuts the entire premise of their business: that these systems are so difficult to build and operate that they have to do it for you. In fact, it starts to look like these companies picked and engineered a version of AI that fits their existing business model, not the other way around!
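For contrast, here is a hedged sketch of the fruit-stand version: a heavily quantized LLaMA-family model running entirely on local hardware, via the community llama-cpp-python bindings as one of several possible routes. The model file path and parameters are placeholder assumptions; it presumes you have already obtained a quantized model on disk.

```python
# A sketch of running a truncated (quantized) LLaMA-derived model locally,
# using the community llama-cpp-python bindings. No vendor, no API key.
# Assumes a quantized model file has already been downloaded to ./models/.
from llama_cpp import Llama

llm = Llama(model_path="./models/7b-q4.gguf")  # placeholder path

output = llm(
    "Does this clause match standard indemnification language?",
    max_tokens=128,
)

print(output["choices"][0]["text"])
```

Same question as before, but the inference runs on whatever cheap hardware is at hand – which is precisely what worries the memo’s authors.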

You used to have to offload the computation involved in word processing to a mainframe – your terminal was just a display. That was a different era, of course, and we have long since been able to fit the entire application on a personal computer. This process has happened many times since, as our devices have repeatedly and exponentially increased their computing power. Nowadays, when something has to be done on a supercomputer, everyone understands it’s just a matter of time and optimization.

For Google and OpenAI, that time came much faster than anyone expected. And they weren’t the ones doing the optimizing – and may never be, at this rate.

Now, that doesn’t mean they’re out of luck. Google didn’t get where it is by being the best – not for a long time, anyway. Being a Walmart has its perks. Businesses don’t want to have to hunt down the bespoke solution that gets the job done 30% faster when they can get a reasonable price from their existing vendor and not rock the boat too much. Never underestimate the value of inertia in business!

Sure, people are iterating on LLaMA so fast that they’re running out of camelids to name the results after. Incidentally, I’d like to thank the developers for an excuse to just scroll through hundreds of pictures of cute, tawny vicuñas instead of working. But few enterprise IT departments are going to cobble together an implementation of Stability’s in-progress open-source derivative of a quasi-legal leaked Meta model instead of using OpenAI’s simple, effective API. They have a business to run!

But then again, I stopped using Photoshop for image editing and creation years ago because open-source options like Gimp and Paint.net have gotten so incredibly good. At this point the argument goes in the other direction. Pay how much for Photoshop? No way, we have a business to run!

What Google’s anonymous authors are clearly concerned about is that the distance from the first to the second situation will be much shorter than anyone thought, and there doesn’t seem to be anything anyone can do about it.

Except, the memo argues: embrace it. Open up, publish, collaborate, share, compromise. As they conclude:

Google should establish itself as a leader in the open source community, taking the lead by cooperating with, rather than overlooking, the broader conversation. This probably means taking some uncomfortable steps, like publishing the model weights for small ULM variants. This necessarily means relinquishing some control over our models. But this compromise is inevitable. We cannot hope to both drive innovation and control it.