A.I. Is Drinking Your Milkshake

enterlifeonline
4 min read · Feb 12, 2024

In 2006, Clive Humby coined the phrase “Data is the new oil”.

So it’s no wonder that in 2024 AI lawsuits are on the rise: the modus operandi is “I drink your milkshake” when it comes to leeching copyrighted training data for all of the insatiable AI models coming online.

“I drink your milkshake” is a line from the movie “There Will Be Blood” starring Daniel Day-Lewis. It has become a meme since the film’s release. The phrase basically means, “I win. I beat you.” For example, if you were to win some kind of card game, you could say, “I drink your milkshake.” To add insult to injury and rub it in their face, you could add, “I drink it up.”

And that is essentially what OpenAI is asking the UK Parliament, as it contends it cannot make money without using copyrighted material. “Can I drink your milkshake?”

Authors, artists and others are filing lawsuits against generative AI companies for using their data in bulk to train AI systems without permission. This includes the lawsuit by the New York Times against OpenAI.

In early 2023, the Copyright Office launched an initiative to examine the copyright law and policy issues raised by artificial intelligence (AI) technology, including the scope of copyright in works generated using AI tools and the use of copyrighted materials in training. This became a huge factor in the recently resolved strikes by the Writers Guild of America (“WGA”) and the Screen Actors Guild (“SAG-AFTRA”).

The Copyright Office has stated, “No copyright protection for works lacking requisite human authorship.” But defining what that actually means becomes tricky.

Can the AI algorithm itself be copyrighted? Yes, if created by a human. But then, if another AI algorithm learns how to do what the first AI algorithm did, can that be copyrighted or protected? The answer should be no. But wasn’t it originally created by a human?

Keeping humans in the loop as AI creates and generates — does that constitute “copyright protection because of human authorship”?

Some contend that AI should only be protected when it can come up with its own original thought. That means its creations cannot be protected now. Current technology only allows “emergent behaviors”: it occasionally produces results that weren’t explicitly programmed.

So if AI should only eat the digital dead, when should something published into the digital world expire?

In Europe, there are laws establishing a right to be forgotten. Article 17 of the GDPR outlines the specific circumstances under which that right applies. An individual has the right to have their personal data erased if, for example, the data is no longer necessary for the purpose for which an organization originally collected or processed it.

This has reopened an old wound: the United States has very limited right-to-be-forgotten laws. In fact, when you die, unless your digital data is willed into a trust, all of it suddenly becomes public domain. That means your life becomes the digital fish food that AI has the legal right to feed on. And why?

Fear of censorship is pushing back legislation around the right to be forgotten, as I discuss in my article “No Right To Be Forgotten.”

Corporations, like AI systems, are not people. However, the US Supreme Court has ruled that the government should not suppress corporations’ political speech, because the First Amendment protects Americans’ freedom to think for themselves.

Free thought, as framed by the US Supreme Court, requires us to hear from “diverse and antagonistic sources.”

However, if the US government begins to endorse particular sources of information, that becomes an unlawful use of “censorship to control thought.” Restricting sources for AI because they are copyrighted can be seen as censorship.

AI companies are looking to extend this same principle to their algorithms. The US Supreme Court says that protecting speech “does not depend upon the identity of its source.” The criterion for protecting speech is whether the speaker, be it an individual, corporation, or AI, contributes to the marketplace of ideas.

But AI doesn’t create new ideas, does it? It simply amplifies and regurgitates other people’s ideas.

Laws might not be what slows down the “AI milkshake” problem. It might be good old-fashioned capitalism.

Infrastructure support for artificial intelligence is “drinking the milkshake” of AI development. Big Tech companies, including Microsoft, Google, and others, are struggling to profitably monetize their generative AI products because of the costs required to develop, train, and run the models.

Yet OpenAI, the Microsoft-backed company, boasted that it reached $2 billion in annualized revenue in December 2023. Sam Altman is in talks to raise nearly $7 trillion to boost AI chip production, because current infrastructure cannot support his ambitious AI training needs.

Even Microsoft has admitted that its GitHub Copilot is losing more than $20 per user per month.
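The arithmetic behind a per-user loss like that is simple. A minimal sketch, using hypothetical figures (a $10/month subscription and roughly $30/month in serving costs are assumptions consistent with a ~$20 per-user loss; the subscriber count is invented for illustration):

```python
# Back-of-envelope unit economics for an AI coding assistant.
# All figures below are hypothetical placeholders, not reported numbers.
price_per_user = 10.0    # $/user/month subscription revenue (assumed)
cost_to_serve = 30.0     # $/user/month inference + infrastructure (assumed)
users = 1_000_000        # assumed subscriber count

margin_per_user = price_per_user - cost_to_serve   # negative = loss per seat
monthly_burn = margin_per_user * users             # total monthly loss

print(f"per-user margin: ${margin_per_user:.0f}/month")
print(f"total monthly burn: ${monthly_burn:,.0f}")
```

The uncomfortable part for vendors is that, unlike traditional software, the marginal cost of serving each additional user here is not near zero: every query burns GPU time, so losses scale with adoption.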

The only true profitability in AI is coming from the chip makers themselves, such as NVIDIA, which recorded quarterly revenue of $13.51 billion, more than doubling the $6.7 billion it made in the same quarter a year earlier.

The question I ask students, partners, and customers is: is AI making you money? We all agree that AI can save us time, and time is money. But is AI generating a rate of return that covers the operating expenses needed to keep it learning?

I say no.

I believe we are living in the hype cycle of what the possibilities of AI can bring us, just as the internet and late-1990s startups did before the first bust. But remember the companies that survived that first internet bubble: Amazon, Microsoft, and Google.

It seems that history is repeating.

Google, Microsoft, and Amazon look to be the ones that say to the AI startups and companies trying to make a go of it: “I drink your milkshake.” Then pause for effect. “I drink it up!”

That money talks, I’ll not deny, I heard it once: It said, ‘Goodbye’. — Richard Armour
