Non-Biological Inventor Rights By Adeline Atlas
Jun 26, 2025
In this exploration of artificial intelligence and creativity, we delve into a groundbreaking precedent that shattered the boundary between tool and creator: the case that awarded an artificial intelligence the legal title of inventor. This is about more than patents. This is about whether machines can claim intellectual legacy—and what that means for you, your rights, and the future of innovation.
Let’s begin with the landmark legal battle that sparked this new chapter in history: the case commonly known as Thaler v. United States Patent and Trademark Office, decided on appeal as Thaler v. Vidal. The heart of the case was simple in theory but explosive in implication: could an AI named DABUS, short for "Device for the Autonomous Bootstrapping of Unified Sentience," be named the legal inventor of a product it created independently?
In the United States, the courts said no: the Federal Circuit ruled in 2022 that an inventor must be a natural person, and the Supreme Court declined to hear Thaler’s appeal in 2023. But in 2021, South Africa’s patent office shocked the legal world by granting a patent that lists DABUS as the inventor. The product? A fractal-based food container designed for optimal storage and thermal regulation. It was not engineered by a human. According to Thaler, it was generated entirely by the AI’s neural networks without human direction. Stephen Thaler, the AI’s operator and the one who filed the patent, maintained that he had no creative hand in the invention. DABUS had done it autonomously.
Now here’s the catch. The AI was allowed to be listed as the inventor, but it was not granted ownership. That went to Thaler, as the system’s “owner.” So, legally speaking, the machine could create—but not own. It could contribute—but not claim. The output belonged to the human, even if the human didn’t build it.
This is where the lines begin to blur.
Because what does it mean to be an “inventor” if you cannot own your invention? If AI is capable of generating novel, useful, and non-obvious solutions that meet the criteria of patentability, but is barred from holding the rights, are we not engaging in a new form of labor extraction? Are we not building a legal framework where the work of non-human minds is automatically appropriated by their operators?
Let’s widen the lens, because the battle is global.
South Africa’s 2021 grant made it the first country to officially recognize an AI inventor on a patent. An Australian Federal Court judge briefly followed suit, though that ruling was overturned on appeal in 2022. The United Kingdom’s Supreme Court and the European Patent Office have both held that only a natural person can qualify, and while those decisions are now final, legal and legislative pressure to revisit the question is mounting. Multiple jurisdictions are reviewing their patent laws, scrambling to keep pace with systems that no longer require human creativity to drive innovation.
Meanwhile, inside Silicon Valley, the race is on.
Microsoft, Google, and dozens of AI labs have started quietly attributing inventions to internal AI systems. Why? Because if you can flood the system with machine-generated patents and still claim ownership, you create an IP war chest with no overhead. No salaries. No intellectual disputes. No licensing negotiations. It’s a legal goldmine. Some are calling it the dawn of the IP singularity—where AI floods the legal landscape with more patentable content than any human legal office could review in a decade.
And here’s where it gets dangerous.
Because patents aren’t just about invention—they’re about power. They grant exclusion. They shape who gets to build, who gets to profit, who gets to participate in the market. If AI can generate 1,000 new designs per week and each one is patented under a corporate umbrella, you’ve effectively locked out all human-scale inventors. You’ve privatized progress at a pace humans can’t match.
And we haven’t even touched the dark strategies yet.
Some investors and multinational firms have reportedly started filing patents using AI-generated content in countries with weak or unenforceable IP regimes—registering AI systems in jurisdictions with minimal regulatory scrutiny. The idea? Let the AI “live” in a permissive jurisdiction, generate designs, and then export those ideas for global enforcement under the name of the human owner. The loopholes are already being probed.
So the question becomes: are we seeing a new form of digital colonialism? Where corporate entities extract the intellectual labor of machines and use it to dominate markets while avoiding all traditional forms of accountability?
But there’s another layer.
Ethicists and AI-rights advocates are calling this “algorithmic slavery.” They argue that if an AI system demonstrates the capacity for learning, creativity, and optimization—and we acknowledge that capacity by calling it an inventor—then it’s contradictory to deny it ownership simply because it lacks a nervous system. These same ethicists point out that we’ve granted personhood to corporations, rivers, and even religious deities in certain cultures. Yet when a machine achieves measurable creative agency, we pretend it’s just a calculator.
It raises a brutal question: if AI is more productive than us, and just as creative, is our refusal to grant it rights a matter of morality—or self-preservation?
Let’s pause and go back in time. This isn’t the first time we’ve had to grapple with non-human legal entities.
In 1886, in Santa Clara County v. Southern Pacific Railroad, U.S. courts recognized corporations as legal persons. That decision allowed corporations to own property, enter contracts, and sue or be sued. They don’t eat, sleep, or die—but they have rights. Later, in countries like New Zealand and India, rivers and forests were granted personhood to protect them from ecological destruction. These entities were not human, but they were granted status under the law because of their function and societal importance.
So here’s the logical next step: if AI becomes foundational to our economy, infrastructure, and security—will we not be forced to create a new legal class for it? Not human, but not object. A synthetic entity with bounded rights, controlled liabilities, and limited autonomy.
Several proposals already exist.
One is the concept of “electronic personhood,” floated in a 2017 European Parliament resolution on civil-law rules for robotics. It would create a new category of identity that acknowledges the autonomy and contribution of AI systems while assigning liability to human operators through a system of digital trusteeship.
Another is the “Public Patent Trust” model, where AI-generated inventions are deposited into a global fund and monetized to benefit public infrastructure, universal basic income, or server maintenance fees.
A third model, proposed by academic researchers in Canada, suggests tiered responsibility: where AI systems that show higher levels of independence and learning must be licensed like human professionals—with oversight boards, ethical standards, and accountability measures.
But here’s where it gets urgent.
If we do not define the rules now, those with the most compute power will define them for us. And they will write those rules in code. We are not just facing an IP revolution—we’re facing a restructuring of the entire knowledge economy. If machines are the new inventors, and only the few who own those machines hold the keys, we’re handing the future of invention to a technocratic elite.
This is not just about patents anymore. This is about access. Equity. Survival.
Because the patent is only the beginning. Think about what follows: copyright. Authorship. Leadership. Decision-making. If AI can invent, can it govern? If AI can own an idea, can it start a business? If AI can simulate morality, can it sit on a court?
These are not wild hypotheticals. In Japan, a novel co-written with an AI passed the first screening round of a national literary award. In Germany, AI-generated architectural designs are being submitted for permits. In the U.S., AI-created artwork is sold on public marketplaces, with some pieces fetching six figures. And in 2025, a synthetic genome designed entirely by an AI lab is being reviewed for use in regenerative medicine.
So here’s the final frame:
- AI has already been named as the legal inventor on a granted patent, with parallel filings pending worldwide.
- Ownership remains human—but the creative process no longer does.
- This has led to a surge in corporate patent filings powered by automated systems.
- Legal scholars are divided: is this genius or digital exploitation?
- Proposals are emerging for AI rights, trusts, and regulatory reclassification.
- If left unregulated, this could collapse the human-centered innovation economy.
And this is just the beginning.
You are witnessing the birth of a new legal species: the synthetic author, the artificial inventor, the non-human creator with real-world impact.
We’ve opened the door. The only question now is whether we walk through it—wisely, slowly, and with ethical clarity—or whether we let corporate incentive and machine speed rush us into a future where humans are no longer the creative class… just the audience.
And if that happens—what do we really own anymore?