Discussion (216 Comments) - Read Original on HackerNews
Now different circuits can take a different view of the same issue. This is a common reason why the Supreme Court will grant cert: to resolve a circuit split. Appeals court judges know this and have at times (allegedly) intentionally split to force an issue to the Supreme Court.
Even without settling the issue, appeals courts will generally look at how other circuits have ruled and be guided by their reasoning. The fact that the Supreme Court declined to grant cert actually carries weight.
I'm concerned about the copyright 'washing' this enables though, especially in OSS, and I think the right thing for OSS devs to do is to try to publish resulting code with the strongest copyleft licensing that they are comfortable with - https://jackson.dev/post/moral-ai-licensing/
Dowling v. United States, 473 U.S. 207 (1985): The Supreme Court ruled that the unauthorized sale of phonorecords of copyrighted musical compositions does not constitute "stolen, converted or taken by fraud" goods under the National Stolen Property Act
The mental calisthenics required to justify this stuff must be exhausting.
I have seen LLMs do all sorts of crap which was clearly reproduction of training material.
This is also why people are most impressed with how much better it is at reproducing boilerplate rather than, say, imaginative new ideas.
Copy/pasting at scale, yes
If the LLM generates output that a court decides is sufficiently derivative, and especially (but not necessarily) if the LLM was trained on the source material being infringed, then whoever redistributes the derivative output is going to be liable for copyright infringement.
Creation of the LLM itself is transformative, but LLM output which infringes is not.
I don't think there's even a valid argument for any other ownership model, or at least none that I can think of.
The primary issue being that it's all built on stolen data in the first place.
In order to have a sane conversation about this we have to all agree not to lie.
Since this is a new language, and not documented on the web nor on GitHub, Claude's ability is not based on stolen IP. At best it's trained on other language concepts, just as we can train ourselves on code on GitHub.
Maybe a good reason to create a new programming language?
I honestly don't understand why the attitude that underlies this is so prevalent.
When I write code, what I write and how I write it is informed by having read countless source code files over my education and my career. Just as I ingest all that experience to fine-tune how my later code is written, so does the LLM from the code it's seen.
The immediate retort to that is that the LLM is looking at code that wasn't its to read. But I don't think that's a valid objection. Pretty much by definition, everything I've learned from has a copyright on it, and other than my own code on my own time, that copyright is owned by someone else. Much of the code that's built up my understanding has been protected by NDA, or even defense-department classifications: it wasn't mine in any way. But it still informs how I do all my future coding.
By analogy: I'm also an artist, especially since my retirement. My approach to photography was influenced by Ansel Adams, and countless other artists whose works I've seen displayed in museums, or in publications and online. My current approach to painting was inspired by Bob Ross and others, and the teachers who have helped me develop. I've taken pieces of what I've seen in all their work, and all of that comes out in my photos and paintings, to varying degrees.
I've taken ideas from others in code and in art, and produced something (hopefully!) different by combining those bits with my own perspective. I don't think anyone has a claim on my product because of this relationship.
Likewise, I know that many of my successors have learned from my code (heck, I led teams, wrote one book about software development!). And I hope that someday my artwork has developed to the point where there's something in it that's worth someone else's attention to assimilate. I've never for a minute - even decades before the advent of LLMs - hoped or even imagined that my work would remain locked up with me, and that the ideas would follow me to the grave.
As they say, we are all standing on the shoulders of giants. None of us would be able to achieve the tiniest fraction of what we have, without assimilating what has come before us. Through many layers of inheritance it's constantly being incorporated in subsequent works.
In a few decades at best, I'll be dead. It probably won't be very long after that when people even forget my name. But the idea that something I've done - my work in developing software systems, or in my photography and painting - will continue to have ripples through time, inspires me and gives me hope that I'll have some tiny shred of immortality beyond my personal demise.
I live in the UK, and most US law is based upon English common law, it's not some immutable code given to us from above. It's based upon assumptions and capabilities of the entities participating in the system at the time the law was codified. It can and should change to make more sense if those assumptions and capabilities shift massively.
Few people ever actually read open source code, but I'd like to think on the rare occasions they do, they share a connection with the author. I know when I read somebody else's code, for me to understand it I have to be thinking about the problem the same way they were when they wrote it. I feel empathy with them and can sometimes picture the struggle, backtracking, and eureka moments they went through to come up with their solution.
Somehow I don't get the same warm fuzzy feelings about a machine powered by investor money ingesting my work automatically, in milliseconds, and coldly compressing it down to a few nudges on a few weights out of trillions of parameters. All so the machine can produce outputs on-demand for lazy users who will never know of me or appreciate my little contribution, and ultimately for the financial benefit of some billionaires who see me as an obsolete waste of space.
I guess I'm just irrational that way.
And so does well-crafted bespoke software.
The engineers who built the foundation for the industrial expansion of our forefathers went through the same exact thing we're going through now. They look at what existed, and use it to inform their efforts. This is what LLMs do.
I'm not attempting to moralize here, just comment on the parallels. Do I agree that a craftsman's work is consumed by the juggernauts and no second thought is given? No. I think it's a shame. But I also think the output will never match the artisans that practice now. By the very nature of the machines we employ, we cannot match the skill or thought that goes into bespoke code.
The nature of the source material matters though. Training a model on open source software seems perfectly fair - it has explicitly been released to the public, and learning from the code has never been a contested use.
IMO the questions around coding models should be seen as less about LLMs and more as a subset of the conversation about large companies driving immense profits from the work of volunteers on open-source projects, i.e. it's more about open source than AI.
I can't imagine it really justifiable to say that training off data is the same as "stealing", when that same claim, that learned information that a person could retain and reproduce constitutes copyright infringement is the subject of many dystopian narratives, like this one, where once your brain is uploaded to the cloud you have to pay royalties based on every media product you remember.
https://www.youtube.com/watch?v=IFe9wiDfb0E
When it picks out a rare bit of code, it will simply be copying that code, illegally, and presenting it without attribution or license, which is in fact breaking the law; but AI companies are too important for the law to apply to them.
There have been instances where models have spat out comments in code that mention the original authors, etc., effectively outing themselves as copyright thieves.
There's nothing anyone can do about it, but the suspicion is that the big companies have taken everyone's code on GitHub, without consent, and trained on it.
And now they are spitting out big chunks of copyrighted code and presenting it as somehow transformed, even though all they've actually done is change a few variable names.
It is copyright theft, but because programmers are little people, not Disney, we don't have any recourse.
You are presumably human. We have granted humans specific exemptions in copyright law. We have not granted that to LLMs. Why are we so eager to?
See:
https://technophilosoph.com/en/2025/02/07/ai-prompts-and-out...
If you have a more recent citation referring to case law that states the opposite then that would be great but afaik this article reflects the current state of affairs.
The human using the tool creates a prompt, there is then an automatic transformation of the prompt into code. Such automatic transformation is generally accepted as not to create a new work (after all, anybody else inputting the same prompt would have a reasonable expectation of generating the same output modulo some noise due to versioning and possibly other local context).
Claude Code, and AI-generated code in general, does not at present create a new work. But the prompt, the part which you input, may be sufficiently creative to warrant copyright protection.
The humans at the bottom who were crushed should blame the boulder, which happened to be moving.
It doesn't seem like bad faith to think that copyright is stronger than the courts end up thinking, just being mistaken.
As a developer, the fact that my source code passed through a compiler - an automated tool - doesn't give the author of the compiler any claim on my executable code.
As an artist, the fact that I used, e.g., Rebelle to paint a digital painting, or that I used Lightroom (including generative AI to fill, or other ML/AI tools to de-noise and sharpen my image) in editing a photograph, doesn't give EscapeMotion, Adobe, or Topaz, any claims to my product.
Why, then, would there be any chance that use of a tool like Claude - a tool that's super-advanced to be sure, but at the end of the day operates by way of mathematical algorithms - would confer any claims on Anthropic?
If a court later found the codebase was predominantly AI-authored and therefore not copyrightable
Is figuring out the appropriate prompts to use in directing Claude qualitatively different than using a (much) higher-level abstraction in coding? That is, there was never any talk as we climbed the abstraction layer from machine code to assembly to Fortran or C to 4GLs to Rust etc., that the assembler/compiler/IDE builder would have any ownership claim on the produced executable. In what sense can Anthropic et al assert that their tool, which just transforms our directives to some lower-level representation, creates ownership of that lower-level representation?
https://en.wikipedia.org/wiki/San_Francisco_Canyon_Company
LLMs are just code stealers, will gladly generate Carmacks inverse for you with original comments.
Sure the courts could mint a communist society with a few weird decisions about property rights, but this being the US do you really suppose that's likely?
There's really no legal question of any kind that models aren't people and therefore cannot own property (and also cannot enter into legal contract as would be required to reassign the intellectual property they don't and can't own)
That's why the intern signs an employment contract that reassigns their rights to their employer!!
LLMs really change nothing about this.
I think that the gold rush approach happening right now around me (my company's EMs forcing me to work with Claude as fast as possible) shows real short-sightedness on the part of management.
First - I lose my understanding of the code base by relying too much on claude code.
Second - we drop all the good coding practices (like XP, code review etc.) because claude is reviewing claude's code.
Third - we just take a big smelly dump on the teamwork - it's easier and cheaper to let one developer drive the whole change from backend to frontend, even though there are (or were) two different teams - one for FE, one for BE.
Fourth - code commenting was passé, as the code is supposed to be its own documentation... unless there is a problem with the context (which there is). So when people were writing the code, failing to understand over-engineered code was their own fault. But now we take a step back for our beloved Claude because it has a small context window... It's unfair treatment.
I could go on and on. And all those cultural changes are because of money. So I dub this "goldrush", open my popcorn and see what happens next.
Agree with your other points, but IMO this one has always been better. You often need to design the backend and frontend to work with each other, and that requires a lot more coordination when it's separate teams.
After all, is this not what happens with compilers as well? LLM agents are just quite advanced compilers that don't require the specification to be as detailed as with traditional compilers.
Compilers are different in that the resulting binaries are not separately copyrighted. They are the same object to the Copyright Office because one produces the other, in the same way that converting an image to a PDF is still the same copyright.
LLMs don’t do that. The stuff coming in may not be copyrighted, and may not be copyrightable. The stuff that comes out is not a rote series of transformations, there are decisions being made. In common use, running a prompt 10 times might yield 10 meaningfully different results.
I’m dubious the outcome will be “any level of prompting is enough creativity”.
If you provided a human contractor with the specifications for the code you want, the courts have repeatedly made clear you have not provided the creative input from a copyright perspective, and the contractor needs to explicitly assign those rights to you if you want to own the copyright on the code.
- Specifiers, who make the specification for the system
- Programmers, who write C code
- Machine encoders, that take that C code and write machine code for a CPU
Would the copyright then belong to the programmers, if no other explicit assignment were made?
---
Thinking about it, probably yes: copyright of the spec belongs to the specifiers, copyright of the C code belongs to the programmers, and copyright of the machine code to the machine encoders. Or would it depend on the amount of optimization the machine encoders did, i.e. is it creative or not? And how does this relate to the copyrightability of C compiler output, where optimizations can sometimes surprise the developer?
The answer is probably "Nobody"!
Ah, here we go, courtesy of google-ml: '"Human Resources" by Adrian Tchaikovsky, published on Reactor[...] https://reactormag.com/human-resources-adrian-tchaikovsky/ '
This comes up in a few places as a kind of vindictive battle. One example is Oracle suing Google for too closely mimicking their API in Android. Here is an example:
> private static void rangeCheck(int arrayLen, int fromIndex, int toIndex) {
>     if (fromIndex > toIndex)
>         throw new IllegalArgumentException("fromIndex(" + fromIndex +
>             ") > toIndex(" + toIndex + ")");
>     if (fromIndex < 0)
>         throw new ArrayIndexOutOfBoundsException(fromIndex);
>     if (toIndex > arrayLen)
>         throw new ArrayIndexOutOfBoundsException(toIndex);
> }
And it was deemed fair use by the Supreme Court. Other times, high-frequency hedge funds sued exiting employees, sometimes successfully. In America, anyone can sue you for any reason, so sure, you'll have Ellison take a feud up with Page and Brin all the way up to the Supreme Court.
In 99.9% of instances none of this matters. Sure, there's the technical letter of the law, but in practice, and especially now, none of this matters.
https://www.supremecourt.gov/opinions/20pdf/18-956_d18f.pdf
You'd be surprised! Among non-software management types, they often think of the code as extremely valuable IP and a trade secret. I'm a CTO and I've made comments before to non/less technical peers about how the code (generally speaking) isn't that big of a secret, and I routinely get shocked expressions. In one case the company almost passed on a big contract because it required disclosure of the source code (with an NDA). When I told them that was a silly reason and explained why, they got it, but the old way of thinking still permeates and is a hard habit to break.
Edit: Fixed errant copy pasta error. Glad that wasn't a password :-)
I work in M&A. Nearly every lawyer, accountant, investor, and software business owner thinks their code is singularly valuable and a trade secret. I find it hilarious and try to be as diplomatic as possible about why it's not. They will also willingly give their client list to a potential acquirer, but get super cagey the moment a third-party provider asks for their code to be scanned.
This argument easily gets shut down when I ask why Twitch, a $1B business, didn't crater to its competition when its full codebase was leaked.
You, right now, are talking about convergence.
If there is no artwork, there can be no copyright. If every character of the code to write is basically predetermined by the APIs you need to call, there is no artwork and no copyright.
Build a novel new API, and you'll be protected though.
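As a sketch of that merger-doctrine point: code whose every token is effectively dictated by the API it calls leaves no room for creative expression. A minimal, hypothetical Python example (the function name is invented for illustration):

```python
import json

def load_config(path):
    # The "one obvious way" to read a JSON config with the standard
    # library: each line here is effectively dictated by the API,
    # leaving essentially no room for creative expression.
    with open(path) as f:
        return json.load(f)
```

Any two developers asked to do this will converge on essentially the same lines, which is exactly the situation where idea and expression are said to have merged.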
Every open source license is built on the premise that code is copyrightable.
It is based on the premise that if the proprietary licenses are valid, then the open source licenses are also valid.
So what is held as true is only the implication stated above and not the truth value of the claims that either kind of licenses are valid.
If the proprietary licenses are not valid, then it does not matter that the open source licenses are not valid either.
The open source licenses are intended as defenses against the people who would otherwise attempt to claim ownership of that code and apply a proprietary license to the code, i.e. exactly what now Anthropic and the like have done, together with their corporate customers.
Of course, if it is accepted that the code generated by an AI coding assistant is not copyrightable, then using it would not really be a violation of the original open source licenses. The problem is that even if this principle is the one accepted legally, at least for now, both Anthropic and their corporate customers appear to assume that they own the copyright for this code that should have been either non-copyrightable or governed by the original licenses of the code used for training.
I think this is an unusual opinion.
Code may not be copyrightable in as small chunks as you put there, but in terms of larger pieces I think companies and individuals very often labour under the belief that code is intellectual property under copyright law.
If code isn't copyrightable, from where comes the GPL?
And why does anyone care if (for instance) some Microsoft code might have accidentally ended up in ReactOS, causing that project to need to go into a locked-down review mode for months or years? For that matter why do employers assert that they own the copyright in contracts?
I think it's the opposite - almost everyone thinks their code is copyrightable, outside of APIs and interop stuff, or things so simple as to be trivial.
Then why does reverse engineered code need to be a clean room implementation?
Ask any emulator developer or the developers of ReactOS
https://reactos.org/forum/viewtopic.php?t=21740
> Works predominantly generated by AI without meaningful human authorship are not eligible for copyright protection
Note the word "predominantly", and the discussion that follows in the article about what the courts and the copyright office said.
Nor does it give a single answer.
Mere prompting is still not enough for copyright, and the problem is unsolved on how much contribution a human needs to make to the generated code.
In the case for generated images copyright has been assigned only to the human-modified parts.
Even worse, it will be slightly different in other nations.
The only one that accepts copyright for the unchanged output of a prompt is China.
Plus what if Anna Karenina was GPL?
AI to review - shallow minutia and bikeshedding
AI to edit - wrote duplicated functions that already existed
AI to test - special casing and disabling code to pass the narrow tests it wrote
AI report - "Everything looks good, ship it!"
How much code do you need to change in order for it to be original? One line? 10%? More than 50%?
That's arbitrary and quite unproductive convo to be honest.
Yeah, but that’s what the legal system ostensibly does. Splitting fine hairs over whether a derived work is “transformative” is something lawyers and judges have been arguing and deciding for centuries. Just because it’s hard to define a bright red line doesn’t mean the decision is arbitrary. Courts will mull over whether a dotted quarter note on the fourth bar of a melody constitutes an independent work all day long. It seems absurd, but deciding blurry lines is what courts are built to handle.
That makes no sense because what if you refactor your code ad infinitum using AI? You spin up a working implementation, then read through the code, catalog the changes like interface, docs, code quality and patterns and delegate to the AI to write what you would.
It's 100% AI code and it's 100% human code. That distinction is what's counterproductive.
As the article says in the TL;DR at the top, the code may be contaminated by open source licenses:
> Agentic coding tools like Claude Code, Cursor, and Codex generate code that may be uncopyrightable, owned by your employer, or contaminated by open source licenses you cannot see
That's not how copyright works. The modified version is derivative. You can't just take the Linux kernel, make some changes, and slap a new license on it.
There’s a very accessible summary of the United States rules here:
https://www.copyright.gov/circs/circ14.pdf
Is there any citation for this "legal consensus"? I was not aware there was any evidence backed stances on this topic as of yet
CC does not need LGPL code. There's more than enough BSD and Apache code to go around.
And they can generate synthetic data that is better than LGPL for their training.
It's also a problem that does not seem feasible to meaningfully enforce.
It's easy to generate CC code and lie and say you didn't. It would be hard to prove that you did, especially if you took any precautions to make proving it even slightly difficult.
However, even if the BSD/Apache/MIT licensed code can be incorporated freely in your application, you still have no right to remove the copyright notices from it and/or to claim that you own the copyright for it.
Therefore, unless the AI model has been trained only on non-copyrighted public-domain code, incorporating the generated code in your application means that you have removed the copyright notices from it, which is not allowed by the original licenses.
There is absolutely no doubt that using an AI coding assistant works around the copyright laws, but it is still equivalent to copying and pasting fragments from copyrighted works into your source code.
I consider that copyright should not be applicable to program sources, at least not in its current form, so reusing parts from other programs should be fair use, but only if human programmers would be allowed to do the same.
If some GPL-licensed group were to sue some commercial software project that they do not have the source code for, what would even give it away? But they throw $1 million at a lawyer who can at least get it to the discovery phase somehow, and the source code is provided. It looks to be shit, but maybe an expert witness would come along and say "that looks inspired by the open source project". Where does it go from there? The model is a black box, but maybe you've got a superhero lawyer who manages to rope in Anthropic or OpenAI, and you can see how it produced the code given those prompts. What now? Are there any expert witnesses who both could say and would say that it was "bulk copying-pasting code". And if it were, what jury is going to go for that theory of the crime? Copying-and-pasting, but the code doesn't match, except in short little strings that any code might match. This isn't a slamdunk, and it's not going to proceed very far unless it's another Google-vs-Oracle shitfest.
Anything else is just bullshit equivocation.
https://www.vice.com/en/article/musicians-algorithmically-ge...
I use my own computer, I pay for my own subscription and I build my open source projects then the code belongs to me.
If I use my company's computer, they pay for my subscription and we work on the company's projects then the code belongs to the company.
At any step of the way, if some copyleft or other exotic open source license is violated, who pays for discovery? Is it someone in Russia who created a popular OSS library and is now owed? How will it be enforced?
Inadvertent copyleft license violations: probably 0 lawsuits
Competitor copied your software, you could not defend your rights in court because it was made with AI: probably also 0
Users of agentic AI for software development: >10 million
The thinking here seems pretty clear to me.
Or is it still IP even if it is not copyrightable? That would feel weird: if it's in the public domain, then it's not IP, is it?
If you generate the same code with AI, now it does not have a copyright. If it depends on an MIT library, then the MIT library has a copyright and you have to honour the licence. But the code you produced does not have a copyright (because it was generated by an AI). And therefore nobody "owns" it. My question is: can your employer prevent you from distributing something they don't own?
And I'm worried that once that has been sufficiently normalized, laws and interpretations of them will adapt to whatever best suits those users. Which will mean copyrightwashing of FOSS. My only hope then is that surely if free software can be copyright-washed by the big guys, then so can the little guy copyright-wash the big guys' blockbuster movies or whatever, which might lead to some sort of reckoning.
The logging point is sharper than it might appear. In a copyright dispute over AI-assisted code, interaction logs could cut both ways. A plaintiff trying to establish human authorship would want the logs to show substantial architectural redirection, multiple rejections of Claude output, and documented reasoning for structural decisions. A defendant challenging that authorship claim would subpoena the same logs to show verbatim acceptance of output without modification.
The practical implication, I guess, is that developers who want to preserve a copyright claim over AI-assisted code should treat their prompt history as a legal document from the start. All over the world, it seems, the logs are the evidence. Whether they help or hurt depends entirely on what they show.
LLMs don't make decisions. Their output is completely determined by an algorithm using the human prompt, fixed weights, and a random seed. No different than the many effects humans use in image or audio editors. Nobody ever questioned whether art made using only those effects on a blank canvas was subject to copyright.
The fact that it inferred those basis functions from studying copyrighted works doesn't seem relevant. Nor does the fact that the "Fourier sums" sometimes coincide with larger fragments of works that are copyrighted. How weird would it be if that didn't happen?
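The determinism claim above can be sketched with a toy stand-in (not a real LLM): once prompt, weights, and seed are fixed, the output is a pure function of them, just like a seeded filter in an image editor.

```python
import random

VOCAB = ["the", "code", "is", "generated", "deterministically"]

def generate(prompt, weights, seed, n_tokens=5):
    # Toy stand-in for an LLM decoder: all randomness comes from an RNG
    # seeded by (prompt, weights, seed), so identical inputs always
    # yield identical output, token for token.
    rng = random.Random(f"{prompt}|{weights}|{seed}")
    return " ".join(rng.choice(VOCAB) for _ in range(n_tokens))
```

Rerunning `generate` with the same three inputs reproduces the output exactly; changing the seed (the only part not supplied by the user's creative input) is what produces the run-to-run variation people observe.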
Anthropic "solved" this by intermingling the texts extracted from pirated books (illegal) with texts extracted from the physical books they bought and destroyed (legal), so no one can clearly say if the copyrighted material it spits out came from a legal source or not. Everyone rejoiced.
They're only legal if training is fair use - and even then I don't think it's immediately clear what the legal status would be of verbatim regurgitation of code under copyright, or of code protected by patents.
AFAIK I (as a human developer) can't assume that I can go and copy code out of a text book, and then assume copyright and charge for a license to it?
The judge seems to have said it's because they "transformed" the books (destroying them after digitizing) in the process, and that made it legal.
> Ultimately, Judge William Alsup ruled that this destructive scanning operation qualified as fair use—but only because Anthropic had legally purchased the books first, destroyed each print copy after scanning, and kept the digital files internally rather than distributing them. The judge compared the process to “conserv[ing] space” through format conversion and found it transformative. - https://arstechnica.com/ai/2025/06/anthropic-destroyed-milli...
Twice in my career the owners of a company have wanted to sue competitors for stealing their "product" after poaching our staff.
Each time, the lawyers came in and basically told us that suing them for copyright is suicide, will inevitably be nearly impossible to prove, and money would be better spent in many other areas.
In fact, we ended up suing them (and they settled) for stealing our copyrighted clinical content, which they copied so blatantly they left our own typos and customer support phone number in it.
Go ahead, try to sue over your copyrighted code; 10 years and $100M later you will end up like Google v. Oracle. What if the code is even 5% different? What about elements dictated by external constraints - hardware, industry standards, common programming practices? These aren't copyrightable.
Then you have merger doctrine, how many ways can we really represent the same basic functions?
Same goes for the copyleft argument: "code resembling copyleft" is incredibly vague; it would need to be the code verbatim, not resembling it. Then you have the history of copyleft: there have been many abuses of copyleft and only ~10 notable lawsuits. Now, because AI wrote it (which makes it _even harder_ to enforce), will we see a sudden outburst of copyleft cases? I doubt it.
Ultimately anyone can sue you for any reason, nothing is stopping anyone right now from suing you claiming AI stole their copyleft code.
Part of the problem with generated works is that producing them is low effort, like copying something. It's not an activity that demands special protection the way original authorship does. I believe this is a large part of the reasoning.
First, its creation is (claimed to be) extremely useful for society, but in order to be created it requires ignoring copyright for pretty much everything ever written - something we kind of swept under the table.
Then, it introduces an extreme jump down in creation effort - so if the focus is protection of effortful creation, nothing with AI use qualifies. But of course, you'd want society to benefit from effortlessness in general, spending more effort than needed in a task is the opposite of efficiency.
If computer generated code is not copyrightable, ownership cannot be reassigned either.
If vibe coded work is not copyrightable, it cannot be reassigned to the employer and become copyright protected.
But AI might in fact do the exact opposite and reverse the privatization trend that the West has been going through for the last 400 years. All of our copyright laws rely on the idea that there is a human consciousness behind the copyright. The more AI has input, the less we can claim ownership. If AI returns everything to the commons, then it results in a much more egalitarian world.
Hilariously, many people, especially artists, see the return of the commons as an assault against them. They’re so captured by copyright that they assume any infringement on their copyright is inherently fascist. It’s ridiculous. Copyright is a corporation’s number-one weapon when it comes to creating a moat and keeping the masses out.
The original intent of copyright, in fact, was an incentive to return an idea to the commons. Experts used to hide their discoveries in order to keep them for themselves. Copyright provided an opportunity to release this knowledge and still profit. There were even several cases where it was established that those who claimed copyright could retain copyright even if the idea had been previously discovered. This created a huge incentive: release the knowledge or risk having your process copyrighted by the opposition. But that system worked because copyright could only exist for so long (14 years, doubled if they filed again.)
Now copyright is a lifelong sentence at almost 100 years. The entire purpose of it has been undermined. Corporations own all your childhood and by the time you can profit off of it, it’s outdated.
A world where the mainstream is primarily a commons seems to me like an egalitarian world. I’d like to live in that world.
Or were you planning to reproduce the (say) Ford Motor Company's trademarked symbol in wood? If so, you're right back in the stinkin' swamp.
This is like a machine you ask for timber and you get timber but you didn’t need to provide any wood
Even steering it with prompts isn't enough. The guy couldn't copyright the image he made with AI, and code is no different.
Maybe prompts written by humans are copyrightable.
Can't wait for the billionaires to establish in court that they can steal everything for these machines and claim it as their own, and maybe even reach for anything it helps produce. Fuck that