What Source-Available Projects Tell You About AI Contribution Policies
Source: lobsters
The source-available licensing space has always operated in a gray zone. Projects like those under the Business Source License or SSPL make their code readable but not freely reusable, threading a needle between commercial control and community goodwill. Now that needle has gotten harder to thread, because AI-generated code is forcing maintainers to say out loud what they actually believe about authorship and contribution quality.
This piece from The Consensus surveys how source-available projects are handling AI contributions, and the variance is striking. Some prohibit AI-generated code outright. Others have no policy at all, which is its own kind of policy. A few take a middle path: permitting AI assistance while requiring human review and attestation.
Why Source-Available Projects Have a Distinct Problem
Fully open source projects deal with the same questions, but source-available projects carry extra weight. When a company controls the license and retains commercial rights over the codebase, every contribution they accept potentially becomes part of a product they sell. That changes the calculus around AI-generated code considerably.
The copyright status of AI-generated code remains unsettled. Courts in the US have been skeptical of granting copyright to non-human authors, which means contributions written by an LLM might not be copyrightable at all. For a source-available project maintained by a company, accepting such contributions under a CLA could create ambiguity about what rights are actually being transferred. It is a small but real legal risk, and one that permissively licensed open source projects, whose goal is maximum reuse anyway, do not face to the same degree.
There is also the training data problem. Several large codebases with restrictive licenses have been used, with or without permission, to train the models now generating contributions back to those very projects. Some maintainers find this uncomfortable enough to formalize a ban.
The Attestation Approach
The more interesting responses are the ones that do not simply say yes or no. Requiring contributors to attest that they understand and take responsibility for every line, regardless of how it was generated, shifts the frame from “was this written by AI” to “do you stand behind this code.” That is a reasonable position. It mirrors how code review should work anyway: a reviewer should not approve code they cannot explain.
The problem is enforcement. There is no reliable way to detect AI-generated code at scale, and the tell-tale patterns that detectors look for are easy to edit away. An attestation policy puts the burden on contributor honesty, which is fine for community trust but does not provide a hard guarantee.
What This Signals
Source-available projects sitting at the commercial edge of the open development world tend to formalize things earlier than community-driven projects. Their AI contribution policies, whatever form they take, are an early draft of what the broader ecosystem will eventually have to decide.
The fact that policies vary so widely right now is not surprising. The norms have not settled. But the variation itself is informative: it shows that there is no consensus on whether AI assistance is a tool like a linter, a collaborator like a contractor, or something categorically different that warrants its own rules.
For anyone contributing to or depending on source-available software, it is worth reading the contribution guidelines more carefully than usual. The AI policy section, where it exists, tells you something about how the maintainers think about trust, quality, and their own relationship to the code they ship.