Recently, the Linux kernel maintenance team officially released guidelines for the use of AI-generated code, allowing developers to use AI-assisted tools such as GitHub Copilot. However, responsibility for any bugs or security vulnerabilities in that code ultimately rests with the developer who submitted it.

The policy is the result of several months of discussion, during which debate over AI tools within the open-source community steadily intensified. In January this year, Intel engineer Dave Hansen and Oracle employee Lorenzo Stoakes argued heatedly over whether AI tools should be strictly restricted. Eventually, Linux founder Linus Torvalds weighed in, saying that an outright ban on AI tools was pointless, since AI is just another tool.
Linus Torvalds emphasized that developers who submit substandard code are unlikely to follow the rules anyway, so rather than restricting which tools developers may use, it is better to hold the code submitter directly accountable. This stance contrasts sharply with the strong opposition to AI in some open-source communities.
Before this policy was introduced, open-source projects took varying attitudes toward AI-generated code. NetBSD and Gentoo, for example, explicitly prohibited it, regarding the output of large models as "polluted" because the copyright provenance of their training data was unclear. The Developer Certificate of Origin (DCO) also became a focal point of controversy: it requires developers to certify that they have the right to submit their code, yet much of the code AI models were trained on is covered by licenses such as the GPL, making it difficult for developers to guarantee the legality of AI-generated code.
At the same time, open-source maintainers have been dealing with a flood of low-quality AI-generated contributions every day. cURL, for instance, was so overwhelmed with substandard submissions that it had to close its bug bounty program, while Node.js and OCaml faced internal disputes over tens of thousands of AI-generated patches.
The Linux maintenance team's new rules not only permit the use of AI tools but also require developers to clearly indicate when code was generated by AI, assigning responsibility for mistakes squarely to human developers. The aim is to ensure that code quality and security remain accountable even when AI tools are used.
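The article does not specify the exact disclosure format. As a purely illustrative sketch, such a disclosure could ride along in the commit message using kernel-style trailers; the tool name and the particular trailer shown here are assumptions, not a format mandated by the kernel documentation:

```text
mm/slub: fix off-by-one in partial list accounting

Correct the boundary check when draining the per-cpu partial list.

# NOTE: the disclosure trailer below is hypothetical, for illustration only
Co-developed-by: GitHub Copilot (AI assistant)
Signed-off-by: Jane Developer <jane@example.org>
```

The Signed-off-by line is the existing DCO mechanism; the idea is that an AI-disclosure trailer would sit alongside it so reviewers can see at a glance which submissions involved AI assistance.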
Key Points:
🌟 AI programming tools allowed: The Linux kernel team officially permits the use of AI-assisted tools such as GitHub Copilot.
⚠️ Developers bear responsibility: All bugs and security risks in submitted code are the responsibility of the submitter.
📝 Transparency required: Developers must indicate whether code was generated by AI, so that accountability for code quality is preserved.
