Policy on AI use

Managarm requires that contributions such as pull request submissions and reviews be made by a human. However, you may use AI tools (and other tools) during development, provided that you follow the guidelines in this policy.

  • There must always be a human in the loop. This person (and not an AI system) is considered to be the contributor. As a consequence, we do not allow contributions (code or otherwise) from fully automated systems (such as bots that post or review pull requests). Additionally, all code must receive a human review before it is merged.
  • All submissions are subject to the same quality expectations. AI-generated code must be manually reviewed by the contributor before it is posted as a pull request. Contributors are expected to understand their contributions, to be able to respond to review questions, and to fix issues raised during review.
  • As with any other submission, contributors must ensure that AI-generated code does not violate copyright laws. In particular, if AI code generators are used, contributors must check that the AI does not reproduce variations of copyrighted code from its training set. For this reason, we discourage the use of AI (at least without additional guidance) to implement larger subsystems from scratch. Other uses of AI, such as refactoring or filling out gaps in a hand-written skeleton implementation, are less likely to run into copyright issues. While the current consensus seems to be that AI-generated code is not copyrightable as long as it is not closely related to the AI's training data, legal changes may require us to update this requirement (even retroactively).
  • Contributors are encouraged to add Assisted-by: tags (as used by the Linux kernel) to indicate whether specific tools were used during development.
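
For illustration, a commit message carrying such a trailer might look like the following sketch (the subject line, body, and tool name are hypothetical, not taken from any real Managarm commit):

```
drivers: deduplicate device lookup logic

Extract the shared lookup path into a common helper.

Assisted-by: ExampleCodeAssistant (refactoring suggestions)
```

As with other trailers (e.g. Signed-off-by:), the tag goes at the end of the commit message, one trailer per line.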

References: This policy is loosely based on the LLVM AI tool use policy (as of LLVM commit 61a58cfa).