Look at Torvalds, for example: he has said he doesn't code much nowadays and mainly reviews code. If he does that job well reviewing humans, why can't he review AI code just as well?
One of the central conceits of open source is that random anonymous Internet people can just chuck contributions at maintainers and this will somehow be more efficient than the maintainer writing the code himself. In my experience, this mostly holds only for simple or small changes. For a complicated change, a submission may be a decent starting point, and may include useful documentation about the problem and its solution, but any thorough review is still going to take a significant fraction of the time it took to write it. That is, no matter how many submitters you have, throughput still scales linearly with the throughput of the reviewers.
Reviewers inherently have to be trusted, competent, and vigilant, so they are naturally a very limited resource. In practice, the efficiency of a project like the Linux kernel comes from a developed network of relationships between maintainers and reputable contributors. This may be one reason (the other being copyright concerns) that Linux does not allow anonymous contributions.
All that is to say: hard as the maintainers may try, there is no way they can keep up with the existing contribution volume while inspecting every contribution with intense scrutiny.
Accepting contributions written with LLMs burns the candle at both ends: increased contribution volume with decreased trustworthiness. Perhaps maintainers can get pickier about which contributions they look at and accept a longer backlog, but that will likely be a learning process in and of itself.
It's also worth noting that writing and reading are fundamentally different processes that involve subtly different ways of thinking (think of the difference between reading a textbook chapter and completing the exercises at the end of it). From personal experience, some errors are much easier to catch while writing than while reading. If an LLM is doing the writing, then all we're getting is two instances of the "reading" thought process, and none of the writing thought process.
There's also the issue that the computer madness machine can fail in novel and exciting ways. I don't think art teachers in 2010 were on the lookout for people with three hands, or six fingers drawn with the subtle wrongness that image generators produce. A great example of this in software is "slopsquatting", in which an LLM generates code that depends on packages that may or may not exist, and a name that doesn't exist yet may later be registered by a malicious actor. One could argue that, from a zero-trust perspective, this learning experience is actually a net positive, but I cannot help but be apprehensive about the thought that my kernel's developers may soon be discussing their "learning experience" in ensuring software quality.
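To make the slopsquatting failure mode concrete, here is a minimal sketch (in Python, since the problem is most visible around package registries like PyPI) of a pre-install audit that checks whether each name in a requirements file actually resolves on PyPI's public JSON API. The requirements.txt layout and the idea of auditing before installing are my own assumptions, not anything from a kernel or any particular project's workflow, and note that existence on the registry is no guarantee of safety: a squatter may already have claimed a hallucinated name.

```python
# Sketch of a pre-install audit against slopsquatting.
# Assumptions: dependencies are listed one per line in requirements.txt,
# and PyPI's JSON API (https://pypi.org/pypi/<name>/json) is the registry.
# A name that resolves is not automatically safe; an unknown or newly
# registered package still needs human review.
import re
import urllib.error
import urllib.request


def exists_on_pypi(name: str) -> bool:
    """Return True if the package name resolves on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # 404 (or any other HTTP error) means the name did not resolve.
        return False


def audit(requirements_path: str = "requirements.txt") -> None:
    """Print whether each requirement resolves on PyPI."""
    with open(requirements_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Take the bare package name, dropping version specifiers/extras.
            name = re.split(r"[\s=<>~!\[;]", line, maxsplit=1)[0]
            status = "found" if exists_on_pypi(name) else "MISSING (possible hallucination)"
            print(f"{name}: {status}")


if __name__ == "__main__":
    audit()
```

The point is less the script itself than where the check has to happen: before generated code ever reaches a reviewer, which is exactly the kind of vigilance that doesn't scale with contribution volume.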
Not to worry, though: the year of the Hurd is fast approaching.