
RFK Jr.’s Disastrous MAHA Report Seems to Have Been Written Using AI [archive]
Trump’s Health Secretary pledged his department would use AI “aggressively.” Promises made, promises kept?


By: Naomi Lachance; Rolling Stone
Published: May 31st, 2025


A report on children’s health from Donald Trump and Robert F. Kennedy Jr.’s “Make America Healthy Again” commission referenced fake research and misinterpreted studies to support its agenda. It also included citation errors, like crediting the wrong author on a study.

To make things worse, the report appears to have been written using artificial intelligence, according to The Washington Post. The revelation comes just weeks after Kennedy, the Secretary of the Health and Human Services Department, touted the department’s commitment to using AI liberally.

The “MAHA Report: Make Our Children Healthy Again” looks at diet, technology usage, medication usage, and other factors that contribute to children’s health. A key consequence of children’s health issues, the report argues, is that the majority would not be able to serve in the military, “primarily due to obesity, poor physical fitness, and/or mental health challenges.” The report also argues that children are on too many medications, which aligns with RFK Jr.’s longtime diatribes against vaccines and drugmakers.

The shoddy “research” calls into question the report’s validity. NOTUS first pointed out the report’s many issues on Thursday, showing that seven cited studies don’t exist. NOTUS also found issues with how the report interpreted its sources.

“The paper cited is not a real paper that I or my colleagues were involved with,” epidemiologist Katherine Keyes told NOTUS of a reference that named her. “We’ve certainly done research on this topic, but did not publish a paper in JAMA Pediatrics on this topic with that co-author group, or with that title.”

Citations in particular show hallmark signs of AI usage, the Post found. These issues go beyond typical user error in writing somewhat annoying APA citations. For example, URLs in two citations include the term “oaicite,” referencing OpenAI, which the Post calls a “definitive sign” that the authors used AI.

The report also cites articles that do not exist. For example, “Direct-to-consumer advertising and the rise in ADHD medication use among children” sounds like it could be a real article, but it was fabricated. The report cited an article from psychiatry professor Robert Findling on a topic that he writes about, but the article does not exist. This is a sign of AI usage: chatbots “hallucinate,” as the Post says, citing studies that could be real but are not.

Two citations for an article from U.S. News and World Report titled “How much recess should kids get?” each credit a different author. But neither author is the one who actually wrote the article. Two citations for another article on recess do the same. AI chatbots are known to mix real references with false information, often described as hallucinations.

For a statistic about overprescription for children with asthma, the report cites an article that does seem to exist, but that does not include the statistic. The lead author for the article in the citation is correct, but the co-authors are not — another error.

The Post also identified a URL that no longer works. If AI is trained on older material, it can include outdated links.

Another citation includes a quotation from the reference material, an error only someone who does not conceptually grasp citations — or a bot — would make.

Rolling Stone also noticed that some citations are missing italicization and others are missing capitalization, which at the very least suggests an author without an academic background, a lazy author, or perhaps a bot.

On Friday, NOTUS found that the report had been updated, introducing entirely new errors. In the hours since the Post’s story came out Friday night, the White House appears to have edited the report again to remove some of the evidence the article referenced. For example, “oaicite” no longer appears anywhere in the report.[*]

An HHS spokesman told the Post that “minor citation and formatting errors have been corrected, but the substance of the MAHA report remains the same — a historic and transformative assessment by the federal government to understand the chronic disease epidemic afflicting our nation’s children.”



[*]🐴 I have linked to the oldest available archive of the MAHA Report on the Wayback Machine, which has the oaicite citations intact for you to view yourself.
 
Can an egghead answer me this question? Why does AI lie rather than just going "I don't know nigga"

I was trying to remember an episode of Peep Show where Mark took drugs, and it gave me the title and summary of an episode it made up rather than just saying it didn't know the episode name. It was like talking to a small child caught in a lie who just kept going with it.

perhaps it's more human than we realize...
 
Can an egghead answer me this question? Why does AI lie rather than just going "I don't know nigga"
Because it's not thinking. It doesn't "know things" about a subject like you or I do. It has no conception of being ignorant of a particular topic. At their core, LLMs just string words together in statistically-likely patterns that are derived from their training material.
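To make the "statistically-likely patterns" point concrete, here's a toy sketch (assumed illustration, nothing like a real LLM in scale or architecture): a bigram model trained on a ten-word corpus. Note that it always emits the likeliest continuation, even for a context it has never seen — there is simply no "I don't know" state for it to land in.

```python
from collections import Counter, defaultdict

# Toy bigram "language model" trained on a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Return the statistically likeliest continuation. For a context
    never seen in training, fall back to the globally most common word
    -- the model still answers confidently rather than admitting ignorance."""
    if counts[prev]:
        return counts[prev].most_common(1)[0][0]
    return Counter(corpus).most_common(1)[0][0]

print(next_word("the"))   # "cat" -- a context it saw during training
print(next_word("dog"))   # "the" -- a context it never saw, yet it answers anyway
```

Real LLMs add many refinements on top of this, but the failure mode in the quoted post is the same shape: the machinery is built to produce a plausible continuation, not to check whether one exists.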
 
Can an egghead answer me this question? Why does AI lie rather than just going "I don't know nigga"

I was trying to remember an episode of Peep Show where Mark took drugs, and it gave me the title and summary of an episode it made up rather than just saying it didn't know the episode name. It was like talking to a small child caught in a lie who just kept going with it.

perhaps it's more human than we realize...


AI tends to conflate information from multiple sources, even if those sources aren't exactly related.

I've seen it multiple times recently. AI states in a very affirmative manner that tirzepatide gives you lymphatic tumors and cancer. The reality is that this was found in mice given excessive amounts of the drug, never in humans (yet).

Another issue I had recently: I was trying to Google why a lemon sauce I made tasted a bit metallic. In the same response where AI brought up metal leaching, it also claimed it was dangerous and could be a health hazard. This info was wrong; it pulled it from a site that covered different types of metal leaching and presented information about one type of cookware as a "universal" health concern.

So the issue is AI doesn't know the difference between legitimate information and illegitimate information, it doesn't know when you're memeing, etc....
 
Likely malicious compliance, or when someone in charge found out that it normally takes a federal worker 16 man-years to generate a few pages of "work", they said "fuck it, use AI".
 
Can an egghead answer me this question? Why does AI lie rather than just going "I don't know nigga"

I was trying to remember an episode of Peep Show where Mark took drugs, and it gave me the title and summary of an episode it made up rather than just saying it didn't know the episode name. It was like talking to a small child caught in a lie who just kept going with it.

perhaps it's more human than we realize...
Lie? Pshaw!

My employer's AI primer course assures me, AI doesn't lie, it "hallucinates"!

No, really, that's the defense for why we should use it.... it doesn't lie... but sometimes? It has a fever dream/acid trip and honestly THINKS it knows.
 
I like how the article insinuates the journalist just called someone up and asked about it, but didn't really go into any detail?
 
Ah, just track the outcomes of all kids born in 2025-2027 and see how many of them are alive, retarded or both by 2065. Call it a cohort study. Then you'll have data on how effective his health interventions are.
 