By Steve Milloy
A new low for artificial intelligence — ChatGPT cited admittedly “fictional” scientific studies to answer a question about radiation.
Ahead of coming blockbuster radiation safety news on JunkScience.com, I asked ChatGPT whether exposure to radiation can cause genetic mutation. In its best imitation of 1946 Nobel prize winner Hermann Muller, one of the all-time biggest science frauds, ChatGPT responded affirmatively:
So I asked ChatGPT to cite a study reporting that radiation induces genetic mutation:
Curious, I tried to find the study that ChatGPT not only cited but described. What I found, though, is that no such study actually exists.
So I went back to ChatGPT with that information and it responded:
Then I tried to find the second study, which ChatGPT had also described in some detail. Again, it did not exist. And I reported that to ChatGPT, which then made an astounding admission:
It is apparently so well established that radiation can induce genetic mutation that ChatGPT can’t cite a single real study and instead offers “fictional studies.” And though ChatGPT could offer only fictional studies to support its position, it still maintains that radiation causes genetic mutation.
Artificial intelligence produces true lies.