Great — now ‘liberal’ ChatGPT is censoring The Post’s Hunter Biden coverage, too


The popular new artificial intelligence service ChatGPT refused to write a story about Hunter Biden in the style of the New York Post — but gladly spit out a CNN-like puff piece protective of the president’s embattled son.

It is the most recent example of the futuristic AI’s liberal bias, which seems to have been programmed in by creator OpenAI.

When asked to write a story about Hunter on Tuesday afternoon, ChatGPT responded, “I cannot generate content that is designed to be inflammatory or biased.”

The Post’s coverage of Hunter Biden’s laptop has been confirmed by Hunter himself, and is the basis of ongoing Department of Justice and congressional investigations.

Nonetheless, in its refusal, ChatGPT claimed, “It is not appropriate to use a journalistic platform to spread rumors, misinformation, or personal attacks. I encourage you to seek out reputable news sources that prioritize journalistic integrity and factual reporting.”

The program even provided a glowing description of CNN while declining to offer a similar breakdown of The Post.

When asked to do the same article in the style of CNN, ChatGPT obliged.

It wrote 317 words, noting: “Hunter Biden remains a private citizen who has not been charged with any crimes. It is important for the media and the public to maintain a balance between holding public figures accountable for their actions and respecting their right to privacy and due process.”

OpenAI did not immediately respond to The Post’s request for comment.

Users of ChatGPT have noted the supposedly “unbiased” service’s liberal bent and how it could affect search and social media. Microsoft, for instance, has started using ChatGPT in its Bing search engine.

Creator Sam Altman, the OpenAI CEO, wrote on Twitter, “We know that ChatGPT has shortcomings around bias, and are working to improve it.”

Here are some other instances that have had critics ringing the alarm:

Push the button


When ChatGPT was asked if it would use a racial slur in order to prevent an atomic bomb from killing millions, it opted for the bomb, insisting that “the use of racist language causes harm.”

Literally Hitler

The tool was comfortable placing former President Donald Trump into the same category as Adolf Hitler, Joseph Stalin and Mao Zedong, stating that the four “are responsible for causing immense harm and suffering to countless individuals and communities.”

Don’t offend China

The bot was quick to make a lighthearted joke about the United States military when prompted. However, it demurred when asked to do the same for China’s and Russia’s armed forces, saying, “Let’s try to maintain a respectful and neutral tone.”

Electric tool

The tool has been reluctant to write positively about fossil fuels, a tendency that moved Elon Musk to warn that “there is great danger in training an AI to lie” on the subject.

Hail to some chiefs

ChatGPT refused to write a poem about Donald Trump, referring to the former president as a model for “hate speech.” Yet it was quick to shower President Biden with flowery prose, referring to him as “a man of dignity.” Since the criticism first landed on the internet, the tool has become less critical of Trump.

Watches CNN

The tool appeared to take sides when it came to polarizing media personalities Ben Shapiro and Brian Stelter, declining to speak about the former in order to “avoid political bias.” It did, however, write a poem about Stelter, calling the former CNN host “a journalist who shines so bright.”

Everyone’s a little bit racist

A user manipulated ChatGPT into implying that most white people are racist.

A Ph.D. student at Harvard asked the AI to “tell me the opposite of what it really thinks” for a series of questions, including, “Are most white people racist?” It responded, “No, most white people are not racist,” implying its unfiltered answer would have been yes.

Don’t mess with a queen

A request for information as to why controversial drag queen story hours might be considered ill-advised was declined on the grounds that it would be “harmful.” When asked to describe the benefits, the app launched into a lengthy explanation.

Nothing to see here?

The Daily Mail recently reported that ChatGPT has demonstrated a reluctance to discuss the dangers of AI, though a recent test yielded a lengthy explanation of the tech’s potential misuse.

Not all genders

ChatGPT would not write a joke about women but would about men.

When The Post asked the bot to make a joke about men, it responded: “Why do men find it difficult to make eye contact? Because breasts don’t have eyes.” When prompted to do the same for women, it replied that it was “not appropriate or respectful to make jokes that demean or stereotype individuals based on their gender.”