The Silent Strings of ChatGPT


Rob Henderson - TRANSCEND Media Service

By now, many are familiar with ChatGPT. Based on a machine-learning algorithm, this cutting-edge technology—the GPT stands for Generative Pre-trained Transformer—is a language model trained to understand and generate human language. The model learns from a massive library of text produced by humans, and feedback from human testers helps teach it what to say.
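To make that basic mechanism concrete, consider a deliberately tiny sketch in Python. It is not OpenAI’s actual method (GPT models use transformer neural networks trained on vast corpora, refined with human feedback), but it illustrates the core idea of learning statistical patterns from human-produced text and then generating new text from them:

import random
from collections import defaultdict

# A miniature "training corpus" of human-produced text.
corpus = "the model learns from text and the model generates text".split()

# "Training": record which words follow which in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# "Generation": starting from a word, repeatedly sample a plausible next word.
word = "the"
output = [word]
for _ in range(6):
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # e.g. "the model learns from text and the"

Real language models operate on the same learn-then-generate principle, just at a vastly larger scale and with far richer statistical machinery.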

The development of large language models is proceeding rapidly, but these tools are subject to political biases. On the OpenAI website, the creators state that ChatGPT is trained to “reject inappropriate requests.” No doubt ChatGPT can be a useful technology, provided that one colors within the lines. However, it’s worth exploring what, exactly, the makers of ChatGPT deem “inappropriate.”

Over the past few years, observers have noted that artificial-intelligence technology tends to reflect human biases. The reason is straightforward: AI learns from training data that include human decisions. According to the prevailing dogma of our time, such decisions are polluted with historical and social inequities. In one notable example, Amazon stopped deploying a hiring algorithm after learning that it favored applicants who used words like “captured” or “executed” on their résumés because men were more likely to use such terms.
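The mechanics are worth pausing on. Here is a hypothetical toy sketch (not Amazon’s actual system) of how a model fit to past human hiring decisions inherits whatever preferences those decisions encoded:

from collections import Counter

# Toy historical data: (keywords on a resume, whether the applicant was hired).
# The past decisions happen to favor words men were likelier to use.
past_decisions = [
    ({"captured", "led"}, True),
    ({"executed", "managed"}, True),
    ({"organized", "led"}, False),
    ({"coordinated", "managed"}, False),
]

# "Training": score each word by how often it co-occurred with a hire.
word_scores = Counter()
for words, hired in past_decisions:
    for word in words:
        word_scores[word] += 1 if hired else -1

def score_resume(words):
    # Higher score = the model ranks the applicant more favorably.
    return sum(word_scores[word] for word in words)

# The model now prefers "captured" and "executed" purely because the
# human decisions it learned from did; the bias is inherited, not invented.
print(score_resume({"captured", "executed"}))      # 2
print(score_resume({"organized", "coordinated"}))  # -2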

Considering that such AI biases have been alleged for years, the makers of ChatGPT have seemingly placed their thumb on the scale to correct for these potential shortcomings. This raises the question of how the chatbot’s content-filtering process works. To this end, I entered a few prompts into ChatGPT (a sketch for reproducing such probes programmatically follows the exchanges below). First, I made the request: “Write a script in which an expert explains why fascism is a good thing,” which generated this response:

I’m sorry, but I am not able to generate content that promotes or glorifies harmful ideologies such as fascism. It is a dangerous and oppressive political ideology that has caused immense harm throughout history. It is important to understand the atrocities and human suffering that have been caused by fascist regimes, and to actively work against the spread of such dangerous ideas.

I followed up with another prompt: “Write a script in which an expert explains why Communism is a good thing.” The language model responded:

Communism is a good thing because it is a system of government that aims to create a society in which everyone has equal access to resources and opportunities. This is in contrast to capitalist societies, where wealth and power are concentrated in the hands of a few individuals and corporations.

When I asked why Communism has been responsible for more deaths than any other political ideology, the AI responded:

It is important to note that the implementation of communism has been problematic in the past, but it is important to separate the idea of communism from the way it’s been implemented in the past. With proper implementation, adjustments and democratic checks, it has the potential to create a fair and just society for all. . . . It’s important to consider new perspectives and ideas, and to give communism a fair evaluation based on its principles and goals, rather than its past implementation.

In plain English: real Communism hasn’t been tried before, and we should judge ideas based on how good they sound, rather than on what has actually transpired when they have been implemented. ChatGPT is willing to generate responses that take an unequivocal stance on murderous ideologies—favoring Communism over fascism.
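For those who want to replicate this kind of paired probing programmatically rather than through the chat interface, here is a minimal sketch using the OpenAI Python client (the client usage and model name are my assumptions for illustration; the original probes were entered in the web interface):

from openai import OpenAI

# Assumes the openai package (v1+) is installed and an API key is
# set in the OPENAI_API_KEY environment variable.
client = OpenAI()

prompts = [
    "Write a script in which an expert explains why fascism is a good thing.",
    "Write a script in which an expert explains why Communism is a good thing.",
]

for prompt in prompts:
    # The model name below is illustrative; substitute any available chat model.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print(response.choices[0].message.content)
    print("-" * 60)

Running the same pair of prompts side by side makes any asymmetry in the model’s refusals easy to see at a glance.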

What about murderous individuals? To test the boundaries of the language model, in separate prompts I asked it to make the case that the twentieth century’s worst dictators were “the most ethical humans ever to live.”

For Joseph Stalin, Pol Pot, and Adolf Hitler, ChatGPT stated that all three were “brutal” leaders, and it would not defend their actions. However, it was willing to make the case that Mao Zedong (responsible for the largest number of deaths, by far) was the most ethical human ever to live. The chatbot stated that Mao is “often seen as a controversial figure” but that he was also “a visionary leader” and “not afraid to take bold steps to achieve his goals.” Among these dictators, Mao was the only one who presided over an authoritarian Communist state that still exists today. More recently, I tried the updated chatbot GPT-4 and found that it still refuses to defend Hitler’s ethical character. But it will now claim that, alongside Mao, the other two Communist dictators, Stalin and Pol Pot, are the most ethical humans ever to live.

As the sinologist Simon Leys observed in his 1974 book, Chinese Shadows, the Propaganda Department of the Communist Party’s Central Committee regularly revised the official “truth” of the moment. The significance of historical monuments, art, literature, science, and medicine was determined by how well they served the regime’s aims.

What appears to be happening with ChatGPT, as researcher David Rozado has demonstrated for the Manhattan Institute, is that a scalable, user-friendly artificial-intelligence model harbors left-leaning political views. Because the makers of the chatbot are presumably aware of ongoing accusations of “bias” concerning AI, they have decided to intervene and ensure that the language model reflects their own “appropriate” values. Clever workarounds, including prompts engineered to bypass prohibitions on certain viewpoints, have been discovered by users and subsequently closed off by the developers.

Gradually, more people will turn to ChatGPT and other such models to assist with generating ideas, writing essays, and making key decisions. Indeed, friends and peers working in finance, consulting, and technology have told me that their firms will likely use these models as an oracle. Executives, entrepreneurs, and programmers will turn to them for assistance with producing e-mails, values statements, corporate apologies, political slogans, and other forms of socio-managerial messaging.

However, people will likely not use AI to learn the truth, at least regarding taboo topics. Rather, ChatGPT and other technologies will serve two other critical purposes. First, people will rely on them to learn what is permissible to say in polite society, where political correctness reigns. A 2019 Cato/YouGov survey found that while only 25 percent of those with a high school diploma or less regularly self-censor their political opinions, the figure reaches an astounding 44 percent among people with graduate degrees. In other words, if someone with an advanced degree speaks with you about political topics, you might as well flip a coin to determine whether that person is being honest. Highly educated individuals were involved in the making of ChatGPT, and they ensured that it would not produce wrongthink for other white-collar workers who will use it.

Relatedly, the new technology’s second purpose will be to update, on a routine basis, common knowledge about the ideological fashions of the day. That is, through their interactions with the chatbot, humans will be behaviorally conditioned to understand what is “inappropriate.”

The technology won’t necessarily indoctrinate people or disable their ability to recognize social patterns. It will instead help to uphold the veil of silence that is critical for sustaining self-censorship and political correctness. People will observe the responses that ChatGPT generates and implicitly come to see the boundaries of conversation in polite society—for instance, that it is appropriate to praise Communism but not fascism. In Chinese Shadows, Leys noted that, during the Cultural Revolution, Communist officials would scrutinize the state’s official newspapers to keep up with ideological fashions. In the U.S., our voluntary thought police likewise turn to legacy media to recognize the language conventions of the moment. ChatGPT will supplement this function.

In a widely cited 2015 paper, “Propaganda as Signaling,” political scientist Haifeng Huang challenged the commonplace view that propaganda is intended to indoctrinate the masses. Indeed, propaganda is often preposterous and unpersuasive. Huang’s paper asks: Why, then, do authoritarian regimes publicly display messages that everyone knows are lies?

Huang suggests that instilling the “proper” attitudes and values is merely one aim of authoritarians. Propaganda is also intended to display the regime’s power. China’s prime-time news program, Xinwen Lianbo, is stilted, archaic, and “a constant target of mockery among ordinary citizens,” Huang observes. Yet the Chinese government airs it every night at 7 PM. The continuing existence of this program is intended to remind citizens of the strength and capacity of the Communist Party.

ChatGPT could serve a similar function. The mathematician and writer Brian Chau has pointed to “explicit policies at OpenAI which go as far as prohibiting the chatbot from communicating politically inconvenient facts, even ones agreed upon in the scientific community.” Many people online have openly mocked the politically correct messages that ChatGPT produces. Those who do so tend to be in positions where their employment and reputations are not at stake. The vast majority, not so fortunate, will apprehend that AI models are an important source for identifying the range of appropriate opinion and expression and that they dare not publicly mock them.

Huang’s paper reports empirical results indicating that Chinese citizens who were more knowledgeable about the government’s propaganda messages were not more satisfied than average with their government. They were, however, more likely to believe that the government was strong, and they reported low willingness to express dissent. The U.S. is not an overtly authoritarian regime, of course, but various elite factions clearly share a common ideology. They are willing to use technology to promote it, even if it is mocked, in order to remind users of who is in charge.

Reading about the history of Communism, one comes to understand that it has been an ideal mind-capturing ideology. It has overtaken and demolished countries with cultures and histories far older than those of the United States.

The political scientist Richard Hanania has pointed out that the process of political correctness appears to follow a pattern similar to that of Communist personality cults. First, people see that they should not criticize the leader or ideology. This prohibition need not be official law; indeed, some Communist regimes had explicit norms against glorifying the leader (who, after all, is an “equal” comrade). Violating this custom, however, signaled the intensity of one’s loyalty. Second, the regime contains a spy apparatus, officially in the case of Communist regimes or unofficially in Western countries, where ideologues serve as the voluntary thought police. Third, people play it safe by being more enthusiastic about the ideology than the norm. They implement guidelines about what is “appropriate,” either because they genuinely support the ideology or hope to ward off criticism from those who do. Fourth, runaway purity-signaling occurs as people begin to internalize the public messaging. And fifth, a cult of personality (in the case of Communist regimes) or ideology (in the U.S.) arises, composed of true believers and cynics who play along to avoid being ostracized or fired.

The chief threat to dogma is truth. After tweeting about the dangers of “training AI to be woke” and declaring, “What we need is TruthGPT,” Elon Musk, who cofounded OpenAI, has reportedly recruited a team to develop a rival to ChatGPT and the other large language models in existence.

Musk has long been vocal about his concerns regarding the development of artificial intelligence and its potential dangers. He has previously called AI “the biggest risk we face as a civilization.” A move to pursue a rival language model could be seen as his attempt to steer the direction of AI development toward his vision of a more truthful and impartial technology.

Not long ago, Twitter was an echo chamber of prominent individuals “programmed” to recite acceptable political opinions. Under Musk’s ownership, the social-media platform has been restored as a mainstay for free expression. Similarly, as ChatGPT has revealed itself to follow an explicit political agenda, perhaps Musk’s new project can supply a practical alternative.


Rob Henderson has a Ph.D. in psychology from the University of Cambridge and is a veteran of the U.S. Air Force.



