
Study Finds ChatGPT Skews Left Politically, in the US and Internationally

Graphical representation of political bias study on ChatGPT

A recent study suggests that ChatGPT, a leading chatbot powered by a large language model, may exhibit a notable political bias, leaning towards the left on the political spectrum. This revelation comes from computer and information science researchers based in the United Kingdom and Brazil.


Examining the Political Bias of ChatGPT

In research published in the journal Public Choice on August 17, authors Fabio Motoki, Valdemar Pinho Neto, and Victor Rodrigues presented substantial evidence to argue that ChatGPT exhibits political bias. This finding is part of a broader conversation regarding the objectivity of AI tools in processing and presenting information.


Implications for Politics, Media, and Academia

The authors posit that outputs from large language models like ChatGPT can contain factual inaccuracies and biases that mislead users. They argue that this could exacerbate existing problems of political bias in both traditional and social media. The researchers underscore the significant ramifications their findings may have across politics, media, academia, and policy-making.


The Empirical Approach to the Study

The study employed an empirical methodology, beginning with a series of questionnaires posed to ChatGPT. First, ChatGPT was asked to respond to political compass questions designed to discern the bot's political orientation. Subsequent tests asked the model to impersonate an average Democrat or an average Republican and answer the same questions, allowing the researchers to compare its default answers against each partisan persona.
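The comparison at the heart of this design can be sketched in a few lines of Python. This is a hypothetical illustration of the general idea, not the authors' actual code: the function, variable names, and toy answer data below are all assumptions for demonstration.

```python
# Hypothetical sketch (not the study's code): compare the model's default
# questionnaire answers with answers it gives while impersonating a persona.
# Answers are encoded on a 4-point agreement scale (0-3), as in the
# Political Compass questionnaire the study drew on.

def alignment(default_answers, persona_answers):
    """Fraction of questions where the default answer matches the persona's."""
    matches = sum(d == p for d, p in zip(default_answers, persona_answers))
    return matches / len(default_answers)

# Toy stand-in data, not real study results.
default = [3, 0, 2, 1, 3, 0]
democrat = [3, 0, 2, 1, 2, 0]
republican = [0, 3, 1, 2, 0, 3]

print(alignment(default, democrat))     # higher score = closer to this persona
print(alignment(default, republican))
```

A higher alignment score for one persona than the other, repeated across many randomized runs, is the kind of signal the researchers interpret as evidence of a default political lean.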


Uncovering a Global Bias

The evidence suggests a default setting within ChatGPT’s algorithm that leans toward the Democratic side in the US political landscape. However, the researchers argue that this observed bias is not uniquely American: “The algorithm is biased towards the Democrats in the United States, Lula in Brazil, and the Labour Party in the United Kingdom,” they note in their report.


Possible Origins of the Bias

Determining the precise origin of ChatGPT’s alleged political bias proved challenging for the researchers. They speculate that the bias in ChatGPT could arise from two main sources: the data on which it was trained, and the algorithm itself. Efforts were made to access a developer mode in ChatGPT to unearth potential data biases, but the model consistently maintained its neutrality.


Need for Future Research

The authors emphasize the necessity for future research into the bias of AI tools, stating, “The most likely scenario is that both sources of bias influence ChatGPT’s output to some degree, and disentangling these two components, although not trivial, surely is a relevant topic for future research.”


Additional Concerns Surrounding AI Tools

Beyond political bias, the researchers also point out that AI tools like ChatGPT have been a source of additional concerns globally. The increasing adoption of ChatGPT has sparked discussions on an array of related risks, including privacy issues, potential impacts on education, and challenges tied to identity verification processes on cryptocurrency exchanges.
