Here’s how ChatGPT-maker OpenAI says it tackles biases
“Since our launch of ChatGPT, users have shared outputs that they consider politically biased, offensive, or otherwise objectionable. In many cases, we think that the concerns raised have been valid and have uncovered real limitations of our systems which we want to address,” the company said in a blog post.
OpenAI also said that it has seen “a few misconceptions about how our systems and policies work together to shape the outputs you get from ChatGPT.”
“Biases are bugs”
In the blog, OpenAI acknowledged that many people are rightly worried about biases in the design and impact of AI systems. It added that models like ChatGPT are shaped both by the data they are trained on and by input from the people who use, or are affected by, such systems.
“Our guidelines are explicit that reviewers should not favour any political group. Biases that nevertheless may emerge from the process described above are bugs, not features,” the company said. It further said it believes technology companies must be accountable for producing policies that stand up to scrutiny.
“We are committed to robustly addressing this issue and being transparent about both our intentions and our progress,” it noted.
OpenAI said it is working to improve the clarity of these guidelines and, based on lessons from the ChatGPT launch, will give reviewers clearer instructions about potential pitfalls and challenges tied to bias, as well as about controversial figures and themes.
As part of its transparency efforts, OpenAI is also working to share aggregated demographic information about its reviewers “in a way that doesn’t violate privacy rules and norms,” since reviewer demographics are an additional source of potential bias in system outputs.
The company is also researching how to make the fine-tuning process more understandable and controllable.