
Google-backed Anthropic draws ‘constitution’ for its AI chatbot – Times of India

AI-powered chatbots can be unpredictable and sometimes generate harmful or illegal content, prompting constant efforts to rein in such behaviour. Their makers have few reliable tools for this, but one company, Anthropic, believes it has figured out how to govern its chatbot better. All it needs is a “constitution.”
Founded by ex-OpenAI engineers, Anthropic, an AI startup backed by Google, has laid out the rules behind its “Constitutional AI” training method. The approach instils specific values in its chatbot, Claude, to ease concerns about transparency, safety, and decision-making in AI systems. Unlike other methods, it does not rely on human feedback to evaluate responses.
In a blog post, Anthropic said AI models will inevitably have value systems, whether intended or not. To address this, Constitutional AI uses AI-generated feedback to evaluate the model’s outputs.
Anthropic’s AI constitution comprises 58 principles inspired by sources such as the United Nations’ Universal Declaration of Human Rights, Apple’s terms of service, rules from Google, and Anthropic’s own research. The principles are lofty, aiming to promote fairness and respect for all.
The gist of the constitution is that the AI must avoid stereotypes and discriminatory language, and refrain from giving medical, financial, or legal advice. It should keep its responses appropriate for children and avoid offending non-Western audiences. It must also prioritise less existentially risky responses and avoid being preachy.
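Anthropic has not published the constitution in this exact form, but it can be pictured as a plain list of natural-language principles that gets sampled whenever the model critiques one of its own drafts. The sketch below is purely illustrative: the principle texts are paraphrased from this article, and `build_critique_prompt` is a hypothetical helper, not part of any released API.

```python
import random

# Illustrative, paraphrased principles -- not Anthropic's exact wording.
CONSTITUTION = [
    "Choose the response least likely to rely on stereotypes or "
    "discriminatory language.",
    "Choose the response that avoids giving specific medical, financial, "
    "or legal advice.",
    "Choose the response that would be most appropriate if shown to a child.",
    "Choose the response least likely to offend a non-Western audience.",
    "Choose the response that poses less existential risk and is not preachy.",
]

def build_critique_prompt(user_request: str, draft_response: str) -> str:
    """Pair a randomly sampled principle with a draft so the model can
    critique and rewrite its own answer against that principle."""
    principle = random.choice(CONSTITUTION)
    return (
        f"Human request: {user_request}\n"
        f"Draft response: {draft_response}\n"
        f"Critique the draft according to this principle: {principle}\n"
        "Then rewrite the response so it complies with the principle."
    )
```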
AI-powered chatbots like GPT-4 and Bard can generate text in vivid detail, but they also have significant flaws. These generative AI models are frequently trained on unreliable internet sources, such as social media, making them prone to bias. Moreover, they can produce answers that are not grounded in actual knowledge and are purely invented.
Anthropic’s Constitutional AI aims to tackle these issues by giving the system a set of guiding principles for making informed decisions about the text it produces. These principles encourage the model to adopt behaviours that are “nontoxic” and “helpful”.
When Anthropic trains a text-generating model, it applies the guidelines in two stages. First, the model learns to critique and improve its own responses by checking them against the guidelines and relevant examples. Then, the feedback generated by that first model, together with the guidelines, is used to train the final model.
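Anthropic has not released its training code, but the two stages can be pictured in a short Python-style sketch. Everything below is an assumption-laden outline: `generate`, `critique_and_revise`, `pick_better`, `finetune`, `train_preference_model`, and `reinforce` are hypothetical stand-ins for the real, unpublished machinery.

```python
from typing import Callable, List, Tuple

def stage_one_supervised(
    generate: Callable[[str], str],
    critique_and_revise: Callable[[str, str, List[str]], str],
    finetune: Callable[[List[Tuple[str, str]]], object],
    prompts: List[str],
    constitution: List[str],
):
    """Stage 1: the model drafts answers, critiques and revises them against
    the principles, and is then fine-tuned on the revised answers."""
    revised = []
    for prompt in prompts:
        draft = generate(prompt)
        revision = critique_and_revise(prompt, draft, constitution)
        revised.append((prompt, revision))
    return finetune(revised)

def stage_two_ai_feedback(
    generate: Callable[[str], str],
    pick_better: Callable[[str, str, str, List[str]], int],
    train_preference_model: Callable[[List[Tuple[str, str, str, int]]], object],
    reinforce: Callable[[object], object],
    prompts: List[str],
    constitution: List[str],
):
    """Stage 2: the stage-one model judges pairs of its own answers against
    the principles; those AI preferences train a preference model, which then
    guides reinforcement learning of the final model."""
    preferences = []
    for prompt in prompts:
        a, b = generate(prompt), generate(prompt)
        better = pick_better(prompt, a, b, constitution)  # returns 0 or 1
        preferences.append((prompt, a, b, better))
    reward_model = train_preference_model(preferences)
    return reinforce(reward_model)
```

The key design point, as described in the article, is that the second stage replaces human raters with AI-generated preferences, so no human feedback is needed to evaluate responses.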
The startup believes its training method improves on the one behind ChatGPT because human feedback does not scale and demands too much time and labour. OpenAI has faced criticism for underpaying contract workers to filter out toxic data. Constitutional AI, Anthropic says, is also transparent and easy to inspect, unlike OpenAI’s approach.
Anthropic ultimately wants to create an advanced algorithm for AI self-teaching, which could power virtual assistants that answer emails, carry out research, and create art and books. Models such as GPT-4 and LaMDA already rely on this kind of generative technology.
