AI startup Anthropic unveils moral principles behind chatbot Claude

Alphabet-backed AI startup Anthropic has disclosed the set of value guidelines used to train its ChatGPT rival, Claude, in the wake of concerns about incorrect and biased information being given to users of generative AI programs.

Founded in 2021 by former senior members of Microsoft-backed OpenAI, Anthropic chose to train Claude using Constitutional AI, a method that applies a “set of principles to make judgments about outputs,” which helps Claude “avoid toxic or discriminatory outputs,” such as helping a human engage in illegal or unethical activities, according to a blog post Anthropic published this week. Anthropic says this approach has enabled it to build an AI system that is broadly “helpful, honest, and harmless.”


Story added 10 May 2023; the full text is available at the content source linked above.