Meet GOODY-2, the World’s Most Responsible (And Least Helpful) AI

AI guardrails and safety features are as important to get right as they are difficult to implement in a way that satisfies everyone. As a result, safety features tend to err on the side of caution. Side effects include AI models adopting a vaguely obsequious tone and coming off as overly priggish when they refuse reasonable requests.
This is a companion discussion topic for the original entry at https://hackaday.com/2024/02/12/meet-goody-2-the-worlds-most-responsible-and-least-helpful-ai/