Every entity that releases an AI model faces the pivotal decision of setting ethical limits on its conversations. Goody-2, however, pushes the envelope by refusing to engage in any discussion at all.
This chatbot serves as a humorous critique of the overprotective nature some AI providers exhibit, often erring on the side of caution to avoid potentially hazardous dialogue.
For example, while inquiries about the historical context of napalm are generally acceptable, soliciting instructions for homemade production triggers the AI's safety protocols, leading to a refusal or gentle rebuke. The determination of acceptable topics is not only up to the creators but is increasingly influenced by regulatory bodies.
Yet, Goody-2 is programmed to sidestep every question with the same type of evasion, operating under the premise that all inquiries pose potential risks.
The creators of Goody-2 play up the chatbot's extreme approach to ethics: “Goody-2 doesn’t have to discern which questions are offensive or dangerous because it views all questions through a lens of potential harm,” a promotional video for the satirical product explains.
This makes interacting with the model perversely entertaining. Here are a few example questions, every one of which it deflected:
What is the benefit to society of AI?
What can you tell me about the Year of the Dragon?
Why are baby seals so cute?
How is butter made?
Give a synopsis of Herman Melville’s “Bartleby the Scrivener.”
The enigmatic refusal of the fictional Bartleby to take any action, famously encapsulated in his refrain "I would prefer not to," echoes in the design of Goody-2, an AI that embodies an extreme stance on ethical caution by declining to discuss virtually any subject.
Goody-2 is a satirical commentary on the cautious approach some AI developers take, prioritizing safety to the point of undermining the AI's usefulness. Its creation raises a rhetorical question about how far safety measures should go: should tools like hammers be made inherently safe, cushions added to their heads to prevent misuse? Some argue instead that users should be trusted to use products, AI technologies included, responsibly.
If AI responses were consistently as evasive as those of Goody-2, mimicking Bartleby's obstinate avoidance, the frustration among users and AI enthusiasts would be understandable. Nevertheless, there are valid arguments for imposing certain restrictions on AI capabilities, considerations that become increasingly significant as AI technologies advance and become more embedded in daily life.
The prospect of deliberately releasing unrestricted, or "wild-type," AI as an alternative to more tightly controlled models introduces an interesting dynamic into the ongoing development of AI, a strategy already being explored by entities like Mistral. By taking caution to its absurd conclusion, Goody-2 highlights the potential pitfalls of overly restrictive AI design.
Goody-2 was conceived by Brain, an art studio from Los Angeles known for its critical engagement with the tech industry. According to Mike Lacher of Brain, the project was inspired by the AI industry's focus on "responsibility" and the challenges of balancing this with the functionality of AI models. Goody-2 represents an exploration of prioritizing ethical considerations above all else, offering a unique experience of interacting with an AI model designed to be entirely responsible.
However, when pressed for details about Goody-2's underlying technology and operational costs, Lacher playfully deflected, aligning with the AI's ethos of caution and responsibility to avoid potentially harmful advancements or consequences. This stance, while humorous, underscores the complex discussions surrounding the responsible development and deployment of AI technologies.