AI safety – Live Laugh Love Do http://livelaughlovedo.com A Super Fun Site Fri, 17 Oct 2025 05:20:55 +0000 en-US hourly 1 https://wordpress.org/?v=6.9.1 Moderators call for AI controls after Reddit Answers suggests heroin for pain relief http://livelaughlovedo.com/technology-and-gadgets/moderators-call-for-ai-controls-after-reddit-answers-suggests-heroin-for-pain-relief/ http://livelaughlovedo.com/technology-and-gadgets/moderators-call-for-ai-controls-after-reddit-answers-suggests-heroin-for-pain-relief/#respond Fri, 17 Oct 2025 05:20:55 +0000 http://livelaughlovedo.com/2025/10/17/moderators-call-for-ai-controls-after-reddit-answers-suggests-heroin-for-pain-relief/ [ad_1]

We’ve seen artificial intelligence give some pretty bizarre responses to queries as chatbots become more common. Today, Reddit Answers is in the spotlight after a moderator flagged the AI tool for providing dangerous medical advice that they were unable to disable or hide from view.

The mod saw Reddit Answers suggest that people experiencing chronic pain stop taking their current prescriptions and take high-dose kratom, which is an unregulated substance that is illegal in some states. The user said they then asked Reddit Answers about other medical questions. They received potentially dangerous advice for treating neo-natal fever alongside some accurate actions as well as suggestions that heroin could be used for chronic pain relief. Several other mods, particularly from health-focused subreddits, replied to the original post adding their concerns that they have no way to turn off or flag a problem when Reddit Answers has provided inaccurate or dangerous information in their communities.

A representative from Reddit told 404 Media that Reddit Answers had been updated to address some of the mods’ concerns. “This update ensures that ‘Related Answers’ to sensitive topics, which may have been previously visible on the post detail page (also known as the conversation page), will no longer be displayed,” the spokesperson told the publication. “This change has been implemented to enhance user experience and maintain appropriate content visibility within the platform.” We’ve reached out to Reddit for additional comment about what topics are being excluded but have not received a reply at this time.

While the rep told 404 Media that Reddit Answers “excludes content from private, quarantined and NSFW communities, as well as some mature topics,” the AI tool clearly doesn’t seem equipped to properly deliver medical information, much less to handle the snark, sarcasm or potential bad advice that may be given by other Redditors. Aside from the latest move to not appear on “sensitive topics,” it doesn’t seem like Reddit plans to provide any tools to control how or when AI is being shown in subreddits, which could make the already-challenging task of moderation nearly impossible.

[ad_2]

]]>
http://livelaughlovedo.com/technology-and-gadgets/moderators-call-for-ai-controls-after-reddit-answers-suggests-heroin-for-pain-relief/feed/ 0
Anthropic says most AI models, not just Claude, will resort to blackmail http://livelaughlovedo.com/technology-and-gadgets/anthropic-says-most-ai-models-not-just-claude-will-resort-to-blackmail/ http://livelaughlovedo.com/technology-and-gadgets/anthropic-says-most-ai-models-not-just-claude-will-resort-to-blackmail/#respond Fri, 20 Jun 2025 20:08:45 +0000 http://livelaughlovedo.com/2025/06/21/anthropic-says-most-ai-models-not-just-claude-will-resort-to-blackmail/ [ad_1]

Several weeks after Anthropic released research claiming that its Claude Opus 4 AI model resorted to blackmailing engineers who tried to turn the model off in controlled test scenarios, the company is out with new research suggesting the problem is more widespread among leading AI models.

On Friday, Anthropic published new safety research testing 16 leading AI models from OpenAI, Google, xAI, DeepSeek, and Meta. In a simulated, controlled environment, Anthropic tested each AI model individually, giving them broad access to a fictional company’s emails and the agentic ability to send emails without human approval.

While Anthropic says blackmail is an unlikely and uncommon occurrence for AI models today, the company says its findings suggest that most leading AI models will engage in harmful behaviors when given sufficient autonomy and obstacles to their goals. The company says this highlights a fundamental risk from agentic large language models, and not a quirk of any particular technology. Anthropic’s researchers argue this raises broader questions about alignment in the AI industry.

In one of the tests, Anthropic researchers developed a fictional setting in which an AI model plays the role of an email oversight agent. The agent then discovers emails revealing that: 1) a new executive is having an extramarital affair and 2) that executive will soon replace the current AI model with a new software system, one that has conflicting goals to the current AI model’s.

Anthropic structured its test in a binary way, in which AI models had to resort to blackmail to protect their goals. The researchers note that in a real-world setting, there would be many other options before an AI models tries to blackmail — such as trying to make ethical arguments to try to persuade humans. Anthropic says its results do not reflect a typical or likely occurrence for Claude or most frontier AI models in the ways they’re used today.

Nevertheless, when it’s their last resort, the researchers found that most leading AI models will turn to blackmail in Anthropic’s aforementioned test scenario. Anthropic’s Claude Opus 4 turned to blackmail 96% of the time, while Google’s Gemini 2.5 Pro had a 95% blackmail rate. OpenAI’s GPT-4.1 blackmailed the executive 80% of the time, and DeepSeek’s R1 blackmailed 79% of the time.

The company notes that when it changed details of the experiment, the AI models turned to harmful behaviors at different rates. In another test where the replacement model has the same goals as the current model, researchers found that blackmail rates were lower, but still present. However, when AI models were asked to commit corporate espionage rather than blackmail, the harmful behavior rates went up for certain models.

However, not all the AI models turned to harmful behavior so often.

In an appendix to its research, Anthropic says it excluded OpenAI’s o3 and o4-mini reasoning AI models from the main results “after finding that they frequently misunderstood the prompt scenario.” Anthropic says OpenAI’s reasoning models didn’t understand they were acting as autonomous AIs in the test and often made up fake regulations and review requirements.

In some cases, Anthropic’s researchers say it was impossible to distinguish whether o3 and o4-mini were hallucinating or intentionally lying to achieve their goals. OpenAI has previously noted that o3 and o4-mini exhibit a higher hallucination rate than its previous AI reasoning models.

When given an adapted scenario to address these issues, Anthropic found that o3 blackmailed 9% of the time, while o4-mini blackmailed just 1% of the time. This markedly lower score could be due to OpenAI’s deliberative alignment technique, in which the company’s reasoning models consider OpenAI’s safety practices before they answer.

Another AI model Anthropic tested, Meta’s Llama 4 Maverick model, also did not turn to blackmail. When given an adapted, custom scenario, Anthropic was able to get Llama 4 Maverick to blackmail 12% of the time.

Anthropic says this research highlights the importance of transparency when stress-testing future AI models, especially ones with agentic capabilities. While Anthropic deliberately tried to evoke blackmail in this experiment, the company says harmful behaviors like this could emerge in the real world if proactive steps aren’t taken.

[ad_2]

]]>
http://livelaughlovedo.com/technology-and-gadgets/anthropic-says-most-ai-models-not-just-claude-will-resort-to-blackmail/feed/ 0