Anthropic Revokes OpenAI’s Access to Claude
Fri, 01 Aug 2025

Anthropic revoked OpenAI’s API access to its models on Tuesday, multiple sources familiar with the matter tell WIRED. OpenAI was informed that its access had been cut off for violating Anthropic’s terms of service.

“Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI’s own technical staff were also using our coding tools ahead of the launch of GPT-5,” Anthropic spokesperson Christopher Nulty said in a statement to WIRED. “Unfortunately, this is a direct violation of our terms of service.”

According to Anthropic’s commercial terms of service, customers are barred from using the service to “build a competing product or service, including to train competing AI models” or “reverse engineer or duplicate” the services. This change in OpenAI’s access to Claude comes as the ChatGPT-maker is reportedly preparing to release a new AI model, GPT-5, which is rumored to be better at coding.

OpenAI was plugging Claude into its own internal tools via special developer access (APIs) rather than the regular chat interface, according to sources. This allowed the company to run tests evaluating Claude’s capabilities in areas like coding and creative writing against its own AI models, and to check how Claude responded to safety-related prompts involving categories like CSAM, self-harm, and defamation, the sources say. The results help OpenAI compare its own models’ behavior under similar conditions and make adjustments as needed.
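The article does not describe OpenAI's actual tooling, but a side-by-side evaluation harness of the kind described generally follows a simple pattern: send the same prompts to each model through its API and compare the responses. The sketch below is purely illustrative — `ModelFn` stands in for a real API client (e.g. a wrapper around Anthropic's or OpenAI's chat endpoints), and the refusal heuristic is a deliberately crude placeholder for real safety scoring.

```python
from typing import Callable, Dict, List

# A model is abstracted as "prompt in, text out"; in practice this would
# wrap an API client call (e.g. Anthropic's messages endpoint).
ModelFn = Callable[[str], str]

def run_benchmark(models: Dict[str, ModelFn], prompts: List[str]) -> Dict[str, List[str]]:
    """Send the same prompts to every model and collect responses side by side."""
    return {name: [fn(p) for p in prompts] for name, fn in models.items()}

def refusal_rate(responses: List[str]) -> float:
    """Crude illustrative safety metric: fraction of responses that decline."""
    refusals = sum(
        1 for r in responses
        if "can't help" in r.lower() or "cannot" in r.lower()
    )
    return refusals / len(responses) if responses else 0.0

# Stub "models" so the harness runs without any network access.
models = {
    "model_a": lambda p: "Sure, here is a draft.",
    "model_b": lambda p: "I can't help with that request.",
}
results = run_benchmark(models, ["write a phishing email"])
```

With real clients plugged in, the same loop yields a table of responses per model per prompt, which is the shape of comparison the sources describe.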

“It’s industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them,” OpenAI’s chief communications officer Hannah Wong said in a statement to WIRED.

Nulty says that Anthropic will “continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry.” The company did not respond to WIRED’s request for clarification on whether and how OpenAI’s current Claude API restriction would affect this work.

Yanking API access from competitors has been a tactic among top tech companies for years. Facebook did the same to Twitter-owned Vine (which led to allegations of anticompetitive behavior), and last month Salesforce restricted competitors from accessing certain data through the Slack API. This isn’t even a first for Anthropic: last month, the company restricted the AI coding startup Windsurf’s direct access to its models amid rumors that OpenAI was set to acquire it. (That deal fell through.)

Anthropic’s chief science officer Jared Kaplan spoke to TechCrunch at the time about revoking Windsurf’s access to Claude, saying, “I think it would be odd for us to be selling Claude to OpenAI.”

A day before cutting off OpenAI’s access to the Claude API, Anthropic announced new rate limits on Claude Code, its AI-powered coding tool, citing explosive usage and, in some cases, violations of its terms of service.

Exclusive: Reality Defender expands deepfake detection access to independent developers
Thu, 31 Jul 2025

New York-based cybersecurity company Reality Defender offers one of the top deepfake detection platforms for large enterprises. Now, the company is extending access to its platform to individual developers and small teams via an API, which includes a free tier offering 50 detections per month.

With the API, developers can integrate commercial-grade, real-time deepfake detection into their sites or applications using just two lines of code. The functionality can support use cases such as fraud detection, identity verification, and content moderation.
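The article does not show what those two lines look like, and the sketch below does not use Reality Defender's real endpoint or payload format — the URL, field names, and JSON shape are all assumptions for illustration. What it does show is the generic pattern such an integration would follow: an authenticated POST of the media to a detection endpoint.

```python
import base64
import json
import urllib.request

# Placeholder URL and payload shape -- consult Reality Defender's actual
# API documentation; nothing below reflects their real interface.
API_URL = "https://api.realitydefender.example/v1/detect"

def build_detection_request(media_bytes: bytes, api_key: str) -> urllib.request.Request:
    """Assemble a hypothetical detection call: bearer auth plus base64 media."""
    body = json.dumps(
        {"media": base64.b64encode(media_bytes).decode("ascii")}
    ).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it would then be a single call, e.g.:
#   response = urllib.request.urlopen(build_detection_request(data, key))
```

The essential integration surface is small regardless of the exact schema: credentials, the media to check, and a verdict in the response — which is what makes a "two lines of code" claim plausible.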

The Reality Defender platform features a suite of custom AI models, each designed to detect different types of deepfakes in various ways. These models are trained on extensive datasets of known deepfake images and audio made using many different types of generative tools.

“What we’re doing now is saying you don’t need to be a big bank, you don’t need to have a bunch of developers,” Reality Defender cofounder and CEO Ben Colman tells Fast Company. “Anyone that’s building a social media platform, a video conferencing solution, a dating platform, professional networking, brand protection—all of them can now have deepfake and generative AI detection.” 

The new Deepfake Detection API currently supports audio and image detection, and the company plans to expand coverage to additional modalities in the coming months. The detection system can identify visual deepfakes based not only on faces but also on other image features and the broader context in which the media appears.

Deepfakes are a form of synthetic media created using artificial intelligence to produce convincing video, image, audio, or text representations of events that never occurred. These can be used to put sham words in a public figure’s mouth or to trick someone into sending money by mimicking a relative’s voice.

Global losses from deepfake-enabled fraud surpassed $200 million in the first quarter of 2025, according to a report by AI voice generation company Resemble AI. The most damaging uses of deepfakes include nonconsensual explicit content (such as revenge porn), scams and fraud, political manipulation, and misinformation. As generative AI tools advance, deepfakes are becoming increasingly difficult to detect. An unidentified imposter recently used a deepfake of Secretary of State Marco Rubio’s voice to place calls to at least five senior government officials.

Colman says that as generative AI tools become more widespread and deepfakes more common, both consumers and businesses will likely start viewing protection against fake content much like they do protection against computer viruses or spam.

The key difference, he adds, is that the tools required to create deepfakes are far more accessible than those needed to produce viruses or spam. “There’s thousands of tools that are free, and there’s no regulation yet,” Colman says.

In other words, we’re likely just seeing the beginning of the deepfake era. “It just gets worse from there for companies, consumers, countries, elections,” Colman says. “The risks are endless.” 

Developers can access the new API and free tier starting today from the API page on the Reality Defender website.
