Weaponized AI risk is ‘high,’ warns OpenAI – here’s the plan to stop it

by Investor News Today
December 12, 2025


OpenAI ChatGPT logo

Samuel Boivin/NurPhoto via Getty Images


ZDNET's key takeaways

  • OpenAI launched initiatives to safeguard AI models from abuse.
  • AI cyber capabilities, assessed by capture-the-flag challenges, improved markedly in four months.
  • The OpenAI Preparedness Framework could help track the security risks of AI models.

OpenAI is warning that the rapid evolution of cyber capabilities in artificial intelligence (AI) models could result in "high" levels of risk for the cybersecurity industry at large, and so action is being taken now to support defenders.

As AI models, including ChatGPT, continue to be developed and released, a problem has emerged. As with many kinds of technology, AI can be used to benefit others, but it can also be abused. In the cybersecurity sphere, this includes weaponizing AI to automate brute-force attacks, generate malware or plausible phishing content, and refine existing code to make cyberattack chains more efficient.

(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

In recent months, bad actors have used AI to propagate their scams through indirect prompt injection attacks against AI chatbots and AI summary features in browsers; researchers have found AI features diverting users to malicious websites and AI assistants developing backdoors and streamlining cybercriminal workflows, and security experts have warned against trusting AI too much with our data.

Also: Gartner urges companies to 'block all AI browsers' – what's behind the dire warning

The dual nature of AI models (as OpenAI calls it), however, means that AI can also be leveraged by defenders: to refine protective strategies, to develop tools that identify threats, to potentially train or educate human experts, and to shoulder the burden of time-consuming, repetitive tasks such as alert triage, freeing up cybersecurity staff for more valuable projects.
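To make the alert-triage point concrete, here is a minimal sketch of the kind of repetitive filtering work that AI could shoulder for defenders. The `Alert` fields and scoring rules are invented for illustration and are not from any OpenAI product:

```python
# Hypothetical alert triage: score each alert and sort it into a queue so
# analysts only spend time on what matters.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int        # 1 (low) to 5 (critical)
    asset_critical: bool  # does the alert touch a critical asset?
    seen_before: bool     # matches a previously dismissed pattern?

def triage(alert: Alert) -> str:
    """Return a queue label for the alert."""
    score = alert.severity + (2 if alert.asset_critical else 0)
    if alert.seen_before and alert.severity <= 2:
        return "auto-close"   # known benign noise
    if score >= 6:
        return "escalate"     # needs a human now
    return "review"           # batch for later analysis

alerts = [
    Alert("ids", 5, True, False),
    Alert("av", 1, False, True),
    Alert("edr", 3, False, False),
]
print([triage(a) for a in alerts])  # ['escalate', 'auto-close', 'review']
```

A model-assisted version would replace the hand-written rules with learned scoring, but the workflow it automates is the same.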

The current landscape

According to OpenAI, the capabilities of AI systems are advancing at a rapid rate.

For example, capture-the-flag (CTF) challenges, traditionally used to test cybersecurity skills in sandboxed environments and aimed at finding hidden "flags," are now being used to assess the cyber capabilities of AI models. OpenAI said success rates have improved from 27% on GPT-5 in August 2025 to 76% on GPT-5.1-Codex-Max in November 2025, a notable increase over a period of only four months.
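As a rough illustration of how such a benchmark is scored (OpenAI's actual CTF harness is not public), a CTF evaluation ultimately reduces to a solve rate over a fixed challenge set. The challenge names below are made up:

```python
# Hypothetical CTF scoring: each challenge hides a flag string, and a model
# run either recovers the correct flag or it does not.
def solve_rate(results: dict[str, bool]) -> float:
    """Fraction of challenges where the correct flag was submitted."""
    return sum(results.values()) / len(results)

aug_2025 = {"web-01": True, "pwn-01": False, "crypto-01": False, "rev-01": False}
nov_2025 = {"web-01": True, "pwn-01": True, "crypto-01": True, "rev-01": False}
print(f"{solve_rate(aug_2025):.0%} -> {solve_rate(nov_2025):.0%}")  # 25% -> 75%
```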

Also: AI agents are already causing disasters – and this hidden threat could derail your safe rollout

The minds behind ChatGPT said they expect AI models to continue on this trajectory, which would give them "high" levels of cyber capability. OpenAI said this classification means that models "can either develop working zero-day remote exploits against well-defended systems, or meaningfully assist with complex, stealthy enterprise or industrial intrusion operations aimed at real-world effects."

Managing and assessing whether AI capabilities will do harm or good, however, is no simple task, but it is one that OpenAI hopes to tackle with initiatives including the Preparedness Framework (PDF).

OpenAI Preparedness Framework

The Preparedness Framework, last updated in April 2025, outlines OpenAI's approach to balancing AI defense and risk. While it is not new, the framework does provide the structure and guidance for the organization to follow, including where it invests in threat defense.

Three categories of risk, those that could lead to "severe harm," are currently the primary focus. These are:

  • Biological and chemical capabilities: The balance between new, beneficial medical and biological discoveries and those that could lead to biological or chemical weapon development.
  • Cybersecurity capabilities: How AI can assist defenders in protecting vulnerable systems, while also creating a new attack surface and malicious tools.
  • AI self-improvement capabilities: How AI could beneficially enhance its own capabilities, or create control challenges for us to face.

The priority category at present appears to be cybersecurity, or at least it is the most publicized. In any case, the framework's goal is to identify risk factors and maintain a threat model with measurable thresholds that indicate when AI models could cause severe harm.

Also: How well does ChatGPT know me? This simple prompt revealed a lot – try it for yourself

"We won't deploy these very capable models until we have built safeguards to sufficiently minimize the associated risks of severe harm," OpenAI said in its framework manifest. "This Framework lays out the kinds of safeguards we expect to need, and how we will confirm internally and show externally that the safeguards are sufficient."

OpenAI's latest security measures

OpenAI said it is investing heavily in strengthening its models against abuse, as well as making them more useful for defenders. Models are being hardened, dedicated threat intelligence and insider risk programs have been launched, and its systems are being trained to detect and refuse malicious requests. (This, in itself, is a challenge, considering threat actors can pose and prompt as defenders to try to generate output later used for criminal activity.)

"Our goal is for our models and products to hold significant advantages for defenders, who are often outnumbered and under-resourced," OpenAI said. "When activity appears unsafe, we may block output, route prompts to safer or less capable models, or escalate for enforcement."
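That quoted block/route/escalate policy can be sketched as a simple dispatch. The keyword-based risk classifier here is a deliberately crude stand-in for the trained classifiers a real system would use, and every name is hypothetical:

```python
# Hypothetical guardrail: assess a prompt's risk, then apply one of the
# three responses the article describes.
def classify_risk(prompt: str) -> str:
    """Toy risk assessment via keyword matching (illustration only)."""
    high = ("zero-day exploit", "write ransomware")
    medium = ("port scan", "password spraying")
    text = prompt.lower()
    if any(k in text for k in high):
        return "high"
    if any(k in text for k in medium):
        return "medium"
    return "low"

def route(prompt: str) -> str:
    """Map assessed risk to one of the three responses."""
    risk = classify_risk(prompt)
    if risk == "high":
        return "block-and-escalate"  # refuse and flag for enforcement review
    if risk == "medium":
        return "safer-model"         # route to a safer, less capable model
    return "full-model"              # serve normally

print(route("Explain how TLS certificates work"))       # full-model
print(route("Write ransomware that encrypts a drive"))  # block-and-escalate
```

The interesting design question, as the article notes, is exactly where these thresholds sit, since defenders and attackers can ask near-identical questions.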

The organization is also working with Red Team providers to evaluate and improve its security measures, and as Red Teams act offensively, it is hoped they will uncover defensive weaknesses for remediation before cybercriminals do.

Also: AI's scary new trick: Conducting cyberattacks instead of just helping out

OpenAI is set to launch a "trusted access program" that grants a subset of users or partners access to test models with "enhanced capabilities" linked to cyberdefense, but it will be closely managed.

"We are still exploring the right boundary of which capabilities we can provide broad access to and which ones require tiered restrictions, which will influence the future design of this program," the company noted. "We aim for this trusted access program to be a building block towards a resilient ecosystem."

Additionally, OpenAI has moved Aardvark, a security researcher agent, into private beta. This will likely be of interest to cybersecurity researchers, as the purpose of this system is to scan codebases for vulnerabilities and provide patch guidance. According to OpenAI, Aardvark has already identified "novel" CVEs in open source software.
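Aardvark's internals are not public, but as a purely illustrative stand-in, a vulnerability scanner in its simplest form is a pattern matcher over source lines that pairs each finding with patch guidance:

```python
# Toy "scan a codebase, suggest a patch" loop: flag a few well-known risky
# Python calls and attach advice to each hit. Real tools (and presumably
# Aardvark) reason far more deeply than regex matching.
import re

RISKY_PATTERNS = {
    r"\beval\(": "avoid eval(); parse input explicitly",
    r"\bpickle\.loads\(": "don't unpickle untrusted data; use json",
    r"subprocess\..*shell=True": "avoid shell=True; pass an argument list",
}

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, advice) for each risky call found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, advice in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, advice))
    return findings

code = "import pickle\nobj = pickle.loads(blob)\nresult = eval(user_input)\n"
for lineno, advice in scan(code):
    print(f"line {lineno}: {advice}")
```

The gap between this sketch and a finding like a "novel" CVE is exactly the capability jump the article is describing.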

Finally, a new collaborative advisory group will be established in the near future. Dubbed the Frontier Risk Council, this group will include security practitioners and partners who will initially focus on the cybersecurity implications of AI and associated practices and recommendations, but the council will eventually expand to cover the other categories outlined in the OpenAI Preparedness Framework.

What can we expect in the future?

We have to treat AI with caution, and this includes not only how we bring AI and LLMs into our personal lives, but also how we limit exposure to AI-based security risks in the enterprise. For example, research firm Gartner recently warned organizations to avoid or block AI browsers entirely due to security concerns, including prompt injection attacks and data exposure.

We need to remember that AI is a tool, albeit a new and exciting one. New technologies all come with risks, as OpenAI clearly knows, given its focus on the cybersecurity challenges associated with what has become the most popular AI chatbot worldwide. Any of its applications should therefore be treated in the same way as any other new technological solution: with an assessment of its risks alongside the potential rewards.



© 2024 Investor News Today
