OpenAI is hiring a Head of Preparedness. Or, put differently, someone whose primary job is to think about all the ways AI could go horribly, horribly wrong. In a post on X, Sam Altman announced the position, acknowledging that the rapid improvement of AI models poses “some real challenges.” The post goes on to specifically call out the potential impact on people’s mental health and the dangers of AI-powered cybersecurity weapons.
The job listing says the person in the role will be responsible for:
“Monitoring and preparing for frontier capabilities that create new risks of severe harm. You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline.”
Altman also says that, looking ahead, this person will be responsible for executing the company’s “preparedness framework,” securing AI models ahead of the release of “biological capabilities,” and even setting guardrails for self-improving systems. He also states that it will be a “stressful job,” which seems like an understatement.