Why is Sam Altman losing sleep? OpenAI CEO addresses controversies in sweeping interview


Sam Altman, CEO of OpenAI, and Lisa Su, CEO of Advanced Micro Devices, testify during the Senate Commerce, Science and Transportation Committee hearing titled "Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation," in the Hart building on Thursday, May 8, 2025.

Tom Williams | CQ-Roll Call, Inc. | Getty Images

In a sweeping interview last week, OpenAI CEO Sam Altman addressed a plethora of moral and ethical questions regarding his company and the popular ChatGPT AI model.

"Look, I don't sleep that well at night. There's a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model," Altman told former Fox News host Tucker Carlson in a nearly hour-long interview.

"I don't actually worry about us getting the big moral decisions wrong," Altman said, though he admitted "maybe we will get those wrong too."

Rather, he said he loses the most sleep over the "very small decisions" on model behavior, which can ultimately have large repercussions.

These decisions tend to center around the ethics that inform ChatGPT, and what questions the chatbot does and doesn't answer. Here's an outline of some of the moral and ethical dilemmas that appear to be keeping Altman awake at night.

How does ChatGPT address suicide?

According to Altman, the most difficult issue the company is grappling with recently is how ChatGPT approaches suicide, in light of a lawsuit from a family who blamed the chatbot for their teenage son's suicide.

The CEO said that out of the thousands of people who commit suicide each week, many of them could potentially have been talking to ChatGPT in the lead-up.

"They probably talked about [suicide], and we probably didn't save their lives," Altman said candidly. "Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help."

Last month, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16. In the lawsuit, the family said that "ChatGPT actively helped Adam explore suicide methods."

Soon after, in a blog post titled "Helping people when they need it most," OpenAI detailed plans to address ChatGPT's shortcomings when handling "sensitive situations," and said it would keep improving its technology to protect people who are at their most vulnerable.

How are ChatGPT's ethics determined?

Another large topic broached in the sit-down interview was the ethics and morals that inform ChatGPT and its stewards.

While Altman described the base model of ChatGPT as trained on the collective experience, knowledge and learnings of humanity, he said that OpenAI must then align certain behaviors of the chatbot and decide what questions it won't answer.

"This is a really hard problem. We have a lot of users now, and they come from very different life perspectives... But on the whole, I have been pleasantly surprised with the model's ability to learn and apply a moral framework."

When pressed on how certain model specifications are decided, Altman said the company had consulted "hundreds of moral philosophers and people who thought about ethics of technology and systems."

An example he gave of a model specification was that ChatGPT will avoid answering questions on how to make biological weapons if prompted by users.

"There are clear examples of where society has an interest that is in significant tension with user freedom," Altman said, though he added the company "won't get everything right, and also needs the input of the world" to help make these decisions.

How private is ChatGPT?

Another major discussion topic was the concept of user privacy regarding chatbots, with Carlson arguing that generative AI could be used for "totalitarian control."

In response, Altman said one piece of policy he has been pushing for in Washington is "AI privilege," which refers to the idea that anything a user says to a chatbot should be completely confidential.

"When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right?... I think we should have the same concept for AI."

According to Altman, that would allow users to consult AI chatbots about their medical history and legal problems, among other things. Currently, U.S. officials can subpoena the company for user data, he added.

"I think I feel optimistic that we can get the government to understand the importance of this," he said.

Will ChatGPT be used in military operations?

Asked by Carlson if ChatGPT would be used by the military to harm humans, Altman didn't provide a direct answer.

"I don't know the way that people in the military use ChatGPT today... but I suspect there's a lot of people in the military talking to ChatGPT for advice."

Later, he added that he wasn't sure "exactly how to feel about that."

OpenAI was one of the AI companies that received a $200 million contract from the U.S. Department of Defense to put generative AI to work for the U.S. military. The firm said in a blog post that it would provide the U.S. government access to custom AI models for national security, support and product roadmap information.

Just how powerful is OpenAI?

Carlson, in his interview, predicted that on its current trajectory, generative AI, and by extension Sam Altman, could amass more power than any other person, going so far as to call ChatGPT a "religion."

In response, Altman said he used to worry a lot about the concentration of power that could result from generative AI, but he now believes that AI will result in "a huge up-leveling" of all people.

"What's happening now is lots of people use ChatGPT and other chatbots, and they're all more capable. They're all kind of doing more. They're all able to accomplish more, start new businesses, come up with new knowledge, and that feels pretty good."

However, the CEO said he thinks AI will eliminate many jobs that exist today, especially in the short term.
