
“Decoding Project Q*: OpenAI Board Alerted to ‘Risk-Associated’ AI Breakthrough Prior to Altman’s Departure”


Before Sam Altman’s brief removal as OpenAI’s CEO, a group of researchers within the company penned a letter to the board, warning of a powerful AI discovery that they said could pose a threat to humanity, according to sources familiar with the matter. This previously undisclosed letter, along with the revelation of an AI project known as Q*, reportedly played a role in Altman’s ouster. The researchers, who remain unnamed, highlighted the potential dangers of the algorithm in their letter to the board.

Q* emerged as a focal point in the controversy. Some within OpenAI see it as a potential breakthrough in the pursuit of artificial general intelligence (AGI), broadly defined as AI that surpasses humans at most economically valuable tasks. Given early progress, notably in solving mathematical problems at roughly a grade-school level, optimism about Q*’s future success grew within the company.

The sources noted that the letter and concerns about Q* were contributing factors in Altman’s dismissal. The board was reportedly uneasy not only about the pace at which advances were being commercialized but also about consequences that might not yet be fully understood.

Project Q* came into the spotlight when Mira Murati, a long-time executive, mentioned it to employees and acknowledged the letter that had been sent to the board. While grade-school math is trivial for humans, solving such problems reliably would mark notable progress for an AI system, which is why the project drew attention inside the company.

In response to media stories, an internal message from OpenAI acknowledged Project Q* and the letter sent to the board. Mira Murati briefly served as interim CEO after Altman’s departure and was replaced in that role two days later, after initial attempts to reinstate Altman failed.

Generative AI models, such as the one powering OpenAI’s ChatGPT, excel at tasks like writing and language translation. Q*’s reported ability to solve math problems, which have a single definitive answer, would suggest a more advanced level of reasoning capability. The researchers’ letter to the board underscored the potential dangers of the algorithm to humanity, although the specific safety concerns were not disclosed.
