
OpenAI Whistleblower’s Death Ruled Suicide, But Elon’s Cryptic Reaction Raises Questions


Former OpenAI employee turned whistleblower, Suchir Balaji, took his own life in November of this year.

That’s the officially classified manner of death, according to the San Francisco medical examiner’s office.

Balaji, 26, was found dead in his San Francisco apartment on November 26th, and had been integrally involved in ongoing legal cases against his former employer, OpenAI.

For some background on Balaji, here’s a quick clip put together by the DNA news group.

The news of Balaji’s death, and the suicide ruling, quickly had the AI and technology communities online questioning the truth of the matter.

Elon Musk, among others, shared their apparent mistrust of the suicide ruling on social media.


Balaji had recently been at the heart of multiple lawsuits aimed at the questionable data-gathering practices of OpenAI, particularly the likelihood of copyright infringement.

Balaji had recently spoken out publicly against OpenAI, and an interview discussing his concerns was published in The New York Times, according to a report from the BBC:

In recent months Mr Balaji had publicly spoken out against artificial intelligence company OpenAI’s practices, which has been fighting a number of lawsuits relating to its data-gathering practices.

In October, the New York Times published an interview with Mr Balaji in which he alleged that OpenAI had violated US copyright law while developing its popular ChatGPT online chatbot.

The article said that after working at the company for four years as a researcher, Mr Balaji had come to the conclusion that “OpenAI’s use of copyrighted data to build ChatGPT violated the law and that technologies like ChatGPT were damaging the internet”.

Balaji shared his NYT interview on his X account in October.

In what quickly became a viral post with over a million views, the former OpenAI employee explained his position and why he had taken such a strong stance against his former employer.

I recently participated in a NYT story about fair use and generative AI, and why I’m skeptical “fair use” would be a plausible defense for a lot of generative AI products. I also wrote a blog post (https://suchir.net/fair_use.html) about the nitty-gritty details of fair use and why I believe this.

To give some context: I was at OpenAI for nearly 4 years and worked on ChatGPT for the last 1.5 of them. I initially didn’t know much about copyright, fair use, etc. but became curious after seeing all the lawsuits filed against GenAI companies. When I tried to understand the issue better, I eventually came to the conclusion that fair use seems like a pretty implausible defense for a lot of generative AI products, for the basic reason that they can create substitutes that compete with the data they’re trained on. I’ve written up the more detailed reasons for why I believe this in my post. Obviously, I’m not a lawyer, but I still feel like it’s important for even non-lawyers to understand the law — both the letter of it, and also why it’s actually there in the first place.


That being said, I don’t want this to read as a critique of ChatGPT or OpenAI per se, because fair use and generative AI is a much broader issue than any one product or company. I highly encourage ML researchers to learn more about copyright — it’s a really important topic, and precedent that’s often cited like Google Books isn’t actually as supportive as it might seem.

Feel free to get in touch if you’d like to chat about fair use, ML, or copyright — I think it’s a very interesting intersection. My email’s on my personal website.

Shortly before Balaji’s death, in November, The New York Times filed paperwork in federal court naming Balaji as someone supporting its case against OpenAI.

The New York Times described Balaji in the court documents as someone having “unique and relevant documents” in support of their suit against OpenAI, according to reporting by Mercury News:

The medical examiner’s office determined the manner of death to be suicide and police officials this week said there is “currently, no evidence of foul play.”

Information he held was expected to play a key part in lawsuits against the San Francisco-based company.

Balaji’s death comes three months after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT, a generative artificial intelligence program that has become a moneymaking sensation used by hundreds of millions of people across the world.

In a Nov. 18 letter filed in federal court, attorneys for The New York Times named Balaji as someone who had “unique and relevant documents” that would support their case against OpenAI. He was among at least 12 people — many of them past or present OpenAI employees — the newspaper had named in court filings as having material helpful to their case, ahead of depositions.


Check out this throwback to when Tucker Carlson was still on Fox News.

He had just interviewed Elon Musk about OpenAI and its explosive arrival on the scene.

Elon has had many issues with the company he co-founded, and now worries that OpenAI may be one of the largest threats to humanity, comparable to thermonuclear weapons.

One of the reasons he feels so strongly, in his own words, is not simply the potential power, but the fact that it is now in effect “ClosedAI” as opposed to OpenAI — backed by Microsoft, and singularly focused on profits, according to Elon.

That is a very different organization than the one initially envisioned in the days when Elon named the company, with the intention of bringing a degree of risk mitigation to humanity’s headlong leap into the world of AI.

For more insight into Elon and others questioning the alleged suicide of Balaji, who was one of the strongest insider voices against OpenAI, check out part of the interview I mentioned above.

Here’s Elon explaining the change that the original OpenAI went through under Sam Altman’s oversight, becoming rather “closed” and secretive — not at all “OPEN” — in Elon’s opinion.

Suchir Balaji, one of the most high-profile whistleblowers involved in the current lawsuits against OpenAI, was found deceased in his San Francisco apartment on November 26th.

Initially, the police did not confirm a cause of death, and the death was not officially ruled a suicide until yesterday — nearly two and a half weeks after his body was discovered.

First responders found his body after being called to perform a wellness check, according to a Fox News story:


Balaji was found dead in his Buchanan Street apartment on November 26, a spokesperson for the San Francisco Police Department told the outlet. First responders were called to his home to perform a wellness check, and no evidence of foul play was found during the initial probe.

“We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir’s loved ones during this difficult time,” a spokesperson for OpenAI told Fox News Digital.

This comes after Balaji, an AI researcher, raised concerns about OpenAI breaking copyright law in an interview with The New York Times in October.

Balaji resigned from OpenAI after working there for nearly four years when he learned the technology would bring more harm than good to society, he told the newspaper, noting that his main concern was the way the company allegedly used copyright data, stating that he believed its practices were damaging to the internet.

“I was at OpenAI for nearly 4 years and worked on ChatGPT for the last 1.5 of them,” Balaji wrote in October on the social media platform X. “I initially didn’t know much about copyright, fair use, etc. but became curious after seeing all the lawsuits filed against GenAI companies.”

“When I tried to understand the issue better, I eventually came to the conclusion that fair use seems like a pretty implausible defense for a lot of generative AI products, for the basic reason that they can create substitutes that compete with the data they’re trained on,” his post continued.

OpenAI and Microsoft are currently facing several lawsuits from media outlets who accuse OpenAI of breaking copyright law.

While I’m not aware of any specific evidence to suggest the medical examiner’s conclusion that Balaji died by suicide is anything but accurate, the timing and circumstances of his death are certainly worthy of a double take.

With powerful forces operating behind the scenes of OpenAI in an atmosphere that can only be described as the equivalent of a technological AI “arms race”, there are fortunes on the line depending on the outcome of the related lawsuits.

There are more than enough conspiracy FACTS in today’s news cycle to point at without adding unverifiable fodder to the mix — I’ll be the first one to point that out.

But this wouldn’t be the first time a critical player in a high profile legal battle suddenly came up dead at the last minute, shortly after it became clear that the person would be bringing serious heat to one side of the legal battle.

I would remind the reader of the police officials’ specific words: “…currently, no evidence of foul play.”

If “the proof is in the pudding,” as they say… then the weight of Elon’s “conspiracy theory versus reality” comment speaks for itself.

We will bring you any updates to this story as they become available.



 
