OpenAI CEO apologises for lapse in alerting police before Canada mass shooting

25 April, 2026 07:13 PM IST |  New Delhi  |  IANS

The incident, which left multiple people dead and injured, has sparked criticism over AI safety decisions. OpenAI said the account had been flagged for violent content and is now reviewing its policies and coordination with law enforcement

OpenAI CEO Sam Altman. File Pic



OpenAI CEO Sam Altman has apologised for the AI company's failure to alert law enforcement agencies about warning signs linked to a teenager who later carried out one of the deadliest mass shootings in Canada's recent history.

The apology came more than two months after the attack, in which 18-year-old Jesse Van Rootselaar killed her mother and half-brother before opening fire at a secondary school in Tumbler Ridge, British Columbia, leaving five children and a teacher dead, according to multiple reports.

According to reports, Altman acknowledged in a letter, shared by local news outlet Tumbler RidgeLines and by British Columbia Premier David Eby, that OpenAI should have informed authorities after flagging the attacker's account.

The attacker later died of a self-inflicted gunshot wound.

At least 25 people were injured in the shooting, which Canadian authorities have described as one of the country's worst mass casualty incidents.

"I want to express my deepest condolences to the entire community. No one should ever have to endure a tragedy like this. I cannot imagine anything worse in this world than losing a child," Altman said in the letter.

"I am deeply sorry that we did not alert law enforcement to the account that was banned in June. While I know words can never be enough, I believe an apology is necessary to recognise the harm and irreversible loss your community has suffered," he added.

OpenAI had earlier said that Rootselaar's ChatGPT account was internally flagged in June 2025 for misuse 'in furtherance of violent activities' and was subsequently suspended.

However, the company did not notify authorities at the time, stating that the activity did not meet the threshold of posing a credible or imminent threat.

The company now says it is reviewing its policies and will work more closely with governments to prevent similar incidents. "Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again," Altman said.

A lawsuit filed by the family of one of the victims has alleged that the teenager used ChatGPT as a 'trusted confidante' and discussed multiple gun violence scenarios in the days leading up to the attack.

The suit claimed that some OpenAI employees had flagged the conversations as indicating a potential risk of serious harm and recommended notifying law enforcement, but that the suggestion was rejected because the threat was not deemed imminent; the account was only suspended.

It further alleged that the attacker was able to create a second account after the first was banned, allowing similar conversations to continue.

The company reportedly contacted Canadian authorities only after the shooting.

This story has been sourced from a third party syndicated feed, agencies. Mid-day accepts no responsibility or liability for its dependability, trustworthiness, reliability and data of the text. Mid-day management/mid-day.com reserves the sole right to alter, delete or remove (without notice) the content in its absolute discretion for any reason whatsoever.
