Fear as the Number of Facebook-Related Suicides Increases

Over the past few years, Facebook has stepped up its efforts to prevent suicide, but its attempt to help people in need has opened the tech giant to a series of issues concerning medical ethics, informed consent, and privacy. It has also raised a critical question: Is the system working?

Facebook trained an algorithm to recognize posts that might signal suicide risk and gave users a way to flag posts. A Facebook team reviews those posts and contacts local authorities if a user seems at imminent risk. First responders have been sent out on “wellness checks” more than 3,500 times.
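To make the described workflow concrete, here is a minimal, purely hypothetical sketch in Python of a screening pipeline: score a post for possible risk signals, queue high-scoring or user-flagged posts for human review, and leave any escalation decision to a reviewer. The phrases, weights, thresholds, and function names are illustrative assumptions only; Facebook has not published the details of its model or review process.

```python
# Hypothetical sketch of a post-screening pipeline. All names, keywords,
# and thresholds are illustrative assumptions, not Facebook's actual system.

from dataclasses import dataclass

# Illustrative risk phrases and weights; a production system would use a
# trained classifier rather than a keyword list.
RISK_PHRASES = {
    "want to die": 0.9,
    "kill myself": 0.9,
    "can't go on": 0.6,
    "goodbye everyone": 0.5,
}

REVIEW_THRESHOLD = 0.5  # assumed cutoff for routing a post to human review


@dataclass
class Post:
    post_id: str
    text: str
    user_flagged: bool = False  # users can also flag posts manually


def risk_score(post: Post) -> float:
    """Return a crude 0-1 risk score based on matched phrases."""
    text = post.text.lower()
    score = max(
        (weight for phrase, weight in RISK_PHRASES.items() if phrase in text),
        default=0.0,
    )
    # A manual user flag alone is enough to route the post to review.
    return max(score, REVIEW_THRESHOLD if post.user_flagged else 0.0)


def triage(posts: list[Post]) -> list[Post]:
    """Select posts that should go to a human review team."""
    return [p for p in posts if risk_score(p) >= REVIEW_THRESHOLD]


if __name__ == "__main__":
    queue = triage([
        Post("1", "Had a great day at the park"),
        Post("2", "I can't go on like this anymore"),
        Post("3", "Feeling down", user_flagged=True),
    ])
    for post in queue:
        # In the workflow described above, a human reviewer, not the code,
        # decides whether to contact local authorities for a wellness check.
        print(f"Post {post.post_id} queued for human review")
```

The key design point in the described process is that the algorithm only triages: confirming imminent risk and contacting first responders remains a human decision.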

Facebook screens posts for suicide risk, but health experts have concerns

“We don’t have any data on how well this works. Is it safe? Does it cause harm?” asked Dr. John Torous, the director of the digital psychiatry division at Beth Israel Deaconess Medical Center.

Experts acknowledge that Facebook’s efforts can connect people with help they need, but without data, neither the benefits nor the risks are clear. They argue that what Facebook is doing should be considered medical research, yet the tech giant is not protecting its users the way scientists would protect a study’s participants.
