Machines know when someone’s about to attempt suicide. How should we use that information?
A patient goes into the emergency room for a broken toe, is given a series of standardized tests, and has their data fed into an algorithm. Though they haven’t mentioned feeling depressed or having any suicidal thoughts, the machine identifies them as at high risk of suicide in the coming week. The patient didn’t ask for help, but medical professionals must now broach the subject of suicide and find some way of intervening. This scenario, where an actionable diagnosis comes not from a doctor’s evaluation or a family member’s concern but from an algorithm, is an imminent reality.