Broussard has also recently recovered from breast cancer, and after reading the fine print of her electronic medical records, she realized that an AI had played a part in her diagnosis—something that is increasingly common. That discovery led her to run her own experiment to learn more about how good AI was at cancer diagnostics.
We sat down to talk about what she discovered, as well as the problems with the use of technology by police, the limits of “AI fairness,” and the solutions she sees for some of the challenges AI is posing. The conversation has been edited for clarity and length.
I was struck by a personal story you share in the book about AI as part of your own cancer diagnosis. Can you tell our readers what you did and what you learned from that experience?
At the beginning of the pandemic, I was diagnosed with breast cancer. I was not only stuck inside because the world was shut down; I was also stuck inside because I had major surgery. As I was poking through my chart one day, I noticed that one of my scans said, This scan was read by an AI. I thought, Why did an AI read my mammogram? Nobody had mentioned this to me. It was just in some obscure part of my electronic medical record. I got really curious about the state of the art in AI-based cancer detection, so I devised an experiment to see if I could replicate my results. I took my own mammograms and ran them through an open-source AI in order to see if it would detect my cancer. What I discovered was that I had a lot of misconceptions about how AI in cancer diagnosis works, which I explore in the book.
[Once Broussard got the code working, AI did ultimately predict that her own mammogram showed cancer. Her surgeon, however, said the use of the technology was entirely unnecessary for her diagnosis, since human doctors already had a clear and precise reading of her images.]

One of the things I realized, as a cancer patient, was that the doctors and nurses and health-care workers who supported me in my diagnosis and recovery were so amazing and so crucial. I don’t want a kind of sterile, computational future where you go and get your mammogram done and then a little red box will say This is probably cancer. That’s not actually a future anybody wants when we’re talking about a life-threatening illness, but there aren’t that many AI researchers out there who have their own mammograms.
You sometimes hear that once AI bias is sufficiently “fixed,” the technology can be much more ubiquitous. You write that this argument is problematic. Why?
One of the big issues I have with this argument is this idea that somehow AI is going to reach its full potential, and that that’s the goal everybody should strive for. AI is just math. I don’t think that everything in the world should be governed by math. Computers are really good at solving mathematical problems. But they are not very good at solving social problems, yet they are being applied to them anyway. This kind of imagined endgame of Oh, we’re just going to use AI for everything is not a future that I cosign on.
You also write about facial recognition. I recently heard an argument that the movement to ban facial recognition (especially in policing) discourages efforts to make the technology more fair or more accurate. What do you think about that?
I definitely fall in the camp of people who do not support using facial recognition in policing. I understand that’s discouraging to people who really want to use it, but one of the things that I did while researching the book is a deep dive into the history of technology in policing, and what I found was not encouraging.
I started with the excellent book Black Software by [NYU professor of Media, Culture, and Communication] Charlton McIlwain, and he writes about IBM wanting to sell a lot of their new computers at the same time that we had the so-called War on Poverty in the 1960s. We had people who really wanted to sell machines looking around for a problem to apply them to, but they didn’t understand the social problem. Fast-forward to today—we’re still living with the disastrous consequences of the decisions that were made back then.