AI Chatbots In Healthcare: Faster Empathy Isn't Empathy At All

When generative outputs inform medical diagnoses, ethical questions of responsibility intensify. We accept factual errors in ChatGPT-4 and simply re-roll the response, but when it comes to our health, why are things different? Morris’s recent generative counseling tests for those seeking mental health support surface a common argument about efficacy (Morris, 2023). AI co-piloted counseling resulted in faster responses and greater survey satisfaction. But counseling isn’t about velocity. Very often it’s simply the need to feel heard. To know someone else understands. To know when to talk and when to listen. These are deeply human traits.

While a bot can synthesize emotional responses, it lacks the authenticity that comes from genuine human connection. But that doesn’t mean we don’t long for such a response, whoever is on the other end of the chat prompt. Morris asks how we might gain the efficacy benefits of machines without sacrificing existing human relationships (Morris, 2023). I don’t think you can have it both ways. The emotions of empathy are inherently qualitative, not quantitative. When we feel someone’s empathy is being ‘optimized,’ we interpret it as inauthentic. ‘Faster counseling’ arguably isn’t counseling at all.

Efficiency saves time and generates money. But counseling isn’t a field of speed; it’s a field of care. It is ‘no longer acceptable for technology to be released into the world blindly, leaving others to deal with the consequences’ (Peters et al., 2020), especially when those consequences trade our wellbeing for the generatively irresponsible speed of a bot.


References:

Peters, D., Vold, K., Robinson, D., & Calvo, R. A. (2020). Responsible AI—Two Frameworks for Ethical Design Practice. IEEE Transactions on Technology and Society. [Digital file]. Retrieved from: https://canvas.upenn.edu/courses/1693062/files/122342365/download?download_frd=1.

Morris, R. [@RobertRMorris]. (2023, January 6). We provided mental health support to about 4,000 people — using GPT-3. Here’s what happened [Tweet]. Twitter. Retrieved from: https://twitter.com/robertrmorris/status/1611450197707464706.

