Unit 4: Synchronous Session Questions For Marisa Tricarico

Questions:
The discussion explored a medical approach to AI ethics that weighs efficacy against doing no harm, and how that approach has historically served the medical community well. It also framed unintended consequences as a concern to be weighed over the long term, not just in the immediate. These two dimensions are difficult to reconcile when the thing we seek to reduce harm around is so new: we can’t understand the long-range consequences of a technology that has only been around for a few years. If we can’t see that deep into the future, even with robust testing in advance, is there something else we need that would let us respond more rapidly to unintended consequences as they appear?


Tricarico discussed several issues related to the ethical need for diversity in the procurement of decision-making systems, and the role of transparency, accountability, and disclosure in systems implemented for use by others. At what point does this approach become too cumbersome for the end user? Is there a level of disclosure, beyond terms of service, where we simply stop caring so long as the product is doing the job we’ve hired it into our lives to perform? The lens I’m using here is speed and cost. If we’re using a generative tool to perform a task that needs to be done quickly and inexpensively, how much capacity do we have to care about decisions made significantly upstream of our experience?

Much of Tricarico’s ethical argument concerns human inputs and outputs: the responsibilities of those who train models, and the accountability for outcomes borne by those who use them. Any major decision becomes intimately coupled with the risk of consequence, and however technologically sophisticated these systems might be, at the core of it all are very human problems, currently operating without any meaningful regulation. Much of the vocalized fear around AI today concerns the impact it will have on human experience. But in voicing it, we talk as if AI isn’t here yet, as if it were something still in the future. It is already here, and its use is already widely undisclosed. What might the process for walking this back to a place of regulated oversight look like?

Comment:
I’m specifically interested in the moment when the decision to commercialize at scale happens: when both the development side and the investment side determine something is ready. This is a highly subjective call, but without it, nothing happens. It often feels as if we release and then manage consequence rather than proactively mitigate risk prior to launch. This is a human, cultural problem inherent in the pressures of product development. Teams try to understand the ‘known unknowns’ to the extent they can, but that effort is never, and perhaps can never be, exhaustive.

