Confidence Is Contagious

As Jon Christian describes, when we turn to digital experiences for answers, we place more faith in a confident, definitive, singular answer than in a list of potential ‘results’. It feels more like finding than searching. This difference in tone is new, and often uncomfortable given the things we look for on the web, and it marks an existential difference between Google and ChatGPT’s parent company, OpenAI.

Yet within that perceived confidence lies the risk of genuine inaccuracy. If a large language model is trained on such vast quantities of data from across the web, we can assume it has absorbed incorrect information in abundance too. Christian describes the risks of predictive models in healthcare, but there is also obfuscation around how the bot produces its responses in the first place.

I spoke with ChatGPT and asked the bot if it would ever sell my data. It confidently assured me it would not, and that it can’t remember any previous conversations. I asked it why it couldn’t remember. Its response: “I don't have the capacity to recall previous statements or remember personal data shared within the conversation”. It began to feel as if I was causing it confusion. It wouldn’t be drawn on the nature of its own processes. I asked it why its sources couldn’t be disclosed. It responded, “It's important to note that while the training data is carefully curated and processed, the models themselves do not have direct knowledge of the specific documents or sources they were trained on.”

If inaccuracies abound and fear is prevalent, we might conclude that obfuscating the method does little to diminish the all-too-human existential concerns that artificial intelligence provokes.

