Part Four: Unintended Consequence

Speculation on the long-term ethical responsibilities of grieftech development

Key Points:

  • Consequences may include the psychological impact of synthetically prolonged sorrow, changes in the ways in which we relate to the dead, legacy data privacy and the rights to likeness left to survivors, and the right to commercialize the stories we leave behind for others.

  • Responsible AI must be ingrained not only in development itself but also in the economic mechanics that continue to fund such products. Sustainable processes, much like existing frameworks for accessibility or translation, need to be wrapped around the development process itself.

  • Ethically responsible governance in the development and distribution of artificial intelligence services is deeply shaped by the all-too-human problems of unintended consequence, fear of the future, bias, and flawed, culturally nuanced decision-making.

  • Platforms also have to be held accountable to endure, and to ensure that emergent existential issues such as model collapse are held at a distance.

  • Grieftech is a set of products intended for the future, not the present. It bears a deeply human responsibility to operate in ways that sustain it into the future.

The Velocity of Building vs. The Responsibility To Protect

The development, adoption, and application of emerging grieftech are intimately coupled with deeply ethical issues of corporate responsibility and the mitigation of consequence. But these dimensions aren’t formed in a vacuum (Krieger, 2023). They co-exist with the problems users hire these products into their lives to solve, and with the desires of those building the tools to help. But with the power to build towards solving a problem comes responsibility for the ‘set of practices and behaviors that help an organization use data and digital technologies in a way that is socially, economically, technologically, and environmentally responsible’ (Wade, 2020). The danger is that the velocity of this technological building often outpaces the legislative guardrails societies demand to mitigate the risk of such innovation producing unintended consequence (Mueller, 2022). In grieftech, such consequences may include the psychological impact of synthetically prolonged sorrow, changes in the ways in which we relate to the dead, legacy data privacy and the rights to likeness left to survivors, and the right to commercialize the stories we leave behind for others.

Despite frequent calls to pause the development (which really means the distribution) of artificial intelligence systems as a means of getting a handle on something which already feels like a train leaving the station, the logistics of doing so, and the competitive consensus required to put such a pause in place, seem more science fiction than practical reality (Future Of Life Institute, 2023). Imposing pauses and regulation upon innovation through the lens of ethical AI can only ever be reactive; by the time these measures reach practical implementation, it is already too late. This is why responsible AI must be ingrained not only in development itself but also in the economic mechanics which continue to fund such products. These measures need to happen earlier, upstream, before unintended consequences arrive and the services are already being used by millions of people. Sustainable processes, much like existing frameworks for accessibility or translation, need to be wrapped around the development process itself. This is not, however, an expectation we can place on engineers alone. As Peters et al. (2020) neatly articulate, ‘while engineers have always met basic ethical standards concerning safety, security, and functionality, issues to do with justice, bias, addiction, and indirect societal harms were traditionally considered out of scope. However, expectations are changing. While engineers are not, and we believe should not, be expected to do the work of philosophers, psychologists, and sociologists, they do need to work with experts in these disciplines to anticipate and mitigate ethical risks as a standard of practice. It is no longer acceptable for technology to be released into the world blindly, leaving others to deal with the consequences.’

Decision-Makers As Long-Term Custodians

In the context of emergent grieftech platforms, this places accountability and responsibility with those making decisions about the platform’s use, rather than with those intimate with the zeros and ones of the code that powers it. Engineers might be able to say when something is ready to launch, but it’s rarely a decision they make on their own. There is always a degree of business or product oversight giving the green light, informed by commercial, audience, and timing considerations, among a spectrum of others governing what constitutes ‘ready’. In many of the examples we looked at, these organizations are small and flat, and the decision-making sits very close to the actual engineering. In the case of James Vlahos’ Dadbot, it is an organization of one, really built for an audience of one, Vlahos himself (Stern, 2020). But as public interest and media coverage of the work scale, it surpasses the minimum viable product built to prove out a hypothesis and enters the space of opportunity for acquisition and translation into a more commercially viable business with an expanded team. This is what Vlahos’ Hereafter.ai is, and it has become the model for several other grieftech platforms such as the William Shatner-endorsed Storyfile, Eternime, and You, Only Virtual.

Digital avatars and virtual co-pilots are nothing new. We create our own emojis, Miis, and stylized virtual likenesses to serve in place of us when reacting to the thoughts and messages of others. They stand in for us as a shorthand for actual engagement. What grieftech is doing is extending this idea: it’s more afterlife than second life. It’s a space of ownership we might try to govern the use of during our actual lives, but give up to a platform in our efforts to create and sustain a digital legacy for those we leave behind. These aren’t problems of technology. These are deeply, deeply human problems. Ethically responsible governance in the development and distribution of artificial intelligence services is deeply shaped by the all-too-human problems of unintended consequence, fear of the future, bias, and flawed, culturally nuanced decision-making. But human problems also ache for human solutions, and what we choose to do next is a critical aspect of addressing unforeseen consequence and moving from a space of anxiety to a place of assurance. We might do this work earlier in the development cycle, as Tricarico proposes, around ethical procurement and the disclosures required to shape our own relationship with the platform (Tricarico, 2023), or reactively and swiftly, in addressing that which developers cannot be held responsible for not seeing (Peters et al., 2020). The important thing is that, wherever we are in that development and distribution cycle, we as individuals do the work to address the issues of responsibility raised by our own use of these products. But how might we build a stronger culture of personal accountability, independent of broader regulation?

Tricarico discusses several issues related to the ethical need for diversity in the procurement of decision-making systems, and the role of transparency, accountability, and disclosure in the systems implemented for use by others. At what point does this approach become too cumbersome for those building these experiences? Is there a level of disclosure beyond terms of service at which we simply stop caring, so long as the product is doing the job we’ve hired it into our lives to perform? The lens I’m using here is speed and cost. If we’re using a generative tool to perform a task that needs to be done quickly and inexpensively, how much capacity do we have to care about decisions made significantly upstream of our experience?

All-Too-Human Solutions To All-Too-Human Problems

Much of Tricarico’s ethical argument concerns human inputs and outputs: the responsibilities of those training the models, and the accountability for outcomes borne by those who use them. Any major decision we make becomes intimately coupled with the risk of consequence. And however technologically sophisticated these systems might be, at the core of it all are very human problems, currently operating without any meaningful regulation. Much of the current vocalized fear around AI concerns the impact it is going to have on human experience. But when we talk this way, we talk as if AI isn’t here yet, as if it’s something still in the future. It is already in wide, and often undisclosed, use. What might the process of walking this back to a place of regulated oversight look like? What do we actually do?

We might borrow from the medical field’s approach to the ethics of efficacy and doing no harm, an approach which has historically served that community well. But unintended consequence is a concern to be considered in the long term, not just the immediate. These two dimensions are challenging to reconcile when that which we seek to reduce harm around is so new. We can’t understand the long-range consequences of this technology when it has only been around for a few years. If we can’t see that deep into the future, even with robust testing in advance, is there something else we need here: the ability to respond more rapidly to unintended consequences as they appear?

Engagement In The Present vs. Enduring Into The Future

This is where many of the challenges inherent in grieftech collide. It is a legacy-driven, long-term product which is only a few years old and serves emotion in the present. It is not intended to be used in the short term and then discarded; preserved in digital amber for future generations, it is expressly intended to endure. From a recurring-revenue, and potentially economically cynical, perspective, locking customers into a platform for the long term makes a lot of commercial sense. But through a lens of responsibility, the platforms also have to be held accountable to endure, and to ensure that emergent existential issues such as model collapse (Franzen, 2023) are held at a distance. If a family’s legacy is increasingly stored on remote servers, the platforms themselves have a responsibility to preserve and maintain the legacy of those who have chosen to upload their digital lives into the cloud for future retrieval. Many of the terms of service documents speak to this: if the company were ever to go out of business, users would be offered the ability to download what had been harvested. This is standard practice in many parts of the web, but I’d argue it doesn’t go far enough in this context. Grieftech platforms have a future-facing responsibility to their customers to endure in ways that social platforms do not: they are explicitly positioned as platforms of remembrance. They trade in the preservation of long-term memory, and they need to reflect that in their operational health. In this sense a grieftech platform is like an archive or a library. Not in the sense of temporarily borrowing books and returning them, but in the sense of preserving the past for the long term, closer to the reverence we might extend to a museum.
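As an aside for readers unfamiliar with the term, model collapse describes the degradation that sets in when generative models are trained, generation after generation, on their own synthetic output rather than on authentic data. The short Python sketch below is purely illustrative and assumes nothing about any grieftech platform’s actual pipeline: it fits a deliberately simple model to some data, resamples from that fit, and repeats.

    # A toy sketch (an assumed illustration, not any platform's actual pipeline)
    # of why model collapse matters for long-term preservation. The authentic
    # data has two distinct modes; each generation fits a single Gaussian to
    # whatever data it currently holds, then resamples from that fit.
    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Generation 0: authentic data with two clearly separated modes.
    data = np.concatenate([
        rng.normal(-3.0, 0.5, size=5_000),
        rng.normal(+3.0, 0.5, size=5_000),
    ])

    def report(label, samples):
        # How much of the mass now sits in the gap between the original modes?
        gap_mass = np.mean(np.abs(samples) < 1.5)
        print(f"{label}: mean={samples.mean():+.2f}, std={samples.std():.2f}, "
              f"mass in gap={gap_mass:.1%}")

    report("generation 0 (authentic)", data)
    for generation in range(1, 6):
        mu, sigma = data.mean(), data.std()        # fit a deliberately simple model
        data = rng.normal(mu, sigma, size=10_000)  # next generation sees only synthetic data
        report(f"generation {generation} (synthetic)", data)

The two-mode structure of the original data disappears after the first synthetic generation and never returns, however many times the process is repeated. The point for grieftech is that regeneration is not preservation: once the authentic record is displaced by synthetic output, the detail it contained cannot be recovered.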

What happens to the data we consume and create in digital spaces is a conversation only accelerated by artificial intelligence. But it is often viewed through a short-term lens, for example the weaponization of targeted cohorts in tactics of persuasion and attention. Grieftech is informed by this, but operates along a much broader, more opaque spectrum. It is a set of products intended for the future, not the present. And in that, it bears a deeply human responsibility to operate in ways that sustain it into the future.


References:

Franzen, C. (2023). The AI feedback loop: Researchers warn of ‘model collapse’ as AI trains on AI-generated content. Venturebeat.com. Retrieved from: https://venturebeat.com/ai/the-ai-feedback-loop-researchers-warn-of-model-collapse-as-ai-trains-on-ai-generated-content/.

Future Of Life Institute. (2023). Pause Giant AI Experiments: An Open Letter. Retrieved from: https://futureoflife.org/open-letter/pause-giant-ai-experiments/.

Krieger, M. (2023). Unit 4.1 Corporate digital ethics, responsible AI, and the circular economy (15:31). [Digital File]. Retrieved from: https://canvas.upenn.edu/courses/1693062/pages/unit-4-dot-1-corporate-digital-ethics-responsible-ai-and-the-circular-economy-15-31?module_item_id=26549938.

Mueller, B. (2022). Corporate Digital Responsibility. Business & Information Systems Engineering. [Digital File]. Retrieved from: https://link.springer.com/article/10.1007/s12599-022-00760-0.

Peters, D., Vold, K., Robinson, D. & Calvo, R.A. (2020). Responsible AI—Two Frameworks for Ethical Design Practice. IEEE Transactions on Technology and Society. [Digital File]. Retrieved from: https://canvas.upenn.edu/courses/1693062/files/122342365/download?download_frd=1.

Stern, J. (2020). How Tech Can Bring Our Loved Ones to Life After They Die | WSJ. YouTube.com. [Digital Video File]. Retrieved from: https://www.youtube.com/watch?v=aRwJEiI1T2M.

Tricarico, M. (2023). Unit 4 Guest Lecture: Marisa Tricarico (25:35). [Digital Video]. Retrieved from: https://canvas.upenn.edu/courses/1693062/pages/unit-4-guest-lecture-marisa-tricarico-25-35?module_item_id=26566793.

Wade, M. (2020). Corporate Responsibility in the Digital Era. MIT Sloan Management Review. [Digital File]. Retrieved from: https://sloanreview.mit.edu/article/corporate-responsibility-in-the-digital-era/.