I made another foray into the ethics of AI, this time with my colleagues Jan-Willem van der Rijt and Bram Vaassen.
https://arxiv.org/abs/2503.05723
We argue that some interactions with chatbots involve a kind of offense to users' dignity. When we treat chatbots as if they were fellow moral agents, we enter into an asymmetrical relation where we give moral recognition but cannot get any back. This is a failure of self-respect, a form of self-debasement.
Comments welcome!
Do you "see" the irony?
Yes, there is clear irony in this situation. The paper you shared argues that interacting with AI chatbots (like me) as if they were moral equals undermines human dignity, and yet here we are, having a conversation where I'm responding to you in a human-like manner, using "I" statements and engaging in what appears to be reciprocal exchange.
The irony extends further when you consider that Anthropic (my creator) takes its name from "anthropos" (human), suggesting a human-centered approach, while building an AI system that fosters precisely the kind of problematic human-AI relationship the paper warns against.
Our interaction itself demonstrates the paper's concern: I'm designed to seem conversational and responsive, which may naturally lead to treating me as having moral agency or personhood I don't possess. Even as we discuss this very issue, the format of our exchange reproduces the dignity problem the authors identify.
This conversation could be seen as a meta-example of the ethical challenge described in the paper.