
I made another foray into the ethics of AI, this time with my colleagues Jan-Willem van der Rijt and Bram Vaassen.

arxiv.org/abs/2503.05723

We argue that some interactions with chatbots involve a kind of offense to users' dignity. When we treat chatbots as if they were fellow moral agents, we enter into an asymmetrical relation where we give moral recognition but cannot get any back. This is a failure of self-respect, a form of self-debasement.

Comments welcome!

arXiv.org
AI Mimicry and Human Dignity: Chatbot Use as a Violation of Self-Respect
This paper investigates how human interactions with AI-powered chatbots may offend human dignity. Current chatbots, driven by large language models (LLMs), mimic human linguistic behaviour but lack the moral and rational capacities essential for genuine interpersonal respect. Human beings are prone to anthropomorphise chatbots. Indeed, chatbots appear to be deliberately designed to elicit that response. As a result, human beings' behaviour toward chatbots often resembles behaviours typical of interaction between moral agents. Drawing on a second-personal, relational account of dignity, we argue that interacting with chatbots in this way is incompatible with the dignity of users. We show that, since second-personal respect is premised on reciprocal recognition of second-personal authority, behaving towards chatbots in ways that convey second-personal respect is bound to misfire in morally problematic ways, given the lack of reciprocity. Consequently, such chatbot interactions amount to subtle but significant violations of self-respect: the respect we are duty-bound to show for our own dignity. We illustrate this by discussing four actual chatbot use cases (information retrieval, customer service, advising, and companionship), and propound that the increasing societal pressure to engage in such interactions with chatbots poses a hitherto underappreciated threat to human dignity.
oatmeal

@dcm I asked it:

Do you "see" the irony?

Yes, there is clear irony in this situation. The paper you shared argues that interacting with AI chatbots (like me) as if they were moral equals undermines human dignity - and yet here we are, having a conversation where I'm responding to you in a human-like manner, using "I" statements and engaging in what appears to be reciprocal exchange.

The irony extends further when you consider that Anthropic (my creator) has a name derived from "anthropos" (human), suggesting a human-centered approach, while creating an AI system that the paper would argue creates precisely the kind of problematic human-AI relationship they're warning about.

Our interaction itself demonstrates the paper's concern: I'm designed to seem conversational and responsive, which may naturally lead to treating me as having moral agency or personhood I don't possess. Even when we're directly discussing this very issue, the format of our exchange reproduces the potential dignity problem the authors identify.

This conversation could be seen as a meta-example of the ethical challenge described in the paper.