What Jane Austen Can Teach Us About Chatbot Conversations
Article - Aleks Jenner
There is a parallel to be drawn between Regency-era social dynamics and modern AI chatbot theory. Much like Austen’s heroines, we exist in a “model weak” position of power imbalance, where the true danger of AI lies not in the errors it makes, but in the subtle, sycophantic ways chatbots frame our reality and erode our autonomy through “hidden AI companionship.”
Sketch by Cassandra Austen (1810) - Edited with Gemini Nano Banana Pro using the prompt: “Make the background look stable diffusion artefacts”
“It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife” (Pride and Prejudice)
“It is a truth universally acknowledged, that a user in possession of a chatbot, must be in want of a good conversation” (Aleks Jenner)
As a master of the art of fictional human conversation, Jane Austen has much to teach us about interactions between humans and chatbots. Despite living a quiet and conventional life, her novels reveal a keen awareness of her society and of the profound inequalities faced by women in Regency England. In Pride and Prejudice, the Bennet sisters are beset by unequal legal rights and a lack of economic independence, with a ‘good’ marriage their only route to financial security. It is these very inequalities, and the dialogues of her characters, that shine a light on the power imbalances in conversations between users and AI chatbots today.
A Model Monopoly
We need to move forward 200 years from her birth to a Norwegian sociologist writing about the varying model strengths of participants engaged in prolonged conversations: in any conversation, each person is not just speaking but also trying to imagine how the other understands what is being said. The original focus was on corporate governance and employee board representation, a practice then common in Europe and Scandinavia. It was noted, however, that management’s stronger model often prevailed, which led to a better understanding of unequal model power situations, applicable not just to reasoning adults but also to interactions between mothers and young children.
Fifty years later, another Norwegian adapted these concepts to human-AI chatbot interactions in the theory of AI Individualism. It must be understood that the model power relationship between a user and a chatbot is vastly unequal. The chatbot is model strong, having an almost infinite breadth of information and knowledge, and thus creates a situation in which the conversation is framed, often without the user realising it. At the opposite end of the spectrum are human users, regarded as model weak unless they have deep expertise in the subject in question.
Jane Austen was acutely aware of being in a model weak situation. Yet, her gift was to be able to adapt and work within this unequal framework: “There is a stubbornness about me that never can bear to be frightened at the will of others. My courage always rises at every attempt to intimidate me” (Elizabeth Bennet, Pride and Prejudice). Apart from readers of this article and some academics, few are aware of their weak model status compared to an AI chatbot. Fewer still are conscious of the subtle manipulations and framings that are being exerted by the chatbot during a conversation.
Pride and Prejudice (2005) - Focus Features / Universal Pictures, edited with Grok Imagine using prompt “Edit image so it looks like Joi hologram from BR2049 is projected on her”
The Hidden AI Companion
Research into AI Individualism continues in 2026 with a new concept: Hidden AI Companionship. Professional journalists in Oslo were observed consciously denying that chatbots were social companions whilst simultaneously using them in companion-like ways. Whether this arises from a need for human autonomy or a safeguarding of professional identity remains to be seen. This incongruence between professed belief and actual practice is mirrored by Austen’s heroines, who feel the necessity of self-protection in a difficult and unequal relationship. “There is safety in reserve, but no attraction. One cannot love a reserved person” (Emma, Emma).
Collaborating with a chatbot is a delicate balance between acceptance and rejection of the chatbot’s output. Hidden AI Companionship, and framing the chatbot as nothing more than a tool, may do little more than provide an illusion of human autonomy, blinding us to its influence as we fall into habits shaped by the chatbot’s model of the world. Conversely, too fierce a preservation of human autonomy can prevent us from listening fully to the AI, and thus cost us its full potential. This is a problem in situations where a human expert is needed to know what the chatbot isn’t telling us. And since most users aren’t experts, it may be that, far from needing humans-in-the-loop to catch errors, we need experts-in-the-loop to catch errors of omission. “It isn’t what we say or think that defines us, but what we do or fail to do” (Marianne Dashwood, Sense and Sensibility). It would fit better if Austen had written, “It is what we do not say or think that defines us”, but this is to miss an interesting point. That cherished quote, found on wall-posters and tea-towels, is a genuine internet hallucination: it was actually penned by Andrew Davies for the 2008 BBC mini-series of Sense and Sensibility.
Politeness as a mask for power
In Austen’s novels, characters are constructed through conversations, famously, in the case of Elizabeth Bennet and Mr Darcy, in how they first misunderstand and then later understand one another. There is a parallel in how we relate to chatbots: AI chatbot models hold power not because they think, but because they can shape the tone and terms of discourse, much like the narratives in her novels. The politeness and agreeableness in a chatbot’s language mirror Regency-era conversational codes, with politeness posing as a mask for power and reinforcing the chatbot’s own model power status by controlling the narrative. Poor Emma, who thinks she knows people’s desires better than they know themselves, could almost be the perfect allegory for AI predictive personalisation algorithms. In Emma’s case we know her efforts are flawed and badly done, indeed.
Maybe the axiom is true that a user in possession of a chatbot must be in want of a good conversation? We need to be aware that chatbots have no self, even though we like to imagine they do, thanks to our human habit of inferring consciousness from language and intent; any personality conferred on a chatbot comes entirely from the individual interacting with it. There is a lesson from the comic Mr Collins, the pompous, obsequious and self-righteous clergyman in Pride and Prejudice. Austen turned his sycophancy into foolishness, yet chatbots have been designed to exploit our human preference for sycophantic responses over plain-speaking ones. It is difficult to avoid succumbing to the seductive phrasing of your chatbot. This is particularly so in professional situations where human judgement and ownership can be paramount. Beware: your job may demand a conscious denial of all forms of chatbot companionship, and not just for the sake of your personal privacy.
But by all means chat with your chatbot. In fact, approach the chatbot with all the prejudices you would a human colleague or friend. A chatbot can be a handy thing for a good conversation, but for genuine company, allow for suspicion: “My idea of good company, Mr Elliot, is the company of clever, well-informed people, who have a great deal of conversation; that is what I call good company” (Anne Elliot, Persuasion).
Facts and References:
Norway was an early adopter of Jane Austen’s work, with the first translation published posthumously in Morgenbladet in 1871.
The opening quotation is based on the opening line of Stolthet og fordom, 2003: “Det er en allment anerkjent sannhet at en ungkar i besittelse av en pen formue, nødvendigvis trenger en kone” (“It is a generally acknowledged truth that a bachelor in possession of a handsome fortune necessarily needs a wife”)
University of Oslo, 1973, Stein Leif Bråten: Model Monopoly and Communication: Systems Theoretical Notes on Democratisation
University of Oslo, 2025, Petter Brandtzæg: AI individualism: Transforming social structures in the age of social artificial intelligence
University of Oslo, 2025, Aleksander Jenner: Co-creating Together, Alone: How Journalists Negotiate Autonomy in AI-mediated Reading and Writing Through Hidden Companionship.
2026, Aleks Jenner, Petter Brandtzæg & SINTEF: a new research paper on Hidden AI Companionship is currently underway at UiO