Shaping the Future of AI: Conversations 2023 at the University of Oslo

Created with DALL-E 2 using an AI-generated prompt: Create an image that captures the essence of a retro-futuristic cityscape at sunset, with a style vaguely reminiscent of 1980s aesthetics. The scene should include smooth, pastel-colored buildings, palm trees, and a serene sky. Integrate sleek, minimalistic humanoid figures representing AI, subtly hinting at their potential impact on society. The mood should evoke a sense of nostalgia mixed with contemplation, focusing on the balance between technological progress and its mindful use, without directly imitating any specific artist's style.

Article - Aleks Jenner

A brief look at the state of the science of chatbot research

I attended Conversations 2023, an academic conference on the research and design of chatbots and large language models, hosted by the Department of Media and Communication at UiO on the 22nd and 23rd of November. 22 academic papers were presented in 15-minute talks, each followed by a brief Q&A. I took notes on every one of them, so that I can present here a broad overview of the state of the science. What is the mood among experts in chatbot research? Read on.

Though ChatGPT burst onto the scene one year ago, sending seismic shockwaves through the broader public consciousness, researchers have been studying chatbots for far longer than that. But make no mistake: there was hardly a single presentation that did not use the letters G, P, and T in their now-infamous combination. It's clear that the technological leap ChatGPT represents caught everyone's attention, from layperson to expert.

Large Language Models (LLMs), the text-prediction style of generative AI (GenAI) that ChatGPT (and others, such as Google's Bard or Meta's Llama 2) is built upon, were, a year ago, the future of AI. That future has arrived, and these AI systems have become a reality. It says something about the exponential rate of technological progress that arguably no other technology has seen such rapid and widespread adoption: ChatGPT reportedly reached 100 million users within two months of launch.

We're all familiar with tech hype: crypto, blockchain, the metaverse. Despite the billions poured into their research and development, none have delivered on the promise of transforming the world. I'm here to deliver the message that GenAI is not overhyped.

There's a lot of talk about the rapid improvement of AI, but it is often vague and unspecific. Not so at the conference. One presentation compared the lexical alignment (the phenomenon whereby people in conversation unconsciously adopt each other's linguistic styles, word choices, and phrases) of GPT-3.5 and GPT-4: where the older model barely aligned at all, the newer one was almost at the level of humans.
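To make the idea concrete, here is a toy sketch of my own (not a measure used in the presentation) that quantifies lexical alignment as the overlap between the vocabularies two speakers use:

```python
def lexical_alignment(speaker_a_turns, speaker_b_turns):
    """Jaccard overlap between the word sets used by two speakers.

    A crude proxy for lexical alignment: 0.0 means no shared vocabulary,
    1.0 means the speakers use exactly the same words.
    """
    vocab_a = {w.lower() for turn in speaker_a_turns for w in turn.split()}
    vocab_b = {w.lower() for turn in speaker_b_turns for w in turn.split()}
    if not vocab_a and not vocab_b:
        return 0.0
    return len(vocab_a & vocab_b) / len(vocab_a | vocab_b)

human = ["could you summarise the findings", "great, thanks for the summary"]
bot = ["here is a summary of the findings", "glad the summary helped"]
print(round(lexical_alignment(human, bot), 2))  # prints 0.2
```

Real alignment studies look at far richer signals than raw word overlap (syntactic structures, specific phrase re-use, alignment over time), but the intuition is the same: the more a model's word choices converge on yours, the more human the conversation feels.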

Now that it can talk as well as a human, the next step on the road of anthropomorphism is for AI to dynamically adapt throughout a conversation, through an empathetic understanding of dialogue. Research is already happening to develop systems able to identify our emotional states and respond accordingly.
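As a hypothetical illustration of the idea (my own toy example, not one of the systems presented at the conference), even a crude keyword-based emotion detector can be used to adapt a reply:

```python
# Coarse emotional cues mapped from keywords; a real system would use a
# trained classifier, but the adapt-the-response loop is the same shape.
EMOTION_KEYWORDS = {
    "frustrated": {"annoying", "broken", "useless", "again", "still"},
    "happy": {"great", "thanks", "awesome", "perfect"},
}

def detect_emotion(message):
    words = set(message.lower().split())
    for emotion, cues in EMOTION_KEYWORDS.items():
        if words & cues:
            return emotion
    return "neutral"

def respond(message):
    emotion = detect_emotion(message)
    if emotion == "frustrated":
        return "Sorry this is still not working. Let's go step by step."
    if emotion == "happy":
        return "Glad to hear it! Anything else I can help with?"
    return "Understood. How can I help?"

print(respond("this is broken again"))
```

The research systems discussed at the conference aim far beyond keyword matching, inferring emotional state from the full dialogue context, but the goal is the one sketched here: sense how the user feels, then shift tone accordingly.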

But why do developers go to all the trouble of making AI systems more human-like? Because we like them, and we respond better to them. The only time a human-like AI becomes a problem is when it messes up, because human-likeness unconsciously raises our expectations of what it's capable of.

Anthropomorphism and personality

Of the many aspects of anthropomorphism that researchers were studying (broadly categorised into Appearance, Capacity, and Personality), the most human-like feature for me is personality. We’re still early in AI development, watching these systems take their baby steps and listening to them say their first words, but the groundwork for artificial personality testing was laid out at the conference. It’s not much of a stretch to imagine future chatbots making use of the mountains of personal information we give freely to social media companies, leveraging this data to become a more effective conversation partner.

Finn Myrstad, Chair of the Transatlantic Consumer Dialogue, mentioned in the day 2 keynote that the real danger of AI is not an existential threat from some Artificial General Intelligence (AGI, the so-called holy grail of conscious silicon) but something less exciting. As it stands, LLMs require incredible amounts of computing power. This is massively expensive, and it means that cutting-edge development can happen only at the wealthiest tech companies.

Billion-dollar business

These companies may be happy pouring billions of dollars into LLMs (operating at a loss seems to be standard procedure in the tech world), but at some point that cost will need to be recouped. The systems will be put to work, making as much money as possible. This is where the danger lies: in the consolidation of power in the hands of a few mega-companies, and in the political influence they can exert. When fines from regulators are treated as nothing more than the cost of doing business, there is little incentive not to manipulate users and exploit them for profit.

GenAI may only be a tool, but it's a brand-new tool, with as much potential to do good as to be misused. Companies simply cannot be trusted to regulate themselves, however much they may try to convince us otherwise. For that reason, regulatory bodies must enforce data protection laws with enough vigour to keep companies in line. They are the only thing standing between us and a capitalist dystopia.