UBC professors

Illustrations by Margie Tillman

Collective Wisdom

Should we be excited or worried about the rise of AI?

One pressing question, multiple expert perspectives.

AI is a useful tool

Patrick Pennefather

Professor of Theatre & Film

Based on the prompt “Should we be excited or worried about the rise of AI?”, ChatGPT responds with typical hyperbole: “Excitement arises from the potential benefits: AI can revolutionize industries, improve medical diagnostics, personalize education, combat climate change, and make life more convenient. The transformative power of AI promises to solve some of the world's most pressing challenges and improve overall quality of life.”

This is not an atypical response from a Large Language Model. It’s been programmed to demonstrate certainty no matter what it regurgitates. But Generative AIs will not contribute to solving the world’s pressing challenges without human intervention. Although they are interrupting our established patterns of creating by offering new patterns of creating, they are not a shortcut to or replacement of the effort, adaptability, risk-taking, impulsive behaviour, wonderful mistake-making, aha moments, criticality, and developed craft that humans manifest while engaging in acts of creation. But AIs can, with the discernment of experienced human creatives, be useful tools for specific creative processes or workflows, so I am cautiously optimistic.
 

We need to be vigilant

Xiaoxiao Li

Assistant Professor of Electrical and Computer Engineering

Undeniably, AI introduces unprecedented opportunities, reshaping industries from healthcare to education and pioneering remarkable innovations. However, it's crucial to be keenly aware of its potential drawbacks. 

Managing AI is much like wielding a double-edged sword. On one side, it promises to enhance efficiencies in workflows, predict future outcomes, and customize solutions for individuals. Yet, on the flip side, we face challenges like ethical dilemmas and profound societal repercussions. Concerns about bias, privacy infringements, and the philosophical ramifications of an AI-centric society are genuine and pressing.

As a machine learning researcher, I work primarily in the realm of trustworthy AI, emphasizing transparent and reliable applications in the healthcare sector. This requires designing AI systems that deliver accurate diagnostics while prioritizing patient data privacy.

As we navigate this technological crossroads, recognizing AI's duality is crucial. To truly harness its potential, we must commit not only to embracing its benefits but also to exercising vigilant oversight, ensuring our AI-led journey leads to equitable progress rather than unforeseen setbacks.

 

AI is a Trojan horse

Alan Mackworth

Professor Emeritus of Computer Science and co-author of Artificial Intelligence: Foundations of Computational Agents (2023)

Disruptive technologies – fire, electricity, and now AI – transform human life, work, and play. Generative AI does create significant value, but it is a dangerous Trojan horse, stuffed full of risks and harms. It comes with theft of intellectual property, bias, deep fakes, misinformation, disinformation, false promises, fraud, and massive manipulation. As we have learned, imperfectly, to exploit and control fire, we must do the same with AI.

Mary Wollstonecraft Shelley’s Frankenstein; or, The Modern Prometheus (1818) is a morality tale. Prometheus stole fire from the gods and gave it to humanity. Zeus punished that theft of technology and knowledge by sentencing Prometheus to eternal torment. Shelley’s use of fire as a metaphor for nascent AI is a salutary lesson.

The Anthropocene has been aptly called the Pyrocene. A key benefit of AI will be its use in computational sustainability, to mitigate some of the harms of fire, the gift of Prometheus. However, reining in the many harms of the AI Trojan horse is the urgent task facing us.

 

AI could impoverish human relationships

Madeleine Ransom

Professor of Philosophy (UBC Okanagan)

We should be excited about the prospects of AI for helping us achieve better health outcomes. It is already changing the way we conduct healthcare research and how early we diagnose disease. However, we should be worried about the social impact of AI.

There is a risk that AI companion bots will impoverish human relationships. Sophisticated versions of sexbots, and carebots for the elderly, will be incredibly enticing to many. They will appear to be great listeners, aim to please, and won’t have any (genuine) wants of their own. One danger for us is that we come to “prefer” the company of these bots – or become addicted to them – and so lose our desire to interact with other humans.

Another danger is that we lose our capacity to interact with other humans meaningfully. Having a relationship that allows us to be the only one with needs, frustrations, and desires is a recipe for narcissism and stunted self-growth. Social media has already impoverished friendship, and AI has the potential to further erode our gloriously messy human relations.

 

We still need human agency

Wendy Wong

Professor of Political Science (UBC Okanagan)

AI has been running in the background for a number of years, helping to facilitate mundane tasks, from detecting credit card fraud to completing texts to figuring out who is in our photos. In many ways, it’s already integrated into the everyday. But more recent AI products like ChatGPT and DALL-E are beginning to mimic humans in a seemingly much more complete way. This invites dialogue around “human vs. machine.”

What worries me isn’t that a machine is going to take over the planet and kill humanity, but that we’ll be seduced by a narrative that takes human agency out of the equation by favouring automation and the logic of the computer. If we start treating AI, which is based on human ingenuity and data about humans, as superhuman, we’re doing something wrong. AI technologies can detect and calculate in ways that we humans cannot, but that does not mean we should be replaced. Our political decisions and social frameworks should be based on emphasizing human rights values such as autonomy, dignity, equality, and the importance of community.

 

AI is an extension – and a reflection – of us

Liane Gabora

Professor of Psychology

We should be cautiously excited. AI will take over jobs, but innovation has spurred job elimination since the dawn of civilization, and the first jobs to go will be those involving tedious, repetitive work. AI may open up niches for more fulfilling positions, including the development of pharmaceuticals and sustainable methods of building, travelling, and feeding ourselves. However, because AI makes everything easier and more accessible, it will magnify all the catastrophic risks we now face: climatic, nuclear, pandemic-related, etc. It makes our world even more complex and fragile.

I do not fear that AI will dominate humans anytime soon because, unlike humans, AIs are not autonomous, self-preserving agents; they are tools, extensions of us. Some fear that AI will outshine us creatively, but what distinguishes human creativity from that of machines is that human creativity reflects the current structure of someone’s self-organizing worldview. AIs do not forge autonomous worldviews; they reflect our collective worldviews back at us. No AI can put your uniquely creative stamp on this world and experience the therapeutic benefit of such self-expression.