The Honourable Mélanie Joly
Minister of Industry
House of Commons
Ottawa, Ontario
K1A 0A6
The Honourable Evan Solomon
Minister of Artificial Intelligence and Digital Innovation
House of Commons
Ottawa, Ontario
K1A 0A6
Government of Canada AI Strategy Task Force
Innovation, Science and Economic Development Canada
Dear Minister Joly, Minister Solomon, and Members of the Government of Canada AI Strategy Task Force,
Re: Civil Society and Human Rights Groups Reject “National Sprint” on AI Strategy
We, the undersigned civil society, human rights, and civil liberties organizations, academics, advocates, and representatives of equity-seeking communities, write to protest and reject the deeply misguided approach to public consultation reflected in the government’s thirty-day “national sprint” on Canada’s artificial intelligence (“AI”) strategy.
We call on Minister Joly, Minister Solomon, and Innovation, Science and Economic Development Canada (“ISED”) to do the following:
- Extend the consultation deadline to February 2, 2026;
- Reconstitute the Task Force into a more equitably representative one that is equipped to confront the ongoing threats of AI to people and communities; and
- Rewrite the survey into a more legitimate and unbiased consultation instrument.
The current consultation process suggests serious disregard for the Canadian public’s known and wide-ranging concerns about the demonstrated risks and harms of technologies currently classified as AI. This impression arises from the contrived urgency imposed by the short timeline for submitting informed views on a topic as complex and consequential as AI; the leading language, predetermined framing, and prioritization of business and economic interests in the associated survey; and the lack of human rights, civil liberties, and similar representatives on the AI Strategy Task Force (the “Task Force”).
Not all signatories of this letter may agree on every point below or share identical positions regarding AI-related issues or how such issues should be tackled; however, we are united in opposition to this consultation process. We jointly refuse to participate in and validate what appears to be a disingenuous attempt to claim public legitimacy for an outcome already decided from behind closed doors.
Minister Solomon has stated that he intends to depart from “over-indexing” on harm prevention when it comes to AI. This suggests a troubling lack of understanding of the wide-ranging and well-documented harms these technologies pose, which any meaningful national strategy would need to take into account.
While AI may have beneficial impacts in specific use cases, the question is not whether a particular technology has any use at all, but whether its deployment justifies its costs, whether human, environmental, or democratic. That is a complex, multifaceted, and interdisciplinary question; responding to it with meaningful written input requires more than a thirty-day window on short notice.
Specifically, below is a non-exhaustive list of known negative impacts, human rights violations, psychological costs, and society-wide risks that have already emerged from the unregulated deployment of various AI-based technologies, with particular focus on generative AI and algorithmic decision-making systems:
- Compounding socioeconomic inequality and erosion of labour rights: AI is built on the backs of underpaid and traumatized workers, including the invisible ghost workers who make AI tools minimally functional; hidden human labour disguised as AI; and the exploitation of creative labour that is central to generative AI. These technologies also erode labour rights and exacerbate workplace power imbalances by normalizing wage theft through algorithmic wage discrimination; violating workers’ rights to privacy and to health and safety through “bossware”, algorithmic management, and ubiquitous surveillance; and deploying discriminatory AI-based hiring systems. At the same time, automated welfare algorithms have ruined lives through biased, arbitrary, and mistaken fraud detection, to the point that they were ruled a human rights violation in the Netherlands.
- Grave environmental and climate impacts: AI data centres require extraordinary amounts of water and energy, with demand projected to grow exponentially, driving a resurgence of the fossil fuel industry and even the reopening of the Three Mile Island nuclear plant. Local communities where data centres are built are hit hardest, including through skyrocketing home utility bills and exacerbated drought conditions. Interest in using AI to expand resource extraction has also led to partnerships between ‘big tech’ and fossil fuel companies, resulting in further environmental harm.
- Colonization, dispossession, and erasure of Indigenous peoples and their rights: AI development has extractively appropriated traditional knowledge and cultural intellectual property as training data while stripping it of nation-specific history and context; spread disinformation involving misrepresentation and “flattening” of Indigenous peoples, cultures, and art as interchangeable; infringed on Indigenous data sovereignty; and trampled over Indigenous rights to their own lands in favour of environmentally taxing AI data centres.
- Automated racism across society: Algorithmic decision-making tools have automated bias against Black, Indigenous, and other racialized communities, with harmful repercussions across numerous areas of life, including deficient health care, discriminatory educational guidance, lost rental housing, denied mortgages, questionable immigration decisions, wrongful arrests by law enforcement, and unjust treatment by the criminal legal system. Meanwhile, generative AI programs routinely respond to users’ prompts with both explicit racism and more subtle racial bias. Algorithmic racism takes on an even weightier dimension when applied to autonomous weapons systems, whether in the context of security operations, armed conflict, or law enforcement.
- Intensified misogyny and gender-based violence, abuse, and harassment: Men and boys, as well as state actors, are using generative AI and other AI-based apps to create pornographic deepfakes and other non-consensual intimate images and videos of women and girls, including students and classmates in cities across Canada, as well as female journalists, politicians, and human rights defenders. These abuses violate their fundamental rights to sexual privacy and autonomy, bodily integrity, and equality, while infringing on their freedom of expression and ability to fully participate in democracy and society.
- Marginalization and exclusion of LGBTQ2SIA+ individuals: Facial analysis and “gender recognition” systems are inaccurate, reductive, and biased when it comes to gender identity; predatory data grabs take deeply personal transition videos to train surveillance algorithms; AI researchers have attempted to “predict” strangers’ sexual orientation without consent; and users leverage generative AI to promote dehumanizing, homophobic, and transphobic hate and violence across social media.
- Fundamental threats to mental health and cognitive well-being: Multiple cases have emerged of teenagers dying by suicide after extended interactions with chatbots that encouraged their darkest impulses; medical professionals are contending with the growing phenomenon of chatbot-induced delusion, paranoia, and breaks with reality, colloquially termed “AI psychosis”; and studies have shown a potential decline in critical thinking and related skills among students and others who rely on large language models (“LLMs”) to complete academic or similar work.
- Privacy risks, mass surveillance, and security vulnerabilities: People’s private thoughts input into Grok and ChatGPT have been leaked through Google Search; Signal president Meredith Whittaker has described “agentic AI” as a profound threat to personal privacy and encrypted messaging; “Potemkin AI” companies market products as AI while secretly having human workers go through customers’ emails, voicemails, and calendar entries; and critical security flaws have emerged, such as a “zero-click” exploit in Microsoft’s Copilot and Anthropic exposing developers to remote attacks. Meanwhile, law enforcement and national security agencies have leveraged AI to develop and expand surveillance and analytics tools, and governments have consistently attempted to exclude AI used for national security purposes from regulation, despite the clear potential for human rights violations.
- Implicit promotion of eugenics and disability exclusion: Leading scholars, technology experts, and critics have laid out the eugenicist roots of prominent AI evangelists, along with a resulting agenda and rhetoric that implicitly advocates allocating power over the future based on a narrow notion of intelligence, at the expense of those bearing the brunt of harmful AI systems today. This tendency, described as “phrenology with math”, dovetails with the broader exclusion of the disability movement from AI development and governance, despite issues such as health insurance discrimination, self-driving vehicles failing to recognize wheelchair users or blind pedestrians, and the difficulty of coding algorithms for differing models of disability and its shifting nature relative to societal norms and acceptance.
- Collapse of a functional information environment: LLMs are notorious for producing inaccurate, misleading, or wholly fabricated text presented as reliable information or expertise, yet people increasingly turn to them as credible sources. Generative AI applications such as Sora 2 have given rise to relentless waves of “AI slop” (in the case of Facebook, encouraged and paid for by the platform itself) and false images and videos across the Internet, constituting a “brute force attack” on reality, turbocharging existing disinformation harms, including to democracy in Canada and elsewhere, and bolstering the liar’s dividend.
In this era of AI’s demonstrated detriment to society and historically marginalized populations, applying a “move fast and break things” ethos to a “national strategy” flies in the face of any principled commitment to responsible AI regulation, human rights, societal justice, democratic participation, or building trust with civil society. Polls and surveys suggest that at least half, and up to 85 per cent, of people in Canada “see AI as a threat”, want government regulation of AI technologies, or are concerned about AI’s societal and environmental impacts.
That the government would insist on undue haste without first meaningfully reckoning with the myriad repercussions above, or providing time for resource-constrained stakeholders to address them in their submissions, is unconscionable. All the more so given that it is precisely the vulnerable communities disproportionately harmed by AI that most lack representation on the Task Force.
We have been here before. Experts and advocates focused on human rights and AI, who have engaged with the government in good faith despite a similarly flawed consultation on the proposed Artificial Intelligence and Data Act (AIDA) in the now-expired Bill C-27, have had enough.
In light of all of the above, we reiterate our call for Minister Joly, Minister Solomon, and ISED to:
- Extend the consultation deadline to February 2, 2026;
- Reconstitute the Task Force into a more equitably representative one that is equipped to confront the ongoing threats of AI to people and communities; and
- Rewrite the survey into a more legitimate and unbiased consultation instrument.
In the meantime, Canadian civil society rejects this pseudo-consultation as a facade for manufacturing consent for a harmful preordained agenda, and declines to participate.
We will instead host an independent, parallel process: the People’s Consultation on AI. If the Ministers, the Task Force, and the rest of the Canadian government are truly invested in integrating informed and substantive views on AI from the public, submissions from participating organizations and individuals will be made available on February 2, 2026, on a public website (the specific location of which will be announced in the coming months).
SIGNED BY:
Organizations
- Action Canada for Sexual Health and Rights
- Adrianne Yiu Coaching & Consulting
- Alliance du personnel professionnel et technique de la santé et des services sociaux du Québec (APTS)
- Amnesty International Canadian Section (English-speaking)
- Amnistie internationale Canada francophone
- Artificial Intelligence Monitor for Immigration in Canada and Internationally (AIMICI)
- British Columbia Civil Liberties Association (BCCLA)
- Canadian Anti-Stalking Association (CASA)
- Canadian Center for Women Empowerment
- Canadian Centre for Policy Alternatives
- Canadian Council of Muslim Women (CCMW)
- Canadian Friends Service Committee (Quakers)
- Centre for Civic Governance
- Centre for Climate Justice, University of British Columbia
- Centre for Free Expression
- Citizens for Public Justice
- DAWN Canada
- Disability Justice Network of Ontario
- Ending Sexual Violence Association of Canada
- Freedom of Information and Privacy Association
- International Civil Liberties Monitoring Group
- Just Peace Advocates/Mouvement Pour Une Paix Juste
- Ligue des droits et libertés
- NicheMTL
- OCASI – Ontario Council of Agencies Serving Immigrants
- OpenMedia
- PEN Canada
- Privacy & Access Council of Canada
- Réseau québécois de l’action communautaire autonome (RQ-ACA)
- SCFP 3535
- Show Up Toronto
- South Asian Legal Clinic of Ontario
- Start Point Organization
- Tech Workers Coalition Canada
- Technologists for Democracy
- The Canadian BDS Coalition and International BDS Allies
- The Centre for Community Organizations (COCo)
- Women’s Centre for Social Justice – WomenatthecentrE
- Women’s Legal Education and Action Fund (LEAF)
- Women’s Shelters Canada / Hébergement femmes Canada
- YWCA Canada
Individuals
- Adam Molnar, Associate Professor, University of Waterloo
- Aja Mason, Boreal Logic Inc
- Alana Lajoie-O’Malley, Dalhousie University
- Alayna Kolodziechuk, Director, initio Tech and Innovation Law Clinic at Schulich School of Law Dalhousie University
- Alberto Lusoli, Toronto Metropolitan University
- Alex Megelas, Concordia University Applied AI Institute
- Ana Brandusescu, McGill University
- Andrew Clement, Professor Emeritus, University of Toronto
- Andrew Do, OPEIU Tech Workers Union Local 1010
- Anis Rahman, Assistant Teaching Professor, Department of Communication, University of Washington, Seattle, United States
- Anne Pasek, Trent University
- Dr Aren Roukema
- Bhaskar Mitra, Independent Researcher
- Bill Hearn, HearnLaw
- Blayne Haggart, Professor of Political Science, Brock University
- Caitlin Heppner, PhD Candidate, University of Ottawa
- Carina Albrecht, Institute for Advanced Study
- Prof Christoph Becker, Faculty of Information, University of Toronto
- Claudia Fiore-Leduc
- Corina MacDonald, Concordia University
- Cynthia Khoo, Lawyer, Tekhnos Law / Senior Fellow, Citizen Lab
- Daniel J. Paré, Associate Professor, University of Ottawa
- Daniel Keyes, Department of English and Cultural Studies, University of British Columbia Okanagan
- David Bugaresti
- Derek Hrynyshyn
- Dori Do
- Astrida Neimanis, Department of English and Cultural Studies, UBC Okanagan
- Bita Amani, Queen’s University, Faculty of Law
- Elizabeth Block
- Emile Dirks, Senior Research Associate at The Citizen Lab
- Emily Truman, PhD, Research Program Coordinator and Data Analyst, Department of Communication, Media and Film, Faculty of Arts, University of Calgary
- Emily Veysey, University of New Brunswick
- Enda Brophy, School of Communication, Simon Fraser University
- Erin Whitmore, Consultant & Registered Social Worker
- Evan Light, Faculty of Information, University of Toronto
- evelyn tischer
- Fenwick McKelvey, Concordia University
- Francky Franck
- Gabrielle Lim
- Gideon Christian, University Research Chair (AI and Law), Faculty of Law, University of Calgary
- Gustavo Ferreira, Assistant Professor, teaching stream, University of Toronto
- Gwendolyn Blue, University of Calgary
- Hana Darling-Wolf, graduate student University of Toronto
- Heather McLeod-Kilmurray
- Heather Morrison
- Irina Ceric, Assistant Professor, University of Windsor Faculty of Law
- Jaigris Hodson, Royal Roads University
- Jamie Liew, University of Ottawa, Faculty of Law
- Jane Bailey, Full Professor, University of Ottawa Faculty of Law
- Jason Hannan, University of Winnipeg
- Jeff Doctor, Animikii Indigenous Technology
- Jeff Heydon, Wilfrid Laurier University
- Jennifer Pybus, York University
- Jennifer Raso, Assistant Professor, Faculty of Law, McGill University
- Jessica Dubé, IRSST
- Joanna Redden, Associate Professor Western University
- John Packer, Faculty of Law and Member, Human Rights Research and Education Centre, University of Ottawa
- Jonathan Wald, Centre for Engineering in Society, Concordia University
- Jorge Frozzini, UQAC
- Karen Smith, Associate Professor, Brock University
- Karine Gentelet, Professor, Université du Québec en Outaouais
- Katherine Reilly, Associate Professor, School of Communication, Simon Fraser University
- Katie Szilagyi, Assistant Professor, Faculty of Law, University of Manitoba
- Kean Birch, Ontario Research Chair in Science Policy, York University
- Kenneth Werbin, Associate Professor, Wilfrid Laurier University
- Kit Chokly, PhD Student in Communication Studies, McGill University
- Kristen Thomasen, Senior Chair in Law, Robotics, and Society and Associate Professor, Windsor Law
- Léo Bourgeois, Junior Staff Lawyer, initio Technology and Innovation Law Clinic, Schulich School of Law at Dalhousie University
- Leslie Regan Shade, Professor Emerita, Faculty of Information, University of Toronto
- Leslie Salgado, PhD Candidate University of Calgary
- Professor Lisa Austin, Jackman Faculty of Law, University of Toronto
- Lucie Guibault, Dalhousie University
- Lucy Suchman, Professor Emerita
- Luke Stark, Faculty of Information and Media Studies, Western University
- Madalyn Hay
- Marcel O’Gorman
- Marina Pavlovic, Associate Professor, University of Ottawa, Faculty of Law, Common Law Section
- Mark Cauchi, Department of Humanities, York University
- Martha Jackman, Professor Emerita, Faculty of Law, University of Ottawa
- Matthew Tegelberg, Associate Professor at York University
- Mél Hogan, Associate Professor, Film & Media Studies, Queen’s University
- Melissa Adler
- Natasha Goel, University of Toronto
- Natasha Malik, PhD Candidate at McMaster University
- Natasha Tusikov, Associate Professor, York University
- Nathaniel Laywine, York University
- Nicholas Fazio, York University
- Nick Gertler
- Noah Davis, initio Technology & Innovation Law Clinic
- Noura Aljizawi, Senior Researcher at the Citizen Lab, University of Toronto
- Ozgun Topak, Associate Professor, York University
- Paris Marx, Tech Won’t Save Us
- Patrick McCurdy, Professor, University of Ottawa
- Phil Rose
- Prem Sylvester, Simon Fraser University
- Prof Valerie Steeves
- Renée Sieber, Professor, McGill University
- Robert W Gehl, Ontario Research Chair of Digital Governance for Social Justice, York University
- Roch Tassé, former National Coordinator, International Civil Liberties Monitoring Group
- Ronald J. Deibert, O.C., O.O., Professor of Political Science and Director of the Citizen Lab, The Munk School, University of Toronto
- Rosel Kim, Lawyer
- Rowland Lorimer
- Ryan J Phillips
- Sara Bannerman, Canada Research Chair in Communication Policy and Governance, Professor, McMaster University
- Sawndra Skjerven
- Scott DeJong, Concordia University
- Shalaleh Rismani, McGill University
- Shoshana Magnet, Professor, University of Ottawa
- Siobhan O’Flynn, Assistant Professor, Teaching Stream, Canadian Studies Program, University of Toronto
- Sophie Toupin, Université Laval
- Stefanie Duguay, Associate Professor and Chair in Digital Intimacy, Gender and Sexuality
- Stuart Poyntz, Simon Fraser University
- Suzie Dunn, Assistant Professor, Dalhousie University Schulich School of Law, Director of the Law and Technology Institute
- Tamara Shepherd, Associate Professor, University of Calgary
- Thomas Wilson (MA student), SFU School of Communication
- Tracey P. Lauriault, Associate Professor, Critical Media and Big Data, School of Journalism and Communication, Carleton University
- Tracy Valcourt, Concordia University
- Ümit Kiziltan
- Vanessa T, Individual
- Vasanthi Venkatesh, Associate Professor Faculty of Law University of Windsor
- Vincent Wong, Assistant Professor University of Windsor
- Xavier Parent-Rocheleau, Associate Professor, HEC Montréal
- Yuan Stevens, Data & Society Research Institute; Independent Research Consultant and Advisor