Week of 2025-06-02

New Clearview AI Decision has Implications for OpenAI Investigation

Teresa Scassa

A recent decision by the Alberta Court of King's Bench has significant implications for AI regulation and privacy law in Canada. The court upheld the Alberta Privacy Commissioner's order that Clearview AI violated the province's Personal Information Protection Act (PIPA) by collecting and using facial images of Albertans without consent. However, the court also found that certain provisions of PIPA, specifically those requiring consent for collecting publicly available information, were unconstitutional as they infringed upon freedom of expression rights. This ruling underscores the challenges in balancing privacy protections with freedom of expression, particularly in the context of AI technologies that rely on large datasets. The decision may influence ongoing investigations into other AI entities, such as OpenAI, by highlighting the need for clear legal frameworks that address the complexities of data collection and usage in AI systems.

Deterring AI Hallucinations – the Ontario Courts Weigh In

Susan Wortzman | Adam Wells | McCarthy Tétrault

In the case of Ko v. Li (2025 ONSC 2965), the Ontario Superior Court addressed the misuse of generative AI in legal proceedings. Justice Myers found that a lawyer had cited fictitious cases generated by ChatGPT in both written and oral submissions, violating Rule 4.06.1(2.1) of the Rules of Civil Procedure, which mandates certification of the authenticity of cited authorities. The lawyer acknowledged the error, apologized, committed to completing Continuing Professional Development training in legal ethics and technology, and implemented new protocols to prevent future occurrences. Recognizing her forthrightness and corrective actions, Justice Myers deemed any potential contempt purged and withdrew the show cause order. This case underscores the judiciary's expectation that legal professionals must diligently verify AI-generated content and maintain responsibility for the accuracy of their submissions.

Bell announces major AI data centre building plans for British Columbia

Murad Hemmadi | The Logic

Bell Canada is embarking on a significant expansion into the artificial intelligence (AI) infrastructure sector by planning to construct six new data centres in British Columbia, totaling 500 megawatts of capacity. The first facility, a seven-megawatt centre in Kamloops, is set to open next month with San Francisco-based chipmaker Groq as its anchor tenant. Additionally, Bell is developing a 26-megawatt data centre for Thompson Rivers University, scheduled to come online in 2026. The remaining capacity will be distributed across two large facilities currently in advanced planning stages. This initiative aligns with the federal government's emphasis on AI as a key economic driver and reflects the growing demand for AI-specific infrastructure in Canada.

Hydro for Data Centers Beats Gas From Alberta, Says Bell AI Boss

Thomas Seal | Bloomberg

BCE Inc.'s Chief AI Officer, Marc Parent, has emphasized the company's preference for hydroelectric power over natural gas for its data centers, citing cost-effectiveness and environmental benefits. This decision comes despite Alberta's 2024 initiative to position itself as a power supplier for AI data centers, which did not align with BCE's sustainability goals. BCE's strategy includes constructing six new data centers in British Columbia, leveraging the province's abundant hydroelectric resources. This move reflects a broader industry trend towards cleaner energy sources for data infrastructure, as the environmental impact of data centers becomes a growing concern. By prioritizing renewable energy, BCE aims to reduce its carbon footprint and align with Canada's climate objectives.

European Commission Publishes Q&A on AI Literacy

Dan Cooper | Sam Jungyun Choi | Covington

On May 7, 2025, the European Commission released a Q&A document clarifying AI literacy obligations under Article 4 of the EU Artificial Intelligence Act (AI Act). This provision mandates that providers and deployers of AI systems ensure a sufficient level of AI literacy among their staff and other individuals involved in the operation and use of AI systems on their behalf. The guidance emphasizes that AI literacy programs should be tailored to the specific roles, technical knowledge, and educational backgrounds of the individuals, as well as the context in which the AI systems are used. Training should encompass a general understanding of AI, its functionalities, associated risks, and ethical considerations, particularly when dealing with high-risk AI applications. While external certification is not required, organizations are expected to document their compliance efforts. Although the obligations took effect on February 2, 2025, enforcement by national authorities will commence on August 3, 2026, with penalties for non-compliance determined by individual EU Member States.

AI literacy – the Commission’s pointers on building your programme | Data Protection Report

Marcus Evans | Rosie Nance | Norton Rose Fulbright

The European Commission's recent guidance on AI literacy under the EU AI Act emphasizes the importance of equipping all individuals involved in the operation and use of AI systems with sufficient knowledge and understanding. This includes not only employees but also contractors, service providers, and clients. The required level of AI literacy is context-specific, varying based on the individual's role, technical background, and the nature of the AI system in use. Organizations are encouraged to tailor their AI literacy programs accordingly, ensuring that even those with deep technical expertise are aware of the legal and ethical implications of AI deployment. Notably, there is no exemption for "human-in-the-loop" scenarios; in fact, such roles necessitate a higher level of AI literacy to ensure effective oversight. While enforcement of these obligations will commence on August 3, 2026, organizations are advised to proactively develop and implement AI literacy programs to align with the Commission's expectations.

Legal AI adoption outpaces expectations, especially in-house: Spellbook CEO

Branislav Urosevic | Canadian Lawyer Magazine

A recent article from Canadian Lawyer highlights the rapid adoption of AI in legal practices, particularly among in-house legal departments. According to a survey conducted by Spellbook and Counselwell, 97% of in-house lawyers using AI find it beneficial, with 34% rating current tools as "highly effective" and 63% as "somewhat effective." This swift integration is attributed to AI's maturity and its ability to enhance efficiency without the constraints of billable hours. Spellbook is set to launch "Spellbook Associate," an AI agent designed to handle tasks like multi-document drafting, aiming to further streamline legal workflows. While larger firms have been slower to adopt due to traditional billing models, the trend indicates a growing acceptance of AI's role in legal operations. 

Top 4 Ways to Legally Challenge a Decision Made by A.I.

Marco P. Falco | Torkin Manes

In a recent article from Torkin Manes LegalWatch, legal expert Marco P. Falco outlines four primary legal avenues for challenging decisions made by artificial intelligence (AI) within Canadian administrative law. Firstly, parties can issue demands for production to access the data and inputs that informed the AI's decision-making process, although courts have yet to mandate such disclosures. Secondly, concerns about transparency and potential biases in AI algorithms can be grounds for contesting decisions, especially when these systems may inadvertently perpetuate discrimination. Thirdly, individuals might invoke the doctrine of legitimate expectations, arguing that they anticipated human oversight in decision-making processes and that the absence thereof constitutes a procedural fairness breach. Lastly, the fundamental right to be heard may be compromised if AI systems replace human adjudicators, raising questions about the adequacy of such automated processes in ensuring fair hearings. These challenges underscore the pressing need for Canadian administrative law to evolve in response to the growing integration of AI in decision-making roles.

AI governance in aged care

Asadullah Rathore | IAPP

A recent IAPP article by Asadullah Rathore highlights the growing importance of AI governance in aged care facilities, especially as the global elderly population is projected to reach 2.1 billion by 2050. To address staffing shortages, some facilities are integrating AI agents to assist with tasks like medication reminders and social interaction. However, the absence of comprehensive AI regulations in countries like Australia necessitates the adoption of international standards such as ISO/IEC 42001 to ensure responsible AI deployment. Implementing this framework involves a structured approach encompassing definition, implementation, maintenance, and continuous improvement phases. This case underscores the critical need for robust AI governance frameworks to ensure ethical and effective AI integration in sensitive sectors like aged care.

Don’t fall for Sam Altman’s biometric booby trap

John Mac Ghlionn | The Hill

A recent opinion piece in The Hill raises significant concerns about Worldcoin's approach to biometric data collection, particularly its reliance on iris scans for digital identity verification. Critics argue that this method poses substantial privacy risks, especially given the immutable nature of biometric data and the potential for misuse or unauthorized access. The article emphasizes the ethical implications of incentivizing individuals, often in economically disadvantaged regions, to surrender sensitive personal information in exchange for cryptocurrency. Furthermore, it highlights the lack of comprehensive regulatory frameworks to govern such practices, suggesting that without stringent oversight, projects like Worldcoin could set concerning precedents for data privacy and individual autonomy.

Number of kids who died under Ontario’s care network reaches new high

Isaac Callan | Colin D’Mello | Global News

A recent Global News investigation reveals that 134 children died in connection with Ontario’s child welfare system in 2023—the highest number since the province began tracking such data in 2020. This figure includes children actively in care, those with open child protection files, and those whose cases had closed within the previous 12 months. Notably, 32% of these deaths were classified as having unknown causes, raising concerns about transparency and accountability within the system. The data also shows that infants and teenagers were disproportionately affected, with 37 infants and 35 teens among the deceased. In response, the Ontario government has initiated audits of children’s aid societies, focusing on financial oversight rather than directly addressing the rising death toll, prompting calls from advocates and opposition members for more comprehensive reforms.

Crime Stoppers isn’t publishing its report into child sexual abuse on Pornhub

Martin Patriquin | The Logic

A recent investigation by The Logic reveals that Crime Stoppers International (CSI) has not fulfilled its commitment to publish a report evaluating the trust and safety practices of Aylo, the parent company of Pornhub. Despite earlier assurances of transparency, CSI now cites a confidentiality clause as the reason for withholding the report's findings. This development has led to criticism from various stakeholders, including Crime Stoppers Australia, which terminated its affiliation with CSI over concerns about the partnership with Aylo. The situation raises questions about the effectiveness of self-regulation in the adult content industry and the role of non-profit organizations in holding such companies accountable.

Federal privacy commissioner launches investigation into NS Power data breach

Sean Matt | CTV News

The federal Privacy Commissioner of Canada has initiated an investigation into a significant data breach at Nova Scotia Power, which compromised the personal information of approximately 280,000 customers. The breach, suspected to be a ransomware attack, resulted in unauthorized access to sensitive data, including about 140,000 social insurance numbers. Some affected individuals have reported their information appearing on the dark web, and at least one couple alleges a financial loss of $30,000 due to the incident. Despite the severity of the breach, legal experts suggest that a class-action lawsuit is unlikely under Nova Scotia's current laws. This situation underscores the pressing need for robust cybersecurity measures and comprehensive privacy legislation to protect Canadians' personal information.

A MoFo Privacy Minute Q&A: Addressing the Risks of Unstructured Data

Miriam H. Wugmeister | Joshua R. Fattal | Kathryn Taylor | Morrison Foerster

A recent MoFo Privacy Minute Q&A highlights the escalating privacy and security risks associated with unstructured data—such as emails, Slack messages, and shared documents—which often lack formal oversight yet contain sensitive information. The article identifies three primary concerns: increased data breach risks due to unmanaged sensitive content, unintended disclosures via generative AI tools accessing unstructured repositories, and challenges in complying with data transfer regulations, particularly under recent U.S. Department of Justice rules restricting access to sensitive personal data by foreign entities. To mitigate these risks, the authors recommend implementing monitoring guidelines, conducting thorough data due diligence, applying tiered access controls, and enforcing strict document retention and deletion policies. These measures aim to enhance data governance and ensure compliance with evolving privacy laws.

Carney's plan for digital government could find savings, but just as many headaches

Simon Tuck | National Post

A recent National Post article discusses Mark Carney's proposal to modernize Canada's government services through digital transformation. Carney envisions a streamlined, tech-driven public sector that could potentially lead to significant cost savings and improved efficiency. However, the article highlights potential challenges, including the complexities of overhauling existing bureaucratic systems, concerns about data privacy and security, and the risk of excluding individuals who lack digital literacy or access. The piece underscores the importance of balancing innovation with inclusivity and robust safeguards to ensure that digital government initiatives serve all Canadians effectively.

BlackBerry wins approval to handle the U.S. government’s most sensitive unclassified data

Joanna Smith | The Logic

BlackBerry has achieved FedRAMP High Authorization for its AtHoc crisis communications platform, marking a significant milestone in secure communications. This certification allows AtHoc to handle the U.S. government's most sensitive unclassified cloud-based data, including Controlled Unclassified Information (CUI), which, if compromised, could cause severe harm to national security or public safety. The rigorous authorization process involved compliance with over 400 federal security and privacy standards, positioning BlackBerry as a trusted provider for critical government communications. Already in use by over 75% of U.S. federal agencies, AtHoc's enhanced capabilities are expected to facilitate secure cross-border collaboration between Canada and the U.S., particularly in areas like emergency response and national security.

Ottawa rushes to build its own AI translator as government use of free tools soars

David Reevely | Murad Hemmadi | The Logic

The Canadian federal government is accelerating the development of an in-house AI translation tool, dubbed PSPC Translate, in response to widespread use of unsecured, free online translation services by public servants handling potentially sensitive materials. This initiative, led by Public Services and Procurement Canada, aims to centralize translation efforts, enhance data security, and reduce reliance on external tools that may compromise confidential information. The move addresses a notable 17% decline in demand for the government's Translation Bureau services, attributed to departments either creating their own tools or resorting to free online options, leading to inconsistent practices and potential security risks. PSPC Translate is positioned as a "lighthouse project" under the federal AI strategy, intended to serve as a scalable model for AI integration across government operations. However, union representatives have raised concerns about increased workloads and potential job reductions, emphasizing the need for balanced implementation that safeguards both efficiency and employment.

Ontario privacy commissioner publishes privacy handbook for small health care organizations

Jaime Cardy | Dentons Data

The Information and Privacy Commissioner of Ontario has released a new Privacy Management Handbook to help small health care organizations meet their obligations under PHIPA. Aimed at solo practitioners and resource-limited providers, the guide offers a "right-sized" framework for privacy governance, policy development, safeguards, and operational compliance. It emphasizes avoiding outdated technologies like fax machines, urges caution with AI and email use, and encourages regular monitoring of privacy practices. The handbook also includes practical tools such as sample policies and breach response templates. This initiative supports stronger privacy management in Ontario’s fragmented and often under-resourced health care sector.

AssistIQ raises US$11.5M Series A for medical supply chain AI

Murad Hemmadi | The Logic

Montreal-based healthtech startup AssistIQ has secured $11.5 million in Series A funding to expand its AI-driven platform that streamlines surgical supply tracking in hospitals. Led by Battery Ventures with participation from Tamarind Hill, the round will support the deployment of AssistIQ’s computer vision system, which automates the capture of medical supplies and implants used during procedures, replacing manual barcode scans and paper logs. The platform integrates with major EHR and ERP systems, including Epic, enabling hospitals to improve charge capture accuracy, reduce waste, and enhance operational efficiency. Notably, New York’s Northwell Health is rolling out AssistIQ across its operating rooms, while Owensboro Health in Kentucky has already integrated the system with Epic. AssistIQ’s technology is delivering measurable outcomes such as 98%+ charge capture rates and up to 25% supply savings, positioning it as a leader in AI-powered surgical supply chain optimization.

Toronto’s ProteinQure raises $11M Series A to push cancer-drug candidate to clinical trials

David Reevely | The Logic

Toronto-based biotech startup ProteinQure has secured $11 million in Series A funding to advance its lead drug candidate, PQ203, into clinical trials targeting triple-negative breast cancer (TNBC). PQ203 is a first-in-class peptide-drug conjugate designed using ProteinQure's proprietary AI platform, ProteinStudio™, which integrates machine learning, structural biology, and atomic-level simulations to engineer therapeutic peptides with high specificity. The upcoming multicenter Phase 1 trial, slated to begin in Q3 2025, will enroll 70–100 patients across leading cancer centers in Canada and the U.S., including Princess Margaret Cancer Centre, MD Anderson Cancer Center, and Yale Cancer Center. This milestone positions ProteinQure at the forefront of AI-driven peptide therapeutics, with additional pipeline programs underway in neurology and nephrology.

AI threatens Indigenous data sovereignty and digital self-determination

Margaret Yun-Pu Tu | Policy Options

A recent article by Margaret Yun-Pu Tu in Policy Options highlights the pressing issue of AI technologies threatening Indigenous data sovereignty and digital self-determination. The piece emphasizes that AI systems often utilize Indigenous languages and knowledge without consent, echoing historical patterns of exploitation. It advocates for the integration of Indigenous knowledge systems into AI governance and the adoption of ethical data practices, such as the OCAP principles—Ownership, Control, Access, and Possession—to ensure Indigenous communities have authority over their data. The article also points to collaborative efforts between Canada and Taiwan as opportunities to co-create models of Indigenous-centered innovation, promoting cultural resilience and ethical AI development. Ultimately, it calls for structural shifts that position Indigenous communities as equal partners in shaping AI tools that impact their futures.

Ontario police may have secretly used controversial Israeli spyware, report finds

Kevin Maimann | CBC News

A recent investigation by the Citizen Lab at the University of Toronto has uncovered potential connections between the Ontario Provincial Police (OPP) and Paragon Solutions, an Israeli spyware firm known for its military-grade surveillance tool, Graphite. While the OPP did not confirm or deny the use of such spyware, they stated that any surveillance tools are employed lawfully under judicial authorization for serious criminal investigations. This revelation raises concerns about the transparency and oversight of spyware use in Canada, especially given past controversies involving the Royal Canadian Mounted Police's admission of using similar technologies. The Citizen Lab's findings highlight a growing ecosystem of spyware capabilities among Ontario-based police services, prompting calls for legislative reform to address privacy, security, and human rights implications.

What to know about Israeli spyware allegedly used by Ontario police

Sean Boynton | Global News

Global News breaks down the Citizen Lab findings linking the Ontario Provincial Police (OPP) to Paragon Solutions, an Israeli spyware firm known for its military-grade surveillance tool, Graphite. Researchers traced the IP address of a Canadian-based Paragon customer to the OPP's general headquarters in Toronto. The OPP neither confirmed nor denied using the tool, stating only that any surveillance technologies are employed lawfully under judicial authorization for serious criminal investigations. The explainer situates the findings against the Royal Canadian Mounted Police's past admission that it used similar technologies, and notes growing calls for legislative reform to address the privacy, security, and human rights implications of a widening spyware ecosystem among Ontario-based police services.

Revolutionizing offender management: How Ontario’s police are using technology to improve public safety

Brittani Schroeder | Blue Line

A recent article in Blue Line magazine highlights how Ontario police services are leveraging technology to enhance offender management and public safety. The initiative focuses on implementing advanced data analytics and integrated digital platforms to streamline tracking and coordination among law enforcement agencies. This technological shift aims to improve real-time information sharing, reduce recidivism, and optimize resource allocation. By embracing these innovations, Ontario's police forces are working towards more efficient and proactive approaches to community safety.

Barrie police opt for technology and scrap pen and paper to improve service

Kim Phillips | CTV News

The Barrie Police Service has become one of the first in Ontario to implement electronic note-taking, replacing traditional pen-and-paper methods with digital tools. Officers now use police-issued cell phones to take voice notes, enhancing efficiency and accuracy in documentation. This technological shift aims to streamline operations and improve service delivery to the community. The initiative reflects a broader trend in law enforcement towards embracing digital solutions for better data management and operational effectiveness.

York, Peel police now using facial recognition technology

CBC News

York and Peel regional police services have adopted facial recognition technology to enhance their investigative capabilities, particularly in identifying suspects from surveillance footage. The technology allows officers to compare images from crime scenes against existing databases to generate potential matches. While the police assert that facial recognition is used strictly for investigative leads and not for real-time surveillance, concerns have been raised regarding privacy implications and the potential for misidentification, especially among marginalized communities. Civil liberties advocates emphasize the need for clear policies, oversight, and transparency to ensure the technology is used responsibly and does not infringe upon individuals' rights. The deployment of facial recognition by law enforcement agencies continues to spark debate over balancing public safety with civil liberties.

Live facial recognition cameras may become ‘commonplace’ as police use soars

Daniel Boffey | Mark Wilding | The Guardian

A recent investigation by The Guardian and Liberty Investigates reveals that live facial recognition (LFR) technology is rapidly expanding across police forces in England and Wales, with nearly 4.7 million faces scanned in 2024—double the previous year's figure. The Metropolitan Police and seven other forces have deployed mobile LFR vans over 250 times, and the first fixed-location LFR cameras are set to be trialed in Croydon later this summer. Despite its widespread use, there is currently no specific legislation governing the technology, leading to concerns over self-regulation and potential biases, particularly against Black individuals. Campaigners and oversight bodies highlight the need for a legislative framework to address privacy, fairness, and scope of data access, especially as police are increasingly utilizing extensive databases such as passport images and immigration records. A new national system, "strategic facial matcher," is being developed to streamline access to multiple image databases. Supporters argue the technology boosts policing efficiency, citing over 500 arrests linked to LFR, but critics call for more transparency and safeguards.

BCCA confirms general damages available without proof of harm for breach of Privacy Act

Rebecca von Rüti | Keri Bennett | Swetha Popuri | Sajan Dhindsa | DLA Piper

In a landmark decision, the British Columbia Court of Appeal (BCCA) ruled in Insurance Corporation of British Columbia v. Ari that general damages can be awarded for breaches of the province’s Privacy Act without requiring proof of individual harm. The case involved a former ICBC employee who accessed and sold personal data, leading to targeted attacks on affected individuals. The court upheld a $15,000 per-person award to class members, emphasizing that serious, intentional privacy violations warrant compensation to vindicate rights and deter misconduct. This ruling reinforces the quasi-constitutional status of privacy rights in Canada and signals that organizations may face significant liability for privacy breaches, even absent demonstrable harm. It underscores the necessity for robust internal controls to prevent unauthorized data access.

GDPR matchup: Australia's Privacy Act 1988

Tim de Sousa | IAPP

Australia’s Privacy Act 1988 and the EU’s GDPR both protect personal data but differ significantly in scope and enforcement. The Privacy Act applies mainly to larger entities and is built around 13 Australian Privacy Principles, while the GDPR applies universally to those handling EU residents' data and offers more detailed protections. The GDPR requires explicit, informed consent and imposes strict rules on cross-border data transfers, whereas the Privacy Act allows for implied consent and has more flexible transfer conditions. Enforcement under the GDPR is also more robust, with penalties reaching up to €20 million or 4% of global turnover, compared to the more limited powers of Australia’s privacy regulator. Overall, the GDPR offers a more comprehensive and globally impactful framework.

Ontario budget 2025 would change how speed and red-light cameras, community safety zones operate

Nick Westoll | City News

The 2025 Ontario budget proposes significant changes to the use of automated speed enforcement (ASE) and red-light cameras. Municipalities would be mandated to increase signage and publicly disclose camera locations to enhance transparency. The province also seeks authority to limit infractions for minor speeding and collect more data from municipalities on camera usage. Additionally, the government aims to establish new criteria for designating community safety zones, where speeding fines can be doubled. Premier Doug Ford criticized the current deployment of speed cameras, suggesting they serve more as revenue tools than safety measures.

Montana Amends Law to Cover Collection and Use of Neural Data

Liisa Thomas | James O’Reilly | Sheppard Mullin

Montana has enacted Senate Bill 163, amending its Genetic Information Privacy Act to include protections for neural data, effective October 1, 2025. This legislation defines "neurotechnology data" as information capable of recording, interpreting, or altering responses of an individual's nervous system to external stimuli. Entities handling such data must obtain explicit consent for its collection, use, and disclosure, and provide clear privacy notices outlining their data practices. The law exempts de-identified neural data used for research, provided it cannot be reasonably linked back to individuals. Additionally, it imposes strict conditions on government access to neural data, requiring a search warrant or investigative subpoena. Montana joins California and Colorado in pioneering state-level legislation that specifically addresses the privacy of neural data.

NATO reportedly floated including cybersecurity spending in its defence spending target as it prepares to meet with allies in June

Donato Paolo Mancini | Andrea Palasciano | Daniel Basteiro | Jasmina Kuzmanovic | Bloomberg

NATO is considering a significant shift in its defense spending framework by proposing a new target of 5% of member nations' GDP, with 3.5% allocated to traditional military expenditures and 1.5% to broader security-related areas, including cybersecurity. This proposal, expected to be discussed at the upcoming NATO summit in The Hague on June 24–26, 2025, reflects the alliance's recognition of the growing importance of cyber threats and hybrid warfare in modern defense strategies. The inclusion of cybersecurity and related domains aims to address evolving security challenges and ensure a more comprehensive approach to collective defense.

Survey of Online Harms in Canada 2025

Angus Lockhart | The Dais

The Dais' 2025 Survey of Online Harms in Canada reveals a significant increase in Canadians' exposure to online threats, including misinformation, hate speech, identity fraud, and non-consensual image sharing. Notably, 67% of respondents reported encountering deepfakes at least a few times annually, up from 60% the previous year. Marginalized groups—such as racialized individuals, newcomers, people with disabilities, and 2SLGBTQ+ communities—experience hate speech at rates 50% to 100% higher than others. Despite widespread use of platforms like YouTube, Facebook, and TikTok, public trust in these services remains low, with many users finding platform tools ineffective in mitigating harms. A strong majority (68%) of Canadians prioritize reducing online hate and misinformation over protecting freedom of expression, and 69% support government intervention to mandate responsible platform behavior.

The Future of Canada’s Online Safety Agenda

The DAIS

The Dais' May 2025 report, The Future of Canada’s Online Safety Agenda, underscores Canada's lag in implementing robust online safety regulations compared to global counterparts. The report highlights the stalled progress of key legislative initiatives, notably the Online Harms Act (Bill C-63) and the Digital Charter Implementation Act (Bill C-27), both of which died on the Order Paper following the prorogation of Parliament. These bills aimed, respectively, to address harmful online content and to modernize privacy law. The report emphasizes the need for renewed momentum in establishing comprehensive digital safety frameworks to protect Canadians in the evolving online landscape.

2025 Mid-Year Update: Five Privacy Law Developments

Jaime Cardy | Dentons Data

The Information and Privacy Commissioner of Ontario (IPC) has released a Privacy Management Handbook for Small Health Care Organizations to assist solo practitioners and small clinics in complying with the Personal Health Information Protection Act (PHIPA). Recognizing the unique challenges faced by these entities, the handbook offers a "right-sized" approach to privacy management, emphasizing proportionality to the organization's scale and operations. Key components include establishing governance structures, developing comprehensive privacy policies, implementing appropriate safeguards, operationalizing these policies through staff training and procedures, and conducting ongoing monitoring and reviews. The handbook also provides practical tools such as sample job descriptions, privacy policies, and breach notification guidelines to support effective implementation.

Québec Access to Information Commission to stop publishing list of privacy incident reports

Antoine Guilmain | Marc-Antoine Bigras | Gowling WLG

A recent article from Gowling WLG reports that Quebec's privacy regulator, the Commission d’accès à l’information (CAI), has stopped publishing its list of privacy incident reports received from organizations, a practice that had supported transparency around breach activity in the province. This change comes amid the rollout of Quebec’s Law 25, which significantly expanded privacy protections and enforcement powers, including mandatory incident reporting. Critics argue that withholding this information undermines public accountability and obscures how the CAI is using its new authorities. While the CAI has indicated the data may appear in future broader communications, no timeline has been given. The reduced reporting raises concerns about regulatory opacity at a time when privacy enforcement is more critical than ever.

New Decision on In-Vehicle Cameras from the Quebec Privacy Regulator

CAI

The Commission d’accès à l’information du Québec (CAI) has issued a decision concerning 13859380 Canada Inc. (Crane Supply) regarding its use of AI-enabled video surveillance in delivery vehicles. The CAI found that while the surveillance system's objectives were legitimate for heavy trucks, its application in pickup trucks lacked sufficient justification. The Commission concluded that the data collection practices were not adequately minimized, potentially violating Quebec's privacy laws. This decision underscores the necessity for organizations to ensure that surveillance measures are proportionate and respect individuals' privacy rights.

French language requirements of Bill 96 and June 1, 2025: Common Misconceptions

Véronique Wattiez Larose | Eugenia (Evie) Bouras | Vino Wijeyasuriyar | McCarthy Tetrault

Effective June 1, 2025, Quebec’s Bill 96 requires businesses to enhance the prominence of French in signage, product labelling, and workplace language use. Public signs must show French text that is at least twice the size of any non-French text, including trademarks, and product packaging must include French descriptions, even if a trademark is in another language. Companies with 25 or more employees must also register with the Office québécois de la langue française and begin a francization process. While there is a grace period for certain pre-existing products until 2027, businesses should act now to ensure compliance. These changes aim to reinforce French as the language of commerce in Quebec.

International transfers in the limelight again with Belgian decision on FATCA data transfers to the US

Lily Latimer Smith | Slaughter and May

On April 24, 2025, the Belgian Data Protection Authority (DPA) issued Decision No. 79/2025, ruling that the automatic transfer of Belgian residents' financial data to the U.S. Internal Revenue Service (IRS) under the Foreign Account Tax Compliance Act (FATCA) violates the EU's General Data Protection Regulation (GDPR). The DPA found that these transfers lack adequate safeguards, breach principles of purpose limitation and data minimization, and fail to provide data subjects with sufficient transparency and enforceable rights. Notably, the DPA rejected the argument that the 2014 FATCA agreement remains valid under GDPR's Article 96, concluding that it did not comply with EU law even at the time of its inception. As a result, the Belgian Federal Public Service Finance (FPSF) has been ordered to bring its data transfer practices into GDPR compliance within one year, including conducting a Data Protection Impact Assessment and enhancing transparency measures. This decision underscores the growing scrutiny of international data transfers and the imperative for robust data protection measures in cross-border agreements.

Major Quebec report suggests ban on social media for kids under 14

Martin Patriquin | The Logic

A cross-party commission in Quebec has released a 155-page report recommending a ban on social media access for children under 14, citing concerns over the adverse effects of excessive screen time on youth health and development. The report also proposes regulating online influencers, banning micropayments in video games aimed at minors, and keeping gyms and parks open later to provide alternatives to screen use. Notably, both TikTok and Facebook declined to participate in the commission's hearings, drawing criticism from officials. Public support for these measures is strong, with over 90% of Quebecers backing the province's school cellphone ban, including 76% of teens aged 14 to 17. The Quebec government is now reviewing the commission's 56 recommendations for potential implementation.

In Ontario, the newly enacted Digital Platform Workers' Rights Act, 2022 is coming into force

Florence So | IAPP

A recent IAPP Canada update highlights significant developments in Canadian privacy and data governance. At the federal level, the appointment of Evan Solomon as Minister of Artificial Intelligence and Digital Innovation signals a potential shift in digital policy, though privacy was notably absent from the latest Speech from the Throne. In Quebec, the Commission d’accès à l’information (CAI) has ceased publishing lists of organizations that report data breaches, citing concerns over cybersecurity risks and the integrity of ongoing investigations. Additionally, the CAI ruled that Crane Supply's use of in-vehicle surveillance cameras was excessive, mandating stricter data minimization practices. Meanwhile, Ontario's Digital Platform Workers' Rights Act, effective July 1, 2025, grants gig workers enhanced access to information about compensation and performance metrics, with non-compliance penalties reaching up to CAD 500,000.

AI in HR is booming – but are we losing our humanity?

Chris Davis | Human Resources Director

The HCAMag article "AI in HR Is Booming – But Are We Losing Our Humanity?" explores the rapid adoption of artificial intelligence in human resources and the potential implications for human connection within organizations. Christine Vigna, Chief People Officer at Dejero, reflects on her experience at an AI masterclass where she realized the importance of maintaining human elements in HR practices. She emphasizes that while AI can enhance efficiency by automating repetitive tasks, it cannot replicate human emotions such as empathy and joy. Vigna advocates for a balanced approach where technology serves to augment human capabilities without eroding the cultural and emotional aspects that are vital to employee engagement and organizational health. Her perspective underscores the need for HR leaders to implement AI thoughtfully, ensuring that technological advancements do not come at the expense of human-centric values.

AI and Employment

Ryli McDonald | Ryan Steidl | Constangy

The Constangy Cyber Advisor article "AI and Employment" highlights the growing integration of artificial intelligence in human resources functions such as hiring, monitoring, and performance evaluations. It emphasizes that while AI can enhance efficiency, it also introduces legal risks, particularly concerning data privacy and potential discrimination. The article notes that current regulations are limited and often focus on high-risk systems and algorithmic discrimination, with some states like New York, Colorado, Illinois, and Maryland implementing laws requiring transparency and consent in AI-driven hiring processes. Employers are advised to conduct risk assessments, ensure AI tools are free from biases, and maintain compliance with evolving legal standards to mitigate potential liabilities.

Massachusetts Employers: Include Lie Detector Notice in Your Job Applications

Alice Kokodis | Stephen Melnick | Littler

Massachusetts law prohibits employers from requiring or using lie detector tests as a condition of employment and mandates that all job applications include a notice informing applicants of this right. The law defines “lie detector” broadly, potentially including AI-based honesty assessments or other screening technologies. Employers who fail to include the required notice risk statutory damages of at least $500 per violation, with the possibility of treble damages and legal fees. Recent litigation trends suggest increased enforcement of this provision. Employers are urged to review their application forms and hiring tools for compliance.
