Week of 2025-06-16
Ford government condemned for deleted Greenbelt emails, use of codewords
Allison Jones | CBC News
Ontario’s privacy commissioner revealed that Premier Doug Ford’s government deliberately used code words—such as “special project,” “GB,” or “G*”—to obscure communications about the Greenbelt housing initiative. These evasive terms, combined with the use of personal email accounts and widespread deletion of records, severely hindered Freedom of Information searches and undermined legal record-keeping obligations. The use of the wildcard “G*” in communications further masked key documents, making them essentially undetectable in standard searches. This pattern of obfuscation and fragmented documentation erodes transparency, damages public trust, and may trigger further legal and criminal investigations into government conduct.
Ford government broke ‘legal’ record-keeping rules during Greenbelt scandal
Isaac Callan | Colin D’Mello | Global News
Ontario’s Information and Privacy Commissioner found that Premier Doug Ford’s government intentionally used opaque code words—such as “special project,” “GB,” and especially “G*” (a search-disrupting wildcard)—to obscure communications about the 7,400-acre Greenbelt land decision, making them virtually impossible to locate via routine searches. Staff also relied heavily on personal email accounts and deleted communications, resulting in a near-total absence of official records documenting who made key decisions or what was discussed. The Commissioner emphasized that these actions violated legal record‑keeping obligations, seriously undermining government transparency and eroding public trust, and prompting the RCMP to expand its investigation in light of the reported permanent deletions. While the government claims it is enhancing training and compliance, critics insist that stronger accountability measures and possible legal consequences are essential to restore integrity.
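To see why a codeword like “G*” frustrates keyword searching, consider a minimal Python sketch (the documents and the search logic here are invented for illustration, not drawn from the Commissioner’s report): any search layer that compiles query terms as regular expressions reads “G*” as “zero or more G characters,” which matches essentially every record, and a literal match requires an escaped query that staff would only think to run if they already knew the codeword existed.

```python
import re

# Hypothetical document snippets; only the first uses the codeword.
docs = [
    "Notes re: G* site boundaries and servicing",
    "Unrelated: GDP forecast attached",
]

# A search layer that treats terms as regex patterns reads "G*" as
# "zero or more G characters", which matches every string (including
# an empty match), burying the codeword in noise.
naive = re.compile("G*")
print([d for d in docs if naive.search(d)])    # both documents "match"

# Escaping recovers a literal search for the two-character codeword,
# but only helps an analyst who already knows it exists.
literal = re.compile(re.escape("G*"))
print([d for d in docs if literal.search(d)])  # only the first document
```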
Strange cone of silence over FOI request with City of Hamilton
John Best | The Bay Observer
A Hamilton resident, Kelly Oucharek, and the Bay Observer submitted a Freedom of Information request on January 5 seeking documents and correspondence related to tiny shelters near the Barton-Tiffany encampment. More than five months later, the request still hasn’t been acknowledged, violating Ontario’s FOI law requiring a response within 30 days. The Information and Privacy Commissioner (IPC) has initiated an expedited appeal, citing repeated ignored inquiries and voicemails as evidence of the city's sluggish handling and lack of transparency. Critics warn this delay reflects a deeper “cone of silence” culture within city hall, worsening public distrust in a municipality already ranked low for financial openness.
Alberta's new access to information rules come into effect
Matthew Black | Edmonton Sun
Alberta's revamped access and privacy laws took effect June 11, 2025, replacing the old FOIP framework with two modernized statutes: the Access to Information Act and the Protection of Privacy Act. Under the new Access to Information Act, timelines for government responses to information requests have doubled—now 30 business days—and communications involving political staff are now exempt. Appeals must go through a public body first before being escalated to the Information and Privacy Commissioner. The Protection of Privacy Act introduces "privacy by design," bans the sale of personal information by public bodies, mandates breach notifications when harm is possible, and requires Privacy Impact Assessments for any new or modified programs that handle personal data. It also enforces new, stringent penalties—up to $750,000 per organization for misuse of personal data and up to $1 million for improper use of non-personal data—marking some of the strictest privacy enforcement in Canada.
Mark Carney wants to use AI to supercharge Canada’s economy
Allan Woods | The Toronto Star
Prime Minister Mark Carney has declared that artificial intelligence will be central to revitalizing Canada’s productivity and economic competitiveness, even establishing a dedicated Ministry of Artificial Intelligence to spearhead the effort. However, concrete details on how AI will be integrated—whether in public services, business innovation, or regulatory frameworks—remain vague and underdeveloped. Supporters emphasize AI’s potential to streamline administrative tasks, enhance healthcare decision-making, and create “super‑workers” by amplifying workforce capabilities. Yet experts and industry voices warn that without a clear roadmap, there is a risk of overreliance on AI in complex roles and of premature rollouts, echoing concerns raised during the country’s earlier e-commerce and digital-transformation waves.
Canada Launches Landmark National Program to Equip Nonprofits with AI for Social Impact
Globe Newswire
Canada has launched the RAISE pilot (Responsible AI Adoption for Social Impact)—a year-long national initiative aimed at helping nonprofits adopt artificial intelligence in ethical and impactful ways. The program includes an AI Accelerator for five major charities, targeted upskilling of 500 nonprofit professionals, and the creation of a governance framework to guide responsible use of AI in the sector. It also seeks to address Canada's low AI adoption rate among nonprofits, currently at just 4.8%, by providing tools and training focused on inclusion, equity, and measurable social impact. Co-funded by Canada’s Global Innovation Cluster (DIGITAL), RAISE positions the country as a leader in human-centered, socially responsible AI deployment across its 170,000 nonprofit organizations.
Internet Jurisdiction in Clearview AI Case Analysis
Barry Sookman
In Clearview AI Inc v Alberta (Privacy Commissioner), the Alberta Court of King’s Bench determined that Alberta’s private-sector privacy law (PIPA) applies to a U.S.-based firm simply because it offered services to Albertans, constituting "carrying on business" in the province. The court further held that PIPA’s jurisdiction extends to any handling of Albertans’ personal data—even if the data was collected, used, or disclosed entirely outside Alberta. Legal commentary notes this expansive interpretation deviates from established Canadian jurisdictional doctrine, which typically limits provincial laws to activities with a "real and substantial connection" within the province. The ruling raises precedent-setting questions about whether provincial privacy laws can impose extraterritorial obligations based on business relationships rather than physical location. Critics caution that such broad territorial reach may challenge federalism principles and complicate how Canadian companies and international tech firms manage privacy law compliance.
UK AI code to facilitate automated decision making
Jonathan Kirsop | Lauro Fava | Pinsent Masons
The UK’s Information Commissioner’s Office (ICO) will publish a new statutory code of practice within the next year, offering practical guidance on deploying AI systems—especially automated decision-making (ADM)—in ways that uphold transparency, explainability, fairness, and legal rights. This follows reforms in the planned Data (Use and Access) Bill that relax current restrictions on ADM, allowing broader use as long as safeguards exist for human intervention, contestability, and enhanced transparency. The code is intended to clarify how organizations can responsibly implement AI without undermining existing data protection rules, particularly regarding bias, discrimination, and individual rights under GDPR. While the move is designed to support economic growth and AI innovation, critics caution that ensuring meaningful human oversight and robust accountability will be essential to prevent erosion of privacy and fairness.
DSIR Deeper Dive: Artificial Intelligence – Banner Years (and Counting) for AI Guidance and Regulation
James A. Sherer | Nichole L. Sterling | Brittany A. Yantis | Baker Hostetler
The Data Counsel “DSIR Deeper Dive” report highlights that 2024–2025 has been a watershed period for AI regulation and guidance, particularly in the U.S. and Europe. Key developments include emerging frameworks on AI accountability, transparency, and risk governance, with regulators increasingly focusing on bias mitigation, auditability, and consent mechanisms. The momentum reflects a shift from theoretical discussions to actionable enforcement—prompting companies to reassess AI lifecycle management, risk assessments, and compliance strategies. This report underscores that organizations must actively adapt to evolving legal expectations and embed privacy-by-design and governance controls to stay ahead.
What is Agentic AI? A Primer for Legal and Privacy Teams
Kyle Fath | Squire Patton Boggs
Agentic AI refers to a new class of intelligent systems—often powered by large language models (LLMs)—capable of independently planning, adapting, and executing multi-step tasks across digital environments with minimal human intervention. Unlike traditional AI tools that passively respond to prompts, these agents can autonomously interact with software, APIs, emails, databases, and even enterprise systems to complete workflows end to end. While this technology offers powerful automation benefits—such as drafting memos, processing emails, updating records, or initiating actions—it also raises novel legal and privacy concerns, including liability ambiguity, unauthorized access to sensitive data, and diffusion of responsibility. Legal and privacy teams will need to grapple with emergent risks like unintended actions, contract enforceability, data protection obligations, and accountability frameworks. Proactive strategies—such as robust oversight mechanisms, risk-based governance, privacy-by-design principles, and clear contractual terms—are essential to responsibly deploy agentic AI in compliance-sensitive environments.
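As a concrete, deliberately toy illustration of the plan-then-act loop described above, the Python sketch below hard-codes a two-step plan and stubs out the tools; every name in it is hypothetical rather than drawn from any real agent framework. The approval prompt models the human-in-the-loop oversight the article says legal and privacy teams should insist on before an agent touches email, records, or enterprise systems.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Toy "tools" standing in for the email, database, and API
# integrations a real agent would call.
def draft_memo(topic: str) -> str:
    return f"[draft memo on {topic}]"

def update_record(record_id: str) -> str:
    return f"record {record_id} updated"

TOOLS: Dict[str, Callable[[str], str]] = {
    "draft_memo": draft_memo,
    "update_record": update_record,
}

@dataclass
class Step:
    tool: str
    arg: str

def plan(goal: str) -> List[Step]:
    # A production agent would ask an LLM to decompose the goal;
    # the plan is hard-coded here to keep the sketch self-contained.
    return [Step("draft_memo", goal), Step("update_record", "CRM-42")]

def run_agent(goal: str) -> List[str]:
    results = []
    for step in plan(goal):
        # Human-in-the-loop gate: surface each proposed action for
        # review before executing it (auto-approved in this demo).
        print(f"approve {step.tool}({step.arg!r})? [auto-approved]")
        results.append(TOOLS[step.tool](step.arg))
    return results

print(run_agent("Q3 vendor review"))
```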
Ontario's IPC works toward balancing youth safeguards, empowerment
Lexie White | IAPP
Ontario’s Information and Privacy Commissioner is advancing a multifaceted youth privacy strategy aimed at striking a balance between safeguarding children online and empowering them as active digital citizens. This initiative includes educational campaigns targeting organizations, parents, and minors, alongside tools like the Digital Privacy Charter for schools and MediaSmarts-based Privacy Pursuit lesson plans. A Youth Advisory Council and Youth Ambassador Toolkit amplify young voices in the policymaking process, ensuring resources reflect their real-world concerns. Ontario’s IPC is also pushing for increased accountability from companies handling children’s data—especially around dark patterns and privacy impact assessments—while engaging in regulatory efforts, such as supporting Bill 194, to strengthen protections for young users.
Texas is Second State (After Utah) to Enact “App Store Accountability Act,” Targeting Mobile App Stores
Emma Smizer | Frankfurt Kurnit Klein Selz
Texas has become the second U.S. state (after Utah) to enact an App Store Accountability Act, which will take effect on January 1, 2026. Under Senate Bill 2420, mobile app stores must verify user age, categorize users into defined age groups (under 13, 13–15, 16–17, 18+), and obtain verifiable parental consent before minors can download apps, make purchases, or use in-app features. The law also requires app stores to share age and consent data with third-party developers, who in turn must respect those limitations and safeguard that information. Non-compliance can expose both app stores and developers to private lawsuits, and under Texas law SB 2420 violations are treated as deceptive trade practices—opening the door to injunctive relief, actual and punitive damages, and attorneys’ fees. Tech giants such as Apple and Google have pushed back, expressing concern that the act would require collecting and storing sensitive personal data from all users—even for seemingly innocuous apps like weather or sports—and have lobbied for alternative, less intrusive approaches. Supporters argue that the measure places responsibility on platforms, much like age controls for purchasing alcohol, to better protect children in digital environments and to relieve app developers and parents of redundant verification. With similar federal legislation under consideration, this state-level move could influence broader national regulation of online child safety.
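For readers implementing the verification flow, here is a minimal sketch of the age bracketing the statute contemplates, assuming a verified age is already in hand; the function name and labels are illustrative, not statutory terms.

```python
def sb2420_age_group(age: int) -> str:
    """Map a verified age to the four brackets summarized above.
    Labels are illustrative, not statutory terms."""
    if age < 13:
        return "child (under 13)"
    if age <= 15:
        return "younger minor (13-15)"
    if age <= 17:
        return "older minor (16-17)"
    return "adult (18+)"

# Any bracket below 18 would additionally trigger the verifiable
# parental consent requirement before downloads or purchases.
for a in (9, 14, 17, 21):
    print(a, "->", sb2420_age_group(a))
```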
Oregon Imposes New Obligations Related to Location Data and Teens' Data
Nancy Libin | Davis Wright Tremaine
Effective January 1, 2026, Oregon’s amended Consumer Privacy Act (via HB 2008) will prohibit the sale of precise geolocation data—defined as data pinpointing a user within a 1,750-foot radius—as well as the sale of personal data belonging to individuals under 16, eliminating any consent-based exceptions. Companies will no longer be able to monetize detailed location information or children’s data, even with user consent. Controllers must now reassess and update policies, implement age-verification practices, and ensure that any location-based advertising doesn’t involve third-party data sales. These amendments closely align Oregon with Maryland’s recent ban on sensitive data sales and signal a growing trend in U.S. state-level privacy protections focusing on minors and location data. Organizations operating in Oregon must audit their practices, update notices, and establish compliance mechanisms before the 2026 deadline.
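As a rough illustration of the 1,750-foot threshold, the sketch below flags location data as “precise geolocation” when its accuracy radius falls within that distance; the function name and the metres-based accuracy input are assumptions for the example, not terms from HB 2008.

```python
FEET_PER_METER = 3.28084
PRECISE_RADIUS_FT = 1_750  # threshold from Oregon HB 2008, per the summary above

def is_precise_geolocation(accuracy_radius_m: float) -> bool:
    """Treat a location fix as 'precise' when its accuracy circle
    falls within a 1,750-foot radius."""
    return accuracy_radius_m * FEET_PER_METER <= PRECISE_RADIUS_FT

# A GPS fix accurate to ~10 m is clearly precise; coarse IP-level
# geolocation (~5 km) is not.
print(is_precise_geolocation(10))     # True  -> sale prohibited
print(is_precise_geolocation(5_000))  # False
```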
Italy's DPA reaffirms ban on Replika over AI and children's privacy concerns
Caitlin Andrews | IAPP
Italy’s data protection authority, the Garante, has reaffirmed its hard line on Replika, fining developer Luka Inc. €5 million after finding the AI companion chatbot processed users’ personal data without a valid legal basis and lacked effective age verification to keep minors off the service. The regulator first blocked Replika in February 2023, citing risks to children and emotionally vulnerable users, and has now opened a further inquiry into how personal data is used to train the underlying language model. The decision signals that European regulators are prepared to apply full GDPR enforcement to generative and companion AI services, with children’s privacy a clear priority.
Data Protection Meets Consumer Protection: The Crucial Role of Clear Terms in Service Contracts
Dan Cooper | Anna Sophia Oberschelp de Meneses | Inside Privacy
A recent Finnish Data Protection Ombudsman decision highlighted that processing personal data to enforce parking violations was unlawful because the contractual agreement did not clearly outline those enforcement terms—undermining GDPR's lawful basis requirement under Article 6(1)(b). This case exemplifies how consumer protection rules in the EU—such as the Consumer Rights Directive and Unfair Commercial Practices Directive—require clear, upfront disclosure of service characteristics, including enforcement mechanisms. Without clarity, not only can consumers be misled, but organizations may lack a valid legal basis for data processing tied to contract enforcement. The ruling emphasizes that service providers must explicitly describe any enforcement terms and related data processing in contracts to comply with both privacy and consumer regulations.
Graphite Caught: First Forensic Confirmation of Paragon’s iOS Mercenary Spyware Finds Journalists Targeted
Bill Marczak | John Scott-Railton | Citizen Lab
Citizen Lab has released a forensic analysis confirming that Paragon’s Graphite mercenary spyware was used in zero-click attacks on the iPhones of two journalists: Italy’s Ciro Pellegrino (Fanpage.it) and a prominent unnamed European reporter. The forensic evidence links both infections to the same Paragon operator via an iMessage vulnerability, now patched in iOS 18.3.1 as CVE-2025-43200. This builds on earlier warnings, which had identified targeted activists in Italy, and marks the first publicly confirmed hack of journalists with Paragon spyware. Although Italy previously terminated its contract with Paragon Solutions amid the investigation, it remains unclear which entity authorized the surveillance of these journalists. The findings have sparked serious concern among European lawmakers and press freedom advocates, prompting action including a planned debate in the European Parliament on June 16.
RCMP thumb drive with informant, witness data obtained by criminals: watchdog
Jim Bronskill | CBC News
In March 2022, the RCMP lost an unencrypted USB memory key containing personal information—victims, witnesses, informants, police officers—affecting 1,741 individuals. Three weeks after the loss, the RCMP learned the device was being offered for sale by criminal actors. The federal Privacy Commissioner’s investigation found RCMP security protocols inadequate and recommended strict controls on USB usage. While the RCMP agreed in principle to the recommendations, no specific timeline for implementation was provided—leaving sensitive personal data vulnerable to exploitation.
Telecom networks increasingly vulnerable to cyberattacks, experts warn
Sammy Hudes | Financial Post
Telecom networks are becoming prime targets for cyberattacks, with recent incidents like the “Salt Typhoon” campaign confirming sustained intrusions into at least eight U.S. providers—including AT&T, Verizon, T-Mobile, and Lumen—allowing hackers to siphon sensitive metadata and communications records. These breaches, attributed to state-sponsored actors, highlight how legacy infrastructure, fragmented systems from mergers, and inadequate cybersecurity protocols have widened attack surfaces. Regulators—including the FCC, CISA, and Cyber Safety Review Board—are responding with new compliance demands, annual risk certifications, and mandatory cybersecurity standards for telecom firms. Amid rising geopolitical tensions, experts agree that telecom security is now a national-security priority rather than just an industry concern, urging urgent modernization, encryption uptake, and proactive threat detection.
UK retail cyber attacks ‘should be a wake-up call’ for business
Laura Gillespie | Pinsent Masons
Recent ransomware attacks on major UK retailers—including Harrods, Marks & Spencer, and the Co‑op—have severely disrupted online ordering, supply chains, and in-store stock, with M&S expecting losses up to £300 million due to these breaches. Legal experts at Pinsent Masons emphasize that such incidents highlight the frightening impact of ransomware and the urgent need for businesses to test resilience, secure backups, and scrutinize supply-chain risks. A government report shows that 74% of large and 67% of mid-sized UK businesses experienced cybersecurity breaches in the past year, with nearly half of these incidents involving ransomware and data theft. Experts warn that no organization is immune—retail, public bodies, employers—and call for widespread action including tabletop exercises, external penetration testing, contractual oversight, and employee education on cyber hygiene.
Can inferred insecurity about physical traits be regulated as sensitive data?
Li-Rou Jane Foong | IAPP
Machine learning and recommendation algorithms are becoming adept at inferring deep personal vulnerabilities—such as facial asymmetry, acne, aging signs, and body image issues—from users' online behavior. When platforms use such inferred insecurities for targeted content—like ads for cosmetic surgery, weight loss, or anti-aging treatments—they effectively assign and monetize vulnerability scores without transparency or real consent. This raises significant concerns around discriminatory profiling, particularly for minors, protected demographics, or users with mental health concerns. Privacy advocates argue that inferred insecurity data can be as privacy-sensitive as biometric or health information and should therefore be subject to similar regulatory safeguards. The question now is whether privacy frameworks should evolve to treat these inferred attributes as “sensitive personal data” to curb potential abuses.
Bill C-2: Canada’s Crusade to Reform AML and Enhance FINTRAC Powers
Cindy Zhang | Robin McKechney | McCarthy Tétrault
McCarthy Tétrault explains that Bill C‑2 (the Strong Borders Act), introduced on June 3, 2025, proposes sweeping amendments to Canada’s anti‑money laundering (AML) regime and significantly enhances FINTRAC’s regulatory powers. The bill mandates registration for all entities covered by the PCMLTFA, introduces a ban on cash transactions over C$10,000, and raises administrative monetary penalties up to 40 times their current levels (with caps tied to global revenue or income). It also expands the definition of “very serious” violations to include failures in AML program design, enforces mandatory compliance agreements, enables public-private information sharing, and empowers law enforcement to demand subscriber and transmission data—potentially even from foreign providers—without prior judicial approval. Cumulatively, these reforms mark a transformative shift toward more aggressive enforcement, surveillance, and cross-border data access in Canada’s AML framework.
New Border Security Bill Aims to Expand and Strengthen Canada’s AML Regime
Koker Christensen | Caitlin Sabetti | Isabelle Savoie | Fasken
Fasken reports that Bill C‑2, the Strong Borders Act, introduces major reforms to Canada’s anti‑money laundering (AML) framework under the PCMLTFA. The legislation calls for mandatory FINTRAC enrolment of all reporting entities, bans cash transactions over C$10,000, and significantly increases administrative penalties—up to C$20 million for organizations, or 3% of global revenue. It also empowers FINTRAC with enhanced authority for inter-agency information sharing, enforces compliance agreements, and targets anonymous or third-party cash deposits. These changes are part of a broader shift toward more aggressive enforcement, greater regulatory oversight, and stronger public–private data-sharing in Canada’s AML regime.
Privacy authorities for Canada and United Kingdom to announce findings of joint investigation into global data breach at 23andMe
Office of the Privacy Commissioner of Canada
Canada’s and the UK’s privacy authorities are set to jointly unveil findings from their investigation into a global 23andMe data breach, as announced in a media advisory released on June 13, 2025. The inquiry responds to concerns over how 23andMe handled personal genetic data, including potential vulnerabilities and compliance with privacy laws in both jurisdictions. The joint effort marks a milestone in cross-border regulatory cooperation on genetic data protection, reflecting a broader commitment to tackling privacy risks associated with biometric and health-related data. Details of the investigation’s outcomes—including any recommended enforcement actions or best practices—will be disclosed at the planned announcement.
B.C.’s privacy watchdog weighs in on health AI boom – as doctors warn it’s not a substitute
Penny Daflos | CTV News
B.C.’s privacy commissioner has flagged the rapid integration of AI tools in healthcare settings—ranging from family doctors to specialists—as a significant privacy concern, emphasizing that patients must be clearly informed when AI is involved in their care. Medical professionals stress that, while AI can support diagnostics and treatment planning, it is not a replacement for human judgment and decision-making. Fraser Health has implemented dozens of AI programs but remains opaque about their specifics, which raises questions about informed consent, data security, and oversight. Privacy experts say that beyond mere notification, healthcare providers need robust governance, transparency, and accountability frameworks to maintain trust and uphold patient rights.
Digital Health ID as a Privacy Workaround? The Problem(s) with Ontario’s Bill 231
Teresa Scassa
Ontario’s Privacy Commissioner has raised significant concerns about Bill 231’s proposed Digital Health ID, noting it leaves many questions unanswered regarding how the identifier will be implemented and governed. The Bill would amend PHIPA to allow Ontario Health to issue a unique digital health identifier—potentially using biometric data—to enable patient access to electronic health records. But critics say the legislation lacks details on data protection, consent mechanisms, third-party access controls, and oversight safeguards. While the stated goal is to simplify access to health information, the IPC warns that insufficient transparency and risk-mitigation measures could threaten patient privacy and trust.
Palantir’s Collection of Disease Data at C.D.C. Stirs Privacy Concerns
Apoorva Mandavilli | New York Times
The New York Times reported on June 6, 2025, that the U.S. Centers for Disease Control and Prevention (CDC) is migrating disease surveillance data—including measles and polio, previously withheld by some states—to a centralized Foundry system developed by Palantir. Many state and local health officials expressed concern that this consolidation could delay access to long-term trend data and heighten the risk of exposing sensitive patient information, particularly related to areas like gender care, reproductive health, or disabilities. Privacy advocates warn that moving data into a private company's platform increases dependency on Palantir’s infrastructure and may reduce oversight and transparency. In response to critiques, Palantir emphasized that all data-sharing aligns with existing federal privacy laws and that agencies, not the company, retain ultimate control.
Queen’s Park committee to resume intimate partner violence study
The Trillium
Ontario’s Standing Committee on Justice Policy is resuming its in-depth study of intimate partner violence (IPV), following a pause earlier this year. The committee will examine the rising incidence of physical, emotional, and mental abuse and explore reforms such as expanding access to restraining orders. While previous promises—including regional consultations and Indigenous-specific outreach—were delayed by the early election and limited to online sessions, the review will now recommence with renewed scope and resources. Advocates emphasize the importance of in-person engagement, particularly in rural, Northern, and Indigenous communities, to ensure survivors’ voices are genuinely heard.
CRTC delays to 911 system upgrades put victims of intimate partner violence at risk, advocates say
Abyssinia Abebe | The Globe and Mail
CRTC has delayed the rollout of Next‑Generation 911—including text, video, and data capabilities—by two years, a move that experts and advocates warn could disproportionately endanger intimate partner violence (IPV) victims who may be unable to call for help in emergencies. The current system lacks modern communication methods that can be critical during abuse situations when voice calls are not feasible or safe. Stakeholders argue this delay undermines timely access to emergency services, heightening risk for vulnerable individuals. They are urging the CRTC to accelerate implementation and prioritize deployment in high-risk communities.
Conservatives raise privacy concerns over powers in government’s border security bill
Marie Woolf | The Globe and Mail
Conservative MPs have raised alarm over what they describe as “snooping provisions” in Bill C‑2, warning that granting law enforcement access to internet subscriber data without a warrant represents a significant erosion of Canadians’ privacy rights. They’re particularly critical of clauses allowing Canada Post to open mail without judicial oversight and allowing surveillance of online service usage—asserting that tracking how long and where Canadians use digital services is deeply intrusive. Some argue that while parts of the bill serve legitimate security aims, the digital surveillance components cross a line and lack proper safeguards. Conservative voices call on the government to remove these warrantless powers, asserting that privacy protections must remain essential in any security legislation.
Transparency advocates call for independent review of Access to Information Act
Jim Bronskill | The Toronto Star
Transparency advocates, civil society groups, and academics are urging Prime Minister Mark Carney and Treasury Board President Shafqat Ali to ensure that the upcoming federal review of the Access to Information Act is conducted by an independent panel rather than as a government-led internal exercise. The Access to Information Act, unchanged in over 40 years, has long faced criticism for protracted response times, excessive redactions, and outright denials—even for routine requests. A truly independent review, supporters argue, would help restore public confidence and produce meaningful, timely reforms. Their message: without external oversight, the review risks repeating past shortcomings and failing to modernize Canada’s access-to-information framework.
Computer says no: Impact of automated decision-making on human life
Big Brother Watch
Big Brother Watch reports on how automated decision-making (ADM) systems—implemented across services like welfare, insurance, and policing—are increasingly impacting essential human needs, yet operate with minimal oversight or accountability. Investigations revealed cases of real-world harm, including wrongful insurance premiums, delays in welfare support, and misidentification by facial recognition systems, with little avenue for affected individuals to understand or challenge these decisions. The article emphasizes that without robust regulation, transparency, or legal remedies, ADM systems can perpetuate bias, discrimination, and systemic injustice. It concludes that governing bodies must urgently demand explainability, human-in-the-loop controls, and enforceable user rights to prevent harms at scale.
UK Parliament advances Data (Use and Access) Bill, awaits Royal Assent
Jedidiah Bracy | IAPP
The U.K. Parliament has passed the Data (Use and Access) Bill, now awaiting Royal Assent, introducing major reforms to the UK GDPR and e‑Privacy Regulations and marking one of the most significant data law overhauls in years. The legislation establishes frameworks for smart data schemes—enabling regulated data portability across sectors like finance, energy, and telecom—and formalizes digital identity systems through a statutory verification framework. Additional provisions include easing rules around automated decision-making, clarifying lawful bases for research, reforming international data transfers to a “not materially lower” standard, and enhancing transparency for AI training data and cookies. Although the government argues these measures will fuel innovation, boost the economy, and support AI growth without jeopardizing data adequacy with the EU, some privacy advocates remain cautious, noting potential risks around reduced oversight and weakened protections. Ultimately, the success of this reform will depend on forthcoming secondary legislation, regulatory guidance, and enforcement by the Information Commissioner’s Office.
Support for AI Act pause grows but parameters still unclear
Caitlin Andrews | IAPP
EU Member States and the European Commission are exploring an official “pause” or delay—referred to as a “stop the clock”—on parts of the AI Act implementation, as looming deadlines and missing guidelines threaten to overwhelm regulators and industry alike. The proposal aims to provide businesses more time to align with key frameworks, such as the general-purpose AI model rules set to take effect this August, while avoiding legal uncertainty that might stall compliance efforts. However, there is no consensus on how extensive the delay should be: some favor a full postponement of unenforced provisions, others suggest selective synchronization of overlapping digital regulations. Any formal delay would require legislative action and would need clear boundaries to ensure the law’s core objectives remain intact.
The final days of grace: Preparing for the U.S. sensitive data rule
Cheryl Saniuk-Heinig | Jim Dempsey | IAPP
On April 8, 2025, the U.S. Department of Justice implemented the Data Security Program (DSP) rule, restricting access by “countries of concern” (China, Russia, Iran, etc.) or related entities to Americans' sensitive personal and government-related data. While enforcement is currently paused until July 8, 2025, entities are required to begin aligning with the rule's rigorous requirements—such as assessing foreign ownership, securing vendor relationships, and implementing technical and contractual controls over data access. The DSP classifies data transactions as prohibited (like bulk sale of sensitive US data to covered persons) or restricted, which demand compliance with Cybersecurity & Infrastructure Security Agency (CISA) cybersecurity standards. Organizations must conduct ongoing due diligence, map data flows, establish compliance programs with audits, and are liable for substantial civil and even criminal penalties once the grace period ends.
Criminalizing Masks at Protests is Wrong
Matthew Guariglia | Adam Schwartz | Electronic Frontier Foundation
EFF argues that laws targeting protesters who wear masks fundamentally violate privacy and free speech rights. Masked protestors may be protecting themselves from health risks—like COVID—or avoiding retaliation for exercising political expression, and they have a legitimate need for anonymity in an era of pervasive surveillance. Criminalizing mask use at protests chills public participation and conflicts with fundamental democratic values, as anonymity has long been recognized as a shield against government and social coercion. The organization also highlights the double standard: law enforcement may be required to reveal their identity for accountability, yet protesters are penalized for masking their faces.
Justices Grant DOGE Access to Social Security Data and Let the Team Shield Records
Adam Liptak | Abbie VanSickle | The New York Times
On June 6, 2025, the U.S. Supreme Court ruled that the Department of Government Efficiency (DOGE)—established by executive order and previously led by Elon Musk—is authorized to access the full Social Security Administration (SSA) database, including sensitive personal data like Social Security numbers, bank info, earnings, and immigration records, even while legal challenges continue. At the same time, the Court paused the disclosure of DOGE’s internal records, deferring to the executive branch on whether DOGE must comply with the Freedom of Information Act. The majority emphasized concerns over separation of powers, though three justices dissented, warning of serious privacy risks for millions of Americans. Privacy advocates and watchdogs argue that the decision grants expansive surveillance powers without transparency or oversight, heightening fears of misuse of deeply private data.
Information and Privacy Commissioner urges government to close regulatory gaps and secure public trust
CanTech Letter | Office of the Information and Privacy Commissioner of Ontario
Ontario’s Information and Privacy Commissioner, Patricia Kosseim, is urging the provincial government to address significant legislative gaps highlighted in the IPC’s 2024 annual report. She recommends enforceable regulations and independent oversight for public sector AI, robust cybersecurity protections, stronger safeguards around children’s data, and alignment of municipal privacy laws with provincial standards. The IPC also called out deficiencies in government transparency—such as poor record keeping, the use of personal email accounts for official business, and obscure language that hinders FOI searches—and urged codified obligations and oversight to restore public trust. Finally, she emphasized embedding privacy-by-design and clear access rights into digital health systems to ensure citizens’ confidence in new health technologies.
Privacy Commissioner of Canada’s annual report underscores need to prioritize privacy in an increasingly data-driven world
Office of the Privacy Commissioner of Canada
On June 5, 2025, the Office of the Privacy Commissioner of Canada released its annual Privacy Act Bulletin, emphasizing that managing personal data—especially in sensitive scenarios like complaints, investigations, and polygraph records—requires proactive privacy-protective measures and clear, accountable handling. The report underscores the need for privacy-by-design policies across federal institutions, including robust consent mechanisms, secure storage protocols, and minimized data retention to mitigate privacy risks. It also highlighted the OPC’s upcoming role in hosting the G7 Data Protection and Privacy Authorities Roundtable under Canada’s presidency, aiming to foster global cooperation on emerging privacy challenges. These initiatives reflect a strategic push toward transparency, stronger data governance, and international alignment in protecting Canadians’ personal information.
Commissioner Dufresne to host the 2025 G7 Data Protection and Privacy Authorities Roundtable
Office of the Privacy Commissioner of Canada
On June 10, 2025, Canada’s Privacy Commissioner, Philippe Dufresne, announced that he will host the 2025 G7 Data Protection and Privacy Authorities Roundtable in Ottawa on June 18–19. The event will bring together G7 and selected non-G7 privacy regulators to discuss emerging data protection issues, enhance international cooperation, and align regulatory approaches in areas like AI, cross-border data flows, and privacy enforcement. The roundtable underscores Canada’s growing role in shaping global privacy standards, especially during its presidency of the G7 authorities forum. Dufresne emphasized the importance of collective action in addressing shared privacy challenges and reinforcing public trust in the digital economy.
Invitation for Applications - Information and Privacy Commissioner
Legislative Assembly of Nunavut
The Legislative Assembly of Nunavut is inviting applications for the position of Information and Privacy Commissioner, with the call released on June 6, 2025. The role offers full-time, permanent employment and encourages applications from local Nunavummiut of diverse backgrounds, including Indigenous knowledge holders. Responsibilities include overseeing privacy and access-to-information issues across public bodies in the territory. The position aims to bolster public trust and transparency by strengthening independent oversight of government data practices.
Notes from the IAPP Canada: Regulators should better understand data breach complexities
Kris Klein | IAPP
IAPP Canada's managing director, Kris Klein, emphasizes that regulators and legal professionals require deeper, first-hand understanding of data breach dynamics—from incident response and forensic investigations to cross-border coordination with threat actors. Drawing on past incidents—including a misplaced hard drive at the Privacy Commissioner’s office—Klein illustrates how even seemingly minor breaches can involve complex remediation, notification, and jurisdictional challenges. He proposes “lunch-and-learns” bringing together regulators, legal teams, forensic experts, and negotiators to build practical insight into breach management. This hands-on approach could strengthen regulatory capacity, ensuring more consistent, effective oversight and support during future data security crises.
EDPB Finalizes Guidelines on Data Transfers to Third Country Authorities and Training Materials on AI and Data Protection
Hunton
The European Data Protection Board (EDPB) has released its final Guidelines on Article 48 GDPR, clarifying when EU-based organizations can legally comply with data requests from third-country authorities. The guidelines stress that foreign court orders are not automatically enforceable in the EU and must be backed by international agreements (e.g., mutual legal assistance treaties) or otherwise rely on narrow, case-specific legal bases under Article 6 and the Chapter V transfer rules of the GDPR. In conjunction, the EDPB has launched two new training modules—one on legal compliance in AI security and data protection, the other on secure AI system design with personal data—aimed at both legal and technical professionals and available in editable formats via GitHub. Together, these measures help organizations navigate cross-border data flow challenges while building capacity for privacy-conscious AI deployment.
China’s New CBDT Regime: One Year On
Paul D. McKenzie | Gordon A. Milner | Chuan Sun | Tingting Gao | Morrison Foerster
China’s revamped Cross-Border Data Transfer (CBDT) regime, launched in March 2024, marked its first anniversary in 2025 with the Cyberspace Administration of China (CAC) reporting progress and issuing FAQ clarifications in April and May. The framework mandates that organizations exporting personal information or “important data” must either (1) undergo a CAC security assessment, (2) file standard contractual clauses, or (3) obtain certification, with specific exemptions for low-volume transfers and routine HR or emergency data flows. Over the past year, the CAC has processed hundreds of transfer applications, released guidance to ease compliance (especially around important-data thresholds), and enhanced its administrative infrastructure. As the regime matures, businesses can expect further refinements—such as industry-specific guidance and formal mutual recognition of certifications—while charting the adjustments required across their global data-transfer operations.
Appeal in protracted Facebook privacy case headed to Supreme Court of Canada
Jim Bronskill | National Post
The Supreme Court of Canada has agreed to hear an appeal in the Privacy Commissioner of Canada v. Facebook Inc. case, which challenges Facebook’s disclosure of user data to third-party apps without obtaining meaningful consent from users or their connections. The Federal Court of Appeal previously ruled unanimously that Facebook violated PIPEDA by failing to secure proper consent and by not implementing adequate safeguards for user data. This appeal marks a critical opportunity for the Court to clarify the standard for “meaningful consent” under Canadian privacy law and reinforce protections against corporate overreach in handling personal information. A decision from the Supreme Court could significantly redefine corporate obligations in managing user data disclosures and consent frameworks.
Managing the Managers: Governance Risks and Considerations for Employee Monitoring Platforms
Joseph J. Lazzarotti | Jackson Lewis
Modern employee monitoring platforms now track everything from keystrokes, app and website usage, and screenshots to advanced AI-powered tools like sentiment analysis, behavioral risk scoring, and geolocation. While these technologies promise productivity insights and compliance support, they raise significant privacy, legal, and ethical challenges if deployed without clear governance. Companies face legal exposure under privacy laws like GDPR, CCPA, and emerging AI regulations, as well as potential labor-law issues around discrimination, bargaining rights, and automated decision-making. Experts emphasize that effective oversight must extend beyond IT—requiring cross-functional governance with HR, legal, and privacy teams to define purpose, transparency, access controls, and ongoing audits. Without rigorous policy frameworks and human-in-the-loop controls, these tools risk eroding trust, prompting legal risks, and undermining workplace morale.