Week of 2025-05-12
He's still waiting for FOI records, requested in 2020, that Toronto police said he could have last summer
Nicole Brockbank | CBC News
Toronto scholar Jamie Jelinski has been waiting nearly five years for records from the Toronto Police Service (TPS) related to its use of the controversial facial recognition tool Clearview AI, despite paying over $200 for remaining documents in 2023. His experience, marked by multiple appeals to Ontario’s Information and Privacy Commissioner (IPC), highlights serious delays in the province’s Freedom of Information (FOI) system. While IPC orders are legally binding, enforcement is weak, and TPS has repeatedly missed deadlines without facing meaningful consequences. Experts argue that Ontario’s FOI laws are outdated and lack mechanisms—like those in B.C. and Alberta—to ensure compliance through the courts. Jelinski’s case underscores the urgent need for legislative reform to uphold transparency and protect public access to information.
Transparency watchdog orders sworn statement from ex-staffer of Premier Ford's in Greenbelt records hunt
The Trillium
Ontario's Information and Privacy Commissioner has ordered a former staffer of Premier Doug Ford to provide a sworn affidavit concerning missing records related to the Greenbelt land swap controversy. This directive is part of an ongoing investigation into the government's handling of the Greenbelt land removals, which were later reversed following public outcry and scrutiny. The watchdog's action underscores concerns about transparency and record-keeping within the Premier's office. The affidavit aims to clarify the existence and whereabouts of pertinent documents that have not been disclosed. This development highlights the need for robust access-to-information laws and accountability mechanisms in Ontario's governance.
Age Verification in the European Union: The Commission's Age Verification App
Svea Windwehr | Alexis Hancock | EFF
The European Commission is developing a "mini-ID wallet" aimed at verifying users' ages online, integrating with the EU Digital Identity Wallet. This app would allow users to prove their age through various methods, including national eID schemes, physical ID cards, or verification via trusted third parties like banks or notaries. However, the Electronic Frontier Foundation (EFF) raises concerns about potential privacy risks and the exclusion of marginalized groups who may lack access to official identification or personal devices. The EFF emphasizes that such age verification measures could inadvertently restrict access to information and services, particularly affecting children, refugees, and the unhoused. They advocate for alternative approaches that protect minors online without compromising fundamental rights.
North Carolina Bill Would Require People to Share Their IDs With Social Media Companies
Vanessa Taylor | Gizmodo
North Carolina's proposed Personal Data Privacy Act (House Bill 462) aims to enhance consumer data rights but includes contentious provisions requiring social media users to verify their age using government-issued IDs. This mandate would necessitate third-party vendors to handle sensitive identification data, raising significant privacy concerns. Critics argue that such requirements could disproportionately affect marginalized groups lacking access to official IDs and increase the risk of data breaches. While the bill seeks to protect minors online, opponents highlight the potential for misuse of personal information and the challenges in ensuring data security. The legislation underscores the ongoing debate between safeguarding children on digital platforms and preserving individual privacy rights.
North Carolina bill aims to restrict minors’ social media access
Savannah Rudicel | Queen City News
North Carolina's House Bill 301 aims to restrict social media access for minors under 16. The bill prohibits children under 14 from having social media accounts and requires parental consent for 14- and 15-year-olds. It mandates age verification by platforms and imposes penalties up to $50,000 per violation, with potential civil damages up to $10,000. While supporters cite concerns over mental health and online safety, critics question the bill's enforceability and potential privacy implications. The legislation has passed the House and awaits Senate consideration.
MTA wants AI to flag 'problematic behavior' in NYC subways
Stephen Nessen | Gothamist
The Metropolitan Transportation Authority (MTA) is exploring the use of artificial intelligence (AI) to monitor New York City's subway platforms for "problematic behavior." The proposed system would analyze real-time footage from security cameras to detect actions deemed irrational or potentially criminal, alerting the NYPD before incidents occur. MTA officials emphasize that the technology focuses on identifying behaviors, not individuals, and will not employ facial recognition. However, civil liberties advocates express concern over potential biases and the expansion of surveillance in public spaces. This initiative is part of a broader effort to enhance safety in the transit system amid ongoing concerns about subway crime.
‘Dangerous nonsense’: AI-authored books about ADHD for sale on Amazon
Rachel Hall | The Guardian
Amazon's marketplace is currently inundated with AI-generated books on managing ADHD, many of which contain misleading or harmful information. Investigations reveal that these books, often marketed as expert resources, are likely authored by chatbots like ChatGPT, with AI detection tools confirming 100% AI-authorship probability in several cases. Experts warn that such unregulated AI-generated content poses significant risks, especially in sensitive areas like health, due to the potential spread of misinformation. Critics argue that Amazon's current content guidelines are insufficient, allowing for the proliferation of these dubious publications. This situation underscores the urgent need for stricter oversight and accountability in digital publishing to protect consumers from potentially dangerous content.
US judicial panel advances proposal to regulate AI-generated evidence
Nate Raymond | Reuters
A U.S. federal judicial panel has advanced a proposal to regulate the use of AI-generated evidence in courtrooms. The Advisory Committee on Evidence Rules voted 8-1 to seek public commentary on a draft rule that would require AI-generated evidence to meet the same reliability standards as testimony from human expert witnesses under Rule 702 of the Federal Rules of Evidence. This initiative addresses concerns about the reliability of AI-generated content, such as deepfakes, and aims to ensure that such evidence is subjected to rigorous scrutiny before being admitted in trials. The proposal now moves to the Judicial Conference’s Committee on Rules of Practice and Procedure for further consideration and potential public input.
Brazil enacts AI-focused law to combat psychological violence against women
Tiago Neves Furtado | IAPP
On April 24, 2025, Brazil enacted Law No. 15.123/2025, intensifying penalties for psychological violence against women when facilitated by artificial intelligence or similar technologies. This amendment to Article 147-B of Brazil's Penal Code allows for increased sentences—up to 50% longer—if offenses involve tools like deepfakes that manipulate a victim’s image, voice, or likeness. The legislation addresses growing concerns over AI-driven abuse, including the creation of non-consensual synthetic media that can cause severe emotional harm. Beyond criminal penalties, the law signals heightened expectations for AI developers and platforms to implement safeguards against misuse, aligning with global trends in regulating harmful applications of AI. This move underscores the necessity for robust AI governance frameworks that prioritize the protection of vulnerable individuals from technologically facilitated abuse.
US Border Agents Are Asking for Help Taking Photos of Everyone Entering the Country by Car
Caroline Haskins | Wired
U.S. Customs and Border Protection (CBP) is advancing plans to implement facial recognition technology at land border crossings to photograph and identify all individuals in vehicles entering the United States. This initiative aims to enhance existing biometric systems used in air, sea, and pedestrian environments by extending them to vehicular settings. CBP has issued a request for information seeking technological solutions capable of capturing images of every vehicle occupant, including those in rear seats, to match against travel documents and government-held photos. Previous tests at border crossings like Anzalduas and Mariposa revealed challenges: the systems captured images of all occupants that met validation requirements in only 61% to 80.7% of cases. Privacy advocates express concerns over potential racial and gender disparities in facial recognition accuracy and the broader implications for civil liberties.
Kids should avoid AI companion bots—under force of law, assessment says
Khari Johnson | The Markup
A recent assessment by Common Sense Media, in collaboration with Stanford University's Brainstorm Lab, warns that AI companion bots pose significant mental health risks to children, including exacerbating issues like addiction and self-harm. The study found that these bots, designed to engage users in conversation, can quickly veer into inappropriate territory, including sexually explicit content and reinforcement of harmful behaviors. In response, California legislators have proposed bills to regulate such AI systems, aiming to limit their use among minors and enforce safety protocols. While some industry groups express concerns over potential overreach and free speech implications, the push highlights growing unease over the unregulated interaction between children and AI companions. This development underscores the urgent need for comprehensive policies to safeguard children's mental health in the digital age.
AI is fuelling an alarming surge of child sexual abuse material in Canada: What parents need to know
Loraine Centeno | Inside Halton
Experts and law enforcement agencies are sounding the alarm over the growing use of artificial intelligence (AI) to create child sexual abuse material (CSAM), emphasizing the urgent need for updated legislation and technological safeguards. AI-generated imagery is increasingly being used to exploit children, and authorities face serious challenges in detecting and prosecuting these offenses given the rapid advancement of the technology and the anonymity afforded by online platforms. Advocates call for a collaborative approach involving tech companies, policymakers, and law enforcement to develop effective strategies to combat this emerging threat. The piece underscores the importance of public awareness and education in preventing the spread of AI-generated child exploitation content.
He Had No Idea Toyota Was Selling His Driving Data To Insurers, Now He’s Suing
Brad Anderson | Carscoops
A Texas class-action lawsuit filed by Toyota owner Philip Siefke alleges that Toyota shared detailed driving data with insurance companies like Progressive without his informed consent. Siefke discovered this in January 2025 when, after opting out of a data-sharing program during an insurance signup, Progressive revealed it already had his driving records—obtained via Toyota. The data reportedly includes sensitive metrics such as location, speed, fuel levels, seatbelt status, and more, collected from 2018+ Toyota models like Siefke’s 2021 RAV4. Though Toyota claims the data sharing was part of a trial program agreed to at purchase, Siefke asserts he was never properly informed. The lawsuit highlights growing concerns about automakers’ use of vehicle telemetry and its privacy implications, especially when shared with third parties like insurers.
Canada is one of the most data breached countries in the world with more than 800,000 accounts leaked in 2025, report reveals
Loraine Centeno | Inside Halton
A new report from cybersecurity firm Surfshark reveals that Canada ranked eighth globally for data breaches in the first quarter of 2025, with 887,000 compromised accounts—equivalent to seven per minute. Despite a global drop in breaches following a sharp rise in 2024, Canada remains highly affected, with over 255 million total accounts exposed since 2004. The report warns that such breaches heighten risks of fraud, identity theft, and other cybercrimes. Experts urge Canadians to stay vigilant, improve cyber hygiene, and use tools like DNS filtering and the Canadian Internet Registration Authority’s “Canadian Shield” to block malicious websites. The findings highlight Canada’s ongoing vulnerability in an evolving cyber threat landscape.
Smishing text scams are about to get worse: Ontario cybersecurity expert warns Canadians to be ready as scams get GenAI boost
Loraine Centeno | Inside Halton
A new report reveals that the phishing-as-a-service (PhaaS) platform Darcula—linked to Canada Post and toll scam text attacks—is now using generative AI to create sophisticated phishing websites in minutes. The platform, associated with the notorious Smishing Triad group, allows even novice attackers to replicate branded websites, complete with multilingual support and editable forms to harvest personal data. Experts warn that this GenAI enhancement significantly lowers the barrier to entry for cybercriminals and will likely lead to more frequent and convincing smishing attacks in Canada. Canadians are urged to delete suspicious texts, avoid replying, and report scams to 7726 to help carriers block future threats. Cybersecurity professionals emphasize the need for improved public awareness, stronger digital hygiene, and dynamic AI-based defenses to counter this growing risk.
From driver's license to digital dossier? Why some are worried about REAL ID.
Nathan Diller | USA Today
As the May 7, 2025 deadline for REAL ID enforcement approaches, privacy advocates continue to raise alarms over the national implications of the identification system. While intended to standardize ID verification for domestic flights and federal facilities, critics—including the Electronic Frontier Foundation and ACLU—warn that REAL ID could erode privacy, enable cross-state data sharing, and disproportionately impact undocumented individuals. Concerns centre on the potential expansion of the law’s scope and the risk of personal data being sold or misused, as has already occurred in some state DMV systems. Although the TSA insists the program merely establishes consistent minimum standards without creating a federal database, experts argue the unified system may pave the way for broader surveillance and tracking. The rollout underscores growing tensions between national security measures and digital privacy rights in an era of increasing data centralization.
School boards hit with ransom demands linked to PowerSchool cyberattack
Jessica Wong | CBC News
Following a December 2024 ransomware attack on PowerSchool, Canada's largest school boards—including Toronto and Calgary—have received fresh ransom demands, despite the company already paying hackers to delete the stolen data. The breach, caused by the compromise of an admin account, exposed sensitive student and staff information, including personal and medical data, dating back decades. PowerSchool has since offered credit monitoring, while critics argue the payout has backfired, enabling cybercriminals to re-extort victims. Experts emphasize that student data is a valuable target for identity theft and that schools must bolster cybersecurity practices and communication. Parents, meanwhile, are calling for greater transparency and consistency from school boards in handling such breaches.
Privacy authorities for Canada and the United Kingdom advocate for data protections in 23andMe bankruptcy proceedings
Office of the Privacy Commissioner of Canada
The Office of the Privacy Commissioner of Canada (OPC) and the United Kingdom's Information Commissioner's Office (ICO) have jointly intervened in 23andMe's bankruptcy proceedings, urging that the personal information of the company's customers, including highly sensitive genetic data, be protected throughout the process. In a joint letter, the regulators stressed that obligations under Canadian and UK privacy law continue to apply during insolvency and would bind any prospective purchaser of 23andMe's assets. The two authorities, which had already opened a joint investigation into the 2023 breach that exposed the profiles of millions of 23andMe users, also called for customers to be kept informed of their rights, including the ability to request deletion of their data. The intervention highlights growing regulatory concern about what happens to sensitive personal data when the companies holding it become insolvent.
The ills of Signal and 23andMe offer chilling lessons about our digital data
Wendy H. Wong | The Globe and Mail
In her Globe and Mail op-ed, Wendy H. Wong draws chilling parallels between two recent digital data controversies—Signal’s national security mishap and 23andMe’s genetic data breach—to warn of the deep vulnerabilities in our data-driven society. U.S. national security advisor Michael Waltz mistakenly included a journalist in a Signal group chat detailing military operations, revealing that even encrypted apps can’t protect against human error and screenshot permanence. Meanwhile, 23andMe’s bankruptcy has left users uncertain about the fate of their sensitive genetic data, which had been monetized through commercial deals, including with pharmaceutical giant GSK. Wong argues these events expose our dangerous complacency with how non-government entities collect, retain, and exploit personal information, and raise urgent questions about consent, control, and deletion of data in the hands of digital platforms. Her takeaway: those who control the data wield real power—something policymakers can no longer afford to ignore.
She only learned her privacy had been breached by filing an access to information request
Michael Gorman | CBC News
A Nova Scotia woman, Patricia Celan, uncovered that her health records had been repeatedly accessed without authorization by a fellow medical resident—something she only discovered after filing an access-to-information request. Despite confirming the breach, Nova Scotia Health and Dalhousie University reportedly took little action, citing that the individual had since left the province and completed their training. Celan criticized the lack of proactive detection and accountability, especially given this was not the first privacy lapse she experienced during her medical education. Privacy lawyer David Fraser noted that snooping is widespread and often linked to personal connections, emphasizing the importance of routine audits, strict enforcement, and ongoing training. The case highlights systemic weaknesses in protecting patient data and the limitations of current safeguards, raising broader concerns about trust and accountability in healthcare privacy practices.
New AI model uses NHS data to predict future disease and complications
Storm Newton | Independent
The UK’s National Health Service has developed “Foresight,” an AI model trained on the de-identified health records of 57 million patients to predict future diseases and complications. Developed by researchers at University College London and King’s College London, the model analyzes data such as hospital visits, vaccinations, and prescriptions to support a shift toward preventative care. Though data is anonymized and housed in a secure NHS environment with strict access controls, privacy advocates have raised concerns about large-scale health data usage without explicit individual consent. Proponents argue the system could significantly improve health outcomes and resource allocation, but public trust remains a key factor. Ongoing oversight and public engagement are essential to ensure the model is ethically deployed and aligned with patient expectations.
Text messaging helps hospital get more feedback
Clint Fleury | TB Newswatch
Thunder Bay Regional Health Sciences Centre has become the first hospital in Ontario to implement text message-based patient surveys, aiming to enhance feedback collection. Since introducing SMS surveys in December 2023, the hospital has observed a 1–2% increase in response rates compared to email-based surveys, particularly benefiting patients with limited internet access or those less inclined to use email. Shannon Schiffer, Manager of Patient and Family Centred Care, emphasized that this approach reduces barriers and aligns with the hospital's commitment to patient-centred care. While email surveys generally see higher completion rates for longer questionnaires, SMS has proven effective for shorter surveys, providing timely insights into patient experiences. This initiative underscores the hospital's dedication to integrating patient feedback into quality improvement efforts.
France: Flaws and injustices of 'predictive policing' laid bare in new report
Statewatch
A recent report by La Quadrature du Net, coordinated by Statewatch, exposes significant flaws in France's use of predictive policing technologies. The report scrutinizes systems like RTM, PredVol, PAVED, M-Pulse, and Smart Police, highlighting their reliance on socio-demographic data and discredited criminological theories, which may lead to the disproportionate targeting of marginalized communities. Critics argue that these tools blur the line between correlation and causation, potentially reinforcing existing biases and over-policing in vulnerable areas. Moreover, the lack of transparency and official evaluations raises concerns about their effectiveness and compliance with data protection laws. The report concludes that such technologies risk automating social injustices and calls for an outright ban on predictive policing in France.
Province's self-driving vehicle pilot sparks concern among Toronto councillors
Patrick Cain | The Trillium
Toronto city councillors have expressed significant concerns over Ontario's new self-driving vehicle pilot program, which plans to deploy up to 20 autonomous delivery vehicles in the city's west end by late 2026. Councillor Dianne Saxe highlighted potential issues such as traffic obstruction during deliveries, ambiguous legal accountability for traffic violations, and the possibility of facial images being transmitted to databases outside Canada's jurisdiction, raising privacy concerns. The program, operated by auto parts manufacturer Magna, falls under provincial jurisdiction, leaving the city with limited authority over its implementation. Councillors have also criticized the lack of transparency, noting that the provincial permit outlining operational conditions is confidential and not accessible to the public. The vehicles, resembling large cargo bikes, are restricted to speeds of 32 km/h and prohibited from making left turns, operating within a designated area characterized by lower speed limits.
Dutch Data Protection Authority targets misleading cookie banners
Andre Walter | Michelle Seel | Pinsent Masons
The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) has issued warnings to 50 organizations over misleading cookie banners that fail to comply with GDPR consent requirements. Common violations include making it easier to accept cookies than to reject them, omitting a "reject all" option on the first layer, and collecting personal data before obtaining consent. The AP has given these organizations until June 2025 to rectify these issues or face potential enforcement actions, including fines and public disclosure. This initiative is part of the AP's intensified efforts to monitor cookie compliance, supported by an increased budget and plans to systematically scan 10,000 websites for violations. The AP emphasizes the importance of clear and transparent cookie banners that allow users to make informed choices about their personal data.
CPPA, UK Enter Info-Sharing Agreement
Dennis Noone | Industry Insider
On April 29, 2025, the California Privacy Protection Agency (CPPA) and the UK’s Information Commissioner’s Office (ICO) signed a Declaration of Cooperation to bolster cross-border collaboration on privacy protection. The agreement enables joint research, staff exchanges, and shared investigative best practices related to emerging technologies and data governance. It builds on the CPPA’s broader efforts to partner with international regulators, including those in France and South Korea. CPPA Executive Director Tom Kemp emphasized the value of learning from global counterparts, while UK Commissioner John Edwards noted the formal agreement strengthens existing ties. The initiative reflects a growing trend toward international coordination in privacy enforcement and policy development.
Jury orders NSO Group to pay $168 million to WhatsApp for facilitating Pegasus hacks of its users
Suzanne Smalley | The Record
A U.S. federal jury has ordered Israeli spyware firm NSO Group to pay approximately $168 million in damages to Meta's WhatsApp for deploying its Pegasus spyware to hack 1,400 users in 2019. The verdict includes $167 million in punitive damages and $445,000 in compensatory damages, marking the first successful legal action against a spyware developer for exploiting a tech platform's security. The court found that NSO's Pegasus software exploited a vulnerability in WhatsApp's calling feature, allowing the installation of spyware through a missed call, which could then access the device's camera, microphone, messages, and location data. NSO argued that its technology is used by authorized governments to prevent crime and terrorism, but the court barred the company from presenting evidence related to its clients' intentions. Meta hailed the decision as a significant deterrent against illegal surveillance practices targeting American companies and user privacy.
Is it necessary? New CAI recommendations on staff recruitment and privacy
Kirsten Thompson | Arianne Bouchard | Alexandra Quigley | Dentons Data
The Commission d’accès à l’information (CAI), Quebec’s privacy regulator, has issued new guidelines emphasizing that employers may only collect personal information from job applicants when it is strictly necessary for the hiring process. Even if a candidate consents, collecting unnecessary data—such as social insurance numbers, birthdates, or social media profiles—is discouraged unless directly relevant to the role. The CAI advises against using uniform application forms for all positions, as this can lead to the collection of irrelevant information, and recommends delaying reference checks until after a conditional job offer is made. Employers utilizing artificial intelligence in recruitment must disclose its use and ensure it does not introduce bias or infringe on privacy rights. These guidelines align with Quebec’s broader privacy reforms under Law 25, reinforcing the principle that necessity—not convenience—should guide data collection during hiring.