Week of 2025-06-09

OpenAI slams court order to save all ChatGPT logs, including deleted chats

Ashley Belanger | Ars Technica

OpenAI has revealed that a U.S. court has ordered it to preserve all ChatGPT-generated logs—including deleted conversations—for an indefinite period as part of The New York Times' copyright lawsuit. Previously, deleted chats were purged after 30 days; this order means even user-deleted data will now be retained. OpenAI argues the directive is a “sweeping overreach” that violates its privacy norms and undermines user trust, and it plans to appeal. CEO Sam Altman has also floated the concept of “AI privilege,” likening private AI conversations to confidential communications with doctors or lawyers. The outcome of this dispute could set a landmark precedent affecting privacy expectations in AI systems.

Generative AI: Updated global guide to key IP considerations

Jonathan Ball | Felicia Boyd | Justin Davidson | Georgina Hey | Jurriaan Jansen | Mike Knapper | Frank Liu | Maya Medeiros | Clemens Rübel | Allison Williams | Jeremiah Chew | Jasper Geerdes | Ronak Kalhor-Witzel | Amy King | Clément Monnet | Norton Rose Fulbright

The updated global guide outlines key intellectual property (IP) considerations arising from the use of generative AI across jurisdictions including Canada, the EU, and the United States. It highlights the legal risks of training AI models on copyrighted materials, particularly when outputs resemble third-party works, and notes that deployers—not just developers—may be liable. The report also addresses the uncertainty surrounding IP protection for AI-generated outputs, emphasizing that standards of originality vary by country. Importantly, it recommends assessing infringement risks at every stage of AI deployment—from data collection to model use—and adjusting IP strategies accordingly.

Google Canada announces $13 Million AI Opportunity Fund to strengthen Canada's AI workforce

NewsWire

Google Canada has announced a $13 million AI Opportunity Fund to support four Canadian organizations—Alberta Machine Intelligence Institute, First Nations Technology Council, Skills for Change, and the Toronto Public Library—in delivering AI skills training to over two million Canadians. The initiative targets diverse groups including post-secondary students, Indigenous youth, individuals facing unemployment, and communities impacted by the digital divide. It builds on Google.org’s previous investments in digital upskilling and aims to prepare Canadians for an economy increasingly shaped by generative AI. By focusing on inclusive access to AI literacy and hands-on training, the fund seeks to broaden participation in Canada’s AI-driven future.

Brookfield to invest up to US$10B in AI data centre in Sweden

Catherine McIntyre | The Logic

Brookfield Asset Management has announced plans to invest up to US$10 billion (approximately 95 billion SEK) to build a massive AI-focused data centre in Strängnäs, Sweden, spanning 3.77 million square feet. The project will increase capacity from 300 MW to 750 MW and is expected to take 10 to 15 years to complete. It is part of Brookfield's broader European push—following a €20 billion investment in France—and reflects growing demand for AI infrastructure from major tech and investment firms. The development is projected to create over 1,000 permanent jobs and 2,000 construction roles and has drawn support from Swedish leadership eager to boost the country’s digital ecosystem. With Sweden already a hub for data centres thanks to reliable electricity and infrastructure, this investment positions the region as a critical player in powering future AI applications.

Japan passes innovation-focused AI governance bill

Caitlin Andrews | IAPP

Japan’s legislature has approved the Act on the Promotion of Research and Development and the Utilization of AI‑Related Technologies, signaling a national commitment to fostering AI innovation while acknowledging potential risks. The law establishes guiding principles and a Cabinet-level AI Strategy Headquarters, but skips enforceable restrictions or penalties, favoring voluntary industry cooperation. It emphasizes transparency, international collaboration, and provides the government authority to issue guidance and disclose misuse—without imposing sanctions. A supplementary resolution addresses concerns like deepfakes and data misuse in sensitive sectors such as health and defense. Japan’s approach contrasts with the EU’s risk-based model, choosing light‑touch governance aimed at encouraging investment and development.

Clearview in the Alberta courts: privacy laws infringe freedom of expression, but mass data scraping for AI facial recognition is still not on the table

Mavra Choudhry | Lauren Nickerson | Julie Himo | Nic Wall | Jon Silver | Gabrielle da Silva | Torys

A recent decision by Alberta’s Court of King’s Bench in Clearview AI Inc v Alberta (Information and Privacy Commissioner) addressed key questions about jurisdiction and the interpretation of “publicly available information” under the province’s Personal Information Protection Act (PIPA). The court confirmed that PIPA applies to out-of-province companies like Clearview AI when there is a “real and substantial connection” to Alberta. It also ruled that the statutory definition of “publicly available” was too narrow and infringed on freedom of expression, striking some language while preserving the principle that data collection must serve reasonable purposes. Despite this partial victory for Clearview, the court upheld the Privacy Commissioner’s order requiring the company to stop operating in Alberta and delete all collected images. The ruling strengthens privacy rights while clarifying that even publicly posted personal data is subject to consent and purpose-based limitations in Canada.

Ada Lovelace Institute questions legality of facial recognition in UK

Masha Borak | Biometric Update

The Ada Lovelace Institute asserts that the UK’s current “diffused” model for regulating biometric and facial recognition technologies is insufficient and creates a concerning legal “grey area.” They argue that scattered guidance—covering both police and private-sector uses—fails to meet mandatory legal standards, leaving individuals vulnerable and undermining public trust. The Institute recommends adopting a comprehensive, risk-based regulatory framework akin to the EU AI Act, including clear definitions, transparency obligations, efficacy and bias testing, and tiered safeguards depending on application risk. They further urge the establishment of an independent biometric regulator empowered to develop and enforce codes of practice across sectors. This move is seen as critical, as increasing deployments—including live facial recognition in public spaces, retail outlets, and schools—outpace current oversight and raise serious privacy and ethical concerns.

Canadian lawmakers consider age verification, estimation for online porn access

Chris Burt | Biometric Update

Canadian lawmakers have reintroduced Bill S-209 (formerly S‑210), aiming to mandate age verification or age estimation systems to restrict online pornography access for users under 18. The bill outlines that these age checks must be performed by “arm’s length” third parties, collect only minimal personal data necessary for verification, and securely destroy the data once verification is completed. It empowers the government to set required methods and create defenses for compliant organizations, while the federal privacy regulator is preparing guidance on handling sensitive biometric data. The legislation has cleared the Senate previously but stalled in the House; it’s now back before Parliament, reigniting debates over the balance between child protection, privacy, and digital rights.

Michael Geist: Here We Go Again: Internet Age Verification and Website Blocking Bill Reintroduced in the Senate (With Some Changes)

Michael Geist

Michael Geist's recent Substack post, Here We Go Again, highlights the reintroduction of the contentious internet age verification and website-blocking bill in the Canadian Senate, now aiming to mandate age checks and ISP-level site blocking. The bill would require platforms to verify users’ ages—potentially using invasive technologies—and impose significant liabilities for non-compliance. Critics warn it’s overly broad, potentially targeting mainstream services like search engines and social media, while raising serious privacy and free expression concerns. Additionally, the use of filibuster tactics by Conservative senators has blocked expert testimony, preventing meaningful debate before progression to the House.

EU is set to launch an age verification app - mandatory for accessing adult content

Petteri Pyyny | After Dawn

The European Union is set to launch a centralized age verification app by July 2025 as part of its efforts to enforce age checks on adult websites under the Digital Services Act (DSA) and related regulations. The app enables users to confirm they are over 18 without having to share personal data with each website. It serves as an interim solution until the full EU Digital Identity Wallet is rolled out by the end of 2026. The move emphasizes protecting minors while minimizing privacy risks, and the European Commission has launched investigations into platforms like Pornhub, Stripchat, XNXX, and XVideos over inadequate age verification measures.

No One Knows How to Deal With 'Student-on-Student' AI CSAM

Emanuel Maiberg | 404 Media

A Stanford Cyber Policy Center report reveals that schools, parents, police, and legal systems are unprepared for the alarming rise of minors using AI to generate child sexual abuse material (CSAM) featuring their peers. The ease of access to "nudify" apps, available via mainstream app stores and social media, has normalized the creation and sharing of such content among students. Although 38 U.S. states have laws addressing AI CSAM and the federal Take It Down Act targets these images, existing legislation often overlooks cases where minors are both creators and victims. The report recommends schools adopt comprehensive responses, including crisis counseling, anonymous reporting mechanisms, clear communication strategies, and ensuring incidents are treated as mandatory reports of sexual abuse.

Texas puts age verification on app stores despite Apple, Google pushback

Joel R. McConvey | Biometric Update

Texas has passed the App Store Accountability Act, requiring app stores like Apple’s and Google Play to verify users' ages and obtain parental consent before minors can download apps or make in‑app purchases—effective January 1, 2026. Apple and Google have strongly opposed the move, arguing it forces them to collect sensitive personal data for all users, even for benign apps like weather or sports, prompting concerns about privacy risks. Apple CEO Tim Cook personally contacted Governor Abbott urging a veto, but the bill passed with veto-proof support and has garnered backing from Facebook and child safety advocates. The law follows Utah’s lead and is part of a larger national trend, with North Dakota and Louisiana among the states considering similar protections. While supporters view it as essential to protect children, critics warn of First Amendment implications and note that savvy teens may bypass age checks.

Consumer Reports investigation uncovers Kroger’s widespread data collection of loyalty program members to create secret shopper profiles

Consumer Reports

Consumer Reports has revealed that Kroger’s loyalty program extensively gathers and analyzes customer data to create detailed “secret shopper” profiles, which are then sold to more than 50 third-party companies. These profiles include demographic and behavioral estimates—such as income, education, and loyalty status—that can be inaccurate and may influence who receives the best discounts. For example, one customer in Oregon discovered his profile mischaracterized him as a high school-educated woman with a modest income, a misclassification that could impact his access to savings. Although Kroger claims it uses data primarily to offer relevant deals, critics say the practice raises ethical concerns around “surveillance pricing” and call for state and federal privacy regulators to intervene.

Wyden Exposes Which Phone Carriers Don't Notify Customers About Government Surveillance

Jessica Corbett | Common Dreams

U.S. Senator Ron Wyden revealed that major carriers—AT&T, Verizon, and T-Mobile—failed to notify Senate offices when government surveillance requests were made on senators' phone lines, despite contractual obligations. A staff investigation found that while AT&T and Verizon now notify for Senate-funded lines, T-Mobile extends this to personally flagged lines, and smaller carriers like Google Fi, U.S. Mobile, and Cape proactively inform all customers. These notification gaps pose risks not only for lawmakers but also for journalists, activists, and ordinary Americans who could be targeted without their knowledge. Wyden is urging legal reforms to empower the Senate Sergeant at Arms to safeguard communications across official, campaign, and personal lines. The revelations have ignited wider concerns about transparency, oversight, and the chilling effects of secret surveillance on democratic processes.

Smart Home Devices and Privacy: What Are the Risks?

Lawrence Powell | Medium

Smart home devices offer undeniable convenience—everything from voice-controlled lights to smart locks—but they also introduce significant privacy and security risks. These devices continuously collect sensitive data like voice recordings, video footage, and usage patterns, which can be exploited if unauthorized parties gain access. Additionally, manufacturers and third-party services often harvest and share personal information, sometimes beyond what users expect or consent to. Cybersecurity vulnerabilities, such as weak default passwords or unpatched firmware, may expose homes to hacking, surveillance, or data theft. To mitigate these risks, users should employ strong, unique passwords, enable two-factor authentication, keep devices updated, and consider isolating smart devices on separate networks.

Trump Taps Palantir to Compile Data on Americans

Sheera Frenkel | Aaron Krolik | New York Times

Palantir is reportedly being tapped by the Trump administration to build a unified federal database of Americans’ personal data, fulfilling a March executive order to enhance inter-agency information sharing. The system uses Palantir’s Foundry platform to pull data from agencies like DHS, HHS, SSA, and the IRS, raising concerns about massive surveillance capabilities tied to political use. Palantir has strongly denied wrongdoing, stating it does not collect data unlawfully and serves merely as a software provider, not a data controller. Still, its defensive stance—including limited press access at events—and its denial campaign have drawn media and public scrutiny. The implications of this project could set profound precedents for privacy, government technology, and civil liberties in the U.S.

Thousands of Asus routers are being hit with stealthy, persistent backdoors

Dan Goodin | Ars Technica

Thousands of Asus home and small office routers have been compromised with a stealthy, persistent backdoor that remains active even after reboots or firmware updates. The attackers exploit a known vulnerability (CVE‑2023‑39780) and authentication bypass techniques to enable SSH access on a non-standard port (TCP/53282), implant their public SSH key into the router’s non‑volatile memory, and disable logging to avoid detection. GreyNoise’s AI-powered analysis tool uncovered the campaign—dubbed "ViciousTrap"—which is likely the work of a sophisticated actor laying groundwork for a potential botnet. As of late May 2025, nearly 9,000 devices were confirmed infected, with numbers steadily increasing. Users are urged to check for unwanted SSH access, remove unauthorized keys, perform factory resets, update firmware, and block known malicious IP addresses.
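The key-audit step urged above can be sketched as a small shell check. This is a hedged illustration, not an official Asus or GreyNoise tool: the exported `authorized_keys` file, the allow-list format, and the TCP/53282 port check are assumptions based on the reporting; a factory reset and firmware update remain the authoritative fixes.

```shell
#!/bin/sh
# Hedged sketch: audit an authorized_keys file exported from a router for
# SSH keys the owner did not add. The reported campaign plants the
# attackers' public key in non-volatile memory and listens on TCP/53282.

check_keys() {
  keys_file="$1"   # authorized_keys dump exported from the router
  allow_file="$2"  # one known-good key blob (the base64 part) per line
  # Each authorized_keys line is: <type> <base64 blob> [comment]
  while read -r _type key comment; do
    [ -n "$key" ] || continue
    # Flag any key whose blob does not appear in the allow-list.
    grep -qF "$key" "$allow_file" || echo "UNEXPECTED KEY: ${comment:-<no comment>}"
  done < "$keys_file"
}

# A listener on the campaign's non-standard SSH port is another red flag,
# e.g.:  netstat -an | grep ':53282'
```

Run against a config backup or an SSH session's `authorized_keys` dump; any line the script flags that you cannot account for warrants the full remediation the article describes.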

Data broker giant LexisNexis says breach exposed personal information of over 364,000 people

Zack Whittaker | Tech Crunch

LexisNexis Risk Solutions, a major U.S.-based data broker, disclosed a security breach that exposed sensitive personal information of over 364,000 individuals. The incident originated from unauthorized access to a company GitHub account dating back to December 25, 2024, but wasn't discovered until April 1, 2025. Exposed data included names, dates of birth, contact info, Social Security numbers, driver’s license details, and postal addresses—though no financial or credit card information was compromised. LexisNexis has notified affected individuals, involved law enforcement, and is offering two years of identity protection and credit monitoring. The breach has reignited criticism of data brokers, highlighting their vulnerability and the broader need for stricter oversight in handling personal data.

Inside the Kingdom’s digitally powered vision for Hajj

Nada Hameed | Arab News

Saudi Arabia is deploying one of the world’s most advanced tech-driven logistical systems to manage the 2025 Hajj, with the Ministry of Hajj and Umrah at the helm. Key tools include AI-powered crowd monitoring, the enhanced Nusuk app and smart cards, and the Makkah Route program, which streamlines pre-departure processing in eight countries. Over 1.4 million pilgrims are using the Nusuk ecosystem, which integrates services like navigation, health monitoring, and multilingual support. The campaign “No Permit, No Hajj” ensures all participants are verified and supported, while 167,000 trained personnel provide multilingual assistance. These efforts reflect Vision 2030 goals to make Hajj safer, more inclusive, and digitally efficient, with pilgrim satisfaction rising to 81% in 2024.

No Phone Home Is the Privacy Rebellion Digital IDs Didn’t See Coming

Ken Macon | Reclaim The Net

The No Phone Home campaign is a new privacy-first initiative challenging the quiet expansion of mobile driver’s licenses and digital ID systems that could enable government tracking. The campaign warns that these IDs—based on international standard ISO/IEC 18013-5:2021—are built with server-side communication capabilities, allowing data to be sent back to issuing authorities, even if that function is turned off by default. Advocates argue this design flaw opens the door to surveillance, especially as digital IDs expand into areas like online age verification, as seen in Louisiana's use of LA Wallet. The coalition calls for digital credentials to be as privacy-preserving as physical ones, emphasizing that voluntary policy promises are no substitute for technical safeguards. With more states and countries adopting these systems, the campaign urges immediate action to protect civil liberties before surveillance becomes the norm.

Strong Borders Act: A Landmark Shift in Canada’s AML Penalties

Jacqueline D. Shinfield | Vladmir Shatiryan | Ora Morison | Blakes

Canada’s newly proposed Strong Borders Act (Bill C-2) significantly overhauls the country's anti-money laundering regime, introducing a steep new penalty framework under the Proceeds of Crime (Money Laundering) and Terrorist Financing Act (PCMLTFA). Maximum administrative fines for noncompliance would increase fortyfold, reaching up to C$20 million for “very serious” violations, while also expanding the definition of such violations to include subjective assessments of a program’s effectiveness. The Bill imposes a mandatory enrolment system for all reporting entities with FINTRAC, empowers the regulator to revoke enrolment for non-payment of penalties, and introduces compliance orders carrying additional fines up to 3% of global revenues. It further criminalizes large cash transactions over C$10,000 across most sectors and permits law enforcement to share personal data with reporting entities without the individual’s knowledge or consent. These sweeping measures significantly enhance FINTRAC’s oversight powers and raise new privacy and procedural concerns regarding regulatory discretion and data sharing.

CARPAY: That didn't take long... Liberal crusade against cash begins

John Carpay | Western Standard

John Carpay argues that Bill C‑2, the Liberals' so‑called “Strong Borders Act,” functions less as a security measure and more as a sweeping surveillance tool. It not only restricts cash transactions over $10,000—effectively criminalizing large cash payments and donations—but also expands law enforcement’s power to intercept mail and compel digital service providers to hand over user data without judicial oversight. Carpay warns this encroaches on privacy rights and sets a dangerous precedent for progressively lower cash thresholds. Despite assurances by Public Safety Minister Gary Anandasangaree that these measures are Charter‑compliant, critics say they undermine fundamental freedoms and pave the way toward a surveillance state.

Brazil’s dWallet program will let citizens cash in on their data

Gabriel Daros | Rest of World

Brazil is piloting dWallet, a digital “data savings account” that allows citizens to sell their personal data for immediate financial return. The initiative, led by state-owned Dataprev and U.S.-based DrumWave, enables users to profit from data generated through activities like payroll loan applications. Proponents argue that this approach could democratize the digital economy, fostering financial inclusion and giving individuals greater control over their data. However, critics warn that the model risks deepening inequalities by targeting less digitally literate or economically vulnerable populations, and may normalize the commodification of privacy. The pilot is part of a broader movement in Brazil to legally recognize personal data as a form of property.

Mila Welcomes Second Cohort of Indigenous Pathfinders in AI

Mila

Mila, in partnership with Indspire, has launched the second cohort of its Indigenous Pathfinders in AI program, bringing together 21 Indigenous participants from across Canada for an intensive summer initiative beginning May 29, 2025. The initiative combines hands-on workshops and collaborative, community-focused projects that empower participants to apply AI tools to real-world challenges within their communities. It is designed to amplify Indigenous perspectives in technology, nurture a national network of Indigenous AI professionals, and foster knowledge exchange between AI experts and Indigenous talent. This marks a significant effort to ensure First Nations, Métis, and Inuit voices play a leading role in shaping responsible, culturally informed AI.

EU is proposing a new mass surveillance law and is asking the public for feedback

Europa

The European Commission has launched a public consultation and impact assessment aimed at establishing a harmonized framework for the retention of electronic communications data by service providers in support of criminal investigations across the EU. This effort follows the annulment of the 2006 Data Retention Directive by the European Court of Justice, which found the blanket data collection incompatible with EU privacy rights. The new initiative emphasizes a proportionate and necessity-based approach that respects fundamental rights under the EU Charter, incorporating strict limitations and procedural safeguards. It reflects recommendations from expert groups advocating clear oversight, consistent standards across Member States, and enhanced cooperation between law enforcement, service providers, and data protection authorities. The overarching goal is to balance effective access to digital evidence with strong privacy protections.

Hell No: The ODNI Wants to Make it Easier for the Government to Buy Your Data Without Warrant

Matthew Guariglia | EFF

The U.S. Office of the Director of National Intelligence (ODNI) is reportedly planning to create a centralized “Data Consortium” marketplace that would allow federal agencies—including law enforcement and intelligence—to purchase highly sensitive personal data from commercial brokers without a warrant. This data may include granular geolocation, behavioral patterns, and contact logs harvested from smartphone apps, bypassing traditional Fourth Amendment protections. Privacy advocates and some states are pushing back, supporting legislation like Senator Ron Wyden’s Fourth Amendment Is Not for Sale Act and various state-level bans to prevent government warrantless purchases of user data. The ODNI’s reliance on commercial availability as a justification raises significant concerns, especially given its own advisory panel’s findings that such data can be deanonymized and reveal deeply personal information. The debate highlights a growing recognition that stronger legal safeguards are needed to close this surveillance loophole and uphold digital privacy.

Texas police used nationwide license plate reader network to track woman who had self-managed abortion

Josh Marcus | MSN

On May 9, the Johnson County Sheriff’s Office in Texas used the Flock network—encompassing over 83,000 license-plate reader cameras nationwide—to track a woman who had a self-managed abortion. While officials claim the search aimed to ensure her medical safety (“worried that she was going to bleed to death”), the investigation extended into states like Washington and Illinois, where abortion remains legal. The incident highlights how private-public surveillance networks like Flock can be used extraterritorially by law enforcement, bypassing traditional jurisdictional and warrant requirements. Privacy and reproductive rights advocates warn that this sets a dangerous precedent, enabling potential legal actions against people seeking abortions or other sensitive personal movements.

ICE appears to now be illegally using Flock cameras to carry out arrests

William Jackson | SAN

U.S. Immigration and Customs Enforcement (ICE) obtained informal, indirect access to Flock Safety’s national license plate reader (LPR) network by requesting searches through local and state law enforcement agencies. Between June 2024 and May 2025, over 4,000 queries were logged for immigration-related reasons—such as "ICE," "ICE+ERO," and "illegal immigration"—despite the absence of any formal contract between ICE and Flock Safety. This practice directly conflicts with Illinois state law and Flock’s own policies, especially concerning the use of LPR data for immigration enforcement. Privacy advocates warn that this “backdoor” access creates a robust surveillance infrastructure with minimal oversight and exposes participants to significant legal and constitutional risks. The revelations have sparked debates around the legality and ethics of using mass surveillance tools for immigration control rather than specific criminal investigations.

Milwaukee to expand Flock Safety ALPR footprint

Christina Van Zelst | Fox

Milwaukee’s city council recently postponed a decision to install three additional Flock Safety license plate reader cameras on the south side following a heated debate during a May 29 public meeting. Supporters, including local businesses, say the cameras have helped reduce vehicle thefts and aid crime-solving efforts, but opponents and privacy advocates raised alarm over unchecked surveillance and potential misuse. The ACLU of Wisconsin and community members questioned who would have access to the data, how it’s stored, and whether third-party agencies like ICE could exploit the system. Critics also highlighted the need for transparency and public consultation before approving surveillance contracts. As concerns about mass tracking grow, residents are calling for stricter oversight, clear policies on data access, and safeguards against privacy violations.

FBI Wants Access to Encrypted iPhone And Android Data—So Does Europe

Zak Doffman | Forbes

The FBI has renewed its push for “lawful access” to fully encrypted data on iPhones and Android devices, arguing that strong end-to-end encryption hampers criminal investigations. It seeks mechanisms—likely through legislation—that would compel companies to provide decryption access under court order, echoing past encryption battles between law enforcement and tech firms. Meanwhile, European governments are exploring similar policies, weighing the importance of encryption against investigative needs. Critics contend that creating mandated access tools inherently weakens device security and privacy for all users, warning of increased risk from both malicious actors and state surveillance.

Proposed Canadian spy bill "SAAIA" grants government warrantless access to online communications and mail

Government of Canada

The Strong Borders Act (Bill C‑2), introduced on June 3, 2025, broadly upgrades Canada’s border, immigration, and national security regimes—expanding authorities across the Coast Guard, CBSA, RCMP, and IRCC to target organized crime, fentanyl trafficking, and money laundering while emphasizing Charter‑compliant implementation. It includes measures to pause, cancel, or withdraw immigration documents and overhaul asylum claim processing, while enhancing information‑sharing among federal, provincial, and international agencies. Simultaneously, the Act intensifies AML enforcement via strengthened FINTRAC oversight, higher penalties, and limits on large cash and third‑party deposits. Critics raise concerns around privacy and civil liberties, warning that expanded surveillance, mail‑inspection powers, and mandatory data access requirements may erode due‑process protections and refugee rights.

The proposed Strong Borders Act gives police new invasive search powers that may breach Charter rights

Robert Diab | The Conversation

The Conversation highlights that the proposed Strong Borders Act would grant Canadian police and border officials sweeping new powers to search digital devices, vehicles, and goods—often without a warrant or reasonable suspicion. These expanded search authorities, particularly in “urgent” or “exigent” circumstances, raise serious concerns that typical safeguards under the Charter could be bypassed. Legal experts caution this could set a troubling precedent, eroding judicial oversight and weakening the protections that guard privacy against arbitrary state intrusion. The piece underscores the need for tighter definitions, oversight, and accountability mechanisms to ensure any expanded authority remains proportional, narrowly tailored, and constitutionally sound.

Border security bill would give law enforcement access to internet subscriber information without a warrant

Marie Woolf | The Globe and Mail

The Globe and Mail reports that Bill C‑2, the Strong Borders Act, would enable CSIS, police, and other federal law enforcement to demand internet subscriber information—such as the user’s identity, service provider, and location details—without obtaining a warrant. This marks a significant departure from a 2014 Supreme Court ruling that requires judicial authorization before accessing such data. Civil liberties advocates describe the bill as a “Trojan horse” for warrantless digital surveillance across Canada, disproportionately expanding state reach into everyday internet usage. While the government defends the measures as necessary for border security and urgent investigations, experts warn this could erode Charter protections regarding digital privacy and set a troubling precedent for future surveillance powers.

New border bill would give authorities sweeping new security powers

Abigail Bimman | Luca Caruso-Moro | CTV News

Canada’s proposed Strong Borders Act (Bill C‑2), unveiled June 3, 2025, would empower law enforcement to access internet subscriber information and search mail without a warrant in urgent scenarios, significantly expanding surveillance powers beyond current judicial oversight. It would lower the threshold for warrantless searches by police and Canada Post, while also allowing digital device and vehicle inspections during border operations. The bill restricts cash transactions over C$10,000 and tightens non-banks' ability to accept third-party deposits—measures aimed at combating money laundering. Critics argue these changes risk eroding Charter protections, privacy rights, and due process under the guise of urgent law-and-border enforcement needs.

Privacy At Risk: Government Buries Lawful Access Provisions in New Border Bill

Michael Geist

The Michael Geist article warns that buried within Bill C‑2—the Strong Borders Act—are sweeping "lawful access" measures that extend far beyond border security. These provisions would empower law enforcement to demand detailed internet subscriber information without a warrant, including IP addresses, device identifiers, service timelines, and app usage. The legislation also authorizes “global production orders,” enabling access to data held anywhere by electronic service providers, and grants legal immunity to providers who comply—including voluntary disclosures. Geist cautions that this marks the revival of decades-long efforts to enact warrantless surveillance in Canada, undermining Supreme Court protections and likely prompting constitutional challenges.

Survey shows Gmail users would gladly sacrifice features for more privacy

Matt Horne | Android Authority

A recent Android Authority poll found that about 73% of respondents would switch from Gmail to Proton Mail for enhanced privacy, with over half indicating they'd be willing to pay for it. However, adopting Proton Mail means accepting trade-offs, including needing both sender and recipient on the platform for end-to-end encryption, limited free storage (1 GB), and fewer productivity features compared to Gmail. Hands-on testing also revealed app shortcomings—such as basic text formatting limitations and missing sender visuals—which led some users, including the article’s author, to return to Gmail. While privacy advocates champion Proton’s zero-access encryption and protection from data scanning, some users question whether the benefits are worth the usability and cost compromises.

Developer Builds Tool That Scrapes YouTube Comments, Uses AI to Predict Where Users Live

Matthew Gault | 404 Media

A developer has created YouTube‑Tools, a subscription-based ($20/month) web service that scrapes public YouTube comments and uses AI to infer personal details—like location, languages spoken, and political views—from users’ comment histories. Though advertised for law enforcement use, anyone can subscribe with just an email and credit card, raising serious privacy and anonymity concerns. The AI-generated summaries purportedly flag identifying traits in seconds, making it easy for harassers or investigators to profile individuals across video comment sections. Critics also point out that the tool likely breaches YouTube's scraping policies and highlights a broader issue with public data being repurposed in invasive ways.

Huge, self-driving trucks roll onto Canada’s most treacherous roads

Anita Balakrishnan | The Logic

Canada’s logging industry is pushing forward with autonomous truck technology, deploying rugged AI-equipped semi-trucks on remote forestry roads once deemed inaccessible. Startups like Toronto-based NuPort Robotics and Kratos—with backing from FPInnovations and others—are testing “platooning” systems where a human-driven lead truck is followed by AI-controlled vehicles navigating complex terrain using lidar, radar, cameras, and V2V communication. These trials aim to boost efficiency, mitigate chronic driver shortages, and lower transport costs that now represent up to 40% of wood product expenses. While proponents highlight improved safety and supply-chain resilience, critics—including truckers and unions—warn of job losses and question whether driver-led expertise and manual load handling can ever be fully replicated.

US lawmakers find bipartisanship in opposition to UK's order on Apple encryption back door

Joe Duball | IAPP

U.S. lawmakers from both parties are voicing unified concern over the UK government’s secret “Technical Capability Notice” ordering Apple to build a backdoor into its encrypted iCloud Advanced Data Protection service. During a June 5 House Judiciary subcommittee hearing, lawmakers warned this demand threatens encryption standards worldwide, weakening user security and potentially enabling cybercriminals, authoritarian regimes, or other governments to exploit these vulnerabilities. There’s bipartisan support for revisiting the CLOUD Act and potentially renegotiating U.S.–UK data-sharing agreements to prevent future orders that undermine encryption. Meanwhile, Apple, WhatsApp, and privacy advocates continue their legal challenges in the UK, arguing that such backdoor access contravenes privacy rights and international cybersecurity norms.

Meta plans to replace humans with AI to assess risks

Bobby Allyn | Shannon Bond | NPR

Meta is reportedly planning to automate up to 90% of its internal risk assessments—covering areas like youth safety, AI risks, and content integrity—using AI instead of human reviewers. Internal documents obtained by NPR suggest this shift could bypass the deeper scrutiny previously provided by staff, increasing the likelihood that harmful features or updates slip through. Meta defends the move as a way to accelerate product development and maintain human oversight on genuinely novel or complex issues, though critics caution it may compromise user safety. Former employees warn that the reliance on AI could downgrade rigorous, thoughtful evaluation into a “box-checking exercise,” weakening protections against real-world risks.

Quebec goes to war with social media

Martin Patriquin | The Logic

Quebec’s all-party Special Commission on the Impacts of Screens and Social Media has recommended banning children under 14 from accessing social media without parental consent, alongside expanding an existing cellphone ban in schools province-wide. The recommendation reflects widespread concern: nearly 90% of Quebecers, including three-quarters of teens, support age restrictions, viewing social media as a public health issue akin to historical regulations on unsafe technologies. Commissioners highlighted risks from unregulated exposure—like addictive algorithms, mental-health harms, and cyberbullying—citing internal Meta documents showing platforms exacerbate youth anxiety. While global peers like France and Australia set limits at 15–16, Quebec is considering a lower threshold and stronger parental oversight; however, critics question how digitally savvy kids might circumvent the rules and call for complementary education on safe online behaviors.

Navigating employee sick leave and medical documentation

Catherine Hamill | Erika Romanow | Osler

Employers in Canada are facing tighter restrictions on the medical information they can request when employees take sick leave. Employers generally cannot demand detailed diagnoses, symptoms, or medical histories; instead, they may only inquire about fitness to work and expected duration. Recent provincial legislation—such as Ontario’s Working for Workers Five Act—now bars employers from requesting medical notes for short-term absences (e.g., up to three days), accepting self-certification or other reasonable proof instead. Osler advises that while documentation may still be needed for longer-term illness or accommodation, employers should focus on an employee’s functional abilities and maintain clear, consistent policies to protect privacy, support compliance, and ensure fairness.

VC money is fueling a global boom in worker surveillance tech

Gayathri Vaidyanathan | Rest of World

Venture capital is fueling a rapid surge in global “bossware” and algorithmic workplace surveillance tools, particularly across Africa, Latin America, and Asia, where regulatory oversight is often weak. Startups are deploying biometric tracking, keystroke monitoring, predictive analytics, and camera-enabled productivity systems to manage gig workers, factory staff, and office employees—generating stress and suspicion among the workforce. These tools are typically debuted in low-regulation markets before expanding elsewhere, even as many workers remain unaware of the extent of the monitoring. Although employers often tout benefits like enhanced efficiency and security, privacy advocates warn these systems degrade autonomy, foster distrust, and disproportionately impact vulnerable communities.

Amazon Prepares to Test Humanoid Robots for Delivering Packages

Rocket Drew | The Information

Amazon is reportedly preparing to test humanoid robots for last‑mile package delivery, developing its own AI software to power off‑the‑shelf robot bodies. The company is building a "humanoid park"—an indoor obstacle course in San Francisco—to simulate real‑world environments, including loading into Rivian electric vans and navigating doorsteps. These robots would “spring out” of vans during deliveries, working alongside human drivers to potentially improve drop‑off efficiency. While the initiative reflects Amazon’s broader push toward automation—ranging from warehouse robots to drone deliveries—experts caution that reliably handling unpredictable environments (like homes with pets or stairs) remains a key challenge.
