Week of 2025-05-19

Contracts kept secret despite call from privacy czar to publish

Joanna Frketich | Hamilton Spectator

The City of Hamilton is facing criticism for withholding key municipal contracts from public view, despite a call from Ontario's Information and Privacy Commissioner (IPC) to publish them. The contracts, which involve significant public expenditures, remain undisclosed, raising concerns about accountability and adherence to provincial transparency standards. The IPC has emphasized that public access to such documents is essential to government accountability and public trust. Hamilton's reluctance to release the contracts has sparked debate over the balance between commercial confidentiality and the public's right to information, and highlighted the need for clearer disclosure policies.

Alberta government violated freedom of information laws: information watchdog finds

Tom Cardoso | Robyn Doolittle | The Globe and Mail

Alberta's Information and Privacy Commissioner has determined that the provincial government violated freedom of information laws by improperly withholding records from public access. The ruling highlights a pattern of non-compliance with transparency obligations, raising concerns about accountability within Alberta's public institutions. This decision underscores the importance of adhering to freedom of information legislation to maintain public trust and ensure governmental transparency.

Former journalist Evan Solomon named first-ever federal AI minister

Anja Karadeglija | CTV News

Evan Solomon, a former journalist and political commentator, has been appointed as Canada's first Minister of Artificial Intelligence and Digital Innovation. Sworn in on May 13, 2025, under Prime Minister Mark Carney's administration, Solomon's appointment signifies the government's commitment to prioritizing AI and digital technologies at the federal level. Before entering politics, Solomon had a distinguished media career, including hosting roles on CBC and CTV, and serving as publisher of GZERO Media. He was elected as the Member of Parliament for Toronto Centre in April 2025. The creation of this ministerial position reflects Canada's ambition to lead in AI development and regulation, addressing ethical concerns and fostering digital innovation.

Zuckerberg’s new Meta AI app gets personal in a very creepy way

Geoffrey A. Fowler | Washington Post

Meta's newly launched AI chatbot app, Meta AI, has drawn significant privacy concerns due to its extensive data collection and retention practices. By default, the app records and stores all user interactions, creating detailed "Memory" files that include sensitive personal information, such as interests in topics like fertility techniques, divorce, and financial matters. This data is utilized to personalize responses, train future AI models, and potentially inform targeted advertising, with no current option for users to opt out of this data collection. Although users can delete their chat history and memory files, the process is complex and may not guarantee complete data removal. Unlike competitors like ChatGPT and Google's Gemini, which offer more transparent privacy controls, Meta AI's integration with Facebook and Instagram data amplifies concerns about user profiling and the potential for misuse of personal information.

AI Act deadline missed as EU GPAI Code delayed until August

Richard Barker | Slaughter and May

The European Union has missed its May 2, 2025, deadline to finalize the General-Purpose AI (GPAI) Code of Practice, a key guidance document under the AI Act. The GPAI Code is meant to help AI providers comply with obligations related to transparency, copyright, and safety, but its release was delayed due to ongoing consultations and stakeholder feedback. The final version is now expected by August 2025, when GPAI-related provisions of the AI Act come into force. If the voluntary code is not ready or is deemed insufficient, the European Commission may issue binding rules instead. The delay highlights the challenges of crafting comprehensive AI regulation while balancing innovation, legal clarity, and fundamental rights.

‘Hack the Bias in AI’: Hackergal founder says diversity in tech roles needed to reduce ‘harmful’ AI bias

Dominik Kurek | Brampton Guardian

Lucy Ho, founder of the Canadian nonprofit Hackergal, emphasizes that increasing diversity in tech roles is essential to reducing bias in artificial intelligence systems. Through initiatives like Hackergal's annual "Hack the Bias in AI" hackathon, girls and gender-diverse students across Canada engage in coding projects that address issues such as gender and racial bias in AI technologies. Ho's advocacy is informed by her own experiences in tech startups, where she often found herself as the only woman or person of color, highlighting the need for more inclusive representation in the industry. Research supports this approach; a recent study found that gender-diverse AI development teams produce higher-quality code and more robust systems. By fostering inclusivity, programs like Hackergal aim to create AI technologies that better reflect and serve diverse communities.

Proposed federal moratorium on US state-level AI regulation passes House committee

Jedidiah Bracy | IAPP

A U.S. House committee has advanced a Republican-backed proposal for a 10-year federal moratorium on state and local regulation of artificial intelligence, embedded within a broader budget bill. The measure would preempt state laws on issues like algorithmic bias, deepfakes, and automated decision-making, in favour of a uniform national approach. Proponents argue this is necessary to avoid regulatory fragmentation that could stifle innovation. However, the proposal faces bipartisan pushback from state attorneys general and lawmakers who say it undermines consumer protections and infringes on states' rights. Critics also warn the measure may violate constitutional principles and could be blocked under Senate rules governing budget-related legislation.

How different jurisdictions approach AI regulatory sandboxes

Richard Sentinella | IAPP

AI regulatory sandboxes are being adopted globally to facilitate innovation while ensuring compliance with emerging regulations. The European Union mandates each member state to establish or join an AI sandbox by August 2026, with Spain pioneering this initiative in 2022. The United Arab Emirates has operated its Regulations Lab since 2019, allowing companies to test AI technologies under temporary licenses to inform future legislation. Singapore has utilized sandboxes to explore generative AI applications, focusing on sector-specific innovations without offering regulatory relief. In the United States, Utah's 2025 Artificial Intelligence Policy Act established the Learning Lab, providing regulatory mitigation for AI developers, with similar proposals emerging in other states like Connecticut and Texas.

Toronto judge accuses lawyer of using AI and fake cases to make legal arguments

Joseph Brean | National Post

An Ontario Superior Court judge has accused a Toronto lawyer of using generative AI to prepare a factum that cited cases which either do not exist or do not stand for the propositions claimed. In Ko v. Li, Justice Fred Myers noted that the filing bore the hallmarks of AI "hallucination," including citations that led nowhere or to unrelated decisions, and ordered the lawyer to show cause why she should not be held in contempt. Myers stressed that responsibility for court submissions cannot be delegated to a chatbot, and that filing fabricated authorities is a serious breach of a lawyer's duty to the court. The incident joins a growing list of cases in Canada and abroad in which lawyers have submitted AI-invented citations, prompting courts and law societies to issue practice directions requiring that AI-assisted research be verified before filing.

Insurance firms want to cash in on the AI boom

Murad Hemmadi | The Logic

Toronto-based startup Armilla AI has introduced an insurance product designed to cover businesses for losses resulting from errors or malfunctions caused by artificial intelligence systems, particularly chatbots. The policy, underwritten by Lloyd's of London insurers including Chaucer, covers legal costs and damages if a company faces claims over an underperforming AI tool. The product responds to growing concern about inaccurate or unpredictable AI behaviour; incidents like Air Canada's chatbot inventing a discount and Virgin Money's bot reprimanding a customer for using the word "virgin" illustrate the reputational and financial harm at stake, and coverage of this kind could encourage wider AI adoption by mitigating risk. Unlike traditional technology errors and omissions insurance, Armilla's policy is tied specifically to AI performance: payouts are triggered when a system performs significantly below expected standards, and Armilla vets candidate systems before underwriting so that only reliable models qualify for coverage.

Dufresne announces OPC consultation on potential children's privacy code for Canada

Joe Duball | IAPP

Canada's Privacy Commissioner, Philippe Dufresne, has initiated a public consultation on the development of a federal children's privacy code, aiming to enhance protections for minors online. Announced at the IAPP Canada Privacy Symposium 2025, this "exploratory consultation" seeks input on key issues such as consent, privacy impact assessments, and data collection practices concerning children. The initiative responds to findings from the OPC's 2024–2025 Public Opinion Research, which revealed that 74% of parents have little or no trust in how companies handle their children's data. The proposed code draws inspiration from similar frameworks in the UK and Australia, reflecting a global trend towards stronger digital safeguards for children. The consultation is open until August 5, 2025, with results to be presented in the following months. 

Canada could be facing a disturbing new frontier in digital child sexual abuse

Loraine Centeno | Inside Halton

AI-generated child sexual abuse material (CSAM) is surging in Canada, with Cybertip.ca reporting a more than twofold increase in sexually explicit deepfakes of children between 2023 and 2024—and 4,000 more in the first quarter of 2025 alone. Offenders are now using generative AI to produce realistic abuse images from publicly shared photos, accelerating sextortion schemes and bypassing the need to manipulate or contact victims directly. These synthetic images are being sold and traded via the dark web, often through unregulated file hosting platforms or subscription-based services like Patreon and Fanvue, according to recent investigations. Experts warn that without comprehensive AI oversight, this technology will continue to be exploited, creating severe and lasting harm for victims. Authorities urge parents to secure social media accounts, educate children about digital risks, and call on governments and the tech industry to introduce strict regulations and accountability mechanisms for AI misuse.

Telus tops watchdog's telecom complaints ranking as overall gripes up 12%: report

Sammy Hudes | The Canadian Press

Telus has overtaken Rogers as Canada’s most complained-about telecom provider for the first time, according to the Commission for Complaints for Telecom-television Services’ latest mid-year report. Complaints rose 12% overall, with Telus accounting for nearly 20% of the 11,909 complaints filed between August 2024 and January 2025—a 63% year-over-year increase. The top issues involved incorrect charges, contract disputes, and unexpected price hikes, particularly in wireless and internet services. Telus acknowledged the surge and claims it has since reduced complaints by 20% through revised credit policies and clearer contracts. The CCTS emphasizes that customers should review their service agreements closely and encourages the use of its free dispute resolution services.

EU commissioner lays out suite of AI governance, consumer protection goals

Caitlin Andrews | IAPP

At the IAPP AI Governance Global Europe 2025 conference, EU Commissioner Michael McGrath outlined a robust AI and consumer protection agenda, including the proposed Digital Fairness Act (DFA). The DFA targets manipulative design practices like dark patterns and seeks to reduce regulatory burdens for SMEs, especially in digital transactions. McGrath also announced a “democracy shield” initiative to combat AI-driven misinformation and safeguard elections. A digital justice strategy is in development to improve efficiency and fairness in legal systems using AI. The EU will equip regulators with AI tools to enforce safety and compliance under the new AI Act.

Loblaw’s body camera pilot rolls out, raising privacy concerns

Alex Flood | Bay Today

Loblaw has initiated a pilot program deploying body-worn cameras on employees at select Shoppers Drug Mart and Loblaws locations in Toronto, aiming to enhance safety amid rising retail crime. The cameras, supplied by Axon, are activated only during incidents where staff feel threatened, with employees trained to notify customers when recording begins. However, privacy experts, including James Turk from Toronto Metropolitan University, express concerns that such measures may escalate tensions and question their necessity given existing in-store surveillance systems. Turk also highlights that reducing staff numbers in favor of technology may undermine traditional methods of conflict resolution. Loblaw maintains that the pilot is voluntary and part of broader efforts to create a safer shopping environment.

Airlines Are Collecting Your Data And Selling It To ICE

Katya Schwenk | The Lever

A recent investigation reveals that the Airlines Reporting Corporation (ARC), a U.S.-based aviation clearinghouse, is providing extensive passenger data to Immigration and Customs Enforcement (ICE) through a program known as the Travel Intelligence Program (TIP). This data includes full flight itineraries, passenger name records, and financial details from billions of past and future flights, offering ICE a comprehensive view of travelers' movements. The data is collected from transactions processed by ARC, which handles ticketing information from numerous travel agencies and airlines. Privacy experts express concern over the lack of transparency and potential civil liberties violations, as travelers are often unaware that their information is being shared with government agencies. This practice raises questions about data privacy and the need for clearer regulations governing the sharing of personal travel information with law enforcement.

Data compromise confirmed by Nova Scotia Power

SC Media

Nova Scotia Power has confirmed a significant data breach that compromised sensitive customer information following a cybersecurity incident detected in late April 2025. The breach, which occurred on or around March 19, 2025, exposed personal data including names, contact details, dates of birth, Social Insurance Numbers, driver's license numbers, and, for some customers, bank account information associated with pre-authorized payments. While the utility's core operations—such as electricity generation and distribution—remained unaffected, internal IT systems experienced disruptions. Nova Scotia Power is offering affected customers a complimentary two-year subscription to TransUnion's myTrueIdentity® credit monitoring service and has issued warnings about potential phishing attempts impersonating the company. The incident underscores the growing cybersecurity risks faced by critical infrastructure providers and highlights the need for robust data protection measures.

OIPC finds Government non-compliant

Office of the Information and Privacy Commissioner of Alberta

In Investigation Report F2025-IR-01, Alberta’s Information and Privacy Commissioner found that the Government of Alberta failed to meet its obligations under the Freedom of Information and Protection of Privacy Act (FOIP). The report revealed that the government did not respond to 28 access-to-information requests within the legislated timelines, some of which were delayed by over a year. The Commissioner criticized the government's lack of adequate systems and resources to handle these requests promptly. Recommendations were made to improve training, allocate sufficient resources, and implement effective tracking systems to ensure compliance. This case underscores the importance of timely responses to information requests to uphold transparency and accountability.

Weaponized Words: Uyghur Language Software Hijacked to Deliver Malware

Rebekah Brown | Marcus Michaelsen | Matt Brooks | Siena Anstis | The Citizen Lab

In March 2025, members of the World Uyghur Congress (WUC) living in exile were targeted by a sophisticated spear phishing campaign, as detailed in a recent Citizen Lab report. The attackers distributed malware through a trojanized version of a legitimate Uyghur language text editor, originally developed by a trusted community member. Upon execution, the malware profiled the victim's system and communicated with a remote server, with the capability to download additional malicious plugins. This incident exemplifies digital transnational repression, where authoritarian regimes, notably China, employ digital tools to surveil and intimidate diaspora communities. The campaign underscores the persistent threats faced by marginalized groups through the manipulation of culturally significant software.

“Your privacy is a promise we don’t break”: Dating app Raw exposes sensitive user data

Danny Bradbury | Malwarebytes

The dating app Raw, which promotes itself as a privacy-focused platform, inadvertently exposed sensitive user information due to a security vulnerability. This breach included users' display names, birthdates, sexual preferences, and precise location data, accessible without authentication through an insecure direct object reference (IDOR) flaw in the app's API. Despite Raw's claims of employing end-to-end encryption, investigations revealed that user data was transmitted without such protections. Following the exposure, Raw secured the affected endpoints and notified relevant data protection authorities. However, the incident underscores the critical need for robust security measures and third-party audits in applications handling sensitive personal data.
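
For readers unfamiliar with the flaw class, the sketch below is a minimal, hypothetical illustration of an insecure direct object reference and the authorization check that closes it. The endpoint paths, field names, and data are invented for illustration and do not reflect Raw's actual API.

```python
# Hypothetical sketch of an insecure direct object reference (IDOR), the
# flaw class reported in Raw, and the authorization check that closes it.
# Endpoint paths, field names, and all data below are invented.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Stand-ins for the app's user database and session store.
PROFILES = {
    101: {"name": "Alice", "birth_date": "1994-02-11", "lat": 51.05, "lon": -114.07},
    102: {"name": "Bob", "birth_date": "1990-07-30", "lat": 43.65, "lon": -79.38},
}
SESSIONS = {"token-for-alice": 101}  # session token -> user ID

@app.get("/api/users/<int:user_id>/profile")
def vulnerable_profile(user_id):
    # VULNERABLE: anyone who guesses or increments user_id receives the
    # full record; there is no authentication and no ownership check.
    return jsonify(PROFILES.get(user_id) or abort(404))

@app.get("/api/v2/users/<int:user_id>/profile")
def fixed_profile(user_id):
    # FIXED: resolve the caller from a session token the server controls,
    # then confirm the caller is requesting their own record.
    caller = SESSIONS.get(request.headers.get("Authorization", ""))
    if caller is None or caller != user_id:
        abort(403)
    return jsonify(PROFILES.get(user_id) or abort(404))

if __name__ == "__main__":
    app.run()
```

The essential fix is that the server derives the caller's identity from credentials it issued, rather than trusting the identifier supplied in the URL, which is why sequential numeric IDs on unauthenticated endpoints are treated as a red flag in API security reviews.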

Second Circuit Affirms VPPA Dismissal: Data Is Not “Personally Identifiable Information” If Only Experts Can Decipher It

Jess Davis | Jordan Joachim | Kathryn Cahoy | Covington

The U.S. Court of Appeals for the Second Circuit recently affirmed the dismissal of a class action lawsuit under the Video Privacy Protection Act (VPPA) in the case of Solomon v. Flipps Media, Inc. The plaintiff alleged that FITE TV shared video titles and a Facebook user ID with Meta via tracking pixels, constituting a disclosure of personally identifiable information (PII). However, the court adopted the "ordinary person" standard, concluding that such data does not qualify as PII under the VPPA, as it would not enable an average individual to identify a person's viewing habits. This decision aligns the Second Circuit with the Third and Ninth Circuits, emphasizing that liability under the VPPA should not depend on the capabilities of sophisticated third parties to interpret complex data. The ruling provides clarity for digital media companies regarding the scope of the VPPA and the definition of PII.
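
To make the "ordinary person" standard concrete, the sketch below reconstructs the general shape of the pixel transmission described in the case. The parameter names follow the Meta Pixel's publicly documented request format, but the identifiers and URL are invented for illustration.

```python
# Hypothetical reconstruction of the kind of tracking-pixel request at issue
# in Solomon v. Flipps Media: a numeric platform user ID travels in a cookie
# while the watched page's URL, which embeds the video title, is sent as a
# query parameter. Parameter names follow the Meta Pixel's public format
# (id, ev, dl); every value below is invented.
from urllib.parse import urlencode

facebook_user_id = "100004321987654"  # as carried in the c_user cookie
watched_page = "https://www.fite.tv/watch/example-wrestling-event/"

pixel_request = "https://www.facebook.com/tr/?" + urlencode({
    "id": "123456789012345",  # the site's pixel ID
    "ev": "PageView",         # event type
    "dl": watched_page,       # "document location": the URL with the title
})
print(pixel_request)

# To an "ordinary person" the combination is opaque: tying the c_user value
# to a named individual and extracting a title from the URL takes expertise,
# which is why the court held the disclosure was not PII under the VPPA.
```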

Calgary students participate in hackathon aimed at teaching girls and gender-diverse to code

Darren Krause | Livewire Calgary

On May 1, 2025, Calgary students participated in the 14th National Hackergal Hackathon, a Canada-wide initiative aimed at introducing girls and gender-diverse learners to coding. Hosted at Alice Jamieson Girls Academy, the event engaged students aged 11 to 14 in developing interactive projects centered on a surprise social impact theme. Hackergal, a Canadian nonprofit, has reached over 30,000 participants through its programs, striving to close the gender gap in technology by fostering digital literacy and confidence among underrepresented youth. The hackathon not only provided hands-on coding experience but also featured mentorship opportunities, allowing students to connect with industry professionals and peers nationwide. Educators and organizers highlighted the event's role in empowering young learners to envision themselves as future tech leaders.

Is your school spying on your child online?

Chad Marlow | The Guardian

In his May 8, 2025, opinion piece for The Guardian, ACLU senior policy counsel Chad Marlow critiques the pervasive use of surveillance technologies in U.S. schools, arguing they infringe on student privacy and hinder intellectual freedom. He recounts the 2009 Lower Merion School District incident, where school-issued laptops covertly captured images of students in their homes, as a cautionary tale. Marlow contends that post-COVID-19, surveillance tools—marketed as remote learning aids—have become entrenched in K–12 education, with companies like Gaggle, GoGuardian, Securly, and Navigate360 monitoring students' digital activities, including private communications and search histories. He warns that such constant monitoring conditions students to self-censor, stifling their exploration of ideas and expression. Marlow urges federal policymakers to cease funding unproven surveillance technologies and calls on parents to demand transparency and consider less invasive alternatives to ensure student safety without compromising privacy.

The Great Unknown: Bankruptcies Are a Blind Spot for Data Privacy Regulations

Benjamin Joyner | Law.com

A recent article from Law.com highlights a significant oversight in U.S. data privacy regulations concerning the handling of personal data during corporate bankruptcies. When companies file for bankruptcy, their customer data—often a valuable asset—can be sold to satisfy creditors, frequently without adequate safeguards to protect consumer privacy. This issue has gained prominence with the bankruptcy of 23andMe, which holds sensitive genetic information from millions of users. Experts warn that such data could be acquired by entities like biotech firms or law enforcement agencies, raising concerns about misuse and lack of consent. The situation underscores the need for comprehensive data privacy laws that address the transfer of personal information in bankruptcy proceedings, ensuring consumer rights are upheld even when companies face financial collapse.

Why is Elon Musk's X going after Minnesota's political deepfakes law?

Chris Mueller | USA Today

In May 2025, Elon Musk's X Corp. filed a federal lawsuit challenging Minnesota's 2023 law that criminalizes the dissemination of AI-generated deepfakes intended to influence elections. The law prohibits knowingly sharing such content within 90 days of an election if it's designed to harm a candidate or sway voters. X argues the statute is unconstitutionally vague, infringes on First Amendment rights, and conflicts with Section 230 of the Communications Decency Act, which shields platforms from liability for user-generated content. Legal experts, including University of Minnesota professor Alan Rozenshtein, have expressed concerns about the law's potential to suppress protected political speech and predict it may be overturned. Minnesota officials, however, defend the law as a necessary measure to safeguard electoral integrity in the age of AI-driven misinformation.

Saskatoon children's hospital nurse unlawfully snooped on records of 314 patients: privacy report

Brandon Harder | MSN

A registered nurse at Jim Pattison Children’s Hospital in Saskatoon unlawfully accessed the medical records of 314 individuals, including six colleagues, during a 16-month leave of absence starting in August 2021. Using a Saskatchewan Health Authority (SHA)-issued laptop, the nurse viewed 2,437 records without authorization, bypassing system warnings designed to prevent such breaches. The SHA discovered the unauthorized access in December 2021 and disabled the nurse's account; however, the investigation did not commence until October 2022, and the nurse was not interviewed until January 2024 upon returning to work. The nurse was subsequently terminated in March 2024, but affected individuals were not notified until August 2024, nearly two years after the breaches occurred. Saskatchewan's Information and Privacy Commissioner criticized the SHA for its delayed response and inadequate notification process, recommending enhanced privacy training, stricter access controls for employees on leave, and the implementation of regular audits to prevent future incidents.

MedicAlert introducing technology that will allow 911 dispatchers to access records

Katrine Desautels | CTV News

MedicAlert Canada is launching a new technology that enables 911 dispatchers to access subscribers' medical records during emergencies. This system allows dispatchers to view critical health information—such as allergies, medications, and pre-existing conditions—before first responders arrive, potentially improving emergency care outcomes. The initiative aims to enhance the effectiveness of emergency responses by providing immediate access to vital medical data. While the program is designed to bolster patient safety, it also raises important considerations regarding data privacy and the secure handling of sensitive health information. MedicAlert emphasizes that access to records will be limited to authorized personnel and used solely for emergency purposes.

Forget Starlink. Indigenous Innovation Is Canada’s Best Bet for Rural Internet

Rob McMahon | Maclean’s

A recent article in Maclean’s argues that Canada should prioritize Indigenous-led internet solutions over reliance on foreign-owned providers like Elon Musk’s Starlink. While Starlink has rapidly expanded to 400,000 Canadian subscribers, including in remote Indigenous communities, critics caution that its U.S. ownership and centralized control pose long-term risks to Canada’s digital sovereignty. Indigenous organizations, which once held 95% of the rural internet market, have seen their share eroded by subsidized competition from Starlink. The article highlights that Indigenous-led networks are more likely to reinvest in local economies, respect cultural contexts, and ensure community control over infrastructure. It calls for federal investment in homegrown broadband innovation to bridge the digital divide while empowering Indigenous communities to shape their own digital futures.

‘Rubber stamping’: Criticism after York police spend $279K on Dubai ‘junket’, consultants

Jeremy Grimaldi | York Region

York Regional Police (YRP) have come under scrutiny for spending $279,000 on a delegation trip to Dubai, which included hiring consultants for the journey. The trip, intended as a fact-finding mission to learn from international policing practices, has been criticized for its high costs and the perceived lack of tangible benefits to the local community. Critics argue that such expenditures are excessive, especially when public funds are involved, and question the transparency and accountability of the decision-making process. The controversy has sparked a broader debate about the appropriate use of police resources and the need for oversight in public spending.

Expanded high-speed Internet coming to SDG Counties

Philip Oddi | The Review

On April 22, 2025, the Council of Stormont, Dundas, and Glengarry (SDG) Counties in Eastern Ontario approved a municipal access agreement with Redwood Technologies to expand high-speed internet access across the region. This initiative aims to bridge the digital divide in rural communities by enhancing broadband infrastructure. The project aligns with broader federal efforts, such as the Universal Broadband Fund, which seeks to connect underserved areas throughout Canada. Improved internet connectivity is expected to benefit residents by facilitating better access to education, healthcare, and economic opportunities. The collaboration between SDG Counties and Redwood Technologies marks a significant step toward achieving equitable digital access for all community members.

Privacy watchdog investigates Richmond's intersection camera pilot

Maria Rantanen | Richmond News

The Office of the Information and Privacy Commissioner for British Columbia (OIPC) has initiated an investigation into the City of Richmond's pilot project involving high-definition traffic cameras at 10 key intersections. Approved by city council in December 2024 with a 7–2 vote, the initiative aims to enhance public safety and assist law enforcement. However, concerns have been raised regarding the collection, use, disclosure, and protection of personally identifiable information captured by these cameras. The OIPC's inquiry will assess whether the city has the authority to collect and use such data, whether individuals have been adequately notified about the data collection, and if proper safeguards are in place to protect the information. Currently, only one intersection is equipped with the new cameras, but the full rollout is projected to cost $2.5 million, targeting major entry and exit routes in the city.

Trump 2.0 Will Escalate the Contest for Military AI Supremacy 

Kyle Hiebert | CIGI

A new article from the Centre for International Governance Innovation warns that the second Trump administration will escalate the global arms race in military artificial intelligence. President Trump has already rolled back AI oversight regulations, promoting unrestrained development of autonomous weaponry and surveillance systems. Critics fear this will erode human control in critical military decisions and increase risks of accidents or unintended escalation. The administration’s stance, echoed by Vice President J.D. Vance, rejects international cooperation or safety measures in favour of strategic dominance—especially against China. The article calls for urgent global governance to prevent the unchecked militarization of AI.

National Security Goes into Space

Wesley Wark | CIGI

In this article for the Centre for International Governance Innovation, Wesley Wark turns to outer space as an emerging theatre of national security. He examines how great-power competition, space-based surveillance, and missile defence ambitions are pulling national security policy into orbit, and considers what the militarization of space means for Canada's defence and intelligence posture.

Neurotechnologies under the EU AI Act: Where law meets science

Nora Santalu | IAPP

An article from the IAPP explores how the EU’s Artificial Intelligence Act (AI Act) applies to emerging neurotechnologies like brain-computer interfaces (BCIs). While the Act restricts AI systems that manipulate human behaviour and sets strict rules for high-risk applications, it does not explicitly categorize neurotechnologies as high-risk. Experts argue this oversight may leave gaps in protecting mental privacy and regulating the unique ethical challenges these technologies pose. The piece calls for more specific regulatory guidance to ensure responsible development and safeguard fundamental rights. As BCIs and similar technologies evolve, the need for adaptive, rights-focused governance is increasingly urgent.

Wikipedia legally challenges 'flawed' online safety rules

Chris Vallance | BBC

Wikipedia is legally challenging the UK's Online Safety Act (OSA), arguing that its vague platform categorization rules could impose excessive obligations on the nonprofit site. The Wikimedia Foundation fears that being classified as a "Category 1" service would compel it to verify volunteer editors' identities, potentially endangering their privacy and safety, especially in authoritarian regimes. The Foundation contends this could deter contributors from engaging with sensitive topics, undermining the platform's quality and neutrality. While not opposing the OSA's overall goals, Wikimedia asserts that the current regulations risk overregulating low-risk platforms like Wikipedia while failing to adequately address genuinely harmful sites. Legal experts suggest this may be the first of several challenges to the OSA as its broader implications for free expression and digital governance become clearer.

This Canadian pharmacist is key figure behind world's most notorious deepfake porn site

Eric Szeto | Jordan Pearson | Ivan Angelovski | CBC News

A CBC News investigation has uncovered that a Canadian man is a key figure behind MrDeepFakes, one of the internet’s largest platforms for non-consensual deepfake pornography. Operating under a pseudonym, he helped build and manage the site, which hosts tens of thousands of AI-generated explicit videos featuring the faces of celebrities and ordinary individuals without their consent. The platform facilitated a marketplace where users could request and purchase custom deepfakes, often paying in cryptocurrencies to maintain anonymity. Despite the site's claims of moderation policies, enforcement was minimal, allowing harmful content to proliferate. The exposure of this individual's involvement has intensified calls for stronger legal measures in Canada and globally to combat the creation and distribution of non-consensual synthetic intimate imagery.

What’s the manosphere? Inside the online world of misogyny that’s targeting boys and young men

Omar Mosleh | Toronto Star

A Toronto Star article explores how social media algorithms and online influencer ecosystems are shaping the identities, beliefs, and mental health of Canadian youth, particularly boys and young men. It features interviews with youth like Roy Murnaghan and Alexander Makeykin, who describe how once-affirming digital spaces are now overrun with misogynistic, anti-LGBTQ, and far-right rhetoric—content they're algorithmically exposed to despite no engagement with similar material. Experts warn that the manosphere and its influencers (e.g., Andrew Tate, Jordan Peterson) offer young men a seductive sense of purpose and community amid real frustrations—economic precarity, delayed life milestones, and perceived social marginalization. Yet this pull toward online extremism often stems from unmet emotional needs like belonging, connection, and identity affirmation. The article underscores the urgent need for supportive offline spaces, digital literacy education, and open parental dialogue to disrupt radicalization pathways before they deepen.

Kosseim reflects on state of play ahead of second Ontario IPC term

Joe Duball | IAPP

Ontario Information and Privacy Commissioner Patricia Kosseim outlined her agenda for a second term during the 2025 IAPP Canada Privacy Symposium, emphasizing five strategic priorities. She called for greater harmonization between federal and provincial privacy laws, highlighting the gap between Ontario’s recent cybersecurity legislation and stalled federal reforms like Bill C-27. Kosseim also prioritized children’s digital privacy through initiatives like classroom lesson plans and a proposed school privacy charter. She urged the privacy community to address emerging issues like the environmental impact of data and the need to support a sustainable, adaptable privacy profession. Looking ahead, she plans to develop a new five-year strategic plan in consultation with stakeholders to guide Ontario’s evolving privacy landscape.

TikTok Fined $600 Million for Sending European User Data to China

Kelvin Chan | AP News

Ireland's Data Protection Commission, TikTok's lead privacy regulator in the EU, has fined the company €530 million (approximately $600 million) for breaching the General Data Protection Regulation (GDPR) by transferring European users' personal data to China without adequate safeguards. The four-year investigation found that TikTok failed to ensure that staff in China accessing the data were bound by protections equivalent to EU standards, and that the platform did not transparently inform users about the transfers. TikTok plans to appeal the decision, asserting that the issues predate its ongoing "Project Clover," which aims to localize European user data through new data centers. Despite these measures, the regulator expressed concerns over potential Chinese government access to user data and noted that TikTok had provided inaccurate information during the investigation. The fine underscores the EU's commitment to strict enforcement of data privacy standards under the GDPR.

Over half of Canadian employers have appointed Chief AI Officer

Jim Wilson | Human Resources Director

A recent Amazon report reveals that over half of Canadian employers (52%) have appointed a Chief AI Officer (CAIO), with another 23% planning to do so by 2026. This growing trend reflects the strategic importance of generative AI, which is now prioritized even above cybersecurity investments. Despite the enthusiasm, many organizations face hurdles, including a lack of skilled GenAI professionals and underdeveloped change management strategies—only 11% of firms currently have one. The CAIO role blends technical, strategic, and ethical oversight, helping ensure AI deployment aligns with both business goals and societal values. Experts like University of Guelph’s Nita Chhinzer emphasize that having a CAIO is becoming essential for maintaining competitiveness in the AI-driven economy.

Risk Management in the Modern Era of Workplace Generative AI

Marjorie C. Soto Garcia | Brian Casillas | Rebecca L. Richard | MWE

A recent article from McDermott Will & Emery highlights the growing use of generative AI (GenAI) in workplace settings, particularly within human resources functions like recruitment, workforce management, and payroll. While these technologies offer efficiency gains, they also present legal risks, including potential class actions related to privacy violations, AI regulations, and employment discrimination claims. The authors emphasize the importance of understanding how GenAI algorithms function and the data they utilize, as improper implementation can lead to issues such as biased decision-making or inaccurate outputs. To mitigate these risks, companies are advised to use GenAI as a tool to assist human-led processes rather than as a replacement for human decision-making. Additionally, organizations should collaborate across legal, HR, and IT departments to ensure compliance with evolving federal and state laws governing AI use in employment practices.
