[syndicated profile] tim_harford_feed

Posted by Tim Harford

A megaplant near the small village of Flixborough, England, is busy churning out a key ingredient of nylon 6, a material used in everything from stockings to toothbrushes to electronics. When a reactor vessel fails, the engineers improvise a quick-fix workaround, so the plant can keep up with demand. Before long, the temporary patch – a small, bent pipe – becomes a permanent part of the factory, and the people of Flixborough unknowingly drift towards disaster. 

For bonus episodes, ad-free listening, our monthly newsletter and behind-the-scenes conversations with members of the Cautionary Tales production team, consider joining the Cautionary Club.


Further reading

The Flixborough disaster. Report of the Court of Inquiry

Flixborough 1974 Memories. Essential eye-witness history from the North Lincolnshire Museum. 

‘Fire and devastation’: 50 years on from the Flixborough disaster what’s changed? Chemistry World

[surgery] one year on!

Dec. 11th, 2025 10:28 pm
kaberett: Trans symbol with Swiss Army knife tools at other positions around the central circle. (Default)
[personal profile] kaberett

I continue extremely grateful to no longer have ureteric stents.


Timeline of a new phase in my life.

Dec. 11th, 2025 07:12 pm
andrewducker: (Unless I'm wrong)
[personal profile] andrewducker
About two months ago, I had a nasty respiratory infection. And while I was lying awake one night, I could hear my heart beating quite loudly.

Having had multiple friends go to the doctor to check on something and then have the doctor tell them that they urgently needed medication before their high blood pressure did them serious damage/killed them, I thought I should pop in to the doctor for a chat.

They checked me on the spot, said my blood pressure was a little high, but nothing terrible, and told me to join the queue to borrow a blood pressure device. [personal profile] danieldwilliam gave me his old one, and I spent a couple of weeks taking results. Which mostly showed that my pressure is fine in the morning, but that after I've spent 90 minutes shouting at Gideon to stop bloody well mucking about and go to sleep, it's a fair chunk higher than it should be. They also sent me for an ECG (which showed I have Right Bundle Branch Block, a harmless and untreatable condition that affects 15% of the population), an eye test (which found nothing), and a fasting blood test (which showed I'm still not diabetic, even though I can't have sugar in my diet even slightly any more).

They then had a phone call with me to chat it through, said that I'm a little high (on average), and a little young for it to be a major worry, but if I was up for it they could put me on some pills for hypertension. I agreed that it sounded sensible, and the doctor sounded positively relieved that she hadn't had to bully me into it.

The weird feeling is that this is the first time I've been put on to a medicine that I will have to take for the rest of my life. There is now "The time I didn't have to take medicine every day" and "The time where I had to take medicine every day". Which definitely feels like an inflection point in my life. (Endless sympathy, of course, for people I know who have to take much worse things than a tiny tasteless pill with very few side-effects.)

So all-in-all, nothing major. Just the next step. I'm just very glad for the existence of modern medicine.

Are bubbles good, actually?

Dec. 11th, 2025 05:23 pm
[syndicated profile] tim_harford_feed

Posted by Tim Harford

Swiss psychiatrist Elisabeth Kübler-Ross suggested that there are five stages of grief, but nobody has the attention span for that any more. We have leapt instead from stage one, denial — “there is no AI bubble”, to stage five, acceptance — “AI is a bubble and bubbles are great”.

The “bubbles are great” hypothesis has been advanced both in popular and scholarly books, but it was hard to ignore when Jeff Bezos, one of the world’s richest men, sought to draw a distinction between financial bubbles (bad) and industrial bubbles (less bad, maybe good). Bezos, after all, built one of the 21st century’s great businesses, Amazon, in the middle of a bubble that turned contemporaries such as Webvan and Pets.com into a punchline.

There is a solid theory behind the idea that investment manias are good for society as a whole: it is that without a mania, nothing gets done for fear that the best ideas will be copied.

Entrepreneurs and inventors who do take a risk will soon find other entrepreneurs and inventors competing with them, and most of the benefits will go not to any of these entrepreneurs, but to their customers.

(The dynamic has the delightful name of the “alchemist’s fallacy”. If someone figures out how to turn lead into gold, pretty soon everyone will know how to turn lead into gold, and how much will gold be worth then?)

The economist and Nobel laureate William Nordhaus once tried to estimate what slice of the value of new ideas went to the corporations who owned them, and how much went to everyone else (mostly consumers). He concluded that the answer — in the US, between 1948 and 2001 — was 3.7 per cent to the innovating companies, and 96.3 per cent to everyone else. Put another way, the spillover benefits were 26 times larger than the private profits.

If the benefits of AI are similarly distributed, there is plenty of scope for AI investments to be socially beneficial while being catastrophic bets for investors.

The historical parallel that is mentioned over and over again is the railway bubble. The bluffer’s guide to the railway bubble is as follows: British investors got very excited about railways in the 1840s, share prices went to silly levels, some investors lost their shirts, but in the end, guess what? We had railways! Or as the Victorian historian John Francis wrote, “It is not the promoters, but the opponents of railways, who are the madmen.”

Put like that, it doesn’t sound so bad. But should we put it like that? I got in touch with some bubble historians: William Quinn and John D Turner, who wrote Boom and Bust: A Global History of Financial Bubbles, and Andrew Odlyzko, a mathematician who has also deeply researched the railway mania. They were less sanguine.

“Funding the railways through a bubble, rather than through central planning (as was the case in much of Europe), left Britain with a very inefficiently designed rail network,” says Quinn. “That’s caused problems right up to the present day.”

That makes sense. There are several possible definitions of a bubble, but the two most straightforward ones are either that the price of financial assets becomes disconnected from fundamental values, or that investments are made on the basis of crowd psychology — by people afraid of missing out, or hoping to offload their bets on to a greater fool. Either way, why would anyone expect the investments made in such a context to be anything close to socially desirable?

Or as the Edinburgh Review put it, “There is scarcely, in fact, a practicable line between two considerable places, however remote, that has not been occupied by a company. Frequently two, three or four rival lines have started simultaneously.”

Nor was the Edinburgh Review writing in the 1840s — it was describing the railway bubble of the 1830s, whose glory days saw promoters pushing for sail-powered trains and even rocket-powered locomotives that would travel at several hundred miles an hour.

The bigger, more notorious bubble of the 1840s was still to come — as was the 1860s bubble (“a disaster for investors”, says Odlyzko, adding that it is debatable whether the social gains outweighed the private losses in the 1860s). The most obvious lesson of the railway manias is not that bubbles are good, but that hope springs eternal and greedy investors never learn.

Another lesson of the railway mania is that when large sums of money are on the line, the line between commerce and politics soon blurs, as does the line between hype and outright fraud.

The “railway king” George Hudson is a salutary example. Born into a modest Yorkshire farming family in 1800, he inherited a fortune from a great uncle in suspicious circumstances, then built an empire of railway holding companies, including four of the largest in Britain. He was mayor of York for many years, as well as an MP in Westminster. Business and politics inextricably intertwined? Inconceivable!

Another bubble historian, William J Bernstein, comments on Hudson that “the closest modern equivalent would be the chairman of Goldman Sachs simultaneously serving in the US Senate.” That’s a nice hypothetical analogy. You may be able to think of less hypothetical ones.

Hudson, alas, is not a man to emulate. He kept his finances looking respectable by making distinctly Ponzi-like payments, funding dividends for existing shareholders out of freshly raised capital, and he defrauded his fellow shareholders by getting companies he controlled to buy up his personal shares at above-market prices. In the end, he was protected from ruin only by the rule that serving parliamentarians could not be arrested for unpaid debts while the House of Commons was in session. He eventually fled to exile in France.

The railway manias are not wholly discouraging. William Quinn is comforted by the observation that when banks stay away from the bubble, its bursting has limited effects. That was true in the 1840s and perhaps it will be true today.

And Odlyzko reassures me that the mania of the 1830s “was a success, in the end, for those investors who persevered”, even if one cannot say the same for the 1840s and the 1860s. But Odlyzko is not impressed by analogies between the railways and AI. People at least understood how railways worked, he says, and what they were supposed to do. But generative AI? “We are losing contact with reality,” he opines.

Written for and first published in the Financial Times on 6 November 2025.

I’m running the London Marathon in April in support of a very good cause. If you felt able to contribute something, I’d be extremely grateful.

AIs Exploiting Smart Contracts

Dec. 11th, 2025 05:06 pm
[syndicated profile] bruce_schneier_feed

Posted by Bruce Schneier

I have long maintained that smart contracts are a dumb idea: that a human process is actually a security feature.

Here’s some interesting research on training AIs to automatically exploit smart contracts:

AI models are increasingly good at cyber tasks, as we’ve written about before. But what is the economic impact of these capabilities? In a recent MATS and Anthropic Fellows project, our scholars investigated this question by evaluating AI agents’ ability to exploit smart contracts on Smart CONtracts Exploitation benchmark (SCONE-bench), a new benchmark they built comprising 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cutoffs (June 2025 for Opus 4.5 and March 2025 for other models), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 developed exploits collectively worth $4.6 million, establishing a concrete lower bound for the economic harm these capabilities could enable. Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476. This demonstrates as a proof-of-concept that profitable, real-world autonomous exploitation is technically feasible, a finding that underscores the need for proactive adoption of AI for defense.
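For readers who want a concrete sense of what “exploiting a smart contract” means, here is a deliberately simplified sketch (not from the paper or SCONE-bench, and written in plain Python rather than Solidity) of the classic reentrancy pattern, one of the contract logic flaws that automated exploit agents hunt for: a vault that pays out before updating its ledger, so a malicious callback can re-enter and drain it.

```python
# Toy illustration only: a reentrancy-style flaw modelled in plain Python.
# Real exploits target on-chain contracts (e.g. Solidity), but the logic
# error is the same: state is updated *after* handing control to the caller.

class Vault:
    def __init__(self):
        self.balances = {}   # depositor -> recorded balance
        self.pool = 0        # total funds actually held

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.pool += amount

    def withdraw(self, user):
        amount = self.balances.get(user, 0)
        if amount == 0 or self.pool < amount:
            return
        self.pool -= amount
        user.receive(self, amount)      # BUG: external call happens first...
        self.balances[user] = 0         # ...balance is only zeroed afterwards


class Attacker:
    """Re-enters withdraw() from its payment callback until the pool is empty."""
    def __init__(self):
        self.loot = 0

    def receive(self, vault, amount):
        self.loot += amount
        if vault.pool >= vault.balances.get(self, 0) > 0:
            vault.withdraw(self)        # recursive re-entry


if __name__ == "__main__":
    vault = Vault()
    honest_user = object()
    vault.deposit(honest_user, 90)      # other people's money
    attacker = Attacker()
    vault.deposit(attacker, 10)         # attacker stakes a little...
    vault.withdraw(attacker)
    print(attacker.loot, vault.pool)    # ...and walks away with 100, leaving 0
```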

[syndicated profile] openrightsgroup_feed

Posted by Pam Cowburn

19 civil society organisations have called on the Information Commissioner’s Office (ICO) to formally investigate data protection breaches and accessibility issues arising as a result of the Home Office’s eVisa scheme.

The letter’s signatories warn that widespread data errors, inaccessible design, and systemic technical failures are leaving migrants unable to prove their right to work, rent, study, travel or access essential services.

There has been a high volume of data errors, which represents both a huge breach of sensitive data and a failure of the system, as people are prevented from proving their lawful status. Some users have been locked out of their accounts with no effective support from the Home Office or ways of escalating their concerns.

Evidence collected shows the most vulnerable migrants, including refugees and those without digital access, are disproportionately affected by digitalisation.

The signatories argue that the Home Office’s Data Protection Impact Assessment (DPIA) is incomplete, misleading, and fails to mitigate foreseeable risks, breaching the Home Office’s GDPR and equality duties.

Some of the key issues with the DPIA include:

  • The DPIA does not fully assess risks if face images are used for matching, automated checks, or shared with third parties that may combine them with other datasets.
  • The DPIA is misleading when it says that eVisas are part of the transformation to ‘digital by default’, which the government defines as “digital services that are so straightforward and convenient that all those who can use them will choose to do so whilst those who can’t are not excluded”. However, the eVisa scheme is digital-only, with people’s status being checked in real time. It removes all physical evidence of immigration status and users cannot opt out of digital methods.
  • The DPIA has not assessed or mitigated the risks faced by people without smartphones or who are digitally excluded because of disability, their age or socio-economic reasons. It also doesn’t address risks, such as partner coercion, when people are forced to rely on friends or family to access their eVisas.

Sara Alsherif, Migrants Digital Justice Programme Manager at Open Rights Group said:

“Since the rollout of the digital-only e-Visa scheme, we’ve seen widespread data errors, inaccessible design, and persistent technical failures that are leaving migrants unable to prove their right to work, rent, study, travel, or access essential services.

“In its DPIA, the Home Office failed to assess the risks that a digital only scheme brings, particularly for those who are vulnerable, older or disabled. It is also misleading in its assessment of the scheme as digital by default.

“If the Home Office had identified some of these risks, migrants may not have experienced the same levels of distress and hardship that we have seen over the last year. The ICO must investigate.”

ICO must investigate eVisa scheme

Read the letter

[syndicated profile] openrightsgroup_feed

Posted by Open Rights Group

Prepared by UK civil society, digital rights, and open-knowledge organisations.

Over 550,000 people have petitioned Parliament to repeal the Online Safety Act (OSA), making it one of the largest public expressions of concern about a UK digital law in recent history. A petition to reform the Act would likely have attracted even more support. While it may seem unusual for so many people to challenge a law framed around “online safety,” this briefing explains what those concerns actually are.

These concerns have hit a nerve. Parliament needs to ensure the OSA works without unfairly restricting people’s day-to-day activities. The balance needs adjusting, and some clear changes could resolve some of the problems and reduce the arguments for a wholesale rollback.

We highlight how the Act affects freedom of expression and access to information, and how its requirements risk undermining the ability of small, non-profit, and public-interest websites to operate. This document focuses specifically on these free-expression impacts, rather than the broader range of issues raised by the Act.

The Online Safety Act imposes several dozen duties that service providers must interpret and apply. These duties are highly complex and written largely with major commercial social media platforms in mind. Yet they also extend to small businesses, community forums, charities, hobby sites, federated platforms, and public-interest resources such as Wikipedia.

While a small number of community-run or not-for-profit services may present higher risks, most are low-risk spaces. These low-risk sites are often run by volunteers who simply do not have the capacity, expertise, or resources to take on the liabilities and operational burdens created by the Act.

Many of those services – such as bulletin boards, collaborative mapmaking or collaborative encyclopedia-writing sites – are also not run or designed like the social media and filesharing services that officials had in mind when designing the duties in question.

Some services, like the LFGSS cycling forum, have managed to continue under new management. Others, such as a support project for fathers with young children, have had to abandon their independent sites and migrate onto large social media platforms.

Many small providers also struggle to engage meaningfully with Ofcom’s consultations, which are extensive, technical, and time-consuming – effectively excluding the very communities the Act impacts.

Wikimedia Foundation – the charity that operates Wikipedia and a dozen other nonprofit education projects – and hundreds of allied organizations and specialists have warned that the Act creates major burdens for public interest projects.

Wikimedia also warned that secondary legislation, passed in February 2025, added to those challenges, by exposing the most popular UK public interest projects, like Wikipedia, to “Category 1” status under the Act.

Category 1 status seems set to require moderation changes incompatible with Wikipedia’s global, open, volunteer-run model, such as platform-level, globally applied identity verification. That would increase costs, conflict with privacy-by-design principles, expose individuals around the world to major risks (such as political persecution), and likely mean that privacy-protective volunteers lose some of their ability to keep Wikipedia free of harmful or low-quality content.

  • Wikimedia warned that these “Category 1” side-effects might theoretically only be avoidable by reducing UK participation, to disqualify Wikipedia from Category 1 status entirely.

Sites without capacity to comply are blocking the UK to avoid prosecution

ORG’s Blocked project tracks sites geoblocking UK users due to OSA compliance pressures.

These sites are often small, low-risk, and community-driven, with no history of safety issues. Yet there is evidence the Act is forcing them to close, restrict access for UK users, or shift onto larger commercial platforms, which may be less safe. A list of affected websites is included in Appendix 1.

Some people have argued that platforms taking down the wrong type of content is simply a case of them failing to implement the law correctly. However, it is both the Act and Ofcom’s codes and guidance that have created the following drivers for this behaviour:

Strong financial penalties

The Act allows Ofcom to fine noncompliant services up to 10 per cent of qualifying worldwide revenue or block services in the UK for serious noncompliance (Online Safety Act 2023, sch. 13, para. 4).

Broad risk reduction duties

For user-to-user services likely to be accessed by children, the Act requires a suitable children’s risk assessment and ongoing measures to mitigate identified risks (Online Safety Act 2023, Pt 3 Ch 2 ss 11–12).

Vague definitions of harmful content

The Act defers to Ofcom guidance and Codes for definitions of content harmful to children, creating uncertainty as to the precise type of content to be removed [Online Safety Act 2023, ss 60-61 (with Ofcom guidance per s. 53)].

Pressure to demonstrate proactive compliance

Platforms are pressured to implement design, operation, and mitigation measures, including automated moderation and age-gating.

Ofcom codes recommending preemptive measures

The Protection of Children Code of Practice requires highly effective age assurance where high-risk content is not prohibited for all users [Ofcom, Guidance to Proactive Technology Measures (Draft, June 2025); Online Safety Act 2023, s. 231 (definition of “proactive technology”) and Sch 4 para 13 (constraints on its use for analysing user-generated content)].

Low threshold for removal

Platforms only need to reasonably suspect that content is illegal before removing it. Because the Act does not define illegal content in a way that automatically prescribes censorship, users cannot know in advance whether their content will be removed. This means removals are driven by platform discretion rather than clear legal rules, making it impossible to assess whether each removal is proportionate from a rights perspective, including freedom of expression.

Practical effects and pressures

  • Platforms may delete or restrict lawful content preemptively to avoid risk.
  • Political, controversial, or minority community speech may be disproportionately suppressed or age gated.
  • Certain communities may face disproportionate impact if their speech is more likely to be judged risky.
  • Users may adopt euphemistic or indirect language to avoid automated filters.
  • Appeal and redress mechanisms may be limited. Reporting and complaints procedures exist but there is no independent body to determine if content is lawful.
  • Content may be placed behind age gates incorrectly or overcautiously if risk is interpreted broadly or age assurance is uncertain.
  • Online content may face stricter age restrictions than traditional media such as films or TV, as platforms must satisfy legal safety duties rather than voluntary industry ratings. For example, depictions of serious violence to imaginary creatures are classified as priority content that is harmful to children (anyone aged 17 or under), yet those same teenagers could easily watch mythical creatures being slain in TV series and films like The Witcher (15 rating) or War of the Worlds (12 rating).

Evidence of these patterns on major platforms is provided in Appendix 2.

People are rightly angry that their Article 10 freedom of expression rights are being curtailed in the name of safety, especially when the content removed was lawful and not harmful, and when it falls within protected categories of speech such as political expression.

In addition to the freedom of expression harms, wrongful censorship and account bans or take-downs can have real economic impacts on content creators, streamers and small online businesses that rely on user-to-user services regulated by the Act for their livelihoods.

Many small providers don’t have the ‘clout’ to get wrongfully removed content or accounts reinstated, and currently there is no third-party appeals or adjudication process that determines if content was harmful or lawful.

Under the Online Safety Act, platforms likely to be accessed by children must prevent them from seeing harmful content. The Act does not clearly define the type of content that should be age-gated, giving Ofcom discretion to shape interpretation and creating ambiguity for platforms. Ofcom’s Protection of Children Codes explicitly require platforms that rely on age-based denial of access, in order to remain safe, to know, or make a reasonable attempt to know, whether a user is a child through age assurance, which can be age verification or age estimation. Because platforms face heavy compliance costs, reputational risk, and possible penalties for noncompliance, they often apply age-gating more broadly than strictly necessary. This includes content that is legally safe for children but carries any perceived risk. As a result, even borderline or lawful content may be placed behind an age gate, creating stricter restrictions online than in other media and turning age-gating into a default safety measure rather than a targeted one.

Age-gating is now applied to a wide range of content, from literature on Substack and sexual health advice on Reddit to social gaming features on Xbox. This restricts the freedom of expression of both young people and adults who cannot pass age-assurance checks.

While some MPs may have read about a surge in VPN use to bypass age-gating, the latest evidence suggests most of this increase comes from adults (who are perhaps worried about the data protection risks) rather than children. On 4 December, Ofcom’s Online Safety Group Director told the Today Programme that VPN use has recently fallen after an initial spike.

Teenagers aged 16 to 18 face online restrictions that are now stricter than the BBFC content classification system for film and other media. There is also evidence that young people are being blocked from accessing political news, including stories on Ukraine or Gaza. This is particularly concerning given the Government’s intention to allow 16-year-olds to vote.

Without legal limits on when age-assurance technology can be used, or regulation of the technology itself, platforms and third-party vendors have economic incentives to collect more data than necessary. Implementing age-assurance at the platform level also creates a significant barrier for non-commercial websites and small services due to the associated costs. Platforms may choose cheaper and less secure vendors in countries with weaker data protection standards. Poorly implemented solutions have already caused harm, as demonstrated by a Discord data breach that exposed IDs for up to 70,000 users. The ICO and the current data protection regime have proven ineffective at mitigating these risks.

Those raising concerns about the Online Safety Act are not opposing child safety. They are asking for a law that does both: protects children and respects fundamental rights, including children’s own freedom of expression rights.

The petition shows that hundreds of thousands of people feel the current Act tilts too far, creating unnecessary risks for free expression and ordinary online life. With sensible adjustments, Parliament can restore confidence that online safety and freedom of expression rights can co-exist.

Fix the Online Safety Act
wildeabandon: picture of me (Default)
[personal profile] wildeabandon
...but I have (sort of) a plan this time. I've put a weekly reminder in my diary to post, which I hope will help, and I'm going to create a sort of vague template of 'things to update about' which I can follow if I'm feeling uninspired, but not restrain myself to if there's something in particular that takes my fancy.

I had a resolution this semester that I was going to study less and socialise more, which is perhaps not an entirely typical student resolution, but felt like it would be appropriate for me. I largely failed. This is partly because there were a number of occasions where I made a plan to go to an event, and then when the time came around I was faced with a choice of going outside and travelling to somewhere with lots of background noise where I would have to interact with unfamiliar humans, or staying in the quiet warm library with my books and my translation (or other work), and somehow the latter was always much more appealing.

So on the one hand, it doesn't actually feel particularly unhealthy that I'm studying instead of socialising because that's what I want to do rather than because I feel it's what I should do, but on the other hand, if I want to reach the stage where I have a francophone circle of not-unfamiliar people to spend time with here, I'm going to have to go through the 'socialising with unfamiliar people' bit first.

On a related note, I am feeling a bit frustrated with my (lack of) language acquisition here. Before I moved out, lots of people suggested that being here and using French on a daily basis would lead to a big improvement, but it doesn't seem to have happened. Partly that's probably because I'm /not/ really using French on a day to day basis. I mean, I use it in the shops and to read the news and listen to announcements on the railways, but my actual day to day work is in English, and although I can read fairly fluently, follow audiobooks and some podcasts, and have an interesting conversation 1-1 with plenty of context cues, no background noise and an interlocutor who is speaking clearly, I still struggle in fairly basic situations without those accommodations. And crucially, I don't think I've improved significantly since moving here, so I need to do something more active to improve. I've found a "table de langues" to try next Wednesday evening, and if I just don't go to the library after my final lecture that day, it should be easier to escape its gravity.

side-tracks off side-tracks

Dec. 10th, 2025 11:08 pm
kaberett: Trans symbol with Swiss Army knife tools at other positions around the central circle. (Default)
[personal profile] kaberett

One of the things I found yesterday, while getting distracted from transcription by regretting not having taken History and Philosophy of Science (or, more accurately, not having shown up to the lectures to just listen), were some tantalising notes on the existence of a four-lecture series entitled Visual Culture in Science and Medicine:

Science today is supremely visual – in its experiments, observations and communication, images have become integral to the scientific enterprise. These four lectures examine the role of images in anatomy, natural history and astronomy between the 15th and the 18th centuries. Rather than assessing images against a yardstick of increasing empiricism or an onward march towards accurate observation, these lectures draw attention to the myriad, ingenious ways in which images were deployed to create scientific objects, aid scientific arguments and simulate instrumental observations. Naturalistic styles of depictions are often mistaken for evidence of first-hand observation, but in this period, they were deployed as a visual rhetoric of persuasion rather than proof of an observed object. By examining the production and uses of imagery in this period, these lectures will offer ways to understand more generally what was entailed in scientific visualisation in early modern Europe.

I've managed to track down a one-hour video (that I've obviously not consumed yet, because audiovisual processing augh). Infuriatingly Kusukawa's book on the topic only covers the sixteenth century, not the full timespan of the lectures, and also it's fifty quid for the PDF. I have located a sample of the thing, consisting of the front matter and the first fifteen pages of the introduction (it cuts off IN MID SENTENCE).

Now daydreaming idly about comparative study of this + Tufte, which I also haven't got around to reading...

[personal profile] cosmolinguist

I was so tired after work I had a nap. Didn't notice D texting to say dinner is ready. He came upstairs to see how I was doing...and now is asleep himself.

For anyone who is struggling

Dec. 10th, 2025 05:39 pm
[syndicated profile] oatmeal_feed

Posted by Matthew Inman

For anyone who is struggling

This is an animation about Demodex -- little parasites that live on your face.

View on my website

[syndicated profile] openrightsgroup_feed

Posted by Open Rights Group

It might seem strange to some MPs that anyone could be opposed to ‘Online Safety’. ORG supports sensible measures to protect children online. But the Online Safety Act (OSA), as currently written and interpreted, is already producing harmful unintended consequences for privacy, cybersecurity, free expression and the wider UK digital economy.

This briefing outlines concerns about the way the Act is working in practice: age assurance has been introduced unsafely and applied to a wide range of content, the wrong social media posts are being censored, and small sites have closed over fears about compliance with the Act. Such problems led to over 550,000 people signing a petition asking for the Act to be repealed, which Parliament is considering on 15 December.

The Online Safety Act will always be limited in its ability to tackle online harms, because it focuses on removing illegal and harmful content rather than tackling the underlying economic and structural dominance of major platforms; it leaves the monopolistic business models, algorithmic prioritisation, and lack of interoperability which drive misinformation, polarisation, and loss of user control, largely untouched. These need to be tackled through strong market interventions, to place users in control of what content they receive and how.

We urge MPs to support a more balanced, evidence-based and rights-respecting approach that tackles the underlying causes of social media harms and protects children without harming all of our Article 10 rights to freedom of expression or Article 8 rights to privacy.

  1. Regulate age assurance providers.
  2. Exempt small and low-risk services from the full weight of regulation.
  3. Strengthen due-process protections to prevent wrongful automated takedowns and infringements of freedom of expression rights.
  4. Require Ofcom’s Codes to meet human rights and proportionality standards.
  5. Protect VPN use and encryption, rejecting any enforcement strategy that undermines cybersecurity.
  6. Use existing competition powers so that users can choose their content prioritisation and moderation engines, and switch their social media provider, without losing their networks of contacts, to drive better social media.


There is evidence that most of the British public support pornographic content being age gated. However, the Act does not just cover pornographic material nor does it clearly define the type of content that should be age gated.

Ofcom’s Protection of Children Codes explicitly require platforms that rely on age-based denial of access, in order to remain safe, to know, or make a reasonable attempt to know, whether a user is a child through age assurance, which can be age verification or age estimation. As such, platforms are using age-gating as a way to restrict under 18s’ access to all sorts of content in order to avoid legal liability under the Act. It is also not clear or easy to know or categorise what sort of content should be placed behind an age-gate.

Because platforms face compliance costs, reputational damage and heavy penalties for non-compliance, they often apply age gating more broadly than strictly necessary. This includes content that is legally safe for children but carries any perceived risk. As a result, even borderline or lawful content may be placed behind an age gate, creating greater restrictions for online content than for other forms of media, and turning age gating into a default safety measure, rather than a targeted means to prevent access to pornographic material.

People are therefore concerned that they are being asked to go through intrusive age-assurance processes on platforms like BlueSky, Spotify, Xbox gaming services, or to see certain Subreddits, that might be suitable for teenagers.

Without legal limits on how age assurance works, platforms and third-party vendors have an economic incentive to collect more data than necessary. Platforms also have an incentive to choose cheaper and less secure vendors, mainly located in the US, with poor data protection practices. Some of this data collection can cause more online harms to people through increased risk of cybercrimes.

For example, poorly implemented age assurance solutions have exposed users to new harms, with the ID photos of up to 70,000 users on Discord leaked in a data breach. The ICO and the current data protection regime are proving ineffective in regulating the industry. Additionally, Ofcom are having to play ‘whack-a-mole’ enforcement with mirror sites that pop up without any age gating of content, because the system relies on a platform implementing the law rather than a software or hardware control on a young person’s device.

Platform versus device or operating system level age assurance

In recent weeks a debate has emerged about whether age assurance should occur at the level of the device rather than each platform. In practice it is very difficult for Ofcom to ensure every single platform complies with the law. Bad actors can quickly and easily create mirror sites of reputable platforms.

Open Rights Group believes that service providers should allow users to choose the method and identity provider they use. It may be possible for App stores on users’ own devices to handle verification, as user profiles on personal devices already store extensive sensitive personal data locally under users’ own control; however the risks of further centralising market control by Apple and Google would need to be addressed.

Could Digital ID solve this?

Using a Digital ID to prove your age would not solve the problem of the wrong sort of content being placed behind an age gate. To solve that problem, Parliament needs to clearly and explicitly define the type of content that should be behind an 18+ age-gate, and prohibit age-gating for other forms of content.

Using a Government Digital ID to establish your age online could also come with other privacy risks if a Digital ID app is designed in a way in which it could ‘phone home’ and alert the Government when it had been used for such a purpose. It is unclear that users would accept the idea of using government ID to access adult content.

ORG recommends that:

  • Parliament narrowly defines when age assurance is required.
  • Platforms should provide users with detailed documents regarding the use of their data so that they can understand the risks to their privacy and data.
  • Ofcom and the ICO should work with industry to create a high standard for privacy in age verification.
  • Ofcom should recommend that age verification solutions include the use of high, independently managed data protection standards, and meet interoperability and accessibility needs.
  • Future legislation should incorporate privacy, accessibility, and interoperability requirements for age verification and assurance.
  • Users should have a choice of which age-assurance system they wish to use, including a choice to use a device level proof of age.

For more detail on ORG’s recommended statutory framework for age assurance, see our dedicated briefing.

There are some very clear reasons why the Act is causing posts to be placed behind an age gate. This is not merely a case of platforms implementing the duties created by the law badly. The causes of the problem lie in both the wording of the Act and Ofcom’s codes of practice:

Strong financial penalties. The Act allows Ofcom to fine non-compliant services up to ten per cent of qualifying worldwide revenue or to block services in the UK for serious non-compliance (Online Safety Act 2023, sch. 13, para. 4)

Broad risk reduction duties. For user-to-user services likely to be accessed by children, the Act requires a suitable children’s risk assessment and ongoing measures to mitigate identified risks (Online Safety Act 2023, Pt 3 Ch 2 ss 11–12)

Vague definitions of harmful content. The Act defers to Ofcom guidance and Codes for definitions of content harmful to children, giving services broad discretion over what is treated as harmful [Online Safety Act 2023, ss 60-61 (with Ofcom guidance per s. 53)].

Pressure to demonstrate proactive compliance. Platforms must implement design, operation, and mitigation measures including automated moderation, age assurance, gating, and access controls.

Ofcom codes recommending pre-emptive measures. The Protection of Children Code of Practice requires highly effective age assurance where high-risk content is not prohibited for all users [Ofcom, Guidance to Proactive Technology Measures (Draft, June 2025); Online Safety Act 2023, s. 231 (definition of “proactive technology”) and Sch 4 para 13 (constraints on its use for analysing user‑generated content)].

The result is that, because a platform must know it is restricting “high risk content” for children, it is easiest to age gate the parts of the service where that content might be found – such as direct messages, or forums and groups that discuss topics like alcohol, sex or drugs. This then restricts advice and help that may be vital for young people to access.

The same issues apply to content controls. While platforms have a duty to “have particular regard” to free expression (Online Safety Act 2023, Pt 3 Ch 3 s 2), this is a weak commitment compared to the drive to remove risks, in the face of fines.

There is a low threshold for content removal. Platforms only need to “reasonably suspect” that content is illegal before removing it (Online Safety Act 2023, ss 10, 59, 193; for discussion of the “reasonable grounds” threshold and discretion risks, see Ofcom, Illegal Content Judgements Guidance — threshold of “reasonable inference”).

Because the Act does not define illegal content in a way that automatically prescribes censorship, users cannot know in advance whether their content will be removed. This means removals are driven by platform discretion rather than clear legal rules, making it impossible to assess whether each removal is proportionate from a rights perspective, including freedom of expression.

It is the OSA’s structure that is therefore causing platforms to create moderation policies that are overly restrictive. This results in:

  • legitimate speech being taken down;
  • prior restraint censorship of content before it is published;
  • political and journalistic content being suppressed;
  • disproportionate harm to LGBTQ+ users, minority communities, and activists;
  • little recourse or appeal for ordinary users;
  • conflict with countries such as the US, where speech is protected under the First Amendment; and
  • self-censorship as people develop a proxy language and deploy symbolism and slang to avoid AI moderation such as ‘unalive, grape, slice vibes, yeeted’.

In addition, AI-based moderation is error-prone, biased, and lacks transparency. It is incredibly technically difficult to categorise and correctly moderate the scale of user content generated daily on social media platforms. Some MPs have identified the problem with AI moderation already and an EDM has been tabled on the issue.

Unfortunately, the harms associated with incorrect moderation of content will increase dramatically under the OSA, especially as Ofcom starts to require perceptual hash matching across a wider range of platforms and user-to-user services, and as Ministers expand the number of priority offences via statutory instruments.

ORG recommends that Parliament:

  • Strengthens the legal duty on platforms to consider freedom of expression from “have regard to” to “take all reasonable steps to protect”.
  • Introduces a proportionality test, for example: “A provider must take all reasonable steps to protect freedom of expression, consistent with the need to prevent illegal or harmful content.”
  • Provides clear routes for appeal, including to the courts, and correction for wrongful takedowns that considers the actual legality of content, not just whether it was ‘reasonable to infer’ the illegality of the content.
  • Defines “online harms” with greater specificity to avoid ambiguity.
  • Removes powers for the Secretary of State to unilaterally add priority offences to the Act.
  • Requires Ofcom to remove references to “bypass strategies” that encourage prior restraint censorship.
  • Strengthens standards for transparency, accuracy, and proportionality of moderation.

Without treaties in place, the UK is limited in the extent to which it can seek to fine companies in other jurisdictions. The Online Safety Act has already resulted in diplomatic tensions with the US, as seen in the US House Judiciary hearing on ‘Europe’s Threat to American Speech and Innovation’. This is because US citizens have a First Amendment right to free speech, and the Act seeks to regulate US platforms.

Without reform that respects the rights of citizens in other countries and their freedom to determine their own approaches to speech, the Act will continue to harm our international relations.

When age assurance was introduced, there was evidence that VPN apps surged in downloads. VPNs reduce exposure to cyberattacks and help maintain consistent access to cloud services or region-specific tools. In addition to these features, a VPN can make a website think you are connecting from a different location by routing your traffic through another country. This can be used to circumvent UK-specific age-gates.

The latest research evidence on VPN use is that the increase in use has come from adults rather than from children. On 4 December, Ofcom’s Online Safety Group Director told the BBC’s Today Programme that UK VPN use had risen from 600,000 to well over one million people. Ofcom are now reporting that since August 2025 VPN use has been in decline and now sits around 900,000 users.

What is Ofcom doing about this?

As a result of the Online Safety Act, Ofcom has been paying the commercial data-broker Apptopia to obtain information on UK citizens’ private software use. This consumer-level surveillance appears intended to assess national patterns of VPN adoption. Ofcom have been under pressure to monitor VPN use from the Children’s Commissioner and the House of Lords Communications and Digital Committee, both of whom have highlighted VPNs as a potential circumvention method.

Why trying to regulate professional VPN use is a bad idea

Regulating VPN use in the UK would be both impractical and harmful. A conservative estimate places the UK VPN market at £2.26 billion in 2025, with usage primarily by businesses and adults for cybersecurity, secure remote access, and managing networks with restricted sites, not as a widespread tool for children.

Even the most authoritarian governments, such as China and Iran, struggle to control VPNs, demonstrating the futility of such regulation. Forcing websites to try and detect VPN use would cause widespread disruption to users in other countries.

However, data protection regulation that restricted the commercialisation of people’s browsing data through VPNs could be an effective way to stop free VPNs exploiting people’s personal data. Free VPNs are also far more accessible to children, as there is no cost or credit card requirement to obtain one.

If you want to learn more about VPNs and the Online Safety Act, we have produced a specific briefing on this issue.

End-to-end encryption (E2EE) is a fundamental cybersecurity measure: it ensures that only the sender and the intended recipient can read messages or view shared media. It does this by encrypting a message on a device and then only decrypting it on the receiving device. It prevents interception of private chats on services like WhatsApp, Facebook Messenger or Signal. For many people, including parents sharing photos privately with family, E2EE provides essential protection for privacy and security, guarding against risks such as identity theft, targeting, or misuse of personal media. The Information Commissioner’s Office (ICO) has publicly defended E2EE, arguing that it “serves an important role both in safeguarding our privacy and online safety,” including by preventing criminals from accessing children’s pictures or location details.
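For technically minded readers, the sketch below illustrates the principle in a few lines of Python, using the PyNaCl library purely as an example (real messengers such as Signal or WhatsApp use their own, more elaborate protocols): each party holds a private key that never leaves their device, and anything relayed in between is unreadable ciphertext.

```python
# Minimal illustration of the end-to-end principle using PyNaCl (pip install pynacl).
# This is a sketch of the concept, not how any particular messenger implements it.

from nacl.public import PrivateKey, Box

# Each person generates a keypair on their own device; private keys never leave it.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"holiday photo for grandma")

# Whoever relays `ciphertext` (the platform, an ISP, an attacker) sees only noise.
# Only Bob, holding his private key, can decrypt it.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"holiday photo for grandma"
```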

Under the Online Safety Act, however, the regulator Ofcom has identified “encryption” as a potential “risk factor” for online harms. The concern is that E2EE could enable predators to exchange harmful content, including illicit images or grooming messages, out of sight of both platforms and law enforcement. As a result, the law grants Ofcom the power (via a “technology notice”) to require services to seek to develop new, unproven technologies such as client-side scanning to scan private messages, even where encrypted, for illicit content like child sexual abuse material (CSAM) or terrorist content.

In practice, implementing such scanning on E2EE platforms would likely require dismantling or weakening encryption (for example via back-doors or client-side scanning), undermining the very privacy and security E2EE was designed to guarantee. That would expose all users, not just those allegedly abusing the system, to increased risk: from hackers, data breaches, mass surveillance, or economic harms (e.g., compromised corporate communications, banking, and trade secrets).

In the UK, the police already have other means of accessing evidence: if a device is seized during an investigation, messages stored on it can be read, since they are decrypted on the device by design. If someone refuses to allow access to an encrypted device, they can be charged under Section 49 of RIPA, and if their device is being searched at the border, they can be charged under Schedule 7 of the Terrorism Act. Weakening E2EE is not the only route to obtaining evidence.

Weakening encryption in a broad, systemic way therefore jeopardises cybersecurity and privacy for millions, in pursuit of potential but uncertain gains in detecting criminal content. An approach that undermines cybersecurity cannot be considered a credible “online safety” regime.

ORG recommends the following changes to the Act:

  • Remove “technology notices” entirely.
  • Prevent such notices from applying to private messaging services.
  • Amend the Act to require judicial (not just “skilled person”) oversight for any attempt to compel scanning or decryption.
  • Require an assessment of the economic and cybersecurity harms that could arise from requiring scanning of E2EE messages or files.
  • Remove encryption from Ofcom’s “risk register”.

The OSA applies burdens designed for global platforms to:

  • small businesses
  • community forums
  • charities and clubs
  • blogs and hobby sites
  • open-source projects
  • federated platforms (eg, Mastodon instances)
  • public-interest services like Wikipedia

Many cannot meet the Act’s compliance demands, especially around risk assessments, moderation requirements and age assurance. The result is predictable: they geoblock UK users or shut down entirely. Wikimedia has warned it might have to introduce a cap on UK users if forced to comply with regulations designed for large social media platforms.

ORG is tracking this emerging trend on our Blocked website.

This reduces diversity online, harms SMEs, and accelerates market consolidation into the largest platforms.

ORG has drafted a proposed amendment to the Online Safety Act that would exempt small, low-risk sites while retaining Ofcom’s ability to act against small high-risk sites. It would introduce an exemption for very small, not-for-profit services like community forums that are objectively low risk (in the view of a “reasonable person”), while allowing Ofcom to compel any such site to comply with the Act’s usual duties should Ofcom believe the site is in fact risky. We are happy to speak with any parliamentarian who is interested in this reform to the Act.

Recent high-profile cases of AI chatbots harming children have placed them under greater scrutiny. Evidence on the impact of AI chatbots is still emerging and is mixed. Some of it is undoubtedly very concerning, especially regarding the encouraging or sycophantic tone chatbots adopt, which is misleading, and the default presumption built into AI tools – which handle context poorly – that discussion of sex, racism and other difficult topics is inherently harmful, leading to misleading summaries and searches.

If regulation is extended to AI chatbots, the Online Safety Act would need to be updated. If Parliament does so, MPs should at the same time address some of the privacy and freedom of expression concerns with the Act.

This would avoid repeating the same problems we are experiencing with the Online Safety Act currently being applied to AI Chat Bot services.

Risks of poorly regulating chatbots:

  • There may be demands for stricter age-verification or usage restrictions for under-18s. That could mean children would need to prove age (ID, facial scan, etc.) to use chatbots. This could deny teenagers access to any social benefits that could arise from this technology.
  • Content filters are already aggressive, lead to misleading results and could become more aggressive. Rather than just removing illegal content, chatbots may be required to proactively block or refuse content that might be “harmful to children” leading to further conservative moderation and over blocking.
  • The development of new chatbots or alternative, smaller services could be hampered due to compliance costs and legal risk. This would reduce innovation and diversity in the AI space, benefiting big players through a form of regulatory capture.

ORG is not outright opposed to the Act. We support safer online environments, but through measures we think would work more effectively, including:

  • Platform accountability, including transparency and audits
  • User empowerment, including filtering tools and better reporting
  • Proportionate rules focused on the highest-risk services
  • Privacy-preserving design
  • Protection for encryption and cybersecurity
  • Interoperability, to ensure consumer competition can create safety.

Our report Making Platforms Accountable, Empowering Users and Creating Safety highlights how government and society continue to fuel monopolies through advertising expenditure and policy dependence on major platforms. It urges the UK to apply its new Digital Markets, Competition and Consumers Act 2024, enabling the Competition and Markets Authority (CMA) and its Digital Markets Unit (DMU) to impose interoperability and data-portability obligations on firms with Strategic Market Status.

Fix the Online Safety Act
