Nothing much to see

Dec. 12th, 2025 05:48 pm
[personal profile] tig_b
Instead of editing or finishing a book, I've been busy catching up on missed deadlines after an October mainly spent feeling ill.

So I wrote and delivered a training course and am partway through 4 more.
Plus too many school appeals.
In the middle were other bits and pieces connected to various voluntary posts.
And a little paid work to refill the financial hole left by vet bills and teeth.

Building Trustworthy AI Agents

Dec. 12th, 2025 12:00 pm
[syndicated profile] bruce_schneier_feed

Posted by Bruce Schneier

The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us with doubt about things we are or that we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions.

These aren’t edge cases. They’re the result of building AI systems without basic integrity controls. Integrity is the third leg of data security—the old CIA triad. We’re good at availability and working on confidentiality, but we’ve never properly solved integrity. Now AI personalization has exposed the gap by accelerating the harms.

The scope of the problem is large. A good AI assistant will need to be trained on everything we do and will need access to our most intimate personal interactions. This means an intimacy greater than your relationship with your email provider, your social media account, your cloud storage, or your phone. It requires an AI system that is both discreet and trustworthy when provided with that data. The system needs to be accurate and complete, but it also needs to be able to keep data private: to selectively disclose pieces of it when required, and to keep it secret otherwise. No current AI system is even close to meeting this.

To further development along these lines, I and others have proposed separating users’ personal data stores from the AI systems that will use them. It makes sense; the engineering expertise that designs and develops AI systems is completely orthogonal to the security expertise that ensures the confidentiality and integrity of data. And by separating them, advances in security can proceed independently from advances in AI.

What would this sort of personal data store look like? Confidentiality without integrity gives you access to wrong data. Availability without integrity gives you reliable access to corrupted data. Integrity enables the other two to be meaningful. Here are six requirements, which emerge from treating integrity as the organizing principle of a trustworthy AI system.

First, it would be broadly accessible as a data repository. We each want it to include personal data about ourselves, as well as transaction data from our interactions. It would include data we create when interacting with others—emails, texts, social media posts. Some of it would be raw data, and some of it would be processed data: revealed preferences, conclusions inferred by other systems, maybe even raw weights in a personal LLM.

Second, it would be broadly accessible as a source of data. This data would need to be made accessible to different LLM systems. This can’t be tied to a single AI model. Our AI future will include many different models—some of them chosen by us for particular tasks, and some thrust upon us by others. We would want the ability for any of those models to use our data.

Third, it would need to be able to prove the accuracy of data. Imagine one of these systems being used to negotiate a bank loan, or participate in a first-round job interview with an AI recruiter. In these instances, the other party will want both relevant data and some sort of proof that the data are complete and accurate.
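One building block for this kind of proof is a cryptographic attestation bound to a record's exact contents. Here is a minimal sketch of the idea, not any real protocol: the key, record fields, and function names are all hypothetical, and a production system would use asymmetric signatures from an auditable attester rather than a shared HMAC key.

```python
import hashlib
import hmac
import json

STORE_KEY = b"demo-key"  # hypothetical; real systems would sign, not HMAC

def attest(record: dict) -> str:
    """Bind a tag to the record's exact contents, so any change is detectable."""
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).digest()
    return hmac.new(STORE_KEY, digest, hashlib.sha256).hexdigest()

def verify(record: dict, tag: str) -> bool:
    """A relying party (say, the AI recruiter) checks the tag before trusting the data."""
    return hmac.compare_digest(attest(record), tag)

record = {"employer": "Example Co", "years_employed": 3}
tag = attest(record)
assert verify(record, tag)                                        # untouched data passes
assert not verify({"employer": "Example Co", "years_employed": 30}, tag)  # tampering fails
```

The point is that the other party can check a compact tag against the disclosed data, without ever seeing the rest of the store.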

Fourth, it would be under the user’s fine-grained control and audit. This is a deeply detailed personal dossier, and the user would need to have the final say in who could access it, what portions they could access, and under what circumstances. Users would need to be able to grant and revoke this access quickly and easily, and be able to go back in time and see who has accessed it.
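The grant/revoke/audit requirement can be sketched as a data structure: per-model scope grants plus an append-only log of every access attempt. This is a hypothetical illustration, with made-up model and scope names, not an implementation of any existing system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PersonalDataStore:
    """Sketch: fine-grained grants per AI model, plus an append-only audit trail."""
    data: dict
    grants: dict = field(default_factory=dict)     # model_id -> set of scopes
    audit_log: list = field(default_factory=list)  # (time, model_id, scope, allowed)

    def grant(self, model_id, scopes):
        self.grants.setdefault(model_id, set()).update(scopes)

    def revoke(self, model_id):
        self.grants.pop(model_id, None)  # takes effect on the next access

    def read(self, model_id, scope):
        allowed = scope in self.grants.get(model_id, set())
        # Every attempt is recorded, allowed or not, so the user can go back
        # in time and see who has accessed what.
        self.audit_log.append((datetime.now(timezone.utc), model_id, scope, allowed))
        if not allowed:
            raise PermissionError(f"{model_id} has no grant for {scope!r}")
        return self.data[scope]

store = PersonalDataStore(data={"calendar": "...", "health": "..."})
store.grant("scheduling-llm", {"calendar"})
store.read("scheduling-llm", "calendar")   # allowed, and logged
store.revoke("scheduling-llm")             # immediate; later reads are denied
```

Even this toy version shows the shape of the requirement: access decisions are the user's, revocation is quick, and the history is inspectable.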

Fifth, it would be secure. The attacks against this system are numerous. There are the obvious read attacks, where an adversary attempts to learn a person’s data. And there are also write attacks, where adversaries add to or change a user’s data. Defending against both is critical; this all implies a complex and robust authentication system.
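Authentication keeps adversaries out; surviving write attacks also requires tamper-evidence, so that unauthorized changes are at least detectable. One standard building block is a hash chain over the store's history, sketched here with hypothetical log entries.

```python
import hashlib

def chain_head(entries, genesis=b"\x00" * 32):
    """Hash-chain sketch: each link's hash covers the previous link's hash,
    so rewriting any earlier entry changes the final head hash."""
    h = genesis
    for entry in entries:
        h = hashlib.sha256(h + entry.encode()).digest()
    return h

log = ["grant:model-a:calendar", "read:model-a:calendar"]
head = chain_head(log)

# A write attack that alters history produces a different head...
assert chain_head(["grant:model-a:health", "read:model-a:calendar"]) != head
# ...while replaying the honest log reproduces it exactly.
assert chain_head(log) == head
```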

Sixth, and finally, it must be easy to use. If we’re envisioning digital personal assistants for everybody, it can’t require specialized security training to use properly.

I’m not the first to suggest something like this. Researchers have proposed a “Human Context Protocol” (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5403981) that would serve as a neutral interface for personal data of this type. And in my capacity at a company called Inrupt, Inc., I have been working on an extension of Tim Berners-Lee’s Solid protocol for distributed data ownership.

The engineering expertise to build AI systems is orthogonal to the security expertise needed to protect personal data. AI companies optimize for model performance, but data security requires cryptographic verification, access control, and auditable systems. Separating the two makes sense; you can’t ignore one or the other.

Fortunately, decoupling personal data stores from AI systems means security can advance independently from performance (https://ieeexplore.ieee.org/document/10352412). When you own and control your data store with high integrity, AI can’t easily manipulate you because you see what data it’s using and can correct it. It can’t easily gaslight you because you control the authoritative record of your context. And you determine which historical data are relevant or obsolete. Making this all work is a challenge, but it’s the only way we can have trustworthy AI assistants.

This essay was originally published in IEEE Security & Privacy.

Update [me, health, Patreon]

Dec. 12th, 2025 06:49 am
[personal profile] siderea
So, I, uh, got my RSI/ergonomics debugged!* I then promptly lost two days to bad sleep due to another new mechanical failure of the balky meat mecha and also a medical appointment in re two previous malfunctions. But I seem back in business now. The new keyboard is great.

Patrons, I've got three Siderea Posts out so far this month and it's only the 12th. I have two more Posts I am hoping to get out in the next three days. Also about health insurance. We'll see if it actually happens, but it's not impossible. I have written a lot of words. (I really like my new keyboard.)

Anyways, if you weren't planning on sponsoring five posts (or – who knows? – even more) this month, adjust your pledge limits accordingly.

* It was my bra strap. It was doing something funky to how my shoulder blade moved or something. It is both surprising to me that so little pressure made so much ergonomic difference, and not surprising because previously an even lighter pressure on my kneecap from wearing long underwear made my knee malfunction spectacularly. Apparently this is how my body mechanics just are.
[personal profile] siderea
Canonical link: https://siderea.dreamwidth.org/1890494.html


0.

Hey Americans (and other people stuck in the American healthcare system)! Shopping for a health plan on your state marketplace? Boy, do I have some information for you that you should have and probably don't. There's been an important legal change affecting your choices that has gotten almost no press.

Effective with plan year 2026, all bronze-level and catastrophic plans are statutorily HDHPs and thus HSA-compatible. You may get and self-fund an HSA if you have any bronze or catastrophic plan, as well as any plan of any level designated an HDHP.

2025 Dec 9: IRS.gov: "Treasury, IRS provide guidance on new tax benefits for health savings account participants under the One, Big, Beautiful Bill"
Bronze and Catastrophic Plans Treated as HDHPs: As of Jan. 1, 2026, bronze and catastrophic plans available through an Exchange are considered HSA-compatible, regardless of whether the plans satisfy the general definition of an HDHP. This expands the ability of people enrolled in these plans to contribute to HSAs, which they generally have not been able to do in the past. Notice 2026-05 clarifies that bronze and catastrophic plans do not have to be purchased through an Exchange to qualify for the new relief.

If you are shopping plans right now (or thought you were done), you should probably be aware of this. Especially if you are planning on getting a bronze plan, a catastrophic plan, or any plan with the acronym "HSA" in the name or otherwise designated "HSA compatible".

The Trump administration doing this is a tacit admission that all bronze plans have become such bad deals that they're the economic equivalent of what used to be considered an HDHP back when that concept was invented, and so should come with legal permission to protect yourself from them with an HSA.

Effective immediately, you should consider a bronze plan half an insurance plan.

Read more [3,340 words] )

This post brought to you by the 221 readers who funded my writing it – thank you all so much! You can see who they are at my Patreon page. If you're not one of them, and would be willing to chip in so I can write more things like this, please do so there.

Please leave comments on the Comment Catcher comment, instead of the main body of the post – unless you are commenting to get a copy of the post sent to you in email through the notification system, then go ahead and comment on it directly. Thanks!

more on visual culture in science

Dec. 12th, 2025 11:04 am
[personal profile] kaberett

This morning I am watching the lecture I linked to on Tuesday!

At 6:53:

Here is an example of how the Hubble telescope image of the Omega nebula, or Messier 17, was created, by adding colours -- which seem to have been chosen quite arbitrarily -- and adjusting composition.

The slide is figure 13 (on page 10) from an Introduction to Image Processing (PDF) on the ESA Hubble website; I'm baffled at the idea that the colours were chosen "arbitrarily" given that the same PDF contains (starting on page 8) §1.4 Assigning colours to different filter exposures. It's not a super clear explanation -- I think the WonderDome explainer is distinctly more readable -- but the explanation does exist and is there.

Obviously I immediately had to stop and look all of this up.

(Rest of the talk was interesting! But that point in particular about modern illustration as I say made me go HOLD ON A SEC--)

[syndicated profile] openrightsgroup_feed

Posted by Pam Cowburn

  • ORG joins Age Verification Providers Association in calling for higher standards for age assurance and more clarity about when it should be used.
  • Online Safety Act is forcing public to use unregulated age assurance services.
  • MPs are due to discuss Online Safety Act on Mon Dec 15 after more than 550,000 people petitioned Parliament to repeal the law.

Open Rights Group has written to the Secretary of State for Science, Innovation and Technology, Liz Kendall MP, calling for regulation of age assurance providers operating under the Online Safety Act. The letter has also been signed by the Age Verification Providers Association (AVPA) and over 600 members of the public.

Regulate age verification

Since July, many online platforms have forced their users to verify their age as part of their obligations under the Online Safety Act. These are not just pornography websites but also dating apps, social media platforms such as BlueSky and Reddit, streaming services such as Spotify, and Xbox gaming services.

It is platforms, not users, that decide which age verification providers are used. They have an incentive to choose cheaper and less secure vendors, mainly located in the US, with varying quality of data protection practices. Some less reputable providers may also choose to collect more data than necessary in order to profit from it.

ORG is asking the Government, ICO, and Ofcom to establish compulsory privacy and security standards for these providers to ensure that users’ sensitive data is protected.

James Baker, Platform Power Programme Manager at ORG, said:

“As a result of the Online Safety Act adults in the UK are being asked to share sensitive data to access social media sites, dating apps, and online gaming.

“Platforms choose which provider to use, and the public has to hope they can be trusted. Regulation would at least give some reassurance that our data is in safe hands.”

The call for regulation is supported by the Age Verification Providers Association (AVPA). Iain Corby, its Executive Director, said:

“We’ve implemented self-regulation – a code of conduct, international standards, audit and certification – but agree more should be done officially too.”

In October, the IDs of 70,000 Discord users were leaked, demonstrating the potential risks of age assurance. All processes around age assurance need to be secure, including any customer service support put in place to deal with people who experience problems when trying to verify their age.


On Monday December 15, MPs will debate the Online Safety Act after 550,000 people signed a petition calling for it to be repealed. ORG has outlined a number of ways that the Act can be improved in a new briefing.

[syndicated profile] tim_harford_feed

Posted by Tim Harford

A megaplant near the small village of Flixborough, England, is busy churning out a key ingredient of nylon 6, a material used in everything from stockings to toothbrushes to electronics. When a reactor vessel fails, the engineers improvise a quick-fix workaround, so the plant can keep up with demand. Before long, the temporary patch – a small, bent pipe – becomes a permanent part of the factory, and the people of Flixborough unknowingly drift towards disaster. 

For bonus episodes, ad-free listening, our monthly newsletter and behind-the-scenes conversations with members of the Cautionary Tales production team, consider joining the Cautionary Club.

[Apple] [Spotify] [Stitcher]

Further reading

The Flixborough disaster. Report of the Court of Inquiry

Flixborough 1974 Memories. Essential eye-witness history from the North Lincolnshire Museum. 

‘Fire and devastation’: 50 years on from the Flixborough disaster what’s changed? Chemistry World

[surgery] one year on!

Dec. 11th, 2025 10:28 pm
[personal profile] kaberett

I continue extremely grateful to no longer have ureteric stents.

a bit of stock-taking )

Timeline of a new phase in my life.

Dec. 11th, 2025 07:12 pm
[personal profile] andrewducker
About two months ago, I had a nasty respiratory infection. And while I was lying awake one night, I could hear my heart beating quite loudly.

Having had multiple friends go to the doctor to check on something and then have the doctor tell them that they urgently needed medication before their high blood pressure did them serious damage/killed them, I thought I should pop in to the doctor for a chat.

They checked me on the spot, said my blood pressure was a little high, but nothing terrible, and told me to join the queue to borrow a blood pressure device. [personal profile] danieldwilliam gave me his old one, and I spent a couple of weeks taking results. Which mostly showed that my pressure is fine in the morning, but that after I've spent 90 minutes shouting at Gideon to stop bloody well mucking about and go to sleep, it's a fair chunk higher than it should be. They also sent me for an ECG (which showed I have Right Bundle Branch Block, a harmless and untreatable condition that affects 15% of the population), an eye test (which found nothing), and a fasting blood test (which showed I'm still not diabetic, even though I can't have sugar in my diet even slightly any more).

They then had a phone call with me to chat it through, said that I'm a little high (on average), and a little young for it to be a major worry, but if I was up for it they could put me on some pills for hypertension. I agreed that it sounded sensible, and the doctor sounded positively relieved that she hadn't had to bully me into it.

The weird feeling is that this is the first time I've been put on to a medicine that I will have to take for the rest of my life. There is now "The time I didn't have to take medicine every day" and "The time where I had to take medicine every day". Which definitely feels like an inflection point in my life. (Endless sympathy, of course, for people I know who have to take much worse things than a tiny tasteless pill with very few side-effects.)

So all-in-all, nothing major. Just the next step. I'm just very glad for the existence of modern medicine.

Are bubbles good, actually?

Dec. 11th, 2025 05:23 pm
[syndicated profile] tim_harford_feed

Posted by Tim Harford

Swiss psychiatrist Elisabeth Kübler-Ross suggested that there are five stages of grief, but nobody has the attention span for that any more. We have leapt instead from stage one, denial — “there is no AI bubble”, to stage five, acceptance — “AI is a bubble and bubbles are great”.

The “bubbles are great” hypothesis has been advanced both in popular and scholarly books, but it was hard to ignore when Jeff Bezos, one of the world’s richest men, sought to draw a distinction between financial bubbles (bad) and industrial bubbles (less bad, maybe good). Bezos, after all, built one of the 21st century’s great businesses, Amazon, in the middle of a bubble that turned contemporaries such as Webvan and Pets.com into a punchline.

There is a solid theory behind the idea that investment manias are good for society as a whole: it is that without a mania, nothing gets done for fear that the best ideas will be copied.

Entrepreneurs and inventors who do take a risk will soon find other entrepreneurs and inventors competing with them, and most of the benefits will go not to any of these entrepreneurs, but to their customers.

(The dynamic has the delightful name of the “alchemist’s fallacy”. If someone figures out how to turn lead into gold, pretty soon everyone will know how to turn lead into gold, and how much will gold be worth then?)

The economist and Nobel laureate William Nordhaus once tried to estimate what slice of the value of new ideas went to the corporations who owned them, and how much went to everyone else (mostly consumers). He concluded that the answer — in the US, between 1948 and 2001 — was 3.7 per cent to the innovating companies, and 96.3 per cent to everyone else. Put another way, the spillover benefits were 26 times larger than the private profits.

If the benefits of AI are similarly distributed, there is plenty of scope for AI investments to be socially beneficial while being catastrophic bets for investors.

The historical parallel that is mentioned over and over again is the railway bubble. The bluffer’s guide to the railway bubble is as follows: British investors got very excited about railways in the 1840s, share prices went to silly levels, some investors lost their shirts, but in the end, guess what? We had railways! Or as the Victorian historian John Francis wrote, “It is not the promoters, but the opponents of railways, who are the madmen.”

Put like that, it doesn’t sound so bad. But should we put it like that? I got in touch with some bubble historians: William Quinn and John D Turner, who wrote Boom and Bust: A Global History of Financial Bubbles, and Andrew Odlyzko, a mathematician who has also deeply researched the railway mania. They were less sanguine.

“Funding the railways through a bubble, rather than through central planning (as was the case in much of Europe), left Britain with a very inefficiently designed rail network,” says Quinn. “That’s caused problems right up to the present day.”

That makes sense. There are several possible definitions of a bubble, but the two most straightforward ones are either that the price of financial assets becomes disconnected from fundamental values, or that investments are made on the basis of crowd psychology — by people afraid of missing out, or hoping to offload their bets on to a greater fool. Either way, why would anyone expect the investments made in such a context to be anything close to socially desirable?

Or as the Edinburgh Review put it, “There is scarcely, in fact, a practicable line between two considerable places, however remote, that has not been occupied by a company. Frequently two, three or four rival lines have started simultaneously.”

Nor was the Edinburgh Review writing in the 1840s — it was describing the railway bubble of the 1830s, whose glory days saw promoters pushing for sail-powered trains and even rocket-powered locomotives that would travel at several hundred miles an hour.

The bigger, more notorious bubble of the 1840s was still to come — as was the 1860s bubble (“a disaster for investors”, says Odlyzko, adding that it is debatable whether the social gains outweighed the private losses in the 1860s). The most obvious lesson of the railway manias is not that bubbles are good, but that hope springs eternal and greedy investors never learn.

Another lesson of the railway mania is that when large sums of money are on the line, the line between commerce and politics soon blurs, as does the line between hype and outright fraud.

The “railway king” George Hudson is a salutary example. Born into a modest Yorkshire farming family in 1800, he inherited a fortune from a great uncle in suspicious circumstances, then built an empire of railway holding companies, including four of the largest in Britain. He was mayor of York for many years, as well as an MP in Westminster. Business and politics inextricably intertwined? Inconceivable!

Another bubble historian, William J Bernstein, comments on Hudson that “the closest modern equivalent would be the chairman of Goldman Sachs simultaneously serving in the US Senate.” That’s a nice hypothetical analogy. You may be able to think of less hypothetical ones.

Hudson, alas, is not a man to emulate. He kept his finances looking respectable by making distinctly Ponzi-like payments, funding dividends for existing shareholders out of freshly raised capital, and he defrauded his fellow shareholders by getting companies he controlled to buy up his personal shares at above-market prices. In the end, he was protected from ruin only by the rule that serving parliamentarians could not be arrested for unpaid debts while the House of Commons was in session. He eventually fled to exile in France.

The railway manias are not wholly discouraging. William Quinn is comforted by the observation that when banks stay away from the bubble, its bursting has limited effects. That was true in the 1840s and perhaps it will be true today.

And Odlyzko reassures me that the mania of the 1830s “was a success, in the end, for those investors who persevered”, even if one cannot say the same for the 1840s and the 1860s. But Odlyzko is not impressed by analogies between the railways and AI. People at least understood how railways worked, he says, and what they were supposed to do. But generative AI? “We are losing contact with reality,” he opines.

Written for and first published in the Financial Times on 6 November 2025.

I’m running the London Marathon in April in support of a very good cause. If you felt able to contribute something, I’d be extremely grateful.
