On Mastodon
Apr. 29th, 2022 11:41 am
I think Mastodon will wind up being far far far easier for the Troll Factory to astroturf, once it's big enough for them to bother with in earnest.
There's no one with an overview of the network (by design), so it would be almost impossible to use network-analysis methods to systematically discover inauthentic accounts promoting whatever narrative. It's basically an open door, especially given that a troll factory can easily create its own nodes in whatever numbers it wishes.
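To make "network-analysis methods" concrete: below is a minimal sketch, with entirely invented accounts, URLs, and thresholds, of the kind of coordinated-posting detection that researchers run against a centralised platform's firehose. Note the prerequisite baked into the first comment: someone has to be able to see all the posts at once, which no single Mastodon admin can.

```python
# Toy coordinated-posting detector. Assumes a *global* stream of posts,
# which is exactly what no single Mastodon admin has. Data is invented.
from collections import defaultdict
from itertools import combinations

# (account, url_posted, unix_time) -- hypothetical firehose records
posts = [
    ("troll_01", "http://example.com/story", 1000),
    ("troll_02", "http://example.com/story", 1010),
    ("troll_03", "http://example.com/story", 1025),
    ("normal_1", "http://example.com/story", 9000),  # same link, much later
]

WINDOW = 60  # seconds: same URL posted this close together looks coordinated

# Group posts by URL, then count how often each *pair* of accounts shares
# a URL within the window. Real systems do this at massive scale.
by_url = defaultdict(list)
for account, url, t in posts:
    by_url[url].append((account, t))

pair_counts = defaultdict(int)
for url, entries in by_url.items():
    for (a1, t1), (a2, t2) in combinations(entries, 2):
        if a1 != a2 and abs(t1 - t2) <= WINDOW:
            pair_counts[frozenset((a1, a2))] += 1

# Pairs that co-post repeatedly are candidates for a coordination cluster.
suspicious = {pair for pair, n in pair_counts.items() if n >= 1}
print(suspicious)  # all three troll pairings flagged; normal_1 is not
```

A real pipeline would demand counts far above this toy threshold, but the shape is the same: the signal lives in cross-account correlations, not in any single post.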
Also, I assume some nodes are going to be run by people with skewed attitudes towards what (non-illegal-in-their-jurisdiction) hate speech etc. they allow? I don't know what happens when Node A allows something, Node B blocks Node A, Node C doesn't, and I'm watching two-thirds of a conversation from Node B (between someone on Node B and someone on Node C, both of whom I can see, and someone on Node A, whom I can't)?
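Mechanically, that "two-thirds of a conversation" effect looks something like the sketch below. The instance names and the exact filtering rule are assumptions for illustration, not Mastodon's actual implementation:

```python
# Hypothetical thread: each post is (author, author's instance, text).
thread = [
    ("alice", "node-b.example", "Original post"),
    ("bob",   "node-a.example", "Reply from the blocked instance"),
    ("carol", "node-c.example", "Reply quoting bob"),
]

# Node B has defederated from Node A; Node C has not.
blocked_by = {"node-b.example": {"node-a.example"},
              "node-c.example": set()}

def visible_thread(viewer_instance):
    """The posts a viewer on `viewer_instance` can see of this thread."""
    blocks = blocked_by[viewer_instance]
    return [(a, i, t) for (a, i, t) in thread if i not in blocks]

print(visible_thread("node-b.example"))  # bob's reply is silently missing
print(visible_thread("node-c.example"))  # the full conversation
```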
ISTM that there's a high chance that nodes wanting to avoid weirdness of that sort end up choosing between "we're going to block any node that hasn't blocked Node A" and "we'll block hate speech on our own node, but we're not going to cut off an entire node just because it's run by two volunteers, one of whom is ill, who haven't got the resources to moderate adequately". At which point the network is basically bifurcated. Repeat for various values of Node A with varying levels of awfulness. Obviously, a node which is actively cheerleading nazis is morally worse than a node that's just run by someone without the resources to moderate the nazis away, but from the outside they look the same. For some nodes, that will be an important difference; for others it won't.
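The bifurcation can be made concrete with a toy model (all instance names and policies invented): once a few instances adopt the transitive "block anyone who hasn't blocked Node A" policy while others only moderate locally, the federation graph falls into two components.

```python
# Toy model of who still federates with whom once a transitive-block
# policy kicks in. All instance names and policies here are invented.
BAD = "node-a.example"
policy = {
    "strict-1.example":  "strict",   # blocks BAD, and any node lax about BAD
    "strict-2.example":  "strict",
    "lenient-1.example": "lenient",  # moderates locally, won't defederate
    BAD:                 "lenient",
}

def has_blocked_bad(node):
    return policy[node] == "strict"  # only strict nodes have blocked BAD

def blocks(x, y):
    """Does instance x refuse to federate with instance y? (toy rule)"""
    if policy[x] != "strict":
        return False                 # lenient nodes don't defederate at all
    return y == BAD or not has_blocked_bad(y)

def federates(x, y):
    return not blocks(x, y) and not blocks(y, x)

# Find the connected components of the resulting federation graph.
nodes, seen, components = list(policy), set(), []
for n in nodes:
    if n in seen:
        continue
    comp, stack = set(), [n]
    while stack:
        cur = stack.pop()
        if cur in comp:
            continue
        comp.add(cur)
        stack.extend(m for m in nodes if m not in comp and m != cur
                     and federates(cur, m))
    seen |= comp
    components.append(comp)

print(components)
# roughly: [{'strict-1.example', 'strict-2.example'},
#           {'lenient-1.example', 'node-a.example'}] -- a two-way split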
[ETA: I'm not saying that Mastodon doesn't have points where it's better than Twitter or whatever comparator. I'm saying that its federated nature makes it peculiarly vulnerable to certain attacks, and I can't see how those vulnerabilities can easily be mitigated.]
no subject
Date: 2022-04-29 11:47 am (UTC)
1) Mastodon instances are, for the most part, way smaller than Twitter. Playing whack-a-mole with spammers and Nazis isn't fun, but finding people who want to do it doesn't turn out to be impossible. I'm not sure AI is a better solution than (relatively) large numbers of people who care and have skin in the game within their own communities.
2) Users have a lot more control over their home timelines than on Twitter. The ability to turn off boosts is great on days when the network is all up in arms about something. Similarly, the lack of trending topics is something of a calming influence.
3) Users' timelines aren't dictated by an algorithm (one that rewards engagement for advertising purposes, etc.). There are still outrage cycles, but they feel different in ways I struggle to articulate.
4) If users aren't happy with the moderation policies on their home instance, they can switch to another one (and I think importing followers/followees is automated now; see the sketch after this comment).
No, we cannot stop Nazis creating instances of their own. But putting them mostly in their own echo chamber means that it's possible for the rest of us to participate widely on Mastodon with much less hassle and risk. I am not sure that bifurcation is such a problem in that context.
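On point 4: the migration flow itself lives in Mastodon's settings UI, but as a rough sketch of what the portable follow list amounts to, here's one way to pull the list of followed accounts over the API. The endpoints named are Mastodon's documented ones; the instance URL and token are placeholders, and the details should be treated as illustrative rather than authoritative.

```python
# Rough sketch: download your follow list from a Mastodon server.
# Assumes you have created an access token in Settings > Development.
import requests

INSTANCE = "https://mastodon.example"   # your home instance (placeholder)
TOKEN = "YOUR_ACCESS_TOKEN"             # placeholder
headers = {"Authorization": f"Bearer {TOKEN}"}

# Who am I?
me = requests.get(f"{INSTANCE}/api/v1/accounts/verify_credentials",
                  headers=headers).json()

# Page through everyone I follow (Mastodon paginates via the Link header,
# which requests exposes as response.links).
follows, url = [], f"{INSTANCE}/api/v1/accounts/{me['id']}/following"
while url:
    resp = requests.get(url, headers=headers)
    follows.extend(acct["acct"] for acct in resp.json())
    url = resp.links.get("next", {}).get("url")

print(f"{len(follows)} follows, e.g. {follows[:3]}")
```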
no subject
Date: 2022-04-29 12:15 pm (UTC)
If I was given a timeline from a troll farm account, I probably wouldn't be able to tell that it was inauthentic content. Certainly I wouldn't be able to tell that any given tweet was. It's only by analysing patterns across hundreds of thousands of accounts that one could tell. And I don't see how you can ask volunteers to do that.
https://www.opendemocracy.net/en/odr/troll-factories-kyrgyzstan/, https://www.rferl.org/a/russian-troll-factory-hacking/31076160.html, and https://www.diis.dk/en/my-life-as-a-troll-lyudmila-savchuk-s-story are the sort of thing I'm talking about.
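As a toy illustration of the "patterns across hundreds of thousands of accounts" point, using a different signal from the link-sharing sketch above: any single post looks fine, but rigidly machine-scheduled posting times give a farm away in aggregate. All data here is invented, and a real analysis would then cluster the flagged accounts across instances, which again presumes a view no volunteer admin has.

```python
# Toy timing analysis: accounts whose posting times snap to the same
# quarter-hour marks look machine-scheduled. Data is invented.
from statistics import pstdev

post_minutes = {
    "troll_01": [0, 15, 30, 45, 0, 15, 30, 45],   # rigid schedule
    "troll_02": [1, 16, 31, 46, 0, 15, 31, 45],
    "human_1":  [7, 52, 3, 38, 19, 44, 11, 58],   # irregular
}

for account, minutes in post_minutes.items():
    # Distance of each post from the nearest quarter-hour mark.
    offsets = [min(m % 15, 15 - m % 15) for m in minutes]
    score = pstdev(offsets) + sum(offsets) / len(offsets)
    label = "suspicious" if score < 2 else "ok"   # threshold is arbitrary
    print(account, round(score, 2), label)
```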
no subject
Date: 2022-05-02 10:13 am (UTC)
I see your logic.
But I also feel like a big problem with the current big centralised platforms is that they massively lean into being a giant mosh pit of everyone on the service all watching a small number of celebrities. That isn't great for users, and it means any kind of moderation has to cope with people interacting with basically everyone, unless they almost completely opt out. If the main ways of interacting are "people you've specifically friended" and "randoms on the same smallish, moderated instance", I think the problem's much smaller. Still potentially big, in that even friend-of-friend-only or backchannel-only friend requests can be a vector for misinformation and abuse, but possibly manageable by choosing an instance which tries to cope, or is big enough to have a paid team.
But I guess I should understand how it DOES work first...