
I needed to understand the angles on Threads federation in a more rigorous way, so I took a few days to think through and write up my sense of the benefits, risks, and available risk mitigations, along with loopholes that need closing and questions to discuss with fediverse administrators.

This is a blisteringly hot subject for me, so it's hard to keep my head cool enough to understand other people's trade-offs, but I'm trying.

erinkissane.com/untangling-thr

@kissane and @darius 1 of 5 🧵

Would this be a fair distillation of the big set of pros and cons listed here? See if I got these right; I want to engage, but first I want to be sure I grasp it and don’t oversimplify:

I’ll start with the easy and shorter bit: the “pro or benefits” section you mentioned.

Pro: users on both sides of the Meta/non-Meta Fediverse could have a larger social graph and connect with friends or accounts they otherwise would miss.

@kissane @darius
4 of 5 🧵
The next risk was:

Interoperability with Threads might also make the non-Threads Fedi more vulnerable to larger, well-organized cross-mainstream social media attacks. In essence, attackers might target Threads and, because we interoperate with it, get to our users, whereas if we didn’t we might be a smaller fish they would not target.

The last two risks seemed to be about offering benefits to Meta.

@tchambers @darius I think that’s a fair summary of part of it, but the tricky thing for me here is that this stuff is coming to fedi anyway, and any big boost to fedi population and visibility just accelerates it, so this particular risk is less about Meta being Meta and more about Meta just being very big. But there’s a side point here…

@tchambers @darius My fear is that at some point, AP nodes will either build working relationships with industrial-scale security and safety teams or become very soft targets for well-resourced operations, which are more widespread than I think most of us want to acknowledge.

(I think present-day fedi is pretty unprepared for something on the scale of Secondary Infektion, for example.)

@kissane @darius To that point, very much agreed on prepping for larger-scale attacks: “I do think the next twelve to eighteen months are a critical moment for building cross-server—and cross-platform—alliances for identifying and rooting out whatever influence networks fedi administrators and existing tooling can detect.”

Think #IFTAS can play a role there.

@tchambers @kissane @darius Honestly, I think these issues are something IFTAS can contribute to working on, but they're also much bigger than IFTAS alone. We're going to need a very comprehensive task force to tackle the issues coming in the near-to-mid-term future.

Greg Scallan

@thisismissem @tchambers @kissane @darius This is probably the largest risk. Even on Flipboard, we regularly disable or otherwise limit hundreds of accounts created daily which have bad actors behind them. A major goal we have is to never allow these accounts to federate in the first place, but it is a cat-and-mouse game, so IFTAS and AP node admins working together will be crucial to a safe social environment.

@greg @thisismissem @tchambers @kissane @darius and AP developers. There's no way moderation keeps up unless moderators get better tools. Ones that take advantage of federation, in particular.

@greg @jenniferplusplus @tchambers @kissane @darius I'm already planning to work on a proposal for federating moderation notes between Admins / Moderators, so that two instances have a back-channel for communicating about Reports.

(I'm hesitant to allow communication not related to Reports)
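To make the shape of that idea a little more concrete, here is a rough sketch of what a federated moderation note tied to a Report might carry between two instances. This is not the actual proposal; every type and field name below is a hypothetical illustration.

```typescript
// Hypothetical sketch only: these field names are illustrative, not part of
// any published ActivityPub extension or of the proposal discussed above.
interface FederatedModerationNote {
  // The report this note is attached to, so both instances can correlate it.
  reportUri: string;               // e.g. "https://instance-a.example/reports/123"
  // Which instance and which role authored the note.
  authorInstance: string;          // e.g. "instance-a.example"
  authorRole: "admin" | "moderator";
  // Free-text back-channel message, scoped to this report only.
  content: string;
  createdAt: string;               // ISO 8601 timestamp
}

// A receiving instance would only accept notes that reference a report it
// already knows about, keeping the channel scoped to Reports as described.
function acceptNote(note: FederatedModerationNote, knownReports: Set<string>): boolean {
  return knownReports.has(note.reportUri);
}
```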

@thisismissem

This seems so crucial, and also I *think* it could theoretically help with EU legal compliance w/r/t sharing user data, although I'm the furthest thing from an expert on that. (The Scaling Trust on the Web annex on fedi flagged it and I've been thinking about it since.)

@greg @jenniferplusplus @tchambers @darius

@kissane @greg @jenniferplusplus @tchambers @darius yeah, there are going to be a lot of considerations that need to go into it, of course.

I do want admins of instances to be able to define "custom actions" that can be performed on Reports, Accounts, etc.

These are like manually triggered webhooks, which would be useful for ingesting data into tools from the instance admin panel. (I previously called this a “Receipts API”, but I think a more generic solution is best)
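As a rough illustration of that "custom actions as manually triggered webhooks" idea: the shape below is purely a sketch, and the names, fields, and URL are assumptions rather than an existing Mastodon feature.

```typescript
// Hypothetical sketch: an admin-defined "custom action" that the instance
// admin panel could trigger manually against a Report or Account.
interface CustomAction {
  label: string;                        // shown as a button in the admin UI
  appliesTo: "Report" | "Account";      // what the action can be run against
  webhookUrl: string;                   // external tool that receives the data
  includeFields: string[];              // fields from the target object to send
}

const exampleAction: CustomAction = {
  label: "Send to external triage tool",
  appliesTo: "Report",
  webhookUrl: "https://tools.example/ingest",   // placeholder URL
  includeFields: ["id", "category", "statuses", "targetAccount"],
};

// Triggering the action POSTs the selected fields to the webhook, letting
// external tooling ingest data straight from the instance admin panel.
async function trigger(action: CustomAction, target: Record<string, unknown>): Promise<void> {
  const payload = Object.fromEntries(
    action.includeFields.map((field) => [field, target[field]])
  );
  await fetch(action.webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}
```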

@kissane @greg @jenniferplusplus @tchambers @darius this idea is actually inspired by a tool I'm currently working on integrating with, and I think it could be very powerful for advancing moderation.

@kissane @thisismissem @jenniferplusplus @tchambers @darius I do believe the terms of service and privacy policies of various nodes play a role in what can be shared for the tooling to be effective ... but honestly, I have not put a ton of thought into it. There are some processes that are scoped to the reporter and the reported, but others require a lot more information to be effective in an automated fashion.

@greg @kissane @jenniferplusplus @tchambers @darius

From the FIRES paper, I had this question: my thoughts were that the NodeInfo API or maybe some similar `/.well-known/` endpoint could be used to share data about services, operators, data processing agreements, privacy policies, terms of service, etc.
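To make that idea concrete, here is one way such a document might look and be fetched. The endpoint path and every field name are assumptions for illustration, not an existing standard or registered well-known URI.

```typescript
// Hypothetical sketch of a /.well-known/ style metadata document describing
// the operator, policies, and data-processing terms of an instance.
interface OperatorMetadata {
  operator: { name: string; contact: string };
  privacyPolicy: string;             // URL
  termsOfService: string;            // URL
  dataProcessingAgreement?: string;  // URL, if one is published
  services: string[];                // e.g. ["mastodon", "object-storage"]
}

// The path below is a made-up example, not a registered well-known URI.
async function fetchOperatorMetadata(domain: string): Promise<OperatorMetadata> {
  const res = await fetch(`https://${domain}/.well-known/operator-metadata`);
  if (!res.ok) throw new Error(`No operator metadata published by ${domain}`);
  return (await res.json()) as OperatorMetadata;
}
```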

@thisismissem @jenniferplusplus @tchambers @kissane @darius One of the key things to identify is why a report was generated. I've had "bad actors" generate reports on good people. They also create thousands of accounts and try to flood the reporting. The ability to automate the vast majority of report management will be important for dealing with the armies which do exist out there. That can necessitate some amount of context on users outside your own domain, which will be a challenge.

@greg @jenniferplusplus @tchambers @kissane @darius part of this comes down to how Flag activities are turned into Reports — in Mastodon we have a model for reports where everything is related back to the Actor/Account, which heavily limits *how* reporting can work.

For instance, in a polymorphic report system, like Pixelfed, you can have reports about different things: Accounts, Posts, URLs, Hashtags, etc.

How do you currently report a malicious URL or Hashtag on your instance?
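A discriminated union is one way to picture the difference between the account-centric model and a polymorphic one. This is only an illustrative sketch, not Pixelfed's or Mastodon's actual schema.

```typescript
// Hypothetical sketch contrasting report targets. In an account-centric model
// every report hangs off an account; in a polymorphic model the target varies.
type ReportTarget =
  | { type: "Account"; accountUri: string }
  | { type: "Post"; postUri: string }
  | { type: "URL"; url: string }        // e.g. a link to a misinformation site
  | { type: "Hashtag"; name: string };

interface PolymorphicReport {
  id: string;
  reporter: string;      // URI of the reporting actor
  target: ReportTarget;  // what is actually being reported
  comment?: string;
}

// With this shape, reporting a malicious link inside a post becomes a
// first-class report on the URL itself rather than on the author's account.
const exampleReport: PolymorphicReport = {
  id: "report-1",
  reporter: "https://instance-a.example/users/alice",
  target: { type: "URL", url: "https://misinformation.example/article" },
  comment: "Link spreads known misinformation",
};
```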

@thisismissem @jenniferplusplus @tchambers @kissane @darius Who decides if it is malicious? What if someone reports a benign url as malicious? What criteria does each instance use and is limiting or even blocking deemed ok? Part of the system is labeling. Another part is taking actions on the labels based on context and local values.

@greg @jenniferplusplus @tchambers @kissane @darius typically the user, in this case, since Reports/Flags come from user interaction.

So say you see someone post a link to a misinformation website: you could report that post, but what if you could actually report the link within?

@greg @tchambers @kissane @darius I think for this my FIRES proposal might help actually — it allows for groups to work together on producing advisories and recommendations about various fediverse entities, including Actors/Users.
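Without speaking for the actual FIRES specification, the general idea of groups publishing shared advisories about fediverse entities can be pictured roughly like this. Every field name here is an illustrative assumption, not the real schema.

```typescript
// Illustrative sketch only: not the FIRES schema.
interface Advisory {
  id: string;
  publisher: string;                       // group or instance issuing the advisory
  subject:
    | { kind: "Actor"; uri: string }       // a user/actor
    | { kind: "Instance"; domain: string }
    | { kind: "URL"; url: string };
  severity: "info" | "limit" | "suspend";  // recommended response
  rationale: string;
  issuedAt: string;                        // ISO 8601 timestamp
}
```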

@thisismissem @tchambers @kissane @darius Would love to learn more about your FIRES proposal.

@greg @tchambers @kissane @darius drop me an email, but I'll only be able to give you access to the peer-review version; I'm currently rewriting a lot of it, and it's been taking _a while_ (since like early October). I started working on FIRES in September.

Possibly going to need someone to act as a scribe for me, because I'm having issues with my shoulder that limit my working hours.