flipboard.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
Welcome to Flipboard on Mastodon. A place for our community of curators and enthusiasts to inform and inspire each other. If you'd like to join please request an invitation via the sign-up page.

#llm

697 posts · 224 participants · 58 posts today

Small request for opinions to #AI #LLM people: unsurprisingly for tech, my workplace has a lot of people very interested in LLMs and we have many who invest tons of hours into investigation, development, and deployment of LLM-supported solutions (agent-based code review, chatbots for end-users of wikis, IT-support, etc.).

It's not that I see no legitimate use-cases for LLMs, but that enthusiasm makes me queasy. I keep worrying about the environmental impact, about the exploitation of people, about discrimination against the marginalised and poor, about theft of intellectual property during the training of these models, and about the power of corporations and its abuse. I could go on and on.

I worry that the risks of LLMs and the associated technologies heavily outweigh their benefits in both the short and the long term. And I don't mean the AI-takeover fairytale. I mean the real and palpable costs of this technology.

What do y'all think? Am I seeing things wrong?

The internet is dominated by "western" thought patterns and styles of discourse, and #ai models are almost exclusively trained on internet data. All #llm's, even Chinese models, will therefore always be "western" at heart, for better or worse. Whether it is possible to fully align an LLM with a completely different world view remains to be seen.

marginalrevolution.com/margina

Just published a small experiment called ThinContext:
A simple way to reduce context bloat in LLMs by summarizing previous outputs instead of feeding everything back verbatim.

message → full reply
context → short summary
Only context gets reused.

It’s lossy, but functional.
Maybe a small step toward more meaningful memory in AI systems.
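The message → full reply, context → short summary split above can be sketched in a few lines. This is a minimal illustration of the idea, not the actual ThinContext code; `ThinContextChat` and the truncation-based `summarize` are placeholders I made up for the sketch (a real system would summarize with an LLM):

```python
def summarize(text: str, max_words: int = 20) -> str:
    """Placeholder summarizer: keep the first max_words words.
    Stands in for an LLM- or extraction-based summarizer."""
    words = text.split()
    tail = " ..." if len(words) > max_words else ""
    return " ".join(words[:max_words]) + tail

class ThinContextChat:
    """Each turn: the user sees the full reply, but only lossy
    summaries of both sides are carried forward as context."""

    def __init__(self, generate):
        # generate: callable (context_str, message) -> full reply
        self.generate = generate
        self.context: list[str] = []  # summaries only, never full replies

    def send(self, message: str) -> str:
        reply = self.generate("\n".join(self.context), message)
        # Only the short summaries get reused on later turns.
        self.context.append("user: " + summarize(message))
        self.context.append("assistant: " + summarize(reply))
        return reply
```

With a long-winded model, the caller still gets the verbatim reply while the stored context stays a fraction of its size — the lossy-but-functional trade-off described above.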

Repo here if you’re curious, or want to break it: github.com/VanDerGroot/ThinCon

Critique very welcome.