#gemma3

Germán Martín<p>Don't you think <a href="https://mastodon.social/tags/Gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemma3</span></a> is really good? When a small model running locally shows empathy, it makes you wonder who you're actually talking to.</p>
Hacker News<p>Gemma3 Function Calling</p><p><a href="https://ai.google.dev/gemma/docs/capabilities/function-calling" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">ai.google.dev/gemma/docs/capab</span><span class="invisible">ilities/function-calling</span></a></p><p><a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/Gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemma3</span></a> <a href="https://mastodon.social/tags/Function" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Function</span></a> <a href="https://mastodon.social/tags/Calling" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Calling</span></a> <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/Technology" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Technology</span></a> <a href="https://mastodon.social/tags/Functionality" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Functionality</span></a> <a href="https://mastodon.social/tags/Google" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Google</span></a> <a href="https://mastodon.social/tags/Developers" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Developers</span></a></p>
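The linked Gemma docs describe function calling as prompt-driven: you list the available functions in the prompt, and the model answers with a JSON call that your own code parses and executes. A minimal sketch of that dispatch step (the tool name and registry here are hypothetical, purely illustrative):

```python
import json

# Hypothetical tool registry -- the function name and signature are
# illustrative, not taken from the Gemma docs.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch(model_output: str) -> str:
    """Parse the JSON function call the model emitted and execute it."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# A reply the model might produce after seeing the tool description:
reply = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'
print(dispatch(reply))  # Sunny in Berlin
```

In a real loop, the tool's return value is fed back to the model so it can compose a final natural-language answer.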
Amelis<p>I like coding with these AI LLM models! </p><p>I only have to prompt the AI as if I were a micromanaging idiot to get the code I want. It feels like working for my previous boss, except he would never have managed to spot the introduced bugs and update the prompt for a retry.</p><p><a href="https://mastodon.green/tags/ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ai</span></a> <a href="https://mastodon.green/tags/ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ollama</span></a> <a href="https://mastodon.green/tags/deepseek" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>deepseek</span></a> <a href="https://mastodon.green/tags/qwencoder" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>qwencoder</span></a> <a href="https://mastodon.green/tags/qodo" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>qodo</span></a> <a href="https://mastodon.green/tags/gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>gemma3</span></a></p>
Hacker News<p>Fine-tune Google's Gemma 3</p><p><a href="https://unsloth.ai/blog/gemma3" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="">unsloth.ai/blog/gemma3</span><span class="invisible"></span></a></p><p><a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/Fine" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Fine</span></a>-tune <a href="https://mastodon.social/tags/Gemma" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemma</span></a> #3 <a href="https://mastodon.social/tags/Gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemma3</span></a> <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/MachineLearning" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MachineLearning</span></a> <a href="https://mastodon.social/tags/Google" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Google</span></a> <a href="https://mastodon.social/tags/TechNews" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TechNews</span></a></p>
michabbb<p><a href="https://social.vivaldi.net/tags/Mistral" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Mistral</span></a> Small 3.1: SOTA Multimodal <a href="https://social.vivaldi.net/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> with 128k Context Window 🚀</p><p><a href="https://social.vivaldi.net/tags/MistralAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MistralAI</span></a> releases improved <a href="https://social.vivaldi.net/tags/opensource" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>opensource</span></a> model outperforming <a href="https://social.vivaldi.net/tags/Gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemma3</span></a> and <a href="https://social.vivaldi.net/tags/GPT4oMini" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GPT4oMini</span></a> with 150 tokens/sec speed. Features <a href="https://social.vivaldi.net/tags/multimodal" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>multimodal</span></a> capabilities under <a href="https://social.vivaldi.net/tags/Apache2" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Apache2</span></a> license.</p><p>🧵👇<a href="https://social.vivaldi.net/tags/machinelearning" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>machinelearning</span></a></p>
Benjamin Carr, Ph.D. 👨🏻‍💻🧬<p>The <a href="https://hachyderm.io/tags/ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ollama</span></a> <a href="https://hachyderm.io/tags/opensource" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>opensource</span></a> <a href="https://hachyderm.io/tags/software" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>software</span></a> that makes it easy to run <a href="https://hachyderm.io/tags/Llama3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Llama3</span></a>, <a href="https://hachyderm.io/tags/DeepSeekR1" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DeepSeekR1</span></a>, <a href="https://hachyderm.io/tags/Gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemma3</span></a>, and other large language models (<a href="https://hachyderm.io/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a>) is out with its newest release. ollama wraps the llama.cpp back-end for running a variety of LLMs and offers convenient integration with other desktop software. <br>The new ollama 0.6.2 release adds support for <a href="https://hachyderm.io/tags/AMD" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AMD</span></a> <a href="https://hachyderm.io/tags/StrixHalo" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>StrixHalo</span></a>, a.k.a. 
<a href="https://hachyderm.io/tags/RyzenAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>RyzenAI</span></a> Max+ laptop / SFF desktop SoC.<br><a href="https://www.phoronix.com/news/ollama-0.6.2" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="">phoronix.com/news/ollama-0.6.2</span><span class="invisible"></span></a></p>
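For context, ollama serves its models over a local REST API (by default on port 11434). A sketch of building a request for its /api/chat endpoint, assuming a locally pulled model tagged gemma3, without actually sending it:

```python
import json
from urllib import request

def build_chat_request(model: str, prompt: str) -> request.Request:
    """Build (but don't send) a request for ollama's local /api/chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete JSON reply, not a stream
    }
    return request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("gemma3", "Why is the sky blue?")
# With an ollama server running locally, urllib.request.urlopen(req)
# would return the model's reply as JSON.
```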
💓 EV∆ ∆ΠΠ∆ 💓<p>Finally, the setting I've been looking for is disabled and I can move on with life. Oh, don't worry, it's not in the usual place, of course; every device must be special, as if that weren't already guaranteed by the proclivity in Linux towards "yes, there are breaking changes which we might not have documented. Submit a bug if you find one; we can't be expected to document releases effectively. What, did you expect this is a BSD phone? lol, no, it's Android."</p><p>Pausing to remind myself that since this is the modern era, finding the answer amongst the vastness of potential data sources and search-engine results was sped up enormously by asking my local LLM about specific Android version options which are part of the manufacturer's modified UX element tree: </p><p>&gt; "On a 'TCL NXTPAPER 40 5G' phone, running TCL-UI version 5.0.YGC0, included in the custom settings for 'simulated e-ink paper', related to Accessibility Features, where is the floating button for screen color inversion?"</p><p>And there you have it: one query. I should have defaulted to that from the start. </p><p>Also, Gemma3 is very fast on Ollama... 
now I need to test the quantized version "Fallen-Gemma3-4B-v1g-Q6_K" for a comparison.</p><p><a href="https://mastodon.bsd.cafe/tags/Ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ai</span></a> <a href="https://mastodon.bsd.cafe/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.bsd.cafe/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a> <a href="https://mastodon.bsd.cafe/tags/gemma" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>gemma</span></a> <a href="https://mastodon.bsd.cafe/tags/gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>gemma3</span></a> <a href="https://mastodon.bsd.cafe/tags/OSSgemini" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OSSgemini</span></a> <a href="https://mastodon.bsd.cafe/tags/gpu" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>gpu</span></a> <a href="https://mastodon.bsd.cafe/tags/android" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>android</span></a> <a href="https://mastodon.bsd.cafe/tags/linux" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>linux</span></a> <a href="https://mastodon.bsd.cafe/tags/mondaymorning" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>mondaymorning</span></a> <a href="https://mastodon.bsd.cafe/tags/nap" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>nap</span></a> <a href="https://mastodon.bsd.cafe/tags/naptime" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>naptime</span></a> <a href="https://mastodon.bsd.cafe/tags/pleaseNoMoreDebugging" class="mention hashtag" rel="nofollow noopener noreferrer" 
target="_blank">#<span>pleaseNoMoreDebugging</span></a></p>
ENTER.CO<p><a href="https://mastodon.social/tags/PorSiTeLoPerdiste" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>PorSiTeLoPerdiste</span></a> Google launches Gemma 3, the most powerful AI you can run on your own GPU <a href="https://www.enter.co/especiales/dev/ai/google-lanza-gemma-3-la-ia-mas-potente-que-puedes-usar-en-tu-propia-gpu/?utm_source=dlvr.it&amp;utm_medium=mastodon" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">enter.co/especiales/dev/ai/goo</span><span class="invisible">gle-lanza-gemma-3-la-ia-mas-potente-que-puedes-usar-en-tu-propia-gpu/?utm_source=dlvr.it&amp;utm_medium=mastodon</span></a> <a href="https://mastodon.social/tags/InteligenciaArtificial" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>InteligenciaArtificial</span></a> <a href="https://mastodon.social/tags/Gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemma3</span></a> <a href="https://mastodon.social/tags/Google" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Google</span></a></p>
Karlheinz Agsteiner<p>Not sure if you have noticed it: Google has released Gemma 3, a powerful model that is small enough to run on local computers.</p><p><a href="https://blog.google/technology/developers/gemma-3/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">blog.google/technology/develop</span><span class="invisible">ers/gemma-3/</span></a></p><p>I've done some experiments on my laptop (with a GeForce 3080 Ti), and I am very impressed. I tried to be happy with Llama3, with the Deepseek R1 distills on Llama, and with Mistral, but the models that would run on my computer were not in the same league as what you get remotely from ChatGPT or Claude or Deepseek.</p><p>Gemma changes this for me. So far I've had it write three smaller pieces of JavaScript and analyze a few texts, and it performed slowly, but flawlessly. So finally I can move to "use the local LLM for the 90% default case, and go for the big ones only if the local LLM fails".</p><p>This way<br>- I use far less CO2 for my LLM tasks<br>- I am in control of my data; nobody can collect my prompts and later sell my profile to ad customers<br>- I am sure the IP of my prompts stays with me<br>- I have the privacy to ask it whatever I want, and no server in the US or CN has that data.</p><p>Interested? If you have a powerful graphics card in your PC, it is totally simple:</p><p>1. install LMStudio from LMStudio.ai<br>2. in LMStudio, click Discover, and download the Gemma 3 27b Q4 model<br>3. 
Chat</p><p>If your graphics card is too small, you might head for the smaller 12b model, but I can't tell you how well it performs.</p><p><a href="https://hachyderm.io/tags/LMStudio" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LMStudio</span></a> <a href="https://hachyderm.io/tags/gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>gemma3</span></a> <a href="https://hachyderm.io/tags/gemma" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>gemma</span></a> <a href="https://hachyderm.io/tags/chatgpt" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>chatgpt</span></a> <a href="https://hachyderm.io/tags/llm" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>llm</span></a> <a href="https://hachyderm.io/tags/google" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>google</span></a></p>
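The "local LLM for the 90% default case" policy from the post above can be sketched as a small fallback wrapper. The local, remote, and judge functions are caller-supplied stand-ins, not tied to any particular LLM API:

```python
def ask(prompt, local, remote, judge):
    """Try the local model first; escalate only when the answer fails a check.

    local, remote, and judge are caller-supplied functions -- this is a
    policy sketch, not any particular LLM API.
    """
    answer = local(prompt)
    if judge(answer):
        return answer, "local"
    return remote(prompt), "remote"

# Toy stand-ins for the two models and the quality check:
local = lambda p: "" if "hard" in p else f"local answer to {p!r}"
remote = lambda p: f"remote answer to {p!r}"
judge = lambda a: bool(a.strip())

print(ask("easy question", local, remote, judge))  # handled locally
print(ask("hard question", local, remote, judge))  # escalates to remote
```

In practice, judge could be as simple as "the local answer is non-empty and parses", or a manual decision to re-ask a bigger model.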
Mastokarl 🇺🇦<p>Did a few coding experiments with Gemma 3 locally on LM Studio. So far it performs flawlessly (in terms of capability; on my lowly GeForce 3080 Ti it is fairly slow, something like 5 tokens per second). But I've got time, and it is mine, running locally; no billionaire's corporation sees my prompts.</p><p>For me (privacy nut) this is a big thing, not having to use ChatGPT for everything.</p><p><a href="https://mastodon.social/tags/gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>gemma3</span></a> <a href="https://mastodon.social/tags/LMStudio" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LMStudio</span></a> <a href="https://mastodon.social/tags/chatgpt" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>chatgpt</span></a></p>
Lucas Janin 🇨🇦🇫🇷<p>Testing Open WebUI with Gemma 3 on my Proxmox mini PC in an LXC. My hardware is limited (a 12th Gen Intel Core i5-12450H), so I’m only using the 1b (28 tokens/s) and 4b (11 tokens/s) versions for now.</p><p>Image description is functioning, but it is slow; it takes 30 seconds to generate this text with the 4b version and 16 GB allocated to the LXC.</p><p>Next step: trying this on my Mac M1.</p><p><a href="https://mastodon.social/tags/openwebui" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>openwebui</span></a> <a href="https://mastodon.social/tags/gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>gemma3</span></a> <a href="https://mastodon.social/tags/selfhosted" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>selfhosted</span></a> <a href="https://mastodon.social/tags/selfhost" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>selfhost</span></a> <a href="https://mastodon.social/tags/selfhosting" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>selfhosting</span></a> <a href="https://mastodon.social/tags/alttext" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>alttext</span></a> <a href="https://mastodon.social/tags/ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ollama</span></a> <a href="https://mastodon.social/tags/proxmox" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>proxmox</span></a> <a href="https://mastodon.social/tags/lxc" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>lxc</span></a></p>
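A note on the tokens/s figures quoted in posts like this one: ollama's non-streaming JSON responses report an eval_count (tokens generated) and an eval_duration (in nanoseconds) — assuming those field names, throughput is simply their ratio:

```python
def tokens_per_second(resp: dict) -> float:
    """Throughput from an ollama response: tokens generated divided by
    wall-clock generation time (eval_duration is reported in nanoseconds)."""
    seconds = resp["eval_duration"] / 1e9
    return resp["eval_count"] / seconds

# ~330 tokens in ~30 s is consistent with the ~11 tokens/s 4b figure above:
sample = {"eval_count": 330, "eval_duration": 30_000_000_000}
print(tokens_per_second(sample))  # 11.0
```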
Alessio Pomaro<p>🧠 I tried <a href="https://mastodon.uno/tags/Gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemma3</span></a> 27B, the new open model from <a href="https://mastodon.uno/tags/Google" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Google</span></a>, which seems to be one of the best in its category.</p><p>❓ How did it go? <a href="https://www.linkedin.com/posts/alessiopomaro_gemma3-google-prompt-activity-7306277042738683904-_K4y" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">linkedin.com/posts/alessiopoma</span><span class="invisible">ro_gemma3-google-prompt-activity-7306277042738683904-_K4y</span></a></p><p>___</p><p>✉️ If you want to stay up to date on these topics, subscribe to my newsletter: <a href="https://bit.ly/newsletter-alessiopomaro" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">bit.ly/newsletter-alessiopomar</span><span class="invisible">o</span></a></p><p><a href="https://mastodon.uno/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodon.uno/tags/GenAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenAI</span></a> <a href="https://mastodon.uno/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://mastodon.uno/tags/IntelligenzaArtificiale" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>IntelligenzaArtificiale</span></a> <a href="https://mastodon.uno/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a></p>
Mike Stone<p>So, I did it. I hooked up the <a href="https://fosstodon.org/tags/HomeAssistant" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HomeAssistant</span></a> Voice to my <a href="https://fosstodon.org/tags/Ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Ollama</span></a> instance. As <span class="h-card" translate="no"><a href="https://aus.social/@ianjs" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>ianjs</span></a></span> suggested, it's much better at recognizing the intent of my requests. As <span class="h-card" translate="no"><a href="https://fosstodon.org/@chris_hayes" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>chris_hayes</span></a></span> suggested, I'm using the new <a href="https://fosstodon.org/tags/Gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemma3</span></a> model. It now knows "How's the weather" and "What's the weather" are the same thing, and I get an answer for both. Responses are a little slower than without the LLM, but honestly it's pretty negligible. It's a very little bit slower again if I use local <a href="https://fosstodon.org/tags/Piper" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Piper</span></a> vs HA's cloud service.</p>
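The weather example above shows why an LLM beats exact intent matching: a lookup table treats every paraphrase as a distinct request, while the model can map them to one intent. A toy comparison (the intent names and prompt wording are made up for illustration):

```python
# Naive exact-match intent table: every paraphrase needs its own entry.
INTENTS = {"what's the weather": "weather_report"}

def exact_intent(utterance: str):
    return INTENTS.get(utterance.lower())

print(exact_intent("What's the weather"))  # weather_report
print(exact_intent("How's the weather"))   # None -- the phrasing differs

# An LLM-backed classifier instead receives a prompt like this and can
# return the same intent name for both phrasings:
def intent_prompt(utterance: str) -> str:
    return (
        "Map the user request to one intent from "
        "[weather_report, lights_on, none].\n"
        f"Request: {utterance}\nIntent:"
    )
```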
Mike Stone<p>Testing out the newly released <a href="https://fosstodon.org/tags/Gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemma3</span></a> model locally on <a href="https://fosstodon.org/tags/ollama" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ollama</span></a>. This is one of the more frustrating aspects of these LLMs. It must be said that LLMs are fine for what they are, and what they are is a glorified autocomplete. They have their uses (just like autocomplete does), but if you try to use them outside of their strengths, your results are going to be less than reliable.</p>
ThunDroid<p>Google's Gemma 3: A New Era in Open-Source AI Innovation<br><a href="https://mastodon.social/tags/Gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemma3</span></a> <a href="https://mastodon.social/tags/Google" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Google</span></a><br><a href="https://thundroid.co/googles-gemma-3-a-new-era-in-open-source-ai-innovation/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">thundroid.co/googles-gemma-3-a</span><span class="invisible">-new-era-in-open-source-ai-innovation/</span></a></p>
Dr. Fortyseven 🥃 █▓▒░<p>Okay, man, THIS is really amazing.</p><p><a href="https://defcon.social/tags/llm" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>llm</span></a> <a href="https://defcon.social/tags/gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>gemma3</span></a></p>
AiBay<p>🚀 Innovation doesn't stop: Google launches <a href="https://mastodon.social/tags/Gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemma3</span></a>, the most powerful single-GPU model ever built. Ready to push the limits? <a href="https://mastodon.social/tags/InnovationTech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>InnovationTech</span></a> <a href="https://mastodon.social/tags/socialmedia" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>socialmedia</span></a> <a href="https://mastodon.social/tags/artificialintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>artificialintelligence</span></a> <a href="https://mastodon.social/tags/technology" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>technology</span></a></p><p>🔗 <a href="https://aibay.it/notizie/google-gemma-3-rivoluzione-nellai-open-source" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">aibay.it/notizie/google-gemma-</span><span class="invisible">3-rivoluzione-nellai-open-source</span></a></p>
KINEWS24<p>Google presents Gemma 3: a multimodal AI powerhouse</p><p>Supports 140+ languages<br>Processes text, images, and videos<br>Runs on a single GPU</p><p><a href="https://mastodon.social/tags/ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ai</span></a> <a href="https://mastodon.social/tags/ki" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ki</span></a> <a href="https://mastodon.social/tags/artificialintelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>artificialintelligence</span></a> <a href="https://mastodon.social/tags/kuenstlicheintelligenz" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>kuenstlicheintelligenz</span></a> <a href="https://mastodon.social/tags/Google" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Google</span></a> <a href="https://mastodon.social/tags/Gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemma3</span></a> <a href="https://mastodon.social/tags/Multimodal" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Multimodal</span></a> <a href="https://mastodon.social/tags/googlegemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>googlegemma3</span></a> </p><p>Read and follow now!</p><p><a href="https://kinews24.de/gemma-3-das-multimodale-kraftpaket-von-google/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">kinews24.de/gemma-3-das-multim</span><span class="invisible">odale-kraftpaket-von-google/</span></a></p>
arun<p>A short review of testing Google's <a href="https://mastodon.social/tags/Gemma3" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Gemma3</span></a> locally. </p><p>Gemma 3 is an awesome model for running locally, and it pushes the possibilities to the next level with vision and multilingual support! <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/google" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>google</span></a></p><p><a href="https://deepgains.substack.com/p/googles-gemma3-testing-the-new-features" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">deepgains.substack.com/p/googl</span><span class="invisible">es-gemma3-testing-the-new-features</span></a></p>