det.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
Mastodon server of the Unterhaltungsfernsehen Ehrenfeld for decentralized discourse.

Server stats: 1.7K active users

#gpu

15 posts · 14 participants · 0 posts today
grob 🇺🇦:intersexprogresspride:<p>What do you use to do <a href="https://mstdn.social/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> general purpose computing in <a href="https://mstdn.social/tags/rustlang" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>rustlang</span></a> <a href="https://mstdn.social/tags/rust" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>rust</span></a>?</p><p>I tried <a href="https://mstdn.social/tags/wgpu" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>wgpu</span></a> and found it hard to do presumably simple things like executing the same shader module on different input data _without_ going through the whole (execution time consuming) process of setting up the encoder, input buffer, … from scratch again.</p><p>Can I repopulate the input buffer from CPU? Can I reuse the command buffer?</p><p>Any pointer appreciated. (am new to Rust &amp; GPU, doing it for learning)</p><p>:boost_requested:</p>
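On the wgpu question above: as far as I understand the wgpu API, an existing input buffer can be refilled from the CPU with `queue.write_buffer`, and the expensive objects (pipeline, buffers, bind group) only need to be created once. Command buffers, however, are consumed on submit and cannot be resubmitted, so you re-record a fresh encoder per dispatch, which is cheap once the pipeline and bind group are cached. A minimal Python sketch of that structure, using a hypothetical `FakeQueue` stand-in so it runs without a GPU (the method names mirror wgpu's):

```python
class FakeQueue:
    """Stand-in for wgpu's queue; records what a real queue would do."""
    def __init__(self):
        self.buffers = {}       # buffer name -> bytes currently uploaded
        self.submissions = 0
    def write_buffer(self, buffer, offset, data):
        # wgpu: queue.write_buffer(&buffer, 0, bytes) refills an EXISTING buffer
        self.buffers[buffer] = bytes(data)
    def submit(self, command_buffers):
        self.submissions += len(command_buffers)

class ComputeRunner:
    """Create pipeline/buffers/bind group once; reuse across batches."""
    def __init__(self, queue):
        self.queue = queue
        self.input_buffer = "input"   # created once (wgpu: device.create_buffer)
        # the compute pipeline and bind group would also be built once, here

    def run_batch(self, data: bytes):
        # 1) Refill the SAME input buffer from the CPU -- no reallocation.
        self.queue.write_buffer(self.input_buffer, 0, data)
        # 2) Command buffers are one-shot in wgpu: re-record, then submit.
        command_buffer = ("dispatch", len(data))  # stands in for encoder.finish()
        self.queue.submit([command_buffer])

queue = FakeQueue()
runner = ComputeRunner(queue)
for batch in (b"\x01\x02", b"\x03\x04"):
    runner.run_batch(batch)
print(queue.submissions)  # -> 2 submits, one input buffer reused throughout
```

The setup cost the poster noticed lives almost entirely in pipeline/bind-group creation, not in recording the encoder, so hoisting those out of the loop is usually the fix.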
ℒӱḏɩę :blahaj: 💾<p>Someone needs to remind Lydie that trying to <a href="https://tech.lgbt/tags/overclock" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>overclock</span></a> her <a href="https://tech.lgbt/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> while doing a 1.7TB main drive <a href="https://tech.lgbt/tags/backup" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>backup</span></a> is a bad idea.</p><p>The crash was weird - first the <a href="https://tech.lgbt/tags/2080TI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>2080TI</span></a>, the one I was overclocking, crashed. Then the <a href="https://tech.lgbt/tags/7900XTX" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>7900XTX</span></a> hung - I hit Alt+Win+Shift+B to restart the GPU and <a href="https://tech.lgbt/tags/Windows" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Windows</span></a> made the beep and the screen went black.</p><p>Meanwhile the music I was playing in the background never had a hiccup!!</p><p>I had to hard power off. Which of course broke the backup about 40% the way in 🤦‍♀️ (I don't need anti-windows hate, move on)</p>
Paul Buetow<p>How do GPUs work? Usually, people only know about CPUs... <a href="https://blog.codingconfessions.com/p/gpu-computing" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">blog.codingconfessions.com/p/g</span><span class="invisible">pu-computing</span></a> ... I got the gist, but</p><p><a href="https://fosstodon.org/tags/gpu" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>gpu</span></a> <a href="https://fosstodon.org/tags/cpu" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>cpu</span></a></p>
Benjamin Carr, Ph.D. 👨🏻‍💻🧬<p><a href="https://hachyderm.io/tags/AMD" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AMD</span></a> tries to catch <a href="https://hachyderm.io/tags/CUDA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CUDA</span></a> with performance-boosting <a href="https://hachyderm.io/tags/ROCm7" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ROCm7</span></a> software<br>House of Zen promises 3.5x improvement in inference and 3x uplift in training perf over last-gen software<br><a href="https://hachyderm.io/tags/ROCm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ROCm</span></a> is a suite of software libraries and development tools, including <a href="https://hachyderm.io/tags/HIP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HIP</span></a> frameworks, that provides developers a low-level programming interface for running high-performance computing (<a href="https://hachyderm.io/tags/HPC" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HPC</span></a>) and <a href="https://hachyderm.io/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> workloads on <a href="https://hachyderm.io/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a>, but for AMD GPUs rather than Nvidia. <br><a href="https://www.theregister.com/2025/09/17/amd_rocm_7_chases_nvidia_cuda/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">theregister.com/2025/09/17/amd</span><span class="invisible">_rocm_7_chases_nvidia_cuda/</span></a></p>
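For readers wondering what ROCm support looks like from user code: on a ROCm build of PyTorch, AMD GPUs are exposed through the familiar `torch.cuda` namespace, and `torch.version.hip` carries the HIP version string (it is `None` on CUDA builds). A small guarded probe (the dict layout is my own; it degrades gracefully when PyTorch is absent):

```python
def rocm_info():
    """Report whether an installed PyTorch build targets ROCm/HIP."""
    try:
        import torch
    except ImportError:
        return {"torch": False, "hip": None, "gpu": None}
    hip = getattr(torch.version, "hip", None)   # version string on ROCm builds
    gpu = torch.cuda.get_device_name(0) if torch.cuda.is_available() else None
    return {"torch": True, "hip": hip, "gpu": gpu}

print(rocm_info())
```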
Crypto News<p>Nvidia to Invest $5B in Intel and Develop Data Centers, PCs - Nvidia (NVDA), the world's largest public company by market cap, said it will invest $5 b... - <a href="https://www.coindesk.com/business/2025/09/18/nvidia-to-invest-usd5b-in-intel-and-develop-data-centers-pcs" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">coindesk.com/business/2025/09/</span><span class="invisible">18/nvidia-to-invest-usd5b-in-intel-and-develop-data-centers-pcs</span></a> <a href="https://schleuss.online/tags/datacenters" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>datacenters</span></a> <a href="https://schleuss.online/tags/finance" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>finance</span></a> <a href="https://schleuss.online/tags/nvidia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>nvidia</span></a> <a href="https://schleuss.online/tags/intel" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>intel</span></a> <a href="https://schleuss.online/tags/news" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>news</span></a> <a href="https://schleuss.online/tags/cpu" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>cpu</span></a> <a href="https://schleuss.online/tags/gpu" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>gpu</span></a> <a href="https://schleuss.online/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a></p>
Benjamin Carr, Ph.D. 👨🏻‍💻🧬<p><a href="https://hachyderm.io/tags/AMD" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AMD</span></a> <a href="https://hachyderm.io/tags/ROCm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ROCm</span></a> 7.0 Officially Released With Many Significant Improvements<br>Key highlights of <a href="https://hachyderm.io/tags/ROCm7" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ROCm7</span></a>.0 include:<br>- AMD Instinct <a href="https://hachyderm.io/tags/MI350X" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MI350X</span></a> and Instinct <a href="https://hachyderm.io/tags/MI355X" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MI355X</span></a> officially supported<br>- <a href="https://hachyderm.io/tags/Ubuntu" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Ubuntu</span></a> 24.04.3 LTS and <a href="https://hachyderm.io/tags/RockyLinux" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>RockyLinux</span></a> 9 with Linux 5.14 officially supported<br>- ROCm 7.0 supports <a href="https://hachyderm.io/tags/KVM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>KVM</span></a> Passthrough for MI350X and MI355X GPUs<br>- ROCm 7.0 supports <a href="https://hachyderm.io/tags/PyTorch" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PyTorch</span></a> 2.7, <a href="https://hachyderm.io/tags/TensorFlow" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TensorFlow</span></a> 2.19.1<br>- Official support for Llama.cpp.<br>- AMD <a href="https://hachyderm.io/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> kernel driver code is now distributed separately from ROCm stack. 
<br><a href="https://www.phoronix.com/news/AMD-ROCm-7.0-Released" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">phoronix.com/news/AMD-ROCm-7.0</span><span class="invisible">-Released</span></a></p>
The News Lens<p><b>Trump brings US corporate investment: the UK announces a new AI Growth Zone and launches a "UK Stargate" project</b></p><blockquote>Central News Agency 2025-09-18 09:48:00 CST</blockquote><span>The UK has launched an AI Growth Zone, attracting £30 billion in investment and creating 5,000 jobs. OpenAI and Nvidia are taking part in the "Stargate UK" project, strengthening the UK's compute sovereignty, with Blackstone co-investing in AI infrastructure. The zone focuses on low-carbon power supply, and a nuclear energy cooperation agreement signed with the US secures the energy supply.<br></span><a href="https://www.thenewslens.com/article/258790" rel="nofollow noopener" target="_blank">https://www.thenewslens.com/article/258790</a><span><br></span><a href="https://social.mikala.one/tags/全球核融合政策峰會" rel="nofollow noopener" target="_blank">#全球核融合政策峰會</a> <a href="https://social.mikala.one/tags/英國" rel="nofollow noopener" target="_blank">#英國</a> <a href="https://social.mikala.one/tags/大西洋先進核能夥伴關係" rel="nofollow noopener" target="_blank">#大西洋先進核能夥伴關係</a> <a href="https://social.mikala.one/tags/歐洲" rel="nofollow noopener" target="_blank">#歐洲</a> <a href="https://social.mikala.one/tags/小型模組化反應爐" rel="nofollow noopener" target="_blank">#小型模組化反應爐</a> <a href="https://social.mikala.one/tags/AI機遇行動計畫" rel="nofollow noopener" target="_blank">#AI機遇行動計畫</a> <a href="https://social.mikala.one/tags/AI基礎建設" rel="nofollow noopener" target="_blank">#AI基礎建設</a> <a href="https://social.mikala.one/tags/SMR" rel="nofollow noopener" target="_blank">#SMR</a> <a href="https://social.mikala.one/tags/能源合作協議" rel="nofollow noopener" target="_blank">#能源合作協議</a> <a href="https://social.mikala.one/tags/運算主權" rel="nofollow noopener" target="_blank">#運算主權</a> <a href="https://social.mikala.one/tags/核融合" rel="nofollow noopener" target="_blank">#核融合</a> <a href="https://social.mikala.one/tags/英格蘭" rel="nofollow noopener" target="_blank">#英格蘭</a> <a href="https://social.mikala.one/tags/川普" rel="nofollow noopener" target="_blank">#川普</a> <a href="https://social.mikala.one/tags/Blackstone" rel="nofollow noopener" target="_blank">#Blackstone</a> <a href="https://social.mikala.one/tags/人工智慧" rel="nofollow noopener" target="_blank">#人工智慧</a> <a href="https://social.mikala.one/tags/布萊斯鎮" rel="nofollow noopener" target="_blank">#布萊斯鎮</a> <a href="https://social.mikala.one/tags/英國星際之門" rel="nofollow noopener" target="_blank">#英國星際之門</a> <a href="https://social.mikala.one/tags/OpenAI" rel="nofollow noopener" target="_blank">#OpenAI</a> <a href="https://social.mikala.one/tags/輝達" rel="nofollow noopener" target="_blank">#輝達</a> <a href="https://social.mikala.one/tags/資料運算中心" rel="nofollow noopener" target="_blank">#資料運算中心</a> <a href="https://social.mikala.one/tags/GPU" rel="nofollow noopener" target="_blank">#GPU</a> <a href="https://social.mikala.one/tags/Nscale" rel="nofollow noopener" target="_blank">#Nscale</a> <a href="https://social.mikala.one/tags/Stargate" rel="nofollow noopener" target="_blank">#Stargate</a> UK <a href="https://social.mikala.one/tags/美國" rel="nofollow noopener" target="_blank">#美國</a> <a href="https://social.mikala.one/tags/鈷園園區" rel="nofollow noopener" target="_blank">#鈷園園區</a> <a href="https://social.mikala.one/tags/AI發展特區" rel="nofollow noopener" target="_blank">#AI發展特區</a><p></p>
Hacker News<p>Gluon: a GPU programming language based on the same compiler stack as Triton</p><p><a href="https://github.com/triton-lang/triton/blob/main/python/tutorials/gluon/01-intro.py" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">github.com/triton-lang/triton/</span><span class="invisible">blob/main/python/tutorials/gluon/01-intro.py</span></a></p><p><a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/Gluon" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Gluon</span></a> <a href="https://mastodon.social/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> <a href="https://mastodon.social/tags/Programming" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Programming</span></a> <a href="https://mastodon.social/tags/Triton" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Triton</span></a> <a href="https://mastodon.social/tags/Compiler" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Compiler</span></a> <a href="https://mastodon.social/tags/Technology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Technology</span></a> <a href="https://mastodon.social/tags/OpenSource" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenSource</span></a></p>
WinFuture.de<p>Performance problems with the <a href="https://mastodon.social/tags/Pixel10" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Pixel10</span></a>? <a href="https://mastodon.social/tags/Google" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Google</span></a> is apparently using an outdated <a href="https://mastodon.social/tags/Grafiktreiber" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Grafiktreiber</span></a> (graphics driver) that massively throttles the <a href="https://mastodon.social/tags/PowerVR" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>PowerVR</span></a>-<a href="https://mastodon.social/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> in the <a href="https://mastodon.social/tags/TensorG5" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TensorG5</span></a>: instead of 1 GHz, the GPU runs at only 396 MHz. <a href="https://winfuture.de/news,153651.html?utm_source=Mastodon&amp;utm_medium=ManualStatus&amp;utm_campaign=SocialMedia" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">winfuture.de/news,153651.html?</span><span class="invisible">utm_source=Mastodon&amp;utm_medium=ManualStatus&amp;utm_campaign=SocialMedia</span></a></p>
Linuxiac<p>AMD ends AMDVLK development, unifying its Linux Vulkan driver strategy and backing RADV as the official open-source driver for Radeon GPUs.<br><a href="https://linuxiac.com/amd-ends-amdvlk-development-shifts-focus-to-radv-vulkan-driver/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">linuxiac.com/amd-ends-amdvlk-d</span><span class="invisible">evelopment-shifts-focus-to-radv-vulkan-driver/</span></a></p><p><a href="https://mastodon.social/tags/amd" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>amd</span></a> <a href="https://mastodon.social/tags/linux" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>linux</span></a> <a href="https://mastodon.social/tags/OpenSource" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenSource</span></a> <a href="https://mastodon.social/tags/gpu" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>gpu</span></a></p>
CryptGoat<p><a href="https://fedifreu.de/tags/AMDVLK" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AMDVLK</span></a> is dead. Yay, less confusion for <a href="https://fedifreu.de/tags/AMD" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AMD</span></a> <a href="https://fedifreu.de/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> users on <a href="https://fedifreu.de/tags/Linux" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Linux</span></a>! :tux: <br><a href="https://www.phoronix.com/news/AMDVLK-Discontinued" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">phoronix.com/news/AMDVLK-Disco</span><span class="invisible">ntinued</span></a></p><p><a href="https://fedifreu.de/tags/AMDVLK" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AMDVLK</span></a> <a href="https://fedifreu.de/tags/MesaRADV" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MesaRADV</span></a> <a href="https://fedifreu.de/tags/Steam" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Steam</span></a> <a href="https://fedifreu.de/tags/LinuxGaming" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LinuxGaming</span></a> <a href="https://fedifreu.de/tags/Gaming" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Gaming</span></a></p>
Solarbird :flag_cascadia:<p>GamersNexus's documentary on GPU smuggling into China, showing literally every step of the supply chain with people on camera willing to talk, is BACK ONLINE after Bloomberg's attempt to silence them via a bogus copyright claim.</p><p>It's here! Watch it, because it's sat at 0 views for two weeks thanks to Bloomberg News. Even if you just play it in the background, let's get this thing showing up in YouTube's algorithm again:</p><p><a href="https://www.youtube.com/watch?v=1H3xQaf7BFI" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="">youtube.com/watch?v=1H3xQaf7BFI</span><span class="invisible"></span></a></p><p>Spread the word! Tell everybody, make it go!</p><p><a href="https://mastodon.murkworks.net/tags/GamersNexus" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GamersNexus</span></a> <a href="https://mastodon.murkworks.net/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> <a href="https://mastodon.murkworks.net/tags/smuggling" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>smuggling</span></a> <a href="https://mastodon.murkworks.net/tags/china" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>china</span></a> <a href="https://mastodon.murkworks.net/tags/tech" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tech</span></a> <a href="https://mastodon.murkworks.net/tags/technology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>technology</span></a> <a href="https://mastodon.murkworks.net/tags/news" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>news</span></a> <a href="https://mastodon.murkworks.net/tags/TechnologyNews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TechnologyNews</span></a> <a href="https://mastodon.murkworks.net/tags/nvidia" class="mention hashtag" 
rel="nofollow noopener" target="_blank">#<span>nvidia</span></a></p>
Klaus Frank<p>Oh, the <a href="https://chaos.social/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> black market video from Gamers Nexus that <a href="https://chaos.social/tags/Bloomberg" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Bloomberg</span></a> tried to suppress and get nuked off the face of the planet is back online again.</p><p>So watch it, just to piss off Bloomberg.<br><a href="https://www.youtube.com/watch?v=1H3xQaf7BFI" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="">youtube.com/watch?v=1H3xQaf7BFI</span><span class="invisible"></span></a></p>
Christos Argyropoulos MD, PhD<p>The <a href="https://mstdn.science/tags/Perl" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Perl</span></a> module <a href="https://metacpan.org/pod/Bit::Set::DB" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">metacpan.org/pod/Bit::Set::DB</span><span class="invisible"></span></a> was an interesting exercise in writing Perl modules that can offload computations to the <a href="https://mstdn.science/tags/gpu" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>gpu</span></a> in a device portable manner using <a href="https://mstdn.science/tags/OpenMP" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpenMP</span></a> <span class="h-card" translate="no"><a href="https://mast.hpc.social/@openmp_arb" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>openmp_arb</span></a></span></p>
Hacker News<p>AMD's RDNA4 GPU Architecture at Hot Chips 2025</p><p><a href="https://chipsandcheese.com/p/amds-rdna4-gpu-architecture-at-hot" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">chipsandcheese.com/p/amds-rdna</span><span class="invisible">4-gpu-architecture-at-hot</span></a></p><p><a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/AMD" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AMD</span></a> <a href="https://mastodon.social/tags/RDNA4" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>RDNA4</span></a> <a href="https://mastodon.social/tags/HotChips" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HotChips</span></a> <a href="https://mastodon.social/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> <a href="https://mastodon.social/tags/Architecture" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Architecture</span></a> <a href="https://mastodon.social/tags/Technology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Technology</span></a></p>
Fell<p>Linux Mint is often shilled as a lightweight distro that runs great on older hardware. Well, it turns out the Cinnamon desktop actually requires a certain amount of GPU performance to render smoothly.</p><p>On my old laptop with Intel HD Graphics 4000, the entire desktop runs at only ~30 FPS. The Xfce/MATE editions run at 60 with occasional drops.</p><p>I didn't expect that from Cinnamon. Even KDE Plasma ran better.</p><p><a href="https://ma.fellr.net/tags/Linux" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Linux</span></a> <a href="https://ma.fellr.net/tags/Mint" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Mint</span></a> <a href="https://ma.fellr.net/tags/Cinnamon" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Cinnamon</span></a> <a href="https://ma.fellr.net/tags/Intel" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Intel</span></a> <a href="https://ma.fellr.net/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> <a href="https://ma.fellr.net/tags/Desktop" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Desktop</span></a> <a href="https://ma.fellr.net/tags/Xfce" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Xfce</span></a> <a href="https://ma.fellr.net/tags/MATE" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MATE</span></a> <a href="https://ma.fellr.net/tags/KDE" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>KDE</span></a> <a href="https://ma.fellr.net/tags/Plasma" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Plasma</span></a> <a href="https://ma.fellr.net/tags/DesktopLinux" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DesktopLinux</span></a></p>
Schenkl | 🏳️‍🌈🦄<p>Finally figured out how, under my main OS <a href="https://chaos.social/tags/Linux" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Linux</span></a>, I can also run <a href="https://chaos.social/tags/FoldingAtHome" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FoldingAtHome</span></a> without the <a href="https://chaos.social/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> getting so loud that I can't stand the noise!</p><p>The <a href="https://chaos.social/tags/AMD" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AMD</span></a> card has its drivers in the kernel, so there's no fancy GUI tool for setting the frequencies.<br>But this <a href="https://chaos.social/tags/LACT" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LACT</span></a> tool can!<br>After playing around a bit, I get a throttle of about 50% and the card stays nice and quiet.</p><p>So let's fold!</p>
KINEWS24<p>🚀 Nvidia Rubin CPX: 1M-token AI from 2026?</p><p>▶️ Read context separately<br>▶️ Use a 1M-token window<br>▶️ Available late 2026</p><p><a href="https://mastodon.social/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://mastodon.social/tags/ki" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ki</span></a> <a href="https://mastodon.social/tags/artificialintelligence" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>artificialintelligence</span></a> <a href="https://mastodon.social/tags/nvidia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>nvidia</span></a> <a href="https://mastodon.social/tags/rubincpx" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>rubincpx</span></a> <a href="https://mastodon.social/tags/gpu" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>gpu</span></a> <a href="https://mastodon.social/tags/tech2025" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tech2025</span></a> <a href="https://mastodon.social/tags/longcontext" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>longcontext</span></a></p><p>⚡ SAVE IT! SHARE IT! READ IT! 🚀</p><p><a href="https://kinews24.de/nvidia-rubin-cpx-long-context-inferenz-2026/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">kinews24.de/nvidia-rubin-cpx-l</span><span class="invisible">ong-context-inferenz-2026/</span></a></p>
apfeltalk :verified:<p>A19 vs. A19 Pro: the chip differences in the iPhone 17 explained<br>With the introduction of the iPhone 17, Apple has presented three different chip variants for the first time. Wondering what the differences between the A19 and the A19 Pro are? We sum up the facts compactly.</p><p>Differences between the A19 and A19 Pro<br><a href="https://www.apfeltalk.de/magazin/news/a19-vs-a19-pro-die-chip-unterschiede-im-iphone-17-erklaert/" rel="nofollow noopener" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">apfeltalk.de/magazin/news/a19-</span><span class="invisible">vs-a19-pro-die-chip-unterschiede-im-iphone-17-erklaert/</span></a><br><a href="https://creators.social/tags/iPhone" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>iPhone</span></a> <a href="https://creators.social/tags/News" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>News</span></a> <a href="https://creators.social/tags/A19" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>A19</span></a> <a href="https://creators.social/tags/A19Pro" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>A19Pro</span></a> <a href="https://creators.social/tags/Apple" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Apple</span></a> <a href="https://creators.social/tags/Chipvergleich" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Chipvergleich</span></a> <a href="https://creators.social/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> <a href="https://creators.social/tags/IPhone17" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>IPhone17</span></a> <a href="https://creators.social/tags/NeuralEngine" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NeuralEngine</span></a></p>
aaron ~# :blinkingcursor:<p><strong>Making the most out of a small LLM</strong></p><p>Yesterday I finally built my own <a href="https://infosec.exchange/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://infosec.exchange/tags/server" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>server</span></a>. I had a spare <a href="https://infosec.exchange/tags/Nvidia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Nvidia</span></a> RTX 2070 with 8GB of <a href="https://infosec.exchange/tags/VRAM" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>VRAM</span></a> lying around and had wanted to do this for a long time.</p><p>The problem is that most <a href="https://infosec.exchange/tags/LLMs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LLMs</span></a> need a lot of VRAM and I don't want to buy another <a href="https://infosec.exchange/tags/GPU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GPU</span></a> just to host my own AI. Then I came across <a href="https://infosec.exchange/tags/gemma3" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>gemma3</span></a> and <a href="https://infosec.exchange/tags/qwen3" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>qwen3</span></a>. Both of these are amazing <a href="https://infosec.exchange/tags/quantized" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>quantized</span></a> models with stunning reasoning given how few resources they need. 
</p><p>I chose <code>huihui_ai/qwen3-abliterated:14b</code> since it supports <a href="https://infosec.exchange/tags/deepthinking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>deepthinking</span></a>, <a href="https://infosec.exchange/tags/toolcalling" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>toolcalling</span></a> and is pretty unrestricted. After some testing I noticed that the 8b model actually performs better than the 14b variant, and runs drastically faster. I can't make out any quality loss there, to be honest. The 14b model very often sneaked Chinese characters into its responses; the 8b model doesn't.</p><p>Now I've got a very fast model with amazing reasoning (even in German) and tool-calling support. The only thing left to improve is knowledge. <a href="https://infosec.exchange/tags/Firecrawl" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Firecrawl</span></a> is a great tool for <a href="https://infosec.exchange/tags/webscraping" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>webscraping</span></a>, and as soon as I implemented web searching, the setup was complete. At least I thought it was. 
</p><p>I want to make the most out of this LLM, so my next step is to implement a basic <a href="https://infosec.exchange/tags/webserver" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>webserver</span></a> that exposes the same <a href="https://infosec.exchange/tags/API" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>API</span></a> <a href="https://infosec.exchange/tags/endpoints" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>endpoints</span></a> as <a href="https://infosec.exchange/tags/ollama" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ollama</span></a>, so that everywhere ollama is supported, I can point it to my Python script instead. This way the model feels far more capable than it actually is: I can use these advanced features everywhere without being bound to its built-in knowledge.</p><p>To improve this setup even more I will likely switch to a <a href="https://infosec.exchange/tags/mixture_of_experts" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>mixture_of_experts</span></a> architecture soon. 
This project is a lot of fun and I can't wait to integrate it into my homelab.</p><p><a href="https://infosec.exchange/tags/homelab" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>homelab</span></a> <a href="https://infosec.exchange/tags/selfhosting" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>selfhosting</span></a> <a href="https://infosec.exchange/tags/privacy" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>privacy</span></a> <a href="https://infosec.exchange/tags/ai" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ai</span></a> <a href="https://infosec.exchange/tags/llm" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>llm</span></a> <a href="https://infosec.exchange/tags/largelanguagemodels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>largelanguagemodels</span></a> <a href="https://infosec.exchange/tags/coding" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>coding</span></a> <a href="https://infosec.exchange/tags/developement" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>developement</span></a></p>
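The ollama-compatible shim described in the post above can be sketched with nothing but the Python standard library. This is a toy under stated assumptions: the port (11435), the echo response, and the single `/api/generate` route are placeholders for the real logic that would inject search results and forward the prompt to the local model.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class OllamaShimHandler(BaseHTTPRequestHandler):
    """Speaks just enough of ollama's REST shape to stand in for it."""
    def do_POST(self):
        if self.path != "/api/generate":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        req = json.loads(self.rfile.read(length))
        # A real shim would enrich req["prompt"] (web search, RAG, ...)
        # and forward it to the local model; this stub just echoes.
        body = json.dumps({
            "model": req.get("model", "qwen3:8b"),
            "response": "echo: " + req.get("prompt", ""),
            "done": True,
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 11435), OllamaShimHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any ollama client pointed at 127.0.0.1:11435 now talks to the shim.
request = urllib.request.Request(
    "http://127.0.0.1:11435/api/generate",
    data=json.dumps({"model": "qwen3:8b", "prompt": "hi"}).encode(),
    headers={"Content-Type": "application/json"},
)
reply = json.loads(urllib.request.urlopen(request).read())
print(reply["response"])  # -> echo: hi
server.shutdown()
```

Because clients only see the HTTP surface, the forwarding logic behind it can grow (search, tools, model routing) without any client-side changes, which is exactly the effect the post describes.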