Toot

Written by will_a113@lemmy.ml on 2024-12-13 at 13:23

24GB of VRAM will easily let you run medium-sized models with a good context length, and if you're a gamer, the XTX is a beast for raster performance and offers good price/performance.
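
To put a rough number on "medium-sized" (a back-of-envelope estimate of my own, not from the post): weight memory is roughly parameter count times bytes per weight, plus some headroom for the KV cache and buffers.

```python
# Back-of-envelope VRAM estimate (a rule of thumb, not from the post):
# weights take roughly params * bytes-per-weight; 4-bit quantization is
# ~0.5 bytes/weight, and ~20% overhead covers KV cache and buffers.
def vram_gb(params_billion: float, bytes_per_weight: float = 0.5,
            overhead: float = 1.2) -> float:
    return params_billion * bytes_per_weight * overhead

for size in (7, 13, 32, 70):
    print(f"{size}B model at 4-bit: ~{vram_gb(size):.1f} GB")
# A ~32B model lands around 19 GB, inside a 24GB card with room for
# context; 70B does not fit on a single 24GB card at this quantization.
```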

If you want to get serious about LLMs, also keep in mind that most models and tools scale well across multiple GPUs, so you could buy one card today (even a lesser one with "only" 16 or 12GB) and add another later. Just make sure your motherboard can fit two, and that your CPU, RAM, and power supply can handle it.
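
As one illustration of that scaling, here's a minimal sketch assuming the Hugging Face transformers + accelerate stack, where device_map="auto" shards a model across however many GPUs are visible (the model ID is a hypothetical placeholder, not a recommendation):

```python
# Minimal multi-GPU sketch (assumes transformers + accelerate installed).
# device_map="auto" shards the layers across every visible GPU, so the
# same script runs on one card today and on two cards after an upgrade.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-13b-model"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # split weights across available GPUs
    torch_dtype=torch.float16,  # half-precision weights to save VRAM
)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

On AMD cards like the XTX, the ROCm build of PyTorch exposes GPUs through the same torch.cuda interface, so the same script applies unchanged.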

Here’s a good example from a guy who glued two much more modest cards together with decent results: adamniederer.com/blog/rocm-cross-arch.html
