There are many issues with LLMs, but I'm fascinated by the different setups people are using to run the 671B-parameter DeepSeek model locally on consumer hardware.
https://digialps.com/deepseek-v3-on-m4-mac-blazing-fast-inference-on-apple-silicon/