Oh boy! This is a bad, no-good week for #OpenAI
#MistralAI #Large2 is here 👏
https://ollama.com/library/mistral-large
My 64GB RAM (DDR5) and 24GB VRAM (CUDA) should be enough to run this beautiful beast 🤔
Let me try…
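For anyone following along, a minimal sketch of the "let me try" step through the Ollama Python client (an assumption on my part: the `mistral-large` tag from the library link above and a local Ollama server already running):

```python
# Minimal sketch using the ollama Python client (pip install ollama).
# Assumes a local Ollama server and the `mistral-large` tag from the library page.
import ollama

ollama.pull("mistral-large")  # downloads tens of GB of weights on first run

response = ollama.chat(
    model="mistral-large",
    messages=[{"role": "user", "content": "Przedstaw się po polsku."}],
)
print(response["message"]["content"])
```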
#OpenAI #MistralAI #Large2
It runs, but I'd need roughly 2-3x that much VRAM for usable token generation speed 💸
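Rough back-of-the-envelope math on why (assumptions, not measurements: ~123B parameters for Mistral Large 2 and ~4.5 bits per weight for a typical 4-bit quantization):

```python
# Back-of-the-envelope memory estimate for a quantized LLM.
# Assumed figures: ~123B parameters, ~4.5 bits/weight (typical 4-bit quant).
params = 123e9
bits_per_weight = 4.5
weights_gb = params * bits_per_weight / 8 / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB")    # ~69 GB
print(f"fits in 24 GB VRAM: {weights_gb < 24}")  # False: most layers spill to system RAM
```

With only a fraction of the layers on the GPU and the rest running from system RAM, slow token generation is exactly what you'd expect.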
Its English is impeccable; its Polish is acceptable at best
Still better than some humans tbh 😝
#MistralAI #Large2