About to hop on a plane to head to #PositConf2024 but want a local coding assistant for in-flight hacking? With #ollama and #Shiny for #Python you can bring a coding assistant with you!
ollama run llama3.1:8b
to get a decent, light ~5GB model, then set
model="llama3.1:8b"
in app.py and launch with
shiny run app.py
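Both the `ollama run` CLI and a Shiny chat app ultimately talk to the local Ollama server listening on port 11434. A minimal stdlib-only sketch of one round trip, which you could call from an app.py handler: it targets Ollama's documented /api/chat endpoint, and the `build_payload`/`chat_once` helper names are my own, not from the toot.

```python
import json
import urllib.request

# Default local Ollama endpoint (assumes `ollama serve` is running).
OLLAMA_URL = "http://localhost:11434/api/chat"


def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one JSON reply instead of chunked lines
    }


def chat_once(model: str, prompt: str) -> str:
    """Send one prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Non-streaming replies carry the text under message.content.
    return body["message"]["content"]


if __name__ == "__main__":
    # Requires the model to have been pulled first: ollama run llama3.1:8b
    print(chat_once("llama3.1:8b", "Write a haiku about offline coding."))
```

In a real app.py you would swap the `print` for your chat UI's message handler; setting `"stream": True` instead makes the server return newline-delimited JSON chunks you can append token by token.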
=> More information about this toot | More toots from grrrck@fosstodon.org
I've also tried deepseek-coder-v2, which is an 8GB download and pretty darn good: https://ollama.com/library/deepseek-coder-v2
=> More information about this toot | More toots from grrrck@fosstodon.org
Quick screen recording of a local chat interface in #Positron with #ollama and #Shiny for #Python
=> More information about this toot | More toots from grrrck@fosstodon.org
@grrrck how's this impact your battery life? How much RAM are you working with?
=> More information about this toot | More toots from thomasw@toot.bldrweb.org
@thomasw Amazingly, it doesn't seem to be a big battery draw at all. I'm on an M1 with 32 GB of memory (my work laptop), but it's definitely not using all of the memory either.
=> More information about this toot | More toots from grrrck@fosstodon.org
@grrrck I was testing out the Alpaca[1] desktop GUI for Ollama a few days ago. On my machine it was a bit slow, but it was still pretty amazing to watch these words appear on my screen "out of thin air," fully severed from the internet.
[1] https://jeffser.com/alpaca/
=> More information about this toot | More toots from thomasw@toot.bldrweb.org