I know a lot of us have been using LLMs more frequently, and that same lot has an emotional quandary about justifying their use.
That comes not only from their gigantic environmental footprint, but also from the personal data we feed them.
I have recently been testing out GPT4ALL with some of the local models, to see whether it is worth anything.
=> https://flathub.org/apps/io.gpt4all.gpt4all
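If you want to try it yourself and already have Flatpak set up, the standard Flathub install should be all you need:

```
flatpak install flathub io.gpt4all.gpt4all
```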
I have to say, I was actually surprised by the utility and by how low the impact of running the pre-trained models is.
I've tested the leanest models (4GB of RAM), as I feel this needs to be available to "everyone" to be useful.
Qwen2 is in my opinion the best: not only is it almost twice as fast as Phi-3, but it provides "good enough" answers, where Phi-3 tends to blab.
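If anyone wants to reproduce the speed comparison, here is a rough sketch of how I'd time both models with the gpt4all Python bindings; the model filenames are assumptions, so check them against your own model list.

```
# Rough timing sketch using the gpt4all Python bindings (pip install gpt4all).
# The model filenames below are assumptions; use the names shown in your GPT4ALL model list.
import time
from gpt4all import GPT4All

PROMPT = "Suggest three online resources for learning the Gemini protocol."

for name in ("qwen2-1_5b-instruct-q4_0.gguf",      # assumed filename
             "Phi-3-mini-4k-instruct.Q4_0.gguf"):   # assumed filename
    model = GPT4All(name, allow_download=False)     # only use models already on disk
    start = time.time()
    with model.chat_session():
        reply = model.generate(PROMPT, max_tokens=200)
    print(f"{name}: {time.time() - start:.1f}s")
    print(reply)
```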
Neither is as feature-complete as GPT, but there will always be a tradeoff for doing it lighter and more locally.
A lot of you mentioned that you use it because web searches have become useless; for that purpose, both models give responses that can help you find online resources.
I have both models installed, so if there are any tests you want me to perform, or even another model you want me to try out, let me know. :D
Be aware that I have 16GB of RAM, so 8GB is my max ceiling ("RAM" here means necessary free RAM).