"ramalama serve deepseek-r1:14b" if you want a fancy UI at http://127.0.0.1:8080/ FWIW
=> More information about this toot | View the thread
$ ramalama run deepseek-r1:14b
Hello good morning
Hello! Good morning to you too! How can I assist you today?
=> More information about this toot | View the thread
Excited to announce linenoise.cpp, a modern C++ fork of the classic linenoise library!
Why linenoise.cpp? Built in C++ for better safety and maintainability. Fully compatible with C++17 compilers.
Feel free to send patches: https://github.com/ericcurtin/linenoise.cpp
=> More information about this toot | View the thread
Excited for #FOSDEM! Don't miss "RamaLama: Making working with AI Models Boring". Pierre-Yves Chibon and I will also be talking about "Bootable Containers & Image Mode". Let's explore the future of AI and OS innovation together! #AI #RamaLama #bootc
=> More information about this toot | View the thread
First math, now mixed criticality: our journey continues! Thrilled to share progress on ISO 26262 safety certification for the Red Hat In-Vehicle Operating System. #RHIVOS #FunctionalSafety #ISO26262 https://www.redhat.com/en/about/press-releases/red-hat-reaches-key-milestone-push-functional-safety-certification-red-hat-vehicle-operating-system
=> More information about this toot | View the thread
Trying to achieve a 2-second boot time? Skip udev, specify the necessary kernel modules manually via systemd-modules-load (or build them directly into the kernel), and mount via:
https://gitlab.com/CentOS/automotive/rpms/util-linux-automotive/-/blob/main/mount-sysroot.c
This saves seconds on most hardware; querying hardware takes time. #optimization #bootspeed #RHIVOS
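For the module list, systemd-modules-load reads one module name per line from files under /etc/modules-load.d/. A minimal sketch; the file name and module names are examples, not a recommendation for your hardware:

```
# /etc/modules-load.d/fastboot.conf (example)
# Load exactly what the board needs instead of letting udev probe for it.
virtio_blk
virtio_net
```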
=> More information about this toot | View the thread
Meet llama-run, the newest tool in the llama.cpp ecosystem! Simplify running LLMs with one command, flexible configs, and seamless integration into OCI environments. Focus on outcomes, not infrastructure.
https://developers.redhat.com/blog/2024/12/17/simplifying-ai-ramalama-and-llama-run
#AI #LLMs #RedHat #llamacpp #RamaLama
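Typical invocations look roughly like this; the model names and sources below are illustrative, and the blog post has the documented set:

```
# Pull and chat in one command; "granite-code" is an example model name.
llama-run granite-code
# Local files and explicit sources should also work, e.g.:
llama-run file://some-model.gguf
```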
=> More information about this toot | View the thread
I found myself wrestling with pip, so I rewrote @simon's files-to-prompt tool in C++; it has no dependencies.
https://github.com/ericcurtin/files-to-prompt.cpp
Big thanks to @simon for the original inspiration! Always great to build on the shoulders of giants.
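Usage should mirror the original tool, assuming the C++ rewrite keeps the same CLI shape; the paths below are placeholders:

```
# Concatenate a source tree into one prompt-friendly stream.
files-to-prompt src/ include/ > prompt.txt
```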
=> More information about this toot | View the thread
Announcing lm-pull: A Lightweight Model Downloader for Developers
lm-pull is a versatile, lightweight tool designed to simplify downloading models from sources such as HuggingFace, Ollama, or direct URLs:
https://github.com/ericcurtin/lm-pull
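A minimal sketch of pulling by direct URL; the URL is a placeholder, and the exact syntax for HuggingFace and Ollama sources is in the README:

```
lm-pull https://example.com/models/model.gguf
```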
=> More information about this toot | View the thread
RamaLama is a fun project that is open to community contributions. We cater to AI use cases from local machines to the edge to the enterprise. It aims to make AI simple and boring, and it is under rapid development. Get involved!
https://developers.redhat.com/articles/2024/11/22/how-ramalama-makes-working-ai-models-boring
https://github.com/containers/ramalama
=> More information about this toot | View the thread
RamaLama discussion has moved from Discord to Matrix. Interested in RamaLama and in making AI boring? Join us at the link below. This is a good opportunity to influence the project, or maybe join in on the coding.
https://matrix.to/#/#ramalama:fedoraproject.org
=> More information about this toot | View the thread
We've opened a Discord for discussions on RamaLama; join if you are interested: https://discord.gg/czz5ETuy
=> More information about this toot | View the thread