I followed their instructions here: speech.fish.audio
I am using the API server to do inference (rough request sketch below): speech.fish.audio/inference/#http-api-inference
I don't know about other ways. To be clear, this is not (necessarily) an LLM; it's just for speech synthesis, so you don't run it on ollama. That said, I think it does technically use Llama under the hood, since there are two models: one that encodes text into tokens and one that decodes those tokens into audio. Honestly, the paper is terrible, but it explains the architecture somewhat: arxiv.org/pdf/2411.01156
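Roughly, a request to the API server looks something like the sketch below. This is just to show the shape of it: the port (8080), the /v1/tts endpoint path, and the JSON field names are my assumptions, not something I've copied from the docs, so check the inference page linked above for the actual schema before using this.

```python
# Minimal sketch of calling the Fish Speech HTTP API server.
# Assumptions to verify against speech.fish.audio/inference/#http-api-inference:
#   - the server is running locally on port 8080
#   - it exposes POST /v1/tts accepting JSON with a "text" field
#   - it returns raw audio bytes in the response body
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/tts",
    json={"text": "Hello from the speech synthesis server.", "format": "wav"},
    timeout=120,  # generation can be slow, especially on CPU
)
resp.raise_for_status()

# Save the returned audio to disk.
with open("output.wav", "wb") as f:
    f.write(resp.content)
print(f"Wrote {len(resp.content)} bytes to output.wav")
```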