LLM for Classification (GPT-3.5 Turbo). Lots of prompt engineering. Lots and lots of iteration to get the prompt right.
Trialled Llama and Gemini on local GPUs; just couldn't get the accuracy they needed.
Total cost depends on the total number of tokens. Discovered that more tokens are not always better.
QA is a key part of the detailed solution: sampling. Might result in their first EDRMS disposal (Yayyy!)
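The approach described (prompt-driven LLM classification, with a random sample pulled out for human QA) can be sketched roughly as below. This is a hedged illustration only: the labels, prompt wording, and helper names are assumptions, not the speaker's actual implementation, and the LLM call itself is left out.

```python
import random

# Illustrative sketch of the talk's workflow: build a classification
# prompt per record, send it to an LLM (e.g. GPT-3.5 Turbo, call not
# shown), then sample the outputs for human QA review.
# LABELS and PROMPT_TEMPLATE are hypothetical, not from the talk.

LABELS = ["retain", "dispose", "review"]

PROMPT_TEMPLATE = (
    "Classify the following record title into exactly one of "
    "{labels}. Reply with the label only.\n\nTitle: {title}"
)

def build_prompt(title: str) -> str:
    """Render the classification prompt. Cost scales with token
    count, which is why longer prompts are not always better."""
    return PROMPT_TEMPLATE.format(labels=", ".join(LABELS), title=title)

def sample_for_qa(results: list[dict], rate: float, seed: int = 0) -> list[dict]:
    """Draw a reproducible random sample of classified records
    for human QA, as the talk describes."""
    rng = random.Random(seed)
    k = max(1, round(len(results) * rate))
    return rng.sample(results, k)
```

In use, each record's title would go through `build_prompt`, the LLM's answer would be stored, and `sample_for_qa` would pick, say, 5% of results for a human to verify before any disposal decision.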
(2/🧵)
[#]ALGIM2024 #ALGIM24
=> More toots from joshuatj@digipres.club