Toot

Written by joshuatj on 2024-11-18 at 23:22

LLM for classification (GPT-3.5 Turbo). Lots of prompt engineering. Lots and lots of iteration to get the prompt right.
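A minimal sketch of the kind of constrained classification prompt this describes. The category names, prompt wording, and helper functions are my illustrative assumptions, not the speaker's actual prompt:

```python
# Illustrative sketch of LLM-based record classification via a constrained
# prompt. Labels and wording are assumptions for demonstration only.

LABELS = ["correspondence", "financial record", "policy document", "other"]

def build_prompt(text: str) -> str:
    """Build a prompt that asks the model for exactly one known label."""
    return (
        "Classify the following record into exactly one category.\n"
        f"Categories: {', '.join(LABELS)}\n"
        "Answer with the category name only.\n\n"
        f"Record:\n{text}"
    )

def parse_label(reply: str) -> str:
    """Normalise the model reply; fall back to 'other' on anything unexpected."""
    cleaned = reply.strip().lower().rstrip(".")
    return cleaned if cleaned in LABELS else "other"

# The call itself would go through the OpenAI chat API, roughly:
# resp = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": build_prompt(record_text)}],
#     temperature=0,
# )
# label = parse_label(resp.choices[0].message.content)
```

Constraining the answer to a fixed label set and normalising the reply is one common way to make iterative prompt tuning measurable.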

Trialled Llama and Gemini on local GPUs; just couldn't get the accuracy they need.

Total cost depends on the total number of tokens. Discovered more is not always better.
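A back-of-envelope version of that token-cost arithmetic. The per-1k-token prices here are placeholder assumptions, not current provider rates:

```python
def estimate_cost(n_docs, avg_prompt_tokens, avg_output_tokens,
                  price_in_per_1k=0.0005, price_out_per_1k=0.0015):
    """Rough total USD cost for classifying n_docs documents.

    Prices are illustrative placeholders; check the provider's live pricing.
    """
    total_in = n_docs * avg_prompt_tokens
    total_out = n_docs * avg_output_tokens
    return (total_in / 1000) * price_in_per_1k + (total_out / 1000) * price_out_per_1k

# e.g. 100,000 docs at 800 prompt tokens and 5 output tokens each
print(round(estimate_cost(100_000, 800, 5), 2))  # → 40.75
```

Because cost scales with prompt length, trimming context that doesn't improve accuracy ("more is not always better") directly cuts the bill.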

QA is a key part of the detailed solution. Sampling. Might result in our first EDRMS disposal (Yayyy!)
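The QA-by-sampling step could look something like this. The sample rate, seed, and function shape are my assumptions, not details from the talk:

```python
import random

def qa_sample(classified_records, rate=0.05, seed=42):
    """Draw a reproducible random sample of classified records for human QA review.

    rate and seed are illustrative defaults, not values from the talk.
    """
    rng = random.Random(seed)  # fixed seed so the QA sample is auditable
    k = max(1, round(len(classified_records) * rate))
    return rng.sample(classified_records, k)

batch = [f"record-{i}" for i in range(200)]
sample = qa_sample(batch)
print(len(sample))  # 10 of 200 records go to a human reviewer
```

A fixed seed makes the sample reproducible, which matters if the QA evidence later has to support a formal disposal decision.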

(2/🧵)

#ALGIM2024 #ALGIM24


Original URL
gemini://mastogem.picasoft.net/toot/113506528119923765