Sometimes in debates about the usefulness of "AI" technology, people point to recent examples in the news of AI doing amazing things, like improving cancer detection.
There's a really important distinction between AI / machine learning tools in general, including generative AI, and the big LLMs like ChatGPT.
(1/?)
You don't need GPT tools to detect cancer.
(2/?)
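To make that concrete, here is a minimal sketch of the kind of classical machine learning long used for this sort of task, on scikit-learn's built-in Wisconsin breast-cancer dataset. There is no LLM anywhere in the stack; the model choice and parameters are illustrative, not a recommendation.

```python
# Classical ML cancer detection: no GPT, no LLM, just a linear classifier.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 569 tumor samples, 30 numeric features, benign/malignant labels.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000)  # illustrative choice of model
clf.fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```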
LLMs are a machine learning framework that can be applied in different contexts. GPT (and friends) are LLMs that are trained on vast amounts of internet social data.
(3/?)
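A sketch of that point, not an endorsement of any particular model: the same GPT-style causal-language-model recipe can be loaded with weights trained on web text or with weights trained on PubMed abstracts. Both checkpoint names below refer to public models on the Hugging Face hub, assuming a transformers version recent enough to include BioGPT support (v4.26+).

```python
# Same transformer architecture family, different training corpora.
from transformers import AutoModelForCausalLM

# GPT-2: trained largely on scraped web text.
web_lm = AutoModelForCausalLM.from_pretrained("gpt2")

# BioGPT: the same GPT-style recipe, trained on PubMed abstracts instead.
bio_lm = AutoModelForCausalLM.from_pretrained("microsoft/biogpt")
```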
An LLM could be very effective for biomedical uses, but in those circumstances you want to train it on biomedical data, not on social data.
(4/?)
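Here is a sketch of what "train it on biomedical data" can look like in practice, using continued pretraining of a small masked language model as a stand-in for a full LLM pipeline. The file pubmed_abstracts.txt is a hypothetical placeholder for whatever domain corpus you actually have; the rest is the standard Hugging Face masked-LM training loop.

```python
# Domain-adapt a small language model on biomedical text instead of social data.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical placeholder file: one biomedical abstract per line.
corpus = load_dataset("text", data_files={"train": "pubmed_abstracts.txt"})["train"]
corpus = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="biomed-lm", num_train_epochs=1),
    train_dataset=corpus,
    # Randomly masks 15% of tokens; the model learns to predict them.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```

The specific model doesn't matter here; the point is that the training corpus, not the architecture, is what makes the tool biomedical.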
I think a lot of researchers (I would hope not biomedical researchers, but I have reason to suspect otherwise) are reaching for GPT as a magical tool that can analyze data without any need for a trained software developer to write code.
(5/?)
Even if using GPT for specialized purposes such as analyzing biomedical data worked (which it can't actually do reliably), the infrastructure-to-task mismatch reminds me of an analogy my uncle once used in another context: it's like using NASA's space shuttle to travel from Dayton to Columbus.
(6/6)