Has anyone else noticed ChatGPT is slipping? https://adriano.fyi/posts/chatgpt-is-slipping/
=> More information about this toot | More toots from adriano@indieweb.social
A few people emailed me saying they had similar experiences, but nothing definitive. I was really hoping someone smart would have more insight than me.
Looks like I may never get to the bottom of what happened here.
=> More information about this toot | More toots from adriano@indieweb.social
@adriano I noticed strange behaviour in ChatGPT models when I used them through the API a few months ago, and I noticed that they add specific date-stamped models when they change things. For example, some time ago there was only gpt-4o, but now there are gpt-4o, gpt-4o-2024-08-06, gpt-4o-2024-05-13, and so on. After some tests I realized that they behave differently, and if I use a specific model (one with a date) the responses are more stable. I don't know if you already checked that.
=> More information about this toot | More toots from daco@mas.to
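daco's suggestion about pinning a date-stamped snapshot can be sketched like this. This is a hypothetical Python helper, not code from either poster: the request shape follows OpenAI's chat completions endpoint and the snapshot names come from the toot above, but `build_chat_request` and the prompt are illustrative.

```python
import json

# Pin requests to a dated snapshot (e.g. "gpt-4o-2024-08-06") rather than
# the floating "gpt-4o" alias, so responses stay stable even if OpenAI
# changes what the alias points at.
PINNED_MODEL = "gpt-4o-2024-08-06"

def build_chat_request(prompt, model=PINNED_MODEL):
    """Build the JSON body for a POST to /v1/chat/completions."""
    return {
        "model": model,  # a dated snapshot, not the "gpt-4o" alias
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # reduce sampling variance for regression tests
    }

print(json.dumps(build_chat_request("ping"), indent=2))
```

Setting `temperature` to 0 does not make the model fully deterministic, but it narrows the variance enough that regression tests like adriano's are meaningful.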
@daco thanks for the follow-up! I did check which date-stamped models are available, and I've since pinned my app to a dated snapshot, but if you see my update in the post, OpenAI fixed the problem some time over the weekend, and even the default models started passing the tests again.
=> More information about this toot | More toots from adriano@indieweb.social
@adriano I also realized that with the Assistants API I can set specific settings that override the defaults and are saved. Maybe you can also check that?
FYI: somehow it was hard to find this post on Mastodon to comment on (after following the instructions on your site).
=> More information about this toot | More toots from daco@mas.to
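The Assistants API point above can be sketched similarly. This is a hypothetical helper, not either poster's code: the field names follow OpenAI's assistants endpoint, where an assistant is created once with its own model, instructions, and sampling settings, and those are stored server-side and reused on every later run; the helper name and values are illustrative.

```python
import json

def build_assistant_config(name, model, instructions, temperature=0.2):
    """Build the JSON body for a POST to /v1/assistants."""
    return {
        "name": name,
        "model": model,  # a date-stamped snapshot can be pinned here too
        "instructions": instructions,  # persisted system prompt
        "temperature": temperature,  # saved with the assistant, not per call
    }

config = build_assistant_config(
    "stable-bot",
    "gpt-4o-2024-08-06",
    "Answer tersely and deterministically.",
)
print(json.dumps(config, indent=2))
```

Because the settings live on the assistant object rather than in each request, later calls can't silently drift back to whatever the API's defaults happen to be that week.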
@daco the UX of integrating Mastodon posts with a static Hugo site is…imperfect!
=> More information about this toot | More toots from adriano@indieweb.social