I've used the F1 score in many past machine learning projects in #rstats and #python (it's useful for imbalanced classification problems), but I hadn't visualized its relationship with precision and recall until today :)
Here's a {rayshader} #dataviz comparing the F1 score's harmonic mean (as you can see, it penalizes the score when one metric is bad) with the arithmetic mean (which simply averages the metrics).
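A minimal sketch of the comparison behind the viz (plain Python instead of the {rayshader} surface; the example precision/recall values are assumptions for illustration):

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 = harmonic mean of precision and recall.

    It collapses toward 0 when either metric is low,
    which is why it penalizes imbalanced metrics.
    """
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def arithmetic_mean(precision: float, recall: float) -> float:
    """Simple average: a bad metric can hide behind a good one."""
    return (precision + recall) / 2


# One strong and one weak metric: F1 punishes the imbalance,
# while the arithmetic mean still looks respectable.
p, r = 0.9, 0.1
print(f"F1   = {f1_score(p, r):.3f}")         # low: the weak recall dominates
print(f"Mean = {arithmetic_mean(p, r):.3f}")  # 0.500 regardless of the split
```

With p = 0.9 and r = 0.1, F1 = 2·0.9·0.1 / (0.9 + 0.1) = 0.18, versus an arithmetic mean of 0.50 — the gap between those two surfaces is exactly what the 3D plot shows.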
=> More information about this toot | More toots from jrosell@mastodon.social
Thanks to @koaning for pointing out this Stack Overflow visualization https://stackoverflow.com/a/49535509/481463 in this video https://www.youtube.com/watch?v=3M2Gmuh5mtI
Here's the code: https://gist.github.com/jrosell/667fee58c8ff47a99591fa6122e04ed3
@jrosell really cool viz!
=> More information about this toot | More toots from mario_angst_sci@fediscience.org