I've used the F1 score in many machine-learning projects in #rstats and #python (it's useful for imbalanced classification problems), but I hadn't visualized its relationship with precision and recall UNTIL TODAY :)
Here's a {rayshader} #dataviz comparing the F1 score, a harmonic mean (as you can see, it penalizes the score when one metric is bad), with the arithmetic mean (which simply averages the two metrics).
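The contrast in the plot can be reproduced numerically. This is a minimal Python sketch (the original viz is in R with {rayshader}; the precision/recall pairs below are illustrative values I chose, not from the post) showing how the harmonic mean collapses when one metric is low while the arithmetic mean stays deceptively high:

```python
# Compare F1 (harmonic mean) with the arithmetic mean of precision and recall.
# Illustrative values only -- not taken from the original visualization.

def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def arithmetic_mean(precision: float, recall: float) -> float:
    """Simple average of precision and recall."""
    return (precision + recall) / 2

pairs = [(0.9, 0.9), (0.9, 0.5), (0.9, 0.1)]
for p, r in pairs:
    print(f"P={p:.1f} R={r:.1f}  F1={f1_score(p, r):.3f}  mean={arithmetic_mean(p, r):.3f}")
# P=0.9 R=0.9  F1=0.900  mean=0.900
# P=0.9 R=0.5  F1=0.643  mean=0.700
# P=0.9 R=0.1  F1=0.180  mean=0.500
```

Note the last row: with recall at 0.1, the arithmetic mean still reports 0.5, while F1 drops to 0.18 — exactly the penalty the 3D surface makes visible.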
— jrosell@mastodon.social