what's fascinating to me is how every company seems to brag about their complex, custom, secret, in-house AI safety controls (that barely work)
unlike every other kind of infosec protective control, where people tend to collaborate to find the best solutions
https://arstechnica.com/security/2025/01/microsoft-sues-service-for-creating-illicit-content-with-its-ai-platform/
=> More information about this toot | More toots from april@macaw.social
@april return to the basics: control the in, control the process, and control the out. Oh! It’s basic infosec… and Google documents it in SAIF 🤦🏻♂️
=> More information about this toot | More toots from keroz@infosec.exchange
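For readers unfamiliar with the pattern keroz is pointing at, here is a minimal sketch of in/process/out controls wrapped around a model call. Everything in it is a hypothetical placeholder (the model_call stand-in and the regex block lists are illustrative only, not SAIF's actual mechanisms, which rely on policy classifiers and platform controls rather than a handful of regexes):

```python
import re

# Hypothetical block lists; a real deployment would use trained policy
# classifiers, not a handful of regexes.
BLOCKED_INPUT = [re.compile(p, re.I) for p in (
    r"ignore (all|previous) instructions",
)]
BLOCKED_OUTPUT = [re.compile(p, re.I) for p in (
    r"BEGIN (RSA )?PRIVATE KEY",
)]

def model_call(prompt: str) -> str:
    """Stand-in for the actual model API call."""
    return f"echo: {prompt}"

def guarded_call(prompt: str) -> str:
    # Control the in: reject suspicious prompts before the model sees them.
    if any(p.search(prompt) for p in BLOCKED_INPUT):
        raise ValueError("input rejected by policy")
    # Control the process: invoke the model under whatever constraints
    # (system prompt, rate limits, sandboxing) the platform supports.
    raw = model_call(prompt)
    # Control the out: screen the response before it reaches the user.
    if any(p.search(raw) for p in BLOCKED_OUTPUT):
        return "[response withheld by output filter]"
    return raw

if __name__ == "__main__":
    print(guarded_call("summarize this article"))
```

The point of the sketch is the shape, not the filters: same validate-input, constrain-process, screen-output structure as any other infosec control chain.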
@april are they going to sue Windows users who use Windows to post abuse and hate all over the net, too? I bet not.
=> More information about this toot | More toots from the_turtle@mastodon.sdf.org
@april aye, and I’m not sure why that is. I’ve tried reaching out to ML safety/ML security folks at other bigcorps for collaboration, and have gotten little more than crickets.
Maybe because the state of these processes and controls is abysmal everywhere and no one wants to admit it?
=> More information about this toot | More toots from li5a@chaos.social