the press release (archive) says:
featuring its first advanced on-processor chip AI accelerator for inferencing
For instance, our AI-driven fraud detection solutions are designed to save clients millions of dollars annually. With the introduction of the AI accelerator on the Telum processor, we’ve seen active adoption across our client base. Building on this success, we’ve significantly enhanced the AI accelerator on the Telum II processor
if I’m reading this correctly, it’s on-die in the telum ii, but was a separate thing previously (like a co-processor or an add-in card or something)?
the use case sooooort of makes sense, but I’m still skeptical about part of it, because this seems awfully like it’d be potentially limited by changes over time in how one might do such tasks (e.g. if a new preferred inferencing method comes out that doesn’t quite fit the chip’s pattern). but also “our AI-driven fraud detection solutions” - ah.
guess it’ll be interesting to see how this shit sits in 10y or something.
=> More information about this toot | View the thread | More toots from froztbyte@awful.systems