Toot

Written by froztbyte@awful.systems on 2024-09-09 at 12:45

the press release (archive) says:

featuring its first advanced on-processor chip AI accelerator for inferencing

For instance, our AI-driven fraud detection solutions are designed to save clients millions of dollars annually. With the introduction of the AI accelerator on the Telum processor, we’ve seen active adoption across our client base. Building on this success, we’ve significantly enhanced the AI accelerator on the Telum II processor

if I’m reading this correctly, it’s on-die in the Telum II, but was previously a separate thing (a co-processor or an architecture add-in card or something)?

the use case sooooort of makes sense, but I’m still skeptical about part of it, because this seems awfully like it’d be limited by changes over time in how one might do such tasks (e.g. if a new preferred inferencing method comes out that doesn’t quite fit the chip’s pattern). but also “our AI-driven fraud detection solutions” - ah.

guess it’ll be interesting to see how this shit sits in 10y or something.

=> More information about this toot | View the thread | More toots from froztbyte@awful.systems

Mentions

=> View froztbyte@awful.systems profile

Proxy Information
Original URL
gemini://mastogem.picasoft.net/toot/113107662104775257
