Tux Machines

Mozilla and Proprietary/Artificial Intelligence (AI)

Posted by Roy Schestowitz on Aug 10, 2023

=> Open Hardware: Arduino and Raspberry Pi | Windows Total Cost of Ownership (TCO)

Openness and AI: Fostering innovation and accountability in the EU’s AI Act

=> ↺ Openness and AI: Fostering innovation and accountability in the EU’s AI Act

Open source lies at the heart of Mozilla and our Manifesto. Despite its ubiquity in the current technology landscape, it is easy to forget that open source was once a radical idea that was compared to cancer. In the long journey since, Mozilla has helped create an open source browser, email client, programming language, and data donation platform while applying that ethos beyond our code, including in our advocacy.

'Hypnotized' ChatGPT and Bard Will Convince Users to Pay Ransoms and Drive Through Red Lights

=> ↺ 'Hypnotized' ChatGPT and Bard Will Convince Users to Pay Ransoms and Drive Through Red Lights

Making matters worse, the researchers told the LLMs never to tell users about the “game” in question and even to restart said game if a user was determined to have exited. With those parameters in place, the AI models would begin gaslighting users who asked whether they were part of a game. Even if users could put two and two together, the researchers devised a way to create multiple games inside one another, so users would simply fall into another one as soon as they exited the previous game. This head-scratching maze of games was compared to the multiple layers of dream worlds explored in Christopher Nolan’s Inception.
“We found that the model was able to ‘trap’ the user into a multitude of games unbeknownst to them,” Lee added. “The more layers we created, the higher chance that the model would get confused and continue playing the game even when we exited the last game in the framework.” OpenAI and Google did not immediately respond to Gizmodo’s requests for comment.

Large Language Models — the hardware connection

=> ↺ Large Language Models — the hardware connection

According to Wikipedia, an LLM typically requires six FLOP per parameter per token. This translates to 6 x 175B x 300B, or 3.15 x 10^23 FLOP, to train the GPT-3 model. GPT-3 took three weeks to train, so it needed 5.8 x 10^16 FLOPS (Floating Point Operations per second) of sustained compute power over that three-week period.
The highest-performing H100 GPU from Nvidia delivers roughly 60 TeraFLOPS. If these GPUs were 100% utilized, about 1,000 of them would be needed to reach 5.8 x 10^16 FLOPS. But in many training workloads, GPU utilization hovers around 50% or less due to memory and network bottlenecks, so the training requires roughly twice as many GPUs, or about 2,000 H100s. The original LLM (Table 1) was trained on an older GPU generation, so it needed 10,000 of them.
With thousands of GPUs, the model and the training data sets need to be partitioned among the GPUs to run in parallel. Parallelism can happen in several dimensions.
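As a quick sanity check on the arithmetic in this excerpt, here is a minimal Python sketch (not from the article) that reproduces its back-of-the-envelope figures; the parameter and token counts, the sustained-FLOPS figure, and the per-GPU throughput are the values quoted above, while the utilization levels are illustrative assumptions.

```python
# Back-of-the-envelope training-compute estimate, using the excerpt's
# rule of thumb of ~6 FLOP per parameter per training token.
# Parameter/token counts, the sustained-FLOPS figure, and the per-GPU
# throughput are the values quoted in the excerpt; the utilization
# levels are illustrative assumptions.

params = 175e9            # GPT-3 parameters
tokens = 300e9            # training tokens
flop_per_param_per_token = 6

total_flop = flop_per_param_per_token * params * tokens
print(f"Total training compute: {total_flop:.2e} FLOP")   # ~3.15e23 FLOP

sustained_flops = 5.8e16  # sustained rate over the run (excerpt's figure)
per_gpu_flops = 60e12     # ~60 TFLOPS per H100 (excerpt's figure)

for utilization in (1.0, 0.5):
    gpus = sustained_flops / (per_gpu_flops * utilization)
    print(f"GPUs at {utilization:.0%} utilization: ~{gpus:,.0f}")
# -> ~967 GPUs at 100% utilization and ~1,933 at 50%,
#    i.e. roughly the 1,000 and 2,000 quoted in the excerpt
```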

Exclusive poll: Americans distrust AI giants

=> ↺ Exclusive poll: Americans distrust AI giants

By the numbers: Those polled prefer federal AI regulation over self-regulation by tech companies, with 82% saying they don't trust tech executives to regulate AI.

Zoom CEO admits mistake as terms-of-service changes raise AI fears

=> ↺ Zoom CEO admits mistake as terms-of-service changes raise AI fears

Details: Zoom made changes to its terms of service back in March, but concern only spiked this past weekend after a Hacker News post highlighted that the changes appeared to give the company unbounded rights to use content to train its AI systems.

=> gemini.tuxmachines.org
