China trains a single generative AI model across multiple data centers with a GPU-mixing breakthrough


What you need to know

  • Tech industry analyst Patrick Moorhead claims that a single generative AI model is running across multiple data centers in China.
  • Rather than relying on a consistent array of matching GPUs, researchers in China are combining “non-sanctioned” units from various brands.
  • Splitting the workload of a single generative AI model across several locations could ease the power constraints associated with the technology.

Despite an ongoing saga of import restrictions and outright bans deterring NVIDIA from shipping approximately $5 billion worth of AI chips, the state of generative AI in China doesn’t seem to be slowing down. On the contrary, after NVIDIA was blocked from selling its A800 and H800 AI and HPC GPUs in the Chinese market, the country appears to be pooling whatever resources it has left and inventing clever ways to combine “non-sanctioned” hardware across multiple, separate data centers.

Tech industry analyst Patrick Moorhead claimed via X (formerly Twitter) that China is excelling with “lower-performing hardware” than what is available to generative AI developers in the United States and that it recently became “the first to train a single GAI model across multiple data centers.” The claim should be taken with a pinch of salt, since it came from “a very large company” during a conversation protected by an NDA (non-disclosure agreement), but it would be a realistic answer to the gigantic electricity demands seen in Microsoft’s and Google’s AI efforts.

How is China pushing AI forward without the latest GPUs?

An AI server based on NVIDIA A100 technology revealed in 2021. (Image credit: Getty Images | Feature China)

Although the United States government’s restrictions force NVIDIA to acquire licenses to ship its A100, A800, H100, and H800 GPUs designed explicitly for artificial intelligence computing, this hasn’t halted China’s generative AI efforts, as the country keeps finding inventive and unusual workarounds. Chief among them is a tactic to “meld GPUs from different brands into one training cluster” (via Tom’s Hardware), which keeps its researchers pushing ahead with whatever hardware is at hand.
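To give a rough idea of what one training job spread across dissimilar machines can look like, here is a minimal, hypothetical sketch using PyTorch’s vendor-neutral distributed tooling. It is not the researchers’ actual stack; the toy model, rendezvous address, and worker count are placeholders purely for illustration.

```python
# Sketch only: data-parallel training where each worker could sit on a different
# GPU brand, or even a different site, as long as every rank can reach the same
# rendezvous address. The vendor-neutral "gloo" backend is used so the example
# runs on plain CPUs; a real mixed-hardware cluster would swap in whatever
# backend each accelerator supports.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP


def worker(rank: int, world_size: int):
    # All ranks rendezvous at the same address; across data centers this would
    # be a routable endpoint rather than localhost (placeholder values here).
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Every rank holds a full replica of the (toy) model; gradients are averaged
    # across ranks after each backward pass, which is what lets dissimilar
    # machines contribute to a single training run.
    model = DDP(torch.nn.Linear(16, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(5):
        x = torch.randn(8, 16)       # each rank trains on its own local batch
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()              # DDP all-reduces gradients here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 2                   # stand-in for workers in separate facilities
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```

A genuine cross-data-center run would add far more on top of this pattern, such as gradient compression and scheduling to cope with inter-site latency, but the underlying principle of averaging gradients across otherwise mismatched workers is the same.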
