Meta’s next AI model to require nearly 10 times the compute power

Facebook parent company Meta will continue to invest heavily in its artificial intelligence research, even as it expects the nascent technology to require years of work before becoming profitable, executives said on the company’s Q2 earnings call Wednesday.

Meta is “planning for the compute clusters and data we’ll need for the next several years,” CEO Mark Zuckerberg said on the call. Meta will need an “amount of compute… almost 10 times more than what we used to train Llama 3,” he said, adding that Llama 4 will “be the most advanced [model] in the industry next year.” For reference, the Llama 3 model was trained on a cluster of 16,384 Nvidia H100 80GB GPUs.
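For a rough sense of scale, here is a minimal back-of-envelope sketch in Python. The 16,384-GPU figure comes from Meta’s Llama 3 disclosure and the roughly 10x multiplier from Zuckerberg’s remark; the assumption that the extra compute would arrive as more GPUs of the same generation, rather than faster chips or longer training runs, is ours, purely for illustration.

```python
# Back-of-envelope: what "almost 10 times more" compute could imply.
# Known figures: Llama 3 was trained on 16,384 H100 GPUs (per Meta),
# and Llama 4 is expected to need ~10x the compute (per Zuckerberg).
# Illustrative assumption: the extra compute comes from more GPUs of
# the same generation, at comparable utilization and training time.

LLAMA3_GPUS = 16_384       # H100 80GB GPUs used to train Llama 3
COMPUTE_MULTIPLIER = 10    # "almost 10 times more" compute

implied_h100_equivalents = LLAMA3_GPUS * COMPUTE_MULTIPLIER
print(f"Implied cluster size: ~{implied_h100_equivalents:,} H100-equivalents")
# -> Implied cluster size: ~163,840 H100-equivalents
```

In practice some of that multiplier could be absorbed by newer accelerators or longer training runs rather than raw GPU count, so this is an illustration of scale, not a prediction about Meta’s actual cluster.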

The company is no stranger to writing checks for aspirational research and development projects. Meta’s Q2 financials show the company expects to spend $37 billion to $40 billion on capital expenditures in 2024, and executives expect a “significant” increase in that spending next year. “It’s hard to predict how this will trend multiple generations out into the future,” Zuckerberg remarked. “But at this point, I’d rather risk building capacity before it is needed rather than too late, given the long lead times for spinning up new infra projects.”

And it’s not like Meta doesn’t have the money to burn. With an estimated 3.27 billion people using at least one Meta app daily, the company brought in just over $39 billion in revenue in Q2, a 22% increase from the previous year. Of that, around $13.5 billion was profit, a 73% year-over-year increase.

But just because Meta is making money doesn’t mean its AI efforts are profitable. CFO Susan Li conceded that the company’s generative AI will not generate revenue this year, and reiterated that revenue from those investments will “come in over a longer period of time.” Still, she said, Meta is “continuing to build our AI infrastructure with fungibility in mind, so that we can flex capacity where we think it will be put to best use.”

Li also noted that Meta’s existing training clusters can be easily reworked to handle inference, which is expected to account for the majority of compute demand as the technology matures and more people use these models day to day.

“As we scale generative AI training capacity to advance our foundation models, we’ll continue to build our infrastructure in a way that provides us with flexibility in how we use it over time. This will allow us to direct training capacity to gen AI inference or to our core ranking and recommendation work, when we expect that doing so would be more valuable,” she said during the earnings call.
