Google’s Hugging Face deal puts ‘supercomputer’ power behind open-source AI


Google Cloud’s new partnership with AI model repository Hugging Face lets developers build, train, and deploy AI models without needing a Google Cloud subscription. Outside developers using Hugging Face’s platform will now have “cost-effective” access to Google’s tensor processing units (TPUs) and GPU supercomputers, which will include thousands of Nvidia’s in-demand and export-restricted H100s.

Hugging Face is one of the more popular AI model repositories, hosting open-source foundation models like Meta’s Llama 2 and Stability AI’s Stable Diffusion. It also hosts many datasets for model training.

Developers can work with any of the more than 350,000 models hosted on the platform or upload their own, much like coders publish their code on GitHub. Valued at $4.5 billion, Hugging Face has raised $235 million over the last year from investors including Google, Amazon, and Nvidia.

Google said that Hugging Face users will be able to start using Vertex AI, its AI app-building platform, and Google Kubernetes Engine, which helps train and fine-tune models, “in the first half of 2024.”

Google said in a statement that the partnership “furthers Google Cloud’s support for open-source AI ecosystem development.” Some of Google’s models are available on Hugging Face, but its banner large language model Gemini, which now powers the chatbot Bard, and its text-to-image model Imagen are not on the repository and are considered more closed-source models.