AWS and OpenAI announce $38 billion partnership to scale AI infrastructure
Amazon CEO Andy Jassy took to LinkedIn to announce a multi-year partnership between Amazon Web Services (AWS) and OpenAI. The agreement, valued at $38 billion, gives OpenAI immediate access to AWS’s compute infrastructure to run and scale ChatGPT and to train its next generation of models, including future agentic AI systems.
OpenAI will run inference, training, and other advanced workloads using AWS’s EC2 UltraServers and NVIDIA GPUs. AWS confirmed that deployment will be completed before the end of 2026, with room to expand capacity in 2027 and beyond.
Jassy wrote: “The new multi-year, strategic partnership with OpenAI will provide our industry-leading infrastructure for them to run and scale ChatGPT inference, training, and agentic AI workloads. It allows OpenAI to leverage our unusual experience running large-scale AI infrastructure securely, reliably, and at scale.”
AWS, the cloud division of Amazon, provides infrastructure and services used by enterprises and developers to build, deploy, and manage AI applications. The company described the new arrangement as a way to meet accelerating global demand for compute power among frontier model providers.
Infrastructure designed for scale and performance
Under the partnership, OpenAI will use AWS’s EC2 UltraServers, which connect hundreds of thousands of state-of-the-art NVIDIA GPUs (including GB200s and GB300s) through a low-latency network architecture. The design allows for parallel processing across workloads, from model training to real-time inference.
AWS CEO Matt Garman said on LinkedIn: “With this new $38B agreement, OpenAI will immediately start using our world-class infrastructure – including Amazon EC2 UltraServers packed with hundreds of thousands of state-of-the-art NVIDIA GPUs and the ability to scale to tens of millions of CPUs.”
He added that this infrastructure will power “training next generation models and scaling agentic AI workloads,” highlighting AWS’s capacity to deliver secure, high-performance compute environments for advanced AI research.
Public reaction underscores mixed sentiment
Jassy’s announcement drew mixed reactions on LinkedIn, reflecting broader industry tensions over AI investment and workforce reductions at major tech firms. The timing of the post drew particular scrutiny following reports that Amazon is cutting around 14,000 corporate roles, with layoffs possibly extending to 30,000 positions worldwide, including engineers, scientists, and recruiters.
One commenter wrote: “Firing thousands of loyal employees while dumping money in the AI money pit. Great job Andy, definitely not a ghoul.” Another user added: “Sure, keep dumping billions of dollars on unproven AI at the expense of thousands of talented employees you just laid off.”
Others defended the partnership as an essential step for AI progress. Board member Katie Taylor called it “a remarkable move that strengthens the foundation for AI scalability and long-term innovation.” Data analyst Gabriela Alvarado wrote that the deal “makes OpenAI more resilient while giving AWS the ultimate reference customer for their AI infrastructure.”
Strengthening existing collaboration
The deal extends an existing relationship between the two organizations. Earlier this year, OpenAI’s open-weight foundation models were added to Amazon Bedrock, AWS’s platform for hosting generative AI models from multiple providers.
According to AWS, OpenAI has become one of Bedrock’s most used model providers, with customers including Comscore, Peloton, and Thomson Reuters deploying its models for coding, analytics, and scientific applications.
OpenAI CEO Sam Altman said: “Scaling frontier AI requires massive, reliable compute. Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
Garman added: “As OpenAI continues to push the boundaries of what's possible, AWS's best-in-class infrastructure will serve as a backbone for their AI ambitions.”