Amazon Web Services (AWS) and OpenAI have entered a multi-year, $38 billion strategic partnership to provide the cloud infrastructure needed to run and scale OpenAI’s artificial intelligence workloads globally.
Under the seven-year agreement, OpenAI will leverage AWS’s compute resources—spanning hundreds of thousands of NVIDIA GPUs and the ability to scale to tens of millions of CPUs—to power its frontier AI systems, including ChatGPT and future large language models.
AWS will deploy specialized GPU clusters featuring NVIDIA GB200 and GB300 chips connected through Amazon EC2 UltraServers, designed for high-efficiency AI processing and low-latency performance. This infrastructure will support both model training and inference at massive scale, with full deployment expected by the end of 2026.