Nvidia Launches New Products Built for the AI Arms Race

Pivot 5 is the must-read daily AI briefing for 500,000+ CEOs and business leaders who need signal, not noise.

5 headlines. 5 minutes. 5 days a week.

AI INFRASTRUCTURE

Nvidia Expands AI Systems Lineup To Defend Chip Dominance

  • Nvidia CEO Jensen Huang used Computex to roll out next-generation GB300 accelerator systems, the RTX Pro Server, and new software tools including DGX Cloud Lepton. He framed the launches as a bid to keep Nvidia at the core of global AI data-center spending.

  • The RTX Pro Server delivers four times the H100's performance on DeepSeek workloads and 1.7 times on Llama workloads, and is already in volume production. New NVLink Fusion systems let customers pair Nvidia CPUs with rival accelerators, while MediaTek, Marvell, Qualcomm, and Fujitsu join an enlarged custom-chip ecosystem and Acer and Gigabyte sign on to build the smaller Spark and Station machines.

  • By opening its system designs yet preserving the NVLink backbone, Huang anchors Nvidia technology even as cloud giants develop in-house silicon. Bloomberg Intelligence flags the second-half GB300 ramp as a test of whether AI server demand stays firm amid economic and geopolitical uncertainty.

Read more here.

AI TOOLS

OpenAI Unveils Cloud-Based Codex Agent to Automate Software Tasks Inside ChatGPT

  • OpenAI released a research preview of Codex, a cloud-based software engineering agent embedded in ChatGPT. The agent generates code, answers repository questions, fixes bugs, and submits pull requests from its own isolated sandbox.

  • Codex uses the new codex-1 model, trained on real coding tasks, and runs in secure containers with no internet access. It starts rolling out today to ChatGPT Pro, Enterprise, and Team users at no extra cost, with paid access to follow; the lighter codex-mini model is priced via the API at $1.50 per million input tokens and $6 per million output tokens.

  • Early adopters such as Cisco, Temporal, and Superhuman say the agent accelerates feature development and cuts repetitive work. OpenAI engineers now rely on Codex for refactoring, test writing, and on-call triage, reporting clearer focus and faster shipping.
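For scale, the quoted codex-mini rates translate into per-request costs as follows. This is a back-of-envelope sketch using only the two prices cited above; the example token counts are hypothetical.

```python
# Quoted codex-mini API rates: $1.50 per million input tokens,
# $6.00 per million output tokens.
INPUT_RATE = 1.50 / 1_000_000   # dollars per input token
OUTPUT_RATE = 6.00 / 1_000_000  # dollars per output token

def codex_mini_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of one request at the quoted rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A hypothetical bug-fix task: 50k tokens of repo context in,
# a 5k-token patch out.
print(f"${codex_mini_cost(50_000, 5_000):.3f}")  # → $0.105
```

At these rates, even context-heavy requests stay in the cents range, which is consistent with the positioning of codex-mini as the lighter, cheaper tier.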

Read more here.

AI INFRASTRUCTURE

OpenAI Plans 5-Gigawatt Abu Dhabi Data Center Bigger Than Monaco

  • OpenAI is working with Abu Dhabi–based tech conglomerate G42 to build a 5-gigawatt data center campus in the UAE capital. The facility would cover roughly 10 square miles, making OpenAI the anchor tenant in one of the largest AI infrastructure projects ever disclosed.

  • The Abu Dhabi site would dwarf the 1.2-gigawatt Stargate campus already underway in Abilene, Texas. Both projects sit under OpenAI’s Stargate initiative, a venture with SoftBank and Oracle aimed at deploying massive chip-laden data centers worldwide.

  • The expansion deepens OpenAI’s partnership with the UAE, a relationship that has previously drawn U.S. scrutiny over G42’s historical ties to Chinese firms such as Huawei and BGI. G42 says it has exited its China investments, and Microsoft has since invested $1.5 billion in the company and taken a board seat.

Read more here.

AI INFRASTRUCTURE

China Starts Building 2,800-Satellite Orbital AI Supercomputer

  • China has launched the first 12 satellites of its planned 2,800-satellite “Three-Body Computing Constellation”. The flight kicks off ADA Space’s effort to create a space-based supercomputer that processes data onboard instead of relying on ground stations.

  • Each satellite runs an 8-billion-parameter AI model delivering 744 trillion operations per second (TOPS), giving the initial cluster a combined 5 quadrillion operations per second. The units link by laser at up to 100 Gbps and pool 30 terabytes of storage for shared workloads.

  • Processing in orbit sidesteps bandwidth limits that currently allow less than 10% of satellite data to reach Earth. Experts quoted say abundant solar power and direct heat radiation into space make orbital data centers more energy-efficient and lower in carbon footprint than terrestrial counterparts.

Read more here.

OPINION

Nadella Converts DeepSeek Challenge Into Azure Opportunity

  • Satya Nadella reacted to the surprise debut of DeepSeek’s ultra-cheap R1 model by launching a round-the-clock security review and ordering teams to list it on Azure. Within days Microsoft began selling R1 access alongside OpenAI and its own models, turning a potential rival into another marketplace option.

  • DeepSeek’s R1 matches OpenAI’s performance while cutting compute costs from $1,000 to $36, and it is fully open source. After a 48-hour audit involving about 100 staff, Microsoft cleared the model and added it to an Azure catalog of more than 1,900 options, ranging from $10 OpenAI calls to 90-cent DeepSeek queries, all billed on Azure infrastructure.

  • Nadella sees the flood of low-cost models as proof AI is commoditizing, so Microsoft is also training its own MAI2 family to cut Copilot expenses and reduce reliance on OpenAI. The move underscores a strategy that prizes Azure consumption over loyalty to any single model vendor and insulates Microsoft if its partner stumbles.

Read more here.