April 6, 2026

Welcome back.

We’re coming off a long holiday weekend, and the news did not slow down.

Cloud infrastructure is under literal fire, Google is channeling their inner Pied Piper, and neocloud spend is up 223% year-over-year.

As always, thank you for reading - and hit us up with any feedback.

Today's edition:

  • Iranian drone strikes take out AWS Bahrain for the second time

  • Google's TurboQuant and what a compression breakthrough means for AI infrastructure

  • Cisco's inaugural State of Wireless report

  • Amazon and Siemens team up to tackle the growing concerns around data center energy

  • Neocloud market growth and implications

Let’s dive in.

🆙 Round Up

Iranian drones hit Amazon's Bahrain data center for the second time in a month, disrupting AWS ME-SOUTH-1 services and raising serious questions about physical infrastructure risk for US hyperscalers with billions committed to Middle East expansion.

Cisco's inaugural State of Wireless Report is worth a read if you're making a case internally for wireless investment or trying to understand where the market is heading. The core argument is that wireless has shifted from a utility to a strategic asset, and organizations treating it that way are seeing compounding returns across productivity and customer experience. The report also covers the talent gap in wireless ops and how AI-driven automation is changing what IT teams can realistically manage without adding headcount.

Anthropic accidentally exposed 513,000 lines of source code for Claude Code through a misconfigured package, and the code is now mirrored across hundreds of GitHub repositories with tens of thousands of forks. The more immediate operational risk for teams: threat actors are already using fake "leaked Claude Code" repositories as lures to distribute Vidar and GhostSocks malware, so anyone in your org curious enough to search GitHub for the leaked code could end up compromised.

Quick Reads

🔄 Cisco patched a critical authentication bypass in its Integrated Management Controller (Cisco)

🥷 AT&T logged more than 10,400 copper theft incidents last year, costing the company $82 million in repairs (AT&T)

📹 Verkada appoints notable ex-Meraki executive to CIO role (PR Newswire)

🔵 IBM announces strategic partnership with Arm built around high-availability and virtualization (IBM)

💬 Opinion: Why can’t we have nice routers anymore? (Network World)

🔋 Amazon Web Services and Siemens Energy are teaming up to tackle the data center energy problem (Digital CxO)

🔦 Spotlight

Google published TurboQuant, a compression algorithm that claims to shrink the memory footprint of large language models and vector search engines without impacting accuracy.

The problem it's addressing is real. AI inference runs into a hard ceiling in the key-value cache, a fast-access memory store models use to avoid reprocessing the same information repeatedly. As models scale, that cache becomes a bottleneck. Most compression methods chip away at it but introduce their own overhead, so you end up trading one tax for another.

TurboQuant combines two underlying algorithms, PolarQuant and QJL, to compress the high-dimensional vectors AI models use to process information. PolarQuant converts vector coordinates into a compact polar format that removes the need for expensive data normalization. QJL handles residual error with a single bit, adding effectively zero memory cost.
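The paper has the real details, but the general idea of polar-coordinate quantization is easy to sketch. The toy below encodes a 2D coordinate pair as a quantized radius and angle instead of two raw floats; the bit budgets, helper names, and the omission of QJL's residual step are all illustrative assumptions, not Google's actual implementation.

```python
import numpy as np

# Assumed bit budgets for illustration only; TurboQuant's real
# configuration is described in the paper.
RADIUS_BITS = 8
ANGLE_BITS = 8

def quantize_pair(x, y, r_max):
    """Encode a 2D coordinate pair as a quantized (radius, angle)."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)  # angle in [-pi, pi]
    r_q = round(r / r_max * (2**RADIUS_BITS - 1))
    t_q = round((theta + np.pi) / (2 * np.pi) * (2**ANGLE_BITS - 1))
    return r_q, t_q  # 16 bits total vs. 64 bits for two fp32 values

def dequantize_pair(r_q, t_q, r_max):
    """Reconstruct an approximate (x, y) from the quantized polar form."""
    r = r_q / (2**RADIUS_BITS - 1) * r_max
    theta = t_q / (2**ANGLE_BITS - 1) * 2 * np.pi - np.pi
    return r * np.cos(theta), r * np.sin(theta)

x2, y2 = dequantize_pair(*quantize_pair(0.6, -0.8, r_max=2.0), r_max=2.0)
print(x2, y2)  # close to the original (0.6, -0.8)
```

In the actual algorithm, QJL's single residual bit per value would then correct part of the reconstruction error shown here at almost no memory cost.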

The lab results are strong. TurboQuant compressed key-value cache memory by at least 6x across standard long-context benchmarks and ran up to 8x faster than unquantized keys on H100 GPUs, with no retraining required and no reported accuracy loss.

The skepticism worth holding onto: these benchmarks were run by Google, on Google's chosen models and datasets, presented at a conference where the incentive is to show your work in the best light. Independent replication hasn't happened yet. "No accuracy loss" in controlled benchmark conditions and "no accuracy loss" in a messy production environment aren't the same claim.

This is a research paper, not a shipping product. Whether the gains hold up outside a lab, across diverse workloads and model architectures, is still an open question.

The math looks good. The real test is someone else running it.

Read more on this from Google Research

📊 The Data Link

Neocloud revenue hit $25 billion in 2025, with Q4 alone coming in at $9 billion, up 223% year over year. Synergy Research Group, the source of the data, projects the market will approach $400 billion by 2031 at a 58% compound annual growth rate.
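The projection checks out arithmetically: compounding the reported 2025 figure at Synergy's stated growth rate for six years lands just under $400 billion.

```python
# Sanity-check Synergy's projection: $25B in 2025 compounding at 58% annually.
revenue_2025 = 25.0  # $B, 2025 neocloud revenue per Synergy
cagr = 0.58          # 58% compound annual growth rate

projection_2031 = revenue_2025 * (1 + cagr) ** (2031 - 2025)
print(f"${projection_2031:.0f}B")  # ~$389B, consistent with "approach $400B"
```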

The growth is largely based on structural shifts, rather than something more cyclical. Demand for GPU-accelerated compute continues to outstrip hyperscale capacity, pushing workloads toward providers purpose-built for AI. Neoclouds aren't offering something hyperscalers can't; they're just offering it faster and more cheaply, at a moment when the big three can't build data centers fast enough to keep up.

Synergy's own analyst acknowledges hyperscalers and neoclouds are competing for the same pie, and the hyperscalers have more capital, more customers, and more negotiating leverage. Concentration risk is real, with some neocloud players depending on a single contract for continuity and growth.

Read more on neocloud market trends here.

👇 See you next time

  • Explore more articles from Uplink

  • Follow us on social media to stay in the loop

  • Contact us with questions, comments, or leads

Keep Reading