April 6, 2026

Welcome back. After a long Easter weekend, the news didn't slow down. Iran hit an AWS data center in Bahrain for the second time in a month, Google Research published a compression algorithm that could quietly change the economics of AI infrastructure, and AT&T is fighting an organized copper theft problem that took out service for hundreds of thousands of customers. As always, thank you for reading - and hit us up with any feedback.

Today's edition:

  • Iranian drone strikes take out AWS Bahrain for the second time

  • Google's TurboQuant and what a compression breakthrough means for AI infrastructure

  • Cisco's inaugural State of Wireless report

Let’s dive in.

🆙 Round Up

Iranian drones hit Amazon's Bahrain data center for the second time in a month, disrupting services in the AWS me-south-1 region and raising serious questions about physical infrastructure risk for US hyperscalers with billions committed to Middle East expansion. Data centers were not built to withstand military-grade attacks, and Iran has explicitly threatened at least 18 US tech companies operating in the region.

Cisco's inaugural State of Wireless Report is worth a read if you're making a case internally for wireless investment or trying to understand where the market is heading. The core argument is that wireless has shifted from a utility to a strategic asset, and organizations treating it that way are seeing compounding returns across productivity and customer experience. The report also covers the talent gap in wireless ops and how AI-driven automation is changing what IT teams can realistically manage without adding headcount.

Anthropic accidentally exposed 513,000 lines of source code for Claude Code through a misconfigured package, and the code is now mirrored across hundreds of GitHub repositories with tens of thousands of forks. The more immediate operational risk for teams: threat actors are already using fake "leaked Claude Code" repositories as lures to distribute Vidar and GhostSocks malware, so anyone in your org curious enough to search GitHub for the leaked code could end up compromised.

🔦 Spotlight

Google Research published TurboQuant, a compression algorithm that shrinks the memory footprint of large language models by at least 6x. The technique is aimed at making AI inference faster and cheaper to run, and Google has already signaled it's headed into production across Gemini and its search infrastructure.

Between the lines: The enterprise networking angle here is easy to miss but worth paying attention to. Right now, running AI inference at scale requires enormous GPU clusters with high-density switching, low-latency interconnects, and serious power infrastructure. That is why every vendor conversation this year has been about 800G optics, liquid cooling, and Spectrum-X fabrics.

A compression breakthrough that lets the same hardware serve significantly more inference requests changes that math. If you can get 6x more out of your existing GPU memory, you need fewer GPUs to serve the same workload, which means less east-west traffic between compute nodes, less pressure on the switching fabric, and potentially less data center capacity overall. For network architects being asked to plan AI infrastructure, that matters.
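
To make that memory math concrete, here is a minimal sketch of blockwise int4 weight quantization in Python. To be clear, this illustrates the generic low-bit compression idea, not TurboQuant's actual algorithm, which Google hasn't detailed here; the block size, absolute-max scaling, and int4 target are all assumptions for demonstration.

```python
import numpy as np

# Illustrative blockwise int4 quantization: NOT TurboQuant's method, just a
# sketch of why packing weights into 4 bits shrinks the memory footprint.

def quantize_int4(w: np.ndarray, block: int = 64):
    """Quantize a 1-D fp32 weight vector to packed int4 with per-block scales."""
    blocks = w.reshape(-1, block)
    # One fp32 scale per block, sized so the largest weight maps to +/-7.
    scales = np.maximum(np.abs(blocks).max(axis=1, keepdims=True), 1e-8) / 7.0
    q = np.clip(np.round(blocks / scales), -8, 7).astype(np.int8).reshape(-1)
    nibbles = (q & 0x0F).astype(np.uint8)           # two's-complement low nibble
    packed = nibbles[0::2] | (nibbles[1::2] << 4)   # two weights per byte
    return packed, scales.astype(np.float32)

def dequantize_int4(packed: np.ndarray, scales: np.ndarray, block: int = 64):
    lo = (packed & 0x0F).astype(np.int8)
    hi = (packed >> 4).astype(np.int8)
    lo[lo > 7] -= 16                                # restore the 4-bit sign
    hi[hi > 7] -= 16
    q = np.stack([lo, hi], axis=1).reshape(-1, block).astype(np.float32)
    return (q * scales).reshape(-1)

w = np.random.randn(1 << 20).astype(np.float32)     # 4 MiB of fp32 "weights"
packed, scales = quantize_int4(w)
ratio = w.nbytes / (packed.nbytes + scales.nbytes)
err = np.abs(w - dequantize_int4(packed, scales)).mean()
print(f"compression: {ratio:.1f}x, mean abs error: {err:.4f}")
```

For this toy scheme the ratio lands around 7x: the savings come from storing packed low-bit weights plus a small table of scales instead of full-precision floats, which is the same basic trade behind any headline compression multiplier.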

The broader read: Think of it like Silicon Valley's Pied Piper. The show's entire premise was that an ‘unsexy’ compression algorithm could quietly reshape the economics of everything built on top of it. TurboQuant is a real-world version of that bet. If techniques like this reach broad deployment in AI infrastructure, fewer racks are needed to hit a given performance target, and the buildout that has been driving networking market growth for the past two years could slow as a result.

Read more on this from Google Research

🔎 Uplink Exclusive

Jack Dorsey shipped Bitchat, a Bluetooth-based messaging app that creates a peer-to-peer mesh network between nearby devices, with no internet, no cellular, and no central infrastructure required. Messages hop between devices in range, are end-to-end encrypted, and leave no centralized logs. Think of it as a local-area network built out of phones, operating entirely outside the stack you manage.
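
To see mechanically how messages hop between devices in range, here is a toy flood-and-relay sketch in Python. This is not Bitchat's actual protocol, which runs over Bluetooth LE and layers end-to-end encryption on top; the TTL budget and dedup-by-message-ID scheme are illustrative assumptions.

```python
import uuid
from dataclasses import dataclass, field

# Toy flood-based mesh relay: NOT Bitchat's protocol, just the hop mechanics.

@dataclass
class Message:
    msg_id: str
    dest: str
    body: str
    ttl: int = 7                                   # hop budget for the flood

@dataclass
class Node:
    name: str
    neighbors: list = field(default_factory=list)  # devices in radio range
    seen: set = field(default_factory=set)         # message IDs already handled

    def receive(self, msg: Message) -> None:
        if msg.msg_id in self.seen:
            return                                 # duplicate: stop the echo
        self.seen.add(msg.msg_id)
        if msg.dest == self.name:
            print(f"{self.name}: delivered {msg.body!r}")
            return
        if msg.ttl > 0:                            # rebroadcast to everyone nearby
            hop = Message(msg.msg_id, msg.dest, msg.body, msg.ttl - 1)
            for peer in self.neighbors:
                peer.receive(hop)

# Three phones in a line: A and C are out of each other's radio range,
# but B relays, so the message gets through with no infrastructure at all.
a, b, c = Node("A"), Node("B"), Node("C")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
a.receive(Message(str(uuid.uuid4()), dest="C", body="hello over mesh"))
```

The seen cache is what keeps a flood from echoing forever, and the TTL bounds how far a message can travel; real mesh protocols add radio scheduling, encryption, and store-and-forward on top of this same skeleton.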

Why this matters for network teams: The interesting architectural question here isn't security policy; it's what happens to your network's relevance when users can route around it entirely. Campus and enterprise wireless has always assumed that the network is the chokepoint: you control access, you can see traffic, you can enforce policy. Bitchat-style mesh networking sidesteps all of that at the physical layer: Bluetooth traffic never touches your SSID, your firewall, or your SD-WAN policy. For IT and network architects thinking about zero trust and network-based access control, a device communicating silently over Bluetooth in a restricted zone is genuinely outside the threat model most enterprise network designs account for.

The open question: Whether Bitchat itself gains traction is almost beside the point. The underlying capability, local mesh networking that bypasses managed infrastructure, is already available in various forms and will only get more capable. The teams that start thinking about Bluetooth and near-field communication as part of their network architecture conversation now will be better positioned than those treating it as a security team problem. Learn more about this here.

⚡ Quick Reads

🔄 Cisco patched a critical authentication bypass in its Integrated Management Controller (Cisco)

📹 Verkada appoints ex-Meraki executive to CIO role (PR Newswire)

🥷 AT&T logged more than 10,400 copper theft incidents last year, costing the company $82 million in repairs (AT&T)

👇 See you next time

  • Explore more articles from Uplink

  • Follow us on social media to stay in the loop

  • Contact us with questions, comments, or leads
