DC Communications & Content Workloads
Communications and Content is the workload cluster where geography determines deployment. The three children below (Content Delivery, 5G and Mobile Edge Compute, and Networking and Telco) share a defining architectural property: the compute has to be physically near the users it serves or physically near the network aggregation points it integrates with. Centralizing these workloads in a few large facilities does not work, because the latency and bandwidth costs of backhauling user traffic from edge to core are too high. The cluster scales through replication across many smaller sites rather than consolidation in a few larger ones, which gives it the opposite deployment signature from Cloud and Enterprise.
The scale of this cluster is substantial. Content delivery networks operate thousands of edge points of presence globally. Telecommunications infrastructure (central offices, mobile switching centers, exchange points) spans tens of thousands of facilities worldwide, many now hosting compute workloads alongside traditional networking equipment. 5G mobile edge compute sites are being deployed in successive waves alongside 5G radio infrastructure. The aggregate compute capacity is large, but individual sites are far smaller than hyperscale, and the operational model is fleet-oriented rather than facility-oriented.
The three children
| Child Workload | Characteristic Profile | Dominant Deployment |
|---|---|---|
| Content Delivery | Video streaming, CDN caching, gaming edge compute; latency-sensitive user-facing | Edge DCs, CDN points of presence, carrier-hotel colocation |
| 5G and Mobile Edge Compute | Ultra-low-latency mobile compute; telco-integrated; application-specific | Mobile edge sites, 5G central offices, private enterprise 5G deployments |
| Networking and Telco | Core telco functions, network function virtualization, IXP and peering fabric | Telco central offices, carrier hotels, internet exchanges, NFV clouds |
Why geography is the defining constraint
The three children share a distinctive architectural property: their latency and bandwidth economics are set by physical distance to users or to network aggregation points, not by raw compute capacity. That property shapes every aspect of how the cluster deploys and operates.
Content Delivery. Video streaming quality, gaming responsiveness, and web application performance all degrade measurably once round-trip latency exceeds roughly 50 milliseconds. For a user in Denver watching video served from a Northern Virginia hyperscale facility, propagation delay over the fiber path consumes a large share of that budget before any queueing or processing delay is added. The CDN solution is to replicate content to edge points of presence close enough to every user population that latency stays inside acceptable bounds. Netflix Open Connect, Cloudflare, Akamai, and the in-network CDN infrastructure of all major hyperscalers operate on this principle.
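A back-of-the-envelope calculation shows how much of a 50 ms budget the cross-country path consumes from propagation alone. The distance, path-inflation factor, and fiber constants below are illustrative assumptions for the sketch, not measured network data:

```python
# Illustrative propagation-delay arithmetic for a Denver -> Northern Virginia path.
# All constants are assumptions for this sketch, not measured values.

GREAT_CIRCLE_KM = 2400           # approximate Denver to Northern Virginia distance
PATH_INFLATION = 1.5             # assumed fiber-route detour factor over great circle
FIBER_KM_PER_MS = 200            # light in fiber: ~c / 1.47 refractive index

one_way_km = GREAT_CIRCLE_KM * PATH_INFLATION
rtt_ms = 2 * one_way_km / FIBER_KM_PER_MS

print(f"fiber path (one way): {one_way_km:.0f} km")
print(f"propagation RTT:      {rtt_ms:.1f} ms")  # ~36 ms before any queueing/processing
```

With queueing, TLS setup, and server processing still to add, a roughly 36 ms propagation floor leaves little headroom under a 50 ms budget, which is the economic case for caching at the edge.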
5G and Mobile Edge Compute. The 5G standard specifies sub-20-millisecond latency for certain application classes, and some use cases (vehicle-to-everything communication, industrial automation, extended reality) require sub-10-millisecond end-to-end budgets. Compute backhauled to a distant core facility cannot meet those budgets; the mobile edge compute model answers by placing compute resources co-located with or near the radio access network. The deployment pattern puts small compute enclosures at or near cell sites and central offices, with application workloads scheduled on the edge tier rather than backhauled to a distant data center.
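A minimal budget decomposition makes the placement constraint concrete. The per-component figures below are illustrative assumptions; real budgets vary with radio configuration and vendor implementation:

```python
# Illustrative end-to-end latency budget for a sub-10 ms MEC application.
# Component values are assumptions for this sketch.

BUDGET_MS = 10.0
radio_ms = 4.0              # assumed air-interface + RAN processing, round trip
compute_ms = 3.0            # assumed application processing at the edge node
FIBER_MS_PER_KM_RT = 0.01   # ~5 us/km each way in fiber, round trip

transport_ms = BUDGET_MS - radio_ms - compute_ms
max_km = transport_ms / FIBER_MS_PER_KM_RT

print(f"transport allowance: {transport_ms:.1f} ms")
print(f"max fiber distance:  {max_km:.0f} km")  # before any switching/queueing hops
```

Propagation alone would permit a few hundred kilometers of fiber, but every aggregation hop adds switching and queueing delay on top, which is why in practice the compute lands at or near the radio site rather than several hops upstream.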
Networking and Telco. Carrier-grade networking has always been a geographically distributed workload, with switching and peering happening where fiber routes physically meet. Network function virtualization has moved this workload from purpose-built hardware onto general-purpose compute platforms, but the geography remains: NFV runs at carrier hotels, internet exchanges, and telco central offices because that is where the traffic is, not because the compute is inherently more efficient there.
Shared infrastructure demands
The three children share a set of infrastructure requirements distinct from both hyperscale workloads and AI-specific workloads.
Many small sites rather than few large ones. A 500 MW AI factory is a single site; a single CDN operator runs thousands of points of presence, each sized at tens to hundreds of kilowatts. Facility management shifts from the single-building operational discipline of hyperscale to fleet management across hundreds or thousands of small sites. Remote operations, standardized deployment, and automated orchestration become first-class design concerns; the sketch below illustrates the pattern.
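As a minimal sketch of fleet-oriented orchestration, the following pushes a standardized image to many small sites in waves, halting if a wave's failure rate exceeds a threshold. Site, deploy, and rollout are hypothetical stand-ins, not a real fleet-management API:

```python
# Minimal sketch of a wave-based fleet rollout with a failure-rate
# circuit breaker. Site, deploy, and rollout are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    region: str

def deploy(site: Site, image: str) -> bool:
    """Hypothetical stand-in: push the image to one site and run a health
    check. A real version would call the site's management plane."""
    return True  # assume success for this sketch

def rollout(sites: list[Site], image: str, wave_size: int = 50,
            max_failure_rate: float = 0.02) -> None:
    # Deploy in fixed-size waves; stop the fleet rollout if any wave
    # fails health checks at a rate above the threshold.
    for i in range(0, len(sites), wave_size):
        wave = sites[i:i + wave_size]
        failures = sum(not deploy(s, image) for s in wave)
        if failures / len(wave) > max_failure_rate:
            raise RuntimeError(f"halting rollout: {failures}/{len(wave)} failed")

sites = [Site(f"pop-{i:04d}", "us-west") for i in range(120)]
rollout(sites, "edge-stack:2024.1")
```

The design choice worth noting is the circuit breaker: with thousands of sites, a bad image has to be caught after the first wave, not after the fleet.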
Carrier-grade networking. Unlike AI training fabrics optimized for bandwidth at the expense of geographic distribution, or cloud networking optimized for elasticity, communications and content workloads need carrier-grade reliability (five nines), protocol stacks (BGP, MPLS, segment routing), and peering infrastructure. The network is the product, not the plumbing.
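The "five nines" target translates into a concrete downtime budget. The arithmetic below is the standard availability calculation, nothing specific to this cluster:

```python
# Allowed downtime per year for N-nines availability targets.

MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in range(3, 6):
    availability = 1 - 10 ** -nines
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability:.5f}): {downtime_min:7.2f} min/yr")
```

Five nines allows about 5.3 minutes of downtime per year, which is why redundant paths and automatic failover are baseline requirements rather than options in carrier networking.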
Ruggedized and flexible siting. Edge and telco sites are often in cell towers, street cabinets, telco central offices built decades ago, or purpose-built modular enclosures. Environmental specifications are wider than hyperscale (extended temperature ranges, seismic and vibration tolerance, occasional sub-optimal power quality), and equipment selection has to accommodate that flexibility.
Modest per-site compute. Individual edge and telco sites typically run tens to low hundreds of racks, not thousands. Rack densities are moderate (10 to 30 kW). Air cooling remains sufficient for most sites. The cluster does not drive the same extreme-density thermal engineering that AI training does.
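Multiplying the rack counts and densities above bounds the per-site IT load; the combinations below are illustrative points in the stated range:

```python
# Illustrative per-site IT load from the rack counts and densities above.

for racks in (20, 100, 300):
    for kw_per_rack in (10, 30):
        mw = racks * kw_per_rack / 1000
        print(f"{racks:3d} racks x {kw_per_rack:2d} kW/rack = {mw:4.1f} MW")
```

Even the top of the range lands in single-digit megawatts per site, an order of magnitude or more below a hyperscale campus.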
Network function acceleration where needed. Some networking workloads (packet processing, encryption, traffic shaping) benefit from SmartNICs, DPUs, and purpose-built network acceleration hardware. These accelerators are different in kind from AI training accelerators and occupy a different position in the facility's compute inventory.
Where communications and content workloads run
| Deployment Context | Typical Workloads | Rationale |
|---|---|---|
| Edge DCs | CDN caching, 5G MEC, distributed inference | Proximity to users; sub-50 ms latency requirements |
| Carrier hotels and internet exchanges | Peering, IXP fabric, CDN anchor nodes, interconnection services | Fiber route density; cross-connect to every major carrier and cloud |
| Telco central offices | Core telco functions, NFV, 5G core, subscriber-facing services | Existing telecommunications infrastructure and fiber aggregation |
| Cell tower sites and street cabinets | 5G MEC for URLLC applications, radio access network compute | Sub-10 ms latency to radio interface; environmental ruggedization required |
| Modular DCs | Rapid-deployment edge compute at novel sites | Prefabricated capacity deployable in unusual or temporary locations |
| Hyperscaler DCs | CDN origin servers, video transcoding, cloud-hosted NFV | Back-end origin and transcoding compute backing the edge fleet |
The edge buildout
The Communications and Content cluster has driven the largest-scale deployment of distributed datacenter infrastructure in the industry. CDN fleets expanded through the 2010s as streaming video and mobile traffic grew. 5G deployment in the late 2010s and 2020s added a second wave of edge sites tied to mobile infrastructure. Network function virtualization migrated carrier networks from dedicated hardware onto standard server platforms, turning telco central offices into compute environments. Each of these waves added distributed capacity, and the resulting footprint is larger in site count than any other datacenter category by a wide margin.
The edge-versus-hyperscale split within the cluster continues to evolve as AI inference becomes relevant at the edge. Small inference models for real-time personalization, latency-sensitive visual applications, and conversational interfaces are increasingly candidates for edge deployment, which pulls some AI workload into the communications and content infrastructure footprint. That AI-at-the-edge direction is covered under AI Inference rather than repeated here, but the facility infrastructure supporting it is the same edge footprint that this cluster covers.
Where this cluster sits in the workload taxonomy
Communications and Content is the geographically distributed workload cluster, distinct from the consolidated hyperscale deployments of Cloud and Enterprise and from the specialized accelerator density of AI Training and HPC and Simulation. Its deployment signature (many small sites close to users) is the opposite of the hyperscale signature (few large sites optimized for aggregate capacity), and that contrast drives most of the architectural differences in facility design, network topology, and operations discipline between the two clusters.
Related coverage
Workloads | Content Delivery | 5G and MEC | Networking and Telco | Cloud and Enterprise | AI Inference | Edge DCs | Modular DCs | Hyperscaler DCs