Meshing in AI and Hyperscale Data Centers: Practical Guidance for Evolving Infrastructure Design
AFL’s Keith Sullivan introduces the principles and best-practice strategies of meshing in evolving AI and hyperscale data centers. For a technical deep dive, read our white paper: Meshing in AI and Hyperscale Data Centers: Practical Guidance for Evolving Infrastructure Design - https://www.aflhyperscale.com/downloads/meshing-in-ai-and-hyperscale-data-centers-white-paper/

What’s in store in this video:

What is Meshing in a Data Center
Keith begins by introducing the concept of meshing as a strategy to increase network radix, extending the reach and efficiency of data center fabrics. By enabling GPUs to operate as a cohesive system, meshing helps alleviate congestion and reduces the risk of packet collisions, ensuring smoother data flow across high-performance environments.

Why We Mesh: The AI Challenge
In this section, Keith explains how the principles of meshing mirror the logic of adding lanes to reduce road traffic. In this analogy, a single-lane road is split into multiple lanes to ease congestion and maintain flow. In AI data centers, this translates into multiple paths between GPUs, allowing synchronized workloads to move efficiently and without bottlenecks.

How We Mesh: Techniques and Tools
In the final part of the video, Keith outlines the practical building blocks of meshing in AI data centers, explaining how MPO patchcords, shuffle cassettes, and MPO shuffle trunks are used to create scalable, high-density connectivity across complex topologies (a small illustrative sketch of the shuffle idea follows at the end of this description).

Stay connected with us:
Website: https://www.aflglobal.com
LinkedIn: https://www.linkedin.com/company/aflglobal
X: https://twitter.com/AFLGlobal
Facebook: https://www.facebook.com/AFLGlobal

Don't forget to like, comment, and subscribe for more updates on how AFL is driving innovation and excellence in the industry.
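To make the shuffle idea above a little more concrete, here is a minimal, hypothetical Python sketch of how a full mesh cross-connect distributes ports: port j of group i is patched to port i of group j, so every group ends up with one path to every other group. The group count and the transpose-style mapping are illustrative assumptions for this description only, not AFL's specific cassette or trunk pinout; see the white paper for the actual products and topologies.

def shuffle_map(num_groups: int) -> dict:
    """Return a (group, port) -> (group, port) mapping for a full-mesh shuffle.

    Assumes each group exposes one port per peer group; self-connections
    are skipped, so every group gets exactly one link to every other group.
    """
    mapping = {}
    for group in range(num_groups):
        for port in range(num_groups):
            if port == group:
                continue  # no link from a group back to itself
            mapping[(group, port)] = (port, group)
    return mapping

if __name__ == "__main__":
    # With 4 groups, each group reaches the other 3 over distinct paths,
    # which is the "multiple paths between GPUs" property described above.
    for src, dst in sorted(shuffle_map(4).items()):
        print(f"group {src[0]} port {src[1]} -> group {dst[0]} port {dst[1]}")

The same any-to-any pattern is what MPO shuffle trunks and cassettes implement in fiber, collapsing many individual cross-connects into pre-terminated assemblies.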