SK Telecom and Panmnesia Sign Partnership to Innovate AI Data Center Architecture, Enhancing Cost Efficiency and Performance… “CXL-Based AI Rack” to Be Built and Validated

04 Mar 2026


SKT x Panmnesia MOU Signing Ceremony

Panmnesia, an AI infrastructure link solution provider, today announced the signing of a strategic Memorandum of Understanding (MOU) with SK Telecom, South Korea’s largest telco and a leading AI company. The agreement, signed at MWC26 in Barcelona, aims to jointly develop a CXL-based next-generation AI data center architecture.

As large-scale AI services continue to expand, data centers are investing heavily in massive deployments of high-performance GPUs, resulting in astronomical costs. Recognizing the need for sustainable scalability, SK Telecom and Panmnesia are focusing beyond simple GPU expansion to technologies that enable more efficient utilization of existing computing resources. Through this collaboration, the two companies aim to simultaneously improve cost efficiency and performance by innovating data center interconnect architecture based on Compute Express Link (CXL)* technology.

*CXL (Compute Express Link) is a high-speed, low-latency interconnect standard that organically connects CPUs, GPUs, and memory, enabling flexible expansion and utilization of computing resources beyond traditional server boundaries.

SKT x Panmnesia Collaboration Poster

Background: Limitations of Modern AI Data Center Architectures

Modern AI data centers typically configure servers with fixed ratios of CPUs, GPUs, and memory. Multiple servers are connected via networks to form racks, and multiple racks are interconnected to build data centers. However, as AI models become increasingly diverse and larger in scale, this architecture faces limitations in terms of cost-to-performance efficiency.

To address these challenges, the two companies propose:

  1. Breaking away from rigid, monolithic server architecture.

  2. Replacing traditional network-based interconnects with CXL.

Challenge #1: Resource Inefficiency from Fixed Server Configurations

In conventional AI data centers, CPUs, GPUs, and memory are statically bundled within individual servers. As a result, unused resources in one server cannot easily be utilized by others. In particular, when memory capacity becomes insufficient, the additional memory can only be deployed together with GPUs that are often unnecessary, creating inefficiencies. This structure lowers GPU utilization rates and increases both capital and operational expenditures.

To solve this issue, SK Telecom and Panmnesia propose a disaggregated architecture in which computing resources are separated by type and flexibly composed as needed. Instead of being confined within servers, CPUs, GPUs, and memory are interconnected at the rack level through a CXL Fabric Switch†, operating as a unified system. By dynamically allocating only the resources required for each AI workload, this approach minimizes unnecessary resource waste and maximizes cost efficiency.

† Fabric Switch is a device that flexibly interconnects multiple system devices while managing data flow between them.
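The cost argument above can be made concrete with a toy model. The sketch below is purely illustrative (the server sizes, workloads, and allocation logic are invented for this example, not taken from Panmnesia or SK Telecom): it compares how many GPUs must be provisioned when memory can only be added by deploying whole GPU-equipped servers, versus a disaggregated rack where GPUs and memory are drawn independently from shared pools.

```python
# Toy model (illustrative only; all numbers are invented for this sketch):
# compare GPU provisioning under fixed servers vs. a disaggregated rack.

GPUS_PER_SERVER = 8        # hypothetical fixed server configuration
MEM_PER_SERVER_GB = 512

# Each workload: (gpus_needed, memory_needed_gb) -- made-up demands.
workloads = [(2, 1024), (4, 256), (1, 2048), (8, 512)]

def fixed_servers(workloads):
    """Each workload gets whole servers: enough to cover BOTH its GPU
    demand and its memory demand, so memory-heavy jobs drag in idle GPUs."""
    total_gpus = 0
    for gpus, mem in workloads:
        servers = max(
            -(-gpus // GPUS_PER_SERVER),   # ceil: servers for GPU demand
            -(-mem // MEM_PER_SERVER_GB),  # ceil: servers for memory demand
        )
        total_gpus += servers * GPUS_PER_SERVER
    return total_gpus

def disaggregated(workloads):
    """GPUs and memory come from independent rack-level pools, so only
    the GPUs actually requested are allocated."""
    return sum(gpus for gpus, _ in workloads)

print("GPUs provisioned (fixed servers):", fixed_servers(workloads))   # 64
print("GPUs provisioned (disaggregated):", disaggregated(workloads))   # 15
```

In this toy scenario the memory-heavy workloads force whole extra servers under the fixed layout, so 64 GPUs are provisioned to serve jobs that actually need only 15; pooling the resource types independently is exactly the inefficiency the disaggregated architecture targets.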

Challenge #2: Performance Degradation from Network Overhead

The companies will also improve computational efficiency by fundamentally changing the interconnect mechanism. In conventional AI data centers, GPU collective operations‡—essential for large-scale AI training and inference—rely on general-purpose networks such as Ethernet. This process introduces data copies and software intervention, resulting in performance degradation.

To address this limitation, SK Telecom and Panmnesia will eliminate network involvement from the computational path and transition to CXL, which allows resources to be interconnected directly without traversing conventional networks.

At the core of this architecture is the Link Controller, an electronic component that can be integrated into CPUs, GPUs, AI accelerators, and memory devices. Within each device, it enables direct communication over CXL, replacing data transfers that previously required multiple copies with simple memory access operations. Furthermore, the architecture enables GPU-to-GPU and GPU-to-memory communication without software intervention, significantly improving processing efficiency. As a result, AI data centers can deliver higher performance without increasing the number of GPUs.

‡ GPU collective operations refer to the process by which multiple GPUs share and aggregate computational results, an essential component for large-scale AI training and inference.
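The difference between the two data paths can be sketched in a few lines. The toy code below is not the actual CXL, NCCL, or Panmnesia data path; it merely counts the staging copies in a network-style all-reduce, where every transfer serializes into and out of communication buffers, against a shared-memory-style all-reduce, where each peer simply loads the others' data, as CXL memory semantics would allow.

```python
# Toy sketch (illustrative; not a real CXL or NCCL implementation):
# count staging copies in two styles of all-reduce over peer buffers.

def network_allreduce(buffers):
    """Network-style path: every remote read goes through a send-side
    serialization copy plus a receive-side copy before it can be used."""
    copies = 0
    results = []
    for i in range(len(buffers)):
        acc = list(buffers[i])
        for j in range(len(buffers)):
            if j == i:
                continue
            wire = list(buffers[j])    # serialize into a "network" buffer
            local = list(wire)         # copy out of the receive buffer
            copies += 2
            acc = [a + b for a, b in zip(acc, local)]
        results.append(acc)
    return results, copies

def shared_memory_allreduce(buffers):
    """Shared-memory-style path: peers read remote data with plain
    loads, so the reduction needs no staging copies at all."""
    total = [sum(col) for col in zip(*buffers)]
    return [list(total) for _ in buffers], 0

bufs = [[1, 2], [3, 4], [5, 6]]        # three peers, two elements each
net_result, net_copies = network_allreduce(bufs)
shm_result, shm_copies = shared_memory_allreduce(bufs)
print(net_result == shm_result)        # both paths produce the same sums
print(net_copies, shm_copies)          # 12 staging copies vs. 0
```

Both paths compute the same reduction; what changes is the number of intermediate copies (and, in a real system, the software intervention each copy implies), which is the overhead the press release describes CXL removing from collective operations.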

SKT x Panmnesia CXL-Based AI Rack

Collaboration Details

Under this collaboration, SK Telecom will lead the design of an architecture optimized for real-world deployment, leveraging its large-scale AI data center construction and operational expertise, along with its experience in AI model development and commercialization.

Panmnesia will implement a CXL-Based AI Rack by applying its link solutions—including CXL Fabric Switches that serve as the core of physical connectivity and Link Controllers responsible for logical integration. Through this approach, the link architecture—previously confined within individual servers—will be extended beyond server boundaries to the rack level and above.

The two companies plan to validate the next-generation AI data center architecture by running real AI models and comprehensively evaluating GPU and memory utilization, latency, and throughput by the end of this year. Following this, they intend to conduct proof-of-concept deployments in large-scale AI data center environments and pursue commercialization and business expansion.

Executive Quotes

Suk Geun Chung, Head of AI CIC at SK Telecom, stated, “The competitiveness of AI data centers now extends beyond GPU performance alone and depends on system-level optimization encompassing memory and data flow. This collaboration will help alleviate the structural bottleneck known as the ‘Memory Wall,’ where data movement and supply cannot keep pace with increasing computational performance, thereby enhancing both the performance and economic efficiency of AI data centers.”

Myoungsoo Jung, CEO of Panmnesia, said, “Next-generation AI infrastructure will be defined not by the performance of individual devices, but by the architecture created through diverse link semiconductors. Together with SK Telecom, we aim to present a high-efficiency AI data center model that will set a new standard in the global market.”

About Panmnesia

Panmnesia is an AI infrastructure company that develops link solutions to make AI data center architectures more efficient. As part of these solutions, the company has developed CXL controllers and CXL switches, and has introduced hybrid link architectures that integrate interconnect technologies such as UALink and NVLink Fusion, along with advanced interconnect and semiconductor technologies including HBM.

Recognized for its technological leadership, Panmnesia secured approximately USD 60 million in Series A funding in 2024 and achieved a company valuation of approximately USD 250 million.

Availability

Panmnesia’s partners can request CXL Fabric Switches (including PCIe 6.4/CXL 3.2 switch samples) and Link Controllers (PCIe 6.4/CXL 3.2 controllers) utilized in this collaboration project. Link Controllers are available either as IP or as custom silicon solutions.

Panmnesia is advancing toward deployment readiness beyond the prototype stage by conducting long-duration operational testing in real-world AI computational environments to verify data transmission stability and interoperability.

Companies incorporating Panmnesia’s link technology into their CPU, GPU, AI accelerator, and memory devices are expected to further strengthen their competitiveness in the data center market by establishing system-level integrated reliability that extends beyond validation at the individual device level. For more information about samples, products, and partnership, please contact [email protected].
