CXL Dominated The 2022 Flash Memory Summit
Monday, November 25, 2024


Enterprise Tech | By Tom Coughlin, Contributor. Opinions expressed by Forbes Contributors are their own. Aug 15, 2022, 12:57am EDT. Connections Getty The Compute Express Link (CXL) has emerged as the dominant architecture for pooling and sharing connected memory devices.

It was developed to support heterogeneous memory with different performance characteristics as well as to include special-purpose processors near memory. During the Monday seminars at the 2022 FMS, the OpenCAPI Consortium, which spawned the Open Memory Interface (OMI), said that it was going to become part of the CXL Consortium. In particular, leaders from OMI and CXL met to state that, "we announce that OCC and CXL are entering an agreement, which if approved and agreed upon by all parties, would transfer the OpenCAPI and OMI specifications and OpenCAPI Consortium assets to the CXL consortium."

The image below shows a beefy OMI differential DDR module that was being passed around at the FMS. Large OMI memory card Photo by Tom Coughlin OMI offers a combination of faster and higher-capacity memory directly connected to a server CPU with low latency, which could replace DDR or HBM memory. We can call this near memory.

CXL is an interface running over the PCIe bus that provides arbitrated access to heterogeneous memory. CXL has somewhat higher latency but allows pooling of memory that can be shared between CPUs. We can call this far memory.
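The near/far split described above amounts to a placement policy: latency-critical data stays on directly attached near memory, while shareable or capacity-driven data can live in the far, pooled tier. A minimal sketch of such a policy follows; the tier names and latency figures are hypothetical and purely illustrative, not from any specification.

```python
# Illustrative sketch of a near/far memory placement policy.
# Latency numbers are rough, assumed values for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class MemoryTier:
    name: str
    latency_ns: int   # assumed rough access latency
    shareable: bool   # can the tier be pooled across CPUs?

NEAR = MemoryTier("OMI/DDR (near)", latency_ns=100, shareable=False)
FAR = MemoryTier("CXL pool (far)", latency_ns=300, shareable=True)

def place_allocation(latency_sensitive: bool, needs_sharing: bool) -> MemoryTier:
    """Latency-critical, private data goes near; shared or bulk data goes far."""
    if needs_sharing or not latency_sensitive:
        return FAR
    return NEAR

print(place_allocation(latency_sensitive=True, needs_sharing=False).name)
print(place_allocation(latency_sensitive=True, needs_sharing=True).name)
```

Real systems would make this decision in the OS or allocator based on NUMA-style distance information, but the two-tier framing is the core idea.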

CXL was developed to support volatile memory such as DRAM combined with persistent memory, such as Intel’s Optane. The joining of OMI with CXL provides a solution for near and far memory, going beyond the original plan for CXL. Several of the larger SSD companies announced or talked about flash-based SSDs with the CXL interface.

It seems that NAND flash will try to fill some of the niches that 3D XPoint (Optane) was going to fill. Kioxia discussed NAND flash for memory expansion with CXL as well as NAND-based SSDs for performance and capacity storage. The company's XL-Flash storage-class memory SSDs provide higher endurance and performance.

The SSD they showed was an NVMe drive, but in their keynote presentation they talked about providing both BiCS Flash and XL-Flash with CXL interfaces to address various workloads. This is shown in the image below. XL-Flash is currently SLC, but the company said that XL-Flash with MLC is coming.

Kioxia plans on CXL SSDs Tom Coughlin Photo In their keynote talk, FADU showed a roadmap to a PCIe Gen6 CXL SSD by 2024; see below. FADU plans for CXL SSDs Tom Coughlin photo Samsung introduced a memory-semantic SSD based on the CXL protocol for AI/ML applications, shown below. The product provides lower latency using an internal DRAM cache, with a larger data space using NAND flash over a CXL interface.

Small IO’s come out of the DRAM and normal IOs are fed from the NAND flash. They showed a 20X improvement in random read performance compared to a regular PCIe 4. 0 SSD.

Samsung CXL SSD Tom Coughlin photo Samsung was also showing a CXL memory expander with 512GB capacity. Marvell is also supporting CXL in its controllers, with a vision of full data center composability as shown below. Their presentation also hinted at combined data access using NVMe and CXL, both over the PCIe bus.

Marvell vision of CXL in the data center Tom Coughlin photo Marvell went further to talk about a composable infrastructure with NVMe-oF and CXL, as shown below. Possible interplay between NVMe-oF and CXL Tom Coughlin photo SK hynix was showing a CXL memory expander as well as an elastic CXL memory solution FPGA prototype in their booth; see below. SK hynix announced the development of the company's first DDR5-based CXL samples.

SK hynix CXL memory prototype Tom Coughlin photo Microchip talked about their PM8596 Smart Memory Solutions product, which supports the Open Memory Interface (OMI, now part of the CXL Consortium). This product received an FMS award in 2019. The company also announced their first CXL smart memory controller for data center applications.

The SMC 2000 controller family delivers DDR4/5 memory bandwidth and capacity expansion for AI/ML and other data-intensive applications. The image below is from their keynote talk. Microchip CXL controller Photo by Ron Dennison Silicon Motion didn't announce a CXL product, but their keynote indicated that CXL is on their radar.

Intel’s Debendra Das Sharma gave a keynote talk focused on CXL. He spoke about interconnects and load-store IO and the role of PCIe/CXL as well as the Universal Chiplet Interconnect Express (UCIe); see below. He invited more participants to join the UCIe Consortium.

Memory architecture with CXL and UCIe Photo by Ron Dennison The following image shows the capabilities a CXL-enabled environment provides versus PCIe alone. CXL enables expanding memory beyond what DDR allows and sharing it between CPUs. CXL (and perhaps OMI, now part of the CXL Consortium) combined with DDR provide the memory requirements for advanced data processing applications.

CXL-enabled architecture compared to PCIe only Photo by Ron Dennison A CXL switch allows pooling and sharing memory between CPUs, as shown below. This is enabled by CXL 2.0.
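The pooling that a CXL 2.0 switch enables is, at the resource-management level, an accounting problem: a shared capacity that multiple hosts can draw from and return to. The sketch below models only that accounting; the class and method names are hypothetical and no real CXL fabric manager API is implied.

```python
# Minimal sketch of CXL 2.0-style memory pooling: switch-attached capacity
# is handed out to multiple hosts and reclaimed on release.
class CXLMemoryPool:
    def __init__(self, total_gb: int):
        self.total_gb = total_gb
        self.allocations: dict[str, int] = {}  # host -> GB currently held

    def free_gb(self) -> int:
        return self.total_gb - sum(self.allocations.values())

    def allocate(self, host: str, gb: int) -> bool:
        """Grant capacity to a host if the pool can cover it."""
        if gb > self.free_gb():
            return False
        self.allocations[host] = self.allocations.get(host, 0) + gb
        return True

    def release(self, host: str) -> None:
        """Return all of a host's capacity to the pool."""
        self.allocations.pop(host, None)

pool = CXLMemoryPool(total_gb=512)
pool.allocate("cpu0", 256)
pool.allocate("cpu1", 192)
```

The win over fixed DDR provisioning is that unused capacity on one host becomes available to another instead of sitting stranded.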

A CXL switch allows memory pooling Photo by Ron Dennison Debendra announced the release of the CXL 3.0 specification. The image below shows the CXL 3.0 enhancements. They include a doubling of bandwidth to 64 GT/s with no additional latency, protocol enhancements allowing peer-to-peer access to HDM (host-managed device) memory, and composable systems with spine/leaf architectures at the rack/pod levels. He speculated that over time CXL could become the only external memory attach point and could be used for on-package memory.
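A quick back-of-the-envelope check puts the 64 GT/s figure in context. For a x16 link, raw bandwidth per direction scales linearly with transfer rate and lane count (this ignores encoding and protocol overhead, so the numbers are upper bounds):

```python
# Raw per-direction link bandwidth in GB/s for a given transfer rate and
# lane count. One transfer carries ~1 bit per lane; divide by 8 for bytes.
# Encoding/protocol overhead is ignored, so these are raw upper bounds.
def raw_bandwidth_gbps(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * lanes / 8

cxl2_x16 = raw_bandwidth_gbps(32, 16)  # CXL 2.0 generation: 32 GT/s
cxl3_x16 = raw_bandwidth_gbps(64, 16)  # CXL 3.0: doubled to 64 GT/s
print(cxl2_x16, cxl3_x16)  # 64.0 vs 128.0 GB/s raw, per direction
```

Doubling the signaling rate thus doubles a x16 link from roughly 64 GB/s to roughly 128 GB/s of raw bandwidth per direction, which is what makes CXL plausible as a primary memory attach point.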

CXL 3.0 announced enhancements Photo by Ron Dennison Besides these keynote talks, CXL was covered in multiple FMS sessions and in demonstrations in the exhibits area. In a real sense, CXL was one of the biggest developments at the 2022 FMS.

The 2022 FMS was dominated by CXL, used for DRAM and also NAND flash devices. OpenCAPI and OMI joined the CXL consortium. All the major flash memory companies announced or said they were working on CXL NAND-flash devices.

The controller companies at the conference said that they were supporting CXL interface products. Intel announced CXL 3.0 and talked about a future where CXL would be the ubiquitous memory fabric.



From: forbes
URL: https://www.forbes.com/sites/tomcoughlin/2022/08/15/cxl-dominated-the-2022-flash-memory-summit/
