SNIA Persistent Memory And Computational Storage Summit, Part 1

Tom Coughlin, Contributor. Opinions expressed by Forbes Contributors are their own. Jun 11, 2022, 05:47pm EDT

SNIA held its Persistent Memory and Computational Storage Summit virtually this year, as it did last year. The Summit explored some of the latest developments in these areas.

Let’s explore some of the insights from the first day of that virtual conference. Dr. Yang Seok, VP of the Memory Solutions Lab at Samsung, spoke about the company’s SmartSSD.

He argued that computational storage devices, which off-load processing from CPUs, may reduce energy consumption and thus provide a green computing alternative. He pointed out that data center energy usage has stayed flat at about 1% of global electricity use since 2010 (200-250 TWh per year in 2020) due to technology innovations. Nevertheless, there is a challenging milestone ahead: reducing greenhouse emissions from data centers by 53% from 2020 to 2030.

Computational storage SSDs (CSDs) can be used to off-load work from a CPU, freeing it for other tasks, or to accelerate processing locally, closer to the stored data. This local processing is done with less power than a CPU, can be used for data reduction operations in the storage device (enabling higher storage capacities and more efficient storage), avoids data movement, and can be part of a virtualization environment. CSDs appear to be competitive and to use less energy for IO-intensive tasks.
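To make the data-movement argument concrete, here is a minimal Python sketch (illustrative only, not a real CSD API) contrasting a host-side filter, which pulls every block across the interconnect, with a device-side filter that returns only the matches:

```python
# Hypothetical sketch: why pushing a filter into a computational storage
# device (CSD) reduces data movement. All names are illustrative.

def host_side_filter(device_blocks, predicate):
    """Host reads every block over the interconnect, then filters."""
    transferred = sum(len(b) for b in device_blocks)  # all data crosses the bus
    matches = [b for b in device_blocks if predicate(b)]
    return matches, transferred

def csd_side_filter(device_blocks, predicate):
    """The CSD runs the predicate in place; only matches cross the bus."""
    matches = [b for b in device_blocks if predicate(b)]  # runs on the device
    transferred = sum(len(b) for b in matches)
    return matches, transferred

blocks = [b"error: disk", b"ok", b"error: net", b"ok", b"ok"]
pred = lambda b: b.startswith(b"error")

m1, t1 = host_side_filter(blocks, pred)
m2, t2 = csd_side_filter(blocks, pred)
assert m1 == m2   # same answer either way
assert t2 < t1    # but far fewer bytes moved with the CSD
```

The same reasoning explains the scalability claim later in the article: for read-intensive work, the bytes that never leave the drive are bandwidth the host link never has to carry.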

Also, the computational power of CSDs scales with the number of CSDs used. Samsung announced its first SmartSSD in 2020 and says that its next generation will be available soon (see figure below). The next generation will allow greater customization of what the processor in the CSD can do, enabling its use in more applications and perhaps saving energy for many processing tasks.

Samsung’s 2nd Generation SmartSSD (SNIA 2022 PM and CS Summit)

Stephen Bates from Eideticom and Kim Malone from Intel spoke about new standards developments for NVMe computational storage. One of the additions, in the Computational Programs command set, is the computational namespace. This is an entity in an NVMe subsystem that is capable of executing one or more programs; it may have asymmetric access to subsystem memory and may support only a subset of all possible program types.

The conceptual image below gives an idea of how this works.

Computational Namespaces for Computational Storage (SNIA 2022 PM and CS Summit)

There is support for both device-defined and downloadable programs. Device-defined programs are fixed programs provided by the manufacturer, or functionality implemented by the device that can be called as a program, such as compression or decryption.
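A toy model of this split, with purely illustrative names (this is not the NVMe interface itself): a namespace holds fixed, vendor-provided programs and also accepts programs loaded by the host:

```python
# Illustrative model of the computational-namespace idea: some programs are
# device-defined (fixed by the vendor), others are downloaded by the host.
# Names and methods here are hypothetical, not the NVMe command set.
import zlib

class ComputationalNamespace:
    def __init__(self, device_programs):
        self.programs = dict(device_programs)  # fixed, vendor-provided
    def download(self, name, fn):
        self.programs[name] = fn               # host-loaded program
    def execute(self, name, data):
        return self.programs[name](data)

# "compress" stands in for a device-defined program like the ones mentioned above.
ns = ComputationalNamespace({"compress": zlib.compress})
# The host downloads its own program into the namespace.
ns.download("count_zeros", lambda d: d.count(0))

assert ns.execute("count_zeros", bytes([0, 1, 0, 2])) == 2
assert zlib.decompress(ns.execute("compress", b"abc" * 10)) == b"abc" * 10
```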

Downloadable programs are loaded into a Computational Programs namespace by the host.

Andy Rudoff from Intel gave an update on persistent memory, walking through the developments along a timeline.

He said that by 2019 Intel Optane PMem was generally available. The image below shows Intel’s approach to connecting to the memory bus with Optane PMem.

Connecting to the Memory Bus with Optane PMem (SNIA 2022 PM and CS Summit)

Note that direct access to memory (DAX) is the key to this use of Optane PMem.
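A minimal sketch of the DAX programming model: the application maps a file into its address space and accesses it with ordinary loads and stores rather than read/write system calls. On a DAX-mounted PMem file system the mapping would go straight to persistent media with no page cache in between; here an ordinary temp file is used so the sketch runs anywhere:

```python
# Sketch of the DAX-style access pattern: mmap a file and touch it with
# loads/stores. On real PMem with DAX, the same pattern bypasses the page
# cache; a plain temp file stands in here so this runs on any machine.
import mmap, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.truncate(4096)                 # size the region first

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:5] = b"hello"            # a store into the mapping, not a write()
        m.flush()                    # on PMem: ensure stores reach the media

with open(path, "rb") as f:
    assert f.read(5) == b"hello"     # data persisted through the mapping
```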

The following image shows a timeline of PMem-related developments since 2012.

Timeline for PMem-Related Developments since 2012 (SNIA 2022 PM and CS Summit)

Andy went through several customer use cases for Intel’s Optane PMem, including Oracle Exadata with PMem access through RDMA, Tencent Cloud and Baidu. He also discussed future PMem directions, particularly paired with CXL.

These include accelerating AI/ML and data-centric applications with temporal caching and persistent metadata storage. Jinpyo Kim from VMware and Michael Mesnier from Intel Labs spoke on computational storage in a virtualized environment, in collaboration with MinIO. Some of the uses presented included data scrubbing (reading data and detecting any accumulated errors) of a MinIO storage stack and of a Linux file system.
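The scrubbing idea can be sketched as a checksum re-verification pass (a hypothetical simplification; MinIO's actual scrubber works against its own storage format). Running such a scrub inside a CSD means the full data never has to cross the host interconnect, only the verdicts do:

```python
# Sketch of data scrubbing: periodically re-read stored objects and verify
# checksums to catch silent corruption. Object names and layout are
# illustrative, not MinIO's real on-disk format.
import hashlib

def scrub(objects, checksums):
    """Return the names of objects whose stored checksum no longer matches."""
    bad = []
    for name, data in objects.items():
        if hashlib.sha256(data).hexdigest() != checksums[name]:
            bad.append(name)
    return bad

objs = {"a": b"payload-1", "b": b"payload-2"}
sums = {n: hashlib.sha256(d).hexdigest() for n, d in objs.items()}
objs["b"] = b"payload-2-corrupted"     # simulate bit rot on one object
assert scrub(objs, sums) == ["b"]      # only the damaged object is flagged
```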

They found that doing this computation close to the stored data was 50% to 18X more scalable, depending upon the link speed (the process is read-intensive). VMware, in a research prototype with UC Irvine, did a project on near-storage log analytics and found an order of magnitude better query performance compared to using software alone; this capability is being ported to a Samsung SmartSSD. VMware also used NGD computational storage devices for running a Greenplum MPP database.

They have done a lot of work on virtualizing CSDs (they call them vCSDs) on vSphere/vSAN, which allows sharing hardware accelerators more effectively and migrating a vCSD between compatible hosts. CSDs can be used to disaggregate cloud-native apps and offload storage-intensive functions. The figure below shows collaborative efforts between MinIO, VMware and Intel to use CSDs.

VMware, MinIO and Intel Collaboration (SNIA 2022 PM and CS Summit)

Chris Petersen from Meta talked about AI memory at Meta. Meta is using AI for many applications, at scale, from the data center to the edge. Because AI workloads scale so quickly, they require more vertical integration, from software requirements down to hardware design.

A considerable portion of capacity needs high-bandwidth accelerator memory, but inference holds a bigger portion of its capacity at low bandwidth compared to training. Inference also has tight latency requirements. They found that a tier of memory beyond HBM and DRAM can be leveraged, particularly for inference.

They found that for software-defined memory backed by SSDs, they had to use SCM SSDs (Optane SSDs). The use of faster (SCM) SSDs reduced the need to scale out and thus reduced power. The figure below shows Meta’s view of the memory tiers required for AI applications, indicating a need for higher-latency CXL memory alongside higher-performance and higher-capacity memory/storage to optimize performance, cost and efficiency.
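The tiering idea can be modeled as a lookup that falls through from the fastest, smallest tier to slower, larger ones; the tier names and the embedding data below are illustrative only, not Meta's implementation:

```python
# Toy model of tiered memory for inference: probe the fastest tier first,
# fall through to slower, larger tiers. Contents are illustrative.
TIERS = [("HBM", {}), ("DRAM", {}), ("SCM-SSD", {})]

def put(tier_name, key, value):
    dict(TIERS)[tier_name][key] = value   # stores are shared by reference

def get(key):
    """Return (tier_found, value), checking tiers fastest-first."""
    for name, store in TIERS:
        if key in store:
            return name, store[key]
    raise KeyError(key)

put("HBM", "hot_embedding", [0.1, 0.2])       # hot data stays in fast memory
put("SCM-SSD", "cold_embedding", [0.3, 0.4])  # cold bulk lives on SCM SSDs

assert get("hot_embedding")[0] == "HBM"
assert get("cold_embedding")[0] == "SCM-SSD"
```

The trade-off the talk describes falls out of this shape: the cheaper the slow tier per byte, and the faster its fall-through access, the less DRAM and scale-out the system needs.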

Meta AI Memory Tiers (SNIA 2022 PM and CS Summit)

Arthur Sainio and Pekon Gupta from SMART Modular Technologies spoke about the use of NVDIMM-N in DDR5 and CXL-enabled applications. These NVDIMM-Ns include a battery backup that allows the DRAM on the module to be written back to flash in case of power loss. The technology is also being developed for CXL applications, with an NV-XMM specification for devices that have an on-board power source for backup power and operate with the standard programming model for CXL Type-3 devices.
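The save/restore mechanism can be sketched conceptually (this models the idea only, not module firmware): on a power-loss signal, the backup energy source lets the on-module controller copy DRAM contents to flash, and on the next power-up the image is copied back:

```python
# Conceptual model of the NVDIMM-N mechanism described above: DRAM contents
# survive power loss because backup energy funds a DRAM-to-flash copy.
# Class and method names are illustrative.
class NVDIMM:
    def __init__(self):
        self.dram, self.flash = {}, {}
    def on_power_loss(self):
        self.flash = dict(self.dram)   # runs on battery/supercap energy
    def on_power_restore(self):
        self.dram = dict(self.flash)   # image restored before use

m = NVDIMM()
m.dram["journal"] = b"committed-tx-42"
m.on_power_loss()
m.dram.clear()                         # DRAM contents are lost without power
m.on_power_restore()
assert m.dram["journal"] == b"committed-tx-42"
```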

The figure below shows the form factors for these devices.

CXL NVDIMM-N Form Factors (SNIA 2022 PM and CS Summit)

In addition to these talks, David Eggleston from Intuitive Cognitive Consulting moderated a panel discussion with the day’s speakers, and there was a Birds of a Feather session on computational storage at the end of the day, moderated by Scott Shadley from NGD and Jason Molgaard from AMD. Samsung, Intel, Eideticom, VMware, Meta and SMART Modular gave insightful presentations on persistent memory and computational storage on the first day of the 2022 SNIA Persistent Memory and Computational Storage Summit.

Follow me on Twitter or LinkedIn. Check out my website.


From: forbes
URL: https://www.forbes.com/sites/tomcoughlin/2022/06/11/snia-persistent-memory-and-computational-storage-summit-part-1/
