144 episodes

Every week the Storage Developer Conference (SDC) podcast presents important technical topics to the Storage Developer community. Each episode is hand selected by the SNIA Technical Council from the presentations at our annual Storage Developer Conference. The link to the slides is available in the show notes at www.snia.org/podcasts.

Storage Developer Conference
SNIA Technical Council

    • Technology
    • 5.0 • 5 Ratings

    #144: Key Value Standardized

    The NVMe Key Value (NVMe-KV) Command Set has been standardized as one of the new I/O command sets that NVMe supports. Additionally, SNIA has standardized a Key Value API that works with the NVMe-KV Command Set and allows access to data on a storage device using a key rather than a block address. The NVMe-KV Command Set uses a key to store a corresponding value on non-volatile media, then retrieves that value from the media by specifying the corresponding key. Key Value allows users to access key-value data without the costly and time-consuming overhead of additional translation tables between keys and logical blocks. This presentation will discuss the benefits of Key Value storage, present the major features of the NVMe-KV Command Set and how it interacts with the NVMe standards, and present open source work that is available to take advantage of Key Value storage.
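
    To make the access model concrete, here is a self-contained toy in C that stores and retrieves a value by key, the way the NVMe-KV Store and Retrieve commands do at the device level. The kv_store/kv_retrieve names and the in-memory table are invented for illustration; they are not the SNIA KV API or the NVMe-KV command signatures, which live in the published specifications.

        #include <stdio.h>
        #include <string.h>

        /* Toy "device": values are addressed directly by key, so no
         * key-to-logical-block translation table is needed. These names
         * are hypothetical placeholders, not the SNIA KV API. */
        typedef struct { const char *key; const char *value; } kv_pair;

        static kv_pair dev[16];
        static int     dev_used = 0;

        /* Store a value under a key (models the NVMe-KV Store command). */
        static int kv_store(const char *key, const char *value) {
            for (int i = 0; i < dev_used; i++)
                if (strcmp(dev[i].key, key) == 0) { dev[i].value = value; return 0; }
            if (dev_used == 16) return -1;        /* toy device is full */
            dev[dev_used].key = key;
            dev[dev_used].value = value;
            dev_used++;
            return 0;
        }

        /* Retrieve the value for a key (models the NVMe-KV Retrieve command). */
        static const char *kv_retrieve(const char *key) {
            for (int i = 0; i < dev_used; i++)
                if (strcmp(dev[i].key, key) == 0) return dev[i].value;
            return NULL;
        }

        int main(void) {
            kv_store("sensor-42", "temperature=21.5");
            /* Key in, value out: no logical block address involved. */
            printf("%s\n", kv_retrieve("sensor-42"));
            return 0;
        }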

    Learning Objectives: Present the standardization of the SNIA KV API, present the standardization of the NVMe Key Value Command Set, present the benefits of Key Value in computational storage, and present open source work on Key Value storage.

    • 50 min
    #143: Deep Compression at Inline Speed for All-Flash Array

    The rapid improvement in overall $/GByte has driven high-performance All-Flash Arrays to be increasingly adopted in both enterprises and cloud datacenters. Besides raw NAND density scaling with continued semiconductor process improvement, data reduction techniques have played, and will continue to play, a crucial role in further reducing the overall effective cost of All-Flash Arrays. One of the key data reduction techniques is compression. Compression can be performed both inline and offline. In fact, the best All-Flash Arrays often do both: fast inline compression at a lower compression ratio, and slower, opportunistic offline deep compression at a significantly higher compression ratio. However, with the rapid growth of both capacity and sustained throughput due to the consolidation of workloads on a shared All-Flash Array platform, a growing percentage of the data never gets the opportunity for deep compression. There is a deceptively simple solution: inline deep compression, with the additional benefits of reduced flash wear and networking load. The challenge, however, is the prohibitive number of CPU cycles required. Deep compression often requires 10x or more CPU cycles compared with typical fast inline compression. Even worse, the challenge will continue to grow: CPU performance scaling has slowed down significantly (the breakdown of Dennard scaling), but the performance of All-Flash Arrays has been growing at a far greater pace. In this talk, I will explain how we can meet this challenge with a domain-specific hardware design. The hardware platform is a programmable FPGA-based PCIe card. It can sustain 5+ GByte/s of deep compression throughput with low latency even for small data block sizes, exploiting the TByte/s of bandwidth, sub-10ns latency, and almost unlimited parallelism available on a modern mid-range FPGA device. The hardware compression algorithm is trained with the vast amount of data available to our systems. Our benchmarks show it can match or outperform some of the best software compressors available in the market without taxing the CPU.
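
    The ratio-versus-cycles tradeoff described above can be reproduced on any machine with zlib, using compression levels as stand-ins for fast inline (level 1) and deep (level 9) compression. The FPGA engine from the talk is of course not zlib, so this sketch only illustrates the shape of the tradeoff, not the talk's results. Build with -lz.

        #include <stdio.h>
        #include <zlib.h>

        static unsigned char src[1 << 16];   /* 64 KiB of synthetic input */
        static unsigned char dst[1 << 17];   /* comfortably above compressBound(64 KiB) */

        int main(void) {
            /* Mildly repetitive input so both levels have redundancy to find. */
            for (size_t i = 0; i < sizeof src; i++)
                src[i] = (unsigned char)("storage"[i % 7] + (i / 512) % 4);

            /* Level 1 approximates fast inline compression; level 9 deep compression. */
            for (int level = 1; level <= 9; level += 8) {
                uLongf dlen = sizeof dst;
                if (compress2(dst, &dlen, src, sizeof src, level) != Z_OK)
                    return 1;
                printf("level %d: %lu -> %lu bytes (ratio %.2f)\n",
                       level, (unsigned long)sizeof src, (unsigned long)dlen,
                       (double)sizeof src / (double)dlen);
            }
            return 0;
        }

    On real data, the deep setting typically costs many times the CPU cycles of the fast one for a few extra points of ratio, which is exactly the gap the talk's dedicated hardware is meant to close.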

    Learning Objectives: Hardware architecture for inline deep compression, design of a hardware deep compression engine, and inline and offline compression for All-Flash Arrays.

    • 35 min
    #142: ZNS: Enabling in-place Updates and Transparent High Queue-Depths

    Zoned Namespaces represent the first step towards the standardization of Open-Channel SSD concepts in NVMe. Specifically, ZNS brings the ability to implement data placement policies in the host, thus providing a mechanism to (i) lower the write-amplification factor (WAF), (ii) lower NAND over-provisioning, and (iii) tighten tail latencies. Initial ZNS architectures envisioned large zones targeting archival use cases. This motivated the creation of the "Append Command", a specialization of nameless writes that makes it possible to increase the device I/O queue depth beyond the initial limitation imposed by the zone write pointer. While this is an elegant solution, backed by academic research, the changes it requires in file systems and applications are making adoption more difficult. As an alternative, we have proposed exposing a per-zone random write window that allows out-of-order writes around the existing write pointer. This solution brings two benefits over the "Append Command": First, it allows I/Os to arrive out of order without any host software changes. Second, it allows in-place updates within the window, which enables existing log-structured file systems and applications to retain their metadata model without incurring a WAF penalty. In this talk, we will cover in detail the concept of the random write window, the use cases it addresses, and the changes we have made in the Linux stack to support it.
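
    The sketch below is a toy C model of the per-zone random write window idea described above. The zone size, window size, and acceptance policy are invented for illustration; the real semantics are defined by the proposal and the NVMe ZNS specification, not this sketch. Writes may land anywhere inside the window, and the write pointer advances over the contiguous filled prefix.

        #include <stdbool.h>
        #include <stdio.h>

        #define ZONE_BLOCKS 64
        #define WINDOW      8      /* writes may land anywhere in [wp, wp + WINDOW) */

        static bool     filled[ZONE_BLOCKS];
        static unsigned wp = 0;    /* zone write pointer */

        static bool zone_write(unsigned block) {
            if (block < wp || block >= wp + WINDOW || block >= ZONE_BLOCKS)
                return false;              /* outside the window: rejected */
            filled[block] = true;
            while (wp < ZONE_BLOCKS && filled[wp])
                wp++;                      /* wp advances over the contiguous prefix */
            return true;
        }

        int main(void) {
            /* Out-of-order arrivals within the window succeed... */
            printf("write 2:  ok=%d wp=%u\n", zone_write(2), wp);   /* ok, wp stays 0 */
            printf("write 0:  ok=%d wp=%u\n", zone_write(0), wp);   /* ok, wp -> 1   */
            printf("write 1:  ok=%d wp=%u\n", zone_write(1), wp);   /* ok, wp -> 3   */
            /* ...but a write beyond the window is rejected. */
            printf("write 20: ok=%d wp=%u\n", zone_write(20), wp);
            return 0;
        }

    Note how blocks 2, 0, 1 arrive out of order yet all succeed without an append-style nameless write, which is what lets existing log-structured software keep its metadata model.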

    Learning Objectives: Learn about the general ZNS architecture and ecosystem, learn about the use cases supported in ZNS and the design decisions in the current specification with regard to in-place updates and multiple in-flight I/Os, and learn about new features being brought to NVMe to support in-place updates and transparent high queue depths.

    • 45 min
    #141: Unlocking the New Performance and QoS Capabilities of the Software-Enabled Flash API

    The Software-Enabled Flash API gives unprecedented control to application architects and developers to redefine the way they use flash for their hyperscale applications, by fundamentally redefining the relationship between the host and solid-state storage. Dive deep into new Software-Enabled Flash concepts such as virtual devices, Quality of Service (QoS) domains, Weighted Fair Queueing (WFQ), Nameless Writes and Copies, and controller offload mechanisms. This talk by KIOXIA (formerly Toshiba Memory) will include real-world examples of using the new API to define QoS and latency guarantees, enforce workload isolation, minimize write amplification through application-driven data placement, and achieve higher performance with customized flash translation layers (FTLs).
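
    For readers new to these concepts, the sketch below shows how virtual devices and QoS domains might relate to each other. Every identifier in it is an invented placeholder, not the actual Software-Enabled Flash API; the real definitions live in KIOXIA's published materials.

        #include <stdio.h>

        /* Hypothetical types, invented for illustration only: a virtual device
         * is a hardware-isolated slice of flash dies, and a QoS domain carries
         * a weighted-fair-queueing (WFQ) share within its virtual device. */
        typedef struct { int dies; }                       virtual_device;
        typedef struct { virtual_device *vd; int weight; } qos_domain;

        int main(void) {
            /* Isolate a workload on its own dies for die-level isolation. */
            virtual_device vd_fast = { .dies = 4 };

            /* Two domains on the same dies, differentiated only by WFQ weight. */
            qos_domain latency_sensitive = { .vd = &vd_fast, .weight = 8 };
            qos_domain background_scrub  = { .vd = &vd_fast, .weight = 1 };

            /* Under WFQ, the latency-sensitive domain would receive 8x the
             * scheduling share of the background domain on shared hardware. */
            printf("shares: %d vs %d\n",
                   latency_sensitive.weight, background_scrub.weight);
            return 0;
        }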

    Learning Objectives: Provide an in-depth dive into using the Software-Enabled Flash API, map application workloads to Software-Enabled Flash structures, and understand how to implement QoS requirements using the API.

    • 51 min
    #140: Introduction to libnvme

    The NVM Express workgroup introduces new features frequently, and the Linux kernel support for these devices evolves with it. These ever-moving targets create challenges for tool developers as new interfaces are created or older ones change. This talk will cover some of these recent features and enhancements, and introduce the open source 'libnvme' project: a library, available in public git repositories, that provides access to all NVM Express features through convenient abstractions over the kernel interfaces that interact with your devices. The session will demonstrate integrating the library with other programs, and also provide an opportunity for the audience to share what additional features they would like to see from this common library in the future.
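
    As a flavor of what those kernel abstractions look like, here is a minimal sketch that identifies a controller through one of libnvme's ioctl wrappers. It assumes a device node at /dev/nvme0 and linking with -lnvme; function and field names should be checked against your installed libnvme version, since the library evolves with the specification.

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <libnvme.h>

        int main(void) {
            int fd = open("/dev/nvme0", O_RDONLY);
            if (fd < 0) { perror("open /dev/nvme0"); return 1; }

            /* nvme_identify_ctrl() wraps the NVMe Identify Controller admin
             * command, hiding the raw ioctl plumbing from the caller. */
            struct nvme_id_ctrl id;
            if (nvme_identify_ctrl(fd, &id) != 0) {
                perror("nvme_identify_ctrl");
                close(fd);
                return 1;
            }

            /* mn/sn are fixed-width, space-padded fields, hence the precisions. */
            printf("model:  %.40s\nserial: %.20s\n", id.mn, id.sn);
            close(fd);
            return 0;
        }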

    Learning Objectives: Explain protocol and host operating system interaction complexities, introduce libnvme and how it manages those relationships, and demonstrate integration with applications.

    • 45 min
    #139: Use Cases for NVMe-oF for Deep Learning Workloads and HCI Pooling

    The efficiency, performance, and choice that NVMe-oF offers are enabling some unique and interesting use cases, from AI/ML to hyperconverged infrastructures. Artificial intelligence workloads process massive amounts of data from structured and unstructured sources. Today most deep learning architectures rely on local NVMe to serve tagged and untagged datasets into map-reduce systems and neural networks for correlation. NVMe-oF for deep learning infrastructures enables a shared data model for ML/DL pipelines without sacrificing overall performance or training times. NVMe-oF is also enabling HCI deployments to scale without adding more compute, enabling end customers to reduce dark flash and cut cost. The talk explores these and several other innovative technologies driving the next storage connectivity revolution.

    Learning Objectives: Storage architectures for deep learning workloads, extending the reach of HCI platforms using NVMe-oF, and Ethernet Bunch of Flash architectures.

    • 58 min
