Extending the HDF Library to Support
Intelligent I/O Buffering for Deep Memory and Storage Hierarchy Systems
(NSF OCI-1835764)
https://github.com/hdfgroup/hermes
1) Lack of automated data movement between tiers; it is currently left to the users.
2) Lack of intelligent data placement in the DMSH.
1) Lack of expertise from the user.
2) Lack of existing software for managing tiers of heterogeneous buffers.
3) Lack of native buffering support in HDF5.
Modern high-performance computing (HPC) applications generate massive amounts of data. However, the performance improvement of disk-based storage systems has been much slower than that of memory, creating a significant Input/Output (I/O) performance gap. To reduce this gap, storage subsystems are undergoing extensive changes, adopting new technologies and adding more layers to the memory/storage hierarchy. With a deeper hierarchy, the data movement complexity of memory systems increases significantly, making it harder to utilize the potential of the deep memory and storage hierarchy (DMSH) design.
As we move towards the exascale era, the I/O bottleneck is a performance problem that the HPC community must solve. DMSHs with multiple levels of memory/storage layers offer a feasible solution but are very complex to use effectively. Ideally, the presence of multiple layers of storage should be transparent to applications, without sacrificing I/O performance. There is a need to enhance and extend current software systems to support data access and movement transparently and effectively under DMSHs.
Hierarchical Data Format (HDF) technologies are a set of current I/O solutions addressing the problems of organizing, accessing, analyzing, and preserving data. The HDF5 library is widely popular within the scientific community. Among the high-level I/O libraries used in DOE labs, HDF5 is the undeniable leader with 99% of the share. HDF5 addresses the I/O bottleneck by hiding the complexity of performing coordinated I/O to single, shared files, and by encapsulating general-purpose optimizations. While HDF technologies, like other existing I/O middleware, are not designed to support DMSHs, their wide popularity and middleware nature make HDF5 an ideal candidate to enable, manage, and supervise I/O buffering under DMSHs.
This project proposes the development of Hermes, a heterogeneous-aware, multi-tiered, dynamic, and distributed I/O buffering system that will significantly accelerate I/O performance.
This project proposes to extend HDF technologies with the Hermes design. Both Hermes itself and the proposed enhancement of HDF5 are new. We believe that the combination of DMSH I/O buffering and HDF technologies is a practical, achievable solution that can efficiently support scientific discovery.
Here, vertical means accessing data to/from different levels within a node, and horizontal means spreading/gathering data across remote compute nodes.
Here, selective means that a given layer, e.g., NVMe, is used only for selected data.
The buffering schema can be changed dynamically based on messaging traffic.
By learning the application's access pattern, we can adapt prefetching algorithms and cache replacement policies at runtime.
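To make this concrete, here is a minimal sketch of runtime adaptation driven by the observed access pattern; the class name, the stride-detection heuristic, and the prefetch depths are illustrative assumptions, not Hermes internals.

#include <cstddef>
#include <deque>

// Illustrative sketch only: adapt the prefetch depth at runtime based on the
// observed access pattern. Names and thresholds are assumptions, not the
// actual Hermes implementation.
class AdaptivePrefetcher {
 public:
  // Record an accessed offset and re-evaluate the pattern.
  void OnAccess(std::size_t offset) {
    history_.push_back(offset);
    if (history_.size() > kWindow) history_.pop_front();
    prefetch_depth_ = LooksSequential() ? 8 : 0;  // deep prefetch vs. none
  }

  std::size_t prefetch_depth() const { return prefetch_depth_; }

 private:
  // A window of increasing, evenly spaced offsets is treated as a
  // sequential (and therefore prefetch-friendly) pattern.
  bool LooksSequential() const {
    if (history_.size() < 3) return false;
    long long stride = static_cast<long long>(history_[1]) -
                       static_cast<long long>(history_[0]);
    if (stride <= 0) return false;
    for (std::size_t i = 2; i < history_.size(); ++i) {
      long long diff = static_cast<long long>(history_[i]) -
                       static_cast<long long>(history_[i - 1]);
      if (diff != stride) return false;
    }
    return true;
  }

  static constexpr std::size_t kWindow = 16;
  std::deque<std::size_t> history_;
  std::size_t prefetch_depth_ = 0;
};

The same kind of online signal could also drive the choice of cache replacement policy; the point is only that the decision is revisited as new accesses arrive.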
Enables, manages, and supervises I/O operations in the Deep Memory and Storage Hierarchy (DMSH).
Offers selective and dynamic layered data placement.
Is modular, extensible, and performance-oriented.
Supports a wide variety of applications (scientific, BigData, etc.).
Large amount of RAM, local NVMe and/or SSD device, shared burst buffers, and remote disk-based PFS.
Vertical -> within a node
Horizontal -> across nodes
Fully scalable deployment on distributed clusters, consisting of node-local and remote shared storage layers.
Access Latency
Data Throughput
Capacity
1 million fwrite() calls of various sizes, measuring memory ops/sec
1 million metadata operations, measuring MDM throughput in ops/sec
1 million queue operations, measuring messaging rate in msg/sec
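As an illustration of the first microbenchmark's methodology, the sketch below times 1 million fwrite() calls and reports operations per second; the 4 KB request size and the output path are placeholder assumptions, not the actual benchmark code.

#include <chrono>
#include <cstdio>
#include <vector>

// Illustrative sketch of the fwrite() microbenchmark methodology:
// issue 1 million writes of a fixed size and report operations per second.
// The request size and output path are placeholder assumptions.
int main() {
  constexpr std::size_t kNumOps = 1'000'000;
  constexpr std::size_t kIoSize = 4096;  // one of the "various sizes"
  std::vector<char> buffer(kIoSize, 'x');

  std::FILE *f = std::fopen("/tmp/hermes_bench.dat", "wb");
  if (!f) return 1;

  auto start = std::chrono::steady_clock::now();
  for (std::size_t i = 0; i < kNumOps; ++i) {
    std::fwrite(buffer.data(), 1, kIoSize, f);
  }
  std::fclose(f);
  auto end = std::chrono::steady_clock::now();

  double secs = std::chrono::duration<double>(end - start).count();
  std::printf("ops/sec: %.0f\n", kNumOps / secs);
  return 0;
}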
8x higher write performance on average
11x higher read performance for repetitive patterns
5x higher write performance on average
7.5x higher read performance for repetitive patterns
Hermes Beta Release
Hermes Update - HDF Group
Hermes Buffer Organizer - HDF Group
That is true. We suggest using profiling tools beforehand to learn about the application’s behavior and tune Hermes. The default policy works well.
As of now, applications link to Hermes (via re-compilation or dynamic linking). We envision a system scheduler that also incorporates buffering resources.
Hermes’ Application Orchestrator was designed for multi-tenant environments. This work is described in Vidya: Performing Code-Block I/O Characterization for Data Access Optimization.
It can be severe, but in scenarios where there is some computation in between I/O phases, it can work nicely to our advantage.
In our evaluation, 1 million user files produced 1.1 GB of metadata (roughly 1.1 KB per file).
Hermes’ System Profiler provides the current status of the system (e.g., remaining capacity per tier), and the Data Placement Engine (DPE) is aware of this before it places data in the DMSH.
Horizontal data movement can get in the way of normal compute traffic. RDMA-capable machines can help. We also suggest using the “service class” of the InfiniBand network to apply priorities in the network.
This is configurable by the user and is a typical trade-off: giving more RAM to Hermes can lead to higher performance, while giving no RAM means the layer is skipped.
Hermes captures existing I/O calls. Our own API is very simple, consisting of hermes::read(…, flags) and hermes::write(…, flags). The flag system implements active buffering semantics (currently only for the burst buffer nodes).
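A rough usage sketch of what calling this API could look like follows; since the exact parameter lists are elided above, the signatures, the offset argument, and the persist_to_burst_buffer flag are hypothetical, with trivial stub bodies standing in for the real library.

#include <cstddef>
#include <vector>

// Hypothetical sketch only. The exact hermes::read/hermes::write parameter
// lists and flag names are not given here, so the signatures, the offset
// argument, and the persist_to_burst_buffer flag are illustrative guesses,
// with stub bodies standing in for the real library.
namespace hermes {
struct Flags { bool persist_to_burst_buffer = false; };

inline std::size_t write(const void *, std::size_t size, std::size_t /*offset*/, Flags) {
  return size;  // stub: the real call would buffer the data in the DMSH
}
inline std::size_t read(void *, std::size_t size, std::size_t /*offset*/, Flags) {
  return size;  // stub: the real call would serve the data from a tier
}
}  // namespace hermes

int main() {
  std::vector<char> data(1 << 20, 'h');

  // Active buffering semantics via a flag: ask Hermes to stage this write
  // on the burst buffer nodes (hypothetical flag name).
  hermes::Flags flags;
  flags.persist_to_burst_buffer = true;
  hermes::write(data.data(), data.size(), /*offset=*/0, flags);

  // Read the data back through the same buffering layer.
  hermes::read(data.data(), data.size(), /*offset=*/0, flags);
  return 0;
}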
We expose a configuration_manager class, which is used to pass several of Hermes’ configuration parameters.
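Below is a minimal sketch of how such a configuration object might be populated; the member names, setters, and values are assumptions about the kinds of parameters involved, not the actual configuration_manager interface.

#include <cstddef>
#include <string>

// Hypothetical sketch only: the real configuration_manager interface is not
// shown here, so the members and setters below are illustrative assumptions
// about the kinds of parameters it carries.
namespace hermes {
class configuration_manager {
 public:
  // Capacity (in bytes) that Hermes may use in a given tier; 0 skips the tier.
  void set_ram_capacity(std::size_t bytes) { ram_capacity_ = bytes; }
  void set_nvme_capacity(std::size_t bytes) { nvme_capacity_ = bytes; }
  void set_burst_buffer_path(const std::string &path) { bb_path_ = path; }

 private:
  std::size_t ram_capacity_ = 0;
  std::size_t nvme_capacity_ = 0;
  std::string bb_path_;
};
}  // namespace hermes

int main() {
  hermes::configuration_manager config;
  config.set_ram_capacity(4ull << 30);    // give Hermes 4 GiB of RAM
  config.set_nvme_capacity(64ull << 30);  // and 64 GiB on the local NVMe
  config.set_burst_buffer_path("/bb/scratch");  // placeholder path
  // The populated object would then be handed to Hermes at initialization.
  return 0;
}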