Show simple item record

dc.contributor.advisor: He, Xubin
dc.creator: Liu, Tong
dc.date.accessioned: 2021-08-23T17:58:48Z
dc.date.available: 2021-08-23T17:58:48Z
dc.date.issued: 2021
dc.identifier.uri: http://hdl.handle.net/20.500.12613/6868
dc.description.abstract: In modern distributed storage systems, space efficiency and system reliability are two major concerns. As a result, contemporary storage systems often employ data deduplication and erasure coding to reduce storage overhead and provide fault tolerance, respectively. However, little work has been done to explore the relationship between these two techniques. Scientific simulations on high-performance computing (HPC) systems can generate large amounts of floating-point data per run; to mitigate the data storage bottleneck and lower the data volume, floating-point compressors are commonly employed. Compared to lossless compressors, lossy compressors such as SZ and ZFP can reduce data volume more aggressively while maintaining the usefulness of the data. However, a reduction ratio of more than two orders of magnitude is almost impossible without seriously distorting the data. In deep learning, the autoencoder technique has shown great potential for data compression, particularly for images; whether autoencoders can deliver similar performance on scientific data, however, is unknown. Meanwhile, modern industry data centers employ erasure codes to provide reliability for large amounts of data at low cost. Although erasure codes offer optimal storage efficiency, they suffer from high repair costs compared to traditional three-way replication: when a data block is lost in a data center, erasure codes require high disk usage and network bandwidth consumption across nodes and racks to repair the failed data. This dissertation presents our research results on these three challenges, optimizing or solving the issues for HPC and distributed storage systems.

Details are as follows. To address the data storage challenge for erasure-coded deduplication systems, we propose Reference-counter Aware Deduplication (RAD), which incorporates deduplication information into erasure coding to improve garbage collection performance when deletion occurs. RAD encodes data according to the reference counter, provided by the deduplication layer, and thus reduces the encoding overhead when garbage collection is conducted. Further, since the reference counter also reflects the reliability requirements of the data chunks, we additionally explore the trade-offs between storage overhead and reliability level among different erasure codes. The experimental results show that RAD can improve garbage collection performance by up to 24.8%, and the reliability analysis shows that, with certain data features, RAD can provide both better reliability and better storage efficiency than the traditional Round-Robin placement.

To address the data processing challenge for HPC systems, we conduct the first comprehensive study on using autoencoders to compress real-world scientific data and present several key findings on autoencoder-based scientific data reduction. We implement an autoencoder-based prototype following conventional wisdom to reduce floating-point data. Our study shows that the out-of-the-box implementation must be further tuned to achieve high compression ratios and satisfactory error bounds. Our evaluation results show that, for most of the test datasets, the autoencoder outperforms SZ and ZFP by 2 to 4X in compression ratio. Our practices and lessons learned can direct future optimizations for using autoencoders to compress scientific data.

To address the data transfer challenge for distributed storage systems, we propose RPR, a rack-aware pipeline repair scheme for erasure-coded distributed storage systems. RPR is the first scheme to exploit rack-level insights, leveraging the connection between the node level and the rack level to improve repair performance when single or multiple failures occur in a data center. The evaluation results on several common RS code configurations show that, for single-block failures, RPR reduces the total repair time by up to 81.5% compared to the traditional RS code repair method and 50.2% compared to the state-of-the-art CAR algorithm. For multi-block failures, RPR reduces the total repair time and cross-rack data transfer traffic by up to 64.5% and 50%, respectively, over the traditional repair.
dc.format.extent: 138 pages
dc.language.iso: eng
dc.publisher: Temple University. Libraries
dc.relation.ispartof: Theses and Dissertations
dc.rights: IN COPYRIGHT - This Rights Statement can be used for an Item that is in copyright. Using this statement implies that the organization making this Item available has determined that the Item is in copyright and either is the rights-holder, has obtained permission from the rights-holder(s) to make their Work(s) available, or makes the Item available under an exception or limitation to copyright (including Fair Use) that entitles it to make the Item available.
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Computer science
dc.title: EFFICIENT DATA REDUCTION IN HPC AND DISTRIBUTED STORAGE SYSTEMS
dc.type: Text
dc.type.genre: Thesis/Dissertation
dc.contributor.committeemember: Kant, Krishna
dc.contributor.committeemember: Liu, Qing, 1979-
dc.contributor.committeemember: Ling, Haibin
dc.contributor.committeemember: Zhao, Zhigen
dc.description.department: Computer and Information Science
dc.relation.doi: http://dx.doi.org/10.34944/dspace/6850
dc.ada.note: For Americans with Disabilities Act (ADA) accommodation, including help with reading this content, please contact scholarshare@temple.edu
dc.description.degree: Ph.D.
dc.identifier.proqst: 14587
dc.date.updated: 2021-08-21T10:07:42Z
refterms.dateFOA: 2021-08-23T17:58:49Z
dc.identifier.filename: Liu_temple_0225E_14587.pdf


Files in this item

Name: Liu_temple_0225E_14587.pdf
Size: 5.700 MB
Format: PDF

This item appears in the following Collection(s)
