EVALUATING THE LATENCY PERFORMANCE OF DISTRIBUTED STORAGE SYSTEMS: A COMPARISON OF AMAZON S3 USING REAL SERVICE SIMULATION TECHNIQUES

Authors

  • Suriguge, Lincoln University College, Petaling Jaya, Malaysia.
  • Noraisyah Binti Tajudin, Lincoln University College, Petaling Jaya, Malaysia.

Keywords:

Storage Systems, Latency Performance, Amazon S3, Techniques, Real Service

Abstract

In existing erasure codes, data nodes play a central role in constructing parity nodes. Adding parity nodes makes it more likely that the original data can be recovered when nodes fail, but the number of parity nodes also determines the cost of repairing failed nodes and the additional storage space required, because repairing a parity node typically requires reading from many data nodes at the same time. If an LRC global parity node fails, for instance, every data node must be read to rebuild it, and read requests then take much longer to complete because the data nodes receive more requests. Google Search, by contrast, is a service that accesses individual data items relatively infrequently; its encoding produces both data and parity nodes, which allows the parity nodes to take over some of the read work that the data nodes would normally handle, so requests wait in shorter queues. In short, adding a parity node does not reduce the number of data nodes that can be accessed, although our analysis indicates that parity nodes do correlate with increased storage expense. With a well-designed architecture, however, adding parity nodes can reduce access latency without increasing the amount of storage required, as the following sections demonstrate.
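To make the repair-cost argument concrete, the following is a minimal, hypothetical sketch of an XOR-based Local Reconstruction Code (LRC). The parameters are illustrative, not taken from the paper: six data blocks split into two local groups of three, one local XOR parity per group, and one global XOR parity over all data. It shows why rebuilding a lost data block touches only its local group, while rebuilding the global parity must read every data node.

```python
from functools import reduce

def xor(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Hypothetical layout: k = 6 data blocks, 2 local groups of 3.
data = [bytes([i]) * 4 for i in range(6)]   # six toy 4-byte data blocks
groups = [data[0:3], data[3:6]]             # two local groups
local_par = [xor(g) for g in groups]        # one local parity per group
global_par = xor(data)                      # global parity over all data

# Repair fan-in: a lost data block needs only its local group
# (2 surviving data blocks + 1 local parity = 3 reads), while
# rebuilding the global parity must read all 6 data blocks.
local_repair_reads = len(groups[0])         # 3
global_repair_reads = len(data)             # 6

# Verify: data[0] is recoverable from its group mates and local parity.
rebuilt = xor([data[1], data[2], local_par[0]])
assert rebuilt == data[0]
```

The gap between `local_repair_reads` and `global_repair_reads` is exactly the effect described above: the more nodes a repair must contact concurrently, the more load lands on the data nodes and the longer ordinary read requests queue.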

Published

2025-10-03