ANALYSING THE LATENCY PERFORMANCE OF DISTRIBUTED STORAGE SYSTEMS: A COMPARATIVE STUDY OF AMAZON S3 THROUGH REAL SERVICE SIMULATION METHODS.
Keywords:
Storage Techniques, Latency Efficiency, Amazon S3, Methodologies, Actual Service

Abstract
In modern erasure-coded storage systems, data nodes are combined to generate parity nodes. Adding parity nodes increases the probability that the original data can be recovered after failures, but it also raises the repair load placed on data nodes and the storage overhead, because repairing a parity node typically requires reading many data nodes at the same time. If a global parity node of a Locally Repairable Code (LRC) fails, for example, every data node must be read to rebuild it. The additional requests this places on the data nodes significantly lengthen the completion time of ordinary read requests. For workloads that read data relatively infrequently, such as parts of Google Search, a code that produces both data and parity nodes allows the parity nodes to absorb some of the work that the data nodes would otherwise perform, so requests wait less than they would under any alternative arrangement; adding a parity node does not change the set of data that can be accessed. Prior research associates parity nodes with higher storage cost, but when the code is carefully designed, adding parity nodes can lower access latency without increasing the storage required. The following sections demonstrate this result, which may at first seem counterintuitive.
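To make the repair asymmetry described above concrete, the following is a minimal single-parity sketch, not code from the paper or from Amazon S3: with one XOR parity node over k data nodes, rebuilding the parity node requires reading every data node, while any single lost data node can be rebuilt from the remaining nodes plus the parity.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR across a list of equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# k = 3 data nodes and one global parity node (hypothetical contents).
data = [b"aaaa", b"bbbb", b"cccc"]
parity = xor_blocks(data)

# Rebuilding the parity node touches every data node, which is the
# source of the extra read load on data nodes discussed above.
rebuilt_parity = xor_blocks(data)
assert rebuilt_parity == parity

# Conversely, a single lost data node is recovered from the k - 1
# surviving data nodes plus the parity node.
lost = data[1]
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == lost
```

Codes such as LRC add local parities precisely so that a single data-node repair reads only a small group rather than all k nodes; the sketch shows the worst case that motivates that design.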

