Data Integrity and Dynamic Storage Way in Cloud Computing
Securely maintaining essential data is difficult for clients of cloud applications: the client does not retain a local copy of everything it outsources, so the cloud cannot be treated as fully trustworthy. Existing work does not let the client verify data integrity at both the user level and the CSP level by comparing state before and after an update. We therefore propose a new scheme built on a data reading protocol that checks the integrity of data before and after it is inserted into the cloud. The client, with the help of the CSP, verifies the data on both sides using our effective automatic data reading protocol, executed at the user level as well as the cloud level. We further propose a multi-server data comparison algorithm that computes a summary of the overall data at each update, before it is outsourced, and records a server restore point that enables future data recovery from the cloud data server. By addressing the drawbacks of existing methods, the proposed scheme checks integrity efficiently so that both data integrity and security are maintained in all cases.
💡 Research Summary
The paper addresses the problem of ensuring data integrity in cloud storage environments where the client does not retain a local copy of the outsourced data and must rely on the cloud service provider (CSP). The authors claim that existing solutions, which largely depend on third‑party auditors (TPA), do not allow the client to verify the integrity of data before and after updates. To fill this gap, they propose two main mechanisms: a “data reading protocol” and a “multi‑server data comparison algorithm.”
The data reading protocol is intended to be executed both on the client side and on the CSP side. Before a data upload, the client computes certain metadata (size, signature, possibly a hash) and sends the data to the cloud. After the upload, the CSP runs the same protocol on the stored copy and returns the resulting metadata. The client then compares the pre‑upload and post‑upload values; if they match, the data is considered intact. This process is repeated for every dynamic operation (append, delete, modify).
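The paper gives no concrete implementation of this protocol, so the following is a minimal sketch of the compare-before-and-after idea under common assumptions: the metadata is taken to be the data size plus a SHA-256 digest, and the function and variable names (`compute_metadata`, `pre_upload`, `post_upload`) are hypothetical, not from the paper.

```python
import hashlib


def compute_metadata(data: bytes) -> dict:
    """Metadata the client records before upload and the CSP recomputes after.

    Assumption: the paper's unspecified metadata is modeled here as
    (size, SHA-256 digest); the real protocol may use other primitives.
    """
    return {"size": len(data), "sha256": hashlib.sha256(data).hexdigest()}


# --- Client side, before upload: record metadata, then send the data ---
data = b"example outsourced record"
pre_upload = compute_metadata(data)

# --- CSP side, after upload: run the same routine on the stored copy ---
stored_copy = data  # in a real deployment, read back from cloud storage
post_upload = compute_metadata(stored_copy)

# --- Client compares pre- and post-upload values ---
if pre_upload == post_upload:
    print("data intact")       # matching metadata: data considered intact
else:
    print("integrity breach")  # mismatch: flag the update as corrupted
```

The same comparison would be rerun after every dynamic operation (append, delete, modify), with the client recomputing `pre_upload` over the intended new state before issuing the operation.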
The multi‑server data comparison algorithm assumes that the data is split into fragments and distributed across several cloud servers. For each fragment, the algorithm records an identifier and a cryptographic hash. A central verifier (or the client) periodically collects these hashes from all servers and checks for consistency. If any server’s fragment hash deviates, the system flags a potential integrity breach. Additionally, the authors introduce the concept of a “restore point” that is created after each update. The restore point stores a snapshot of the entire data set (or its aggregate hash) so that, in the event of a server crash, the system can roll back to the most recent consistent state.
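Since the paper does not specify the comparison algorithm either, the following sketch illustrates the described behavior under stated assumptions: fragment hashes are SHA-256, the restore point is modeled as an aggregate hash of all fragment hashes rather than a full snapshot, and all names (`IntegrityVerifier`, `fragment_hash`, `check`, `snapshot`) are hypothetical.

```python
import hashlib
from typing import Dict, List


def fragment_hash(fragment: bytes) -> str:
    """Cryptographic hash recorded per fragment (assumed SHA-256)."""
    return hashlib.sha256(fragment).hexdigest()


class IntegrityVerifier:
    """Central verifier that compares fragment hashes reported by servers."""

    def __init__(self, reference: Dict[str, str]):
        # reference: fragment id -> expected hash, recorded at upload time
        self.reference = reference
        self.restore_points: List[str] = []  # aggregate hashes, newest last

    def check(self, reported: Dict[str, str]) -> List[str]:
        """Return ids of fragments whose reported hash deviates."""
        return [fid for fid, h in sorted(reported.items())
                if self.reference.get(fid) != h]

    def snapshot(self) -> str:
        """Record a restore point after an update.

        Modeled as a hash over all fragment hashes (sorted for
        determinism); rolling back would mean restoring the last
        state whose aggregate matches a stored restore point.
        """
        agg = hashlib.sha256(
            "".join(sorted(self.reference.values())).encode()).hexdigest()
        self.restore_points.append(agg)
        return agg


# Usage: two servers each hold one fragment; server 2's copy is corrupted.
reference = {"frag-1": fragment_hash(b"alpha"), "frag-2": fragment_hash(b"beta")}
verifier = IntegrityVerifier(reference)
reported = {"frag-1": fragment_hash(b"alpha"), "frag-2": fragment_hash(b"BETA")}
print(verifier.check(reported))  # deviating fragment ids, here ["frag-2"]
verifier.snapshot()              # restore point for the reference state
```

Periodically invoking `check` against every server's reported hashes implements the polling the paper describes; a non-empty result flags a potential integrity breach on the deviating server.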
The paper also discusses auxiliary topics such as the role of the TPA (which is minimized in the proposed design), cost considerations for cloud usage, criteria for selecting a trustworthy CSP, and the need for dynamic computing environments that support modification, append, and delete operations.
Despite the high‑level description, the paper lacks concrete technical details. The pseudo‑code provided is simplistic (basic loops and increment operations) and does not specify how cryptographic primitives are used, how authentication between client and CSP is achieved, or how the system defends against replay attacks, colluding malicious servers, or Byzantine faults. No security proofs, performance benchmarks, or experimental evaluations are presented. The authors do not quantify the overhead introduced by the additional metadata exchanges, the storage cost of maintaining restore points, or the latency incurred by the multi‑server comparison process.
In summary, the contribution of the paper is a conceptual framework that combines client‑side verification with server‑side fragment comparison and periodic restore points to achieve data integrity in cloud storage. While the idea of giving the client more direct control over integrity checks is appealing, the lack of rigorous algorithmic design, security analysis, and empirical validation limits the practical impact of the work. Future research should flesh out the cryptographic protocols, provide formal security guarantees, and evaluate the scheme under realistic cloud workloads to determine its feasibility and efficiency.