Design Principles
We expand the original data row-wise so that each storage node can be sent a single column, and data availability is ensured by obtaining a sufficient number of responses from different storage nodes during sampling. Our design computes three kinds of commitments: row commitments verify that the RS encoding was performed correctly; column commitments check the ordering within a column and ensure chunk integrity; and the aggregate column commitment enforces the ordering of the columns themselves.
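The row-wise expansion and column distribution can be sketched as follows. This is a minimal toy illustration, not the protocol's actual parameters: the prime field, matrix shape, and evaluation points are all assumptions chosen for readability.

```python
# Toy row-wise RS expansion: each row of original chunks is treated as
# polynomial coefficients and evaluated at n points, so any k of the n
# resulting columns determine the row. (Illustrative field and sizes.)
P = 257  # small prime field for illustration only

def rs_extend(row, n):
    """Evaluate the polynomial with coefficients `row` at x = 0..n-1."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(row)) % P
            for x in range(n)]

k, n = 2, 4                       # k original columns, extended to n
data = [[3, 5], [7, 11]]          # two rows of original chunks
extended = [rs_extend(row, n) for row in data]

# Each storage node j receives column j of the extended matrix.
columns = list(zip(*extended))
assert len(columns) == n
```

Sending one column per storage node means a sampler only needs responses from enough distinct nodes, rather than the full matrix.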
We also show how the protocol guarantees equivalence between the data held by the Zone and the data sent to the nodes. The original data is first expanded with RS encoding, and a KZG commitment is then computed for each column. These commitments guarantee the integrity of the data within each column. Because the column commitments are bound together by a separate aggregate commitment, any $k$ of the columns suffice to reconstruct the original data. As a result, equivalence can be proven by aggregating signatures from the storage nodes without downloading the entire data.
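The "any $k$ columns suffice" claim follows from the RS structure: each row is a degree-$(k-1)$ polynomial, so $k$ evaluations determine it. A minimal sketch of the recovery step, using Lagrange interpolation over the same toy field as above (the field size and points are illustrative assumptions, not protocol parameters):

```python
# Recover a row's original coefficients from any k (x, y) evaluation
# pairs over GF(P) via Lagrange interpolation. Toy parameters.
P = 257

def lagrange_coeffs(points):
    """Return coefficients of the unique degree-(k-1) polynomial
    through the k given (x, y) points, computed over GF(P)."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis = [1]        # running product prod_{j != i} (x - xj)
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            denom = denom * (xi - xj) % P
            new = [0] * (len(basis) + 1)
            for d, c in enumerate(basis):
                new[d + 1] = (new[d + 1] + c) % P
                new[d] = (new[d] - c * xj) % P
            basis = new
        scale = yi * pow(denom, P - 2, P) % P   # Fermat inverse
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * basis[d]) % P
    return coeffs

# Row [3, 5] evaluated at x = 2 and x = 3 gives 13 and 18; two of the
# four columns are enough to recover the row.
assert lagrange_coeffs([(2, 13), (3, 18)]) == [3, 5]
```

Applying this row by row to any $k$ received columns reconstructs the full original matrix.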
The design therefore involves three distinct commitment calculations. Row commitments let storage nodes verify the RS encoding of the data they receive and bind each chunk to its position within its row. Column commitments bind each chunk to its row position in the data encoded by the Zone, without requiring super nodes; without them, the Zone could reorder the rows of the data it holds, enabling attacks on DAS. Together, these two commitments bind the position of every original chunk on both the x and y axes.
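The position-binding argument can be illustrated with a stand-in commitment. The protocol uses KZG; the SHA-256-based `commit` below is only a hypothetical substitute that shares the relevant property, namely that the commitment depends on each chunk's index:

```python
# Hash-based stand-in for a position-binding commitment (the protocol
# itself uses KZG; SHA-256 here only illustrates the argument).
import hashlib

def commit(chunks):
    h = hashlib.sha256()
    for i, c in enumerate(chunks):        # index i binds the position
        h.update(i.to_bytes(4, "big") + c)
    return h.hexdigest()

matrix = [[b"a0", b"a1"], [b"b0", b"b1"]]
col_commits = [commit(col) for col in zip(*matrix)]

# If the Zone swaps two rows, every column commitment changes, so the
# reordering is detected without downloading the whole matrix.
swapped = [matrix[1], matrix[0]]
assert [commit(c) for c in zip(*swapped)] != col_commits
```

This is why the combination of row and column commitments pins each chunk on both axes: the row commitment fixes its x position, the column commitment its y position.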
The final commitment, $C_{\text{agg}}$, fixes the order of the column commitments. To save bandwidth, instead of sending every column commitment to all storage nodes, a single aggregate commitment is computed and sent to them. This also prevents new columns from being appended to the original data. In addition, storage nodes sign the $C_{\text{agg}}$ value, thereby fixing the column order. (Since the storage nodes already hold the row commitments, they cover the row dimension by signing the hash of those row commitments.)
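A minimal sketch of what signing $C_{\text{agg}}$ buys, again using a hash-based stand-in for the actual KZG aggregation (the function and inputs below are illustrative assumptions):

```python
# Stand-in for the aggregate commitment C_agg over the ordered list of
# column commitments. The real protocol aggregates KZG commitments;
# SHA-256 here only demonstrates the order- and set-binding property.
import hashlib

def c_agg(column_commitments):
    h = hashlib.sha256()
    for i, c in enumerate(column_commitments):  # index fixes the order
        h.update(i.to_bytes(4, "big") + bytes.fromhex(c))
    return h.hexdigest()

cols = [hashlib.sha256(b"col%d" % i).hexdigest() for i in range(4)]
agg = c_agg(cols)

# Reordering the columns or appending a new one yields a different
# C_agg, so a storage node's signature over C_agg fixes both the
# column set and its order with a single short value.
assert c_agg(list(reversed(cols))) != agg
assert c_agg(cols + [cols[0]]) != agg
```

A single short value per storage node is enough, which is what keeps the bandwidth cost of the scheme low.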