This repository has been archived by the owner on Jun 6, 2023. It is now read-only.

DataCap should be consumed based on bytes being stored, not padded piece size #1419

Open
dkkapur opened this issue May 10, 2021 · 6 comments
Labels
fip Issue likely needs its own FIP

Comments

@dkkapur

dkkapur commented May 10, 2021

Basic Information
For clients on the network today who get DataCap and use it in deals, DataCap is used based on the padded piece size, rather than the raw byte size of the data.

Describe the problem
This is relatively unintuitive for users, who end up spending DataCap faster than they expect, leading to inefficient usage of the DataCap they have been allocated. DataCap should instead be consumed per the amount of data a client is actually looking to store.
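To make the disparity concrete: on Filecoin, raw data is expanded by Fr32 padding (a factor of 128/127) and the result is rounded up to a power-of-two piece size. A rough sketch of the size a client is actually debited (real implementations work from a table of valid piece sizes; this is an approximation for illustration):

```python
import math

def padded_piece_size(raw_bytes: int) -> int:
    """Approximate the padded piece size a raw payload occupies.

    Fr32 padding expands the data by 128/127, and the result is
    rounded up to the next power of two (sketch, not the exact
    algorithm used by implementations).
    """
    fr32 = math.ceil(raw_bytes * 128 / 127)
    return 1 << (fr32 - 1).bit_length()

GiB = 1 << 30
# A client storing 10 GiB of raw data is debited a 16 GiB piece:
print(padded_piece_size(10 * GiB) // GiB)  # → 16
```

Under the current rules that 10 GiB deal consumes 16 GiB of DataCap, which is exactly the surprise described above.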

@magik6k
Contributor

magik6k commented May 17, 2021

This is a market-actor design issue and would require changing how we handle DataCap in markets: we'd have to allow clients to specify how many bytes of verified data should be assigned to a given piece. Right now the market actor doesn't know about the raw data size; it only cares about whole pieces.

(This will require an FIP and a bunch of likely non-trivial actor changes)
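One hypothetical shape for the actor change described above is an extra raw-size field on the deal proposal, which the debit logic would prefer over the padded piece size. This is purely an illustrative sketch; the field and function names are invented, not the actual actors API:

```python
from dataclasses import dataclass

@dataclass
class DealProposal:
    piece_size: int         # padded piece size, power of two (what the actor sees today)
    verified_raw_size: int  # hypothetical new field: raw bytes the client wants debited

def datacap_to_debit(proposal: DealProposal) -> int:
    # Today the whole padded piece is debited; with the hypothetical
    # field, only the raw bytes would be (capped at the piece size,
    # since a piece cannot contain more raw data than it holds).
    if proposal.verified_raw_size > 0:
        return min(proposal.verified_raw_size, proposal.piece_size)
    return proposal.piece_size
```

The cap also hints at why this is non-trivial: the actor cannot verify the claimed raw size against the piece commitment, which is part of what a FIP would need to work out.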

@jennijuju jennijuju transferred this issue from filecoin-project/lotus May 18, 2021
@jennijuju jennijuju added the blocked Implementation blocked on information or decisions external to actors team label May 18, 2021
@jennijuju
Member

I feel like if DataCap is not supposed to be sacred, this doesn't really matter - but yeah, having this will make things more rigorous.

@anorth
Member

anorth commented May 19, 2021

The inefficient usage of DataCap reflects inefficient usage of the underlying storage. Irrespective of FIL+ verified status, Filecoin right now requires pieces to be sized in powers of two. If a deal uses less than that, the padded leftover is unavailable to the miner to store other deals. I know it's a bit of a stretch right now to imagine supply of storage being scarce, but if it were, then the fact that the client has to pay for the whole padded piece, and consume DataCap for the whole padded piece, reflects the underlying storage economics. A client wishing to economise on use of DataCap (and paid-for storage) is incentivised to pack the data more tightly into power-of-two deals.
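The packing incentive can be illustrated numerically: combining payloads into one piece before padding consumes less than padding each one separately. A sketch, reusing the approximate 128/127 Fr32 expansion and power-of-two rounding (file sizes are hypothetical):

```python
import math

GiB = 1 << 30

def padded_piece_size(raw_bytes: int) -> int:
    # Fr32 expansion (128/127), then round up to the next power of two.
    fr32 = math.ceil(raw_bytes * 128 / 127)
    return 1 << (fr32 - 1).bit_length()

files = [3 * GiB, 5 * GiB, 6 * GiB]  # hypothetical payloads

separate = sum(padded_piece_size(f) for f in files)  # 4 + 8 + 8 = 20 GiB
packed = padded_piece_size(sum(files))               # 14 GiB raw -> one 16 GiB piece

print(separate // GiB, packed // GiB)  # → 20 16
```

Packing the same 14 GiB of data into one deal instead of three saves 4 GiB of both paid-for storage and DataCap in this sketch.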

I'm not clear on the dynamics that cause data cap to be scarce.

I don't think this is actionable within actors right now, and would need a FIP to determine if we want to change it, and then lay out how. I suggest opening a discussion in the FIPs repo instead.

@dkkapur
Author

dkkapur commented May 19, 2021

DataCap, as a resource, is used to incentivize useful utilization of the supply available in the network. Miners are incentivized to take deals that come with DataCap, since that provides a substantial boost to their earnings for the useful storage they provide. By having DataCap be consumed based on the whole piece size rather than the bytes, we miss out on maximizing the leverage given to clients to make deals on the network. This is in addition to clients needing to learn and optimize for deal packing, which introduces additional complexity and worse UX. Feedback received through user testing showed that it was confusing when there was a substantial disparity between the amount of data a client attempted to store vs. the amount of DataCap that ended up being used once the piece was padded. This also unfairly rewards miners, who receive inflated rewards for sub-optimally packed sectors.

I don't think this is actionable within actors right now, and would need a FIP to determine if we want to change it, and then lay out how. I suggest opening a discussion in the FIPs repo instead.

@anorth quick question for you - is the ask to take this into a FIP because the current actors implementation would not be able to support this?

@jennijuju
Member

> DataCap, as a resource, is used to incentivize useful utilization of the supply available in the network. Miners are incentivized to take deals that come with DataCap since that provides a substantial boost to their earnings for the useful storage they provide. By having DataCap be consumed based on the whole piece size rather than the bytes, we miss out on maximizing the leverage given to clients to make deals on the network. This is in addition to clients needing to learn and optimize for deal packing, which introduces additional complexity and worse UX. Feedback received through user testing showed that it was confusing when there was a substantial disparity between the amount of data attempted to be stored vs. the amount of DataCap that ended up being used once the piece was padded. This also ends up unfairly rewarding miners that receive inflated rewards due to sub-optimally packed sectors.
>
> I don't think this is actionable within actors right now, and would need a FIP to determine if we want to change it, and then lay out how. I suggest opening a discussion in the FIPs repo instead.
>
> @anorth quick question for you - is the ask to take this into a FIP because the current actors implementation would not be able to support this?

Correct, see

> right now the market actor doesn't know about the raw data size, it only cares about whole pieces

@jennijuju jennijuju added fip Issue likely needs its own FIP and removed blocked Implementation blocked on information or decisions external to actors team labels May 19, 2021
@dkkapur
Author

dkkapur commented May 19, 2021

ACK - this makes sense, thanks for confirming @jennijuju. FIP it is.

4 participants