
Add query_list functionality / tests #509

Merged · 1 commit · Aug 25, 2020

Conversation

maurolacy (Contributor)

A small PR to reproduce #508 in cosmwasm.

@ethanfrey (Member) left a comment

I will try to dig deeper


/// This returns the list of ids for all active swaps
pub fn all_swap_ids<S: ReadonlyStorage>(storage: &S) -> StdResult<Vec<String>> {
prefixed_read(PREFIX_SWAP, storage)
ethanfrey (Member)

If you change this from prefixed_read(PREFIX_SWAP, storage).range... to storage.range..., does this pass?

Sounds like a bug in the ReadonlyPrefixedStorage
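For context on why the prefixed wrapper could behave differently from raw storage: prefixed storage namespaces every key before it reaches the underlying store. A minimal standalone sketch of that key layout (assumption: a 2-byte big-endian length prefix, as in cosmwasm-storage's `to_length_prefixed`; this is a re-implementation for illustration, not the library code):

```rust
// Sketch of prefixed-storage key construction (assumption: 2-byte
// big-endian length prefix on the namespace).
fn to_length_prefixed(namespace: &[u8]) -> Vec<u8> {
    assert!(namespace.len() <= u16::MAX as usize);
    let mut out = Vec::with_capacity(namespace.len() + 2);
    out.extend_from_slice(&(namespace.len() as u16).to_be_bytes());
    out.extend_from_slice(namespace);
    out
}

// A prefixed read of key `k` under namespace `ns` really touches
// `prefix(ns) ++ k`. A range over the prefixed view therefore has to
// add this prefix to both bounds it sends down and strip it from every
// key it returns -- a mistake in either step would explain the
// differing results seen here.
fn concat(namespace: &[u8], key: &[u8]) -> Vec<u8> {
    let mut out = to_length_prefixed(namespace);
    out.extend_from_slice(key);
    out
}

fn main() {
    let k = concat(b"swap", b"id1");
    assert_eq!(&k[..2], &[0u8, 4]); // length prefix
    assert_eq!(&k[2..6], b"swap"); // namespace
    assert_eq!(&k[6..], b"id1"); // original key
}
```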

ethanfrey (Member)

With this change it passes:

    // prefixed_read(PREFIX_SWAP, storage)
    storage

Seems like range_with_prefix does something funny.

@maurolacy (Contributor, Author) · Aug 25, 2020

Yes, I've just tried it. The results are different, though, as iterating over the whole storage yields different entries.

I would love to analyse / fix this, but I totally lack debugging capabilities in the vm. How do I print / log, for example? println! does nothing.

ethanfrey (Member)

I pushed a simplification. Any time you call range() with Some(b"key") as either start or end, we get this bug.

Time to look at the vm-wasm interface

ethanfrey (Member)

> I would love to analyse / fix this, but I totally lack debugging capabilities in the vm. How do I print / log, for example? println! does nothing.

There is no way to print from wasm, only from native code (when compiling unit tests).
It's kind of odd to let a contract print to the same place as all the log files... it could definitely wreak some havoc, or at least confusion, there.

Maybe we could consider enabling printing from wasm with some debug vm flag. But that is totally out of scope.

ethanfrey (Member)

I will debug this. Thank you for finding a reproducible case. It involves lots of internals of the vm-wasm boundary.

@maurolacy (Contributor, Author) · Aug 25, 2020

> Maybe we could consider enabling printing from wasm with some debug vm flag.

+1 to that. Printing, or at the very least logging to a file.

@ethanfrey (Member)

Notes:

  1. The bug occurs when iterating a range with either an upper or a lower bound.
  2. There is an existing test for the vm that checks the callback handles the bounds.
  3. The issue is in the communication between std and vm (not the higher-level implementation on either side).

Stack trace of an error:

thread 'query_list' panicked at 'VM error: CommunicationErr { source: DerefErr { offset: 1953720684, msg: "Tried to access memory of region Region { offset: 1953720684, capacity: 11, length: 11 } in wasm memory of size 1179648 bytes. This typically happens when the given Region pointer does not point to a proper Region struct.", backtrace: Backtrace(()) } }', /home/ethan/IdeaProjects/cosmwasm/packages/vm/src/testing/calls.rs:76:5
stack backtrace:
...
  15: core::result::Result<T,E>::expect
             at /home/ethan/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libcore/result.rs:963
  16: cosmwasm_vm::testing::calls::query
             at /home/ethan/IdeaProjects/cosmwasm/packages/vm/src/testing/calls.rs:76
  17: integration::query_list
             at tests/integration.rs:128
...

@ethanfrey (Member)

I can also cause the error with this. Clearly the bug is in db_scan:

pub fn all_swap_ids<S: ReadonlyStorage>(storage: &S) -> StdResult<Vec<String>> {
    let _ = storage
        .range(Some(b"atomic_swap"), Some(b"atomic_swaq"), Order::Ascending);
    Ok(vec![])
}
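The b"atomic_swaq" end bound above is the usual prefix-scan trick: increment the prefix's last byte to get an exclusive upper bound that covers every key starting with the prefix. A standalone sketch (prefix_upper_bound is a hypothetical helper for illustration, not part of cosmwasm):

```rust
// Hypothetical helper: exclusive upper bound for a prefix scan, obtained
// by incrementing the prefix's last byte ("atomic_swap" -> "atomic_swaq").
// Assumption: the last byte is below 0xFF, which holds for ASCII namespaces.
fn prefix_upper_bound(prefix: &[u8]) -> Vec<u8> {
    let mut end = prefix.to_vec();
    let last = end.last_mut().expect("non-empty prefix");
    assert!(*last < 0xFF, "last byte must be incrementable");
    *last += 1;
    end
}

fn main() {
    assert_eq!(prefix_upper_bound(b"atomic_swap"), b"atomic_swaq".to_vec());
}
```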

@ethanfrey (Member)

The issue is in ExternalStorage::range. Something about pointers. This is the current setup:

        // the actual code properly creates a region from the input; this is for control
        let start = start.map(|s| Box::new(Region{offset: 50, length: 500, capacity: 500}));
        let start_ptr = match start {
            Some(reg) => &*reg as *const Region as u32,
            None => 0,
        };

which gives us this on the VM side: Region { offset: 1953720684, capacity: 11, length: 11 }

However, if I do the same logic, but without the maps (and only handle the Some case):

        let start = Box::new(Region{offset: 100, length: 200, capacity: 300});
        let start_ptr = &*start as *const Region as u32;

I end up with this on the VM side: Region { offset: 100, capacity: 300, length: 200 }

All these maps and pointer references cause chaos somewhere... I'm going to make all the branches more explicit
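One plausible reading of the chaos (my interpretation, not confirmed by the thread): in the snippet above, `match start { Some(reg) => ... }` moves the Box into the match arm, so it is dropped as soon as the arm ends, and `start_ptr` is left dangling, which would explain the garbage Region seen on the VM side. A standalone sketch of a fix that keeps the owning binding alive for as long as the pointer is used (the `Region` struct here is a simplified stand-in):

```rust
// Simplified stand-in for the wasm Region descriptor.
#[repr(C)]
struct Region {
    offset: u32,
    capacity: u32,
    length: u32,
}

fn main() {
    let start: Option<&[u8]> = Some(b"atomic_swap");

    // Keep the Box alive in a named binding, and match on a reference so
    // the Box is NOT moved into (and dropped at the end of) the match arm.
    let start_region = start.map(|s| {
        Box::new(Region { offset: 50, length: s.len() as u32, capacity: 500 })
    });
    let start_ptr: u32 = match &start_region {
        Some(reg) => &**reg as *const Region as u32,
        None => 0,
    };

    // `start_region` is still live here, so `start_ptr` keeps pointing at
    // a valid Region until `start_region` goes out of scope.
    assert_ne!(start_ptr, 0);
    let reg = start_region.as_deref().unwrap();
    assert_eq!((reg.offset, reg.length, reg.capacity), (50, 11, 500));
}
```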

@ethanfrey (Member)

Solution in #511

@ethanfrey merged commit 4d55816 into master on Aug 25, 2020
@ethanfrey deleted the integ_test_query_list branch on Aug 25, 2020 at 17:47