GH-32240: [C#] Support decompression of IPC format buffers #33603
Conversation
FWIW, the Java implementation does take the approach of separating out the compression dependencies into a new package, with only the interface (and a no-op implementation) in the core package. You do have to know to pass the compression codec factory explicitly in that case, unfortunately.

Thanks @lidavidm, yes I think having a separate package would make sense for dotnet too, and maybe that can be added in a separate PR, so for now I've stripped this back to remove the reflection-based implementation and just have an implementation in the test project. I've also renamed things to be a bit more consistent with the Java API. One question I have is whether it makes sense for the public compression API to live in the `Ipc` namespace?
Java also puts the compression interface in its IPC package. C++ treats it as a more general utility (since it's also used for Parquet, etc.). Strictly in terms of Arrow I think it's fine to be in Ipc.
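For context, end-user code with a separate compression package ends up looking roughly like the sketch below. It is based on the constructor signatures added in this PR and the follow-up `Apache.Arrow.Compression` package; the exact overloads may vary between Arrow versions, and the file name is illustrative.

```csharp
using System.IO;
using Apache.Arrow.Ipc;
using Apache.Arrow.Compression; // separate NuGet package providing the codec implementations

// Sketch: pass an ICompressionCodecFactory so the reader can decompress
// LZ4-Frame/ZSTD body buffers; without one, reading compressed data fails.
using var stream = File.OpenRead("compressed.arrows"); // illustrative path
using var reader = new ArrowStreamReader(stream, new CompressionCodecFactory());
var batch = reader.ReadNextRecordBatch();
```

The design trade-off discussed above is visible here: the core package only knows about the `ICompressionCodecFactory` interface, so the caller has to pass the factory in explicitly, as in the Java implementation.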
This all looks good! I just had a minor question.
```csharp
// First 8 bytes give the uncompressed data length
var uncompressedLength = BinaryPrimitives.ReadInt64LittleEndian(source.Span.Slice(0, 8));
```
This seems sensible, but I couldn't find where in the spec this comes from?
The buffer compression strategy is described at https://github.com/apache/arrow/blob/apache-arrow-11.0.0/format/Message.fbs#L59, I'll add a reference to this in a comment
Ah it was right there. Thanks!
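The framing described in Message.fbs can be sketched in a few lines: each compressed body buffer carries an 8-byte little-endian prefix holding the uncompressed length, with -1 signalling that the buffer was left uncompressed. This is a self-contained illustration, not the Arrow library's code; the class and method names are made up for the example.

```csharp
using System;
using System.Buffers.Binary;

public static class LengthPrefixDemo
{
    // Per format/Message.fbs: a prefix value of -1 means the body buffer
    // was written uncompressed (e.g. when compression would not shrink it).
    public const long NoCompression = -1;

    // Read the 8-byte little-endian uncompressed-length prefix.
    public static long ReadUncompressedLength(ReadOnlyMemory<byte> source) =>
        BinaryPrimitives.ReadInt64LittleEndian(source.Span.Slice(0, 8));

    public static void Main()
    {
        var buffer = new byte[16];
        BinaryPrimitives.WriteInt64LittleEndian(buffer, 1234);
        Console.WriteLine(ReadUncompressedLength(buffer)); // prints 1234
    }
}
```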
I see a script to generate some .arrow files and then also those .arrow files checked in. Did you mean to commit the generated arrow files?

I'd imagine this is only temporary until we allow writing compressed IPC files in C#. Then we can remove those files and simply round trip within the implementation. Does that seem fair?

Yes, I didn't think we'd want to depend on being able to run Python with PyArrow for the dotnet tests to run, but wanted to keep the script there partly as documentation on what should be in the files and so that people could regenerate them if needed. Do you think there's a better way to handle this?

Benchmark runs are scheduled for baseline = 8f537ca and contender = b7fd793. b7fd793 is a master commit associated with this PR. Results will be available as each benchmark for each run completes.

I agree it is nice not to need this.

For a few small files I don't think it's something we need to worry about too much. There is, however, a separate repository, https://github.com/apache/arrow-testing, which stores various test binaries and is already included as a submodule in apache/arrow. If you find you need to generate more files in the future you could consider storing them there.
```csharp
{
    if (stream == null)
        throw new ArgumentNullException(nameof(stream));

    _implementation = new ArrowStreamReaderImplementation(stream, allocator, compressionCodecFactory, leaveOpen);
}

public ArrowStreamReader(ReadOnlyMemory<byte> buffer)
```
Why not add compression support for ArrowMemoryReaderImplementation?
That wasn't intentional, I'd missed that there was a separate implementation for reading from a memory buffer. I'll make a PR to fix this.
PR opened at #34108
…che#33603)

# Which issue does this PR close?

Closes apache#32240 / https://issues.apache.org/jira/browse/ARROW-16921

# What changes are included in this PR?

This PR implements decompression support for Arrow IPC format files and streams in the dotnet/C# library. The main concern raised in the above Jira issue was that we don't want to add new NuGet package dependencies to support decompression formats that won't be needed by most users, so a default `CompressionProvider` implementation has been added that uses reflection to use the `ZstdNet` package for ZSTD decompression and `K4os.Compression.LZ4.Streams` and `CommunityToolkit.HighPerformance` for LZ4 Frame support if they are available. The `netstandard1.3` target has decompression support disabled due to some reflection functionality being missing, and neither `ZstdNet` nor `K4os.Compression.LZ4.Streams` supports `netstandard1.3`. The `ArrowFileReader` and `ArrowStreamReader` constructors accept an `ICompressionProvider` parameter to allow users to provide their own compression provider if they want to use different dependencies.

### Alternatives to consider

An alternative approach that could be considered instead of reflection is to use these extra dependencies as build time dependencies but not make them dependencies of the NuGet package. I tested this out in adamreeve@4544afd and it seems to work reasonably well too but required bumping the version of `System.Runtime.CompilerServices.Unsafe` under the `netstandard2.0` and `netcoreapp3.1` targets. This reduces all the reflection boilerplate but seems pretty hacky as now Apache.Arrow.dll depends on these extra dlls and we rely on the dotnet runtime behaviour of not trying to load them until they're used. So I think the reflection approach is better. Another alternative would be to move decompression support into a separate NuGet package (eg. `Apache.Arrow.Compression`) that depends on `Apache.Arrow` and has an implementation of `ICompressionProvider` that users can pass in to the `ArrowFileReader` constructor, or maybe has a way to register itself with the `Apache.Arrow` package so it only needs to be configured once. That would seem cleaner to me but I'm not sure how much work it would be to set up a whole new package.

# Are these changes tested?

Yes, new unit tests have been added. Test files have been created with a Python script that is included in the PR due to only decompression support being added and not compression support.

# Are there any user-facing changes?

Yes, this implements a new feature but in a backwards compatible way.

* Closes: apache#32240

Authored-by: Adam Reeve <adreeve@gmail.com>
Signed-off-by: Will Jones <willjones127@gmail.com>
…ReadOnlyMemory (#34108) This is a small follow-up to #33603 to support reading a compressed IPC stream from a `ReadOnlyMemory<byte>`, as I missed that this has a separate reader implementation. * Closes: #32240 Authored-by: Adam Reeve <adreeve@gmail.com> Signed-off-by: Weston Pace <weston.pace@gmail.com>
…IPC decompression (#33893)

### Rationale for this change

This further addresses #32240 and is a follow up to PR #33603 to provide an implementation of the `ICompressionCodecFactory` interface in a new `Apache.Arrow.Compression` package. Making this a separate package means users who don't need IPC decompression support don't need to pull in extra dependencies.

### What changes are included in this PR?

Adds a new `Apache.Arrow.Compression` package and moves the existing compression codec implementations used for testing into this package.

### Are these changes tested?

There are unit tests verifying the decompression support, but this also affects the release scripts and I'm not sure how to fully test these.

### Are there any user-facing changes?

Yes, this adds a new package users can install for IPC decompression support, so documentation has been updated.

* Closes: #32240

Lead-authored-by: Adam Reeve <adreeve@gmail.com>
Co-authored-by: Eric Erhardt <eric.erhardt@microsoft.com>
Signed-off-by: Eric Erhardt <eric.erhardt@microsoft.com>
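To give a sense of what implementing the interface involves for users who bring their own dependencies, here is a hedged sketch of a ZSTD-only factory. The interface shapes (`CreateCodec`, `Decompress(source, destination)`) follow this PR's API but should be treated as assumptions that may differ in later Arrow versions; `ZstdNet.Decompressor` is the third-party library mentioned in the PR description, and the class names here are made up for illustration.

```csharp
using System;
using Apache.Arrow.Ipc;

// Illustrative implementation supporting only ZSTD, backed by the ZstdNet package.
public sealed class ZstdOnlyCodecFactory : ICompressionCodecFactory
{
    public ICompressionCodec CreateCodec(CompressionCodecType codecType) =>
        codecType == CompressionCodecType.Zstd
            ? new ZstdCodec()
            : throw new NotSupportedException($"Codec {codecType} is not supported");

    private sealed class ZstdCodec : ICompressionCodec
    {
        private readonly ZstdNet.Decompressor _decompressor = new ZstdNet.Decompressor();

        public int Decompress(ReadOnlyMemory<byte> source, Memory<byte> destination)
        {
            // Decompress into a new array, then copy into the caller-provided
            // destination, which was sized from the uncompressed-length prefix.
            byte[] decompressed = _decompressor.Unwrap(source.ToArray());
            decompressed.AsSpan().CopyTo(destination.Span);
            return decompressed.Length;
        }

        public void Dispose() => _decompressor.Dispose();
    }
}
```

A reader constructed with this factory would fail cleanly on LZ4-Frame data rather than silently misreading it, which is the intended behaviour when a codec is unavailable.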
Which issue does this PR close?
Closes #32240 / https://issues.apache.org/jira/browse/ARROW-16921
What changes are included in this PR?
This PR implements decompression support for Arrow IPC format files and streams in the dotnet/C# library.
This PR originally provided a reflection-based implementation to avoid pulling in new dependencies but now only adds a new `ICompressionCodecFactory` interface that users must implement for decompression support, although I intend to add a new package that implements this in #33893. The original description is below.

Are these changes tested?
Yes, new unit tests have been added. Test files have been created with a Python script that is included in the PR due to only decompression support being added and not compression support.
Are there any user-facing changes?
Yes, this implements a new feature but in a backwards compatible way.