[BEAM-14383] Improve "FailedRows" errors returned by beam.io.WriteToBigQuery #17517
Conversation
Can one of the admins verify this patch?
R: @jrmccluskey
Codecov Report
@@ Coverage Diff @@
## master #17517 +/- ##
==========================================
+ Coverage 73.83% 73.90% +0.07%
==========================================
Files 690 691 +1
Lines 90830 91259 +429
==========================================
+ Hits 67065 67446 +381
- Misses 22556 22604 +48
Partials 1209 1209
Run PythonLint PreCommit
@TheNeuralBit I've implemented a test based on the other tests in the
Run Python 3.7 PostCommit
I haven't forgotten about this. It's in my queue : )
Thanks @Firlej! I triggered a CI check that should exercise this new test. This looks good; I guess my only concern is that this could be a breaking change for existing users, e.g. if they're unpacking the current result like
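The unpacking concern can be illustrated with plain Python. This is a hypothetical sketch (the table name and rows below are made-up placeholders, not taken from the PR): user code written against the current two-element tuples raises a `ValueError` once a third element is added.

```python
# Hypothetical FailedRows entries, before and after the proposed change.
old_entries = [("project:dataset.table", {"id": 1})]                  # (table, row)
new_entries = [("project:dataset.table", {"id": 1}, "schema error")]  # (table, row, reason)

# Works against the current two-element format.
for table, row in old_entries:
    print(table, row)

# Existing user code, unchanged, against the new three-element format.
try:
    for table, row in new_entries:
        print(table, row)
except ValueError as err:
    print(err)  # too many values to unpack (expected 2)
```

This is the one usage pattern the change breaks; code that indexes the tuple (`entry[0]`, `entry[1]`) keeps working.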
@TheNeuralBit I updated
I'll agree with Brian and say that we can accept this. It only breaks one particular usage, and it adds to the functionality. LGTM.
It looks like we didn't get a green run on the Python PostCommit before merging; the new test is failing at HEAD. I filed BEAM-14447 to track the failure. Could you take a look @Firlej? If it's not quick to diagnose and fix, we might just roll back this PR to preserve test signal. It's easy enough to roll it forward with a fix once we figure it out.
my bad. I am adding a fix here: #17584
I'm checking it RN
@pabloem I see you refactored and simplified the test. I guess it was needlessly complicated. In hindsight I shouldn't have used a literal string; I don't understand why it threw a

Additionally, I see this line throws an error according to the testReport posted by @TheNeuralBit in BEAM-14447. That test shows exactly how this is a breaking change :D
ah makes sense. I'll fix that as well. |
[BEAM-14447] Revert "Merge pull request #17517 from [BEAM-14383] Improve "FailedRows" errors returned by beam.io.WriteToBigQuery". This reverts commit 3587820.
A `WriteToBigQuery` pipeline returns `errors` when trying to insert rows that do not match the BigQuery table schema. `errors` is a dictionary that contains one `FailedRows` key. `FailedRows` is a list of tuples where each tuple has two elements: the BigQuery table name and the row that didn't match the schema.

This can be verified by running the BigQueryIO deadletter pattern: https://beam.apache.org/documentation/patterns/bigqueryio/

Using the template approach I can print the failed rows in a pipeline. When running the job, the logger simultaneously prints out the reason why the rows were invalid.
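For reference, the structure described above can be sketched as a plain dictionary. This is an illustrative sketch only; the table name and row are made-up placeholders, not real WriteToBigQuery output.

```python
# Sketch of the current return value shape: a dict with one "FailedRows"
# key holding (table, row) tuples.
errors = {
    "FailedRows": [
        ("my-project:my_dataset.my_table", {"name": "Alice", "age": "not-a-number"}),
    ],
}

for table, row in errors["FailedRows"]:
    print(f"insert into {table} failed for row {row}")
```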
The reason why the row is invalid should also be included in the tuple, in addition to the BigQuery table name and the raw row. This way the next pipeline could, e.g., insert the invalid rows into a different BigQuery table with a schema.
The whole pipeline implementation could look something like this:
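The original code block was stripped from this page, but the proposed flow can be sketched in plain Python without a Beam dependency. Everything here is hypothetical (`SCHEMA`, `check_row`, and `write_rows` are stand-ins for the real BigQuery write, not Beam APIs): rows that fail a schema check are collected as (table, row, reason) tuples, which a follow-up step could write to a deadletter table.

```python
# Hypothetical schema: field name -> expected Python type.
SCHEMA = {"name": str, "age": int}

def check_row(row):
    """Return None if the row matches SCHEMA, else a reason string."""
    for field, typ in SCHEMA.items():
        if field not in row:
            return f"missing field {field!r}"
        if not isinstance(row[field], typ):
            return f"field {field!r} is not {typ.__name__}"
    return None

def write_rows(rows, table="project:dataset.table"):
    """Split rows into accepted ones and (table, row, reason) failure tuples."""
    good, failed = [], []
    for row in rows:
        reason = check_row(row)
        if reason is None:
            good.append(row)
        else:
            # The reason string is the third element this PR proposes to add.
            failed.append((table, row, reason))
    return good, failed

good, failed = write_rows([
    {"name": "Alice", "age": 30},
    {"name": "Bob", "age": "oops"},
])
print(failed)
```

With the reason attached, a downstream step could route `failed` to a separate errors table instead of only logging it.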
During my research I found a couple of alternate solutions, but they are more complex than they need to be. That's why I explored the Beam source code and found the solution to be an easy and simple change.
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

- Choose reviewer(s) and mention them in a comment (R: @username).
- Format the pull request title like [BEAM-XXX] Fixes bug in ApproximateQuantiles, where you replace BEAM-XXX with the appropriate JIRA issue, if applicable. This will automatically link the pull request to the issue.
- Update CHANGES.md with noteworthy changes.

See the Contributor Guide for more tips on how to make the review process smoother.