
Fix a crash involving a view on a hypertable #6796

Merged: 1 commit, Apr 2, 2024
1 change: 1 addition & 0 deletions .unreleased/bugfix_6796
@@ -0,0 +1 @@
Fixes: #6796 Fix a crash involving a view on a hypertable
7 changes: 4 additions & 3 deletions src/nodes/chunk_append/chunk_append.c
@@ -348,9 +348,6 @@ ts_chunk_append_path_create(PlannerInfo *root, RelOptInfo *rel, Hypertable *ht,
List *merge_childs = NIL;
MergeAppendPath *append;

if (flat == NULL)
break;

/*
* For each lc_oid, there will be 0, 1, or 2 matches in flat_list: 0 matches
* if child was pruned, 1 match if the chunk is uncompressed or fully compressed,
@@ -363,6 +360,10 @@ ts_chunk_append_path_create(PlannerInfo *root, RelOptInfo *rel, Hypertable *ht,
#ifdef USE_ASSERT_CHECKING
int nmatches = 0;
#endif
/* Before entering the "DO" loop, check for a valid path entry */
Review comment (Contributor): I am wondering, is the identical check in line 351 still required? Or can it be removed now?

Reply (Contributor, Author): @jnidzwetzki ah, yes. Probably not required. Changed.

if (flat == NULL)
break;

do
{
Path *child = (Path *) lfirst(flat);
45 changes: 45 additions & 0 deletions test/expected/chunk_utils.out
@@ -1556,3 +1556,48 @@ SELECT drop_chunks('drop_chunk_test_ts', created_after => INTERVAL '1 hour', cre
_timescaledb_internal._hyper_4_20_chunk
(2 rows)

-- Test views on top of hypertables
CREATE TABLE view_test (project_id INT, ts TIMESTAMPTZ NOT NULL);
SELECT create_hypertable('view_test', by_range('ts', INTERVAL '1 day'));
create_hypertable
-------------------
(17,t)
(1 row)

-- exactly one partition per project_id
Review comment (Contributor): Nit. Suggested change:
- -- exactly one partition per project_id
+ -- Exactly one partition per project_id

SELECT * FROM add_dimension('view_test', 'project_id', chunk_time_interval => 1); -- exactly one partition per project; works for *integer* types
NOTICE: adding not-null constraint to column "project_id"
dimension_id | schema_name | table_name | column_name | created
--------------+-------------+------------+-------------+---------
22 | try_schema | view_test | project_id | t
(1 row)

INSERT INTO view_test (project_id, ts)
SELECT g % 25 + 1 AS project_id, i.ts + (g * interval '1 week') / i.total AS ts
FROM (SELECT timestamptz '2024-01-01 00:00:00+0', 600) i(ts, total),
generate_series(1, i.total) g;
-- Create a view on top of this hypertable
CREATE VIEW test_view_part_few AS SELECT project_id,
ts
FROM view_test
WHERE project_id = ANY (ARRAY[5, 10, 15]);
-- Complicated query on a view involving a range check and a sort
SELECT * FROM test_view_part_few WHERE ts BETWEEN '2024-01-04 00:00:00+00' AND '2024-01-05 00:00:00' ORDER BY ts LIMIT 1000;
project_id | ts
------------+------------------------------
10 | Wed Jan 03 16:31:12 2024 PST
15 | Wed Jan 03 17:55:12 2024 PST
5 | Wed Jan 03 22:07:12 2024 PST
10 | Wed Jan 03 23:31:12 2024 PST
15 | Thu Jan 04 00:55:12 2024 PST
5 | Thu Jan 04 05:07:12 2024 PST
10 | Thu Jan 04 06:31:12 2024 PST
15 | Thu Jan 04 07:55:12 2024 PST
5 | Thu Jan 04 12:07:12 2024 PST
10 | Thu Jan 04 13:31:12 2024 PST
15 | Thu Jan 04 14:55:12 2024 PST
5 | Thu Jan 04 19:07:12 2024 PST
10 | Thu Jan 04 20:31:12 2024 PST
15 | Thu Jan 04 21:55:12 2024 PST
(14 rows)

17 changes: 17 additions & 0 deletions test/sql/chunk_utils.sql
@@ -655,3 +655,20 @@ SELECT show_chunks('drop_chunk_test_ts');
-- timestamp
SELECT show_chunks('drop_chunk_test_ts', created_after => now() - INTERVAL '1 hour', created_before => now());
SELECT drop_chunks('drop_chunk_test_ts', created_after => INTERVAL '1 hour', created_before => now());

-- Test views on top of hypertables
CREATE TABLE view_test (project_id INT, ts TIMESTAMPTZ NOT NULL);
SELECT create_hypertable('view_test', by_range('ts', INTERVAL '1 day'));
-- exactly one partition per project_id
SELECT * FROM add_dimension('view_test', 'project_id', chunk_time_interval => 1); -- exactly one partition per project; works for *integer* types
INSERT INTO view_test (project_id, ts)
SELECT g % 25 + 1 AS project_id, i.ts + (g * interval '1 week') / i.total AS ts
FROM (SELECT timestamptz '2024-01-01 00:00:00+0', 600) i(ts, total),
generate_series(1, i.total) g;
-- Create a view on top of this hypertable
CREATE VIEW test_view_part_few AS SELECT project_id,
ts
FROM view_test
WHERE project_id = ANY (ARRAY[5, 10, 15]);
-- Complicated query on a view involving a range check and a sort
SELECT * FROM test_view_part_few WHERE ts BETWEEN '2024-01-04 00:00:00+00' AND '2024-01-05 00:00:00' ORDER BY ts LIMIT 1000;
Review comment (Contributor): Could you also add the explain plan for this query?

Review comment (Contributor): It seems a ChunkAppend plan is ultimately not chosen, but the PR addresses the issue, so approving.
