Add documentation for streaming usecase #9070

Merged · 9 commits · Jan 31, 2024
73 changes: 73 additions & 0 deletions datafusion-examples/examples/csv_sql_streaming.rs
@@ -0,0 +1,73 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

use datafusion::common::test_util::datafusion_test_data;
use datafusion::error::Result;
use datafusion::prelude::*;
use datafusion_expr::expr::Sort;

/// This example demonstrates executing a simple query against an Arrow data source (CSV) and
/// fetching results with streaming aggregation and a streaming window
#[tokio::main]
async fn main() -> Result<()> {
    // create local execution context
    let ctx = SessionContext::new();

    let testdata = datafusion_test_data();

    // The file is ordered by ts ASC. This is an invariant; it is the user's
    // responsibility to make sure that the file indeed satisfies this condition.
    let sort_expr = vec![Expr::Sort(Sort {
        expr: Box::new(Expr::Column(Column::from_name("ts"))),
        asc: true,
        nulls_first: true,
    })];

    // register csv file with the execution context
    ctx.register_csv(
        "ordered_table",
        &format!("{testdata}/window_1.csv"),
        CsvReadOptions::new().file_sort_order(vec![sort_expr]),
    )
    .await?;

    // execute the query
    // The following query can be executed with unbounded sources because the GROUP BY
    // expression (ts) is already ordered at the source.
    let df = ctx
        .sql(
            "SELECT ts, MIN(inc_col), MAX(inc_col) \
             FROM ordered_table \
             GROUP BY ts",
        )
        .await?;

    df.show().await?;

    // execute the query
    // The following query can be executed with unbounded sources because the window
    // executor can compute its result in a streaming fashion when its required
    // ordering is already satisfied at the source.
    let df = ctx
        .sql(
            "SELECT ts, SUM(inc_col) OVER(ORDER BY ts ASC) \
             FROM ordered_table",
        )
        .await?;

    df.show().await?;

    Ok(())
}
20 changes: 20 additions & 0 deletions datafusion/common/src/test_util.rs
@@ -144,6 +144,26 @@ macro_rules! assert_not_contains {
};
}

/// Returns the datafusion test data directory, which is by default rooted at `datafusion/core/tests/data`.
///
/// The default can be overridden by the optional environment
/// variable `DATAFUSION_TEST_DATA`.
///
/// Panics if the directory cannot be found.
///
/// Example:
/// ```
/// let testdata = datafusion_common::test_util::datafusion_test_data();
/// let csvdata = format!("{}/window_1.csv", testdata);
/// assert!(std::path::PathBuf::from(csvdata).exists());
/// ```
pub fn datafusion_test_data() -> String {
    match get_data_dir("DATAFUSION_TEST_DATA", "../../datafusion/core/tests/data") {
        Ok(pb) => pb.display().to_string(),
        Err(err) => panic!("failed to get datafusion test data dir: {err}"),
    }
}
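The environment-variable override described above can be exercised from a shell; the path below is a placeholder for illustration, not a real data directory:

```shell
# Redirect the test-data lookup to a custom directory (placeholder path)
export DATAFUSION_TEST_DATA=/tmp/df-data
# Anything launched from this shell (e.g. the examples above) now resolves
# test data against that directory instead of the in-repo default
echo "$DATAFUSION_TEST_DATA"
```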

/// Returns the arrow test data directory, which is by default stored
/// in a git submodule rooted at `testing/data`.
///
12 changes: 11 additions & 1 deletion docs/source/user-guide/sql/ddl.md
@@ -56,7 +56,7 @@
file system or remote object store as a named table which can be queried.
The supported syntax is:

```
CREATE EXTERNAL TABLE
CREATE [UNBOUNDED] EXTERNAL TABLE
[ IF NOT EXISTS ]
<TABLE_NAME>[ (<column_definition>) ]
STORED AS <file_type>
@@ -147,6 +147,16 @@
WITH HEADER ROW
LOCATION '/path/to/directory/of/files';
```

Using the `CREATE UNBOUNDED EXTERNAL TABLE` SQL statement, you can register unbounded data sources, such as the following:

```sql
CREATE UNBOUNDED EXTERNAL TABLE taxi
STORED AS PARQUET
LOCATION '/mnt/nyctaxi/tripdata.parquet';
```

DataFusion tries to execute queries that refer to unbounded sources in a streaming fashion. If this is not possible for the given query, DataFusion's plan generation fails with an error stating that the query cannot be executed in streaming mode. Note that the queries executable against unbounded sources (i.e. in streaming mode) are a subset of those executable against bounded sources: a query that fails against an unbounded source may still work against a bounded one.
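As a sketch of this distinction, consider two queries against a hypothetical unbounded table `events` whose declared ordering is `event_time ASC` (the table name and columns here are illustrative, not part of this PR):

```sql
-- Streamable: the window's required ordering matches the declared source
-- order, so results can be emitted incrementally as rows arrive
SELECT event_time,
       SUM(amount) OVER (ORDER BY event_time ASC) AS running_total
FROM events;

-- Not streamable: sorting on an unordered column would require consuming
-- the entire (unbounded) input first, so plan generation fails
SELECT * FROM events ORDER BY amount;
```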

When creating an output from a data source that is already ordered by
an expression, you can pre-specify the order of the data using the
`WITH ORDER` clause. This applies even if the expression used for