Spec: Support geo type #10981
base: main
Conversation
Force-pushed from a096921 to 19f24a4
format/spec.md
Outdated
XZ2 is based on the paper [XZ-Ordering: A Space-Filling Curve for Objects with Spatial Extensions].

Notes:
1. Resolution must be a positive integer. Defaults to TODO
@jiayuasu do you have any suggestion for default here?
12 sounds fine. CC @Kontinuation
GeoMesa uses a high XZ2 resolution when working with key-value stores such as Accumulo and HBase, but it is not appropriate to always use a resolution that high for partitioning data (for instance, GeoMesa on FileSystems).
XZ2 resolution 11 to 12 works for city-scale data, but will generate too many partitions for country-scale or world-scale data. I'd like a smaller default value, such as 7, to be safe on various kinds of data.
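For a sense of scale on this trade-off: the XZ2 curve at resolution g enumerates quadtree prefixes of length 0 through g, so the number of distinct index values is (4^(g+1) - 1) / 3. A back-of-the-envelope sketch (not GeoMesa code; `xz2_cell_count` is a hypothetical helper):

```python
# Rough sketch of XZ2 index-space size vs. resolution (assumption: the
# curve enumerates all quadtree prefixes of length 0..g).
def xz2_cell_count(resolution: int) -> int:
    if resolution < 1:
        raise ValueError("resolution must be a positive integer")
    # sum(4**i for i in 0..g) == (4**(g+1) - 1) // 3
    return (4 ** (resolution + 1) - 1) // 3

for g in (7, 11, 12):
    print(g, xz2_cell_count(g))
# resolution 7 gives ~22K possible cells; resolution 12 gives ~22M,
# which is why a high default can over-partition world-scale data.
```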
format/spec.md
Outdated
| **`struct`** | `group` | | |
| **`list`** | `3-level list` | `LIST` | See Parquet docs for 3-level representation. |
| **`map`** | `3-level map` | `MAP` | See Parquet docs for 3-level representation. |
| **`geometry`** | `binary` | `GEOMETRY` | WKB format, see Appendix G. Logical type annotation optional for supported Parquet format versions [1]. |
I could add this section later too, once it's implemented (same for ORC below).
[Appendix G](#appendix-g)
Force-pushed from 19f24a4 to d7096e4
format/spec.md
Outdated
| _optional_ | _optional_ | **`110 null_value_counts`** | `map<121: int, 122: long>` | Map from column id to number of null values in the column |
| _optional_ | _optional_ | **`137 nan_value_counts`** | `map<138: int, 139: long>` | Map from column id to number of NaN values in the column |
| _optional_ | _optional_ | **`111 distinct_counts`** | `map<123: int, 124: long>` | Map from column id to number of distinct values in the column; distinct counts must be derived using values in the file by counting or using sketches, but not using methods like merging existing distinct counts |
| _optional_ | _optional_ | **`125 lower_bounds`** | `map<126: int, 127: binary>` | Map from column id to lower bound in the column serialized as binary [1]. Each value must be less than or equal to all non-null, non-NaN values in the column for the file [2]. For Geometry type, this is a Point composed of the min value of each dimension in all Points in the Geometry. |
How does this work? Does Iceberg need to interpret each WKB to produce this value? Will it be provided by Parquet?
Yes, once we switch to Geometry logical type from Parquet we will get these stats from Parquet.
Should we mention that it is the Parquet type `BoundingBox`?
yea will add a footnote here
@szehon-ho BTW, the reason we had separate bbox statistics in havasu was to be compatible with existing Iceberg tables. Since this adds native geometry support, `lower_bound` and `upper_bound` are good choices.
I thought that the bounds were stored as WKB-encoded points (according to Appendix D and G), and WKB encodes dimensions of geometries in the header. It is more consistent to make bound values the same type/representation as the field data type.
More sophisticated coverings in Parquet statistics cannot be easily mapped to `lower_bounds` and `upper_bounds`, so do we simply use the `bbox` statistics and ignore the `coverings` for now?

I think it is ok, since these two bounds are optional, and in case they are not present, it still follows the spec.
Does it support different dimensions like XY, XYZ, XYM, XYZM? If yes, how can we tell if the binary is for XYZ or XYM?
We should say For Geometry type, this is a WKB-encoded Point composed of the min value of each dimension in all Points in the Geometry.
Then we don't have to worry about the Z and M value.
CC @szehon-ho
We should say For Geometry type, this is a WKB-encoded Point composed of the min value of each dimension in all Points in the Geometry. Then we don't have to worry about the Z and M value.
@jiayuasu @Kontinuation @wgtmac Done, thanks.
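To make the agreed wording concrete, here is a hypothetical sketch (not Iceberg or Parquet library code; `lower_bound_wkb` is an illustrative helper) of deriving such a bound for the XY case: take the per-dimension minimum over every vertex in the file and serialize it with the standard WKB point layout.

```python
import struct

# Illustrative only: compute a lower-bound value for a geometry column as a
# WKB-encoded XY point whose coordinates are the per-dimension minimum over
# all component points of all geometry objects in the file.
def lower_bound_wkb(points):
    min_x = min(x for x, _ in points)
    min_y = min(y for _, y in points)
    # WKB point: 1-byte byte-order flag (1 = little-endian),
    # uint32 geometry type (1 = Point), then two float64 coordinates.
    return struct.pack("<BIdd", 1, 1, min_x, min_y)

vertices = [(2.0, 5.0), (-1.5, 7.0), (3.0, -4.0)]
print(lower_bound_wkb(vertices).hex())
```

The upper bound would be the symmetric per-dimension maximum.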
Should we mention that it is the parquet type BoundingBox?
Actually, looking again after some time, I'm not sure how to mention this here, as that is file-type specific. This is an optional field, only set if the file type is Parquet and `bounding_box` is set, but that's an implementation detail.
format/spec.md
Outdated
| **`void`** | Always produces `null` | Any | Source type or `int` |
| Transform name | Description | Source types | Result type |
|-------------------|--------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|----------------------|
| **`identity`** | Source value, unmodified | Any | Source type |
Except for `geometry`?
Maybe that's fine if it is comparable, but practically people will always use `xz2`, right? I'm not sure, but I wonder if there are implications, e.g., too expensive, or super high cardinality, such that we don't recommend users use the original geo value as the partition spec.
Yea, I think it's possible to do (it's just the WKB value after all); you are right, not sure if there's any good use case. We have to get the WKB in any case; I am not sure if it's that expensive, but can check. But I guess the cardinality is the same consideration as for any other type (uuid for example), and we let the user choose?
format/spec.md
Outdated
| Transform name | Description | Source types | Result type |
|-------------------|--------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|----------------------|
| **`identity`** | Source value, unmodified | Any | Source type |
| **`bucket[N]`** | Hash of value, mod `N` (see below) | `int`, `long`, `decimal`, `date`, `time`, `timestamp`, `timestamptz`, `timestamp_ns`, `timestamptz_ns`, `string`, `uuid`, `fixed`, `binary` | `int` |
Are we going to support bucketing on GEO?
I think it's possible; again, not sure of the utility. Geo boils down to just WKB bytes.
I feel that the argument for `identity` can apply here as well. In that case, we can support it, but it's the users' call to use it or not.
We may want to change this to be like `identity`, using `Any except [...]`.
I would not include geo as a source column for bucketing because there is not a clear definition of equality for geo. The hash would depend on the structure of the object, and weird things happen when two objects are "equal" (for some definition) but have different hash values.
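The equality concern above can be shown directly: WKB permits either byte order, so one and the same point has two valid encodings that hash to different values. (The hash function below is arbitrary; this is an illustration, not a proposed Iceberg transform.)

```python
import hashlib
import struct

# The same point (1.0, 2.0) in two spec-valid WKB encodings: WKB allows a
# leading byte-order flag of 1 (little-endian) or 0 (big-endian).
little = struct.pack("<BIdd", 1, 1, 1.0, 2.0)
big = struct.pack(">BIdd", 0, 1, 1.0, 2.0)

# Equal geometries, different bytes -> different hash -> different bucket.
assert little != big
print(hashlib.sha256(little).hexdigest()[:8])
print(hashlib.sha256(big).hexdigest()[:8])
```

Beyond byte order, the same ring can also start at a different vertex, giving yet more byte-distinct encodings of an "equal" shape.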
format/spec.md
Outdated
@@ -198,6 +199,9 @@ Notes:
- Timestamp values _with time zone_ represent a point in time: values are stored as UTC and do not retain a source time zone (`2017-11-16 17:10:34 PST` is stored/retrieved as `2017-11-17 01:10:34 UTC` and these values are considered identical).
- Timestamp values _without time zone_ represent a date and time of day regardless of zone: the time value is independent of zone adjustments (`2017-11-16 17:10:34` is always retrieved as `2017-11-16 17:10:34`).
3. Character strings must be stored as UTF-8 encoded byte arrays.
4. Coordinate Reference System, i.e. mapping of how coordinates refer to precise locations on earth. Defaults to "OGC:CRS84". Fixed and cannot be changed by schema evolution.
- When we say `OGC:CRS84`, the value you put in this field should be the following PROJJSON string (see GeoParquet spec):
{
  "$schema": "https://proj.org/schemas/v0.5/projjson.schema.json",
  "type": "GeographicCRS",
  "name": "WGS 84 longitude-latitude",
  "datum": {
    "type": "GeodeticReferenceFrame",
    "name": "World Geodetic System 1984",
    "ellipsoid": {
      "name": "WGS 84",
      "semi_major_axis": 6378137,
      "inverse_flattening": 298.257223563
    }
  },
  "coordinate_system": {
    "subtype": "ellipsoidal",
    "axis": [
      {
        "name": "Geodetic longitude",
        "abbreviation": "Lon",
        "direction": "east",
        "unit": "degree"
      },
      {
        "name": "Geodetic latitude",
        "abbreviation": "Lat",
        "direction": "north",
        "unit": "degree"
      }
    ]
  },
  "id": {
    "authority": "OGC",
    "code": "CRS84"
  }
}
- Both `crs` and `crs_kind` fields are optional, but when the `crs` field is present, the `crs_kind` field must be present. In this case, since we hard-code this `crs` field in this phase, we need to set the `crs_kind` field (string) to `PROJJSON`.
Should we include this example in the parquet spec as well?
BTW, should we advise accepted forms or values for CRS and Edges?
@wgtmac We should add this value to the Parquet spec for sure. CC @zhangfengcdt
@szehon-ho There is another situation mentioned in the GeoParquet spec: if the CRS field is present but its value is null, it means the data is in an unknown CRS. This happens sometimes because the writer somehow cannot find or loses the CRS info. Do we want to support this? I think we can use the empty string to cover this case.
accepted forms or values for CRS and Edges
If we borrow the conclusion from the Parquet Geometry proposal, then the C, T, E fields are as follows:
C is a string. Based on what I understand from this PR, @szehon-ho made this field a required field, which is fine.
T is optional and a string. Currently, it only allows the value `PROJJSON`. When it is not provided, it defaults to `PROJJSON` too.
E is a string. The only allowed value is `PLANAR` in this phase. Based on what I understand from this PR, @szehon-ho made this field a required field, which is fine. @szehon-ho According to our meeting with Snowflake, I think maybe we can allow `SPHERICAL` too? We can add in the spec that it is currently unsafe to perform partition transforms / bounding box filtering when E = `SPHERICAL`, because they are built based on `PLANAR` edges. It is the reader's responsibility to decide whether to use partition transforms / bounding box filtering.
BTW, the parquet-format uses `crs_encoding` instead of `crs_kind`. Do we want to unify the names as well?
What is the expectation for the `C`, `T`, and `E` fields in the Parquet/ORC data files? Are they required to be set by Iceberg? From the Parquet spec, only `E` is required; both `C` and `T` are optional.
- Does it make sense to include the following CRS84 example from the Parquet Geometry PR?
/**
* Coordinate Reference System, i.e. mapping of how coordinates refer to
* precise locations on earth. Writers are not required to set this field.
* Once crs is set, crs_encoding field below MUST be set together.
* For example, "OGC:CRS84" can be set in the form of PROJJSON as below:
* {
* "$schema": "https://proj.org/schemas/v0.5/projjson.schema.json",
* "type": "GeographicCRS",
* "name": "WGS 84 longitude-latitude",
* "datum": {
* "type": "GeodeticReferenceFrame",
* "name": "World Geodetic System 1984",
* "ellipsoid": {
* "name": "WGS 84",
* "semi_major_axis": 6378137,
* "inverse_flattening": 298.257223563
* }
* },
* "coordinate_system": {
* "subtype": "ellipsoidal",
* "axis": [
* {
* "name": "Geodetic longitude",
* "abbreviation": "Lon",
* "direction": "east",
* "unit": "degree"
* },
* {
* "name": "Geodetic latitude",
* "abbreviation": "Lat",
* "direction": "north",
* "unit": "degree"
* }
* ]
* },
* "id": {
* "authority": "OGC",
* "code": "CRS84"
* }
* }
*/
- It is ok to have them all fixed to default values for this phase.
@jiayuasu I put it in the example (if you render the page). Let me know if it's not what you meant.
format/spec.md
Outdated
@@ -190,6 +190,7 @@ Supported primitive types are defined in the table below. Primitive types added
| | **`uuid`** | Universally unique identifiers | Should use 16-byte fixed |
| | **`fixed(L)`** | Fixed-length byte array of length L | |
| | **`binary`** | Arbitrary-length byte array | |
| [v3](#version-3) | **`geometry(C, T, E)`** | An object of the simple feature geometry model as defined by Appendix G; This may be any of the Geometry subclasses defined therein; coordinate reference system C [4], coordinate reference system type T [5], edges E [6] | C, T, E are fixed. Encoded as WKB, see Appendix G. |
What syntax to use for an engine to create the geometry type? Does it require C/T/E to appear in the type?
Related to above comment, I think these will all be optional (take a default value if not specified).
Hi all, FYI: I have unfortunately encountered some problems while remote and probably can't update this; I will come back to this after I get back home in two weeks.
Force-pushed from 75326dc to 0591f68
@jiayuasu @Kontinuation @wgtmac @flyrain @rdblue Sorry for the delay, as I only got access now. Updated the PR.
format/spec.md
Outdated
Notes:
1. Timestamp values _without time zone_ represent a date and time of day regardless of zone: the time value is independent of zone adjustments (`2017-11-16 17:10:34` is always retrieved as `2017-11-16 17:10:34`).
2. Timestamp values _with time zone_ represent a point in time: values are stored as UTC and do not retain a source time zone (`2017-11-16 17:10:34 PST` is stored/retrieved as `2017-11-17 01:10:34 UTC` and these values are considered identical).
3. Character strings must be stored as UTF-8 encoded byte arrays.
4. Crs (coordinate reference system) is a mapping of how coordinates refer to precise locations on earth. Defaults to "OGC:CRS84". Fixed and cannot be changed by schema evolution.
5. Crs-encoding (coordinate reference system encoding) is the type of the crs field. Must be set if crs is set. Defaults to "PROJJSON". Fixed and cannot be changed by schema evolution.
6. Edges is the interpretation of non-point geometries in a geometry object, i.e. whether an edge between two points represents a straight cartesian line or the shortest line on the sphere. Defaults to "planar". Fixed and cannot be changed by schema evolution.
Can we maybe explicitly mention here that both "planar" and "spherical" are supported as edge type enum values?
@dmitrykoval I was debating this.
I guess we talked about it before, but in the Java reference implementation we cannot easily do pruning (file level, row level, or partition level) because the JTS library and XZ2 only support non-spherical. We would need new metrics types, new Java libraries, and new partition transform proposals if we wanted to support it in the Java reference implementation.
But if we want to support it, I'm ok to list it here and have checks to just skip pruning for spherical geometry columns.
@flyrain @jiayuasu @Kontinuation does it make sense?
I see. I think if "planar" is the default edge type, then there shouldn't be many changes to the planar geometry code path, except for additional checks to skip some partitioning/pruning cases, right?
Regarding the reference implementation of the "spherical" type, do we need to fully support it from day one, or can we maybe mark it as optional in the initial version of the spec? For example, it would work if the engine supports it, but by default, we would fall back to the planar edge type?
We could list `spherical` as an allowed edge type here. Maybe just mark that it is not safe to perform partition transforms or lower_bound/upper_bound filtering when the edge type is `spherical`. We did the same in the Parquet Geometry PR.
Yea, forgot to mention explicitly that in Iceberg, pruning is always an optional feature for reads, so no issue.
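To sketch what "skip pruning for spherical" could look like in practice (hypothetical code, not the reference implementation; `may_contain` is an illustrative helper): file-level pruning compares the file's lower/upper bounding box against the query window, and simply degrades to "may contain" when edges are not planar.

```python
# Illustrative file-pruning predicate: a file may be skipped only when its
# [lower, upper] bounding box provably misses the query window. For
# edges = "spherical" the planar box is unsound (e.g. geodesics and
# antimeridian-crossing shapes), so we conservatively keep the file.
def may_contain(file_lower, file_upper, query_lower, query_upper, edges="planar"):
    if edges != "planar":
        return True  # cannot safely prune; always read the file
    return all(fl <= qu and ql <= fu
               for fl, fu, ql, qu in zip(file_lower, file_upper,
                                         query_lower, query_upper))

print(may_contain((0, 0), (10, 10), (5, 5), (20, 20)))             # boxes overlap
print(may_contain((0, 0), (10, 10), (11, 11), (20, 20)))           # disjoint, prunable
print(may_contain((0, 0), (10, 10), (11, 11), (20, 20), "spherical"))
```

Since pruning is optional for reads, returning `True` for spherical columns is always spec-compliant.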
format/spec.md
Outdated
| _optional_ | _optional_ | **`110 null_value_counts`** | `map<121: int, 122: long>` | Map from column id to number of null values in the column |
| _optional_ | _optional_ | **`137 nan_value_counts`** | `map<138: int, 139: long>` | Map from column id to number of NaN values in the column |
| _optional_ | _optional_ | **`111 distinct_counts`** | `map<123: int, 124: long>` | Map from column id to number of distinct values in the column; distinct counts must be derived using values in the file by counting or using sketches, but not using methods like merging existing distinct counts |
| _optional_ | _optional_ | **`125 lower_bounds`** | `map<126: int, 127: binary>` | Map from column id to lower bound in the column serialized as binary [1]. Each value must be less than or equal to all non-null, non-NaN values in the column for the file [2]. For geometry type, this is a WKB-encoded point composed of the min value of each dimension among all component points of all geometry objects for the file. |
For geometry type, this is a WKB-encoded point composed of the min value of each dimension among all component points of all geometry objects for the file.
As we are finishing the PoC on the Parquet side, the remaining issue is what value to write to the min_value/max_value fields of statistics and the page index. To give some context, Parquet requires the min_value/max_value fields to be set for the page index, and statistics are used to generate the page index. The C++ PoC is omitting min_value/max_value, and the Java PoC is pretending geometry values are plain binary values while collecting the stats. Should we do similar things here? Then the Iceberg code can directly consume min_value/max_value from statistics instead of issuing another call to get the specialized `GeometryStatistics`, which is designed for advanced purposes.
@wgtmac do you mean that Iceberg uses the Parquet Geometry GeometryStatistics or Parquet Geometry uses the min_value/max_value idea from Iceberg?
I mean the latter. The `ColumnOrder` of the new geometry type is `undefined`, as specified at https://github.com/apache/parquet-format/pull/240/files#diff-834c5a8d91719350b20995ad99d1cb6d8d68332b9ac35694f40e375bdb2d3e7cR1144. It means that the min_value/max_value fields are meaningless and should not be used. I'm not sure if it is a good idea to set the min_value/max_value fields in the same way as lower_bounds/upper_bounds in Iceberg.
I suggest defining the sort order of geometry columns as WKB-encoded points in the parquet format spec. This is the simplest yet most useful way of defining the min and max bounds for geometry columns, and it is better for the sort order to be well-defined rather than left undefined.
I agree that it is better to explicitly define the column order than being undefined. If we go with this approach, the format PR and two PoC impls need to reflect this change, which might get more complicated.
Is there anything for this specific line we need to change? As long as we can get this from Parquet in some way, we are ok here; but is the format of the lower/upper bound still ok?
No, I was thinking if Parquet could do better by doing similar things in the future.
Force-pushed from b459eaf to 1ee5fad
format/spec.md
Outdated
@@ -200,12 +200,16 @@ Supported primitive types are defined in the table below. Primitive types added
| | **`uuid`** | Universally unique identifiers | Should use 16-byte fixed |
| | **`fixed(L)`** | Fixed-length byte array of length L | |
| | **`binary`** | Arbitrary-length byte array | |
| [v3](#version-3) | **`geometry(C, CE, E)`** | An object of the simple feature geometry model as defined by Appendix G; This may be any of the geometry subclasses defined therein; crs C [4], crs-encoding CE [5], edges E [6] | C, CE, E are fixed, and if unset will take default values. |
I think maybe we should just link out for the requirements here since it's a bit complicated.
The description as well could be
Simple feature geometry Appendix G, Parameterized by ....
I also don't think we should allow it to be unset ... can we just require that a subclass is always picked? We could recommend a set of defaults for engines to set on field creation but I'm not sure we need to be that opinionated here.
format/spec.md
Outdated
@@ -1312,7 +1335,7 @@ This serialization scheme is for storing single values as individual binary valu
| **`struct`** | **`JSON object by field ID`** | `{"1": 1, "2": "bar"}` | Stores struct fields using the field ID as the JSON field name; field values are stored using this JSON single-value format |
| **`list`** | **`JSON array of values`** | `[1, 2, 3]` | Stores a JSON array of values that are serialized using this JSON single-value format |
| **`map`** | **`JSON object of key and value arrays`** | `{ "keys": ["a", "b"], "values": [1, 2] }` | Stores arrays of keys and values; individual keys and values are serialized using this JSON single-value format |
| **`geometry`** | **`JSON string`** | `00000000013FF00000000000003FF0000000000000` | Stores WKB as a hexadecimal string, see Appendix G |
link again
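For reference, the hex string in the row above decodes as a big-endian WKB point (1.0, 1.0): a 0x00 byte-order flag, a uint32 geometry type of 1 (Point), then two IEEE-754 doubles. A small sketch reproducing it (illustrative only, not Iceberg code):

```python
import struct

# Big-endian WKB point layout: byte-order flag (0 = big-endian),
# uint32 type (1 = Point), then x and y as float64.
wkb_hex = struct.pack(">BIdd", 0, 1, 1.0, 1.0).hex().upper()
print(wkb_hex)  # the single-value example from the table above
```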
format/spec.md
Outdated
| **`struct`** | Not supported |
| **`list`** | Not supported |
| **`map`** | Not supported |
| **`geometry`** | WKB format, see Appendix G |
link
format/spec.md
Outdated
| **`timestamp_ns`** | Stores nanoseconds from 1970-01-01 00:00:00.000000000 in an 8-byte little-endian long |
| **`timestamptz_ns`** | Stores nanoseconds from 1970-01-01 00:00:00.000000000 UTC in an 8-byte little-endian long |
| **`string`** | UTF-8 bytes (without length) |
| **`uuid`** | 16-byte big-endian value, see example in Appendix B |
might as well fix this one too while we are at it :)
format/spec.md
Outdated
@@ -200,12 +200,15 @@ Supported primitive types are defined in the table below. Primitive types added
| | **`uuid`** | Universally unique identifiers | Should use 16-byte fixed |
| | **`fixed(L)`** | Fixed-length byte array of length L | |
| | **`binary`** | Arbitrary-length byte array | |
| [v3](#version-3) | **`geometry(C, E)`** | An object of the simple feature geometry model as defined by Appendix G; This may be any of the geometry subclasses defined therein; crs C [4], edges E [5] | C and E are fixed, and if unset will take default values. |
I think maybe we should just link out for the requirements here since it's a bit complicated. Remove "an object"
The description as well could be
Simple feature geometry Appendix G, Parameterized by ....
I also don't think we should allow it to be unset ... can we just require that a subclass is always picked? We could recommend a set of defaults for engines to set on field creation but I'm not sure we need to be that opinionated here.
I think we can be more specific here and call out the standard that we are referencing, like we do with IEEE 754. This is "Geometry features as WKB(link) stored in coordinate reference system C and edge type E (see Appendix G)"
I would also say that "If not specified, C is OGC:CRS84 and E is planar".
format/spec.md
Outdated
| _optional_ | _optional_ | **`110 null_value_counts`** | `map<121: int, 122: long>` | Map from column id to number of null values in the column |
| _optional_ | _optional_ | **`137 nan_value_counts`** | `map<138: int, 139: long>` | Map from column id to number of NaN values in the column |
| _optional_ | _optional_ | **`111 distinct_counts`** | `map<123: int, 124: long>` | Map from column id to number of distinct values in the column; distinct counts must be derived using values in the file by counting or using sketches, but not using methods like merging existing distinct counts |
| _optional_ | _optional_ | **`125 lower_bounds`** | `map<126: int, 127: binary>` | Map from column id to lower bound in the column serialized as binary [1]. Each value must be less than or equal to all non-null, non-NaN values in the column for the file [2]. For geometry type, this is a WKB-encoded point composed of the min value of each dimension among all component points of all geometry objects for the file, and can be used for basic pruning only on geometry columns with planar edges. |
Let's move these details out of the description and either into the footnotes or another section for geometry.
format/spec.md
Outdated
| | _optional_ | **`135 equality_ids`** | `list<136: int>` | Field ids used to determine row equality in equality delete files. Required when `content=2` and should be null otherwise. Fields with ids listed in this column must be present in the delete file |
| _optional_ | _optional_ | **`140 sort_order_id`** | `int` | ID representing sort order for this file [3]. |
| v1 | v2 | Field id, name | Type | Description |
| ---------- | ---------- | -------------- | ---- | ----------- |
Could you remove the reformatting so we can more easily look at the changes?
format/spec.md
Outdated
@@ -1084,14 +1100,16 @@ The 32-bit hash implementation is 32-bit Murmur3 hash, x86 variant, seeded with
| **`uuid`** | `hashBytes(uuidBytes(v))` [4] | `f79c3e09-677c-4bbd-a479-3f349cb785e7` → `1488055340` |
| **`fixed(L)`** | `hashBytes(v)` | `00 01 02 03` → `-188683207` |
| **`binary`** | `hashBytes(v)` | `00 01 02 03` → `-188683207` |
| **`geometry`** | `hashBytes(wkb(v))` [5] | `(1.0, 1.0)` → `-246548298` |
I would probably not specify how to hash geometry because we don't yet know how to do it correctly. The reason why we have the second table (hash requirements that are not part of bucket) is that we don't want anyone to forget that float and double should hash to the same value.
@szehon-ho, I don't think we should specify this or allow geometry in bucket transforms because of issues with equality.
Thanks @rdblue @RussellSpitzer addressed review comments.
format/spec.md
Outdated
@@ -1312,7 +1325,7 @@ This serialization scheme is for storing single values as individual binary values
| **`struct`** | **`JSON object by field ID`** | `{"1": 1, "2": "bar"}` | Stores struct fields using the field ID as the JSON field name; field values are stored using this JSON single-value format |
| **`list`** | **`JSON array of values`** | `[1, 2, 3]` | Stores a JSON array of values that are serialized using this JSON single-value format |
| **`map`** | **`JSON object of key and value arrays`** | `{ "keys": ["a", "b"], "values": [1, 2] }` | Stores arrays of keys and values; individual keys and values are serialized using this JSON single-value format |
| **`geometry`** | **`JSON string`** | `00000000013FF00000000000003FF0000000000000` | Stores WKB as a hexadecimal string, see [Appendix G](#appendix-g-geospatial-notes) |
@rdblue am not entirely sure where this part of the spec is used. Should it also match the above (the more optimized serialization for stats)?
This is used for default values and for encoding values in JSON expressions for filtering.
What about using WKT here instead of WKB?
Good idea, added.
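For reference, the hex value shown in the outdated diff above is the big-endian WKB encoding of `POINT(1 1)`. A minimal sketch reproducing it with only the standard library (`wkb_point_hex` is a hypothetical helper, not Iceberg API):

```python
import struct

def wkb_point_hex(x, y):
    """Big-endian WKB for a 2D point, as an uppercase hex string.

    Layout: 1-byte byte-order flag (0x00 = big-endian),
    4-byte geometry type (1 = Point), then x and y as
    8-byte IEEE 754 doubles.
    """
    return struct.pack(">bI2d", 0, 1, x, y).hex().upper()

# Matches the example value in the diff for POINT(1 1):
assert wkb_point_hex(1.0, 1.0) == "00000000013FF00000000000003FF0000000000000"
```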
@@ -483,6 +485,7 @@ Notes:
2. For `float` and `double`, the value `-0.0` must precede `+0.0`, as in the IEEE 754 `totalOrder` predicate. NaNs are not permitted as lower or upper bounds.
3. If sort order ID is missing or unknown, then the order is assumed to be unsorted. Only data files and equality delete files should be written with a non-null order id. [Position deletes](#position-delete-files) are required to be sorted by file and position, not a table order, and should set sort order id to null. Readers must ignore sort order id for position delete files.
4. The following field ids are reserved on `data_file`: 141.
5. For `geometry`, this is a point composed of the min (lower_bound) or max (upper_bound) value of each dimension among all component points of all geometry objects for the file. These can be used for basic pruning only on geometry columns with planar edges. See Appendix D for encoding.
I think we need to be a little more specific here. The way I read this is that you can take min and max values for each dimension in the point, but that isn't sufficient for spherical edges.
I think this needs to state that the lower and upper bounds must be less than or equal (or greater than or equal) to the values of any point that is located on an edge of the geometry object. In other words, the bounding box must contain all points that are in the geometry object.
If we don't have that requirement, then there could be a point that is outside of the bounding box. If that's the case, then a query that includes the point may not overlap the bounding box and we cannot use it for filtering.
Per a previous conversation, it could be beneficial to have it in this form even for spherical edges. An engine could do some conversion from the bounding box to make it useful for spherical edges.
Do you mean you do not want this option at all? (I suppose due to the risk of misinterpreting it)
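A toy sketch of the coordinate-wise bounds discussed in this thread, assuming planar edges and geometries already decoded into (x, y) tuples (`geometry_bounds` is a hypothetical helper, not part of the spec):

```python
def geometry_bounds(geometries):
    """Per-dimension min/max over all component points of all geometries.

    Valid for pruning only with planar edges: with spherical edges a
    point on a curved edge segment can fall outside this box, which is
    exactly the concern raised in the thread above.
    """
    points = [p for geom in geometries for p in geom]
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys)), (max(xs), max(ys))

# Two geometries for one file: a triangle and a line segment.
triangle = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
segment = [(-1.0, 1.0), (5.0, 2.0)]
lower, upper = geometry_bounds([triangle, segment])
assert lower == (-1.0, 0.0) and upper == (5.0, 3.0)
```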
format/spec.md
Outdated
1. [https://github.com/apache/parquet-format/pull/240](https://github.com/apache/parquet-format/pull/240)
I'd prefer not to reference a PR.
Removed these; can add the logical types once the PR is merged in Parquet.
format/spec.md
Outdated
@@ -1286,6 +1298,7 @@ This serialization scheme is for storing single values as individual binary values
| **`struct`** | Not supported |
| **`list`** | Not supported |
| **`map`** | Not supported |
| **`geometry`** | Always a single point; it is encoded in big-endian fashion as concatenated {x, y, optional z, optional m} values. |
Why big endian? Most of the time we use little endian in the format, with the only exception being the encoded decimal values.
Also, what is the encoding for these values? 8-byte IEEE 754?
Makes sense, added little-endian and 8-byte IEEE 754 (i.e. the double type) for each coordinate.
1e50f16 to 204dfdd
@rdblue thanks for further review. Would appreciate a clarification for this comment #10981 (comment), otherwise everything else is addressed.
@@ -1102,6 +1105,7 @@ Hash results are not dependent on decimal scale, which is part of the type, not the data value.
4. UUIDs are encoded using big endian. The test UUID for the example above is: `f79c3e09-677c-4bbd-a479-3f349cb785e7`. This UUID encoded as a byte array is:
`F7 9C 3E 09 67 7C 4B BD A4 79 3F 34 9C B7 85 E7`
5. `doubleToLongBits` must give the IEEE 754 compliant bit representation of the double value. All `NaN` bit patterns must be canonicalized to `0x7ff8000000000000L`. Negative zero (`-0.0`) must be canonicalized to positive zero (`0.0`). Float hash values are the result of hashing the float cast to double to ensure that schema evolution does not change hash values if float types are promoted.
6. WKB format, see [Appendix G](#appendix-g-geospatial-notes)
Missing hash specification for geometry primitive type in the table above. We should add a new row for `geometry` and annotate it with `[6]`.
@@ -1286,6 +1291,7 @@ This serialization scheme is for storing single values as individual binary values
| **`struct`** | Not supported |
| **`list`** | Not supported |
| **`map`** | Not supported |
| **`geometry`** | A single point, encoded as a {x, y, optional z, optional m} concatenation of its 8-byte IEEE 754 values, little-endian. |
Is it always a concatenation of 4 floating-point values? If it is not the case, we'll have a hard time figuring out if the point is in XYZ or XYM when there are 3 encoded dimensions. I suggest we use the WKB encoding of points here as well.
Enforcing the appearance of all 4 components and allowing NaN to be filled in for optional components also works, as it is more similar to the `BoundingBox` struct defined in the Parquet spec.
+1. This is why we have introduced a separate bounding box stats in the Parquet proposal to avoid the issue.
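A sketch of the little-endian point encoding from the diff, illustrating the ambiguity raised in this thread: with three coordinates, the bytes alone cannot distinguish XYZ from XYM (`encode_point` is a hypothetical helper for illustration only):

```python
import struct

def encode_point(*coords):
    """Concatenate coordinates as 8-byte IEEE 754 little-endian doubles."""
    return b"".join(struct.pack("<d", c) for c in coords)

xyz = encode_point(1.0, 2.0, 3.0)  # intended as XYZ
xym = encode_point(1.0, 2.0, 3.0)  # intended as XYM
assert xyz == xym                  # 24 identical bytes: a reader cannot tell them apart
assert struct.unpack("<3d", xyz) == (1.0, 2.0, 3.0)
```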
This is the spec change for #10260.
Also this is based closely on the decisions taken in the Parquet proposal for the same : apache/parquet-format#240