ci: Bump crate-ci/typos from 1.24.2 to 1.25.0 #19043

Merged · 3 commits · Oct 1, 2024

Alongside the version bump, this PR applies the spelling corrections the newer typos release appears to flag ("clonable", "ammortize", "lexographical").
.github/workflows/lint-global.yml (1 addition, 1 deletion)

```diff
@@ -15,4 +15,4 @@ jobs:
       - name: Lint Markdown and TOML
         uses: dprint/check@v2.2
       - name: Spell Check with Typos
-        uses: crate-ci/typos@v1.24.2
+        uses: crate-ci/typos@v1.25.0
```
crates/polars-arrow/src/array/mod.rs (1 addition, 1 deletion)

```diff
@@ -15,7 +15,7 @@
 //! to a concrete struct based on [`PhysicalType`](crate::datatypes::PhysicalType) available from [`Array::dtype`].
 //! All immutable arrays are backed by [`Buffer`](crate::buffer::Buffer) and thus cloning and slicing them is `O(1)`.
 //!
-//! Most arrays contain a [`MutableArray`] counterpart that is neither clonable nor sliceable, but
+//! Most arrays contain a [`MutableArray`] counterpart that is neither cloneable nor sliceable, but
 //! can be operated in-place.
 use std::any::Any;
 use std::sync::Arc;
```
crates/polars-ops/src/chunked_array/list/sets.rs (1 addition, 1 deletion)

```diff
@@ -94,7 +94,7 @@ where
         set2.clear();
         set2.extend(b);
     }
-    // We could speed this up, but implementing ourselves, but we need to have a clonable
+    // We could speed this up, but implementing ourselves, but we need to have a cloneable
     // iterator as we need 2 passes
     set.extend(a);
     out.extend_buf(set.symmetric_difference(set2).copied())
```
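For context on the comment above: `symmetric_difference` on std's `HashSet` yields the elements that are in exactly one of the two sets, and the surrounding code needs a cloneable iterator because the operation effectively takes two passes over the input. A minimal std-only sketch, not the Polars internals:

```rust
use std::collections::HashSet;

fn main() {
    // Both inputs materialized as sets; `symmetric_difference` then yields
    // the elements present in exactly one of the two.
    let a: HashSet<i32> = [1, 2, 3].into_iter().collect();
    let b: HashSet<i32> = [3, 4].into_iter().collect();

    let mut out: Vec<i32> = a.symmetric_difference(&b).copied().collect();
    out.sort();
    assert_eq!(out, [1, 2, 4]);
}
```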
```diff
@@ -1,6 +1,6 @@
 use crate::parquet::encoding::delta_bitpacked;

-/// Encodes a clonable iterator of `&[u8]` into `buffer`. This does not allocated on the heap.
+/// Encodes a cloneable iterator of `&[u8]` into `buffer`. This does not allocated on the heap.
 /// # Implementation
 /// This encoding is equivalent to call [`delta_bitpacked::encode`] on the lengths of the items
 /// of the iterator followed by extending the buffer from each item of the iterator.
```
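The doc comment describes the delta-length byte array layout: the item lengths come first (delta-bitpacked in the real code), followed by the concatenated bytes, which is why the iterator must be cloneable for two passes. A rough sketch of that shape; `encode_lengths` here is a hypothetical stand-in, not the actual `delta_bitpacked::encode` API:

```rust
// Hypothetical stand-in for `delta_bitpacked::encode`: the real encoder
// delta-encodes and bitpacks, this one just writes raw little-endian i64s.
fn encode_lengths(lengths: &[i64], buffer: &mut Vec<u8>) {
    for len in lengths {
        buffer.extend_from_slice(&len.to_le_bytes());
    }
}

// The scheme from the doc comment: encode the item lengths first, then
// append the concatenated bytes. Two passes over the input, hence `I: Clone`.
fn delta_length_byte_array<'a, I>(iter: I, buffer: &mut Vec<u8>)
where
    I: Iterator<Item = &'a [u8]> + Clone,
{
    let lengths: Vec<i64> = iter.clone().map(|x| x.len() as i64).collect();
    encode_lengths(&lengths, buffer);
    for item in iter {
        buffer.extend_from_slice(item);
    }
}

fn main() {
    let items: Vec<&[u8]> = vec![b"ab", b"c"];
    let mut buffer = Vec::new();
    delta_length_byte_array(items.iter().copied(), &mut buffer);
    assert_eq!(buffer.len(), 2 * 8 + 3); // two raw i64 lengths + 3 payload bytes
}
```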
```diff
@@ -277,7 +277,7 @@ where
         let s = s.to_physical_repr();
         let s = prepare_key(&s, chunk);

-        // todo! ammortize allocation
+        // TODO: Amortize allocation.
         for phys_e in self.aggregation_columns.iter() {
             let s = phys_e.evaluate(chunk, &context.execution_state)?;
             let s = s.to_physical_repr();
```
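The TODO here (repeated in the `StringGroupbySink` hunk below) refers to amortizing allocation: reusing one buffer across chunks instead of allocating inside the loop. A minimal illustration of the pattern, not the sink's actual code:

```rust
// Reuse one buffer across chunks: `clear` keeps the capacity, so after
// warm-up no further allocation occurs inside the loop.
fn process_chunks(chunks: &[Vec<i64>]) -> i64 {
    let mut keys: Vec<i64> = Vec::new(); // allocated once, reused below
    let mut total = 0;
    for chunk in chunks {
        keys.clear();
        keys.extend(chunk.iter().map(|v| v * 2));
        total += keys.iter().sum::<i64>();
    }
    total
}

fn main() {
    let chunks = vec![vec![1, 2, 3], vec![4, 5, 6]];
    assert_eq!(process_chunks(&chunks), 42);
}
```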
```diff
@@ -232,7 +232,7 @@ impl StringGroupbySink {
         let s = s.to_physical_repr();
         let s = prepare_key(&s, chunk);

-        // todo! ammortize allocation
+        // TODO: Amortize allocation.
         for phys_e in self.aggregation_columns.iter() {
             let s = phys_e.evaluate(chunk, &context.execution_state)?;
             let s = s.to_physical_repr();
```
py-polars/requirements-lint.txt (1 addition, 1 deletion)

```diff
@@ -1,3 +1,3 @@
 mypy==1.11.1
 ruff==0.6.4
-typos==1.24.2
+typos==1.25.0
```
py-polars/tests/unit/io/test_scan.py (1 addition, 1 deletion)

```diff
@@ -149,7 +149,7 @@ def data_file_glob(session_tmp_dir: Path, data_file_extension: str) -> _DataFile
     assert sum(row_counts) == 10000

     # Make sure we pad file names with enough zeros to ensure correct
-    # lexographical ordering.
+    # lexicographical ordering.
     assert len(row_counts) < 100

     # Make sure that some of our data frames consist of multiple chunks which
```
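The comment is about plain string ordering: without padding, "10" sorts before "9". A short illustrative sketch of why the zero-padding matters:

```rust
fn main() {
    // Unpadded numeric names sort wrongly as strings: "10" < "9".
    let mut unpadded = vec!["9".to_string(), "10".to_string()];
    unpadded.sort();
    assert_eq!(unpadded, ["10", "9"]);

    // Zero-padding makes lexicographic order match numeric order.
    let mut padded: Vec<String> = (9..=10).map(|i| format!("{i:02}")).collect();
    padded.sort();
    assert_eq!(padded, ["09", "10"]);
}
```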