tests: add topic_delete_unavailable_test, tweak tiered storage topic deletion order #7460

Merged: 4 commits, merged on Jan 4, 2023
11 changes: 6 additions & 5 deletions tests/rptest/tests/topic_delete_test.py
@@ -171,7 +171,8 @@ def produce_until_partitions():


 class TopicDeleteCloudStorageTest(RedpandaTest):
-    topics = (TopicSpec(partition_count=3,
+    partition_count = 3
+    topics = (TopicSpec(partition_count=partition_count,
                         cleanup_policy=TopicSpec.CLEANUP_DELETE), )

     def __init__(self, test_context):
@@ -196,13 +197,13 @@ def _populate_topic(self, topic_name):
         self.kafka_tools.alter_topic_config(
             topic_name, {'retention.local.target.bytes': 5 * 1024 * 1024})

-        # Write out 10MB
+        # Write out 10MB per partition
         self.kafka_tools.produce(topic_name,
                                  record_size=4096,
-                                 num_records=2560)
+                                 num_records=2560 * self.partition_count)

         # Wait for segments evicted from local storage
-        for i in range(0, 3):
+        for i in range(0, self.partition_count):
             wait_for_segments_removal(self.redpanda, topic_name, i, 5)
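(Worked out, since the 10MB figure in the comment is load-bearing: 4096 bytes × 2560 records = 10,485,760 bytes = 10 MiB, so scaling num_records by partition_count keeps the average write at 10 MiB per partition rather than 10 MiB across the whole topic.)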
Contributor:
Maybe wait_for_segments_removal should fail if the initial number of segments is less than the final desired count.

Contributor Author:
Hmm, it would be nice, but I think it's going to be racy unless the test is very confident that the segment count cannot end up lower than the target. I have limited trust in most of the tests that assert segment counts as a proxy for data retention.
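A minimal sketch of the suggested guard, for illustration only. count_segments and wait_until here are hypothetical stand-ins rather than the actual rptest helpers, and the raciness caveat above still applies:

def wait_for_segments_removal(redpanda, topic, partition, count, timeout_sec=60):
    # Hypothetical sketch: fail fast when the wait would be vacuous.
    initial = count_segments(redpanda, topic, partition)  # assumed accessor
    if initial < count:
        # Eviction only shrinks the segment count, so starting below the
        # target means the wait succeeds trivially, masking a test that
        # never produced enough data to create the expected segments.
        raise RuntimeError(
            f"initial segment count {initial} is below target {count}")
    wait_until(lambda: count_segments(redpanda, topic, partition) <= count,
               timeout_sec=timeout_sec,
               backoff_sec=2)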


         # Confirm objects in remote storage

@@ -254,7 +255,7 @@ def topic_delete_unavailable_test(self):
         next_topic = "next_topic"
         self.kafka_tools.create_topic(
             TopicSpec(name=next_topic,
-                      partition_count=3,
+                      partition_count=self.partition_count,
                       cleanup_policy=TopicSpec.CLEANUP_DELETE))
         self._populate_topic(next_topic)
         after_keys = set(o.Key for o in self.redpanda.s3_client.list_objects(
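For orientation, the test leans on a before/after comparison of the bucket's object keys. A minimal sketch of that idiom, assuming a bucket_name variable and taking only list_objects and the .Key attribute from the visible code:

# Illustrative sketch of the set-comparison idiom only; the real test's
# assertions differ, since it exercises deletion while storage is unavailable.
before_keys = set(
    o.Key for o in self.redpanda.s3_client.list_objects(bucket_name))
# ... perform the operation under test (e.g. delete and recreate topics) ...
after_keys = set(
    o.Key for o in self.redpanda.s3_client.list_objects(bucket_name))
added = after_keys - before_keys    # objects uploaded by the new activity
removed = before_keys - after_keys  # objects cleaned up by deletion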