
feat: implement simple Fetch by key for cache items #4032

Merged
12 commits merged Sep 10, 2024

Conversation

javiermolinar
Contributor

@javiermolinar javiermolinar commented Aug 29, 2024

What this PR does:
It implements a new way to fetch by key, so we can skip the more expensive batched key-lookup path when we only want a single key.

It replaces the existing Fetch in the sync_handler_cache middleware.

The Tempo local example can now use Memcached. I think this doesn't add too much overhead, and it's good for testing.

[X] Test it locally against both Memcached and Redis
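The idea above can be sketched as a cache that exposes a direct single-key lookup alongside the batched Fetch. This is a minimal, hypothetical sketch; the interface and names (Cache, FetchKey, mapCache) are illustrative and are not Tempo's actual API.

```go
package main

import "fmt"

// Cache is a hypothetical sketch of the shape described in this PR:
// the existing batched Fetch stays, and a new FetchKey handles the
// common single-key case without batch bookkeeping.
type Cache interface {
	// Fetch looks up many keys, returning found keys, their values,
	// and the keys that missed.
	Fetch(keys []string) (found []string, bufs [][]byte, missed []string)
	// FetchKey looks up one key; ok reports whether it was found.
	FetchKey(key string) (buf []byte, ok bool)
}

// mapCache is a trivial in-memory implementation for illustration only.
type mapCache struct{ data map[string][]byte }

func (c *mapCache) Fetch(keys []string) (found []string, bufs [][]byte, missed []string) {
	for _, k := range keys {
		if b, ok := c.data[k]; ok {
			found = append(found, k)
			bufs = append(bufs, b)
		} else {
			missed = append(missed, k)
		}
	}
	return
}

func (c *mapCache) FetchKey(key string) ([]byte, bool) {
	b, ok := c.data[key]
	return b, ok
}

func main() {
	c := &mapCache{data: map[string][]byte{"a": []byte("1")}}
	if b, ok := c.FetchKey("a"); ok {
		fmt.Printf("hit: %s\n", b)
	}
	if _, ok := c.FetchKey("b"); !ok {
		fmt.Println("miss")
	}
}
```

A caller that only ever needs one value (like the sync-handler cache middleware mentioned above) would call FetchKey directly instead of wrapping a single key in a slice for Fetch.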

Which issue(s) this PR fixes:
Fixes #

Checklist

  • Tests updated
  • Documentation added
  • CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX]

@javiermolinar
Contributor Author

javiermolinar commented Aug 29, 2024

I would like to test this against a real memcache/redis

Done, it works

@javiermolinar javiermolinar changed the title feat: implement simple Fetch key for cache feat: implement simple Fetch by key for cache items Aug 30, 2024
@javiermolinar javiermolinar marked this pull request as ready for review August 30, 2024 08:40
@joe-elliott
Member

Oh, can we replace this as well:

https://github.com/grafana/tempo/blob/main/tempodb/backend/cache/cache.go#L118

@knylander-grafana
Contributor

Is there any user impact for this change like a config file option?

@javiermolinar
Contributor Author

Is there any user impact for this change like a config file option?

No, this is internal

@javiermolinar
Contributor Author

javiermolinar commented Sep 10, 2024

Now we have zero references from code to the Fetch method; we even have an internal fetchKeysBatched method in the Memcached code, which seems a bit overengineered:

func (c *Memcached) fetchKeysBatched(ctx context.Context, keys []string) (found []string, bufs [][]byte, missed []string) {

Do you think this is something we are going to need? I would favor deleting at least the batched code.
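For context, a batched key fetch like the fetchKeysBatched signature quoted above typically splits the key list into fixed-size chunks and issues one multi-get per chunk. With a dedicated single-key fetch, none of that machinery is exercised for the one-key case, which is the argument for deleting it. This sketch is hypothetical; chunkKeys and batchSize are illustrative names, not Tempo's code.

```go
package main

import "fmt"

// chunkKeys splits keys into batches of at most batchSize, the kind
// of chunking a batched multi-get helper performs before hitting
// Memcached. Illustrative only.
func chunkKeys(keys []string, batchSize int) [][]string {
	var batches [][]string
	for len(keys) > 0 {
		n := batchSize
		if len(keys) < n {
			n = len(keys)
		}
		batches = append(batches, keys[:n])
		keys = keys[n:]
	}
	return batches
}

func main() {
	// Three keys with a batch size of two yield two batches.
	fmt.Println(chunkKeys([]string{"k1", "k2", "k3"}, 2))
}
```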

@joe-elliott
Member

Do you think this is something we are going to need? I would favor deleting at least the batched code.

If you don't see an immediate future for the batched code, I'm good with deleting it.

Member

@joe-elliott joe-elliott left a comment


This LGTM. Two things:

  • Let's remove the batching code as you suggested.
  • I'd prefer to back out the event code, but I'm good if you want to leave it. I thought we were replacing a span creation, but this will just incur more cost. Let me know which direction you decide to go.

pkg/cache/memcached.go (comment resolved)
@javiermolinar
Contributor Author

This LGTM. Two things:

  • Let's remove the batching code as you suggested.
  • I'd prefer to back out the event code, but I'm good if you want to leave it. I thought we were replacing a span creation, but this will just incur more cost. Let me know which direction you decide to go.

Right, I think we should handle this with two new metric counters: one for hits and another for misses.
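The hit/miss accounting suggested here could look like the sketch below. Tempo would presumably use Prometheus counters; plain atomic counters stand in so the example runs standalone. The cacheStats type and its names are illustrative assumptions, not the actual implementation.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// cacheStats tracks single-key lookup outcomes. In Tempo this would
// be a pair of Prometheus counters; atomic integers stand in here.
type cacheStats struct {
	hits   atomic.Int64
	misses atomic.Int64
}

// record increments the hit or miss counter for one lookup.
func (s *cacheStats) record(found bool) {
	if found {
		s.hits.Add(1)
	} else {
		s.misses.Add(1)
	}
}

func main() {
	var s cacheStats
	data := map[string][]byte{"a": []byte("1")}
	// Two lookups hit key "a"; one lookup misses key "b".
	for _, k := range []string{"a", "b", "a"} {
		_, ok := data[k]
		s.record(ok)
	}
	fmt.Printf("hits=%d misses=%d\n", s.hits.Load(), s.misses.Load())
}
```

Counters like these replace per-lookup span/event emission, keeping observability without the per-request tracing cost the review comment raised.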

@joe-elliott joe-elliott merged commit e3f8096 into grafana:main Sep 10, 2024
16 checks passed