
Next major version: V5 #1504

Closed
luin wants to merge 54 commits from the v5 branch

Conversation

@luin (Collaborator) commented Feb 6, 2022

Closes #1292

Major changes:

@luin force-pushed the v5 branch 11 times, most recently from 50a4525 to f4a4c03 on February 6, 2022 12:30
@AVVS (Collaborator) commented Feb 6, 2022

@luin any chance we can squeeze in RESP3 (I don't remember this being supported, but might be wrong) / client-side caching here? Glad to help!

@AVVS (Collaborator) commented Feb 6, 2022

Another idea is native OpenTelemetry instrumentation.

@luin (Collaborator, Author) commented Feb 6, 2022

> @luin any chance we can squeeze in RESP3 (I don't remember this being supported, but might be wrong) / client-side caching here? Glad to help!

It hasn't been supported 🤣. Personally, I don't feel that RESP3 will bring a lot of real value, and I'm not sure, as a driver, what kind of API needs to be provided to support the new features that RESP3 brings. Similarly, client-side caching does not seem to be as meaningful for a language that uses GC.

But I haven't thought much about this, so I'm open to suggestions! If we want to support RESP3, v5 seems to be a good chance to do so.

> Another idea is native OpenTelemetry instrumentation.

Can you elaborate a little bit more on this? Does it mean providing an interface to get the average command execution time, or something similar?

@AVVS (Collaborator) commented Feb 6, 2022

> Can you elaborate a little bit more on this? Does it mean providing an interface to get the average command execution time, or something similar?

It basically allows you to perform better diagnostics and get a complete trace of a request when every part of your request chain is instrumented. Typically you'd use async_hooks, but it can be anything.

https://github.com/elastic/apm-agent-nodejs/blob/main/lib/instrumentation/modules/ioredis.js is an example of an existing instrumentation, which can be bridged to OpenTracing/OpenTelemetry and fed to something like https://github.com/jaegertracing/jaeger for visualization/search.
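
As a rough sketch of what such an instrumentation does (illustrative only, not the elastic agent's actual code; the tracer name and attributes are assumptions), one can wrap `Redis.prototype.sendCommand` so that every command emits an OpenTelemetry span:

```js
// Hypothetical sketch: wrap ioredis so each command produces an OpenTelemetry span.
// Assumes @opentelemetry/api is installed and a tracer provider is configured elsewhere.
const { trace, SpanStatusCode } = require("@opentelemetry/api");
const Redis = require("ioredis");

const tracer = trace.getTracer("ioredis-instrumentation-sketch");
const originalSendCommand = Redis.prototype.sendCommand;

Redis.prototype.sendCommand = function (command, ...rest) {
  return tracer.startActiveSpan(`redis ${command.name}`, (span) => {
    span.setAttribute("db.system", "redis");
    // Forward the command to the original implementation.
    const result = originalSendCommand.call(this, command, ...rest);
    // Close the span when the command's promise settles.
    command.promise
      .then(() => span.end())
      .catch((err) => {
        span.recordException(err);
        span.setStatus({ code: SpanStatusCode.ERROR });
        span.end();
      });
    return result;
  });
};
```

The resulting spans can then be exported to a backend such as Jaeger for visualization/search.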

@AVVS (Collaborator) commented Feb 6, 2022

> It hasn't been supported 🤣. Personally, I don't feel that RESP3 will bring a lot of real value, and I'm not sure, as a driver, what kind of API needs to be provided to support the new features that RESP3 brings. Similarly, client-side caching does not seem to be as meaningful for a language that uses GC.

I don't think GC is relevant to client-side caching here. In my opinion the main benefit is lower latency when accessing information, along with guarantees that the information is still what the server has: https://redis.io/topics/client-side-caching
For certain queries you can opt in to cache the response. When server-side data changes, the server will notify clients about that and the clients will get the new info.
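
To make the idea concrete, here is a rough sketch of the RESP2 "redirect" mode described in that document (this is not an ioredis feature; the cache, key handling, and helper names are illustrative only):

```js
// Hypothetical sketch of Redis 6 client-side caching over RESP2 with ioredis.
const Redis = require("ioredis");

const main = new Redis();          // issues regular commands
const invalidations = new Redis(); // receives invalidation messages
const cache = new Map();           // local, in-process cache

async function setup() {
  const subscriberId = await invalidations.client("id");
  // Redirect invalidation messages for keys read on `main` to the subscriber connection.
  await main.client("tracking", "on", "redirect", subscriberId);
  await invalidations.subscribe("__redis__:invalidate");
  invalidations.on("message", (_channel, keys) => {
    // The payload contains the invalidated key(s); normalize to an array.
    for (const key of [].concat(keys)) cache.delete(key);
  });
}

async function cachedGet(key) {
  if (cache.has(key)) return cache.get(key); // served locally, no round trip
  const value = await main.get(key);         // the GET also marks the key as tracked
  cache.set(key, value);
  return value;
}
```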

@marcbachmann (Collaborator) commented Feb 8, 2022

RESP3 changes response formats and we should mention that, e.g. booleans are converted to true/false instead of 1/0: https://redis.io/topics/lua-api#lua-to-resp3-type-conversion

Therefore I'd schedule RESP3 for a later version, or support opt-out (maybe opt-in using HELLO 3, as RESP2 is still the default in Redis).
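
For example (illustrative only, assuming a client that exposes RESP3 booleans as JavaScript booleans), callers that compare against the RESP2 integer replies would break:

```js
const Redis = require("ioredis");
const redis = new Redis();

async function demo() {
  // Under RESP2, SISMEMBER replies with 1/0; under RESP3 the same command replies with a boolean.
  const isMember = await redis.sismember("admins", "alice");
  if (isMember === 1) { /* only correct for RESP2 */ }
  if (isMember)       { /* works under either protocol */ }
}
```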

BREAKING CHANGE:

1. We now require Node.js v10.12.0 or newer.
2. We now only work with Redis v3.0.0 or newer.
3. `Redis` can't be called as a function anymore as it's now a class.
Please change `Redis()` to `new Redis()`. Note that `Redis()` was already deprecated
in the previous version.
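
For item 3, the migration looks like this:

```js
const Redis = require("ioredis");

// Before (deprecated in v4, removed in v5):
// const redis = Redis({ host: "127.0.0.1", port: 6379 });

// After (v5: Redis is a class and must be constructed with `new`):
const redis = new Redis({ host: "127.0.0.1", port: 6379 });
```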
} else if (!this.stream.writable) {
  writable = false;
  // @ts-expect-error
} else if (this.stream._writableState && this.stream._writableState.ended) {
@marcbachmann (Collaborator) commented Feb 19, 2022

Is this still needed with Node 10? 😉

// TODO WeakMap
// @ts-expect-error
command.__is_reject_overwritten = true;
if (!node && !REJECT_OVERWRITTEN_COMMANDS.has(command)) {
A collaborator commented:

Keep in mind that `command.promise` & `command.reject` could get overwritten in some cases.
At the moment that's only done in the pipeline, and in that case `node` should be present here.
It's probably good enough for now, but `.has(command.promise)` might be better.

@luin (Collaborator, Author) replied:

Good point. Will think about it!
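
A sketch of what the `TODO WeakMap` above could become, keeping the bookkeeping out of the Command instance itself (helper names are illustrative, not the actual ioredis code):

```js
// Track commands whose reject handler has been overwritten, without mutating the Command object.
const REJECT_OVERWRITTEN_COMMANDS = new WeakSet();

function markRejectOverwritten(command) {
  REJECT_OVERWRITTEN_COMMANDS.add(command);
}

function isRejectOverwritten(command) {
  return REJECT_OVERWRITTEN_COMMANDS.has(command);
}
```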

BREAKING CHANGE: `slotsRefreshInterval` is disabled by default;
previously, the default value was 5000.
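
For anyone relying on the old behavior, the previous default can be restored explicitly (a minimal sketch; 5000 is the interval in milliseconds that used to be the default):

```js
const Redis = require("ioredis");

// Opt back in to periodic slot refreshes after upgrading.
const cluster = new Redis.Cluster([{ host: "127.0.0.1", port: 6379 }], {
  slotsRefreshInterval: 5000,
});
```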
@luin marked this pull request as ready for review on March 14, 2022 03:17
@luin (Collaborator, Author) commented Mar 14, 2022

Renamed to the main branch. Published v5.0.0-beta.1.

@luin closed this on Mar 14, 2022
@luin (Collaborator, Author) commented Mar 14, 2022

> The biggest thing I could think of for a major version would be changing the default to enableAutoPipelining: true to avoid the potential bottleneck.
>
> Things it would affect:
>
> • select to manually select a db number (1505)
> • applications where failures due to networking errors are important to avoid? When not to use enableAutoPipelining? #1377 (comment)
>   (In practice, I assume the number of in-flight commands would be low if Redis is in the same local network)

Yeah I think that makes sense. Let's delay this to the next major release.
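
For reference, this is what opting in looks like today (a minimal sketch; the two reads issued in the same tick are batched into a single pipeline automatically):

```js
const Redis = require("ioredis");

const redis = new Redis({ enableAutoPipelining: true });

async function readBoth() {
  // Both commands are dispatched in the same event-loop tick, so ioredis
  // sends them as one pipeline instead of two separate writes.
  return Promise.all([redis.get("foo"), redis.get("bar")]);
}
```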

@luin deleted the v5 branch on March 14, 2022 04:54
@luin (Collaborator, Author) commented Mar 14, 2022

Hey @marcbachmann @TysonAndre 👋 , thanks for the contributions to ioredis. I've just invited you to become collaborators on this repository. Feel free to help review new pull requests in the future if you have time (and it doesn't matter at all if you don't have time 😄). Appreciate it 🙏. You can refer to https://github.com/luin/ioredis/blob/main/.github/CONTRIBUTING.md for some notes.

@luin (Collaborator, Author) commented Mar 14, 2022

> Why disable it by default?

Hey @shaharmor, the option introduces side effects at some level as it consumes resources: with the old default, especially during a failover, all nodes may have to be recreated every 5 seconds (if `CLUSTER SLOTS` gives different results). Given that, I think developers need to be aware of this option and should only enable it explicitly when they understand its purpose. WDYT?
