Removed flexbuffer dependency #856

Merged: 4 commits into redis:master from feature/remove_flexbuffer on May 3, 2019

Conversation

@kroleg (Contributor) commented Apr 30, 2019

As suggested in #826

@AVVS (Collaborator) commented Apr 30, 2019

Did you run the benchmarks for this? I'd be interested to see how the change affects performance.

@kroleg (Contributor, Author) commented Apr 30, 2019

@AVVS here are the results, but I have no idea how to interpret them; they don't seem accurate to me.

redis: 4.9.1
CPU: 4
OS: darwin x64
node version: v10.14.2

Current branch:

                  SET foo bar
      25,343 op/s » javascript parser + dropBufferSupport: true
      25,488 op/s » javascript parser

                  LRANGE foo 0 99
      13,581 op/s » javascript parser + dropBufferSupport: true
      13,475 op/s » javascript parser

Suites: 2
Benches: 4
Elapsed: 28,387.94 ms

Master:

                  SET foo bar
      24,418 op/s » javascript parser + dropBufferSupport: true
      24,875 op/s » javascript parser

                  LRANGE foo 0 99
      14,257 op/s » javascript parser + dropBufferSupport: true
      13,834 op/s » javascript parser

Suites: 2
Benches: 4
Elapsed: 26,790.68 ms

@AVVS (Collaborator) commented Apr 30, 2019

There were more benchmarks earlier; I'm not sure where they all went. @luin, care to help here?

In terms of what to look at: since the benchmarks are all synthetic and vary with the workstation, you primarily look at the differences between the two runs. Even in this very limited set you can see that the operations that require more memory run faster on master, while the ones that don't are slower.

The reason for that is flexbuffer, which locks up a portion of memory and reuses it later on. Buffer allocation is a costly operation, which is why flexbuffer was used in the first place.
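
To make that concrete, here is a purely illustrative sketch of reuse versus per-call allocation; the pool size and function names below are made up and are not flexbuffer's actual API:

```typescript
// Illustrative only: a preallocated buffer reused across calls versus a fresh
// allocation on every call. The pool size and names are hypothetical.
const pool = Buffer.allocUnsafe(64 * 1024); // allocated once, reused afterwards

function encodeReusingPool(arg: string): Buffer {
  const length = pool.write(arg, 0); // write into the shared pool
  // Returns a view into the pool: no new allocation, but the view is only
  // valid until the next call overwrites it.
  return pool.slice(0, length);
}

function encodeAllocating(arg: string): Buffer {
  return Buffer.from(arg); // allocates a brand-new buffer on every call
}
```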

@kroleg (Contributor, Author) commented May 1, 2019

@AVVS thank you for such a detailed explanation 👍
I did a refactoring so that toWritable only does one extra buffer allocation per call; in theory it should not be slower than flexbuffer (a rough sketch of the idea follows the numbers below).
The benchmark results are better, but because I run them on my machine their precision is poor (SET foo bar was faster previously, but now it's slower).

Current branch:

                  SET foo bar
      24,677 op/s » javascript parser + dropBufferSupport: true
      25,112 op/s » javascript parser

                  LRANGE foo 0 99
      13,316 op/s » javascript parser + dropBufferSupport: true
      14,205 op/s » javascript parser

Master:

                  SET foo bar
      21,249 op/s » javascript parser + dropBufferSupport: true
      25,860 op/s » javascript parser

                  LRANGE foo 0 99
      13,255 op/s » javascript parser + dropBufferSupport: true
      13,984 op/s » javascript parser

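For illustration, a rough sketch of the one-extra-allocation approach described above; this is a simplified stand-in, not the actual lib/command.ts code:

```typescript
// Simplified sketch: collect the command's parts and materialize them with a
// single Buffer.concat at the end, instead of writing into a shared,
// preallocated flexbuffer. Not the real implementation.
function toWritableSketch(parts: Array<string | Buffer>): string | Buffer {
  const hasBuffer = parts.some((part) => typeof part !== "string");
  if (!hasBuffer) {
    // All-string commands can stay a string and skip Buffer work entirely.
    return (parts as string[]).join("");
  }
  return Buffer.concat(
    parts.map((part) => (typeof part === "string" ? Buffer.from(part) : part))
  );
}
```
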
Two review threads on lib/command.ts (outdated, now resolved)
@kroleg (Contributor, Author) commented May 1, 2019

I am trying to compare (with the benchmark module) the flexbuffer-based implementation of Command.toWritable against my implementation, but I am having trouble with it and am not sure it's worth spending time on, since in theory my implementation should be faster (at least I think so). A minimal benchmark skeleton is sketched below.

Also, I think MixedBuffers should be extracted to a separate file (and could use a better name).
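
For reference, a minimal skeleton of such a comparison with the benchmark package; the two toWritable* functions below are hypothetical placeholders for the flexbuffer-based and the Buffer.concat-based implementations, not names that exist in the repo:

```typescript
// Micro-benchmark skeleton (assumes the benchmark and @types/benchmark
// packages are installed). The two toWritable* functions are hypothetical
// stand-ins for the implementations being compared.
import Benchmark from "benchmark";

declare function toWritableWithFlexbuffer(parts: Array<string | Buffer>): string | Buffer;
declare function toWritableWithConcat(parts: Array<string | Buffer>): string | Buffer;

// A SET foo bar request split into mixed string/Buffer parts.
const parts = ["*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\n", Buffer.from("bar"), "\r\n"];

new Benchmark.Suite()
  .add("toWritable: flexbuffer", () => toWritableWithFlexbuffer(parts))
  .add("toWritable: Buffer.concat", () => toWritableWithConcat(parts))
  .on("cycle", (event: Benchmark.Event) => console.log(String(event.target)))
  .run();
```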

@AVVS (Collaborator) commented May 1, 2019

Sounds reasonable. I'd still wait for @luin or someone else to approve this change; generally it looks good to me, but I do want to run more benchmarks and cover more cases.

@luin (Collaborator) commented May 3, 2019

Awesome! I did some benchmarking (I added back the pipeline benchmarks on my side), and the results are pretty similar.

@luin merged commit 35e0c5e into redis:master on May 3, 2019
ioredis-robot pushed a commit that referenced this pull request May 3, 2019
## [4.9.2](v4.9.1...v4.9.2) (2019-05-03)

### Bug Fixes

* removed flexbuffer dependency ([#856](#856)) ([35e0c5e](35e0c5e))
@ioredis-robot (Collaborator) commented

🎉 This PR is included in version 4.9.2 🎉

The release is available on:

Your semantic-release bot 📦🚀

@kroleg deleted the feature/remove_flexbuffer branch on May 4, 2019 09:17
@kroleg restored the feature/remove_flexbuffer branch on May 4, 2019 09:17