memory optimization #90

Merged
merged 3 commits into from
Aug 19, 2024

Conversation

hweihwang
Contributor

No description provided.

@hweihwang hweihwang linked an issue Jul 26, 2024 that may be closed by this pull request
@juliushaertl juliushaertl added enhancement New feature or request 3. to review labels Jul 29, 2024
Comment on lines 32 to 34
app.get('/system-overview', (req, res) => {
res.json(getSystemOverview(rooms))
})
Member

Can we also expose those as Prometheus metrics in the /metrics endpoint instead? If there is something we want to expose separately, it should also be limited to requests where the METRICS_TOKEN is passed.
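As a rough illustration of the token check suggested here (the guard function and its name are hypothetical; only METRICS_TOKEN and the /system-overview route come from the diff):

```javascript
// Hypothetical sketch of the suggested token guard. METRICS_TOKEN is the
// variable named in the review comment; everything else is illustrative.
const METRICS_TOKEN = process.env.METRICS_TOKEN

// Returns true when the request carries the configured metrics token,
// either as a Bearer authorization header or a ?token= query parameter.
function isMetricsRequestAuthorized(req, expectedToken = METRICS_TOKEN) {
	if (!expectedToken) return false // no token configured: deny by default
	const header = (req.headers && req.headers.authorization) || ''
	const bearer = header.startsWith('Bearer ') ? header.slice('Bearer '.length) : null
	const token = bearer || (req.query && req.query.token)
	return token === expectedToken
}

// The route from the diff could then be guarded along these lines:
// app.get('/system-overview', (req, res) => {
//   if (!isMetricsRequestAuthorized(req)) return res.status(403).end()
//   res.json(getSystemOverview(rooms))
// })
```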

export const roomUsers = new Map()
export const lastEditedUser = new Map()
const INACTIVE_THRESHOLD = 30 * 60 * 1000 // 30 minutes
const MAX_ROOMS = 100
Member

I'm not too sure we should have an artificial limit in here for the rooms and the tokens; it might become hard to debug if that causes problems. Or can we be sure that once an entry is disposed, existing sessions will properly continue to work and simply get back into the cache?

Contributor Author

@juliushaertl

Yes, it's one of the options to ensure safely bounded storage, as described in the docs: https://www.npmjs.com/package/lru-cache

Some of the benefits:

  • Performance optimization: allocates all the required memory up front
  • LRU behavior: the cache stores a maximum of 100 rooms; when the 101st room is added, the least recently used entry is evicted from the cache

I think it's better to set a limit to control memory usage. The exact value can be discussed, and it will not affect functionality, because we can always get the data back from the file or re-decode the token.
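To make the eviction behavior concrete, here is a minimal Map-based sketch of an LRU bound (the PR itself uses the lru-cache package; only MAX_ROOMS comes from the diff, the function names are illustrative):

```javascript
// Minimal LRU sketch: a Map keeps insertion order, so the first key is
// always the least recently used entry. MAX_ROOMS mirrors the diff.
const MAX_ROOMS = 100

const cache = new Map()

function getRoom(id) {
	if (!cache.has(id)) return undefined
	const value = cache.get(id)
	// Re-insert to mark this entry as most recently used
	cache.delete(id)
	cache.set(id, value)
	return value
}

function setRoom(id, value) {
	if (cache.has(id)) cache.delete(id)
	cache.set(id, value)
	if (cache.size > MAX_ROOMS) {
		// Evict the least recently used entry (first key in insertion order)
		const oldest = cache.keys().next().value
		cache.delete(oldest)
	}
}
```

On a cache miss the caller would fall back to reloading the room from file or re-decoding the token, as described above.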

Contributor Author

@hweihwang hweihwang Jul 30, 2024

Good point about what happens to all users in a room when its roomData suddenly gets wiped out; let me think about that, @juliushaertl.

@hweihwang hweihwang linked an issue Aug 7, 2024 that may be closed by this pull request
super()
this.apiService = apiService
this.client = createClient({
url: process.env.REDIS_URL || 'redis://localhost:6379',
Member

Please document in .env.example

@@ -22,53 +24,66 @@ const {
TLS,
TLS_KEY: keyPath,
TLS_CERT: certPath,
STORAGE_STRATEGY = 'redis',
Member

This should also be documented in .env.example.
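For reference, the two variables called out in these review comments might be documented in .env.example along these lines (the values shown are the defaults visible in the diff):

```shell
# URL of the Redis server used by the 'redis' storage strategy
REDIS_URL=redis://localhost:6379

# Which storage strategy to use: 'redis' or 'lru'
STORAGE_STRATEGY=redis
```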

Member

We should probably default to LRU

Suggested change
STORAGE_STRATEGY = 'redis',
STORAGE_STRATEGY = 'lru',

Member

@juliushaertl juliushaertl left a comment

Rather minimal comments; otherwise I tested both storage strategies again and the code looks good 🚀

@juliushaertl
Member

And the Node build needs some fixes; strangely, npm ci works fine locally for me.

@hweihwang
Contributor Author

Thanks @juliushaertl, I'm about to wrap up and push another change, including the part that scales the socket server across multiple nodes using the Redis Streams adapter.

Member

@grnd-alt grnd-alt left a comment

Code itself is clean and understandable 🔥
One little duplicate file though.
Also, npm ci does not work for me yet; I could not figure out why, so I could not really test.

Member

I think this file is a duplicate; we also have ApiService.js.

Contributor Author

Thank you @grnd-alt, not sure why it runs fine on my machine; trying to fix it.

@juliushaertl
Member

@hweihwang Can you squash your commits into reasonable chunks (at least merging the commits with the same message into one atomic change)?

@hweihwang hweihwang force-pushed the feat/memory-optimization branch 2 times, most recently from dce7c15 to 77f20fd Compare August 19, 2024 08:17
@juliushaertl
Member

Sorry another conflict in the package-lock to resolve 🙈

Signed-off-by: Hoang Pham <hoangmaths96@gmail.com>
Signed-off-by: Hoang Pham <hoangmaths96@gmail.com>
Signed-off-by: Hoang Pham <hoangmaths96@gmail.com>
@juliushaertl juliushaertl merged commit 57c0aea into main Aug 19, 2024
23 checks passed
@juliushaertl juliushaertl deleted the feat/memory-optimization branch August 19, 2024 13:15
Labels
3. to review enhancement New feature or request
Projects
None yet
Development

Successfully merging this pull request may close these issues.

roomData is never removed
Scaling out the backend
3 participants