
Application-wide, size-limited cache for scenario networks #940

Draft
wants to merge 5 commits into dev

Conversation

@abyrd (Member) commented Jul 20, 2024

When running regional analyses on a list of over 70 scenarios with bike egress on a large network with many transit stops (Netherlands), the workers eventually run out of memory and stall. This is because the linkages and egress tables for all these scenarios accumulate in memory and are never evicted. The problem is not usually noticeable when people run analyses one by one, but it becomes apparent when scripting tools are used to launch regional analyses for large numbers of slightly varying scenarios.

Here I have used a Caffeine LoadingCache to hold the scenario networks instead of a simple Map. This allows older items to be evicted when the cache reaches a maximum size. It was a little tricky to locate all the scenario application logic in the right place together with the CacheLoader function while maintaining access to all the needed data structures (such as the ScenarioCache). There were comments in the code about the possibility (and advantages) of a single top-level scenario network cache instead of nesting such a cache within each TransportNetwork. Adopting this approach by switching to a single cache inside TransportNetworkCache simplified and clarified some of the code.
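For concreteness, a minimal sketch of what such a single top-level cache could look like. The key type (ScenarioKey), size limit, and helper methods (getBaseNetwork, resolveScenario, applyToTransportNetwork) are hypothetical placeholders for whatever the r5 code actually uses, not the exact implementation in this PR:

```java
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

public class TransportNetworkCache {

    /** Hypothetical size limit; the real constant and value may differ. */
    private static final int MAX_CACHED_SCENARIO_NETWORKS = 10;

    /** Hypothetical composite key: which base network, which scenario. */
    public record ScenarioKey(String networkId, String scenarioId) { }

    /**
     * Single application-wide cache of scenario networks. Once the size limit
     * is reached, older entries (and their linkages and egress tables) become
     * eligible for eviction instead of accumulating indefinitely.
     */
    private final LoadingCache<ScenarioKey, TransportNetwork> scenarioNetworkCache =
            Caffeine.newBuilder()
                    .maximumSize(MAX_CACHED_SCENARIO_NETWORKS)
                    .build(this::applyScenarioToBaseNetwork);

    /** CacheLoader function: all scenario application logic lives here. */
    private TransportNetwork applyScenarioToBaseNetwork(ScenarioKey key) {
        TransportNetwork baseNetwork = getBaseNetwork(key.networkId());
        Scenario scenario = resolveScenario(key.networkId(), key.scenarioId());
        // Errors thrown while resolving or applying the scenario propagate
        // out of LoadingCache.get() to the caller (see discussion below).
        return scenario.applyToTransportNetwork(baseNetwork);
    }

    public TransportNetwork getNetworkForScenario(String networkId, String scenarioId) {
        return scenarioNetworkCache.get(new ScenarioKey(networkId, scenarioId));
    }

    // Hypothetical helpers standing in for however the base network and
    // resolved scenario are actually obtained (e.g. via the ScenarioCache).
    private TransportNetwork getBaseNetwork(String networkId) { /* ... */ return null; }
    private Scenario resolveScenario(String networkId, String scenarioId) { /* ... */ return null; }
}
```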

The single-cache approach does have downsides: scenario networks are not evicted together with their base network when that base network is evicted, and multiple base networks may be retained in memory because the N cached scenario networks can hold references to different base networks. But in practice, in non-local (cloud) operation, a given worker instance is locked to a single network, so these situations should never arise.

With this change, scenario application happens inside a CacheLoader, so all sorts of errors and exceptions can occur inside the cache's value-loading code when resolving or applying scenarios. But as mentioned in the code comments and the Caffeine Javadoc, the get method of the LoadingCache allows these exceptions to bubble up. I tested this with some handled and unhandled exceptions and validation problems inside the scenario/modification application code, and everything worked as expected, with errors clearly visible in API responses and the UI.
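For illustration, a rough sketch of how those errors surface to the caller, assuming the getNetworkForScenario wrapper from the sketch above; reportScenarioApplicationError is a hypothetical placeholder for however the worker records errors for the API response. Per the Caffeine Javadoc, unchecked exceptions thrown by the loader propagate out of get() unwrapped and leave no mapping behind, while checked exceptions arrive wrapped in a CompletionException:

```java
import java.util.concurrent.CompletionException;

// Caller side: exceptions thrown while building the scenario network surface here.
try {
    TransportNetwork network = cache.getNetworkForScenario(networkId, scenarioId);
    // ... proceed with the single point or regional analysis on this network ...
} catch (CompletionException e) {
    // Checked exceptions thrown inside the CacheLoader are wrapped by Caffeine.
    reportScenarioApplicationError(e.getCause());
} catch (RuntimeException e) {
    // Unchecked exceptions (e.g. validation failures during scenario application)
    // bubble up unwrapped; no cache entry is created, so a corrected scenario
    // can be retried later.
    reportScenarioApplicationError(e);
}
```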

greatly improves debugging dependent artifacts
Moved from individual TransportNetwork instances to a single cache under TransportNetworkCache.
@abyrd (Member, Author) commented Jul 20, 2024

Initial impression is that this worked as intended in a large batch of regional analyses. In two similar previous runs, workers crashed from running out of memory or ground to a halt from excessive garbage collection, and parts of the batch had to be manually rerun. After this change, the whole batch finished quickly.

Nonetheless, this deserves careful review because it shifts the locking/synchronization behavior onto the loading cache instead of simple synchronized methods, and eviction behavior will differ from before. Complex interactions may exist with large or numerous regional analyses, or with workers performing both regional and single point analyses.
