FIFO depth optimization #509
Conversation
Hi @nicologhielmetti, thanks for putting together this important PR! Can you edit your first comment to describe the purpose of the PR and/or provide a link to a presentation on it that you've already given? This will help others who may not be as familiar with this development. Also, recall the general contributing guidelines: https://github.com/fastmachinelearning/hls4ml/blob/master/CONTRIBUTING.md Thanks!
Right now the pass is an optimizer pass that is part of the
Making one optimizer pass for each backend is a good solution, but now there is code duplicated across the two, which is not good for maintainability. The code in the passes should be factored out a bit more so that there is less duplication (ideally none). Perhaps it could even help to have multiple smaller passes that make up this flow?
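The factoring suggested above could look something like the sketch below: the shared FIFO-depth logic lives in one base pass, and each backend subclass overrides only the backend-specific application step. The class and attribute names here are illustrative, not the actual hls4ml classes.

```python
# Hypothetical sketch of factoring the duplicated FIFO-depth logic into a
# shared base pass (illustrative names, not the real hls4ml code).

class FifoDepthOptimizationBase:
    """Shared logic: pick a new depth for each FIFO from profiled occupancy."""

    def compute_depths(self, profiled_occupancy):
        # Round each observed maximum occupancy up to the next power of two,
        # with a floor of 2 (a stream FIFO needs at least depth 2).
        return {name: max(2, 1 << (occ - 1).bit_length())
                for name, occ in profiled_occupancy.items()}

    def transform(self, profiled_occupancy):
        depths = self.compute_depths(profiled_occupancy)
        self.apply_depths(depths)
        return depths

    def apply_depths(self, depths):
        # Backend-specific: how the new depths are written into the project.
        raise NotImplementedError


class VivadoFifoDepthOptimization(FifoDepthOptimizationBase):
    def apply_depths(self, depths):
        self.applied = ('vivado', depths)  # stand-in for real backend wiring


class VivadoAcceleratorFifoDepthOptimization(FifoDepthOptimizationBase):
    def apply_depths(self, depths):
        self.applied = ('vivado_accelerator', depths)
```

With this shape, only `apply_depths` differs per backend, so a fix to the depth calculation lands in one place.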
To document the status of this: the optimizer pass is in a good state now. We're holding off merging since the integration as a 'flow' is not working as expected, and may require some changes to the framework. After that we'll see if anything has to change for this PR, but it would likely be limited to the way it's registered as a flow and any flow requirements.
- Update FIFO opt optimizers to depend on the writer flow
- Don't call write from the opt; instead apply the writer flow after
Hi, when running this code:
I get a NotImplementedError:
However, if I change the model to the one used in the documentation, the error is gone and I can run the optimization using the Vivado backend. But when running the code with the VivadoAccelerator backend:
I get error messages related to resource constraints (the full log file is attached below):
I am using the latest stable release, v0.7.0: delphinium, which should support the FIFO depth optimization. Could someone look into this? Thanks in advance!
Please don't hijack old pull requests and other issues with unrelated problems. Your issue comes from TensorFlow, not hls4ml. Compare what you do differently in the two snippets that you shared. Hint: it's in the very first few lines you shared.
This PR optimizes the depth of the FIFOs implemented in `io_stream`-based models by estimating the actual occupancy of the FIFOs through cosimulation. For more details, see this presentation: https://docs.google.com/presentation/d/1ItM-8EAlNRdjRk4Cu1LY7gVvCavD90wrE95SsH2wQQw/edit?usp=sharing

How to use it:
https://gist.github.com/nicologhielmetti/3a268be32755448920e9f7d5c78a76d8
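For readers without access to the gist, a usage sketch along these lines (part number, output directory, and the `profiling_fifo_depth` value are illustrative placeholders; the flow name and `configure` call follow the hls4ml v0.7 documentation, so check your installed version):

```python
# Sketch: enabling the FIFO depth optimization flow in hls4ml (v0.7-era API).
# `model` is assumed to be an existing Keras model; the part, output_dir and
# profiling depth below are placeholder values, not recommendations.
import hls4ml

config = hls4ml.utils.config_from_keras_model(model, granularity='name')
config['Flows'] = ['vivado:fifo_depth_optimization']
hls4ml.model.optimizer.get_optimizer(
    'vivado:fifo_depth_optimization'
).configure(profiling_fifo_depth=100_000)

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    io_type='io_stream',          # FIFO optimization applies to io_stream models
    hls_config=config,
    output_dir='hls4mlprj_fifo_opt',
    part='xc7z020clg400-1',
    backend='VivadoAccelerator',
)

# Cosimulation must run so the FIFO occupancies can actually be profiled.
hls_model.build(reset=False, csim=False, synth=True, cosim=True)
```

Running the build requires a Vivado installation, so this is a configuration sketch rather than something to execute standalone.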