Add docs for custom loss layer #107

Merged: 5 commits into PumasAI:main on Sep 19, 2022

Conversation

JamieMair (Contributor)

Linked to #106. This is a start at documenting how to implement additional loss functions, both for external users of the package and for other open-source developers who want to contribute a more varied set of loss functions.

This might still need some work, but it is enough to resolve my initial issue.

codecov bot commented Sep 18, 2022

Codecov Report

Base: 73.55% // Head: 73.28% // Project coverage decreases by 0.27% ⚠️

Coverage data is based on head (9841444) compared to base (a735c44). The patch has no changes to coverable lines.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #107      +/-   ##
==========================================
- Coverage   73.55%   73.28%   -0.28%     
==========================================
  Files          14       14              
  Lines        2590     2590              
==========================================
- Hits         1905     1898       -7     
- Misses        685      692       +7     
Impacted Files    Coverage Δ
src/dropout.jl    82.89% <0.00%> (-9.22%) ⬇️


@chriselrod (Contributor) left a comment

This looks great, thanks!

Of course, I'd note that adding @turbo to the loss and gradient should improve performance.
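
For illustration, here is a minimal standalone sketch of a squared-error loss and its gradient using @turbo from LoopVectorization; the function names and signatures are hypothetical, not part of the SimpleChains API:

```julia
using LoopVectorization

# Hypothetical standalone kernels, not the SimpleChains API.
# `@turbo` vectorizes the loops over predictions and targets.
function squared_loss(preds::AbstractVector{T}, targets::AbstractVector{T}) where {T}
    s = zero(T)
    @turbo for i in eachindex(preds, targets)
        d = preds[i] - targets[i]
        s += d * d  # reduction; @turbo accumulates this across SIMD lanes
    end
    return s
end

function squared_loss_grad!(grad, preds, targets)
    @turbo for i in eachindex(grad, preds, targets)
        grad[i] = 2 * (preds[i] - targets[i])  # d/dpred of (pred - target)^2
    end
    return grad
end
```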

The pointer pu is in case the layer wants to allocate temporaries. As you noted here:

> As the other loss functions do this, we should define some functions to say that we don't want any preallocated temporary arrays

If you did want temporaries, you'd need to specify their size in layer_output_size and forward_layer_output_size. The memory the pointer pu points to would then have that much extra space. The recommended approach from there is to use a PtrArray for an array-like API, and to bump the pointer if the PtrArray is supposed to live beyond this function.
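
As a rough sketch of that bump-pointer pattern (the function, its arguments, and the sizing convention here are illustrative assumptions, not the exact SimpleChains interface; PtrArray comes from StrideArraysCore):

```julia
using StrideArraysCore: PtrArray

# Hypothetical forward pass that carves an n-element temporary out of the
# preallocated scratch memory behind `pu`, then bumps the pointer so the
# temporary survives past this call. Not an actual SimpleChains signature.
function forward_with_temp(x::AbstractVector{T}, pu::Ptr{UInt8}) where {T}
    n = length(x)
    tmp = PtrArray(reinterpret(Ptr{T}, pu), (n,))  # array view over scratch memory
    @inbounds for i in 1:n
        tmp[i] = x[i] * x[i]
    end
    pu += n * sizeof(T)  # bump past the temporary so later layers don't clobber it
    return tmp, pu
end
```

Under this convention, the size reported by layer_output_size would need to include those n * sizeof(T) extra bytes so the scratch buffer pu points to is actually large enough.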

SimpleChains probably needs a better concept of lifetimes, to avoid overallocating memory.

@chriselrod chriselrod merged commit 2ee684d into PumasAI:main Sep 19, 2022