Dear David Ha, dear Jürgen Schmidhuber,
Thank you for this inspirational blog post. I stumbled upon your paper while researching for my BSc thesis, which is concerned with training agents to navigate complex buildings. As you know, navigation is a complex task in which memory is of great importance.
Given the complexity of the task and the promising results of self-attention, I was wondering whether you have considered replacing the RNN with a self-attention mechanism. I reckon this would make the memory model more powerful while being computationally less expensive. To make the suggestion concrete, a rough sketch is included below.
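Here is a minimal sketch of what I have in mind: a memory model that attends over a window of past (latent, action) pairs instead of carrying an RNN hidden state. This is only an illustration, not your method; the module name, dimensions, and the plain next-latent prediction head (in place of an MDN output) are all my own hypothetical choices, written in PyTorch for brevity.

```python
import torch
import torch.nn as nn

class AttentionMemory(nn.Module):
    """Hypothetical replacement for an RNN memory: self-attention over
    a sequence of past (latent z, action a) pairs."""
    def __init__(self, z_dim=32, a_dim=3, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(z_dim + a_dim, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.predict = nn.Linear(d_model, z_dim)  # predicts the next latent

    def forward(self, z_seq, a_seq):
        # z_seq: (batch, T, z_dim), a_seq: (batch, T, a_dim)
        x = self.embed(torch.cat([z_seq, a_seq], dim=-1))
        # causal mask: each step may only attend to the past
        T = x.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h, _ = self.attn(x, x, x, attn_mask=mask)
        return self.predict(h)  # (batch, T, z_dim) next-latent predictions

# quick smoke test with made-up dimensions
mem = AttentionMemory()
z = torch.randn(2, 10, 32)
a = torch.randn(2, 10, 3)
print(mem(z, a).shape)  # torch.Size([2, 10, 32])
```

Since attention over the whole history can be computed in parallel across time steps, training avoids the sequential bottleneck of an RNN, though memory cost grows with the attention window.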
Thank you for your consideration,
Raphaël Baur, BSc student, ETH Zürich