Replies: 15 comments
[Comment bodies were not preserved in this archive. Repliers: nmstoker, erogol, mardan, othiele, TheDayAfter, dkreutz, Vpr, georroussos]
>>> erogol
[July 17, 2020, 11:54am]
I believe we've done almost everything practically possible with Tacotron.
Mozilla TTS has the most robust public Tacotron implementation so far,
but it is still somewhat slow for low-end devices.
It is time to move on to a new model. I'd like to hear your opinions on
which model we should use for the next iteration. Feel free to share
papers as well.
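When comparing candidate models for speed, a common metric is the real-time factor (RTF): synthesis time divided by the duration of the generated audio, where RTF < 1 means faster than real time. Below is a minimal sketch of how you might measure it; the `synthesize` function here is a hypothetical stand-in, not the Mozilla TTS API.

```python
import time

def real_time_factor(synthesize, text, sample_rate=22050):
    """Return the RTF of a synthesis call: wall-clock time spent
    generating audio divided by the duration of that audio.
    `synthesize` is assumed to return a sequence of samples."""
    start = time.perf_counter()
    waveform = synthesize(text)
    elapsed = time.perf_counter() - start
    audio_seconds = len(waveform) / sample_rate
    return elapsed / audio_seconds

# Stand-in "model" for illustration: emits one second of silence.
def dummy_synthesize(text):
    return [0.0] * 22050

rtf = real_time_factor(dummy_synthesize, "Hello world")
print(f"RTF: {rtf:.4f}")
```

On a real model you would average over several utterances and run on the target hardware, since RTF on a desktop GPU says little about a low-end device.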
[This is an archived TTS discussion thread from discourse.mozilla.org/t/what-are-the-tts-models-you-know-to-be-faster-than-tacotron]