I should add a filter to the randomization step using hash sets of profanity words for both the English and Chinese vocabularies.
This is low priority: from observation, it seldom happens.
For words drawn from the static JSON files, another approach is to pre-screen and clean the source data.
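A minimal sketch of the hash-set filter described above. The set names, the placeholder blocklist entries, and the `pick_clean_word` helper are all hypothetical; in the real project the blocklists would be loaded from curated word lists, and the candidates would come from the static JSON files.

```python
import random

# Placeholder blocklists -- real entries would be loaded from curated
# English and Chinese profanity word lists, not hard-coded.
EN_PROFANITY = {"blockedword1", "blockedword2"}
ZH_PROFANITY = {"屏蔽词"}

# Set union gives O(1) average-case membership checks at pick time.
BLOCKLIST = EN_PROFANITY | ZH_PROFANITY


def pick_clean_word(candidates, rng=random):
    """Draw a random word from candidates, skipping blocklisted ones."""
    clean = [w for w in candidates if w.lower() not in BLOCKLIST]
    if not clean:
        raise ValueError("no clean candidates available")
    return rng.choice(clean)
```

Filtering at pick time keeps the static JSON files untouched, whereas the pre-cleaning approach would instead run the same membership check once over the source files.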