Fixed Length Pre-Tokenizer #1713
base: main
Conversation
Thanks for this. The code looks like it works, but I think it could be simplified quite a lot. Is there any source/paper for trying to do fixed-size chunking? Before adding anything to the library, we usually try to make sure it's used in the wild and would benefit actual users of models (not necessarily researchers exploring new ideas; for that, they can try out your branch or create their own pre_tokenizer directly in Python, as sketched below).
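For reference, here's a minimal sketch of that Python workaround, based on the documented `PreTokenizer.custom` interface; the `FixedLengthPreTokenizer` class and its `length` parameter are made up for illustration, not taken from this PR:

```python
from tokenizers import NormalizedString, PreTokenizedString
from tokenizers.pre_tokenizers import PreTokenizer

class FixedLengthPreTokenizer:
    """Split each piece of text into chunks of `length` characters."""

    def __init__(self, length: int = 5):
        self.length = length

    def fixed_length_split(self, i: int, normalized: NormalizedString):
        # Slice the NormalizedString itself so offsets keep pointing
        # into the original text
        n = len(str(normalized))
        return [
            normalized[start : min(start + self.length, n)]
            for start in range(0, n, self.length)
        ]

    def pre_tokenize(self, pretok: PreTokenizedString):
        pretok.split(self.fixed_length_split)

pre_tok = PreTokenizer.custom(FixedLengthPreTokenizer(length=5))
print(pre_tok.pre_tokenize_str("Hello World!"))
# [('Hello', (0, 5)), (' Worl', (5, 10)), ('d!', (10, 12))]
```

One caveat with custom Python pre-tokenizers: the resulting tokenizer can't be serialized, which is part of why a native implementation like this PR is attractive.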
Force-pushed from 221d55e to e42ac9e
You're right, I simplified it along the lines of my initial comment. I also asked the author of the issue whether this is a common approach in the literature or not (I'm not aware of it either). Should have probably clarified this before jumping on it ;)
According to the author, it's used in DNA Transformers.
Same as my colleague! It would be nice if we could get a reference to the paper it was used in into the documentation of the class (like an arXiv link)!
Otherwise we can also keep this issue open and let the community upvote! If it gets traction, we merge 🤗
pretok.length = 10
assert pretok.length == 10
We'd also want to make sure that it does its job as a pre-tokenizer! So test with the same string that it splits into chunks of 5, then 10; something like the sketch below.
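Along these lines, assuming the PR exposes the pre-tokenizer to Python as `FixedLength` with the usual `pre_tokenize_str` interface (the import path and offsets below are my guesses, not taken from the diff):

```python
from tokenizers.pre_tokenizers import FixedLength  # hypothetical import path

pretok = FixedLength(length=5)
assert pretok.pre_tokenize_str("Hello World!") == [
    ("Hello", (0, 5)), (" Worl", (5, 10)), ("d!", (10, 12)),
]

pretok.length = 10
assert pretok.pre_tokenize_str("Hello World!") == [
    ("Hello Worl", (0, 10)), ("d!", (10, 12)),
]
```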
Introduces a pre-tokenizer that splits text into fixed-length chunks (closes #1697).
The pre_tokenize method could be made more concise by creating a vector with the indices first (along the lines of the sketch below), but that would take a bit more memory, so I went for my approach instead.
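For illustration only, here are the two shapes being contrasted, sketched in Python rather than the Rust of the actual implementation (names are made up):

```python
text, length = "Hello World!", 5

# Alternative: materialize the chunk start indices first,
# costing an extra O(n / length) vector
indices = list(range(0, len(text), length))
chunks_from_indices = [text[i : i + length] for i in indices]

# Chosen approach: slice while iterating, no intermediate vector
chunks_streaming = [text[i : i + length] for i in range(0, len(text), length)]

assert chunks_from_indices == chunks_streaming == ["Hello", " Worl", "d!"]
```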