Submitted by sonudofsilence t3_y19m36 in deeplearning
neuralbeans t1_irw4jiw wrote
You're supposed to pass in each sentence separately, as a list of sentences. You do not pass all the sentences as one string.
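For example, a minimal sketch with the Hugging Face transformers library (the model name and sentences below are placeholders, not anything from the thread):

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = [
    "The bank raised interest rates.",
    "We sat on the river bank.",
]

# Pass the sentences as a list, not as one concatenated string:
# each sentence is tokenized and encoded independently,
# padded to the longest sentence in the batch.
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs)

# One contextual embedding per token, per sentence:
# shape (num_sentences, max_tokens, hidden_size)
token_embeddings = outputs.last_hidden_state
print(token_embeddings.shape)
```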
sonudofsilence OP t1_irw765w wrote
Yes, I know, but that way the embedding of a word will be computed using only the tokens of the sentence it appears in, right?
ExchangeStrong196 t1_irw93ux wrote
Yes. To get contextual token embeddings that attend over longer text, you need to use a model that accepts longer sequence lengths. Check out Longformer.
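As a rough sketch, Longformer accepts sequences up to 4096 tokens (using sliding-window attention instead of full self-attention), so a token's embedding can be contextualized over a whole document. The document text below is just a placeholder:

```python
from transformers import LongformerTokenizer, LongformerModel

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

# A long document that would overflow BERT's 512-token limit.
long_text = " ".join(["This is one sentence of a much longer document."] * 200)

# The whole document fits in a single sequence (up to 4096 tokens),
# so each token embedding is contextualized over the full text.
inputs = tokenizer(long_text, truncation=True, max_length=4096, return_tensors="pt")
outputs = model(**inputs)

token_embeddings = outputs.last_hidden_state  # (1, seq_len, hidden_size)
print(token_embeddings.shape)
```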