Submitted by sonudofsilence t3_y19m36 in deeplearning
I would like to take the word embeddings of a text and visualize them in the same plot (for understanding purposes). The question is how I should pass the text into the pretrained BERT model. At first, I split the text into sentences and passed each one separately, but I'm not sure whether that gives the right results.
neuralbeans t1_irw2mza wrote
If you're talking about the contextual embeddings that BERT is known for, then those change depending on the sentence a word appears in, so you need to supply the full sentence.
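For reference, here is a minimal sketch of this approach using the Hugging Face transformers library: each sentence is passed through a pretrained BERT model, the contextual token embeddings are collected, and everything is projected to 2-D with PCA so it can be shown in a single plot. The model name (`bert-base-uncased`), the example sentences, and the choice of PCA are my own assumptions for illustration, not something from the original post.

```python
import numpy as np
import torch
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from transformers import AutoTokenizer, AutoModel

# Assumed model; any BERT-style checkpoint works the same way.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Example sentences (placeholders) -- note "bank" appears in two contexts.
sentences = [
    "The bank raised interest rates.",
    "We sat on the river bank.",
]

tokens, vectors = [], []
with torch.no_grad():
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt")
        out = model(**enc)
        hidden = out.last_hidden_state[0]  # (seq_len, hidden_size)
        for tok, vec in zip(
            tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), hidden
        ):
            # Skip special tokens so only real words are plotted.
            if tok in ("[CLS]", "[SEP]"):
                continue
            tokens.append(tok)
            vectors.append(vec.numpy())

# Project the contextual embeddings to 2-D for a single scatter plot.
coords = PCA(n_components=2).fit_transform(np.stack(vectors))
plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), tok in zip(coords, tokens):
    plt.annotate(tok, (x, y))
plt.show()
```

Because the embeddings are contextual, the same word taken from different sentences will land at different points in the plot, which is exactly what the comment above is describing.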