Lolologist t1_itux8cs wrote
This all looks very impressive!
I'm not terribly well-versed in the nitty-gritty of ML's underpinnings so forgive me if this is a dumb question but:
How might we apply your speedup to, say, spaCy? Is this something that is dragged and dropped in somewhere?
pommedeterresautee OP t1_ituya3z wrote
I haven't used spaCy in years, but my understanding is that for large models it leverages the Hugging Face library (https://spacy.io/universe/project/spacy-transformers), so I would say it should work out of the box. The only thing is to catch the model instance and override it with the optimized version (it takes the very same input).
Maybe a redditor with more spaCy knowledge than I have can validate the approach...
reSAMpled t1_iuj2yrm wrote
I also haven't used spaCy in a while, but I'm pretty sure there is no way to make this work with the `-sm`, `-md`, or `-lg` models. What Michaël says should be true for the `-trf` models, though I don't think it will be easy: spacy-transformers already has to wrap HF models so they have a thinc API, so you would have to dig deep in there to call Kernl's `optimize_model`.