Submitted by bradenjh t3_z26fui in MachineLearning
_Arsenie_Boca_ t1_ixgjhjp wrote
As interesting as weak supervision is, the main takeaway is that using LLM few-shot predictions as labels to train a small model is a great way to save labeling costs. Using Snorkel on top means you have to query multiple LLMs and carry Snorkel as additional complexity, all for only a few extra points. Perhaps those extra points could also have been achieved by letting the LLM label a few more samples, or by giving it a few more shots to get better labels. (Rough sketch of the basic idea below.)
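To make that concrete, here's a minimal sketch of the distillation setup (not the paper's actual pipeline): a hypothetical `query_llm` stands in for whatever LLM API you use, and scikit-learn provides the small model. The prompt, labels, and example texts are all made up for illustration.

```python
# Minimal sketch: use LLM few-shot predictions as (noisy) training
# labels for a small, cheap model. You pay the LLM cost once per
# example at labeling time; inference then runs on the small model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

FEW_SHOT_PROMPT = (
    "Label each review as positive or negative.\n"
    "Review: 'Loved it, would buy again.' Label: positive\n"
    "Review: 'Broke after two days.' Label: negative\n"
    "Review: '{text}' Label:"
)

def query_llm(prompt: str) -> str:
    """Stub so the sketch runs end to end; swap in a real LLM call."""
    return "negative" if "damaged" in prompt else "positive"

def llm_label(text: str) -> int:
    """Map the LLM's few-shot completion to a binary pseudo-label."""
    completion = query_llm(FEW_SHOT_PROMPT.format(text=text))
    return 1 if completion.strip().lower().startswith("positive") else 0

# Unlabeled corpus: the LLM is queried once per example to get labels.
unlabeled_texts = ["Great value for the price.", "Arrived damaged and late."]
pseudo_labels = [llm_label(t) for t in unlabeled_texts]

# Train the small model on the pseudo-labels.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(unlabeled_texts)
small_model = LogisticRegression().fit(X, pseudo_labels)
```

The Snorkel variant would treat several such prompts/LLMs as separate labeling functions and aggregate their votes before training, which is where the extra queries and complexity come in.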