Scarlet_pot2 OP t1_jed7tts wrote

Fine-tuning isn't the problem. If you look at the Alpaca paper, they fine-tuned the LLaMA 7B model on instruction data generated by GPT-3 (text-davinci-003) and got results comparable to it for only a few hundred dollars. The real cost is the base pre-training of the model, which is far more expensive. Having enough compute to run the model afterward is an issue too.
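
To give a sense of how commodity that step is, here's a rough sketch of an Alpaca-style supervised fine-tuning loop using Hugging Face Transformers. The checkpoint name, prompt template, and hyperparameters are illustrative, not the exact Alpaca recipe:

```python
# Sketch of Alpaca-style supervised fine-tuning (illustrative, not the
# authors' exact training script).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "huggyllama/llama-7b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# ~52K instruction/response pairs Stanford generated with text-davinci-003
data = load_dataset("tatsu-lab/alpaca", split="train")

def to_text(ex):
    # Collapse each example into one prompt+response training string
    # (simplified: ignores the optional "input" field some examples carry)
    return {"text": f"### Instruction:\n{ex['instruction']}\n\n"
                    f"### Response:\n{ex['output']}"}

def tokenize(ex):
    return tokenizer(ex["text"], truncation=True, max_length=512)

data = data.map(to_text)
data = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="alpaca-ft",
                           num_train_epochs=3,
                           per_device_train_batch_size=4,
                           learning_rate=2e-5),
    train_dataset=data,
    # Causal-LM collator: pads batches and copies input_ids into labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The loop itself is standard code anyone can run; the expensive part already happened when LLaMA was pre-trained.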

Both problems would be easier if there were a free online system where people could donate compute and anyone could use it.
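
Systems in this spirit already exist in prototype form, e.g. Petals, where volunteers serve slices of a large model and anyone can run inference over the swarm. A rough sketch of the client side (the model name is an assumption; swarms only host specific checkpoints, and this one may be outdated):

```python
# Sketch of distributed inference over a volunteer-compute swarm via Petals.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "bigscience/bloom-petals"  # assumption: a model the swarm hosts
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Embeddings and the LM head run locally; the transformer blocks run on
# donated GPUs scattered across the internet.
inputs = tokenizer("Distributed inference means", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```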

1

smokingthatosamapack t1_jedmdli wrote

Yeah, I see what you mean, and it could happen. But there's no such thing as a free lunch; even if such a system existed, it would probably pale in comparison to paid compute offerings.

1