QTQRQD t1_jd491r2 wrote on March 21, 2023 at 6:51 PM Reply to [D] Running an LLM on "low" compute power machines? by Qwillbehr There are a number of efforts like llama.cpp/alpaca.cpp or OpenAssistant, but the problem is that fundamentally these things require a lot of compute, which you really can't step around. Permalink 11
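To make the "a lot of compute" claim concrete, here is a back-of-envelope sketch of the memory needed just to hold the weights of LLaMA-class models at different precisions (the function name and table layout are my own illustration, not from any project mentioned above; real usage also needs memory for the KV cache and activations, so these are floors):

```python
# Rough weight-memory floor for the published LLaMA model sizes.
# Counts weights only -- KV cache, activations, and runtime overhead
# are extra, so real requirements are higher.
PARAM_COUNTS = {"7B": 7e9, "13B": 13e9, "33B": 33e9, "65B": 65e9}
BYTES_PER_WEIGHT = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_memory_gb(params: float, precision: str) -> float:
    """Gigabytes needed just to store the weights at a given precision."""
    return params * BYTES_PER_WEIGHT[precision] / 1e9

for size, n in PARAM_COUNTS.items():
    row = ", ".join(
        f"{p}: {weight_memory_gb(n, p):.1f} GB" for p in BYTES_PER_WEIGHT
    )
    print(f"{size} -> {row}")
```

Even with aggressive 4-bit quantization (the approach llama.cpp takes), a 7B model needs roughly 3.5 GB for weights alone, and the larger models scale linearly from there, which is why consumer hardware hits a wall quickly.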
QTQRQD t1_jbnmcv7 wrote on March 10, 2023 at 9:42 AM Reply to comment by potatoandleeks in [D] Is it possible to train LLaMa? by New_Yak1645 You really think Meta spent $30 million on GPUs and then sold them on Craigslist? Permalink Parent 6