Submitted by Qwillbehr t3_11xpohv in MachineLearning
It's understandable that companies like OpenAI would want to charge for access to their models, given the ongoing cost of training and then running them, and I assume most other projects that need that much compute and have to run in the cloud will do the same.
I was wondering whether there are any projects to run or train some kind of language model/AI chatbot on consumer hardware (like a single GPU)? I've heard that since Facebook's LLaMA leaked, people have managed to get it running even on hardware like a Raspberry Pi, albeit slowly. I'm not asking for links to leaked data, just whether there are any projects aiming to run locally on consumer hardware.
xtof54 t1_jd467f3 wrote
There are several. Either collaboratively (look at together.computer, Hivemind, Petals) or on a single machine without a GPU using pipeline parallelism, though that approach requires reimplementing the pipeline for every model; see e.g. slowLLM on GitHub for BLOOM-176B.
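For the collaborative route, here's a minimal sketch of running inference over the Petals swarm, assuming you've done `pip install petals transformers` and that a public swarm is serving the model; the model id below is illustrative and may have changed:

```python
# Minimal sketch: collaborative inference over the Petals swarm.
# Only the embedding layers load locally; the transformer blocks
# are executed by peers distributed across the network.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "bigscience/bloom-petals"  # assumption: swarm-hosted BLOOM-176B

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Running LLMs on consumer hardware is", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```

The upshot is that your machine only needs enough memory for the embeddings and activations, while the 176B parameters stay spread across volunteer nodes, which is why this works on a single consumer GPU (or even CPU) at the cost of network latency.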