
light24bulbs t1_jdzzeh4 wrote

Yes, I'm into it now. Code like this can be adapted to load bulk data instead of Q&A pairs.

I suspect some of the training parameters need to be adjusted a bit to prevent overfitting, and obviously the data loading and templating need to be removed.

https://github.com/lxe/llama-tune

Or for a cooler approach where you make a LoRA layer: https://github.com/serp-ai/LLaMA-8bit-LoRA
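To illustrate the "remove the templating" part: a minimal sketch of turning a bulk corpus into fixed-length training blocks instead of Q&A prompt templates. This is a hypothetical stand-in, not code from either repo, and it fakes token IDs with a word split where you'd really use the LLaMA tokenizer.

```python
# Hypothetical sketch: replace Q&A templating with plain block chunking.
# Assumes you tokenize the whole corpus once, then slice it into
# equal-length blocks for causal-LM training.

def chunk_tokens(token_ids, block_size, pad_id=0):
    """Split a long token stream into equal blocks, padding the last one."""
    blocks = []
    for i in range(0, len(token_ids), block_size):
        block = token_ids[i:i + block_size]
        if len(block) < block_size:
            block = block + [pad_id] * (block_size - len(block))
        blocks.append(block)
    return blocks


text = "some long corpus of raw text " * 40
ids = list(range(len(text.split())))  # stand-in for real tokenizer output
blocks = chunk_tokens(ids, block_size=64)
print(len(blocks))  # 240 tokens -> 4 blocks of 64
```

Each block then trains as-is (labels = inputs shifted by one), with no instruction/response formatting.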
