polawiaczperel t1_jb98qce wrote
Reply to comment by wywywywy in [R] Created a Discord server with LLaMA 13B by ortegaalfredo
Even with one RTX 3090: https://github.com/oobabooga/text-generation-webui/issues/147#issuecomment-1456626387
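A quick back-of-the-envelope sketch of why a 13B model can fit on a single RTX 3090 (24 GB), assuming the weights are quantized to 8-bit as discussed in the linked issue. The figures below are illustrative estimates for weight memory only, not measured values; the real footprint also depends on the loader, activations, and context length.

```python
# Rough VRAM estimate for LLaMA 13B weights at different precisions.
# Illustrative assumption: memory ~= parameter count * bytes per parameter.

def weights_gib(n_params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB for a model of the given size."""
    return n_params_billion * 1e9 * bytes_per_param / 2**30

fp16 = weights_gib(13, 2)  # ~24.2 GiB: does not fit comfortably in 24 GB
int8 = weights_gib(13, 1)  # ~12.1 GiB: fits with headroom for activations

print(f"fp16: {fp16:.1f} GiB, int8: {int8:.1f} GiB")
```

This is the basic reason 8-bit loading (e.g. via bitsandbytes) halves the weight footprint relative to fp16 and brings 13B within reach of a single 24 GB card.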
polawiaczperel t1_jed1e9h wrote
Reply to [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
I was playing with LLaMA 7B, 13B, 30B, and 65B, as well as Alpaca 30B (both native and LoRA), but this seems to be much better, and it is only 13B. Nice! Will they share the weights?