Submitted by Business-Lead2679 t3_1271po7 in MachineLearning
KerfuffleV2 t1_jecbxy7 wrote
Reply to comment by wind_dude in [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
It's based on LLaMA, so it has basically the same problem as anything else based on LLaMA. From the repo: "We plan to release the model weights by providing a version of delta weights that build on the original LLaMA weights, but we are still figuring out a proper way to do so." edit: Nevermind.
You'll still probably need a way to get hold of the original LLaMA weights (which isn't the hardest thing...)
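For anyone unfamiliar with the delta-weight approach: the released file is just (fine-tuned weights minus base weights), so you add it back onto a local copy of the base model. A minimal PyTorch/Transformers sketch, with placeholder paths, ignoring details like added vocab tokens that a real conversion script would handle:

```python
import torch
from transformers import AutoModelForCausalLM

# Paths are placeholders: a local HF-format copy of the original
# LLaMA-13B weights plus a hypothetical released delta checkpoint.
base = AutoModelForCausalLM.from_pretrained("path/to/llama-13b-hf", torch_dtype=torch.float16)
delta = AutoModelForCausalLM.from_pretrained("path/to/vicuna-13b-delta", torch_dtype=torch.float16)

# Recover the fine-tuned model: target = base + delta, parameter by parameter.
delta_sd = delta.state_dict()
for name, tensor in base.state_dict().items():
    tensor += delta_sd[name]  # in-place add; state_dict tensors share storage with the model

base.save_pretrained("path/to/vicuna-13b")
```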
wind_dude t1_jecct1i wrote
Ahh, sorry, I was referring to the dataset pulled from ShareGPT that was used for fine-tuning. ShareGPT has gone offline since the media hype about Google using it for Bard.
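(For reference, the ShareGPT-scraped records used for this kind of fine-tuning are conversation lists roughly like the following; the exact field names here are an assumption based on commonly circulated dumps, not the project's spec:)

```python
# Sketch of a ShareGPT-style training record (schema is an assumption):
# each record is a list of alternating human/model turns.
example = {
    "id": "abc123",
    "conversations": [
        {"from": "human", "value": "What is a delta weight release?"},
        {"from": "gpt", "value": "It ships only the difference from the base model's weights."},
    ],
}
```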
Yes, the LLaMA weights are everywhere, including on the Hugging Face Hub in converted form for HF Transformers.
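E.g., once you have a converted checkpoint, loading it is the usual Transformers flow. The repo id below is a placeholder, since converted copies circulate under various unofficial names:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id for a converted LLaMA checkpoint on the Hub.
repo = "someuser/llama-13b-hf"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Quick smoke test: generate a short continuation.
inputs = tokenizer("The capital of France is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```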