Submitted by austintackaberry t3_120usfk in MachineLearning
SatoshiNotMe t1_jdtemml wrote
So if the notebook is tuning on a fixed dataset, anyone running it will arrive at the same weights after an expensive compute run, which seems wasteful. Why not just share the weights, i.e., the final trained + tuned model? Or is that already available?
matterhayes t1_jeacmx0 wrote
It’s been released here: https://huggingface.co/databricks/dolly-v1-6b
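For anyone landing here, a minimal sketch of loading that released checkpoint with Hugging Face transformers; the dtype and generation settings below are illustrative assumptions, not taken from the model card:

```python
# Sketch: load the released Dolly v1 6B checkpoint from the Hub.
# Generation parameters are illustrative, not official settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v1-6b")
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dolly-v1-6b",
    torch_dtype=torch.float16,  # ~12 GB in fp16; fall back to float32 if needed
    device_map="auto",          # requires the `accelerate` package
)

prompt = "Explain gradient descent in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```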
SatoshiNotMe t1_jeakml0 wrote
thanks!
SatoshiNotMe t1_jealb7d wrote
Is there a "nice" way to use this model (say, via the command line, as in the GPT4All or alpaca.cpp repos) rather than in a Databricks notebook or in HF Spaces? For example, I'd like to chat with it on my M1 MacBook Pro. Any pointers appreciated!
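One possible direction, sketched below: PyTorch's MPS backend runs on Apple Silicon, so a bare-bones command-line chat loop could look something like this. This is an untested assumption, not a recommendation from the thread; it presumes the M1 machine has enough unified memory for the 6B model in fp16 (roughly 12 GB):

```python
# Hypothetical CLI chat loop for Apple Silicon via PyTorch's MPS backend.
# Assumes the 6B model fits in memory in fp16; a sketch, not a tested recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "mps" if torch.backends.mps.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v1-6b")
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dolly-v1-6b", torch_dtype=torch.float16
).to(device)

while True:
    prompt = input("You: ")
    if prompt.strip().lower() in {"quit", "exit"}:
        break
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    outputs = model.generate(
        **inputs, max_new_tokens=200, do_sample=True, temperature=0.7
    )
    text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Causal LMs echo the prompt; trim it before printing (approximate).
    print("Dolly:", text[len(prompt):].strip())
```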