
SatoshiNotMe t1_jdtemml wrote

So if the notebook is tuning on a fixed dataset, anyone running it will arrive at the same weights after an expensive compute run, which seems wasteful. Why not just share the weights, i.e., the final trained + tuned model? Or is that already available?
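(For illustration: if the tuned weights were published to the Hugging Face Hub, downloading them would replace the whole training run. The model id below is a hypothetical placeholder, not a confirmed release.)

```python
# Hypothetical: load already-tuned weights from the Hugging Face Hub
# instead of re-running the fine-tuning notebook.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "databricks/dolly-v1-6b"  # placeholder id; not a confirmed release

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
```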

1

matterhayes t1_jeacmx0 wrote

1

SatoshiNotMe t1_jealb7d wrote

Is there a "nice" way to use this model (say, via the command line, like the GPT4All or alpaca.cpp repos) rather than in a Databricks notebook or in HF Spaces? For example, I'd like to chat with it on my M1 MacBook Pro. Any pointers appreciated!
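(A minimal sketch of one way to do this, assuming the weights are on the Hugging Face Hub and fit in memory; the model id is a placeholder. This uses PyTorch's MPS backend on Apple Silicon rather than the llama.cpp-style quantized runtimes that GPT4All and alpaca.cpp use.)

```python
# Minimal terminal chat loop on an M1 Mac via PyTorch's MPS backend.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "databricks/dolly-v1-6b"  # placeholder id; substitute the real one

# Prefer Apple's Metal (MPS) backend when available, else fall back to CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16  # half precision to halve memory use
).to(device)

while True:
    prompt = input(">>> ")
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=True)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that a 6B-parameter model in fp16 needs roughly 12 GB of memory, so a quantized port (as in the repos mentioned) would be the more practical route on most laptops.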

1