WokeAssBaller

WokeAssBaller t1_jea0o2f wrote

Again, you are using an incredibly limited definition of fine-tuning based on what the OpenAI API allows, which once again tells me you don't know ML.

Fine-tuning is ANY additional training on a foundation model; this can be MLM training on the base model or selectively training the subsequent layers.

OF COURSE this can add knowledge, as you are doing the same kind of training that gave it knowledge in the first place. Glad to see you jumped on the ChatGPT bandwagon last week; build a transformer from scratch and come talk to me.
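To make that concrete, here is a minimal sketch of fine-tuning in that broader sense: continuing the MLM objective on a foundation model while only the last couple of encoder layers are trainable. The model name, dataset, layer count, and hyperparameters are illustrative assumptions, not anything specific from this thread.

```python
# Sketch: "fine-tuning as additional training" -- continue masked-language-model (MLM)
# training on a foundation model, freezing everything except the last two encoder layers
# ("selectively training the subsequent layers"). Names below are assumptions.
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "bert-base-uncased"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Freeze all parameters, then unfreeze only the last two encoder layers.
for param in model.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[-2:]:
    for param in layer.parameters():
        param.requires_grad = True

# Any domain corpus works here; wikitext is just a stand-in.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
                    batched=True, remove_columns=["text"])

# Same MLM objective that pre-trained the base model, now on new data.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-finetune", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```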


WokeAssBaller t1_je783go wrote

Reply to comment by lambertb in [D] GPT4 and coding problems by enryu42

Fair enough, then give them problems to solve and measure their output. This feels like "90% of dentists claim Crest improves your dental health."
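In case "measure their output" sounds vague, here is a minimal sketch of what I mean: run the model's generated code against hidden test cases and report a pass rate, rather than asking people whether it felt helpful. The problem, solution, and check function below are hypothetical placeholders.

```python
# Sketch of an independent evaluation: execute model-generated code and score it on tests.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Problem:
    prompt: str
    generated_solution: str        # code produced by the model under test
    check: Callable[[dict], bool]  # returns True if the executed namespace passes the tests

def passes(problem: Problem) -> bool:
    namespace: dict = {}
    try:
        exec(problem.generated_solution, namespace)  # run the model's code
        return problem.check(namespace)
    except Exception:
        return False

# Hypothetical example: ask for a function and verify it on known cases.
problems = [
    Problem(
        prompt="Write add(a, b) that returns a + b.",
        generated_solution="def add(a, b):\n    return a + b",
        check=lambda ns: ns["add"](2, 3) == 5 and ns["add"](-1, 1) == 0,
    ),
]

pass_rate = sum(passes(p) for p in problems) / len(problems)
print(f"pass rate: {pass_rate:.0%}")
```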

I'll take an independent study into consideration, but today I find it more of a novelty.


WokeAssBaller t1_je04bbu wrote

Reply to comment by lambertb in [D] GPT4 and coding problems by enryu42

I'm an MLE and I've used it a bunch; it's hardly ever actually useful. It gets close, but it's not there, and it's faster to Google almost every time.

It will probably be useful in a year or two, but it needs to understand how to run its own experiments. Anyone who actually thinks this is useful right now is just buying hype.


WokeAssBaller t1_jdvmmfp wrote

Reply to comment by lambertb in [D] GPT4 and coding problems by enryu42

I'm not even disputing that 40% number yet, but I would love to see how they calculated it.

I've tried GPT-4 on a lot of problems and it fails 9/10 times, and I would be faster just Googling it.

This stuff will be amazing; it's just not quite there yet.
