Blacky372 t1_jdl62vl wrote

GPT-J-6B with instruction finetuning will surely never be better than GPT-4. With RLHF you may reach similar response quality in some contexts for some types of instruction, but you will never match the vast amounts of proprietary data that ClosedAI fed into a probably 250+B parameter model, with specialized data from literally 50 experts in various fields who worked on response quality in their domains. That cannot be surpassed easily, unfortunately. But maybe future open-source models will reach similar capabilities with advanced training techniques. I definitely hope so.

56

Blacky372 t1_j521lej wrote

I like your article, thank you for sharing.

But writing "no spam, no nonsense" is a little weird to me when I get this on trying to subscribe.

Don't get me wrong, it's fine to monetize your content and to use your followers' data to show them personalized ads. But acting like you're just enthusiastic about sharing info at the same time doesn't really fit.

3