Submitted by Kaarssteun t3_zz3lwt in singularity
To anyone with even a slight grasp of LLMs, you might have noticed ChatGPT isn't that big of a deal architecturally speaking. It's an updated version of GPT, GPT-3.5, fine-tuned on conversational data with RLHF (reinforcement learning from human feedback).
Anyone could have had this functionality, a smart chatbot capable of taking a big chunk of your workload off your hands, with a little prompt engineering in OpenAI's Playground.
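To make the "pre-prompted interface" idea concrete, here is a minimal sketch of what that Playground-style prompt engineering looked like: a fixed instruction preamble plus the running dialogue, fed to a completion model. The preamble wording, speaker labels, and model name are illustrative assumptions, not OpenAI's actual ChatGPT setup.

```python
# Sketch of the "pre-prompted chatbot" idea: wrap a base completion model
# in a fixed instruction preamble plus the conversation so far.
# Preamble text and model name below are assumptions for illustration.

PREAMBLE = (
    "The following is a conversation with a helpful, knowledgeable AI "
    "assistant. The assistant answers clearly and concisely.\n"
)

def build_prompt(history, user_message):
    """Assemble the full prompt: preamble + prior turns + new user turn."""
    lines = [PREAMBLE]
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"Human: {user_message}")
    lines.append("AI:")  # cue the model to reply in the assistant role
    return "\n".join(lines)

# Sending this to the completions endpoint of the era would look roughly
# like the following (needs the legacy `openai` package and an API key,
# so it is left commented out here):
#
#   import openai
#   resp = openai.Completion.create(
#       model="text-davinci-003",
#       prompt=build_prompt(history, "Summarize this email for me."),
#       stop=["Human:"],
#   )

prompt = build_prompt(
    [("Human", "Hi!"), ("AI", "Hello! How can I help?")],
    "Draft a polite meeting reminder.",
)
print(prompt)
```

The point of the post is that this wrapper is essentially all the "architecture" ChatGPT added on top of GPT-3.5, aside from the fine-tuning itself.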
No source for this one, but if I recall correctly ChatGPT wasn't that big of a project, which is understandable given it's not much more than an easy-to-use, pre-prompted interface to GPT-3.5. OpenAI likely did not expect this kind of reaction from the general public, given that their three previous big language models were certainly not talked about on the streets. ChatGPT being in the familiar format of a simple chat interface wholly dictated its success.
ChatGPT is officially a research preview, which subsequently exploded. Instead of collecting human feedback at little extra computational cost, they now face hordes of people sucking the FLOPS out of their vaults for puny tasks, expecting this to remain readily available and free, while the costs for OpenAI are "eye-watering".
OpenAI cannot shut this thing down anymore; the cat's out of the bag. This is of course exciting from an r/singularity user's perspective; Google is scrambling to cling to the reins of every internet user, and AI awareness is higher than it has ever been.
I just can't imagine this was the optimal outcome for OpenAI!
hauntedhivezzz t1_j299yeo wrote
Umm, the optimal outcome was a viral hit / free marketing, which would lead to an excited user base who would then pay for their product.