
VelveteenAmbush t1_jcbu8nr wrote

> While they also potentially don't release every model (see Google's PaLM, LaMDA) or only with non-commercial licenses after request (see Meta's OPT, LLaMA), they are at least very transparent when it comes to ideas, architectures, trainings, and so on.

They do this because they don't ship. If you're a research scientist or ML research engineer, publication is the only way to advance your career at a company like that. Nothing else would ever see the light of day. It's basically a better funded version of academia, because it doesn't seem to be set up to actually create and ship products.

Whereas if you can say "worked at OpenAI from 2018-2023, team of 5 researchers that built GPT-4 architecture" or whatever, that speaks for itself. The products you release and the role you had on the teams that built them are enough to build a resume -- and probably a more valuable resume at that.

14

the_mighty_skeetadon t1_jccdzgr wrote

Many of the interesting developments in deep learning have in fact made their way to Google + FB products, but those have not been "model-first" products. For example: ranking, personalization, optimization of all kinds, tech infra, energy optimization, and many more are driving almost every Google product and many FB ones as well.

However, this new trend of what I would call "Research Products" -- light layers over a model -- is a different mode of launching with higher risks, and those risks look very different for Google-scale big tech than they do for OpenAI. Example: ChatGPT would tell you how to cook meth when it first came out, and people loved it. Google got a tiny fact about JWST semi-wrong in one tiny sub-bullet of a Bard example, got widely panned, and lost $100B+ in market value.

14

VelveteenAmbush t1_jccksp9 wrote

Right, Google's use of this whole field has been limited to optimizing existing products. As far as I know, after all their billions in investment, it hasn't driven the launch of a single new product. And the viscerally exciting stuff -- what we're calling "generative AI" these days -- never saw the light of day from inside Google in any form except arguably Gmail suggested replies and occasional sentence completion suggestions.

> it's a different mode of launching with higher risks, many of which have different risk profiles for Google-scale big tech than it does for OpenAI

This is textbook innovator's dilemma. I largely agree with the summary, but I think basically the whole job of Google's leadership boils down to two things: (1) keep the good times rolling, and (2) stay nimble and avoid getting disrupted by the next thing. And on the second point, they failed... or at least they're a lot closer to failure than they should be.

> Example: ChatGPT would tell you how to cook meth when it first came out, and people loved it. Google got a tiny fact about JWST semi-wrong in one tiny sub-bullet of a Bard example, got widely panned and lost $100B+ in market value.

Common narrative, but I think Google's market cap tanked at the Bard announcement for two other reasons: (1) they showed their hand, and it turns out they don't have a miraculous ChatGPT-killer up their sleeves after all, and (2) the cost structure of LLM-driven search results is much worse than that of classical search, so Google is going to be less profitable in that world.

Tech journalists love to freak out about everything, including LLM hallucinations, bias, toxic output, etc., because they get paid based on engagement -- but I absolutely don't believe that stuff actually matters, and OpenAI's success is proving it. Google's mistake was putting too much stock in the noise that tech journalists create.

9