Comments

Sashinii t1_it8qrq4 wrote

I can't even keep up with Google's AI progress anymore, let alone AI progress in general.

41

mj-gaia t1_it8rbkc wrote

I never understand a single thing posted here, but comments like yours tell me that AI seems to be progressing really fast right now, so thanks, that's all I need to understand I guess lol

27

Apollo24_ t1_it8t3xc wrote

I saw Flan-T5 about 30 minutes ago and was amazed it could beat PaLM with far fewer parameters. Half an hour later we get a new PaLM :P

45

AdditionalPizza t1_it8zew3 wrote

Basically, look at it this way: scaling works, but we haven't scaled massively again (yet). Also, in ELI5 terms, they're discovering significantly "better ways to scale", in a sense. So it's going to be bonkers when we do a next-generation scale-up.

36

Ezekiel_W t1_it9276k wrote

Another fantastic AI paper was released today; this pace is almost spooky.

24

visarga t1_it96451 wrote

The idea is actually from a 2021 paper by the same authors. Language models usually predict the next token when they are GPT-like, and predict randomly masked words when they are BERT-like. They combine both objectives and find it has a huge impact on scaling laws. In other words, we were using the wrong mix of noise to train the model. The new recipe is roughly 2x more compute-efficient than before.
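For intuition, here is a toy sketch of what mixing the two objectives could look like. This is purely illustrative: the mask token, span length and 50/50 mix are made-up assumptions, not the paper's actual recipe.

```python
import random

# Toy sketch: sample a training objective per example, mixing GPT-style
# next-token prediction with BERT/T5-style span corruption. The mask token,
# span length and mixing ratio are made-up values for illustration only.
MASK = "<mask>"

def causal_lm_example(tokens):
    # GPT-like: predict each next token from the prefix before it.
    return {"inputs": tokens[:-1], "targets": tokens[1:]}

def span_corruption_example(tokens, span_len=3):
    # BERT/T5-like: hide a random span and ask the model to reconstruct it.
    start = random.randrange(0, max(1, len(tokens) - span_len))
    corrupted = tokens[:start] + [MASK] + tokens[start + span_len:]
    return {"inputs": corrupted, "targets": tokens[start:start + span_len]}

def make_training_example(tokens, causal_prob=0.5):
    # The key idea: one model, trained on a mixture of denoising objectives.
    if random.random() < causal_prob:
        return causal_lm_example(tokens)
    return span_corruption_example(tokens)

print(make_training_example("the quick brown fox jumps over the lazy dog".split()))
```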

This paper combines with the FLAN paper, which uses 1,800 different tasks to instruction-tune the model. The hope is that learning many tasks will teach the model to generalise to new tasks. An important trick is using chain of thought; without it there is a big drop. Both methods boost the score, and together they give the largest boost.
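To make that concrete, a hypothetical instruction-tuning record with and without a chain-of-thought rationale might look like this (the field names and wording are illustrative, not the actual FLAN templates):

```python
# Hypothetical instruction-tuning records, one with a chain-of-thought
# rationale and one without. The formatting is illustrative only.
plain = {
    "input": "Q: A bag has 3 red and 5 blue marbles. How many marbles in total?",
    "target": "8",
}

chain_of_thought = {
    "input": "Q: A bag has 3 red and 5 blue marbles. How many marbles in total?\n"
             "Let's think step by step.",
    "target": "There are 3 red marbles and 5 blue marbles. 3 + 5 = 8. The answer is 8.",
}

# Mixing both kinds of examples across ~1800 tasks is what the comment above
# refers to as instruction tuning with chain of thought.
for example in (plain, chain_of_thought):
    print(example["input"], "->", example["target"])
```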

They even released the FLAN models. Google is on a roll!

I tried FLAN; it reminds me of GPT-3 in how quickly it gets the task. It doesn't have the vast memory of GPT-3, though. So now I have on my computer a DALL-E-like model (Stable Diffusion) and a GPT-3-like model (FLAN-T5-XL), plus an amazing voice recognition system, Whisper. It's hard to believe. Two years later they've shrunk GPT-3, and we have voice, image and language on a regular gaming desktop.
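For anyone who wants to try the same thing, a minimal way to run a FLAN-T5 checkpoint locally with the Hugging Face transformers library looks roughly like this; the small checkpoint is used as a lighter stand-in, swap in the XL one if your hardware allows:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# "google/flan-t5-small" fits almost anywhere; "google/flan-t5-xl" needs a lot
# more RAM/VRAM, so pick according to your machine.
model_name = "google/flan-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = "Translate to German: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```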

14

Tavrin t1_it9bbov wrote

There tend to be a lot more papers at this time of year because the NeurIPS conference is just around the corner, so that's why we're suddenly seeing a lot of new stuff right now, but it's always nice to see.

And obviously the papers become more and more impressive each year.

I've got to say, right now Google came prepared and came in full force.

8

CommentBot01 t1_it9fevk wrote

No one can know until they try it to the end. Questioning is important, but without trying and failing, nothing progresses. Currently deep learning and LLMs are very successful and not even close to their limits.

8

FirstOrderCat t1_it9oqhg wrote

> Currently deep learning and LLMs are very successful and not even close to their limits.

To me it's the opposite: companies have already invested enormous resources, but LLMs can solve some simplistic, limited-scope tasks, and not many AGI-like real applications have been demonstrated.

2

katiecharm t1_it9t9sy wrote

None of this means anything to my crayon brain until I see the quality of titties it can generate.

8

AsthmaBeyondBorders t1_ita7j8l wrote

LLMs are at their best when coupled with other AIs for natural-language commanding: instructing a robot on what to do using natural language and chain of thought instead of predetermined scripts, or instructing an image generator like Stable Diffusion or DALL-E on what to draw based on language instead of complicated manual adjustment of parameters and code. I'd say those are very necessary applications.

You may be looking at LLMs in their standalone form, but don't forget language models are behind Stable Diffusion, DreamFusion, DreamBooth, etc.
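As a sketch of that kind of coupling, here is roughly how a natural-language instruction drives Stable Diffusion through the diffusers library; the "LLM expansion" step is represented by a hard-coded string, and the checkpoint name and GPU assumption are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion v1.5 checkpoint (assumes a CUDA GPU is available).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

user_request = "a cozy cabin in the woods"
# In the coupling described above, an LLM would expand the terse request into
# a detailed prompt; here that step is just a hard-coded string.
expanded_prompt = (
    "a cozy log cabin in a snowy pine forest at dusk, warm light in the "
    "windows, highly detailed, soft volumetric lighting"
)
image = pipe(expanded_prompt).images[0]
image.save("cabin.png")
```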

6

AsthmaBeyondBorders t1_itabm60 wrote

Look at the post you are replying to.

A wall is when we can't improve the results of the last LLMs.

New LLMs, both with different models and with bigger scale, not only improve the performance of previous LLMs on tasks we already know they can do; we also know there are emergent skills we may still find by scaling up. The models become capable of doing something completely new just because of scale. When we scale up and stop finding emergent skills, then that's a wall.

7

FirstOrderCat t1_itacdp4 wrote

>A wall is when we can't improve the results of the last LLMs.

The wall is a lack of breakthrough innovations.

Latest "advances" are:

- build Nx larger model

- tweak prompt with some extra variation

- fine-tune on another dataset, potentially leaking benchmark data to training data

- receive marginal improvement in benchmarks irrelevant to any practical task

- give your new model some epic-cringe name: path-to-mind, surface-of-intelligence, eye-of-wisdom

But somehow none of these "advances" can replace humans on real tasks, with the exception of image style transfer and translation.

−5

FirstOrderCat t1_itaeypy wrote

This race may be over.

On the graph, the guy is proud of getting 2 points on some synthetic benchmark while spending 4 million TPUv4 hours, which is about $12M.

At the same time we hear that Google is cutting expenses and considering layoffs, and the LLM part of Google Research will be first in line, because it doesn't provide much value to the Ads/Search business.

1

AsthmaBeyondBorders t1_itagbfz wrote

This model had up to 21% gains on some benchmarks, and as you can see there are many benchmarks. You may notice this model is still 540B just like the older one, so this isn't about scale; it's about a different model which can be as good as or better than the previous ones while being cheaper to train.

You seem to know a lot about Google's internal decisions and strategies as of today. Good for you; I can't discuss stuff I have absolutely no idea about, and clearly you have insider information about where Google is going and what they are doing, that's real nice.

3

FirstOrderCat t1_itahazn wrote

> This model had up to 21% gains on some benchmarks, and as you can see there are many benchmarks

Meaning they got less than 2 points on many of the others.

> it is about a different model which can be as good as or better than the previous ones while being cheaper to train

The model is the same; they changed the training procedure.

> You seem to know a lot about Google's internal decisions and strategies as of today

This is public information.

2

visarga t1_itap4rx wrote

LMs can be coupled with tools: an execution environment to run pieces of code they generate, a search engine, a knowledge base, or even a simulator. These "infuse" strict symbolic consistency into the process, creating a hybrid neural-symbolic system.
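A minimal sketch of the first of those couplings, an LM plus an execution environment; the toy_lm function is a hypothetical stand-in for a real model, and the sandboxing shown is deliberately simplistic:

```python
def toy_lm(prompt: str) -> str:
    # Hypothetical stand-in: pretend the model answered by writing Python code.
    # In practice this reply would come from an API or a local model like FLAN-T5.
    return "result = sum(n * n for n in range(1, 11))"

def run_generated_code(code: str) -> dict:
    # Execute the model's code in an isolated namespace so we read back
    # symbolically exact results instead of trusting the model's arithmetic.
    namespace: dict = {}
    exec(code, {"__builtins__": {"range": range, "sum": sum}}, namespace)
    return namespace

code = toy_lm("What is the sum of the squares of 1..10? Answer with Python.")
print(run_generated_code(code))  # {'result': 385}
```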

5

Spoffort t1_itbjrj0 wrote

I know what you mean, but look at the x-axis, where compute is. The model is not 2 times better (your point, about the y-axis); it needs about 2 times less compute for a given outcome (the x-axis). If you want, I can explain it further 😄
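With purely hypothetical numbers, the claim reads like this:

```python
# Purely hypothetical numbers to illustrate the point above: the claim is about
# matching a score with less compute (x-axis), not about a higher score (y-axis).
old = {"flops": 1.0e25, "score": 58.0}  # baseline recipe (made-up values)
new = {"flops": 0.5e25, "score": 58.0}  # improved recipe reaches the same score

savings = old["flops"] / new["flops"]
print(f"Same score reached with {savings:.1f}x less compute")  # -> 2.0x
```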

3

FirstOrderCat t1_itc6pne wrote

It looks like they hit a point of diminishing returns somewhere around 0.5 × 1e25 FLOPs.

After that the model trains much more slowly. They could have continued training further and claimed they "saved" another 20M TPU hours.

1