Comments


Parzival_007 t1_iqsfyoy wrote

LeCun and Lex would lose their minds if they saw this.

25

seba07 t1_iqshjec wrote

And the "results are 0.x% better" papers are often about challenges that haven't been interesting for many years.

139

Even_Information4853 t1_iqsk1i0 wrote

Andrew Ng - Geoffrey Hinton - Yann LeCun

Yoshua Bengio - ??? - ???

Jeremy Howard? - ??? - ???

??? - Demis Hassabis - Lex Fridman


Can anyone help me fill in the rest?

187

throwawaythepanda99 t1_iqsrohn wrote

Did they use machine learning to turn people into children?

130

RageA333 t1_iqst9zc wrote

Proving it is still an advancement.

90

JanneJM t1_iqt4922 wrote

One of these is not like the others.

"Prove something known empirically" is actually useful and important.

305

anyspeed t1_iqt54zp wrote

I find this paper disturbing

2

moschles t1_iqtccm9 wrote

Some feelings were hurt by this meme.

5

superawesomepandacat t1_iqtcxtl wrote

Top right is usually how it works outside of academia: data-iterative modelling.

7

BrotherAmazing t1_iqtiyud wrote

This looks more like a “meme”-tag worthy post than “discussion”.

12

OptimalOptimizer t1_iqtmkgr wrote

You’re missing “Schmidhuber did it 30 years ago”

64

Delta-tau t1_iqtnde1 wrote

All funny and right on the spot, except the one about "proving what had already been known empirically for 5 years". That would actually be a big deal.

47

forensics409 t1_iqu8xlc wrote

I'd say these are 99% of papers. 0.99% are review papers and 0.01% are actually cool papers.

6

jcoffi t1_iqu9vqw wrote

I feel attacked

3

emerging-tech-reader t1_iqubs5s wrote

I saw an NLP/ML paper a few years back whose conclusion was "this would never work", and they really tried. (I forget what they were trying to do.)

3

Magneon t1_iquektg wrote

Other common ones:

> We fiddled with the hyperparameters without mentioning it, and didn't create a new validation set

and

> What prompted the layer configuration we selected? I dunno, it seemed to work best.

15

KeikakuAccelerator t1_iquntc2 wrote

Bruh, Demis Hassabis and his team literally solved protein folding.

4

TheReal_Slim-Shady t1_iqurhqq wrote

When papers are produced to land jobs or advance careers, that is exactly what happens.

2

show-up t1_iqut6en wrote

Demis Hassabis: model provably surpasses human-level performance on this handful of tasks.

Media: Congrats!

Researcher spending more time on social media than the PI would like:

Results are 0.1% better than that other paper. Kek.

4

gonomon t1_iquweo8 wrote

This is perfect.

1

MangoGuyyy t1_iquwrit wrote

Andrew Ng: co-founder of Coursera, AI educator, Stanford. Geoffrey Hinton: deep learning godfather, Canada. Yann LeCun: Chief AI Scientist at Meta, deep learning godfather, inventor of CNNs.

Yoshua Bengio: deep learning godfather. Daphne Koller: co-founder of Coursera, computational biology, Stanford prof. Fei-Fei Li: Stanford Vision Lab.

Jeremy Howard: co-founder of fast.ai, AI educator. Jeff Dean: Google engineer. Andrej Karpathy: Tesla AI head.

???. Demis Hassabis: head of DeepMind. Lex Fridman: MIT AI researcher, YouTuber/podcaster.

6

impossiblefork t1_iqv6u0x wrote

All of them are useful.

The 0.1% improvements have sort of added up; then you get the "baseline is all you need" paper, then people start adding 0.1% improvements again, and then someone proves something about it, or something else of that sort.

28

jturp-sc t1_iqvcptz wrote

Most of them are really just CV padding for some 1st- or 2nd-year grad student. If you look into them, it's usually something as trivial as being the first to publish a paper applying a model that came out 12 months ago to a less common dataset.

It's really more about the grad student's advisor doing them a solid in terms of building their CV than about adding useful literature to the world.

15

ScottTacitus t1_iqvht66 wrote

More of these need to be in Mandarin to represent

3

sephiap t1_iqvw3pk wrote

Man, that last one is amazing. What a way to get your citation count up: goad the entire community. Totally going to use this.

1

Fourstrokeperro t1_iqw4vuv wrote

"We plugged one Lego block into another" is too real, omg.

2

supermopman t1_iqwdxmz wrote

What's the source for the images?

1

sk_2013 t1_iqwen2y wrote

Honestly I wish my advisor had done that.

My CS program was alright overall, but the ML professor used the same undergrad material for all his classes, and I've kind of been left trying to put together functional knowledge and a career on my own.

4

Frizzoux t1_iqx0p6q wrote

Lego block gang !

2

jonas__m t1_ircbeek wrote

Missing from the list: present 10 sophisticated innovations when only one simple trick suffices, to ensure reviewers find the paper "novel".

1

Quetzacoatl85 t1_irzr5vy wrote

It's fun and I chuckled, but I'd say this covers the majority of papers in any scientific field, and that that's OK. This is how science works; it can't all be groundbreaking, status-upsetting, and axiom-refuting.

1