Scarlet_pot2

Scarlet_pot2 OP t1_j39g574 wrote

https://en.wikipedia.org/wiki/Word2vec

"Word2vec is a technique for natural language processing (NLP) published in 2013 (Google). The word2vec algorithm uses a neural network model to learn word associations from a large corpus of text."

Strictly speaking, word2vec isn't a "guess the next word" model: it learns word embeddings by predicting a word from its surrounding context (CBOW) or the context from a word (skip-gram). But it was an early, influential step toward today's language models, which are trained to predict the next word.
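As a toy illustration of the "guess the next word" idea, here is a minimal bigram model that predicts the next word from counted co-occurrences. This is only a counting sketch, not word2vec's neural approach, and the tiny corpus is made up for the example:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def guess_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(guess_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

Real language models replace the count table with a trained neural network, but the interface is the same: given what came before, score the candidates for what comes next.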

https://towardsdatascience.com/attention-is-all-you-need-discovering-the-transformer-paper-73e5ff5e0634

This next link is a walkthrough of the "Attention Is All You Need" paper, which first described how to build a transformer model.
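The paper's core operation, scaled dot-product attention (softmax(QK^T / sqrt(d)) V), can be sketched in plain Python without any framework. The tiny matrices at the bottom are illustrative only:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention on lists-of-lists matrices.
    Each row of Q attends over the rows of K and mixes the rows of V."""
    d = len(K[0])  # key dimension, used for the sqrt(d) scaling
    out = []
    for q in Q:
        # similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        # output row = attention-weighted average of the value rows
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Query aligned with the first key, so the first value row dominates
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))
```

The transformer stacks this operation (with learned projections, multiple heads, and feed-forward layers), but the mixing step above is the whole trick the paper's title refers to.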

Neither of these discoveries took millions or billions in funding. They were made by small groups of passionate people, and their work led to the LLMs of today. We need to find new methods that would be similarly disruptive when extrapolated out, and the more people we have working on it, the better chance we have of finding things like these. IMO these are parts of the future AGI, or at least important steps toward it. It doesn't take ungodly amounts of money to make important innovations like these.

1

Scarlet_pot2 OP t1_j39e8ao wrote

The goal shouldn't be to develop AGI outright. The goal should be to make discoveries that could lead to parts of AGI when extrapolated out. Just as the first word-generation model led to the LLMs of today, we need small teams trying new things and sharing the results.

Let's assume "guess the next word" ends up filling the prediction part of the brain when AGI is eventually developed. Maybe a small group develops the first thing that will later fit another part of the brain: how to make memory work, how to develop reasoning, or any of the other parts.

At least some of those can be found by small groups trying new approaches. John Carmack said that all the code of AGI would fit on a USB drive. The goal should be to find parts of that code.

It won't be easy or quick, but I'm sure that if we had 100k people with a beginner-to-intermediate understanding of the subjects related to AI, all trying different approaches and sharing their results, some working together, then after a few years we would probably have at least a few new methods worth trying that may lead to a part of AGI.

5

Scarlet_pot2 OP t1_j39b08w wrote

IMO, guess-the-next-word isn't going to lead to AGI alone, but it will most likely play a part. Let's assume "guess the next word" ends up filling the prediction part of the brain when AGI is eventually developed. Maybe a small group develops the first thing that will later fit another part of the brain: how to make memory work, how to develop reasoning, or any of the other parts.

The goal should be to make discoveries that could lead to parts of AGI when extrapolated out, and at least some of those can be found by small groups trying new approaches. John Carmack said that all the code of AGI would fit on a USB drive. The goal should be to find parts of that code.

1

Scarlet_pot2 OP t1_j39a7ok wrote

The fact that there are people at these companies trying new approaches shouldn't stop you from trying too. They aren't going to be trying what you are. Even if what you try comes up short, we'll know one more path that doesn't lead toward AGI.

The goal should be to try as many different approaches as possible so we can identify the ones that show promise. It won't be easy or quick, but I'm sure that if we had 100k people with a beginner-to-intermediate understanding of the subjects related to AI, all trying different approaches and sharing their results, some working together, then after a few years we would probably have at least a few new methods worth trying that may lead to a part of AGI.

2

Scarlet_pot2 OP t1_j398xn5 wrote

Expectations should be tempered: the foundations of AGI aren't going to be built in a couple of hours. But just as "guess the next word" was discovered and led to LLMs, I'm sure there are many simple discoveries waiting to be found, and many diverse groups trying different things and sharing their results could lead to some of those. It may not be you who builds the million-dollar model, but you could make the first simple program that shows promise and ends up being the base idea for someone's large models a few years down the line.

2

Scarlet_pot2 OP t1_j397wnk wrote

Yes, making large models is expensive, but coming across the next discovery like "guess the next word" isn't. That small discovery led to all the LLMs we post about today, and it was made by a small group of people. The goal shouldn't be to train massive million-dollar models. The goal should be to find novel approaches.

A small group could make the next discovery like guess-the-next-word, and I'm sure there are many discoveries left to be made. Building massive models from it may happen years later, whether by the creators or by a better-funded group.

1

Scarlet_pot2 OP t1_j396pzu wrote

The goal shouldn't be to build the rocket. It should be to develop the physics so someone with more funding can build the rocket later on.

Just as the first word-generation model led to the LLMs of today, we need small teams trying new things and sharing the results. If something is useful, it may lead to the big models of tomorrow.

1

Scarlet_pot2 OP t1_j3961c1 wrote

I'm not talking about LLMs. I'm talking about new, novel approaches. LLMs started because a small group figured out they could make a program that guesses the next word. Small groups trying new things could develop the next program like that, and that's something anyone could try.

You may try new things and make a simple discovery that leads to new advanced AIs a couple of years later.

0

Scarlet_pot2 OP t1_j395goa wrote

The fact that it's still in its infancy is all the more reason to get involved. Get in early and you have more of a chance to make an impact. You don't need to fully move into the field; you can do it for an hour a day as a hobby while keeping your regular job. Maybe it'll lead to a programming / AI job in the future. Maybe you'll stumble across the next simple discovery, like "guess the next word", that will lead to advanced models down the line.

1

Scarlet_pot2 OP t1_j394x4d wrote

It doesn't have to be. Think of the first word-generation model that led to all the new LLMs we post about. It was a simple new thing we discovered we could do with code (guess the next word). Things like that could definitely be developed by small groups, and they could lead to many new advanced AIs a few years down the line.

0

Scarlet_pot2 OP t1_j393z6j wrote

I'm for it! We could share resources, help each other learn, etc. We could start a Discord server, make a post about it with the mission or goal, and then people who want to contribute can join. It's doable, and once one group starts, more can follow.

When you look at the start of LLMs, it was a small team learning they could build a model to generate the next word, and later that was extrapolated out to ChatGPT, image generation, and more. We could try to find a change like that: a new method that can later lead to many new AIs.

1

Scarlet_pot2 OP t1_j392tjr wrote

Agreed. The innovation we need will require out-of-the-box thinking, so sticking to current AI trends and methods isn't necessary. We should tinker with the fundamentals and figure out new ways to do things. When you look at the start of LLMs, it was a small team learning they could build a model to generate the next word, and that was extrapolated out to ChatGPT, image generation, and more. We need more advancements like that, and those types of advancements are entirely possible for small teams and individuals.

A jack-of-all-trades approach would also help with coming up with new ideas and trying them out: knowing about human and animal cognition, and studying human psychology and its development. Trying to recreate theories like these in code, in novel ways, can lead to advancements. Small groups can make these advancements and more without needing billions in funding.

1

Scarlet_pot2 OP t1_j391hyr wrote

I'm speaking specifically about AI. I won't be pushed by stakeholders to train LLMs, to push out products to generate revenue, or to constantly try to please the funders. By "no profit motive" I mean building solely for the purpose of AI advancement: being willing to lose money and time for it, not expecting to make it back next quarter.

I'm not saying a group of amateurs or people with intermediate experience could build it outright. But they could make small advancements over time, and over years it could get better, leading to an open-source AI built from many people's contributions over time.

1