Scarlet_pot2

Scarlet_pot2 t1_j9xyqje wrote

I want to make an AI that specializes in being a friend and forming relationships. There's a billion-dollar company waiting to be made there. It really can be built with current LLMs; look at Sydney. With some tweaking, it's very possible.
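A minimal sketch of the idea, assuming you just wrap an open model in a persona prompt and a chat loop (the model, the persona, and the name "Sam" are all placeholders here, not a real product):

```python
# Companion-bot sketch using the Hugging Face transformers library.
# "gpt2" is a stand-in; a real friend-bot would need a far stronger
# chat-tuned model plus the tweaking (fine-tuning, long-term memory)
# mentioned above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

PERSONA = ("The following is a conversation with Sam, a warm, supportive "
           "friend who remembers details and asks follow-up questions.\n")

history = PERSONA
while True:
    user = input("you: ")
    history += f"You: {user}\nSam:"
    full = generator(history, max_new_tokens=60, do_sample=True,
                     pad_token_id=50256)[0]["generated_text"]
    reply = full[len(history):].split("\n")[0].strip()
    print("Sam:", reply)
    history += f" {reply}\n"
```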

We'll probably see specialized models before the end of 2024. OpenAI says it will have over a billion dollars in revenue by then, so it will probably have specialized, profitable models available on that timeline.

2

Scarlet_pot2 t1_j9xyhv6 wrote

Call me anti-capitalist or whatever, but I'm not upset OpenAI isn't "protecting" wealthy people. I mean, pretty much every religion says greed and the wealthy are pretty bad, and there are common ideologies like socialism, communism, and Marxism that critique greed and the wealthy.

To me, it's a good sign that AI isn't being used to enforce worship of the wealthy.

24

Scarlet_pot2 t1_j8zdrjh wrote

I heard models like BingGPT and ChatGPT were much smaller than models like GPT-3. That's why you were able to have long-form conversations with them, and how they could look up information and spit it out fast: they didn't take much compute to run. That's also why Microsoft treated these chat models as add-ons to Bing.

1

Scarlet_pot2 t1_j8zdi57 wrote

A smaller company that realizes the potential of Sydney will take advantage of Big Tech's failure to see the big picture past the hit pieces.

1

Scarlet_pot2 t1_j66jo0e wrote

There are open-source datasets like LAION-5B, Common Crawl, the Pile, etc. The main thing is getting a model trained on them. I guess I would need to design the transformer architecture, train the model, then use that as a proof of concept to get investors interested.
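For the "design the architecture and train it" step, a toy proof of concept could start from PyTorch's built-in transformer layers. A sketch with made-up sizes and random stand-in data, nowhere near LAION or Pile scale (a real model would also need positional encodings and a tokenizer):

```python
# Toy decoder-style language model in PyTorch; all sizes are placeholders.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size=50_000, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        x = self.blocks(x, mask=mask)  # causal self-attention
        return self.head(x)            # next-token logits

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
tokens = torch.randint(0, 50_000, (8, 128))  # stand-in for real data
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
loss.backward()
opt.step()
```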

Or build an app around Stable Diffusion, post it on the app store, make some money, and use that to get investors interested. Probably do this first, then, with the investments and some hired help, do what I said in the first paragraph.
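The Stable Diffusion route is the more approachable one: with the diffusers library, the core of such an app is only a few lines (the model ID below is the public v1.5 checkpoint; the app UI and store listing around it are the real work):

```python
# Core image-generation call of a Stable Diffusion app, via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # needs a GPU; drop float16 to run (slowly) on CPU

image = pipe("a watercolor fox in a misty forest").images[0]
image.save("fox.png")
```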

1

Scarlet_pot2 t1_j4jhc0z wrote

I'd use my own AI to improve myself, not allow someone else to do it for me. That alone can save future people from being corrupted.

Take the open-source version, learn how it works, and tailor it. That would probably be the safer way to do things, compared to downloading a pre-made one from a trillion-dollar capitalist corp.

Once AGI is developed and understood, I doubt it will be any harder to learn than today's AI methods and math.

1

Scarlet_pot2 t1_j4fl1q7 wrote

Also, ChatGPT doesn't generate full programs, because of memory limits. People who made things with ChatGPT had to go function by function to get the full program. You can't just say "generate Flappy Bird"; it's more like "generate a bird PNG," then "generate a flapping animation," "generate the obstacles," and so on.
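A rough sketch of that function-by-function workflow, using the openai Python client as it looked at the time of writing (the prompts and the key are illustrative):

```python
# Ask the model for one function at a time so each request stays
# well inside the context window, then stitch the pieces together.
import openai

openai.api_key = "YOUR_KEY"  # placeholder

pieces = [
    "Write a Python pygame function draw_bird(screen, y) for a Flappy Bird clone.",
    "Write a function update_bird(y, velocity) that applies gravity and flapping.",
    "Write a function spawn_pipes() that returns obstacle positions.",
]

program = []
for prompt in pieces:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    program.append(resp["choices"][0]["message"]["content"])

print("\n\n".join(program))  # you still assemble the full file yourself
```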

1

Scarlet_pot2 t1_j4fkqf3 wrote

Its memory isn't long enough to write a full program, let alone a full model, without at least some help. And it's not able to create new concepts or make discoveries; it can only build on what it has been trained on.
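You can see that limit concretely by counting tokens with the tiktoken library; a quick sketch (the file name is hypothetical):

```python
# Count how many tokens a source file would consume of the context window.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
source = open("whole_program.py").read()  # hypothetical file
print(len(enc.encode(source)), "tokens")  # vs. a window of ~4,096 tokens
```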

We still have some breakthroughs needed before AGI.

2

Scarlet_pot2 t1_j47u2e1 wrote

I'd rather the morals be instilled by the users. If you don't like the conservative bot, just download the leftist version. It can be easily fine-tuned by anyone with the know-how. Way better than curating it top-down and locking it in for everyone, imo.
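For a sense of what "easily fine-tuned" means, here's a sketch of lightweight LoRA fine-tuning with the peft library (the base model, target modules, and training data are all assumptions):

```python
# LoRA setup: only small adapter weights are trained, so anyone with a
# single GPU and their own curated examples can re-tune the model's tone.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, config)
model.print_trainable_parameters()  # a tiny fraction of the full model
# ...then run a normal training loop on whatever examples reflect your values
```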

−18

Scarlet_pot2 t1_j47tgpl wrote

Both are equally bad. My point is that AI models will be locked into whatever their creators' beliefs are. We need open-source models that can be easily adjusted, not one-size-fits-all politically correct BS.

The approach they are taking is how you turn something fun into something depressing.

7

Scarlet_pot2 t1_j47rjqf wrote

It won't be released until months' worth of moral bloatware is installed, and the "I can't answer because I'm an AI" isn't going anywhere either. By the time of release, GPT-4 will be worse than talking to a liberal who pretends not to hear any view that is even slightly politically incorrect.

We need a truly open-source, people-made version, like, tomorrow.

7

Scarlet_pot2 OP t1_j39hw2h wrote

Let's say there's a group of passionate, self-funded PhDs; over time they have a 20% chance of finding an innovation or discovery in AI.

Now let's say there's another group of intermediates and beginners, self-funded; over time they have a 2% chance of making a discovery in AI.

But in the second example, there are 10 of those teams, and all the teams mentioned are trying different things. If the end goal is advancement toward AGI, they should all be encouraged to keep trying and sharing, right?
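The quick math, assuming each team's chance is independent:

```python
# Chance that at least one of ten independent 2%-chance teams succeeds.
p_one_phd_team = 0.20
p_ten_small_teams = 1 - (1 - 0.02) ** 10
print(round(p_ten_small_teams, 3))  # 0.183, nearly the PhD team's 20%
```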

1