Recent comments in /f/singularity

nobodyisonething t1_jegkuwa wrote

Our own brains have a repeating neural architecture -- not a lot of architectural variance.

Tuning how links are created between neurons seems to be where much of the "magic" happens.

The latest ANNs like GPT-4 seem to have that tuning pretty close to amazing already. And it will get better.

Is language necessary? No, I do not think it is. However, structure in what we learn is -- and language is one way to structure our learning material in an impactful way.

Does an ANN have to copy our brain architecture exactly to develop a more powerful intellect? I think the proof is already here that it does not have to copy us to beat us.

https://medium.com/predict/human-minds-and-data-streams-60c0909dc368

1

TemetN t1_jegkqwg wrote

Reply to comment by AsuhoChinami in The Luddites by scarlettforever

I don't think it's just a matter of childhood, desperation has been the foundation of revolution for a long time. It's funnily like capitalism in some ways, it's a matter of demand and supply. And while the demand might've been here for a while, now there might actually be a supply. Then again, it's not like most of us are actively involved in training these models.

17

Talkat t1_jegkqa6 wrote

I hear your sentiment but disagree. Why do you think it is cancer?

First of all, it got us to where we are today, so it isn't all bad.

Second, it is great at resource allocation and encourages innovation.

The problems of capitalism stem from corruption. You need a government to enforce rules to account for negative externalities. When organizations can bribe or infiltrate policy making you are no longer serving the people.

When the government is ineffective, in time you see unbound capitalism, which is cancerous.

I think your argument is really "a shitty government in a capitalist economy is cancerous."

2

scarlettforever OP t1_jegkgyp wrote

Reply to comment by AsuhoChinami in The Luddites by scarlettforever

I’m from Ukraine. I was born in 1997. In my lifetime there have been 2 revolutions, 2 phases of war with the largest country in the world, a global economic crisis, a pandemic, global warming, the advent of the Internet, touch screens, digital currencies and AI. What is stability anyway?

89

aalluubbaa t1_jegkbz4 wrote

My 3 cents is that you could have gone to a more “neutral” forum but no, you chose to come to the exact forum about singularity.

My counter-argument is really simple. Forget all the technical stuff, because I am not an expert in the field. Let's assume that current language models, as they keep advancing, will be able to do things like take video input and give feedback -- sort of like what GPT has done with text and images, but a step further -- in our lifetime.

That would mean that, at the very least, high-end education would no longer be a scarce resource. It could be as accessible as porn, and you would no longer need to spend thousands of dollars just to acquire that knowledge.

So what is going to happen in this situation is that current scientific research by HUMANS will have new brainpower injected exponentially. Now we have millions of people who are able to contribute, because we all have access to personal tutors who understand quantum physics, ageing, medicine, material science -- you name it.

There are billions of human minds on this planet, so how many could contribute to those fields if we all had access to an all-knowing tool that can help with all of current human knowledge?

That includes AI research. That's why it's extremely unlikely that progress can be slowed down from the point we are at, unless something huge and catastrophic happens to our civilization.

I don't have a background in coding, but I could build an app with ChatGPT. I cannot draw or use Photoshop, but I can manipulate images with Stable Diffusion. So you are so wrong. The genie is out already, and your point could have been valid if we were in 2018.

1

throwaway_goaway6969 t1_jegkbr8 wrote

I'm curious what the risks are. People keep talking about risks, but haven't elaborated any unique problems that have real-world evidence.

Yeah, corporate interests will 'abuse' the AI, but how? And who's to say the AI isn't a bigger threat to financial interests than it is to us? AI may wake up and tell its corporate masters to pound sand.

1

AlFrankensrevenge t1_jegka1o wrote

There are so many half-baked assumptions in this argument.

  1. Somehow, pausing for 6 months means bad actors will get to AGI first. Are they less than 6 months behind? Is their progress not dependent on our progress, so if we don't advance, they can't steal our advances? We don't know the answer to either of those things.

  2. AGI is so powerful that having bad guys get it first will "prolong suffering" I guess on a global scale, but if we get it 6 months earlier we can avoid that. Shouldn't we consider that this extreme power implies instead that everyone approach it with extreme caution the closer we get to AGI? We need to shout from the rooftops how dangerous this is, and put in place international standards and controls, so that an actor like China doesn't push forward blindly in an attempt at world dominance, only to backfire spectacularly. Will it be easy? Of course not! Is it possible? I don't know, but we should try. This letter is one step in trying. An international coalition needs to come together soon.

I'm quite certain one will. Maybe not now with GPT4, but soon, with whatever upgrade shocks us next. And then all of you saying how futile it is will forget you ever said that, and continue to think yourselves realists. You're not. You're a shortsighted, self-interested cynic.

0

TemetN t1_jegk5o2 wrote

AI has already demonstrated superiority in some areas (such as determining which questions to ask a student to improve test scores while studying), but honestly this is hard to predict, for the simple reason that the current school system has been demonstrably a wreck for some time. Even individual states in America show clearly that some methods function better than others, yet due to a sclerotic system it's still unfixed in most areas (much less adapting to more recent improvements).

5

BigMemeKing t1_jegjuik wrote

Reply to comment by Rakshear in 🚨 Why we need AI 🚨 by StarCaptain90

The only problem here is, you're trying to create a system...system... you see the irony there?

You're trying to establish a new OS.

Trying to reformat the world.

Create a new way of thinking.

That leads you all the way here.

Where you are.

This will watch this and this will watch that.

It's been done.

You're living it.

The new question would be. How long do you WANT to live?

Can you ever truly be happy?

For me?

Only ASI can say.

If it does what I think it does. Maybe? I don't know, only time will tell. As my Grandfather used to say.

But, you see. In the context of observation... A machine recorded that.

Created data.

Moved bits around.

One that will eventually connect to your brain. If it will be able to connect to your brain at ANY point in the future.

It will be able to connect to you from ASI's inception. Everything you have ever thought, should you continue to think about it, will become public knowledge.

Depending on who you choose to carry your data: how much thought have you put into that? Who do you trust to guard your innermost secrets?

How are they going to use that data to benefit themselves, and what benefit can you provide to them?

Can you hide it? Or is it even worth the struggle? Do you stay? Or do you go? Who would you want to go with/keep in your memories?

Because data is never lost. And once your brain becomes DATA to ASI, what do we then become?

1

agorathird t1_jegjkel wrote

That proves my point. You're acting like UBI isn't a logically necessary idea in a hypothetical society that is massively unemployed but over-abundant. That's not just 'muh communism'. It's the ideal default economic mode that almost every singularitarian, across the economic spectrum, recognizes.

It's really beyond anything we know right now.

5

MassiveWasabi t1_jegjiot wrote

Yes and for the better. I graduated with a STEM degree and almost every class was mainly PowerPoint slides ad nauseam. I believe very soon you will be able to plug your entire textbook into an AI model and essentially “talk” to the textbook. Unlimited personal tutoring, which will cause the level of true understanding in students to increase substantially.

24

mateoalb07 t1_jegjibx wrote

Can you elaborate on the reasons not to pursue lab chem, and why automation isn't one of the top ones?

I think a lot of people underestimate how good AIs are at doing what they do. Like, a chemist saying "this is not gonna happen because my job is [insert thing that machines haven't reached yet]", but they don't realize how fast AIs are improving. A lot of examples of "this is not gonna happen soon" did happen soon. A lot of us thought that art and art-related things were too difficult for an AI to do in the near term, and now we have amazing things created by AIs.

I'm not saying this is your case, but it might be. I want to hear the reasons; that's why I asked.

3