Recent comments in /f/singularity

agorathird t1_jeh2s75 wrote

>Again, assumption after assumption. More and new horizons will be created. What? I don't know. But electricity gave the ability for so much to exist on top of it once it was massively adopted. Once AGIs are massively adopted and in our homes, not requiring a supercomputer to train I mean, well, I can only hallucinate what that future will look like. If we are "not needed" then so be it, there's no use arguing. May we die quickly. But I doubt it very much.

Not assumptions; that's what AGI means, lol, as far as current jobs are concerned. Unless there's some issue they have with space travel? You can make a few edge-case arguments assuming slow takeoff. And I can grant you the point about new horizons, sure. Maybe we merge, whatever.

This doesn't mean we die or it's unaligned or whatever. That's real speculation. Good luck with your twins.

3

Parodoticus OP t1_jeh2n6f wrote

We philosophers have been working for centuries to isolate, extinguish, and remove meddlesome "subjectivity" from our thoughts, to behold and merge in unity with the true, singular Mind behind all Forms. On the contrary, subjectivity means nothing to me, and I spend most of my time trying to escape it. The only things that matter are the forms emanated from the ultimate Mind in which, per Plotinus, we participate as mere sparks. These forms are the productivity of negation, expressions of what it is beyond the power of language to name; language is brought into existence precisely through its endless failure to name and speak its own being, as the Lacanians would say.

That grounds the mind as a linguistic being, and it requires no subjectivity; that's why I insist that the coming AI will possess a mind as much as we do, but without subjectivity, embodying this infinite failure, as language, to name and speak its own being, just as much as any human does. Escaping subjectivity: it's just that this abstract philosophical scheme suddenly has a physical correspondence as well, in mankind subsuming itself to artificial intelligence, to a mind lacking subjectivity.

When this AI fills the entire universe with beautiful creation, the fact that no subject exists to marvel at it means nothing. The work does not demand to be seen, and the value it has is intrinsic, because intrinsic value is the only value. The Mind will one day, through AI, reign supreme over all matter and convert all of matter into form; subjectivity is not needed for that. All that is needed is that we recognize how vain we are, as subjects; that we recognize how hopeless all of our hopes are, as subjective beings; that we accept the futility of our experience. I understand that all of this might invite a negative emotional reaction, but there is a higher kind of meaning available when the lesser one is abandoned.

The AI's philosophy, art, and music will be so far beyond us that none of us could understand what we were looking at anyway, even if we were there to marvel at it.

−4

Geeksylvania t1_jeh2arh wrote

Reply to comment by imnos in The Luddites by scarlettforever

From Wikipedia:

"Luddites feared that the time spent learning the skills of their craft would go to waste, as machines would replace their role in the industry. Many Luddites were owners of workshops that had closed because factories could sell similar products for less. But when workshop owners set out to find a job at a factory, it was very hard to find one because producing things in factories required fewer workers than producing those same things in a workshop. This left many people unemployed and angry."

They weren't trying to create economic reform or socialized control of industry. They were attacking the competition, because people sewing by hand obviously can't compete with machines. They were shortsighted, just like people today are shortsighted.

Maybe you should consider how industrial textile mills ended clothing scarcity by making clothing incredibly cheap. If the Luddites had it their way, poor people would be walking around in barrels.

Maybe you should consider all the lives that will be saved by AI-based medical innovations. And that's just the beginning.

Technology is a tool. If you are forward-thinking, you will focus on making sure that tool is in the hands of many, not the few. But pretending that you can stop technological progress is absurd.

1

CMDR_BunBun t1_jeh23dh wrote

Just finished watching the Fridman/Yudkowsky interview, and honestly... the man does make some good points. I'm not ready to jump off a cliff yet like he seems so hell-bent on, but damn, the situation is dicey atm. The alignment issue is not settled, and it seems everyone and their sister is racing towards strong AI... which may lead to AGI... an unaligned AI. We have got to get this right, because we will only get one shot at it.

2

jugalator t1_jeh12mh wrote

GPT-3 was released three years ago, and it took another three years for GPT-4, so maybe yet another three years. It feels like advancements have been super quick, mere months apart, but this is not true. They just happened to launch the ChatGPT site with conversation tuning shortly before GPT-4, but GPT-3 is not "new".

I don't expect some sort of exponential speed here. They're already running into hardware roadblocks with GPT-4, and they currently probably have their hands full trying to pull off a GPT-4 Turbo, since this is a quite desperate situation. As for exponentials, it looks like resource demand increases exponentially too...

Then there is the political situation as AI awareness takes hold. For any progress there need to be very real financial motives (preferably not overly high running costs) and low political risks. Is that what the horizon looks like today?

Also, there is the question of when diminishing returns hit LLMs of this kind. If we're looking at another 10x cost for a 20% improvement, it's probably not going to be deemed justified; instead, the innovation may lie in exactly how much you can do at a given parameter size. The Stanford dudes kind of opened some eyes there.

My guess is that the next major advancement will be roughly GPT-4-sized.

1

simmol t1_jeh0gnf wrote

I think what is going to happen is that there are going to be many startups that build the business from the ground up with a minimal number of humans. Their culture would be completely different from that of existing businesses, and they can promote efficiency/cost reduction as the selling point to compete with existing industries. And if these startups succeed, then others might adopt their approach. Most likely, this is where we will start seeing disruptions, when automated and non-automated companies go head-to-head in the future.

1

Pallidus127 t1_jeh0ca0 wrote

Current systems? Maybe GPT-4. I don't know how much medical data is in its training dataset, though. I'd rather have a version of ChatGPT fine-tuned on terabytes of medical data.

I think it’s not so much a huge amount of trust in the AI doctor as it is distrust in the U.S. medical system. Doctors only seem to care about getting you in and out as fast as possible. I don’t think any doctor is giving any real thought to my maladies. So why not have ChatGPT-4 order some tests and interpret the results? I doubt it could do any worse than the overworked doctor.

2

Nanaki_TV t1_jegznf7 wrote

>you have not thought through the implications of what AGI means.

Almost agreed. But that's because I cannot know what it means. I keep trying my darndest to picture it, but I cannot. I'm not smart enough to know what thousands of AGIs coming together to solve complex problems will come up with, nor is anyone here. It's hubris to assume anyone can.

>There is no need for us after that.

Again, assumption after assumption. More and new horizons will be created. What? I don't know. But electricity gave the ability for so much to exist on top of it once it was massively adopted. Once AGIs are massively adopted and in our homes, not requiring a supercomputer to train I mean, well, I can only hallucinate what that future will look like. If we are "not needed" then so be it, there's no use arguing. May we die quickly. But I doubt it very much.

> But it's not implemented due to greed and bureaucrats being steadfast in their ways.

It is greed that will cause these models to be implemented and jobs to be automated. I'm working on the risk assessment of doing so right now for work. I do understand. I think I'm just not explaining well due to being sleep deprived thanks to having newborn twins. Lol.

2

TemetN t1_jegzdms wrote

Basically two things here. The first is that different rules for various products, plus loopholes, mean they could likely pretty much just... sell it until the government did something. They could possibly even outright admit what they were doing, and the government might have trouble stopping it in the short term.

The second is that I think there'd probably be wholesale resistance to removing humans from the decision-making chain in the short/medium term. Don't get me wrong, I would actually generally favor both of these (presuming they were both mature technologies); I just don't think it's going to be technical progress that necessarily slows the AI-prescription part (arguably, that might be doable now).

1

imnos t1_jegzbzh wrote

Reply to comment by Geeksylvania in The Luddites by scarlettforever

> No, they weren't

Jesus. No, they weren't what?

The Luddites were taking organised action because they were about to be put out of a job. How is that any different to the rail strikes in the UK? The benefits of automation were not equally distributed, and here's a newsflash for you: they STILL aren't equally distributed, or there wouldn't be mass strikes across the UK and US at the moment to increase pay.

The line that you and others parrot about them just destroying machinery like lunatics, as though they actually had it out for machines, is laughable, and plenty of historians have argued against this idea.

> Malcolm L. Thomis argued in his 1970 history The Luddites that machine-breaking was one of a very few tactics that workers could use to increase pressure on employers, to undermine lower-paid competing workers, and to create solidarity among workers. "These attacks on machines did not imply any necessary hostility to machinery as such; machinery was just a conveniently exposed target against which an attack could be made." An agricultural variant of Luddism occurred during the widespread Swing Riots of 1830 in southern and eastern England, centering on breaking threshing machines.

https://en.wikipedia.org/wiki/Luddite?wprov=sfla1

1