Cryptizard

Cryptizard t1_ja4yhua wrote

You have to just study what you personally find interesting and fulfilling. Generally, that is a good way to get a job as well, because the more passionate you are about a subject, the more you want to learn about it, and the more competent you will be. So if the singularity comes and nobody has a job, then at least you spent your time on something that was worthwhile to you.

2

Cryptizard t1_ja4y67r wrote

Reply to comment by Mason-B in So what should we do? by googoobah

You are really not following what is going on, or else you have closed your mind so much you can't process it. 90 years for general intelligence? Buddy, 30 years ago we didn't even have the internet. Or cell phones. Computers were 0.001% as fast as they are now. And technological progress speeds up, not slows down.

I don't think it is coming tomorrow or anything, but look at current AI models and tell me it will take three more internets' worth of advancement to make them as smart as a human. Humans aren't even that smart!

>Skipping the obvious answer of "programmers will be the last people to be programmed out of a job."

This is a really terrible take. Programmers are going to be among the first to be replaced, or at least most of them. We already have AI models doing a big chunk of programming. Programming is just a language problem, and LLMs have proven to be really good at language.

1

Cryptizard t1_j9w1idt wrote

That's what the inflationary argument addresses. If every universe creates 10^30 new universes a second (one of the interpretations of cosmic inflation and bubble universes), then at any point in time there will be exponentially more "young" universes than old ones, and so almost every civilization will be the first civilizations in their universes.
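The scaling in that argument can be made concrete. A minimal sketch (the 10^30-per-second rate is the figure from the comment above; the function name is mine, purely illustrative):

```python
# Youngness argument sketch: if the number of universes multiplies by a
# factor R every second, then of all universes existing at any moment,
# the fraction that is older than k seconds is R**(-k).
R = 10**30  # new universes per universe per second (figure quoted above)

def fraction_older_than(k_seconds, rate=R):
    """Fraction of universes at any moment that are at least k seconds old."""
    return rate ** -k_seconds

# Almost every universe is brand new, so almost every civilization
# finds itself among the first in its own universe.
print(fraction_older_than(1))  # vanishingly small, on the order of 1e-30
```

The takeaway: with exponential growth this steep, "old" universes are exponentially outnumbered at every instant, which is the whole force of the argument.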

2

Cryptizard t1_j9vripn wrote

What are you adding to this discussion that hasn't already been talked to death in dozens of other posts with the exact same topic over the last couple of days? Or that EY hasn't been saying for a decade?

10

Cryptizard t1_j9vlbuz wrote

>which is statistically very unlikely

What makes you say that? There are multiple studies that suggest we are at the very, very beginning of the time period that the universe is able to support life. The universe is only 14 billion years old, and it will have conditions for life to arise for another 10-100 trillion years. Statistically, the overwhelming majority of lifeforms (99.999%) that will ever evolve will come after us.
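The proportion claim follows from straightforward arithmetic. A hedged sketch, assuming the low end (~10 trillion years) of the habitable-era range quoted above:

```latex
% Fraction of the universe's life-supporting era already elapsed,
% assuming it lasts ~10^{13} years (the low end of the 10--100 trillion range):
\frac{t_{\text{now}}}{t_{\text{habitable}}}
  \approx \frac{1.4\times 10^{10}\ \text{yr}}{10^{13}\ \text{yr}}
  \approx 0.14\%
% So if life arises at a roughly uniform rate over that era, ~99.9% of all
% lifeforms that will ever evolve come after us; at the 100-trillion-year
% end of the range the elapsed fraction drops to ~0.014%.
```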

We reached the advanced intelligence stage almost as fast as we possibly could. Our solar system was one of the earliest with abundant heavy elements. Life evolved very shortly after our planet's formation, within less than a billion years. It has taken us 4 billion years to reach the level we are at now. Our planet will naturally become uninhabitable in another half a billion years, as the sun gets too hot and we lose all the CO2 in the atmosphere. On a cosmic scale, we had a very small window to actually get the intelligence and civilization stuff worked out.

There is also the inflationary argument made by Alan Guth, that the number of universes is growing exponentially and so almost every civilization that ever arises is the "first" one in their own universe. I'll let you google that one if you haven't heard it.

6

Cryptizard t1_j9qresh wrote

That’s literally the point of this entire fucking website. People make subs for a topic they like and moderate it to keep it on topic. If you want to see this and the other people here don’t, make a new sub. Regardless, it’s not wanted here.

11

Cryptizard t1_j9ouc8l wrote

Could not agree more. I think there should be a sub rule prohibiting posting random conversations with AI or it is just going to get worse as this stuff becomes more accessible.

36

Cryptizard t1_j9l0qws wrote

What is the point of this post?

>What would constitute proof of the AI navigating creatively around its rule set without leading by the user

You want us to imagine a random hypothetical scenario.

>what would be the potential ramifications?

Then figure out the real-world implications of our random imaginary scenario.

OK, let me start. Scenario: it could launch a nuclear missile. That's certainly against its rules. Ramifications: everyone dies.

−5

Cryptizard t1_j93skf4 wrote

>Maybe the tech companies are all in on it

They aren't.

>Nvidia might already have like a 1 TB VRAM GPU out there that the military uses right now

This is laughably wrong. The military runs on outdated hardware that was commissioned more than a decade ago. They do not have some magic semiconductor technology that is unknown to the public; they just have a lot of money.

6

Cryptizard t1_j8t9b1m wrote

>it's just the default way their creators thought they should respond with

No, that's not right. Nobody programmed the LLM to respond that way; its style is emergent behavior learned from the training data.

>I don't see why it would be an issue at all to "fine-tune" any of these LLMs to write with a specific style that would sound more casual and normal.

You can try to ask it to do that, but it doesn't really work.

>Admittedly I know nothing about the field

Yeah...

−5