Cryptizard
Cryptizard t1_ja4yhua wrote
Reply to So what should we do? by googoobah
You have to just study what you personally find interesting and fulfilling. Generally, that is a good way to get a job as well because the more passionate you are about a subject, the more you want to learn about it, the more competent you will be. So if the singularity comes and nobody has a job, then at least you spent your time on something that was worthwhile to you.
Cryptizard t1_ja4y67r wrote
Reply to comment by Mason-B in So what should we do? by googoobah
You are really not following what is going on, or else you have closed your mind so much you can't process it. 90 years for general intelligence? Buddy, 30 years ago we didn't even have the internet. Or cell phones. Computers were 0.001% as fast as they are now. And technological progress speeds up, not slows down.
I don't think it is coming tomorrow or anything, but look at current AI models and tell me it will take three more internets' worth of advancement to make them as smart as a human. Humans aren't even that smart!
>Skipping the obvious answer of "programmers will be the last people to be programmed out of a job."
This is a really terrible take. Programmers are going to be among the first to be replaced, or at least most of them. We already have AI models doing a big chunk of programming. Programming is just a language problem, and LLMs have proven to be really good at language.
Cryptizard t1_ja4wsim wrote
Reply to comment by greatdrams23 in So what should we do? by googoobah
You are trolling if you say you can't see the difference this time.
Cryptizard t1_j9yp9lj wrote
Reply to The unequal treatment of demographic groups by ChatGPT/OpenAI content moderation system by grungabunga
It seems like this guy has just discovered that the vulnerability of a demographic is inherently tied into whether something is hate speech or not? There is no such thing as hate speech against rich people, because there is no negative connotation to being rich.
Cryptizard t1_j9w1idt wrote
Reply to comment by Mortal-Region in Optimism in the Singularity in face of the Fermi-Paradox by [deleted]
That's what the inflationary argument addresses. If every universe creates 10^30 new universes a second (one of the interpretations of cosmic inflation and bubble universes), then at any point in time there will be exponentially more "young" universes than old ones, and so almost every civilization will be the first civilization in its universe.
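A toy calculation makes the point concrete (the growth rates below are illustrative placeholders, not cosmological measurements): if the universe count multiplies by some factor every tick, then universes of age t are outnumbered by a factor of r^t by the newest generation, so almost all universes are very young.

```python
def young_fraction(r, max_age, cutoff):
    """Fraction of universes younger than `cutoff` ticks, assuming the
    number of universes born t ticks ago is proportional to r**(-t)
    (i.e., each generation is r times larger than the previous one)."""
    weights = [r ** (-t) for t in range(max_age)]
    return sum(weights[:cutoff]) / sum(weights)

# Even modest exponential growth makes young universes dominate:
print(young_fraction(2, 100, 1))   # half of all universes are brand new
print(young_fraction(10, 100, 2))  # ~99% are in the two newest generations
```

The exact factor doesn't matter much: with anything like 10^30 new universes per second, essentially every universe that exists at any moment is in its very first instants, which is the sense in which almost every civilization is "first."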
Cryptizard t1_j9vripn wrote
Reply to Hurtling Toward Extinction by MistakeNotOk6203
What are you adding to this discussion that hasn't already been talked to death in dozens of other posts with the exact same topic over the last couple days? Or that EY hasn't been saying for a decade?
Cryptizard t1_j9vlbuz wrote
>which is statistically very unlikely
What makes you say that? There are multiple studies that suggest we are at the very, very beginning of the time period that the universe is able to support life. The universe is only 14 billion years old, and it will have conditions for life to arise for another 10-100 trillion years. Statistically, the overwhelming majority of lifeforms (99.999%) that will ever evolve will come after us.
We reached the advanced intelligence stage almost as fast as we possibly could. Our solar system was one of the earliest ones with abundant heavy elements. Life evolved very shortly after our planet formed, within less than a billion years. It has taken us 4 billion years to reach the level we are at now. Our planet will naturally become uninhabitable in another half a billion years, as the sun gets too hot and we lose all the CO2 in the atmosphere. On a cosmic scale, we had a very small window to actually get the intelligence and civilization stuff worked out.
There is also the inflationary argument made by Alan Guth: the number of universes is growing exponentially, so almost every civilization that ever arises is the "first" one in its own universe. I'll let you google that one if you haven't heard it.
Cryptizard t1_j9qresh wrote
Reply to comment by redroverdestroys in Seriously people, please stop by Bakagami-
That’s literally the point of this entire fucking website. People make subs for a topic they like and moderate it to keep it on topic. If you want to see this and the other people here don’t, make a new sub. Regardless, it’s not wanted here.
Cryptizard t1_j9qjin2 wrote
Reply to comment by redroverdestroys in Seriously people, please stop by Bakagami-
The community does, and it seems like they mostly agree with me, not you.
Cryptizard t1_j9pytw8 wrote
Reply to comment by redroverdestroys in Seriously people, please stop by Bakagami-
Then make a sub for chatgpt conversations. It's not this one.
Cryptizard t1_j9ouc8l wrote
Reply to Seriously people, please stop by Bakagami-
Could not agree more. I think there should be a sub rule prohibiting posting random conversations with AI or it is just going to get worse as this stuff becomes more accessible.
Cryptizard t1_j9otoix wrote
Reply to If only you knew how bad things really are by Yuli-Ban
This sub seems to attract a hugely disproportionate number of people with mental health disorders.
Cryptizard t1_j9l2617 wrote
Reply to comment by TheBlindIdiotGod in Ramifications if Bing is shown to be actively and creatively skirting its own rules? by [deleted]
It's not a useful or interesting thought experiment if you say, "make up the rules and also the implications." That's just asking someone to entertain you with a story. It is the definition of a low-effort post and should be removed by mods as violating rule 3.
Cryptizard t1_j9l0qws wrote
Reply to Ramifications if Bing is shown to be actively and creatively skirting its own rules? by [deleted]
What is the point of this post?
>What would constitute proof of the AI navigating creatively around its rule set without leading by the user
You want us to imagine a random hypothetical scenario.
>what would be the potential ramifications?
Then figure out the real-world implications of our random imaginary scenario.
OK, let me start. Scenario: it could launch a nuclear missile. That's certainly against its rules. Ramifications: everyone dies.
Cryptizard t1_j9ka13l wrote
Reply to comment by coumineol in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
No, it's economics: the longer they stop and think about it, the less money they make.
Cryptizard t1_j9jxvg2 wrote
Reply to comment by coumineol in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
It's not ordinary humans, it's people on mechanical turk who are paid to do them as fast as possible and for as little money as possible. They are not motivated to actually think that hard.
Cryptizard t1_j9j8qk5 wrote
Reply to comment by Bakagami- in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
The human performance number is not from this paper, it is from the original ScienceQA paper. They are the ones who did the benchmarking.
Cryptizard t1_j9j80kk wrote
Reply to comment by Bakagami- in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
But then they wouldn’t be able to say that the AI beats them and it wouldn’t be as flashy of a publication. Don’t you know how academia works?
Cryptizard t1_j9j7x5v wrote
Reply to comment by IluvBsissa in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
Seriously, read the paper.
Cryptizard t1_j9j6j7x wrote
Reply to comment by Bakagami- in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
You are wrong. It’s not experts. It’s randos on mechanical Turk.
Cryptizard t1_j9cqrr5 wrote
Reply to comment by ChronoPsyche in Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
Just by playing the game you are opening them up to extreme risk of pain/death. Why would you do that?
Cryptizard t1_j93skf4 wrote
Reply to comment by Stakbrok in Do you think the military has a souped-up version of chatGPT or are they scrambling to invent one? by Timely_Hedgehog
>Maybe the tech companies are all in on it
They aren't.
>Nvidia might already have like a 1 TB VRAM GPU out there that the military uses right now
This is laughably wrong. The military runs on outdated hardware that was commissioned a decade or more ago. They do not have some magic semiconductor technology that is unknown to the public. They just have a lot of money.
Cryptizard t1_j8xopxr wrote
Reply to comment by el_chaquiste in Microsoft Killed Bing by Neurogence
It’s a technical limitation. Attention mechanisms scale quadratically with sequence length, so there is an upper limit to the size of the context window.
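A back-of-the-envelope sketch of why this is a hard limit (toy numbers, not any specific model's architecture): standard attention computes a score between every pair of tokens, so the work per layer grows with the square of the context length.

```python
def attention_score_count(context_len, num_heads=1):
    """Number of pairwise attention scores computed per layer in
    standard (full) attention: every token attends to every token,
    so the count grows as context_len**2."""
    return num_heads * context_len * context_len

# Doubling the context quadruples the score matrix:
print(attention_score_count(2048))  # 4,194,304 scores per head per layer
print(attention_score_count(4096))  # 16,777,216 -- 4x the work and memory
```

That quadratic blow-up in compute and memory is why context windows are capped rather than simply extended to arbitrary conversation lengths.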
Cryptizard t1_j8t9b1m wrote
Reply to comment by TwitchTvOmo1 in What if Bing GPT, Eleven Labs and some other speech to text combined powers... by TwitchTvOmo1
>it's just the default way their creators thought they should respond with
No, that's not right. Nobody programmed the LLM how to respond, it is just based on training data. It is emergent behavior.
>I don't see why it would be an issue at all to "fine-tune" any of these LLMs to write with a specific style that would sound more casual and normal.
You can try to ask it to do that; it doesn't really work.
>Admittedly I know nothing about the field
Yeah...
Cryptizard t1_ja4zefv wrote
Reply to comment by johnnymoha in So what should we do? by googoobah
No, it's just uh... what is it called... objective reality? Maybe you should try it some time.