Cryptizard t1_j1tut1b wrote
Reply to comment by mocha_sweetheart in Driverless cars and electric cars being displayed as the pinnacle of future transportation engineering is just… wrong. Car-based infrastructure is inefficient, bad for the environment and we already have better technologies in other fields that could help more. An in depth analysis by mocha_sweetheart
It seems like you have never lived outside of a city? And I’m not talking about a suburb, I mean a rural area. You can’t make those “walkable”. And you can’t just force people to move to cities.
Cryptizard t1_j1ry5sp wrote
Reply to comment by mocha_sweetheart in Genuine question, why wouldn’t AI, posthumanism, post-singularity benefits etc. become something reserved for the elites? by mocha_sweetheart
When have rich people ever tried to keep a technology to themselves? It doesn't make sense on its face. The only things that stay exclusive to rich people are things that are very rare and whose supply can't be increased, like real estate, precious gems, supercars, etc.
Cryptizard t1_j1rxto1 wrote
Reply to comment by mocha_sweetheart in Genuine question, why wouldn’t AI, posthumanism, post-singularity benefits etc. become something reserved for the elites? by mocha_sweetheart
>For all we know if things like posthumanism etc. become real they might as well just charge an unreasonable price and only the select few will get it
Why don't only rich people have electric cars or sweet gaming computers or literally any other new technology? Because they want to make money, and they can make more money and get richer by selling that shit to the public. It's called capitalism.
Cryptizard t1_j1nomm9 wrote
I don’t understand the comment about not having children. If they think AI is going to destroy the world, why are they working so hard on it?
Cryptizard t1_j1k30q3 wrote
Reply to comment by YesramDeens in Hype bubble by fortunum
Protein folding, n-body simulation, really any type of simulation, network analysis, anything in cryptography or that involves matrices. Basically anything that isn’t “off the top of your head” and requires an iterative approach or multiple steps to solve.
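To make the "iterative" part concrete, here is a toy sketch (the constants and step count are just illustrative, not from any real workload) of the kind of computation I mean: a two-body gravity simulation where the state at step N depends on every step before it, so there is no shortcut answer to blurt out.

```python
# Toy two-body gravity simulation: the answer at step N depends on every
# step before it, so there is no shortcut -- you have to iterate.
G = 6.674e-11          # gravitational constant (SI units)
dt = 60.0              # time step in seconds (arbitrary choice)

# state: position and velocity of a small body orbiting a big one at the origin
m_big = 5.97e24        # roughly Earth's mass, kg
x, y = 7.0e6, 0.0      # low Earth orbit radius, m
vx, vy = 0.0, 7.5e3    # roughly circular orbital speed, m/s

for step in range(10_000):          # 10,000 sequential updates
    r2 = x * x + y * y
    r = r2 ** 0.5
    a = -G * m_big / r2             # acceleration magnitude toward the origin
    ax, ay = a * x / r, a * y / r
    vx += ax * dt                   # each update feeds the next one
    vy += ay * dt
    x += vx * dt
    y += vy * dt

print(f"position after 10,000 steps: ({x:.3e}, {y:.3e}) m")
```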
Cryptizard t1_j1hvl85 wrote
Reply to comment by Argamanthys in Hype bubble by fortunum
Except no, because they currently scale quadratically with the number of “steps” they have to think through. Maybe we can fix that, but it’s not obvious that it is possible without a completely new paradigm.
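Rough back-of-the-envelope of what "quadratic" means here, assuming standard self-attention over however many tokens of chain-of-thought the model produces:

```python
# Standard self-attention compares every token with every other token,
# so the work grows with the square of the context length.
def attention_pairs(n_tokens: int) -> int:
    return n_tokens * n_tokens

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens of 'thinking' -> {attention_pairs(n):.2e} token pairs per layer")

# Doubling the number of reasoning steps roughly quadruples the attention cost.
```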
Cryptizard t1_j1hrdyt wrote
Reply to comment by Outrageous_Point_174 in Excluding quantum computers, do you think that ASI/AGI will crack our encryption system? by Outrageous_Point_174
No, ASI is not capable of everything. There are fundamental limits to computation just like there are limits to physics. It can still do a lot, though; there are only a few things we know (or conjecture) lower bounds about. It just happens that cryptography is entirely designed to resist even incredibly advanced computers.
Cryptizard t1_j1hfn4j wrote
Reply to comment by Ortus12 in Hype bubble by fortunum
Here is where it becomes obvious that you don’t understand how LLMs work. They have a fixed-depth evaluation circuit, which means they take the same amount of time to respond to the prompt “2+2=?” as they do to “simulate this complex protein folding” or “break this encryption key”. There are fundamental limits on the computation an LLM can do which prevent it from being ASI. In CS terms, anything that is not computable by a constant-depth circuit (many important things) cannot be computed by an LLM.
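A minimal sketch of the fixed-depth point, using a made-up toy stack of layers (NumPy, arbitrary sizes): the forward pass does exactly the same amount of work whether the input stands in for an easy question or a hard one.

```python
import numpy as np

# A toy "LLM": a fixed stack of layers applied once per forward pass.
# The amount of computation depends only on the input size and the
# (fixed) number of layers -- never on how hard the question is.
rng = np.random.default_rng(0)
N_LAYERS, D = 12, 64                      # made-up sizes
weights = [rng.standard_normal((D, D)) for _ in range(N_LAYERS)]

def forward(token_embeddings: np.ndarray) -> int:
    """Run the fixed-depth stack and return how many matmuls it cost."""
    h, matmuls = token_embeddings, 0
    for W in weights:                     # always exactly N_LAYERS layers
        h = np.tanh(h @ W)
        matmuls += 1
    return matmuls

easy = rng.standard_normal((5, D))        # stands in for "2+2=?"
hard = rng.standard_normal((5, D))        # stands in for "fold this protein"
print(forward(easy), forward(hard))       # same cost: 12 and 12
```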
Cryptizard t1_j1a3gmq wrote
We “as a species” can’t even agree that things like human rights are a good idea. We can’t even stop killing each other for petty reasons. We can wait a thousand years and there will never be a consensus about something as complicated as AI.
Folks that are optimistic about AI hope it will actually be morally better than we are. We need AI to save us from ourselves.
Cryptizard t1_j0l7vyn wrote
Reply to comment by SeaBearsFoam in Is anyone else concerned that AI will eventually figure out how to build itself in three-dimensional space? by HeavierMetal89
Good thing the EM leakage from CPUs is like 5 orders of magnitude lower than you would need to transmit across the length of a room.
Cryptizard t1_j0l2952 wrote
Reply to comment by SeaBearsFoam in Is anyone else concerned that AI will eventually figure out how to build itself in three-dimensional space? by HeavierMetal89
How is it making arbitrary EM fields with no network card?
Cryptizard t1_j0ixl8v wrote
Reply to Anyone remember Blob's Park? by FelDeadmarsh
So weird, I was just thinking about this. We went there every year for Christmas when I was a kid.
Cryptizard t1_j0i0p19 wrote
Reply to comment by Wroisu in Is anyone else concerned that AI will eventually figure out how to build itself in three-dimensional space? by HeavierMetal89
How would it be able to escape if it was airgapped? More likely someone would stupidly let it out.
Cryptizard t1_j0cw287 wrote
Reply to comment by SgathTriallair in this sub by TinyBurbz
I'm not even an artist, but what is starting to upset me about this whole thing is that people look at DALLE or whatever and go, "haha artists out of a job," which seriously underappreciates what artists do. If you made a painting where you couldn't remember how many fingers a person was supposed to have, you would fail out of art school. The fact that people are saying DALLE can do as well as a real artist just shows that the vast majority don't appreciate art in the first place.
That's not to say that it won't improve; it definitely will. But it's not there right now, and a huge percentage of people around here think that it is, which is so fucking cringe.
Cryptizard t1_j01sjol wrote
Reply to comment by electriceeeeeeeeeel in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
>The way it can already reason through academic papers is pretty astonishing
Not sure what you are talking about here. Do you have a link? ChatGPT is very bad at understanding more than the surface level of academic topics.
Cryptizard t1_j006y22 wrote
Reply to comment by Superschlenz in Excluding quantum computers, do you think that ASI/AGI will crack our encryption system? by Outrageous_Point_174
Wut
Cryptizard t1_izznmo0 wrote
Reply to Excluding quantum computers, do you think that ASI/AGI will crack our encryption system? by Outrageous_Point_174
I said it in another reply, but there are some types of cryptography that are information-theoretically secure, meaning no matter how much computation you have you provably cannot break them. These will continue to be secure against singularity AI.
As to the rest of cryptography, it depends on the outcome of the P vs. NP question. It is conceivable that an ASI could prove that P = NP and break all computationally-bound cryptography. But if P != NP, as most mathematicians believe, then there will be some encryption schemes that cannot be broken^(*) no matter how smart you are or how much computation you have access to. A subset of our current ciphers may be broken, e.g. an ASI could find an efficient algorithm for factoring and break RSA, but we have enough schemes based on different problems that are conjectured to be hard that at least some of them would turn out to be truly intractable.
For example, suppose that breaking AES is truly outside of P. Then, according to the Landauer limit, the most efficient computer physically possible would take about 1% of the mass-energy of the Milky Way galaxy to break one AES-256 ciphertext. Note that this is an underestimate, because I assume it only takes one elementary computation per key attempt, when in reality it is a lot more than that.
^(*)This is a small oversimplification: there is the possibility that we live in a world where P != NP but we still don't have any useful cryptography. See Russell Impagliazzo's famous paper "A Personal View of Average-Case Complexity."
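For anyone who wants to check the arithmetic, here is roughly how that estimate goes. The temperature and the Milky Way mass are assumptions on my part (room temperature, stellar mass only), and different choices move the final percentage around by an order of magnitude or so, but it stays in the same ballpark as the ~1% figure above.

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23                     # Boltzmann constant, J/K
T = 300.0                              # assume room temperature, K
e_bit = k_B * T * math.log(2)          # ~2.9e-21 J per elementary operation

keyspace = 2 ** 256                    # AES-256 brute force, one op per key
e_total = e_bit * keyspace             # wildly optimistic lower bound

# Milky Way mass-energy (stellar mass only -- an assumption; estimates vary a lot)
m_sun = 1.989e30                       # kg
m_galaxy = 6e10 * m_sun                # ~6e10 solar masses of stars
e_galaxy = m_galaxy * (2.998e8) ** 2   # E = mc^2

print(f"energy to brute-force AES-256: {e_total:.2e} J")
print(f"Milky Way stellar mass-energy: {e_galaxy:.2e} J")
print(f"fraction needed: {e_total / e_galaxy:.1%}")   # on the order of a few percent
```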
Cryptizard t1_izzl2u8 wrote
Reply to comment by RedErin in Excluding quantum computers, do you think that ASI/AGI will crack our encryption system? by Outrageous_Point_174
This is a bad take. There are many limits, physical and computational, that prevent even a singularity AI from doing “anything it wants.” We know, for instance, that the one-time pad is an information-theoretically unbreakable encryption scheme, regardless of how smart you are or how much computation you have.
Moreover, if P != NP like we believe, there are other encryption schemes that can’t be broken even with a computer the size of the galaxy. These are fundamental limits.
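A minimal sketch of the one-time pad, since people always ask: the whole trick is XOR with a truly random key that is as long as the message and never reused, which is exactly why the guarantee is unconditional and also why it is impractical for most uses.

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    # XOR each byte with a key byte; with a truly random, never-reused key
    # of the same length, the ciphertext reveals nothing about the message.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))     # fresh random key, same length as msg
ct = otp_xor(msg, key)
print(ct.hex())
print(otp_xor(ct, key))                 # decrypting is the same XOR: b'attack at dawn'
```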
Cryptizard t1_izujvea wrote
Reply to comment by TopicRepulsive7936 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Neat.
Cryptizard t1_izufeqm wrote
Reply to comment by TopicRepulsive7936 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
I think you may be having a psychotic break, friend. I have no idea what you are talking about.
Cryptizard t1_izue3uh wrote
Reply to comment by TopicRepulsive7936 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
>Do you even know what computers are used for?
What is a computer? I'm posting this from a coconut that I hacked into a radio.
Cryptizard t1_izu5jlk wrote
Reply to comment by __ingeniare__ in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Sort of, but look how long it takes to train these models. Even if it can self-improve, it still might take years to get anywhere.
Cryptizard t1_j1vnuzo wrote
Reply to comment by Kinexity in Driverless cars and electric cars being displayed as the pinnacle of future transportation engineering is just… wrong. Car-based infrastructure is inefficient, bad for the environment and we already have better technologies in other fields that could help more. An in depth analysis by mocha_sweetheart
Cool, so just fuck all the people that don’t, right?