Cryptizard

Cryptizard t1_j1tut1b wrote

It seems like you have never lived outside of a city. And I’m not talking about a suburb, I mean a rural area. You can’t make those “walkable”, and you can’t just force people to move to cities.

9

Cryptizard t1_j1ry5sp wrote

When have rich people ever tried to keep a technology to themselves? It doesn't make sense on its face. The only things that are exclusive to rich people are very rare and supply can't be increased, like real estate, precious gems, supercars, etc.

0

Cryptizard t1_j1rxto1 wrote

>For all we know if things like posthumanism etc. become real they might as well just charge an unreasonable price and only the select few will get it

Why don't only rich people have electric cars or sweet gaming computers or literally any other new technology? Because they want to make money and they can make more money and be more rich by selling that shit to the public. It is called capitalism.

12

Cryptizard t1_j1k30q3 wrote

Reply to comment by YesramDeens in Hype bubble by fortunum

Protein folding, n-body simulation, really any type of simulation, network analysis, anything in cryptography or that involves matrices. Basically anything that isn’t “off the top of your head” and requires an iterative approach or multiple steps to solve.

1

Cryptizard t1_j1hvl85 wrote

Reply to comment by Argamanthys in Hype bubble by fortunum

Except no, because they currently scale quadratically with the number of “steps” they have to think through. Maybe we can fix that, but it’s not obvious that it is possible to fix without a completely new paradigm.
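The quadratic scaling is easy to see in naive self-attention: every token attends to every other token, so the number of pairwise score computations grows with the square of the sequence length. A minimal pure-Python sketch (toy vectors, not an optimized implementation):

```python
def naive_attention_scores(seq):
    """Compute all pairwise dot-product scores for a sequence of
    token vectors -- the core of self-attention. Returns the score
    matrix and the number of dot products performed."""
    n = len(seq)
    ops = 0
    scores = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            scores[i][j] = sum(a * b for a, b in zip(seq[i], seq[j]))
            ops += 1
    return scores, ops

short_seq = [[1.0, 0.0]] * 64
long_seq = [[1.0, 0.0]] * 128   # doubling the context length...

_, ops_short = naive_attention_scores(short_seq)
_, ops_long = naive_attention_scores(long_seq)
print(ops_long / ops_short)     # ...quadruples the work: 4.0
```

Sub-quadratic attention variants exist, but each trades away something, which is why "just fix it" is not straightforward.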

1

Cryptizard t1_j1hrdyt wrote

No, ASI is not capable of everything. There are just fundamental limits to computation like there are limits to physics. It can still do a lot though, there are only a few things we know (or conjecture) lower bounds about. It just happens to be that cryptography is entirely designed to resist even incredibly advanced computers.

2

Cryptizard t1_j1hfn4j wrote

Reply to comment by Ortus12 in Hype bubble by fortunum

Here is where it becomes obvious that you don’t understand how LLMs work. They have a fixed-depth evaluation circuit, which means that they take the same amount of time to respond to the prompt “2+2=?” as they do to “simulate this complex protein folding” or “break this encryption key”. There are fundamental limits on the computation that a LLM can do which prevent it from being ASI. In CS terms, anything which is not computable by a constant-depth circuit (many important things) cannot be computed by a LLM.
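The fixed-depth point can be illustrated with a toy contrast (an illustrative sketch, not a real transformer): a forward pass always executes the same number of layers regardless of how hard the question is, while a genuinely iterative algorithm's step count depends on the problem instance.

```python
def forward_pass(prompt, num_layers=12):
    """Toy stand-in for an LLM forward pass: the amount of
    computation depends only on the architecture (and prompt
    length), never on how hard the question is."""
    steps = 0
    for _ in range(num_layers):
        steps += 1  # one fixed layer of computation
    return steps

def iterative_search(n):
    """Toy stand-in for an iterative problem (here, trial
    division): the number of steps grows with the instance."""
    steps, d = 0, 2
    while d * d <= n:
        steps += 1
        d += 1
    return steps

# Same compute for an easy prompt and a hard one:
assert forward_pass("2+2=?") == forward_pass("break this encryption key")
# But the iterative problem's cost depends on the input:
assert iterative_search(10007) > iterative_search(9)
```

Chain-of-thought prompting partially works around this by spending more output tokens, but each token still costs only one fixed-depth pass.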

7

Cryptizard t1_j1a3gmq wrote

We “as a species” can’t even agree that things like human rights are a good idea. We can’t even stop killing each other for petty reasons. We can wait a thousand years and there will never be a consensus about something as complicated as AI.

Folks that are optimistic about AI hope it will actually be morally better than we are. We need AI to save us from ourselves.

6

Cryptizard t1_j0cwdbp wrote

Reply to comment by TinyBurbz in this sub by TinyBurbz

The vast majority of people on this sub, and Reddit in general, are not capable of appreciating art. That is just becoming more obvious now because of AI art generation, and it is infuriating a lot of people.

1

Cryptizard t1_j0cw287 wrote

Reply to comment by SgathTriallair in this sub by TinyBurbz

I'm not even an artist, but what is starting to make me upset about this whole thing is that people look at DALLE or whatever and go, "haha artists out of a job," which seriously underappreciates what artists do. If you made a painting where you couldn't remember how many fingers a person was supposed to have, you would fail out of art school. The fact that people are saying DALLE can do as well as a real artist just shows that the vast majority don't appreciate art in the first place.

That's not to say that it won't improve, it definitely will. But it's not there right now, and a huge percentage of people around here think that it is, which is so fucking cringe.

1

Cryptizard t1_izznmo0 wrote

I said it in another reply, but there are some types of cryptography that are information-theoretically secure, meaning no matter how much computation you have you provably cannot break them. These will continue to be secure against singularity AI.

As to the rest of cryptography, it depends on the outcome of the P vs. NP question. It is conceivable that an ASI could prove that P = NP and break all computationally-bound cryptography. But if P != NP, as most mathematicians believe, then there will be some encryption schemes that cannot be broken^(*) no matter how smart you are or how much computation you have access to. A subset of our current ciphers may be broken, i.e. an ASI could find an efficient algorithm for factoring and break RSA, but we have enough of them based on different problems that are conjectured to be difficult that at least some of them would turn out to truly be intractable.

For example, suppose that breaking AES is truly outside of P. Then, according to the Landauer limit, the most efficient computer physically possible would take about 1% of the mass-energy of the Milky Way galaxy to break one AES-256 ciphertext. Note that this is an underestimate, because I assume it only takes one elementary computation per key attempt when in reality it is a lot more than that.
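That estimate can be reproduced as back-of-the-envelope arithmetic. The figures below are assumptions on my part (room temperature, a Milky Way mass of roughly 1.5×10^12 solar masses including dark matter); different choices shift the result by an order of magnitude or two, but the conclusion, a galactic-scale energy budget for one brute-force search, is robust.

```python
import math

k_B = 1.380649e-23         # Boltzmann constant, J/K
T = 300.0                  # assumed operating temperature, K
landauer_per_bit = k_B * T * math.log(2)  # minimum energy per bit erasure

keys = 2 ** 256            # AES-256 keyspace
# Optimistic lower bound: one bit-flip's worth of energy per key tried
brute_force_energy = keys * landauer_per_bit          # joules

# Assumed Milky Way mass: ~1.5e12 solar masses, incl. dark matter
galaxy_mass_kg = 1.5e12 * 1.989e30
c = 2.998e8                                           # m/s
galaxy_mass_energy = galaxy_mass_kg * c ** 2          # E = mc^2, joules

fraction = brute_force_energy / galaxy_mass_energy
print(f"{fraction:.2%} of the galaxy's mass-energy")  # roughly 0.1% here
```

Cooling the computer toward the cosmic-background temperature lowers the per-bit cost, but only linearly; the 2^256 factor dominates either way.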

^(*)This is a small oversimplification, there is the possibility that we live in a world where P != NP but we still don't have any useful cryptography. See Russell Impagliazzo's famous paper "A personal view of average-case complexity."

8

Cryptizard t1_izzl2u8 wrote

This is a bad take. There are many limits, physical and computational, that prevent even a singularity AI from doing “anything it wants.” We know, for instance, that the one-time pad is an information-theoretically unbreakable encryption scheme, regardless of how smart you are or how much computation you have.

Moreover, if P != NP like we believe, there are other encryption schemes that can’t be broken even with a computer the size of the galaxy. These are fundamental limits.
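The one-time pad's unbreakability is easy to see in code: the ciphertext is just the plaintext XORed with a uniformly random key of the same length, so every plaintext of that length is equally consistent with any given ciphertext. A minimal sketch:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each plaintext byte with the corresponding key byte.
    The key must be truly random, as long as the message, and
    never reused -- that is what makes the scheme unbreakable."""
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

otp_decrypt = otp_encrypt  # XOR is its own inverse

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))
ct = otp_encrypt(msg, key)
assert otp_decrypt(ct, key) == msg

# Without the key, any same-length message is an equally valid
# decryption -- there is always some key that produces it:
decoy = b"retreat at dus"
decoy_key = otp_encrypt(ct, decoy)
assert otp_encrypt(ct, decoy_key) == decoy
```

This is why the security argument needs no assumption about the attacker's computing power; the catch is the impractical key management, which is what the computational schemes mentioned above are for.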

9