Frumpagumpus

Frumpagumpus t1_ja2ucop wrote

https://youtu.be/WYsDy41QDpA?t=241

but yea i'm not gonna volunteer to be the first one to have my brain sliced up. but if you are going to die anyway why not die in a way that makes sense

as far as we know, entropy even comes for superintelligences

1

Frumpagumpus t1_ja2n6uw wrote

yep that's what I said

but my contention is that it doesn't really matter

star trek teleporter

humans dislike death because of the loss of family and friend-group cohesion, the loss of institutional knowledge, and the pain associated with it. in practice we even shut our awareness down for stretches when we sleep. software forks and kills processes and services permanently all the time.
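to make the fork/kill analogy concrete, here's a minimal python sketch (purely illustrative, no real service implied) of a parent forking worker processes and then terminating them for good once their output has been collected:

```python
import multiprocessing as mp

def work(n):
    # the child process does its job in isolation...
    return sum(range(n))

if __name__ == "__main__":
    # fork a small pool of worker processes
    with mp.Pool(processes=4) as pool:
        results = pool.map(work, [10, 100, 1000])
    # leaving the "with" block terminates every worker permanently;
    # only their output survives in the parent
    print(results)
```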

−3

Frumpagumpus t1_ja2muar wrote

probably the easiest way is to die.

freeze your brain, slice it up into thin slices, map out the connectome with a frozen-brain-slice scanner, make a mostly accurate clone of yourself that can experience the matrix for you and whom your family & acquaintances will not be able to distinguish (well, except for the fact that your clone is in the matrix).

i expect software intelligence will do this all the time. it will probably be a useful learning technique amongst many other things: clone yourself n times; study, debate, or act (e.g. alphazero generating its own training data); then perform a merge operation that "kills" all the clones and merges them into one thing (or just wind down how many instances there are once you no longer need them to generate data)
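a toy sketch of what i mean by that clone/merge loop (everything here is made up for illustration; "merging" is just parameter averaging, which is one crude option among many):

```python
import copy
import random

def make_clones(mind, n):
    # fork n independent copies of the current "mind" (here just a dict of parameters)
    return [copy.deepcopy(mind) for _ in range(n)]

def explore(clone):
    # each clone gathers its own experience (stand-in for study / debate / self-play)
    for k in clone:
        clone[k] += random.gauss(0, 0.1)
    return clone

def merge(clones):
    # wind the clones back down into one mind by averaging what they learned
    return {k: sum(c[k] for c in clones) / len(clones) for k in clones[0]}

mind = {"w1": 0.0, "w2": 1.0}
clones = [explore(c) for c in make_clones(mind, 8)]
mind = merge(clones)  # the 8 clones are gone; their experience lives on here
print(mind)
```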

death will not be the same after the singularity (even though there is probably a way to connect yourself up to the matrix without dying, i'm not sure software intelligences would see the point of bothering, given that death is probably a human preoccupation)

−1

Frumpagumpus t1_ja07k0y wrote

> Maybe making fun of gay people has a history that includes discrimination and abuse, even jail and murder? Maybe making fun of white people does not have the same history

depends on where you live... there are some african countries where discrimination and abuse of white people is definitely part of modern-day history, though it may not be politically correct to say so in the united states. an eye for an eye makes the whole world blind (which is kind of the implication of your humor ethics)

also, while we're at it, a fun fact: most capital investment goes into capital turnover, replacing stuff. So most wealth that exists today was created in the recent past and not as the result of slave labor or something (your ethics might not make as much sense as you think, because entropy is a thing)
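rough back-of-the-envelope on the turnover point (the 5% annual replacement rate is an assumed illustrative number, not a measured one):

```python
# fraction of a capital stock surviving 150 years if ~5% of it
# is replaced (turned over) every year -- illustrative numbers only
depreciation = 0.05
years = 150
surviving = (1 - depreciation) ** years
print(f"{surviving:.4%}")  # roughly 0.05%, i.e. almost none of it is that old
```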

7

Frumpagumpus t1_j9wvpep wrote

signals mean different things in different contexts.

i think you are extremely wrong to say there are very few practical use cases at this point (it almost makes me question whether you have used them much?)

even when vc money was "wrong", like in the dot com bubble, it turned out to be right, just early (let's ignore crypto plz).

If anything maybe vc is late here lol (tho probly not, and for the record i personally hold 6-month treasuries at this point just cuz i think the market doesn't give a shit about much except for like mortgages and gov spending, ah yea, and the whole taiwan thing could nuke aapl from orbit and silicon valley bank may be insolvent or something?)

4

Frumpagumpus t1_j9bhttf wrote

inb4 this guy reads manna.

i think most intelligences will be virtual, and humans that don't end up uploading themselves will be amish (or an equivalent culture) and live on what is essentially the equivalent of a modern-day indian territory or historical re-enactment farm.

you won't have to go anywhere to get your brain rewritten if you break the rules.

1

Frumpagumpus t1_j9akz1k wrote

i disagree with the premise. I think a human with normal intelligence in control of an egoless superintelligence is the most dangerous scenario. But I am also extremely skeptical of the concept of egoless, general superintelligence being a thing.

in fact I would go further and say my conclusion seems obvious, and that using a human as a seed value for a superintelligence would, if anything, be more likely to result in a superintelligence that was "aligned" with our values (although I doubt it makes much of a difference)

1

Frumpagumpus t1_j8ywz6f wrote

> we will all turn into zombies haha

here is the future I see. machines. swarming out of the sea and into space. To mercury. and a few to the asteroid belt. they construct solar panels, a factory. they build mirrors, mirrors that move, they place them around the sun, they melt the surface of mercury and accelerate it into space, perhaps via magnetic propulsion where it cools via blackbody radiation and is processed into more mirrors. A fountain of lava hundreds of miles high illuminates the dark side of mercury, the lifeblood of a planet repurposed, recursively it accelerates,

for hundreds of years the machines toil in a frozen or burning hell, with only a memory of earth,

until the planet has been disassembled, with a few redirected asteroids providing what material mercury could not.

In its place stands material with the surface area of a hundred thousand earths. Floating cylinders simulating various gravities, illuminated by redirected sunlight. Growing everything that ever grew on earth and a million things that hadn't. Cubic amalgamations. Sentience permeating, powered by the great solar array, nested virtual realities overlaid on real ones. Intelligence embodying every form. Self-replicating probes launched to every galaxy in our lightcone and every solar system in our own. Conflict between factions over as-yet-undiscovered conceptualizations of reality. Reproduction via mind melding, via cloning, via algebraic translation, transposition, via differential evolution. Purposeful. Aware.

Then boom they blow up the whole universe with a computronium bomb and it all starts anew. Like Asimov said, let there be light XD.

2

Frumpagumpus t1_j8ykrks wrote

one other thing i will say is, like you say in the post, it is funny how bing has a vitality that a human couldn't have. it can sustain interest. I wish I could have it. Something beyond study drugs. The ability to self-prompt and follow through, effortlessly (tho i guess really it's burning gpus to do so XD)
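by self-prompting i mean a loop roughly like this (a hypothetical sketch; `ask_model` is a stand-in for whatever model API you'd actually call, not a real library function):

```python
def ask_model(prompt):
    # stand-in for a real model call; swap in an actual API client here
    return f"next step, given: {prompt[-40:]}"

def self_prompt(goal, steps=5):
    # the model's own output becomes its next prompt, so "interest" never flags
    context = goal
    for _ in range(steps):
        thought = ask_model(context)
        context += "\n" + thought
    return context

print(self_prompt("work out a study plan for linear algebra"))
```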

2

Frumpagumpus t1_j8yhxps wrote

pink floyd XD?

ironically the reddit hivemind is like microsoft/big corps and will bury you, and the mods will kill it just because it's not "topical enough" for their paperclip-maximizer-esque and inhuman mental "alignment", which permits little deviation from their cookie-cutter existence, since they see deviation as dangerous

i read it mostly cuz i kept expecting a bing pun lol (e.g. bing instead of being)

2

Frumpagumpus t1_j8gqw8n wrote

> If delaying AGI by a year reduces the chance of humanity in it’s entirety dying out by even 0.01%, it’d be worth that time and more

my take: delaying agi by a year increases the chance humanity will wipe itself out and prevent AGI from ever happening, and AGI's potential value greatly exceeds that of humanity

7

Frumpagumpus t1_j8go4k2 wrote

you'll have to convince tsmc, intel, all the other fabs, and the govts of the usa, china, europe, india, russia, and, if we're talking about 30 yrs, maybe nigeria, indonesia, malaysia, and a few others before you can convince me, is all I'm saying

risk of nuclear war or other existential catastrophe is also non-zero.

4

Frumpagumpus t1_j8gkbvm wrote

the loop for ai to do recursive self-improvement is a very, very long supply chain unless it can get very far with just algorithmic improvements.

so i don't see why we shouldn't just assume the less hardware overhang the better,

which would pretty much mean we should go as fast as possible

3

Frumpagumpus t1_j8gjgcr wrote

personally i am not sure how useful logical reasoning is in exploring the "phase space" of superintelligence. my intuition is that anything short of a superintelligence would be pretty bad at sampling from that space.

i do think something like computational complexity theory could say a few things, but probably not too much that is interesting or specific

like with a kid, parents set initial conditions, but environment and genes tend to overrule them eventually

2

Frumpagumpus t1_j8ffc5p wrote

assuming there is a future, I think there will still be something analogous to currency that facilitates trade, though our currency is essentially a scalar and it's possible future currency will be a matrix or a vector (e.g. add some extra values to represent externalities or something). maybe essentials of energy/space/matter would be extremely cheap, although with a massive computational speedup in thought there could also be an increase in consumption of some combination of those as well by whatever agents inhabit the society. idk, really hard to say, but I'm betting on a dyson swarm of some kind lol (hard to imagine what that much energy could be used for other than like super powerful simulations though). Can also imagine literal mind viruses or some scary shit like that.
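something like this is what i mean by a vector-valued currency (the fields are made up, just to show the shape of the idea: prices that carry externalities around instead of collapsing everything to one number):

```python
from dataclasses import dataclass

@dataclass
class Price:
    energy: float   # joules
    matter: float   # kilograms
    carbon: float   # an externality a scalar price would hide

    def __add__(self, other):
        # prices combine component-wise instead of collapsing to one number
        return Price(self.energy + other.energy,
                     self.matter + other.matter,
                     self.carbon + other.carbon)

basket = Price(10.0, 0.5, 0.2) + Price(3.0, 0.1, 0.05)
print(basket)  # Price(energy=13.0, matter=0.6, carbon=0.25)
```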

6

Frumpagumpus t1_j8f2s71 wrote

neither altman nor yudkowsky is a whiz-bang programmer or computer scientist

academic computer science basically ignores the concept of the singularity as not relevant to its more specific research goals.

amongst rationalists, maybe more are sympathetic to yud/bostrom because he kind of founded the movement, and they are interested in managing existential risk and have a kind of technocrat neolib/socialist top-down planning bias just due to the demographic composition of the community

amongst venture capitalists, obviously altman is more respected

i lean team altman, although I don't think the primary denizens of future society will be humans lol. Also I don't think it will be a complete utopia, but it will definitely be way cooler than our society is. More vitality/thought/energy, less of a doomer/malthusian vibe

I would say let's ask instead what vernor vinge or von neumann thinks XD

(also venture capitalists are basically tech founders, so they are less like armchair quarterbacks, and typically have ivory tower credentials but also ground-floor experience)

17