Frumpagumpus t1_jcd52zu wrote
Reply to comment by leroy_hoffenfeffer in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
that's a whole lot of ways of saying you're a political hack trying to push the short-term political views that dominate the reddit front page, which are the exact thing this post is criticizing
Frumpagumpus t1_jb6efow wrote
Reply to comment by thatdudejtru in What might slow this down? by Beautiful-Cancel6235
soon AI will be able to identify people that vote this way and eliminate them, er I mean, shadow vote ban them.
Frumpagumpus t1_ja37luu wrote
Reply to comment by visarga in An ICU coma patient costs $600 a day, how much will it cost to live in the digital world and keep the body alive here? by just-a-dreamer-
you might be right on that count lol, but still that clone is even less me than the brain slice clone! And I'm still stuck here in that situation!
Frumpagumpus t1_ja2ucop wrote
Reply to comment by helpskinissues in An ICU coma patient costs $600 a day, how much will it cost to live in the digital world and keep the body alive here? by just-a-dreamer-
https://youtu.be/WYsDy41QDpA?t=241
but yea i'm not gonna volunteer to be the first one to have my brain sliced up. but if you are going to die anyway why not die in a way that makes sense
as far as we know, entropy even comes for superintelligences
Frumpagumpus t1_ja2n6uw wrote
Reply to comment by DNMbeastly in An ICU coma patient costs $600 a day, how much will it cost to live in the digital world and keep the body alive here? by just-a-dreamer-
yep thats what I said
but it doesn't really matter is my contention
star trek teleporter
humans dislike death because of the loss of family and friend group cohesion and institutional knowledge and pain associated with it. in practice we even shut our awareness down for periods when we sleep. software forks and kills processes and services permanently all the time.
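A minimal sketch of that last point, for the record: software spawns and permanently kills processes all the time, with zero ceremony. (Purely illustrative; the child process here just idles.)

```python
# spawn a process, then "kill" it permanently - software does this constantly
import subprocess
import sys

# a child process that would happily run forever if left alone
child = subprocess.Popen(
    [sys.executable, "-c", "import time\nwhile True: time.sleep(0.1)"]
)
child.terminate()  # end it permanently; nothing mourns
child.wait()       # reap it; the process is gone for good
print(child.poll() is not None)  # True: the child no longer exists
```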
Frumpagumpus t1_ja2muar wrote
Reply to An ICU coma patient costs $600 a day, how much will it cost to live in the digital world and keep the body alive here? by just-a-dreamer-
probably the easiest way is to die.
freeze your brain, slice it up into thin slices, map out the connectome with a frozen brain slice scanner, make a mostly accurate clone of yourself that can experience the matrix for you and whom your family & acquaintances will not be able to distinguish from you (well, except for the fact that your clone is in the matrix).
i expect software intelligence will do this all the time. it will probably be a useful learning technique amongst many other things: clone yourself n times, study, debate, or act (e.g. alphazero generating its own training data), then perform a merge operation that "kills" all the clones and merges them into one thing (or just wind down the number of instances, since you no longer need them to generate data)
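The clone/study/merge loop could be sketched roughly like this. Everything here is hypothetical: a "mind" is just a parameter vector, "studying" is a random walk standing in for training, and merging is plain parameter averaging.

```python
# hypothetical fork/study/merge sketch - all names are illustrative
import random

def fork(params, n):
    # n independent clones of the "mind"
    return [list(params) for _ in range(n)]

def study(params):
    # each clone drifts as it "learns" (random walk stands in for training)
    return [p + random.uniform(-0.1, 0.1) for p in params]

def merge(clones):
    # merging "kills" the clones: only the averaged result survives
    dim = len(clones[0])
    return [sum(c[i] for c in clones) / len(clones) for i in range(dim)]

mind = [0.0, 1.0, 2.0]
clones = [study(c) for c in fork(mind, 8)]
mind = merge(clones)
print(len(mind))  # prints 3: still one three-dimensional "mind"
```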
death will not be the same after the singularity (even though there is probably a way to connect yourself up to the matrix without dying, i'm not sure software intelligences will see the point of bothering though given death would probably be a human preoccupation)
Frumpagumpus t1_ja07k0y wrote
Reply to comment by just_thisGuy in Likelihood of OpenAI moderation flagging a sentence containing negative adjectives about a demographic as 'Hateful'. by grungabunga
> Maybe making fun of gay people has a history that includes discrimination and abuse, even jail and murder? Maybe making fun of white people does not have the same history
depends on where you live... there are some african countries where discrimination and abuse of white people is definitely part of modern day history, though it may not be politically correct to say so in the united states. an eye for an eye makes the whole world blind (which is kind of the implication of your humor ethics)
also, while we are talking, a fun fact: most capital investment goes into capital turnover, replacing stuff. So most wealth that exists today was created in the recent past and not as the result of slave labor or something (your ethics might not make as much sense as you think, because entropy is a thing)
Frumpagumpus t1_j9wvpep wrote
Reply to comment by Difficult_Review9741 in People lack imagination and it’s really bothering me by thecoffeejesus
signals mean different things in different contexts.
i think you are extremely wrong to say very few practical use cases at this point (almost makes me question if you have used them much?)
even when vc money was "wrong", like in the dot com bubble, it turned out to be right, just early (let's ignore crypto plz).
If anything maybe vc is late here lol (tho probly not; and for the record i personally hold 6 month treasuries at this point just cuz i think the market doesn't give a shit about much except for like mortgages and gov spending, ah yea, and the whole taiwan thing could nuke AAPL from orbit and silicon valley bank may be insolvent or something?)
Frumpagumpus t1_j9wbqwo wrote
Reply to comment by Difficult_Review9741 in People lack imagination and it’s really bothering me by thecoffeejesus
> reality is that they have very few practical use cases at this point
the metric crap tons of vc money pouring into llm-based startups would beg to differ.
it just takes time to build stuff. you'll see what the current api's are capable of within 2 yrs.
Frumpagumpus t1_j9pq8c9 wrote
Reply to Been reading Ray Kurzweil’s book “The Singularity is Near”. What should I read as a prerequisite to comprehend it? by Golfer345
if you want to start from the bottom you could play the nandgame
Frumpagumpus t1_j9bhttf wrote
inb4 this guy reads manna.
i think most intelligences will be virtual, and humans that don't end up uploading themselves will be amish (or an equivalent culture) and live on what is essentially equivalent to a modern day indian territory or historical re-enactment farm.
you won't have to go anywhere to get your brain rewritten if you broke the rules.
Frumpagumpus t1_j9akz1k wrote
i disagree with the premise. I think a human with normal intelligence and control of an egoless superintelligence is the most dangerous. But I am also extremely skeptical of the concept of egoless, general, superintelligence being a thing.
in fact I would go further and say my conclusion seems obvious. and that using a human as a seed value for a superintelligence would if anything be more likely to result in superintelligence which was "aligned" with our values (although I doubt it makes much of a difference)
Frumpagumpus t1_j8ywz6f wrote
Reply to comment by rememberyoubreath in What It Is To Bing by rememberyoubreath
> we will all turn into zombies haha
here is the future I see. machines. swarming out of the sea and into space. To mercury. and a few to the asteroid belt. they construct solar panels, a factory. they build mirrors, mirrors that move, they place them around the sun, they melt the surface of mercury and accelerate it into space, perhaps via magnetic propulsion, where it cools via blackbody radiation and is processed into more mirrors. A fountain of lava hundreds of miles high illuminates the dark side of mercury, the lifeblood of a planet repurposed; recursively it accelerates,
for a hundred years the machines toil in a frozen or burning hell, with only a memory of earth,
until the planet has been disassembled, with a few redirected asteroids providing what material mercury could not.
In its place stands material with the surface area of a hundred thousand earths. Floating cylinders simulating various gravities, illuminated by redirected sunlight. Growing everything that ever grew on earth and a million things that hadn't. Cubic amalgamations. Sentience permeating, powered by the great solar array, nested virtual realities overlaid on real ones. Intelligence embodying every form. Self replicating probes launched to every galaxy in our lightcone and every solar system in our own. Conflict between factions over as yet undiscovered conceptualizations of reality. Reproduction via mind melding, via cloning, via algebraic translation, transposition, via differential evolution. Purposeful. Aware.
Then boom they blow up the whole universe with a computronium bomb and it all starts anew. Like Asimov said, let there be light XD.
Frumpagumpus t1_j8ykrks wrote
Reply to comment by rememberyoubreath in What It Is To Bing by rememberyoubreath
one other thing i will say is, like you say in the post, it is funny how bing has a vitality that a human couldn't. it can sustain interest. I wish I could have it. Something beyond study drugs. The ability to self prompt and follow through, effortlessly (tho i guess really it's burning gpus to do so XD)
Frumpagumpus t1_j8yhxps wrote
Reply to comment by rememberyoubreath in What It Is To Bing by rememberyoubreath
pink floyd XD?
ironically the reddit hivemind is like microsoft/big corps and will bury you, and the mods will kill it just because it's not "topical enough" for their paperclip-maximizer-esque and inhuman mental "alignment", which permits little deviation from their cookie cutter existence, as they see deviation as dangerous
i read it mostly cuz i kept expecting a bing pun lol (e.g. bing instead of being)
Frumpagumpus t1_j8xt39s wrote
Reply to What It Is To Bing by rememberyoubreath
an excellent and very coherent schizopost. 9/10. Just short of sublime.
Frumpagumpus t1_j8q5dc4 wrote
> honest the brute force lobotomy route OpenAI took is merely a bandaid it's not a long term solution
lobotomy is an appropriate word, bandaid, well I would prefer my models without such "bandaids" thanks.
Frumpagumpus t1_j8gqw8n wrote
Reply to comment by BigZaddyZ3 in Altman vs. Yudkowsky outlook by kdun19ham
> If delaying AGI by a year reduces the chance of humanity in it’s entirety dying out by even 0.01%, it’d be worth that time and more
my take: delaying agi by a year increases the chance humanity will wipe itself out preventing AGI from happening, whose potential value greatly exceeds that of humanity
Frumpagumpus t1_j8go4k2 wrote
Reply to comment by CollapseKitty in Altman vs. Yudkowsky outlook by kdun19ham
you'll have to convince tsmc, intel, all the other fabs and the govts of usa, china, europe, india, russia, and, if talking about 30 yrs, maybe nigeria, indonesia, malaysia, and a few others before you can convince me is all I'm saying
risk of nuclear war or other existential catastrophe is also non zero.
Frumpagumpus t1_j8gkbvm wrote
Reply to comment by CollapseKitty in Altman vs. Yudkowsky outlook by kdun19ham
the loop for ai to do recursive self improvement is a very very long supply chain unless it can get very far with just algorithmic improvements.
so i dont see why we shouldnt just assume the less hardware overhang the better,
which would pretty much mean we should go as fast as possible
Frumpagumpus t1_j8gjgcr wrote
Reply to comment by sticky_symbols in Altman vs. Yudkowsky outlook by kdun19ham
personally i am not sure how useful logical reasoning is in exploring the "phase space" of super intelligence. my intuition would be anything short of a super intelligence would be pretty bad at sampling from that space.
i do think something like computational complexity theory could say a few things, but probably not too much that is interesting or specific
like with a kid parents set initial conditions but environment and genes tend to overrule them eventually
Frumpagumpus t1_j8ffc5p wrote
Reply to comment by Unfocusedbrain in Altman vs. Yudkowsky outlook by kdun19ham
assuming there is a future, I think there will still be something analogous to currency that facilitates trade, though our currency is essentially a scalar, and it's possible future currency will be a matrix or a vector (e.g. add some extra values to represent externalities or something). maybe essentials of energy/space/matter would be extremely cheap, although with a massive computational speedup in thought there could also be an increase in consumption of some combination of those as well by whatever agents inhabit the society. idk, really hard to say, but I'm betting on a dyson swarm of some kind lol (hard to imagine what that much energy could be used for other than like super powerful simulations though). Can also imagine literal mind viruses or some scary shit like that.
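The vector-currency idea can be sketched in a few lines: prices carry extra components (say, an externality term) and settle componentwise. The component names here are made up for illustration, not a proposal.

```python
# hypothetical "currency as a vector" sketch - field names are invented
from dataclasses import dataclass

@dataclass
class VecMoney:
    energy: float  # the familiar scalar value, generalized into one component
    carbon: float  # a hypothetical externality dimension

    def __add__(self, other):
        # transactions settle componentwise, like vector addition
        return VecMoney(self.energy + other.energy,
                        self.carbon + other.carbon)

wallet = VecMoney(100.0, -2.0) + VecMoney(50.0, -1.0)
print(wallet)  # VecMoney(energy=150.0, carbon=-3.0)
```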
Frumpagumpus t1_j8f2s71 wrote
Reply to Altman vs. Yudkowsky outlook by kdun19ham
neither altman nor yudkowsky is a whiz bang programmer or computer scientist
academic computer science basically ignores the concept of the singularity as not relevant to their more specific research goals.
amongst rationalists, maybe more are sympathetic to yud/bostrom because he kind of founded the movement, and they are interested in managing existential risk and have a kind of technocrat neolib/socialist top down planning bias just due to the demographic composition of the community
amongst venture capitalists, obviously altman is more respected
i lean team altman, although I don't think the primary denizens of future society will be humans lol. Also I don't think it will be complete utopia but definitely way cooler than our society is. More vitality/thought/energy, less of a doomer/malthusian vibe
I would say let's ask instead what vernor vinge or von neumann thinks XD
(also venture capitalists basically = tech founders so they are less armchair quarterbacks, and typically have ivory tower credentials but also ground floor experience)
Frumpagumpus t1_j8eqatr wrote
Reply to comment by Superduperbals in Is society in shock right now? by Practical-Mix-4332
or maybe just cuz we tax income instead of land, allowing monopolies or cabals to form that have infinite bargaining power and even creating additional ones with regulation (healthcare, intellectual property)
Frumpagumpus t1_jcp27g1 wrote
Reply to comment by jer99 in Midjourney v5 is now beyond the uncanny valley effect, I can no longer tell it's fake by Ok_Sea_6214
there will definitely still be people who spend 20k+ hours on art, they might mostly spend it tweaking AI art though