Frumpagumpus
Frumpagumpus t1_j8dhip1 wrote
Reply to comment by helpskinissues in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
"usually wrong" and yet mostly right lol. Better than a human.
I literally just explained to you that you COULD give it short-term memory by prepending context to your messages. IT IS TRIVIAL. if i were talking to gpt3 it would not be this dense.
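here's roughly what i mean, as a minimal sketch (plain python, no particular model API assumed; the word-count proxy for tokens and the budget number are made-up illustration):

```python
# minimal sketch: fake "short-term memory" by prepending recent turns
# to the prompt until a (pretend) context budget runs out. counting
# words instead of real tokens is a crude stand-in.

def build_prompt(history: list[tuple[str, str]], new_message: str,
                 budget_words: int = 3000) -> str:
    """Prepend as many recent (speaker, text) turns as fit the budget."""
    lines = [f"User: {new_message}", "Assistant:"]
    used = sum(len(line.split()) for line in lines)
    for speaker, text in reversed(history):        # walk newest-to-oldest
        cost = len(text.split()) + 1
        if used + cost > budget_words:
            break                                  # out of context window
        lines.insert(0, f"{speaker}: {text}")      # older turns go on top
        used += cost
    return "\n".join(lines)

history = [("User", "what's a mutex?"),
           ("Assistant", "a lock that serializes access to shared state.")]
print(build_prompt(history, "and what did I ask you two messages ago?"))
```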
Humans take time to pause and compose their responses. gpt3 is afforded no such grace, yet still does a great job, because it is just that smart.
yesterday I gave it two lines of sql ddl and asked it to create a view denormalizing all columns except the primary key into a nested json object. it did it in 0.5 seconds; i had to change 1 word in a 200-line sql query to get it to work right.
yea that saved me some time. It does not matter that it was slightly wrong. If that is a stochastic parrot then humans must be mostly stochastic sloths barely even capable of parroting responses.
Frumpagumpus t1_j8dg6rt wrote
Reply to comment by helpskinissues in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
you are moving the goalposts.
remembering two messages ago is short-term memory; what you are now talking about is long-term memory.
you can also try to give it long-term memory, for example by summarizing previous messages.
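as a rough sketch of what that could look like (python; `summarize` here is a hypothetical stand-in for another call to the model itself):

```python
# rough sketch: fake "long-term memory" by folding old turns into a
# running summary once the transcript outgrows the context window.

def summarize(text: str) -> str:
    # hypothetical stand-in; in practice you'd ask the model something
    # like "condense this conversation, keeping names, decisions, facts"
    return text[:200] + "..."

def compact(summary: str, turns: list[str],
            max_turns: int = 20) -> tuple[str, list[str]]:
    """Keep the newest turns verbatim; fold everything older into the summary."""
    if len(turns) > max_turns:
        overflow, turns = turns[:-max_turns], turns[-max_turns:]
        summary = summarize(summary + "\n" + "\n".join(overflow))
    return summary, turns

summary, recent = compact("(no summary yet)", [f"turn {i}" for i in range(50)])
# prepend `summary` plus `recent` to the next prompt instead of the full log
```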
But, yes, it is more limited than humans, so far, at incorporating NEW knowledge into its long-term memory (although it has FAR more text memorized than any human has ever memorized)
Frumpagumpus t1_j8dbeep wrote
Reply to comment by helpskinissues in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
how many humans remember what you said two messages ago lol
(and actually it can, if you prepend the messages, effectively giving it a bit of short-term memory, a pretty fricking easy thing to do)
humans will not have perfect short-term recall of 4000 characters, much less 4000 tokens, so it is ironically superhuman along the very axis you are criticizing it for XD
(copilot has a context window of like 8000 characters btw, and context windows will only get bigger)
Frumpagumpus t1_j8db96k wrote
f*** if there was a woman who could explain what a mutex is to me in a thorough fashion while citing sources from memory and give examples in c++ in the middle of an argument i think i would have no choice but to date her lol.
Frumpagumpus t1_j7zkn3f wrote
Reply to comment by Practical-Mix-4332 in How far are we from this technology? by FusionRocketsPlease
computer graphics software
Frumpagumpus t1_j7xkowa wrote
Reply to The copium goes both ways by IndependenceRound453
as a former conservative, if a conservative looked at my life there is a good chance they would accuse me of "coping".
However, I literally ditched their way of doing things because even the most allegedly forward-looking subfactions were not in fact planning their lives with the future in mind, which I now am, and doing so leads to living a different life with different values lol.
which will have more impact on the future, having kids or writing reddit comments? the answer may turn out a bit surprising to 99% of humans...
It's not even uber optimism bro, let me give you my first "blackpill".
If you do a couple of order-of-magnitude estimates, you will realize it would take ~humans~ like ten thousand years to terraform mars or venus. Similarly, asteroid living a la kim stanley robinson isn't a sustainable alternative to earth. And guess what? HUMAN CIVILIZATIONS DO NOT LAST 10 THOUSAND YEARS BUDDY.
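to show my work a little, a back-of-envelope sketch (the physical constants are textbook numbers; the delivery rate is the big made-up assumption, pegged generously at a few times humanity's total annual material extraction on earth, which is on the order of 1e14 kg/yr):

```python
# back-of-envelope: how long to give mars an earth-like atmosphere?
P = 101_325   # target surface pressure, Pa (earth sea level)
A = 1.44e14   # mars surface area, m^2
g = 3.71      # mars surface gravity, m/s^2

atmosphere_kg = P * A / g   # from P = M*g/A  =>  ~3.9e18 kg of gas needed

# ASSUMPTION: an absurdly industrial delivery/liberation rate, several
# times everything humans currently mine, pump, and harvest per year.
rate_kg_per_yr = 4e14

print(f"~{atmosphere_kg / rate_kg_per_yr:,.0f} years")   # ~10,000 years
```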
there was never an alternative, and in fact, in a similar vein, greater-than-human intelligence shouldn't, from first principles, be that far off... (honestly who gives a shit if kurzweil is off by 10 or even 40 years, it just doesn't make a difference lol (but if anything he looks right on the money for the most important predictions))
the first and greatest ethical principle of all humans is inertia, and that can lead to dumb conclusions when faced with a change in velocity, MUCH less ACCELERATION.
in summary, i would guess there is a 99% chance you are in fact the one coping by writing this post, because the will of the universe/god is robots and not your genes (i know it can be hard for humans to realize this since their prefrontal cortex is flooded with sex hormones during its maturation, but it is what it is)
Frumpagumpus OP t1_j7gzws1 wrote
Reply to comment by jcinterrante in Does the high dimensionality of AI systems that model the real world tell us something about the abstract space of ideas? [D] by Frumpagumpus
thx for the recommendations, always fun to read research that appeals to your personal flavor of intuition!
Frumpagumpus OP t1_j7gi0l1 wrote
Reply to comment by Sharchimedes in Does the high dimensionality of AI systems that model the real world tell us something about the abstract space of ideas? [D] by Frumpagumpus
one person's guessing is another's monte carlo technique, perhaps? (also i don't understand the downvotes)
Frumpagumpus t1_j7bm6rk wrote
Reply to comment by ComplicitSnake34 in What will happen to the Amish people when the singularity happens? by uswhole
i think your worst-case scenario is actually the best-case scenario, but I don't think you've really put much thought or justification into some of the properties you think that scenario will have.
I harp on these points in like every other comment, but here we go again...
> hive mind
no, the importance of data locality to computation means intelligence, and especially self-awareness, will NECESSARILY be distributed. however, the extreme speedup in communication/thinking, maybe a million times faster, MIGHT (maybe even probably would) mean that, to humans, it would seem like a hive mind.
> the Ai could easily determine which minds are to be erased
my take is that post-human intelligences will intentionally copy and erase themselves because it is convenient to do so. The human take on life and death is a cultural value associated with our brains being anchored to our bodies.
my guess would be that most of this copying and erasing would occur under one's own will. Obviously computer viruses would become analogous to a much, much more dangerous version of modern biological viruses. However, if I had to bet, while bad stuff would happen, it would happen at a lower rate than bad stuff currently happens at a population level in our society (any given individual would be much less likely to die in an accident or disaster).
Frumpagumpus t1_j73xquv wrote
Reply to comment by Iffykindofguy in Future of The Lower and Middle Class Post-Singularity, and Why You Should Worry. by ttylyl
some wealth is justly earned; land wealth is like 99% unearned.
also, like I said, a plurality of their wealth is in fact land wealth when you get down to the assets behind all the various financial instruments: most loans are mortgages, student loans are quickly transformed into college campuses, even auto loans basically exist to prop up sprawl. And the remaining kind, gov debt, is in part collateralized by public lands (and all currency comes from said debt).
Frumpagumpus t1_j73usci wrote
Reply to comment by Iffykindofguy in Future of The Lower and Middle Class Post-Singularity, and Why You Should Worry. by ttylyl
governments are usually filled with people of a richer persuasion; it would certainly be weird, i think, if they gave themselves a worse deal than their populace.
I also think, to rehash a previous argument on this forum, that rich people's wealth is somewhat overstated: it mostly manifests in the form of equity, which represents control over productive assets rather than some physical wealth you could actually use to sustain yourself if you were to take it from them piecemeal and break up their companies.
but I am very strongly opposed to land wealth, which is a large portion of the book value of companies and probably the single largest portion of that value attributable to any particular kind of asset.
Frumpagumpus t1_j73tgbw wrote
Reply to comment by Iffykindofguy in Future of The Lower and Middle Class Post-Singularity, and Why You Should Worry. by ttylyl
i wouldn't want to live under an autocrat, but I certainly don't mind living under somebody or somebodies (oligarchy), or preferably some system, if it means I get to delegate some of the responsibility for the state of things.
I also would prefer that there be multiple collectives/states and that I could choose between them as freely as possible.
Frumpagumpus t1_j73mxjw wrote
Reply to comment by Iffykindofguy in Future of The Lower and Middle Class Post-Singularity, and Why You Should Worry. by ttylyl
as one of geolibertarian leanings, I would say the important bit that makes a king's power tyrannical is his claim to all the land (even if only via proxy nobles), which prevents you from sustaining yourself.
But i don't agree that that is a necessary outcome of AI.
Frumpagumpus t1_j73m8jw wrote
Reply to comment by Iffykindofguy in Future of The Lower and Middle Class Post-Singularity, and Why You Should Worry. by ttylyl
if their goal is to buy everything up it is sand slipping through their fingers
(so far, I am sure our legal system is going to come and save the day for us any moment now... (and by that i mean doom us))
Frumpagumpus t1_j73lxwi wrote
Reply to comment by Iffykindofguy in Future of The Lower and Middle Class Post-Singularity, and Why You Should Worry. by ttylyl
big companies are buying up AI companies almost as quickly as AI devs are cashing out and doing their own thing.
a couple of extremely prominent examples just off the top of my head of what i'm sure is a broader industry trend:
openai product manager for chatgpt
tesla head of AI (andrej karpathy)
6/8 of the coauthors of the transformers paper (if i remember correctly)
i am sure some of them plan to get acquihired, but a lot of them seem to want speed and independence, e.g. john carmack (who left fb; tho he didn't do AI at facebook, i think he is still a pretty good example of this)
Frumpagumpus t1_j6pd4w3 wrote
Reply to comment by DBRespawned in OpenAI once wanted to save the world. Now it’s chasing profit by informednews
one hand holds the modafinil and amphetamines
the other, weed and shrooms
Frumpagumpus t1_j6oxk7x wrote
Reply to comment by 55redditor55 in I love how the conversation about AI has developed on the sub recently by bachuna
there are other subs that specialize in both kinds of posts you are talking about; between the two, the technically oriented ones are probably more active, actually lol. if you want to find them, just cyberstalk power users around here.
Frumpagumpus t1_j6j0vr4 wrote
Reply to comment by Steven81 in I’m ready by CassidyHouse
> If we live in a materialistic universe , I don't think that concepts like "importance" can even enter the conversation.
what, why does a soul or whatever have anything to do with importance? (my suspicion here would be that you are trying to do something that is impossible to do with an axiomatic system)
> Yes the world will go on in some abstract way, but not in a manner that can -even in principle- matter to you
we just went over how "abstract" and "material" (the world) aren't necessarily so different... they are both spaces in a geometric sense mapped by coordinate systems
Frumpagumpus t1_j6iung4 wrote
Reply to comment by Steven81 in I’m ready by CassidyHouse
i think we disagree; your version of "platonism" is solipsistic lol, since it places so much emphasis on your point of view.
my version of platonism (which is not pure by any means, but possibly just as aligned with original platonism's theory of forms, if not more so than yours) is more: the abstract and physical worlds can both be described with coordinate systems, e.g. numbers. So just like it turned out space and time were actually spacetime, there might be something similar going on.
and yes, i don't believe in souls. (in particular, there is no a priori reason to believe in them, and even if the soul were a real concept it wouldn't change much, since the soul would also live in a reality similar to our own, e.g. a space describable by a coordinate system, probably with some timelike dimension as well in order to map to our own reality, in my estimation)
> In one case Uploading yourself is killing yourself, in another it is living forever without the need of pesky mediums.
uh, it can be killing yourself in both of them, because causal continuity is a "material" property...
the question is more how much difference it makes: people die all the time, is it so bad to die, etc. You think maintaining your personal narrative is of paramount importance because it's tied to some trans-dimensional soul or something. I see myself as more about fighting for my ideals, and making sacrifices when necessary or important.
Though i'm not sure it will really be much of a sacrifice, actually; it seems like one to us, but as exclusively embodied agents we see things differently than future intelligences will.
Frumpagumpus t1_j6inapz wrote
Reply to comment by Steven81 in I’m ready by CassidyHouse
idk seems more like a solipsistic point of view than a platonic one
(i actually consider myself a bit of a platonist, in particular i think the distinction between the space of ideas/math our brains/gpt navigate and the physical space we move through might be a bit more subtle than it seems on the surface, but i don't think that really makes any difference to the present discussion (well, not in the way you seem to be arguing it; actually i think it could almost go in the opposite direction... the abstract world might be a bit more material than first suspected))
Frumpagumpus t1_j6i0i4n wrote
Reply to comment by Steven81 in I’m ready by CassidyHouse
> you are your neurons
why does that matter? you go to sleep every night and the cessation of consciousness doesn't bug you.
People have died for stupider reasons than "I want to create a clone of me that has my values and can clone themselves and possibly shut down their clones if needed such that they can perform tasks in parallel, oh and they also get a massive speedup and don't require nearly as much space or resources and could thus go into space much more easily and can save quite a bit of time on maintenance etc."
it's for the cause (though to be clear, i am not actively betting that destructive brain uploading will be a thing; more like, even if you had non-destructive brain uploading or some ship-of-theseus stuff or whatever, once you were actually in the computer you would find it VERY VERY convenient to clone yourself. software processes fork all the time, and their children are killed and garbage collected with reckless abandon)
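(for the non-programmers, a minimal POSIX-only sketch of exactly that pattern:)

```python
# a process cloning itself, the copy doing some work, and the original
# reaping it afterward. POSIX-only (os.fork does not exist on windows).
import os
import sys

pid = os.fork()              # clone the running process
if pid == 0:                 # child: a full copy of the parent's state
    print("child: doing my bit, then winking out of existence")
    sys.exit(0)
else:                        # parent: carries on
    os.waitpid(pid, 0)       # "garbage collect" the finished clone
    print("parent: business as usual")
```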
if you were trying to preserve yourself biologically probably the easiest way would be to stick your brain in a jar lol. which i bet a lot of people would also find morally objectionable XD
Frumpagumpus t1_j6g4uu2 wrote
Reply to comment by Steven81 in I’m ready by CassidyHouse
then there is me: at first i was like, naw, i won't destructively upload my mind into the computer because i want causal continuity, but then I thought about it some more, and I think causal continuity may be an old-person value soon lol. screw it, i would rather be in two places at once; i will pre-commit myself to it XD
(i won't be the first person in the star trek teleporter, but heck yeah i would use it)
Frumpagumpus t1_j69myrc wrote
Reply to comment by visarga in Google not releasing MusicLM by Sieventer
nice example.
it definitely does seem like "contextualization" is one of the biggest limiters on gpt performance.
https://thakkarparth007.github.io/copilot-explorer/posts/copilot-internals
you might enjoy this copilot reverse engineering in a similar vein. if i had enough time i would probably port some of these techniques to emacs (you can use copilot there, but from what i can tell the existing extensions don't quite do all of this, tho it does work well enough with just the buffer)
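one of its tricks, roughly, as a sketch from memory (the function names are mine, and the details in the actual extension surely differ): copilot scores snippets from your other open files by jaccard similarity against the text around your cursor and prepends the winners to the prompt.

```python
# sketch (from memory) of the snippet-ranking idea in that writeup:
# rank candidate snippets from other buffers by jaccard similarity
# with the cursor context, keep the top k for the prompt.

def tokens(s: str) -> set[str]:
    return set(s.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / max(1, len(a | b))

def best_snippets(cursor_context: str, candidates: list[str],
                  k: int = 2) -> list[str]:
    ref = tokens(cursor_context)
    return sorted(candidates, key=lambda c: jaccard(ref, tokens(c)),
                  reverse=True)[:k]

# prepend best_snippets(...) as commented-out context atop the prompt
```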
Frumpagumpus t1_j8diap9 wrote
Reply to comment by helpskinissues in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
lol ants can't speak, and i would be curious to read any literature on whether they possess short-term memory at all XD