monsieurpooh
monsieurpooh t1_j35wbej wrote
Reply to comment by LarsPensjo in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
It's probably going to devolve into a semantics debate.
ChatGPT's model weights stay the same until they retrain it and release a new version.
But each turn, you feed it back its own output plus the new prompt, so it has extra context about the ongoing conversation.
For now I'd say it shouldn't be described as "reflecting on its own thinking", since each turn is independent of the others and it's simply trying to predict what text would plausibly come next. For all it knows, the conversation could be an interview in a magazine.
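To illustrate the point: the loop below is a minimal sketch of how a stateless chat works. The weights never change between turns; the only "memory" is the transcript, which gets re-sent as part of the prompt every time. The `generate` function is a hypothetical stand-in for any next-token completion model, not a real API.

```python
def generate(prompt: str) -> str:
    # Placeholder for a real completion model; just echoes a canned reply here.
    return "(completion for prompt of length %d)" % len(prompt)

def chat(history: list[str], user_message: str) -> str:
    history.append("User: " + user_message)
    # Each turn is an independent prediction over the full transcript so far;
    # the model has no state of its own between calls.
    prompt = "\n".join(history) + "\nAssistant:"
    reply = generate(prompt)
    history.append("Assistant: " + reply)
    return reply

history: list[str] = []
chat(history, "Are you sentient?")
chat(history, "Elaborate.")
```

The second call only "remembers" the first because the first turn's text was concatenated back into the prompt.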
That being said... I'm a big fan of the saying that AI doesn't need human-brain-style thinking to achieve a working imitation of human-level intelligence, just like the airplane is an example of flying without imitating the bird.
monsieurpooh t1_j2z3bt5 wrote
Reply to comment by indigoHatter in Asked ChatGPT to write the best supplement stack for increasing intelligence by micahdjt1221
Thanks. I find your edited version hard to understand and still a little wrong, but I won't split hairs over it. We 100% agree on the main point though: This algorithm is prone to emulating whatever stuff is in the training data, including bro-medical-advice.
monsieurpooh t1_j2y9k2b wrote
Reply to comment by louddoves in Asked ChatGPT to write the best supplement stack for increasing intelligence by micahdjt1221
I was about to comment the same thing and forgot about it. Every time I see this mistake I can't help but visualize someone huffing and sighing about something they're supposed to be suspicious of
monsieurpooh t1_j2xw6ta wrote
Reply to comment by indigoHatter in Asked ChatGPT to write the best supplement stack for increasing intelligence by micahdjt1221
These models are trained to do only one thing really well: predict which word should come after an existing prompt, by reading millions of examples of text. The input is the words so far and the output is the next word. That is the entirety of the training process. They aren't taught to look up sources, summarize, or "run nootropics through their neural network" or anything like that.
From this simple directive of "what should the next word be", they've accomplished some pretty unexpected breakthroughs on tasks conventional wisdom held to be impossible for a model merely programmed to predict the next word: common-sense Q&A benchmarks, reading comprehension, unseen SAT questions, etc. All of this was possible only because the huge transformer neural network is very capable and, as it turns out, can produce emergent cognition: it seems to learn some logic and reasoning even though its only real goal is to figure out the next word.
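For a toy picture of that input/output contract: the snippet below "trains" a bigram model by counting which word follows which in a tiny corpus, then predicts the next word. Real models use transformers over tokens and learned weights rather than counts, but the interface is the same: words so far in, next word out.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# "Training": tally, for each word, which word followed it in the corpus.
counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Inference: output the most frequent next word seen during training.
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" ("cat" follows "the" twice, "mat" once)
```

Everything GPT does is, at bottom, a vastly scaled-up version of this one prediction step applied over and over.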
Edit: Also, your original comment appears to be describing inference, not training
monsieurpooh t1_j2xt9m4 wrote
Reply to comment by Crypt0n0ob in Your favorite series about AI? by DJswipeleft
Yes, this is the only famous show I know of that actually portrays a singularity-style AI rather than a Terminator-style one.
Only thing is, viewers should be aware the AI storyline doesn't really pick up until the middle and later seasons. The first season or two will seem a little cheesy, but it gets a lot better.
monsieurpooh t1_j2xsvrj wrote
Reply to Your favorite series about AI? by DJswipeleft
Person of Interest beats most shows by a mile. Only issue is the first season or two have almost nothing to do with AI; it really only picks up in the middle and later seasons.
monsieurpooh t1_j2wx7o9 wrote
Reply to comment by indigoHatter in Asked ChatGPT to write the best supplement stack for increasing intelligence by micahdjt1221
Why do people keep spreading this misinformation? The process you described is not how GPT works. If it were just finding a source and summarizing it, it wouldn't be capable of writing creative fake news articles about arbitrary topics.
monsieurpooh t1_j1r6ezu wrote
Reply to comment by turntable_server in One thing ChatGPT desperately needs: An upgrade to its humor by diener1
I wouldn't be so sure about that. The first AI joke in history was "horses go to Hayvard", and it's a perfectly functional joke. That was years ago, from Google's GPT-like chatbot. I'm sure GPT-3 and ChatGPT have gotten far more capable since then, and there must be plenty of examples of jokes they've made that were actually legitimate jokes, unlike the ones in the OP.
Also, common sense isn't easy, and attempts to codify that logic into anything other than what a neural net naturally learns haven't been very successful (as far as I know). The only reason common-sense benchmarks improved is that the whole neural net got better.
monsieurpooh t1_j1ozdfg wrote
Reply to comment by refugezero in One thing ChatGPT desperately needs: An upgrade to its humor by diener1
Heheh, technically true, and yet... it has broken world records on benchmarks such as common-sense reasoning.
An intelligence doesn't need to operate the same way a human brain does to achieve intelligent behavior.
monsieurpooh t1_j14ooy9 wrote
Reply to comment by Shelfrock77 in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
Five senses is really misleading if it's not full-dive Matrix-style VR. If you can't do judo in it, it's not a fully immersive VR. If you try a parkour kong vault and end up falling flat on your face because the support wasn't there in real life, it's not true VR. If your in-game character gets double-legged and your real-life self is still standing, that's an instant desync.
monsieurpooh t1_j14o4lb wrote
Reply to comment by SoulGuardian55 in To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
You just pretty much described what AI Roguelite is trying to do
monsieurpooh t1_j14ns0e wrote
Reply to To all you well-read and informed futurologists here: what is the future of gaming? by Verificus
Have you checked out AI Roguelite yet? It's the world's first game to use GPT for actual game mechanics. Granted, it's only text-based. For a future version, judging by the way the technology is progressing, I'd envision skipping the 3D-modeling step altogether and jumping straight to on-the-fly video generation.
monsieurpooh t1_j0osyyj wrote
Reply to comment by Fortkes in OpenAI Forecasts $1 Billion in Revenue by 2024 by liquidocelotYT
wtf, here I was investing in s&p500 when I could've been focused on BETTING ON THE SINGULARITY HELL YEAH
monsieurpooh t1_ixibzmd wrote
Reply to comment by [deleted] in Neuralink Co-Founder Unveils Rival Company That Won't Force Patients To Drill Holes in Their Skull by Economy_Variation365
What about OpenWater (which claims to have very high res non-invasive scanning technology but has yet to show their technology publicly)?
monsieurpooh t1_ivwdsux wrote
Reply to Will Text to Game be possible? by Independent-Book4660
IMO, it will most likely require AGI. It's almost the same task as a fully automated software engineer.
In the meantime, we have my game AI Roguelite on Steam to entertain us.
Edit: some might wonder why it would take so long when we already have automatic video generation etc. It only becomes clear once you start thinking about how to actually build this kind of game (i.e. what would AI Roguelite 2 look like, and how would it be built?). It's not enough to auto-generate video, models, or even animation; with those three we could get something like a much more visually detailed No Man's Sky, but not necessarily better gameplay. The game needs more information: what each attack/ability actually does, creative enemy abilities/behavior, what the effect of an item should be (items are often very open-ended and shouldn't simply be +dmg), etc. Once all of that is taken into consideration, it basically requires AGI.
monsieurpooh t1_iuxoei8 wrote
Reply to comment by mynd_xero in Google’s ‘Democratic AI’ Is Better at Redistributing Wealth Than America by Mynameis__--__
Further muddying the waters: sometimes the bias is correct and sometimes it isn't, and the way the terms are used doesn't make it easy to distinguish between those cases, so it easily becomes a sticking point in political arguments where people talk past each other.
A bias could be said to be objectively wrong if it leads to suboptimal performance in the real world.
A bias could also be objectively correct and improve real-world performance but still be undesirable, e.g. leveraging the fact that some demographics are more likely to commit crimes than others. That is a proven fact, but implementing it makes the innocent people among those demographics feel like second-class citizens and can also lead to domino effects.
monsieurpooh t1_ithpe6v wrote
Reply to comment by Zermelane in Given the exponential rate of improvement to prompt based image/video generation, in how many years do you think we'll see entire movies generated from a prompt? by yea_okay_dude
The thing is, sometimes improving a model in a general way solves multiple problems at once (as GPT itself has proven, since it can be used for image generation despite being made for text).
monsieurpooh t1_ir96v6f wrote
Reply to StabilityAI announced AI Music Generator Harmonai based on Dance Diffusion Model by Ezekiel_W
They have the "Don't Stop Believin'" chorus as one of the samples. This makes me distrust the rest of the results. I suspect they'd have better luck adapting one of the newly released video-generation models to this task, rather than a model meant for images.
monsieurpooh t1_ir96t50 wrote
Reply to comment by BbxTx in StabilityAI announced AI Music Generator Harmonai based on Dance Diffusion Model by Ezekiel_W
Those image AIs have been good for much longer and can generate totally unseen, ridiculous prompts; I find it highly unlikely they're committing a similar blunder. I think Dance Diffusion may just be flawed and need some fixing, or else the attempt to translate the approach to a time-based medium just doesn't work. I suspect using the very recently released video-generating AIs for this will fare much better.
monsieurpooh t1_ir96lev wrote
Reply to comment by the_coyote_smith in StabilityAI announced AI Music Generator Harmonai based on Dance Diffusion Model by Ezekiel_W
One of the samples under "freshly generated samples" is the "Don't Stop Believin'" chorus.
monsieurpooh t1_j51eo4q wrote
Reply to comment by Evil_Patriarch in The year is 2058. I awake in my pod. by katiecharm
I would've liked some details about the "workout session" actually involving slaying a bunch of orcs via sword-fighting and magic in VR, practicing martial arts against world champions, or free-soloing a rock wall on Mars.
Otherwise, based on the description, the first thing that comes to mind is a cheesy Black Mirror-style gamified workout built on incredibly banal tasks like biking and running. That's a huge downgrade even from today's options such as rock climbing, MMA, dancing, or whatever your actual passion is (which most people have yet to discover, and hence haven't realized a workout can be something more fulfilling than "forcing yourself to go to the gym"). End rant.