monsieurpooh t1_j51eo4q wrote

I would've liked some details about the "workout session" actually involving slaying a bunch of orcs with sword-fighting and magic in VR, practicing martial arts against world champions, or free-soloing a rock wall on Mars.

Otherwise, based on the description, the first thing that comes to mind is a cheesy "Black Mirror"-style gamified workout built around incredibly banal tasks like biking and running. That would be a huge downgrade even from today's options such as rock climbing, MMA, dancing, or whatever your actual passion is (which most people have yet to discover, and hence haven't realized a workout can be something more fulfilling than "forcing yourself to go to the gym"). /end rant

7

monsieurpooh t1_j35wbej wrote

It's probably going to devolve into a semantics debate.

ChatGPT's model weights (its "neurons") stay the same until they retrain it and release a new version.

But you feed it back its own output plus the next prompt, and now it has extra context about the ongoing conversation.
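As a sketch of that point: the weights never change between turns; the only "memory" is the growing transcript that gets re-fed as input each time. The function names here are hypothetical stand-ins, not any real API:

```python
def generate(prompt: str) -> str:
    # Placeholder: a real model would predict a continuation of `prompt`.
    return f"[reply to {len(prompt)} chars of context]"

def chat(turns):
    transcript = ""
    replies = []
    for user_msg in turns:
        transcript += f"User: {user_msg}\nAssistant: "
        reply = generate(transcript)  # the full history is passed every turn
        transcript += reply + "\n"
        replies.append(reply)
    return replies
```

Each call to `generate` is independent; later turns only "know" about earlier ones because the transcript keeps growing.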

For now I would have to say it shouldn't be described as "reflecting on its own thinking," since each turn is independent of the others and it's simply trying to predict what would plausibly appear next in a piece of text. For all it knows, the conversation could be an interview in a magazine, etc.

That being said... I'm a big fan of the saying that AI doesn't need human-brain-style thinking to achieve a working imitation of human-level intelligence, just like the airplane is an example of flying without imitating the bird.

2

monsieurpooh t1_j2xw6ta wrote

These models are trained to do only one thing really well: predict which word should come after an existing prompt, learned from reading millions of examples of text. The input is the words so far and the output is the next word. That is the entirety of the training process. They aren't taught to look up sources, summarize, "run nootropics through its neural network," or anything like that.
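To make that "input is the words so far, output is the next word" idea concrete, here's a toy illustration of the same objective using a simple bigram counter instead of a transformer (purely illustrative; all names are made up):

```python
from collections import Counter, defaultdict

def train(corpus):
    # "Training": tally which word follows which across the example texts.
    counts = defaultdict(Counter)
    for text in corpus:
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1  # input: word so far; output: next word
    return counts

def predict_next(counts, word):
    # "Inference": return the most frequent continuation seen in training.
    return counts[word].most_common(1)[0][0]

model = train(["the cat sat on the mat", "the cat ran"])
```

A real LLM conditions on the whole prompt rather than one preceding word, but the objective is the same shape: given context, score the next token.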

From this simple directive of "what should the next word be," they've been able to accomplish some pretty unexpected breakthroughs, in tasks which conventional wisdom would've held to be impossible for a model programmed merely to figure out the next word: common-sense Q&A benchmarks, reading comprehension, unseen SAT questions, etc. All this was possible only because the huge transformer neural network is very smart and, as it turns out, can produce emergent cognition, where it seems to learn some logic and reasoning even though its only real goal is to figure out the next word.

Edit: Also, your original comment appears to be describing inference, not training

2

monsieurpooh t1_j2xt9m4 wrote

Yes, this is the only famous show I know of that actually portrays a singularity-style AI rather than a Terminator-style one.

Only thing is, viewers should be aware the AI stuff doesn't really pick up until the middle and end of the series. The first season or two will seem a little cheesy, but it gets a lot better.

3

monsieurpooh t1_j1r6ezu wrote

I wouldn't be so sure about that. The first AI joke in history, "horses go to Hayvard," was a perfectly functional joke, and that was years ago, from Google's GPT-like chatbot. I'm sure GPT-3 and ChatGPT have gotten far more capable since then, and there must be tons of examples of jokes they've made that were actually legitimate, unlike what's in the OP.

Also, common sense isn't easy, and attempts to codify the logic into anything other than what a neural net naturally does haven't been very successful (as far as I know). The only reason common-sense benchmarks got better is that the whole neural net got better.

1

monsieurpooh t1_j14ooy9 wrote

Five senses is really misleading if it's not full-dive, Matrix-style VR. If you can't do judo in it, then it's not really fully immersive VR. If you try to do a parkour kong vault and end up falling flat on your face because the support wasn't there in real life, then it's not true VR. If your in-game character gets double-legged and your real-life self is still standing, that's an instant desync.

1

monsieurpooh t1_ivwdsux wrote

IMO, it will most likely require AGI. It is almost the same task as building a fully automated software engineer.

In the meantime, we have my game AI Roguelite on Steam to entertain us.

Edit: some might wonder why it would take so long when we already have automatic video generation, etc. This only becomes clear when you start thinking about how to build this kind of game (i.e., what would AI Roguelite 2 look like, and how would it be built?). It is not enough to auto-generate video, models, or even animation. With those three we could get something like a much more visually detailed No Man's Sky, but not necessarily better gameplay. The game needs more info: what each attack/ability actually does, creative enemy abilities/behavior, what the effect of an item should be (which is often very open-ended and shouldn't simply be +dmg), etc. When all of these are taken into consideration, it basically requires AGI.
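As a rough illustration of why mechanics are harder than visuals, here's a hypothetical sketch (all names invented, not from AI Roguelite) of the kind of structured, open-ended effect data a game like this needs per item, beyond any art assets:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GameState:
    hp: int = 100
    inventory: list = field(default_factory=list)
    flags: set = field(default_factory=set)

@dataclass
class ItemEffect:
    description: str
    # An arbitrary game-state transformation, not just a stat bump.
    apply: Callable[[GameState], GameState]

# The kind of "creative" effect that goes beyond +dmg:
mirror_amulet = ItemEffect(
    description="Reflects the next curse back at its caster.",
    apply=lambda s: GameState(s.hp, s.inventory, s.flags | {"reflect_curse"}),
)

state = mirror_amulet.apply(GameState())
```

Generating a convincing mesh for the amulet is one problem; inventing a coherent `apply` rule for it, and for every enemy ability it interacts with, is the part that starts to look like AGI.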

2

monsieurpooh t1_iuxoei8 wrote

Further muddying the waters: sometimes the bias is correct and sometimes it isn't, and the way the terms are used doesn't make it easy to distinguish between those cases, so it easily becomes a sticking point in political arguments where people talk past each other.

A bias could be said to be objectively wrong if it leads to suboptimal performance in the real world.

A bias could be objectively correct and improve real-world performance but still be undesirable, e.g. leveraging the fact that some demographics are more likely to commit crimes than others. That is a proven fact, but if implemented it makes the innocent people among those demographics feel like second-class citizens and can also lead to domino effects.

1

monsieurpooh t1_ir96t50 wrote

Those image AIs have been better for way longer and can generate totally unseen, ridiculous prompts; I find it highly unlikely they're committing a similar blunder. I think Dance Diffusion may just be flawed and need some fixing, or else the attempt to translate the approach to a time-based medium just doesn't work. I suspect using the very recently released video-generating AIs to do this will fare much better.

2