Surur
Surur t1_jc2aoxo wrote
Reply to comment by ninjadude93 in ChatGPT or similar AI as a confidant for teenagers by demauroy
> The LLM needed to be prompted pretty specifically in the correct direction.
And children have to be taught. ChatGPT5 will have this natively built in.
> It's not reasoning on its own merits and its still generating text based on a statistical distribution of next likely characters rather than examining the problem and formulating an answer then producing the response.
Look here, little man, do I have to demonstrate again that you have no idea what is actually going on inside the black box of the 96 layers of ChatGPT? I guess if you are slow I might have to.
> rather than examining the problem and formulating an answer then producing the response
Again, you are obviously not examining the problem before you are formulating your response. Why don't you try it a bit and see where you get. Take that as a challenge.
Surur t1_jc2940h wrote
Reply to comment by ninjadude93 in ChatGPT or similar AI as a confidant for teenagers by demauroy
> A NN will never be able to logically reason by way of deduction.
See, what you don't appear to understand, being of somewhat below-average intelligence, is that deductive reasoning is not native to humans and has to be taught.
With simple Chain of Thought prompting, deductive reasoning in LLMs improves considerably.
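To make that concrete, here is a minimal sketch of zero-shot Chain of Thought prompting. The model call itself is omitted and the function names are illustrative; only the prompt construction is shown, using the standard "Let's think step by step" trigger phrase from the literature.

```python
# Minimal sketch of zero-shot Chain of Thought prompting: the same
# question, with and without a reasoning trigger appended. The model
# call itself is omitted; only the prompt construction is shown.

def plain_prompt(question: str) -> str:
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    # Appending this trigger phrase encourages the model to emit its
    # intermediate reasoning steps before the final answer, which
    # tends to improve multi-step deduction.
    return f"Q: {question}\nA: Let's think step by step."

question = "All men are mortal. Socrates is a man. Is Socrates mortal?"
print(cot_prompt(question))
```

The only difference is the trailing trigger, yet benchmarks show it meaningfully changes how the model approaches multi-step problems.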
I hate to break it to you, little ninja, but you are not that much better than ChatGPT.
Surur t1_jc244ph wrote
Reply to comment by ninjadude93 in ChatGPT or similar AI as a confidant for teenagers by demauroy
> Humans are able to logically reason by deduction rather than inference.
This is mostly not true lol. For example, I detect a distinct lack of reasoning and logic on your part lol.
So clearly that is not the case, because if you were actually thinking you would see the resemblance and equivalence between how the human brain works and the NN in LLMs.
Surur t1_jc20j8z wrote
Reply to comment by ninjadude93 in ChatGPT or similar AI as a confidant for teenagers by demauroy
> We have specialized portions of the brain that do things other than simple statistical inference
So just because you can't physically see the layout of the neural network, you don't think it has a specialist structure? Studies on simpler models have shown that LLMs build internal representations of their world model in their layers, but according to you that is just "a bunch of terms combined together".
> Also its not all that complex, the output from training a NN is just a mathematical model.
Again, if you think LLMs only do "simple statistical inference" then replicate the system without using NNs.
Else just admit your ignorance and move on.
Surur t1_jc1xcxw wrote
Reply to comment by ninjadude93 in ChatGPT or similar AI as a confidant for teenagers by demauroy
> Input is transformed by node weights and passed along between layers getting sequentially transformed by the next weights.
Think, Forrest, think. Isn't that how the human nervous system works? Or are you one of the magic microtubule guys?
> But according to you this means its fully aware AGI right?
I never said that lol. What I am saying is that this is the most complex machine humans have ever made. You don't appear to appreciate this. You are like an idiot who thinks a car works by turning the ignition and then the wheels roll.
Surur t1_jc1ue53 wrote
Reply to comment by ninjadude93 in ChatGPT or similar AI as a confidant for teenagers by demauroy
Let me enlighten you.
ChatGPT uses a large neural network with 96 layers, an unknown number of artificial neurons and 175 billion parameters. When you type in a prompt, it is broken into tokens, which are passed to the first layer of the neural network. Each of the 96 layers in turn processes those tokens (using a selection of those billions of weights) and generates vectors, which are passed on to the next layer. This is repeated until you reach the output layer, where you end up with a distribution over possible output tokens, which a decoding algorithm then processes to select the optimal combination of tokens, which are finally converted back to text and output.
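The loop described above can be sketched in a few lines. This is a toy stand-in, not GPT itself: the "layer stack" here is a random scorer over a four-word vocabulary, and greedy decoding is used for simplicity where ChatGPT uses fancier sampling.

```python
import numpy as np

# Toy sketch of the generation loop: tokenize, run the layer stack to
# get scores over the next token, decode, repeat until an end token.
# The "layer stack" is a stand-in; real GPT layers are transformer blocks.

VOCAB = ["<end>", "hello", "world", "!"]

def toy_layer_stack(token_ids):
    # Stand-in for the 96 transformer layers: returns unnormalized
    # scores (logits) over the vocabulary for the next token.
    rng = np.random.default_rng(sum(token_ids))
    return rng.normal(size=len(VOCAB))

def generate(prompt_ids, max_tokens=5):
    ids = list(prompt_ids)
    for _ in range(max_tokens):
        logits = toy_layer_stack(ids)
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax
        next_id = int(np.argmax(probs))  # greedy decoding
        if VOCAB[next_id] == "<end>":
            break
        ids.append(next_id)
    return [VOCAB[i] for i in ids]

print(generate([1]))  # start generating from the token "hello"
```

The real system differs in scale and in the decoding strategy, but the shape of the computation - prompt in, repeated forward passes, one token out at a time - is exactly this.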
Importantly, we do not know what happens in those 96 layers of the artificial neural network - it's mostly a black box, and if you can explain exactly what happens, feel free to write your paper - I am sure a science prize awaits.
Surur t1_jc1orae wrote
Reply to comment by ninjadude93 in ChatGPT or similar AI as a confidant for teenagers by demauroy
>Chatgpt works by selecting the most likely sequence of words given the preceeding word.
Thank you for confirming that you are one of those idiots. That is like saying a car works by rolling the wheels lol.
You are clearly ignorant. Get educated for all our sakes.
Surur t1_jc13oal wrote
Reply to comment by ninjadude93 in ChatGPT or similar AI as a confidant for teenagers by demauroy
Your understanding is so superficial I would be surprised if you passed grade 1.
If ChatGPT is just "a statistical machine", please explain how you would replicate the result without a neural network.
Get educated and stop wasting my time.
Surur t1_jc04691 wrote
Reply to comment by ninjadude93 in ChatGPT or similar AI as a confidant for teenagers by demauroy
That is certainly not what your opinion is.
Surur t1_jc0436e wrote
Reply to comment by ninjadude93 in ChatGPT or similar AI as a confidant for teenagers by demauroy
I seriously doubt it.
Surur t1_jc03ijw wrote
Reply to comment by ninjadude93 in ChatGPT or similar AI as a confidant for teenagers by demauroy
You seem very confused about the nature of AI and reality.
Surur t1_jc03f3p wrote
Reply to comment by ninjadude93 in ChatGPT or similar AI as a confidant for teenagers by demauroy
That is obviously your opinion based on a misunderstanding of what chatGPT is, so I will leave it at that.
Surur t1_jbyxqvr wrote
Reply to comment by nosmelc in ChatGPT or similar AI as a confidant for teenagers by demauroy
> A confidant needs to understand the real world and human emotions, which are extremely difficult for AI systems.
ChatGPT actually shows pretty good theory of mind. I think it just needs a lot more safety training via human feedback. There is a point where things are "good enough".
Surur t1_jbyux03 wrote
Reply to comment by nosmelc in ChatGPT or similar AI as a confidant for teenagers by demauroy
An AGI can do any intellectual task a human can do. Do we really need an AI which can do brain surgery to have one good enough to be a confidant? Do you have the same demand for your therapist, that they can also design computer chips?
Surur t1_jbypao6 wrote
Reply to comment by DEMOLISHER500 in ChatGPT or similar AI as a confidant for teenagers by demauroy
AI is any intelligence which is not organic. The current implementation is neural networks, but there was a time people thought AIs would use simple algorithms. Even AlphaGo uses tree searches, so there is no real cut-off which makes one thing an AI and the other not.
Which is why OP's statement that ChatGPT is not real AI is so ridiculous.
Surur t1_jbyki9g wrote
Reply to comment by DEMOLISHER500 in ChatGPT or similar AI as a confidant for teenagers by demauroy
That's what they said before AlphaGo beat the world's best at Go lol.
Surur t1_jbyjw78 wrote
Reply to comment by Jasrek in ChatGPT or similar AI as a confidant for teenagers by demauroy
Do you think ChatGPT got this far magically? OpenAI uses Reinforcement Learning from Human Feedback (RLHF) to teach the neural network which kinds of expressions are appropriate and which are inappropriate.
Here is a 4-year-old 1-minute video explaining the technique.
For ChatGPT, the feedback was provided by Kenyan workers, and maybe they did not have as much awareness of child exploitation issues.
Clearly, there have been some gaps, and more work has to be done, but we have come very far already.
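For a flavour of how RLHF works under the hood: its first stage trains a reward model on pairs of responses ranked by human labellers. The sketch below shows only that pairwise preference loss, with made-up reward numbers; it is the core idea, not OpenAI's actual implementation.

```python
import math

# Sketch of the reward-model stage of RLHF: the model is trained so that
# the response a human preferred gets a higher scalar reward than the
# rejected one. Rewards below are illustrative stand-ins.

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Bradley-Terry style loss: small when the human-preferred response
    # already scores higher than the rejected one, large otherwise.
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A correctly ranked pair incurs a much smaller loss than a wrong one,
# so gradient descent pushes the reward model toward human preferences:
good = preference_loss(2.0, -1.0)   # preferred answer scored higher
bad = preference_loss(-1.0, 2.0)    # preferred answer scored lower
print(good < bad)
```

The policy model is then fine-tuned to produce responses the trained reward model scores highly, which is how "inappropriate" outputs get trained away.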
Surur t1_jbyif2t wrote
Reply to comment by Jasrek in ChatGPT or similar AI as a confidant for teenagers by demauroy
> If your concerns would be met by the program beginning each conversation with a disclaimer of "I am a computer program and not a real life adult human being", then I'm perfectly fine with that and support your idea.
My concern is around children. A disclaimer would not help.
> If your concern is that a chat program needs to be advanced enough to have "moral and legal" judgement, well, I guess you can come back in 15 years and see if we're there yet.
I don't think we need 15 years. Maybe even one is enough. What I am saying is that when it comes to children, a lot more safety work needs to happen.
Surur t1_jbyh7ee wrote
Reply to comment by Jasrek in ChatGPT or similar AI as a confidant for teenagers by demauroy
Why do you keep talking about hiding a bruise? The tweet is about a 13-year-old child being abducted for out-of-state sex by a 30-year-old.
The issue is that while ChatGPT may present as an adult, a real adult would have an obligation to make a report, especially if presenting in a professional capacity (working for Microsoft or Snap, for example).
I have no issue with ChatGPT working as a counsellor, but it will have to show appropriate professional judgement first, because, unlike a random friend or web page, it does represent Microsoft and OpenAI, including morally and legally.
Surur t1_jbyev3j wrote
Reply to comment by Jasrek in ChatGPT or similar AI as a confidant for teenagers by demauroy
You don't think the lack of awareness of what is appropriate for children is a risk when it comes to an AI as a confidant for a child?
We do a lot to protect children these days (e.g. background checks for anyone who has professional contact with them, appropriate safeguarding training etc.), so it is appropriate to be careful with children, who may not have good enough judgement.
Surur t1_jbyeden wrote
Reply to comment by demauroy in ChatGPT or similar AI as a confidant for teenagers by demauroy
> It is not ChatGPT.
It is actually. OpenAI has licensed their AI to Snap.
https://www.cnbc.com/2023/02/27/snap-launches-ai-chatbot-powered-by-openais-gpt.html
Surur t1_jbye85e wrote
Reply to comment by Taxoro in ChatGPT or similar AI as a confidant for teenagers by demauroy
> People need to stop thinking chatgpt and any other ai's have actual intelligence or can give proper information or adivce.. they can't.
And yet you would lose against a $20 chess computer, so when you said "any other AI" you clearly did not mean a $20 chess computer.
Surur t1_jby9wy6 wrote
Reply to comment by Taxoro in ChatGPT or similar AI as a confidant for teenagers by demauroy
ChatGPT says with an attitude like yours, you will be "left behind in an increasingly AI-driven world" and suggests you should "seek to understand the potential of AI and how it can be used to solve complex problems in a variety of fields, including healthcare, finance, and transportation."
Surur t1_jby70c0 wrote
> and the advice given, while not very original, was of very decent quality and quite fine-tuned to her situation.
This would worry you then:
https://twitter.com/tristanharris/status/1634299911872348160
Surur t1_jc2cgvn wrote
Reply to comment by ninjadude93 in ChatGPT or similar AI as a confidant for teenagers by demauroy
Lol. Have you run out of things to say? Why don't you employ your logic and reasoning for once?
Let's see:
Humans, when presented with a prompt, produce a response using their neural network, based on training they have received.
LLMs, when presented with a prompt, produce a response using their neural network, based on training they have received.
We do not know in detail how the brain works, though we know how neurons work.
We do not know in detail how the LLMs works, though we know how the GPUs work.
Fact: In most cases it is difficult to differentiate between the output of an average human and an LLM, and in many cases the human output is worse, which is why people find great utility in LLMs like ChatGPT.
LLMs, similar to children, still have deficits in symbolic computation, but developers are making progress in that area.
Conclusion: Little Ninja is no different to a pile of silicon.
BTW I tested ChatGPT's deductive skills:
> Children do not believe LLMs can reason. Little Ninja is a child. What conclusion can we draw? Please lay out your thoughts step by step.
ChatGPT:
> The premise states that "Children do not believe LLMs can reason."
> Little Ninja is a child.
> Therefore, based on the given information, we can conclude that Little Ninja may not believe that LLMs can reason.
Or how about an example not including little ninja.
Q: South Africa is at the southern tip of Africa. Chile is at the southern tip of South America. Both countries are far away from the equator. The further you get from the equator, the colder you are. What can you conclude about the climate of South Africa and Chile.
A: Based on the given information, we can conclude that both South Africa and Chile are located far away from the equator, and the statement "the further you get from the equator, the colder you are" suggests that these countries are likely to have colder climates compared to countries located closer to the equator. However, we cannot make any specific conclusions about the climates of South Africa and Chile beyond this generalization, as climate is influenced by a wide range of factors such as altitude, ocean currents, and prevailing winds, which can vary from region to region within a country. Therefore, further information about specific regions within South Africa and Chile would be needed to make more detailed conclusions about their climates.
Or:
Q: South Africa is at the southern tip of Africa. Brazil is in South America and closer to the equator. The further you get from the equator, the colder you are. Which country is warmer and which is colder.
A: Based on the given information and the known relationship between distance from the equator and temperature, we can conclude that Brazil is likely to be warmer than South Africa. This is because Brazil is located closer to the equator than South Africa, meaning that the angle of incidence of the sun's rays is more direct and the sunlight is more concentrated, resulting in more heat energy reaching the surface.
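The deductions being tested above are all one-step applications of a rule to a fact. For comparison, here is what that pattern looks like when done symbolically rather than statistically - a toy forward-chaining sketch with a made-up fact encoding, just to make explicit what "deduction" means in these examples.

```python
# Toy forward chaining over the climate example: start from facts,
# apply "if premise then conclusion" rules until nothing new follows.
# The string encoding of facts here is purely illustrative.

facts = {"far_from_equator(SouthAfrica)", "far_from_equator(Chile)"}
rules = [
    # if a country is far from the equator, it is (all else equal) colder
    ("far_from_equator", "colder"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for fact in list(derived):
                if fact.startswith(premise + "("):
                    arg = fact[len(premise) + 1:-1]   # extract the country
                    new = f"{conclusion}({arg})"
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

print("colder(Chile)" in forward_chain(facts, rules))  # True
```

A symbolic engine like this is rigid but exact; the LLM answers above reach the same conclusions from free text, hedges included, which is precisely the point in dispute.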