Submitted by wtfcommittee t3_1041wol in singularity
LarsPensjo t1_j33enxl wrote
Reply to comment by sticky_symbols in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
I saw an example where someone asked for a Python program to solve a task. ChatGPT produced such a program. But there was an error, and the person pointed out the error and asked for a fix.
ChatGPT then produced a correct program.
Isn't this an example of self-improvement? There was external input, but that is beside the point. Also, the improvement is going to be forgotten if you restart with a new prompt. But that is also beside the point; there was an improvement while the session lasted.
Notice also that ChatGPT made the improvement; the person writing the prompts did not explicitly say how to solve the error.
micaroma t1_j3469ny wrote
"But that is also beside the point, there was an improvement while the sessions lasted."
Really? That seems like the most important factor of "self-improvement". If it only corrects its error within the session but makes the same error after you refresh the page, then it didn't improve itself, it simply improved its output. There's a huge difference between permanently upgrading your own capabilities from external input and simply fixing text already written on the page with external input.
(Also, it sometimes continues to make the same error within the same session even after pointing out its mistake, which is greater evidence against true self-improvement.)
visarga t1_j34sgk5 wrote
I don't see the problem. The language model can get feedback from code execution. If it is about facts, it could have access to a search engine. The end effect is that it will be much more correct. A search engine provides grounding and has fresh data. As long as you can fit the search results or code execution output in the prompt, all is ok.
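For instance, a minimal sketch of that grounding step, assuming a hypothetical `search()` helper that returns a few text snippets:

```python
def grounded_prompt(question: str, search) -> str:
    """Build a prompt that grounds the model in fresh search results."""
    # `search` is a hypothetical helper: search(query, max_results) -> list[str]
    snippets = search(question, max_results=3)
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using only the sources below, and say so if they are insufficient.\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )
```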
But if we save the correctly executed tasks and solutions, we could build a new dataset to use for fine-tuning the model. That way it could learn as well.
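A rough sketch of collecting those verified examples, assuming the common JSONL prompt/completion layout for fine-tuning data:

```python
import json

def save_verified_example(prompt: str, code: str, passed: bool,
                          path: str = "finetune.jsonl") -> None:
    """Append a task/solution pair to the dataset, but only if it executed correctly."""
    if not passed:
        return  # keep only verified solutions in the fine-tuning set
    with open(path, "a") as f:
        f.write(json.dumps({"prompt": prompt, "completion": code}) + "\n")
```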
sticky_symbols t1_j33ih3f wrote
That's improvement, but definitely not self-improvement, since a human had to ask.
LarsPensjo t1_j33luce wrote
Aren't all self-improvements ultimately triggered by external events?
magnets-are-magic t1_j34qojy wrote
And aren’t humans taught how to function in society? It takes decades of mentorship from parents, school, and friends. And we continue to learn constantly for our entire lives.
eroggen t1_j355ace wrote
Yes, but ChatGPT doesn't have the ability to initiate the process of synthesizing external input. It can hold the conversation in "short-term memory", but it can't ask questions or experiment.
sticky_symbols t1_j37pynh wrote
Ultimately, yes. But humans can take many steps of thinking and self-improvement after that external event. ChatGPT is impacted by the event but simply does not think or reflect on its own to make further improvements.
visarga t1_j34s0ku wrote
For example, you could put a Python REPL inside ChatGPT so it can see the error messages, and allow it a number of fixing rounds.
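A minimal sketch of such a loop, assuming a hypothetical `ask_model()` function standing in for the chat API:

```python
import subprocess
import sys
import tempfile

MAX_ROUNDS = 3  # cap on the number of fixing rounds

def run_python(code: str) -> tuple[bool, str]:
    """Execute code in a subprocess and return (success, combined output)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=30
    )
    return result.returncode == 0, result.stdout + result.stderr

def solve_with_feedback(task: str, ask_model) -> str:
    """Ask the model for code, then feed execution errors back until it runs."""
    code = ask_model(f"Write a Python program to: {task}")
    for _ in range(MAX_ROUNDS):
        ok, output = run_python(code)
        if ok:
            return code
        code = ask_model(
            f"This program failed with the error below. Please fix it.\n\n"
            f"Error:\n{output}\n\nProgram:\n{code}"
        )
    return code  # best effort after the allotted rounds
```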