Submitted by phloydde t3_1125s79 in singularity
Tiamatium t1_j8in2xl wrote
Right now these language models have no long-term memory capabilities, and "long-term" here means anything beyond the last few prompt/response cycles you had with them.
There are people working on bots that learn and can remember your preferences over longer time spans.
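One simple way such a bot could work is to persist remembered facts to disk and prepend them to every prompt. The sketch below is purely illustrative: the file format, the function names, and the prompt layout are all assumptions, not the design of any particular project, and no real model API is called.

```python
import json
from pathlib import Path

# Hypothetical storage location for remembered facts (an assumption).
MEMORY_FILE = Path("memory.json")

def load_memory() -> list[str]:
    """Load remembered facts from disk; empty list on first run."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(fact: str) -> None:
    """Append a fact (e.g. a stated user preference) and persist it."""
    facts = load_memory()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts))

def build_prompt(user_message: str) -> str:
    """Prepend stored facts so the model 'remembers' across sessions."""
    facts = load_memory()
    context = "\n".join(f"- {f}" for f in facts)
    return f"Known user preferences:\n{context}\n\nUser: {user_message}"
```

In a real system the assembled prompt would then be sent to a model API; more sophisticated designs retrieve only the facts relevant to the current message rather than prepending everything.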
dasnihil t1_j8ir9qu wrote
and i'd like to say "careful with that axe eugene" to the engineers adding persistent memory to these LLMs. i'm both excited and concerned to see what comes out once these LLMs are no longer responding to individual prompts but to a constant stream of information of various kinds that we make them perceive in auditory or visual form.
phloydde OP t1_j8l6s4m wrote
Nice Floyd reference. That's my point, though: once LLMs like ChatGPT start talking to themselves in an ongoing internal conversation, the way we do, we will reach the point where a true conversation happens.
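The "ongoing internal conversation" idea can be sketched as a loop that feeds a model its own previous output. Everything here is a toy assumption: `fake_llm` is a stand-in placeholder, not a real model call.

```python
def fake_llm(prompt: str) -> str:
    """Placeholder for a language model call (an assumption for this sketch)."""
    return f"Thinking about: {prompt[-40:]}"

def inner_monologue(seed: str, turns: int = 4) -> list[str]:
    """Run a self-talk loop: each turn responds to the previous output."""
    transcript = [seed]
    for _ in range(turns):
        # The model's last utterance becomes its next input.
        transcript.append(fake_llm(transcript[-1]))
    return transcript
```

A real version would swap `fake_llm` for an actual model API call and would need some stopping criterion, since such loops otherwise run forever or collapse into repetition.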
dasnihil t1_j8lapnw wrote
you are describing a self-aware system that regulates its own responses, fine-tuning them toward goal achievement, whatever goals emerge from that incoherent word-salad network of attention layers. it can't be as complex as a biological system, whose basic unit is an enormously efficient computer, probably the best in the known universe.