Submitted by SoulGuardian55 t3_10ro4hc in singularity
Nadeja_ t1_j6wtvef wrote
-
Neural networks (the human brain included) tend to “hallucinate” and make things up; your own memory isn’t 100% reliable either, which is why we help it along with pictures, notes, journals, written-down numbers and so on (not just because we forget, but also because we might remember things incorrectly). If you want to retrieve accurate info from a NN, you need it to understand your question and come up with a probable answer, then find the source on the net or in a database, and then, if a source is found, a quote function returns the exact quote/info. Trust-wise there is still the alignment problem, but that’s another story.
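A rough sketch of that retrieve-then-quote flow, just to make the idea concrete. The “understanding” step is faked with keyword matching and the in-memory corpus stands in for a real search index or database; none of this is a real library, just placeholders for the pipeline described above.

```python
from typing import Optional

# Stand-in for a real document store / search index.
CORPUS = {
    "wright_flyer": "The Wright brothers made the first powered flight in 1903.",
    "wheel": "The earliest known wheels date to Mesopotamia around 3500 BC.",
}

def interpret_question(question: str) -> set:
    # Stand-in for the NN "understanding" the question: just keywords.
    return {w.strip("?.,").lower() for w in question.split()}

def retrieve(keywords: set) -> Optional[str]:
    # Stand-in for searching the net or a database: crude keyword overlap.
    best_id, best_score = None, 0
    for doc_id, text in CORPUS.items():
        score = len(keywords & {w.strip(".,").lower() for w in text.split()})
        if score > best_score:
            best_id, best_score = doc_id, score
    return best_id

def answer_with_quote(question: str) -> str:
    doc_id = retrieve(interpret_question(question))
    if doc_id is None:
        return "No source found; refusing to make something up."
    # The "quote function": return the exact stored text, not a paraphrase.
    return f'According to source "{doc_id}": "{CORPUS[doc_id]}"'

print(answer_with_quote("When was the first powered flight?"))
```

The point of the last step is that the quoted text comes verbatim from the source, so it can’t be hallucinated even if the model’s own guess was wrong.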
-
Yeah, that sounds like “we don’t need the wheel, because we did fine without it in the past 300,000 years”.
-
“Would only”, “would never”… that’s reasoning in absolutist terms, which ends up in faulty predictions such as “heavier-than-air machines will never fly”. For now, with the current models, you still have to review the results: the generated answer may contain inaccurate or made-up info, the generated code may have bugs or not work at all, the generated image comes with weird stuff you notice when you zoom in, or the hands look funny, and so on. But it’s pretty likely that eventually we will have reliable models that understand context better, that know how a hand is supposed to look and how it works, that return accurate, sourced info, and that code like the best professionals. Our brain is the proof that it’s doable, unless you believe (based on no evidence) that it works because of something magical.
-
You can hardly be 100% sure of anything, if you ask a philosopher, and there may be some issues, but there are also peer-reviewed papers.
-
Or maybe the opposite happens and there are fewer wrong diagnoses: machine learning is already being used in the medical field. Still, students shouldn’t delegate their learning, reasoning and writing to language models and other models (not yet at least; I’m not sure how I’d feel once an ASI is around), but should use them to improve (e.g. you ask ChatGPT to improve your essay and you learn how to write better); see the sketch below.
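A minimal sketch of that “improve, don’t delegate” workflow: ask the model for feedback on a draft instead of a finished essay. This assumes the `openai` Python package (v1+) with an `OPENAI_API_KEY` in the environment; the model name and the `essay.txt` file are placeholders.

```python
from openai import OpenAI

client = OpenAI()

def essay_feedback(essay: str) -> str:
    # Ask for critique and explanations, not a rewrite, so the student
    # still does the actual writing and learns from the feedback.
    prompt = (
        "Here is my draft essay. Do not rewrite it. Instead, list the "
        "weak arguments, unclear sentences, and grammar mistakes, and "
        "explain why each one is a problem so I can fix it myself:\n\n"
        + essay
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    with open("essay.txt") as f:
        print(essay_feedback(f.read()))
```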
SoulGuardian55 OP t1_j6wvwhg wrote
>but use them to improve (e.g. you ask ChatGPT to improve your essay and you learn how to write better).
I used that argument with one of them, but he tried to counter it like this: "Do you really think students would use such systems to improve themselves, even if they were 'education-type' systems? I highly doubt that would be the case."
SoulGuardian55 OP t1_j6wwav7 wrote
One more thing: the dispute was with people who are pretty young (22 years old, one was 23, and the oldest was 28).