SoulGuardian55 t1_jdh9576 wrote
Reply to comment by Rofel_Wodring in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
Some people will still believe that AI should not be trusted to diagnose patients, because no matter how much it improves and develops, "mistakes will creep in, or will be there from the beginning, costing patients their lives."
I put a counter-question to this thesis: "If AI won't do, then what is better? Human doctors also make mistakes, time and again, and those mistakes cost people their lives."
SoylentRox t1_jdilp67 wrote
Or another way to put it: the mistake RATE is probably significantly lower for even relatively crude AI. Human doctors have a very high error rate, possibly 30 percent or more: wrong diagnoses, suboptimal prescriptions, failure to consider relevant life-threatening issues by focusing too narrowly on the chief complaint.
(I define "error" by any medical treatment that is measurably less effective than the current gold standard, for all issues the patient has)
If known anti-aging drugs work, human doctors commit essentially a 100 percent error rate by failing to prescribe them.
Current AI can fail in certain situations, so I think human doctors and other AI should be checking their work, but yeah, if you want to live get an AI doctor.
DolanDukIsMe t1_jdj8zre wrote
No literally. My mom died because of bullshit like racism and apathy. Fuck human doctors man lol.
Mapleson_Phillips t1_jdi9ryh wrote
Human doctors' ability deteriorates with age rather than improving with experience, because their knowledge base keeps getting more out of date. Of course they overvalue their own ability.
SoylentRox t1_jdimfgh wrote
And the system is rigged so that new doctors aren't trained in sufficient numbers, meaning that even bad doctors, so long as they meet some low bar, can continue practicing until age 70 or 80 or beyond.
Altruistic_Spell1501 t1_jdn7cq9 wrote
Because ppl who made $200k/yr their whole career want to work until they're 80.
Rofel_Wodring t1_jdhd63i wrote
I'm sympathetic to the argument that we should still have make-work jobs for unaugmented humans so that they don't become completely passive, but jobs where there are actual lives on the line like civil engineer and prosecutor and physician and teacher ain't it.
Queue_Bit t1_jdhnb6u wrote
If society gets to a point where we don't need people to work anymore, and society makes me do useless busywork anyway, I am gonna lose my mind.
Mapleson_Phillips t1_jdibd5r wrote
You’re still sane? I guess you haven’t brushed against middle-management very much. Make-work jobs, indeed.
Rofel_Wodring t1_jdhs227 wrote
I agree, but a lot of people get self-righteous and xenophobic and essentialist at the idea that humans could be better off, morally and intellectually, for not having to work. I'm tired of those people derailing discussions of the future, so I find it easier just to humor their vision of the future that's just 'Jetsons, but as an adult dramedy'.
Professional-Welder9 t1_jdlizpb wrote
People attach meaning to the work they do and expect others to do the same for some reason. They've gotten used to working shitty jobs and sadly want you to do so as well.
Professional-Welder9 t1_jdlivbt wrote
Literally this. Some people find meaning in simply working, but I don't want them coming for me if AI does remove the need to work. I can find better ways to better myself.
SgathTriallair t1_jdhqfom wrote
I am so opposed to make-work jobs. If we can support all of humanity then we MUST do so. People can take up hobbies (and should be encouraged to) like painting and running.
As for the fact that it will one day be immoral to let humans do work which puts them in charge of human life, instead of leaving it to a more competent computer, I completely agree.
AaronBurrSer t1_jdi87x3 wrote
The make-work jobs will just be for poor people. And it won’t be the jobs you posited. Making work for the sake of work will only be used to further the distance between classes.
Automating jobs and giving them to AI should be an equalizer
Smellz_Of_Elderberry t1_jdibyhi wrote
They won't become passive, they will become revolutionary.
People really don't get what happens when people become disillusioned.
Professional-Welder9 t1_jdlj2jl wrote
I crave for no work. I don't find meaning in being forced to work to live.
Smellz_Of_Elderberry t1_jdn5mic wrote
Same.. but the issue is that at the start people will just lose their jobs; they won't receive any recompense. Then they'll have their homes and belongings taken by collections because of debt they can't pay. If a solution isn't implemented fast enough, people will decide it's better to burn it all down and hope they get something better in the next system.
bactchan t1_jdj7oda wrote
This is a bad take. If people FIND joy in working, that's one thing, but make-work for its own sake is what we have now, and it's bullshit. Society at large should benefit equally from the advancement of automation and be free to choose how to spend its time without threat to lives or needs: housing, food, etc. Imagine how many people might discover and innovate in the arts with the benefit of extra free time, better mental health from the lack of constant existential crises, and generative AI tools to help them hone a skill or craft.
Glad_Laugh_5656 t1_jdi0rdq wrote
Teachers have lives on the line? Tf?
FaceDeer t1_jdifph6 wrote
It's harder for an individual teacher to screw up someone's life through incompetence, but collectively they're rather important for setting up the foundations of who children are and what they become.
It's a tricky thing to argue for changes, though, since it takes a long time to determine the outcome of any experiments. With doctors and prosecutors the outcomes are much quicker and often much clearer.
SoylentRox t1_jdilyc7 wrote
A personalized AI tutor and a curriculum with objective measurements, where once a student scores high enough they finish, would probably make teachers fairly unnecessary other than as a "hall monitor" to oversee groups of kids on their devices being taught by AI.
FaceDeer t1_jdinshj wrote
That's the easy part, though. Coming up with that curriculum and determining what objective measurements count as "finished" is the hard part. You still need to tell the AI what it is that you want it to teach the children.
SoylentRox t1_jdipsfx wrote
I mean you could simply grab a heap of exams the school district already gave and the standardized tests and just use that. Not saying this is an optimal standard but it's what we already use.
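The "score high enough and you're finished" rule is easy to sketch. Here's a toy version; the threshold, attempt cap, and simulated scores are all invented for illustration, not anything a real district uses:

```python
def mastery_loop(score_fn, threshold=0.9, max_attempts=10):
    """Hypothetical mastery-based progression: the student keeps getting
    AI-tutored practice until they clear the threshold on the district's
    existing exams, then they're done with the unit."""
    for attempt in range(1, max_attempts + 1):
        if score_fn(attempt) >= threshold:
            return attempt  # unit finished on this attempt
    return None  # never passed: flag for a human teacher to review

# Simulated student who improves with each practice round.
scores = {1: 0.55, 2: 0.78, 3: 0.93}
finished_on = mastery_loop(lambda attempt: scores[attempt])
```

The nice property is that "finished" is defined purely by the existing standardized measurement, exactly as suggested above, with the AI tutor only deciding what practice to serve between attempts.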
Rofel_Wodring t1_jdj1ii8 wrote
I am positive that an AI will do a better job of coming up with a useful curriculum than a non-augmented human could. Why? Because curricula inherently have a lot of waste in them. It is impossible to design, let alone teach in accordance with, a curriculum that suits both a child who's slightly behind and one who already knows the topic when you have to teach 20 of them at once. The result? Students falling further and further behind their smarter or more experienced classmates.
Like, there's a reason why language textbooks tend to be corny AF, like I'm taking a Differential Equations course designed by Sesame Street: both children and adults are the intended audience, and textbooks can't adjust their language to accommodate both.
Rofel_Wodring t1_jdj11wl wrote
>It's a tricky thing to argue for changes, though, since it takes a long time to determine the outcome of any experiments.
Not if the improvement is immediate and profound, and it will be. The AI doesn't even need to be super-advanced, though it will inevitably be. Just being able to personalize instruction for individual students would vastly improve the quality. And once we have 10-year old kids from poorer schools beating private-school non-AI-taught teenagers in math contests, I expect for AI to completely infiltrate education. If it hasn't already.
Strike_Thanatos t1_jdixief wrote
My view has always been that the need for admiration will drive us to become a culture of artists and athletes. I mean, that's kind of what happens to rich people when they stop caring about money. They pick up hobbies and try to spend time with others. And really, how do people expect to get laid if they don't do something cool when they can? Though there will be people whose "something cool" will be competitive gaming or streaming or something, a truly post-scarcity society will have many more opportunities to maintain fitness.
Tyrannus_ignus t1_jdhgg90 wrote
if you are worried about redundant people being too passive in a new society, you could systematically remove them by putting them in the military.
Rofel_Wodring t1_jdhtpij wrote
As someone who was in the military: lmao. The only time I had a non-punitive work ethic was when I was promised time off for finishing a task early. I became lazier and more cynical because of my service. Like everyone else.
Surely we can think of something better.
claushauler t1_jdhl0br wrote
Or using the military to systematically remove them. 50/50 chance it breaks either way.
Ok-Let1086 t1_jdi0ysj wrote
People will learn with time and experience that AI will probably make far fewer mistakes than humans.
mckirkus t1_jdjwdqw wrote
Self-driving cars would save thousands of lives, but they're not going to allow it. The difference here is that MDs will use AI secretly.
The_Flying_Stoat t1_jdkop7t wrote
I know an MD, very easy to imagine her using a copilot-like system while doing her charting and such. Would make the job both faster and less stressful.
Whispering-Depths t1_jdi3p3g wrote
interestingly you can build the AI in a redundant way so that it asks itself "what else could this be?" "What other tests could be made here?" "What is the likelihood that the patient will survive x many minutes/hours/days/months/etc for these tests that need to be made for us to be sure so that we can make the least damaging and most accurate diagnosis and treatment cycle?"
But honestly it's probably just gonna be a standardized "Okay, step in the ultrasound + electromagnetic etc imaging pod" and it will do like a couple swoops and use AI to build an image of your entire body, then make recommendations and stuff based on that.
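That "what else could this be? what other tests could we run? how long can the patient safely wait?" loop can be sketched in a few lines. This is a toy, not a real diagnostic system: the hypothesis names, probabilities, information-gain numbers, and the crude belief update are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    probability: float  # model's current belief in this diagnosis

@dataclass
class Test:
    name: str
    information_gain: float  # how much the test narrows the differential
    duration_hours: float    # time the patient must wait for results

def plan_workup(hypotheses, available_tests, survival_budget_hours):
    """Hypothetical 'what else could this be?' loop: keep ordering the most
    informative test that still fits inside the time the patient can safely
    wait, until one hypothesis clearly dominates the differential."""
    ordered = []
    remaining = survival_budget_hours
    hypotheses = sorted(hypotheses, key=lambda h: h.probability, reverse=True)
    while hypotheses[0].probability < 0.95:  # stop once near-certain
        candidates = [t for t in available_tests
                      if t not in ordered and t.duration_hours <= remaining]
        if not candidates:
            break  # no safe test left; act on the current best hypothesis
        best = max(candidates, key=lambda t: t.information_gain)
        ordered.append(best)
        remaining -= best.duration_hours
        # Toy belief update: information gain shifts mass toward the leading
        # hypothesis (a stand-in for a real Bayesian update over test results).
        shift = min(0.2 * best.information_gain,
                    1.0 - hypotheses[0].probability)
        hypotheses[0].probability += shift
    return hypotheses[0], ordered

lead = Hypothesis("glioma", 0.6)
alt = Hypothesis("abscess", 0.4)
tests = [Test("MRI", information_gain=1.5, duration_hours=2),
         Test("biopsy", information_gain=3.0, duration_hours=24)]
best, workup = plan_workup([lead, alt], tests, survival_budget_hours=48)
```

The point is just the structure: the time budget constrains which tests are even considered, which is the "will the patient survive long enough for this test" question made explicit.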
SoylentRox t1_jdim903 wrote
Same for humans. Tv shows like ER where they ask for a CBC and chem7 every time...
begaterpillar t1_jdj2e2d wrote
tricorder version 1.0
even_less_resistance t1_jdi4rw9 wrote
Why did you call the AI “he”?
ddeeppiixx t1_jdj3l15 wrote
If your mother tongue has gendered nouns, you sometimes tend to use he/she unconsciously. For example, in French AI is feminine, while in Arabic it's masculine.
even_less_resistance t1_jdjcznw wrote
Wow that is cool to know!
YobaiYamete t1_jdjn0zr wrote
Why do we call ships she etc? People gender stuff all the time
even_less_resistance t1_jdjnzl9 wrote
I was just curious if they felt it has a masculine vibe or not I wasn’t making a value judgment. I thought the POV on different languages gendering nouns by default was a good one and I hadn’t considered it. Not all questions are in bad faith.
begaterpillar t1_jdj27c6 wrote
i would just use gender-neutral terminology until/if they decide to gender themselves. Not "it": they/them.
econpol t1_jdkde9p wrote
Medical errors are the number three cause of death in the US. AI is unstoppable.
begaterpillar t1_jdj1z6v wrote
this is the self driving car argument all over again. i suspect it will follow a similar trajectory
Smellz_Of_Elderberry t1_jdibox6 wrote
The issue is accountability. If a doctor gives a bad diagnosis, you can sue them and receive some kind of recompense. Who takes responsibility when the ai amputates both your legs, when all you have is a leg cramp?
SoylentRox t1_jdimr5l wrote
Sue the doctor using AI. For essentially as far as we can imagine, a human doctor will still be on paper the one practicing medicine. They are just using AI as a tool to be more productive.
As the AI gets stronger and stronger, the human does less and less, since the AI itself reports a confidence level that, with the right software, can be made extremely accurate. So the human doctor lets the AI handle the cases where the AI is very confident.
Because multiple AIs will check anything done to you, this accidental amputation is unlikely, and most suits will fail because exact records of the AI's reasoning and what it knew are kept. You can just see in the logs that the AI did everything possible and picked the treatment with the highest probability of success.
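The confidence-gated handoff described here is simple to write down. A rough sketch; the threshold value and log fields are invented, and a real deployment would calibrate the cutoff rather than hard-code it:

```python
CONFIDENCE_THRESHOLD = 0.98  # invented cutoff, would need calibration

def route_case(case_id, ai_diagnosis, ai_confidence, audit_log):
    """Let the AI act alone only when it is very confident; otherwise
    defer to the supervising physician. Every decision is appended to an
    audit log so the reasoning can be reviewed later, e.g. in a suit."""
    entry = {
        "case": case_id,
        "diagnosis": ai_diagnosis,
        "confidence": ai_confidence,
        "handled_by": ("ai" if ai_confidence >= CONFIDENCE_THRESHOLD
                       else "physician"),
    }
    audit_log.append(entry)
    return entry["handled_by"]

log = []
route_case("A-1", "migraine", 0.995, log)          # AI handles alone
route_case("A-2", "atypical chest pain", 0.70, log)  # deferred to the MD
```

The audit log is the legally important part: every routing decision, including the confidence that drove it, is preserved for later review.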
TheRealMDubbs t1_jdivfrb wrote
You would sue the company that owns the AI