Submitted by Overall-Importance54 t3_y5qdk8 in MachineLearning
iqisoverrated t1_islc3jj wrote
Depends on where you live, but it could simply be regulations. AI isn't allowed to make diagnostic decisions, and some places mandate a four-eye rule (i.e. two independent physicians have to look at a scan).
Then there are time constraints. The amount of time physicians can spend looking at a scan is severely limited (just a few seconds on average). Many AI implementations take too long and are therefore not useful (hospitals are paid by throughput; slowing down the workflow is not acceptable).
There's also a bit of an issue with "explainable AI". If the output isn't easily explainable then it's not going to be accepted by the physician.
But the general attitude towards AI-assisted reading seems to be changing, so I'd expect AI-based reading assistance to become the norm in the next 10 years or so.
111llI0__-__0Ill111 t1_isoa3uq wrote
The whole explainability thing is becoming ridiculous, because all these fancy techniques, while nominally explainable, are still not going to be explainable to someone without the math background.
And even simple regressions have problems like the Table 2 fallacy (reading every adjusted coefficient in a regression table as if it were a total causal effect). Completely overrated.
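A quick numerical sketch of the Table 2 fallacy, using entirely made-up simulated data: when a mediator is included as a covariate, the exposure's coefficient shrinks from the total effect to the direct effect, so reading every column of a "Table 2" as a total effect is wrong.

```python
import numpy as np

# Hypothetical causal structure: X -> M -> Y plus a direct X -> Y path.
rng = np.random.default_rng(42)
n = 100_000
X = rng.normal(size=n)
M = 0.8 * X + rng.normal(size=n)            # mediator on the X -> Y path
Y = 0.2 * X + 0.5 * M + rng.normal(size=n)  # direct effect of X is 0.2

def ols(design, y):
    """Least-squares coefficients for a (no-intercept) design matrix."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

# Y ~ X alone recovers the TOTAL effect of X: 0.2 + 0.5 * 0.8 = 0.6.
total = ols(np.column_stack([X]), Y)[0]

# Y ~ X + M: X's coefficient is now only the DIRECT effect (~0.2),
# yet a "Table 2" reader may misread both columns as total effects.
direct = ols(np.column_stack([X, M]), Y)[0]

print(round(total, 2), round(direct, 2))  # ~0.6 vs ~0.2
```

Same variable, same model family, two very different numbers — which is exactly why handing a physician a coefficient table doesn't count as an explanation.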
iqisoverrated t1_isov3ek wrote
You can do some stuff that helps people who aren't familiar with the math. E.g. you can color in the pixels that most prominently went into making a decision. If the 'relevant' pixels are nowhere near the lesion, then that's a pretty good indication that the AI is talking BS.
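The pixel-coloring idea can be sketched with a gradient-based saliency map. A minimal toy version, assuming a made-up linear scorer over an 8x8 image (for a linear model, the gradient of the score w.r.t. each pixel is just its weight, so gradient-times-input marks the most influential pixels):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))   # toy "model" parameters (hypothetical)
image = rng.random((8, 8))          # toy input "scan"

def score(img):
    """Linear scorer: the model's raw output for this image."""
    return float((weights * img).sum())

# Gradient-times-input saliency: |d score / d pixel * pixel value|.
saliency = np.abs(weights * image)

# Highlight the top 10% most influential pixels.
threshold = np.quantile(saliency, 0.9)
mask = saliency >= threshold
print(int(mask.sum()), "of", mask.size, "pixels highlighted")
```

A real system would use something like Grad-CAM on a deep network rather than this linear toy, but the reading is the same: if the highlighted region is nowhere near the lesion, distrust the output.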
Another idea being explored is having the model select some images from the training set that it thinks show a similar pathology (or not) and display those alongside.
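That similar-image retrieval can be sketched as a nearest-neighbor lookup in an embedding space. A minimal sketch, assuming hypothetical 32-dimensional embeddings already produced by some feature extractor:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in embeddings: 100 training images, 32 dims each (made up).
train_emb = rng.normal(size=(100, 32))
query_emb = rng.normal(size=32)     # embedding of the scan being read

# Cosine similarity between the query and every training image.
norms = np.linalg.norm(train_emb, axis=1) * np.linalg.norm(query_emb)
sims = train_emb @ query_emb / norms

# Indices of the 3 most similar training images to display alongside.
top3 = np.argsort(sims)[::-1][:3]
print(top3)
```

The physician then sees "the model thinks your scan looks like these labeled cases", which is often more persuasive than a probability score.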
The problem isn't so much that AI makes mistakes (anyone can forgive that if the overall result is a net positive). The main problem is that it makes different mistakes than humans, i.e. if you over-rely on AI diagnostics, you run the risk of overlooking something that a human would have easily spotted.
Overall-Importance54 OP t1_isnspk1 wrote
10 yeeeeeears?????
iqisoverrated t1_isnubwl wrote
Well, I'm drawing an analogy to Tesla. While one can have a viable product in a much shorter timespan, in order to reap real economies-of-scale benefits (i.e. what will set the winner apart from the 'also-ran' competition that will eventually go bankrupt because it can't offer a competitive product at a similar price), you have to go big. And I mean REALLY big: large factories, global supply chains. That takes time.
Overall-Importance54 OP t1_iso2ag0 wrote
They created text-to-image; Google just released text-to-video. They speak, AI generates a full movie. I can write software to detect gold deposits from Google Images data. I just feel like there's a huge unexplained lag between where the actual tech is, off the shelf, and what's applied in everyday life when the shelf is riiiiiight there.
CurryGuy123 t1_isoa60l wrote
There's still a lot of uncertainty among the non-AI public about a lot of things, though. And this is even more of an issue in healthcare: it's a very slow-moving industry because there are lots of privacy concerns and the margin for error is very low. Things are gonna take a long time to be implemented in healthcare compared to other sectors.