currentscurrents t1_j1x2yup wrote
Reply to comment by [deleted] in Google Assistant Takes the crown beating Bixby and Siri in Voice Assistant Test by PuzzleheadedHeat4409
If only the compute requirements weren't so onerous.
But really, voice assistants need AI to be genuinely useful. I don't want just a way to set timers, I want Jarvis.
currentscurrents t1_iyewx3y wrote
Reply to comment by blablanonymous in [D] Other than data what are the common problems holding back machine learning/artificial intelligence by BadKarma-18
What are you talking about? ML has been used in real-world use cases for ages; speech-to-text, machine translation, OCR/handwriting recognition, image generation, and more.
currentscurrents t1_iye68b8 wrote
Reply to comment by Desperate-Whereas50 in [D] Other than data what are the common problems holding back machine learning/artificial intelligence by BadKarma-18
Well, fair or not, it's a real challenge for ML since large datasets are hard to collect and expensive to train on.
It would be really nice to be able to learn generalizable ideas from small datasets.
currentscurrents t1_iybz6a1 wrote
Reply to comment by piyabati in [D] Other than data what are the common problems holding back machine learning/artificial intelligence by BadKarma-18
I do agree that current ML systems require much larger datasets than we would like. I doubt the typical human hears more than a million words of English in their childhood, but they know the language much better than GPT-3 does after reading billions of pages of it.
> What is holding back AI/ML is to continue to define intelligence the way Turing did back in 1950 (making machines that can pass as human)
But I don't agree with this. Nobody is seriously using the Turing test anymore, these days AI/ML is about concrete problems and specific tasks. The goal isn't to pass as human, it's to solve whatever problem is in front of you.
currentscurrents t1_iu7flju wrote
Reply to comment by pickles55 in India eradicates 'extreme poverty' via PMGKY: IMF paper by ammjajt
This isn't true though, poverty has decreased a lot especially in Asia.
China used international trade to turn a billion-person impoverished country into the 2nd largest economy in the world. This was a major advancement for hundreds of millions of the world's poorest.
currentscurrents t1_itomo5z wrote
Reply to comment by mason240 in The cutting-edge cellular therapies aiming to ease America's organ shortage. Major transplantation surgeries could one day become outpatient procedures. by Sariel007
If you're dead, society is the only one that has a say in the matter.
Is it good for society to look for harvestable organs after every death? Probably - they're not helping anyone in the grave.
currentscurrents t1_itomb89 wrote
Reply to comment by DanteJazz in The cutting-edge cellular therapies aiming to ease America's organ shortage. Major transplantation surgeries could one day become outpatient procedures. by Sariel007
This is a good idea but it wouldn't completely solve the problem. There are countries with opt-out policies, and they do have higher donation rates, but the demand still exceeds the supply. This isn't going to change as long as the leading cause of death is old age.
Technology is the only answer here; xenotransplantation or organ cloning. Right now xenotransplantation is much more promising - just this year, a genetically-altered pig heart was successfully transplanted into a human. We are going to see a lot more clinical trials in the very near future.
currentscurrents t1_isw0ejn wrote
Reply to comment by Basimi in The federal government will allocate $47.7 million in next week's budget to reintroduce bulk-billed video telehealth psychiatric services for Australians living in rural and regional areas. by ChadT-70
Most health insurance covers telehealth therapy at $0 through a partnered provider. Check your insurer's website for details.
If you don't have insurance you're SOL, but then, it's America; what did you expect?
currentscurrents t1_ispnhch wrote
Reply to comment by chintakoro in Number of poor people in India fell by about 415 mn between 2005-06 and 2019-21, a 'historic change': UN by AP24inMumbai
Also because it's much harder to clog a toilet with that stuff.
currentscurrents t1_isplb43 wrote
Reply to comment by sharanyaaaaa in Number of poor people in India fell by about 415 mn between 2005-06 and 2019-21, a 'historic change': UN by AP24inMumbai
>Plus the poverty level data isn't updated according to sky high inflation either i.e. the income bracket you should fall in to be considered poor here.
This isn't an income-based measure of poverty like you'd use in the US, they're measuring access to real goods like food and cooking fuel. How many calories are they eating a day, do they have access to clean drinking water, etc.
We may see a temporary increase in poverty over the next few years if there is a global recession, but the long-term trendline shouldn't change.
currentscurrents t1_ispku2f wrote
Reply to comment by [deleted] in Number of poor people in India fell by about 415 mn between 2005-06 and 2019-21, a 'historic change': UN by AP24inMumbai
Did you read any of that at all?
>Some had huge improvements to drinking water, others to education, attendance, and years in education, others to things like electricity and cooking fuel, others in housing + assets, and most regions saw big improvements to nutrition/caloric intake and very little improvement to child mortality.
currentscurrents t1_j21fh9o wrote
Reply to comment by Thatweasel in How AI innovation is powered by underpaid workers in foreign countries. by eddytony96
The big thing these days is "self-supervised" learning.
You do the bulk of the training on a pretext task, like predicting masked-out parts of images or sentences. You don't need labels for this, and it allows the model to learn a lot about the structure of the data. Then you fine-tune the model with a small amount of labeled data for the specific task you want it to do.
Not only does this require far less labeled data, it also lets you reuse the model - you don't have to repeat the first phase of training, just the fine-tuning. You can download pretrained models on huggingface and adapt them to your specific task.
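Here's a toy numpy sketch of that two-phase idea (my own illustrative example, not any specific library's pipeline): phase 1 learns structure from unlabeled data by predicting a masked-out feature, and phase 2 reuses that learned representation to classify with only ten labels.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Phase 1: self-supervised pretraining (no labels) ---
# Toy data with hidden structure: every "image" is 4 noisy copies
# of one latent value.
n_unlabeled = 1000
latent = rng.normal(size=n_unlabeled)
images = latent[:, None] + 0.05 * rng.normal(size=(n_unlabeled, 4))

# Pretext task: predict the masked 4th column from the visible 3.
visible, masked = images[:, :3], images[:, 3]
w, *_ = np.linalg.lstsq(visible, masked, rcond=None)

def represent(x):
    # The pretrained weights act as a frozen feature extractor:
    # a denoised 1-D estimate of the latent factor.
    return x[:, :3] @ w

# --- Phase 2: supervised fine-tuning with only 10 labeled examples ---
latent_l = np.array([-1.0, -0.8, -0.6, -0.4, -0.2, 0.2, 0.4, 0.6, 0.8, 1.0])
labeled = latent_l[:, None] + 0.05 * rng.normal(size=(10, 4))
labels = (latent_l > 0).astype(int)

# A one-parameter "classifier" on top of the pretrained representation:
# threshold at the midpoint between the two class means.
reps = represent(labeled)
threshold = (reps[labels == 1].mean() + reps[labels == 0].mean()) / 2

# Evaluate on fresh data.
latent_t = rng.normal(size=200)
test = latent_t[:, None] + 0.05 * rng.normal(size=(200, 4))
pred = (represent(test) > threshold).astype(int)
accuracy = (pred == (latent_t > 0)).mean()
print(f"accuracy with only 10 labels: {accuracy:.2f}")
```

The real-world version is the same shape: swap the least-squares pretext model for a transformer trained on masked tokens, and the threshold for a small task head trained on your labeled set.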