quantumfucker t1_j21f5yq wrote
Reply to comment by skychasezone in Dall-E 2, ChatGPT to Push AI to a Tipping Point in 2023 by upyoars
What do you mean, "blow up"?
quantumfucker t1_j20f9dv wrote
Reply to comment by reconrose in How AI innovation is powered by underpaid workers in foreign countries. by eddytony96
When drawing a comparison to Nike and other companies with a history of literally hiding their labor abuses, what "hidden" actually means does matter in terms of accountability. There isn't a point to mentioning what humans are doing to train new ML models in most articles. The data labeling and content moderation angles are not really relevant to the model's own impact and application, and those processes really don't change. This isn't new information at all.
quantumfucker t1_j205bad wrote
Reply to comment by whittily in How AI innovation is powered by underpaid workers in foreign countries. by eddytony96
That’s fair, though I imagine their use of AI then is to flag specific images for human verification if there are concerns.
quantumfucker t1_j204drh wrote
Reply to comment by whittily in How AI innovation is powered by underpaid workers in foreign countries. by eddytony96
That seems unlikely for a basic OCR task. Doesn't that come built into every smartphone these days anyway? It seems more likely to me that it's just some poorly designed app that sends images to a remote server for analysis, but has terrible response times or is frequently down for maintenance without notifying OP. We are definitely past the point where humans need to intervene for OCR.
quantumfucker t1_j201exu wrote
Reply to comment by imdb_shenanigans in How AI innovation is powered by underpaid workers in foreign countries. by eddytony96
This isn’t really being hidden, consumers just don’t care. The article itself cites authorities saying so.
From Facebook's public content policy, on their own website, regarding their use of AI: "Sometimes, a piece of content requires further review and our AI sends it to a human review team to take a closer look. In these cases, review teams make the final decision, and our technology learns and improves from each decision. Over time—after learning from thousands of human decisions—the technology gets better." They are not hiding the need for human labor behind AI. This is from a source in the article. That's already different from companies using sweatshops they try to hide and disavow knowledge of.
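To make the quoted flow concrete, here's a minimal sketch of that human-in-the-loop pipeline: the model acts on its own when confident, and uncertain cases go to a human whose decision is kept as new training data. The threshold, function names, and labels are all illustrative, not Facebook's actual system.

```python
# Hypothetical human-in-the-loop moderation flow. The model handles
# confident cases automatically; borderline cases go to a human reviewer,
# and the reviewer's decision becomes a new labeled training example.
REVIEW_THRESHOLD = 0.8  # assumed confidence cutoff, purely illustrative

def moderate(content, model_score, human_review, training_data):
    """Route one piece of content through the AI-plus-human pipeline.

    model_score: the model's estimated probability the content violates policy.
    human_review: callable that returns a human's "keep"/"remove" decision.
    training_data: list that accumulates (content, decision) labeled examples.
    """
    if model_score >= REVIEW_THRESHOLD:
        return "remove"  # model is confident: act automatically
    if model_score <= 1 - REVIEW_THRESHOLD:
        return "keep"    # confidently benign: no review needed
    # Uncertain case: a human makes the final call, and that decision is
    # stored as a labeled example so the model improves over time.
    decision = human_review(content)
    training_data.append((content, decision))
    return decision
```

The point of the sketch is that the human labor isn't hidden anywhere in this design; it's the explicit fallback path and the source of new labels.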
And these outsourcing companies proudly advertise big tech as their clients: https://www.sama.com/ Others use the very popular MTurk service, which Amazon promotes publicly; universities are well aware of it and use it to advance academia, describing these services in their methodology sections. This is all available information that's being actively marketed.
This article's headline and much of its content make it sound like a conspiracy specific to AI instead of "by the way, issues with the global labor markets apply to the labor behind AI too." The transparency or lack thereof isn't even the problem, because people don't care. American consumers enjoy cheap products; people in foreign countries consider the American outsourcing better than jobs in their localities (data labeling and content moderation are significant improvements over physical labor, and the article itself cites someone saying as much); and the countries that accept Americans outsourcing their labor benefit politically and economically.
I don’t like exploitation and I think all content moderators should have readily available mental health access, but this is what a global liberal marketplace looks like, and I’m wary of blaming AI and the companies behind it for this instead of examining the economic systems we have that promote these issues. The technology and its needs aren’t the issue. It’s not as if big tech is marketing a camera with a child inside who quickly draws what they see and gives it to you. They need a tough job done cheaply that Americans don’t want to do. Not unlike how that’s a big reason Americans allow immigrants to come in in the first place.
Also worth noting that the author doesn’t actually have experience in technology, but is rather an artist who also writes about AI ethics. I do apply extra scrutiny to what narratives are being painted by that kind of author.
quantumfucker t1_j1zfxkf wrote
Reply to comment by Light_Error in How AI innovation is powered by underpaid workers in foreign countries. by eddytony96
AI does not operate independently of people. That has never been the goal. The goal is to use what we know about intelligence to make new tools that help us take society in a more productive and automated direction. In this case, humans still need to be the ones who train AI to begin with. A developed AI only needs an operator/maintainer.
quantumfucker t1_j1zetcp wrote
Reply to comment by Oscarcharliezulu in How AI innovation is powered by underpaid workers in foreign countries. by eddytony96
This is not what the article is about. Humans are involved in the data labeling process prior to the deployed AI model, not during or after.
quantumfucker t1_j1zepu5 wrote
Reply to comment by TheSnivelingSinking in How AI innovation is powered by underpaid workers in foreign countries. by eddytony96
Yes, and no. Data labeling is a manual human task, yes. It's tedious and low-paid because anyone can do it sitting at their computer with literally no background training at all. It is literally just labeling objects you see so the computer knows them. It's also used often by universities and small startups, not just big tech, because that's just what the future of technology in AI requires. It's not being hidden or ignored; it's being acknowledged to the point where you're literally reading an article about Amazon offering a service for it. This is half an ad.
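For anyone who hasn't seen what a labeling task actually produces: the worker's output is just structured data, like boxes drawn around objects with names attached. A rough sketch (the field names are made up for illustration, not any particular platform's schema):

```python
# Illustrative sketch of what an image-labeling task produces: an annotator
# draws bounding boxes and names the objects, and the result is plain
# structured data the training pipeline consumes. Schema is invented.
import json

def make_label(image_id, boxes):
    """Serialize one annotator's work on one image.

    boxes: list of (x, y, width, height, class_name) tuples.
    """
    return json.dumps({
        "image_id": image_id,
        "annotations": [
            {"bbox": [x, y, w, h], "label": name}
            for (x, y, w, h, name) in boxes
        ],
    })
```

That's the whole job: look at the image, say what's in it and where. No special background required, which is exactly why it's so cheap.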
“Howson can’t say for sure whether tech companies are intentionally obscuring human AI laborers, but that doing so certainly works in their interests.” Cool, get back to me when there’s evidence.
quantumfucker t1_j1y8ixx wrote
Reply to comment by Just-a-Mandrew in An A.I. Pioneer on What We Should Really Fear by quikfrozt
Not really. AI is not so distinguishable from an algorithm designed by a human, implemented by a human, and supported/owned by a human. It cannot function absent a human-designated policy or prompt. It’s still very much a product, and I’d argue a tool.
quantumfucker t1_j1y8f29 wrote
Reply to comment by quikfrozt in An A.I. Pioneer on What We Should Really Fear by quikfrozt
With most developments in technology, what usually happens is that many humans learn how to operate the tools without needing to know the internal details too well, which boosts their productivity and output as they have more time to think about creative visions and difficult problems instead of performing labor. Much like how you don't need to know anything about the principles behind a combustion engine to drive a car well. We instead have dedicated professions where people are put in charge of retaining the finer points of how the tool actually works in case that needs to be examined, like a mechanic for your car. One might argue that the high standards we once had for managing a horse have been eroded as people gained access to cars, but in reality, the average person can achieve meaningful transport with a lower barrier to entry, while horse riding lives on as a niche recreational pursuit. AI, imo, is poised to operate on the same principles. I wouldn't be so concerned about laziness and declining standards; I'd be more optimistic about how much easier it'll be for the average person to participate in creative artistic projects.
quantumfucker t1_j1y7cm6 wrote
Reply to comment by Ranryu in Google Assistant Takes the crown beating Bixby and Siri in Voice Assistant Test by PuzzleheadedHeat4409
I really just hate that they used a whole button for it.
quantumfucker t1_j1sjnku wrote
Reply to comment by _mh05 in AI Is Now Essential National Infrastructure by jormungandrsjig
That's not accurate regarding the comic book. That decision isn't finalized; it's only on hold so that the artist can explain their creative process and why it deserves protection. There is concern that it's not human-driven enough to be considered a creative product, though I would disagree.
quantumfucker t1_j4uwbba wrote
Reply to comment by TheSnozzwangler in Dutch Students using ChatGPT to finish homework; Teachers aren't noticing by Parking_Attitude_519
The only reason using Google Translate on foreign-language essays is bad is that you can't rely on Google Translate in the real world. But say there were a tool that let you seamlessly translate anything you spoke or wrote into any other language in real time. What would be the point of expecting students to learn the language by default? Save it for enthusiasts/hobbyists or niche experts. Most people would benefit from having such a tool to improve global communication and work on problems past a language barrier.
Similarly, if the homework assignments given to kids are so rote that we can now automate them, maybe we should be finding better assignments that let kids work with new tools, instead of complaining that they can't manually do what the tools can.