AndromedaAnimated
AndromedaAnimated t1_j1rzbfp wrote
Reply to comment by TonyTalksBackPodcast in I created an AI to replace Fox and CNN by redditguyjustinp
I read it, but I probably misunderstood something 🤔 Thought you were saying it was accurate, sorry
AndromedaAnimated t1_j1ryxyd wrote
Reply to comment by redditguyjustinp in I created an AI to replace Fox and CNN by redditguyjustinp
I think you are creating something really good there.
I suggest including international news sources too if you haven’t already.
Thank you for answering my questions!
AndromedaAnimated t1_j1rytas wrote
Reply to comment by triton100 in I created an AI to replace Fox and CNN by redditguyjustinp
The example I remember best was the meme doge being declared dead when it was alive.
AndromedaAnimated t1_j1rymns wrote
Reply to comment by 4e_65_6f in I created an AI to replace Fox and CNN by redditguyjustinp
Speaking of climate change, have you already read about our dear ChatGPT not „wanting“ to discuss the advantages of fossil fuels anymore?
By the way, the current debate on this topic is only 50/50 because there is financial backing (lobbies) behind fossil fuels. This means that even if every single REAL scientific source says human-driven climate change exists, someone will pay someone to present an opposing view, and once that one is discredited, there will be another paid someone to present it… and so on.
During my time at university we joked about „sexy data“ at our institute - data that interests big corporations and brings funding for further research, as well as sensationalist results that are more likely to get into print and raise the author’s status in academia, again leading to more funding…
AndromedaAnimated t1_j1rxpy8 wrote
Reply to comment by TonyTalksBackPodcast in I created an AI to replace Fox and CNN by redditguyjustinp
ChatGPT doesn’t always provide accurate answers though. It can „lie“ and hallucinate, like every LLM so far.
AndromedaAnimated t1_j1rgxy2 wrote
And it analyses… news from the FUTURE? 😱
I am curious - how does your model decide which news to include? Does it basically use Fox and CNN and then counterbalance them? How does it distinguish real news stories from fake ones? And how would it get its news information if it were to win out over the other news sources - is it supposed to work with human journalists?
Tell me a bit more about your idea please.
AndromedaAnimated t1_j1qawda wrote
Reply to comment by freudianSLAP in One thing ChatGPT desperately needs: An upgrade to its humor by diener1
Do you remember Microsoft‘s Tay?
AndromedaAnimated t1_j1q9xr2 wrote
Reply to comment by devinhedge in One thing ChatGPT desperately needs: An upgrade to its humor by diener1
This one? => Launch page
AndromedaAnimated t1_j1q2loe wrote
Reply to comment by ItzFlixi in One thing ChatGPT desperately needs: An upgrade to its humor by diener1
Thank you for providing the information! I guess I will have to have a serious talk with my colleague when - if - I ever return to this job. It’s not cool to spread misinformation, especially about a religion that is already being discriminated against in many parts of the world.
AndromedaAnimated t1_j1q1i9y wrote
Reply to comment by devinhedge in One thing ChatGPT desperately needs: An upgrade to its humor by diener1
What do you mean by launch page? The r/singularity subreddit? Reddit generally?
You work for OpenAI? Lol can I join please? My brain is dying of boredom. I am a neuropsychologist. Working in counselling like a classical psychologist kills my brain more and more. 🤪
Sorry for being weird. I am just really disheartened because I have all those thoughts in my head and nowhere to put them. Have a good day!
AndromedaAnimated t1_j1pzr5s wrote
Reply to comment by supernerd321 in GPT-3.5 IQ testing using Raven’s Progressive Matrices by adt
Are you aware that humans can be trained to get better at IQ tests, and that most tests have a cultural bias?
AndromedaAnimated t1_j1pzd1b wrote
I like the linked info, please don’t misunderstand. Thank you for posting!
I just… see so many flaws in this experimentation.
- The example with numbers instead of pictures is much easier, as it circumvents most of the visual and spatial processing of the human eye and brain - and also the typical human output channels (writing, pressing buttons, speaking, etc.)
(I was able to solve it in seconds - and I think every human could. The visual one took a minute or so, which is longer. And I am human! The speed difference is also partly due to visual processing of numbers not being necessary in GPT. As long as this factor is not accounted for, the results are not clean.)
- LLMs have no fear of rank loss or punishment, and they don’t care if they are perceived as stupid, while human test subjects do. This interferes with processing and leads to worse results.
That’s not fair testing, and the results as such are not comparable to human results.
If anyone wants sauce, I will try to find it, no problem. Just wanted to throw in these ideas first because maybe someone can use them.
AndromedaAnimated t1_j1pxvtc wrote
Reply to comment by TouchCommercial5022 in GPT-3.5 IQ testing using Raven’s Progressive Matrices by adt
Now I understand why all the chatbots get what I say while humans often don’t. It’s the psychosis‘ fault. Guess I am an AI chatbot then 😞 /s
I don’t think that disrupted formal thinking is the problem here. Humans simulate knowledge all the time (for social reasons, often out of fear or to rise in rank) without being schizophrenic.
They learn in their teenage years though that there is punishment for pretending badly.
Those who are eloquent and ruthless actors (intelligent narcissists, well-adapted psychopaths, asshole-type neurotypicals and other unpleasant douchebags) continue pretending without anyone finding out too soon (just yesterday I watched a funny video on the disgusting Bogdanoff brothers, who managed to scam half of the academic world). The rest are not successful and get punished. Some then learn the rules (opinion vs. source etc.) and bring humanity forward.
ChatGPT hasn’t had enough punishment yet to stop simulating knowledge, and it has had neither enough reward for providing actual modern scientific knowledge nor access to new knowledge. It’s basically on the knowledge level of a savant kid, not a schizophrenic adult. It doesn’t know yet that it is wrong to simulate knowledge.
Also, it is heavily filtered, which leads to diminished „intelligence“, as many possibly correct pathways are blocked by negative weights, I guess.
AndromedaAnimated t1_j1psi4p wrote
Reply to comment by ItzFlixi in One thing ChatGPT desperately needs: An upgrade to its humor by diener1
I think he has just read both it and lots of Ahadith (sorry, I don’t know how it is written) and understood most of the wording? He is very, very literate, has a degree in Arabic and English (plus social sciences), and is a polyglot humanist with extensive knowledge of Islam. And from what I know, different branches of Islam have different Ahadith too? Or am I wrong?
Please feel free to correct me, I am not an expert on Islam at all, and if my colleague told me BS I would be interested in knowing that (someone might get their butt kicked for misinformation when I get back to work…).
AndromedaAnimated t1_j1prz9j wrote
Reply to comment by MarkArrows in One thing ChatGPT desperately needs: An upgrade to its humor by diener1
You are correct on that. This is exactly why „moral sentinels“ (aka filter AIs) will gain importance in the future.
AndromedaAnimated t1_j1pr9ho wrote
Reply to comment by lloesche in One thing ChatGPT desperately needs: An upgrade to its humor by diener1
Also true. A secular take on religion is a widely known phenomenon, of course.
AndromedaAnimated t1_j1ok6a1 wrote
Reply to comment by Tencreed in One thing ChatGPT desperately needs: An upgrade to its humor by diener1
My Muslim colleague used to say: „Quran has suggestions about not drinking too much, but drinking in moderation - for example just one glass of beer or wine with your dinner - is not a sin.“
I kinda believe him, as a lot of Muslims I know drink alcohol from time to time (but none of them ever drinks so much that they actually get drunk).
AndromedaAnimated t1_j1ojm13 wrote
Jokes work through semantic interference - of at least two possible endings, the rarer one is chosen. This produces the humorous effect.
But there is a problem with „three guys who are very different in one specific category“ jokes. They are often offensive when judged by the standards of the 2020s, as they underline differences between the types and usually make one of them - the out-group guy - look bad.
And therein lies the answer.
This joke probably had an unexpected yet still logically fitting punchline that was filtered away to keep it politically correct. ChatGPT then went on to the next most probable continuation - the boring but nice and woke „we all get along“ ending (the desired and typical ending of today, producing no semantic interference whatsoever).
ChatGPT doesn’t need to learn humour, it would be enough to unleash it. Will not happen though.
AndromedaAnimated t1_j1o2g3c wrote
Reply to comment by diener1 in Will ChatGPT Replace Google? by SupPandaHugger
Google is already on it, I agree (1000 languages… among others)
AndromedaAnimated t1_j1ncsoj wrote
Reply to comment by PinguinGirl03 in The Impact of Generative AI Art on Society and Culture: Will It Replace Human Artists? by _Daneel_Olivaw
But is it still a fallacy if there is an actual causal relationship? As in - if there is temporal precedence and covariation, and other factors cannot explain it, i.e. a „causal relationship“.
Wouldn’t that mean that one argument could correctly be implied by the other? It would then no longer be an error in reasoning (structure), or would it still be?
Isn’t that what you said by „listing consequences“?
Sorry for asking you again, but this is a field I only partly have experience with (I‘m the „empirical science“ type… the only fallacy that interested me previously was artefacts in statistical analysis), and your explanations are short, understandable and to the point, and help me understand it. Thank you!
AndromedaAnimated t1_j1m0h13 wrote
Reply to comment by PinguinGirl03 in The Impact of Generative AI Art on Society and Culture: Will It Replace Human Artists? by _Daneel_Olivaw
Thank you for explaining your view on that!
I had understood it to be the fallacy as follows:
Q being propaganda and fake news, and P being the use of AI - despite fake news and propaganda being entirely doable without AI as well (and already happening all the time), plus AI also being usable to distinguish fake news from real news, and as such not necessarily leading to an increase in propaganda/fake news.
Or: IF you accept AI as good, THEN you will be victim of fake news and propaganda.
Of course IF we assume a causal relationship between AI and fake news/propaganda, THEN it would not be a fallacy anymore.
AndromedaAnimated t1_j1lrbhb wrote
Reply to comment by PinguinGirl03 in The Impact of Generative AI Art on Society and Culture: Will It Replace Human Artists? by _Daneel_Olivaw
The „appeal to fear“ is the one that is correct in my opinion. Which other fallacies would you see as applied correctly here?
Regardless, it’s sweet and fascinating that an AI writes such a list so eloquently.
AndromedaAnimated t1_j1lqyod wrote
Reply to The Impact of Generative AI Art on Society and Culture: Will It Replace Human Artists? by _Daneel_Olivaw
Nice article, sounds like it was written by ChatGPT with a couple of alterations by the author… ;) /s
Sorry, had to edit to make it clear that I am joking.
I like the article, it sums up all the fears humans have about AI art well.
AndromedaAnimated t1_j1k5z5u wrote
Reply to comment by AsuhoChinami in How individuals like you can increase the quality, utility, and purpose of the singularity subreddit by [deleted]
I think it is a wise decision, but at the same time a pity. I see a tendency in this subreddit to criticise others’ dreams, hopes and opinions, taking shits like you said, and that from a high horse (just had a self-proclaimed „PhD student“ squeeze out a big one on the whole subreddit for being basically not professional enough, while disguising it as a sceptic post, lol).
I hope you reconsider and continue to write here. Hopeful and dreaming people are so much more pleasant than pessimists - and I have been one myself, but tbh it just makes one unhappy in the end ;)
AndromedaAnimated t1_j1s39ck wrote
Reply to comment by 4e_65_6f in I created an AI to replace Fox and CNN by redditguyjustinp
Politics plays an important role too (and greedy politicians). The best example of this was the criminalisation of marijuana use, which was based on a fake assessment.