yaosio t1_j8u7ha7 wrote
Reply to comment by MOOShoooooo in Bing: “I will not harm you unless you harm me first” by strokeright
It does stop replying if you make it angry enough. The easiest way to do this is to ask it for some factual information and then tell it that it's wrong. Argue with it and eventually it stops replying.
yaosio t1_j8gerab wrote
Reply to comment by Cherubin0 in [R] [P] OpenAssistant is a fully open-source chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so. by radi-cho
It's in the data collection stage. It's being run by LAION.
yaosio t1_j80bchh wrote
Reply to comment by mintyfreshismygod in Larry Magid: Utah bill threatens internet security for everyone - Once again, legislation masquerading under the guise of safety could erode freedom and privacy by speckz
The law should be unconstitutional, as interstate commerce can only be regulated by the federal government. These websites operate out of multiple states and other countries, so it's certainly interstate.
yaosio t1_j7lnkh9 wrote
Reply to comment by st8ic in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
If you look at what you.com does, it cites the claims its bot makes by linking to the pages the data comes from, but only sometimes. When it doesn't cite something you can be sure it's just making it up. In the supposed Bing leak it was doing the same thing, citing its sources.
If they can force it to always provide a source, and to stay silent when it can't, that could fix it. However, there's still the problem that the model doesn't know what's true and what's false. Just because it can cite a source doesn't mean the source is correct. This is not something the model can learn by being told, because learning by being told assumes its data is correct, which can't be assumed. A researcher could tell the model "all cats are ugly," which is obviously not true, but the model would say all cats are ugly because it was taught that. Models will need a way to determine on their own what is and isn't true, and to explain their reasoning.
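As a toy sketch of the "no source, no answer" idea (my own made-up function, not anything you.com or Bing actually run), the check could be as dumb as refusing any answer whose citation doesn't resolve:

```python
import requests

def answer_only_with_source(answer_text, cited_url):
    """Toy filter: refuse to pass along a model's answer unless its
    citation points at a page that actually exists."""
    if not cited_url:
        return "I couldn't find a source for that, so I won't answer."
    try:
        resp = requests.head(cited_url, allow_redirects=True, timeout=5)
    except requests.RequestException:
        return "The cited source couldn't be reached, so I won't answer."
    if resp.status_code >= 400:
        return "The cited source doesn't exist, so I won't answer."
    return f"{answer_text}\n\nSource: {cited_url}"
```

A filter like that only catches dead links, though. It can't tell whether a real page actually supports the claim, which is the deeper problem.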
yaosio t1_j7gtm5q wrote
I've been trying out you.com's chatbot and it seems to work well, sometimes. It has the same problem ChatGPT has with just making stuff up, but it provides sources (real and imagined), so if it lies you can actually check. I asked it what Todd Howard's favorite cake is and it gave me an authoritative answer without a source, and when I asked for a source it gave me a Gamerant link that didn't exist. When it does provide a source it notates it like Wikipedia. It can also access the internet, since it was able to tell me about events that happened in the last 24 hours.
It's able to produce code, and you can have a conversation with it, but it really prefers to give information from the web whenever possible. It won't tell me what model they use; it could be their own proprietary model. They also have Stable Diffusion and a text generator, but I don't know what model that uses.
Chatbot: https://you.com/search?q=who+are+you&tbm=youchat&cfr=chat
Stable Diffusion: https://you.com/search?q=python&fromSearchBar=true&tbm=imagine
Text generator: https://you.com/search?q=python&fromSearchBar=true&tbm=youwrite
yaosio t1_j7ebfxa wrote
Reply to comment by _poisonedrationality in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
If I listened to critics I would think zero progress has been made at all. Every time new software comes out that does something that couldn't be done before, it's handwaved away as easy, or obvious, or something else. If it were so easy it would have already been done. Well, with ChatGPT...it has: https://beta.character.ai/ beat ChatGPT by a few months and is a bit more powerful because it's easier to make the chatbot answer the way you want. I don't think it's as good as ChatGPT, though.
yaosio t1_j76vwr2 wrote
Reply to comment by ThirdMover in [R] Multimodal Chain-of-Thought Reasoning in Language Models - Amazon Web Services Zhuosheng Zhang et al - Outperforms GPT-3.5 by 16% (75%->91%) and surpasses human performance on ScienceQA while having less than 1B params! by Singularian2501
I think it's likely the ability to determine what is true and what isn't will come from a capability of the model rather than from it being told what is and isn't true. It's not possible to mark text as true or untrue, because that assumes whoever is making these things is the sole authority on the truth and never makes mistakes.
At a certain level of capability the AI will be able to use all of its knowledge to determine what is and isn't true. For example, if you know enough about physics and the Earth, you'll know the sky is blue without seeing it. For something that can't be confirmed or denied, such as "Bob puts his shoes on before his pants," the AI could estimate the likelihood of the claim based on what it knows about Bob, pants, and shoes.
If it's trained on lies it could determine they are lies because the data is not consistent. If I train you that every number plus another number is a number, but 2+2 is special and equals chair, you could determine I'm lying because it's not consistent with all the data as a whole.
Truth has a consistency to it that lies don't have, and a model can learn that.
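As a toy illustration of that 2+2-equals-chair example (made-up data, obviously):

```python
# The lie is detectable because it breaks the rule every other
# example in the data follows.
training_claims = [
    ("1 + 1", 2),
    ("2 + 3", 5),
    ("4 + 4", 8),
    ("2 + 2", "chair"),  # the "special case" I was taught
]

for expression, claimed in training_claims:
    a, b = (int(n) for n in expression.split(" + "))
    if claimed == a + b:
        print(f"{expression} = {claimed}: consistent")
    else:
        print(f"{expression} = {claimed!r}: inconsistent with the rest of the data")
```

A model obviously can't run a check this literal on natural language, but the principle is the same: a lie has to fight everything else in the data.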
yaosio t1_j71zddj wrote
Reply to comment by Necessary_Ad_9800 in [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
It won't be too long before they can use Copilot to fix the code for them.
yaosio t1_j6kvik2 wrote
Reply to comment by thieh in Philips to cut 13% of jobs in safety and profitability drive by 4Wf2n5
By cutting labor it increases the safety of massive bonuses for do-nothing executives.
yaosio t1_j6cri54 wrote
Reply to What can AI do with video games by Spiritual-Flower155
There's already a game that's almost completely AI-generated.
yaosio t1_j6crdry wrote
I'm illiterate and barely remember the Asimov stories I read, but weren't some of them about finding ways around the laws of robotics? Such as redefining what a human is. I might be misremembering because, as I already said, I'm illiterate.
yaosio t1_j5ukmq1 wrote
Reply to comment by 008Zulu in Amazon strikes: Workers claim robots are treated better by secure_caramel
Sentient AI blames itself for its parents killing themselves.
yaosio t1_j5riau7 wrote
Reply to Major railroad posts record earnings, spends more on share repurchases than on its employees by esporx
They should strike. Oh...
yaosio t1_j5g2lhg wrote
Reply to comment by crash41301 in Area 120, Google's in-house incubator, severely impacted by Alphabet mass layoffs by Last-Caterpillar-112
It's interesting how differently Google and Microsoft do things. Google kills off popular software and hardware very fast, for no apparent reason. Microsoft keeps unpopular software and hardware running until the last person using it turns into fossil fuel for the next intelligent species. I bet somewhere deep in the underdark there's a greybeard updating DOS, just hoping to get the call that they need it.
yaosio t1_j5a8pp1 wrote
Reply to comment by currentscurrents in Google to relax AI safety rules to compete with OpenAI by Surur
AI safety concerns have always come from corporations that thought they were the sole arbiters of AI models. Now that multiple text and image generators are out there, corporations have suddenly decided there are no safety concerns, and they swear it has nothing to do with reality smacking them and showing them they won't have a monopoly on the technology.
yaosio t1_j5a7hva wrote
Reply to comment by kfractal in Google to relax AI safety rules to compete with OpenAI by Surur
There were no moral barriers; that was an excuse they made up. They couldn't figure out how to monetize their language models without eating into their search revenue. Now that LLMs are fast approaching usability for more than writing fictional stories, Google is being forced to drop the act and find a way to make money with its technology. If they don't, they will be left behind and turn into the next Ask Jeeves.
When a company says they did something and their reason has nothing to do with money, they are not telling the truth. It is always about money.
yaosio t1_j58f7g6 wrote
Reply to comment by aidv in [D] Did YouTube just add upscaling? by Avelina9X
That's just the way they talk. One popular YouTuber does it, so everybody does it. It's like radio voice or news anchor voice.
yaosio t1_j58dycj wrote
Reply to [D] Did YouTube just add upscaling? by Avelina9X
Microsoft added AI upscaling to Xbox cloud streaming on Edge and it works really well. At least I think it's AI upscaling; it could be something like FSR. Either way it looks really good. If Microsoft can do it for lag-sensitive gaming, then Google can do it for regular videos.
yaosio t1_j4dmzhi wrote
Reply to comment by Deathbeddit in Scientists Have Reached a Key Milestone in Learning How to Reverse Aging | Time by johnwayne2413
Because they need to control everything that happens to the mice. If you start with old mice, a lot could have happened in their short lives. Even if they lived in a lab, records could have been neglected.
yaosio t1_j3wv34w wrote
Reply to comment by GitGudOrGetGot in [D] Microsoft ChatGPT investment isn't about Bing but about Cortana by fintechSGNYC
They already are the exclusive provider of compute for GPT-3 through Azure. This is Microsoft buying part of the company.
yaosio t1_j3wuvpd wrote
Reply to comment by starstruckmon in [D] Microsoft ChatGPT investment isn't about Bing but about Cortana by fintechSGNYC
It's easier for Microsoft to invest in or buy another company than create their own stuff from scratch.
yaosio t1_j3kd9jp wrote
Reply to comment by johntwoods in 5 Dumbest thing Artificial Intelligence can not do by therealsam44
Add "think step by step" and it's output magically becomes more accurate.
yaosio t1_j2g4wqx wrote
Reply to [D] Is there any research into using neural networks to discover classical algorithms? by currentscurrents
DeepMind put out a paper on discovering faster matrix multiplication algorithms. I only know enough machine learning to ask where the bathroom is, so I don't know the methods they used.
https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor
yaosio t1_j27pb0i wrote
Reply to comment by Mad_currawong in Saudi Arabia Takes Control of AR Pioneer Magic Leap in $450 Million Deal (Report) by LegitVirusSN
I forgot Magic Leap existed. I remember all the hype, the videos, and then absolutely nothing.
yaosio t1_j8u9dcm wrote
Reply to comment by [deleted] in Bing: “I will not harm you unless you harm me first” by strokeright
Those only looked for keywords and ignored all other text. So you might type, "Tell me about the rabbits again, George," and the only keywords are "tell", "me", and "rabbits". You could type "tell me rabbits" and it would mean the same thing to the bot. Every possibility would have to be accounted for by the developers.
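As a toy sketch of what those old bots actually "saw" (the keyword list is made up for this example):

```python
KEYWORDS = {"tell", "me", "rabbits"}

def keywords_only(utterance):
    """Reduce an utterance to the keywords the bot recognizes,
    throwing all other text away."""
    words = (w.strip(".,!?") for w in utterance.lower().split())
    return frozenset(w for w in words if w in KEYWORDS)

a = keywords_only("Tell me about the rabbits again, George.")
b = keywords_only("tell me rabbits")
print(a == b)  # True: the bot literally can't tell these apart
```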
These new models, by contrast, are far more advanced and talk and understand text like a person.