
yaosio t1_j8u9dcm wrote

Those only looked for keywords and ignored all other text. So you might type, "Tell me about the rabbits again, George," and the only keywords are "tell", "me", and "rabbits". So you could type "tell me rabbits" and it would mean the same thing. Every possibility would have to be accounted for by the developers.
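A toy sketch of that old keyword approach (the keyword list and responses are made up for illustration):

```python
KEYWORDS = {"tell", "me", "rabbits"}  # hypothetical keyword list

def old_style_bot(user_input: str) -> str:
    # Keep only recognized keywords; everything else is thrown away.
    words = {w.strip(".,!?") for w in user_input.lower().split()}
    if {"tell", "rabbits"} <= words:
        # Same canned reply whether you typed the full sentence
        # or just "tell me rabbits".
        return "Okay... the rabbits. We'll have a big vegetable patch."
    return "I don't understand."

print(old_style_bot("Tell me about the rabbits again, George."))
print(old_style_bot("tell me rabbits"))  # identical reply
```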

These new models are far more advanced and can talk and understand text like a person.

1

yaosio t1_j7lnkh9 wrote

If you look at what you.com does, they cite the claims their bot makes by linking to the pages the data comes from, but only sometimes. When it doesn't cite something you can be sure it's just making it up. In the supposed Bing leak it was doing the same thing, citing its sources.

If they can force it to always provide a source, and to stay silent when it can't, that could fix it. However, there's still the problem that the model doesn't know what's true and what's false. Just because it can cite a source doesn't mean the source is correct. This is not something the model can learn by being told, because learning by being told assumes its data is correct, which can't be assumed. A researcher could tell the model "all cats are ugly," which is obviously not true, but the model will say all cats are ugly because that's what it was taught. Models will need a way to determine on their own what is and isn't true, and to explain their reasoning.
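A rough sketch of that "no source, no answer" rule (the function and field names are hypothetical, not you.com's or Bing's actual API):

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    sources: list[str]  # URLs the claim was drawn from, possibly empty

def filter_unsourced(answer: Answer) -> str:
    # Refuse to surface any claim the system can't attach a citation to.
    # Note this only enforces that *a* source exists; it can't tell
    # whether the source itself is correct, which is the harder problem.
    if not answer.sources:
        return "I can't find a source for that, so I won't answer."
    citations = " ".join(f"[{i + 1}] {url}" for i, url in enumerate(answer.sources))
    return f"{answer.text} {citations}"
```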

1

yaosio t1_j7gtm5q wrote

I've been trying out you.com's chatbot and it seems to work well, sometimes. It has the same problem ChatGPT has with just making stuff up, but it provides sources (real and imagined), so when it lies you can actually check. I asked it what Todd Howard's favorite cake is and it gave me an authoritative answer without a source, and when I asked for a source it gave me a Gamerant link that didn't exist. When it does provide a source it notates it like Wikipedia does. It can also access the Internet, as it was able to tell me about events that happened in the last 24 hours.

It's able to produce code, and you can have a conversation with it, but it really prefers to give information from the web whenever possible. It won't tell me what model they use; it could be their own proprietary model. They also have Stable Diffusion and a text generator, but I don't know what model that is.

Chatbot: https://you.com/search?q=who+are+you&tbm=youchat&cfr=chat

Stable Diffusion: https://you.com/search?q=python&fromSearchBar=true&tbm=imagine

Text generator: https://you.com/search?q=python&fromSearchBar=true&tbm=youwrite

3

yaosio t1_j7ebfxa wrote

If I listened to critics I would think zero progress had been made at all. Every time new software comes out that does something that couldn't be done before, it's handwaved away as easy, or obvious, or something else. If it was so easy it would have already been done. Well, with ChatGPT...it has. https://beta.character.ai/ beat ChatGPT by a few months and is a bit more flexible because it's easier to make the chatbot answer the way you want. I don't think it's as good as ChatGPT though.

5

yaosio t1_j76vwr2 wrote

I think it's likely the ability to determine what is true and what isn't will come from a capability of the model rather than from it being told what is and isn't true. It's not possible to mark text as true or not true, because that assumes whoever is making these things is the sole authority on the truth and never makes mistakes.

At a certain level of capability the AI will be able to use all of its knowledge to determine what is and isn't true. For example, if you know enough about physics and the Earth, you'll know the sky is blue without seeing it. For something that can't be confirmed or denied, such as "Bob puts his shoes on before his pants," the AI could determine the likelihood of such an action based on what it knows about Bob, pants, and shoes.

If it's trained on lies it could determine they are lies because the data is not consistent. If I teach you that every number plus another number is a number, but 2+2 is special and equals chair, you could determine I'm lying because that claim isn't consistent with the data as a whole.

Truth has a consistency to it that lies don't have, and a model can learn that.
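A toy version of the 2+2-equals-chair example (purely illustrative; real models obviously don't check facts this way):

```python
# Claims of the form (a, b, claimed_sum). One deliberately poisoned entry.
claims = [
    (1, 1, 2),
    (2, 3, 5),
    (2, 2, "chair"),  # the "lie" injected into the training data
    (4, 5, 9),
]

# A claim is inconsistent if it disagrees with the rule the rest of the
# data overwhelmingly supports (here, ordinary addition).
inconsistent = [c for c in claims if c[2] != c[0] + c[1]]
print(inconsistent)  # [(2, 2, 'chair')]
```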

18

yaosio t1_j5g2lhg wrote

It's interesting how different Google and Microsoft do things. Google kills off popular software and hardware for no apparent reason very fast. Microsoft keeps unpopular software and hardware running until the last person to use it turns into fossil fuel for the next intelligent species. I bet somewhere deep in the underdark there's a greybeard updating DOS, just hoping to get the call that they need it.

76

yaosio t1_j5a8pp1 wrote

AI safety concerns have always come from corporations that thought they were the sole arbiter of AI models. Now that multiple text and image generators are out there, corporations have suddenly decided there are no safety concerns, and they swear it has nothing to do with reality smacking them and showing them they won't have a monopoly on the technology.

0

yaosio t1_j5a7hva wrote

There were no moral barriers; that was an excuse they made up. They couldn't figure out how to monetize their language models without eating into their search revenue. Now that LLMs are fast approaching usability for more than writing fictional stories, Google is being forced to drop the act and find a way to make money with their technology. If they don't, they will be left behind and turn into the next Ask Jeeves.

When a company says they did something and their reason has nothing to do with money they are not telling the truth. It is always about money.

17

yaosio t1_j58dycj wrote

Microsoft added AI upscaling to Xbox cloud streaming on Edge and it works really well. At least I think it's AI upscaling; it could be something like FSR. Either way it looks really good. If Microsoft can do it for lag-sensitive gaming, then Google can do it for regular videos.

1