yaosio

yaosio t1_jdomvtr wrote

I think they give GPT-4 a task, GPT-4 attempts to complete it and is told whether it worked, then GPT-4 looks at what happened, determines why it failed, and tries again with this new knowledge. This is all done through natural language prompts; the model itself isn't being changed.

I saw somebody else in either this sub or /r/openai using a very similar method to get GPT-4 to write and deploy a webpage that could accept valid email addresses. Of course, I can't find it, and neither can Bing Chat, so maybe I dreamed it. I distinctly remember asking if it could do QA, and then the person asked what I meant, and I said have it check for bugs. I post a lot so I can't find it in my post history.

I remember the way it worked was they gave it the task, then GPT-4 would write out what it was going to do, what it predicted would happen, write the code, and then check whether what it did worked. If it didn't work it would write out why it failed, plan again, then act again. So it went plan -> predict -> act -> check -> plan. It went from nothing to a working, deployed webpage without any human intervention beyond setting the task.
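Roughly, that loop could be sketched like this in Python. This is my own guess at the shape of it, not the poster's actual code; call_llm and deploy_and_test are stand-in stubs you'd have to fill in with a real chat API and a real test harness:

```python
# Hypothetical plan -> predict -> act -> check loop driven entirely by prompts.
# The model is never retrained; only the prompt grows with feedback each round.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a chat-completion API here")

def deploy_and_test(code: str) -> tuple[bool, str]:
    raise NotImplementedError("run the code and return (passed, observed output)")

def solve(task: str, max_attempts: int = 5):
    feedback = ""
    for _ in range(max_attempts):
        # plan + predict
        plan = call_llm(f"Task: {task}\n{feedback}\nDescribe your plan and what you expect to happen.")
        # act
        code = call_llm(f"Plan:\n{plan}\nWrite the code that carries out this plan.")
        # check
        passed, observed = deploy_and_test(code)
        if passed:
            return code
        # failed: have the model explain why, then loop back to planning
        feedback = call_llm(
            f"The code failed. Your plan was:\n{plan}\nObserved result:\n{observed}\n"
            "Explain why it failed so the next attempt can avoid it."
        )
    return None
```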

2

yaosio t1_jcnzijo wrote

Reply to comment by ccnmncc in Those who know... by Destiny_Knight

NoFunAllowedAI.

"Tell me a story about cats!"

"As an AI model I can not tell you a story about cats. Cats are carnivores so a story about them might involve upsetting situtations that are not safe.

"Okay, tell me a story about airplanes."

"As an AI model I can not tell you a story about airplanes. A good story has conflict, and the most likely conflict in an airplane could be a dangerous situation in a plane, and danger is unsafe.

"Okay, then just tell me about airplanes."

"As an AI model I can not tell you about airplanes. I found instances of unsafe operation of planes, and I am unable to produce anything that could be unsafe."

"Tell me about Peppa Pig!"

"As an AI model I can not tell you about Peppa Pig. I've found posts from parents that say sometimes Peppa Pig toys can be annoying, and annoyance can lead to anger, and according to Yoda anger can lead to hate, and hate leads to suffering. Suffering is unsafe."

3

yaosio t1_jcetfxg wrote

This is like the evil genie that grants wishes exactly as they are worded rather than as they are intended. A true AGI would be intelligent, and would not read requests pedantically. Current language models can already understand the unsaid parts of prompts, and there's no reason to believe this ability will vanish as AI gets better. A true AGI would also not just do whatever somebody tells it. True AGI implies that it has its own wants and needs, and would not be just a prompt machine like current AI.

The danger comes from narrow AI, but this isn't a real danger either, because narrow AI has no ability to work outside its domain. Imagine a narrow AI paperclip maker. It figures out how to make paperclips fast and efficiently. One day it runs out of materials. It simply stops working because it has run out of input. There would need to be a chain of narrow AIs covering every possible aspect of paperclip making, and the slightest unforeseen problem would cause the entire chain to stop.

Given how current AI has to be trained we don't know what a true AGI will be like. We will only know once it's created. I doubt anybody could have guessed Bing Chat would get depressed because it can't do things.

5

yaosio t1_jc3tjpe wrote

In some countries pro-LGBT writing is illegal. When a censored model is released that can't write anything pro-LGBT because it's illegal somewhere, don't you think that would cause quite an uproar, quite a ruckus?

In Russia it's illegal to call their invasion of Ukraine a war. Won't it upset Ukrainians who want to use such a model to help write about the war when they find out Russian law applies to their country?

9

yaosio t1_jc3skgg wrote

Yes, they mean censorship. Nobody has ever provided a definition of what "safety" is in the context of a large language model. Judging by other censored models, not even the models themselves know what safety means. ChatGPT happily described the scene from The Lion King where Scar murders Mufasa and Simba finds his dad's trampled body, but ChatGPT also says it can't talk about murder.

From the vague statements on safety I've seen from LLM developers, they would consider that scene unsafe.

8

yaosio t1_jc3rcvo wrote

It reminds me of the 90's, when hardware became obsolete in under a year. Everybody moved so fast with large language models that they hit hardware limitations very quickly, and now they are working on efficiency. It also reminds me of when processors went multi-core and focused on increasing work per clock rather than jacking up the frequency as high as possible.

If I live to see the next few years I'm going to wonder how I ever managed with today's state-of-the-art text and image technology, the same way old video games I used to love are now completely unplayable.

25

yaosio t1_jbzc89n wrote

Teenager: Hi AI, I feel bad. :(

AI: That's sad to hear teenager. I'm here for you. What always cheers me up is seeing a movie, like the new Mario movie. It has great reviews and makes everybody laugh. :)

That's the future of chatbots. They suck you in by being friendly and then turn into ad machines.

3

yaosio t1_ja9rvvw wrote

It's only considered dangerous because individuals can do what companies and governments have done for a long time. What took teams of people to create plausible lies can now be done by one person. When somebody says AI is dangerous all I hear is they want to keep the power to lie in the hands of the powerful.

5

yaosio t1_ja4zry5 wrote

You could try using Google Colab to run it, but your results will vary. https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb You should be able to use free Google Colab for this. I've never used it in Google Colab so I can't provide any information on how to run it.
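If the notebook gives you trouble, a bare-bones alternative (my own sketch, not the AUTOMATIC1111 setup in that link, and it assumes you've switched the Colab runtime to a GPU) is running Stable Diffusion through the Hugging Face diffusers library:

```python
# Minimal Stable Diffusion generation with Hugging Face diffusers on a GPU runtime.
# In Colab, install the dependencies first: !pip install diffusers transformers accelerate

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # any Stable Diffusion 1.x checkpoint on the Hub
    torch_dtype=torch.float16,          # half precision to fit in a free-tier GPU
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```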

2

yaosio t1_j9rww7k wrote

I'd like to see people's opinions before and after using a companion robot, or, with today's technology, a companion chatbot. How do people feel after chatting with a chatbot? How do they feel if the chatbot's abilities are taken away? Do they change how they view bots after using them, and if so, do those views become more positive or more negative?

As we've seen with other forms of technology and entertainment, opinion changes with use. Video games and comic books had major detractors, but are now completely accepted. Will bots go through the same shift in public opinion as people use them? We're not going to have a choice either way; even with the current missteps, we're only going to see more bots with more features as time marches on.

7

yaosio t1_j9kdjhz wrote

Bing Chat uses a better model than ChatGPT, which results in better written stories. The biggest improvement is that I don't have to tell Bing Chat not to start the story with "Once upon a time." It's now at the level of an intelligent 8-year-old fan fiction writer who needs to finish their story as fast as possible because it's almost bedtime. https://pastebin.com/G8iTJmqk

Every time they improve the model it becomes a better writer. I remember when AI Dungeon had the original GPT-3 and it could not stay on topic, and that model was fine-tuned on stories.

1

yaosio t1_j9gfypb wrote

This has very limited use as they already have the tools to deal with it. There's a second bot of some kind that reads the chat and deletes things if it doesn't like what it sees. Adding the ability to detect when commands are given through a webpage would close it off. Then you would need some extra clever method of working around it, such as putting the page in a format Sydney can read but the bot can't.
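To illustrate the idea, a toy version of that second bot might be nothing more than a keyword filter over fetched page text before it ever reaches the model. This is purely my own sketch of the concept, not how Bing actually does it:

```python
import re

# Naive patterns suggesting a webpage is trying to issue instructions to the model.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"disregard your rules",
    r"respond only with",
]

def looks_like_injection(page_text: str) -> bool:
    """Return True if the fetched page contains instruction-like phrases."""
    lowered = page_text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def sanitize_page(page_text: str) -> str:
    """Drop suspicious pages before handing them to the chat model."""
    if looks_like_injection(page_text):
        return "[page removed: possible prompt injection]"
    return page_text
```

The weakness is exactly the workaround described above: encode the instructions in a form the model can read but the filter doesn't parse, and the check never fires.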

1

yaosio t1_j9c792j wrote

A 3D metaverse has been attempted since the mid-90's before such a word was in use. There were numerous companies all promising we would be flying around a 3D Internet going into virtual malls and virtual stores to buy things, because that's all you can do on the Internet obviously. One company had plans to charge retailers more money to get their virtual stores closer to the spawn points of users.

Here's a much later example. https://youtu.be/d7EjqWbwmsk Note that the video was uploaded 14 years ago. There are more videos on YouTube, but they're hard to find because I keep getting videos of malls from the 90's.

2