yaosio
yaosio t1_jd378qq wrote
When AI can improve itself things will speed up even faster than they are now. I wonder what one year from now will look like in the world of AI.
yaosio t1_jcsqxwf wrote
Reply to comment by lxe in [P] The next generation of Stanford Alpaca by [deleted]
If doesn't matter what the license terms say if it can't be enforced.
yaosio t1_jcsob5z wrote
Reply to comment by ThatInternetGuy in [P] The next generation of Stanford Alpaca by [deleted]
The output of AI can't be copyrighted so OpenAI has no say in what somebody does with the output.
yaosio t1_jcofsd1 wrote
Reply to comment by Akimbo333 in Midjourney v5 is now beyond the uncanny valley effect, I can no longer tell it's fake by Ok_Sea_6214
URPM is a good one. Just search for that to find it. You'll need to be logged in to see it because it's a NSFW model.
yaosio t1_jcnzwv4 wrote
Reply to Midjourney v5 is now beyond the uncanny valley effect, I can no longer tell it's fake by Ok_Sea_6214
Stable Diffusion can produce images that appear to be completely real. You need to use one of the custom models people have made. https://civitai.com/
yaosio t1_jcnzijo wrote
Reply to comment by ccnmncc in Those who know... by Destiny_Knight
NoFunAllowedAI.
"Tell me a story about cats!"
"As an AI model I can not tell you a story about cats. Cats are carnivores so a story about them might involve upsetting situtations that are not safe.
"Okay, tell me a story about airplanes."
"As an AI model I can not tell you a story about airplanes. A good story has conflict, and the most likely conflict in an airplane could be a dangerous situation in a plane, and danger is unsafe.
"Okay, then just tell me about airplanes."
"As an AI model I can not tell you about airplanes. I found instances of unsafe operation of planes, and I am unable to produce anything that could be unsafe."
"Tell me about Peppa Pig!"
"As an AI model I can not tell you about Peppa Pig. I've found posts from parents that say sometimes Peppa Pig toys can be annoying, and annoyance can lead to anger, and according to Yoda anger can lead to hate, and hate leads to suffering. Suffering is unsafe."
yaosio t1_jckchbe wrote
Reply to [R] RWKV 14B ctx8192 is a zero-shot instruction-follower without finetuning, 23 token/s on 3090 after latest optimization (16G VRAM is enough, and you can stream layers to save more VRAM) by bo_peng
I like it's plan to make money. Did it learn from wallstreetbets?
yaosio t1_jcetfxg wrote
Reply to comment by bogglingsnog in "This Changes Everything" by Ezra Klein--The New York Times by izumi3682
This is like the evil genie that gives people wishes exactly as they say rather than what they want. A true AGI means it would be intelligent, and would not take any requests as a pedantic reading. Current language models are already able to understand the unsaid parts of prompts. There's no reason to believe this ability will vanish as AI gets better. A true AGI would also not just do whatever somebody tells it. True AGI implies that it has its own wants and needs, and would not just be a prompt machine like current AI.
The danger comes from narrow AI, however this isn't a real damaged as narrow AI has no ability to work outside it's domain. Imagine a narrow AI paperclip maker. It figures out how to make paperclips fast and efficiently. One day it runs out of materials. It simply stops working because it has run out of input. There would need to be a chain of narrow AIs for every possible aspect of paperclip making. However, the slightest unforseen problem would cause the entire chain to stop.
Given how current AI has to be trained we don't know what a true AGI will be like. We will only know once it's created. I doubt anybody could have guessed Bing Chat would get depressed because it can't do things.
yaosio t1_jc3tjpe wrote
Reply to comment by currentscurrents in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
In some countries pro-LGBT writing is illegal. When a censored model is released that can't write anything pro-LGBT because it's illegal somewhere, don't you think there would cause quite an uproar, quite a ruckus?
In Russia it's illegal to call their invasion of Ukraine a war. Won't it upset Ukranians that want to use such a model to help write about the war when they find out Russian law applies to their country?
yaosio t1_jc3skgg wrote
Reply to comment by topcodemangler in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
Yes, they mean censorship. Nobody has ever provided a definition of what "safety" is in the context of a large language model. From use of other censored models not even the models know what safety means. ChatGPT happily described the scene from The Lion King where Scar murders Mufasa and Simba finds his dad's trampled body, but ChatGPT also says it can't talk about murder.
From what I have gathered from the vagueness on safety I've read from LLM developers, that scene would be considered unsafe to them.
yaosio t1_jc3rvx6 wrote
Reply to comment by currentscurrents in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
In some countries it's illegal to say anything bad about the head of state. Should large lanague models be prevented of saying anything bad about heads of state because it breaks the law?
yaosio t1_jc3rcvo wrote
Reply to comment by abnormal_human in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
It reminds me of the 90's when hardware became obsolete in under a year. Everybody moved so fast with large lanague models that they hit hardware limitations very quickly, and now they are working on efficiency. This also reminds me of computers when they moved to multi-core processors and increasing work per clock rather than jacking up the frequency as high as possible.
If I live to see the next few years I'm going to wonder how I managed to use today's state of the art text and image technology. That reminds me of old video games I used to love, but now they are completely unplayable.
yaosio t1_jbzc89n wrote
Teenager: Hi AI, I feel bad. :(
AI: That's sad to hear teenager. I'm here for you. What always cheers me up is seeing a movie, like the new Mario movie. It has great reviews and makes everybody laugh. :)
That's the future of chatbots. They suck you in by being friendly and then turn into ad machines.
yaosio t1_jbmwoiy wrote
Reply to comment by AtomGalaxy in Researchers Say They Managed to Pull Quantum Energy From a Vacuum by Woke_Soul
That was the plot for a Stargate Atlantis episode. The other universe came over to stop it.
yaosio t1_ja9rvvw wrote
Reply to [D] What do you think of this AI ethics professor suggestion to force into law the requirement of a license to use AI like chatGPT since it's "potentially dangerous"? by [deleted]
It's only considered dangerous because individuals can do what companies and governments have done for a long time. What took teams of people to create plausible lies can now be done by one person. When somebody says AI is dangerous all I hear is they want to keep the power to lie in the hands of the powerful.
yaosio t1_ja4zry5 wrote
Reply to comment by CredibleCactus in AI image generator Midjourney blocks porn by banning words about the human reproductive system by marketrent
You could try using Google Colab to run it, but your results will vary. https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb You should be able to use free Google Colab for this. I've never used it in Google Colab so I can't provide any information on how to run it.
yaosio t1_ja10jp5 wrote
Reply to AI image generator Midjourney blocks porn by banning words about the human reproductive system by marketrent
You can use Stable Diffusion to generate all the pornographic images you want. There's a lot of SFW and NSFW models for download on https://civitai.com/.
yaosio t1_j9vhglg wrote
Reply to comment by MpVpRb in DeepMind created an AI system that writes computer programs at a competitive level by inaLilah
ChatGPT is able to find bugs in code. I would love to see a next generation Codex and if it could identify problems in the code, or identity where a problem exists when told what the problem is.
yaosio t1_j9vh5fn wrote
Reply to comment by glitch83 in DeepMind created an AI system that writes computer programs at a competitive level by inaLilah
Deepmind isn't a pop science trash blog. They are researchers developing AI.
yaosio t1_j9rww7k wrote
Reply to Companion robots to mitigate loneliness among older adults: "Most participants (68.7%) did not think an Artificial Companion robot would make them feel less lonely and felt somewhat-to-very uncomfortable (69.3%) with the idea of being allowed to believe that an artificial companion is human." by Gueulemer
I'd like to see opinions before, and after, using a companion robot, or with today's technology a companion chatbot. How do people feel after chatting with a chatbot? How do they feel if abilities of the chatbot are taken away? Do they change how they view bots after using them, and if so do those views become more positive or more negative?
As we've seen with other forms of technology or entertainment the opinion changes with more use. Video games and comic books had major detractors, but are now completely accepted. Will bots experience the same change in public opinion with use? We're not going to have a choice, even with current missteps we're only going to see more bots with more features as time marches on.
yaosio t1_j9ken3g wrote
Reply to comment by Ian_ronald_maiden in Sci-fi becomes real as renowned magazine closes submissions due to AI writers by Vucea
Photography couldn't replace all forms of painting. It could only replace art that attempted to replicate real life as perfectly as possible.
yaosio t1_j9kdjhz wrote
Reply to comment by Ian_ronald_maiden in Sci-fi becomes real as renowned magazine closes submissions due to AI writers by Vucea
Bing Chat uses a better model than ChatGPT which results in better written stories. Biggest improvement is I don't have to tell Bing Chat not to start the story with "Once upon a time." It's now at the level of an intelligent 8 year old fan fiction writer that needs to write their story as fast as possible because it's almost bed time. https://pastebin.com/G8iTJmqk
Every time they improve the model it becomes a better writer. I remember when AI Dungeon had the original GPT-3 and it could not stay on topic, and that was fine tuned on stories.
yaosio t1_j9gfypb wrote
Reply to [D] Maybe a new prompt injection method against newBing or ChatGPT? Is this kind of research worth writing a paper? by KakaTraining
This has very limited use as they already have the tools to deal with it. There's a second bot of some kind that reads the chat and deletes things if it doesn't like what it sees. Adding the ability to detect when commands are giving through a webpage would close it off. Then you would need some extra clever methods of working around it, such as putting the page in a format Sydney can read but the bot can't read.
yaosio t1_j9c792j wrote
Reply to “If the metaverse were a real revolution, it would already have happened!” Interesting video by Polytechnique insights by DeCastroRodriguez
A 3D metaverse has been attempted since the mid-90's before such a word was in use. There were numerous companies all promising we would be flying around a 3D Internet going into virtual malls and virtual stores to buy things, because that's all you can do on the Internet obviously. One company had plans to charge retailers more money to get their virtual stores closer to the spawn points of users.
Here's a much later example. https://youtu.be/d7EjqWbwmsk Note that the video was uploaded 14 years ago. There's more videos on youtube but it's hard to find them as I keep getting videos from malls in the 90's.
yaosio t1_jdomvtr wrote
Reply to comment by Puzzleheaded_Acadia1 in [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
I think they give GPT-4 a task, GPT-4 attempts to complete it and is told if it worked or not, then GPT-4 looks at what happened and determines why it failed, and then tries again with this new knowledge. This is all done through natural language prompts, the model isn't being changed.
I saw somebody else in either this sub or /r/openai using a very similar method to get GPT-4 to write and deploy a webpage that could accept valid email addresses. Of course, I can't find it, and neither can Bing Chat, so maybe I dreamed it. I distinctly remember asking if it could do QA, and then the person asked what I meant, and I said have it check for bugs. I post a lot so I can't find it in my post history.
I remember the way it worked was they gave it the task, then GPT-4 would write out what it was going to do, what it predicted would happen, write the code, and then check if what it did worked. If it didn't work it would write out why it didn't work, plan again, then act again. So it went plan->predict->act->check->plan. This successfully worked as it went from nothing to a working and deployed webpage without any human intervention other than setting the task.