ReadSeparate
ReadSeparate t1_iuu7lef wrote
Reply to Do you think we could reach a singularity without the invention of agi? by Effective-Dig8734
You mean like reach the singularity through narrow AI? Yeah, absolutely. If we could make a narrow AI whose only function is to iteratively maximize its own intelligence, I don't see why that wouldn't be possible in principle. An AGI would presumably emerge at some point in that iterative cycle, but I don't believe humans have to make the AGI directly.
I do, however, think it's significantly more likely that humans will directly create AGI, probably with the assistance of narrow AIs, rather than narrow AIs doing it solely by themselves. That AGI would then be able to recursively self-improve and lead to the singularity.
ReadSeparate t1_iu4njyx wrote
Reply to comment by augustulus1 in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
There's something you're missing here, though: the minds in a superintelligent society will also be super-competent at convincing people to stop being Luddites. They would probably be capable of saying the PERFECT thing to convince virtually everyone, and those who aren't convinced will eventually die off, because they will presumably refuse life-extension tech as well.
So, in the long term, we're talking about the whole planet here.
There's also the possibility that the superintelligent society does it by force. They may determine it's less immoral to force holdouts to assimilate than to allow them to live regular human lives filled with suffering and hardship.
ReadSeparate t1_itw079x wrote
Reply to ai psd files by mattdyer
Maybe an even better idea is img2psd. It would be easy to generate training data for this automatically: start with noise or a random image in a PSD file, make random changes like adding layers, drawing lines, and adding text, then render the corresponding PNG.
Then, you tokenize the PNG and the PSD files, and use the PNG as the input and the PSD as the output for the training data.
Could make a shit load of training data effortlessly that way.
That way we could use the current prompt-to-image solutions and just feed the resulting image into this new model to output a PSD.
I'm not sure how well it would work, but it would be cool to try. Maybe it would also need some human-labeled data.
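The synthetic-data loop above could be sketched roughly like this. This is a minimal toy, not real PSD authoring: it stands in for layers with colored rectangles and "renders" by compositing them into a 2D grid, whereas a real pipeline would write actual PSD layers (text, shapes, raster) with an image-editing library and export a PNG. All names here (`random_layer`, `make_example`) are hypothetical, made up for illustration.

```python
import random

def random_layer(size):
    # One fake "layer": a randomly placed rectangle with a random color (an int).
    x0, y0 = random.randrange(size), random.randrange(size)
    w, h = random.randint(1, size - x0), random.randint(1, size - y0)
    return {"x": x0, "y": y0, "w": w, "h": h, "color": random.randint(1, 255)}

def make_example(size=16, max_layers=4):
    """Return (flattened_image, layers): one (input, target) training pair.

    The flattened image plays the role of the rendered PNG; the layer list
    plays the role of the tokenized PSD the model learns to reconstruct.
    """
    layers = [random_layer(size) for _ in range(random.randint(1, max_layers))]
    image = [[0] * size for _ in range(size)]
    for layer in layers:  # composite bottom-up: later layers paint over earlier ones
        for y in range(layer["y"], layer["y"] + layer["h"]):
            for x in range(layer["x"], layer["x"] + layer["w"]):
                image[y][x] = layer["color"]
    return image, layers
```

Since the generator is pure code, you can produce effectively unlimited (image, layers) pairs; the only real costs are rendering time and making the random edits diverse enough to cover real-world PSDs.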
ReadSeparate t1_ittgzjh wrote
Reply to comment by 4e_65_6f in Large Language Models Can Self-Improve by xutw21
I've never really cared too much about the moral issues involved here, to be honest. People always talk about sentience, sapience, consciousness, and the capacity to suffer, and all of that matters. What I think is far more pressing, though, is: can this model replace a lot of people's jobs, and can it surpass the entire collective intelligence of the human race?
Like, if we did create a model and it did suffer a lot, that would be a tragedy. But it would be a much bigger tragedy if we built a model that wiped out the human race, or if we built superintelligence and didn't use it to cure cancer or end war or poverty.
I feel like the cognitive capacity of these models is the #1 concern by a factor of 100. The other things matter too, and it might turn out that we'll be seen as monsters in the future for enslaving machines or something; that's certainly possible. But I just want humanity to evolve to the next level.
I do agree, though, that it's probably going to be extremely difficult, if not impossible, to get an objective view of the subjective experience of a mind like this, unless we can directly inspect it somehow rather than just asking it how it feels.
ReadSeparate t1_itqkpoj wrote
Reply to comment by 4e_65_6f in Large Language Models Can Self-Improve by xutw21
When GPT-3 first came out, I had a similar realization about how this all works.
Rather than thinking in terms of a binary "is this intelligence or not," it's much better to think in terms of accuracy: the probability of giving correct outputs.
Imagine you had a gigantic non-ML computer program with billions or trillions of IF/THEN statements, no neural networks involved, just IF/THEN logic in, say, C++, and its output was 99.9% accurate to what a real human would do/say/think. A lot of people would say that this isn't a mind at all and it's not "real intelligence," but are you still going to feel that way when it steals your job? When it gets elected to office?
Behavioral outputs ARE all that matters. Who cares whether a self-driving car "really understands driving" if it's safer and faster than a human driver?
It's just a question of how accurate these models are at approximating human behavior. Once one gets past the point where any of us can tell the difference, it has earned the badge of intelligence in my mind.
ReadSeparate t1_itjm3kw wrote
Reply to comment by NTIASAAHMLGTTUD in Large Language Models Can Self-Improve by xutw21
I wonder if this can keep being done iteratively or if it will hit a wall at some point?
ReadSeparate t1_itgs97l wrote
Reply to comment by Zermelane in Given the exponential rate of improvement to prompt based image/video generation, in how many years do you think we'll see entire movies generated from a prompt? by yea_okay_dude
There is one big assumption in this, and that's that we won't get ALL of those things out of scale alone. It's entirely possible someone builds a multi-modal model trained on text, video, and audio, and a text-to-movie generator is simply a secondary feature of such a model.
If this does happen, we could see it as soon as 2-5 years from now, in my opinion.
The one major breakthrough I DO think we need before text-to-movie is something to replace Transformers, since they aren't really capable of long-term memory without hacks, and the hacks don't seem very good. You need long-term memory to make a coherent movie.
I think it's pretty likely that everything else will be accomplished through scale and multi-modality.
ReadSeparate t1_iv34tpk wrote
Reply to comment by ihateshadylandlords in How do you think an ASI might manifest? by SirDidymus
Imagine being so dumb and short-sighted you use ASI to make money, I hope they're not that unwise.