dasnihil
dasnihil t1_j7kvz2l wrote
Reply to comment by dasnihil in John Carmack’s ‘Different Path’ to Artificial General Intelligence by lolo168
and we need this to cut that cost from $100bn to potato, because biology runs on potato hardware, not a $100bn supercomputer. if only the pseudonerds in the AI industry realized this, we'd be expediting our search for networks that converge more optimally.
dasnihil t1_j7kvu8z wrote
Reply to comment by SoylentRox in John Carmack’s ‘Different Path’ to Artificial General Intelligence by lolo168
we just need a team of a few math wizards to come up with better algorithms for training, matrix multiplication, and whatever NP-hard problems are lurking in meta-learning.. oh wait! we can just throw all our data into current AI and it will come up with the algorithms!!
this is how AGI will be achieved, there is no other way, because humanity doesn't produce many Emmy Noethers to come up with new ways to do math. humans are busy with their short lives and various indulgences.
dasnihil t1_j7kfjqo wrote
Reply to 200k!!!!!! by Key_Asparagus_919
yay more depressed people looking for a coping mechanism that doesn't sound too absurd.
dasnihil t1_j73jmie wrote
Reply to comment by vivehelpme in Will humanity reach its peak in this century? by Outdoorhans
sentience is not necessarily reserved for biological/evolutionary species; at least, we haven't confirmed or understood it well enough to say.
dasnihil t1_j6xv0jt wrote
Reply to comment by JenMacAllister in ChatGPT Passes US Medical Licensing Exams Without Cramming by RareGur3157
DocGPT, not to be confused with my other model, DogGPT.
dasnihil t1_j6tq7bh wrote
Reply to Former Instagram Co-Founders Launch AI-Powered Personalized News App, Artifact by Flaky_Preparation_50
back in 2012 i implemented an ML-based product recommendation system. is this what you mean by "AI powered personalized news"? just some recommendations based on your interests and clicks?
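the kind of click-based recommendation described above can be sketched in a few lines. this is a toy illustration under assumed names (`cosine`, `recommend`, and the example vectors are all made up for the sketch), not anyone's actual product code: rank items by cosine similarity between a user's click-history vector and each item's feature vector.

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user_clicks, items, k=2):
    # user_clicks: interest vector accumulated from past clicks
    # items: mapping of item name -> feature vector
    ranked = sorted(items, key=lambda name: cosine(user_clicks, items[name]), reverse=True)
    return ranked[:k]

# e.g. click counts on [tech, sports, politics] stories
user = [3, 0, 1]
items = {"gpu review": [1, 0, 0], "match recap": [0, 1, 0], "ai policy": [1, 0, 1]}
print(recommend(user, items))  # tech-leaning items rank first
```

this is "AI powered" in the 2012 sense: no language model, just nearest-neighbor ranking over interaction history.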
dasnihil t1_j6p6zrw wrote
Reply to comment by alexiuss in Gmail creator says ChatGPT will destroy Google's business in two years by AdSnoo9734
they also have various image generation AIs that work totally differently from the diffusion models (dall-e/stable diffusion), which i doubt they'll release to the public anytime soon.
dasnihil t1_j6p04zk wrote
Reply to comment by AdamAlexanderRies in ChatGPT creator Sam Altman visits Washington to meet lawmakers | In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,” by Buck-Nasty
yeah, look at this handsome fella i met https://i.imgur.com/0fsMOv2.jpg
walking around this area late night was something.
dasnihil t1_j6oz2ke wrote
Reply to comment by alexiuss in Gmail creator says ChatGPT will destroy Google's business in two years by AdSnoo9734
google's engineers came up with the transformer architecture that makes gpt-* possible. they have several alternative network architectures with their own strengths and weaknesses.
google is well aware of chatgpt taking market share, but they're eyeing something bigger, considering almost every household has its smart assistants and devices. i know how far ahead google thinks, i've done engineering for them.
dasnihil t1_j6o5mzd wrote
Reply to comment by [deleted] in How does society benefit from AGI? by beachinit23
nobody is pointing out the direct benefit to humanity yet, only focusing on nuclear fusion and the like.
the wishful thinking is that once we have nuclear fusion, energy gets dirt cheap. how long will it take for the rich to let go and stop making citizens pay for basic things like food, water, electricity, a HOUSE to live in, and so on?
i hope it won't take them long to work towards that future where the basic requirements for living are provided, including emotional support of whatever form.
look at it this way: we were primates once, and we didn't care about art, emotions, sentiments and attachments in today's form. our consciousness has transcended the regulatory needs of primate hardware and is now trying to engineer a society where everything the primate hardware needs is provided, and we only get to sink into our conscious experience. i mean i love fucking, but many more things too.
we got out of the food chain, meaning none of us has to hunt for food now or fear death while at it. the moment we got out of the food chain, our road till now has been about providing the basic necessities with the least suffering. we just have to keep going, and faster. suffering is no good.
i'm balls high right now.
dasnihil t1_j6o4icf wrote
Reply to comment by AdamAlexanderRies in ChatGPT creator Sam Altman visits Washington to meet lawmakers | In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,” by Buck-Nasty
thank you for correcting me.
i did get scaroused when this dude stepped in front of my jeep. he almost climbed on it too.
dasnihil t1_j6nxu43 wrote
Reply to comment by FC4945 in Andrew Moore is the head of AI at Google Cloud and the former dean of the Carnegie Mellon School of Engineering in Pittsburgh, where he has been at work on the big questions of AI for more than 20 years. Here he shares his vision for some of what we can expect over the next 10. by alfredo70000
as much as i admire ray, i don't think he has a say in when we will get it. there are a few million-dollar problems to solve before we solve the intelligence problem. but that's just my view, nobody has to agree or disagree, i just urge everyone to look into it.
dasnihil t1_j6niu4e wrote
Reply to comment by alakeya in I don’t think that artists will be doomed with AI by alakeya
we will soon be desensitized to "art" as we know it today in its various forms: drawing/painting, music, literature, etc.
you mentioned "print" as a technology, but you're discounting the fact that we're talking about a different beast here: a beast that can bring your imagination to life and wow you.
isn't art something that wows the audience and takes them places? and if a machine-generated thing is coherent enough to paint our imagination for us, do you see where we're headed?
art as we know it today will be understood better by future humans, just like we now understand religion and other human constructs far better than even Nietzsche did in his time. when he said "god is dead" it was shocking and profound; now we're all like "yeah god is dead, who gives a shit".
humans have a peculiar way of finding new constructs and making them a trend. a few of us see that; the rest just discuss art as if it weren't a human construct, as if the universe has a thing like "art" in there lol.
dasnihil t1_j6mvkio wrote
Reply to comment by arckeid in Andrew Moore is the head of AI at Google Cloud and the former dean of the Carnegie Mellon School of Engineering in Pittsburgh, where he has been at work on the big questions of AI for more than 20 years. Here he shares his vision for some of what we can expect over the next 10. by alfredo70000
turing never thought of this test as a human talking to a machine to see if it's smart.
he had the intelligence problem in mind and thought that state machines that are turing complete could become generally intelligent just like humans.
and if we're talking about that kind of general intelligence, i don't think we will get it by 2029, but what do i know.
dasnihil t1_j6mo6n0 wrote
Reply to comment by alfredo70000 in Andrew Moore is the head of AI at Google Cloud and the former dean of the Carnegie Mellon School of Engineering in Pittsburgh, where he has been at work on the big questions of AI for more than 20 years. Here he shares his vision for some of what we can expect over the next 10. by alfredo70000
the turing test is not a sufficient test, and people can't be the judge, people are easily fooled. but i doubt it'll take that long from where we are.
dasnihil t1_j6mnry9 wrote
Reply to comment by AdamAlexanderRies in ChatGPT creator Sam Altman visits Washington to meet lawmakers | In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,” by Buck-Nasty
beautiful place, Calgary. i hiked in banff last year, felt like leaving America for good.
and yeah, the AI stuff, i forgot what my comment even was, who cares, Calgary is a beautiful city!! screw ai and humanity.
dasnihil t1_j6i7amm wrote
Reply to comment by Superschlenz in ChatGPT creator Sam Altman visits Washington to meet lawmakers | In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,” by Buck-Nasty
When horses became obsolete because of cars, I'm glad there were people lobbying who made cars possible today.
I'm fine with lobbying if it brings attention from the people who should be paying attention to the fuckery we're going to get into if we let these tools evolve without planning.
dasnihil t1_j63d628 wrote
Reply to comment by Professional-Song216 in MusicLM: Generating Music From Text (Google Research) by nick7566
i also cannot wait dear fellow scholar
dasnihil t1_j5z5ijq wrote
Reply to comment by beezlebub33 in Gary Marcus refuted?? by FusionRocketsPlease
disclaimer: idk much about gary marcus, i only follow a few people closely in the field like joscha bach, and i'm sure he wouldn't say or worry about such things.
if you give 3 hands to a generally intelligent neural network, it will figure out how to make use of 3 hands, or no hands; it doesn't matter. so those trivial things are not what to worry about; the problem at hand is different.
dasnihil t1_j5yd0ai wrote
Reply to comment by GlobusGlobus in Gary Marcus refuted?? by FusionRocketsPlease
that's fine, and it's a great tool, like most tools humans have invented. i'd even say NNs and gradient descent are the greatest idea so far. so what, we must keep going while society makes use of inventions along the way.
dasnihil t1_j5ybc3o wrote
Reply to comment by GlobusGlobus in Gary Marcus refuted?? by FusionRocketsPlease
if gpt is more important to you, that's okay. everyone has a mission and it doesn't have to be the same. there are physicists still going at it without caring much about gpt or agi. who cares man, we have a limited life and we'll all be dead sooner or later. relax.
dasnihil t1_j5vnkn4 wrote
Reply to comment by EOE97 in Gary Marcus refuted?? by FusionRocketsPlease
all my engineering intuition bets against that. but i do get the idea, and i also have a good intuition about what kind of intelligence this approach will give rise to, and i'm okay with that. nothing wrong with scaled-up LLMs and reinforcement learning. all innovative algorithms are welcome. engineers will keep at it while fancy things distract others.
dasnihil t1_j5tvy2m wrote
Reply to Gary Marcus refuted?? by FusionRocketsPlease
futurology is the worst subreddit for factual information.
gary marcus' objections have nothing to do with world models; they're about the fact that both deep learning and LLMs have nothing to do with intelligence as we see it in biological species, i.e. their lack of ability to generalize. it's all fundamentally based on optimization via gradient learning, and in my view that's the opposite route to take when we're trying to engineer general intelligence.
dasnihil t1_j5tub1a wrote
Besides the industrial implications of changes to come, I see some psychological changes in general public.
- We will soon be desensitized to art as we know it today (pretty photos, beautiful landscapes, renaissance work), because when things become too cheap to produce without requiring any expertise, they automatically diminish in value and society comes up with new trends
- Psychologically, the people who grew up to be aspiring artists of various forms (writing, painting, music, film, photography etc) will shy away from such forms of art, and generations to come will have new forms of art born from the scarcity created by automation
- Eventually we'll get self-awareness engineered into digital networks and we'll go back to the old ways of art, mixed with the new philosophies that will emerge when we give birth to our own sentient systems; immortality will have its own baggage of problems
dasnihil t1_j8ir9qu wrote
Reply to comment by Tiamatium in Speaking with the Dead by phloydde
and i'd like to say "careful with that axe, eugene" to the engineers adding persistent memory to these LLMs. i'm both excited and concerned to see what comes out when these LLMs are not responding to prompts but to information of various kinds that we make them constantly perceive in auditory or optical form.