ertgbnm
ertgbnm t1_jeb1cii wrote
Reply to comment by horance89 in Can quantum computers be used to develop AGI > ASI? by Similar-Guitar-6
Sure. That doesn't change my bet, though, because far more investment and human attention will be devoted to optimizing conventional architectures and software, since those have the largest return on investment at the moment. So the speedup goes to all sectors. Granted, quantum computing scales differently than conventional computing, but I still don't see a reality where it outperforms conventional computing at training model weights before we hit AGI. Also granted, there is probably more low-hanging fruit in quantum computing compared to the nearly century of maturity that conventional computing has. But there are trillions of dollars in conventional AI research and GPU manufacturing that would have to be retooled to achieve AGI via quantum computing, whereas I believe conventional approaches will get there faster, cheaper, and more easily. If I'm wrong, then the issue with my beliefs is the time horizon for AGI, not the future of technological development.
ertgbnm t1_jear5fe wrote
There's a lot of research into how quantum computers could help train neural networks with lower compute requirements. But I put a very low probability on quantum computing scaling fast enough to be useful at the model sizes we're working with in the near future.
ertgbnm t1_jdxgpnp wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Isn't this just progress?
ertgbnm t1_jdkv8rw wrote
Reply to [R] Reflexion: an autonomous agent with dynamic memory and self-reflection - Noah Shinn et al 2023 Northeastern University Boston - Outperforms GPT-4 on HumanEval accuracy (0.67 --> 0.88)! by Singularian2501
Umm, wow! I recommend backing up this GitHub repo before it gets taken down for "safety."
ertgbnm t1_jdegpsc wrote
Reply to How will you spend your time if/when AGI means you no longer have to work for a living (but you still have your basic needs met such as housing, food etc..)? by DreaminDemon177
I think that world is so unknowable it's pretty much impossible to say.
First, I'd let the AGI plan my day, because it will probably be way better at that than I am.
I think the utopian future will be made up of time with friends and family, mental and physical stimulation, good food, good rest, novel experiences, and novel destinations.
ertgbnm t1_jd0xwfh wrote
Reply to comment by Civil_Collection7267 in [Project] Alpaca-30B: Facebook's 30b parameter LLaMa fine-tuned on the Alpaca dataset by imgonnarelph
Good to hear. Thanks!
ertgbnm t1_jd028k5 wrote
Reply to [Project] Alpaca-30B: Facebook's 30b parameter LLaMa fine-tuned on the Alpaca dataset by imgonnarelph
I heard 30B isn't very good. Anyone with experience disagree?
ertgbnm t1_jcbhjzm wrote
I had a working version of Flappy Bird using the JavaScript sandbox over a year ago. The learning algorithm is pretty cool, though. The sandbox took a few prompts to get working too, but I didn't have to code a single thing.
ertgbnm t1_jbyocgi wrote
Reply to comment by serge_cell in [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
Isn't AlphaGo trained against itself? I would consider that adversarial training.
ertgbnm t1_ja3jq8t wrote
Reply to comment by [deleted] in Man successfully performs gene therapy on himself to cure his lactose intolerance by [deleted]
Nope, that was five years ago, and he is still uploading videos.
ertgbnm t1_j9ykyso wrote
Reply to comment by ActuatorMaterial2846 in Open AI officially talking about the coming AGI and superintelligence. by alfredo70000
Let the idiots move the goalposts. Prove them wrong by building some amazing stuff.
ertgbnm t1_j9jgoi9 wrote
Reply to comment by IluvBsissa in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
Read the questions on ScienceQA. They're "hot dog / not hot dog" type questions.
ertgbnm t1_j8e1vzz wrote
Reply to comment by eat-more-bookses in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
Since November we've had as much growth as I saw between June and November of last year, if not more. Doesn't seem like a plateau at all.
ertgbnm t1_j6e5hgp wrote
Reply to My human irrationality is already taking over: as generative AI progresses, I've been growing ever more appreciative of human-made media by Yuli-Ban
This post is no different from whingeing about CGI effects replacing practical effects. You're way off base, in my opinion. If first-gen generative models have taught us anything, it's that "human irrationality" is definitely automatable, and perhaps easier to automate than many other seemingly simpler tasks.
ertgbnm t1_j4rcli6 wrote
Currently you can co-author with ChatGPT and get a book of arbitrary length with enough revising, re-generation, and trial and error. The book will be okay, albeit cliché-ridden and surface-level in a lot of areas, but it'd be readable, and worse books certainly exist. Will we ever get to the point where we can do it with the click of a button? Probably. But even that alone would be superhuman: if you were told to write a book about a topic, it would take you a lot of revising, rewriting, and trial and error too.
ertgbnm t1_jef18w7 wrote
Reply to Will AI's make language learning useless? by IntroVertu
Definitely not.
It will make translation as a profession obsolete, and it will make working with people who speak different languages easier. But I don't see how it wouldn't still be a valuable skill: there's a big difference between connecting with someone face to face and connecting through a translator. I'd argue it's going to make language learning an even more accessible and rewarding hobby/skill than it already is.