Borrowedshorts
Borrowedshorts t1_j4hnexk wrote
Reply to comment by SnooDonkeys5480 in Microsoft invests $10 billion in large language models development by SalzaMaBalza
It could take longer than you think. There isn't even a good automated solution for linking references in Word, and that seems pretty basic.
Borrowedshorts t1_j42uyey wrote
No, and just think about it: if LLMs become monetizable at the scale that other tech areas such as search or social media have reached, there's a ton of opportunity there, and you have a leg up on everyone else.
Borrowedshorts t1_j3x3bhr wrote
It's about both, I would guess.
Borrowedshorts t1_j3db194 wrote
Reply to comment by zendonium in Now that’s pretty significant! (By Anthropic) by MajorUnderstanding2
It's not all or nothing. It has an understanding of many types of relationships, but its understanding can be limited in certain cases, more in some than in others. It doesn't have to be that it understands everything or understands nothing; there is a middle ground.
Borrowedshorts t1_j30dr0i wrote
Reply to comment by [deleted] in 2022 was the year AGI arrived (Just don't call it that) by sideways
Attempting to skip an entire autonomy level was idiotic and has slowed progress in the AV industry immensely. Engineers and business leaders bet big that they could skip L3 autonomy. Well, they were wrong.
Borrowedshorts t1_j30db4j wrote
Reply to comment by manOnPavementWaving in 2022 was the year AGI arrived (Just don't call it that) by sideways
Agreed; cause and effect has already been demonstrated in much smaller models. It seems OP is making up arbitrary limitations and hoping they stick.
Borrowedshorts t1_j30d0cx wrote
Reply to comment by GoldenRain in 2022 was the year AGI arrived (Just don't call it that) by sideways
In some ways, it already has more intelligence than even >90th-percentile humans. ChatGPT can write a good-quality five-page essay in seconds that might take most humans at least five hours. It has a breadth of knowledge that few humans can match. No, it doesn't learn continuously, but I'd say in some ways it is pretty adaptive, and cause and effect really isn't that difficult.
Borrowedshorts t1_j2lnqf6 wrote
Reply to [D] Data cleaning techniques for PDF documents with semantically meaningful parts by cm_34978
2023 and we still can't automate working with PDF documents. Sad.
Borrowedshorts t1_j2ffld8 wrote
Reply to comment by Analog_AI in GPT-3 scores better than humans on Raven’s Progressive Matrices in a display of emergent analogical reasoning by visarga
It seems like most AI companies have been doing this for now. I wonder if they're optimizing toward a local maximum instead of the global one, and whether the global one can only be reached through further scale.
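A minimal sketch of that intuition, with a made-up one-dimensional objective (everything here is illustrative, nothing is from the linked paper): greedy hill-climbing from one starting point gets stuck on a local peak, while a wider sweep, loosely standing in for more scale, finds the global one.

```python
import numpy as np

# Hypothetical 1-D objective with a local and a global maximum.
def objective(x):
    return np.sin(3 * x) * np.exp(-0.1 * (x - 5) ** 2)

def hill_climb(x, step=0.01, iters=10_000):
    """Greedy local search: only ever accepts uphill moves."""
    for _ in range(iters):
        for candidate in (x - step, x + step):
            if objective(candidate) > objective(x):
                x = candidate
    return x

local = hill_climb(0.5)            # gets stuck on the nearby local peak (~0.13)
xs = np.linspace(-2, 10, 10_000)   # a much wider sweep, standing in for "more scale"
global_x = xs[np.argmax(objective(xs))]

print(f"local search peak: x={local:.2f}, f={objective(local):.3f}")
print(f"global sweep peak: x={global_x:.2f}, f={objective(global_x):.3f}")  # ~0.99
```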
Borrowedshorts t1_j1i2uv9 wrote
Reply to There are far more dissenting opinions in this sub than people keep saying. by Krillinfor18
My honest opinion is that those who are optimistic about the singularity don't truly understand it. Your entire life and everything you know will be entirely upended. The pace of change will be greater and hit with more ferocity than we can comprehend. Maybe you think your life sucks now and the singularity will somehow make it better, but it will truly be an alien world compared to everything you now know.
Borrowedshorts t1_j12omqy wrote
Reply to comment by Akashictruth in Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
Yep, the state of complementary and even disparate technologies is converging on multiple breakthroughs this decade.
Borrowedshorts t1_j0sm79z wrote
Reply to Is progress towards AGI generally considered a hardware problem or a software problem? by Johns-schlong
Real-time inference is still limited, and there is still a wide gap between humans and AI. If we assume a human is equivalent to roughly 100 trillion parameters and the limit of real-time AI inference is around 20 billion parameters, we still have a long way to go before hardware matches human capability. Both are constraints, though software has typically followed quickly once the hardware allowed it. Imo, hardware is actually the bigger constraint.
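For what it's worth, here's the back-of-the-envelope arithmetic behind that gap, using the figures assumed above (both numbers are rough assumptions, not measurements):

```python
# Rough arithmetic on the figures above; both are assumptions from the
# comment, not measured values.
human_params = 100e12    # ~100 trillion "parameters" assumed for a human
realtime_limit = 20e9    # ~20 billion parameters assumed for real-time inference

gap = human_params / realtime_limit
print(f"Hardware gap: ~{gap:,.0f}x")  # -> ~5,000x
```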
Borrowedshorts t1_j0sk9tm wrote
This is mostly right. People definitely get more respect when they work. As a personal example, I delayed entering the workforce for a while because I was working on a research project I was prouder of than anything a job would have brought me. But my family couldn't understand why I was doing the project, let alone putting off work for it. It's just something they couldn't connect with. OP is definitely right that there's a social contract of sorts where you attain higher status because you have a job, and better yet a career, and also if you get married or have kids. I for one can't wait until this social contract tying work to respect gets destroyed.
Borrowedshorts t1_ixadfcb wrote
Reply to Would like to say that this subreddit's attitude towards progress is admirable and makes this sub better than most other future related discussion hubs by Foundation12a
Yeah, I agree. Though AGI approaching the singularity is one of the few technologies that actually scares the shit out of me.
Borrowedshorts t1_j53ksqo wrote
Reply to comment by icedrift in I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
There are two types of AI experts: those who focus their efforts on a very narrow subdomain, and those who study the problem through a broader lens. The latter group, the AGI experts who have actually studied the problem as a whole, tend to be very optimistic on timelines. I'd trust the opinion of those who have actually studied the problem over those who haven't. There are numerous examples of experts in narrow subdomains being wrong or simply blindsided by changes they could not see.