Borrowedshorts
Borrowedshorts t1_j8fc3wu wrote
Reply to [D] Quality of posts in this sub going down by MurlocXYZ
I'd say it's the opposite. 2 million members didn't sign up to this sub for academic-only discussions. If you want that, it would be best to start a subreddit expressly for that purpose. ChatGPT is changing the world, so calling those posts low quality is just gatekeeping the discussions people actually want to participate in.
Borrowedshorts t1_j8f4clg wrote
Reply to comment by hydraofwar in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
Even if we assume that, it doesn't necessarily pose a problem or suggest that AI progress will slow anytime soon. We can afford to dedicate far more energy to AI improvement than we currently do. Recent multimodal models suggest there is still plenty of room for efficiency gains. We are far from energy limitations becoming a primary concern, if they ever do, since AI self-improvement will make its own algorithms more efficient and get better and better at finding outside resources to exploit.
Borrowedshorts t1_j8erk8h wrote
Reply to comment by NTIASAAHMLGTTUD in This is Revolutionary?! Amazon's 738 Million(!!!) parameter's model outpreforms humans on sience, vision, language and much more tasks. by Ok_Criticism_1414
It's as good as it sounds, and you can't really fake performance on a dataset like this. Multimodal models will change the game. I don't think multimodal models by themselves are the end game, but they appear poised to take over state-of-the-art performance for the foreseeable future.
Borrowedshorts t1_j8emd66 wrote
Reply to comment by wren42 in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
One conversation where a user got it to say weird stuff because he purposely manipulated it doesn't mean it needs to be taken away from all users. I use it a bit like a research assistant and it helps tremendously. Do I trust all of its outputs? No, but it gives me a starting point for looking at topics in more detail.
Borrowedshorts t1_j8dz7r7 wrote
Reply to comment by PrivateUser010 in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
AI likely doesn't exhibit this, but it has been advancing faster than Moore's law. The only thing likely to exhibit double-exponential growth is quantum computing.
Borrowedshorts t1_j8dym0f wrote
AI progress from the 1960s to 2010 was exponential, but it tracked Moore's law, and most of the progress was in symbolic AI rather than connectionist AI. Part of the reason connectionist AI made few advances during this period, in an argument made by Moravec, is that it didn't get any increase in the computational power dedicated to it. From 2010 through the 2020s, we've seen much faster progress in connectionist AI, well beyond Moore's law, at least 6x faster: the doubling time has shortened from 1-2 years to 3-4 months. This is still exponential progress, just at a faster rate than Moore's law.
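A rough sketch of that arithmetic (the 24-month and 3.5-month doubling times are illustrative round numbers, not measurements):

```python
# Compare the annual growth implied by different doubling times:
# growth_per_year = 2 ** (12 / doubling_time_in_months)

def annual_growth(doubling_months: float) -> float:
    """Factor by which a quantity grows over 12 months."""
    return 2 ** (12 / doubling_months)

moore = annual_growth(24)        # Moore's law: doubling every ~2 years
ai_compute = annual_growth(3.5)  # recent AI trend: doubling every ~3.5 months

print(f"Moore's law: ~{moore:.2f}x per year")
print(f"AI compute trend: ~{ai_compute:.2f}x per year")
print(f"Doubling-time ratio: ~{24 / 3.5:.1f}x faster")
```

With a 3.5-month doubling time, a year of progress multiplies capability-relevant compute by roughly 10x, versus about 1.4x under Moore's law, which is where the "6x faster doubling" claim comes from.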
Borrowedshorts t1_j8dx7i2 wrote
Reply to comment by duffmanhb in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
Computation and AI haven't followed S-curves; they have always been exponential. Some of the effects may be S-curves: Siri, for example, saw massive and rapid adoption that has since tapered off, and I suspect job displacement will show an S-curve too. But computation itself has demonstrated exponential progress for a very long time, and I doubt that slows anytime soon.
Borrowedshorts t1_j8c6nng wrote
Reply to comment by chrisc82 in Bing Chat blew ChatGPT out of the water on my bespoke "theory of mind" puzzle by Fit-Meet1359
Doctorate-level research will be one of the first dominoes to fall, not the last. I suspect AI will be far better at it than humans.
Borrowedshorts t1_j7xz19r wrote
Yes, and this example actually isn't all that impressive. Google demonstrated that an LLM can significantly improve decision making for a real-world robot, giving it a type of 'common sense'. Check out PaLM-SayCan, a collaboration of two models that can perform real-world robotic tasks with the assistance of a language model.
Borrowedshorts t1_j7w6deg wrote
Reply to Based on what we've seen in the last couple years, what are your thoughts on the likelihood of a hard takeoff scenario? by bloxxed
The faster things go now, the more likely it is to be a slow-takeoff scenario. AI models, though they are getting increasingly close to matching human performance on general tasks, are still very far from matching the human brain's parameter count at anything near its efficiency. That will be a requirement before general ASI can bring about an intelligence explosion, which I still don't see happening before 2040. Meanwhile, I believe we are already in the midst of a slow takeoff that will usher in enormous societal change with proto-AGI and AGI systems.
Borrowedshorts t1_j7szllb wrote
Reply to I asked Microsoft's 'new Bing' to write me a cover letter for a job. It refused, saying this would be 'unethical' and 'unfair to other applicants.' by TopHatSasquatch
This is a good thing imo. Letting every mediocre candidate easily write customized cover letters would dilute the job selection process even further than it already is. It's the same reason you need to send out dozens of resumes on Indeed to expect any sort of response: Indeed has made it too easy to get your resume in front of companies, which is why most go straight to the trash as soon as they reach a hiring manager.
Borrowedshorts t1_j7rikxh wrote
Reply to comment by Scoimies in I asked Microsoft's 'new Bing' to write me a cover letter for a job. It refused, saying this would be 'unethical' and 'unfair to other applicants.' by TopHatSasquatch
Don't underestimate the laziness of the average person. I'm okay with it not being accessible through a simple search site; if putting in a little more effort to find an API that does the same thing lets me get ahead, I'm perfectly fine with that. If it were easily accessible to every average Joe out there, what little purpose a cover letter still serves would be diluted to nothing.
Borrowedshorts t1_j7pvvfv wrote
Reply to I asked Microsoft's 'new Bing' to write me a cover letter for a job. It refused, saying this would be 'unethical' and 'unfair to other applicants.' by TopHatSasquatch
I'm okay with that, honestly. Not every Tom, Dick, and Harry who can access one of the most popular search sites will be able to pad their job resume, but those who put in a little more effort to find an alternative application will.
Borrowedshorts t1_j7ahe1p wrote
Reply to comment by Lawjarp2 in Major leak reveals revolutionary new version of Microsoft Bing powered by ChatGPT-4 AI by Phoenix5869
I'm convinced it's using a GPT-3 model. If that's the best a GPT-4 model can do, then it's extremely disappointing.
Borrowedshorts t1_j6y47g2 wrote
Reply to comment by Iffykindofguy in The next Moravec's paradox by CharlisonX
Construction jobs are some of the first use cases I've seen for drones in industry.
Borrowedshorts t1_j6vvtvd wrote
Reply to comment by [deleted] in GPT tool that lets you connect to databases and ask questions in text. by Mogen1000
I wonder if it wouldn't make searching for answers 100 times faster. Most of the call-center jobs I've worked wanted you to give only answers that were in some knowledge base. You could train an AI on everything in that knowledge base and have it recall the right answer instantly for any customer problem. Connect it with all the other systems used for servicing accounts, and I'm pretty confident an AI could be more efficient than even the best customer service agents.
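A minimal sketch of the retrieval idea. The knowledge-base entries and word-overlap scoring below are invented for illustration; a production system would use embedding similarity or a fine-tuned model, but the lookup flow is the same:

```python
import re

# Hypothetical call-center knowledge base: canned answers keyed by topic.
KNOWLEDGE_BASE = {
    "reset_password": "To reset a password, send the customer a reset link from the admin panel.",
    "billing_dispute": "For a billing dispute, open a ticket with the billing team and note the invoice number.",
    "close_account": "Account closure requires identity verification and a 30-day notice.",
}

def _words(text: str) -> set:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_answer(query: str) -> str:
    """Return the knowledge-base entry sharing the most words with the query."""
    q = _words(query)
    key = max(KNOWLEDGE_BASE, key=lambda k: len(q & _words(KNOWLEDGE_BASE[k])))
    return KNOWLEDGE_BASE[key]

print(best_answer("customer wants to reset their password"))
```

Even this toy version returns the relevant entry instantly instead of an agent paging through the knowledge base by hand.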
Borrowedshorts t1_j6lxx4f wrote
It would be very easy to make an AI-based dating app that mimics human interaction. Have >95% of the girls swipe left on your profile, and of the few that somehow match, have 90% either not respond or give one-word answers to everything. It would be indistinguishable from any other dating app, whether you're talking to a human or an AI.
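The joke is easy to simulate. The 5% match rate and 10% reply rate are the figures from the comment; everything else is made up:

```python
import random

def simulate(n_profiles: int, seed: int = 0) -> tuple:
    """Simulate swipes: ~5% of profiles match, ~10% of matches actually reply."""
    rng = random.Random(seed)  # seeded for reproducibility
    matches = sum(rng.random() < 0.05 for _ in range(n_profiles))
    replies = sum(rng.random() < 0.10 for _ in range(matches))
    return matches, replies

matches, replies = simulate(10_000)
print(f"{matches} matches and {replies} real conversations out of 10,000 swipes")
```

Out of 10,000 swipes you expect roughly 500 matches and only about 50 actual conversations, which is the punchline.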
Borrowedshorts t1_j6jrc54 wrote
Reply to comment by blissblogs in Do Large Language Models learn world models or just surface statistics? by Buck-Nasty
They combined a platform called SayCan with an LLM, and it demonstrated much higher planning accuracy than anything previously shown in robotics. So the LLM apparently gives it some real-world smarts and a better understanding of the relationships between objects. Actual task execution still has a ways to go, the main limitation being robotic control algorithms, which Google is admittedly pretty bad at.
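Roughly how SayCan picks a skill: the LLM scores how useful each skill description is for the instruction, a learned value function scores how feasible that skill is in the current scene, and the robot executes the skill with the highest product of the two. The skills and scores below are invented for illustration:

```python
# Toy SayCan-style skill selection: combine a language-model usefulness score
# with an affordance (feasibility) score, then pick the best product.
# In the real system the scores come from PaLM and a learned value function.

skills = {
    "pick up the sponge": {"llm": 0.70, "affordance": 0.90},
    "pick up the apple":  {"llm": 0.20, "affordance": 0.95},
    "go to the kitchen":  {"llm": 0.60, "affordance": 0.10},  # not feasible from here
}

def choose_skill(candidates: dict) -> str:
    """Select the skill maximizing llm_score * affordance_score."""
    return max(candidates, key=lambda s: candidates[s]["llm"] * candidates[s]["affordance"])

print(choose_skill(skills))  # "pick up the sponge": 0.63 beats 0.19 and 0.06
```

The affordance term is what keeps the LLM's plans grounded: a step that reads well in language but can't be executed in the scene gets scored down.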
Borrowedshorts t1_j5xda2b wrote
Reply to This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
Part of the reason could be that r/MachineLearning has gone downhill. It's like they're stuck on problems from five years ago. I want to see stuff that's cutting edge, and this is the place for it.
Borrowedshorts t1_j5s756d wrote
Reply to Anyone else kinda tired of the way some are downplaying the capabilities of language models? by deadlyklobber
It has relational understanding equal or superior to the average human's in several different domains, and it does this without the benefit of a real-world model or experiences. It's like Helen Keller, perceiving the world without sight or hearing yet understanding concepts many humans cannot. People who proclaim they aren't the least bit impressed by it only show how little they know.
Borrowedshorts t1_j5ilq8l wrote
There's already evidence that they do learn world models. Google's robotics lab has demonstrated a sort of 'common sense' task understanding by adding an LLM to its control stack, perhaps the first time it's been done. LLMs and multimodal models will greatly speed up the algorithmic control capabilities of robotics; it's already been demonstrated.
Borrowedshorts t1_j55qvtl wrote
Reply to comment by SurroundSwimming3494 in I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
I don't think they are, honestly. They may know the intricacies and difficulties of their own specific problem and then project that progress in other subdomains will be just as hard. That may be true, but they also tend to underestimate the efforts other groups are putting in and the progress that can happen in those subdomains, which isn't always linear. So imo they aren't really qualified to give an accurate prediction, because very few have actually studied the problem. I'd trust the people who have: AGI researchers tend to be much more optimistic than the AI field overall.
Borrowedshorts t1_j53m7r6 wrote
Reply to comment by 94746382926 in I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
That group also consists of a disproportionate number of researchers who have actually studied AGI broadly.
Borrowedshorts t1_j53lrlq wrote
Reply to comment by icedrift in I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
The world has never seen anything like AI progress. AI capability has been advancing at nearly an order of magnitude improvement each year, which is completely unprecedented in human history. I think it's far more absurd to be confident AI progress will cease for no particular reason, which is what would have to happen for the post-2050 predictions to be correct.