Borrowedshorts

Borrowedshorts t1_j8fc7bk wrote

I'd say it's the opposite. 2 million members didn't sign up to this sub for academic-only discussions. If you want that, it would be best to start a subreddit expressly for that purpose. ChatGPT is changing the world, so calling those posts low quality is just gatekeeping the discussions people actually want to participate in.

−2

Borrowedshorts t1_j8f4clg wrote

Even if we assume that, it's not necessarily a problem, nor does it suggest that AI progress will slow anytime soon. We can afford to dedicate far more energy to AI improvement than we currently do. Recent multimodal models suggest there is still plenty of room for efficiency gains. We are far from energy limits becoming a primary concern, if they ever do, since AI self-improvement will make its own algorithms more efficient and get better and better at finding outside resources to exploit.

2

Borrowedshorts t1_j8erk8h wrote

It's as good as it sounds, and you can't really fake performance on a dataset like this. Multimodal models will change the game. I don't think multimodal models by themselves are the endgame, but they appear poised to take over state-of-the-art performance for the foreseeable future.

1

Borrowedshorts t1_j8emd66 wrote

One conversation where a user deliberately manipulated it into saying weird stuff doesn't mean it needs to be taken away from all users. I use it a bit like a research assistant and it helps tremendously. Do I trust all of its outputs? No, but it gives me a starting point for looking at topics in more detail.

4

Borrowedshorts t1_j8dym0f wrote

AI progress from the 1960s to 2010 was exponential, but it tracked Moore's law, and most of the progress was in symbolic AI, not connectionist AI. Part of the reason connectionist AI made so little headway during this period, in an argument made by Moravec, is that the computational power dedicated to connectionist research barely increased. From 2010 through the 2020s we've seen much faster progress in connectionist AI, well beyond Moore's law, at least 6x faster: the doubling time for training compute has shrunk from 1-2 years to 3-4 months. This is still exponential progress, just at a faster rate than Moore's law.
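
Quick sanity check on that ratio (a sketch in Python; the 18-month Moore's law doubling and 3-month AI-compute doubling are illustrative figures taken from the ends of the usual ranges):

```python
# Annual growth factor implied by a doubling time given in months:
# compute grows by 2 ** (12 / doubling_months) each year.
def annual_growth(doubling_months: float) -> float:
    return 2 ** (12 / doubling_months)

moore = annual_growth(18)  # Moore's law: doubling every ~18 months
ai = annual_growth(3)      # AI training compute: doubling every ~3 months

print(f"Moore's law: ~{moore:.2f}x per year")   # ~1.59x
print(f"AI compute:  ~{ai:.0f}x per year")      # ~16x
print(f"Doubling-rate ratio: {18 / 3:.0f}x")    # 6x, i.e. "at least 6x faster"
```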

1

Borrowedshorts t1_j8dx7i2 wrote

Computation and AI haven't shown S-curves; they have always been exponential. Some of the effects may be S-curves: Siri saw massive, rapid adoption that has since tapered off, and I suspect job displacement will show an S-curve too. But computation itself has demonstrated exponential progress for a very long time, and I doubt that slows anytime soon.
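
The difference is easy to see numerically: a pure exponential keeps the same doubling time forever, while a logistic (S-curve) grows like an exponential early on and then flattens toward a ceiling. A minimal illustration (the rate, ceiling, and midpoint are arbitrary values chosen for the demo):

```python
import math

def exponential(t: float, rate: float = 0.5) -> float:
    """Pure exponential: no ceiling, constant doubling time."""
    return math.exp(rate * t)

def logistic(t: float, rate: float = 0.5, ceiling: float = 100.0,
             midpoint: float = 10.0) -> float:
    """S-curve: grows like the exponential early, then saturates at `ceiling`."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# The exponential keeps exploding; the S-curve levels off near 100.
for t in (0, 5, 10, 15, 20):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  s_curve={logistic(t):6.1f}")
```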

2

Borrowedshorts t1_j7xz19r wrote

Yes, and this example actually isn't all that impressive. Google has demonstrated that an LLM can significantly improve decision making for a real-world robot, giving it a kind of 'common sense'. Check out PaLM-SayCan, which combines two models to perform real-world robotic tasks with the assistance of a language model.

18

Borrowedshorts t1_j7w6deg wrote

The faster things go now, the more likely it is to be a slow-takeoff scenario. AI models, though increasingly close to matching human performance on general tasks, are still very far from matching the human brain's parameter count at anything close to its efficiency. That will be a requirement before general ASI can bring about an intelligence explosion, which I still don't see happening before 2040. Meanwhile, I believe we are in the midst of a slow takeoff that will usher in enormous societal change with proto-AGI and AGI systems.

1

Borrowedshorts t1_j7szllb wrote

This is a good thing imo. Letting every mediocre candidate easily churn out customized cover letters will severely dilute the job-selection process even further. It's the same reason you need to send out dozens of resumes on Indeed to expect any sort of response: Indeed has made it easy, too easy, to get your resume in front of companies, which is why most go straight to the trash as soon as they reach a hiring manager.

0

Borrowedshorts t1_j7rikxh wrote

Don't underestimate the laziness of the average person. I'm okay with it not being accessible through a simple search site; if those who put in a little more effort can find an API that does the same thing, and that lets me get ahead, I'm perfectly fine with it. If it were easily accessible to every average Joe out there, what little purpose a cover letter still serves would be diluted to nothing.

−1

Borrowedshorts t1_j6vvtvd wrote

I wonder if it wouldn't make searching for answers a hundred times faster. Most of the call-center jobs I've worked wanted you to give only answers that were in some knowledge base. You could train an AI on everything in that knowledge base and have it recall the right entry instantly for any customer problem. Connect it to the other systems used for servicing accounts, and I'm pretty confident an AI could be far more efficient than even the best customer service agents.
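
The retrieval step alone is already easy to sketch. A toy version (the knowledge-base entries here are invented, and a real deployment would use embeddings or a fine-tuned model rather than plain TF-IDF):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy knowledge base; in practice this would be the company's real articles.
knowledge_base = [
    "To reset your password, go to Settings > Security and click Reset.",
    "Refunds are processed within 5-7 business days of approval.",
    "To update billing information, open Account > Payment Methods.",
]

vectorizer = TfidfVectorizer()
kb_vectors = vectorizer.fit_transform(knowledge_base)

def answer(question: str) -> str:
    """Return the knowledge-base entry most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, kb_vectors)[0]
    return knowledge_base[scores.argmax()]

print(answer("How do I reset my password?"))
# -> "To reset your password, go to Settings > Security and click Reset."
```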

26

Borrowedshorts t1_j6jrc54 wrote

They combined a platform called SayCan with an LLM, and it demonstrated much higher planning accuracy than anything previously shown in robotics. So apparently the LLM gives it some real-world smarts and a better grasp of the relationships between objects. Actual task execution still has a ways to go, the main limitation being robotic control algorithms, which Google is admittedly pretty bad at.
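
From what the paper describes, the core idea is simple: the LLM scores how useful each candidate skill is for the instruction, a learned value function scores how feasible that skill is in the current state, and the robot executes the skill with the best combined score. A hand-wavy sketch of that selection step (all the numbers are made-up stand-ins for the two models' outputs):

```python
# SayCan-style skill selection: combine a language-model "usefulness" score
# with an affordance "can I actually do this here?" score, pick the max product.
instruction = "I spilled my drink, can you help?"

# P(skill is a useful next step | instruction), as an LLM might score it:
llm_scores = {
    "find a sponge": 0.40,
    "pick up the sponge": 0.25,
    "go to the table": 0.20,
    "pick up an apple": 0.05,
}

# P(skill succeeds from the current state), from a learned value function:
affordance = {
    "find a sponge": 0.9,
    "pick up the sponge": 0.1,   # low: no sponge in view yet
    "go to the table": 0.8,
    "pick up an apple": 0.7,
}

best = max(llm_scores, key=lambda s: llm_scores[s] * affordance[s])
print(best)  # -> "find a sponge": both useful and currently feasible
```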

1

Borrowedshorts t1_j5s756d wrote

It has relational understanding equal or superior to the average human's in several different domains, and it does this without the benefit of a real-world model or experiences. It's like Helen Keller perceiving the world blind, deaf, and mute, yet understanding concepts many humans cannot. The people who proclaim they aren't the least bit impressed by it just show how little they know.

13

Borrowedshorts t1_j5ilq8l wrote

There's already evidence that they do learn world models. Google's robotics lab has demonstrated a sort of 'common sense' task understanding by adding an LLM to its control stack, perhaps the first time that's been done. LLMs and multimodal models will greatly speed up the algorithmic control capabilities of robotics; it's already been demonstrated.

10

Borrowedshorts t1_j55qvtl wrote

I don't think they are, honestly. They may know the intricacies and difficulties of their specific problem and then project that progress in other subdomains will be just as hard. That's probably true, but they also tend to underestimate the effort other groups are putting in and the progress that can happen in other subdomains, which isn't always linear. So imo they aren't really qualified to give an accurate prediction, because very few have actually studied the problem. I'd trust the people who have studied it: AGI experts, who tend to be much more optimistic than the AI field overall.

3

Borrowedshorts t1_j53lrlq wrote

The world has never seen anything like AI progress. AI capability has been improving by nearly an order of magnitude each year, which is completely unprecedented in human history. I think it's far more absurd to be confident AI progress will cease for no particular reason, which is what would have to happen for the post-2050 predictions to be correct.

9