Difficult_Review9741

Difficult_Review9741 t1_ja4lxdh wrote

I have a feeling that every task over a certain cognitive complexity will be automated at about the same time. Software development is definitely in this group.

If you truly believe that there will be no need for software developers in 3 years (which I think is far-fetched), then you have to conclude that there will probably be very few jobs left in 3 years. So, you might as well pursue what interests you anyway.

8

Difficult_Review9741 t1_j9wb1if wrote

Technical progress is a given, but remember that within those same N years of immense progress, many ideas also seemed imminent and then fizzled out. We don't live in The Jetsons.

Engineering is hard. Many approaches have limits that are undetectable until you hit them.

LLMs are really impressive, but the reality is that they have very few practical use cases at this point. So why expect people to care that much about them? Future progress is not inevitable.

By the way, there are tons of applications of AI/ML that have been far more impactful to society than LLMs, and yet no one ever seems to talk about those, because they aren't flashy.

10

Difficult_Review9741 t1_j95y49j wrote

This won't be very popular, but there is a lot of truth to it.

Remember, "divine spark" doesn't have to be a religious term. Even if consciousness is just a result of our neurons firing in a specific pattern, we still have no clue what this pattern is, and if it can be replicated in machines.

Think about it another way: assume we have a program that manually maps every possible language input to a hard-coded language output. From a black-box perspective, this would seem every bit as intelligent and "conscious" as an LLM, but anyone who understands the implementation would immediately reject the idea that this system is intelligent in any way.
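To make the thought experiment concrete, here's a minimal sketch in Python. It's entirely hypothetical (the table and replies are made up), and in principle the table would have to enumerate every possible input; two entries are enough to make the point:

```python
# A "chatbot" that is nothing but a hard-coded lookup table.
# (Hypothetical example; the entries are invented for illustration.)
RESPONSES = {
    "hello": "Hi there! How can I help you today?",
    "are you conscious?": "I think, therefore I am.",
}

def respond(prompt: str) -> str:
    """Return the pre-authored reply for a prompt, if one exists."""
    return RESPONSES.get(prompt.strip().lower(), "I don't understand.")

# From the outside, respond() looks like "understanding" on the inputs
# it covers -- yet there is plainly no intelligence inside.
print(respond("Are you conscious?"))  # -> "I think, therefore I am."
```

On the inputs it covers, no amount of black-box probing distinguishes this table from a system that "understands" the question.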

The point being, to determine whether a system is conscious, we can't simply examine its output. We first have to understand what consciousness is, and we aren't even close to that. There is clearly a lot that separates modern-day AI from humans. Yes, humans sometimes predict the statistically likely next token, but that is obviously not how our brains work in the general case.
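For what it's worth, here is a toy sketch (with made-up probabilities) of what "predicting the statistically likely next token" amounts to, stripped of the learned model that actually produces the distribution:

```python
import random

# Hypothetical probabilities a model might assign after "The cat sat on the".
next_token_probs = {"mat": 0.62, "floor": 0.21, "roof": 0.09, "moon": 0.08}

# Greedy decoding: always take the single most likely token.
greedy = max(next_token_probs, key=next_token_probs.get)

# Sampling: draw a token in proportion to its probability.
sampled = random.choices(list(next_token_probs),
                         weights=list(next_token_probs.values()))[0]

print(greedy, sampled)  # e.g. "mat mat" or "mat floor"
```

That selection step is all "next-token prediction" means; whatever humans are doing when they finish each other's sentences, it clearly isn't only this.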

As these systems become more advanced, it will be harder to assert with certainty that they are not conscious, but anyone claiming that they are conscious right now is either being disingenuous or has no idea what they are talking about.

2