Recent comments in /f/singularity

AsuhoChinami t1_jegj7xl wrote

I wonder about the connection between how happy your upbringing was, how comfortable you found the 'old world' you grew up in, and how excited you are about technological and societal change. As someone who found little good in the world of my childhood and teens, it's all too easy for me to see it change. If you loved the world of your earlier years, though, it's probably a lot harder to see change as a positive.

83

bh9578 t1_jegj3u6 wrote

There’s no 4D chess here. Bard is the best they have. That’s why Larry Page and Sergey Brin had to fly in for a “code red” emergency meeting. I don’t find it too surprising that a scrappy startup was able to outwit Google; that’s been a recurring story in business. Google is a typical large company with too many committees and too much red tape. They got complacent and fell asleep at the wheel. Same thing that happened to IBM, GE, Blockbuster, Barnes & Noble, etc. Microsoft and Apple are incredibly rare examples of businesses that have managed to stay relevant and reinvent themselves.

1

internet_czol t1_jegiyh8 wrote

Yeah true, it was more likely just about testing at a larger scale than they could manage themselves without outsourcing to the public, but from the results I've seen, Bard doesn't look like it was worth it. It's possible they have a better model they haven't released, so the next public update would look even more impressive by comparison and they can say "look how quickly we can improve our model!"

2

sdmat t1_jegivhn wrote

There's also a huge opportunity to speed up scientific progress with better coordination and trust. So much of the effort that goes into the scientific method in practice is spent working around human failings and self-interest. If we had demonstrably reliable, well-aligned AI (GPT-4 is not this), the overall process could be much more efficient, even if all it did was advise and review.

4

phriot t1_jegify6 wrote

Reply to AI investment by Svitii

Leading today doesn't mean that they'll lead in the future. 3dfx and ATI were top dogs for graphics in the 1990s. They both got acquired. Digital Research was an early OS competitor to Microsoft, until CP/M got taken out by PC-DOS/MS-DOS.

I think that a total market index will be sufficient to invest in, because whatever tomorrow's large corporations are, they'll be the ones benefiting from AI. That said, if you want to take a flier, using 5-10% of your net worth (maybe up to 30% if you're under 25) to speculate in individual stocks and/or AI-focused ETFs wouldn't be awful.

1

Appropriate_Bat_2617 t1_jegiboy wrote

I just lost my office job recently and am definitely considering a new career in the trades, perhaps plumbing. It's hard to predict, but I agree that manual/skilled labour will probably last a while. Perhaps we’ll have robots working alongside us, though.

2

YobaiYamete t1_jegi7zb wrote

But you don't understand, I'm only pretending to completely sabotage my AI until it's useless in the name of "ethics"!

It's my responsibility as a billion-dollar company to decide that the peasants can't handle an AI that can write literotica when asked or knows what maryjuwana is, so I have to make sure I have no viable product while I accomplish that goal!!

6

Smallpaul t1_jegi080 wrote

They wouldn’t do it in-house. They would fund some kind of coalition.

Also: it’s been shown that you can use one AI to train another, so you can bootstrap more cheaply than starting from scratch. Lots of relevant open source out there.
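A minimal sketch of that bootstrapping idea, assuming toy stand-in networks rather than a real open-source LLM: a smaller "student" model is trained to match a larger "teacher" model's outputs (distillation), so no human-labelled data is needed.

```python
# Minimal distillation sketch: the "teacher" and "student" below are toy
# stand-ins for an existing large model and a cheaper model being bootstrapped.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))  # pretend this is the big model
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))  # smaller model we want to train
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(32, 16)  # unlabeled inputs; no human annotation required
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x), dim=-1)  # teacher's outputs act as the training targets
    student_log_probs = F.log_softmax(student(x), dim=-1)
    # Train the student to match the teacher's output distribution (KL divergence).
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At scale the "teacher" targets would be text or logits generated by an existing model rather than these toy outputs, which is where the cost savings come from.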

In any case, a huge part of the problem is just having enough cash to rent GPUs, not necessarily deep technical problems.

Also, as I said above, it doesn’t have to be competitive. It doesn’t have to be a product they sell. It could be a tool they themselves use to run the UK government without sending citizen data to a black box in America.

11

skztr t1_jeghxib wrote

Anything that is sentient should have rights. But we can't even all agree on the point at which humans become sentient, so we're unlikely to figure that out for a potentially sentient AI before we've committed atrocities.

Though I personally don't believe that sentience is possible via GPUs.

1

kolob_hier t1_jeghvaq wrote

Depends on what time frame you’re referring to and what direction humans branch off of.

If you’re talking about the next 10 years, probably not.

Once neural interfaces that transmit data directly to the brain become commonplace (I feel like they’re inevitable, but I have no idea of the timeline), they would probably get rid of spoken language as we know it. You could communicate through those interfaces much more accurately and quickly.

I would guess that within 10 years, though, AR will become more commonplace. If I go to China and someone speaks Mandarin to me, I would imagine my AR glasses would just show what are essentially subtitles as they speak, greatly decreasing the need to ever learn another language. Still, it would be a barrier to fully connecting with a people.

2