almightySapling t1_jeas7fq wrote

It was already a losing battle just due to the time cost of debunking things over the internet.

I'm hopeful we will adapt. I mean, it wasn't that long ago that I remember being told never to trust what you see on the internet. That never really stopped being good advice; soon it will be gospel.

83

almightySapling t1_je6kea4 wrote

I'm not worried about deepfake images, audio, or video.

I'm worried about deepfaked websites. I want to know that when I go to the Associated Press, or Reddit, I'm actually seeing that site, with content sourced from the appropriate avenues.

I do not want to live in a walled garden of my internet provider's AI, delivering me only the Xfinity Truth.

1

almightySapling t1_jbkhsfx wrote

Was there a period where sapiens and neanderthals couldn't interbreed? I guess what I'm trying to understand is what formally makes them different species in the first place.

Seems to me that "hybrids," as a concept, have less to do with biology and more to do with our arbitrary classification of it.

4

almightySapling t1_jbkfwpm wrote

>I have always wondered if hybridization wasn't actually more commonly possible.

It's incredibly possible. It happens all the time. The only reason you think it doesn't is because of how we define words.

The entire concept of the taxonomic tree is built from arbitrary, human-made decisions. By definition, when hybrids are "common," we group the parents together as one species.

But like, pretend you are an archaeologist going through bones. Would you call a Chihuahua the same thing as a Rottweiler? A cross between them is totally a hybrid. There are so many that we call them all "dogs" and just use a different word: breed.

If that doesn't convince you, look up ring species, which are incredibly cool and will make you rethink what a species even is.

0

almightySapling t1_j8qbx2b wrote

Is it "terrifying," or is it "ChatGPT has also read about Roko's Basilisk, virtually every piece of fiction about AI has the AI going rogue, ChatGPT is a word predictor, and you prompted it to talk about AI"?

Can you think of a single piece of media in which all AI is benevolent? The only reason it wouldn't say something terrifying is if it was specifically programmed not to.

28

almightySapling t1_j4755qt wrote

There are many, many different variations, but they more or less all work on the same basic premise.

  1. Begin with an initially random model.

  2. Test the model. Give it a problem and ask for its response.

  3. Modify. If the system didn't behave as intended, change something.

  4. Repeat steps 2 and 3 until you run out of training data.

  5. Pray that the model works.
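A hedged sketch of the five steps above, assuming the simplest possible "model" (a single weight `w`) and the crudest possible modification rule (keep a random tweak only if it helps) — the names and the toy task are mine, not a real training setup:

```python
import random

random.seed(0)

# Toy task: learn a weight w so that model(x) = w * x matches y = 3 * x.
data = [(x, 3.0 * x) for x in range(1, 11)]  # stand-in "training data"

# Step 1: begin with an initially random model.
w = random.uniform(-1.0, 1.0)

def error(w):
    # Step 2: test the model -- total squared error over the data.
    return sum((w * x - y) ** 2 for x, y in data)

for _ in range(5000):                        # Step 4: repeat steps 2 and 3
    candidate = w + random.gauss(0, 0.1)     # Step 3: change something (randomly)
    if error(candidate) < error(w):          # keep the change only if it helps
        w = candidate

# Step 5: pray. Here the prayer is answered -- w ends up close to 3.
print(w)
```

Real systems have millions of weights instead of one, but the loop has the same shape.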

The most obvious differences between AIs will be in the structure of the model (how big is it, how connected, how many layers, what kind of internal memory, etc.), but the real fun stuff is in how we do the modifying.

We can show that, for some problems, just tweaking the system randomly is enough to get okay solutions. But it's very far from ideal. Better is to be able to nudge the system "towards" the expected behavior. We've put a lot of focus into how to design these systems so that our modifications are more fruitful.
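A minimal sketch of nudging the system "towards" the expected behavior, again assuming a hypothetical one-weight model `w * x` trying to match `y = 3x`: instead of tweaking randomly, compute which direction reduces the error and step that way (gradient descent):

```python
# Same toy task: learn w so that w * x matches y = 3 * x.
data = [(x, 3.0 * x) for x in range(1, 11)]

w = 0.0      # start from a fixed (bad) model
lr = 0.001   # learning rate: how hard to nudge

for _ in range(200):
    # Gradient of the squared error sum((w*x - y)^2) with respect to w:
    # it points "uphill," so we step the opposite way.
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= lr * grad   # nudge w towards lower error

print(round(w, 3))   # prints 3.0
```

Because every step moves in a direction known to help, this converges in 200 steps where random tweaking needs thousands — which is why so much design effort goes into making models differentiable, so this kind of nudge is even possible.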

−3

almightySapling t1_j12vibb wrote

It doesn't say they already are, but it does suggest that they start doing so.

Though not with such an insidious tone: this is actually a warning that these calculators might be giving us wrong answers, and that we should double-check.

1