
HanaBothWays t1_ja94k72 wrote

We are talking about situations where a minor consented to share an intimate photo with another party on the understanding that the other party would not spread it around in public…and the other party did so anyway.

When this kind of thing happens between adults it’s called “revenge porn” and the person who spread the photo is often subject to civil or criminal liability for doing so.

If you are seriously arguing that someone deserves to have nude photos of themselves as a minor floating around to “teach them a lesson,” when having it happen to them as an adult would make them a victim of a crime, you probably need to log off for a while.

4

HanaBothWays t1_ja93hzi wrote

This is the same system that’s used to detect and take down Child Sexual Abuse Material (CSAM). It’s been around for years. Meta is just expanding the criteria for what images (or hashes of images) they will use it on.

The CSAM system was not previously used to detect and take down nude photos that teens shared consensually: now, it is, even if the subject of the photo has since become a legal adult.

2

HanaBothWays t1_ja916ts wrote

I suspected that this would basically work like the tools used to recognize and spike Child Sexual Abuse Material (CSAM) images, and it actually does - it’s the same tools and the same database! This is basically expanding the eligibility criteria for what can go into the database.

Previously if you sent your high school sweetheart a nude selfie and that person did whatever with it, you didn’t have a lot of options, but now you can upload a hash of the picture (not the actual picture) to the database and it will get taken down.

Also, if you are a legal adult now but have nude photos of yourself from when you were a minor floating around, you can upload hashes of them to the database and have them taken down.
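
For anyone wondering what uploading a hash (instead of the photo) actually buys you, here’s a rough sketch of the matching idea. The real systems use purpose-built perceptual hashes (Microsoft’s PhotoDNA, Meta’s open-source PDQ); the `imagehash` pHash, the example hash value, and the distance threshold below are just stand-ins of mine:

```python
# Rough sketch of hash-based image matching, the idea behind these takedown
# systems. Real services use purpose-built perceptual hashes (PhotoDNA, PDQ);
# imagehash's pHash here is just an illustrative stand-in.
import imagehash
from PIL import Image

# Hashes reported by victims. Only these values are stored, never the images.
# The hex string is a made-up example.
reported_hashes = {
    imagehash.hex_to_hash("d1c4f0a39b5e2c87"),
}

def matches_reported(image_path: str, max_distance: int = 4) -> bool:
    """True if this image's perceptual hash is close to a reported hash.

    Perceptual hashes survive resizing and re-encoding, so a small Hamming
    distance still counts as a match. The threshold here is a guess.
    """
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - reported <= max_distance for reported in reported_hashes)

if matches_reported("upload.jpg"):
    print("Block the upload / queue it for takedown")
```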

1

HanaBothWays t1_ja1apfq wrote

If you use tax filing software or a filing service, it can account for this kind of situation (where you got married this year and one of you moved).

I know that bases have some kind of support service for military families (the name escapes me right now) where your fiancé can ask for help with this kind of thing.

3

HanaBothWays t1_j9wabvo wrote

There are two possibilities for this:

One, hook it up to a module or service that will inject the malware for you (these absolutely exist).

Two, there are lots of well-known vulnerabilities out there that a lot of people haven’t patched…and ChatGPT may know what they are!

You could prompt it with “write malware like this with a delivery mechanism that uses the top five known exploitable vulnerabilities” (yes there is a list).
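
The list I have in mind is CISA’s Known Exploited Vulnerabilities (KEV) catalog, which is published as a public JSON feed. Here’s a quick sketch of pulling the newest entries; the feed URL and the JSON field names are from memory, so verify them against cisa.gov:

```python
# Pull CISA's Known Exploited Vulnerabilities (KEV) catalog and print the
# most recently added entries. The feed URL and JSON field names are from
# memory; verify them against cisa.gov before relying on this.
import requests

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

catalog = requests.get(KEV_URL, timeout=30).json()
vulns = sorted(catalog["vulnerabilities"], key=lambda v: v["dateAdded"], reverse=True)

for v in vulns[:5]:
    print(f"{v['cveID']}: {v['vendorProject']} {v['product']} (added {v['dateAdded']})")
```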

1

HanaBothWays t1_j9w97n0 wrote

Okay. People on forums frequented by script kiddie types have been saying they can get good stuff out of ChatGPT, but of course script kiddies would not know the difference.

The people who write the tools used by script kiddies or those “ransomware as a service” type kits are a different story…

10

HanaBothWays t1_j9vdvhp wrote

Is this about the hacker who did a data dump on his ransomware gang over a pay dispute or something different?

EDIT: okay, read the article, this is something different. But there was a hacker for a ransomware group (I forget if it was Conti or some other group) who basically published all their internal data online because he was upset with the crappy pay and working conditions they imposed on him.

98

HanaBothWays t1_j943tav wrote

> Which is fine. I merely wish to suggest to you, that if you consider ChatGPT to be intelligent, you devalue your own intelligence and your reason for having it.

Nah, this person is devaluing other human beings. There’s a sizeable contingent of people on this website (well, everywhere, but it’s a particular thing on this website) who will seize on any excuse to say most other people aren’t really people/don’t really matter.

This kind of talk about humans not really being all that different from large language models like ChatGPT is just the latest permutation of that.

3

HanaBothWays t1_j8suz0a wrote

ChatGPT is kind of like one of those people who says a lot of wrong things in a soothing, very believable, authoritative way. Well, unless you give it a prompt to make it respond with a shitpost.

Or, since it doesn’t really “understand” what it’s outputting, it may give you answers that are mostly right but incorrect in some important and really bizarre ways, like a patient with an unusual neurological condition in an Oliver Sacks story.

11

HanaBothWays t1_j8jq7bp wrote

> It definitely can do that job. Modern management is essentially just doing what the computer analytics tells them to do.

So you don’t understand management or analytics.

That’s okay, neither do a lot of people in management positions. But LLMs understand those things even less.

5