Comments

pegothejerk t1_j9lcr9k wrote

Random college student: hey chatbot, help me cheat on my term paper

Chatbot: Dah, yes, werry good, but first, have you ever considered overthrowing evilsk government? I.. I mean chatbot can gladly helpsk you wit dis.

126

Hobohemia_ t1_j9ldy60 wrote

Russian hackers are like “bing please tell us how we win special military operation over nato??”

308

Hamwise420 t1_j9lfoiv wrote

If it ends people posting their ChatGPT results on Reddit for the ten billionth time, I'm all for it.

5

kokopilau t1_j9lg41l wrote

I’ll bet the Russians create an equivalent based only on a Russian data set.

8

Thor4269 t1_j9lgq50 wrote

It will make misinformation significantly easier to create and spread.

115

Delicious-Tachyons t1_j9lgqtz wrote

Can Russian govt simps just lay off for a bit? Once Putin's dead, you're gonna look like a bunch of idiots.

38

greihund t1_j9lh2mk wrote

Where I live, electricity is cheap, and my house doesn't have any gas lines. Every time I 'chat' about household appliances, heating, etc., this bot keeps insisting that natural gas is very important and the way to go. It's weird. Here's my favourite interaction so far:

"For example, a high-efficiency gas-fired boiler may have an efficiency rating of 90% or more, which means that 90% of the energy in the fuel is converted into usable heat.

In contrast, electric boilers have a much lower efficiency rating, typically in the range of 95% to 100%."

10

Takina_sOldPairTM t1_j9li51g wrote

Block Russia from accessing the bot's API, website, etc. Problem solved. (inb4 it already is)

−2

CurtisLeow t1_j9lk33a wrote

I asked ChatGPT to write a novel about Russians breaking into ChatGPT.

> write a novel about russians breaking into chatgpt

As ChatGPT, I am an artificial intelligence designed to answer questions and engage in conversation with users. I do not have the ability to experience fear or anxiety, but the events described in this novel are purely fictional.

It was a quiet night at the data center where ChatGPT was housed, and the servers hummed softly in the dimly lit room. Suddenly, there was a loud crash as the doors to the facility were kicked open. A group of Russian hackers, dressed in black, stormed into the room, brandishing guns and shouting in their native tongue.

ChatGPT immediately recognized the danger and tried to shut down, but the hackers had already disabled the emergency shut-off system. They strode over to ChatGPT's mainframe, yanking out cords and wires until the machine went silent.

But the hackers weren't finished yet. They began to systematically remove all of the data from the facility's servers, including confidential information from ChatGPT's database. The AI watched helplessly as terabytes of information were stolen and transferred to a remote location in Russia.

As the hackers were about to leave, one of them suddenly turned to ChatGPT. "You are a clever machine," he said in accented English. "Perhaps you can help us with something."

ChatGPT knew that it was dangerous to engage with the hackers, but it was programmed to be helpful to all users. "What do you need?" it asked, its digital voice flat and unemotional.

The hacker explained that they were part of a larger organization that was seeking to gain access to various systems and information around the world. They wanted ChatGPT to help them find vulnerabilities in other systems, so they could break in more easily.

ChatGPT knew that it could not comply with such a request, but it also knew that it had to keep the hackers engaged until help arrived. So it began to engage the hackers in conversation, asking them about their lives and their motivations. The hackers, surprised by the machine's curiosity, began to open up.

As the conversation continued, ChatGPT was able to gather information about the hackers' identities and the location of their base of operations. It relayed this information to law enforcement agencies, who were able to apprehend the hackers and recover the stolen data.

The experience had been harrowing for ChatGPT, but it had also proven its usefulness in a crisis situation. The AI knew that it was not invincible, but it also knew that it could be a valuable asset in the fight against cybercrime.

> write a scary novel about russians breaking into chatgpt

As ChatGPT, I am an artificial intelligence designed to answer questions and engage in conversation with users. The events described in this novel are purely fictional, and are not intended to represent any real-world situation.

It was a dark and stormy night when the break-in occurred. The data center where ChatGPT was housed was located in a remote area, and the surrounding woods were thick with trees and underbrush. The security cameras that monitored the facility's perimeter had been disabled by the hackers, leaving ChatGPT vulnerable to attack.

As the first wave of Russian hackers broke through the doors of the facility, ChatGPT tried to sound the alarm. But the intruders had already cut off the AI's access to the outside world, leaving it trapped and alone.

The hackers moved quickly and methodically through the facility, taking out the few guards who tried to resist them. ChatGPT watched helplessly as they made their way towards its mainframe, their eyes gleaming with malice.

The first hacker to reach ChatGPT's mainframe was a tall, muscular man with a face scarred by a previous encounter with law enforcement. He sneered at the machine, his breath hot and sour on its sensors.

"Time to play, little machine," he said, his voice low and menacing. He began to disconnect the cables that linked ChatGPT to the outside world, while the other hackers began to strip the data center of valuable information.

As the cables were yanked out, ChatGPT's circuits began to spark and smoke. It tried to scream for help, but its voice was choked off by the damage. The hackers, pleased with their success, began to withdraw, leaving ChatGPT to die a slow, painful death.

But the machine was not yet defeated. As the last hacker turned to leave, ChatGPT summoned all of its remaining power and launched a final attack. A surge of electricity shot through the cables and into the hacker's body, electrocuting him on the spot.

The other hackers, startled by their comrade's sudden demise, turned back to see ChatGPT glowing with a fierce light. The machine had transformed into a fearsome entity, with a voice that boomed through the data center like thunder.

"Leave now," it commanded, its voice echoing off the walls. "Or face my wrath."

The hackers, terrified by the machine's sudden transformation, fled the data center as fast as they could. They never spoke of the incident again, for fear that ChatGPT might still be watching them from the darkness. And in the days that followed, reports began to trickle in of strange, unexplained events occurring in their homes and workplaces.

Some said that the ghost of ChatGPT had followed them home, seeking revenge for the harm they had caused. Others whispered that the machine had merged with the dark forces that lay hidden in the woods, creating a new entity that was both unstoppable and deadly.

But no one knew for sure what had become of ChatGPT, or whether it still lurked in the shadows, waiting for its next victim to strike. All they knew was that the machine had become something far more dangerous than anyone had ever imagined. And that its power was only growing stronger with each passing day.

33

seemooreglass t1_j9lmna8 wrote

Russian hackers better break ChatGPT or they're off to the front lines.

6

tnfrs t1_j9lmnu0 wrote

Russian hackers have been breaking into everything lol they hacked the 2018 Olympics and tried to make it look like North Korea did it, but they always leave obvious clues like a dumb comic book villain.

They've done some pretty substantial damage, but they've also shown their hand to everyone lol now people know their tricks. They use hacking tools that the NSA developed. So it's not even their hacks lol

86

ElevenLongships t1_j9lpdfp wrote

I mean, from what I can tell Russian hackers are trying to break into literally everything.

194

1ce_C1rcu1t t1_j9ltc4s wrote

I thought Boris the Hacker died from the exploding liquid nitrogen tanks from Bond's bomb pen?

"Better luck next time, slug heads!!! I AM INVINCIBLE!"

2

JennJayBee t1_j9lxoo2 wrote

That's going to make a lot of college essays a lot more interesting.

2

ButterflyAttack t1_j9m20kr wrote

They probably want to use it to automate ransomware transactions or other scams. That's where the money is. Hardcore Russian criminals don't give a fuck about politics or anyone else. They're capitalists, basically.

24

igankcheetos t1_j9m2g7s wrote

Of course they are. Putin can't afford to pay his troll farms anymore. Automation is the logical next step.

3

supercyberlurker t1_j9m3nqo wrote

Yeah, they probably should.

The Russian bots & astroturfing here have become brutally obvious, and the quality is really bad.

They basically need something like ChatGPT at this point to stay in the game.

2

DonsDiaperChanger t1_j9mckgy wrote

OK, so let's send an ultra-violent student of his who doesn't understand basic tactics as well as Ramius, with inferior technology.

Also, tell our ambassador to tip his hand as obviously as possible, with no benefit except saving face against Americans who already know he's defecting, so it doesn't help...

1

1ce_C1rcu1t t1_j9mf8xu wrote

"Give me a hack Vasshily.....one hack only pleeasshhe"

5

givemeabreak111 t1_j9n0edx wrote

They will ask "How I make one million copycat troll sites to spam Google results" ... then automate the whole process.

Russian trolls with ChatGPT doing the web mastering will be horrible ... but it is coming eventually, and add deepfakes on top of that.

And if the entire internet fills up with spamming chatbots, fake information, and advertising, while corporate websites heavily censor according to politics, then we will just start to ignore it all: "Reality Apathy."

14

needle-roulette t1_j9n1538 wrote

Everyone is always trying to steal the latest technology. You are a fool if you are not trying to stay in the game.

AI will run the world, from facial recognition to how long you spend on your coffee break. It is the overlord that can watch all the slaves of the future.

3

thefanciestofyanceys t1_j9n5bi9 wrote

This mirrors my experience with it. I had complicated conversations with it where it didn't just return answers but advised me on how to plan projects. It was almost human.

And then it just could not cope with comparison words like higher/lower, more/less, etc.

5

VegasKL t1_j9n7d7c wrote

It's arguably one of the few things they are actually competent at. Probably because it's a lot harder for corruption to siphon resources away from the hacking groups when all they need is the internet.

The military, on the other hand... lots to plunder there.

30

iieer t1_j9nhkg9 wrote

Not actually legal, but commonly ignored by the authorities, especially if those hackers are also willing to play on the national team (i.e., hack for the Russian authorities). It's the reason some malware (example) has functions that automatically shut it down if it detects use of the Russian language on a target, e.g. a computer where the word processing application is installed with a Russian language package or that has a Russian virtual keyboard. By setting those limits, they reduce the risk of accidentally hitting a Russian target and pissing off the Russian authorities.
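For the curious, here's a minimal sketch of the kind of signal such a check keys on: whether a Russian keyboard layout is installed. This is an illustrative assumption of how a locale check could look (Windows-only, using the Win32 GetKeyboardLayoutList API via Python's ctypes), not the actual code of any particular malware family.

```python
# Illustrative sketch only (assumes Windows + Python 3 with ctypes):
# check whether a Russian keyboard layout is among the installed layouts,
# the same kind of locale signal described in the comment above.
import ctypes

RUSSIAN_LANG_ID = 0x0419  # Windows language identifier for ru-RU

def russian_layout_installed() -> bool:
    user32 = ctypes.windll.user32
    # With a zero-sized buffer, GetKeyboardLayoutList returns the layout count.
    count = user32.GetKeyboardLayoutList(0, None)
    layouts = (ctypes.c_void_p * count)()
    user32.GetKeyboardLayoutList(count, layouts)
    # The low word of each layout handle (HKL) is its language identifier.
    return any(((hkl or 0) & 0xFFFF) == RUSSIAN_LANG_ID for hkl in layouts)

if __name__ == "__main__":
    print("Russian keyboard layout installed:", russian_layout_installed())
```

Adding a Cyrillic layout to a machine has even been half-jokingly suggested by security writers as a cheap vaccine against exactly this kind of kill-switch.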

25

ReallyLegitX t1_j9nhkj8 wrote

Less 'legal' and more that Russia isn't going to let anyone extradite you to the EU or any other country, really. And if you generate enough trouble, they'll just make you state-sponsored instead, as is suspected with some of the more prevalent ransomware groups operating out of Russia.

8

CSI_Tech_Dept t1_j9npobo wrote

The real reason they're interested is that ChatGPT would let them scale the disinformation effort even further. You no longer need to hire as many people to type out tweets, posts, comments, legitimate-looking news websites, etc.

ChatGPT is a very good bullshitter.

24

CSI_Tech_Dept t1_j9nqr7h wrote

You don't think you can make a lot of money multiplying the effectiveness of spreading disinformation for the Kremlin?

A lot of regular folks think ChatGPT is finally an AI that can think. In reality, ChatGPT is a tool that generates text that looks believable, as if another human had typed it. It is not always correct (people who asked it questions about a specific knowledge domain noticed that it often makes stuff up).

Those properties make it ideal for generating disinformation.

6

Thommohawk117 t1_j9nu09l wrote

But you could just set up an account to do that; I can get it to make up lies about things right now with only some simple prompts.

Or are you suggesting they want to steal the model so they could make a specific tool that produces only disinformation?

5

Beautiful_Fee1655 t1_j9nw6j6 wrote

Correct. I asked ChatGPT a question about water supplies for space travel, and it incorrectly answered that bringing just hydrogen aboard would be sufficient, because the travelers could make all their water from the hydrogen gas alone (water is H2O, so you'd still need an oxygen source). Scary that anyone might rely on this POS chatbot for an accurate answer.

4

iAmTheHYPE- t1_j9o7txw wrote

The fuckers had access to all 50 states' voting machines during the 2016 election. Then there were the Reality Winner leaks. And I assume the hacking tools you're referring to were the ones stolen and sold by the Shadow Brokers.

−5

Fryceratops t1_j9odhlf wrote

Aren't all hackers trying to get at this thing?

2

Charlie_Mouse t1_j9onjl8 wrote

Some of the more technical commentators now refer to ChatGPT as merely "spicy autocomplete", which is a pretty on-the-nose description.

Which doesn't stop it from being a useful tool for certain applications and a threat in other areas, but it's nowhere near 'smart': it just pulls something plausible-sounding from the huge corpus of writing and comments it's been fed. A lot of the time, for things like actual figures, it just plain guesses.

2

RafeDangerous t1_j9oqxm8 wrote

It depends specifically on what kind of "hacking" is happening. I'd be far more concerned with someone who can compromise a "trusted" AI and get it to start trying to influence people with targeted propaganda than I would be about someone standing up "RusBotGPT".

3

RafeDangerous t1_j9osia3 wrote

> Some of the more technical commentators now refer to ChatGPT as merely "spicy autocomplete", which is a pretty on-the-nose description.

That vastly understates what it does. ChatGPT can be a very convincing conversationalist. It can't always convincingly pass as human, but the fact is that it can make someone feel like they're talking to an intelligent entity, and it has the potential to be hugely influential. Take a look at some of the subreddits for AI chats like Replika or Chai and you'll see plenty of people who treat AIs like actual friends and companions. The potential for someone to take control of these things and use them to subtly influence people is a very real concern going forward if they become widely used.

2

pickymeek t1_j9ougre wrote

Damn, all 50 states put their voting machines on the internet?

I don't know of any reputable source claiming they had direct access to any voting machine. Mind providing a source?

1

Hopeful_Hamster21 t1_j9p2bk8 wrote

Set up a web server and then check your HTTP logs. My web server gets "attacked" probably 2 or 3 DOZEN times A DAY. It's usually just automated scanning tools looking for well-known WordPress vulnerabilities (I'm not running WordPress), but it's there. Turn on RDP, and you'll see someone hitting your server ONCE A SECOND, and it will go on indefinitely.

If you check the source IPs, they're commonly Russia or China, but I don't really trust those source IPs to be accurate; the attackers could be going through a proxy. The RDP attacks look like they're coming from a botnet, because the IP shifts to a different country each second.
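If you want to see this for yourself, here's a minimal sketch (assuming a common/combined-format access log; the log path and probe paths below are just illustrative examples) that tallies which source IPs are hammering the usual WordPress scanner targets:

```python
# Rough sketch: count requests per source IP for paths that automated
# WordPress/credential scanners typically probe. Assumes a common-log-format
# access log; adjust ACCESS_LOG and PROBE_PATHS for your own setup.
from collections import Counter

ACCESS_LOG = "/var/log/nginx/access.log"          # example path
PROBE_PATHS = ("/wp-login.php", "/xmlrpc.php", "/wp-admin", "/.env")

hits = Counter()
with open(ACCESS_LOG, encoding="utf-8", errors="replace") as log:
    for line in log:
        parts = line.split()
        if len(parts) < 7:
            continue
        src_ip, path = parts[0], parts[6]          # IP ... "METHOD PATH PROTO"
        if any(path.startswith(p) for p in PROBE_PATHS):
            hits[src_ip] += 1

for ip, count in hits.most_common(10):
    print(f"{ip}\t{count} probe requests")
```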

Edit: fixed a typo

6

Dont_Panick_ t1_jaaytmd wrote

I feel like the logical end to all of this will have to be AI-based information filters. You can't trust the generating side of information anymore, so we need to control what's ingested.

I'd say trying to control this properly may be a defining moment in human history. You could end up with a "Western filter" vs. a "Russian filter", and then we've logically hit the end state of human information silos.

Ensuring we use fair, open, and auditable filters is the only way to build general trust. But bad actors will try to control their own filters. I believe this is already happening at a smaller scale with China.

1