HanaBothWays t1_jbj732b wrote

Honestly if I were in the financial sector I would not do a thing like this until OpenAI comes out with versions of the product that are certified for use with regulated data, the way there are cloud computing products that are certified for use in the financial sector, healthcare sector, etc.

“Certified” is not exactly the right word, but basically they meet certain baseline requirements so they are safe to use with particular kinds of sensitive information/in secure environments with that kind of information.

35

habichuelacondulce OP t1_jbjastp wrote

If they don't use it for trading, they can use it for copy-editing news/blogs/articles to frame a narrative for their algos to pick up and trade on.

24

andylowenthal t1_jbjsaf4 wrote

More specifically, and immediately, they can use it to post comments on social media, including Reddit, to shift the narrative based on a false majority consensus. It’s already happening now; they just pay people minimum wage for the comments. This would just make creating those false narratives cheaper and faster.

42

DebateGullible8618 t1_jblbhcc wrote

Yeah, and the bots will eventually be able to respond just like people, reinforcing certain views. AI is going to be the biggest evolution in tech since the smartphone.

5

HanaBothWays t1_jbjbc44 wrote

No, I mean if you were a financial company you would not even want to let it inside your internal network at all, no matter what you did or didn’t use it for, unless it was a version made to keep your confidential/regulated data safe.

Right now ChatGPT is not allowed on government agency networks, for example, for any reason because it might pick up on sensitive but unclassified (SBU) data in those network environments.

5

thecookie93 t1_jbkzru6 wrote

Yeah, I don't think they would let it touch their systems. They'd just buy the license and run it on an off-site server where it can do its thing, writing targeted blog posts and "news" articles.

3

HanaBothWays t1_jbl07ni wrote

Something like that. They just shouldn’t let it near the financial and transaction records or correspondence.

1

stuffitystuff t1_jbkh5m6 wrote

They can afford to get a copy of the model and run it on their own systems, though. Just like Microsoft.

1

Touchyuncle45 t1_jbjqouz wrote

Well, looks like this is a race; the longer you wait, the more money and power you will lose.

Who would have thought Google could become less relevant as a search engine? AI-powered search engines are the future. Imagine Reddit using AI to filter posts and results...

5

HanaBothWays t1_jbjtj4n wrote

It’s a race to develop better Large Language Model tech, but if you are in a sector that deals with sensitive data and these tools pose a risk of inadvertently disclosing that data (because the tools send everything back to “the mothership” for analysis), being an early adopter is maybe not such a good idea.

2

NoSaltNoSkillz t1_jbkm19o wrote

If you localize the instance within the company, or more specifically within the teams that already have access to that data, and run separate instances for those outside that group, it's less of a problem. Keeping the model local and only allowing local input should limit the risks, although if it's still scraping current data, who knows, that could be a risk point.
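A minimal sketch of what "keeping it local" could look like, assuming an internal, OpenAI-compatible server is already running on-prem (the URL, model id, and helper function here are hypothetical, not anything OpenAI ships):

```python
# Hypothetical sketch: query a locally hosted model so prompts and
# data never leave the internal network. Assumes an on-prem server
# exposing an OpenAI-compatible chat endpoint (e.g. a llama.cpp- or
# vLLM-style deployment); the address and model id are placeholders.
import requests

INTERNAL_LLM_URL = "http://llm.corp.internal:8080/v1/chat/completions"

def ask_local_model(prompt: str) -> str:
    """Send a prompt to the on-prem model; nothing goes to an outside API."""
    resp = requests.post(
        INTERNAL_LLM_URL,
        json={
            "model": "local-llm",  # placeholder model id
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

Pair that with per-team access controls on the server side and you get the isolation described above; the remaining risk is exactly the live-data scraping mentioned.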

2

HanaBothWays t1_jbko8au wrote

Yes, but to ensure you have a model that’s behaving in that way, with standardized controls, you need to first establish what those standardized controls are and then figure out some kind of auditing and certification framework for saying “this version of the tool works that way and is safe to use in an environment with sensitive information/regulated data.”

These organizations shouldn’t be trying to roll their own secure instance of ChatGPT (they wouldn’t even know where to start) and I bet they don’t want to.

2

seweso t1_jbkb2eh wrote

OpenAI isn't going to be the only one with this tech. You can't lock it down...

1

venustrapsflies t1_jbkdce8 wrote

I’m sure they’re not using it to make any sort of meaningful decisions. There’s a lot of big talk about ChatGPT, but it’s not going into anything critical at any place serious about making money. It’ll be used for, like, internal utilities to save their people time.

5

HanaBothWays t1_jbkk58h wrote

That’s not the problem; the issue is ChatGPT piping things from their network back to OpenAI.

1

DevAnalyzeOperate t1_jbjxbuy wrote

Same, this is one of the few industries I think ought to hit the brakes until they can run their own server.

2

HanaBothWays t1_jbjxtfv wrote

Also healthcare and the government.

They probably don’t need to run their own server (that may not be possible), but they may need the equivalent of an industry-specific virtual private cloud service.

1

Disastrous_Ball2542 t1_jbkl1uo wrote

Armchair hedge fund manager lol

1

HanaBothWays t1_jbkoejc wrote

I know nothing about managing a hedge fund. I know some things about having novel technology in networks where you also have sensitive data - mostly, that you don’t want to be the first one to do it.

0

Disastrous_Ball2542 t1_jbktata wrote

Would it be crazy to think that the hedge fund that posted the best return in history, with resources in the billions, has qualified IT experts who have considered and mitigated this risk?

4

drgrubtown t1_jbmyamj wrote

You realize some of the highest paying and most competitive engineering positions are at hedge funds, right?

2

HanaBothWays t1_jbkyz2k wrote

Let me put it this way, would you hire a hedge fund manager to manage your network security operations, configure your firewalls, set up your intrusion detection systems, etc.?

−1

Disastrous_Ball2542 t1_jbl0oen wrote

Let me put it this way, the hedge fund no doubt hired qualified IT specialists who know much more than you and get paid much more than you to handle their security (not saying this to attack you, just making my point)

Like the guy that played 1.5 years of college football and then thinks he knows better than Bill Belichick lol

2

HanaBothWays t1_jbl5u0i wrote

> Let me put it this way, the hedge fund no doubt hired qualified IT specialists who know much more than you and get paid much more than you to handle their security

I am one of those kinds of specialists and I get paid pretty well LOL

0

Disastrous_Ball2542 t1_jbna2yu wrote

Cap. You doing IT for a school board doesn't mean you're on the level of Citadel.

1

HanaBothWays t1_jbns68c wrote

I don’t do it for school boards. You already stepped in it pretty good. You should stop now.

1