
EmbarrassedHelp t1_jc8xjg5 wrote

They can claim whatever they like, but it shouldn't be taken seriously if they hide the important details behind fake excuses of "safety". From the paper:

> Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.

77

donnygel OP t1_jc8ybbh wrote

It's definitely not an open-source approach to AI. Monetization of AI is the goal here, which is disappointing. I can only assume the intellectual property they're trying to protect contains some groundbreaking architecture that gives them a bleeding-edge advantage over other AI architectures.

72

Malkiot t1_jc9sjk7 wrote

Or it's so damn simple that just talking about it would let others replicate it.

39

LtDominator t1_jcbatc4 wrote

Unless they're doing something truly unique, whatever they're doing is already well known to people familiar with how these neural networks work. Theirs is just bigger and better tuned, and knowing how to do that comes with experience. It's unlikely that any of the big players now looking into AI will have trouble replicating this themselves; it's just a matter of how long it takes them.

1

Arachnophine t1_jc90hp4 wrote

Have you tried using it yet? It's very impressive even compared to GPT-3.

IMO building these things so quickly is the worst decision humans have made yet, so I'm glad they're not releasing more details.

15

[deleted] t1_jc97dx9 wrote

[deleted]

17

BSartish t1_jc9kkl7 wrote

Bing chat is free if you can get past the waitlist, and that's been GPT-4 since its release.

1

Tkins t1_jcaiu5j wrote

An early version of GPT-4.

−1

BSartish t1_jcaj7rs wrote

It's most likely the same version, just with different guard rails and developer prompts optimized for Bing search.

2

curryoverlonzo t1_jc94k1o wrote

Also, where can you use GPT-4?

1

gurenkagurenda t1_jc98cwk wrote

It’s available as a model option with ChatGPT now, although I’m not sure if it’s “Plus” only. It’s just chat, though, no images, and it’s limited to 100 messages per four hours.

8

appleparkfive t1_jc9abjj wrote

Is the limit still in place with the paid plan for GPT-4 or is it unlimited for the paid folks?

1

second-last-mohican t1_jc9lbpr wrote

Um, ChatGPT is over a year old. They released it to the public because they didn't know what to ask it; the public release and its popularity were a surprise.

1

techni_24 t1_jc9fmps wrote

Idk if those excuses of safety are really so fake. These models are pretty powerful and about to change a lot very quickly. Instead of dumping all the information about it, I'd rather they take a cautious approach.

We need to remember that there are plenty of bad actors out there who would love to use the power of these models to do some really bad things we can hardly conceptualize yet. 'Democratizing AI' sounds great in theory, but let's remember that the larger threat to humanity is not AI itself but how it's used and who wields it. Making that capability open source might just do more harm than good. They might have picked up on that, based on how scarce the paper is on details.

12

LaJolla86 t1_jc9kbvv wrote

Meh. People who want to run their own version will do so anyways.

I never found the argument convincing.

It just locks the best parts of the technology behind technical know-how and the cost of hardware (which you could rent anyway).

8

ACCount82 t1_jcav54f wrote

It's a tough spot. GPT-4 is clearly no Skynet - but it's advanced enough to be an incredibly powerful tool, in the right hands. An incredibly dangerous tool, in the wrong hands.

Being able to generate incredibly realistic text that takes image and text context into account is trust-destroying tech, if used wrong. Reviews, comments, messages? All those things we expect to be written by humans? They may no longer be. A single organization with an agenda to push can generate thousands of "convincing" users and manufacture whatever consensus it wants.

2

seweso t1_jc9u8eq wrote

What they're doing isn't far ahead of what's already in the open. Their performance can easily be explained by quality data + a shitton of GPUs and good controls (to prevent weird shit).

The more logical explanation is that they want to sell this as soon as possible, before it's in the open.

1

fokac93 t1_jcd7a8p wrote

Agree 100%. People on the internet seem to forget, as you said, that there are bad actors out there. Some people have to come down to earth and understand reality. At this point these AI models have to be treated like nuclear secrets. They're not perfect, but very powerful. Just imagine ChatGPT 20 or 50. I saw the demo of GPT-4 yesterday and I was blown away.

1

anothermaninyourlife t1_jc9djst wrote

Them hiding this information has nothing to do with these claims. Maybe they just want to protect their IP.

9

logicnreason93 t1_jc9rj0k wrote

Then they should rename their company to Close A.I

18

anothermaninyourlife t1_jc9y6ki wrote

Eh, it's better for the market to grow with whatever information is already out there (more unique innovation).

It's not necessary for them (OpenAI) to disclose every new step they've taken to improve their AI, especially when the market is still trying to catch up with version 3.

−11

Strazdas1 t1_jca2jtq wrote

>It's not necessary for them (Open A.I) to disclose every new step that they have taken

Then they should rename their company to Close AI.

14

ninjasaid13 t1_jc9m7gq wrote

>They can claim whatever they like, but it shouldn't be taken seriously if they hide the important details behind fake excuses of "safety". From the paper:

They already admitted 'competitive landscape', aka money. Everything else is bullshit.

7