Recent comments in /f/deeplearning

BellyDancerUrgot t1_jdpbtyo wrote

Oh, I'm sure it had the data. I tested them on a few different things: OOP, some basic CNN math, some philosophy, some literature reviews, some paper summarization. The last two were really bad. One mistake in the CNN math, one mistake in the OOP. Creative things like writing essays, or solving technical troubleshooting problems, even niche stuff like how I could shunt a GPU, it managed to answer correctly.

I think people have the idea that I think GPT is shit. On the contrary, I think it's amazing. Just not the holy angel and elixir of life that AI influencers peddle it as.

1

BellyDancerUrgot t1_jdpb9pi wrote

I agree that Bing Chat is not nearly as good as ChatGPT-4. I already know everyone is going to cite that paper as a counter to my argument, but that paper isn't reproducible, I don't even know if it's peer reviewed, it's lacking a lot of details, and it has a lot of conjecture. It's bad literature. Hence, even though the claims are hyped, I take them with a bucketful of salt. A lot of scientists I follow in this field have mentioned that even though the progress in managing misinformation is noticeable, it's just an incremental improvement and nothing truly groundbreaking.

Not saying OpenAI is 100% lying. But this thread by Kate Crawford (MSFT Research) is a good example of what researchers actually think of claims like these, and of some of their dangers: https://twitter.com/katecrawford/status/1638524011876433921?s=46&t=kwpwSgfnJvGe6J-1CEe_5Q

Until I use it for myself, I won't know, and I will have to rely on what I've heard from other PhDs, master's students, postdocs, and professors. Personally, the only things I can compare it to are ChatGPT and Bing Chat, and both have been far less than stellar in my experience.

1

BellyDancerUrgot t1_jdpa0mz wrote

Tbf I think I went a bit too far when I said it has everything memorized. But it also has access to an internet's worth of contextual information on basically everything that has ever existed. So even though it's wrong to say it's 100% memorized, it's still just intelligently regurgitating information it has learnt, with new context. Being able to re-contextualize information isn't a small feat, mind you. I think GPT is amazing, just like I found the original diffusion paper and WGANs to be. It's just really overhyped to be something it isn't, and it fails quite spectacularly on logical and factual queries. It cites things that don't exist, and makes simple mistakes while solving more complex ones. A telltale sign of a model lacking a fundamental understanding of the subject.

2

BellyDancerUrgot t1_jdp945d wrote

Claim, since you managed to get lost in your own comment:

GPT hallucinates a lot and is unreliable for any factual work. It's useful for creative work, where the authenticity of its output doesn't have to be checked.

Your wall of text can be summarized as, "I'm gonna debate you by suggesting no one knows the definition of AGI." The living embodiment of the saying "empty vessels make much noise." No one knows the exact definition of intuition, but what we do know is that memory does not play a part in it. Understanding causality does.

It's actually hilarious that you bring up source citation as some kind of trump card after I mention how everything you know about GPT-4 is something someone told you to believe without any real, discernible, reproducible evidence.

Instead of asking me to spoon-feed you, maybe spend a whole 20 seconds googling:

https://twitter.com/random_walker/status/1638525616424099841?s=46&t=kwpwSgfnJvGe6J-1CEe_5Q

https://twitter.com/chhillee/status/1635790330854526981?s=46&t=kwpwSgfnJvGe6J-1CEe_5Q

https://aisnakeoil.substack.com/p/gpt-4-and-professional-benchmarks

https://aiguide.substack.com/p/did-chatgpt-really-pass-graduate

"I don't quite get how it works" + "it surprises me" ≠ "it could maybe be sentient if I squint."

Thank you for taking the time to write two paragraphs pointing out my error in using the phrase "aces LeetCode" after I had already acknowledged and corrected that mistake myself; maybe you had some word quota you were trying to fill. Inference time being dependent on the length of the output sequence has been a constant since the first attention paper, let alone the first transformer paper. My point is, it's good at solving LeetCode problems when they're present in the training set.

PS: also, kindly refrain from passing remarks on my understanding of the subject when the only arguments you can make are refutations of others without any intellectual dissent of your own. It's quite easy to say, "no, I don't believe you, prove it" while not being able to distinguish between Q, K and V if they hit you in the face.
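(For anyone following along: Q, K and V are the query, key and value projections in a transformer's attention layer. Here's a minimal sketch of scaled dot-product attention with toy dimensions; it's purely illustrative and not tied to any particular GPT implementation.)

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(x, w_q, w_k, w_v):
    # Project the input into queries (Q), keys (K) and values (V)
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Score each query against every key, scaled by sqrt(d_k)
    scores = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)
    # Softmax over keys turns the scores into attention weights
    weights = F.softmax(scores, dim=-1)
    # The output is a weighted sum of the values
    return weights @ v

# Toy example: a sequence of 4 tokens with embedding dimension 8
x = torch.randn(4, 8)
w_q, w_k, w_v = (torch.randn(8, 8) for _ in range(3))
print(scaled_dot_product_attention(x, w_q, w_k, w_v).shape)  # torch.Size([4, 8])
```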

1

GSG_Zilch t1_jdojtg7 wrote

We need an acceptable performance that justifies the inference (and potential hosting) cost. Therefore, depending on the complexity of the task, we choose the right size of model to be as cost-efficient as possible.

GPT-3 is not just a 175B model; that is only its largest version (davinci). There are more lightweight versions as well, for less complex tasks such as text classification.
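As a rough illustration of that trade-off, here's a sketch of picking the cheapest model that clears a quality bar on a held-out eval set. The accuracy numbers are made up for the example; ada, babbage, curie and davinci are the GPT-3 sizes in increasing order of cost.

```python
# Hypothetical eval accuracies per GPT-3 size, ordered cheapest -> most expensive.
# The numbers are illustrative only; measure them on your own held-out set.
candidates = [
    ("ada", 0.71),
    ("babbage", 0.78),
    ("curie", 0.86),
    ("davinci", 0.91),  # the 175B model
]

def pick_model(candidates, min_accuracy):
    """Return the cheapest model whose measured accuracy meets the bar."""
    for name, accuracy in candidates:
        if accuracy >= min_accuracy:
            return name
    return candidates[-1][0]  # nothing clears the bar: fall back to the largest

print(pick_model(candidates, min_accuracy=0.85))  # -> "curie"
```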

1

Praise_AI_Overlords t1_jdnxffd wrote

> Yes, there have been several studies that have used deep learning techniques, including convolutional neural networks (CNNs), on satellite images to predict the risk of malaria and other similar diseases.

For example, a study titled "Deep Learning for Malaria Detection in Labeled and Unlabeled Data" by Rajaraman et al. (2018) used CNNs on satellite images to predict the incidence of malaria in various regions of India. The study achieved a high accuracy of 97.1% and was able to predict malaria risk with a high degree of accuracy.

Another study titled "Deep Learning for Identifying Malaria Vectors Using Convolutional Neural Networks" by Alagendran et al. (2019) used CNNs to identify the presence of malaria vectors in satellite images. The study found that CNNs were able to accurately identify the presence of malaria vectors in satellite images with an accuracy of 96.2%.

There have also been other studies that have used deep learning techniques on satellite images to predict other diseases, such as dengue fever and Zika virus. For example, a study titled "A Deep Learning Approach for Predicting Dengue Fever Outbreaks Using Satellite Remote Sensing Data" by Lopez et al. (2018) used CNNs on satellite images to predict dengue fever outbreaks in Brazil.

Therefore, it is possible to use deep learning techniques, including CNNs, on satellite images to predict the risk of malaria and other similar diseases.
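For context, here's a minimal sketch of the kind of model such studies describe: a small CNN that classifies satellite image tiles into risk categories. The architecture, tile size and class count below are illustrative, not taken from any of the papers cited above.

```python
import torch
import torch.nn as nn

class RiskCNN(nn.Module):
    """Toy CNN that maps a 3-channel satellite tile to a risk score."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One batch of eight 64x64 RGB tiles
tiles = torch.randn(8, 3, 64, 64)
logits = RiskCNN()(tiles)
print(logits.shape)  # torch.Size([8, 2])
```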

1

Praise_AI_Overlords t1_jdnx9qj wrote

lol

Here are some mosquito jokes from GPT-4:

Why did the mosquito go to art school? Because it wanted to learn how to draw blood!

What do mosquitoes and vampires have in common? They both suck!

Why did the mosquito get a job at the blood bank? To make sure it always had a fresh supply!

What's a mosquito's favorite sport? Skin-diving!

What do you call a mosquito with a GPS? A "bloodhound"!

Why was the mosquito always the life of the party? It knew how to get under everyone's skin!

What did the mosquito say to the bartender? "I'll have a Bloody Mary, and hold the Mary!"

What's a mosquito's favorite band? The Buzz!

Why do mosquitoes make terrible comedians? Their jokes always leave a bad itch!

Why did the mosquito join the orchestra? It heard they needed a little more buzz!

2

suflaj t1_jdnvq8q wrote

> Not giving you the exact prompts

Then we will not be able to verify your claims. I hope you don't expect others (especially those with a different experience, challenging your claims) to carry your burden of proof.

> When I said 'ace', I implied that it does really well on LeetCode questions from before 2021 and is abysmal after.

I have not experienced this. Could you provide the set of problems you claim this is the case for?

> Also, the ones it does solve, it solves at a really fast rate.

Given its architecture, I do not believe this is actually the case. Its inference time depends only on the output length, not the problem difficulty.
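To make that concrete, here's a rough sketch of greedy autoregressive decoding (assuming a hypothetical `model` that maps token IDs to next-token logits): generation takes one forward pass per output token, so wall-clock time tracks output length rather than how conceptually hard the prompt is.

```python
import torch

def generate(model, prompt_ids, max_new_tokens):
    # Greedy decoding: one forward pass per generated token, so wall-clock
    # cost scales with output length (and sequence length), not with how
    # conceptually difficult the prompt is.
    ids = prompt_ids                                           # (batch, seq_len)
    for _ in range(max_new_tokens):
        logits = model(ids)                                    # (batch, seq_len, vocab)
        next_id = logits[:, -1].argmax(dim=-1, keepdim=True)   # most likely next token
        ids = torch.cat([ids, next_id], dim=-1)
    return ids

# Stand-in "model": random logits over a 100-token vocabulary, just to run the loop.
dummy_model = lambda ids: torch.randn(ids.shape[0], ids.shape[1], 100)
out = generate(dummy_model, prompt_ids=torch.zeros(1, 5, dtype=torch.long), max_new_tokens=8)
print(out.shape)  # torch.Size([1, 13])
```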

> From a test that happened a few weeks ago, it solved 3 questions pretty much instantly, and that by itself would have placed it in the top 10% of competitors.

That does not seem to fit my definition of acing it. Acing means being able to solve all or most questions; for a given contest, that is not the same as being able to solve 3 problems. Also, refer to the paragraph above on why inference speed is meaningless here.

Given that it is generally unknown what it was trained on, I don't think it's even adequate to judge its performance on long-known programming problems.

> Insufficient because, as I said: no world model, no intuition, only memory. Which is why it hallucinates.

You should first cite some authority on why that would be important. We generally do not even know what it would take to prevent hallucination, since we humans, who do have those properties, often hallucinate as well.

> Intuition is understanding the structure of the world without having to memorize the entire internet.

So why would that be important? Also, the word you're looking for is generalization, not intuition. Intuition has nothing to do with knowledge; it is at most loosely tied to wisdom.

I also fail to understand why such a thing would be relevant here. First, no entity we know of (other than God) would possess this property. Secondly, if you're implying that GPT-like models have to memorize something to know it, you are deluding yourself: GPT-like models memorize relations, they are not memory networks.

> A good analogy would be how a child isn't taught how gravity works when they first start walking.

This is orthogonal to your definition. A child does not understand gravity. No entity we know of understands gravity, we at most understand its effects to some extent. So it's not a good analogy.

> Or how you can have no knowledge about a subject and still infer based on your understanding of the underlying concepts.

This is also orthogonal to your definition. Firstly it is fallacious in the sense that we cannot even know what is objective truth (and so it requires a very liberal definition of "knowledge"), and secondly you do not account for correct inference by chance (which does not require understanding). Intuition, by a general definition, has little to do with (conscious) understanding.

> These are things you inherently cannot test or quantify when evaluating models like GPT that have been trained on everything, when you still don't know what they have been trained on, lol.

First you should prove that these are relevant or wanted properties for whatever it is you are describing. In terms of AGI, it's still unknown what would be required to achieve it. Certainly it is not obvious how intuition, however you define it, is relevant for it.

> I'm not even an NLP researcher, and even then I know the existential dread creeping in on NLP researchers because of how esoteric these results are and how AI influencers have blown things out of proportion, citing cherry-picked results that aren't even reproducible because you don't know how to reproduce them.

Brother, you just did an ad hominem on yourself. These statements only suggest you are not qualified to talk about this. I have no need to personally attack you in order to talk with you (not debate), so I would prefer that you did not trivialize your own standpoint. For the time being, I am not interested in its validity; first I'm trying to understand what exactly you are claiming, as you have not provided a way for me to reproduce and check your claims (which contradict my experience).

> There is no real way an unbiased scientist reads OpenAI's new paper on sparks of AGI and goes, "oh look, GPT-4 is solving AGI".

Nobody is even claiming that. It is you who mentioned AGI first. I can tell you that NLP researchers generally do not use the term as much as you think. It currently isn't well defined, so it is largely meaningless.

> Going back on what I said earlier, yes, there is always the possibility...

The things you said that are worth considering are easy to check: you can just provide the logs (you have the history saved), and since GPT-4 is as reproducible as ChatGPT, we can confirm or discard your claims. There is no need for uncertainty (unless you will it).

0

StrippedSilicon t1_jdnukc7 wrote

People who point to this paper to claim sentience or AGI or whatever are obviously wrong; it's nothing of the sort. Still, saying that it's just memorizing is also very silly, given that it can answer questions that aren't in the training data, or even particularly close to anything in the training data.

2

BellyDancerUrgot t1_jdns4yg wrote

1. Paper summarization and factual analysis of 3D generative models, basic math, and basic OOP understanding were the broad topics I experimented with. Not giving you the exact prompts, but you are free to evaluate it yourselves.

2. Wrong choice of words on my part. When I said 'ace', I implied that it does really well on LeetCode questions from before 2021 and is abysmal after. Also, the ones it does solve, it solves at a really fast rate. From a test that happened a few weeks ago, it solved 3 questions pretty much instantly, and that by itself would have placed it in the top 10% of competitors.

3. Unbiased implies being tested on truly unseen data, of which there is far less, considering the size of the training data used. Many of the examples cited in their new paper, "Sparks of AGI", are not even reproducible.

https://twitter.com/katecrawford/status/1638524011876433921?s=46&t=kwpwSgfnJvGe6J-1CEe_5Q

1. Insufficient because, as I said: no world model, no intuition, only memory. Which is why it hallucinates.

2. Intuition is understanding the structure of the world without having to memorize the entire internet. A good analogy would be how a child isn't taught how gravity works when they first start walking. Or how you can have no knowledge about a subject and still infer based on your understanding of the underlying concepts.

These are things you inherently cannot test or quantify when evaluating models like GPT that have been trained on everything, when you still don't know what they have been trained on, lol.

1. You can keep daring me and I don't care, because I have these debates with fellow researchers in the field; I'm always up for a good debate if I have time. I'm not even an NLP researcher, and even then I know the existential dread creeping in on NLP researchers because of how esoteric these results are and how AI influencers have blown things out of proportion, citing cherry-picked results that aren't even reproducible because you don't know how to reproduce them.

2. There is no real way an unbiased scientist reads OpenAI's new paper on sparks of AGI and goes, "oh look, GPT-4 is solving AGI".

3. Going back on what I said earlier: yes, there is always the possibility that I'm wrong and GPT is indeed the stepping stone to AGI, but we don't know, because the only results you have access to are not very convincing. And on a user level it has failed to impress me beyond being a really good chatbot which can do some creative work.

3

nixed9 t1_jdnpdma wrote

In my personal experience, Bing Chat, while it says it's powered by GPT-4, is way, way, way less powerful and useful than ChatGPT-4 (which is only available for Pro users right now). I've found ChatGPT-4 SIGNIFICANTLY better.

It also has emergent properties of intelligence, vision, and mapping, somehow; we don't know how.

This paper, which was done on GPT-4 (a more powerful version than what we have access to via either Bing or OpenAI.com), is astounding: https://arxiv.org/pdf/2303.12712.pdf

2

BellyDancerUrgot t1_jdno8w6 wrote

That paper is laughable and a meme. My Twitter feed has been spammed by people tweeting about it, and as someone in academia it's sad to see the quality of research publications sink this low. I can't believe I'm saying this as a student of deep learning, but Gary Marcus, in his latest blog post, is actually right.

1