
currentscurrents t1_javx4pw wrote

The Winograd Schema is a test of commonsense reasoning. It's hard because it requires not just knowledge of English, but also knowledge of the real world.

But as you found, it's pretty much solved now. As of 2019, LLMs could solve it with better than 90% accuracy, which means it was actually already solved when Tom Scott made his video.
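For a concrete feel of the test, here's the classic trophy/suitcase schema as a toy snippet (the dict layout and field names are just my own illustration):

```python
# Paraphrase of the classic trophy/suitcase schema; the structure here is my own illustration.
schema = {
    "sentence": "The trophy doesn't fit in the suitcase because it is too {word}.",
    "pronoun": "it",
    "candidates": ["the trophy", "the suitcase"],
    "answers": {"big": "the trophy", "small": "the suitcase"},
}

# Swapping a single word flips the referent, and resolving it takes world
# knowledge (big things don't fit into small containers), not just grammar.
for word, referent in schema["answers"].items():
    print(schema["sentence"].format(word=word), "->", referent)
```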

15

currentscurrents t1_jatvmtm wrote

You're right, I misread it. I thought they held out 4 patients for tests. But upon rereading, their dataset only had 4 patients total and they held out the set of images that were seen by all of them.

>NSD provides data acquired from a 7-Tesla fMRI scanner over 30–40 sessions during which each subject viewed three repetitions of 10,000 images. We analyzed data for four of the eight subjects who completed all imaging sessions (subj01, subj02, subj05, and subj07).

...

>We used 27,750 trials from NSD for each subject (2,250 trials out of the total 30,000 trials were not publicly released by NSD). For a subset of those trials (N=2,770 trials), 982 images were viewed by all four subjects. Those trials were used as the test dataset, while the remaining trials (N=24,980) were used as the training dataset.

4 patients is small by ML standards, but with medical data you gotta make do with what you can get.

I think my second question is still valid though. How much of the image comes from the brain data vs from the StableDiffusion pretraining? Pretraining isn't inherently bad - and if your dataset is 4 patients, you're gonna need it - but it makes the results hard to interpret.

2

currentscurrents t1_jasxijr wrote

I'm a wee bit cautious.

Their test set is a set of patients, not images, so their MRI->latent space model has seen every one of the 10,000 images in the dataset. Couldn't it simply have learned to classify them? Previous work has very successfully classified objects based on brain activity.

How much information are they actually getting out of the brain? They're using StableDiffusion to create the images, which has a lot of world knowledge about images pretrained into it. I wish there was a way to measure how much of the output image is coming from the MRI scan vs from StableDiffusion's world knowledge.
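Something like a permutation control might be a start (totally hypothetical sketch, not from the paper; `decode_fn` and `metric_fn` are stand-ins for their fMRI->image decoder and whatever similarity metric you like):

```python
# Hypothetical sanity check (my own sketch, not from the paper): compare
# reconstructions driven by real fMRI features against ones driven by
# shuffled features. If quality barely drops, most of the image is coming
# from the StableDiffusion prior rather than the brain data.
import numpy as np

rng = np.random.default_rng(0)

def reconstruction_score(fmri_features, target_images, decode_fn, metric_fn):
    """Average similarity between decoded images and the images actually seen."""
    recons = [decode_fn(f) for f in fmri_features]
    return float(np.mean([metric_fn(r, t) for r, t in zip(recons, target_images)]))

# Stand-ins so the sketch runs; a real test would plug in the paper's decoder
# and a perceptual similarity metric.
decode_fn = lambda f: rng.normal(size=(64, 64, 3))
metric_fn = lambda a, b: -np.mean((a - b) ** 2)

fmri = rng.normal(size=(10, 128))          # fake fMRI feature vectors
images = rng.normal(size=(10, 64, 64, 3))  # fake "seen" images

real = reconstruction_score(fmri, images, decode_fn, metric_fn)
null = reconstruction_score(rng.permutation(fmri), images, decode_fn, metric_fn)
print(f"real: {real:.3f}  shuffled: {null:.3f}")  # the gap estimates the brain's contribution
```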

16

currentscurrents t1_janr9qo wrote

>"Sentience is the capacity to experience feelings and sensations". Scientists use this to study sentience in animals for example (not in rocks, because THEY HAVE NONE).

How do you know whether or not something experiences feelings and sensations? These are internal experiences. I can build a neural network that reacts to damage as if it is in pain, and with today's technology it could be extremely convincing. Or a locked-in human might experience sensations, even though we wouldn't be able to tell from the outside.

Your metastudy backs me up. Nobody's actually studying animal sentience (because it is impossible to study); all the studies are about proxies like pain response or intelligence and they simply assume these are indicators of sentience.

>What we found surprised us; very little is actually being explored. A lot of these traits and emotions are in fact already being accepted and utilised in the scientific literature. Indeed, 99.34% of the studies we recorded assumed these sentience related keywords in a number of species.

Here's some reading for you:

* https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
* https://en.wikipedia.org/wiki/Mind%E2%80%93body_problem

People much, much smarter than either of us have been flinging themselves at this problem for a very long time with no progress, and not even an idea of how progress might be made.

2

currentscurrents t1_janlwsv wrote

Sure it's idiotic. But you can't disprove it. That's the point; everything about internal experience is shrouded in unfalsifiability.

>it's very easy to understand what each neuron does,

That's like saying you understand the brain because you know how atoms work. The world is full of emergent behavior and many things are more than the sum of their parts.

>And then again, we do have a definition for sentience

And it is?

>, and there have been studies that have proven for example in multiple animal species that they are in fact sentient

No, there have been studies showing that animals are intelligent. Things like the mirror test do not tell you that the animal has an internal experience. A very simple computer program could recognize itself in the mirror.
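To be concrete about that last claim, here's a toy sketch (entirely made up) that "passes" a mirror-style check with zero inner experience:

```python
# Toy illustration (my own sketch): a "mirror test" passer with no inner life.
# It just checks whether the thing it sees matches its stored self-image.
import numpy as np

self_image = np.array([1, 0, 1, 1, 0])   # how the agent "looks"

def sees_itself(mirror_view: np.ndarray) -> bool:
    return bool(np.array_equal(mirror_view, self_image))

print(sees_itself(self_image))                  # True  - "recognizes" itself
print(sees_itself(np.array([0, 1, 0, 1, 0])))   # False - someone else
```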

If you know of any study that directly measures sentience or consciousness, please link it.

3

currentscurrents t1_jangzvf wrote

Hah! Not even close, they're almost black boxes.

But even if we did, that wouldn't help us tell whether or not they're sentient, because we'd still need to understand sentience. For all we know, everything down to dumb rocks could be sentient. Or maybe I'm the only conscious entity in the universe - there's just no data.

2

currentscurrents t1_jajpjj7 wrote

It's not dead, but gradient-based optimization is more popular right now because it works so well for neural networks.

But you can't always use gradient descent. Backprop requires access to the inner workings of the function, and requires that it be smoothly differentiable. Even if you can use it, it may not find a good solution if your loss landscape has a lot of bad local minima.

Evolution is widely used in combinatorial optimization problems, where you're trying to determine the best order of a fixed number of elements.
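As a sketch of what that looks like, here's a bare-bones evolutionary loop on a tiny random travelling-salesman instance (all the hyperparameters are made up; the point is the select/mutate cycle with no gradients anywhere):

```python
# Minimal evolutionary search for an ordering problem: a tiny random TSP.
import random

random.seed(0)
N = 10
cities = [(random.random(), random.random()) for _ in range(N)]

def tour_length(order):
    # total length of the closed tour visiting cities in this order
    return sum(
        ((cities[a][0] - cities[b][0]) ** 2 + (cities[a][1] - cities[b][1]) ** 2) ** 0.5
        for a, b in zip(order, order[1:] + order[:1])
    )

def mutate(order):
    # swap two positions - no derivatives needed, just "try it and keep it if better"
    i, j = random.sample(range(N), 2)
    child = order[:]
    child[i], child[j] = child[j], child[i]
    return child

population = [random.sample(range(N), N) for _ in range(50)]
for _ in range(200):
    population.sort(key=tour_length)                  # rank by fitness
    survivors = population[:10]                       # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

print(tour_length(min(population, key=tour_length)))  # best tour found
```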

69

currentscurrents t1_jai5dk2 wrote

Basically all of the text-to-image generators available today are diffusion models based around convolutional U-Nets. Google has an (unreleased) one that uses vision transformers.

There is more variety in the text encoder, which turns out to be more important than the diffuser. CLIP is very popular, but large language models like T5 show better performance and are probably the future.
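If you want to poke at the pieces yourself, the Hugging Face diffusers pipeline exposes them separately (rough sketch; the exact model id and attribute names may vary between versions):

```python
# Rough sketch with Hugging Face diffusers; treat it as illustrative, not authoritative.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

print(type(pipe.text_encoder))  # CLIP text encoder that embeds the prompt
print(type(pipe.unet))          # convolutional U-Net that does the iterative denoising
print(type(pipe.vae))           # VAE that maps between pixels and the latent space

image = pipe("a corgi wearing a top hat").images[0]
image.save("corgi.png")
```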

6