GreenTeaBD

GreenTeaBD t1_j9ynka0 wrote

The human brain, as far as we can tell, requires input to be creative too. It's just that our input is our senses. Framing creativity as anything else is basically calling it magic, an ability to generate something from nothing.

This does not have to be a person typing prompts for an AI; it usually is only because that's the setup that's useful. I've joked before about strapping a webcam to a Roomba, running the input through CLIP, and dumping the resulting text into GPT. There's nothing that stops that from working.
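The joke pipeline above has a real shape: vision model describes the frame, language model continues from the description, no human in the loop. A minimal offline sketch of that loop, where `describe_frame` and `continue_text` are toy stand-ins (a real version would score candidate captions with CLIP and generate with a GPT-family model):

```python
# Hypothetical webcam -> CLIP -> GPT loop, with stand-in functions.
# The "frame" here is a fake bag of visual features, not a real image.

def describe_frame(frame, candidates):
    """Stand-in for CLIP zero-shot labeling: pick the candidate caption
    whose words score highest against the (toy) frame features."""
    return max(candidates, key=lambda c: sum(frame.get(w, 0) for w in c.split()))

def continue_text(prompt):
    """Stand-in for a GPT-style model: echo a continuation."""
    return prompt + " ..."

frame = {"cat": 3, "rug": 1}  # fake "what the webcam sees"
candidates = ["a cat on a rug", "an empty room"]
caption = describe_frame(frame, candidates)
print(continue_text(caption))
```

Swap the two stand-ins for real CLIP and GPT calls and the loop is exactly the Roomba joke.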

2

GreenTeaBD t1_j9ymzr3 wrote

There are open-source models near GPT-3. The most open are EleutherAI's models; though not as big as GPT-3, they perform very well. You can go run them right now with some very basic Python.
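"Very basic Python" really is all it takes. A minimal sketch, assuming the Hugging Face `transformers` library (and `torch`) is installed; `EleutherAI/gpt-neo-125M` is the smallest GPT-Neo checkpoint, picked here so it runs on ordinary hardware:

```python
# Load a small EleutherAI model and generate text from a prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125M")

# Greedy decoding (do_sample=False) so the output is deterministic.
out = generator("Open language models", max_new_tokens=20, do_sample=False)
print(out[0]["generated_text"])
```

The larger GPT-Neo and GPT-J checkpoints load the same way; only the VRAM requirement changes.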

The problem is less that we don't have open models and more that we haven't found good ways to run models that big on consumer hardware. We do have open models about as big as GPT-3 (the largest BLOOM model), but the minimum GPU requirements would set you back about 100,000 US dollars.
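The back-of-the-envelope arithmetic behind that price tag: at fp16, every parameter takes 2 bytes, so the 176-billion-parameter BLOOM needs hundreds of gigabytes of VRAM before you even count activations:

```python
import math

# fp16 weights only: 2 bytes per parameter, no activations or KV cache.
params = 176e9                  # BLOOM-176B
bytes_per_param = 2             # fp16
vram_gb = params * bytes_per_param / 1e9

gpus = math.ceil(vram_gb / 80)  # assuming 80 GB cards (e.g. A100 80GB)
print(f"{vram_gb:.0f} GB across at least {gpus} GPUs")
```

That works out to roughly 352 GB, i.e. a multi-A100 node, which is where the six-figure hardware cost comes from.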

Stable Diffusion didn't democratize image-generation AI just by being released open source, but by being released in a form that people with normal gaming computers could run.

We are maybe almost at that point with language models. FlexGen just came out, and if those improvements continue we might get a Stable Diffusion-like moment. But until then, whether GPT-3 is open or not doesn't matter for the vast majority of people.

1

GreenTeaBD t1_irludvh wrote

If you end up liking it when you're done with it, I'd recommend An Introduction to the Philosophy of Time by Sam Baron and Kristie Miller. I read both, and I think they go together well.

The latter hints at a different conclusion and is more exhaustive (for better and for worse), but The Order of Time on its own left me feeling like some things went unexplained.

Introduction to the Philosophy of Time definitely explained things.

3