keepthepace

keepthepace t1_jdzvxl2 wrote

Maybe I am stubborn, but I haven't totally digested the "bitter lesson" and I am not sure I agree with its inevitability. Transformers did not appear magically out of nowhere; they were a solution to RNNs' vanishing gradient problem. AlphaGo had to be wrapped in a Monte Carlo tree search to do anything good, and it is hard not to feel that LLMs' grounding issues may be a problem to solve with architecture changes rather than scale.
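A hypothetical numpy sketch (mine, not from the comment) of the vanishing-gradient effect mentioned above: backpropagating through a vanilla RNN multiplies the gradient by the recurrent Jacobian once per timestep, so if its spectral norm is below 1 the gradient shrinks exponentially with sequence length. The matrix size, sequence length, and 0.9 spectral norm are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50                                # sequence length
W = rng.standard_normal((8, 8))
W *= 0.9 / np.linalg.norm(W, 2)       # rescale so the spectral norm is 0.9 (< 1)

grad = np.ones(8)                     # gradient arriving at the final hidden state
norms = []
for _ in range(T):
    grad = W.T @ grad                 # one backprop step through h_t = W h_{t-1}
    norms.append(np.linalg.norm(grad))

print(f"gradient norm after 1 step:   {norms[0]:.3e}")
print(f"gradient norm after {T} steps: {norms[-1]:.3e}")
```

With a spectral norm above 1 the same loop would show the opposite failure, exploding gradients; attention sidesteps both by giving every position a short gradient path to every other.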

3

keepthepace t1_jdzvbww wrote

> You won't get replaced by AI, you will get replaced by someone who knows how to use the AI.

I wonder why this is supposed to be any comfort. It is just a rephrasing of "your skillset is obsolete; the profession that used to pay you a salary is now worth a 15 USD/month subscription service."

The person "who knows how to use AI" is not necessarily a skilled AI specialist. It could simply be your typical client.

The current AI wave should be the trigger to reconsider the place we give to work in our lives. Many jobs are being automated, and no, this is not like the previous industrialization waves.

Workers used to be replaced by expensive machines. It took time to install them and prepare the infrastructure for the transition, and it required other workers to do maintenance.

This wave replaces people instantly with an online service that requires zero infrastructure (for the user), costs a fraction of a wage and gives almost instant results.

Yes, progress that suppresses jobs tends to create new jobs as well, but there is no mechanism that guarantees any symmetry between those two quantities. When you think about the AI wave, it is clear that jobs will be removed faster than they are created, and that the skillsets from the jobs removed do not translate well to the hypothetical jobs created.

23

keepthepace t1_jcij20g wrote

> Do they actually add useful insight, or do they serve more as a PR thing?

The ones we hear about most are pure PR.

> Is there anything you think AI ethics as a field can do to be more useful and to get more change?

Yes. Work on AI alignment. It is a broader problem than just ethics: it is also about having models generate truthful and grounded answers. I am extremely doubtful of the current trend of using RLHF for it; we need other approaches. But this is real ML development work, not just PR production. That would be an extremely useful way to steer ethical-AI efforts.

3

keepthepace t1_jbe0b4a wrote

Medical robotics is a field that is booming now and that is hungry for tech-inclined licensed practitioners to join them. Just go advertise yourself to startups in the field, I am sure you will get interesting proposals.

1

keepthepace t1_janzb1v wrote

Congratulations! The world desperately needs what you are doing! I was thinking about joining a while ago but got distracted by image-oriented research.

> As access to LLMs has increased, our research has shifted to focus more on interpretability, alignment, ethics, and evaluation of AIs.

Does this mean EleutherAI is no longer working on big language models?

39

keepthepace t1_j7jgm75 wrote

Google has been the biggest team player among the big labs when it comes to publishing advances in AI. OpenAI has been the worst.

Most of the techniques that made ChatGPT possible were published by Google. Worse: OpenAI does not publish the 1% that makes ChatGPT unique (though we know enough to have a pretty good idea of what they did).

I'd be whiny in their place as well. The GPT family is not super innovative: they just ran away with an architecture mostly made by Google (the Transformer), stripped it of everything that prevented huge parallelization (which many suspect included things that would have allowed it to stay "grounded" in reality), and slapped more compute on it.

30

keepthepace t1_j3e6gi8 wrote

Agreed on 1 and 2.

Not sure about 3: NVidia is dominant (maybe to the point of risking monopoly litigation?) precisely by providing to everyone. Creating an "in" group and an "out" group carries little benefit and would push the out-group towards competitors.

4: I fail to see an "alignment organisation" that would provide 100M of value, either in tech or in reputation. One may emerge this year, but I doubt there is one yet. Most valuable insights come from established AI shops.

5: I doubt it. Artists have been disregarded by politicians forever. Copyright lobbyists have more power, and copyright on generated images has already been ruled out.

6: OpenAI is not an open-source company. And this has already happened: Microsoft poured 1 billion into OpenAI.

7: Gosh, I hope so! Here is my own bold prediction: we will discover that multitask models require far fewer parameters than language models for similar performance, and GATO successors will outperform similarly sized LLMs while simultaneously doing more tasks.

5

keepthepace t1_j2m2xrp wrote

Yes, they never reached the level of very complex algorithms, but no one ever tried to throw a lot of compute at them either, before we created gigantic language models able to mostly do these tasks with orders of magnitude more parameters.

I do suspect that if we threw a bit more compute at a DNC we would get very interesting results, but that's little more than a hunch.

1