Soundwave_47
Soundwave_47 t1_j8bpaqd wrote
Reply to comment by sam__izdat in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
Yes, please keep this sort of stuff in /r/futurology or something. We're here trying to formalize the n steps needed to even get to something that vaguely resembles AGI.
Soundwave_47 t1_j3k9npf wrote
Reply to comment by IshKebab in [P] I built Adrenaline, a debugger that fixes errors and explains them with GPT-3 by jsonathan
Anecdotally, it is comparable.
Soundwave_47 t1_ir7olyd wrote
Reply to comment by flyfrog in [R] Discovering Faster Matrix Multiplication Algorithms With Reinforcement Learning by EducationalCicada
Agreed that some intuition will probably be required for NP-hard problems, in order to encode knowledge we've learned in other fields. A wholly probabilistic model would have a harder time.
Soundwave_47 t1_j8fu3r6 wrote
Reply to comment by kaityl3 in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
Somewhat, and no.
We generally define AGI as an intelligence (which, in the current paradigm, would be a set of algorithms) that has decision-making and inference capabilities across a broad set of areas, and that is able to improve its understanding of what it does not know. Think of it like school subjects: it might not be an expert in all of {math, science, history, language, economics}, but it has some notion of how to do basic work in all of those areas.
This definition is extremely vague and not universally agreed upon (for example, some say AGI should exceed peak human capability in all tasks).