EducationalCicada t1_januat3 wrote
Reply to comment by topcodemangler in [D] Are Genetic Algorithms Dead? by TobusFire
Aren't people working on ways to implement NNs in hardware?
EducationalCicada t1_jamrqo2 wrote
Reply to [D] Podcasts about ML research? by Tight-Vacation-9410
The Gradient Podcast
Gradient Dissent Podcast
Lex Fridman's podcast has also had all the biggest names in AI as guests.
EducationalCicada t1_j8d5y9z wrote
Reply to comment by BenjaminJamesBush in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
Not if it's actually impossible.
Submitted by EducationalCicada t3_10vgrff in MachineLearning
EducationalCicada t1_j6wkdlr wrote
Reply to comment by lmericle in [D] What does a DL role look like in ten years? by PassingTumbleweed
I would even say that neural networks are not the be-all end-all of machine learning.
EducationalCicada t1_j1yux4o wrote
Reply to [D] DeepMind has at least half a dozen prototypes for abstract/symbolic reasoning. What are their approaches? by valdanylchuk
Previously from DeepMind in the domain of symbolic reasoning:
>This paper attempts to answer a central question in unsupervised learning: what does it mean to "make sense" of a sensory sequence? In our formalization, making sense involves constructing a symbolic causal theory that both explains the sensory sequence and also satisfies a set of unity conditions. The unity conditions insist that the constituents of the causal theory -- objects, properties, and laws -- must be integrated into a coherent whole. On our account, making sense of sensory input is a type of program synthesis, but it is unsupervised program synthesis.
>
>Our second contribution is a computer implementation, the Apperception Engine, that was designed to satisfy the above requirements. Our system is able to produce interpretable human-readable causal theories from very small amounts of data, because of the strong inductive bias provided by the unity conditions. A causal theory produced by our system is able to predict future sensor readings, as well as retrodict earlier readings, and impute (fill in the blanks of) missing sensory readings, in any combination.
>
>We tested the engine in a diverse variety of domains, including cellular automata, rhythms and simple nursery tunes, multi-modal binding problems, occlusion tasks, and sequence induction intelligence tests. In each domain, we test our engine's ability to predict future sensor values, retrodict earlier sensor values, and impute missing sensory data.
>
>The engine performs well in all these domains, significantly outperforming neural net baselines. We note in particular that in the sequence induction intelligence tests, our system achieved human-level performance. This is notable because our system is not a bespoke system designed specifically to solve intelligence tests, but a general-purpose system that was designed to make sense of any sensory sequence.
Edit: Also check out the follow-up paper:
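The core idea in the abstract (making sense of a sensory sequence = finding a small program that explains it, then using that program to predict future readings) can be illustrated with a toy sketch. This is just a hypothetical brute-force illustration of unsupervised program synthesis over sequences, not the Apperception Engine's actual ASP-based method:

```python
from itertools import product

def fits(rule, seq, k):
    # A rule is a table mapping each k-gram of symbols to the next symbol.
    # It "explains" the sequence if every observed transition obeys it.
    return all(rule[tuple(seq[i:i + k])] == seq[i + k]
               for i in range(len(seq) - k))

def induce_rule(seq, max_k=2):
    # Prefer the smallest context size k (a crude simplicity bias),
    # enumerating all possible rule tables until one explains the data.
    symbols = sorted(set(seq))
    for k in range(1, max_k + 1):
        contexts = list(product(symbols, repeat=k))
        for outputs in product(symbols, repeat=len(contexts)):
            rule = dict(zip(contexts, outputs))
            if fits(rule, seq, k):
                return k, rule
    return None

seq = [0, 1, 0, 1, 0, 1]          # the "sensory sequence"
k, rule = induce_rule(seq)
nxt = rule[tuple(seq[-k:])]       # predict the next sensor reading
```

The enumeration is exponential in the number of contexts, so this only works for tiny alphabets and contexts; the interest of the paper is precisely in making this kind of search tractable and adding the unity conditions on top.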
EducationalCicada t1_ixws7qk wrote
Reply to comment by dulipat in [D] Paper Explained - CICERO: An AI agent that negotiates, persuades, and cooperates with people (Video) by ykilcher
No, the ancient Roman orator and statesman Marcus Tullius Cicero.
EducationalCicada t1_isjcz0x wrote
Reply to [R] UL2: Unifying Language Learning Paradigms - Google Research 2022 - 20B parameters outperforming 175B GPT-3 and tripling the performance of T5-XXL on one-shot summarization. Public checkpoints! by Singularian2501
Is there a website that keeps track of all the models being released by the major AI labs?
I guess this sub has them all, but I'm looking for a neater presentation.
EducationalCicada t1_jdvcof6 wrote
Reply to [D] Can we train a decompiler? by vintergroena
Yes, there’s already work on this:
https://arxiv.org/abs/2010.00770