evanthebouncy OP t1_isfytt9 wrote
Reply to comment by JNmbrs in [P] a minimalist guide to program synthesis by evanthebouncy
Yaya hit me with the question and I'll see what I can do!
evanthebouncy OP t1_iseg25n wrote
Reply to comment by yldedly in [P] a minimalist guide to program synthesis by evanthebouncy
Yaya thanks! My belief is that, for the most part, people know exactly what they want from computers, and can articulate it well enough that a developer (with knowledge of computers) can implement it successfully. In this process the first person need not code at all, in the traditional sense.
All we need is the technology to replace the dev with AI haha
evanthebouncy OP t1_isbui1j wrote
Reply to comment by yldedly in [P] a minimalist guide to program synthesis by evanthebouncy
I read all of your blog.
I loved this reference
"""The physicist David Deutsch proposes a single criterion to judge the quality of explanations. He says good explanations are those that are hard to vary, while still accounting for observations. """
You write really well! I followed you on Twitter. I think you have thought about the relationship between explaining data and probabilistic programming more deeply and for longer than I have, so I can't say many surprisingly cool things to you.
I think my work "Communicating Natural Programs to Humans and Machines" will entertain you for hours. Give it a go.
It's my belief that we should program computers using natural utterances such as language, demonstrations, doodles, etc. These "programs" are fundamentally probabilistic and admit multiple interpretations/executions.
evanthebouncy OP t1_isb2n4f wrote
Reply to comment by yldedly in [P] a minimalist guide to program synthesis by evanthebouncy
Not much personally no.
But it's widely applicable, because in many instances you'll have a stochastic system that generates data. You see the data, and you want to infer the system.
Example 1: modeling behavior. You could have a game, and the way a person plays the game is random, doing something different at times. By observing a person playing the game, you collect some observation data that's generated from random behavior. To model the strategy the person is using, you'd have to use a probabilistic program. It'll have some logical components and some random components.
Example 2: modeling a natural phenomenon. You have a toilet (I'm sitting on one now lmaoo) that you're building, and you want to know: given the weight and consistency of the poo inside (X), how much water does it need (Y) to flush cleanly? The relationship between X and Y can be described by an equation plus some noise, making it really intuitive to model as a probabilistic program.
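Example 2 could be sketched as a tiny probabilistic program: a deterministic equation plus noise, then inference by fitting the parameters back from the observed data. This is a minimal illustrative sketch; the linear model, the grid-search inference, and all names are assumptions, not anyone's actual flushing model.

```python
import random

def flush_model(x, a, b, noise_scale, rng):
    # a probabilistic program: a deterministic equation plus random noise
    return a * x + b + rng.gauss(0, noise_scale)

rng = random.Random(0)
# observations generated by the hidden "true" system (a=2.0, b=1.0)
data = [(x, flush_model(x, 2.0, 1.0, 0.05, rng))
        for x in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]]

# infer the system: grid-search the parameters that best explain the data
a_hat, b_hat = min(
    ((a / 10, b / 10) for a in range(41) for b in range(41)),
    key=lambda p: sum((y - (p[0] * x + p[1])) ** 2 for x, y in data),
)
print(a_hat, b_hat)  # should land close to the true (2.0, 1.0)
```

The inferred parameters land near the true ones despite the noise, which is the "see the data, infer the system" loop in miniature.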
I'd learn about it here
evanthebouncy OP t1_is76k7z wrote
Reply to comment by neuralbeans in [P] a minimalist guide to program synthesis by evanthebouncy
more than translation per se. in real life, when you're given a specification, it is rare that you can directly translate it into a solution with a 1:1 mapping. typically, you have to _search_ for a solution.
before deep learning, the search could be performed by enumeration or a back-tracking solver. various SAT/SMT engines (miniSAT, z3) were used to find a solution within the search space effectively (though not by today's standards).
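The enumeration style of search can be shown in a few lines: given a specification as input/output examples, try every program in a toy DSL, shortest first, until one satisfies them all. The DSL and all names here are invented for illustration; real systems prune the search far more aggressively.

```python
import itertools

# a tiny DSL of unary ops on integers; a program is a sequence of ops
OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2, "dec": lambda x: x - 1}

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def synthesize(examples, max_len=4):
    # enumerate programs shortest-first; return the first one
    # consistent with every input/output example
    for n in range(1, max_len + 1):
        for program in itertools.product(OPS, repeat=n):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

# the specification: examples of f(x) = 2x + 2
spec = [(1, 4), (3, 8), (0, 2)]
print(synthesize(spec))  # ('inc', 'dbl'), i.e. (x + 1) * 2
```

Note the search is not a translation: nothing in the examples says "inc then dbl"; the synthesizer has to discover it by trying candidates against the spec.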
evanthebouncy OP t1_isgwjhn wrote
Reply to comment by JNmbrs in [P] a minimalist guide to program synthesis by evanthebouncy
Q: What do you see as the bottleneck to be overcome to make library learning program synthesis systems (e.g., Dreamcoder) scalable?
A: one can take a simulation-based approach to understand/view dreamcoder -- an agent starts out knowing only primitives, and is asked to solve a set of tasks, from easy to hard. the agent solves a few, compresses the library, then tries to solve the ones slightly harder than those, and repeats. the inputs to the simulation are the set of tasks and the learning algorithm; you just hit "run" and off it goes in a self-contained way, and we observe what it comes up with in a few days -- kind of like opening a jar of a closed evolutionary system and seeing if dinosaurs are in there or something like that lol.
so obviously we can improve the simulation by picking different components of dreamcoder and making them run faster or more efficiently. my idea (kind of silly tbh) is to allow additional input while the simulation is running. what if you let users tweak the simulation as it runs? what if you let the user guide some of the search, or pick a different curriculum of tasks? etc. how do we make it easy for end-users to inject knowledge into the system as it is running?
ultimately we're designers as much as simulation creators. we can let the system run half on its own through self-play, half with some hand-picked intervention, because humans are good at solving problems.
Q: In the immediate term (3-5 years), in what fields (e.g., theory generators to aid scientists, or as modules in robotics) do you think library learning program synthesis systems will have the greatest impact?
A: well the sell is the library, right? so I'd say it'll do well in a field where there _should_ be some library, yet it's somewhat unintuitive for humans to design. I'm unsure haha. maybe robotics is a good domain, or planning problems, if we can view library learning as a kind of hierarchical planning setup, able to come up with its own abstractions.
Q: (Sorry if this is especially stupid, but) Do you think humans have explicit representations of rules (e.g., programs) in our brain "hardware" that we could in theory point to?
A: I... don't know and I don't think about these problems too much tbh. I'm more practical; I want to build systems that end-users use, so by profession I don't ponder those questions. philosophically I'm more into music and reading old Chinese stories haha, so I don't ponder those questions philosophically either. I will tell you a funny story though, hopefully it makes up for my lack of answer. There was this Lex Fridman lecture at MIT at one point, and he invited a really awesome neurobiologist. a student asked her, "how do we know worms have no consciousness, what if they do?" and she simply said, "it's unlikely, because the sensory neuron (eye) of the worm directly wires into the motor (feet) of the worm, nothing in between. it sees bright light, it retracts backwards reflexively. so what hardware is there for the worm to even process and make decisions?" and I thought that was a brutally hilarious answer.
Although, irrespective of what our brain's program is like, we _did_ invent rules and logic, right? we _did_ invent tools that are highly programmatic and reliable in their execution. So maybe the question should be "can we make AI systems that can _invent_ logic itself?" because clearly humans have done it.
Q: I was intrigued but also left a little confused by the LARC paper. In the conclusion, do you advocate that we need advances to help map from natural programs to machine programs, or instead that machine programs should have the properties of natural language (like being ambiguous)? Or did I miss the point entirely lol?
A: the latter. machine programs need to have the properties of language, namely being expressive and universal (i.e. they can express a lot of ideas and be understood by a range of interpreters), yet still precise (i.e. they can be used to carry out specific tasks). how to do it? honestly iono, but I'm working on it, so subscribe for the next episode (that's a sn00pdawg quote isn't it ahahaha)