
ezelikman t1_j6lx0vm wrote

Hi, author here!

There are a few ways to interpret this question.

The first is, "why generate a bunch of composable small functions - why not generate complete Python/Lean/etc. implementations directly from the high-level sketch?" If you generate 10 complete implementations, you have 10 programs. If you generate 10 implementations of four subfunctions, you have 10,000 programs. By decomposing problems combinatorially, you call the language model less. You can see the benefits in Fig. 6 and our direct compilation ablation. There's also the context window: a hundred 500-token functions from Parsel is a 50,000-token program. You won't get that with Codex alone.

Another interpretation is, "why do you need to expose an intermediate language when you can use a more abstract intermediate representation?" You suggest "leveraging the value of LLMs--through a more natural language interface." That's exactly the goal: Parsel is intentionally little more than indented natural language w/ unit tests, with minimal extra syntax for efficiency and generality - ideally, people who've never used Python can understand and write Parsel. The "expert" details here aren't syntax: most people are unfamiliar with the nuances of writing natural language that compiles automatically to code, like the value of comprehensive unit tests.
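
For a rough sense of what that looks like, here's a schematic sketch in the spirit of the paper's examples (illustrative, not copied from it): each line names a function and describes it in plain language, indentation marks which functions it uses, and `input -> output` lines are unit tests.

```
collatz_list(num): Return the Collatz sequence starting from num and ending at 1
    6 -> [6, 3, 10, 5, 16, 8, 4, 2, 1]
    next_collatz(num): Return 3 * num + 1 if num is odd, otherwise num // 2
        6 -> 3
```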

Another is, "why design a new language instead of writing this as, e.g., a Python library?" My response is we did this too. Internally, Parsel is in Python, and a "Function" class already exists - you can find it on GitHub. Still, you need a process to generate implementations and select one satisfying the constraints, which we call the compiler.

Hope this answers your question!


farmingvillein t1_j6nxa0i wrote

> If you generate 10 complete implementations, you have 10 programs. If you generate 10 implementations of four subfunctions, you have 10,000 programs. By decomposing problems combinatorially, you call the language model less

Yup, agreed--this was my positive reference to "the big idea". Decomposition is almost certainly key to scaling automated program generation to more complex problems, and the paper is a good example of that.

> Parsel is intentionally little more than indented natural language w/ unit tests, with minimal extra syntax for efficiency and generality.

I question whether the extra formal syntax is needed at all. My guess is, were this properly ablated, it probably would not be. LLMs are--in my personal experience, and this is obviously borne out thematically--quite flexible about different ways of representing, say, unit inputs and outputs. Permitting users to specify tests in a more arbitrary manner--whether in natural language, pseudocode, or extant programming languages--seems highly likely to work equally well, with some light coercion (i.e., training/prompting).

Further, natural language allows test cases to be specified in a more general way ("unit tests: each day returns the next day in the week, Sunday=>Monday, ..., Saturday=>Sunday") that LLMs are well-suited to work with. Given LLMs' ability to pick up on context and apply it, there is a good chance that freer-form descriptions of test cases would drive improved performance.
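
To make that concrete, here's a hypothetical sketch (mine, not from the paper) of the executable checks an LLM might expand that one-line description into:

```python
# Hypothetical expansion of the free-form spec "each day returns the
# next day in the week" into concrete assertions, so the usual
# generate-and-filter loop can run unchanged.
def check_next_day(next_day):
    days = ["Sunday", "Monday", "Tuesday", "Wednesday",
            "Thursday", "Friday", "Saturday"]
    for today, tomorrow in zip(days, days[1:] + days[:1]):
        assert next_day(today) == tomorrow, (today, tomorrow)
```

The point being that the model, rather than the user, would bridge the gap from the general description to the concrete cases.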

If you want to call that further research--"it was easier to demonstrate the value of hierarchical decomposition with a DSL"--that's fine and understood, but I would call it out as a(n understandable) limitation of the paper and an opportunity for future research.
