MysteryInc152 OP t1_jahgb2n wrote
Reply to comment by limpbizkit4prez in [R] EvoPrompting: Language models can create novel and effective deep neural architectures. These architectures are also able to outperform those designed by human experts (with few-shot prompting) by MysteryInc152
Overfitting carries the necessary connotation that the model does not generalize well to instances of the task outside the training data.
As long as what the model creates is novel and works, "overfitting" seems like an unimportant, if not misleading, distinction.
limpbizkit4prez t1_jahhmhd wrote
Lol, I strongly disagree. There are already methods out there that do automated architecture design. This is a "that's neat" type of project, but I'd be really disappointed to see this anywhere other than arXiv.
_Arsenie_Boca_ t1_jai5zgz wrote
The final evaluation is done on test metrics right? If so, why does it matter?
limpbizkit4prez t1_jai7l96 wrote
It matters because the authors just keep increasing model capacity to do better on a single task, and that's it. And the authors, not the LLM, determined that strategy. It would be way cooler if they constrained the problem to roughly the same number of parameters and showed generalization across multiple tasks. Again, it's neat, just not innovative or sexy.
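For what it's worth, here's a minimal sketch of what that proposed constraint could look like in an EvoPrompting-style loop: reject candidates whose parameter count strays from a fixed budget, then rank survivors by average fitness across several tasks. Everything here (`propose_architectures`, `train_and_eval`, the task list, the budget numbers) is a hypothetical placeholder, not the paper's actual code or API.

```python
import random

PARAM_BUDGET = 10_000_000   # hypothetical target size, ~10M parameters
TOLERANCE = 0.05            # allow +/- 5% around the budget
TASKS = ["mnist", "cifar10", "sst2"]  # hypothetical task suite

def propose_architectures(parent_pool, n):
    """Hypothetical LLM call: few-shot prompt with parent architectures,
    sample n child architectures. Stubbed out with random values here."""
    return [{"code": f"child_of_{random.choice(parent_pool)['code']}",
             "n_params": random.randint(8_000_000, 12_000_000)}
            for _ in range(n)]

def train_and_eval(arch, task):
    """Hypothetical: train `arch` on `task`, return a validation score."""
    return random.random()

def within_budget(arch):
    return abs(arch["n_params"] - PARAM_BUDGET) <= TOLERANCE * PARAM_BUDGET

pool = [{"code": "seed", "n_params": PARAM_BUDGET}]
for generation in range(3):
    children = propose_architectures(pool, n=8)
    # The constraint: discard children outside the parameter budget, so the
    # search can't "win" by simply growing the model.
    children = [c for c in children if within_budget(c)]
    # Fitness is the mean score over all tasks, rewarding generalization
    # rather than performance on a single benchmark.
    scored = [(sum(train_and_eval(c, t) for t in TASKS) / len(TASKS), c)
              for c in children]
    scored.sort(key=lambda x: x[0], reverse=True)
    pool = [c for _, c in scored[:4]] or pool  # keep top 4 as next parents
```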