Tutorial and my thoughts on how we can optimise prompts for small LLMs to match the quality of bigger ones with DSPy and GEPA
Terrific write-up. Thanks for sharing.
At the risk of infinite recursive time-wasting, I wonder if GEPA could be used to tune feedback in the optimizer metric
We probably could. I had an idea that we could train a Judge LLM the way we train GANs
that’s very useful, thanks! 👍🏻
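A minimal sketch of the recursive idea above, in plain Python (no DSPy APIs; `make_metric` and `feedback_prompt` are hypothetical names for illustration): the metric returns both a score and textual feedback, and the feedback instructions are themselves a string that an outer optimizer like GEPA could mutate.

```python
# Hypothetical sketch: a metric whose feedback style is a tunable string.
# An outer GEPA-like loop could search over `feedback_prompt` values,
# scoring each by how much the inner optimizer improves with it.

def make_metric(feedback_prompt):
    """Build a metric parameterised by a feedback-style instruction."""
    def metric(gold, pred):
        # Toy scoring rule: exact match (a real setup would use a judge LM).
        score = 1.0 if pred.strip().lower() == gold.strip().lower() else 0.0
        # A judge LM would expand `feedback_prompt` into a critique;
        # here we only template it to show the plumbing.
        feedback = f"{feedback_prompt} Expected: {gold!r}, got: {pred!r}."
        return score, feedback
    return metric

metric = make_metric("Explain in one sentence why the answer is wrong.")
score, feedback = metric("Paris", "Lyon")
```

The point is only that the feedback channel is data, not code, so nothing stops a second optimization loop from rewriting it.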