Vivek Haldar

Portability of LLM Prompts

Practitioners know that there is no such thing as prompt portability right now. If you change models, you need to re-evaluate and re-tune all your prompts. Small changes in phrasing and ordering can have large effects on output.

This parallels the days before compilers, when you had to write bespoke assembly for each platform. Software was not portable.

Later, compilers for high-level languages could target various ISAs without the programmer having to worry about it.

What if LLMs came with "atomic prompts"? Prompts tuned for that specific model for basic tasks like summarization, simple reasoning, personas, etc. Make them part of an (extended) model card. That would take a lot of the guesswork out of porting prompts across models.
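To make the idea concrete, here is a minimal sketch of what consuming such "atomic prompts" might look like. Everything here is hypothetical: the model names, task names, and prompt strings are invented for illustration, standing in for the tuned prompts a vendor would ship with the model card.

```python
# Hypothetical "atomic prompts" section of an extended model card.
# Each model ships prompts tuned for it, keyed by basic task.
ATOMIC_PROMPTS = {
    "model-a-v1": {
        "summarization": "Summarize the following text in three sentences:\n{text}",
        "persona": "You are {persona}. Stay in character while answering.",
    },
    "model-b-v2": {
        "summarization": "TL;DR of the passage below:\n{text}",
        "persona": "Adopt the role of {persona} for this conversation.",
    },
}

def atomic_prompt(model: str, task: str, **kwargs) -> str:
    """Look up the model-specific tuned prompt for a basic task.

    Application code targets tasks, not raw prompt strings, so
    switching models is a one-argument change rather than a
    re-tuning exercise.
    """
    template = ATOMIC_PROMPTS[model][task]
    return template.format(**kwargs)

# Porting the same task across models:
prompt_a = atomic_prompt("model-a-v1", "summarization", text="<article>")
prompt_b = atomic_prompt("model-b-v2", "summarization", text="<article>")
```

The application asks for "summarization", and the model card supplies whatever phrasing that particular model was tuned on, much as a compiler backend supplies the right instructions for each ISA.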

And while we’re at it, can we please also make it so that we never have to say “Take a deep breath” or “I will tip you $200 for getting this right”? That would be great.