AI Coding Tip 002 - Prompt in English

Key Takeaways

  • AI models trained on 90% English data perform better with English prompts
  • Non-English prompts waste tokens on translation and reduce accuracy
  • Technical terms lose meaning when translated, confusing AI systems

Why It Matters

The revelation that AI speaks better code in English shouldn't surprise anyone who's watched a machine try to translate "callback" into Spanish as "retrollamada." It's like asking your GPS for directions in ancient Latin—technically possible, but you're going to end up in the wrong neighborhood. Developer Maxi Contieri's research confirms what many programmers suspected: when you're dealing with systems trained predominantly on English datasets, speaking their native tongue gets better results.

The token economics alone make this compelling. Every non-English prompt forces the AI to spend precious context window real estate on translation gymnastics instead of actual problem-solving. It's computational inefficiency at its finest—like hiring a translator to explain your grocery list to a cashier who already speaks your language. The AI essentially has to play linguistic telephone with itself, and we all know how that game ends.
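The token-economics argument can be sketched with a toy comparison. This is a rough character-count proxy, not a real tokenizer: the two prompts and the common "~4 characters per token" rule of thumb are illustrative assumptions, and real subword tokenizers typically penalize non-English text even more, since unfamiliar words split into more pieces.

```python
# Toy illustration: equivalent prompts in English and Spanish, with a
# crude character-count proxy for token cost. Translated technical
# terms ("callback" -> "retrollamada") tend to be longer and less
# familiar to the tokenizer, inflating the prompt's token footprint.

PROMPTS = {
    "english": "Write a callback that retries the request on timeout.",
    "spanish": "Escribe una retrollamada que reintente la solicitud "
               "al agotarse el tiempo de espera.",
}

def rough_token_estimate(text: str) -> int:
    # Rule-of-thumb proxy: roughly 4 characters per token for English
    # text; this undercounts for other languages, so the real gap is
    # usually wider than this estimate suggests.
    return max(1, len(text) // 4)

for lang, prompt in PROMPTS.items():
    print(f"{lang}: ~{rough_token_estimate(prompt)} tokens")
```

The Spanish version of the same request comes out measurably longer even under this generous proxy, which is the "context window real estate" being spent before any problem-solving starts.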

This isn't just about convenience; it's about getting the code you actually want. When technical terms get lost in translation, the AI starts guessing, and AI guessing rarely ends well for anyone's deployment schedule. The recommendation to stick with American English for programming terms makes sense—after all, most programming languages were designed by English speakers, and the documentation ecosystem remains stubbornly monolingual despite our globally connected world.
