Why Continue's Philosophy of Model Choice Makes Sense

Different AI models excel at different coding tasks. Here's why developers should choose their own models, plus Continue's August 2025 recommendations for planning, chat, autocomplete, and more.

In the wake of Anthropic's recent publication of best practices for using Claude Code, I've been reflecting on a fundamental debate in the AI-native development ecosystem: should developers choose their own models, or should these decisions be made for them?

Quinn Slack of Sourcegraph captured this tension perfectly in a recent tweet about the need to "empathize" with different models and understand how each one behaves.

This observation highlights a critical reality that some vendors try to hide: different AI models behave differently, and understanding these differences matters.

The Black Box Problem

Some in our industry argue that model selection should be hidden from developers. They position model pickers as "bad UX" that wastes developers' time by forcing them to make decisions they're not equipped to make.

The argument sounds reasonable on the surface: "Why should developers worry about which model to use? They just want solutions."

But this perspective misses the profound value of transparency and choice.

Why Model Choice Matters

At Continue, we've always believed in empowering developers through choice and transparency. Here's why:

  1. Models behave differently - As Quinn pointed out, you need to "empathize" with different models. They have distinct personalities, strengths, and weaknesses. Some excel at code generation, others at explanation, and others at refactoring.
  2. Different tasks need different tools - Just as you wouldn't use a hammer for every construction task, you shouldn't use a single AI model for every coding challenge. Our research shows that the best autocomplete model (QwenCoder2.5) isn't necessarily the best for planning (where Qwen 3 Coder or Claude Opus shine).
  3. Knowledge builds expertise - When developers understand which models work best for specific tasks, they build intuition that makes them more effective over time.
  4. Context matters more than model power - As we emphasized in our blog post about Gemini 2.5 Pro, sometimes the difference in performance comes from a model's ability to handle more context, not just raw intelligence.

Continue's August 2025 Model Recommendations

Based on extensive testing and community feedback, here are our current recommendations for each AI coding task, along with a sample configuration sketch after the lists:

For Planning and Agent Work

  • Open models: Qwen 3 Coder (480B), Devstral (24B), GLM 4.5 (355B)
  • Closed models: Claude Opus 4.1, Claude Sonnet 4, GPT-5, Gemini 2.5 Pro
  • Note: Closed models show slight advantages for complex planning tasks

For Chat and Editing

  • Open models: Qwen 3 Coder (480B), gpt-oss (120B)
  • Closed models: Claude Opus 4.1, Claude Sonnet 4, GPT-5, Gemini 2.5 Pro
  • Note: Performance is remarkably similar between open and closed models

For Autocomplete

  • Open models: QwenCoder2.5 (1.5B), QwenCoder2.5 (7B)
  • Closed models: Codestral, Mercury Coder
  • Note: Closed models maintain a slight edge, but the gap is narrowing

For Next Edit

  • Closed models: Mercury Coder
  • Note: This is where specialized models really shine
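To make this concrete, here is a minimal sketch of what a role-based setup could look like in a Continue config.yaml, pairing a larger model for planning, chat, and edits with a small local model for autocomplete. The field names, provider names, and model identifiers below are illustrative assumptions; check the Continue docs and Hub for the current schema and exact model slugs.

  name: Example Assistant
  version: 0.0.1
  schema: v1
  models:
    # Larger model for planning, chat, and edits (illustrative model ID)
    - name: Claude Sonnet 4
      provider: anthropic
      model: claude-sonnet-4-20250514
      apiKey: <YOUR_ANTHROPIC_API_KEY>
      roles:
        - chat
        - edit
    # Small, fast local model for autocomplete (illustrative Ollama tag)
    - name: Qwen2.5-Coder 1.5B
      provider: ollama
      model: qwen2.5-coder:1.5b
      roles:
        - autocomplete

The structure mirrors the recommendations above: route heavyweight reasoning to a capable model, keep autocomplete on something small and low-latency, and swap either entry out as better models appear.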

Ready to explore?

Check out our full model recommendations and join the conversation on our Discord about which models work best for your workflow.

The Right Balance: Community-Driven Curation

Where Continue differs from both extremes in this debate is in our focus on community-driven model selection. We believe:

  1. Not every developer needs to choose models - Platform teams, architects, or power users can configure assistants tailored to specific languages, frameworks, or projects.
  2. The community benefits from shared knowledge - When experts discover which models excel at particular tasks, they can share these configurations through Continue Hub.
  3. Transparency doesn't mean complexity for all - Most developers can use pre-configured assistants, while those who want to tinker have the freedom to do so.

The Future of AI-Native Development

The push to hide model selection behind black boxes ultimately does developers a disservice. It's a step backward toward the proprietary, closed systems that open source fought against for decades.

At Continue, we believe in a future where developers are amplified, not automated—where they have the knowledge and tools to make their AI assistants work for them, not the other way around.

As AI models continue to evolve rapidly, our approach of transparent, community-driven model selection ensures developers can always access the best tools for their specific needs without being locked into any single vendor's black box.

The most powerful AI coding assistant isn't the one with the fanciest model—it's the one that gives developers the freedom to choose the right model for the job.