Happy 1st birthday, Continue!

Looking back on the first year of Continue

Just over a year ago, we launched Continue on Hacker News. Since then, the Continue community has taken shape and grown considerably:

  • 300k+ downloads across VS Code and JetBrains extensions
  • 14k+ stars on GitHub, 5k+ Discord members, and 140+ contributors
  • Organizations like Siemens have rolled Continue out to their teams
  • And most importantly: all of you who love and use Continue daily!

Many of you are already using Continue to build your own code RAG context engines, run open-source LLMs on your own compute, and take advantage of the latest AI code research as soon as it is released. And while organizations have progressed by leaps and bounds in their adoption of AI developer tools already, there’s still an exciting amount of room left to grow.

Our biggest learnings from year one

After talking to thousands of users during our first year, it’s become clear that we are all still early in the journey of learning to use AI while building software. The most widely adopted use cases so far—autocomplete and chat—are helpful but limit us from more deeply integrating LLMs into our workflows.

While these affordances are the most popular today, we are seeing an explosion of alternatives being explored. This is happening because 1) labs are releasing more capable models, 2) developer tools are trying many different UI/UX approaches, and 3) developers are learning to better leverage both of them.

In talking to hundreds of teams using Continue, we’ve also become aware of the many challenges they overcome to create custom AI code assistants. The admin setup and support required to go beyond a proof of concept in an organization is a considerable amount of work, making even small pilots far more painful than they should be.

Custom does not have to mean challenging though. Instead of building from scratch, teams are already beginning to rely on a core set of components to put together their custom assistants. Nevertheless, they often struggle when layering on authentication and permissions, centralizing configuration, figuring out how to manage API keys, setting up monitoring, etc. on top of these components.

Looking ahead to the second year of Continue

The two biggest learnings from this past year shape what's ahead for us:

  • Deliver better affordances for using LLMs while coding
  • Make it easy to roll out Continue in organizations

A 1% better developer experience every day

Building a better experience means first refining existing affordances. We’ve been hard at work building state-of-the-art codebase retrieval and autocomplete, releasing techniques like hierarchical context, multi-pass reranking, and building synthetic datasets to better measure quality. In the next year, we’ll be scaling up these efforts to create more magical moments while coding.

Perhaps more exciting is the possibility we see in expanding the definition of an AI code assistant. Whether it’s “smart paste”, predicting the next edit instead of just the next characters, “daemon-mode” LLMs that constantly help you avoid mistakes as you code, or tackling multi-file tasks in parallel to your work, there’s a ton more that can be done to make engineers more productive.

A governance layer on top of your custom AI code assistant

For engineering teams to realize the promised productivity gains of AI code assistants, we believe that custom assistants are necessary. Each of us has our own unique practices for building software and particular organizational requirements, so we need our AI code assistant to fit into them. But this does not mean we believe that everyone needs to implement an assistant from scratch.

In fact, we think it will only be possible for most teams to create custom assistants once a cohesive ecosystem of integrated AI code assistant components emerges. Our open-source VS Code and JetBrains extensions are early examples of components. Other examples include Ollama, Codestral, Greptile, LiteLLM, and PostHog.

However, until now, each of us has had to discover and cobble together these components on our own. This is why we are creating Continue for teams, a control plane that adds a governance layer on top of your custom AI code assistant, providing a secure and extensible way to roll out to your team.

Into the future

We have started to deploy Continue for teams with a select group of organizations as part of a beta program. If you are interested in joining, please let us know here.

We believe in a future where developers are amplified, not automated. Our first year was a step towards this vision. This upcoming year will get us even closer. If helping us make progress towards this gets you excited, consider applying to join our team in San Francisco.

Cheers,
Ty, CEO & Co-Founder