Why use an LLM while coding
For the first time ever, it’s possible to build software without writing any code yourself, though if taken literally this is currently limited to small “from-scratch” projects. On the other end of the spectrum, LLMs are often too slow to use for the faster actions in your workflow. However, there are many cases in between these extremes, where using an LLM is faster and easier than the manual option. If (and only if) you learn to rely on LLMs in the right situations, you can use them to accelerate your work.
By using LLMs while coding, you can automate more of your workflows. You can augment your developer experience. You can amplify your intelligence. You can get stuck less often and for shorter amounts of time. You can raise your ambitions. You can expend less energy, because you don't always have to be perfectly precise. If you are newer to software development or working with unfamiliar technologies, you can get up to speed faster. By using LLMs, you can be more productive and find more joy in building software.
What you need to know about LLMs
As Simon Willison has suggested, LLMs are like “calculators for words”, except they give you a slightly different answer every time you prompt them, and sometimes they make things up. This is because large language models (LLMs) are neural networks trained to predict the next token, on a dataset generally made up of trillions of tokens from the Internet.
To train these models, the following steps are repeatedly performed: 1) sample some text from the dataset and 2) teach the model to predict the token following the sample. The correct answer is always known because we have access to the entire corpus of text, so it can be used to provide feedback to the model during training without requiring people to annotate the data beforehand.
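To make the training loop concrete, here is a toy sketch using a count-based bigram model rather than a neural network. The tiny corpus and the model itself are deliberately trivial stand-ins, but the idea is the same: look at a token, learn what tends to follow it, with the correct answer always available from the text itself.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "trillions of tokens from the Internet".
corpus = "the cat sat on the mat the cat ate the food".split()

# Training: for each token in the text, record which token follows it.
# No human annotation needed -- the next token is the label.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(token):
    """Return the token most often seen after `token` during 'training'."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A real LLM replaces the lookup table with a neural network and predicts over a huge vocabulary, but generation works the same way: predict a token, append it, predict again.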
Because there is a lot of code on the Internet, you can use these models while you are building software to generate and discuss code. You provide some context and the model repeatedly predicts the next token. If you provide the right instructions for things you do while coding, LLMs can often do them for you. This is why many developers are now replacing some of their Google + Stack Overflow workflow with LLMs.
But to do this, you need to know how to code yourself. That is, to effectively use LLMs while coding, you need to be able to guide them to the result you expect. As you become familiar with an LLM, you will begin to build a better mental model of how it works, what it can do, and where it is likely to lead you astray. This is important because they can be deceptively challenging to use.
How to incorporate an LLM into your workflows
When you build software, you start with a one sentence description and end by merging a set of code changes. This is what I’ll refer to as a “software development task”, or just “task” for short. Tasks are often quite complex to accomplish, so we frequently break them down, recursively, into more manageable “sub-tasks”, which together form the set of actions that need to be done to consider a task complete.
Sometimes that one sentence description you start with is a Jira ticket from your team, sometimes it is a to-do list item for yourself, and other times you don’t even bother to write it down. But, for each task and any of its sub-tasks, you follow some set of steps to turn that one sentence description into the code that you ultimately merge. You are often in a flow state while executing these steps so you might not realize it, but you can describe what you do step-by-step if you reflect on it.
LLMs are transformational because they can help speed up and lower the energy required for many of these steps. The exact procedure you take really depends on how you build software and how you break it down into steps, but the steps generally involve one or more of the following…
- Clarification: you make sure you understand how code works or should work
- Collection: you identify the files / lines that are crucial context or that need to change
- Creation: you create some initial boilerplate structure to get things started, especially when you are leveraging frameworks and libraries
- Implementation: you specify the business logic for how the code should work
- Execution: you run something, often commands in the terminal, to check, adjust, or set up something
- Exploration: you search for and consider possible solutions to some problem
- Reintegration: you reread the code, documentation, etc. to refresh your understanding
- Correction: you slightly adjust or improve the code in a specific way, frequently in response to an error, exception, or failing test case
- Optimization: you refactor the code to improve performance on some dimension, but the inputs and outputs remain the same
- Verification: you make sure the code works as expected, both before and after you run it
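As one concrete illustration, a “Correction” step usually means collecting the error and the offending code, then instructing the model with both. The error message and snippet below are invented for illustration:

```python
# Hypothetical "Correction" step: the error and the failing code are the
# crucial context to collect before instructing the model.
error = "ZeroDivisionError: division by zero"
code = "def mean(xs):\n    return sum(xs) / len(xs)"

prompt = (
    f"This code raises `{error}` when called with an empty list:\n\n"
    f"{code}\n\n"
    "Change it to return 0.0 for an empty list instead."
)
print(prompt)
```

Notice that the prompt contains everything the model needs: the symptom, the code, and the desired behavior.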
To get help from an LLM with these steps, you will need to do the same three things each time:
- Collect: gather the relevant context
- Instruct: tell the model what it needs to do
- Review: check the suggestion and apply it
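The collect and instruct steps can be sketched as a small helper; the function name is illustrative, not from any real tool, and the review step deliberately stays with you, the human:

```python
def build_prompt(context_snippets, instruction):
    """Collect: join only the relevant context. Instruct: state the task."""
    context = "\n\n".join(context_snippets)
    return f"{context}\n\nTask: {instruction}"

# Review: whatever the model returns for this prompt still needs a
# human to check it before the suggestion is applied.
prompt = build_prompt(["def add(a, b):\n    return a + b"], "add type hints to add")
print(prompt)
```

Most coding tools automate the collect step for you, but the shape is always the same: relevant context first, explicit instruction second, your review last.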
When it’s faster and easier to collect, instruct, and review than to complete a step entirely by hand, you should use an LLM. To figure out when this is the case, you need to build an intuition. The best way to start is to identify small, simple steps and see if an LLM can automate them. Once you find some, this will free up more time to learn how to attack more challenging, involved steps, and as your experience grows it will eventually enable you to automate some of those too.
How to select an LLM for coding
We wrote a guide on how to figure out which coding LLM to use here.
Mistakes to avoid when using LLMs
Not providing enough context
It’s often a lot of effort to make the LLM aware of everything it needs to know to complete a step for you. However, if you don’t share all of the relevant information, then the LLM is unlikely to complete the step exactly how you want.
Providing too much context
Many people correct for the previous mistake by shoving as much as possible into the context window. This is also not a good idea. Include only the relevant information and nothing more. Since these models do next-token prediction, every unnecessary token in the context increases the chances that the model will not behave the way you want.
Providing ambiguous instructions
It usually works best to provide full and detailed questions / instructions, as if you are speaking to another person. You will typically want to say something like “how do I check if two integers are equal in Python?” instead of “check if two integers are equal.” When applicable, it’s also generally a good idea to indicate the relevant technologies if they are not obvious from the context (e.g. “create the boilerplate for a Tooltip component using React”).
Using the wrong tool
There are many tools out there to help you use LLMs while coding. If you are often copying and pasting context and suggestions manually, you are probably not using the right tool. It’s important to find a tool that fits into how you build software.
Not critically reviewing LLM suggestions
In the end, you are responsible for all code that you ship, whether it was written by you or by an LLM that you directed. This means it is crucial that you review what the LLM writes. If you don’t, you might be accepting suggestions that are partially or completely made up, which can lead to subtle bugs and broken software in production.
Trying to complete a step without context
It’s often difficult to benefit from LLMs while coding today because so many of our thoughts and actions are not explicit and transparent. If the necessary context and instructions are not mostly already captured or easily collectable, it can be hard to leverage models for that step.
Asking LLMs to do something too complex
Developers are much better than LLMs at figuring out what context is relevant and then using it to link together the many steps and sub-tasks required to complete a task, so focus on using LLMs for smaller steps. If you are trying to use an LLM for a new step and don’t have a sense of how much it can help, it is often helpful to start like this:
- Collect the code section(s) that you don’t understand and say "tell me how this code works"
- If the explanation seems reasonable, then ask "how would you change this code to [INSERT STEP]?"
- If this explanation is also pretty good, then instruct it to “[INSERT STEP]"
- If it does not work on the first attempt, try again; it will often make a different suggestion each time
- If it is not giving you what you want after another attempt, try again with more specific / clear instructions and only crucial context, articulating exactly what you want it to do and not to do
- If this still does not work, then you likely need to break down the task into smaller sub-tasks and ask the LLM to do each of those one at a time or just do it yourself manually
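This escalation can be thought of as a ladder of increasingly committed prompts. The sketch below encodes the first three rungs as templates; the names and template wording are illustrative, not from any real tool:

```python
# Each rung of the escalation ladder, as a prompt template.
# {step} is the change you want; {code} is the context you collected.
LADDER = [
    "Tell me how this code works:\n\n{code}",
    "How would you change this code to {step}?\n\n{code}",
    "{step}\n\n{code}",
]

def prompt_for(attempt, step, code):
    """Return the prompt for the nth attempt. Past the last rung,
    the task should be broken into smaller sub-tasks (or done by hand)."""
    if attempt >= len(LADDER):
        raise ValueError("break the task into smaller sub-tasks")
    return LADDER[attempt].format(step=step, code=code)

print(prompt_for(0, "handle an empty list", "def mean(xs): ..."))
```

Climbing the ladder one rung at a time also doubles as a check on the model: if the explanation at rung one is already wrong, you know not to trust the code it would write at rung three.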
If you liked this blog post and want to read more about DevAI, the community of folks building software with the help of LLMs, join our monthly newsletter here.