My New Developer Workstation: NVIDIA DGX Spark

Ever since I started using Continue, I've wanted a development setup where everything works together without compromises. My editor, my tools, and my AI assistant all running locally. When NVIDIA asked if we wanted to test the new DGX Spark as a daily driver, I said yes immediately.
When the box arrived, I was surprised by how compact it was: six inches square and two inches tall (150 mm x 150 mm x 50 mm). This wasn't going to be some server under my desk. It would sit right on my desktop. The DGX Spark has a textured finish that reminds me of golden coral, and it looks like it belongs on a desk.
A Real Developer Machine
Here's what NVIDIA put in this tiny box:
- NVIDIA GB10 Grace Blackwell Superchip (their latest CPU and GPU architecture)
- 128 GB unified memory (more than enough for development and AI)
- 273 GB/s memory bandwidth (keeps everything responsive)
- 1 PFLOP of tensor performance (sounds absurd for a desktop this small, but you feel it)
This isn't a specialized inference appliance. It's a complete workstation that happens to have serious AI capabilities built in. NVIDIA designed it for developers who want professional-grade hardware without rack-mounted servers.
Setting Up My Daily Driver
Setup was straightforward. I went with the desktop configuration since I wanted to use this as my primary development machine. After getting the basics configured, I installed my usual development stack: VS Code with Continue, my Go and Python toolchains, Docker for containerized development, and Ollama for running local models.
curl -fsSL https://ollama.com/install.sh | sh
ollama pull qwen3:30b
ollama pull granite4:small-h
With 128 GB of unified memory, I could keep multiple large models loaded alongside my development environment. No trade-offs, no memory pressure. Everything runs. Continue's Hub already has blocks for Ollama's models, so I was up and running in a few minutes:
models:
- uses: ollama/gpt-oss-120b
- uses: ollama/qwen3-30b
- uses: ollama/granite4-small-h
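Those `uses:` lines pull preconfigured blocks from the Hub. If you'd rather spell the models out yourself, the same setup can be written explicitly; this is a sketch assuming Continue's config.yaml schema (`provider`/`model`/`roles`), and the display names and role assignments here are my own illustrative choices:

```yaml
# Explicit equivalent of the Hub blocks above (illustrative sketch)
models:
  - name: Qwen3 30B
    provider: ollama
    model: qwen3:30b
    roles:
      - chat
      - edit
  - name: Granite 4 Small
    provider: ollama
    model: granite4:small-h
    roles:
      - autocomplete
```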
What Daily Development Looks Like
The difference from my previous setup was immediate. I'm working in VS Code with Continue, and everything feels responsive:
- Chat responses appear as fast as I can read them
- Agent mode searches my entire codebase without lag
- Multiple terminal sessions, containers, and models all running simultaneously
The speed matters because it stays out of your way. When AI assistance is this responsive, you stop thinking about it and start using it naturally.
In the terminal with Continue CN, it gets even better:
# Generate commit messages
cn -p "Generate a conventional commit message for the current changes"
# Quick code reviews
cn -p "Review the current git changes for bugs and suggest improvements"
Having this in the terminal means no context switching and no waiting.
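The commit-message prompt above can even run automatically. As a sketch, here's a `prepare-commit-msg` hook that calls `cn` to draft the message whenever you commit without `-m`; it assumes `cn` is on your PATH, and the `demo` repo name is just for illustration:

```shell
# Create a throwaway repo and install the hook (a sketch; adapt the
# path for an existing repo instead of "demo")
git init -q demo && cd demo

cat > .git/hooks/prepare-commit-msg <<'EOF'
#!/bin/sh
# $1 = commit message file, $2 = message source (if any).
# Skip when a message was already supplied (merge, -m, template, etc.)
[ -n "$2" ] && exit 0
cn -p "Generate a conventional commit message for the staged changes" > "$1"
EOF
chmod +x .git/hooks/prepare-commit-msg
```

The hook only runs at commit time, so there's no cost until you actually commit, and passing `-m` bypasses it entirely.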
Why This Works
Having everything run on your own machine is liberating. Your code never leaves your desk. Your models are always available. Your development environment and AI assistant live in the same place.
For individual developers or small teams, this removes a lot of complexity. No API keys to manage, no rate limits during crunch time, no latency from cloud services, and no wondering what's being logged. It's your workstation doing what workstations do: running your code and your tools.
The Bigger Picture
Developers who want serious AI capabilities have had limited options. You can use cloud APIs with ongoing costs and latency, run limited models on consumer hardware, or build out server infrastructure.
The DGX Spark fills the middle ground between those options. It's a developer workstation that doesn't compromise. Powerful enough for serious development work, with the memory and performance to run modern AI models comfortably.
Continue makes the AI side feel natural. Integrated into your actual workflow, in your editor and terminal, where you need it.
What's Next
I'm still early in my experience with the DGX Spark as my daily driver, but so far it's exceeded expectations. Having a single machine that handles my development work and runs capable AI models locally has changed how I think about my workflow.
In a future post, I'll talk about how we've set this up for our team to access remotely. There's a story there about shared infrastructure and making powerful compute available to everyone. But that's for another time.
For now, I'm enjoying having a workstation that works for code, for AI, and for everything I need to do as a developer.
Try Continue CN
You don't have to wait to get an NVIDIA DGX™ Spark to experience AI-assisted development with Continue. Continue CN works with any model provider, cloud or local.
See what AI-assisted development feels like when it's fast and integrated into your workflow.