Decades of Extensibility, Now with AI: What Emacs and Neovim Actually Offer in 2026

Source: lobsters

Every few years, someone declares that Emacs and Vim are finally done. In the 2010s it was Sublime Text, then Atom, then VS Code. Now it’s Cursor, Zed, and whatever AI-native IDE launches next quarter. Bozhidar Batsov wrote recently about how these editors are holding up in the AI era, and his take is cautiously optimistic. The real question is whether that optimism is justified, or whether it’s the kind of loyalty that convinces people their 30-year-old workflow is fine when it quietly costs them productivity every day.

The answer depends heavily on which editor you’re talking about, which AI workflow you care about, and how much configuration overhead you’re willing to accept in exchange for control.

The Current Tooling Landscape

Both Emacs and Neovim have developed mature AI integration ecosystems over the past two years. The two look quite different, reflecting the editors’ divergent design philosophies.

On the Emacs side, gptel by Karthik Chikmagalur is the closest thing to a canonical LLM client. It supports multiple backends: OpenAI, Anthropic, Gemini, Azure, and local models via Ollama or llama.cpp. The interface is straightforward: you open a *gptel* buffer and hold a conversation, or you invoke gptel-rewrite on a selected region to get an inline transformation. Configuration is standard Emacs Lisp:

(use-package gptel
  :config
  ;; Default to Claude via the Anthropic backend, streaming responses
  ;; and reading the API key from the environment.
  (setq gptel-model 'claude-sonnet-4-6
        gptel-backend (gptel-make-anthropic "Claude"
                        :stream t
                        :key (getenv "ANTHROPIC_API_KEY"))))

There’s also ellama, which focuses on Ollama-backed local models and provides a richer set of named commands for specific tasks: ellama-summarize, ellama-code-review, ellama-improve-code. You can also drive Simon Willison’s llm CLI from Emacs by piping buffer contents through it with shell commands, which is useful if you’ve already centralized your model configuration in that tool.
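The CLI-piping approach can be sketched in a few lines. This is a minimal illustration, not part of any package: it assumes Willison’s llm CLI is installed and configured, and the command name my/llm-rewrite-region is made up for this example. It uses the standard idiom of shell-command-on-region with replacement, so the model’s output overwrites the selected text.

```elisp
;; Sketch: pipe the active region through the `llm' CLI and replace it
;; with the model's output. Assumes `llm' is on PATH and configured.
(defun my/llm-rewrite-region (start end prompt)
  "Send the region from START to END to the `llm' CLI with PROMPT."
  (interactive "r\nsPrompt: ")
  (shell-command-on-region start end
                           (format "llm %s" (shell-quote-argument prompt))
                           t t)) ; insert in current buffer, replacing region
```

Because the llm CLI appends stdin to the prompt, the region’s text becomes the material the prompt operates on, which is exactly the “pipe buffer contents through any model” pattern described above.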

For inline completions in the style of Copilot, copilot.el and the Codeium plugin both work reasonably well. They show ghost text completions and accept them with a configurable key, similar to how the VS Code equivalents behave.
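A typical copilot.el setup looks something like the following sketch. The hook and keybindings are one reasonable arrangement, not the package’s defaults; check the copilot.el README for the current recommended configuration.

```elisp
(use-package copilot
  ;; Ghost-text completions in programming buffers only.
  :hook (prog-mode . copilot-mode)
  :bind (:map copilot-completion-map
              ("<tab>"   . copilot-accept-completion)
              ("C-<tab>" . copilot-accept-completion-by-word)))
```

Binding acceptance inside copilot-completion-map, rather than globally, keeps TAB behaving normally when no ghost text is showing.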

Neovim’s story is more fragmented but arguably more ambitious. avante.nvim is the most direct attempt to bring Cursor-style interaction into Neovim: it opens a sidebar for AI chat, supports applying suggested changes directly to the active buffer, and handles multi-file diffs. It supports multiple providers, including OpenAI, Anthropic, and Copilot. The plugin is under active development and its API has broken across versions, but it’s the most feature-complete option for developers who want Cursor’s UX inside Neovim.

The original copilot.vim from GitHub still works, and there’s a Neovim-native Lua rewrite called copilot.lua that integrates more cleanly with the Neovim ecosystem, particularly with completion frameworks like nvim-cmp. CopilotChat.nvim adds the conversational interface on top of that.

Why the Buffer Model Works Well with LLMs

There’s a structural reason these editors have been able to integrate AI without requiring architectural changes: both Emacs and Neovim treat everything as text in a buffer. LLM responses are text. That means AI output slots naturally into the same workflows as any other text source.

In Emacs, you can take the output of a gptel-rewrite call, run sort-lines on it, pipe it through shell-command-on-region, or apply a keyboard macro to it. The AI is not a privileged interface; it’s another text source that the rest of the editor can operate on. This composability is genuinely useful. If you’re generating a list of test cases, you can ask the LLM for suggestions, select the ones you want, and apply your existing formatting commands without leaving the editor or copying text between windows.
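The test-case scenario can be sketched with gptel’s programmatic entry point. gptel-request and its :callback argument are part of gptel’s documented API; the command name and the sorting step here are illustrative. The point is that the response arrives as a plain string, so ordinary list-processing functions apply to it before it ever touches the buffer.

```elisp
;; Sketch: ask the model for test-case ideas, then sort its response with
;; ordinary Elisp before inserting it at point.
(defun my/insert-sorted-test-ideas (fn-name)
  "Ask the configured model for test cases for FN-NAME, insert them sorted."
  (interactive "sFunction name: ")
  (gptel-request
      (format "List test cases for `%s', one per line, no preamble." fn-name)
    :callback (lambda (response _info)
                (when (stringp response)
                  (insert
                   (string-join (sort (split-string response "\n" t)
                                      #'string<)
                                "\n"))))))
```

Swapping sort for seq-filter, or piping the joined string through a formatter, requires no cooperation from the AI tooling at all.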

Neovim’s Lua scripting gives a similar composability story. A plugin like avante.nvim can hook into the same buffer manipulation APIs that any other plugin uses, which means its output is subject to your existing keymap, your formatter, your LSP. The AI is a first-class citizen in the editing environment rather than an overlay on top of it.

Where Purpose-Built AI Editors Have a Real Advantage

Honesty requires acknowledging what Cursor and similar tools actually do better. The main gap is codebase-aware indexing. Cursor maintains a semantic index of your entire repository and uses it to inject relevant context automatically when you ask a question or request a change. You can ask “why is the user authentication slow” and it will pull the relevant files, the relevant functions, and recent git history into the context without you specifying any of it.

Getting this in Emacs or Neovim requires explicit configuration. You can point gptel at specific files, use projectile to collect related buffers, or write Elisp that constructs a context block from your project tree. The capability exists, but it’s manual or requires custom code. avante.nvim has started adding repository-context features, but they’re less polished than Cursor’s implementation.
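What that manual context construction looks like in practice can be sketched with the built-in project.el. This is a deliberately naive illustration (the function name and the regexp-based file selection are made up for this example), not a substitute for semantic indexing:

```elisp
;; Sketch: concatenate matching project files into one context string
;; that can be prepended to a prompt. Uses the built-in project.el.
(defun my/project-context (patterns)
  "Return contents of project files matching any regexp in PATTERNS."
  (let* ((files (project-files (project-current t)))
         (chosen (seq-filter (lambda (f)
                               (seq-some (lambda (p) (string-match-p p f))
                                         patterns))
                             files)))
    (mapconcat (lambda (f)
                 (format "=== %s ===\n%s" f
                         (with-temp-buffer
                           (insert-file-contents f)
                           (buffer-string))))
               chosen "\n\n")))
```

The gap relative to Cursor is visible right in the sketch: file selection is by filename pattern, not by relevance to the question.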

The other gap is the multi-file edit flow. Cursor’s Composer feature can propose changes across multiple files simultaneously, show a unified diff, and apply everything in one step. In Neovim, you can approximate this with avante.nvim’s apply functionality combined with a diff tool, but the workflow is less seamless. In Emacs, you’d typically work file by file, using gptel to suggest changes and applying them manually or with ediff.

These gaps are real for developers whose primary AI workflow is large-scale refactoring or codebase navigation. They matter less for developers who use AI primarily for completion, explanation, and small inline transformations, which is a large portion of daily use.

The Extensibility Answer

What makes the Emacs and Neovim cases interesting is that their extensibility model is not just a workaround for missing features; it’s a genuine advantage in certain dimensions. You can configure exactly when context is injected, which model handles which task, how completions are displayed, and what happens to AI output before it reaches your buffer. You own the integration.

For example, it’s straightforward in gptel to set up buffer-local configurations so that Python files use a model with strong code generation while Org Mode documents use a model with better writing quality. You can write a command that asks for a code review, strips out the preamble the model tends to add, and inserts the critique as a comment block at the top of the file. This kind of customization is not available in Cursor because Cursor’s AI integration is not designed to be reconfigured at that level.
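The per-mode model selection reduces to mode hooks and buffer-local variables, since gptel reads gptel-model (and gptel-backend) per buffer. A sketch, with the model names as placeholders and assuming the configured backend serves both:

```elisp
;; Sketch: pick different models per major mode via buffer-local settings.
(add-hook 'python-mode-hook
          (lambda () (setq-local gptel-model 'claude-sonnet-4-6)))
(add-hook 'org-mode-hook
          (lambda () (setq-local gptel-model 'gpt-4o)))
```

Nothing in Cursor exposes an equivalent seam; the model routing lives inside the product.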

The llm.el package in Emacs GNU ELPA provides a provider-agnostic API that third-party packages can build on, which means you can swap models without changing any of the packages that use them. This is the same pattern that made Emacs work well with LSP: standardize the protocol, let the implementations compete. The AI tooling is following the same path.
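The shape of that provider-agnostic pattern is roughly the following. The constructor and function names here follow llm.el’s documentation from memory, so treat this as a sketch and check the package README for the current API:

```elisp
;; Sketch: call sites use only the generic `llm-chat', so swapping the
;; provider is a one-line change.
(require 'llm)
(require 'llm-openai)

(defvar my/llm-provider
  (make-llm-openai :key (getenv "OPENAI_API_KEY")))

;; Rebinding `my/llm-provider' to, say, a `make-llm-ollama' provider
;; leaves every call site below unchanged.
(llm-chat my/llm-provider (llm-make-chat-prompt "Explain tail recursion."))
```

This is the LSP lesson restated: packages depend on the generic protocol, and providers compete underneath it.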

What This Means for the Long Run

Batsov’s position is essentially that Emacs and Vim have survived every editor transition by being better at adaptation than adoption. New editors win on defaults and polish; old editors win on reconfigurability and depth. AI integration is following the same pattern: VS Code, Cursor, and Zed have better defaults for AI workflows out of the box, while Emacs and Neovim offer more control to developers willing to configure for it.

The developers most likely to stay in these editors through the AI era are the ones who already treat their configuration as part of their development environment, the same people who have spent years tuning their keybindings, their LSP setup, their completion frameworks. For them, adding gptel or avante.nvim is one more layer in a system they understand and control. For developers who haven’t invested in that kind of configuration, and there are many of them, a purpose-built AI editor with good defaults is likely a better choice.

The honest summary: the AI integrations in Emacs and Neovim are mature enough to be genuinely productive, they require meaningful configuration investment, and they trade out-of-the-box codebase awareness for composability and control. Whether that trade is worth it depends on what kind of developer you are, not on any objective ranking of the tools.
