
The Extensibility Advantage: How Emacs and Neovim Are Absorbing AI Tooling

Source: lobsters

The question of whether Emacs and Vim have a future in the AI coding era keeps surfacing in developer communities. Bozhidar Batsov, creator of CIDER and Projectile and one of the more visible contributors in the Emacs ecosystem, addressed this recently in a thoughtful piece on his blog. His central claim is that these editors remain well-positioned because of their extensibility. He is right, but the more interesting question is why the extensibility argument holds up specifically in the AI context, and where it breaks down.

The LSP Parallel

It helps to remember the last time this conversation happened. When Microsoft introduced the Language Server Protocol in 2016 and VS Code shipped with deep LSP integration, the common narrative was that IDE-quality language features would become a VS Code monopoly. That proved incorrect.

Emacs produced two serious LSP clients. lsp-mode aimed for feature parity with VS Code, exposing every server capability through familiar Emacs UI conventions. eglot took the opposite approach, favoring minimalism over completeness, and proved popular enough to be included in Emacs core starting with version 29. Neovim went a step further: version 0.5 shipped a native LSP client, implemented in Lua as part of the editor's standard runtime, with nvim-lspconfig handling per-language-server configuration. Both ecosystems caught up within a few years, and today the goto-definition and rename-symbol experience in either editor is competitive with any IDE.
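To give a sense of how thin the integration layer is on the Neovim side, here is a minimal sketch of wiring up a language server through nvim-lspconfig. The server name (lua_ls) is an example and assumes the binary is installed and on $PATH:

```lua
-- Minimal sketch: enable one server through nvim-lspconfig.
-- Assumes the `lua_ls` binary is installed and on $PATH.
require('lspconfig').lua_ls.setup({})

-- Once a server attaches, the built-in client backs the usual commands:
vim.keymap.set('n', 'gd', vim.lsp.buf.definition)        -- goto-definition
vim.keymap.set('n', '<leader>rn', vim.lsp.buf.rename)    -- rename-symbol
```

Everything above the keymaps is the entire per-server configuration in the common case; the rest is stock Neovim.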

AI tooling is following the same arc. The underlying protocol surface is simpler, since most AI coding tools are HTTP APIs returning text, and the ecosystem is moving faster as a result.

The Emacs AI Stack

The Emacs ecosystem has converged around a few packages that cover most use cases.

gptel, maintained by Karthik Chikmagalur, is the most versatile option. It functions as a multi-backend LLM client, supporting OpenAI, Anthropic Claude, Google Gemini, Ollama, Kagi FastGPT, and several others. You configure which backend to use per-session or globally:

(use-package gptel
  :config
  (setq gptel-model 'claude-sonnet-4-6
        gptel-backend
        (gptel-make-anthropic "Claude"
          :stream t
          :key (getenv "ANTHROPIC_API_KEY"))))

What distinguishes gptel is its buffer-native design. AI responses arrive in regular Emacs buffers, which means every editing command, movement primitive, and package integration you already have applies to them. You can search a response with isearch, pipe it through a shell command with shell-command-on-region, or pass it to magit-diff-dwim. Nothing lives in a special pane that exists outside your existing workflow.
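The same buffer-native philosophy extends to gptel's programmatic API. A sketch of wrapping gptel-request, gptel's public entry point, to explain the active region; the command name, prompt wording, and output buffer are illustrative assumptions, not part of gptel itself:

```elisp
;; Sketch: a custom command built on `gptel-request'. The helper name,
;; system prompt, and "*gptel-explain*" buffer are illustrative.
(defun my/gptel-explain-region (beg end)
  "Ask the configured backend to explain the region between BEG and END."
  (interactive "r")
  (gptel-request
      (buffer-substring-no-properties beg end)
    :system "Explain what this code does, briefly."
    :callback (lambda (response _info)
                (when (stringp response)
                  (with-current-buffer (get-buffer-create "*gptel-explain*")
                    (erase-buffer)
                    (insert response)
                    (display-buffer (current-buffer)))))))
```

The response lands in an ordinary buffer, so every point made above about isearch, region commands, and package interop applies to it unchanged.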

copilot.el wraps the same Node.js agent used by VS Code’s Copilot extension, which means inline completion quality matches what you would get there. Completions appear as overlays using overlay-put, and you accept them with a single keybinding. If you already have a Copilot subscription, this gives you access to it without switching editors.
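A typical copilot.el configuration is short. The sketch below enables it in programming modes and binds the acceptance and cycling commands; the key choices are a matter of taste:

```elisp
;; Sketch: copilot.el enabled in prog-mode with example keybindings.
(use-package copilot
  :hook (prog-mode . copilot-mode)
  :bind (:map copilot-completion-map
              ("<tab>" . copilot-accept-completion)
              ("M-n"   . copilot-next-completion)
              ("M-p"   . copilot-previous-completion)))
```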

ellama focuses on local model interaction through Ollama, which matters for developers working with proprietary codebases who cannot send code to external APIs. It provides both chat-style interaction and inline editing commands, with no external API key required.
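A sketch of an ellama setup pointed at a local Ollama model, via the llm library that ellama builds on. The model name is an example; substitute whatever `ollama list` shows on your machine:

```elisp
;; Sketch: ellama backed by a local Ollama model through the `llm'
;; library. The model name is an example, not a recommendation.
(use-package ellama
  :config
  (require 'llm-ollama)
  (setopt ellama-provider
          (make-llm-ollama :chat-model "qwen2.5-coder:7b")))
```

No API key appears anywhere in the configuration, which is the point.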

The Neovim AI Stack

Neovim’s Lua-based plugin system has produced several capable AI integrations, and the quality has improved significantly over the past year.

avante.nvim is the most ambitious. It explicitly models the Cursor AI sidebar experience: you describe a change in natural language, it generates a diff, and you review and apply it. Setup is straightforward Lua:

require('avante').setup({
  provider = 'claude',
  claude = {
    endpoint = 'https://api.anthropic.com',
    model = 'claude-sonnet-4-6',
    timeout = 30000,
    temperature = 0,
  },
})

Multi-file context support has been improving, and the plugin now handles project-level edits better than it did at launch. It is not identical to Cursor’s experience, but it is close enough that the workflow translates.

codecompanion.nvim splits the interaction model into two modes: a chat buffer for longer conversations and an inline editing flow triggered through an action palette. It integrates with Neovim’s native LSP infrastructure, which means it can pull diagnostics and symbol information into context without extra configuration.
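The two modes map directly onto codecompanion.nvim's configuration, which selects an adapter per strategy. A sketch, with the adapter choice illustrative rather than canonical:

```lua
-- Sketch: codecompanion.nvim with an adapter chosen per mode.
-- The adapter name is an example; the plugin ships several.
require('codecompanion').setup({
  strategies = {
    chat   = { adapter = 'anthropic' },  -- chat-buffer conversations
    inline = { adapter = 'anthropic' },  -- inline edits via the action palette
  },
})
```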

copilot.lua, a Neovim-specific rewrite of copilot.vim, handles GitHub Copilot integration using Neovim’s API rather than Vim compatibility shims. Combined with copilot-cmp for nvim-cmp, it provides ghost-text inline completions that behave consistently with the rest of Neovim’s completion infrastructure.
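A minimal copilot.lua setup on its own looks like this; users pairing it with copilot-cmp typically disable the built-in suggestion and panel UI and let nvim-cmp render completions instead:

```lua
-- Sketch: copilot.lua standalone, with ghost-text suggestions
-- auto-triggered as you type. (copilot-cmp users usually set both
-- `suggestion` and `panel` to enabled = false.)
require('copilot').setup({
  suggestion = { enabled = true, auto_trigger = true },
  panel = { enabled = false },
})
```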

Where the Plugin Model Has Real Advantages

The standard critique of plugin-based AI is fragmentation: you have to choose between packages, configure API keys yourself, and manage integration quirks. This is accurate. But fragmentation is the cost of flexibility, and that flexibility is meaningful in specific ways.

Provider choice is the clearest example. Cursor’s pricing model is tied to specific model providers. An Emacs or Neovim user can route to whichever model fits their budget, task requirements, or data residency constraints. A developer on Azure OpenAI can point gptel at an internal endpoint. Someone who needs an air-gapped setup can run Ollama locally and use ellama without modifying their configuration further. These are not hypothetical edge cases.
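Concretely, retargeting gptel at an OpenAI-compatible internal endpoint is a few lines. The host, backend name, model symbol, and environment variable below are placeholders:

```elisp
;; Sketch: gptel routed to an OpenAI-compatible internal endpoint.
;; Host, model, and key variable are placeholders for illustration.
(setq gptel-backend
      (gptel-make-openai "internal"
        :host "llm.internal.example.com"
        :stream t
        :key (getenv "INTERNAL_LLM_KEY")
        :models '(internal-model)))
```

Swapping providers changes this one declaration and nothing else in the workflow.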

The more interesting advantage is composability. Emacs especially has decades of text manipulation infrastructure: narrowing and widening regions, keyboard macros, org-mode, the register system. When AI output lands in a buffer, it becomes part of that infrastructure immediately. You run normal search, apply normal transforms, use normal movement commands. The editing mental model does not fork based on whether text was generated or written by hand.

Neovim’s Lua configuration ecosystem provides a similar kind of composability at the integration level. Because everything is Lua calling into the same Neovim API, AI plugins can interact with telescope.nvim for fuzzy picking, with nvim-treesitter for structured code awareness, and with the native LSP client for diagnostic context. The plugins compose without special cases.

Where the Friction Is Real

Being honest about this requires naming where plugin-based AI falls short of purpose-built AI editors.

Context handling is the most significant gap. Cursor has invested in understanding project structure, recent changes, and open file context to make completions relevant. Most Emacs and Neovim AI packages handle context through explicit selection: you pass the current buffer, highlight a region, or configure a project root. Some packages are improving here: gptel provides a gptel-context-add-file function for building context by hand, and avante.nvim has expanded its project-level awareness. But this remains an area where tools like Cursor have a head start from dedicated engineering investment.
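In practice the explicit-context workflow in gptel looks something like the sketch below. The file paths are placeholders, and the exact helper names may differ across gptel versions:

```elisp
;; Sketch: manually assembling context before a request. Paths are
;; placeholders; function names follow gptel's context module.
(gptel-context-add-file "src/server.el")
(gptel-context-add-file "docs/protocol.md")
;; Subsequent `gptel-send' calls include these files in the prompt;
;; clearing the context afterward is a separate explicit step.
```

Explicit is not automatic, but it is predictable: you always know exactly what the model saw.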

Onboarding cost is also higher. Setting up gptel or avante.nvim requires knowing what an API key is, which model name to use, and how package configuration works in your editor. Cursor requires a login. For developers with no prior investment in either editor’s ecosystem, the activation energy difference is real.

Multi-file edits remain the hardest thing to replicate cleanly. Cursor’s diff view, which shows proposed changes across multiple files simultaneously, is something avante.nvim approximates but does not yet fully match in the review experience.

What This Actually Means

Batsov’s piece makes the case that these editors are not being left behind, and the technical evidence supports that. gptel, avante.nvim, and their counterparts are serious pieces of software, actively maintained, with real feature depth.

The editors that struggle in a rapidly shifting tooling environment are the ones without viable extension models, or with extension models too limited to absorb new paradigms quickly. Emacs and Neovim are not those editors. When tree-sitter became viable for incremental parsing, Neovim shipped nvim-treesitter years before most editors integrated it, and Emacs added native tree-sitter support in version 29. The pattern repeats: something new emerges in the tooling world, and within one to two years, both editors have workable integrations.

The tradeoff is permanent. Purpose-built AI editors will always have a shorter path from design decision to shipping, because they optimize for a single user experience. Plugin-based editors offer more control at the cost of more configuration. For developers deeply embedded in these ecosystems, with years of configuration and established workflows, the tradeoff is favorable. For someone starting fresh who wants AI features working out of the box, the calculation is different.

That is not a new tradeoff. It is the same one that has defined the relationship between Emacs, Vim, and mainstream editors for decades. What has changed is not the nature of the choice, but that the AI tooling available through plugins has become good enough that choosing these editors no longer means meaningful capability loss.
