Declarative MCP Configuration with Nix Home Manager
TL;DR
If you use Nix and want reproducible MCP configuration across machines:
- Copy the module from github.com/lewisflude/nix
- Configure servers declaratively in your home.nix
- Run home-manager switch
- Get a type-safe, version-controlled MCP setup with integrated secret management
Read on for an architecture deep-dive, cross-platform optimizations, testing approach, and 10+ real-world server configurations.
Table of Contents
- Introduction
- Prerequisites
- The Solution: A Home Manager Module
- Architecture Deep Dive
- The Cursor-Agent Symlink Problem
- Secret Management with SOPS
- Platform-Specific Enhancements
- Testing and Validation
- Real-World Configuration Examples
- Benefits and Trade-offs
- Making It Reusable
- Conclusion
- Resources
- Appendix
Introduction
Anthropic's Model Context Protocol (MCP) gives Claude and other AI assistants superpowers. Instead of being limited to text, MCP-enabled assistants can execute actions, retrieve real-time data, and interact with your development environment: search engines, code repositories, filesystem access, API integrations, and much more.
I've been using MCP across multiple machines: my MacBook for development and my NixOS server for longer-running tasks. At first, I was manually editing JSON configuration files in multiple places:
- ~/.cursor/mcp.json for Cursor
- ~/Library/Application Support/Claude/claude_desktop_config.json for Claude Desktop
- Different paths on Linux vs macOS
- API keys scattered everywhere, hardcoded or stored in random shell scripts
This quickly became a maintenance nightmare. Every time I wanted to add a new MCP server or rotate an API key, I had to remember to update multiple files. Configuration drift crept in: my MacBook had different servers than my Linux machine. When I set up a new machine, I'd spend hours manually recreating the same configuration.
As a Nix user, this felt wrong. I was managing my entire system declaratively (packages, services, dotfiles), but my AI tooling configuration was still manual and error-prone. I decided to build a Home Manager module to manage MCP servers declaratively, just like everything else.
This article walks through that journey: the challenges I encountered, the solutions I discovered, and the patterns that emerged. You'll see how I solved the symlink problem with cursor-agent, integrated secret management with SOPS, and built a system that works across macOS and NixOS.
Prerequisites
This guide assumes familiarity with:
- Nix package manager and basic Nix language syntax
- Home Manager for user environment management
- Model Context Protocol (MCP) fundamentals
- (Optional) SOPS or similar secret management tools
New to Nix? Start with the official Nix tutorial before continuing. This is an intermediate-to-advanced guide focused on practical implementation.
The Solution: A Home Manager Module
Nix is a declarative package manager that makes software reproducible and reliable. Home Manager extends Nix to manage user environments (dotfiles, packages, and application configurations) declaratively. Together, they enable infrastructure-as-code for your entire development environment.
My goal was simple: transform scattered manual JSON setup into a single, version-controlled Nix configuration. Instead of editing multiple files, I'd declare my MCP servers once in my home.nix, and Home Manager would generate the appropriate JSON files for each application.
Here's what the end-user experience looks like:
services.mcp = {
enable = true;
targets = {
cursor = {
directory = "${config.home.homeDirectory}/.cursor";
fileName = "mcp.json";
};
claude = {
directory = "${config.home.homeDirectory}/Library/Application Support/Claude";
fileName = "claude_desktop_config.json";
};
};
servers = {
kagi = {
command = "uvx";
args = ["kagimcp"];
env.KAGI_API_KEY = "...from secrets...";
};
filesystem = {
command = "npx";
args = [
"-y"
"@modelcontextprotocol/server-filesystem"
"${config.home.homeDirectory}/Code"
];
};
};
};
Benefits
- Declare once, deploy everywhere: Single source of truth
- Secret management built-in: Integrate with SOPS, agenix, etc.
- Type-safe configuration: Build fails on errors
- Reproducible across machines: git clone && home-manager switch
- Version controlled: Track changes, rollback easily
Architecture Deep Dive
Now that we've seen the user-facing API, let's explore how the module works internally. I'll walk through the three main components: type definitions, multi-target deployment, and config generation logic.
Understanding the architecture helps when you need to extend the module or debug issues. It also shows how Nix's module system enables composable, reusable configuration.
Architecture Overview
┌─────────────────────────────────────────────────────┐
│ Nix Configuration (version controlled)              │
│                                                     │
│ services.mcp = {                                    │
│   servers = { kagi = {...}; git = {...}; };         │
│   targets = { cursor = {...}; claude = {...}; };    │
│ };                                                  │
└─────────────────────────┬───────────────────────────┘
                          │
                          ├─> Type validation (build-time)
                          │
                          ├─> JSON generation
                          │
                          v
┌─────────────────────────────────────────────────────┐
│ ~/.mcp-generated/                                   │
│ ├── cursor/mcp.json (Nix store symlinks)            │
│ └── claude/claude_desktop_config.json               │
└─────────────────────────┬───────────────────────────┘
                          │
                          ├─> Home Manager activation scripts
                          │
                          v
┌─────────────────────────────────────────────────────┐
│ Final Locations (real files, not symlinks)          │
│ ├── ~/.cursor/mcp.json                              │
│ └── ~/Library/.../claude_desktop_config.json        │
└─────────────────────────┬───────────────────────────┘
                          │
                          └─> Applications read the configs
                              ├─> Cursor
                              ├─> Claude Desktop
                              └─> Other MCP clients
The Module Interface
The module defines a flexible type system that supports both CLI-based and remote MCP servers:
mcpServerType = types.submodule {
options = {
# CLI server options
command = mkOption {
type = types.nullOr types.str;
default = null;
description = "The command to run the MCP server (for CLI servers)";
};
args = mkOption {
type = types.listOf types.str;
default = [];
description = "Additional arguments to pass to the MCP server (for CLI servers)";
};
env = mkOption {
type = types.attrsOf types.str;
default = {};
description = "Environment variables to set for the MCP server (for CLI servers)";
};
# Remote server options
url = mkOption {
type = types.nullOr types.str;
default = null;
description = "The URL of the remote MCP server (for remote servers)";
};
headers = mkOption {
type = types.attrsOf types.str;
default = {};
description = "HTTP headers to send to the remote MCP server (for remote servers)";
};
# Optional metadata
port = mkOption {
type = types.nullOr types.port;
default = null;
description = "Port for the MCP server (optional metadata, not used in config generation)";
};
extraArgs = mkOption {
type = types.listOf types.str;
default = [];
description = "Additional arguments to pass to the MCP server (optional, not used in config generation)";
};
};
};
Design decisions:
- Support both CLI (npx, uvx, docker) and remote (HTTP) servers
- Optional fields for flexibility
- Metadata fields like port for documentation purposes
Multi-Target Deployment
The targets system enables deploying the same configuration to multiple applications:
targets = mkOption {
type = types.attrsOf mcpTargetType;
default = {};
description = "MCP targets to configure";
};
# Usage
targets = {
cursor = {
directory = "/Users/${config.home.username}/.cursor";
fileName = "mcp.json";
};
claude = {
directory = "/Users/${config.home.username}/Library/Application Support/Claude";
fileName = "claude_desktop_config.json";
};
};
Why this matters:
- Single source of truth for all applications
- Easy to add new tools (VSCode, Zed, etc.)
- Consistent configuration everywhere
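Conceptually, the fan-out is simple: serialize the same server set once per target path. Here is a rough Python sketch of that behavior (the top-level mcpServers key matches the standard MCP client config shape; the directories and server names are illustrative, not the module's actual output):

```python
import json
import pathlib
import tempfile

# One declared server set...
servers = {"fetch": {"command": "uvx", "args": ["mcp-server-fetch"]}}
# ...fanned out to several application-specific config files.
targets = {
    "cursor": {"directory": ".cursor", "fileName": "mcp.json"},
    "claude": {"directory": "Claude", "fileName": "claude_desktop_config.json"},
}

root = pathlib.Path(tempfile.mkdtemp())  # stands in for $HOME
for target in targets.values():
    path = root / target["directory"] / target["fileName"]
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({"mcpServers": servers}, indent=2))

# Every application now reads an identical server set.
assert json.loads((root / ".cursor" / "mcp.json").read_text()) == json.loads(
    (root / "Claude" / "claude_desktop_config.json").read_text()
)
```

Adding a new tool is just one more entry in targets; the server set never has to be duplicated.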
Config Generation Logic
The module generates JSON configuration based on server type:
mkMcpConfig = _name: serverCfg:
# Remote server configuration (url-based)
if serverCfg.url != null
then
{
inherit (serverCfg) url;
}
// (optionalAttrs (serverCfg.headers != {}) {inherit (serverCfg) headers;})
# CLI server configuration (command-based)
else if serverCfg.command != null
then
{
inherit (serverCfg) command;
}
// (optionalAttrs (serverCfg.args != []) {inherit (serverCfg) args;})
// (optionalAttrs (serverCfg.env != {}) {inherit (serverCfg) env;})
else throw "MCP server '${_name}' must specify either 'command' (for CLI server) or 'url' (for remote server)";
Benefits:
- Type-safe at build time
- Only includes non-empty fields
- Clear error messages
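For intuition, the same selection logic can be transliterated into Python: prefer the remote (url) shape, fall back to the CLI (command) shape, include only non-empty optional fields, and fail loudly otherwise. This is an illustrative sketch, not the module's code:

```python
from typing import Any

def mk_mcp_config(name: str, server: dict[str, Any]) -> dict[str, Any]:
    """Pick the remote (url) or CLI (command) config shape, emitting
    only non-empty optional fields, like the Nix mkMcpConfig above."""
    if server.get("url") is not None:
        out: dict[str, Any] = {"url": server["url"]}
        if server.get("headers"):
            out["headers"] = server["headers"]
        return out
    if server.get("command") is not None:
        out = {"command": server["command"]}
        if server.get("args"):
            out["args"] = server["args"]
        if server.get("env"):
            out["env"] = server["env"]
        return out
    raise ValueError(f"MCP server '{name}' must specify either 'command' or 'url'")

# Empty env is dropped, so the generated JSON stays minimal.
assert mk_mcp_config("kagi", {"command": "uvx", "args": ["kagimcp"], "env": {}}) == {
    "command": "uvx",
    "args": ["kagimcp"],
}
```

The Nix version gets the same guarantees at build time, before anything is deployed.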
The Cursor-Agent Symlink Problem
How Nix Usually Works
Nix Home Manager typically uses symlinks for configuration files:
$ ls -la ~/.config/some-app/config.json
lrwxr-xr-x ... config.json -> /nix/store/xxx-config.json
This keeps everything immutable and traceable to the Nix store. When you run home-manager switch, it creates symlinks pointing to files in the Nix store. This is elegant: the files are guaranteed to be identical, and you can trace exactly which Nix configuration generated them.
The Problem
I started by using the standard Home Manager approach: create a symlink for ~/.cursor/mcp.json pointing to a generated JSON file in the Nix store. Simple, right?
$ ls -la ~/.cursor/mcp.json
lrwxr-xr-x ... mcp.json -> /nix/store/xxx-mcp.json
$ cursor # Launch Cursor
# ... cursor-agent tries to read config ...
# Nothing happens. MCP servers don't load.
I spent hours debugging this. The JSON was valid: I could cat it and parse it with jq. Permissions were correct. The file path was right. But Cursor just wouldn't load the MCP servers.
I tried everything: checking Cursor's logs, verifying the JSON structure matched examples, testing with manually created files. Nothing worked. Then I tried copying the symlinked file to a real file:
$ cp ~/.cursor/mcp.json ~/.cursor/mcp.json.test
$ cursor # Still using the symlink, but...
Wait. What if I replace the symlink with a real file?
$ rm ~/.cursor/mcp.json
$ cp /nix/store/xxx-mcp.json ~/.cursor/mcp.json
$ cursor # Launch Cursor
# MCP servers load! It works!
After extensive debugging, I discovered that cursor-agent cannot follow symlinks when reading configuration files, a limitation that also affects some other Electron-based applications. The configuration JSON was valid and the permissions were correct; the symlink itself was the issue.
The Solution: Two-Stage Deployment
So I had a problem: Cursor needs real files, not symlinks. But I still want the benefits of Nix, namely version control, reproducibility, and traceability. The solution is a two-stage deployment process.
Stage 1: Generate the config (as a symlink in a staging area)
First, I generate the JSON configuration file in the Nix store and symlink it to a staging directory:
home.file = builtins.listToAttrs (
mapAttrsToList
(name: target: {
name = ".mcp-generated/${name}/${target.fileName}";
value.text = builtins.toJSON mcpConfigJson;
})
cfg.targets
);
This creates files like ~/.mcp-generated/cursor/mcp.json that are symlinks to the Nix store. These are still version-controlled and traceable.
Stage 2: Copy to final location (real files, not symlinks)
Then, during Home Manager activation, I copy these files to their final locations:
home.activation.copyMcpConfig = lib.hm.dag.entryAfter ["writeBoundary"] (
let
copyCommands = concatStringsSep "\n" (
mapAttrsToList
(name: target: ''
mkdir -p "${target.directory}"
cp -f "$HOME/.mcp-generated/${name}/${target.fileName}" "${target.directory}/${target.fileName}"
chmod 644 "${target.directory}/${target.fileName}"
'')
cfg.targets
);
in ''
${copyCommands}
''
);
The activation script runs after Nix builds everything, copying the symlinked files to their final destinations as real files. Now Cursor can read them, and we still get the benefits of Nix: the files are generated from version-controlled configuration, and each home-manager switch ensures they're up to date.
Why this works:
- Files are still generated from Nix store (version controlled)
- Activation scripts run after Nix build completes
- Real files (not symlinks) end up in final locations
- Updates happen automatically on each home-manager switch run
Trade-off:
- Slightly less "pure" than typical Nix configs (no symlinks in final location)
- But much more practical for tools that can't handle symlinks
- Still completely traceable: you can always check ~/.mcp-generated/ to see what's in the Nix store
This pattern generalizes to other tools with similar limitations: any Electron app or tool that can't follow symlinks can benefit from this approach.
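The two stages are easy to demonstrate end to end. The sketch below simulates the flow in Python with a throwaway directory standing in for $HOME and the Nix store (all paths here are illustrative): stage 1 symlinks into an immutable "store", stage 2 copies a real file into place.

```python
import json
import os
import pathlib
import shutil
import tempfile

home = pathlib.Path(tempfile.mkdtemp())                    # stands in for $HOME
store_file = home / "store" / "mcp.json"                   # stands in for /nix/store/...
staging = home / ".mcp-generated" / "cursor" / "mcp.json"
final = home / ".cursor" / "mcp.json"

store_file.parent.mkdir(parents=True)
store_file.write_text(json.dumps({"mcpServers": {}}))

# Stage 1: the staging area holds a symlink into the (immutable) store.
staging.parent.mkdir(parents=True)
staging.symlink_to(store_file)

# Stage 2: "activation" copies a real file into the final location.
final.parent.mkdir(parents=True)
shutil.copyfile(staging, final)   # copyfile follows the symlink
os.chmod(final, 0o644)

assert staging.is_symlink() and not final.is_symlink()
```

The staging symlink preserves traceability to the store; the copied file keeps symlink-averse tools happy.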
Secret Management with SOPS
With the basic module working, I faced the next challenge: how do I handle API keys and tokens securely? MCP servers need access to API keys for services like Kagi search, GitHub, and OpenAI. But I can't hardcode secrets in my Nix configuration; they'd end up in the Nix store (which is world-readable) and in version control.
I wanted three things:
- Secrets stored encrypted in version control (using SOPS)
- Runtime injection without manual steps
- Type-safe configuration that fails at build time if secrets are missing
The solution is wrapper scripts, a pattern I've found incredibly useful in Nix for runtime behavior that can't be expressed purely.
The Wrapper Script Pattern
Wrapper scripts are a powerful pattern in Nix for runtime behavior that can't be expressed purely. They're especially useful for:
- Reading secrets at runtime (not build time)
- Setting up environment variables dynamically
- Adapting tools that weren't designed for Nix
- Providing fallback behavior when paths might not exist
The idea is simple: instead of calling the MCP server directly, we call a wrapper script that reads the secret from SOPS and then executes the actual server with the secret injected as an environment variable.
Simple bash wrapper (macOS):
On macOS, SOPS secrets are typically stored in files managed by sops-nix. The wrapper reads from the file path and exports the value:
# Define the wrapper
home.file."bin/kagi-mcp-wrapper" = {
text = ''
if [ -r "${systemConfig.sops.secrets.KAGI_API_KEY.path or ""}" ]; then
export KAGI_API_KEY="$(cat "${systemConfig.sops.secrets.KAGI_API_KEY.path or ""}")"
fi
exec ${pkgs.uv}/bin/uvx kagimcp "$@"
'';
executable = true;
};
# Use in MCP config
servers.kagi = {
command = "${config.home.homeDirectory}/bin/kagi-mcp-wrapper";
args = [];
};
This wrapper checks whether the SOPS secret file exists and is readable, reads the API key, exports it as an environment variable, and then executes the actual uvx kagimcp command. The secret is never in the Nix store; it's only read at runtime.
Advanced: Nix-built wrappers (NixOS):
On NixOS, we can use pkgs.writeShellApplication to create a more robust wrapper:
kagiWrapper = pkgs.writeShellApplication {
name = "kagi-mcp-wrapper";
runtimeInputs = [ pkgs.coreutils pkgs.uv ];
text = ''
set -euo pipefail
KAGI_API_KEY="$(cat ${systemConfig.sops.secrets.KAGI_API_KEY.path})"
export KAGI_API_KEY
export UV_PYTHON="${pkgs.python3}/bin/python3"
exec ${uvx} --from kagimcp kagimcp "$@"
'';
};
# Use directly
servers.kagi = {
command = "${kagiWrapper}/bin/kagi-mcp-wrapper";
args = [];
};
The writeShellApplication function ensures the wrapper has access to all necessary runtime dependencies (like cat and uv) and properly handles errors with set -euo pipefail.
Benefits
- Secrets never in Nix store: Read at runtime only
- Type-safe paths: Build fails if secret doesn't exist
- Easy rotation: Update SOPS, rebuild
- Works with any secret manager: SOPS, agenix, pass, etc.
- Audit trail: Track secret access via wrapper scripts
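The core of every wrapper, independent of language, is the same guard-read-export-exec sequence. A minimal Python rendering of that sequence (load_secret_env is a hypothetical helper name, and the exec step is only noted in a comment):

```python
import os
import pathlib
import tempfile

def load_secret_env(var_name: str, secret_path: str) -> bool:
    """Export a decrypted secret file's contents at runtime; skip
    silently if it's missing, like the bash wrapper's guard."""
    p = pathlib.Path(secret_path)
    if p.is_file() and os.access(p, os.R_OK):
        os.environ[var_name] = p.read_text().strip()
        return True
    return False

# Simulate a sops-nix decrypted secret on disk.
secret = pathlib.Path(tempfile.mkdtemp()) / "KAGI_API_KEY"
secret.write_text("s3cret\n")

assert load_secret_env("KAGI_API_KEY", str(secret))
assert os.environ["KAGI_API_KEY"] == "s3cret"
# A real wrapper would now replace itself with the MCP server,
# e.g. os.execvp("uvx", ["uvx", "kagimcp"]).
```

Because the secret is read at exec time, rotating it never requires a rebuild of the wrapper itself.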
Platform-Specific Enhancements
The base module provides cross-platform functionality, but I wanted to optimize for each platform's unique capabilities. macOS and NixOS have different service management systems: macOS uses launchd (though we use Home Manager activation scripts), while NixOS has systemd. Each platform also has different conventions for where files live and how services run.
I structured the configuration to share a common base module and then add platform-specific enhancements where it matters most.
Project Structure
home/
├── common/
│   └── modules/
│       └── mcp.nix    # Base module (shared)
├── darwin/
│   └── mcp.nix        # macOS-specific configuration
└── nixos/
    └── mcp.nix        # Linux-specific configuration
macOS: Simple and Stable
Activation-based Claude CLI registration:
home.activation.setupClaudeMcp = lib.hm.dag.entryAfter ["writeBoundary"] (
let
cfg = config.services.mcp;
mcpAddCommands = lib.concatStringsSep "\n " (
lib.mapAttrsToList
(
name: serverCfg: let
command = lib.escapeShellArg serverCfg.command;
argsStr = lib.concatStringsSep " " (map lib.escapeShellArg serverCfg.args);
argsPart = lib.optionalString (argsStr != "") "-- ${argsStr}";
envVars = lib.concatStringsSep " " (
lib.mapAttrsToList
(
key: value: "export ${lib.escapeShellArg key}=${lib.escapeShellArg value};"
)
serverCfg.env
);
in ''${envVars} claude mcp add ${lib.escapeShellArg name} -s user ${command} ${argsPart} || echo "Failed to add ${name} server"''
)
cfg.servers
);
in ''
if command -v claude >/dev/null 2>&1; then
echo "Registering MCP servers with Claude Code..."
${pkgs.findutils}/bin/find ~/.config/claude -name "*.json" -delete 2>/dev/null || true
$DRY_RUN_CMD ${pkgs.writeShellScript "setup-claude-mcp" ''
echo "Removing existing MCP servers..."
for server in ${
lib.concatStringsSep " " (lib.mapAttrsToList (name: _: lib.escapeShellArg name) cfg.servers)
}; do
claude mcp remove "$server" -s user 2>/dev/null || true
claude mcp remove "$server" -s project 2>/dev/null || true
claude mcp remove "$server" 2>/dev/null || true
done
echo "Running MCP server registration commands..."
${mcpAddCommands}
echo "Claude MCP server registration complete"
''}
else
echo "Claude CLI not found, skipping MCP server registration"
fi
''
);
8 servers configured:
- kagi (search)
- fetch (web content)
- git (repository access)
- memory (persistent context)
- sequential-thinking (enhanced reasoning)
- github (GitHub API via Docker)
- general-filesystem (code/config/docs access)
- time (date/time utilities)
NixOS: Systemd Services
Registration service:
systemd.user.services.mcp-claude-register = {
Unit = {
Description = "Register MCP servers for Claude CLI (idempotent)";
After = ["graphical-session.target"];
PartOf = ["graphical-session.target"];
};
Service = {
Type = "oneshot";
ExecStart = "${registerScript}";
Environment = [
"PATH=/etc/profiles/per-user/%u/bin:%h/.nix-profile/bin:$PATH"
];
TimeoutStartSec = "300";
Restart = "on-failure";
RestartSec = "30";
};
Install = {
WantedBy = ["default.target"];
};
};
Warm-up service for performance:
One problem I noticed: the first time Claude tried to use an MCP server after a reboot, it would take 10-30 seconds to start. This was because uvx and npx needed to download packages, and Rust-based servers needed to compile. I wanted instant startup times.
The solution is a systemd service that runs at boot and "warms up" all the MCP servers by running them once (with --help) to trigger package downloads and binary builds:
systemd.user.services.mcp-warm = {
Unit = {
Description = "Warm MCP servers (build binaries, prefetch packages)";
After = ["network-online.target"];
};
Service = {
Type = "oneshot";
ExecStart = "${warmScript}";
TimeoutStartSec = "900";
Environment = [
"PATH=/etc/profiles/per-user/%u/bin:%h/.nix-profile/bin:$PATH"
];
};
Install = {
WantedBy = ["default.target"];
};
};
What the warm-up script does:
#!/usr/bin/env bash
set -euo pipefail
echo "[mcp-warm] Prebuilding rustdocs-mcp-server binary..."
rustdocs-mcp-wrapper --help >/dev/null 2>&1 || true
echo "[mcp-warm] Prefetching uvx servers..."
uvx --from cli-mcp-server cli-mcp-server --help >/dev/null 2>&1 || true
uvx --from mcp-server-fetch mcp-server-fetch --help >/dev/null 2>&1 || true
uvx --from mcp-server-git mcp-server-git --help >/dev/null 2>&1 || true
uvx --from mcp-server-time mcp-server-time --help >/dev/null 2>&1 || true
echo "[mcp-warm] Prefetching npx servers..."
npx -y @modelcontextprotocol/server-everything@latest --help >/dev/null 2>&1 || true
npx -y @modelcontextprotocol/server-filesystem@latest --help >/dev/null 2>&1 || true
npx -y @modelcontextprotocol/server-memory@latest --help >/dev/null 2>&1 || true
npx -y @modelcontextprotocol/server-sequential-thinking@latest --help >/dev/null 2>&1 || true
npx -y tritlo/lsp-mcp --help >/dev/null 2>&1 || true
# ... more servers
echo "[mcp-warm] Warm-up complete."
Result:
- First Claude session starts instantly
- No waiting for package downloads
- npm/uv caches are pre-populated
- Binary builds are cached
10 servers on NixOS: the 8 from macOS, plus openai and rust-docs-bevy.
The Rust Docs Server: Advanced Example
Let's examine a complex real-world scenario: serving Rust documentation through MCP. This showcases advanced wrapper techniques, build caching, and dependency management.
The challenge:
The rustdocs-mcp-server must be built from source; it's not available as a pre-built binary or npm package. It requires system dependencies like OpenSSL, pkg-config, and systemd libraries. Users shouldn't need to install the Rust toolchain themselves.
I wanted a solution that:
- Builds the server automatically on first use
- Caches the binary for subsequent runs
- Handles all system dependencies transparently
- Works seamlessly with the MCP configuration
The wrapper solution:
rustdocsWrapper = pkgs.writeShellApplication {
name = "rustdocs-mcp-wrapper";
runtimeInputs = [
pkgs.coreutils
pkgs.nix
];
text = ''
set -euo pipefail
OPENAI_API_KEY="$(cat ${systemConfig.sops.secrets.OPENAI_API_KEY.path})"
export OPENAI_API_KEY
CACHE_DIR="''${XDG_CACHE_HOME:-$HOME/.cache}/mcp"
OUT_LINK="$CACHE_DIR/rustdocs-mcp-server"
mkdir -p "$CACHE_DIR"
if [ ! -e "$OUT_LINK" ] || [ ! -x "$OUT_LINK/bin/rustdocs_mcp_server" ]; then
echo "[rustdocs-wrapper] Building rustdocs-mcp-server via nixβ¦"
${pkgs.nix}/bin/nix build github:Govcraft/rust-docs-mcp-server --out-link "$OUT_LINK.tmp"
mv -T "$OUT_LINK.tmp" "$OUT_LINK"
fi
export PKG_CONFIG_PATH="${pkgs.alsa-lib.dev}/lib/pkgconfig:${pkgs.openssl.dev}/lib/pkgconfig:${pkgs.systemd.dev}/lib/pkgconfig:${pkgs.pkg-config}/lib/pkgconfig:''${PKG_CONFIG_PATH:-}"
export OPENSSL_DIR="${pkgs.openssl.out}"
export OPENSSL_LIB_DIR="${pkgs.openssl.out}/lib"
export OPENSSL_INCLUDE_DIR="${pkgs.openssl.dev}/include"
export SSL_CERT_FILE="${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt"
if [ -n "''${MCP_NIX_SHELL:-}" ]; then
exec ${pkgs.nix}/bin/nix develop "''${MCP_NIX_SHELL}" -c "$OUT_LINK/bin/rustdocs_mcp_server" "$@"
fi
EXTRA_PKGS="''${MCP_EXTRA_NIX_PKGS:-}"
extra_args=()
if [ -n "$EXTRA_PKGS" ]; then
read -r -a extra_args <<< "$EXTRA_PKGS"
fi
exec ${pkgs.nix}/bin/nix shell \
${pkgs.pkg-config} ${pkgs.alsa-lib} ${pkgs.openssl} ${pkgs.openssl.dev} ${pkgs.cacert} ${pkgs.systemd} ${pkgs.systemd.dev} \
"''${extra_args[@]}" -c "$OUT_LINK/bin/rustdocs_mcp_server" "$@"
'';
};
Server configuration:
servers.rust-docs-bevy = {
command = "${rustdocsWrapper}/bin/rustdocs-mcp-wrapper";
args = [
"[email protected]"
"-F"
"default"
];
};What users get:
- Rust documentation for Bevy framework in Claude
- No manual Rust/Cargo setup required
- Nix handles all dependencies transparently
- Binary cached in ~/.cache/mcp across rebuilds
- First run builds; subsequent runs are instant
Testing and Validation
Why Testing Matters
Configuration errors break AI workflows at the worst possible moments, when you're deep in a coding session and need your tools to work. I learned this the hard way when a typo in my MCP configuration broke all my servers, and I had to spend 20 minutes debugging instead of coding.
I wanted to catch these issues before deployment. Infrastructure code deserves the same testing rigor as application code. NixOS's VM-based testing framework lets us validate the entire stack, from module evaluation to file generation to actual deployment, in an isolated, reproducible environment.
What We're Testing
The test validates:
- The module loads without evaluation errors
- Files are generated correctly in the staging area
- Activation scripts copy files to final locations
- The generated JSON is valid and contains expected structure
- File permissions are correct
NixOS Integration Test
{
pkgs,
lib,
inputs, # required (no default); provides the home-manager module
...
}: let
# 1. Use the modern, canonical test runner
mkTest = pkgs.nixosTest;
# 2. Get the module directly from inputs. No fallback.
hmModule = inputs.home-manager.nixosModules.home-manager;
in
mkTest {
name = "mcp-service-test";
nodes.machine = {...}: {
imports = [hmModule];
users.users.testuser = {
isNormalUser = true;
home = "/home/testuser";
extraGroups = ["wheel"];
};
home-manager = {
useUserPackages = true;
users.testuser = {
home.stateVersion = "24.11";
# Import the Home Manager MCP module and enable it minimally
imports = [../../home/common/modules/mcp.nix];
services.mcp = {
enable = true;
targets.cursor = {
directory = "/home/testuser/.cursor";
fileName = "mcp.json";
};
servers = {};
};
};
};
# Keep VM minimal
boot.loader.grub.enable = false;
boot.loader.systemd-boot.enable = lib.mkForce false;
fileSystems."/" = {
device = "/dev/vda";
fsType = "ext4";
};
virtualisation.graphics = false;
};
testScript = ''
machine.start()
# 3. Wait for the specific HM service, not the whole system
machine.wait_for_unit("home-manager-testuser.service")
# Verify user exists
machine.succeed("id testuser")
# Verify Home Manager generated MCP config and copied to target
machine.succeed("su - testuser -c 'test -f $HOME/.mcp-generated/cursor/mcp.json'")
machine.succeed("su - testuser -c 'test -f $HOME/.cursor/mcp.json'")
machine.succeed("su - testuser -c 'grep -q \"\\\"mcpServers\\\"\" $HOME/.cursor/mcp.json'")
'';
}
What This Validates
- Module loads: No Nix evaluation errors
- Files generated: Config appears in .mcp-generated/
- Activation runs: Copy operation succeeds
- Valid JSON: Output contains expected structure
- Permissions: Files are readable by user
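The same assertions can be run against a live machine without spinning up a VM. A small Python sketch mirroring the test's checks (check_mcp_config is a hypothetical helper; the demo writes a throwaway file, but you could point it at ~/.cursor/mcp.json):

```python
import json
import pathlib
import tempfile

def check_mcp_config(path: pathlib.Path) -> dict:
    """Local sanity check mirroring the VM test's assertions."""
    assert path.is_file(), f"{path} is missing"
    assert not path.is_symlink(), f"{path} must be a real file, not a symlink"
    data = json.loads(path.read_text())        # raises if the JSON is invalid
    assert "mcpServers" in data, "missing top-level mcpServers key"
    return data["mcpServers"]

# Demo against a throwaway file standing in for ~/.cursor/mcp.json.
demo = pathlib.Path(tempfile.mkdtemp()) / "mcp.json"
demo.write_text(json.dumps({"mcpServers": {"fetch": {"command": "uvx"}}}))
assert "fetch" in check_mcp_config(demo)
```

This is handy as a quick post-switch smoke test when debugging a machine where the VM test already passes.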
Running the Test
# Build the test
nix build .#nixosTests-mcp
# Run with logs
nix build .#nixosTests-mcp --print-build-logs
Test runs in a VM:
- Clean NixOS system
- Fresh user account
- Isolated from host system
- Reproducible across machines
Real-World Configuration Examples
Basic Utility Servers
servers = {
# Web content fetching
fetch = {
command = "${pkgs.uv}/bin/uvx";
args = [
"--from"
"mcp-server-fetch"
"mcp-server-fetch"
];
env = {
UV_PYTHON = "${pkgs.python3}/bin/python3";
};
};
# Time and date utilities
time = {
command = "${pkgs.uv}/bin/uvx";
args = [
"--from"
"mcp-server-time"
"mcp-server-time"
];
env = {
UV_PYTHON = "${pkgs.python3}/bin/python3";
};
};
# Persistent memory across sessions
memory = {
command = "${pkgs.nodejs}/bin/npx";
args = [
"-y"
"@modelcontextprotocol/server-memory@latest"
];
};
# Enhanced reasoning
sequential-thinking = {
command = "${pkgs.nodejs}/bin/npx";
args = [
"-y"
"@modelcontextprotocol/server-sequential-thinking@latest"
];
};
};
Project-Specific Tools
servers = {
# Git access to specific project
git = {
command = "${pkgs.uv}/bin/uvx";
args = [
"--from"
"mcp-server-git"
"mcp-server-git"
"--repository"
"${config.home.homeDirectory}/Code/dex-web"
];
env = {
UV_PYTHON = "${pkgs.python3}/bin/python3";
};
};
# Filesystem access to work directories
general-filesystem = {
command = "${pkgs.nodejs}/bin/npx";
args = [
"-y"
"@modelcontextprotocol/server-filesystem@latest"
"${config.home.homeDirectory}/Code"
"${config.home.homeDirectory}/.config"
"${config.home.homeDirectory}/Documents"
];
};
};
API-Based Servers with Secrets
servers = {
# Kagi search
kagi = {
command = "${kagiWrapper}/bin/kagi-mcp-wrapper";
args = [];
# Wrapper injects KAGI_API_KEY from SOPS
};
# GitHub integration (via npx wrapper)
github = {
command = "${githubWrapper}/bin/github-mcp-wrapper";
args = [];
port = 11434;
# Wrapper injects GITHUB_TOKEN from SOPS
};
# OpenAI direct access
openai = {
command = "${openaiWrapper}/bin/openai-mcp-wrapper";
args = [];
# Wrapper injects OPENAI_API_KEY from SOPS
};
};
Custom Documentation Servers
servers = {
# Custom Lua 5.4 documentation server
lua-docs = {
command = "${pkgs.python3}/bin/python3";
args = ["${./scripts/mcp/mcp_lua_docs.py}"];
};
# Custom Love2D documentation server
love2d-docs = {
command = "${pkgs.python3}/bin/python3";
args = ["${./scripts/mcp/mcp_love2d_docs.py}"];
};
};
Custom server example (mcp_lua_docs.py):
from mcp.server import Server, stdio
from mcp import types
import anyio
import httpx
from bs4 import BeautifulSoup
server = Server(name="lua-docs", version="0.1")
@server.list_tools()
async def list_tools():
return [
types.Tool(
name="lua_doc",
description="Fetch Lua 5.4 manual section by anchor",
inputSchema={
"type": "object",
"properties": {
"anchor": {
"type": "string",
"description": "Anchor id, e.g. 'pdf-print'"
}
},
"required": ["anchor"],
},
)
]
@server.call_tool()
async def call_tool(name, arguments):
if name != "lua_doc":
return [types.TextContent(type="text", text=f"Unknown tool {name}")]
anchor = arguments.get("anchor", "")
url = f"https://www.lua.org/manual/5.4/manual.html#{anchor}"
async with httpx.AsyncClient() as client:
resp = await client.get(url)
if resp.status_code != 200:
return [types.TextContent(type="text", text=f"Error fetching {url}: {resp.status_code}")]
soup = BeautifulSoup(resp.text, "html.parser")
text = soup.get_text("\n")
return [types.TextContent(type="text", text=text)]
async def main():
opts = server.create_initialization_options()
async with stdio.stdio_server() as (read, write):
await server.run(read, write, opts)
if __name__ == "__main__":
anyio.run(main)This shows how easy it is to extend MCP with custom tooling!
Benefits and Trade-offs
After exploring the implementation details, let's step back and evaluate whether this approach makes sense for your use case. Every architectural decision involves trade-offs; here's an honest assessment.
Benefits
✅ Single source of truth
- One Nix configuration defines all MCP servers
- Deploy to Cursor, Claude Desktop, and more simultaneously
✅ Reproducible
- git clone && home-manager switch on any machine
- New team members get identical setup
✅ Type-safe
- Build fails on configuration errors
- Catch mistakes before deployment
✅ Integrated secret management
- No hardcoded API keys
- Works with SOPS, agenix, pass, etc.
- Runtime injection via wrapper scripts
✅ Cross-platform
- Shared base module
- Platform-specific optimizations where it matters
✅ Testable
- NixOS test framework validates correctness
- Automated integration testing
✅ Performant
- Warm-up scripts pre-build expensive binaries
- npm/uv caches pre-populated
- Instant startup times
✅ Maintainable
- Clear separation of concerns
- Module system encourages reuse
- Easy to add/remove servers
Trade-offs
⚠️ Learning curve
- Nix isn't trivial to learn
- The module system has its own concepts
- Worth it for an infrastructure-as-code approach
⚠️ Copy vs symlink
- Less "pure" than typical Nix configs
- But pragmatic for cursor-agent compatibility
- Still traceable to the Nix store
⚠️ Platform complexity
- Two implementations (macOS/NixOS) to maintain
- Shared base helps, but requires discipline
- Benefits outweigh the cost for multi-platform users
⚠️ Initial build time
- First run involves compiling wrappers
- Building Rust servers takes time
- Warm-up service mitigates this
⚠️ Nix dependency
- Requires buy-in to the Nix ecosystem
- Not for users who prefer imperative config
- But if you already use Nix, this is natural
When This Approach Makes Sense
I've been using this module for several months now across multiple machines. Here's my honest take on when it's worth the investment:
✅ Great fit if you:
- Already use Nix or NixOS (the learning curve is smaller)
- Manage multiple machines (the reproducibility pays off immediately)
- Want reproducible AI tooling setup for a team
- Need robust secret management (SOPS integration is a game-changer)
- Value type-safety and testing (catch errors before they bite you)
❌ Consider alternatives if you:
- Prefer simple shell scripts for configuration
- Have a single machine where manual setup works fine
- Don't want to invest time learning Nix (it's a real commitment)
- Need something quick for a demo or one-off setup
The initial setup time is real: you'll spend a few hours getting everything working. But once it's set up, adding new servers or rotating secrets becomes trivial. For me, the time investment has paid off many times over.
Making It Reusable
Ready to use this in your own setup? Here are two approaches depending on your needs and how you structure your Nix configuration.
Option 1: Copy the Module
```nix
# In your home.nix or home-manager configuration
{
  imports = [
    # Point to the module file
    ./path/to/mcp.nix
  ];

  services.mcp = {
    enable = true;

    targets.cursor = {
      directory = "${config.home.homeDirectory}/.cursor";
      fileName = "mcp.json";
    };

    servers = {
      # Your server configurations
      fetch = {
        command = "uvx";
        args = ["mcp-server-fetch"];
      };
    };
  };
}
```

Option 2: As a Flake Input (Future)
```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

    home-manager = {
      url = "github:nix-community/home-manager";
      inputs.nixpkgs.follows = "nixpkgs";
    };

    # Add the MCP module
    mcp-home-manager = {
      url = "github:lewisflude/nix";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = { self, nixpkgs, home-manager, mcp-home-manager, ... }: {
    homeConfigurations.lewis = home-manager.lib.homeManagerConfiguration {
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
      modules = [
        # Import the MCP module
        mcp-home-manager.homeManagerModules.mcp

        # Your configuration
        {
          services.mcp = {
            enable = true;
            servers = { /* ... */ };
          };
        }
      ];
    };
  };
}
```

Quick Start Guide
Step 1: Clone the repository
```bash
git clone https://github.com/lewisflude/nix
cd nix
```

Step 2: Review the module
```bash
# Check out the base module
cat home/common/modules/mcp.nix

# Platform-specific examples
cat home/darwin/mcp.nix
cat home/nixos/mcp.nix
```

Step 3: Customize for your setup
Add your server configurations to your home.nix:
```nix
services.mcp = {
  enable = true;

  targets.cursor = {
    directory = "${config.home.homeDirectory}/.cursor";
    fileName = "mcp.json";
  };

  servers = {
    # Add your own servers
    my-custom-server = {
      command = "/path/to/command";
      args = ["arg1" "arg2"];
    };
  };
};
```

Step 4: Build and activate
```bash
home-manager switch
```

Step 5: Verify
```bash
# Check generated config
cat ~/.mcp-generated/cursor/mcp.json

# Verify Claude CLI registration
claude mcp list
```

Contributing
Have ideas for improvements? Found a bug? Want to share your MCP server configurations?
- GitHub: github.com/lewisflude/nix
- Issues: Report problems or request features
- Pull Requests: Contributions welcome, especially new server configurations!
- Discussions: Share your setup and learn from others
Conclusion
I now have a working Home Manager module for MCP that reliably reproduces MCP server configurations across all my machines. The module handles the cursor-agent symlink limitation, integrates seamlessly with SOPS for secret management, and provides a clean, type-safe API for declaring MCP servers.
The payoff is substantial. When I set up a new machine, I just run `git clone && home-manager switch` and my entire MCP setup is ready in minutes, not hours. That kind of leverage is what draws me to Nix: once something works, it just works, everywhere.
The Bigger Picture
This pattern extends beyond MCP. Declarative configuration for AI tooling is still underexplored territory. As AI assistants become more integral to development workflows, having reliable, reproducible infrastructure becomes critical.
Nix's strengths (reproducibility, type safety, and composability) make it uniquely suited for managing the complex dependencies and secrets that AI tools require. This module is one step toward treating AI infrastructure with the same rigor we apply to production systems.
Final Thought
Model Context Protocol gives AI assistants superpowers.
Nix gives you superpowers to manage those superpowers.
Why choose between them?
The more I work with Nix, the less inclined I am to work with other tools. The initial struggle to set things up is real, but once something works, it just works. And that's exactly what I want for my AI tooling.
Resources
Related Links
- GitHub Repository: github.com/lewisflude/nix
- MCP Documentation: modelcontextprotocol.io
- MCP Server Registry: github.com/modelcontextprotocol/servers
- Nix Documentation: nixos.org/manual/nix/stable
- Home Manager Manual: nix-community.github.io/home-manager
- SOPS-nix: github.com/Mic92/sops-nix
Connect
- GitHub: @lewisflude
- Website: lewisflude.com
- Email: [email protected]
Appendix
A. Troubleshooting
MCP servers not loading in Cursor:
```bash
# Check config exists and is valid JSON
cat ~/.cursor/mcp.json | jq .

# Verify it's a real file, not a symlink
ls -la ~/.cursor/mcp.json

# Then restart Cursor
```

Claude CLI registration failing:
```bash
# Check Claude CLI is installed
which claude

# Manually test registration
claude mcp add test-server -s user uvx -- mcp-server-fetch

# Check logs (NixOS)
journalctl --user -u mcp-claude-register
```

Secret wrapper not working:
```bash
# Verify SOPS secret exists
ls -la /run/secrets/KAGI_API_KEY  # or wherever SOPS puts it

# Test wrapper manually
~/bin/kagi-mcp-wrapper --help

# Check environment
echo $KAGI_API_KEY  # Should be empty (not exported to your shell)
```

B. Migration Guide
From manual JSON to this module:
1. Backup existing configs:

   ```bash
   cp ~/.cursor/mcp.json ~/.cursor/mcp.json.backup
   cp "$HOME/Library/Application Support/Claude/claude_desktop_config.json" \
     ~/claude_desktop_config.json.backup
   ```

2. Convert to Nix format. Start with your existing JSON configuration:

   ```json
   // ~/.cursor/mcp.json
   {
     "mcpServers": {
       "fetch": {
         "command": "uvx",
         "args": ["mcp-server-fetch"]
       }
     }
   }
   ```

   Then convert it to the Nix format:

   ```nix
   # In your home.nix
   services.mcp.servers.fetch = {
     command = "uvx";
     args = ["mcp-server-fetch"];
   };
   ```

3. Apply and verify:

   ```bash
   home-manager switch
   diff ~/.cursor/mcp.json ~/.cursor/mcp.json.backup
   ```
C. Complete Module Code
See the full implementation in the GitHub repository:
- Base module: home/common/modules/mcp.nix
- macOS config: home/darwin/mcp.nix
- NixOS config: home/nixos/mcp.nix
- Integration test: tests/integration/mcp.nix
View the complete source code on GitHub
Published: October 29, 2025
Last updated: October 29, 2025
Reading time: ~18 minutes