How to Install NemoClaw on a Hostinger VPS (Step-by-Step Guide)

· 12 min read · By ClawMetry Team

NVIDIA NemoClaw wraps OpenClaw inside a secured, sandboxed container. Every network call, file access, and AI request goes through a security layer you control. Your API keys never enter the container itself.

The official docs call it a "one-line install." Reality is a bit messier, especially on a cloud server. This guide walks you through every step, from provisioning the VPS to chatting with your secured AI agent.

Total time: about 15 minutes.

What we'll cover

  1. Architecture overview
  2. Prerequisites
  3. Provision a Hostinger VPS
  4. Configure the firewall
  5. Install Docker, OpenShell, and NemoClaw
  6. Get your gateway token
  7. Set up Caddy for HTTPS
  8. Connect OpenAI or Claude
  9. Access and test your instance
  10. Monitor with ClawMetry
  11. Next steps

Architecture Overview

Before we start, here's what we're building. Understanding the layers makes every later step easier to follow.

Browser / Telegram / Apps
            ↓
Hostinger Firewall (ports 80 + 443)
            ↓
Caddy (HTTPS reverse proxy)
            ↓ localhost:18789
┌──────────────────────────────────┐
│  OpenShell Sandbox               │
│  Policy Engine + Privacy Router  │
│  OpenClaw Gateway (:18789)       │
└──────────────────────────────────┘
            ↑ API keys stored outside sandbox
OpenShell Provider System

OpenClaw is the AI agent (the thing you chat with). NemoClaw is the OpenClaw plugin for NVIDIA OpenShell that runs it inside a secured container. All communication goes through OpenShell's policy engine and privacy router.

Why Hostinger over NVIDIA's platform? NVIDIA's hosted option costs about $43/month for a comparable VM (8 GB RAM, 2 vCPUs at $0.06/hour). Hostinger's KVM2 plan gives you similar specs for around $10/month.
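The arithmetic behind that comparison, as a quick sketch (assuming a 30-day, 720-hour month of continuous uptime):

```shell
# $0.06/hour, 24 hours/day, 30 days/month
awk 'BEGIN { printf "%.2f\n", 0.06 * 24 * 30 }'   # prints 43.20
```

That's where the "about $43/month" figure comes from; a 31-day month lands slightly higher.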

Prerequisites

You'll need three things before starting:

  1. Hostinger VPS account with a KVM2 plan or higher
  2. NVIDIA API key (free) from build.nvidia.com
  3. OpenAI or Anthropic API key (if you want to use GPT or Claude instead of NVIDIA Nemotron)

1 Provision a Hostinger VPS

Head to Hostinger VPS and set up a new server:

  1. Select KVM2 or higher (recommended for meaningful AI workloads)
  2. Choose your billing period
  3. Make sure Docker is selected as the pre-installed application
  4. Complete payment and set a root password
  5. Wait for the VPS to provision, then click Manage VPS

While that's setting up, grab your free NVIDIA API key:

  1. Go to build.nvidia.com and create a free account
  2. Click your profile, then API Keys
  3. Generate a new key and save it somewhere safe

2 Configure the Firewall

Hostinger drops all incoming traffic by default. We need to open ports 80 (HTTP) and 443 (HTTPS).

  1. In your Hostinger dashboard, click Security then Firewall
  2. Click Create Firewall and name it (e.g., nemoclaw-firewall)
  3. Click Edit and add two rules:
    • Protocol: TCP, Port: 80, Source: Anywhere
    • Protocol: TCP, Port: 443, Source: Anywhere
  4. Go back, activate the firewall, then click the three dots and Synchronize

Don't skip the sync. Adding the rules isn't enough. You must click Synchronize for the changes to actually apply to your VPS.

3 Install Docker, OpenShell, and NemoClaw

Open the VPS terminal from your Hostinger dashboard (or SSH in as root). Then run these commands in order:

3a. Install Docker

apt update && apt upgrade -y && \
curl -fsSL https://get.docker.com | sh && \
systemctl enable docker && systemctl start docker && \
usermod -aG docker $USER && newgrp docker

Tip: If the upgrade prompts about existing config files (like SSHD), keep the local version.

3b. Install OpenShell

curl -fsSL https://raw.githubusercontent.com/NVIDIA/openshell/main/install.sh | bash

3c. Install NemoClaw

curl -fsSL https://raw.githubusercontent.com/NVIDIA/nemoclaw/main/install.sh | bash

The NemoClaw installer will walk you through a setup wizard:

  1. Sandbox name: Enter something like nemoclaw-sandbox
  2. NVIDIA API key: Paste the key you generated earlier
  3. Model: Select Nemotron for now (we'll switch to OpenAI/Claude later)
  4. Policy presets: Select list mode and add: py npm slack telegram

Wait for the sandbox to build. You'll see a summary screen when it's done.

3d. Fix PATH for new sessions

NemoClaw and OpenShell binaries might not be found in new terminal sessions. Run this once to fix it:

# Add OpenShell + NemoClaw to PATH permanently
echo 'export PATH="$PATH:/opt/openshell/bin:/opt/nemoclaw/bin"' >> ~/.bashrc
source ~/.bashrc
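One caveat: the echo >> ~/.bashrc line appends unconditionally, so running it twice leaves duplicate entries. Here's a sketch of an idempotent variant that only appends when the exact line is missing (a temp file stands in for ~/.bashrc so you can try it safely):

```shell
# Append the PATH export only if it isn't already present.
PROFILE="$(mktemp)"          # use "$HOME/.bashrc" on the VPS
LINE='export PATH="$PATH:/opt/openshell/bin:/opt/nemoclaw/bin"'
grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"
grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"   # no-op on repeat
grep -c 'openshell/bin' "$PROFILE"   # prints 1, not 2
rm -f "$PROFILE"
```

grep -x matches whole lines and -F disables regex interpretation, so the literal $PATH in the line doesn't trip anything up.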

4 Get Your Gateway Token

You'll need the OpenClaw gateway token to access the web UI. Get it by entering the sandbox:

# Enter the sandbox
nemoclaw connect

# Inside the sandbox, get the token
grep token /root/.config/openclaw/gateway.yaml

# Exit the sandbox
exit

Save this token somewhere safe. You'll need it to connect to the dashboard.
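If you'd rather capture just the value instead of the whole matching line, a sed one-liner works. The sample file below is hypothetical, so treat this as a sketch; the real gateway.yaml inside the sandbox may nest the token differently:

```shell
# Build a stand-in config file for illustration
cat > /tmp/gateway-sample.yaml << 'EOF'
gateway:
  port: 18789
token: abc123def456
EOF

# Print only the value after "token: "
sed -n 's/^token:[[:space:]]*//p' /tmp/gateway-sample.yaml   # prints abc123def456
rm -f /tmp/gateway-sample.yaml
```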

5 Set Up Caddy for HTTPS

Caddy gives you a clean HTTPS address instead of typing IP:port. When you created your Hostinger VPS, you got a free subdomain (visible at the top of your dashboard).

5a. Install Caddy

apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | \
  gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | \
  tee /etc/apt/sources.list.d/caddy-stable.list
apt update && apt install -y caddy

Important: Run these commands on the VPS host, not inside the sandbox.

5b. Configure Caddy

Replace YOUR_SUBDOMAIN.hstgr.cloud with your actual Hostinger subdomain:

cat > /etc/caddy/Caddyfile << 'EOF'
YOUR_SUBDOMAIN.hstgr.cloud {
    reverse_proxy localhost:18789
}
EOF

systemctl restart caddy
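The minimal Caddyfile above is all the guide requires. If you want compression and an access log too, Caddy's standard directives cover both; this is an optional sketch, not part of the required setup:

```
YOUR_SUBDOMAIN.hstgr.cloud {
    reverse_proxy localhost:18789
    encode zstd gzip
    log {
        output file /var/log/caddy/nemoclaw.log
    }
}
```

Run systemctl restart caddy again after any Caddyfile change.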

5c. Enable port forwarding

OpenShell needs to forward traffic from Caddy to the OpenClaw gateway inside the sandbox:

# Reconnect and exit the sandbox once (may be needed after installing Caddy)
nemoclaw connect
exit

# Back on the VPS host, enable the port forwarding
openshell network forward --sandbox nemoclaw-sandbox --port 18789

5d. Allow your domain in OpenClaw

# Enter the sandbox
nemoclaw connect

# Add your subdomain to allowed origins
openclaw config set gateway.allowedOrigins '["https://YOUR_SUBDOMAIN.hstgr.cloud"]'

# Restart the gateway
openclaw gateway restart

# Exit
exit

6 Connect OpenAI or Claude

NemoClaw defaults to NVIDIA Nemotron. To switch to OpenAI or Claude, use OpenShell's secure provider system. API keys are stored outside the sandbox, so they're never exposed to the AI agent.

Option A: OpenAI

# On the VPS host (NOT inside the sandbox)
export OPENAI_API_KEY="sk-your-key-here"

# Create the provider
openshell provider create openai --api-key "$OPENAI_API_KEY"

# Point inference to OpenAI with your preferred model
openshell inference set --provider openai --model gpt-5.1

# Verify
openshell inference get

Option B: Anthropic (Claude)

# On the VPS host (NOT inside the sandbox)
export ANTHROPIC_API_KEY="sk-ant-your-key-here"

# Create the provider
openshell provider create anthropic --api-key "$ANTHROPIC_API_KEY"

# Point inference to Claude
openshell inference set --provider anthropic --model claude-sonnet-4-20250514

# Verify
openshell inference get
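Both options put the key on the command line via export, which leaves it in your shell history. A hedged alternative is to read it from an owner-only file instead. The /tmp path below is a placeholder for illustration; on the VPS you'd use a root-only location such as /root/.openai_key (my suggestion, not part of the official setup):

```shell
umask 077                                   # new files get owner-only perms
printf 'sk-your-key-here\n' > /tmp/openai_key
export OPENAI_API_KEY="$(cat /tmp/openai_key)"
stat -c '%a' /tmp/openai_key                # prints 600
rm -f /tmp/openai_key
```

Note that umask persists for the rest of the session; wrap the first two lines in a subshell if you'd rather it didn't.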

Add models to OpenClaw config

After setting the provider, add the models to OpenClaw's config inside the sandbox:

# Enter sandbox
nemoclaw connect

# Add OpenAI models
openclaw config set models.openai.apiKey "FROM_OPENSHELL"
openclaw config set models.openai.baseUrl "https://inference.local/v1"

# Or add Anthropic models
openclaw config set models.anthropic.apiKey "FROM_OPENSHELL"
openclaw config set models.anthropic.baseUrl "https://inference.local/v1"

# Restart gateway to pick up changes
openclaw gateway restart

exit

7 Access and Test

Open your browser and navigate to https://YOUR_SUBDOMAIN.hstgr.cloud. You should see the OpenClaw dashboard.

  1. Go to Overview
  2. Enter your gateway token (from step 4)
  3. Click Connect
  4. Start a new session and send a message

To verify your model, ask: "Which model are you using?"

You can switch models anytime through the OpenClaw UI: go to config (raw mode), find the models section, and change the primary model.

Telegram ready: Since we added telegram to the policy presets, your agent can connect to Telegram right away. Just ask it: "How do I connect to Telegram?" and it'll walk you through the setup.

Monitor Your NemoClaw Instance with ClawMetry

Your NemoClaw instance is running. But how do you know what it's actually doing? Which tools it's calling, how many tokens it's burning, whether sub-agents are stuck?

ClawMetry gives you real-time observability for your OpenClaw agents. It works with standard OpenClaw and NemoClaw alike.

Install ClawMetry (inside the sandbox)

# Enter the sandbox
nemoclaw connect

# Install ClawMetry
pip install clawmetry

# Run it
clawmetry

exit

Connect to ClawMetry Cloud

For monitoring from anywhere (phone, laptop, another machine), connect to ClawMetry Cloud:

# Inside the sandbox
clawmetry cloud connect

This gives you a web dashboard at app.clawmetry.com where you can see what your NemoClaw agents are doing in real time: which tools they're calling, how many tokens they're burning, and whether sub-agents are stuck.

ClawMetry is free, open source, and takes one command to install.

What's Next

What we set up today is the foundation. OpenShell's policies are still minimal (deny-all by default). To expand your agent's capabilities, grow the policy allow list over time, the same way we added py, npm, slack, and telegram during setup.

Quick Reference: All Commands

Here's every command from this guide in one block for easy copy-paste:

# 1. Install Docker
apt update && apt upgrade -y
curl -fsSL https://get.docker.com | sh
systemctl enable docker && systemctl start docker

# 2. Install OpenShell
curl -fsSL https://raw.githubusercontent.com/NVIDIA/openshell/main/install.sh | bash

# 3. Install NemoClaw
curl -fsSL https://raw.githubusercontent.com/NVIDIA/nemoclaw/main/install.sh | bash

# 4. Get gateway token
nemoclaw connect
grep token /root/.config/openclaw/gateway.yaml
exit

# 5. Install and configure Caddy
apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | \
  gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | \
  tee /etc/apt/sources.list.d/caddy-stable.list
apt update && apt install -y caddy

# Edit /etc/caddy/Caddyfile with your subdomain
systemctl restart caddy

# 6. Set up AI provider (OpenAI example)
export OPENAI_API_KEY="sk-your-key-here"
openshell provider create openai --api-key "$OPENAI_API_KEY"
openshell inference set --provider openai --model gpt-5.1

# 7. Install ClawMetry
nemoclaw connect
pip install clawmetry && clawmetry
clawmetry cloud connect
exit