
OpenClaw Performance Tracking: Metrics That Matter

The Performance Tracking Problem

You deployed your OpenClaw agents. They are running. But are they performing well? Without structured performance tracking, you are guessing.

Most teams start with basic uptime checks — is the agent responding? But uptime alone tells you almost nothing. An agent can be "up" while being catastrophically slow, burning through your budget, or producing garbage outputs.

The Five Metrics Every OpenClaw Team Should Track

Across dozens of OpenClaw deployments, these are the metrics that actually correlate with agent health:

1. Task Completion Rate

What percentage of tasks does your agent complete successfully? A healthy agent should maintain 95%+ completion. If this drops below 90%, something is wrong — maybe the underlying model is degrading, maybe your prompts need updating, or maybe an API dependency is flaky.

2. P95 Response Latency

Average latency is misleading. Your P95 (95th percentile) tells you what the slow experience actually looks like. For most OpenClaw agents, P95 latency under 30 seconds is acceptable. If your P95 is 2 minutes, you have a tail latency problem that needs investigation.
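To make the percentile concrete, here is a minimal sketch of a nearest-rank P95 computed over raw task durations. The sample numbers are illustrative only; in practice the durations would come from whatever per-task records you already keep:

```python
# Minimal sketch: nearest-rank P95 over a list of task durations (ms).
import math

def p95(durations_ms: list[float]) -> float:
    """Return the nearest-rank 95th percentile of the given durations."""
    if not durations_ms:
        return 0.0
    ordered = sorted(durations_ms)
    rank = math.ceil(0.95 * len(ordered)) - 1   # zero-based nearest-rank index
    return ordered[rank]

# A mostly-fast agent with a slow tail: the mean hides what slow users see
samples = [900, 1100, 1200, 1300, 1500, 2000, 2400, 3100, 28000, 115000]
print(f"mean = {sum(samples) / len(samples):.0f} ms")   # ~15650 ms, dragged up by two outliers
print(f"p95  = {p95(samples):.0f} ms")                  # 115000 ms, the experience your slowest users get
```

The `p95()` helper reappears in a later sketch for per-task-type thresholds.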

3. Resource Efficiency Score

CPU and memory usage per completed task. This is your cost efficiency metric. If Agent A uses 2GB of RAM to complete a task while Agent B uses 500MB for the same work, Agent B is four times more efficient. Track this over time to catch resource regressions.
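If you want to populate this metric yourself, a rough sketch is below. It assumes the third-party psutil package, and the "peak" here is simply the larger of two RSS samples taken around the task, not a kernel-level high-water mark:

```python
# Rough sketch: RSS-per-task sampling with psutil (an assumption; any process
# metrics library works). "Peak" is the larger of two samples around the task.
import psutil
from contextlib import contextmanager

@contextmanager
def track_rss(metrics: dict):
    proc = psutil.Process()
    before = proc.memory_info().rss
    try:
        yield
    finally:
        metrics["peak_rss_bytes"] = max(before, proc.memory_info().rss)

metrics = {}
with track_rss(metrics):
    result = [x * x for x in range(1_000_000)]   # stand-in for your agent's task handler
print(f"{metrics['peak_rss_bytes'] / 1e6:.0f} MB RSS for this task")
```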

4. Error Rate by Category

Not all errors are equal. Categorize them: infrastructure errors (OOM, disk full), model errors (timeout, rate limit), and logic errors (wrong output format, failed validation). Each category has a different root cause and fix.

5. Token Consumption Per Task

For agents using LLM APIs, token consumption directly impacts your bill. Track tokens per task type and set budgets. A sudden spike in token usage often indicates a prompt regression or an agent stuck in a retry loop.

How ClawPulse Tracks Agent Performance

ClawPulse automates performance tracking for OpenClaw agents. Instead of building custom dashboards and writing metric collection scripts, you get:

Automatic metric collection — CPU, memory, disk, load, and custom metrics are collected every 30 seconds with zero configuration.

Historical trend analysis — View 7, 14, 30, or 90-day trends. Spot gradual degradation that daily checks miss. Export data as CSV or JSON for custom analysis.

Threshold-based alerts — Set performance baselines and get notified the moment an agent deviates. "Alert me if P95 latency exceeds 45 seconds" or "Alert me if completion rate drops below 92%."

Instance comparison — Running multiple agents? Compare their performance side by side. Identify your best and worst performers instantly.

Setting Up Performance Baselines

The key to effective performance tracking is establishing baselines during a known-good period:

1. Run your agents for one week with ClawPulse collecting metrics

2. Review the weekly digest report to understand normal ranges

3. Set alert thresholds at 1.5x your normal values

4. Tighten thresholds over time as you optimize

This approach eliminates alert fatigue — you only get notified when something genuinely deviates from normal.
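To make step 3 concrete, here is a small sketch that turns a week of collected values into 1.5x alert thresholds. The metric names and the input dict shape are illustrative, not a ClawPulse API:

```python
# Sketch: derive 1.5x alert thresholds from a known-good week of metrics.
# Input shape is illustrative; use whatever your collection pipeline produces.
from statistics import mean

def derive_thresholds(week_of_metrics: dict[str, list[float]], factor: float = 1.5) -> dict[str, float]:
    """Return {metric_name: threshold} where threshold = factor * typical value."""
    return {name: factor * mean(values) for name, values in week_of_metrics.items() if values}

week = {
    "p95_latency_ms": [21000, 24000, 22500, 26000, 23000, 25000, 24500],
    "tokens_per_task": [3100, 2900, 3300, 3050, 3200, 2950, 3150],
}
for metric, threshold in derive_thresholds(week).items():
    print(f"alert if {metric} > {threshold:.0f}")
```

For metrics where lower is worse (completion rate, for example), flip the comparison and use a factor below 1.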

Common Performance Anti-Patterns

Watch out for these patterns in your tracking data:

  • Sawtooth memory — memory climbs steadily then drops sharply. This is a memory leak with periodic restarts masking the problem (see the detection sketch after this list).
  • Bimodal latency — most requests are fast, but a second cluster is very slow. Usually indicates two different code paths or a caching issue.
  • Weekend performance cliff — agents slow down on weekends. Often caused by batch jobs or backups competing for resources.
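The sawtooth pattern is the easiest of the three to detect programmatically. Here is a heuristic sketch that flags a series of RSS samples showing repeated sharp drops with steady growth in between; the 20% drop cutoff and the sample series are illustrative:

```python
# Heuristic sketch: flag a sawtooth memory profile from periodic RSS samples.
# "Sawtooth" is treated as repeated drops of more than 20% with growth between them.
def looks_like_sawtooth(rss_samples: list[float], drop_pct: float = 0.20) -> bool:
    drops = 0
    climbing = 0
    for prev, cur in zip(rss_samples, rss_samples[1:]):
        if cur < prev * (1 - drop_pct):
            drops += 1        # sharp drop, likely a restart masking a leak
        elif cur > prev:
            climbing += 1     # steady climb between drops
    return drops >= 2 and climbing > len(rss_samples) // 2

# 13 hourly RSS samples (MB), illustrative: climbs, restarts, climbs again
samples = [500, 620, 740, 860, 980, 1100, 480, 600, 720, 840, 960, 1080, 470]
print(looks_like_sawtooth(samples))   # True, investigate for a leak
```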

Start Tracking Performance Today

Effective OpenClaw agent performance tracking does not require a dedicated SRE team. With ClawPulse, you can set up comprehensive tracking in minutes and start making data-driven decisions about your agent fleet.

Staying Ahead of the Curve: Emerging Trends in OpenClaw Performance Tracking

As conversational AI evolves, OpenClaw teams need to keep their performance tracking practices current. One emerging area ClawPulse has been watching closely is contextual performance metrics.

In the past, teams have primarily focused on traditional metrics like task completion rate and response latency. While these are still essential, the most successful OpenClaw deployments are now looking at how agents perform in specific real-world scenarios. This could involve tracking things like customer sentiment, task success rate by user persona, or even agent behavior during complex multi-turn conversations.

By analyzing performance through this contextual lens, teams can gain deeper insights into how their agents are truly serving end-users. ClawPulse's advanced analytics capabilities make it easy to capture and visualize these more nuanced metrics, helping you identify areas for optimization and ensure your OpenClaw investment is delivering maximum value.

Another trend to watch is the integration of external data sources to enrich performance tracking. Leading teams are starting to correlate agent behavior with factors like website traffic, product sales, or even macroeconomic indicators. This holistic view can uncover hidden dependencies and provide a more complete picture of how your OpenClaw agents are impacting the broader business.

As the OpenClaw landscape continues to mature, ClawPulse is committed to staying at the forefront of these emerging best practices. By partnering with us, you can be confident that your performance tracking strategy will evolve alongside the technology, ensuring you're always one step ahead of the competition.

Real-Time Alerting: Turning Metrics Into Action

Collecting metrics is only half the battle. The other half is responding fast when things go wrong. Set up automated alerts on your most critical metrics—task completion rate dropping below 90%, P95 latency spiking above 60 seconds, or token consumption doubling unexpectedly. These alerts should trigger immediately so your team can investigate before users notice degradation.

Most teams benefit from tiered alerting: soft alerts (Slack notifications) for warning thresholds, and hard alerts (PagerDuty, SMS) for critical failures. The key is not to alert on everything—alert fatigue kills responsiveness. Focus on metrics that directly impact your users or your costs. Use your historical baseline data to set intelligent thresholds rather than arbitrary numbers. Many teams using ClawPulse find that context-aware dashboards help them correlate multiple metrics at once, spotting patterns that single-metric alerts would miss entirely.

Sign up at clawpulse.org/signup and get visibility into your agents today.

Start monitoring your OpenClaw agents in 2 minutes

Free 14-day trial. No credit card. Just drop in one curl command.

Prefer a walkthrough? Book a 15-min demo.

A Production Instrumentation Reference (Python)

The five metrics above are useless without a structured way to record them. Here is a minimal `cp_trace` context manager that any OpenClaw agent can wrap around its task loop. It captures the four pillars — duration, tokens, errors, completion status — and ships them to the ClawPulse ingest endpoint:

```python
import time, json, requests, threading
from contextlib import contextmanager
from typing import Optional

CLAWPULSE_URL = "https://www.clawpulse.org/api/dashboard/tasks"
CLAWPULSE_TOKEN = ""

class ErrorCategory:
    INFRA = "infra"   # OOM, disk full, ENOSPC, network down
    MODEL = "model"   # 429 rate limit, 500 model timeout, content filter
    LOGIC = "logic"   # invalid JSON, schema validation, empty output
    TOOL  = "tool"    # tool call failed, schema drift, dependency error

@contextmanager
def cp_trace(task_type: str, agent_id: str, expected_tokens: Optional[int] = None):
    started = time.time()
    metrics = {"prompt_tokens": 0, "completion_tokens": 0, "tool_calls": 0,
               "retries": 0, "error_category": None, "error_message": None,
               "completed": False}
    try:
        yield metrics
        metrics["completed"] = True
    except Exception as e:
        # Best-effort categorisation so error_rate_by_category is populated
        msg = str(e).lower()
        if "rate" in msg or "429" in msg or "timeout" in msg:
            metrics["error_category"] = ErrorCategory.MODEL
        elif "json" in msg or "schema" in msg or "validation" in msg:
            metrics["error_category"] = ErrorCategory.LOGIC
        elif "memory" in msg or "disk" in msg or "ENOSPC" in str(e):
            metrics["error_category"] = ErrorCategory.INFRA
        else:
            metrics["error_category"] = ErrorCategory.TOOL
        metrics["error_message"] = str(e)[:500]
        raise
    finally:
        duration_ms = int((time.time() - started) * 1000)
        payload = {
            "agent_id": agent_id,
            "task_type": task_type,
            "duration_ms": duration_ms,
            "expected_tokens": expected_tokens,
            **metrics,
        }
        # Fire and forget — never block the agent's hot path on telemetry
        threading.Thread(
            target=lambda: requests.post(CLAWPULSE_URL, json=payload,
                                         headers={"Authorization": f"Bearer {CLAWPULSE_TOKEN}"},
                                         timeout=2),
            daemon=True,
        ).start()
```

Usage in your agent:

```python
with cp_trace("customer_support_reply", agent_id="agent-prod-01", expected_tokens=4000) as m:
    response = openclaw_agent.run(user_message)
    m["prompt_tokens"] = response.usage.prompt_tokens
    m["completion_tokens"] = response.usage.completion_tokens
    m["tool_calls"] = len(response.tool_calls)
```

The `expected_tokens` field is what unlocks the token consumption per task metric — without an expected value, you cannot detect a 4× regression because you have no baseline.

SQL-Level Metric Definitions

Every metric in the five-metric list needs a precise formula. Vague metrics produce vague dashboards. Here are the formulas ClawPulse uses internally — copy them into your own SQL pipeline if you are rolling your own:

```sql
-- Task completion rate (windowed, per agent)
SELECT
    agent_id,
    DATE(created_at) AS day,
    SUM(CASE WHEN completed = 1 THEN 1 ELSE 0 END) * 1.0 / COUNT(*) AS completion_rate
FROM task_entries
WHERE created_at >= NOW() - INTERVAL 7 DAY
GROUP BY agent_id, day;

-- P95 response latency (PostgreSQL syntax — for MySQL use NTILE)
SELECT
    agent_id,
    PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY duration_ms) AS p95_ms
FROM task_entries
WHERE created_at >= NOW() - INTERVAL '24 hours'
GROUP BY agent_id;

-- Resource efficiency (RSS bytes per completed task)
SELECT
    agent_id,
    AVG(peak_rss_bytes) / NULLIF(SUM(CASE WHEN completed THEN 1 ELSE 0 END), 0) AS bytes_per_task
FROM telemetry_snapshots t
JOIN task_entries e USING (agent_id)
WHERE t.created_at >= NOW() - INTERVAL 1 DAY
GROUP BY agent_id;

-- Error rate by category (the metric that actually drives fixes)
SELECT
    error_category,
    COUNT(*) AS n,
    COUNT(*) * 1.0 / (SELECT COUNT(*) FROM task_entries WHERE created_at >= NOW() - INTERVAL 1 DAY) AS rate
FROM task_entries
WHERE created_at >= NOW() - INTERVAL 1 DAY
    AND error_category IS NOT NULL
GROUP BY error_category;

-- Token consumption per task type with 7-day baseline ratio
WITH baseline AS (
    SELECT task_type, AVG(prompt_tokens + completion_tokens) AS baseline_tokens
    FROM task_entries
    WHERE created_at BETWEEN NOW() - INTERVAL 14 DAY AND NOW() - INTERVAL 7 DAY
        AND completed = 1
    GROUP BY task_type
)
SELECT
    e.task_type,
    AVG(e.prompt_tokens + e.completion_tokens) AS recent_tokens,
    b.baseline_tokens,
    AVG(e.prompt_tokens + e.completion_tokens) / b.baseline_tokens AS ratio
FROM task_entries e
JOIN baseline b USING (task_type)
WHERE e.created_at >= NOW() - INTERVAL 1 DAY
GROUP BY e.task_type, b.baseline_tokens
HAVING ratio > 1.5; -- regression threshold
```

The last query is the one that catches the silent token-budget regressions everyone misses. A `ratio > 1.5` typically means a prompt change quietly inflated context, or an agent is stuck retrying a tool call with growing history.

Concrete Reference Ranges

Everyone asks "what is a good P95?" — there is no universal answer, but these are the ranges we see across 200+ OpenClaw deployments using ClawPulse. Treat them as starting baselines, not hard truths:

| Metric | Healthy | Watch | Alert |
| --- | --- | --- | --- |
| Task completion rate | ≥ 95% | 90–95% | < 90% |
| P95 latency (single-step agent) | < 8 s | 8–20 s | > 20 s |
| P95 latency (multi-step agent, 3–6 tool calls) | < 30 s | 30–60 s | > 60 s |
| P95 latency (deep multi-step agent, 7+ tool calls) | < 90 s | 90–180 s | > 180 s |
| Tokens per task vs 7-day baseline | 0.8×–1.2× | 1.2×–1.5× | > 1.5× |
| Error rate (any category) | < 2% | 2–5% | > 5% |
| Tool-call error rate | < 3% | 3–8% | > 8% |
| Memory growth over 24h | flat or sawtooth-restart | +10–30% | > +30% (leak) |

If your agents land outside the "Healthy" column, do not panic — investigate first. Many production agents settle in "Watch" and stay healthy for years. The point of these ranges is to give you a structured starting point instead of an arbitrary threshold.

Multi-Tier Alert Configuration (YAML)

The fastest way to drown in pages is to alert on every deviation. The fastest way to miss real outages is to alert on none. Tier your alerts. Here is a battle-tested config:

```yaml
# clawpulse-alerts.yaml — multi-tier alerting for an OpenClaw agent fleet
version: 1
agent_id: "agent-prod-01"

rules:
  - name: completion_rate_warning
    metric: task_completion_rate
    window: 15m
    operator: lt
    value: 0.92
    severity: warning
    destinations: [slack:#agents-monitoring]

  - name: completion_rate_critical
    metric: task_completion_rate
    window: 5m
    operator: lt
    value: 0.85
    severity: critical
    destinations: [pagerduty:agents-oncall, slack:#agents-incidents]

  - name: p95_latency_regression
    metric: p95_latency_ms
    window: 15m
    operator: gt
    value: 45000
    baseline_ratio: 2.0   # only fire if also 2x the 7-day baseline
    severity: warning

  - name: token_budget_blown
    metric: tokens_per_task_ratio
    window: 30m
    operator: gt
    value: 1.5
    severity: warning
    destinations: [slack:#agents-cost]

  - name: error_spike_model_layer
    metric: error_rate_by_category
    label: "category=model"
    window: 5m
    operator: gt
    value: 0.10
    severity: critical
    destinations: [pagerduty:agents-oncall]

  - name: memory_leak_detector
    metric: memory_growth_24h_pct
    window: 24h
    operator: gt
    value: 30.0
    severity: warning
    destinations: [slack:#infra]
```

Push this with a single curl to your ClawPulse alert endpoint:

```bash
curl -X POST https://www.clawpulse.org/api/dashboard/alerts \
  -H "Authorization: Bearer $CLAWPULSE_TOKEN" \
  -H "Content-Type: application/yaml" \
  --data-binary @clawpulse-alerts.yaml
```

The `baseline_ratio` field is the most under-used pattern. A static "P95 > 45s" alert fires for an agent whose normal P95 is 50s. Ratio-based alerts only page when the value is both above the absolute threshold and a multiple of historical normal — eliminating most of your false positives in one stroke.
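To illustrate the evaluation order, here is a small sketch of how a `baseline_ratio` rule could be checked. The field names mirror the YAML above; this illustrates the pattern, not ClawPulse's internal implementation:

```python
# Sketch: a baseline_ratio rule fires only when the current value breaches BOTH
# the absolute threshold and a multiple of the historical baseline.
from typing import Optional

def should_fire(current: float, absolute_threshold: float,
                baseline: float, baseline_ratio: Optional[float]) -> bool:
    if current <= absolute_threshold:
        return False                                  # below the static line, never fire
    if baseline_ratio is None or baseline <= 0:
        return True                                   # no baseline configured, static rule
    return current > baseline * baseline_ratio        # must also be, e.g., 2x normal

# Agent whose normal P95 is 50s: a 52s reading clears the 45s line but is not 2x baseline
print(should_fire(52_000, 45_000, 50_000, 2.0))    # False, no page
print(should_fire(110_000, 45_000, 50_000, 2.0))   # True, genuine regression
```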

Where Performance Tracking Fits in the Broader Observability Stack

Performance tracking is one slice of agent observability, and the other slices are equally important, but you do not have to build everything at once.

If you are starting from zero, instrument completion rate and P95 latency first. Add error categorisation in week 2. Add token consumption in week 3. By week 4 you have the full five-metric panel and a meaningful baseline.


Frequently Asked Questions

How is this different from APM tools like Datadog or New Relic?

Traditional APM measures system metrics (CPU, RAM, request latency), which OpenClaw agents also expose, but it does not understand task completion rate, token consumption per task, or error categorisation specific to LLM agents. ClawPulse adds the agent-aware layer on top of system metrics. You can run both side by side — they are complementary, not competing.

Should I track cost or tokens?

Track both, but tokens first. Cost is a function of tokens × per-token price, and per-token prices change with provider negotiations. Tokens are the primitive metric; cost is derived. Setting alerts on `tokens_per_task_ratio > 1.5` catches regressions earlier than waiting for the monthly bill.
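As a concrete illustration of the tokens-first view, deriving cost is a one-line function; the per-million-token prices below are placeholders, not current provider rates:

```python
# Cost is derived from tokens. Prices are placeholders; substitute your
# provider's current per-million-token rates.
PRICE_PER_M_TOKENS = {"prompt": 3.00, "completion": 15.00}   # USD per 1M tokens (assumed)

def task_cost_usd(prompt_tokens: int, completion_tokens: int) -> float:
    return (prompt_tokens * PRICE_PER_M_TOKENS["prompt"]
            + completion_tokens * PRICE_PER_M_TOKENS["completion"]) / 1_000_000

print(f"${task_cost_usd(3200, 850):.4f} per task")   # token counts come from cp_trace
```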

What sampling rate should I use?

For agents under 1000 tasks per day, capture 100% of tasks — the cost is negligible. Between 1000 and 100k per day, sample completed tasks at 10% but capture 100% of errored tasks (the errors are what you actually need). Above 100k per day, drop completed sampling to 1% and keep errors at 100%.
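In code, the tiered sampling rule above is only a few lines; the volume cutoffs match the text and the function name is illustrative:

```python
# Sketch of the tiered sampling rule: keep every errored task, sample successes by volume.
import random

def should_record(completed: bool, daily_task_volume: int) -> bool:
    if not completed:
        return True                       # always capture errored tasks
    if daily_task_volume < 1_000:
        return True                       # low volume, keep everything
    if daily_task_volume < 100_000:
        return random.random() < 0.10     # 10% of completed tasks
    return random.random() < 0.01         # 1% at very high volume
```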

How long should I keep historical performance data?

Raw task records: 30 days. Hourly aggregates: 12 months. Daily aggregates: 3 years. This is enough to detect quarterly regressions, run year-over-year capacity planning, and stay under most compliance retention caps. ClawPulse rolls up automatically — you do not have to manage the retention pipeline.

My agent has wildly variable task durations. How do I set thresholds?

Use the `task_type` field in `cp_trace` to bucket tasks. A "summarisation" task and a "research with 8 tool calls" task should not share a P95 threshold. ClawPulse computes per-task-type baselines automatically when you populate `task_type`.
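As an example of that bucketing, per-type thresholds can be derived from recorded tasks in a few lines. This reuses the `p95()` helper sketched earlier, and the records below are illustrative, shaped like `cp_trace` payloads:

```python
# Sketch: one latency threshold per task_type instead of a single global number.
from collections import defaultdict

recent_tasks = [                                     # illustrative records
    {"task_type": "summarisation", "duration_ms": 4200},
    {"task_type": "summarisation", "duration_ms": 5100},
    {"task_type": "research_8_tools", "duration_ms": 74000},
    {"task_type": "research_8_tools", "duration_ms": 91000},
]

durations_by_type: dict[str, list[float]] = defaultdict(list)
for task in recent_tasks:
    durations_by_type[task["task_type"]].append(task["duration_ms"])

# 1.5x the observed P95 per type, using the p95() helper from the earlier sketch
thresholds = {t: 1.5 * p95(d) for t, d in durations_by_type.items()}
print(thresholds)
```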

What if I do not use OpenClaw — does ClawPulse still work?

The instrumentation pattern (`cp_trace`, the SQL formulas, the alert YAML) works for any LLM-backed agent. The agent installer (`agent.sh`) is OpenClaw-aware but optional. You can ship metrics directly to `/api/dashboard/tasks` with any HTTP client. See the demo or start your trial to test it on your stack.

A 30-Minute Production Readiness Checklist

If you read this far, here is the action list to go from "no tracking" to "production-ready tracking" in one afternoon:

1. Drop the `cp_trace` context manager into your agent code (10 min).

2. Wrap your agent's main task handler with `with cp_trace(...)` (5 min).

3. Set `agent_id`, `task_type`, and `expected_tokens` on every call site (10 min — grep for entry points).

4. Run agents for 24–48h to populate baseline data (passive).

5. Open the ClawPulse monitoring dashboard and verify all five metrics are populating.

6. Apply the alert YAML above with `curl` — adjust `value` thresholds based on your 24h baseline, not the table defaults (5 min).

7. Schedule a weekly 10-minute review of the trend dashboard for the next four weeks. After that, you have enough history to tighten thresholds permanently.

Total effort: ~30 minutes of code + a 48-hour passive baseline. After this you can stop guessing whether your agents are healthy. Start your trial or book a demo to test the pipeline on your stack.

> MCP server in your stack? See Best practices for monitoring MCP server performance and How to prevent destructive behavior in MCP tool monitoring for the latest playbooks.

