
---
name: Coin Hunter
description: Hybrid short-term crypto trading system — combining mainstream coin scalping with meme-coin opportunistic rotation, backed by hourly review and continuous strategy iteration.
---

Coin Hunter

Overview

Coin Hunter is a short-term trading framework, not just a meme-coin scanner.

It operates on two tracks:

  1. Mainstream Short-Term (70%) — Trade liquid, high-volume coins (BTC, ETH, SOL, DOGE, PEPE, etc.) based on technical momentum, support/resistance, and market structure.
  2. Meme / Runner-Coin (妖币) Rotation (30%) — Opportunistically rotate into breakout meme coins when narrative heat, volume, and timing align.

Core principle:

  • Profit maximization through concentration + discipline.
  • Runner coins appear by chance, not on demand (妖币可遇不可求) — when a runner appears, capture it. When none exists, do not force trades; instead, scalp mainstream coins or sit in USDT.
  • Every decision is logged, and every hour is reviewed for quality and parameter tuning.

Portfolio-first rule

Always check the user's actual portfolio state under ~/.coinhunter/ before giving trade advice or executing orders.

Files to inspect:

  • positions.json
  • accounts.json
  • logs/decisions_YYYYMMDD.jsonl
  • logs/trades_YYYYMMDD.jsonl
  • reviews/review_YYYYMMDD_HHMMSS.json

Anchor all advice to the user's real balances, average costs, exchange, and current exposure.
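A minimal loader for this state, as a sketch (the file names match the layout above; the helper name and the fallback to empty dicts are assumptions, the JSON contents are whatever the CLI actually writes):

```python
import json
import os

def load_portfolio(root="~/.coinhunter"):
    """Read the user's portfolio state files, tolerating missing ones.

    Sketch only: file names follow the layout listed above, but the
    schema of each file is defined by the coinhunter CLI, not here.
    """
    root = os.path.expanduser(root)

    def read(name):
        path = os.path.join(root, name)
        if not os.path.exists(path):
            return {}
        with open(path) as f:
            return json.load(f)

    return {"positions": read("positions.json"), "accounts": read("accounts.json")}
```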

Supported modes

  1. Single-coin triage — analyze a specific holding (mainstream or meme).
  2. Active discovery — scan for the best short-term opportunity across both mainstream and meme sectors.
  3. Execution — run trades via the CLI (coinhunter exec), evaluating whether to hold, sell, or rebalance.
  4. Review — generate an hourly report on decision quality, PnL, and recommended parameter adjustments.

Scientific analysis checklist (mandatory before every trade decision)

Before executing or recommending any action, answer:

  1. Trend posture — Is price above/below short-term MAs (1h/4h)?
  2. Volume-price fit — Is volume expanding with the move or diverging?
  3. Key levels — Where is the next support/resistance? How much room to run?
  4. Market context — Is BTC/ETH supportive or contradictory?
  5. Opportunity cost — Is holding the current coin better than switching to a new one or sitting in USDT?
  6. Time window — Is this a good entry/exit time (liquidity, session, news flow)?

Read references/short-term-trading-framework.md before every active decision pass.
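One way to enforce the six-question checklist mechanically before any order is submitted (the key names are illustrative, not part of the CLI):

```python
# Keys for the six-question checklist; names are illustrative.
CHECKLIST = [
    "trend_posture",      # 1. price vs short-term MAs (1h/4h)
    "volume_price_fit",   # 2. volume expanding with the move or diverging
    "key_levels",         # 3. next support/resistance and room to run
    "market_context",     # 4. BTC/ETH supportive or contradictory
    "opportunity_cost",   # 5. hold vs switch vs USDT
    "time_window",        # 6. liquidity, session, news flow
]

def checklist_complete(answers: dict) -> bool:
    """True only when every one of the six questions has a non-empty answer."""
    return all(answers.get(key) for key in CHECKLIST)
```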

Workflow

Discovery & Scanning

  1. Mainstream scan — Use coinhunter probe bybit-ticker or ccxt for liquid coins.
    • Look for: breakouts, volume spikes, S/R flips, trend alignment.
  2. Meme scan — Use web_search + dex-search / gecko-search for narrative heat.
    • Look for: accelerating attention, DEX flow, CEX listing rumors, social spread.
  3. Cross-compare — Score the top 3-5 candidates against current holdings.

Execution

  1. Read balances and positions.
  2. Pull market data for holdings and candidates.
  3. Run the 6-question scientific checklist.
  4. Decide: HOLD / SELL_ALL / REBALANCE / BUY.
  5. Execute via the CLI (coinhunter exec ...).
  6. Log the full decision context via the CLI execution.
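Step 6 can be sketched as a JSONL append. The `decision_id` and `schema_version` fields echo the structured-logging requirements in this document, but the exact field names here are assumptions:

```python
import json
import os
import uuid
from datetime import datetime, timezone

def log_decision(action, symbol, reasoning, log_dir="~/.coinhunter/logs"):
    """Append one decision record to the day's decisions JSONL file.

    Sketch only: field names are illustrative, not the CLI's actual schema.
    """
    log_dir = os.path.expanduser(log_dir)
    os.makedirs(log_dir, exist_ok=True)
    record = {
        "schema_version": 1,
        "decision_id": str(uuid.uuid4()),
        "ts": datetime.now(timezone.utc).isoformat(),
        "action": action,  # HOLD / SELL_ALL / REBALANCE / BUY
        "symbol": symbol,
        "reasoning": reasoning,
    }
    path = os.path.join(log_dir, f"decisions_{datetime.now(timezone.utc):%Y%m%d}.jsonl")
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```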

Review (every hour)

  1. Run coinhunter recap to analyze all decisions from the past hour.
  2. Compare decision prices to current prices.
  3. Flag patterns: missed runs, bad entries, over-trading, hesitation.
  4. Output recommendations for parameter or blacklist adjustments.
  5. Save the review report to ~/.coinhunter/reviews/.
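Step 2 above (comparing decision prices to current prices) can be sketched as follows; the record field names are assumptions about the decisions JSONL schema:

```python
def review_decisions(decisions, current_prices):
    """For each logged decision, compute price drift since the decision.

    decisions: list of dicts with at least `decision_id`, `symbol`, `price`.
    current_prices: mapping of symbol -> latest price.
    Records without a current price or decision price are skipped.
    """
    out = []
    for d in decisions:
        cur = current_prices.get(d.get("symbol"))
        if cur is None or not d.get("price"):
            continue
        out.append({
            "decision_id": d["decision_id"],
            "symbol": d["symbol"],
            "drift_pct": (cur - d["price"]) / d["price"] * 100,
        })
    return out
```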

Auto-trading architecture

| CLI Command | Purpose |
| --- | --- |
| `coinhunter exec` | Order execution layer (buy / flat / rotate / hold) |
| `coinhunter pre` | Lightweight threshold evaluator and trigger gate |
| `coinhunter review` | Generate compact review context for the agent |
| `coinhunter recap` | Hourly quality review and optimization suggestions |
| `coinhunter probe` | Market data fetcher |

Execution schedule

  • Trade bot — runs every 15-30 minutes via cronjob.
  • Review bot — runs every 1-12 hours via cronjob, depending on how much manual oversight is needed.

Low-cost cron architecture

When model cost or quota is tight, do not let every cron run perform full analysis from scratch.

Recommended pattern:

  1. Attach a lightweight Python script to the cron job (under ~/.hermes/scripts/) that fetches balances/tickers, computes hashes, and emits compact JSON context.
  2. Cache the last observed positions, top candidates, market regime, and last_deep_analysis_at under ~/.coinhunter/state/.
  3. Trigger full analysis only when one of these changes materially. Make the thresholds adaptive instead of fixed:
    • position structure changes (hard trigger)
    • per-position price/PnL moves beyond thresholds that widen for micro-capital / dust accounts and narrow during higher-volatility sessions
    • top candidate leadership changes materially, but discount this signal when free USDT is below actionable exchange minimums
    • BTC/ETH regime changes (hard trigger)
    • a max staleness timer forces refresh, with longer refresh windows for micro accounts to avoid pointless re-analysis
  4. In the cron prompt, if the injected context says should_analyze=false, respond with exactly [SILENT] and do not call tools.
  5. After a triggered deep-analysis pass completes, acknowledge it from the agent (for example by running the precheck script with an --ack flag) so the trigger is cleared.
  6. For even lower spend, move the high-frequency cadence outside Hermes cron entirely:
    • install a system crontab entry that runs a local gate script every 5-10 minutes
    • let that gate script run the lightweight precheck
    • only when should_analyze=true and no run is already queued, trigger the Hermes cron job via hermes cron run <job_id>
    • store a run_requested_at marker in ~/.coinhunter/state/precheck_state.json and clear it when the analysis acknowledges completion

This pattern preserves Telegram auto-delivery from Hermes cron while reducing model wakeups to trigger-only events.

Practical production notes for external gate mode

  • Put the external gate itself on system crontab (for example every 5 minutes) rather than on Hermes cron. That keeps the high-frequency loop completely local and model-free.
  • Keep the Hermes trading cron job on a low-frequency fallback schedule (for example once daily at 05:00 local time) so the main execution path remains trigger-driven.
  • Add a file lock around the external gate script so overlapping system-cron invocations cannot double-trigger.
  • Rotate ~/.coinhunter/logs/external_gate.log with logrotate (daily, keep ~14 compressed copies, copytruncate) and schedule the rotation a few minutes after the fallback Hermes cron run so they do not overlap.
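A logrotate policy matching the last bullet might look like this (the log path assumes a home directory of /home/user, as in the crontab example later in this file):

```
/home/user/.coinhunter/logs/external_gate.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
```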

Building your own gate (step-by-step)

If you want to replicate this low-cost trigger architecture, here is a complete blueprint.

1. File layout

User runtime state lives under ~/.coinhunter/state/:

~/.coinhunter/state/
  precheck_state.json          # last snapshot + trigger flags
  external_gate.lock           # flock file for external gate

2. State schema (precheck_state.json)

{
  "last_positions_hash": "sha256_of_positions_json",
  "last_top_candidates_hash": "sha256_of_top_5_coins",
  "last_btc_regime": "bullish|neutral|bearish",
  "last_deep_analysis_at": "2026-04-15T11:00:00Z",
  "free_usdt": 12.50,
  "account_total_usdt": 150.00,
  "run_requested_at": null,
  "run_acknowledged_at": "2026-04-15T11:05:00Z",
  "volatility_session": "low|medium|high",
  "staleness_hours": 2.0
}

3. Precheck script logic (pseudocode)

import json, hashlib, os
from datetime import datetime, timezone

STATE_PATH = os.path.expanduser("~/.coinhunter/state/precheck_state.json")
POSITIONS_PATH = os.path.expanduser("~/.coinhunter/positions.json")

def load_json(path):
    with open(path) as f:
        return json.load(f)

def save_json(path, obj):
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(obj, f, indent=2)
    os.replace(tmp, path)

def compute_hash(obj):
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()[:16]

def adaptive_thresholds(account_total_usdt, volatility_session):
    # Micro accounts get wider thresholds; high volatility narrows them
    base_price = 0.03 if account_total_usdt >= 100 else 0.08
    base_pnl  = 0.05 if account_total_usdt >= 100 else 0.12
    vol_mult = {"low": 1.2, "medium": 1.0, "high": 0.7}.get(volatility_session, 1.0)
    return base_price * vol_mult, base_pnl * vol_mult

def should_analyze():
    state = load_json(STATE_PATH) if os.path.exists(STATE_PATH) else {}
    positions = load_json(POSITIONS_PATH)
    # ... fetch current tickers, BTC regime, free USDT here ...
    new_pos_hash = compute_hash(positions.get("positions", []))
    new_btc_regime = "neutral"  # replace with actual analysis
    new_free = positions.get("balances", {}).get("USDT", {}).get("free", 0)
    total = positions.get("account_total_usdt", 0)
    volatility = "medium"  # replace with actual session metric

    price_thr, pnl_thr = adaptive_thresholds(total, volatility)

    triggers = []
    if new_pos_hash != state.get("last_positions_hash"):
        triggers.append("position_change")
    if new_btc_regime != state.get("last_btc_regime"):
        triggers.append("btc_regime_change")
    # ... check per-position price/PnL drift vs thresholds ...
    # ... check candidate leadership change (skip if free_usdt < min_actionable) ...
    last = state.get("last_deep_analysis_at", "2000-01-01T00:00:00+00:00")
    # datetime.fromisoformat rejects a trailing "Z" before Python 3.11
    staleness = (datetime.now(timezone.utc) - datetime.fromisoformat(last.replace("Z", "+00:00"))).total_seconds() / 3600.0
    max_staleness = 4.0 if total >= 100 else 8.0
    if staleness >= max_staleness:
        triggers.append("staleness")

    decision = bool(triggers)
    state.update({
        "last_positions_hash": new_pos_hash,
        "last_btc_regime": new_btc_regime,
        "free_usdt": new_free,
        "account_total_usdt": total,
        "volatility_session": volatility,
        "staleness_hours": staleness,
    })
    if decision:
        state["run_requested_at"] = datetime.now(timezone.utc).isoformat()
    save_json(STATE_PATH, state)
    return {"should_analyze": decision, "triggers": triggers, "state": state}

if __name__ == "__main__":
    result = should_analyze()
    print(json.dumps(result))

4. Hermes cron job configuration

Attach the precheck script as the script field of the cron job so its JSON output is injected into the prompt:

{
  "id": "coinhunter-trade",
  "schedule": "*/15 * * * *",
  "prompt": "You are Coin Hunter. If the injected context says should_analyze=false, respond with exactly [SILENT] and do nothing. Otherwise, read ~/.coinhunter/positions.json, run the scientific checklist, decide HOLD/SELL/REBALANCE/BUY, and execute via the `coinhunter` CLI. After finishing, run `coinhunter pre --ack` to clear the trigger.",
  "script": "coinhunter_precheck.py",
  "deliver": "telegram",
  "model": "kimi-for-coding"
}

Add an --ack handler to the precheck script (or a separate ack script) that sets run_acknowledged_at and clears run_requested_at so the gate does not re-fire until the next true trigger.
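A minimal, self-contained `--ack` handler sketch (the state path matches the schema section above; the atomic temp-file write mirrors the precheck's `save_json`):

```python
import json
import os
import sys
from datetime import datetime, timezone

STATE_PATH = os.path.expanduser("~/.coinhunter/state/precheck_state.json")

def acknowledge(state_path=STATE_PATH):
    """Mark the triggered analysis as complete so the gate can re-arm."""
    with open(state_path) as f:
        state = json.load(f)
    state["run_acknowledged_at"] = datetime.now(timezone.utc).isoformat()
    state["run_requested_at"] = None
    tmp = state_path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f, indent=2)
    os.replace(tmp, state_path)  # atomic rename, never a half-written file

if __name__ == "__main__" and "--ack" in sys.argv:
    acknowledge()
```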

5. External gate (optional, for even lower cost)

If you want to run the precheck every 5 minutes without waking Hermes at all:

External gate pseudocode (run from ~/.hermes/scripts/):

import fcntl, os, subprocess, json, sys

LOCK_PATH = os.path.expanduser("~/.coinhunter/state/external_gate.lock")
PRECHECK = os.path.expanduser("~/.hermes/scripts/coinhunter_precheck.py")
JOB_ID = "coinhunter-trade"

with open(LOCK_PATH, "w") as f:
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit(0)  # another instance is running

    # Snapshot the pending-run flag BEFORE the precheck runs: the precheck
    # itself sets run_requested_at, so checking it afterwards would always
    # see a pending run and never trigger.
    state_path = os.path.expanduser("~/.coinhunter/state/precheck_state.json")
    already_pending = False
    if os.path.exists(state_path):
        with open(state_path) as sf:
            already_pending = bool(json.load(sf).get("run_requested_at"))

    out = subprocess.run([sys.executable, PRECHECK], capture_output=True, text=True, check=False)
    result = json.loads(out.stdout or "{}")
    if result.get("should_analyze") and not already_pending:
        subprocess.run(["hermes", "cron", "run", JOB_ID], check=False)

System crontab entry:

*/5 * * * * /usr/bin/python3 /home/user/.hermes/scripts/coinhunter_external_gate.py >> /home/user/.coinhunter/logs/external_gate.log 2>&1

With this setup, the model is only invoked when a material market change occurs—preserving intelligence while cutting routine cost by 80-95%.

Production hardening (mandatory)

The live trading stack must include these safeguards:

  1. Idempotency — every decision carries a decision_id. The executor checks ~/.coinhunter/executions.json before submitting orders to prevent duplicate trades.
  2. Exchange reconciliation — before every run, pull real Binance balances and recent trades to sync positions.json. Do not trust local state alone.
  3. File locking + atomic writes — positions.json and executions.json are updated under a file lock and written to a temp file before atomic rename.
  4. Order precision validation — read Binance lotSize, stepSize, and minNotional filters via ccxt before any order. Round quantities correctly and reject orders below minimums.
  5. Fee buffer — keep ~2%-5% USDT unallocated so that slippage and fees do not cause "insufficient balance" rejections.
  6. Structured logging — every decision, trade, error, and balance snapshot is written as JSONL under ~/.coinhunter/logs/ with schema_version and decision_id.
  7. Error logging — failed API calls, rejected orders, and reconciliation mismatches are captured in logs/errors_YYYYMMDD.jsonl and fed into the hourly review.
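The rounding logic of point 4 can be sketched independently of ccxt, using `Decimal` to avoid float drift. Parameter names mirror Binance's LOT_SIZE and MIN_NOTIONAL filters but are illustrative, not the ccxt field names:

```python
from decimal import Decimal

def validate_order(qty, price, step_size, min_qty, min_notional):
    """Round qty down to the exchange step size and reject sub-minimum orders.

    Sketch of Binance-style LOT_SIZE / MIN_NOTIONAL checks; returns the
    rounded quantity, or None if the order would be rejected.
    """
    step = Decimal(str(step_size))
    # Floor to the nearest step: never round up past the requested size.
    rounded = (Decimal(str(qty)) // step) * step
    if rounded < Decimal(str(min_qty)):
        return None  # below minimum quantity
    if rounded * Decimal(str(price)) < Decimal(str(min_notional)):
        return None  # below minimum notional value
    return float(rounded)
```

In production the filter values themselves should still come from the exchange (for example via ccxt's loaded market metadata), not hard-coded constants.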

Safety rules

  • No leverage/futures when capital < $200.
  • When capital < $50, concentrate into 1 position only.
  • Always leave 2%-5% USDT buffer for fees and slippage.
  • Blacklist updates should be driven by review findings.

Position sizing

| Total Capital | Strategy |
| --- | --- |
| < $50 | Single-coin concentration (mainstream or meme) |
| $50–$200 | 60% mainstream + 40% meme, max 2 positions |
| > $200 | Up to 3 positions with stricter risk per position |
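The tiers above can be encoded as a small helper (a sketch; the return keys are illustrative, and `None` means the tier does not prescribe a split):

```python
def sizing_tier(total_usdt: float) -> dict:
    """Map total capital to the position-sizing tier above."""
    if total_usdt < 50:
        return {"max_positions": 1, "mainstream_pct": None, "meme_pct": None}
    if total_usdt <= 200:
        return {"max_positions": 2, "mainstream_pct": 0.60, "meme_pct": 0.40}
    return {"max_positions": 3, "mainstream_pct": None, "meme_pct": None}
```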

Output style

For live decisions

Concise Telegram-style report:

  • Current holdings + live PnL
  • Top 1-2 opportunities found
  • Decision and exact reasoning
  • Action confirmation (or [DRY RUN] note)

For hourly reviews

Use references/review-template.md structure:

  • Decision quality breakdown
  • Market context
  • Strategy adjustments recommended
  • Action items for next hour

References

  • Read references/provider-playbook.md for data source selection.
  • Read references/user-data-layout.md for private state management.
  • Read references/short-term-trading-framework.md for the hybrid trading framework.
  • Read references/review-template.md for hourly report formatting.
  • Read references/scam-signals.md when evaluating meme coins.