Compare commits

...

22 Commits

Author SHA1 Message Date
9224621d7e feat: add CLI aliases, flatten trade commands, and introduce --doc
- Add `coin` entry-point alias alongside `coinhunter`
- Add short aliases for all commands (e.g., a/acc, m, opp/o, b, s)
- Flatten `buy` and `sell` to top-level commands; remove `trade` parent
- Add `--doc` flag to print output schema and field descriptions per command
- Update README and tests
- Bump version to 2.1.0

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-17 00:40:24 +08:00
6923013694 fix: remove recvWindow from exchange_info wrapper
Binance exchangeInfo endpoint does not accept recvWindow, causing
RuntimeError when calling opportunity scan or any command that hits
exchange_info().

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 20:21:45 +08:00
0f862957b0 refactor: remove all futures-related capabilities
Delete USDT-M futures support since the user's Binance API key does not
support futures trading. This simplifies the CLI to spot-only:

- Remove futures client wrapper (um_futures_client.py)
- Remove futures trade commands and close position logic
- Simplify account service to spot-only (no market_type field)
- Remove futures references from opportunity service
- Update README and tests to reflect spot-only architecture
- Bump version to 2.0.7

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 20:10:15 +08:00
680bd3d33c fix: allow -a/--agent flag after subcommands
- Preprocess argv to reorder agent flag before subcommand parsing.
- Enables usage like `coinhunter account overview -s -f -a`.
- Bump version to 2.0.6.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 19:19:12 +08:00
f06a1a34f1 feat: Braille spinner, shell completions, TUI polish
- Add with_spinner context manager with cyan Braille animation for human mode.
- Wrap all query/execution commands in cli.py with loading spinners.
- Integrate shtab: auto-install shell completions during init for zsh/bash.
- Add `completion` subcommand for manual script generation.
- Fix stale output_format default in DEFAULT_CONFIG.
- Add help descriptions to all second-level subcommands.
- Bump version to 2.0.5.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 19:12:23 +08:00
536425e8ea feat: add Braille spinner, shell completions, and TUI polish
- Add with_spinner context manager with cyan Braille animation for human mode.
- Wrap all query/execution commands in cli.py with loading spinners.
- Integrate shtab: auto-install shell completions during init for zsh/bash.
- Add `completion` subcommand for manual script generation.
- Fix stale output_format default in DEFAULT_CONFIG (json → tui).
- Add help descriptions to all second-level subcommands.
- Version 2.0.4.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 19:11:40 +08:00
b857ea33f3 refactor: rename update command to upgrade
- Align CLI verb with pipx/pip terminology (`pipx upgrade`).
- Rename internal `self_update` to `self_upgrade` for consistency.
- Update README and tests accordingly.
- Bump version to 2.0.4.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 18:50:14 +08:00
cdc90a9be1 fix: clean up update TUI output and suppress noisy stderr
- Add dedicated render branch for self_update results.
- Hide progress-only stderr on success to eliminate pipx noise.
- Remove generic "RESULT" heading from fallback key-value output.
- Bump version to 2.0.3.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 18:47:08 +08:00
9395978440 feat: human-friendly TUI output with --agent flag for JSON/compact
- Replace default JSON output with styled TUI tables and ANSI colors.
- Add -a/--agent global flag: small payloads → JSON, large → pipe-delimited compact.
- Update README to reflect new output behavior and remove JSON-first references.
- Bump version to 2.0.2.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 18:36:23 +08:00
b78845eb43 feat: add self-update command and bump to 2.0.1
- Add `coinhunter update` CLI command for pipx/pip upgrade
- README: document update behavior and recommend pipx install
- Dynamic version badge with cacheSeconds=60
- Version bump: 2.0.0 → 2.0.1

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 18:00:43 +08:00
52cd76a750 refactor: rewrite to CoinHunter V2 flat architecture
Replace the V1 commands/services split with a flat, direct architecture:
- cli.py dispatches directly to service functions
- New services: account, market, trade, opportunity
- Thin Binance wrappers: spot_client, um_futures_client
- Add audit logging, runtime paths, and TOML config
- Remove legacy V1 code: commands/, precheck, review engine, smart executor
- Add ruff + mypy toolchain and fix edge cases in trade params

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 17:22:29 +08:00
3819e35a7b docs: recommend pipx for end-user installation to avoid externally-managed-environment errors
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 01:37:31 +08:00
72f5bbcbb7 docs: swap header emoji to coin
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 01:26:06 +08:00
da93f727e8 docs: refresh README with current architecture and beautified title
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 01:24:43 +08:00
62c40a9776 refactor: address high-priority debt and publish to PyPI
- Fix TOCTOU race conditions by wrapping read-modify-write cycles
  under single-file locks in execution_state, portfolio_service,
  precheck_state, state_manager, and precheck_service.
- Add missing test coverage (96 tests total):
  - test_review_service.py (15 tests)
  - test_check_api.py (6 tests)
  - test_external_gate.py main branches (+10 tests)
  - test_trade_execution.py new commands (+8 tests)
- Unify all agent-consumed JSON messages to English.
- Config-ize hardcoded values (volume filter, schema_version) via
  get_user_config with sensible defaults.
- Add 1-hour TTL to exchange cache with force_new override.
- Add ruff and mypy to dev dependencies; fix all type errors.
- Add __all__ declarations to 11 service modules.
- Sync README with new commands, config tuning docs, and PyPI badge.
- Publish package as coinhunter==1.0.0 on PyPI with MIT license.
- Deprecate coinhunter-cli==1.0.1 with runtime warning.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-16 01:21:27 +08:00
01bb54dee5 chore: migrate gstack from vendored to team mode 2026-04-15 22:49:40 +08:00
759086ebd7 chore: add gstack skill routing rules to CLAUDE.md 2026-04-15 22:47:31 +08:00
5fcdd015e1 chore: remove auto-trader command and aliases from CLI 2026-04-15 22:21:36 +08:00
f59388f69a docs: refresh README with current architecture and beautified title 2026-04-15 21:31:08 +08:00
a61c329496 refactor: split precheck_core and migrate commands to commands/
- Split 900-line precheck_core.py into 9 focused modules:
  precheck_constants, time_utils, data_utils, state_manager,
  market_data, candidate_scoring, snapshot_builder,
  adaptive_profile, trigger_analyzer
- Remove dead auto_trader command and module
- Migrate 7 root-level command modules into commands/:
  check_api, doctor, external_gate, init_user_state,
  market_probe, paths, rotate_external_gate_log
- Keep thin backward-compatible facades in root package
- Update cli.py MODULE_MAP to route through commands/
- Verified compileall and smoke tests for all key commands
2026-04-15 21:29:18 +08:00
db981e8e5f refactor: finish facade migration for precheck and executor 2026-04-15 20:50:38 +08:00
e6274d3a00 feat: polish exec cli ergonomics and output 2026-04-15 20:28:24 +08:00
52 changed files with 2558 additions and 3412 deletions

.gitignore vendored

@@ -1,7 +1,35 @@
# Python
__pycache__/
*.pyc
*.py[cod]
*$py.class
.pytest_cache/
.mypy_cache/
.ruff_cache/
.coverage
htmlcov/
# Virtual environments
.venv/
venv/
# Build artifacts
dist/
build/
*.egg-info/
# IDE / editors
.vscode/
.idea/
*.swp
*.swo
*~
# OS files
.DS_Store
# Secrets / local env
.env
*.env
# Claude local overrides
.claude/skills/gstack/

CLAUDE.md Normal file

@@ -0,0 +1,62 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Development commands
- **Install (dev):** `pip install -e ".[dev]"`
- **Run CLI locally:** `python -m coinhunter --help`
- **Run tests:** `pytest` or `python -m pytest tests/`
- **Run single test file:** `pytest tests/test_cli.py -v`
- **Lint:** `ruff check src tests`
- **Format:** `ruff format src tests`
- **Type-check:** `mypy src`
## Architecture
CoinHunter V2 is a Binance-first crypto trading CLI with a flat, direct architecture:
- **`src/coinhunter/cli.py`** — Single entrypoint (`main()`). Uses `argparse` to parse commands and directly dispatches to service functions. There is no separate `commands/` adapter layer.
- **`src/coinhunter/services/`** — Contains all domain logic:
- `account_service.py` — balances, positions, overview
- `market_service.py` — tickers, klines, scan universe, symbol normalization
- `trade_service.py` — spot and USDT-M futures order execution
- `opportunity_service.py` — portfolio recommendations and market scanning
- **`src/coinhunter/binance/`** — Thin wrappers around official Binance connectors:
- `spot_client.py` wraps `binance.spot.Spot`
- `um_futures_client.py` wraps `binance.um_futures.UMFutures`
Both normalize request errors into `RuntimeError` and handle single/multi-symbol ticker responses.
- **`src/coinhunter/config.py`** — `load_config()`, `get_binance_credentials()`, `ensure_init_files()`.
- **`src/coinhunter/runtime.py`** — `RuntimePaths`, `get_runtime_paths()`, `print_json()`.
- **`src/coinhunter/audit.py`** — Writes JSONL audit events to dated files.
## Runtime and environment
User data lives in `~/.coinhunter/` by default (override with `COINHUNTER_HOME`):
- `config.toml` — runtime, binance, trading, and opportunity settings
- `.env` — `BINANCE_API_KEY` and `BINANCE_API_SECRET`
- `logs/audit_YYYYMMDD.jsonl` — structured audit log
Run `coinhunter init` to generate the config and env templates.
## Key conventions
- **Symbol normalization:** `market_service.normalize_symbol()` strips `/`, `-`, `_`, and uppercases the symbol. CLI inputs like `ETH/USDT`, `eth-usdt`, and `ETHUSDT` are all normalized to `ETHUSDT`.
- **Dry-run behavior:** Trade commands support `--dry-run`. If omitted, the default falls back to `trading.dry_run_default` in `config.toml`.
- **Client injection:** Service functions accept `spot_client` / `futures_client` as keyword arguments. This enables easy unit testing with mocks.
- **Error handling:** Binance client wrappers catch `requests.exceptions.SSLError` and `RequestException` and re-raise as human-readable `RuntimeError`. The CLI catches all exceptions in `main()` and prints `error: {message}` to stderr with exit code 1.
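A minimal, illustrative sketch of the normalization rule (the canonical implementation is `market_service.normalize_symbol()`; this standalone version only mirrors the documented behavior):

```python
def normalize_symbol(raw: str) -> str:
    """Strip separators and uppercase, so ETH/USDT, eth-usdt,
    and ETHUSDT all normalize to ETHUSDT."""
    for sep in ("/", "-", "_"):
        raw = raw.replace(sep, "")
    return raw.upper()

print(normalize_symbol("eth-usdt"))  # ETHUSDT
```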
## Testing
Tests live in `tests/` and use `unittest.TestCase` with `unittest.mock.patch`. The test suite covers:
- `test_cli.py` — parser smoke tests and dispatch behavior
- `test_config_runtime.py` — config loading, env parsing, path resolution
- `test_account_market_services.py` — balance/position/ticker/klines logic with mocked clients
- `test_trade_service.py` — spot and futures trade execution paths
- `test_opportunity_service.py` — portfolio and scan scoring logic
## Notes
- `AGENTS.md` in this repo is stale and describes a prior V1 architecture (commands/, smart executor, precheck, review engine). Do not rely on it.

LICENSE Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2026 Tacit Lab
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md

@@ -1,222 +1,169 @@
# coinhunter-cli
<p align="center">
<strong>The executable CLI layer for CoinHunter.</strong><br/>
Runtime-safe trading operations, precheck orchestration, review tooling, and market probes.
<img src="https://capsule-render.vercel.app/api?type=waving&color=0:F7B93E,100:0f0f0f&height=220&section=header&text=%F0%9F%AA%99%20CoinHunter&fontSize=65&fontColor=fff&animation=fadeIn&fontAlignY=32&desc=Trade%20Smarter%20%C2%B7%20Execute%20Faster%20%C2%B7%20Sleep%20Better&descAlignY=55&descSize=18" alt="CoinHunter Banner" />
</p>
<p align="center">
<img alt="Python" src="https://img.shields.io/badge/python-3.10%2B-blue" />
<img alt="Status" src="https://img.shields.io/badge/status-active%20development-orange" />
<img alt="Architecture" src="https://img.shields.io/badge/architecture-runtime%20%2B%20commands%20%2B%20services-6f42c1" />
<img src="https://readme-typing-svg.demolab.com?font=JetBrains+Mono&weight=500&size=22&duration=2800&pause=800&color=F7B93E&center=true&vCenter=true&width=600&lines=Binance-first+Trading+CLI;Account+%E2%86%92+Market+%E2%86%92+Trade+%E2%86%92+Opportunity;Human-friendly+TUI+%7C+Agent+Mode" alt="Typing SVG" />
</p>
## Why this repo exists
<p align="center">
<strong>A Binance-first crypto trading CLI for balances, market data, opportunity scanning, and execution.</strong>
</p>
CoinHunter is evolving from a loose bundle of automation scripts into a proper installable command-line tool.
<p align="center">
<a href="https://pypi.org/project/coinhunter/"><img src="https://img.shields.io/pypi/v/coinhunter?style=flat-square&color=F7B93E&labelColor=1a1a1a&cacheSeconds=60" /></a>
<a href="#"><img src="https://img.shields.io/badge/python-3.10%2B-3776ab?style=flat-square&logo=python&logoColor=white&labelColor=1a1a1a" /></a>
<a href="#"><img src="https://img.shields.io/badge/tests-passing-22c55e?style=flat-square&labelColor=1a1a1a" /></a>
<a href="#"><img src="https://img.shields.io/badge/lint-ruff%20%2B%20mypy-8b5cf6?style=flat-square&labelColor=1a1a1a" /></a>
</p>
This repository is the tooling layer:
---
- Code and executable behavior live here.
- User runtime state lives in `~/.coinhunter/` by default.
- Hermes skills can call this CLI instead of embedding large script collections.
- Runtime paths can be overridden with `COINHUNTER_HOME`, `HERMES_HOME`, `COINHUNTER_ENV_FILE`, and `HERMES_BIN`.
## Install
In short:
- `coinhunter-cli` = tool
- CoinHunter skill = strategy / workflow / prompting layer
- `~/.coinhunter` = user data, logs, state, reviews
## Current architecture
```text
coinhunter-cli/
├── src/coinhunter/
│ ├── cli.py # top-level command router
│ ├── runtime.py # runtime paths + env loading
│ ├── doctor.py # diagnostics
│ ├── paths.py # runtime path inspection
│ ├── commands/ # thin CLI adapters
│ ├── services/ # orchestration / application services
│ └── *.py # compatibility modules + legacy logic under extraction
└── README.md
```
The repo is actively being refactored toward a cleaner split:
- `commands/` → argument / CLI adapters
- `services/` → orchestration and application workflows
- `runtime/` → paths, env, files, locks, config
- future `domain/` → trading and precheck core logic
## Implemented command/service splits
The first extraction pass is already live:
- `smart-executor``commands.smart_executor` + `services.smart_executor_service`
- `precheck``commands.precheck` + `services.precheck_service`
- `precheck` internals now also have dedicated service modules for:
- `services.precheck_state`
- `services.precheck_snapshot`
- `services.precheck_analysis`
This keeps behavior stable while giving the codebase a cleaner landing zone for deeper refactors.
## Installation
Editable install:
```bash
pip install -e .
```
Run directly after install:
For end users, install from PyPI with [pipx](https://pipx.pypa.io/) (recommended) to avoid polluting your system Python:
```bash
pipx install coinhunter
coinhunter --help
```
You can also use the shorter `coin` alias:
```bash
coin --help
```
Check the installed version:
```bash
coinhunter --version
```
## Quickstart
To update later:
Initialize user state:
```bash
pipx upgrade coinhunter
```
## Initialize runtime
```bash
coinhunter init
coinhunter init --force
```
Inspect runtime wiring:
This creates:
- `~/.coinhunter/config.toml`
- `~/.coinhunter/.env`
- `~/.coinhunter/logs/`
If you are using **zsh** or **bash**, `init` will also generate and install shell completion scripts automatically, and update your rc file (`~/.zshrc` or `~/.bashrc`) if needed.
`config.toml` stores runtime and strategy settings. `.env` stores:
```bash
coinhunter paths
coinhunter doctor
BINANCE_API_KEY=
BINANCE_API_SECRET=
```
Validate exchange credentials:
Override the default home directory with `COINHUNTER_HOME`.
## Commands
By default, CoinHunter prints human-friendly TUI tables. Add `--agent` to any command to get JSON output (or compact pipe-delimited tables for large datasets).
Add `--doc` to any command to see its output schema and field descriptions (great for AI agents):
```bash
coinhunter check-api
coin buy --doc
coin market klines --doc
```
Run precheck / gate plumbing:
### Examples
```bash
coinhunter precheck
coinhunter precheck --mark-run-requested "external-gate queued cron run"
coinhunter precheck --ack "analysis finished"
# Account (aliases: a, acc)
coinhunter account overview
coinhunter account overview --agent
coin a ov
coin acc bal
coin a pos
# Market (aliases: m)
coinhunter market tickers BTCUSDT ETH/USDT sol-usdt
coinhunter market klines BTCUSDT ETHUSDT --interval 1h --limit 50
coin m tk BTCUSDT ETHUSDT
coin m k BTCUSDT -i 1h -l 50
# Trade (buy / sell are now top-level commands)
coinhunter buy BTCUSDT --quote 100 --dry-run
coinhunter sell BTCUSDT --qty 0.01 --type limit --price 90000
coin b BTCUSDT -Q 100 -d
coin s BTCUSDT -q 0.01 -t limit -p 90000
# Opportunities (aliases: opp, o)
coinhunter opportunity portfolio
coinhunter opportunity scan
coinhunter opportunity scan --symbols BTCUSDT ETHUSDT SOLUSDT
coin opp pf
coin o scan -s BTCUSDT ETHUSDT
# Self-upgrade
coinhunter upgrade
coin upgrade
# Shell completion (manual)
coinhunter completion zsh > ~/.zsh/completions/_coinhunter
coinhunter completion bash > ~/.local/share/bash-completion/completions/coinhunter
```
Inspect balances or execute trading actions:
`upgrade` will try `pipx upgrade coinhunter` first, and fall back to `pip install --upgrade coinhunter` if pipx is not available.
```bash
coinhunter smart-executor balances
coinhunter smart-executor status
coinhunter smart-executor hold
coinhunter smart-executor buy ENJUSDT 50
coinhunter smart-executor sell-all ENJUSDT
```
## Architecture
Generate review data:
CoinHunter V2 uses a flat, direct architecture:
```bash
coinhunter review-context 12
coinhunter review-engine 12
```
| Layer | Responsibility | Key Files |
|-------|----------------|-----------|
| **CLI** | Single entrypoint, argument parsing | `cli.py` |
| **Binance** | Thin API wrappers with unified error handling | `binance/spot_client.py` |
| **Services** | Domain logic | `services/account_service.py`, `services/market_service.py`, `services/trade_service.py`, `services/opportunity_service.py` |
| **Config** | TOML config, `.env` secrets, path resolution | `config.py` |
| **Runtime** | Paths, TUI/JSON/compact output | `runtime.py` |
| **Audit** | Structured JSONL logging | `audit.py` |
Probe external market data:
## Logging
```bash
coinhunter market-probe bybit-ticker BTCUSDT
coinhunter market-probe bybit-klines BTCUSDT 60 20
```
## Runtime model
Default layout:
Audit logs are written to:
```text
~/.coinhunter/
├── accounts.json
├── config.json
├── executions.json
├── notes.json
├── positions.json
├── watchlist.json
├── logs/
├── reviews/
└── state/
~/.coinhunter/logs/audit_YYYYMMDD.jsonl
```
Credential loading:
Events include:
- Binance credentials are read from `~/.hermes/.env` by default.
- `COINHUNTER_ENV_FILE` can point to a different env file.
- `hermes` is resolved from `PATH` first, then `~/.local/bin/hermes`, unless `HERMES_BIN` overrides it.
- `trade_submitted`
- `trade_filled`
- `trade_failed`
- `opportunity_portfolio_generated`
- `opportunity_scan_generated`
## Useful commands
## Development
### Diagnostics
Clone the repo and install in editable mode:
```bash
coinhunter doctor
coinhunter paths
coinhunter check-api
git clone https://git.tacitlab.cc/TacitLab/coinhunter-cli.git
cd coinhunter-cli
pip install -e ".[dev]"
```
### Trading and execution
Run quality checks:
```bash
coinhunter smart-executor balances
coinhunter smart-executor status
coinhunter smart-executor hold
coinhunter smart-executor rebalance FROMUSDT TOUSDT
pytest tests/ # run tests
ruff check src tests # lint
mypy src # type check
```
### Precheck and orchestration
```bash
coinhunter precheck
coinhunter external-gate
coinhunter rotate-external-gate-log
```
### Review and market research
```bash
coinhunter review-context 12
coinhunter review-engine 12
coinhunter market-probe bybit-ticker BTCUSDT
```
## Development notes
This project is intentionally moving in small, safe refactor steps:
1. Separate runtime concerns from hardcoded paths.
2. Move command dispatch into thin adapters.
3. Introduce orchestration services.
4. Extract reusable domain logic from large compatibility modules.
5. Keep cron / Hermes integration stable during migration.
That means some compatibility modules still exist, but the direction is deliberate.
## Near-term roadmap
- Extract more logic from `smart_executor.py` into dedicated execution / portfolio services.
- Continue shrinking `precheck.py` by moving snapshot and analysis internals into reusable modules.
- Add `domain/` models for positions, signals, and trigger analysis.
- Add tests for runtime paths, precheck service behavior, and CLI stability.
- Evolve toward a more polished installable CLI workflow.
## Philosophy
CoinHunter should become:
- more professional
- more maintainable
- safer to operate
- easier for humans and agents to call
- less dependent on prompt-only correctness
This repo is where that evolution happens.


@@ -3,23 +3,56 @@ requires = ["setuptools>=68", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "coinhunter-cli"
version = "0.1.0"
description = "CoinHunter trading CLI with user runtime data in ~/.coinhunter"
name = "coinhunter"
version = "2.1.0"
description = "Binance-first trading CLI for balances, market data, opportunity scanning, and execution."
readme = "README.md"
license = {text = "MIT"}
requires-python = ">=3.10"
dependencies = [
"ccxt>=4.4.0"
"binance-connector>=3.9.0",
"shtab>=1.7.0",
"tomli>=2.0.1; python_version < '3.11'",
]
authors = [
{name = "Tacit Lab", email = "ouyangcarlos@gmail.com"}
]
[project.optional-dependencies]
dev = [
"pytest>=8.0",
"ruff>=0.5.0",
"mypy>=1.10.0",
]
[project.scripts]
coinhunter = "coinhunter.cli:main"
coin = "coinhunter.cli:main"
[tool.setuptools]
package-dir = {"" = "src"}
[tool.setuptools.packages.find]
where = ["src"]
[tool.pytest.ini_options]
testpaths = ["tests"]
[tool.ruff]
target-version = "py310"
line-length = 120
[tool.ruff.lint]
select = ["E", "F", "I", "UP", "W"]
ignore = ["E501"]
[tool.ruff.lint.pydocstyle]
convention = "google"
[tool.mypy]
python_version = "3.10"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = true
ignore_missing_imports = true
exclude = [".venv", "build"]


@@ -1 +1,8 @@
__version__ = "0.1.0"
"""CoinHunter V2."""
try:
    from importlib.metadata import version

    __version__ = version("coinhunter")
except Exception:  # pragma: no cover
    __version__ = "unknown"
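The snippet above resolves `__version__` from installed package metadata rather than a hardcoded string. The same lookup can be sketched as a standalone helper (the helper name is illustrative, not part of the package):

```python
from importlib.metadata import PackageNotFoundError, version


def package_version(name: str) -> str:
    # Fall back to a sentinel when the package is not installed
    # (e.g. when running from a source checkout).
    try:
        return version(name)
    except PackageNotFoundError:
        return "unknown"


print(package_version("definitely-not-installed-xyz"))  # unknown
```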


@@ -1,2 +1,3 @@
from .cli import main
raise SystemExit(main())

src/coinhunter/audit.py Normal file

@@ -0,0 +1,39 @@
"""Audit logging for CoinHunter V2."""
from __future__ import annotations

import json
from datetime import datetime, timezone
from pathlib import Path
from typing import Any

from .config import load_config, resolve_log_dir
from .runtime import RuntimePaths, ensure_runtime_dirs, get_runtime_paths, json_default

_audit_dir_cache: dict[str, Path] = {}


def _resolve_audit_dir(paths: RuntimePaths) -> Path:
    key = str(paths.root)
    if key not in _audit_dir_cache:
        config = load_config(paths)
        _audit_dir_cache[key] = resolve_log_dir(config, paths)
    return _audit_dir_cache[key]


def _audit_path(paths: RuntimePaths | None = None) -> Path:
    paths = ensure_runtime_dirs(paths or get_runtime_paths())
    logs_dir = _resolve_audit_dir(paths)
    logs_dir.mkdir(parents=True, exist_ok=True)
    return logs_dir / f"audit_{datetime.now(timezone.utc).strftime('%Y%m%d')}.jsonl"


def audit_event(event: str, payload: dict[str, Any], paths: RuntimePaths | None = None) -> dict[str, Any]:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        **payload,
    }
    with _audit_path(paths).open("a", encoding="utf-8") as handle:
        handle.write(json.dumps(entry, ensure_ascii=False, default=json_default) + "\n")
    return entry
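The append-only JSONL pattern used by `audit_event` can be exercised in isolation. The sketch below mimics the dated-filename, one-object-per-line behavior with a plain directory argument instead of `RuntimePaths` (the function name here is illustrative, not the module's API):

```python
import json
import tempfile
from datetime import datetime, timezone
from pathlib import Path


def write_event(log_dir: Path, event: str, payload: dict) -> dict:
    # Mirror audit_event: one JSON object per line, appended to a dated file.
    entry = {"timestamp": datetime.now(timezone.utc).isoformat(), "event": event, **payload}
    path = log_dir / f"audit_{datetime.now(timezone.utc).strftime('%Y%m%d')}.jsonl"
    with path.open("a", encoding="utf-8") as handle:
        handle.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry


with tempfile.TemporaryDirectory() as tmp:
    entry = write_event(Path(tmp), "trade_submitted", {"symbol": "BTCUSDT"})
    print(entry["event"])  # trade_submitted
```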


@@ -1,289 +0,0 @@
#!/usr/bin/env python3
"""
Coin Hunter Auto Trader
Fully automated meme-coin hunter + Binance executor.
Before running, configure these in ~/.hermes/.env:
BINANCE_API_KEY=your API key
BINANCE_API_SECRET=your API secret
For a first run, use DRY_RUN=True to test the logic.
"""
import json
import os
import sys
import time
from datetime import datetime, timezone, timedelta
from pathlib import Path
import ccxt
from .runtime import get_runtime_paths, load_env_file
# ============== Configuration ==============
PATHS = get_runtime_paths()
COINS_DIR = PATHS.root
POSITIONS_FILE = PATHS.positions_file
ENV_FILE = PATHS.env_file
CST = timezone(timedelta(hours=8))
# Risk-control parameters
DRY_RUN = os.getenv("DRY_RUN", "true").lower() == "true"  # Defaults to test mode
MAX_POSITIONS = 2  # Max concurrent positions
# Capital allocation (computed dynamically from total assets)
CAPITAL_ALLOCATION_PCT = 0.95  # Use 95% of total assets for this strategy (keep a 5% buffer for fees and slippage)
MIN_POSITION_USDT = 50  # Minimum order size per trade (avoid dust orders)
MIN_VOLUME_24H = 1_000_000  # Minimum 24h quote volume ($)
MIN_PRICE_CHANGE_24H = 0.05  # Minimum 24h gain: 5%
MAX_PRICE = 1.0  # Low-priced coins only (meme-coin trait)
STOP_LOSS_PCT = -0.07  # Stop loss at -7%
TAKE_PROFIT_1_PCT = 0.15  # Take-profit 1 at +15%
TAKE_PROFIT_2_PCT = 0.30  # Take-profit 2 at +30%
BLACKLIST = {"USDC", "BUSD", "TUSD", "FDUSD", "USTC", "PAXG", "XRP", "ETH", "BTC"}
# ============== Utility functions ==============
def log(msg: str):
    print(f"[{datetime.now(CST).strftime('%Y-%m-%d %H:%M:%S')} CST] {msg}")


def load_positions() -> list:
    if POSITIONS_FILE.exists():
        return json.loads(POSITIONS_FILE.read_text(encoding="utf-8")).get("positions", [])
    return []


def save_positions(positions: list):
    COINS_DIR.mkdir(parents=True, exist_ok=True)
    POSITIONS_FILE.write_text(json.dumps({"positions": positions}, indent=2, ensure_ascii=False), encoding="utf-8")


def load_env():
    load_env_file(PATHS)


def calculate_position_size(total_usdt: float, available_usdt: float, open_slots: int) -> float:
    """
    Dynamically compute the per-order size from total assets.
    Logic: fix the strategy's total cap first, then split it evenly across the remaining open slots.
    """
    strategy_cap = total_usdt * CAPITAL_ALLOCATION_PCT
    # Capital already deployed in the strategy is roughly the total cap minus the available balance
    used_in_strategy = max(0, strategy_cap - available_usdt)
    remaining_strategy_cap = max(0, strategy_cap - used_in_strategy)
    if open_slots <= 0 or remaining_strategy_cap < MIN_POSITION_USDT:
        return 0
    size = remaining_strategy_cap / open_slots
    # Must also not exceed the currently available balance
    size = min(size, available_usdt)
    # Round to two decimal places
    size = max(0, round(size, 2))
    return size if size >= MIN_POSITION_USDT else 0
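To make the sizing rule concrete, here is a standalone restatement with a worked example (constants copied from the module; an illustration, not the live code path):

```python
CAPITAL_ALLOCATION_PCT = 0.95
MIN_POSITION_USDT = 50


def calculate_position_size(total_usdt: float, available_usdt: float, open_slots: int) -> float:
    # Strategy cap, minus what is already deployed, split across remaining slots.
    strategy_cap = total_usdt * CAPITAL_ALLOCATION_PCT
    used_in_strategy = max(0, strategy_cap - available_usdt)
    remaining_strategy_cap = max(0, strategy_cap - used_in_strategy)
    if open_slots <= 0 or remaining_strategy_cap < MIN_POSITION_USDT:
        return 0
    size = min(remaining_strategy_cap / open_slots, available_usdt)
    size = max(0, round(size, 2))
    return size if size >= MIN_POSITION_USDT else 0


# $1000 total, all available, 2 open slots: cap = $950, split two ways.
print(calculate_position_size(1000, 1000, 2))  # 475.0
# $1000 total but only $200 free: remaining cap collapses to the free balance.
print(calculate_position_size(1000, 200, 2))  # 100.0
```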
# ============== Binance client ==============
class BinanceTrader:
    def __init__(self):
        api_key = os.getenv("BINANCE_API_KEY")
        secret = os.getenv("BINANCE_API_SECRET")
        if not api_key or not secret:
            raise RuntimeError("Missing BINANCE_API_KEY or BINANCE_API_SECRET; configure ~/.hermes/.env")
        self.exchange = ccxt.binance({
            "apiKey": api_key,
            "secret": secret,
            "options": {"defaultType": "spot"},
            "enableRateLimit": True,
        })
        self.exchange.load_markets()

    def get_balance(self, asset: str = "USDT") -> float:
        bal = self.exchange.fetch_balance()["free"].get(asset, 0)
        return float(bal)

    def fetch_tickers(self) -> dict:
        return self.exchange.fetch_tickers()

    def create_market_buy_order(self, symbol: str, amount_usdt: float):
        if DRY_RUN:
            log(f"[DRY RUN] Simulated buy {symbol}, amount ${amount_usdt}")
            return {"id": "dry-run-buy", "price": None, "amount": amount_usdt}
        ticker = self.exchange.fetch_ticker(symbol)
        price = float(ticker["last"])
        qty = amount_usdt / price
        order = self.exchange.create_market_buy_order(symbol, qty)
        log(f"✅ Bought {symbol} | qty {qty:.4f} | price ~${price}")
        return order

    def create_market_sell_order(self, symbol: str, qty: float):
        if DRY_RUN:
            log(f"[DRY RUN] Simulated sell {symbol}, qty {qty}")
            return {"id": "dry-run-sell"}
        order = self.exchange.create_market_sell_order(symbol, qty)
        log(f"✅ Sold {symbol} | qty {qty:.4f}")
        return order
# ============== Coin picker ==============
class CoinPicker:
    def __init__(self, exchange: ccxt.binance):
        self.exchange = exchange

    def scan(self) -> list:
        tickers = self.exchange.fetch_tickers()
        candidates = []
        for symbol, t in tickers.items():
            if not symbol.endswith("/USDT"):
                continue
            base = symbol.replace("/USDT", "")
            if base in BLACKLIST:
                continue
            price = float(t["last"] or 0)
            change = float(t.get("percentage", 0)) / 100
            volume = float(t.get("quoteVolume", 0))
            if price <= 0 or price > MAX_PRICE:
                continue
            if volume < MIN_VOLUME_24H:
                continue
            if change < MIN_PRICE_CHANGE_24H:
                continue
            score = change * (volume / MIN_VOLUME_24H)
            candidates.append({
                "symbol": symbol,
                "base": base,
                "price": price,
                "change_24h": change,
                "volume_24h": volume,
                "score": score,
            })
        candidates.sort(key=lambda x: x["score"], reverse=True)
        return candidates[:5]
# ============== 主控制器 ==============
def run_cycle():
load_env()
trader = BinanceTrader()
picker = CoinPicker(trader.exchange)
positions = load_positions()
log(f"当前持仓数: {len(positions)} | 最大允许: {MAX_POSITIONS} | DRY_RUN={DRY_RUN}")
# 1. 检查现有持仓(止盈止损)
tickers = trader.fetch_tickers()
new_positions = []
for pos in positions:
sym = pos["symbol"]
qty = float(pos["quantity"])
cost = float(pos["avg_cost"])
# ccxt tickers 使用 slash 格式,如 PENGU/USDT
sym_ccxt = sym.replace("USDT", "/USDT") if "/" not in sym else sym
ticker = tickers.get(sym_ccxt)
if not ticker:
new_positions.append(pos)
continue
price = float(ticker["last"])
pnl_pct = (price - cost) / cost
log(f"监控 {sym} | 现价 ${price:.8f} | 成本 ${cost:.8f} | 盈亏 {pnl_pct:+.2%}")
action = None
if pnl_pct <= STOP_LOSS_PCT:
action = "STOP_LOSS"
elif pnl_pct >= TAKE_PROFIT_2_PCT:
action = "TAKE_PROFIT_2"
elif pnl_pct >= TAKE_PROFIT_1_PCT:
# 检查是否已经止盈过一部分
sold_pct = float(pos.get("take_profit_1_sold_pct", 0))
if sold_pct == 0:
action = "TAKE_PROFIT_1"
if action == "STOP_LOSS":
trader.create_market_sell_order(sym, qty)
log(f"🛑 {sym} 触发止损,全部清仓")
continue
if action == "TAKE_PROFIT_1":
sell_qty = qty * 0.5
trader.create_market_sell_order(sym, sell_qty)
pos["quantity"] = qty - sell_qty
pos["take_profit_1_sold_pct"] = 50
pos["updated_at"] = datetime.now(CST).isoformat()
log(f"🎯 {sym} 触发止盈1卖出50%,剩余 {pos['quantity']:.4f}")
new_positions.append(pos)
continue
if action == "TAKE_PROFIT_2":
trader.create_market_sell_order(sym, float(pos["quantity"]))
log(f"🚀 {sym} 触发止盈2全部清仓")
continue
new_positions.append(pos)
# 2. 开新仓
if len(new_positions) < MAX_POSITIONS:
candidates = picker.scan()
held_bases = {p["base_asset"] for p in new_positions}
total_usdt = trader.get_balance("USDT")
# 计算持仓市值并加入总资产
for pos in new_positions:
sym_ccxt = pos["symbol"].replace("USDT", "/USDT") if "/" not in pos["symbol"] else pos["symbol"]
ticker = tickers.get(sym_ccxt)
if ticker:
total_usdt += float(pos["quantity"]) * float(ticker["last"])
available_usdt = trader.get_balance("USDT")
open_slots = MAX_POSITIONS - len(new_positions)
position_size = calculate_position_size(total_usdt, available_usdt, open_slots)
log(f"Total USDT equity: ${total_usdt:.2f} | strategy cap ({CAPITAL_ALLOCATION_PCT:.0%}): ${total_usdt*CAPITAL_ALLOCATION_PCT:.2f} | suggested size per position: ${position_size:.2f}")
for cand in candidates:
if len(new_positions) >= MAX_POSITIONS:
break
base = cand["base"]
if base in held_bases:
continue
if position_size <= 0:
log("Strategy capital exhausted or balance insufficient; stop opening new positions")
break
symbol = cand["symbol"]
order = trader.create_market_buy_order(symbol, position_size)
avg_price = float(order.get("price") or cand["price"])
qty = position_size / avg_price if avg_price else 0
new_positions.append({
"account_id": "binance-main",
"symbol": symbol.replace("/", ""),
"base_asset": base,
"quote_asset": "USDT",
"market_type": "spot",
"quantity": qty,
"avg_cost": avg_price,
"opened_at": datetime.now(CST).isoformat(),
"updated_at": datetime.now(CST).isoformat(),
"note": "Auto-trader entry",
})
held_bases.add(base)
available_usdt -= position_size
position_size = calculate_position_size(total_usdt, available_usdt, MAX_POSITIONS - len(new_positions))
log(f"📈 Opened {symbol} | buy price ${avg_price:.8f} | quantity {qty:.2f}")
save_positions(new_positions)
log("Cycle finished; positions saved")
if __name__ == "__main__":
try:
run_cycle()
except Exception as e:
log(f"❌ Error: {e}")
sys.exit(1)


@@ -0,0 +1 @@
"""Official Binance connector wrappers."""


@@ -0,0 +1,75 @@
"""Thin wrapper around the official Binance Spot connector."""
from __future__ import annotations
from collections.abc import Callable
from typing import Any
from requests.exceptions import RequestException, SSLError
class SpotBinanceClient:
def __init__(
self,
*,
api_key: str,
api_secret: str,
base_url: str,
recv_window: int,
client: Any | None = None,
) -> None:
self.recv_window = recv_window
if client is not None:
self._client = client
return
try:
from binance.spot import Spot
except ModuleNotFoundError as exc: # pragma: no cover
raise RuntimeError("binance-connector is not installed") from exc
self._client = Spot(api_key=api_key, api_secret=api_secret, base_url=base_url)
def _call(self, operation: str, func: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
try:
return func(*args, **kwargs)
except SSLError as exc:
raise RuntimeError(
"Binance Spot request failed because TLS certificate verification failed. "
"This usually means the local Python trust store is incomplete or a proxy is intercepting HTTPS. "
"Update the local CA trust chain or configure the host environment with the correct corporate/root CA."
) from exc
except RequestException as exc:
raise RuntimeError(f"Binance Spot request failed during {operation}: {exc}") from exc
def account_info(self) -> dict[str, Any]:
return self._call("account info", self._client.account, recvWindow=self.recv_window) # type: ignore[no-any-return]
def exchange_info(self, symbol: str | None = None) -> dict[str, Any]:
kwargs: dict[str, Any] = {}
if symbol:
kwargs["symbol"] = symbol
return self._call("exchange info", self._client.exchange_info, **kwargs) # type: ignore[no-any-return]
def ticker_24h(self, symbols: list[str] | None = None) -> list[dict[str, Any]]:
if not symbols:
response = self._call("24h ticker", self._client.ticker_24hr)
elif len(symbols) == 1:
response = self._call("24h ticker", self._client.ticker_24hr, symbol=symbols[0])
else:
response = self._call("24h ticker", self._client.ticker_24hr, symbols=symbols)
return response if isinstance(response, list) else [response] # type: ignore[no-any-return]
def ticker_price(self, symbols: list[str] | None = None) -> list[dict[str, Any]]:
if not symbols:
response = self._call("ticker price", self._client.ticker_price)
elif len(symbols) == 1:
response = self._call("ticker price", self._client.ticker_price, symbol=symbols[0])
else:
response = self._call("ticker price", self._client.ticker_price, symbols=symbols)
return response if isinstance(response, list) else [response] # type: ignore[no-any-return]
def klines(self, symbol: str, interval: str, limit: int) -> list[list[Any]]:
return self._call("klines", self._client.klines, symbol=symbol, interval=interval, limit=limit) # type: ignore[no-any-return]
def new_order(self, **kwargs: Any) -> dict[str, Any]:
kwargs.setdefault("recvWindow", self.recv_window)
return self._call("new order", self._client.new_order, **kwargs) # type: ignore[no-any-return]


@@ -1,26 +0,0 @@
#!/usr/bin/env python3
"""Check whether the auto-trading environment configuration is ready."""
import os
from .runtime import load_env_file
def main():
load_env_file()
api_key = os.getenv("BINANCE_API_KEY", "")
secret = os.getenv("BINANCE_API_SECRET", "")
if not api_key or api_key.startswith("***") or api_key.startswith("your_"):
print("❌ BINANCE_API_KEY is not configured")
return 1
if not secret or secret.startswith("***") or secret.startswith("your_"):
print("❌ BINANCE_API_SECRET is not configured")
return 1
print("✅ API credentials configured")
return 0
if __name__ == "__main__":
raise SystemExit(main())
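The two credential checks above apply the same placeholder rule; it generalizes to one predicate. A minimal sketch (the helper name is ours):

```python
def looks_unconfigured(value: str) -> bool:
    """Treat empty, masked ('***...'), or template ('your_...') values as unset."""
    return not value or value.startswith("***") or value.startswith("your_")
```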

src/coinhunter/cli.py Executable file → Normal file

@@ -1,89 +1,479 @@
"""CoinHunter unified CLI entrypoint."""
"""CoinHunter V2 CLI."""
from __future__ import annotations
import argparse
import importlib
import sys
from typing import Any
from . import __version__
from .binance.spot_client import SpotBinanceClient
from .config import ensure_init_files, get_binance_credentials, load_config
from .runtime import get_runtime_paths, install_shell_completion, print_output, self_upgrade, with_spinner
from .services import account_service, market_service, opportunity_service, trade_service
MODULE_MAP = {
"check-api": "check_api",
"doctor": "doctor",
"external-gate": "external_gate",
"init": "init_user_state",
"market-probe": "market_probe",
"paths": "paths",
"precheck": "commands.precheck",
"review-context": "review_context",
"review-engine": "review_engine",
"rotate-external-gate-log": "rotate_external_gate_log",
"smart-executor": "commands.smart_executor",
"auto-trader": "auto_trader",
EPILOG = """\
examples:
coin init
coin acc ov
coin m tk BTCUSDT ETHUSDT
coin m k BTCUSDT -i 1h -l 50
coin buy BTCUSDT -Q 100 -d
coin sell BTCUSDT --qty 0.01 --type limit --price 90000
coin opp scan -s BTCUSDT ETHUSDT
coin upgrade
"""
COMMAND_DOCS: dict[str, str] = {
"init": """\
Output: JSON
{
"root": "~/.coinhunter",
"files_created": ["config.toml", ".env"],
"completion": {"shell": "zsh", "installed": true}
}
Fields:
root runtime directory path
files_created list of generated files
completion shell completion installation status
""",
"account/overview": """\
Output: JSON
{
"total_btc": 1.234,
"total_usdt": 50000.0,
"assets": [{"asset": "BTC", "free": 0.5, "locked": 0.1}]
}
Fields:
total_btc total equity denominated in BTC
total_usdt total equity denominated in USDT
assets list of non-zero balances
""",
"account/balances": """\
Output: JSON array
[
{"asset": "BTC", "free": 0.5, "locked": 0.1, "total": 0.6}
]
Fields:
asset asset symbol
free available balance
locked frozen/locked balance
total free + locked
""",
"account/positions": """\
Output: JSON array
[
{"symbol": "BTCUSDT", "positionAmt": 0.01, "entryPrice": 90000.0}
]
Fields:
symbol trading pair
positionAmt quantity held (positive long, negative short)
entryPrice average entry price
""",
"market/tickers": """\
Output: JSON object keyed by normalized symbol
{
"BTCUSDT": {"lastPrice": "70000.00", "priceChangePercent": "2.5", "volume": "12345.67"}
}
Fields:
lastPrice latest traded price
priceChangePercent 24h change %
volume 24h base volume
""",
"market/klines": """\
Output: JSON object keyed by symbol, value is array of OHLCV candles
{
"BTCUSDT": [
{"open_time": 1713000000000, "open": 69000.0, "high": 69500.0, "low": 68800.0, "close": 69200.0, "volume": 100.5}
]
}
Fields per candle:
open_time candle open timestamp (ms)
open/high/low/close OHLC prices
volume traded base volume
""",
"buy": """\
Output: JSON
{
"trade": {
"market_type": "spot",
"symbol": "BTCUSDT",
"side": "BUY",
"order_type": "MARKET",
"status": "DRY_RUN",
"dry_run": true,
"request_payload": {...},
"response_payload": {...}
}
}
Fields:
market_type "spot"
side "BUY"
order_type MARKET or LIMIT
status order status from exchange (or DRY_RUN)
dry_run whether simulated
request_payload normalized order sent to Binance
response_payload raw exchange response
""",
"sell": """\
Output: JSON
{
"trade": {
"market_type": "spot",
"symbol": "BTCUSDT",
"side": "SELL",
"order_type": "LIMIT",
"status": "FILLED",
"dry_run": false,
"request_payload": {...},
"response_payload": {...}
}
}
Fields:
market_type "spot"
side "SELL"
order_type MARKET or LIMIT
status order status from exchange (or DRY_RUN)
dry_run whether simulated
request_payload normalized order sent to Binance
response_payload raw exchange response
""",
"opportunity/portfolio": """\
Output: JSON
{
"scores": [
{"asset": "BTC", "score": 0.75, "metrics": {"volatility": 0.02, "trend": 0.01}}
]
}
Fields:
asset scored asset
score composite opportunity score (0-1)
metrics breakdown of contributing signals
""",
"opportunity/scan": """\
Output: JSON
{
"opportunities": [
{"symbol": "ETHUSDT", "score": 0.82, "signals": ["momentum", "volume_spike"]}
]
}
Fields:
symbol trading pair scanned
score opportunity score (0-1)
signals list of triggered signal names
""",
"upgrade": """\
Output: JSON
{
"command": "pip install --upgrade coinhunter",
"returncode": 0,
"stdout": "...",
"stderr": ""
}
Fields:
command shell command executed
returncode process exit code (0 = success)
stdout command standard output
stderr command standard error
""",
"completion": """\
Output: shell script text (not JSON)
# bash/zsh completion script for coinhunter
...
Fields:
(raw shell script suitable for sourcing)
""",
}
class VersionAction(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
print(__version__)
raise SystemExit(0)
def _load_spot_client(config: dict[str, Any], *, client: Any | None = None) -> SpotBinanceClient:
credentials = get_binance_credentials()
binance_config = config["binance"]
return SpotBinanceClient(
api_key=credentials["api_key"],
api_secret=credentials["api_secret"],
base_url=binance_config["spot_base_url"],
recv_window=int(binance_config["recv_window"]),
client=client,
)
def build_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(
prog="coinhunter",
description="CoinHunter trading operations CLI",
formatter_class=argparse.RawTextHelpFormatter,
epilog=(
"Examples:\n"
" coinhunter doctor\n"
" coinhunter paths\n"
" coinhunter check-api\n"
" coinhunter smart-executor balances\n"
" coinhunter smart-executor hold\n"
" coinhunter smart-executor --analysis '...' --reasoning '...' buy ENJUSDT 50\n"
" coinhunter precheck\n"
" coinhunter precheck --ack 'analysis complete HOLD'\n"
" coinhunter external-gate\n"
" coinhunter review-context 12\n"
" coinhunter market-probe bybit-ticker BTCUSDT\n"
" coinhunter init\n"
),
description="CoinHunter V2 Binance-first trading CLI",
epilog=EPILOG,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument("--version", nargs=0, action=VersionAction, help="Print installed version and exit")
parser.add_argument("command", nargs="?", choices=sorted(MODULE_MAP.keys()), help="CoinHunter command to run")
parser.add_argument("args", nargs=argparse.REMAINDER)
parser.add_argument("-v", "--version", action="version", version=__version__)
parser.add_argument("-a", "--agent", action="store_true", help="Output in agent-friendly format (JSON or compact)")
parser.add_argument("--doc", action="store_true", help="Show output schema and field descriptions for the command")
subparsers = parser.add_subparsers(dest="command")
init_parser = subparsers.add_parser("init", help="Generate config.toml, .env, and log directory")
init_parser.add_argument("-f", "--force", action="store_true", help="Overwrite existing files")
account_parser = subparsers.add_parser("account", aliases=["acc", "a"], help="Account overview, balances, and positions")
account_subparsers = account_parser.add_subparsers(dest="account_command")
account_commands_help = {
"overview": "Total equity and summary",
"balances": "List asset balances",
"positions": "List open positions",
}
account_aliases = {
"overview": ["ov"],
"balances": ["bal", "b"],
"positions": ["pos", "p"],
}
for name in ("overview", "balances", "positions"):
account_subparsers.add_parser(name, aliases=account_aliases[name], help=account_commands_help[name])
market_parser = subparsers.add_parser("market", aliases=["m"], help="Batch market queries")
market_subparsers = market_parser.add_subparsers(dest="market_command")
tickers_parser = market_subparsers.add_parser("tickers", aliases=["tk", "t"], help="Fetch 24h ticker data")
tickers_parser.add_argument("symbols", nargs="+", metavar="SYM", help="Symbols to query (e.g. BTCUSDT ETH/USDT)")
klines_parser = market_subparsers.add_parser("klines", aliases=["k"], help="Fetch OHLCV klines")
klines_parser.add_argument("symbols", nargs="+", metavar="SYM", help="Symbols to query")
klines_parser.add_argument("-i", "--interval", default="1h", help="Kline interval (default: 1h)")
klines_parser.add_argument("-l", "--limit", type=int, default=100, help="Number of candles (default: 100)")
buy_parser = subparsers.add_parser("buy", aliases=["b"], help="Buy base asset")
buy_parser.add_argument("symbol", metavar="SYM", help="Trading pair (e.g. BTCUSDT)")
buy_parser.add_argument("-q", "--qty", type=float, help="Base asset quantity (limit orders)")
buy_parser.add_argument("-Q", "--quote", type=float, help="Quote asset amount (market buy only)")
buy_parser.add_argument("-t", "--type", choices=["market", "limit"], default="market", help="Order type (default: market)")
buy_parser.add_argument("-p", "--price", type=float, help="Limit price")
buy_parser.add_argument("-d", "--dry-run", action="store_true", help="Simulate without sending")
sell_parser = subparsers.add_parser("sell", aliases=["s"], help="Sell base asset")
sell_parser.add_argument("symbol", metavar="SYM", help="Trading pair (e.g. BTCUSDT)")
sell_parser.add_argument("-q", "--qty", type=float, help="Base asset quantity")
sell_parser.add_argument("-t", "--type", choices=["market", "limit"], default="market", help="Order type (default: market)")
sell_parser.add_argument("-p", "--price", type=float, help="Limit price")
sell_parser.add_argument("-d", "--dry-run", action="store_true", help="Simulate without sending")
opportunity_parser = subparsers.add_parser("opportunity", aliases=["opp", "o"], help="Portfolio analysis and market scanning")
opportunity_subparsers = opportunity_parser.add_subparsers(dest="opportunity_command")
opportunity_subparsers.add_parser("portfolio", aliases=["pf", "p"], help="Score current holdings")
scan_parser = opportunity_subparsers.add_parser("scan", help="Scan market for opportunities")
scan_parser.add_argument("-s", "--symbols", nargs="*", metavar="SYM", help="Restrict scan to specific symbols")
subparsers.add_parser("upgrade", help="Upgrade coinhunter to the latest version")
completion_parser = subparsers.add_parser("completion", help="Generate shell completion script")
completion_parser.add_argument("shell", choices=["bash", "zsh"], help="Target shell")
return parser
def run_python_module(module_name: str, argv: list[str]) -> int:
module = importlib.import_module(f".{module_name}", package="coinhunter")
if not hasattr(module, "main"):
raise RuntimeError(f"Module {module_name} has no main()")
old_argv = sys.argv[:]
try:
sys.argv = [f"coinhunter {module_name}", *argv]
result = module.main()
return int(result) if isinstance(result, int) else 0
except SystemExit as exc:
return exc.code if isinstance(exc.code, int) else 0
finally:
sys.argv = old_argv
_CANONICAL_COMMANDS = {
"b": "buy",
"s": "sell",
"acc": "account",
"a": "account",
"m": "market",
"opp": "opportunity",
"o": "opportunity",
}
_CANONICAL_SUBCOMMANDS = {
"ov": "overview",
"bal": "balances",
"b": "balances",
"pos": "positions",
"tk": "tickers",
"t": "tickers",
"k": "klines",
"pf": "portfolio",
# NOTE: "p" is overloaded ("positions" under account, "portfolio" under
# opportunity). A dict literal keeps only the last duplicate key, so this
# flat map resolves "p" to "portfolio"; callers must disambiguate by parent.
"p": "portfolio",
}
_COMMANDS_WITH_SUBCOMMANDS = {"account", "market", "opportunity"}
def main() -> int:
def _get_doc_key(argv: list[str]) -> str | None:
"""Infer command/subcommand from argv for --doc lookup."""
tokens = [a for a in argv if a != "--doc" and not a.startswith("-")]
if not tokens:
return None
cmd = _CANONICAL_COMMANDS.get(tokens[0], tokens[0])
if cmd in _COMMANDS_WITH_SUBCOMMANDS and len(tokens) > 1:
sub = _CANONICAL_SUBCOMMANDS.get(tokens[1], tokens[1])
return f"{cmd}/{sub}"
return cmd
def _reorder_flag(argv: list[str], flag: str, short_flag: str | None = None) -> list[str]:
"""Move a global flag from after subcommands to before them so argparse can parse it."""
flags = {flag}
if short_flag:
flags.add(short_flag)
subcommand_idx: int | None = None
for i, arg in enumerate(argv):
if not arg.startswith("-"):
subcommand_idx = i
break
if subcommand_idx is None:
return argv
new_argv: list[str] = []
present = False
for i, arg in enumerate(argv):
if i >= subcommand_idx and arg in flags:
present = True
continue
new_argv.append(arg)
if present:
new_argv.insert(subcommand_idx, flag)
return new_argv
def main(argv: list[str] | None = None) -> int:
raw_argv = argv if argv is not None else sys.argv[1:]
if "--doc" in raw_argv:
doc_key = _get_doc_key(raw_argv)
if doc_key is None:
print("Available docs: " + ", ".join(sorted(COMMAND_DOCS.keys())))
return 0
doc = COMMAND_DOCS.get(doc_key, f"No documentation available for {doc_key}.")
print(doc)
return 0
parser = build_parser()
parsed = parser.parse_args()
if not parsed.command:
raw_argv = _reorder_flag(raw_argv, "--agent", "-a")
args = parser.parse_args(raw_argv)
# Normalize aliases to canonical command names
if args.command:
args.command = _CANONICAL_COMMANDS.get(args.command, args.command)
for parent, attr in (("account", "account_command"), ("market", "market_command"), ("opportunity", "opportunity_command")):
val = getattr(args, attr, None)
if not val:
continue
# "p" is ambiguous across parents: positions under account, portfolio under opportunity
if val == "p":
setattr(args, attr, "positions" if parent == "account" else "portfolio")
else:
setattr(args, attr, _CANONICAL_SUBCOMMANDS.get(val, val))
try:
if not args.command:
parser.print_help()
return 0
module_name = MODULE_MAP[parsed.command]
argv = list(parsed.args)
if argv and argv[0] == "--":
argv = argv[1:]
return run_python_module(module_name, argv)
if args.command == "init":
init_result = ensure_init_files(get_runtime_paths(), force=args.force)
init_result["completion"] = install_shell_completion(parser)
print_output(init_result, agent=args.agent)
return 0
if __name__ == "__main__":
raise SystemExit(main())
if args.command == "completion":
import shtab
print(shtab.complete(parser, shell=args.shell, preamble=""))
return 0
config = load_config()
if args.command == "account":
spot_client = _load_spot_client(config)
if args.account_command == "overview":
with with_spinner("Fetching account overview...", enabled=not args.agent):
print_output(
account_service.get_overview(config, spot_client=spot_client),
agent=args.agent,
)
return 0
if args.account_command == "balances":
with with_spinner("Fetching balances...", enabled=not args.agent):
print_output(
account_service.get_balances(config, spot_client=spot_client),
agent=args.agent,
)
return 0
if args.account_command == "positions":
with with_spinner("Fetching positions...", enabled=not args.agent):
print_output(
account_service.get_positions(config, spot_client=spot_client),
agent=args.agent,
)
return 0
parser.error("account requires one of: overview, balances, positions")
if args.command == "market":
spot_client = _load_spot_client(config)
if args.market_command == "tickers":
with with_spinner("Fetching tickers...", enabled=not args.agent):
print_output(market_service.get_tickers(config, args.symbols, spot_client=spot_client), agent=args.agent)
return 0
if args.market_command == "klines":
with with_spinner("Fetching klines...", enabled=not args.agent):
print_output(
market_service.get_klines(
config,
args.symbols,
interval=args.interval,
limit=args.limit,
spot_client=spot_client,
),
agent=args.agent,
)
return 0
parser.error("market requires one of: tickers, klines")
if args.command == "buy":
spot_client = _load_spot_client(config)
with with_spinner("Placing order...", enabled=not args.agent):
print_output(
trade_service.execute_spot_trade(
config,
side="buy",
symbol=args.symbol,
qty=args.qty,
quote=args.quote,
order_type=args.type,
price=args.price,
dry_run=True if args.dry_run else None,
spot_client=spot_client,
),
agent=args.agent,
)
return 0
if args.command == "sell":
spot_client = _load_spot_client(config)
with with_spinner("Placing order...", enabled=not args.agent):
print_output(
trade_service.execute_spot_trade(
config,
side="sell",
symbol=args.symbol,
qty=args.qty,
quote=None,
order_type=args.type,
price=args.price,
dry_run=True if args.dry_run else None,
spot_client=spot_client,
),
agent=args.agent,
)
return 0
if args.command == "opportunity":
spot_client = _load_spot_client(config)
if args.opportunity_command == "portfolio":
with with_spinner("Analyzing portfolio...", enabled=not args.agent):
print_output(opportunity_service.analyze_portfolio(config, spot_client=spot_client), agent=args.agent)
return 0
if args.opportunity_command == "scan":
with with_spinner("Scanning opportunities...", enabled=not args.agent):
print_output(opportunity_service.scan_opportunities(config, spot_client=spot_client, symbols=args.symbols), agent=args.agent)
return 0
parser.error("opportunity requires one of: portfolio, scan")
if args.command == "upgrade":
print_output(self_upgrade(), agent=args.agent)
return 0
parser.error(f"Unsupported command {args.command}")
return 2
except Exception as exc:
print(f"error: {exc}", file=sys.stderr)
return 1


@@ -1 +0,0 @@
"""CLI command adapters for CoinHunter."""


@@ -1,15 +0,0 @@
"""CLI adapter for precheck."""
from __future__ import annotations
import sys
from ..services.precheck_service import run
def main() -> int:
return run(sys.argv[1:])
if __name__ == "__main__":
raise SystemExit(main())


@@ -1,15 +0,0 @@
"""CLI adapter for smart executor."""
from __future__ import annotations
import sys
from ..services.smart_executor_service import run
def main() -> int:
return run(sys.argv[1:])
if __name__ == "__main__":
raise SystemExit(main())

src/coinhunter/config.py Normal file

@@ -0,0 +1,130 @@
"""Configuration and secret loading for CoinHunter V2."""
from __future__ import annotations
import os
from pathlib import Path
from typing import Any
from .runtime import RuntimePaths, ensure_runtime_dirs, get_runtime_paths
try:
import tomllib
except ModuleNotFoundError: # pragma: no cover
import tomli as tomllib
DEFAULT_CONFIG = """[runtime]
timezone = "Asia/Shanghai"
log_dir = "logs"
output_format = "tui"
[binance]
spot_base_url = "https://api.binance.com"
recv_window = 5000
[market]
default_quote = "USDT"
universe_allowlist = []
universe_denylist = []
[trading]
spot_enabled = true
dry_run_default = false
dust_usdt_threshold = 10.0
[opportunity]
min_quote_volume = 1000000.0
top_n = 10
scan_limit = 50
ignore_dust = true
lookback_intervals = ["1h", "4h", "1d"]
[opportunity.weights]
trend = 1.0
momentum = 1.0
breakout = 0.8
volume = 0.7
volatility_penalty = 0.5
position_concentration_penalty = 0.6
"""
DEFAULT_ENV = "BINANCE_API_KEY=\nBINANCE_API_SECRET=\n"
def _permission_denied_message(paths: RuntimePaths, exc: PermissionError) -> RuntimeError:
return RuntimeError(
"Unable to initialize CoinHunter runtime files because the target directory is not writable: "
f"{paths.root}. Set COINHUNTER_HOME to a writable directory or rerun with permissions that can write there. "
f"Original error: {exc}"
)
def ensure_init_files(paths: RuntimePaths | None = None, *, force: bool = False) -> dict[str, Any]:
paths = paths or get_runtime_paths()
try:
ensure_runtime_dirs(paths)
except PermissionError as exc:
raise _permission_denied_message(paths, exc) from exc
created: list[str] = []
updated: list[str] = []
for path, content in ((paths.config_file, DEFAULT_CONFIG), (paths.env_file, DEFAULT_ENV)):
existed = path.exists()  # capture before write_text, which creates the file
if force or not existed:
try:
path.write_text(content, encoding="utf-8")
except PermissionError as exc:
raise _permission_denied_message(paths, exc) from exc
(updated if existed else created).append(str(path))
return {
"root": str(paths.root),
"config_file": str(paths.config_file),
"env_file": str(paths.env_file),
"logs_dir": str(paths.logs_dir),
"created_or_updated": created + updated,
"force": force,
}
def load_config(paths: RuntimePaths | None = None) -> dict[str, Any]:
paths = paths or get_runtime_paths()
if not paths.config_file.exists():
raise RuntimeError(f"Missing config file at {paths.config_file}. Run `coinhunter init` first.")
return tomllib.loads(paths.config_file.read_text(encoding="utf-8")) # type: ignore[no-any-return]
def load_env_file(paths: RuntimePaths | None = None) -> dict[str, str]:
paths = paths or get_runtime_paths()
loaded: dict[str, str] = {}
if not paths.env_file.exists():
return loaded
for raw_line in paths.env_file.read_text(encoding="utf-8").splitlines():
line = raw_line.strip()
if not line or line.startswith("#") or "=" not in line:
continue
key, value = line.split("=", 1)
key = key.strip()
value = value.strip()
loaded[key] = value
os.environ[key] = value
return loaded
def get_binance_credentials(paths: RuntimePaths | None = None) -> dict[str, str]:
load_env_file(paths)
api_key = os.getenv("BINANCE_API_KEY", "").strip()
api_secret = os.getenv("BINANCE_API_SECRET", "").strip()
if not api_key or not api_secret:
runtime_paths = paths or get_runtime_paths()
raise RuntimeError(
"Missing BINANCE_API_KEY or BINANCE_API_SECRET. "
f"Populate {runtime_paths.env_file} or export them in the environment."
)
return {"api_key": api_key, "api_secret": api_secret}
def resolve_log_dir(config: dict[str, Any], paths: RuntimePaths | None = None) -> Path:
paths = paths or get_runtime_paths()
raw = config.get("runtime", {}).get("log_dir", "logs")
value = Path(raw).expanduser()
return value if value.is_absolute() else paths.root / value
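The `.env` parsing rules in `load_env_file` (skip blanks, comments, and lines without `=`; split on the first `=`; strip whitespace) can be exercised on a plain string. A sketch that leaves out the `os.environ` side effect:

```python
def parse_env_text(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines, skipping blanks, comments, and malformed lines."""
    loaded: dict[str, str] = {}
    for raw_line in text.splitlines():
        line = raw_line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        # Split on the first "=" only, so values may themselves contain "=".
        key, value = line.split("=", 1)
        loaded[key.strip()] = value.strip()
    return loaded
```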


@@ -1,66 +0,0 @@
"""Runtime diagnostics for CoinHunter CLI."""
from __future__ import annotations
import json
import os
import platform
import shutil
import sys
from .runtime import ensure_runtime_dirs, get_runtime_paths, load_env_file, resolve_hermes_executable
REQUIRED_ENV_VARS = ["BINANCE_API_KEY", "BINANCE_API_SECRET"]
def main() -> int:
paths = ensure_runtime_dirs(get_runtime_paths())
env_file = load_env_file(paths)
hermes_executable = resolve_hermes_executable(paths)
env_checks = {}
missing_env = []
for name in REQUIRED_ENV_VARS:
present = bool(os.getenv(name))
env_checks[name] = present
if not present:
missing_env.append(name)
file_checks = {
"env_file_exists": env_file.exists(),
"config_exists": paths.config_file.exists(),
"positions_exists": paths.positions_file.exists(),
"logrotate_config_exists": paths.logrotate_config.exists(),
}
dir_checks = {
"root_exists": paths.root.exists(),
"state_dir_exists": paths.state_dir.exists(),
"logs_dir_exists": paths.logs_dir.exists(),
"reviews_dir_exists": paths.reviews_dir.exists(),
"cache_dir_exists": paths.cache_dir.exists(),
}
command_checks = {
"hermes": bool(shutil.which("hermes") or paths.hermes_bin.exists()),
"logrotate": bool(shutil.which("logrotate") or shutil.which("/usr/sbin/logrotate")),
}
report = {
"ok": not missing_env,
"python": sys.version.split()[0],
"platform": platform.platform(),
"env_file": str(env_file),
"hermes_executable": hermes_executable,
"paths": paths.as_dict(),
"env_checks": env_checks,
"missing_env": missing_env,
"file_checks": file_checks,
"dir_checks": dir_checks,
"command_checks": command_checks,
}
print(json.dumps(report, ensure_ascii=False, indent=2))
return 0 if report["ok"] else 1
if __name__ == "__main__":
raise SystemExit(main())


@@ -1,82 +0,0 @@
#!/usr/bin/env python3
import fcntl
import json
import subprocess
import sys
from datetime import datetime, timezone
from .runtime import ensure_runtime_dirs, get_runtime_paths, resolve_hermes_executable
PATHS = get_runtime_paths()
STATE_DIR = PATHS.state_dir
LOCK_FILE = PATHS.external_gate_lock
COINHUNTER_MODULE = [sys.executable, "-m", "coinhunter"]
TRADE_JOB_ID = "4e6593fff158"
def utc_now():
return datetime.now(timezone.utc).isoformat()
def log(message: str):
print(f"[{utc_now()}] {message}")
def run_cmd(args: list[str]) -> subprocess.CompletedProcess:
return subprocess.run(args, capture_output=True, text=True)
def parse_json_output(text: str) -> dict:
text = (text or "").strip()
if not text:
return {}
return json.loads(text)
def main():
ensure_runtime_dirs(PATHS)
with open(LOCK_FILE, "w", encoding="utf-8") as lockf:
try:
fcntl.flock(lockf.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
log("gate already running; skip")
return 0
precheck = run_cmd(COINHUNTER_MODULE + ["precheck"])
if precheck.returncode != 0:
log(f"precheck returned non-zero ({precheck.returncode}); stdout={precheck.stdout.strip()} stderr={precheck.stderr.strip()}")
return 1
try:
data = parse_json_output(precheck.stdout)
except Exception as e:
log(f"failed to parse precheck JSON: {e}; raw={precheck.stdout.strip()[:1000]}")
return 1
if not data.get("should_analyze"):
log("no trigger; skip model run")
return 0
if data.get("run_requested"):
log(f"trigger already queued at {data.get('run_requested_at')}; skip duplicate")
return 0
mark = run_cmd(COINHUNTER_MODULE + ["precheck", "--mark-run-requested", "external-gate queued cron run"])
if mark.returncode != 0:
log(f"failed to mark run requested; stdout={mark.stdout.strip()} stderr={mark.stderr.strip()}")
return 1
trigger = run_cmd([resolve_hermes_executable(PATHS), "cron", "run", TRADE_JOB_ID])
if trigger.returncode != 0:
log(f"failed to trigger trade cron job; stdout={trigger.stdout.strip()} stderr={trigger.stderr.strip()}")
return 1
reasons = ", ".join(data.get("reasons", [])) or "unknown"
log(f"queued trade job {TRADE_JOB_ID}; reasons={reasons}")
if trigger.stdout.strip():
log(trigger.stdout.strip())
return 0
if __name__ == "__main__":
raise SystemExit(main())
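The single-instance guard above relies on a non-blocking exclusive `flock`. Because `flock` locks attach to the open file description, a second handle on the same path is refused even within one process. A minimal POSIX-only sketch of the pattern:

```python
import fcntl


def try_acquire_lock(lockf) -> bool:
    """Non-blocking exclusive flock: True if acquired, False if already held."""
    try:
        fcntl.flock(lockf.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        return True
    except BlockingIOError:
        return False
```

The lock is released automatically when the file object is closed or the process exits, so no explicit unlock is needed on the happy path.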


@@ -1,65 +0,0 @@
#!/usr/bin/env python3
import json
from datetime import datetime, timezone
from pathlib import Path
from .runtime import ensure_runtime_dirs, get_runtime_paths
PATHS = get_runtime_paths()
ROOT = PATHS.root
CACHE_DIR = PATHS.cache_dir
def now_iso():
return datetime.now(timezone.utc).replace(microsecond=0).isoformat()
def ensure_file(path: Path, payload: dict):
if path.exists():
return False
path.write_text(json.dumps(payload, ensure_ascii=False, indent=2) + "\n", encoding="utf-8")
return True
def main():
ensure_runtime_dirs(PATHS)
created = []
ts = now_iso()
templates = {
ROOT / "config.json": {
"default_exchange": "bybit",
"default_quote_currency": "USDT",
"timezone": "Asia/Shanghai",
"preferred_chains": ["solana", "base"],
"created_at": ts,
"updated_at": ts,
},
ROOT / "accounts.json": {
"accounts": []
},
ROOT / "positions.json": {
"positions": []
},
ROOT / "watchlist.json": {
"watchlist": []
},
ROOT / "notes.json": {
"notes": []
},
}
for path, payload in templates.items():
if ensure_file(path, payload):
created.append(str(path))
print(json.dumps({
"root": str(ROOT),
"created": created,
"cache_dir": str(CACHE_DIR),
}, ensure_ascii=False, indent=2))
if __name__ == "__main__":
main()


@@ -1,107 +0,0 @@
#!/usr/bin/env python3
"""Coin Hunter structured logger."""
import json
import traceback
from datetime import datetime, timezone, timedelta
from .runtime import get_runtime_paths
LOG_DIR = get_runtime_paths().logs_dir
SCHEMA_VERSION = 2
CST = timezone(timedelta(hours=8))
def bj_now():
return datetime.now(CST)
def ensure_dir():
LOG_DIR.mkdir(parents=True, exist_ok=True)
def _append_jsonl(prefix: str, payload: dict):
ensure_dir()
date_str = bj_now().strftime("%Y%m%d")
log_file = LOG_DIR / f"{prefix}_{date_str}.jsonl"
with open(log_file, "a", encoding="utf-8") as f:
f.write(json.dumps(payload, ensure_ascii=False) + "\n")
def log_event(prefix: str, payload: dict):
entry = {
"schema_version": SCHEMA_VERSION,
"timestamp": bj_now().isoformat(),
**payload,
}
_append_jsonl(prefix, entry)
return entry
def log_decision(data: dict):
return log_event("decisions", data)
def log_trade(action: str, symbol: str, qty: float | None = None, amount_usdt: float | None = None,
price: float | None = None, note: str = "", **extra):
payload = {
"action": action,
"symbol": symbol,
"qty": qty,
"amount_usdt": amount_usdt,
"price": price,
"note": note,
**extra,
}
return log_event("trades", payload)
def log_snapshot(market_data: dict, note: str = "", **extra):
return log_event("snapshots", {"market_data": market_data, "note": note, **extra})
def log_error(where: str, error: Exception | str, **extra):
payload = {
"where": where,
"error_type": error.__class__.__name__ if isinstance(error, Exception) else "Error",
"error": str(error),
"traceback": traceback.format_exc() if isinstance(error, Exception) else None,
**extra,
}
return log_event("errors", payload)
def get_logs_by_date(log_type: str, date_str: str | None = None) -> list:
if date_str is None:
date_str = bj_now().strftime("%Y%m%d")
log_file = LOG_DIR / f"{log_type}_{date_str}.jsonl"
if not log_file.exists():
return []
entries = []
with open(log_file, "r", encoding="utf-8") as f:
for line in f:
line = line.strip()
if not line:
continue
try:
entries.append(json.loads(line))
except json.JSONDecodeError:
continue
return entries
def get_logs_last_n_hours(log_type: str, n_hours: int = 1) -> list:
now = bj_now()
cutoff = now - timedelta(hours=n_hours)
entries = []
for offset in [0, -1]:
date_str = (now + timedelta(days=offset)).strftime("%Y%m%d")
for entry in get_logs_by_date(log_type, date_str):
try:
ts = datetime.fromisoformat(entry["timestamp"])
except Exception:
continue
if ts >= cutoff:
entries.append(entry)
entries.sort(key=lambda x: x.get("timestamp", ""))
return entries
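The logger above commits to an append-only JSONL layout: one JSON object per line, and readers silently skip blank or corrupt lines rather than failing the whole file. A self-contained sketch of the same append/read round-trip (file name here is invented for illustration):

```python
import json
import tempfile
from pathlib import Path

def append_jsonl(log_file: Path, payload: dict) -> None:
    # One JSON object per line; appends keep earlier records untouched.
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(payload, ensure_ascii=False) + "\n")

def read_jsonl(log_file: Path) -> list:
    entries = []
    for line in log_file.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            entries.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # tolerate partial or corrupt lines, as the logger does
    return entries

with tempfile.TemporaryDirectory() as tmp:
    log = Path(tmp) / "trades_20260416.jsonl"
    append_jsonl(log, {"action": "buy", "symbol": "BTCUSDT"})
    append_jsonl(log, {"action": "sell", "symbol": "ETHUSDT"})
    with open(log, "a", encoding="utf-8") as f:
        f.write("{broken\n")  # simulate a torn write
    entries = read_jsonl(log)
```

The torn line is dropped on read, so the two valid trade records survive intact.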


@@ -1,243 +0,0 @@
#!/usr/bin/env python3
import argparse
import json
import os
import sys
import urllib.parse
import urllib.request
DEFAULT_TIMEOUT = 20
def fetch_json(url, headers=None, timeout=DEFAULT_TIMEOUT):
merged_headers = {
"Accept": "application/json",
"User-Agent": "Mozilla/5.0 (compatible; OpenClaw Coin Hunter/1.0)",
}
if headers:
merged_headers.update(headers)
req = urllib.request.Request(url, headers=merged_headers)
with urllib.request.urlopen(req, timeout=timeout) as resp:
data = resp.read()
return json.loads(data.decode("utf-8"))
def print_json(data):
print(json.dumps(data, ensure_ascii=False, indent=2))
def bybit_ticker(symbol: str):
url = (
"https://api.bybit.com/v5/market/tickers?category=spot&symbol="
+ urllib.parse.quote(symbol.upper())
)
payload = fetch_json(url)
items = payload.get("result", {}).get("list", [])
if not items:
raise SystemExit(f"No Bybit spot ticker found for {symbol}")
item = items[0]
out = {
"provider": "bybit",
"symbol": symbol.upper(),
"lastPrice": item.get("lastPrice"),
"price24hPcnt": item.get("price24hPcnt"),
"highPrice24h": item.get("highPrice24h"),
"lowPrice24h": item.get("lowPrice24h"),
"turnover24h": item.get("turnover24h"),
"volume24h": item.get("volume24h"),
"bid1Price": item.get("bid1Price"),
"ask1Price": item.get("ask1Price"),
}
print_json(out)
def bybit_klines(symbol: str, interval: str, limit: int):
params = urllib.parse.urlencode({
"category": "spot",
"symbol": symbol.upper(),
"interval": interval,
"limit": str(limit),
})
url = f"https://api.bybit.com/v5/market/kline?{params}"
payload = fetch_json(url)
rows = payload.get("result", {}).get("list", [])
out = {
"provider": "bybit",
"symbol": symbol.upper(),
"interval": interval,
"candles": [
{
"startTime": r[0],
"open": r[1],
"high": r[2],
"low": r[3],
"close": r[4],
"volume": r[5],
"turnover": r[6],
}
for r in rows
],
}
print_json(out)
def dexscreener_search(query: str):
url = "https://api.dexscreener.com/latest/dex/search/?q=" + urllib.parse.quote(query)
payload = fetch_json(url)
pairs = payload.get("pairs") or []
out = []
for p in pairs[:10]:
out.append({
"chainId": p.get("chainId"),
"dexId": p.get("dexId"),
"pairAddress": p.get("pairAddress"),
"url": p.get("url"),
"baseToken": p.get("baseToken"),
"quoteToken": p.get("quoteToken"),
"priceUsd": p.get("priceUsd"),
"liquidityUsd": (p.get("liquidity") or {}).get("usd"),
"fdv": p.get("fdv"),
"marketCap": p.get("marketCap"),
"volume24h": (p.get("volume") or {}).get("h24"),
"buys24h": ((p.get("txns") or {}).get("h24") or {}).get("buys"),
"sells24h": ((p.get("txns") or {}).get("h24") or {}).get("sells"),
})
print_json({"provider": "dexscreener", "query": query, "pairs": out})
def dexscreener_token(chain: str, address: str):
url = f"https://api.dexscreener.com/tokens/v1/{urllib.parse.quote(chain)}/{urllib.parse.quote(address)}"
payload = fetch_json(url)
pairs = payload if isinstance(payload, list) else payload.get("pairs") or []
out = []
for p in pairs[:10]:
out.append({
"chainId": p.get("chainId"),
"dexId": p.get("dexId"),
"pairAddress": p.get("pairAddress"),
"baseToken": p.get("baseToken"),
"quoteToken": p.get("quoteToken"),
"priceUsd": p.get("priceUsd"),
"liquidityUsd": (p.get("liquidity") or {}).get("usd"),
"fdv": p.get("fdv"),
"marketCap": p.get("marketCap"),
"volume24h": (p.get("volume") or {}).get("h24"),
})
print_json({"provider": "dexscreener", "chain": chain, "address": address, "pairs": out})
def coingecko_search(query: str):
url = "https://api.coingecko.com/api/v3/search?query=" + urllib.parse.quote(query)
payload = fetch_json(url)
coins = payload.get("coins") or []
out = []
for c in coins[:10]:
out.append({
"id": c.get("id"),
"name": c.get("name"),
"symbol": c.get("symbol"),
"marketCapRank": c.get("market_cap_rank"),
"thumb": c.get("thumb"),
})
print_json({"provider": "coingecko", "query": query, "coins": out})
def coingecko_coin(coin_id: str):
params = urllib.parse.urlencode({
"localization": "false",
"tickers": "false",
"market_data": "true",
"community_data": "false",
"developer_data": "false",
"sparkline": "false",
})
url = f"https://api.coingecko.com/api/v3/coins/{urllib.parse.quote(coin_id)}?{params}"
payload = fetch_json(url)
md = payload.get("market_data") or {}
out = {
"provider": "coingecko",
"id": payload.get("id"),
"symbol": payload.get("symbol"),
"name": payload.get("name"),
"marketCapRank": payload.get("market_cap_rank"),
"currentPriceUsd": (md.get("current_price") or {}).get("usd"),
"marketCapUsd": (md.get("market_cap") or {}).get("usd"),
"fullyDilutedValuationUsd": (md.get("fully_diluted_valuation") or {}).get("usd"),
"totalVolumeUsd": (md.get("total_volume") or {}).get("usd"),
"priceChangePercentage24h": md.get("price_change_percentage_24h"),
"priceChangePercentage7d": md.get("price_change_percentage_7d"),
"priceChangePercentage30d": md.get("price_change_percentage_30d"),
"circulatingSupply": md.get("circulating_supply"),
"totalSupply": md.get("total_supply"),
"maxSupply": md.get("max_supply"),
"homepage": (payload.get("links") or {}).get("homepage", [None])[0],
}
print_json(out)
def birdeye_token(address: str):
api_key = os.getenv("BIRDEYE_API_KEY") or os.getenv("BIRDEYE_APIKEY")
if not api_key:
raise SystemExit("Birdeye requires BIRDEYE_API_KEY in the environment")
url = "https://public-api.birdeye.so/defi/token_overview?address=" + urllib.parse.quote(address)
payload = fetch_json(url, headers={
"x-api-key": api_key,
"x-chain": "solana",
})
print_json({"provider": "birdeye", "address": address, "data": payload.get("data")})
def build_parser():
parser = argparse.ArgumentParser(description="Coin Hunter market data probe")
sub = parser.add_subparsers(dest="command", required=True)
p = sub.add_parser("bybit-ticker", help="Fetch Bybit spot ticker")
p.add_argument("symbol")
p = sub.add_parser("bybit-klines", help="Fetch Bybit spot klines")
p.add_argument("symbol")
p.add_argument("--interval", default="60", help="Bybit interval, e.g. 1, 5, 15, 60, 240, D")
p.add_argument("--limit", type=int, default=10)
p = sub.add_parser("dex-search", help="Search DexScreener by query")
p.add_argument("query")
p = sub.add_parser("dex-token", help="Fetch DexScreener token pairs by chain/address")
p.add_argument("chain")
p.add_argument("address")
p = sub.add_parser("gecko-search", help="Search CoinGecko")
p.add_argument("query")
p = sub.add_parser("gecko-coin", help="Fetch CoinGecko coin by id")
p.add_argument("coin_id")
p = sub.add_parser("birdeye-token", help="Fetch Birdeye token overview (Solana)")
p.add_argument("address")
return parser
def main():
parser = build_parser()
args = parser.parse_args()
if args.command == "bybit-ticker":
bybit_ticker(args.symbol)
elif args.command == "bybit-klines":
bybit_klines(args.symbol, args.interval, args.limit)
elif args.command == "dex-search":
dexscreener_search(args.query)
elif args.command == "dex-token":
dexscreener_token(args.chain, args.address)
elif args.command == "gecko-search":
coingecko_search(args.query)
elif args.command == "gecko-coin":
coingecko_coin(args.coin_id)
elif args.command == "birdeye-token":
birdeye_token(args.address)
else:
parser.error("Unknown command")
if __name__ == "__main__":
main()
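Throughout the probe above, nested API fields are read with the `(d.get(key) or {}).get(...)` idiom so that absent keys or explicit JSON nulls degrade to `None` instead of raising `AttributeError`. A tiny illustration of the idiom (the pair payload here is invented, not a real API response):

```python
# A pair record where one nested field is an explicit null and another is present.
pair = {
    "liquidity": None,              # API sometimes returns null here
    "txns": {"h24": {"buys": 12}},
}

# `or {}` turns both a missing key and an explicit None into an empty dict,
# so the chained .get() is always safe.
liquidity_usd = (pair.get("liquidity") or {}).get("usd")
buys_24h = ((pair.get("txns") or {}).get("h24") or {}).get("buys")
```

A plain `pair.get("liquidity", {})` would not help here, since the key exists with value `None`; the `or {}` guard covers both cases.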


@@ -1,16 +0,0 @@
"""Print CoinHunter runtime paths."""
from __future__ import annotations
import json
from .runtime import get_runtime_paths
def main() -> int:
print(json.dumps(get_runtime_paths().as_dict(), ensure_ascii=False, indent=2))
return 0
if __name__ == "__main__":
raise SystemExit(main())


@@ -1,925 +0,0 @@
#!/usr/bin/env python3
import json
import os
import re
import sys
import hashlib
from datetime import datetime, timezone, timedelta
from pathlib import Path
from zoneinfo import ZoneInfo
import ccxt
from .runtime import get_runtime_paths, load_env_file
PATHS = get_runtime_paths()
BASE_DIR = PATHS.root
STATE_DIR = PATHS.state_dir
STATE_FILE = PATHS.precheck_state_file
POSITIONS_FILE = PATHS.positions_file
CONFIG_FILE = PATHS.config_file
ENV_FILE = PATHS.env_file
BASE_PRICE_MOVE_TRIGGER_PCT = 0.025
BASE_PNL_TRIGGER_PCT = 0.03
BASE_PORTFOLIO_MOVE_TRIGGER_PCT = 0.03
BASE_CANDIDATE_SCORE_TRIGGER_RATIO = 1.15
BASE_FORCE_ANALYSIS_AFTER_MINUTES = 180
BASE_COOLDOWN_MINUTES = 45
TOP_CANDIDATES = 10
MIN_ACTIONABLE_USDT = 12.0
MIN_REAL_POSITION_VALUE_USDT = 8.0
BLACKLIST = {"USDC", "BUSD", "TUSD", "FDUSD", "USTC", "PAXG"}
HARD_STOP_PCT = -0.08
HARD_MOON_PCT = 0.25
MIN_CHANGE_PCT = 1.0
MAX_PRICE_CAP = None
HARD_REASON_DEDUP_MINUTES = 15
MAX_PENDING_TRIGGER_MINUTES = 30
MAX_RUN_REQUEST_MINUTES = 20
def utc_now():
return datetime.now(timezone.utc)
def utc_iso():
return utc_now().isoformat()
def parse_ts(value: str | None):
if not value:
return None
try:
ts = datetime.fromisoformat(value)
if ts.tzinfo is None:
ts = ts.replace(tzinfo=timezone.utc)
return ts
except Exception:
return None
def load_json(path: Path, default):
if not path.exists():
return default
try:
return json.loads(path.read_text(encoding="utf-8"))
except Exception:
return default
def load_env():
load_env_file(PATHS)
def load_positions():
return load_json(POSITIONS_FILE, {}).get("positions", [])
def load_state():
return load_json(STATE_FILE, {})
def load_config():
return load_json(CONFIG_FILE, {})
def clear_run_request_fields(state: dict):
state.pop("run_requested_at", None)
state.pop("run_request_note", None)
def sanitize_state_for_stale_triggers(state: dict):
sanitized = dict(state)
notes = []
now = utc_now()
run_requested_at = parse_ts(sanitized.get("run_requested_at"))
last_deep_analysis_at = parse_ts(sanitized.get("last_deep_analysis_at"))
last_triggered_at = parse_ts(sanitized.get("last_triggered_at"))
pending_trigger = bool(sanitized.get("pending_trigger"))
if run_requested_at and last_deep_analysis_at and last_deep_analysis_at >= run_requested_at:
clear_run_request_fields(sanitized)
if pending_trigger and (not last_triggered_at or last_deep_analysis_at >= last_triggered_at):
sanitized["pending_trigger"] = False
sanitized["pending_reasons"] = []
sanitized["last_ack_note"] = (
f"auto-cleared completed trigger at {utc_iso()} because last_deep_analysis_at >= run_requested_at"
)
pending_trigger = False
notes.append(
            f"auto-cleared completed run_requested marker: last deep analysis at {last_deep_analysis_at.isoformat()} >= request time {run_requested_at.isoformat()}"
)
run_requested_at = None
if run_requested_at and now - run_requested_at > timedelta(minutes=MAX_RUN_REQUEST_MINUTES):
clear_run_request_fields(sanitized)
notes.append(
            f"auto-cleared expired run_requested marker: waited {(now - run_requested_at).total_seconds() / 60:.1f} minutes, exceeding the {MAX_RUN_REQUEST_MINUTES}-minute limit"
)
run_requested_at = None
pending_anchor = run_requested_at or last_triggered_at or last_deep_analysis_at
if pending_trigger and pending_anchor and now - pending_anchor > timedelta(minutes=MAX_PENDING_TRIGGER_MINUTES):
sanitized["pending_trigger"] = False
sanitized["pending_reasons"] = []
sanitized["last_ack_note"] = (
f"auto-recovered stale pending trigger at {utc_iso()} after waiting "
f"{(now - pending_anchor).total_seconds() / 60:.1f} minutes"
)
notes.append(
            f"auto-cleared stale pending_trigger: trigger state hung for {(now - pending_anchor).total_seconds() / 60:.1f} minutes, exceeding the {MAX_PENDING_TRIGGER_MINUTES}-minute limit"
)
sanitized["_stale_recovery_notes"] = notes
return sanitized
def save_state(state: dict):
STATE_DIR.mkdir(parents=True, exist_ok=True)
state_to_save = dict(state)
state_to_save.pop("_stale_recovery_notes", None)
STATE_FILE.write_text(json.dumps(state_to_save, indent=2, ensure_ascii=False), encoding="utf-8")
def stable_hash(data) -> str:
payload = json.dumps(data, sort_keys=True, ensure_ascii=False, separators=(",", ":"))
return hashlib.sha1(payload.encode("utf-8")).hexdigest()
def get_exchange():
load_env()
api_key = os.getenv("BINANCE_API_KEY")
secret = os.getenv("BINANCE_API_SECRET")
if not api_key or not secret:
raise RuntimeError("Missing BINANCE_API_KEY or BINANCE_API_SECRET in ~/.hermes/.env")
ex = ccxt.binance({
"apiKey": api_key,
"secret": secret,
"options": {"defaultType": "spot"},
"enableRateLimit": True,
})
ex.load_markets()
return ex
def fetch_ohlcv_batch(ex, symbols: set, timeframe: str, limit: int):
results = {}
for sym in sorted(symbols):
try:
ohlcv = ex.fetch_ohlcv(sym, timeframe=timeframe, limit=limit)
if ohlcv and len(ohlcv) >= 2:
results[sym] = ohlcv
except Exception:
pass
return results
def compute_ohlcv_metrics(ohlcv_1h, ohlcv_4h, current_price, volume_24h=None):
metrics = {}
if ohlcv_1h and len(ohlcv_1h) >= 2:
closes = [c[4] for c in ohlcv_1h]
volumes = [c[5] for c in ohlcv_1h]
metrics["change_1h_pct"] = round((closes[-1] - closes[-2]) / closes[-2] * 100, 2) if closes[-2] != 0 else None
if len(closes) >= 5:
metrics["change_4h_pct"] = round((closes[-1] - closes[-5]) / closes[-5] * 100, 2) if closes[-5] != 0 else None
recent_vol = sum(volumes[-4:]) / 4 if len(volumes) >= 4 else None
metrics["volume_1h_avg"] = round(recent_vol, 2) if recent_vol else None
highs = [c[2] for c in ohlcv_1h[-4:]]
lows = [c[3] for c in ohlcv_1h[-4:]]
metrics["high_4h"] = round(max(highs), 8) if highs else None
metrics["low_4h"] = round(min(lows), 8) if lows else None
if ohlcv_4h and len(ohlcv_4h) >= 2:
closes_4h = [c[4] for c in ohlcv_4h]
volumes_4h = [c[5] for c in ohlcv_4h]
metrics["change_4h_pct_from_4h"] = round((closes_4h[-1] - closes_4h[-2]) / closes_4h[-2] * 100, 2) if closes_4h[-2] != 0 else None
recent_vol_4h = sum(volumes_4h[-2:]) / 2 if len(volumes_4h) >= 2 else None
metrics["volume_4h_avg"] = round(recent_vol_4h, 2) if recent_vol_4h else None
highs_4h = [c[2] for c in ohlcv_4h]
lows_4h = [c[3] for c in ohlcv_4h]
metrics["high_24h_calc"] = round(max(highs_4h), 8) if highs_4h else None
metrics["low_24h_calc"] = round(min(lows_4h), 8) if lows_4h else None
if highs_4h and lows_4h:
avg_price = sum(closes_4h) / len(closes_4h)
metrics["volatility_4h_pct"] = round((max(highs_4h) - min(lows_4h)) / avg_price * 100, 2)
if current_price:
if metrics.get("high_4h"):
metrics["distance_from_4h_high_pct"] = round((metrics["high_4h"] - current_price) / metrics["high_4h"] * 100, 2)
if metrics.get("low_4h"):
metrics["distance_from_4h_low_pct"] = round((current_price - metrics["low_4h"]) / metrics["low_4h"] * 100, 2)
if metrics.get("high_24h_calc"):
metrics["distance_from_24h_high_pct"] = round((metrics["high_24h_calc"] - current_price) / metrics["high_24h_calc"] * 100, 2)
if metrics.get("low_24h_calc"):
metrics["distance_from_24h_low_pct"] = round((current_price - metrics["low_24h_calc"]) / metrics["low_24h_calc"] * 100, 2)
if volume_24h and volume_24h > 0 and metrics.get("volume_1h_avg"):
daily_avg_1h = volume_24h / 24
metrics["volume_1h_multiple"] = round(metrics["volume_1h_avg"] / daily_avg_1h, 2)
if volume_24h and volume_24h > 0 and metrics.get("volume_4h_avg"):
daily_avg_4h = volume_24h / 6
metrics["volume_4h_multiple"] = round(metrics["volume_4h_avg"] / daily_avg_4h, 2)
return metrics
def enrich_candidates_and_positions(global_candidates, candidate_layers, positions_view, tickers, ex):
symbols = set()
for c in global_candidates:
symbols.add(c["symbol"])
for p in positions_view:
sym = p.get("symbol")
if sym:
sym_ccxt = norm_symbol(sym)
symbols.add(sym_ccxt)
ohlcv_1h = fetch_ohlcv_batch(ex, symbols, "1h", 24)
ohlcv_4h = fetch_ohlcv_batch(ex, symbols, "4h", 12)
def _apply(target_list):
for item in target_list:
sym = item.get("symbol")
if not sym:
continue
sym_ccxt = norm_symbol(sym)
v24h = to_float(tickers.get(sym_ccxt, {}).get("quoteVolume"))
metrics = compute_ohlcv_metrics(
ohlcv_1h.get(sym_ccxt),
ohlcv_4h.get(sym_ccxt),
item.get("price") or item.get("last_price"),
volume_24h=v24h,
)
item["metrics"] = metrics
_apply(global_candidates)
for band_list in candidate_layers.values():
_apply(band_list)
_apply(positions_view)
return global_candidates, candidate_layers, positions_view
def regime_from_pct(pct: float | None) -> str:
if pct is None:
return "unknown"
if pct >= 2.0:
return "bullish"
if pct <= -2.0:
return "bearish"
return "neutral"
def to_float(value, default=0.0):
try:
if value is None:
return default
return float(value)
except Exception:
return default
def norm_symbol(symbol: str) -> str:
s = symbol.upper().replace("-", "").replace("_", "")
if "/" in s:
return s
if s.endswith("USDT"):
return s[:-4] + "/USDT"
return s
def get_local_now(config: dict):
tz_name = config.get("timezone") or "Asia/Shanghai"
try:
tz = ZoneInfo(tz_name)
except Exception:
tz = ZoneInfo("Asia/Shanghai")
tz_name = "Asia/Shanghai"
return utc_now().astimezone(tz), tz_name
def session_label(local_dt: datetime) -> str:
hour = local_dt.hour
if 0 <= hour < 7:
return "overnight"
if 7 <= hour < 12:
return "asia-morning"
if 12 <= hour < 17:
return "asia-afternoon"
if 17 <= hour < 21:
return "europe-open"
return "us-session"
def _liquidity_score(volume: float) -> float:
return min(1.0, max(0.0, volume / 50_000_000))
def _breakout_score(price: float, avg_price: float | None) -> float:
if not avg_price or avg_price <= 0:
return 0.0
return (price - avg_price) / avg_price
def top_candidates_from_tickers(tickers: dict):
candidates = []
for symbol, ticker in tickers.items():
if not symbol.endswith("/USDT"):
continue
base = symbol.replace("/USDT", "")
if base in BLACKLIST:
continue
if not re.fullmatch(r"[A-Z0-9]{2,20}", base):
continue
price = to_float(ticker.get("last"))
change_pct = to_float(ticker.get("percentage"))
volume = to_float(ticker.get("quoteVolume"))
high = to_float(ticker.get("high"))
low = to_float(ticker.get("low"))
avg_price = to_float(ticker.get("average"), None)
if price <= 0:
continue
if MAX_PRICE_CAP is not None and price > MAX_PRICE_CAP:
continue
if volume < 500_000:
continue
if change_pct < MIN_CHANGE_PCT:
continue
momentum = change_pct / 10.0
liquidity = _liquidity_score(volume)
breakout = _breakout_score(price, avg_price)
score = round(momentum * 0.5 + liquidity * 0.3 + breakout * 0.2, 4)
band = "major" if price >= 10 else "mid" if price >= 1 else "meme"
distance_from_high = (high - price) / max(high, 1e-9) if high else None
candidates.append({
"symbol": symbol,
"base": base,
"price": round(price, 8),
"change_24h_pct": round(change_pct, 2),
"volume_24h": round(volume, 2),
"breakout_pct": round(breakout * 100, 2),
"high_24h": round(high, 8) if high else None,
"low_24h": round(low, 8) if low else None,
"distance_from_high_pct": round(distance_from_high * 100, 2) if distance_from_high is not None else None,
"score": score,
"band": band,
})
candidates.sort(key=lambda x: x["score"], reverse=True)
global_top = candidates[:TOP_CANDIDATES]
layers = {"major": [], "mid": [], "meme": []}
for c in candidates:
layers[c["band"]].append(c)
for k in layers:
layers[k] = layers[k][:5]
return global_top, layers
def build_snapshot():
config = load_config()
local_dt, tz_name = get_local_now(config)
ex = get_exchange()
positions = load_positions()
tickers = ex.fetch_tickers()
balances = ex.fetch_balance()["free"]
free_usdt = to_float(balances.get("USDT"))
positions_view = []
total_position_value = 0.0
largest_position_value = 0.0
actionable_positions = 0
for pos in positions:
symbol = pos.get("symbol") or ""
sym_ccxt = norm_symbol(symbol)
ticker = tickers.get(sym_ccxt, {})
last = to_float(ticker.get("last"), None)
qty = to_float(pos.get("quantity"))
avg_cost = to_float(pos.get("avg_cost"), None)
value = round(qty * last, 4) if last is not None else None
pnl_pct = round((last - avg_cost) / avg_cost, 4) if last is not None and avg_cost else None
high = to_float(ticker.get("high"))
low = to_float(ticker.get("low"))
distance_from_high = (high - last) / max(high, 1e-9) if high and last else None
if value is not None:
total_position_value += value
largest_position_value = max(largest_position_value, value)
if value >= MIN_REAL_POSITION_VALUE_USDT:
actionable_positions += 1
positions_view.append({
"symbol": symbol,
"base_asset": pos.get("base_asset"),
"quantity": qty,
"avg_cost": avg_cost,
"last_price": last,
"market_value_usdt": value,
"pnl_pct": pnl_pct,
"high_24h": round(high, 8) if high else None,
"low_24h": round(low, 8) if low else None,
"distance_from_high_pct": round(distance_from_high * 100, 2) if distance_from_high is not None else None,
})
btc_pct = to_float((tickers.get("BTC/USDT") or {}).get("percentage"), None)
eth_pct = to_float((tickers.get("ETH/USDT") or {}).get("percentage"), None)
global_candidates, candidate_layers = top_candidates_from_tickers(tickers)
global_candidates, candidate_layers, positions_view = enrich_candidates_and_positions(
global_candidates, candidate_layers, positions_view, tickers, ex
)
leader_score = global_candidates[0]["score"] if global_candidates else 0.0
portfolio_value = round(free_usdt + total_position_value, 4)
volatility_score = round(max(abs(to_float(btc_pct, 0)), abs(to_float(eth_pct, 0))), 2)
position_structure = [
{
"symbol": p.get("symbol"),
"base_asset": p.get("base_asset"),
"quantity": round(to_float(p.get("quantity"), 0), 10),
"avg_cost": to_float(p.get("avg_cost"), None),
}
for p in positions_view
]
snapshot = {
"generated_at": utc_iso(),
"timezone": tz_name,
"local_time": local_dt.isoformat(),
"session": session_label(local_dt),
"free_usdt": round(free_usdt, 4),
"portfolio_value_usdt": portfolio_value,
"largest_position_value_usdt": round(largest_position_value, 4),
"actionable_positions": actionable_positions,
"positions": positions_view,
"positions_hash": stable_hash(position_structure),
"top_candidates": global_candidates,
"top_candidates_layers": candidate_layers,
"candidates_hash": stable_hash({"global": global_candidates, "layers": candidate_layers}),
"market_regime": {
"btc_24h_pct": round(btc_pct, 2) if btc_pct is not None else None,
"btc_regime": regime_from_pct(btc_pct),
"eth_24h_pct": round(eth_pct, 2) if eth_pct is not None else None,
"eth_regime": regime_from_pct(eth_pct),
"volatility_score": volatility_score,
"leader_score": round(leader_score, 4),
},
}
snapshot["snapshot_hash"] = stable_hash({
"portfolio_value_usdt": snapshot["portfolio_value_usdt"],
"positions_hash": snapshot["positions_hash"],
"candidates_hash": snapshot["candidates_hash"],
"market_regime": snapshot["market_regime"],
"session": snapshot["session"],
})
return snapshot
def build_adaptive_profile(snapshot: dict):
portfolio_value = snapshot.get("portfolio_value_usdt", 0)
free_usdt = snapshot.get("free_usdt", 0)
session = snapshot.get("session")
market = snapshot.get("market_regime", {})
volatility_score = to_float(market.get("volatility_score"), 0)
leader_score = to_float(market.get("leader_score"), 0)
actionable_positions = int(snapshot.get("actionable_positions") or 0)
largest_position_value = to_float(snapshot.get("largest_position_value_usdt"), 0)
capital_band = "micro" if portfolio_value < 25 else "small" if portfolio_value < 100 else "normal"
session_mode = "quiet" if session in {"overnight", "asia-morning"} else "active"
volatility_mode = "high" if volatility_score >= 2.5 or leader_score >= 120 else "normal"
dust_mode = free_usdt < MIN_ACTIONABLE_USDT and largest_position_value < MIN_REAL_POSITION_VALUE_USDT
price_trigger = BASE_PRICE_MOVE_TRIGGER_PCT
pnl_trigger = BASE_PNL_TRIGGER_PCT
portfolio_trigger = BASE_PORTFOLIO_MOVE_TRIGGER_PCT
candidate_ratio = BASE_CANDIDATE_SCORE_TRIGGER_RATIO
force_minutes = BASE_FORCE_ANALYSIS_AFTER_MINUTES
cooldown_minutes = BASE_COOLDOWN_MINUTES
soft_score_threshold = 2.0
if capital_band == "micro":
price_trigger += 0.02
pnl_trigger += 0.03
portfolio_trigger += 0.04
candidate_ratio += 0.25
force_minutes += 180
cooldown_minutes += 30
soft_score_threshold += 1.0
elif capital_band == "small":
price_trigger += 0.01
pnl_trigger += 0.01
portfolio_trigger += 0.01
candidate_ratio += 0.1
force_minutes += 60
cooldown_minutes += 10
soft_score_threshold += 0.5
if session_mode == "quiet":
price_trigger += 0.01
pnl_trigger += 0.01
portfolio_trigger += 0.01
candidate_ratio += 0.05
soft_score_threshold += 0.5
else:
force_minutes = max(120, force_minutes - 30)
if volatility_mode == "high":
price_trigger = max(0.02, price_trigger - 0.01)
pnl_trigger = max(0.025, pnl_trigger - 0.005)
portfolio_trigger = max(0.025, portfolio_trigger - 0.005)
candidate_ratio = max(1.1, candidate_ratio - 0.1)
cooldown_minutes = max(20, cooldown_minutes - 10)
soft_score_threshold = max(1.0, soft_score_threshold - 0.5)
if dust_mode:
candidate_ratio += 0.3
force_minutes += 180
cooldown_minutes += 30
soft_score_threshold += 1.5
return {
"capital_band": capital_band,
"session_mode": session_mode,
"volatility_mode": volatility_mode,
"dust_mode": dust_mode,
"price_move_trigger_pct": round(price_trigger, 4),
"pnl_trigger_pct": round(pnl_trigger, 4),
"portfolio_move_trigger_pct": round(portfolio_trigger, 4),
"candidate_score_trigger_ratio": round(candidate_ratio, 4),
"force_analysis_after_minutes": int(force_minutes),
"cooldown_minutes": int(cooldown_minutes),
"soft_score_threshold": round(soft_score_threshold, 2),
"new_entries_allowed": free_usdt >= MIN_ACTIONABLE_USDT and not dust_mode,
"switching_allowed": actionable_positions > 0 or portfolio_value >= 25,
}
def _candidate_weight(snapshot: dict, profile: dict) -> float:
if not profile.get("new_entries_allowed"):
return 0.5
if profile.get("volatility_mode") == "high":
return 1.5
if snapshot.get("session") in {"europe-open", "us-session"}:
return 1.25
return 1.0
def analyze_trigger(snapshot: dict, state: dict):
reasons = []
details = list(state.get("_stale_recovery_notes", []))
hard_reasons = []
soft_reasons = []
soft_score = 0.0
profile = build_adaptive_profile(snapshot)
market = snapshot.get("market_regime", {})
now = utc_now()
last_positions_hash = state.get("last_positions_hash")
last_portfolio_value = state.get("last_portfolio_value_usdt")
last_market_regime = state.get("last_market_regime", {})
last_positions_map = state.get("last_positions_map", {})
last_top_candidate = state.get("last_top_candidate")
pending_trigger = bool(state.get("pending_trigger"))
run_requested_at = parse_ts(state.get("run_requested_at"))
last_deep_analysis_at = parse_ts(state.get("last_deep_analysis_at"))
last_triggered_at = parse_ts(state.get("last_triggered_at"))
last_trigger_snapshot_hash = state.get("last_trigger_snapshot_hash")
last_hard_reasons_at = state.get("last_hard_reasons_at", {})
price_trigger = profile["price_move_trigger_pct"]
pnl_trigger = profile["pnl_trigger_pct"]
portfolio_trigger = profile["portfolio_move_trigger_pct"]
candidate_ratio_trigger = profile["candidate_score_trigger_ratio"]
force_minutes = profile["force_analysis_after_minutes"]
cooldown_minutes = profile["cooldown_minutes"]
soft_score_threshold = profile["soft_score_threshold"]
if pending_trigger:
reasons.append("pending-trigger-unacked")
hard_reasons.append("pending-trigger-unacked")
        details.append("deep analysis was triggered previously but completion has not been acknowledged")
if run_requested_at:
        details.append(f"external gate requested an analysis run at {run_requested_at.isoformat()}")
if not last_deep_analysis_at:
reasons.append("first-analysis")
hard_reasons.append("first-analysis")
        details.append("no deep analysis has been recorded yet")
elif now - last_deep_analysis_at >= timedelta(minutes=force_minutes):
reasons.append("stale-analysis")
hard_reasons.append("stale-analysis")
        details.append(f"more than {force_minutes} minutes since the last deep analysis")
if last_positions_hash and snapshot["positions_hash"] != last_positions_hash:
reasons.append("positions-changed")
hard_reasons.append("positions-changed")
        details.append("position structure changed")
if last_portfolio_value not in (None, 0):
portfolio_delta = abs(snapshot["portfolio_value_usdt"] - last_portfolio_value) / max(last_portfolio_value, 1e-9)
if portfolio_delta >= portfolio_trigger:
if portfolio_delta >= 1.0:
reasons.append("portfolio-extreme-move")
hard_reasons.append("portfolio-extreme-move")
                details.append(f"portfolio value moved sharply by {portfolio_delta:.1%} (over 100%); treated as a hard trigger")
else:
reasons.append("portfolio-move")
soft_reasons.append("portfolio-move")
soft_score += 1.0
                details.append(f"portfolio value moved {portfolio_delta:.1%}, threshold {portfolio_trigger:.1%}")
for pos in snapshot["positions"]:
symbol = pos["symbol"]
prev = last_positions_map.get(symbol, {})
cur_price = pos.get("last_price")
prev_price = prev.get("last_price")
cur_pnl = pos.get("pnl_pct")
prev_pnl = prev.get("pnl_pct")
market_value = to_float(pos.get("market_value_usdt"), 0)
actionable_position = market_value >= MIN_REAL_POSITION_VALUE_USDT
if cur_price and prev_price:
price_move = abs(cur_price - prev_price) / max(prev_price, 1e-9)
if price_move >= price_trigger:
reasons.append(f"price-move:{symbol}")
soft_reasons.append(f"price-move:{symbol}")
soft_score += 1.0 if actionable_position else 0.4
                details.append(f"{symbol} price moved {price_move:.1%}, threshold {price_trigger:.1%}")
if cur_pnl is not None and prev_pnl is not None:
pnl_move = abs(cur_pnl - prev_pnl)
if pnl_move >= pnl_trigger:
reasons.append(f"pnl-move:{symbol}")
soft_reasons.append(f"pnl-move:{symbol}")
soft_score += 1.0 if actionable_position else 0.4
                details.append(f"{symbol} PnL moved {pnl_move:.1%}, threshold {pnl_trigger:.1%}")
if cur_pnl is not None:
stop_band = -0.06 if actionable_position else -0.12
take_band = 0.14 if actionable_position else 0.25
if cur_pnl <= stop_band or cur_pnl >= take_band:
reasons.append(f"risk-band:{symbol}")
hard_reasons.append(f"risk-band:{symbol}")
                details.append(f"{symbol} is near an execution threshold, current PnL {cur_pnl:.1%}")
if cur_pnl <= HARD_STOP_PCT:
reasons.append(f"hard-stop:{symbol}")
hard_reasons.append(f"hard-stop:{symbol}")
                details.append(f"{symbol} PnL breached {HARD_STOP_PCT:.1%}, emergency hard trigger")
current_market = snapshot.get("market_regime", {})
if last_market_regime:
if current_market.get("btc_regime") != last_market_regime.get("btc_regime"):
reasons.append("btc-regime-change")
hard_reasons.append("btc-regime-change")
            details.append(f"BTC regime switched from {last_market_regime.get('btc_regime')} to {current_market.get('btc_regime')}")
if current_market.get("eth_regime") != last_market_regime.get("eth_regime"):
reasons.append("eth-regime-change")
hard_reasons.append("eth-regime-change")
            details.append(f"ETH regime switched from {last_market_regime.get('eth_regime')} to {current_market.get('eth_regime')}")
# Candidate hard moon trigger
for cand in snapshot.get("top_candidates", []):
if cand.get("change_24h_pct", 0) >= HARD_MOON_PCT * 100:
reasons.append(f"hard-moon:{cand['symbol']}")
hard_reasons.append(f"hard-moon:{cand['symbol']}")
            details.append(f"candidate {cand['symbol']} is up {cand['change_24h_pct']:.1f}% in 24h, momentum hard trigger")
current_leader = snapshot.get("top_candidates", [{}])[0] if snapshot.get("top_candidates") else None
candidate_weight = _candidate_weight(snapshot, profile)
# Layer leader changes
last_layers = state.get("last_candidates_layers", {})
current_layers = snapshot.get("top_candidates_layers", {})
for band in ("major", "mid", "meme"):
cur_band = current_layers.get(band, [])
prev_band = last_layers.get(band, [])
cur_leader = cur_band[0] if cur_band else None
prev_leader = prev_band[0] if prev_band else None
if cur_leader and prev_leader and cur_leader["symbol"] != prev_leader["symbol"]:
score_ratio = cur_leader.get("score", 0) / max(prev_leader.get("score", 0.0001), 0.0001)
if score_ratio >= candidate_ratio_trigger:
reasons.append(f"new-leader-{band}:{cur_leader['symbol']}")
soft_reasons.append(f"new-leader-{band}:{cur_leader['symbol']}")
soft_score += candidate_weight * 0.7
details.append(
f"{band}-layer leader {cur_leader['symbol']} replaced {prev_leader['symbol']}, score ratio {score_ratio:.2f}"
)
current_leader = snapshot.get("top_candidates", [{}])[0] if snapshot.get("top_candidates") else None
if last_top_candidate and current_leader:
if current_leader.get("symbol") != last_top_candidate.get("symbol"):
score_ratio = current_leader.get("score", 0) / max(last_top_candidate.get("score", 0.0001), 0.0001)
if score_ratio >= candidate_ratio_trigger:
reasons.append("new-leader")
soft_reasons.append("new-leader")
soft_score += candidate_weight
details.append(
f"New candidate leader {current_leader.get('symbol')} overtook the previous top, score ratio {score_ratio:.2f}, threshold {candidate_ratio_trigger:.2f}"
)
elif current_leader and not last_top_candidate:
reasons.append("candidate-leader-init")
soft_reasons.append("candidate-leader-init")
soft_score += candidate_weight
details.append(f"Recorded initial candidate leader {current_leader.get('symbol')}")
# --- adaptive cooldown based on signal change magnitude ---
def _signal_delta() -> float:
delta = 0.0
if last_trigger_snapshot_hash and snapshot.get("snapshot_hash") != last_trigger_snapshot_hash:
delta += 0.5
if snapshot["positions_hash"] != last_positions_hash:
delta += 1.5
for pos in snapshot["positions"]:
symbol = pos["symbol"]
prev = last_positions_map.get(symbol, {})
cur_price = pos.get("last_price")
prev_price = prev.get("last_price")
cur_pnl = pos.get("pnl_pct")
prev_pnl = prev.get("pnl_pct")
if cur_price and prev_price:
if abs(cur_price - prev_price) / max(prev_price, 1e-9) >= 0.02:
delta += 0.5
if cur_pnl is not None and prev_pnl is not None:
if abs(cur_pnl - prev_pnl) >= 0.03:
delta += 0.5
current_leader = snapshot.get("top_candidates", [{}])[0] if snapshot.get("top_candidates") else None
last_leader = state.get("last_top_candidate")
if current_leader and last_leader and current_leader.get("symbol") != last_leader.get("symbol"):
delta += 1.0
current_layers = snapshot.get("top_candidates_layers", {})
last_layers = state.get("last_candidates_layers", {})
for band in ("major", "mid", "meme"):
cur_band = current_layers.get(band, [])
prev_band = last_layers.get(band, [])
cur_l = cur_band[0] if cur_band else None
prev_l = prev_band[0] if prev_band else None
if cur_l and prev_l and cur_l.get("symbol") != prev_l.get("symbol"):
delta += 0.5
if last_market_regime:
if current_market.get("btc_regime") != last_market_regime.get("btc_regime"):
delta += 1.5
if current_market.get("eth_regime") != last_market_regime.get("eth_regime"):
delta += 1.5
if last_portfolio_value not in (None, 0):
portfolio_delta = abs(snapshot["portfolio_value_usdt"] - last_portfolio_value) / max(last_portfolio_value, 1e-9)
if portfolio_delta >= 0.05:
delta += 1.0
# fresh hard reason type not seen in last trigger
last_trigger_hard_types = {r.split(":")[0] for r in (state.get("last_trigger_hard_reasons") or [])}
current_hard_types = {r.split(":")[0] for r in hard_reasons}
if current_hard_types - last_trigger_hard_types:
delta += 2.0
return delta
signal_delta = _signal_delta()
effective_cooldown = cooldown_minutes
if signal_delta < 1.0:
effective_cooldown = max(cooldown_minutes, 90)
elif signal_delta >= 2.5:
effective_cooldown = max(0, cooldown_minutes - 15)
cooldown_active = bool(last_triggered_at and now - last_triggered_at < timedelta(minutes=effective_cooldown))
# Dedup hard reasons within window to avoid repeated model wakeups for the same event
dedup_window = timedelta(minutes=HARD_REASON_DEDUP_MINUTES)
for hr in list(hard_reasons):
last_at = parse_ts(last_hard_reasons_at.get(hr))
if last_at and now - last_at < dedup_window:
hard_reasons.remove(hr)
details.append(f"{hr} fired recently, deduplicated within the {HARD_REASON_DEDUP_MINUTES}-minute window")
hard_trigger = bool(hard_reasons)
if profile.get("dust_mode") and not hard_trigger and soft_score < soft_score_threshold + 1.0:
details.append("Micro-capital/dust-position mode: soft-trigger bar raised to avoid pointless analysis")
if profile.get("dust_mode") and not profile.get("new_entries_allowed") and any(r in {"new-leader", "candidate-leader-init"} for r in soft_reasons):
details.append("Free capital below executable threshold, new candidates are observe-only and do not trigger deep analysis on their own")
soft_score = max(0.0, soft_score - 0.75)
should_analyze = hard_trigger or soft_score >= soft_score_threshold
if cooldown_active and not hard_trigger and should_analyze:
should_analyze = False
details.append(f"Inside the {effective_cooldown}-minute cooldown window, soft trigger recorded without escalation")
if cooldown_active and not hard_trigger and reasons and soft_score < soft_score_threshold:
details.append(f"Inside the {effective_cooldown}-minute cooldown window and soft-signal strength insufficient ({soft_score:.2f} < {soft_score_threshold:.2f})")
status = "deep_analysis_required" if should_analyze else "stable"
compact_lines = [
f"Status: {status}",
f"Portfolio value: ${snapshot['portfolio_value_usdt']:.4f} | Free USDT: ${snapshot['free_usdt']:.4f}",
f"Local session: {snapshot['session']} | Timezone: {snapshot['timezone']}",
f"BTC/ETH: {market.get('btc_regime')} ({market.get('btc_24h_pct')}%), {market.get('eth_regime')} ({market.get('eth_24h_pct')}%) | volatility score {market.get('volatility_score')}",
f"Gating profile: capital={profile['capital_band']}, session={profile['session_mode']}, volatility={profile['volatility_mode']}, dust={profile['dust_mode']}",
f"Thresholds: price={price_trigger:.1%}, pnl={pnl_trigger:.1%}, portfolio={portfolio_trigger:.1%}, candidate={candidate_ratio_trigger:.2f}, cooldown={effective_cooldown}m ({cooldown_minutes}m base), force={force_minutes}m",
f"Soft-signal score: {soft_score:.2f} / {soft_score_threshold:.2f}",
f"Signal delta: {signal_delta:.1f}",
]
if snapshot["positions"]:
compact_lines.append("Positions:")
for pos in snapshot["positions"][:4]:
pnl = pos.get("pnl_pct")
pnl_text = f"{pnl:+.1%}" if pnl is not None else "n/a"
compact_lines.append(
f"- {pos['symbol']}: qty={pos['quantity']}, px={pos.get('last_price')}, pnl={pnl_text}, value=${pos.get('market_value_usdt')}"
)
else:
compact_lines.append("Positions: no spot positions currently held")
if snapshot["top_candidates"]:
compact_lines.append("Candidates:")
for cand in snapshot["top_candidates"]:
compact_lines.append(
f"- {cand['symbol']}: score={cand['score']}, 24h={cand['change_24h_pct']}%, vol=${cand['volume_24h']}"
)
layers = snapshot.get("top_candidates_layers", {})
for band, band_cands in layers.items():
if band_cands:
compact_lines.append(f"{band} layer:")
for cand in band_cands:
compact_lines.append(
f"- {cand['symbol']}: score={cand['score']}, 24h={cand['change_24h_pct']}%, vol=${cand['volume_24h']}"
)
if details:
compact_lines.append("Trigger notes:")
for item in details:
compact_lines.append(f"- {item}")
return {
"generated_at": snapshot["generated_at"],
"status": status,
"should_analyze": should_analyze,
"pending_trigger": pending_trigger,
"run_requested": bool(run_requested_at),
"run_requested_at": run_requested_at.isoformat() if run_requested_at else None,
"cooldown_active": cooldown_active,
"effective_cooldown_minutes": effective_cooldown,
"signal_delta": round(signal_delta, 2),
"reasons": reasons,
"hard_reasons": hard_reasons,
"soft_reasons": soft_reasons,
"soft_score": round(soft_score, 3),
"adaptive_profile": profile,
"portfolio_value_usdt": snapshot["portfolio_value_usdt"],
"free_usdt": snapshot["free_usdt"],
"market_regime": snapshot["market_regime"],
"session": snapshot["session"],
"positions": snapshot["positions"],
"top_candidates": snapshot["top_candidates"],
"top_candidates_layers": layers,
"snapshot_hash": snapshot["snapshot_hash"],
"compact_summary": "\n".join(compact_lines),
"details": details,
}
def update_state_after_observation(state: dict, snapshot: dict, analysis: dict):
new_state = dict(state)
new_state.update({
"last_observed_at": snapshot["generated_at"],
"last_snapshot_hash": snapshot["snapshot_hash"],
"last_positions_hash": snapshot["positions_hash"],
"last_candidates_hash": snapshot["candidates_hash"],
"last_portfolio_value_usdt": snapshot["portfolio_value_usdt"],
"last_market_regime": snapshot["market_regime"],
"last_positions_map": {p["symbol"]: {"last_price": p.get("last_price"), "pnl_pct": p.get("pnl_pct")} for p in snapshot["positions"]},
"last_top_candidate": snapshot["top_candidates"][0] if snapshot["top_candidates"] else None,
"last_candidates_layers": snapshot.get("top_candidates_layers", {}),
"last_adaptive_profile": analysis.get("adaptive_profile", {}),
})
if analysis["should_analyze"]:
new_state["pending_trigger"] = True
new_state["pending_reasons"] = analysis["details"]
new_state["last_triggered_at"] = snapshot["generated_at"]
new_state["last_trigger_snapshot_hash"] = snapshot["snapshot_hash"]
new_state["last_trigger_hard_reasons"] = analysis.get("hard_reasons", [])
new_state["last_trigger_signal_delta"] = analysis.get("signal_delta", 0.0)
# Update hard-reason dedup timestamps and prune old entries
last_hard_reasons_at = dict(state.get("last_hard_reasons_at", {}))
for hr in analysis.get("hard_reasons", []):
last_hard_reasons_at[hr] = snapshot["generated_at"]
cutoff = utc_now() - timedelta(hours=24)
pruned = {
k: v for k, v in last_hard_reasons_at.items()
if parse_ts(v) and parse_ts(v) > cutoff
}
new_state["last_hard_reasons_at"] = pruned
return new_state
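The 24-hour pruning of hard-reason dedup timestamps above can be checked in isolation; a minimal sketch where `prune_old` is a hypothetical helper name and ISO-8601 strings stand in for whatever `parse_ts` accepts:

```python
from datetime import datetime, timedelta, timezone

def prune_old(timestamps: dict, now: datetime, max_age_hours: int = 24) -> dict:
    # Keep only entries newer than the cutoff; unparseable values are dropped,
    # mirroring the parse_ts guard in update_state_after_observation.
    cutoff = now - timedelta(hours=max_age_hours)
    kept = {}
    for key, value in timestamps.items():
        try:
            ts = datetime.fromisoformat(value)
        except (TypeError, ValueError):
            continue
        if ts > cutoff:
            kept[key] = value
    return kept
```

Stale and malformed entries disappear while recent ones survive, which keeps the dedup map from growing without bound.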
def mark_run_requested(note: str = ""):
from .services.precheck_state import mark_run_requested as service_mark_run_requested
return service_mark_run_requested(note)
def ack_analysis(note: str = ""):
from .services.precheck_state import ack_analysis as service_ack_analysis
return service_ack_analysis(note)
def main():
from .services.precheck_service import run
return run(sys.argv[1:])
if __name__ == "__main__":
main()
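The adaptive cooldown rule above (stretch the window when the signal barely changed, shave it when the change is strong) reduces to a small pure function; a sketch using the thresholds from this snippet, with `effective_cooldown` as a hypothetical name:

```python
def effective_cooldown(base_minutes: int, signal_delta: float) -> int:
    # Weak change: stretch the cooldown to at least 90 minutes.
    if signal_delta < 1.0:
        return max(base_minutes, 90)
    # Strong change: take 15 minutes off, never going below zero.
    if signal_delta >= 2.5:
        return max(0, base_minutes - 15)
    return base_minutes
```

For example, `effective_cooldown(30, 0.5)` returns 90 and `effective_cooldown(30, 3.0)` returns 15, while mid-range deltas leave the base cooldown untouched.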


@@ -1,32 +0,0 @@
#!/usr/bin/env python3
import json
import sys
from . import review_engine
def main():
hours = int(sys.argv[1]) if len(sys.argv) > 1 else 12
review = review_engine.generate_review(hours)
compact = {
"review_period_hours": review.get("review_period_hours", hours),
"review_timestamp": review.get("review_timestamp"),
"total_decisions": review.get("total_decisions", 0),
"total_trades": review.get("total_trades", 0),
"total_errors": review.get("total_errors", 0),
"stats": review.get("stats", {}),
"insights": review.get("insights", []),
"recommendations": review.get("recommendations", []),
"decision_quality_top": review.get("decision_quality", [])[:5],
"should_report": bool(
review.get("total_decisions", 0)
or review.get("total_trades", 0)
or review.get("total_errors", 0)
or review.get("insights")
),
}
print(json.dumps(compact, ensure_ascii=False, indent=2))
if __name__ == "__main__":
main()
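The `should_report` flag above collapses to a truthiness check over the period's counters and insights; restated as a standalone sketch (the function name is hypothetical):

```python
def should_report(review: dict) -> bool:
    # Report only when the period produced decisions, trades, errors, or insights.
    return bool(
        review.get("total_decisions", 0)
        or review.get("total_trades", 0)
        or review.get("total_errors", 0)
        or review.get("insights")
    )
```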


@@ -1,312 +0,0 @@
#!/usr/bin/env python3
"""Coin Hunter hourly review engine."""
import json
import os
import sys
from datetime import datetime, timezone, timedelta
from pathlib import Path
import ccxt
from .logger import get_logs_last_n_hours, log_error
from .runtime import get_runtime_paths, load_env_file
PATHS = get_runtime_paths()
ENV_FILE = PATHS.env_file
REVIEW_DIR = PATHS.reviews_dir
CST = timezone(timedelta(hours=8))
def load_env():
load_env_file(PATHS)
def get_exchange():
load_env()
ex = ccxt.binance({
"apiKey": os.getenv("BINANCE_API_KEY"),
"secret": os.getenv("BINANCE_API_SECRET"),
"options": {"defaultType": "spot"},
"enableRateLimit": True,
})
ex.load_markets()
return ex
def ensure_review_dir():
REVIEW_DIR.mkdir(parents=True, exist_ok=True)
def norm_symbol(symbol: str) -> str:
s = symbol.upper().replace("-", "").replace("_", "")
if "/" in s:
return s
if s.endswith("USDT"):
return s[:-4] + "/USDT"
return s
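`norm_symbol` accepts several spellings of the same pair; restating it verbatim with a few expected conversions makes the normalization rules concrete:

```python
def norm_symbol(symbol: str) -> str:
    # Uppercase, strip separators, then re-insert the ccxt-style slash.
    s = symbol.upper().replace("-", "").replace("_", "")
    if "/" in s:
        return s
    if s.endswith("USDT"):
        return s[:-4] + "/USDT"
    return s

assert norm_symbol("btc-usdt") == "BTC/USDT"
assert norm_symbol("ETH/USDT") == "ETH/USDT"
assert norm_symbol("bnb_usdt") == "BNB/USDT"
assert norm_symbol("BTC") == "BTC"  # no quote suffix: returned unchanged
```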
def fetch_current_price(ex, symbol: str):
try:
return float(ex.fetch_ticker(norm_symbol(symbol))["last"])
except Exception:
return None
def analyze_trade(trade: dict, ex) -> dict:
symbol = trade.get("symbol")
price = trade.get("price")
action = trade.get("action", "")
current_price = fetch_current_price(ex, symbol) if symbol else None
pnl_estimate = None
outcome = "neutral"
if price and current_price and symbol:
change_pct = (current_price - float(price)) / float(price) * 100
if action == "BUY":
pnl_estimate = round(change_pct, 2)
outcome = "good" if change_pct > 2 else "bad" if change_pct < -2 else "neutral"
elif action == "SELL_ALL":
pnl_estimate = round(-change_pct, 2)
# Lowered missed threshold: >2% is a missed opportunity in short-term trading
outcome = "good" if change_pct < -2 else "missed" if change_pct > 2 else "neutral"
return {
"timestamp": trade.get("timestamp"),
"symbol": symbol,
"action": action,
"decision_id": trade.get("decision_id"),
"execution_price": price,
"current_price": current_price,
"pnl_estimate_pct": pnl_estimate,
"outcome_assessment": outcome,
}
def analyze_hold_passes(decisions: list, ex) -> list:
"""Check HOLD decisions where an opportunity was explicitly PASSed but later rallied."""
misses = []
for d in decisions:
if d.get("decision") != "HOLD":
continue
analysis = d.get("analysis")
if not isinstance(analysis, dict):
continue
opportunities = analysis.get("opportunities_evaluated", [])
market_snapshot = d.get("market_snapshot", {})
if not opportunities or not market_snapshot:
continue
for op in opportunities:
verdict = op.get("verdict", "")
if "PASS" not in verdict and "pass" not in verdict:
continue
symbol = op.get("symbol", "")
# Try to extract decision-time price from market_snapshot
snap = market_snapshot.get(symbol) or market_snapshot.get(symbol.replace("/", ""))
if not snap:
continue
decision_price = None
if isinstance(snap, dict):
decision_price = float(snap.get("lastPrice", 0)) or float(snap.get("last", 0))
elif isinstance(snap, (int, float, str)):
decision_price = float(snap)
if not decision_price:
continue
current_price = fetch_current_price(ex, symbol)
if not current_price:
continue
change_pct = (current_price - decision_price) / decision_price * 100
if change_pct > 3: # >3% rally after being passed = missed watch
misses.append({
"timestamp": d.get("timestamp"),
"symbol": symbol,
"decision_price": round(decision_price, 8),
"current_price": round(current_price, 8),
"change_pct": round(change_pct, 2),
"verdict_snippet": verdict[:80],
})
return misses
def analyze_cash_misses(decisions: list, ex) -> list:
"""If portfolio was mostly USDT but a watchlist coin rallied >5%, flag it."""
misses = []
watchlist = set()
for d in decisions:
snap = d.get("market_snapshot", {})
if isinstance(snap, dict):
for k in snap.keys():
if k.endswith("USDT"):
watchlist.add(k)
for d in decisions:
ts = d.get("timestamp")
balances = d.get("balances") or d.get("balances_before", {})
if not balances:
continue
total = sum(float(v) if isinstance(v, (int, float, str)) else 0 for v in balances.values())
usdt = float(balances.get("USDT", 0))
if total == 0 or (usdt / total) < 0.9:
continue
# Portfolio mostly cash — check watchlist performance
snap = d.get("market_snapshot", {})
if not isinstance(snap, dict):
continue
for symbol, data in snap.items():
if not symbol.endswith("USDT"):
continue
decision_price = None
if isinstance(data, dict):
decision_price = float(data.get("lastPrice", 0)) or float(data.get("last", 0))
elif isinstance(data, (int, float, str)):
decision_price = float(data)
if not decision_price:
continue
current_price = fetch_current_price(ex, symbol)
if not current_price:
continue
change_pct = (current_price - decision_price) / decision_price * 100
if change_pct > 5:
misses.append({
"timestamp": ts,
"symbol": symbol,
"decision_price": round(decision_price, 8),
"current_price": round(current_price, 8),
"change_pct": round(change_pct, 2),
})
# Deduplicate by symbol keeping the worst miss
seen = {}
for m in misses:
sym = m["symbol"]
if sym not in seen or m["change_pct"] > seen[sym]["change_pct"]:
seen[sym] = m
return list(seen.values())
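The per-symbol deduplication at the end of `analyze_cash_misses` keeps only the largest rally for each symbol; as a standalone sketch (the name `dedup_worst_miss` is hypothetical):

```python
def dedup_worst_miss(misses: list) -> list:
    # Keep, per symbol, the entry with the highest change_pct.
    seen = {}
    for m in misses:
        sym = m["symbol"]
        if sym not in seen or m["change_pct"] > seen[sym]["change_pct"]:
            seen[sym] = m
    return list(seen.values())
```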
def generate_review(hours: int = 1) -> dict:
decisions = get_logs_last_n_hours("decisions", hours)
trades = get_logs_last_n_hours("trades", hours)
errors = get_logs_last_n_hours("errors", hours)
review = {
"review_period_hours": hours,
"review_timestamp": datetime.now(CST).isoformat(),
"total_decisions": len(decisions),
"total_trades": len(trades),
"total_errors": len(errors),
"decision_quality": [],
"stats": {},
"insights": [],
"recommendations": [],
}
if not decisions and not trades:
review["insights"].append("No decisions or trades recorded this period")
return review
ex = get_exchange()
outcomes = {"good": 0, "neutral": 0, "bad": 0, "missed": 0}
pnl_samples = []
for trade in trades:
analysis = analyze_trade(trade, ex)
review["decision_quality"].append(analysis)
outcomes[analysis["outcome_assessment"]] += 1
if analysis["pnl_estimate_pct"] is not None:
pnl_samples.append(analysis["pnl_estimate_pct"])
# New: analyze missed opportunities from HOLD / cash decisions
hold_pass_misses = analyze_hold_passes(decisions, ex)
cash_misses = analyze_cash_misses(decisions, ex)
total_missed = outcomes["missed"] + len(hold_pass_misses) + len(cash_misses)
review["stats"] = {
"good_decisions": outcomes["good"],
"neutral_decisions": outcomes["neutral"],
"bad_decisions": outcomes["bad"],
"missed_opportunities": total_missed,
"missed_sell_all": outcomes["missed"],
"missed_hold_passes": len(hold_pass_misses),
"missed_cash_sits": len(cash_misses),
"avg_estimated_edge_pct": round(sum(pnl_samples) / len(pnl_samples), 2) if pnl_samples else None,
}
if errors:
review["insights"].append(f"{len(errors)} execution/system errors this period; robustness needs priority attention")
if outcomes["bad"] > outcomes["good"]:
review["insights"].append("Recent trade quality is weak; consider lowering trade frequency or raising the entry bar")
if total_missed > 0:
parts = []
if outcomes["missed"]:
parts.append(f"kept rising after sell-all: {outcomes['missed']}")
if hold_pass_misses:
parts.append(f"missed after PASS: {len(hold_pass_misses)}")
if cash_misses:
parts.append(f"missed while sitting in cash: {len(cash_misses)}")
review["insights"].append("Missed opportunities: " + ", ".join(parts) + "; consider loosening trend-following or entry conditions")
if outcomes["good"] >= max(1, outcomes["bad"] + total_missed):
review["insights"].append("Recent decisions are broadly acceptable")
if not trades and decisions:
review["insights"].append("Decisions without fills; likely waiting on the sidelines, min-notional limits, or blocked execution")
if len(trades) < len(decisions) * 0.1 and decisions:
review["insights"].append("Many decisions did not convert to trades; check whether execution gates (min notional/precision/fee buffer) are too strict")
if hold_pass_misses:
for m in hold_pass_misses[:3]:
review["insights"].append(f"PASSed {m['symbol']} during HOLD; it rose {m['change_pct']}% afterwards")
if cash_misses:
for m in cash_misses[:3]:
review["insights"].append(f"{m['symbol']} rose {m['change_pct']}% while the portfolio sat mostly in USDT")
review["recommendations"] = [
"Check first whether min-notional/precision rejections are blocking small-capital execution",
"If estimated edge stays negative for two consecutive review periods, reduce rotation frequency next hour",
"If error logs increase, switch to defensive mode first (hold more USDT)",
]
return review
def save_review(review: dict):
ensure_review_dir()
ts = datetime.now(CST).strftime("%Y%m%d_%H%M%S")
path = REVIEW_DIR / f"review_{ts}.json"
path.write_text(json.dumps(review, indent=2, ensure_ascii=False), encoding="utf-8")
return str(path)
def print_review(review: dict):
print("=" * 50)
print("📊 Coin Hunter hourly review report")
print(f"Review time: {review['review_timestamp']}")
print(f"Period: last {review['review_period_hours']} hours")
print(f"Decisions: {review['total_decisions']} | Trades: {review['total_trades']} | Errors: {review['total_errors']}")
stats = review.get("stats", {})
print("\nDecision quality stats:")
print(f" ✓ good: {stats.get('good_decisions', 0)}")
print(f" ○ neutral: {stats.get('neutral_decisions', 0)}")
print(f" ✗ bad: {stats.get('bad_decisions', 0)}")
print(f" ↗ missed: {stats.get('missed_opportunities', 0)}")
if stats.get("avg_estimated_edge_pct") is not None:
print(f" avg estimated edge: {stats['avg_estimated_edge_pct']}%")
if review.get("insights"):
print("\n💡 Insights:")
for item in review["insights"]:
print(f"- {item}")
if review.get("recommendations"):
print("\n🔧 Recommendations:")
for item in review["recommendations"]:
print(f"- {item}")
print("=" * 50)
def main():
try:
hours = int(sys.argv[1]) if len(sys.argv) > 1 else 1
review = generate_review(hours)
path = save_review(review)
print_review(review)
print(f"Review saved to: {path}")
except Exception as e:
log_error("review_engine", e)
raise
if __name__ == "__main__":
main()


@@ -1,28 +0,0 @@
#!/usr/bin/env python3
"""Rotate external gate log using the user's logrotate config/state."""
import shutil
import subprocess
from .runtime import ensure_runtime_dirs, get_runtime_paths
PATHS = get_runtime_paths()
STATE_DIR = PATHS.state_dir
LOGROTATE_STATUS = PATHS.logrotate_status
LOGROTATE_CONF = PATHS.logrotate_config
LOGS_DIR = PATHS.logs_dir
def main():
ensure_runtime_dirs(PATHS)
logrotate_bin = shutil.which("logrotate") or "/usr/sbin/logrotate"
cmd = [logrotate_bin, "-s", str(LOGROTATE_STATUS), str(LOGROTATE_CONF)]
result = subprocess.run(cmd, capture_output=True, text=True)
if result.stdout.strip():
print(result.stdout.strip())
if result.stderr.strip():
print(result.stderr.strip())
return result.returncode
if __name__ == "__main__":
raise SystemExit(main())


@@ -1,107 +1,510 @@
"""Runtime helpers for CoinHunter V2."""
from __future__ import annotations
import argparse
import csv
import io
import json
import os
import re
import shutil
import subprocess
import sys
import threading
from collections.abc import Iterator
from contextlib import contextmanager
from dataclasses import asdict, dataclass, is_dataclass
from datetime import date, datetime
from pathlib import Path
from typing import Any
try:
import shtab
except Exception: # pragma: no cover
shtab = None # type: ignore[assignment]
@dataclass(frozen=True)
class RuntimePaths:
root: Path
cache_dir: Path
state_dir: Path
logs_dir: Path
reviews_dir: Path
config_file: Path
positions_file: Path
accounts_file: Path
executions_file: Path
watchlist_file: Path
notes_file: Path
positions_lock: Path
executions_lock: Path
precheck_state_file: Path
external_gate_lock: Path
logrotate_config: Path
logrotate_status: Path
hermes_home: Path
env_file: Path
hermes_bin: Path
def as_dict(self) -> dict[str, str]:
return {key: str(value) for key, value in asdict(self).items()}
def _default_coinhunter_home() -> Path:
raw = os.getenv("COINHUNTER_HOME")
return Path(raw).expanduser() if raw else Path.home() / ".coinhunter"
def _default_hermes_home() -> Path:
raw = os.getenv("HERMES_HOME")
return Path(raw).expanduser() if raw else Path.home() / ".hermes"
def get_runtime_paths() -> RuntimePaths:
root = _default_coinhunter_home()
hermes_home = _default_hermes_home()
state_dir = root / "state"
return RuntimePaths(
root=root,
cache_dir=root / "cache",
state_dir=state_dir,
logs_dir=root / "logs",
reviews_dir=root / "reviews",
config_file=root / "config.json",
positions_file=root / "positions.json",
accounts_file=root / "accounts.json",
executions_file=root / "executions.json",
watchlist_file=root / "watchlist.json",
notes_file=root / "notes.json",
positions_lock=root / "positions.lock",
executions_lock=root / "executions.lock",
precheck_state_file=state_dir / "precheck_state.json",
external_gate_lock=state_dir / "external_gate.lock",
logrotate_config=root / "logrotate_external_gate.conf",
logrotate_status=state_dir / "logrotate_external_gate.status",
hermes_home=hermes_home,
env_file=Path(os.getenv("COINHUNTER_ENV_FILE", str(hermes_home / ".env"))).expanduser(),
hermes_bin=Path(os.getenv("HERMES_BIN", str(Path.home() / ".local" / "bin" / "hermes"))).expanduser(),
)
def ensure_runtime_dirs(paths: RuntimePaths | None = None) -> RuntimePaths:
paths = paths or get_runtime_paths()
for directory in (paths.root, paths.cache_dir, paths.state_dir, paths.logs_dir, paths.reviews_dir):
directory.mkdir(parents=True, exist_ok=True)
return paths
def load_env_file(paths: RuntimePaths | None = None) -> Path:
paths = paths or get_runtime_paths()
if paths.env_file.exists():
for line in paths.env_file.read_text(encoding="utf-8").splitlines():
line = line.strip()
if line and not line.startswith("#") and "=" in line:
key, value = line.split("=", 1)
os.environ.setdefault(key.strip(), value.strip())
return paths.env_file
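`load_env_file` uses a deliberately forgiving parser: blank lines and `#` comments are skipped, the first `=` splits key from value, and `os.environ.setdefault` means existing variables win. The parsing itself can be checked in isolation (the helper name `parse_env_lines` is hypothetical):

```python
def parse_env_lines(lines: list) -> dict:
    # First '=' splits key from value; later occurrences of a key are ignored,
    # matching the setdefault semantics of load_env_file.
    found = {}
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, value = line.split("=", 1)
            found.setdefault(key.strip(), value.strip())
    return found
```

Note that values containing `=` survive intact because only the first occurrence splits.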
def json_default(value: Any) -> Any:
if is_dataclass(value) and not isinstance(value, type):
return asdict(value)
if isinstance(value, (datetime, date)):
return value.isoformat()
if isinstance(value, Path):
return str(value)
raise TypeError(f"Object of type {type(value).__name__} is not JSON serializable")
def resolve_hermes_executable(paths: RuntimePaths | None = None) -> str:
paths = paths or get_runtime_paths()
discovered = shutil.which("hermes")
if discovered:
return discovered
return str(paths.hermes_bin)
def print_json(payload: Any) -> None:
print(json.dumps(payload, ensure_ascii=False, indent=2, sort_keys=True, default=json_default))
def mask_secret(value: str | None, *, tail: int = 4) -> str:
if not value:
return ""
# Conventional masking (body reconstructed): hide all but the last `tail` characters.
if len(value) <= tail:
return "*" * len(value)
return "*" * (len(value) - tail) + value[-tail:]
def self_upgrade() -> dict[str, Any]:
if shutil.which("pipx"):
cmd = ["pipx", "upgrade", "coinhunter"]
else:
cmd = [sys.executable, "-m", "pip", "install", "--upgrade", "coinhunter"]
result = subprocess.run(cmd, capture_output=True, text=True)
return {
"command": " ".join(cmd),
"returncode": result.returncode,
"stdout": result.stdout.strip(),
"stderr": result.stderr.strip(),
}
# ---------------------------------------------------------------------------
# TUI / Agent output helpers
# ---------------------------------------------------------------------------
_ANSI_RE = re.compile(r"\033\[[0-9;]*m")
_BOLD = "\033[1m"
_RESET = "\033[0m"
_CYAN = "\033[36m"
_GREEN = "\033[32m"
_YELLOW = "\033[33m"
_RED = "\033[31m"
_DIM = "\033[2m"
def _strip_ansi(text: str) -> str:
return _ANSI_RE.sub("", text)
def _color(text: str, color: str) -> str:
return f"{color}{text}{_RESET}"
def _cell_width(text: str) -> int:
return len(_strip_ansi(text))
def _pad(text: str, width: int, align: str = "left") -> str:
pad = width - _cell_width(text)
if align == "right":
return " " * pad + text
return text + " " * pad
def _fmt_number(value: Any) -> str:
if value is None:
return ""
if isinstance(value, bool):
return "true" if value else "false"
if isinstance(value, (int, float)):
s = f"{value:,.4f}"
s = s.rstrip("0").rstrip(".")
return s
return str(value)
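`_fmt_number` formats numbers with thousands separators at four decimals, then trims trailing zeros so integers render bare; restated standalone (note the `bool` check must precede the numeric check, since `bool` is a subclass of `int`):

```python
def fmt_number(value) -> str:
    # None -> empty string, bools -> lowercase words, numbers -> trimmed 4-decimal format.
    if value is None:
        return ""
    if isinstance(value, bool):
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        return f"{value:,.4f}".rstrip("0").rstrip(".")
    return str(value)
```

For example, `fmt_number(1234.5)` gives `"1,234.5"` and `fmt_number(2)` gives `"2"`.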
def _is_large_dataset(payload: Any, threshold: int = 8) -> bool:
if isinstance(payload, dict):
for value in payload.values():
if isinstance(value, list) and len(value) > threshold:
return True
return False
def _print_compact(payload: dict[str, Any]) -> None:
target_key = None
target_rows: list[Any] = []
for key, value in payload.items():
if isinstance(value, list) and len(value) > len(target_rows):
target_key = key
target_rows = value
if target_rows and isinstance(target_rows[0], dict):
headers = list(target_rows[0].keys())
output = io.StringIO()
writer = csv.writer(output, delimiter="|", lineterminator="\n")
writer.writerow(headers)
for row in target_rows:
writer.writerow([str(row.get(h, "")) for h in headers])
print(f"mode=compact|source={target_key}")
print(output.getvalue().strip())
else:
for key, value in payload.items():
print(f"{key}={value}")
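In agent mode, `_print_compact` serializes the payload's largest list as pipe-delimited CSV under a `mode=compact|source=...` header. A minimal demonstration of the same `csv.writer` configuration (the sample rows are made up):

```python
import csv
import io

rows = [
    {"symbol": "BTCUSDT", "score": 1.2},
    {"symbol": "ETHUSDT", "score": 0.9},
]
headers = list(rows[0].keys())
out = io.StringIO()
# Pipe delimiter and a bare newline terminator, as in _print_compact.
writer = csv.writer(out, delimiter="|", lineterminator="\n")
writer.writerow(headers)
for row in rows:
    writer.writerow([str(row.get(h, "")) for h in headers])
print("mode=compact|source=candidates")
print(out.getvalue().strip())
```

This keeps large tabular payloads one-line-per-row, which is cheaper for a model to consume than indented JSON.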
def _h_line(widths: list[int], left: str, mid: str, right: str) -> str:
parts = ["─" * (w + 2) for w in widths]
return left + mid.join(parts) + right
def _print_box_table(
title: str,
headers: list[str],
rows: list[list[str]],
aligns: list[str] | None = None,
) -> None:
if not rows:
print(f"{_BOLD}{_CYAN}{title}{_RESET}")
print(" (empty)")
return
aligns = aligns or ["left"] * len(headers)
col_widths = [_cell_width(h) for h in headers]
for row in rows:
for i, cell in enumerate(row):
col_widths[i] = max(col_widths[i], _cell_width(cell))
if title:
print(f"{_BOLD}{_CYAN}{title}{_RESET}")
print(_h_line(col_widths, "┌", "┬", "┐"))
header_cells = [_pad(headers[i], col_widths[i], aligns[i]) for i in range(len(headers))]
print("│ " + " │ ".join(header_cells) + " │")
print(_h_line(col_widths, "├", "┼", "┤"))
for row in rows:
cells = [_pad(row[i], col_widths[i], aligns[i]) for i in range(len(row))]
print("│ " + " │ ".join(cells) + " │")
print(_h_line(col_widths, "└", "┴", "┘"))
def _render_tui(payload: Any) -> None:
if not isinstance(payload, dict):
print(str(payload))
return
if "overview" in payload:
overview = payload.get("overview", {})
print(f"\n{_BOLD}{_CYAN} ACCOUNT OVERVIEW {_RESET}")
print(f" Total Equity: {_GREEN}{_fmt_number(overview.get('total_equity_usdt', 0))} USDT{_RESET}")
print(f" Spot Assets: {_fmt_number(overview.get('spot_asset_count', 0))}")
print(f" Positions: {_fmt_number(overview.get('spot_position_count', 0))}")
if payload.get("balances"):
print()
_render_tui({"balances": payload["balances"]})
if payload.get("positions"):
print()
_render_tui({"positions": payload["positions"]})
return
if "balances" in payload:
rows = payload["balances"]
table_rows: list[list[str]] = []
for r in rows:
table_rows.append(
[
r.get("market_type", ""),
r.get("asset", ""),
_fmt_number(r.get("free", 0)),
_fmt_number(r.get("locked", 0)),
_fmt_number(r.get("total", 0)),
_fmt_number(r.get("notional_usdt", 0)),
]
)
_print_box_table(
"BALANCES",
["Market", "Asset", "Free", "Locked", "Total", "Notional (USDT)"],
table_rows,
aligns=["left", "left", "right", "right", "right", "right"],
)
return
if "positions" in payload:
rows = payload["positions"]
table_rows = []
for r in rows:
entry = _fmt_number(r.get("entry_price")) if r.get("entry_price") is not None else ""
pnl = _fmt_number(r.get("unrealized_pnl")) if r.get("unrealized_pnl") is not None else ""
table_rows.append(
[
r.get("market_type", ""),
r.get("symbol", ""),
r.get("side", ""),
_fmt_number(r.get("quantity", 0)),
entry,
_fmt_number(r.get("mark_price", 0)),
_fmt_number(r.get("notional_usdt", 0)),
pnl,
]
)
_print_box_table(
"POSITIONS",
["Market", "Symbol", "Side", "Qty", "Entry", "Mark", "Notional", "PnL"],
table_rows,
aligns=["left", "left", "left", "right", "right", "right", "right", "right"],
)
return
if "tickers" in payload:
rows = payload["tickers"]
table_rows = []
for r in rows:
pct = r.get("price_change_pct", 0)
pct_str = _color(f"{pct:+.2f}%", _GREEN if pct >= 0 else _RED)
table_rows.append(
[
r.get("symbol", ""),
_fmt_number(r.get("last_price", 0)),
pct_str,
_fmt_number(r.get("quote_volume", 0)),
]
)
_print_box_table(
"24H TICKERS",
["Symbol", "Last Price", "Change %", "Quote Volume"],
table_rows,
aligns=["left", "right", "right", "right"],
)
return
if "klines" in payload:
rows = payload["klines"]
print(f"\n{_BOLD}{_CYAN} KLINES {_RESET} interval={payload.get('interval')} limit={payload.get('limit')} count={len(rows)}")
display_rows = rows[:10]
table_rows = []
for r in display_rows:
table_rows.append(
[
r.get("symbol", ""),
str(r.get("open_time", ""))[:10],
_fmt_number(r.get("open", 0)),
_fmt_number(r.get("high", 0)),
_fmt_number(r.get("low", 0)),
_fmt_number(r.get("close", 0)),
_fmt_number(r.get("volume", 0)),
]
)
_print_box_table(
"",
["Symbol", "Time", "Open", "High", "Low", "Close", "Vol"],
table_rows,
aligns=["left", "left", "right", "right", "right", "right", "right"],
)
if len(rows) > 10:
print(f" {_DIM}... and {len(rows) - 10} more rows{_RESET}")
return
if "trade" in payload:
t = payload["trade"]
status = t.get("status", "UNKNOWN")
status_color = _GREEN if status == "FILLED" else _YELLOW if status == "DRY_RUN" else _CYAN
print(f"\n{_BOLD}{_CYAN} TRADE RESULT {_RESET}")
print(f" Market: {t.get('market_type', '').upper()}")
print(f" Symbol: {t.get('symbol', '')}")
print(f" Side: {t.get('side', '')}")
print(f" Type: {t.get('order_type', '')}")
print(f" Status: {_color(status, status_color)}")
print(f" Dry Run: {_fmt_number(t.get('dry_run', False))}")
return
if "recommendations" in payload:
rows = payload["recommendations"]
print(f"\n{_BOLD}{_CYAN} RECOMMENDATIONS {_RESET} count={len(rows)}")
for i, r in enumerate(rows, 1):
score = r.get("score", 0)
action = r.get("action", "")
action_color = _GREEN if action == "add" else _YELLOW if action == "hold" else _RED if action == "exit" else _CYAN
print(f" {i}. {_BOLD}{r.get('symbol', '')}{_RESET} action={_color(action, action_color)} score={score:.4f}")
for reason in r.get("reasons", []):
print(f" · {reason}")
metrics = r.get("metrics", {})
if metrics:
metric_str = " ".join(f"{k}={v}" for k, v in metrics.items())
print(f" {_DIM}{metric_str}{_RESET}")
return
if "command" in payload and "returncode" in payload:
rc = payload.get("returncode", 0)
stdout = payload.get("stdout", "")
stderr = payload.get("stderr", "")
if rc == 0:
print(f"\n{_GREEN}✓{_RESET} Update completed")
else:
print(f"\n{_RED}✗{_RESET} Update failed (exit code {rc})")
if stdout:
for line in stdout.strip().splitlines():
print(f" {line}")
if rc != 0 and stderr:
print(f" {_YELLOW}Details:{_RESET}")
for line in stderr.strip().splitlines():
print(f" {line}")
return
if "created_or_updated" in payload:
print(f"\n{_BOLD}{_CYAN} INITIALIZED {_RESET}")
print(f" Root: {payload.get('root', '')}")
print(f" Config: {payload.get('config_file', '')}")
print(f" Env: {payload.get('env_file', '')}")
print(f" Logs: {payload.get('logs_dir', '')}")
files = payload.get("created_or_updated", [])
if files:
action = "overwritten" if payload.get("force") else "created"
print(f" Files {action}: {', '.join(files)}")
comp = payload.get("completion", {})
if comp.get("installed"):
print(f"\n {_GREEN}✓{_RESET} Shell completions installed for {comp.get('shell', '')}")
print(f" Path: {comp.get('path', '')}")
if comp.get("hint"):
print(f" Hint: {comp.get('hint', '')}")
elif comp.get("reason"):
print(f"\n Shell completions: {comp.get('reason', '')}")
return
# Generic fallback for single-list payloads
if len(payload) == 1:
key, value = next(iter(payload.items()))
if isinstance(value, list) and value and isinstance(value[0], dict):
_render_tui({key: value})
return
# Simple key-value fallback
for key, value in payload.items():
if isinstance(value, str) and "\n" in value:
print(f" {key}:")
for line in value.strip().splitlines():
print(f" {line}")
else:
print(f" {key}: {value}")
def print_output(payload: Any, *, agent: bool = False) -> None:
if agent:
if _is_large_dataset(payload):
_print_compact(payload)
else:
print_json(payload)
else:
_render_tui(payload)
# ---------------------------------------------------------------------------
# Spinner / loading animation
# ---------------------------------------------------------------------------
_SPINNER_FRAMES = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"]
class _SpinnerThread(threading.Thread):
def __init__(self, message: str, interval: float = 0.08) -> None:
super().__init__(daemon=True)
self.message = message
self.interval = interval
self._stop_event = threading.Event()
def run(self) -> None:
i = 0
while not self._stop_event.is_set():
frame = _SPINNER_FRAMES[i % len(_SPINNER_FRAMES)]
sys.stdout.write(f"\r{_CYAN}{frame}{_RESET} {self.message} ")
sys.stdout.flush()
self._stop_event.wait(self.interval)
i += 1
def stop(self) -> None:
self._stop_event.set()
self.join()
sys.stdout.write("\r\033[K")
sys.stdout.flush()
@contextmanager
def with_spinner(message: str, *, enabled: bool = True) -> Iterator[None]:
if not enabled or not sys.stdout.isatty():
yield
return
spinner = _SpinnerThread(message)
spinner.start()
try:
yield
finally:
spinner.stop()
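The spinner context manager above can be exercised in isolation. A condensed, self-contained sketch of the same pattern (the frame set and 0.08 s interval come from the code above; all other names are illustrative):

```python
import sys
import threading
import time
from contextlib import contextmanager

FRAMES = "⠋⠙⠹⠸⠼⠴⠦⠧⠇⠏"

@contextmanager
def with_spinner(message: str, *, enabled: bool = True):
    # Skip the animation entirely when disabled or not attached to a TTY,
    # so agent/piped output stays clean.
    if not enabled or not sys.stdout.isatty():
        yield
        return
    stop = threading.Event()

    def spin() -> None:
        i = 0
        while not stop.is_set():
            sys.stdout.write(f"\r{FRAMES[i % len(FRAMES)]} {message} ")
            sys.stdout.flush()
            stop.wait(0.08)
            i += 1

    worker = threading.Thread(target=spin, daemon=True)
    worker.start()
    try:
        yield
    finally:
        stop.set()
        worker.join()
        sys.stdout.write("\r\033[K")  # clear the spinner line

with with_spinner("fetching tickers"):
    time.sleep(0.2)  # stand-in for a slow API call
```

Because the TTY check happens up front, redirected output (logs, agent mode, CI) never sees carriage returns or ANSI escapes.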
def _detect_shell() -> str:
shell = os.getenv("SHELL", "")
if "zsh" in shell:
return "zsh"
if "bash" in shell:
return "bash"
return ""
def _mask_secret(value: str, tail: int = 4) -> str:  # signature reconstructed from the body; name hypothetical
if len(value) <= tail:
return "*" * len(value)
return "*" * max(4, len(value) - tail) + value[-tail:]
def _zshrc_path() -> Path:
return Path.home() / ".zshrc"
def _bashrc_path() -> Path:
return Path.home() / ".bashrc"
def _rc_contains(rc_path: Path, snippet: str) -> bool:
if not rc_path.exists():
return False
return snippet in rc_path.read_text(encoding="utf-8")
def install_shell_completion(parser: argparse.ArgumentParser) -> dict[str, Any]:
if shtab is None:
return {"shell": None, "installed": False, "reason": "shtab is not installed"}
shell = _detect_shell()
if not shell:
return {"shell": None, "installed": False, "reason": "unable to detect shell from $SHELL"}
script = shtab.complete(parser, shell=shell, preamble="")
installed_path: Path | None = None
hint: str | None = None
if shell == "zsh":
comp_dir = Path.home() / ".zsh" / "completions"
comp_dir.mkdir(parents=True, exist_ok=True)
installed_path = comp_dir / "_coinhunter"
installed_path.write_text(script, encoding="utf-8")
rc_path = _zshrc_path()
fpath_line = "fpath+=(~/.zsh/completions)"
if not _rc_contains(rc_path, fpath_line):
existing = rc_path.read_text(encoding="utf-8") if rc_path.exists() else ""
rc_path.write_text(fpath_line + "\n" + existing, encoding="utf-8")
hint = "Added fpath+=(~/.zsh/completions) to ~/.zshrc; restart your terminal or run 'compinit'"
else:
hint = "Run 'compinit' or restart your terminal to activate completions"
elif shell == "bash":
comp_dir = Path.home() / ".local" / "share" / "bash-completion" / "completions"
comp_dir.mkdir(parents=True, exist_ok=True)
installed_path = comp_dir / "coinhunter"
installed_path.write_text(script, encoding="utf-8")
rc_path = _bashrc_path()
# Use $HOME rather than "~": tilde does not expand inside double quotes in bash.
source_line = '[[ -r "$HOME/.local/share/bash-completion/completions/coinhunter" ]] && . "$HOME/.local/share/bash-completion/completions/coinhunter"'
if not _rc_contains(rc_path, source_line):
existing = rc_path.read_text(encoding="utf-8") if rc_path.exists() else ""
rc_path.write_text(source_line + "\n" + existing, encoding="utf-8")
hint = "Added bash completion source line to ~/.bashrc; restart your terminal"
else:
hint = "Restart your terminal or source ~/.bashrc to activate completions"
return {
"shell": shell,
"installed": True,
"path": str(installed_path) if installed_path else None,
"hint": hint,
}

View File

@@ -1 +1 @@
"""Application services for CoinHunter."""
"""Service layer for CoinHunter V2."""

View File

@@ -0,0 +1,172 @@
"""Account and position services."""
from __future__ import annotations
from dataclasses import asdict, dataclass
from typing import Any
@dataclass
class AssetBalance:
asset: str
free: float
locked: float
total: float
notional_usdt: float
@dataclass
class PositionView:
symbol: str
quantity: float
entry_price: float | None
mark_price: float
notional_usdt: float
side: str
@dataclass
class AccountOverview:
total_equity_usdt: float
spot_equity_usdt: float
spot_asset_count: int
spot_position_count: int
def _spot_price_map(spot_client: Any, quote: str, assets: list[str]) -> dict[str, float]:
symbols = [f"{asset}{quote}" for asset in assets if asset != quote]
price_map = {quote: 1.0}
if not symbols:
return price_map
for item in spot_client.ticker_price(symbols):
symbol = item.get("symbol", "")
if symbol.endswith(quote):
price_map[symbol.removesuffix(quote)] = float(item.get("price", 0.0))
return price_map
def _spot_account_data(spot_client: Any, quote: str) -> tuple[list[dict[str, Any]], list[str], dict[str, float]]:
account = spot_client.account_info()
balances = account.get("balances", [])
assets = [item["asset"] for item in balances if float(item.get("free", 0)) + float(item.get("locked", 0)) > 0]
price_map = _spot_price_map(spot_client, quote, assets)
return balances, assets, price_map
def get_balances(
config: dict[str, Any],
*,
spot_client: Any,
) -> dict[str, Any]:
quote = str(config.get("market", {}).get("default_quote", "USDT")).upper()
rows: list[dict[str, Any]] = []
balances, _, price_map = _spot_account_data(spot_client, quote)
for item in balances:
free = float(item.get("free", 0.0))
locked = float(item.get("locked", 0.0))
total = free + locked
if total <= 0:
continue
asset = item["asset"]
rows.append(
asdict(
AssetBalance(
asset=asset,
free=free,
locked=locked,
total=total,
notional_usdt=total * price_map.get(asset, 0.0),
)
)
)
return {"balances": rows}
def get_positions(
config: dict[str, Any],
*,
spot_client: Any,
) -> dict[str, Any]:
quote = str(config.get("market", {}).get("default_quote", "USDT")).upper()
dust = float(config.get("trading", {}).get("dust_usdt_threshold", 0.0))
rows: list[dict[str, Any]] = []
balances, _, price_map = _spot_account_data(spot_client, quote)
for item in balances:
quantity = float(item.get("free", 0.0)) + float(item.get("locked", 0.0))
if quantity <= 0:
continue
asset = item["asset"]
mark_price = price_map.get(asset, 1.0 if asset == quote else 0.0)
notional = quantity * mark_price
if notional < dust:
continue
rows.append(
asdict(
PositionView(
symbol=quote if asset == quote else f"{asset}{quote}",
quantity=quantity,
entry_price=None,
mark_price=mark_price,
notional_usdt=notional,
side="LONG",
)
)
)
return {"positions": rows}
def get_overview(
config: dict[str, Any],
*,
spot_client: Any,
) -> dict[str, Any]:
quote = str(config.get("market", {}).get("default_quote", "USDT")).upper()
dust = float(config.get("trading", {}).get("dust_usdt_threshold", 0.0))
balances: list[dict[str, Any]] = []
positions: list[dict[str, Any]] = []
spot_balances, _, price_map = _spot_account_data(spot_client, quote)
for item in spot_balances:
free = float(item.get("free", 0.0))
locked = float(item.get("locked", 0.0))
total = free + locked
if total <= 0:
continue
asset = item["asset"]
balances.append(
asdict(
AssetBalance(
asset=asset,
free=free,
locked=locked,
total=total,
notional_usdt=total * price_map.get(asset, 0.0),
)
)
)
mark_price = price_map.get(asset, 1.0 if asset == quote else 0.0)
notional = total * mark_price
if notional >= dust:
positions.append(
asdict(
PositionView(
symbol=quote if asset == quote else f"{asset}{quote}",
quantity=total,
entry_price=None,
mark_price=mark_price,
notional_usdt=notional,
side="LONG",
)
)
)
spot_equity = sum(item["notional_usdt"] for item in balances)
overview = asdict(
AccountOverview(
total_equity_usdt=spot_equity,
spot_equity_usdt=spot_equity,
spot_asset_count=len(balances),
spot_position_count=len(positions),
)
)
return {"overview": overview, "balances": balances, "positions": positions}

View File

@@ -1,125 +0,0 @@
"""Exchange helpers (ccxt, markets, balances, order prep)."""
import math
import os
import ccxt
from ..runtime import get_runtime_paths, load_env_file
from .trade_common import log
PATHS = get_runtime_paths()
def load_env():
load_env_file(PATHS)
def get_exchange():
load_env()
api_key = os.getenv("BINANCE_API_KEY")
secret = os.getenv("BINANCE_API_SECRET")
if not api_key or not secret:
raise RuntimeError("Missing BINANCE_API_KEY or BINANCE_API_SECRET")
ex = ccxt.binance(
{
"apiKey": api_key,
"secret": secret,
"options": {"defaultType": "spot", "createMarketBuyOrderRequiresPrice": False},
"enableRateLimit": True,
}
)
ex.load_markets()
return ex
def norm_symbol(symbol: str) -> str:
s = symbol.upper().replace("-", "").replace("_", "")
if "/" in s:
return s
if s.endswith("USDT"):
return s[:-4] + "/USDT"
raise ValueError(f"Unsupported symbol: {symbol}")
def storage_symbol(symbol: str) -> str:
return norm_symbol(symbol).replace("/", "")
def fetch_balances(ex):
bal = ex.fetch_balance()["free"]
return {k: float(v) for k, v in bal.items() if float(v) > 0}
def build_market_snapshot(ex):
try:
tickers = ex.fetch_tickers()
except Exception:
return {}
snapshot = {}
for sym, t in tickers.items():
if not sym.endswith("/USDT"):
continue
price = t.get("last")
if price is None or float(price) <= 0:
continue
vol = float(t.get("quoteVolume") or 0)
if vol < 200_000:
continue
base = sym.replace("/", "")
snapshot[base] = {
"lastPrice": round(float(price), 8),
"price24hPcnt": round(float(t.get("percentage") or 0), 4),
"highPrice24h": round(float(t.get("high") or 0), 8) if t.get("high") else None,
"lowPrice24h": round(float(t.get("low") or 0), 8) if t.get("low") else None,
"turnover24h": round(float(vol), 2),
}
return snapshot
def market_and_ticker(ex, symbol: str):
sym = norm_symbol(symbol)
market = ex.market(sym)
ticker = ex.fetch_ticker(sym)
return sym, market, ticker
def floor_to_step(value: float, step: float) -> float:
if not step or step <= 0:
return value
return math.floor(value / step) * step
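`floor_to_step` truncates a quantity down to the exchange's lot step (never up, so an order can't exceed the budget). A quick standalone check with illustrative values:

```python
import math

def floor_to_step(value: float, step: float) -> float:
    # A step of 0/None means "no constraint": return the value unchanged.
    if not step or step <= 0:
        return value
    return math.floor(value / step) * step

assert floor_to_step(10.7, 0.5) == 10.5  # truncated down to the nearest step
assert floor_to_step(3.0, 0) == 3.0      # no step -> unchanged
```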
def prepare_buy_quantity(ex, symbol: str, amount_usdt: float):
from .trade_common import USDT_BUFFER_PCT
sym, market, ticker = market_and_ticker(ex, symbol)
ask = float(ticker.get("ask") or ticker.get("last") or 0)
if ask <= 0:
raise RuntimeError(f"{sym}: unable to fetch a valid ask price")
budget = amount_usdt * (1 - USDT_BUFFER_PCT)
raw_qty = budget / ask
qty = float(ex.amount_to_precision(sym, raw_qty))
min_amt = (market.get("limits", {}).get("amount", {}) or {}).get("min") or 0
min_cost = (market.get("limits", {}).get("cost", {}) or {}).get("min") or 0
if min_amt and qty < float(min_amt):
raise RuntimeError(f"{sym}: buy quantity {qty} is below the minimum {min_amt}")
est_cost = qty * ask
if min_cost and est_cost < float(min_cost):
raise RuntimeError(f"{sym}: buy notional ${est_cost:.4f} is below the minimum notional ${float(min_cost):.4f}")
return sym, qty, ask, est_cost
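The budget arithmetic in `prepare_buy_quantity` can be isolated from ccxt. In this sketch the buffer value `USDT_BUFFER_PCT = 0.002` is an assumption for illustration only (the real constant lives in `trade_common`):

```python
# USDT_BUFFER_PCT = 0.002 is an assumed value, not the project's actual constant.
USDT_BUFFER_PCT = 0.002

def raw_buy_qty(amount_usdt: float, ask: float) -> float:
    # Shave a small buffer off the budget so fees/slippage can't overdraw it.
    budget = amount_usdt * (1 - USDT_BUFFER_PCT)
    return budget / ask

qty = raw_buy_qty(100.0, 25.0)
assert abs(qty - 3.992) < 1e-9  # 99.8 USDT of budget at a 25.0 ask
```

The raw quantity is then snapped to the market's precision and checked against the minimum-amount and minimum-notional limits before any order is placed.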
def prepare_sell_quantity(ex, symbol: str, free_qty: float):
sym, market, ticker = market_and_ticker(ex, symbol)
bid = float(ticker.get("bid") or ticker.get("last") or 0)
if bid <= 0:
raise RuntimeError(f"{sym}: unable to fetch a valid bid price")
qty = float(ex.amount_to_precision(sym, free_qty))
min_amt = (market.get("limits", {}).get("amount", {}) or {}).get("min") or 0
min_cost = (market.get("limits", {}).get("cost", {}) or {}).get("min") or 0
if min_amt and qty < float(min_amt):
raise RuntimeError(f"{sym}: sell quantity {qty} is below the minimum {min_amt}")
est_cost = qty * bid
if min_cost and est_cost < float(min_cost):
raise RuntimeError(f"{sym}: sell notional ${est_cost:.4f} is below the minimum notional ${float(min_cost):.4f}")
return sym, qty, bid, est_cost

View File

@@ -1,39 +0,0 @@
"""Execution state helpers (decision deduplication, executions.json)."""
import hashlib
from ..runtime import get_runtime_paths
from .file_utils import load_json_locked, save_json_locked
from .trade_common import bj_now_iso
PATHS = get_runtime_paths()
EXECUTIONS_FILE = PATHS.executions_file
EXECUTIONS_LOCK = PATHS.executions_lock
def default_decision_id(action: str, argv_tail: list[str]) -> str:
from datetime import datetime
from .trade_common import CST
now = datetime.now(CST)
bucket_min = (now.minute // 15) * 15
bucket = now.strftime(f"%Y%m%dT%H{bucket_min:02d}")
raw = f"{bucket}|{action}|{'|'.join(argv_tail)}"
return hashlib.sha1(raw.encode()).hexdigest()[:16]
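The 15-minute bucketing above is what deduplicates repeated decisions: identical action/args inside one bucket hash to the same id. A self-contained check (the original derives `now` from the CST timezone; UTC here is purely for illustration):

```python
import hashlib
from datetime import datetime, timezone

def decision_id(action: str, argv_tail: list, now: datetime) -> str:
    # Floor the minute to a 15-minute bucket, then hash bucket + action + args.
    bucket_min = (now.minute // 15) * 15
    bucket = now.strftime(f"%Y%m%dT%H{bucket_min:02d}")
    raw = f"{bucket}|{action}|{'|'.join(argv_tail)}"
    return hashlib.sha1(raw.encode()).hexdigest()[:16]

t = lambda m: datetime(2026, 4, 16, 20, m, tzinfo=timezone.utc)
assert decision_id("buy", ["BTCUSDT", "100"], t(3)) == decision_id("buy", ["BTCUSDT", "100"], t(14))
assert decision_id("buy", ["BTCUSDT", "100"], t(3)) != decision_id("buy", ["BTCUSDT", "100"], t(16))
```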
def load_executions() -> dict:
return load_json_locked(EXECUTIONS_FILE, EXECUTIONS_LOCK, {"executions": {}}).get("executions", {})
def save_executions(executions: dict):
save_json_locked(EXECUTIONS_FILE, EXECUTIONS_LOCK, {"executions": executions})
def record_execution_state(decision_id: str, payload: dict):
executions = load_executions()
executions[decision_id] = payload
save_executions(executions)
def get_execution_state(decision_id: str):
return load_executions().get(decision_id)

View File

@@ -1,40 +0,0 @@
"""File locking and atomic JSON helpers."""
import fcntl
import json
import os
from contextlib import contextmanager
from pathlib import Path
@contextmanager
def locked_file(path: Path):
path.parent.mkdir(parents=True, exist_ok=True)
with open(path, "a+", encoding="utf-8") as f:
fcntl.flock(f.fileno(), fcntl.LOCK_EX)
f.seek(0)
yield f
f.flush()
os.fsync(f.fileno())
fcntl.flock(f.fileno(), fcntl.LOCK_UN)
def atomic_write_json(path: Path, data: dict):
path.parent.mkdir(parents=True, exist_ok=True)
tmp = path.with_suffix(path.suffix + ".tmp")
tmp.write_text(json.dumps(data, indent=2, ensure_ascii=False), encoding="utf-8")
os.replace(tmp, path)
def load_json_locked(path: Path, lock_path: Path, default):
with locked_file(lock_path):
if not path.exists():
return default
try:
return json.loads(path.read_text(encoding="utf-8"))
except Exception:
return default
def save_json_locked(path: Path, lock_path: Path, data: dict):
with locked_file(lock_path):
atomic_write_json(path, data)
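The write-then-rename pattern above is the standard way to keep JSON state crash-safe. A minimal round-trip of the same helper, copied into a self-contained sketch:

```python
import json
import os
import tempfile
from pathlib import Path

def atomic_write_json(path: Path, data: dict) -> None:
    # Write to a sibling .tmp file first, then os.replace() it into place:
    # on POSIX the rename is atomic, so readers never see a half-written file.
    path.parent.mkdir(parents=True, exist_ok=True)
    tmp = path.with_suffix(path.suffix + ".tmp")
    tmp.write_text(json.dumps(data, indent=2, ensure_ascii=False), encoding="utf-8")
    os.replace(tmp, path)

with tempfile.TemporaryDirectory() as d:
    target = Path(d) / "state.json"
    atomic_write_json(target, {"positions": []})
    restored = json.loads(target.read_text(encoding="utf-8"))
```

Pairing this with the `fcntl.flock` lock file above serializes concurrent writers, while the rename protects readers from torn writes.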

View File

@@ -0,0 +1,143 @@
"""Market data services and symbol normalization."""
from __future__ import annotations
from dataclasses import asdict, dataclass
from typing import Any
def normalize_symbol(symbol: str) -> str:
return symbol.upper().replace("/", "").replace("-", "").replace("_", "").strip()
def normalize_symbols(symbols: list[str]) -> list[str]:
seen: set[str] = set()
normalized: list[str] = []
for symbol in symbols:
value = normalize_symbol(symbol)
if value and value not in seen:
normalized.append(value)
seen.add(value)
return normalized
def base_asset(symbol: str, quote_asset: str) -> str:
symbol = normalize_symbol(symbol)
return symbol[: -len(quote_asset)] if symbol.endswith(quote_asset) else symbol
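The three normalization helpers compose: any separator style collapses to the canonical `BASEQUOTE` form, duplicates drop out, and the quote suffix peels off. Copied verbatim with a few spot checks:

```python
def normalize_symbol(symbol: str) -> str:
    return symbol.upper().replace("/", "").replace("-", "").replace("_", "").strip()

def normalize_symbols(symbols):
    # Dedupe while preserving first-seen order.
    seen, out = set(), []
    for s in symbols:
        v = normalize_symbol(s)
        if v and v not in seen:
            out.append(v)
            seen.add(v)
    return out

def base_asset(symbol: str, quote_asset: str) -> str:
    symbol = normalize_symbol(symbol)
    return symbol[: -len(quote_asset)] if symbol.endswith(quote_asset) else symbol

assert normalize_symbol("btc/usdt") == "BTCUSDT"
assert normalize_symbols(["btc-usdt", "BTC_USDT", "ethusdt"]) == ["BTCUSDT", "ETHUSDT"]
assert base_asset("BTCUSDT", "USDT") == "BTC"
```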
@dataclass
class TickerView:
symbol: str
last_price: float
price_change_pct: float
quote_volume: float
@dataclass
class KlineView:
symbol: str
interval: str
open_time: int
open: float
high: float
low: float
close: float
volume: float
close_time: int
quote_volume: float
def get_tickers(config: dict[str, Any], symbols: list[str], *, spot_client: Any) -> dict[str, Any]:
normalized = normalize_symbols(symbols)
rows = []
for ticker in spot_client.ticker_24h(normalized):
rows.append(
asdict(
TickerView(
symbol=normalize_symbol(ticker["symbol"]),
last_price=float(ticker.get("lastPrice") or ticker.get("last_price") or 0.0),
price_change_pct=float(ticker.get("priceChangePercent") or ticker.get("price_change_percent") or 0.0),
quote_volume=float(ticker.get("quoteVolume") or ticker.get("quote_volume") or 0.0),
)
)
)
return {"tickers": rows}
def get_klines(
config: dict[str, Any],
symbols: list[str],
*,
interval: str,
limit: int,
spot_client: Any,
) -> dict[str, Any]:
normalized = normalize_symbols(symbols)
rows = []
for symbol in normalized:
for item in spot_client.klines(symbol=symbol, interval=interval, limit=limit):
rows.append(
asdict(
KlineView(
symbol=symbol,
interval=interval,
open_time=int(item[0]),
open=float(item[1]),
high=float(item[2]),
low=float(item[3]),
close=float(item[4]),
volume=float(item[5]),
close_time=int(item[6]),
quote_volume=float(item[7]),
)
)
)
return {"interval": interval, "limit": limit, "klines": rows}
def get_scan_universe(
config: dict[str, Any],
*,
spot_client: Any,
symbols: list[str] | None = None,
) -> list[dict[str, Any]]:
market_config = config.get("market", {})
opportunity_config = config.get("opportunity", {})
quote = str(market_config.get("default_quote", "USDT")).upper()
allowlist = set(normalize_symbols(market_config.get("universe_allowlist", [])))
denylist = set(normalize_symbols(market_config.get("universe_denylist", [])))
requested = set(normalize_symbols(symbols or []))
min_quote_volume = float(opportunity_config.get("min_quote_volume", 0.0))
exchange_info = spot_client.exchange_info()
status_map = {normalize_symbol(item["symbol"]): item.get("status", "") for item in exchange_info.get("symbols", [])}
rows: list[dict[str, Any]] = []
for ticker in spot_client.ticker_24h(list(requested) if requested else None):
symbol = normalize_symbol(ticker["symbol"])
if not symbol.endswith(quote):
continue
if allowlist and symbol not in allowlist:
continue
if symbol in denylist:
continue
if requested and symbol not in requested:
continue
if status_map.get(symbol) != "TRADING":
continue
quote_volume = float(ticker.get("quoteVolume") or 0.0)
if quote_volume < min_quote_volume:
continue
rows.append(
{
"symbol": symbol,
"last_price": float(ticker.get("lastPrice") or 0.0),
"price_change_pct": float(ticker.get("priceChangePercent") or 0.0),
"quote_volume": quote_volume,
"high_price": float(ticker.get("highPrice") or 0.0),
"low_price": float(ticker.get("lowPrice") or 0.0),
}
)
rows.sort(key=lambda item: float(item["quote_volume"]), reverse=True)
return rows
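`get_scan_universe` chains several per-ticker filters. Condensed into a single predicate for clarity (allowlist/requested handling omitted; the function name and the volume floor are illustrative, not project defaults):

```python
def passes_filters(symbol: str, status: str, quote_volume: float, *,
                   quote: str = "USDT",
                   denylist: frozenset = frozenset(),
                   min_quote_volume: float = 1_000_000.0) -> bool:
    # A candidate must trade against the quote asset, not be denied,
    # be in TRADING status on the exchange, and clear the liquidity floor.
    return (
        symbol.endswith(quote)
        and symbol not in denylist
        and status == "TRADING"
        and quote_volume >= min_quote_volume
    )

assert passes_filters("BTCUSDT", "TRADING", 5e8)
assert not passes_filters("BTCBUSD", "TRADING", 5e8)       # wrong quote asset
assert not passes_filters("XYZUSDT", "BREAK", 5e8)         # not in TRADING status
assert not passes_filters("LOWUSDT", "TRADING", 50_000.0)  # below liquidity floor
```

Survivors are then sorted by quote volume descending, so the scan spends its kline budget on the most liquid names first.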

View File

@@ -0,0 +1,207 @@
"""Opportunity analysis services."""
from __future__ import annotations
from dataclasses import asdict, dataclass
from statistics import mean
from typing import Any
from ..audit import audit_event
from .account_service import get_positions
from .market_service import base_asset, get_scan_universe, normalize_symbol
@dataclass
class OpportunityRecommendation:
symbol: str
action: str
score: float
reasons: list[str]
metrics: dict[str, float]
def _safe_pct(new: float, old: float) -> float:
if old == 0:
return 0.0
return (new - old) / old
def _score_candidate(closes: list[float], volumes: list[float], ticker: dict[str, Any], weights: dict[str, float], concentration: float) -> tuple[float, dict[str, float]]:
if len(closes) < 2 or not volumes:
return 0.0, {
"trend": 0.0,
"momentum": 0.0,
"breakout": 0.0,
"volume_confirmation": 1.0,
"volatility": 0.0,
"concentration": round(concentration, 4),
}
current = closes[-1]
sma_short = mean(closes[-5:]) if len(closes) >= 5 else current
sma_long = mean(closes[-20:]) if len(closes) >= 20 else mean(closes)
trend = 1.0 if current >= sma_short >= sma_long else -1.0 if current < sma_short < sma_long else 0.0
momentum = (
_safe_pct(closes[-1], closes[-2]) * 0.5
+ (_safe_pct(closes[-1], closes[-5]) * 0.3 if len(closes) >= 5 else 0.0)
+ float(ticker.get("price_change_pct", 0.0)) / 100.0 * 0.2
)
recent_high = max(closes[-20:]) if len(closes) >= 20 else max(closes)
breakout = 1.0 - max((recent_high - current) / recent_high, 0.0)
avg_volume = mean(volumes[:-1]) if len(volumes) > 1 else volumes[-1]
volume_confirmation = volumes[-1] / avg_volume if avg_volume else 1.0
volume_score = min(max(volume_confirmation - 1.0, -1.0), 2.0)
volatility = (max(closes[-10:]) - min(closes[-10:])) / current if len(closes) >= 10 and current else 0.0
score = (
weights.get("trend", 1.0) * trend
+ weights.get("momentum", 1.0) * momentum
+ weights.get("breakout", 0.8) * breakout
+ weights.get("volume", 0.7) * volume_score
- weights.get("volatility_penalty", 0.5) * volatility
- weights.get("position_concentration_penalty", 0.6) * concentration
)
metrics = {
"trend": round(trend, 4),
"momentum": round(momentum, 4),
"breakout": round(breakout, 4),
"volume_confirmation": round(volume_confirmation, 4),
"volatility": round(volatility, 4),
"concentration": round(concentration, 4),
}
return score, metrics
def _action_for(score: float, concentration: float) -> tuple[str, list[str]]:
reasons: list[str] = []
if concentration >= 0.5 and score < 0.4:
reasons.append("position concentration is high")
return "trim", reasons
if score >= 1.5:
reasons.append("trend, momentum, and breakout are aligned")
return "add", reasons
if score >= 0.6:
reasons.append("trend remains constructive")
return "hold", reasons
if score <= -0.2:
reasons.append("momentum and structure have weakened")
return "exit", reasons
reasons.append("signal is mixed and needs confirmation")
return "observe", reasons
def analyze_portfolio(config: dict[str, Any], *, spot_client: Any) -> dict[str, Any]:
quote = str(config.get("market", {}).get("default_quote", "USDT")).upper()
weights = config.get("opportunity", {}).get("weights", {})
positions = get_positions(config, spot_client=spot_client)["positions"]
positions = [item for item in positions if item["symbol"] != quote]
total_notional = sum(item["notional_usdt"] for item in positions) or 1.0
recommendations = []
for position in positions:
symbol = normalize_symbol(position["symbol"])
klines = spot_client.klines(symbol=symbol, interval="1h", limit=24)
closes = [float(item[4]) for item in klines]
volumes = [float(item[5]) for item in klines]
tickers = spot_client.ticker_24h([symbol])
ticker = tickers[0] if tickers else {"priceChangePercent": "0"}
concentration = position["notional_usdt"] / total_notional
score, metrics = _score_candidate(
closes,
volumes,
{
"price_change_pct": float(ticker.get("priceChangePercent") or 0.0),
},
weights,
concentration,
)
action, reasons = _action_for(score, concentration)
recommendations.append(
asdict(
OpportunityRecommendation(
symbol=symbol,
action=action,
score=round(score, 4),
reasons=reasons,
metrics=metrics,
)
)
)
payload = {"recommendations": sorted(recommendations, key=lambda item: item["score"], reverse=True)}
audit_event(
"opportunity_portfolio_generated",
{
"market_type": "spot",
"symbol": None,
"side": None,
"qty": None,
"quote_amount": None,
"order_type": None,
"dry_run": True,
"request_payload": {"mode": "portfolio"},
"response_payload": payload,
"status": "generated",
"error": None,
},
)
return payload
def scan_opportunities(
config: dict[str, Any],
*,
spot_client: Any,
symbols: list[str] | None = None,
) -> dict[str, Any]:
opportunity_config = config.get("opportunity", {})
weights = opportunity_config.get("weights", {})
scan_limit = int(opportunity_config.get("scan_limit", 50))
top_n = int(opportunity_config.get("top_n", 10))
quote = str(config.get("market", {}).get("default_quote", "USDT")).upper()
held_positions = get_positions(config, spot_client=spot_client)["positions"]
concentration_map = {
normalize_symbol(item["symbol"]): float(item["notional_usdt"])
for item in held_positions
}
total_held = sum(concentration_map.values()) or 1.0
universe = get_scan_universe(config, spot_client=spot_client, symbols=symbols)[:scan_limit]
recommendations = []
for ticker in universe:
symbol = normalize_symbol(ticker["symbol"])
klines = spot_client.klines(symbol=symbol, interval="1h", limit=24)
closes = [float(item[4]) for item in klines]
volumes = [float(item[5]) for item in klines]
concentration = concentration_map.get(symbol, 0.0) / total_held
score, metrics = _score_candidate(closes, volumes, ticker, weights, concentration)
action, reasons = _action_for(score, concentration)
if symbol.endswith(quote):
reasons.append(f"base asset {base_asset(symbol, quote)} passed liquidity and tradability filters")
recommendations.append(
asdict(
OpportunityRecommendation(
symbol=symbol,
action=action,
score=round(score, 4),
reasons=reasons,
metrics=metrics,
)
)
)
payload = {"recommendations": sorted(recommendations, key=lambda item: item["score"], reverse=True)[:top_n]}
audit_event(
"opportunity_scan_generated",
{
"market_type": "spot",
"symbol": None,
"side": None,
"qty": None,
"quote_amount": None,
"order_type": None,
"dry_run": True,
"request_payload": {"mode": "scan", "symbols": [normalize_symbol(item) for item in symbols or []]},
"response_payload": payload,
"status": "generated",
"error": None,
},
)
return payload

View File

@@ -1,57 +0,0 @@
"""Portfolio state helpers (positions.json, reconcile with exchange)."""
from ..runtime import get_runtime_paths
from .file_utils import load_json_locked, save_json_locked
from .trade_common import bj_now_iso
PATHS = get_runtime_paths()
POSITIONS_FILE = PATHS.positions_file
POSITIONS_LOCK = PATHS.positions_lock
def load_positions() -> list:
return load_json_locked(POSITIONS_FILE, POSITIONS_LOCK, {"positions": []}).get("positions", [])
def save_positions(positions: list):
save_json_locked(POSITIONS_FILE, POSITIONS_LOCK, {"positions": positions})
def upsert_position(positions: list, position: dict):
sym = position["symbol"]
for i, existing in enumerate(positions):
if existing.get("symbol") == sym:
positions[i] = position
return positions
positions.append(position)
return positions
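`upsert_position` mutates the list in place, replacing by symbol and preserving insertion order. A quick check, with the helper copied verbatim:

```python
def upsert_position(positions: list, position: dict):
    sym = position["symbol"]
    for i, existing in enumerate(positions):
        if existing.get("symbol") == sym:
            positions[i] = position  # replace in place, keep original slot
            return positions
    positions.append(position)
    return positions

book = [{"symbol": "BTCUSDT", "quantity": 1.0}]
upsert_position(book, {"symbol": "BTCUSDT", "quantity": 2.0})
upsert_position(book, {"symbol": "ETHUSDT", "quantity": 3.0})
assert book == [
    {"symbol": "BTCUSDT", "quantity": 2.0},
    {"symbol": "ETHUSDT", "quantity": 3.0},
]
```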
def reconcile_positions_with_exchange(ex, positions: list):
from .exchange_service import fetch_balances
balances = fetch_balances(ex)
existing_by_symbol = {p.get("symbol"): p for p in positions}
reconciled = []
for asset, qty in balances.items():
if asset == "USDT":
continue
if qty <= 0:
continue
sym = f"{asset}USDT"
old = existing_by_symbol.get(sym, {})
reconciled.append(
{
"account_id": old.get("account_id", "binance-main"),
"symbol": sym,
"base_asset": asset,
"quote_asset": "USDT",
"market_type": "spot",
"quantity": qty,
"avg_cost": old.get("avg_cost"),
"opened_at": old.get("opened_at", bj_now_iso()),
"updated_at": bj_now_iso(),
"note": old.get("note", "Reconciled from Binance balances"),
}
)
save_positions(reconciled)
return reconciled, balances

View File

@@ -1,25 +0,0 @@
"""Analysis helpers for precheck."""
from __future__ import annotations
from .. import precheck as precheck_module
def analyze_trigger(snapshot: dict, state: dict) -> dict:
return precheck_module.analyze_trigger(snapshot, state)
def build_failure_payload(exc: Exception) -> dict:
return {
"generated_at": precheck_module.utc_iso(),
"status": "deep_analysis_required",
"should_analyze": True,
"pending_trigger": True,
"cooldown_active": False,
"reasons": ["precheck-error"],
"hard_reasons": ["precheck-error"],
"soft_reasons": [],
"soft_score": 0,
"details": [str(exc)],
"compact_summary": f"预检查失败,转入深度分析兜底: {exc}",
}

View File

@@ -1,30 +0,0 @@
"""Service entrypoint for precheck workflows."""
from __future__ import annotations
import json
import sys
from . import precheck_analysis, precheck_snapshot, precheck_state
def run(argv: list[str] | None = None) -> int:
argv = list(sys.argv[1:] if argv is None else argv)
if argv and argv[0] == "--ack":
precheck_state.ack_analysis(" ".join(argv[1:]).strip())
return 0
if argv and argv[0] == "--mark-run-requested":
precheck_state.mark_run_requested(" ".join(argv[1:]).strip())
return 0
try:
state = precheck_state.sanitize_state_for_stale_triggers(precheck_state.load_state())
snapshot = precheck_snapshot.build_snapshot()
analysis = precheck_analysis.analyze_trigger(snapshot, state)
precheck_state.save_state(precheck_state.update_state_after_observation(state, snapshot, analysis))
print(json.dumps(analysis, ensure_ascii=False, indent=2))
return 0
except Exception as exc:
print(json.dumps(precheck_analysis.build_failure_payload(exc), ensure_ascii=False, indent=2))
return 0

View File

@@ -1,9 +0,0 @@
"""Snapshot construction helpers for precheck."""
from __future__ import annotations
from .. import precheck as precheck_module
def build_snapshot() -> dict:
return precheck_module.build_snapshot()

View File

@@ -1,47 +0,0 @@
"""State helpers for precheck orchestration."""
from __future__ import annotations
import json
from .. import precheck as precheck_module
def load_state() -> dict:
return precheck_module.load_state()
def save_state(state: dict) -> None:
precheck_module.save_state(state)
def sanitize_state_for_stale_triggers(state: dict) -> dict:
return precheck_module.sanitize_state_for_stale_triggers(state)
def update_state_after_observation(state: dict, snapshot: dict, analysis: dict) -> dict:
return precheck_module.update_state_after_observation(state, snapshot, analysis)
def mark_run_requested(note: str = "") -> dict:
state = load_state()
state["run_requested_at"] = precheck_module.utc_iso()
state["run_request_note"] = note
save_state(state)
payload = {"ok": True, "run_requested_at": state["run_requested_at"], "note": note}
print(json.dumps(payload, ensure_ascii=False))
return payload
def ack_analysis(note: str = "") -> dict:
state = load_state()
state["last_deep_analysis_at"] = precheck_module.utc_iso()
state["pending_trigger"] = False
state["pending_reasons"] = []
state["last_ack_note"] = note
state.pop("run_requested_at", None)
state.pop("run_request_note", None)
save_state(state)
payload = {"ok": True, "acked_at": state["last_deep_analysis_at"], "note": note}
print(json.dumps(payload, ensure_ascii=False))
return payload

View File

@@ -1,145 +0,0 @@
"""CLI parser and legacy argument normalization for smart executor."""
import argparse
def build_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(
description="Coin Hunter Smart Executor",
formatter_class=argparse.RawTextHelpFormatter,
epilog=(
"示例:\n"
" python smart_executor.py hold\n"
" python smart_executor.py sell-all ETHUSDT\n"
" python smart_executor.py buy ENJUSDT 100\n"
" python smart_executor.py rebalance PEPEUSDT ETHUSDT\n"
" python smart_executor.py balances\n\n"
"兼容旧调用:\n"
" python smart_executor.py HOLD\n"
" python smart_executor.py --decision HOLD --dry-run\n"
),
)
parser.add_argument("--decision-id", help="Override decision id (otherwise derived automatically)")
parser.add_argument("--analysis", help="Decision analysis text to persist into logs")
parser.add_argument("--reasoning", help="Decision reasoning text to persist into logs")
parser.add_argument("--dry-run", action="store_true", help="Force dry-run mode for this invocation")
subparsers = parser.add_subparsers(dest="command")
subparsers.add_parser("hold", help="Log a HOLD decision without trading")
subparsers.add_parser("balances", help="Print live balances as JSON")
subparsers.add_parser("balance", help="Alias of balances")
subparsers.add_parser("status", help="Print balances + positions + snapshot as JSON")
sell_all = subparsers.add_parser("sell-all", help="Sell all of one symbol")
sell_all.add_argument("symbol")
sell_all_legacy = subparsers.add_parser("sell_all", help=argparse.SUPPRESS)
sell_all_legacy.add_argument("symbol")
buy = subparsers.add_parser("buy", help="Buy symbol with USDT amount")
buy.add_argument("symbol")
buy.add_argument("amount_usdt", type=float)
rebalance = subparsers.add_parser("rebalance", help="Sell one symbol and rotate to another")
rebalance.add_argument("from_symbol")
rebalance.add_argument("to_symbol")
return parser
def normalize_legacy_argv(argv: list[str]) -> list[str]:
if not argv:
return argv
action_aliases = {
"HOLD": ["hold"],
"hold": ["hold"],
"SELL_ALL": ["sell-all"],
"sell_all": ["sell-all"],
"sell-all": ["sell-all"],
"BUY": ["buy"],
"buy": ["buy"],
"REBALANCE": ["rebalance"],
"rebalance": ["rebalance"],
"BALANCE": ["balances"],
"balance": ["balances"],
"BALANCES": ["balances"],
"balances": ["balances"],
"STATUS": ["status"],
"status": ["status"],
}
has_legacy_flag = any(t.startswith("--decision") for t in argv)
if not has_legacy_flag:
for idx, token in enumerate(argv):
if token in action_aliases:
prefix = argv[:idx]
suffix = argv[idx + 1 :]
return prefix + action_aliases[token] + suffix
if argv[0].startswith("-"):
legacy = argparse.ArgumentParser(add_help=False)
legacy.add_argument("--decision")
legacy.add_argument("--symbol")
legacy.add_argument("--from-symbol")
legacy.add_argument("--to-symbol")
legacy.add_argument("--amount-usdt", type=float)
legacy.add_argument("--decision-id")
legacy.add_argument("--analysis")
legacy.add_argument("--reasoning")
legacy.add_argument("--dry-run", action="store_true")
ns, unknown = legacy.parse_known_args(argv)
if ns.decision:
decision = (ns.decision or "").strip().upper()
rebuilt = []
if ns.decision_id:
rebuilt += ["--decision-id", ns.decision_id]
if ns.analysis:
rebuilt += ["--analysis", ns.analysis]
if ns.reasoning:
rebuilt += ["--reasoning", ns.reasoning]
if ns.dry_run:
rebuilt += ["--dry-run"]
if decision == "HOLD":
rebuilt += ["hold"]
elif decision == "SELL_ALL":
if not ns.symbol:
raise RuntimeError("Legacy --decision SELL_ALL requires --symbol")
rebuilt += ["sell-all", ns.symbol]
elif decision == "BUY":
if not ns.symbol or ns.amount_usdt is None:
raise RuntimeError("Legacy --decision BUY requires --symbol and --amount-usdt")
rebuilt += ["buy", ns.symbol, str(ns.amount_usdt)]
elif decision == "REBALANCE":
if not ns.from_symbol or not ns.to_symbol:
raise RuntimeError("Legacy --decision REBALANCE requires --from-symbol and --to-symbol")
rebuilt += ["rebalance", ns.from_symbol, ns.to_symbol]
else:
raise RuntimeError(f"Unsupported legacy decision: {decision}")
return rebuilt + unknown
return argv
def parse_cli_args(argv: list[str]):
parser = build_parser()
normalized = normalize_legacy_argv(argv)
args = parser.parse_args(normalized)
if not args.command:
parser.print_help()
raise SystemExit(1)
if args.command == "sell_all":
args.command = "sell-all"
return args, normalized
def cli_action_args(args, action: str) -> list[str]:
if action == "sell_all":
return [args.symbol]
if action == "buy":
return [args.symbol, str(args.amount_usdt)]
if action == "rebalance":
return [args.from_symbol, args.to_symbol]
return []
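
The alias-normalization pass above can be sketched in isolation. This is a hypothetical standalone miniature, not the repo's module: it rewrites the first recognized legacy action token into its canonical subcommand and leaves every other argv token untouched.

```python
# Hypothetical miniature of normalize_legacy_argv's alias pass.
ACTION_ALIASES = {
    "HOLD": ["hold"],
    "SELL_ALL": ["sell-all"],
    "sell_all": ["sell-all"],
    "BUY": ["buy"],
    "BALANCE": ["balances"],
}

def normalize(argv: list[str]) -> list[str]:
    # Rewrite the first recognized token; keep flags and positionals intact.
    for idx, token in enumerate(argv):
        if token in ACTION_ALIASES:
            return argv[:idx] + ACTION_ALIASES[token] + argv[idx + 1:]
    return argv

print(normalize(["--dry-run", "SELL_ALL", "ETHUSDT"]))  # ['--dry-run', 'sell-all', 'ETHUSDT']
```

The real function additionally skips this pass when a `--decision` flag is present, handing those invocations to the dedicated legacy parser instead.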


@@ -1,128 +0,0 @@
"""Service entrypoint for smart executor workflows."""
from __future__ import annotations
import os
import sys
from ..logger import log_decision, log_error
from .exchange_service import fetch_balances, build_market_snapshot
from .execution_state import default_decision_id, get_execution_state, record_execution_state
from .portfolio_service import load_positions
from .smart_executor_parser import parse_cli_args, cli_action_args
from .trade_common import is_dry_run, log, set_dry_run, bj_now_iso
from .trade_execution import (
command_balances,
command_status,
build_decision_context,
action_sell_all,
action_buy,
action_rebalance,
)
def run(argv: list[str] | None = None) -> int:
argv = list(sys.argv[1:] if argv is None else argv)
args, normalized_argv = parse_cli_args(argv)
action = args.command.replace("-", "_")
argv_tail = cli_action_args(args, action)
decision_id = (
args.decision_id
or os.getenv("DECISION_ID")
or default_decision_id(action, normalized_argv)
)
if args.dry_run:
set_dry_run(True)
previous = get_execution_state(decision_id)
read_only_action = action in {"balance", "balances", "status"}
if previous and previous.get("status") == "success" and not read_only_action:
log(f"⚠️ decision_id={decision_id} already executed successfully; skipping duplicate execution")
return 0
try:
from .exchange_service import get_exchange
ex = get_exchange()
if read_only_action:
if action in {"balance", "balances"}:
command_balances(ex)
else:
command_status(ex)
return 0
decision_context = build_decision_context(ex, action, argv_tail, decision_id)
if args.analysis:
decision_context["analysis"] = args.analysis
elif os.getenv("DECISION_ANALYSIS"):
decision_context["analysis"] = os.getenv("DECISION_ANALYSIS")
if args.reasoning:
decision_context["reasoning"] = args.reasoning
elif os.getenv("DECISION_REASONING"):
decision_context["reasoning"] = os.getenv("DECISION_REASONING")
record_execution_state(
decision_id,
{"status": "pending", "started_at": bj_now_iso(), "action": action, "args": argv_tail},
)
if action == "sell_all":
result = action_sell_all(ex, args.symbol, decision_id, decision_context)
elif action == "buy":
result = action_buy(ex, args.symbol, float(args.amount_usdt), decision_id, decision_context)
elif action == "rebalance":
result = action_rebalance(ex, args.from_symbol, args.to_symbol, decision_id, decision_context)
elif action == "hold":
balances = fetch_balances(ex)
positions = load_positions()
market_snapshot = build_market_snapshot(ex)
log_decision(
{
**decision_context,
"balances_after": balances,
"positions_after": positions,
"market_snapshot": market_snapshot,
"analysis": decision_context.get("analysis", "hold"),
"reasoning": decision_context.get("reasoning", "hold"),
"execution_result": {"status": "hold"},
}
)
log("😴 Decision: keep holding, no action taken")
result = {"status": "hold"}
else:
raise RuntimeError(f"Unknown action: {action}; run with --help for correct CLI usage")
record_execution_state(
decision_id,
{
"status": "success",
"finished_at": bj_now_iso(),
"action": action,
"args": argv_tail,
"result": result,
},
)
log(f"✅ Execution finished decision_id={decision_id}")
return 0
except Exception as exc:
record_execution_state(
decision_id,
{
"status": "failed",
"finished_at": bj_now_iso(),
"action": action,
"args": argv_tail,
"error": str(exc),
},
)
log_error(
"smart_executor",
exc,
decision_id=decision_id,
action=action,
args=argv_tail,
)
log(f"❌ Execution failed: {exc}")
return 1
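
The duplicate-execution guard at the top of `run()` can be illustrated with a small in-memory sketch. These helpers are hypothetical stand-ins; the real `record_execution_state`/`get_execution_state` persist state to disk.

```python
# In-memory sketch of the decision_id idempotency guard (hypothetical helpers).
_executions: dict[str, dict] = {}

def record_execution_state(decision_id: str, state: dict) -> None:
    _executions.setdefault(decision_id, {}).update(state)

def should_skip(decision_id: str) -> bool:
    # Only a recorded "success" blocks re-execution; pending/failed runs may retry.
    previous = _executions.get(decision_id)
    return bool(previous and previous.get("status") == "success")

record_execution_state("buy-btc-001", {"status": "pending"})
print(should_skip("buy-btc-001"))  # False
record_execution_state("buy-btc-001", {"status": "success"})
print(should_skip("buy-btc-001"))  # True
```

Read-only actions (`balances`, `status`) bypass the guard entirely, since repeating them is harmless.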


@@ -1,25 +0,0 @@
"""Common trade utilities (time, logging, constants)."""
import os
from datetime import datetime, timezone, timedelta
CST = timezone(timedelta(hours=8))
_DRY_RUN = {"value": os.getenv("DRY_RUN", "false").lower() == "true"}
USDT_BUFFER_PCT = 0.03
MIN_REMAINING_DUST_USDT = 1.0
def is_dry_run() -> bool:
return _DRY_RUN["value"]
def set_dry_run(value: bool):
_DRY_RUN["value"] = value
def log(msg: str):
print(f"[{datetime.now(CST).strftime('%Y-%m-%d %H:%M:%S')} CST] {msg}")
def bj_now_iso():
return datetime.now(CST).isoformat()
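
The `CST` helpers pin every log timestamp to UTC+8 regardless of the host's locale; converting a known UTC instant makes the offset visible:

```python
from datetime import datetime, timezone, timedelta

CST = timezone(timedelta(hours=8))  # Beijing time: fixed UTC+8, no DST

utc_noon = datetime(2026, 4, 16, 12, 0, tzinfo=timezone.utc)
print(utc_noon.astimezone(CST).isoformat())  # 2026-04-16T20:00:00+08:00
```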


@@ -1,178 +0,0 @@
"""Trade execution actions (buy, sell, rebalance, hold, status)."""
from ..logger import log_decision, log_trade
from .exchange_service import (
fetch_balances,
norm_symbol,
storage_symbol,
build_market_snapshot,
prepare_buy_quantity,
prepare_sell_quantity,
)
from .portfolio_service import load_positions, save_positions, upsert_position, reconcile_positions_with_exchange
from .trade_common import is_dry_run, USDT_BUFFER_PCT, log, bj_now_iso
def build_decision_context(ex, action: str, argv_tail: list[str], decision_id: str):
balances = fetch_balances(ex)
positions = load_positions()
return {
"decision_id": decision_id,
"balances_before": balances,
"positions_before": positions,
"decision": action.upper(),
"action_taken": f"{action} {' '.join(argv_tail)}".strip(),
"risk_level": "high" if len(positions) <= 1 else "medium",
"data_sources": ["binance"],
}
def market_sell(ex, symbol: str, qty: float, decision_id: str):
sym, qty, bid, est_cost = prepare_sell_quantity(ex, symbol, qty)
if is_dry_run():
log(f"[DRY RUN] sell {sym} qty {qty}")
return {"id": f"dry-sell-{decision_id}", "symbol": sym, "amount": qty, "price": bid, "cost": est_cost, "status": "closed"}
order = ex.create_market_sell_order(sym, qty, params={"newClientOrderId": f"ch-{decision_id}-sell"})
return order
def market_buy(ex, symbol: str, amount_usdt: float, decision_id: str):
sym, qty, ask, est_cost = prepare_buy_quantity(ex, symbol, amount_usdt)
if is_dry_run():
log(f"[DRY RUN] buy {sym} for ${est_cost:.4f} qty {qty}")
return {"id": f"dry-buy-{decision_id}", "symbol": sym, "amount": qty, "price": ask, "cost": est_cost, "status": "closed"}
order = ex.create_market_buy_order(sym, qty, params={"newClientOrderId": f"ch-{decision_id}-buy"})
return order
def action_sell_all(ex, symbol: str, decision_id: str, decision_context: dict):
balances_before = fetch_balances(ex)
base = norm_symbol(symbol).split("/")[0]
qty = float(balances_before.get(base, 0))
if qty <= 0:
raise RuntimeError(f"{base} balance is 0; nothing to sell")
order = market_sell(ex, symbol, qty, decision_id)
positions_after, balances_after = (
reconcile_positions_with_exchange(ex, load_positions())
if not is_dry_run()
else (load_positions(), balances_before)
)
log_trade(
"SELL_ALL",
norm_symbol(symbol),
qty=order.get("amount"),
price=order.get("price"),
amount_usdt=order.get("cost"),
note="Smart executor sell_all",
decision_id=decision_id,
order_id=order.get("id"),
status=order.get("status"),
balances_before=balances_before,
balances_after=balances_after,
)
log_decision(
{
**decision_context,
"balances_after": balances_after,
"positions_after": positions_after,
"execution_result": {"order": order},
"analysis": decision_context.get("analysis", ""),
"reasoning": decision_context.get("reasoning", "sell_all execution"),
}
)
return order
def action_buy(ex, symbol: str, amount_usdt: float, decision_id: str, decision_context: dict, simulated_usdt_balance: float | None = None):
balances_before = fetch_balances(ex) if simulated_usdt_balance is None else {"USDT": simulated_usdt_balance}
usdt = float(balances_before.get("USDT", 0))
if usdt < amount_usdt:
raise RuntimeError(f"Insufficient USDT balance (${usdt:.4f} < ${amount_usdt:.4f})")
order = market_buy(ex, symbol, amount_usdt, decision_id)
positions_existing = load_positions()
sym_store = storage_symbol(symbol)
price = float(order.get("price") or 0)
qty = float(order.get("amount") or 0)
position = {
"account_id": "binance-main",
"symbol": sym_store,
"base_asset": norm_symbol(symbol).split("/")[0],
"quote_asset": "USDT",
"market_type": "spot",
"quantity": qty,
"avg_cost": price,
"opened_at": bj_now_iso(),
"updated_at": bj_now_iso(),
"note": "Smart executor entry",
}
upsert_position(positions_existing, position)
if is_dry_run():
balances_after = balances_before
positions_after = positions_existing
else:
save_positions(positions_existing)
positions_after, balances_after = reconcile_positions_with_exchange(ex, positions_existing)
for p in positions_after:
if p["symbol"] == sym_store and price:
p["avg_cost"] = price
p["updated_at"] = bj_now_iso()
save_positions(positions_after)
log_trade(
"BUY",
norm_symbol(symbol),
qty=qty,
amount_usdt=order.get("cost"),
price=price,
note="Smart executor buy",
decision_id=decision_id,
order_id=order.get("id"),
status=order.get("status"),
balances_before=balances_before,
balances_after=balances_after,
)
log_decision(
{
**decision_context,
"balances_after": balances_after,
"positions_after": positions_after,
"execution_result": {"order": order},
"analysis": decision_context.get("analysis", ""),
"reasoning": decision_context.get("reasoning", "buy execution"),
}
)
return order
def action_rebalance(ex, from_symbol: str, to_symbol: str, decision_id: str, decision_context: dict):
sell_order = action_sell_all(ex, from_symbol, decision_id + "s", decision_context)
if is_dry_run():
sell_cost = float(sell_order.get("cost") or 0)
spend = sell_cost * (1 - USDT_BUFFER_PCT)
simulated_usdt = sell_cost
else:
balances = fetch_balances(ex)
usdt = float(balances.get("USDT", 0))
spend = usdt * (1 - USDT_BUFFER_PCT)
simulated_usdt = None
if spend < 5:
raise RuntimeError(f"Post-sell USDT ${spend:.4f} is too low to buy the new coin")
buy_order = action_buy(ex, to_symbol, spend, decision_id + "b", decision_context, simulated_usdt_balance=simulated_usdt)
return {"sell": sell_order, "buy": buy_order}
def command_status(ex):
import json  # emit real JSON, as the "status" subcommand's help text promises
balances = fetch_balances(ex)
positions = load_positions()
market_snapshot = build_market_snapshot(ex)
payload = {
"balances": balances,
"positions": positions,
"market_snapshot": market_snapshot,
}
print(json.dumps(payload, ensure_ascii=False))
return payload
def command_balances(ex):
import json  # emit real JSON, as the "balances" subcommand's help text promises
balances = fetch_balances(ex)
print(json.dumps({"balances": balances}, ensure_ascii=False))
return balances
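
The 3% USDT buffer in `action_rebalance` exists because market sells settle with fees and price drift, so only `(1 - USDT_BUFFER_PCT)` of the proceeds is spent on the buy leg. The arithmetic, using the module's constant plus an illustrative minimum-notional value (the service aborts below $5):

```python
USDT_BUFFER_PCT = 0.03      # keep 3% of sell proceeds unspent as a safety margin
MIN_NOTIONAL_USDT = 5.0     # illustrative floor; action_rebalance refuses smaller buys

sell_cost = 100.0           # USDT received from the sell leg
spend = sell_cost * (1 - USDT_BUFFER_PCT)
print(round(spend, 2))      # 97.0
assert spend >= MIN_NOTIONAL_USDT  # otherwise the rebalance raises before buying
```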


@@ -0,0 +1,150 @@
"""Trade execution services."""
from __future__ import annotations
from dataclasses import asdict, dataclass
from typing import Any
from ..audit import audit_event
from .market_service import normalize_symbol
@dataclass
class TradeIntent:
market_type: str
symbol: str
side: str
order_type: str
qty: float | None
quote_amount: float | None
price: float | None
reduce_only: bool
dry_run: bool
@dataclass
class TradeResult:
market_type: str
symbol: str
side: str
order_type: str
status: str
dry_run: bool
request_payload: dict[str, Any]
response_payload: dict[str, Any]
def _default_dry_run(config: dict[str, Any], dry_run: bool | None) -> bool:
if dry_run is not None:
return dry_run
return bool(config.get("trading", {}).get("dry_run_default", False))
def _trade_log_payload(intent: TradeIntent, payload: dict[str, Any], *, status: str, error: str | None = None) -> dict[str, Any]:
return {
"market_type": intent.market_type,
"symbol": intent.symbol,
"side": intent.side,
"qty": intent.qty,
"quote_amount": intent.quote_amount,
"order_type": intent.order_type,
"dry_run": intent.dry_run,
"request_payload": payload,
"response_payload": {} if error else payload,
"status": status,
"error": error,
}
def execute_spot_trade(
config: dict[str, Any],
*,
side: str,
symbol: str,
qty: float | None,
quote: float | None,
order_type: str,
price: float | None,
dry_run: bool | None,
spot_client: Any,
) -> dict[str, Any]:
normalized_symbol = normalize_symbol(symbol)
order_type = order_type.upper()
side = side.upper()
is_dry_run = _default_dry_run(config, dry_run)
if side == "BUY" and order_type == "MARKET":
if quote is None:
raise RuntimeError("Spot market buy requires --quote")
if qty is not None:
raise RuntimeError("Spot market buy accepts --quote only; do not pass --qty")
if side == "SELL":
if qty is None:
raise RuntimeError("Spot sell requires --qty")
if quote is not None:
raise RuntimeError("Spot sell accepts --qty only; do not pass --quote")
if order_type == "LIMIT" and (qty is None or price is None):
raise RuntimeError("Limit orders require both --qty and --price")
payload: dict[str, Any] = {
"symbol": normalized_symbol,
"side": side,
"type": order_type,
}
if qty is not None:
payload["quantity"] = qty
if quote is not None:
payload["quoteOrderQty"] = quote
if price is not None:
payload["price"] = price
payload["timeInForce"] = "GTC"  # set alongside price: timeInForce is invalid on MARKET orders
intent = TradeIntent(
market_type="spot",
symbol=normalized_symbol,
side=side,
order_type=order_type,
qty=qty,
quote_amount=quote,
price=price,
reduce_only=False,
dry_run=is_dry_run,
)
audit_event("trade_submitted", _trade_log_payload(intent, payload, status="submitted"))
if is_dry_run:
response = {"dry_run": True, "status": "DRY_RUN", "request": payload}
result = asdict(
TradeResult(
market_type="spot",
symbol=normalized_symbol,
side=side,
order_type=order_type,
status="DRY_RUN",
dry_run=True,
request_payload=payload,
response_payload=response,
)
)
audit_event("trade_filled", {**_trade_log_payload(intent, payload, status="DRY_RUN"), "response_payload": response})
return {"trade": result}
try:
response = spot_client.new_order(**payload)
except Exception as exc:
audit_event("trade_failed", _trade_log_payload(intent, payload, status="failed", error=str(exc)))
raise RuntimeError(f"Spot order failed: {exc}") from exc
result = asdict(
TradeResult(
market_type="spot",
symbol=normalized_symbol,
side=side,
order_type=order_type,
status=str(response.get("status", "UNKNOWN")),
dry_run=False,
request_payload=payload,
response_payload=response,
)
)
audit_event("trade_filled", {**_trade_log_payload(intent, payload, status=result["status"]), "response_payload": response})
return {"trade": result}
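
Both branches serialize `TradeResult` through `dataclasses.asdict` before returning, so every code path yields a plain dict that the CLI's output layer can render as JSON. A minimal sketch of that pattern:

```python
from dataclasses import dataclass, asdict

@dataclass
class TradeResult:
    symbol: str
    status: str
    dry_run: bool

# asdict flattens the dataclass into a JSON-friendly dict.
result = asdict(TradeResult(symbol="BTCUSDT", status="DRY_RUN", dry_run=True))
print(result)  # {'symbol': 'BTCUSDT', 'status': 'DRY_RUN', 'dry_run': True}
```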


@@ -1,29 +0,0 @@
#!/usr/bin/env python3
"""Coin Hunter robust smart executor — compatibility facade."""
import sys
from .runtime import get_runtime_paths, load_env_file
from .services.trade_common import CST, is_dry_run, USDT_BUFFER_PCT, MIN_REMAINING_DUST_USDT, log, bj_now_iso, set_dry_run
from .services.file_utils import locked_file, atomic_write_json, load_json_locked, save_json_locked
from .services.smart_executor_parser import build_parser, normalize_legacy_argv, parse_cli_args, cli_action_args
from .services.execution_state import default_decision_id, record_execution_state, get_execution_state, load_executions, save_executions
from .services.portfolio_service import load_positions, save_positions, upsert_position, reconcile_positions_with_exchange
from .services.exchange_service import get_exchange, norm_symbol, storage_symbol, fetch_balances, build_market_snapshot, market_and_ticker, floor_to_step, prepare_buy_quantity, prepare_sell_quantity
from .services.trade_execution import build_decision_context, market_sell, market_buy, action_sell_all, action_buy, action_rebalance, command_status, command_balances
from .services.smart_executor_service import run as _run_service
PATHS = get_runtime_paths()
ENV_FILE = PATHS.env_file
def load_env():
load_env_file(PATHS)
def main(argv=None):
return _run_service(argv)
if __name__ == "__main__":
raise SystemExit(main())

tests/__init__.py Normal file


@@ -0,0 +1,68 @@
"""Account and market service tests."""
from __future__ import annotations
import unittest
from coinhunter.services import account_service, market_service
class FakeSpotClient:
def account_info(self):
return {
"balances": [
{"asset": "USDT", "free": "120.0", "locked": "0"},
{"asset": "BTC", "free": "0.01", "locked": "0"},
{"asset": "DOGE", "free": "1", "locked": "0"},
]
}
def ticker_price(self, symbols=None):
prices = {
"BTCUSDT": {"symbol": "BTCUSDT", "price": "60000"},
"DOGEUSDT": {"symbol": "DOGEUSDT", "price": "0.1"},
}
if not symbols:
return list(prices.values())
return [prices[symbol] for symbol in symbols]
def ticker_24h(self, symbols=None):
rows = [
{"symbol": "BTCUSDT", "lastPrice": "60000", "priceChangePercent": "4.5", "quoteVolume": "10000000", "highPrice": "61000", "lowPrice": "58000"},
{"symbol": "ETHUSDT", "lastPrice": "3000", "priceChangePercent": "3.0", "quoteVolume": "8000000", "highPrice": "3050", "lowPrice": "2900"},
{"symbol": "DOGEUSDT", "lastPrice": "0.1", "priceChangePercent": "1.0", "quoteVolume": "200", "highPrice": "0.11", "lowPrice": "0.09"},
]
if not symbols:
return rows
wanted = set(symbols)
return [row for row in rows if row["symbol"] in wanted]
def exchange_info(self):
return {"symbols": [{"symbol": "BTCUSDT", "status": "TRADING"}, {"symbol": "ETHUSDT", "status": "TRADING"}, {"symbol": "DOGEUSDT", "status": "BREAK"}]}
class AccountMarketServicesTestCase(unittest.TestCase):
def test_account_overview_and_dust_filter(self):
config = {
"market": {"default_quote": "USDT"},
"trading": {"dust_usdt_threshold": 10.0},
}
payload = account_service.get_overview(
config,
spot_client=FakeSpotClient(),
)
self.assertEqual(payload["overview"]["total_equity_usdt"], 720.1)
symbols = {item["symbol"] for item in payload["positions"]}
self.assertNotIn("DOGEUSDT", symbols)
self.assertIn("BTCUSDT", symbols)
def test_market_tickers_and_scan_universe(self):
config = {
"market": {"default_quote": "USDT", "universe_allowlist": [], "universe_denylist": []},
"opportunity": {"min_quote_volume": 1000},
}
tickers = market_service.get_tickers(config, ["btc/usdt", "ETH-USDT"], spot_client=FakeSpotClient())
self.assertEqual([item["symbol"] for item in tickers["tickers"]], ["BTCUSDT", "ETHUSDT"])
universe = market_service.get_scan_universe(config, spot_client=FakeSpotClient())
self.assertEqual([item["symbol"] for item in universe], ["BTCUSDT", "ETHUSDT"])
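
The dust filter exercised by `test_account_overview_and_dust_filter` amounts to: value each non-quote balance at its USDT price and drop positions worth less than `trading.dust_usdt_threshold`. A hypothetical helper mirroring the fake client's numbers:

```python
def filter_dust(balances: dict[str, float], prices: dict[str, float], threshold: float) -> list[str]:
    kept = []
    for asset, qty in balances.items():
        if asset == "USDT":
            continue  # the quote currency itself is cash, not a position
        if qty * prices[f"{asset}USDT"] >= threshold:
            kept.append(f"{asset}USDT")
    return kept

positions = filter_dust(
    {"USDT": 120.0, "BTC": 0.01, "DOGE": 1.0},
    {"BTCUSDT": 60000.0, "DOGEUSDT": 0.1},
    threshold=10.0,
)
print(positions)  # ['BTCUSDT'] — the DOGE balance is worth $0.10, below the $10 threshold
```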

tests/test_cli.py Normal file

@@ -0,0 +1,95 @@
"""CLI tests for CoinHunter V2."""
from __future__ import annotations
import io
import unittest
from unittest.mock import patch
from coinhunter import cli
class CLITestCase(unittest.TestCase):
def test_help_includes_v2_commands(self):
parser = cli.build_parser()
help_text = parser.format_help()
self.assertIn("init", help_text)
self.assertIn("account", help_text)
self.assertIn("buy", help_text)
self.assertIn("sell", help_text)
self.assertIn("opportunity", help_text)
self.assertIn("--doc", help_text)
def test_init_dispatches(self):
captured = {}
with patch.object(cli, "ensure_init_files", return_value={"force": True, "root": "/tmp/ch"}), patch.object(
cli, "install_shell_completion", return_value={"shell": "zsh", "installed": True, "path": "/tmp/ch/_coinhunter"}
), patch.object(
cli, "print_output", side_effect=lambda payload, **kwargs: captured.setdefault("payload", payload)
):
result = cli.main(["init", "--force"])
self.assertEqual(result, 0)
self.assertTrue(captured["payload"]["force"])
self.assertIn("completion", captured["payload"])
def test_old_command_is_rejected(self):
with self.assertRaises(SystemExit):
cli.main(["exec", "bal"])
def test_runtime_error_is_rendered_cleanly(self):
stderr = io.StringIO()
with patch.object(cli, "load_config", side_effect=RuntimeError("boom")), patch("sys.stderr", stderr):
result = cli.main(["market", "tickers", "BTCUSDT"])
self.assertEqual(result, 1)
self.assertIn("error: boom", stderr.getvalue())
def test_buy_dispatches(self):
captured = {}
with patch.object(cli, "load_config", return_value={"binance": {"spot_base_url": "https://test", "recv_window": 5000}, "trading": {"dry_run_default": True}}), patch.object(
cli, "get_binance_credentials", return_value={"api_key": "k", "api_secret": "s"}
), patch.object(
cli, "SpotBinanceClient"
), patch.object(
cli.trade_service, "execute_spot_trade", return_value={"trade": {"status": "DRY_RUN"}}
), patch.object(
cli, "print_output", side_effect=lambda payload, **kwargs: captured.setdefault("payload", payload)
):
result = cli.main(["buy", "BTCUSDT", "-Q", "100"])
self.assertEqual(result, 0)
self.assertEqual(captured["payload"]["trade"]["status"], "DRY_RUN")
def test_sell_dispatches(self):
captured = {}
with patch.object(cli, "load_config", return_value={"binance": {"spot_base_url": "https://test", "recv_window": 5000}, "trading": {"dry_run_default": True}}), patch.object(
cli, "get_binance_credentials", return_value={"api_key": "k", "api_secret": "s"}
), patch.object(
cli, "SpotBinanceClient"
), patch.object(
cli.trade_service, "execute_spot_trade", return_value={"trade": {"status": "DRY_RUN"}}
), patch.object(
cli, "print_output", side_effect=lambda payload, **kwargs: captured.setdefault("payload", payload)
):
result = cli.main(["sell", "BTCUSDT", "-q", "0.01"])
self.assertEqual(result, 0)
self.assertEqual(captured["payload"]["trade"]["status"], "DRY_RUN")
def test_doc_flag_prints_documentation(self):
stdout = io.StringIO()
with patch("sys.stdout", stdout):
result = cli.main(["market", "tickers", "--doc"])
self.assertEqual(result, 0)
output = stdout.getvalue()
self.assertIn("lastPrice", output)
self.assertIn("BTCUSDT", output)
def test_upgrade_dispatches(self):
captured = {}
with patch.object(cli, "self_upgrade", return_value={"command": "pipx upgrade coinhunter", "returncode": 0}), patch.object(
cli, "print_output", side_effect=lambda payload, **kwargs: captured.setdefault("payload", payload)
):
result = cli.main(["upgrade"])
self.assertEqual(result, 0)
self.assertEqual(captured["payload"]["returncode"], 0)


@@ -0,0 +1,79 @@
"""Config and runtime tests."""
from __future__ import annotations
import os
import tempfile
import unittest
from pathlib import Path
from unittest.mock import patch
from coinhunter.config import ensure_init_files, get_binance_credentials, load_config, load_env_file
from coinhunter.runtime import get_runtime_paths
class ConfigRuntimeTestCase(unittest.TestCase):
def test_init_files_created_in_coinhunter_home(self):
with tempfile.TemporaryDirectory() as tmp_dir, patch.dict(os.environ, {"COINHUNTER_HOME": str(Path(tmp_dir) / "home")}, clear=False):
paths = get_runtime_paths()
payload = ensure_init_files(paths)
self.assertTrue(paths.config_file.exists())
self.assertTrue(paths.env_file.exists())
self.assertTrue(paths.logs_dir.exists())
self.assertEqual(payload["root"], str(paths.root))
def test_load_config_and_env(self):
with tempfile.TemporaryDirectory() as tmp_dir, patch.dict(
os.environ,
{"COINHUNTER_HOME": str(Path(tmp_dir) / "home")},
clear=False,
):
paths = get_runtime_paths()
ensure_init_files(paths)
paths.env_file.write_text("BINANCE_API_KEY=abc\nBINANCE_API_SECRET=def\n", encoding="utf-8")
config = load_config(paths)
loaded = load_env_file(paths)
self.assertEqual(config["market"]["default_quote"], "USDT")
self.assertEqual(loaded["BINANCE_API_KEY"], "abc")
self.assertEqual(os.environ["BINANCE_API_SECRET"], "def")
def test_env_file_overrides_existing_environment(self):
with tempfile.TemporaryDirectory() as tmp_dir, patch.dict(
os.environ,
{"COINHUNTER_HOME": str(Path(tmp_dir) / "home"), "BINANCE_API_KEY": "old_key"},
clear=False,
):
paths = get_runtime_paths()
ensure_init_files(paths)
paths.env_file.write_text("BINANCE_API_KEY=new_key\nBINANCE_API_SECRET=new_secret\n", encoding="utf-8")
load_env_file(paths)
self.assertEqual(os.environ["BINANCE_API_KEY"], "new_key")
self.assertEqual(os.environ["BINANCE_API_SECRET"], "new_secret")
def test_missing_credentials_raise(self):
with tempfile.TemporaryDirectory() as tmp_dir, patch.dict(
os.environ,
{"COINHUNTER_HOME": str(Path(tmp_dir) / "home")},
clear=False,
):
os.environ.pop("BINANCE_API_KEY", None)
os.environ.pop("BINANCE_API_SECRET", None)
paths = get_runtime_paths()
ensure_init_files(paths)
with self.assertRaisesRegex(RuntimeError, "Missing BINANCE_API_KEY"):
get_binance_credentials(paths)
def test_permission_error_is_explained(self):
with tempfile.TemporaryDirectory() as tmp_dir, patch.dict(
os.environ,
{"COINHUNTER_HOME": str(Path(tmp_dir) / "home")},
clear=False,
):
paths = get_runtime_paths()
with patch("coinhunter.config.ensure_runtime_dirs", side_effect=PermissionError("no write access")):
with self.assertRaisesRegex(RuntimeError, "Set COINHUNTER_HOME to a writable directory"):
ensure_init_files(paths)
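
The override behavior checked by `test_env_file_overrides_existing_environment` reduces to: parse `KEY=VALUE` lines and write them into `os.environ` unconditionally, clobbering any prior value. A minimal parser sketch (hypothetical; the real `load_env_file` also resolves the runtime paths):

```python
def parse_env_text(text: str) -> dict[str, str]:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

parsed = parse_env_text("BINANCE_API_KEY=new_key\n# comment\nBINANCE_API_SECRET = new_secret\n")
print(parsed)  # {'BINANCE_API_KEY': 'new_key', 'BINANCE_API_SECRET': 'new_secret'}
```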


@@ -0,0 +1,94 @@
"""Opportunity service tests."""
from __future__ import annotations
import unittest
from unittest.mock import patch
from coinhunter.services import opportunity_service
class FakeSpotClient:
def account_info(self):
return {
"balances": [
{"asset": "USDT", "free": "50", "locked": "0"},
{"asset": "BTC", "free": "0.01", "locked": "0"},
{"asset": "ETH", "free": "0.5", "locked": "0"},
{"asset": "DOGE", "free": "1", "locked": "0"},
]
}
def ticker_price(self, symbols=None):
mapping = {
"BTCUSDT": {"symbol": "BTCUSDT", "price": "60000"},
"ETHUSDT": {"symbol": "ETHUSDT", "price": "3000"},
"DOGEUSDT": {"symbol": "DOGEUSDT", "price": "0.1"},
}
return [mapping[symbol] for symbol in symbols]
def ticker_24h(self, symbols=None):
rows = {
"BTCUSDT": {"symbol": "BTCUSDT", "lastPrice": "60000", "priceChangePercent": "5", "quoteVolume": "9000000", "highPrice": "60200", "lowPrice": "55000"},
"ETHUSDT": {"symbol": "ETHUSDT", "lastPrice": "3000", "priceChangePercent": "3", "quoteVolume": "8000000", "highPrice": "3100", "lowPrice": "2800"},
"SOLUSDT": {"symbol": "SOLUSDT", "lastPrice": "150", "priceChangePercent": "8", "quoteVolume": "10000000", "highPrice": "152", "lowPrice": "130"},
"DOGEUSDT": {"symbol": "DOGEUSDT", "lastPrice": "0.1", "priceChangePercent": "1", "quoteVolume": "100", "highPrice": "0.11", "lowPrice": "0.09"},
}
if not symbols:
return list(rows.values())
return [rows[symbol] for symbol in symbols]
def exchange_info(self):
return {"symbols": [{"symbol": "BTCUSDT", "status": "TRADING"}, {"symbol": "ETHUSDT", "status": "TRADING"}, {"symbol": "SOLUSDT", "status": "TRADING"}, {"symbol": "DOGEUSDT", "status": "TRADING"}]}
def klines(self, symbol, interval, limit):
curves = {
"BTCUSDT": [50000, 52000, 54000, 56000, 58000, 59000, 60000],
"ETHUSDT": [2600, 2650, 2700, 2800, 2900, 2950, 3000],
"SOLUSDT": [120, 125, 130, 135, 140, 145, 150],
"DOGEUSDT": [0.11, 0.108, 0.105, 0.103, 0.102, 0.101, 0.1],
}[symbol]
rows = []
for index, close in enumerate(curves[-limit:]):
rows.append([index, close * 0.98, close * 1.01, close * 0.97, close, 100 + index * 10, index + 1, close * (100 + index * 10)])
return rows
class OpportunityServiceTestCase(unittest.TestCase):
def setUp(self):
self.config = {
"market": {"default_quote": "USDT", "universe_allowlist": [], "universe_denylist": []},
"trading": {"dust_usdt_threshold": 10.0},
"opportunity": {
"scan_limit": 10,
"top_n": 5,
"min_quote_volume": 1000.0,
"weights": {
"trend": 1.0,
"momentum": 1.0,
"breakout": 0.8,
"volume": 0.7,
"volatility_penalty": 0.5,
"position_concentration_penalty": 0.6,
},
},
}
def test_portfolio_analysis_ignores_dust_and_emits_recommendations(self):
events = []
with patch.object(opportunity_service, "audit_event", side_effect=lambda event, payload: events.append(event)):
payload = opportunity_service.analyze_portfolio(self.config, spot_client=FakeSpotClient())
symbols = [item["symbol"] for item in payload["recommendations"]]
self.assertNotIn("DOGEUSDT", symbols)
self.assertEqual(symbols, ["BTCUSDT", "ETHUSDT"])
self.assertEqual(events, ["opportunity_portfolio_generated"])
def test_scan_is_deterministic(self):
with patch.object(opportunity_service, "audit_event", return_value=None):
payload = opportunity_service.scan_opportunities(self.config | {"opportunity": self.config["opportunity"] | {"top_n": 2}}, spot_client=FakeSpotClient())
self.assertEqual([item["symbol"] for item in payload["recommendations"]], ["SOLUSDT", "BTCUSDT"])
def test_score_candidate_handles_empty_klines(self):
score, metrics = opportunity_service._score_candidate([], [], {"price_change_pct": 1.0}, {}, 0.0)
self.assertEqual(score, 0.0)
self.assertEqual(metrics["trend"], 0.0)
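
The weights block in `setUp` suggests a linear score: positively weighted factors add, `*_penalty` factors subtract. A hypothetical scoring sketch under that assumption (the real `_score_candidate` is not shown in this diff):

```python
def score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    total = 0.0
    for name, weight in weights.items():
        value = metrics.get(name, 0.0)  # missing metrics contribute nothing
        if name.endswith("_penalty"):
            total -= weight * value     # penalties pull the score down
        else:
            total += weight * value
    return total

weights = {"trend": 1.0, "momentum": 1.0, "volatility_penalty": 0.5}
print(score({"trend": 0.8, "momentum": 0.6, "volatility_penalty": 0.2}, weights))
```

Because the inputs are plain floats and the weights are fixed in config, the ranking is deterministic, which is what `test_scan_is_deterministic` relies on.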

tests/test_trade_service.py Normal file

@@ -0,0 +1,100 @@
"""Trade execution tests."""
from __future__ import annotations
import unittest
from unittest.mock import patch
from coinhunter.services import trade_service
class FakeSpotClient:
def __init__(self):
self.calls = []
def new_order(self, **kwargs):
self.calls.append(kwargs)
return {"symbol": kwargs["symbol"], "status": "FILLED", "orderId": 1}
class TradeServiceTestCase(unittest.TestCase):
def test_spot_market_buy_dry_run_does_not_call_client(self):
events = []
with patch.object(trade_service, "audit_event", side_effect=lambda event, payload: events.append((event, payload))):
client = FakeSpotClient()
payload = trade_service.execute_spot_trade(
{"trading": {"dry_run_default": False}},
side="buy",
symbol="btc/usdt",
qty=None,
quote=100,
order_type="market",
price=None,
dry_run=True,
spot_client=client,
)
self.assertEqual(payload["trade"]["status"], "DRY_RUN")
self.assertEqual(client.calls, [])
self.assertEqual([event for event, _ in events], ["trade_submitted", "trade_filled"])
def test_spot_limit_sell_maps_payload(self):
with patch.object(trade_service, "audit_event", return_value=None):
client = FakeSpotClient()
payload = trade_service.execute_spot_trade(
{"trading": {"dry_run_default": False}},
side="sell",
symbol="BTCUSDT",
qty=0.1,
quote=None,
order_type="limit",
price=90000,
dry_run=False,
spot_client=client,
)
self.assertEqual(payload["trade"]["status"], "FILLED")
self.assertEqual(client.calls[0]["timeInForce"], "GTC")
def test_spot_market_buy_requires_quote(self):
with patch.object(trade_service, "audit_event", return_value=None):
with self.assertRaisesRegex(RuntimeError, "requires --quote"):
trade_service.execute_spot_trade(
{"trading": {"dry_run_default": False}},
side="buy",
symbol="BTCUSDT",
qty=None,
quote=None,
order_type="market",
price=None,
dry_run=False,
spot_client=FakeSpotClient(),
)
def test_spot_market_buy_rejects_qty(self):
with patch.object(trade_service, "audit_event", return_value=None):
with self.assertRaisesRegex(RuntimeError, "accepts --quote only"):
trade_service.execute_spot_trade(
{"trading": {"dry_run_default": False}},
side="buy",
symbol="BTCUSDT",
qty=0.1,
quote=100,
order_type="market",
price=None,
dry_run=False,
spot_client=FakeSpotClient(),
)
def test_spot_market_sell_rejects_quote(self):
with patch.object(trade_service, "audit_event", return_value=None):
with self.assertRaisesRegex(RuntimeError, "accepts --qty only"):
trade_service.execute_spot_trade(
{"trading": {"dry_run_default": False}},
side="sell",
symbol="BTCUSDT",
qty=0.1,
quote=100,
order_type="market",
price=None,
dry_run=False,
spot_client=FakeSpotClient(),
)