Compare commits

41 commits

| SHA1 |
|---|
| `10b314aa2b` |
| `003212de99` |
| `d3408dabba` |
| `076a5f1b1c` |
| `436bef4814` |
| `50402e4aa7` |
| `4761067c30` |
| `a9f6cf4c46` |
| `69f447f538` |
| `1da08415f1` |
| `4312b16288` |
| `cf26a3dd3a` |
| `e37993c8b5` |
| `3855477155` |
| `d629c25232` |
| `4602583760` |
| `ca0625b199` |
| `a0e01ca56f` |
| `f528575aa8` |
| `9224621d7e` |
| `6923013694` |
| `0f862957b0` |
| `680bd3d33c` |
| `f06a1a34f1` |
| `536425e8ea` |
| `b857ea33f3` |
| `cdc90a9be1` |
| `9395978440` |
| `b78845eb43` |
| `52cd76a750` |
| `3819e35a7b` |
| `72f5bbcbb7` |
| `da93f727e8` |
| `62c40a9776` |
| `01bb54dee5` |
| `759086ebd7` |
| `5fcdd015e1` |
| `f59388f69a` |
| `a61c329496` |
| `db981e8e5f` |
| `e6274d3a00` |
**.gitignore** (vendored, 30 lines)

@@ -1,7 +1,35 @@

```
# Python
__pycache__/
*.pyc
*.py[cod]
*$py.class
.pytest_cache/
.mypy_cache/
.ruff_cache/
.coverage
htmlcov/

# Virtual environments
.venv/
venv/

# Build artifacts
dist/
build/
*.egg-info/

# IDE / editors
.vscode/
.idea/
*.swp
*.swo
*~

# OS files
.DS_Store

# Secrets / local env
.env
*.env

# Claude local overrides
.claude/skills/gstack/
```
**AGENTS.md** (new file, 68 lines)

@@ -0,0 +1,68 @@

# Repository Guidelines

## Project Structure & Module Organization

CoinHunter is a Python CLI package using a `src/` layout. Application code lives in `src/coinhunter/`.

- `src/coinhunter/cli.py` defines CLI parsing and command dispatch for `coinhunter` and `coin`.
- `src/coinhunter/binance/` contains thin Binance Spot client wrappers.
- `src/coinhunter/services/` contains domain logic for account, market, trade, portfolio, opportunity, dataset, research, and evaluation flows.
- `src/coinhunter/config.py`, `runtime.py`, and `audit.py` handle runtime config, output, completions, upgrade flow, and logs.
- `tests/` contains pytest/unittest coverage by service area.
- `dist/` contains built release artifacts; do not edit these manually.

## Build, Test, and Development Commands

Install locally with development tools:

```bash
python -m pip install -e '.[dev]'
```

Run the CLI from the working tree:

```bash
coinhunter --help
coin opportunity -s BTCUSDT ETHUSDT --agent
```

Quality checks:

```bash
pytest tests/          # run the full test suite
ruff check src tests   # lint and import ordering
mypy src               # static type checks
```

## Coding Style & Naming Conventions

Use Python 3.10+ syntax and 4-space indentation. Keep modules small and service-oriented; prefer adding logic under `src/coinhunter/services/` and keeping `cli.py` focused on argument parsing and dispatch.

Use `snake_case` for functions, variables, and modules. Use `PascalCase` for classes and dataclasses. Preserve existing payload key naming conventions such as `notional_usdt`, `quote_volume`, and `opportunity_score`.

Ruff enforces the `E`, `F`, `I`, `W`, `UP`, `B`, `C4`, and `SIM` rule sets; the line-length rule (`E501`) is ignored.

## Testing Guidelines

Tests use `pytest` with `unittest.TestCase`. Add tests near the changed behavior:

- CLI dispatch: `tests/test_cli.py`
- Config/runtime: `tests/test_config_runtime.py`
- Opportunity logic: `tests/test_opportunity_service.py`
- Dataset/evaluation flows: `tests/test_opportunity_dataset_service.py`

Name tests as `test_<behavior>`. Prefer fake clients and injected HTTP functions over live network calls. Run `pytest tests/` before submitting changes.
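The "fake clients over live network calls" guidance above can be sketched as a minimal test. `FakeSpotClient` and `get_balances` here are hypothetical stand-ins for illustration, not the actual CoinHunter service API:

```python
import unittest

# Hypothetical fake client: returns canned Binance-style account data
# instead of making a network call. The response shape is illustrative.
class FakeSpotClient:
    def account(self):
        return {"balances": [{"asset": "BTC", "free": "0.5", "locked": "0.0"}]}


def get_balances(*, spot_client):
    # Stand-in for a service function that accepts an injected client
    # as a keyword argument, so tests never touch the real exchange.
    data = spot_client.account()
    return {b["asset"]: float(b["free"]) for b in data["balances"]}


class BalanceServiceTest(unittest.TestCase):
    def test_uses_injected_client(self):
        self.assertEqual(get_balances(spot_client=FakeSpotClient()), {"BTC": 0.5})


if __name__ == "__main__":
    unittest.main()
```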
## Commit & Pull Request Guidelines

Recent history uses short imperative subjects, often with Conventional Commit prefixes:

- `feat: configurable ticker window for market stats`
- `fix: use rolling_window_ticker for symbol-specific queries`
- `refactor: flatten account command to a single balances view`

Keep commits focused and describe user-visible behavior. Pull requests should include a concise summary, validation commands run, and any config or CLI changes. Link issues when applicable. For CLI output changes, include before/after examples or JSON snippets.

## Security & Configuration Tips

Never commit Binance API keys, secrets, runtime logs, or local `~/.coinhunter` files. Runtime secrets belong in `~/.coinhunter/.env`; configuration belongs in `~/.coinhunter/config.toml`. Use `COINHUNTER_HOME` for isolated test runs.
**CLAUDE.md** (new file, 64 lines)

@@ -0,0 +1,64 @@

# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Development commands

- **Install (dev):** `pip install -e ".[dev]"`
- **Run CLI locally:** `python -m coinhunter --help`
- **Run tests:** `pytest` or `python -m pytest tests/`
- **Run single test file:** `pytest tests/test_cli.py -v`
- **Lint:** `ruff check src tests`
- **Format:** `ruff format src tests`
- **Type-check:** `mypy src`

## Architecture

CoinHunter V2 is a Binance-first crypto trading CLI with a flat, direct architecture:

- **`src/coinhunter/cli.py`** — Single entrypoint (`main()`). Uses `argparse` to parse commands and directly dispatches to service functions. There is no separate `commands/` adapter layer.
- **`src/coinhunter/services/`** — Contains all domain logic:
  - `account_service.py` — balances, positions, overview
  - `market_service.py` — tickers, klines, scan universe, symbol normalization
  - `signal_service.py` — shared market signal scoring used by scan and portfolio analysis
  - `portfolio_service.py` — held-position review and add/hold/trim/exit recommendations
  - `trade_service.py` — spot and USDT-M futures order execution
  - `opportunity_service.py` — market scanning and entry/watch/skip recommendations
- **`src/coinhunter/binance/`** — Thin wrappers around official Binance connectors:
  - `spot_client.py` wraps `binance.spot.Spot`
  - `um_futures_client.py` wraps `binance.um_futures.UMFutures`

  Both normalize request errors into `RuntimeError` and handle single/multi-symbol ticker responses.
- **`src/coinhunter/config.py`** — `load_config()`, `get_binance_credentials()`, `ensure_init_files()`.
- **`src/coinhunter/runtime.py`** — `RuntimePaths`, `get_runtime_paths()`, `print_json()`.
- **`src/coinhunter/audit.py`** — Writes JSONL audit events to dated files.

## Runtime and environment

User data lives in `~/.coinhunter/` by default (override with `COINHUNTER_HOME`):

- `config.toml` — runtime, binance, trading, signal, opportunity, and portfolio settings
- `.env` — `BINANCE_API_KEY` and `BINANCE_API_SECRET`
- `logs/audit_YYYYMMDD.jsonl` — structured audit log

Run `coinhunter init` to generate the config and env templates.
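Pulling together the config keys used in the `config set` examples elsewhere on this page, a hypothetical `~/.coinhunter/config.toml` might look like the following. The section and key names come from those examples; the values are illustrative, not the generated defaults:

```toml
[trading]
dry_run_default = true

[binance]
recv_window = 5000

[signal]
lookback_interval = "4h"

[opportunity]
top_n = 20

[portfolio]
max_position_weight = 0.25
```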
## Key conventions

- **Symbol normalization:** `market_service.normalize_symbol()` strips `/`, `-`, `_`, and uppercases the symbol. CLI inputs like `ETH/USDT`, `eth-usdt`, and `ETHUSDT` are all normalized to `ETHUSDT`.
- **Dry-run behavior:** Trade commands support `--dry-run`. If omitted, the default falls back to `trading.dry_run_default` in `config.toml`.
- **Client injection:** Service functions accept `spot_client` / `futures_client` as keyword arguments. This enables easy unit testing with mocks.
- **Error handling:** Binance client wrappers catch `requests.exceptions.SSLError` and `RequestException` and re-raise as human-readable `RuntimeError`. The CLI catches all exceptions in `main()` and prints `error: {message}` to stderr with exit code 1.
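The normalization rule above can be sketched in a few lines (a paraphrase of the described behavior, not the actual `market_service` source):

```python
def normalize_symbol(symbol: str) -> str:
    """Strip '/', '-', '_' separators and uppercase, per the convention above."""
    for separator in ("/", "-", "_"):
        symbol = symbol.replace(separator, "")
    return symbol.upper()

# "ETH/USDT", "eth-usdt", and "ETHUSDT" all normalize to "ETHUSDT".
print(normalize_symbol("eth-usdt"))  # → ETHUSDT
```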
## Testing

Tests live in `tests/` and use `unittest.TestCase` with `unittest.mock.patch`. The test suite covers:

- `test_cli.py` — parser smoke tests and dispatch behavior
- `test_config_runtime.py` — config loading, env parsing, path resolution
- `test_account_market_services.py` — balance/position/ticker/klines logic with mocked clients
- `test_trade_service.py` — spot and futures trade execution paths
- `test_opportunity_service.py` — portfolio and scan scoring logic

## Notes

- `AGENTS.md` in this repo is stale and describes a prior V1 architecture (commands/, smart executor, precheck, review engine). Do not rely on it.
**LICENSE** (new file, 21 lines)

@@ -0,0 +1,21 @@

```
MIT License

Copyright (c) 2026 Tacit Lab

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
**README.md** (320 lines)

@@ -1,222 +1,220 @@

# coinhunter-cli

<p align="center">
  <strong>The executable CLI layer for CoinHunter.</strong><br/>
  Runtime-safe trading operations, precheck orchestration, review tooling, and market probes.
  <img src="https://capsule-render.vercel.app/api?type=waving&color=0:F7B93E,100:0f0f0f&height=220&section=header&text=%F0%9F%AA%99%20CoinHunter&fontSize=65&fontColor=fff&animation=fadeIn&fontAlignY=32&desc=Trade%20Smarter%20%C2%B7%20Execute%20Faster%20%C2%B7%20Sleep%20Better&descAlignY=55&descSize=18" alt="CoinHunter Banner" />
</p>

<p align="center">
  <img alt="Python" src="https://img.shields.io/badge/python-3.10%2B-blue" />
  <img alt="Status" src="https://img.shields.io/badge/status-active%20development-orange" />
  <img alt="Architecture" src="https://img.shields.io/badge/architecture-runtime%20%2B%20commands%20%2B%20services-6f42c1" />
  <img src="https://readme-typing-svg.demolab.com?font=JetBrains+Mono&weight=500&size=22&duration=2800&pause=800&color=F7B93E&center=true&vCenter=true&width=600&lines=Binance-first+Trading+CLI;Account+%E2%86%92+Market+%E2%86%92+Trade+%E2%86%92+Opportunity;Human-friendly+TUI+%7C+Agent+Mode" alt="Typing SVG" />
</p>

## Why this repo exists

<p align="center">
  <strong>A Binance-first crypto trading CLI for balances, market data, opportunity scanning, and execution.</strong>
</p>

CoinHunter is evolving from a loose bundle of automation scripts into a proper installable command-line tool.

<p align="center">
  <a href="https://pypi.org/project/coinhunter/"><img src="https://img.shields.io/pypi/v/coinhunter?style=flat-square&color=F7B93E&labelColor=1a1a1a&cacheSeconds=60" /></a>
  <a href="#"><img src="https://img.shields.io/badge/python-3.10%2B-3776ab?style=flat-square&logo=python&logoColor=white&labelColor=1a1a1a" /></a>
  <a href="#"><img src="https://img.shields.io/badge/tests-passing-22c55e?style=flat-square&labelColor=1a1a1a" /></a>
  <a href="#"><img src="https://img.shields.io/badge/lint-ruff%20%2B%20mypy-8b5cf6?style=flat-square&labelColor=1a1a1a" /></a>
</p>

This repository is the tooling layer:

---

- Code and executable behavior live here.
- User runtime state lives in `~/.coinhunter/` by default.
- Hermes skills can call this CLI instead of embedding large script collections.
- Runtime paths can be overridden with `COINHUNTER_HOME`, `HERMES_HOME`, `COINHUNTER_ENV_FILE`, and `HERMES_BIN`.

## What's New in 3.0.1

In short:

- **Fix ticker API compatibility** — `rolling_window_ticker` replaces the removed `ticker` method in `binance-connector>=3.12.0`.
- **Expand ticker window choices** — `market tickers --window` now supports `1m`, `2m`, `5m`, `15m`, `30m`, `1h`, `2h`, `4h`, `6h`, `8h`, `12h`, `1d`, `2d`, `3d`, `5d`, `7d`, `15d`, `30d`.
- **Smart API fallback** — a full-market scan (no symbols given) falls back to the 24h ticker; symbol-specific queries use the rolling-window endpoint.
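The fallback bullet above amounts to a small dispatch rule. This sketch uses illustrative wrapper method names (`ticker_24hr`, `rolling_window_ticker`) rather than the repository's actual client API:

```python
# Hedged sketch of the "smart API fallback": no symbols -> 24h ticker for
# the whole market; explicit symbols -> rolling-window ticker. The client
# method names here are assumptions, not the documented wrapper interface.
def fetch_tickers(client, symbols=None, window="1h"):
    if not symbols:
        # Full-market scan: the rolling-window endpoint needs explicit
        # symbols, so fall back to the 24h ticker for the whole market.
        return client.ticker_24hr()
    return client.rolling_window_ticker(symbols=symbols, window=window)
```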
- `coinhunter-cli` = tool
- CoinHunter skill = strategy / workflow / prompting layer
- `~/.coinhunter` = user data, logs, state, reviews

## What's New in 3.0

## Current architecture

- **Split decision models** — portfolio (add/hold/trim/exit) and opportunity (trigger/setup/chase/skip) now use independent scoring logic.
- **Configurable ticker windows** — `market tickers` supports `--window 1h`, `4h`, or `1d`.
- **Live / dry-run audit logs** — audit logs are written to separate subdirectories; use `catlog --dry-run` to review simulations.
- **Flattened commands** — `account`, `opportunity`, and `config` are now top-level for fewer keystrokes.
- **Runtime config management** — `config get`, `config set`, and `config key/secret` let you edit settings without touching files manually.

```text
coinhunter-cli/
├── src/coinhunter/
│   ├── cli.py        # top-level command router
│   ├── runtime.py    # runtime paths + env loading
│   ├── doctor.py     # diagnostics
│   ├── paths.py      # runtime path inspection
│   ├── commands/     # thin CLI adapters
│   ├── services/     # orchestration / application services
│   └── *.py          # compatibility modules + legacy logic under extraction
└── README.md
```

## Install

The repo is actively being refactored toward a cleaner split:

- `commands/` → argument / CLI adapters
- `services/` → orchestration and application workflows
- `runtime/` → paths, env, files, locks, config
- future `domain/` → trading and precheck core logic

## Implemented command/service splits

The first extraction pass is already live:

- `smart-executor` → `commands.smart_executor` + `services.smart_executor_service`
- `precheck` → `commands.precheck` + `services.precheck_service`
- `precheck` internals now also have dedicated service modules for:
  - `services.precheck_state`
  - `services.precheck_snapshot`
  - `services.precheck_analysis`

This keeps behavior stable while giving the codebase a cleaner landing zone for deeper refactors.

## Installation

Editable install:

```bash
pip install -e .
```

Run directly after install:

For end users, install from PyPI with [pipx](https://pipx.pypa.io/) (recommended) to avoid polluting your system Python:

```bash
pipx install coinhunter
coinhunter --help
```

You can also use the shorter `coin` alias:

```bash
coin --help
```

Check the installed version:

```bash
coinhunter --version
```

## Quickstart

To update later:

Initialize user state:

```bash
pipx upgrade coinhunter
```

## Initialize runtime

```bash
coinhunter init
coinhunter init --force
```

Inspect runtime wiring:

This creates:

- `~/.coinhunter/config.toml`
- `~/.coinhunter/.env`
- `~/.coinhunter/logs/`

If you are using **zsh** or **bash**, `init` will also generate and install shell completion scripts automatically, and update your rc file (`~/.zshrc` or `~/.bashrc`) if needed.

`init` interactively prompts for your Binance API key and secret if they are missing. Use `--no-prompt` to skip this.

`config.toml` stores runtime and strategy settings. `.env` stores:

```bash
coinhunter paths
coinhunter doctor
BINANCE_API_KEY=
BINANCE_API_SECRET=
```

Validate exchange credentials:

Strategy settings are split into three blocks:

- `[signal]` for shared market-signal weights and lookback interval
- `[opportunity]` for scan thresholds, liquidity filters, and top-N output
- `[portfolio]` for add/hold/trim/exit thresholds and max position weight

Override the default home directory with `COINHUNTER_HOME`.

## Commands

By default, CoinHunter prints human-friendly TUI tables. Add `--agent` to any command to get JSON output (or compact pipe-delimited tables for large datasets).

Add `--doc` to any command to see its output schema and field descriptions (great for AI agents):

```bash
coinhunter check-api
coin buy --doc
coin market klines --doc
```

Run precheck / gate plumbing:

### Examples

```bash
coinhunter precheck
coinhunter precheck --mark-run-requested "external-gate queued cron run"
coinhunter precheck --ack "analysis finished"
# Account (aliases: a, acc)
coinhunter account
coinhunter account --agent
coin a

# Market (aliases: m)
coinhunter market tickers BTCUSDT ETH/USDT sol-usdt --window 1h
coinhunter market klines BTCUSDT ETHUSDT --interval 1h --limit 50
coin m tk BTCUSDT ETHUSDT -w 1d
coin m k BTCUSDT -i 1h -l 50

# Trade (buy / sell are now top-level commands)
coinhunter buy BTCUSDT --quote 100 --dry-run
coinhunter sell BTCUSDT --qty 0.01 --type limit --price 90000
coin b BTCUSDT -Q 100 -d
coin s BTCUSDT -q 0.01 -t limit -p 90000

# Portfolio (aliases: pf, p)
coinhunter portfolio
coinhunter portfolio --agent
coin pf

# Opportunity scanning (aliases: o)
coinhunter opportunity
coinhunter opportunity --symbols BTCUSDT ETHUSDT SOLUSDT
coin o -s BTCUSDT ETHUSDT

# Audit log
coinhunter catlog
coinhunter catlog -n 20
coinhunter catlog -n 10 -o 10
coinhunter catlog --dry-run

# Configuration management (aliases: cfg, c)
coinhunter config get                  # show all config
coinhunter config get binance.recv_window
coinhunter config set opportunity.top_n 20
coinhunter config set signal.lookback_interval 4h
coinhunter config set portfolio.max_position_weight 0.25
coinhunter config set trading.dry_run_default true
coinhunter config set market.universe_allowlist BTCUSDT,ETHUSDT
coinhunter config key YOUR_API_KEY     # or omit value to prompt interactively
coinhunter config secret YOUR_SECRET   # or omit value to prompt interactively
coin c get opportunity.top_n
coin c set trading.dry_run_default false

# Self-upgrade
coinhunter upgrade
coin upgrade

# Shell completion (manual)
coinhunter completion zsh > ~/.zsh/completions/_coinhunter
coinhunter completion bash > ~/.local/share/bash-completion/completions/coinhunter
```

Inspect balances or execute trading actions:

`upgrade` will try `pipx upgrade coinhunter` first, and fall back to `pip install --upgrade coinhunter` if pipx is not available.

```bash
coinhunter smart-executor balances
coinhunter smart-executor status
coinhunter smart-executor hold
coinhunter smart-executor buy ENJUSDT 50
coinhunter smart-executor sell-all ENJUSDT
```

## Architecture

Generate review data:

CoinHunter V2 uses a flat, direct architecture:

```bash
coinhunter review-context 12
coinhunter review-engine 12
```

| Layer | Responsibility | Key Files |
|-------|----------------|-----------|
| **CLI** | Single entrypoint, argument parsing | `cli.py` |
| **Binance** | Thin API wrappers with unified error handling | `binance/spot_client.py` |
| **Services** | Domain logic | `services/account_service.py`, `services/market_service.py`, `services/signal_service.py`, `services/opportunity_service.py`, `services/portfolio_service.py`, `services/trade_service.py` |
| **Config** | TOML config, `.env` secrets, path resolution | `config.py` |
| **Runtime** | Paths, TUI/JSON/compact output | `runtime.py` |
| **Audit** | Structured JSONL logging | `audit.py` |

Probe external market data:

## Logging

```bash
coinhunter market-probe bybit-ticker BTCUSDT
coinhunter market-probe bybit-klines BTCUSDT 60 20
```

## Runtime model

Default layout:

Audit logs are written to:

```text
~/.coinhunter/
├── accounts.json
├── config.json
├── executions.json
├── notes.json
├── positions.json
├── watchlist.json
├── logs/
├── reviews/
└── state/
~/.coinhunter/logs/audit_YYYYMMDD.jsonl
```

Credential loading:

Events include:

- Binance credentials are read from `~/.hermes/.env` by default.
- `COINHUNTER_ENV_FILE` can point to a different env file.
- `hermes` is resolved from `PATH` first, then `~/.local/bin/hermes`, unless `HERMES_BIN` overrides it.
- `trade_submitted`
- `trade_filled`
- `trade_failed`
- `opportunity_portfolio_generated`
- `opportunity_scan_generated`
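Given the `audit_event` implementation shown later on this page (which writes `timestamp`, `event`, and the payload keys as one JSON object per line), a single audit entry might look like the following; every field other than `timestamp` and `event` is an illustrative payload key, not a documented schema:

```json
{"timestamp": "2025-06-01T08:30:00+00:00", "event": "trade_submitted", "symbol": "BTCUSDT", "side": "BUY", "notional_usdt": 100.0, "dry_run": true}
```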
## Useful commands

Use `coinhunter catlog` to read recent entries in the terminal. It aggregates across all days and supports pagination with `-n/--limit` and `-o/--offset`.

### Diagnostics

## Development

Clone the repo and install in editable mode:

```bash
coinhunter doctor
coinhunter paths
coinhunter check-api
git clone https://git.tacitlab.cc/TacitLab/coinhunter-cli.git
cd coinhunter-cli
pip install -e ".[dev]"
```

### Trading and execution

Or use the provided Conda environment:

```bash
coinhunter smart-executor balances
coinhunter smart-executor status
coinhunter smart-executor hold
coinhunter smart-executor rebalance FROMUSDT TOUSDT
conda env create -f environment.yml
conda activate coinhunter
```

### Precheck and orchestration

Run quality checks:

```bash
coinhunter precheck
coinhunter external-gate
coinhunter rotate-external-gate-log
pytest tests/          # run tests
ruff check src tests   # lint
mypy src               # type check
```

### Review and market research

```bash
coinhunter review-context 12
coinhunter review-engine 12
coinhunter market-probe bybit-ticker BTCUSDT
```

## Development notes

This project is intentionally moving in small, safe refactor steps:

1. Separate runtime concerns from hardcoded paths.
2. Move command dispatch into thin adapters.
3. Introduce orchestration services.
4. Extract reusable domain logic from large compatibility modules.
5. Keep cron / Hermes integration stable during migration.

That means some compatibility modules still exist, but the direction is deliberate.

## Near-term roadmap

- Extract more logic from `smart_executor.py` into dedicated execution / portfolio services.
- Continue shrinking `precheck.py` by moving snapshot and analysis internals into reusable modules.
- Add `domain/` models for positions, signals, and trigger analysis.
- Add tests for runtime paths, precheck service behavior, and CLI stability.
- Evolve toward a more polished installable CLI workflow.

## Philosophy

CoinHunter should become:

- more professional
- more maintainable
- safer to operate
- easier for humans and agents to call
- less dependent on prompt-only correctness

This repo is where that evolution happens.
**environment.yml** (new file, 9 lines)

@@ -0,0 +1,9 @@

```yaml
name: coinhunter
channels:
  - defaults
  - conda-forge
dependencies:
  - python>=3.10
  - pip
  - pip:
      - -e ".[dev]"
```
**pyproject.toml**

@@ -3,23 +3,51 @@ requires = ["setuptools>=68", "wheel"]

```toml
build-backend = "setuptools.build_meta"

[project]
name = "coinhunter-cli"
version = "0.1.0"
description = "CoinHunter trading CLI with user runtime data in ~/.coinhunter"
name = "coinhunter"
version = "3.0.1"
description = "Binance-first trading CLI for balances, market data, opportunity scanning, and execution."
readme = "README.md"
license = {text = "MIT"}
requires-python = ">=3.10"
dependencies = [
    "ccxt>=4.4.0"
    "binance-connector>=3.9.0",
    "requests>=2.31.0",
    "shtab>=1.7.0",
    "tomli>=2.0.1; python_version < '3.11'",
    "tomli-w>=1.0.0",
]
authors = [
    {name = "Tacit Lab", email = "ouyangcarlos@gmail.com"}
]

[project.optional-dependencies]
dev = [
    "pytest>=8.0",
    "ruff>=0.5.0",
    "mypy>=1.10.0",
    "types-requests>=2.31.0",
]

[project.scripts]
coinhunter = "coinhunter.cli:main"
coin = "coinhunter.cli:main"

[tool.setuptools]
package-dir = {"" = "src"}

[tool.setuptools.packages.find]
where = ["src"]

[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "-v"

[tool.ruff.lint]
select = ["E", "F", "I", "W", "UP", "B", "C4", "SIM"]
ignore = ["E501"]

[tool.ruff.lint.pydocstyle]
convention = "google"

[tool.mypy]
python_version = "3.10"
warn_return_any = true
warn_unused_ignores = true
ignore_missing_imports = true
```
**src/coinhunter/__init__.py**

@@ -1 +1,8 @@

```python
__version__ = "0.1.0"
"""CoinHunter V2."""

try:
    from importlib.metadata import version

    __version__ = version("coinhunter")
except Exception:  # pragma: no cover
    __version__ = "unknown"
```
**src/coinhunter/__main__.py**

@@ -1,2 +1,3 @@

```python
from .cli import main

raise SystemExit(main())
```
**src/coinhunter/audit.py** (new file, 78 lines)

@@ -0,0 +1,78 @@

```python
"""Audit logging for CoinHunter V2."""

from __future__ import annotations

import json
from collections import deque
from datetime import datetime, timezone
from pathlib import Path
from typing import Any

from .config import load_config, resolve_log_dir
from .runtime import RuntimePaths, ensure_runtime_dirs, get_runtime_paths, json_default

_audit_dir_cache: dict[str, Path] = {}


def _resolve_audit_dir(paths: RuntimePaths) -> Path:
    key = str(paths.root)
    if key not in _audit_dir_cache:
        config = load_config(paths)
        _audit_dir_cache[key] = resolve_log_dir(config, paths)
    return _audit_dir_cache[key]


def _audit_path(paths: RuntimePaths | None = None, *, dry_run: bool = False) -> Path:
    paths = ensure_runtime_dirs(paths or get_runtime_paths())
    logs_dir = _resolve_audit_dir(paths)
    subdir = logs_dir / ("dryrun" if dry_run else "live")
    subdir.mkdir(parents=True, exist_ok=True)
    return subdir / f"audit_{datetime.now(timezone.utc).strftime('%Y%m%d')}.jsonl"


def audit_event(
    event: str, payload: dict[str, Any], paths: RuntimePaths | None = None, *, dry_run: bool = False
) -> dict[str, Any]:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        **payload,
    }
    with _audit_path(paths, dry_run=dry_run).open("a", encoding="utf-8") as handle:
        handle.write(json.dumps(entry, ensure_ascii=False, default=json_default) + "\n")
    return entry


def read_audit_log(
    paths: RuntimePaths | None = None, limit: int = 10, offset: int = 0, *, dry_run: bool = False
) -> list[dict[str, Any]]:
    paths = ensure_runtime_dirs(paths or get_runtime_paths())
    logs_dir = _resolve_audit_dir(paths)
    if not logs_dir.exists():
        return []
    subdir = logs_dir / ("dryrun" if dry_run else "live")
    if not subdir.exists():
        return []
    audit_files = sorted(subdir.glob("audit_*.jsonl"), reverse=True)
    needed = offset + limit
    chunks: list[list[dict[str, Any]]] = []
    total = 0
    for audit_file in audit_files:
        remaining = needed - total
        if remaining <= 0:
            break
        entries: list[dict[str, Any]] = []
        with audit_file.open("r", encoding="utf-8") as handle:
            entries = list(deque((json.loads(line) for line in handle if line.strip()), maxlen=remaining))
        if entries:
            chunks.append(entries)
        total += len(entries)
    if not chunks:
        return []
    all_entries: list[dict[str, Any]] = []
    for chunk in reversed(chunks):
        all_entries.extend(chunk)
    start = -(offset + limit) if (offset + limit) <= len(all_entries) else -len(all_entries)
    if offset == 0:
        return all_entries[start:]
    return all_entries[start:-offset]
```
@@ -1,289 +0,0 @@
#!/usr/bin/env python3
"""
Coin Hunter Auto Trader
Fully automated meme-coin hunter + Binance executor

Before running, configure the following in ~/.hermes/.env:
BINANCE_API_KEY=your_api_key
BINANCE_API_SECRET=your_api_secret

For a first run, it is recommended to test the logic with DRY_RUN=True.
"""
import json
import os
import sys
import time
from datetime import datetime, timezone, timedelta
from pathlib import Path

import ccxt

from .runtime import get_runtime_paths, load_env_file

# ============== Configuration ==============
PATHS = get_runtime_paths()
COINS_DIR = PATHS.root
POSITIONS_FILE = PATHS.positions_file
ENV_FILE = PATHS.env_file

CST = timezone(timedelta(hours=8))

# Risk-control parameters
DRY_RUN = os.getenv("DRY_RUN", "true").lower() == "true"  # defaults to test mode
MAX_POSITIONS = 2  # maximum number of concurrent positions

# Capital allocation (computed dynamically from total equity)
CAPITAL_ALLOCATION_PCT = 0.95  # allocate 95% of total equity to this strategy (5% buffer for fees and slippage)
MIN_POSITION_USDT = 50  # minimum order size per trade (avoid dust orders)

MIN_VOLUME_24H = 1_000_000  # minimum 24h quote volume ($)
MIN_PRICE_CHANGE_24H = 0.05  # minimum 24h gain, 5%
MAX_PRICE = 1.0  # low-priced coins only (meme-coin trait)
STOP_LOSS_PCT = -0.07  # stop loss at -7%
TAKE_PROFIT_1_PCT = 0.15  # take-profit 1 at +15%
TAKE_PROFIT_2_PCT = 0.30  # take-profit 2 at +30%
BLACKLIST = {"USDC", "BUSD", "TUSD", "FDUSD", "USTC", "PAXG", "XRP", "ETH", "BTC"}

# ============== Utility functions ==============
def log(msg: str):
    print(f"[{datetime.now(CST).strftime('%Y-%m-%d %H:%M:%S')} CST] {msg}")


def load_positions() -> list:
    if POSITIONS_FILE.exists():
        return json.loads(POSITIONS_FILE.read_text(encoding="utf-8")).get("positions", [])
    return []


def save_positions(positions: list):
    COINS_DIR.mkdir(parents=True, exist_ok=True)
    POSITIONS_FILE.write_text(json.dumps({"positions": positions}, indent=2, ensure_ascii=False), encoding="utf-8")


def load_env():
    load_env_file(PATHS)


def calculate_position_size(total_usdt: float, available_usdt: float, open_slots: int) -> float:
    """
    Dynamically size each order based on total equity.
    Logic: fix the strategy's total cap first, then split it evenly across the remaining open slots.
    """
    strategy_cap = total_usdt * CAPITAL_ALLOCATION_PCT
    # Capital already deployed in the strategy is approximately total cap minus available balance
    used_in_strategy = max(0, strategy_cap - available_usdt)
    remaining_strategy_cap = max(0, strategy_cap - used_in_strategy)

    if open_slots <= 0 or remaining_strategy_cap < MIN_POSITION_USDT:
        return 0

    size = remaining_strategy_cap / open_slots
    # Also must not exceed the currently available balance
    size = min(size, available_usdt)
    # Round to two decimal places
    size = max(0, round(size, 2))
    return size if size >= MIN_POSITION_USDT else 0
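With hypothetical figures (USD 1,000 total equity, nothing deployed yet, two free slots), the sizing steps in `calculate_position_size` work out as follows; this is an illustration of the arithmetic, not part of the module:

```python
# Hypothetical inputs: $1,000 total equity, $1,000 available, 2 open slots.
CAPITAL_ALLOCATION_PCT = 0.95
MIN_POSITION_USDT = 50
total_usdt, available_usdt, open_slots = 1000.0, 1000.0, 2

strategy_cap = total_usdt * CAPITAL_ALLOCATION_PCT        # 950.0 cap for the strategy
used_in_strategy = max(0, strategy_cap - available_usdt)  # 0.0, nothing deployed yet
remaining_cap = max(0, strategy_cap - used_in_strategy)   # 950.0 still free
size = min(remaining_cap / open_slots, available_usdt)    # 475.0 per slot
size = max(0, round(size, 2))
print(size)  # 475.0 — above MIN_POSITION_USDT, so the order is placed
```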
# ============== Binance client ==============
class BinanceTrader:
    def __init__(self):
        api_key = os.getenv("BINANCE_API_KEY")
        secret = os.getenv("BINANCE_API_SECRET")
        if not api_key or not secret:
            raise RuntimeError("Missing BINANCE_API_KEY or BINANCE_API_SECRET; configure ~/.hermes/.env")
        self.exchange = ccxt.binance({
            "apiKey": api_key,
            "secret": secret,
            "options": {"defaultType": "spot"},
            "enableRateLimit": True,
        })
        self.exchange.load_markets()

    def get_balance(self, asset: str = "USDT") -> float:
        bal = self.exchange.fetch_balance()["free"].get(asset, 0)
        return float(bal)

    def fetch_tickers(self) -> dict:
        return self.exchange.fetch_tickers()

    def create_market_buy_order(self, symbol: str, amount_usdt: float):
        if DRY_RUN:
            log(f"[DRY RUN] simulated buy {symbol}, amount ${amount_usdt}")
            return {"id": "dry-run-buy", "price": None, "amount": amount_usdt}
        ticker = self.exchange.fetch_ticker(symbol)
        price = float(ticker["last"])
        qty = amount_usdt / price
        order = self.exchange.create_market_buy_order(symbol, qty)
        log(f"✅ Bought {symbol} | qty {qty:.4f} | price ~${price}")
        return order

    def create_market_sell_order(self, symbol: str, qty: float):
        if DRY_RUN:
            log(f"[DRY RUN] simulated sell {symbol}, qty {qty}")
            return {"id": "dry-run-sell"}
        order = self.exchange.create_market_sell_order(symbol, qty)
        log(f"✅ Sold {symbol} | qty {qty:.4f}")
        return order
# ============== Coin-picking engine ==============
class CoinPicker:
    def __init__(self, exchange: ccxt.binance):
        self.exchange = exchange

    def scan(self) -> list:
        tickers = self.exchange.fetch_tickers()
        candidates = []
        for symbol, t in tickers.items():
            if not symbol.endswith("/USDT"):
                continue
            base = symbol.replace("/USDT", "")
            if base in BLACKLIST:
                continue

            price = float(t["last"] or 0)
            change = float(t.get("percentage", 0)) / 100
            volume = float(t.get("quoteVolume", 0))

            if price <= 0 or price > MAX_PRICE:
                continue
            if volume < MIN_VOLUME_24H:
                continue
            if change < MIN_PRICE_CHANGE_24H:
                continue

            score = change * (volume / MIN_VOLUME_24H)
            candidates.append({
                "symbol": symbol,
                "base": base,
                "price": price,
                "change_24h": change,
                "volume_24h": volume,
                "score": score,
            })

        candidates.sort(key=lambda x: x["score"], reverse=True)
        return candidates[:5]
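The `score` formula weights the 24h gain by how many multiples of the volume floor a pair trades. A worked example with hypothetical numbers:

```python
MIN_VOLUME_24H = 1_000_000

change = 0.12            # hypothetical +12% move in 24h
volume = 3_000_000.0     # hypothetical $3M quote volume, 3x the floor

# Gain scaled by volume multiple: a 12% move on 3x volume
# outranks the same move on thin volume.
score = change * (volume / MIN_VOLUME_24H)
print(round(score, 2))  # 0.36
```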
# ============== Main controller ==============
def run_cycle():
    load_env()
    trader = BinanceTrader()
    picker = CoinPicker(trader.exchange)
    positions = load_positions()

    log(f"Open positions: {len(positions)} | max allowed: {MAX_POSITIONS} | DRY_RUN={DRY_RUN}")

    # 1. Check existing positions (take profit / stop loss)
    tickers = trader.fetch_tickers()
    new_positions = []
    for pos in positions:
        sym = pos["symbol"]
        qty = float(pos["quantity"])
        cost = float(pos["avg_cost"])
        # ccxt tickers use the slash format, e.g. PENGU/USDT
        sym_ccxt = sym.replace("USDT", "/USDT") if "/" not in sym else sym
        ticker = tickers.get(sym_ccxt)
        if not ticker:
            new_positions.append(pos)
            continue

        price = float(ticker["last"])
        pnl_pct = (price - cost) / cost
        log(f"Monitoring {sym} | price ${price:.8f} | cost ${cost:.8f} | PnL {pnl_pct:+.2%}")

        action = None
        if pnl_pct <= STOP_LOSS_PCT:
            action = "STOP_LOSS"
        elif pnl_pct >= TAKE_PROFIT_2_PCT:
            action = "TAKE_PROFIT_2"
        elif pnl_pct >= TAKE_PROFIT_1_PCT:
            # Check whether part of the position was already sold at take-profit 1
            sold_pct = float(pos.get("take_profit_1_sold_pct", 0))
            if sold_pct == 0:
                action = "TAKE_PROFIT_1"

        if action == "STOP_LOSS":
            trader.create_market_sell_order(sym, qty)
            log(f"🛑 {sym} hit stop loss, position fully closed")
            continue

        if action == "TAKE_PROFIT_1":
            sell_qty = qty * 0.5
            trader.create_market_sell_order(sym, sell_qty)
            pos["quantity"] = qty - sell_qty
            pos["take_profit_1_sold_pct"] = 50
            pos["updated_at"] = datetime.now(CST).isoformat()
            log(f"🎯 {sym} hit take-profit 1, sold 50%, remaining {pos['quantity']:.4f}")
            new_positions.append(pos)
            continue

        if action == "TAKE_PROFIT_2":
            trader.create_market_sell_order(sym, float(pos["quantity"]))
            log(f"🚀 {sym} hit take-profit 2, position fully closed")
            continue

        new_positions.append(pos)

    # 2. Open new positions
    if len(new_positions) < MAX_POSITIONS:
        candidates = picker.scan()
        held_bases = {p["base_asset"] for p in new_positions}
        total_usdt = trader.get_balance("USDT")
        # Add the market value of open positions to total equity
        for pos in new_positions:
            sym_ccxt = pos["symbol"].replace("USDT", "/USDT") if "/" not in pos["symbol"] else pos["symbol"]
            ticker = tickers.get(sym_ccxt)
            if ticker:
                total_usdt += float(pos["quantity"]) * float(ticker["last"])

        available_usdt = trader.get_balance("USDT")
        open_slots = MAX_POSITIONS - len(new_positions)
        position_size = calculate_position_size(total_usdt, available_usdt, open_slots)

        log(f"Total USDT equity: ${total_usdt:.2f} | strategy cap ({CAPITAL_ALLOCATION_PCT:.0%}): ${total_usdt*CAPITAL_ALLOCATION_PCT:.2f} | suggested size per slot: ${position_size:.2f}")

        for cand in candidates:
            if len(new_positions) >= MAX_POSITIONS:
                break
            base = cand["base"]
            if base in held_bases:
                continue
            if position_size <= 0:
                log("Strategy capital exhausted or balance insufficient; stop opening new positions")
                break

            symbol = cand["symbol"]
            order = trader.create_market_buy_order(symbol, position_size)
            avg_price = float(order.get("price") or cand["price"])
            qty = position_size / avg_price if avg_price else 0

            new_positions.append({
                "account_id": "binance-main",
                "symbol": symbol.replace("/", ""),
                "base_asset": base,
                "quote_asset": "USDT",
                "market_type": "spot",
                "quantity": qty,
                "avg_cost": avg_price,
                "opened_at": datetime.now(CST).isoformat(),
                "updated_at": datetime.now(CST).isoformat(),
                "note": "Auto-trader entry",
            })
            held_bases.add(base)
            available_usdt -= position_size
            position_size = calculate_position_size(total_usdt, available_usdt, MAX_POSITIONS - len(new_positions))
            log(f"📈 Opened {symbol} | entry ${avg_price:.8f} | qty {qty:.2f}")

    save_positions(new_positions)
    log("Cycle complete; positions saved")


if __name__ == "__main__":
    try:
        run_cycle()
    except Exception as e:
        log(f"❌ Error: {e}")
        sys.exit(1)
1
src/coinhunter/binance/__init__.py
Normal file
@@ -0,0 +1 @@
"""Official Binance connector wrappers."""
81
src/coinhunter/binance/spot_client.py
Normal file
@@ -0,0 +1,81 @@
"""Thin wrapper around the official Binance Spot connector."""

from __future__ import annotations

from collections.abc import Callable
from typing import Any

from requests.exceptions import (
    RequestException,
    SSLError,
)


class SpotBinanceClient:
    def __init__(
        self,
        *,
        api_key: str,
        api_secret: str,
        base_url: str,
        recv_window: int,
        client: Any | None = None,
    ) -> None:
        self.recv_window = recv_window
        if client is not None:
            self._client = client
            return
        try:
            from binance.spot import Spot
        except ModuleNotFoundError as exc:  # pragma: no cover
            raise RuntimeError("binance-connector is not installed") from exc
        self._client = Spot(api_key=api_key, api_secret=api_secret, base_url=base_url)

    def _call(self, operation: str, func: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
        try:
            return func(*args, **kwargs)
        except SSLError as exc:
            raise RuntimeError(
                "Binance Spot request failed because TLS certificate verification failed. "
                "This usually means the local Python trust store is incomplete or a proxy is intercepting HTTPS. "
                "Update the local CA trust chain or configure the host environment with the correct corporate/root CA."
            ) from exc
        except RequestException as exc:
            raise RuntimeError(f"Binance Spot request failed during {operation}: {exc}") from exc

    def account_info(self) -> dict[str, Any]:
        return self._call("account info", self._client.account, recvWindow=self.recv_window)  # type: ignore[no-any-return]

    def exchange_info(self, symbol: str | None = None) -> dict[str, Any]:
        kwargs: dict[str, Any] = {}
        if symbol:
            kwargs["symbol"] = symbol
        return self._call("exchange info", self._client.exchange_info, **kwargs)  # type: ignore[no-any-return]

    def ticker_stats(self, symbols: list[str] | None = None, *, window: str = "1d") -> list[dict[str, Any]]:
        if symbols:
            kwargs: dict[str, Any] = {"windowSize": window}
            if len(symbols) == 1:
                kwargs["symbol"] = symbols[0]
            else:
                kwargs["symbols"] = symbols
            response = self._call("ticker stats", self._client.rolling_window_ticker, **kwargs)
        else:
            response = self._call("ticker stats", self._client.ticker_24hr)
        return response if isinstance(response, list) else [response]

    def ticker_price(self, symbols: list[str] | None = None) -> list[dict[str, Any]]:
        if not symbols:
            response = self._call("ticker price", self._client.ticker_price)
        elif len(symbols) == 1:
            response = self._call("ticker price", self._client.ticker_price, symbol=symbols[0])
        else:
            response = self._call("ticker price", self._client.ticker_price, symbols=symbols)
        return response if isinstance(response, list) else [response]

    def klines(self, symbol: str, interval: str, limit: int) -> list[list[Any]]:
        return self._call("klines", self._client.klines, symbol=symbol, interval=interval, limit=limit)  # type: ignore[no-any-return]

    def new_order(self, **kwargs: Any) -> dict[str, Any]:
        kwargs.setdefault("recvWindow", self.recv_window)
        return self._call("new order", self._client.new_order, **kwargs)  # type: ignore[no-any-return]
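Single-symbol Binance ticker endpoints return one object while multi-symbol queries return a list; the wrapper normalizes both shapes to a list before returning. A standalone sketch of that normalization (sample payloads are illustrative):

```python
from typing import Any


def normalize(response: Any) -> list[dict[str, Any]]:
    # Single-symbol queries yield one dict; multi-symbol queries yield a list.
    return response if isinstance(response, list) else [response]


single = {"symbol": "BTCUSDT", "price": "65000.00"}  # hypothetical payload
many = [{"symbol": "BTCUSDT"}, {"symbol": "ETHUSDT"}]

print(normalize(single))       # wrapped into a one-element list
print(len(normalize(many)))    # 2 — lists pass through unchanged
```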
@@ -1,26 +0,0 @@
#!/usr/bin/env python3
"""Check that the environment is configured for auto trading."""
import os

from .runtime import load_env_file


def main():
    load_env_file()

    api_key = os.getenv("BINANCE_API_KEY", "")
    secret = os.getenv("BINANCE_API_SECRET", "")

    if not api_key or api_key.startswith("***") or api_key.startswith("your_"):
        print("❌ BINANCE_API_KEY is not configured")
        return 1
    if not secret or secret.startswith("***") or secret.startswith("your_"):
        print("❌ BINANCE_API_SECRET is not configured")
        return 1

    print("✅ API configuration looks good")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
1357
src/coinhunter/cli.py
Executable file → Normal file
File diff suppressed because it is too large
@@ -1 +0,0 @@
"""CLI command adapters for CoinHunter."""
@@ -1,15 +0,0 @@
"""CLI adapter for precheck."""

from __future__ import annotations

import sys

from ..services.precheck_service import run


def main() -> int:
    return run(sys.argv[1:])


if __name__ == "__main__":
    raise SystemExit(main())
@@ -1,15 +0,0 @@
"""CLI adapter for smart executor."""

from __future__ import annotations

import sys

from ..services.smart_executor_service import run


def main() -> int:
    return run(sys.argv[1:])


if __name__ == "__main__":
    raise SystemExit(main())
270
src/coinhunter/config.py
Normal file
@@ -0,0 +1,270 @@
"""Configuration and secret loading for CoinHunter V2."""

from __future__ import annotations

import os
from pathlib import Path
from typing import Any

from .runtime import RuntimePaths, ensure_runtime_dirs, get_runtime_paths

try:
    import tomllib
except ModuleNotFoundError:  # pragma: no cover
    import tomli as tomllib

try:
    import tomli_w
except ModuleNotFoundError:  # pragma: no cover
    tomli_w = None  # type: ignore[assignment]


DEFAULT_CONFIG = """[runtime]
timezone = "Asia/Shanghai"
log_dir = "logs"
output_format = "tui"

[binance]
spot_base_url = "https://api.binance.com"
recv_window = 5000

[market]
default_quote = "USDT"
universe_allowlist = []
universe_denylist = []

[trading]
spot_enabled = true
dry_run_default = false
dust_usdt_threshold = 10.0

[opportunity]
min_quote_volume = 1000000.0
top_n = 10
scan_limit = 50
ignore_dust = true
entry_threshold = 1.5
watch_threshold = 0.6
min_trigger_score = 0.45
min_setup_score = 0.35
overlap_penalty = 0.6
lookback_intervals = ["1h", "4h", "1d"]
auto_research = true
research_provider = "coingecko"
research_timeout_seconds = 4.0
simulate_days = 7
run_days = 7
dataset_timeout_seconds = 15.0
evaluation_horizon_hours = 24.0
evaluation_take_profit_pct = 2.0
evaluation_stop_loss_pct = 1.5
evaluation_setup_target_pct = 1.0
evaluation_lookback = 24

[opportunity.risk_limits]
min_liquidity = 0.0
max_overextension = 0.08
max_downside_risk = 0.3
max_unlock_risk = 0.75
max_regulatory_risk = 0.75
min_quality_for_add = 0.0

[opportunity.weights]
trend = 1.0
momentum = 1.0
breakout = 0.8
pullback = 0.4
volume = 0.7
liquidity = 0.3
trend_alignment = 0.8
fundamental = 0.8
tokenomics = 0.7
catalyst = 0.5
adoption = 0.4
smart_money = 0.3
volatility_penalty = 0.5
overextension_penalty = 0.7
downside_penalty = 0.5
unlock_penalty = 0.8
regulatory_penalty = 0.4
position_concentration_penalty = 0.6

[opportunity.model_weights]
trend = 0.1406
compression = 0.1688
breakout_proximity = 0.0875
higher_lows = 0.15
range_position = 0.45
fresh_breakout = 0.2
volume = 0.525
momentum = 0.1562
setup = 1.875
trigger = 1.875
liquidity = 0.3
volatility_penalty = 0.8
extension_penalty = 0.45

[signal]
lookback_interval = "1h"
trend = 1.0
momentum = 1.0
breakout = 0.8
volume = 0.7
volatility_penalty = 0.5

[portfolio]
add_threshold = 1.5
hold_threshold = 0.6
trim_threshold = 0.2
exit_threshold = -0.2
max_position_weight = 0.6
"""

DEFAULT_ENV = "BINANCE_API_KEY=\nBINANCE_API_SECRET=\n"


def _permission_denied_message(paths: RuntimePaths, exc: PermissionError) -> RuntimeError:
    return RuntimeError(
        "Unable to initialize CoinHunter runtime files because the target directory is not writable: "
        f"{paths.root}. Set COINHUNTER_HOME to a writable directory or rerun with permissions that can write there. "
        f"Original error: {exc}"
    )


def ensure_init_files(paths: RuntimePaths | None = None, *, force: bool = False) -> dict[str, Any]:
    paths = paths or get_runtime_paths()
    try:
        ensure_runtime_dirs(paths)
    except PermissionError as exc:
        raise _permission_denied_message(paths, exc) from exc
    created: list[str] = []
    updated: list[str] = []

    for path, content in ((paths.config_file, DEFAULT_CONFIG), (paths.env_file, DEFAULT_ENV)):
        if force or not path.exists():
            try:
                path.write_text(content, encoding="utf-8")
            except PermissionError as exc:
                raise _permission_denied_message(paths, exc) from exc
            (updated if force and path.exists() else created).append(str(path))
    return {
        "root": str(paths.root),
        "config_file": str(paths.config_file),
        "env_file": str(paths.env_file),
        "logs_dir": str(paths.logs_dir),
        "created_or_updated": created + updated,
        "force": force,
    }


def load_config(paths: RuntimePaths | None = None) -> dict[str, Any]:
    paths = paths or get_runtime_paths()
    if not paths.config_file.exists():
        raise RuntimeError(f"Missing config file at {paths.config_file}. Run `coinhunter init` first.")
    return tomllib.loads(paths.config_file.read_text(encoding="utf-8"))  # type: ignore[no-any-return]


def load_env_file(paths: RuntimePaths | None = None) -> dict[str, str]:
    paths = paths or get_runtime_paths()
    loaded: dict[str, str] = {}
    if not paths.env_file.exists():
        return loaded
    for raw_line in paths.env_file.read_text(encoding="utf-8").splitlines():
        line = raw_line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, value = line.split("=", 1)
        key = key.strip()
        value = value.strip()
        loaded[key] = value
        os.environ[key] = value
    return loaded


def get_binance_credentials(paths: RuntimePaths | None = None) -> dict[str, str]:
    load_env_file(paths)
    api_key = os.getenv("BINANCE_API_KEY", "").strip()
    api_secret = os.getenv("BINANCE_API_SECRET", "").strip()
    if not api_key or not api_secret:
        runtime_paths = paths or get_runtime_paths()
        raise RuntimeError(
            "Missing BINANCE_API_KEY or BINANCE_API_SECRET. "
            f"Populate {runtime_paths.env_file} or export them in the environment."
        )
    return {"api_key": api_key, "api_secret": api_secret}


def resolve_log_dir(config: dict[str, Any], paths: RuntimePaths | None = None) -> Path:
    paths = paths or get_runtime_paths()
    raw = config.get("runtime", {}).get("log_dir", "logs")
    value = Path(raw).expanduser()
    return value if value.is_absolute() else paths.root / value


def get_config_value(config: dict[str, Any], key_path: str) -> Any:
    keys = key_path.split(".")
    node = config
    for key in keys:
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node


def set_config_value(config_file: Path, key_path: str, value: Any) -> None:
    if tomli_w is None:
        raise RuntimeError("tomli-w is not installed. Run `pip install tomli-w`.")
    if not config_file.exists():
        raise RuntimeError(f"Config file not found: {config_file}")
    config = tomllib.loads(config_file.read_text(encoding="utf-8"))
    keys = key_path.split(".")
    node = config
    for key in keys[:-1]:
        if key not in node:
            node[key] = {}
        node = node[key]

    # Coerce type from existing value when possible
    existing = node.get(keys[-1])
    if isinstance(existing, bool) and isinstance(value, str):
        value = value.lower() in ("true", "1", "yes", "on")
    elif isinstance(existing, (int, float)) and isinstance(value, str):
        try:
            value = type(existing)(value)
        except (ValueError, TypeError) as exc:
            raise RuntimeError(
                f"Cannot set {key_path} to {value!r}: expected {type(existing).__name__}, got {value!r}"
            ) from exc
    elif isinstance(existing, list) and isinstance(value, str):
        value = [item.strip() for item in value.split(",") if item.strip()]

    node[keys[-1]] = value
    config_file.write_text(tomli_w.dumps(config), encoding="utf-8")


def get_env_value(paths: RuntimePaths | None = None, key: str = "") -> str:
    paths = paths or get_runtime_paths()
    if not paths.env_file.exists():
        return ""
    env_data = load_env_file(paths)
    return env_data.get(key, "")


def set_env_value(paths: RuntimePaths | None = None, key: str = "", value: str = "") -> None:
    paths = paths or get_runtime_paths()
    if not paths.env_file.exists():
        raise RuntimeError(f"Env file not found: {paths.env_file}. Run `coin init` first.")

    lines = paths.env_file.read_text(encoding="utf-8").splitlines()
    found = False
    for i, line in enumerate(lines):
        stripped = line.strip()
        if stripped.startswith(f"{key}=") or stripped.startswith(f"{key} ="):
            lines[i] = f"{key}={value}"
            found = True
            break
    if not found:
        lines.append(f"{key}={value}")

    paths.env_file.write_text("\n".join(lines) + "\n", encoding="utf-8")
    os.environ[key] = value
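`get_config_value` walks a dotted key path through nested TOML tables, returning `None` when any segment is missing. A self-contained sketch of the traversal (the sample dict mirrors a slice of the default config):

```python
# Sample nested config, shaped like the parsed [opportunity.risk_limits] table.
config = {"opportunity": {"risk_limits": {"max_overextension": 0.08}}}


def get_config_value(config: dict, key_path: str):
    node = config
    for key in key_path.split("."):
        # Bail out to None if the path leaves the dict tree or a key is absent.
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node


print(get_config_value(config, "opportunity.risk_limits.max_overextension"))  # 0.08
print(get_config_value(config, "opportunity.missing"))  # None
```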
@@ -1,66 +0,0 @@
"""Runtime diagnostics for CoinHunter CLI."""

from __future__ import annotations

import json
import os
import platform
import shutil
import sys

from .runtime import ensure_runtime_dirs, get_runtime_paths, load_env_file, resolve_hermes_executable


REQUIRED_ENV_VARS = ["BINANCE_API_KEY", "BINANCE_API_SECRET"]


def main() -> int:
    paths = ensure_runtime_dirs(get_runtime_paths())
    env_file = load_env_file(paths)
    hermes_executable = resolve_hermes_executable(paths)

    env_checks = {}
    missing_env = []
    for name in REQUIRED_ENV_VARS:
        present = bool(os.getenv(name))
        env_checks[name] = present
        if not present:
            missing_env.append(name)

    file_checks = {
        "env_file_exists": env_file.exists(),
        "config_exists": paths.config_file.exists(),
        "positions_exists": paths.positions_file.exists(),
        "logrotate_config_exists": paths.logrotate_config.exists(),
    }
    dir_checks = {
        "root_exists": paths.root.exists(),
        "state_dir_exists": paths.state_dir.exists(),
        "logs_dir_exists": paths.logs_dir.exists(),
        "reviews_dir_exists": paths.reviews_dir.exists(),
        "cache_dir_exists": paths.cache_dir.exists(),
    }
    command_checks = {
        "hermes": bool(shutil.which("hermes") or paths.hermes_bin.exists()),
        "logrotate": bool(shutil.which("logrotate") or shutil.which("/usr/sbin/logrotate")),
    }

    report = {
        "ok": not missing_env,
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "env_file": str(env_file),
        "hermes_executable": hermes_executable,
        "paths": paths.as_dict(),
        "env_checks": env_checks,
        "missing_env": missing_env,
        "file_checks": file_checks,
        "dir_checks": dir_checks,
        "command_checks": command_checks,
    }
    print(json.dumps(report, ensure_ascii=False, indent=2))
    return 0 if report["ok"] else 1


if __name__ == "__main__":
    raise SystemExit(main())
@@ -1,82 +0,0 @@
#!/usr/bin/env python3
import fcntl
import json
import subprocess
import sys
from datetime import datetime, timezone

from .runtime import ensure_runtime_dirs, get_runtime_paths, resolve_hermes_executable

PATHS = get_runtime_paths()
STATE_DIR = PATHS.state_dir
LOCK_FILE = PATHS.external_gate_lock
COINHUNTER_MODULE = [sys.executable, "-m", "coinhunter"]
TRADE_JOB_ID = "4e6593fff158"


def utc_now():
    return datetime.now(timezone.utc).isoformat()


def log(message: str):
    print(f"[{utc_now()}] {message}")


def run_cmd(args: list[str]) -> subprocess.CompletedProcess:
    return subprocess.run(args, capture_output=True, text=True)


def parse_json_output(text: str) -> dict:
    text = (text or "").strip()
    if not text:
        return {}
    return json.loads(text)


def main():
    ensure_runtime_dirs(PATHS)
    with open(LOCK_FILE, "w", encoding="utf-8") as lockf:
        try:
            fcntl.flock(lockf.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            log("gate already running; skip")
            return 0

        precheck = run_cmd(COINHUNTER_MODULE + ["precheck"])
        if precheck.returncode != 0:
            log(f"precheck returned non-zero ({precheck.returncode}); stdout={precheck.stdout.strip()} stderr={precheck.stderr.strip()}")
            return 1

        try:
            data = parse_json_output(precheck.stdout)
        except Exception as e:
            log(f"failed to parse precheck JSON: {e}; raw={precheck.stdout.strip()[:1000]}")
            return 1

        if not data.get("should_analyze"):
            log("no trigger; skip model run")
            return 0

        if data.get("run_requested"):
            log(f"trigger already queued at {data.get('run_requested_at')}; skip duplicate")
            return 0

        mark = run_cmd(COINHUNTER_MODULE + ["precheck", "--mark-run-requested", "external-gate queued cron run"])
        if mark.returncode != 0:
            log(f"failed to mark run requested; stdout={mark.stdout.strip()} stderr={mark.stderr.strip()}")
            return 1

        trigger = run_cmd([resolve_hermes_executable(PATHS), "cron", "run", TRADE_JOB_ID])
        if trigger.returncode != 0:
            log(f"failed to trigger trade cron job; stdout={trigger.stdout.strip()} stderr={trigger.stderr.strip()}")
            return 1

        reasons = ", ".join(data.get("reasons", [])) or "unknown"
        log(f"queued trade job {TRADE_JOB_ID}; reasons={reasons}")
        if trigger.stdout.strip():
            log(trigger.stdout.strip())
        return 0


if __name__ == "__main__":
    raise SystemExit(main())
@@ -1,65 +0,0 @@
#!/usr/bin/env python3
import json
from datetime import datetime, timezone
from pathlib import Path

from .runtime import ensure_runtime_dirs, get_runtime_paths

PATHS = get_runtime_paths()
ROOT = PATHS.root
CACHE_DIR = PATHS.cache_dir


def now_iso():
    return datetime.now(timezone.utc).replace(microsecond=0).isoformat()


def ensure_file(path: Path, payload: dict):
    if path.exists():
        return False
    path.write_text(json.dumps(payload, ensure_ascii=False, indent=2) + "\n", encoding="utf-8")
    return True


def main():
    ensure_runtime_dirs(PATHS)

    created = []
    ts = now_iso()

    templates = {
        ROOT / "config.json": {
            "default_exchange": "bybit",
            "default_quote_currency": "USDT",
            "timezone": "Asia/Shanghai",
            "preferred_chains": ["solana", "base"],
            "created_at": ts,
            "updated_at": ts,
        },
        ROOT / "accounts.json": {"accounts": []},
        ROOT / "positions.json": {"positions": []},
        ROOT / "watchlist.json": {"watchlist": []},
        ROOT / "notes.json": {"notes": []},
    }

    for path, payload in templates.items():
        if ensure_file(path, payload):
            created.append(str(path))

    print(json.dumps({
        "root": str(ROOT),
        "created": created,
        "cache_dir": str(CACHE_DIR),
    }, ensure_ascii=False, indent=2))


if __name__ == "__main__":
    main()
@@ -1,107 +0,0 @@
#!/usr/bin/env python3
"""Coin Hunter structured logger."""
import json
import traceback
from datetime import datetime, timezone, timedelta

from .runtime import get_runtime_paths

LOG_DIR = get_runtime_paths().logs_dir
SCHEMA_VERSION = 2

CST = timezone(timedelta(hours=8))


def bj_now():
    return datetime.now(CST)


def ensure_dir():
    LOG_DIR.mkdir(parents=True, exist_ok=True)


def _append_jsonl(prefix: str, payload: dict):
    ensure_dir()
    date_str = bj_now().strftime("%Y%m%d")
    log_file = LOG_DIR / f"{prefix}_{date_str}.jsonl"
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(payload, ensure_ascii=False) + "\n")


def log_event(prefix: str, payload: dict):
    entry = {
        "schema_version": SCHEMA_VERSION,
        "timestamp": bj_now().isoformat(),
        **payload,
    }
    _append_jsonl(prefix, entry)
    return entry


def log_decision(data: dict):
    return log_event("decisions", data)


def log_trade(action: str, symbol: str, qty: float | None = None, amount_usdt: float | None = None,
              price: float | None = None, note: str = "", **extra):
    payload = {
        "action": action,
        "symbol": symbol,
        "qty": qty,
        "amount_usdt": amount_usdt,
        "price": price,
        "note": note,
        **extra,
    }
    return log_event("trades", payload)


def log_snapshot(market_data: dict, note: str = "", **extra):
    return log_event("snapshots", {"market_data": market_data, "note": note, **extra})


def log_error(where: str, error: Exception | str, **extra):
    payload = {
        "where": where,
        "error_type": error.__class__.__name__ if isinstance(error, Exception) else "Error",
        "error": str(error),
        "traceback": traceback.format_exc() if isinstance(error, Exception) else None,
        **extra,
    }
    return log_event("errors", payload)


def get_logs_by_date(log_type: str, date_str: str | None = None) -> list:
    if date_str is None:
        date_str = bj_now().strftime("%Y%m%d")
    log_file = LOG_DIR / f"{log_type}_{date_str}.jsonl"
    if not log_file.exists():
        return []
    entries = []
    with open(log_file, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                entries.append(json.loads(line))
            except json.JSONDecodeError:
                continue
    return entries


def get_logs_last_n_hours(log_type: str, n_hours: int = 1) -> list:
    now = bj_now()
    cutoff = now - timedelta(hours=n_hours)
    entries = []
    for offset in [0, -1]:
        date_str = (now + timedelta(days=offset)).strftime("%Y%m%d")
        for entry in get_logs_by_date(log_type, date_str):
            try:
                ts = datetime.fromisoformat(entry["timestamp"])
            except Exception:
                continue
            if ts >= cutoff:
                entries.append(entry)
    entries.sort(key=lambda x: x.get("timestamp", ""))
    return entries
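The logger removed in this hunk writes one JSON object per line to date-stamped `.jsonl` files and tolerates blank or corrupt lines on read-back. A minimal, dependency-free sketch of that round-trip (the `log_dir` parameter here is hypothetical; the deleted module derives its directory from `runtime`):

```python
import json
from datetime import datetime, timezone, timedelta
from pathlib import Path

# UTC+8, matching the logger's fixed CST offset.
CST = timezone(timedelta(hours=8))


def append_jsonl(log_dir: Path, prefix: str, payload: dict) -> dict:
    """Append one schema-stamped JSON object per line, as the logger does."""
    log_dir.mkdir(parents=True, exist_ok=True)
    now = datetime.now(CST)
    entry = {"schema_version": 2, "timestamp": now.isoformat(), **payload}
    log_file = log_dir / f"{prefix}_{now:%Y%m%d}.jsonl"
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry


def read_jsonl(log_file: Path) -> list:
    """Read entries back, skipping blank or corrupt lines like get_logs_by_date."""
    entries = []
    for line in log_file.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            entries.append(json.loads(line))
        except json.JSONDecodeError:
            continue
    return entries
```

Append-only JSONL keeps each write atomic at line granularity, so a crashed writer at worst leaves one truncated line, which the reader silently skips.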
@@ -1,243 +0,0 @@
#!/usr/bin/env python3
import argparse
import json
import os
import sys
import urllib.parse
import urllib.request

DEFAULT_TIMEOUT = 20


def fetch_json(url, headers=None, timeout=DEFAULT_TIMEOUT):
    merged_headers = {
        "Accept": "application/json",
        "User-Agent": "Mozilla/5.0 (compatible; OpenClaw Coin Hunter/1.0)",
    }
    if headers:
        merged_headers.update(headers)
    req = urllib.request.Request(url, headers=merged_headers)
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        data = resp.read()
    return json.loads(data.decode("utf-8"))


def print_json(data):
    print(json.dumps(data, ensure_ascii=False, indent=2))


def bybit_ticker(symbol: str):
    url = (
        "https://api.bybit.com/v5/market/tickers?category=spot&symbol="
        + urllib.parse.quote(symbol.upper())
    )
    payload = fetch_json(url)
    items = payload.get("result", {}).get("list", [])
    if not items:
        raise SystemExit(f"No Bybit spot ticker found for {symbol}")
    item = items[0]
    out = {
        "provider": "bybit",
        "symbol": symbol.upper(),
        "lastPrice": item.get("lastPrice"),
        "price24hPcnt": item.get("price24hPcnt"),
        "highPrice24h": item.get("highPrice24h"),
        "lowPrice24h": item.get("lowPrice24h"),
        "turnover24h": item.get("turnover24h"),
        "volume24h": item.get("volume24h"),
        "bid1Price": item.get("bid1Price"),
        "ask1Price": item.get("ask1Price"),
    }
    print_json(out)


def bybit_klines(symbol: str, interval: str, limit: int):
    params = urllib.parse.urlencode({
        "category": "spot",
        "symbol": symbol.upper(),
        "interval": interval,
        "limit": str(limit),
    })
    url = f"https://api.bybit.com/v5/market/kline?{params}"
    payload = fetch_json(url)
    rows = payload.get("result", {}).get("list", [])
    out = {
        "provider": "bybit",
        "symbol": symbol.upper(),
        "interval": interval,
        "candles": [
            {
                "startTime": r[0],
                "open": r[1],
                "high": r[2],
                "low": r[3],
                "close": r[4],
                "volume": r[5],
                "turnover": r[6],
            }
            for r in rows
        ],
    }
    print_json(out)


def dexscreener_search(query: str):
    url = "https://api.dexscreener.com/latest/dex/search/?q=" + urllib.parse.quote(query)
    payload = fetch_json(url)
    pairs = payload.get("pairs") or []
    out = []
    for p in pairs[:10]:
        out.append({
            "chainId": p.get("chainId"),
            "dexId": p.get("dexId"),
            "pairAddress": p.get("pairAddress"),
            "url": p.get("url"),
            "baseToken": p.get("baseToken"),
            "quoteToken": p.get("quoteToken"),
            "priceUsd": p.get("priceUsd"),
            "liquidityUsd": (p.get("liquidity") or {}).get("usd"),
            "fdv": p.get("fdv"),
            "marketCap": p.get("marketCap"),
            "volume24h": (p.get("volume") or {}).get("h24"),
            "buys24h": ((p.get("txns") or {}).get("h24") or {}).get("buys"),
            "sells24h": ((p.get("txns") or {}).get("h24") or {}).get("sells"),
        })
    print_json({"provider": "dexscreener", "query": query, "pairs": out})


def dexscreener_token(chain: str, address: str):
    url = f"https://api.dexscreener.com/tokens/v1/{urllib.parse.quote(chain)}/{urllib.parse.quote(address)}"
    payload = fetch_json(url)
    pairs = payload if isinstance(payload, list) else payload.get("pairs") or []
    out = []
    for p in pairs[:10]:
        out.append({
            "chainId": p.get("chainId"),
            "dexId": p.get("dexId"),
            "pairAddress": p.get("pairAddress"),
            "baseToken": p.get("baseToken"),
            "quoteToken": p.get("quoteToken"),
            "priceUsd": p.get("priceUsd"),
            "liquidityUsd": (p.get("liquidity") or {}).get("usd"),
            "fdv": p.get("fdv"),
            "marketCap": p.get("marketCap"),
            "volume24h": (p.get("volume") or {}).get("h24"),
        })
    print_json({"provider": "dexscreener", "chain": chain, "address": address, "pairs": out})


def coingecko_search(query: str):
    url = "https://api.coingecko.com/api/v3/search?query=" + urllib.parse.quote(query)
    payload = fetch_json(url)
    coins = payload.get("coins") or []
    out = []
    for c in coins[:10]:
        out.append({
            "id": c.get("id"),
            "name": c.get("name"),
            "symbol": c.get("symbol"),
            "marketCapRank": c.get("market_cap_rank"),
            "thumb": c.get("thumb"),
        })
    print_json({"provider": "coingecko", "query": query, "coins": out})


def coingecko_coin(coin_id: str):
    params = urllib.parse.urlencode({
        "localization": "false",
        "tickers": "false",
        "market_data": "true",
        "community_data": "false",
        "developer_data": "false",
        "sparkline": "false",
    })
    url = f"https://api.coingecko.com/api/v3/coins/{urllib.parse.quote(coin_id)}?{params}"
    payload = fetch_json(url)
    md = payload.get("market_data") or {}
    out = {
        "provider": "coingecko",
        "id": payload.get("id"),
        "symbol": payload.get("symbol"),
        "name": payload.get("name"),
        "marketCapRank": payload.get("market_cap_rank"),
        "currentPriceUsd": (md.get("current_price") or {}).get("usd"),
        "marketCapUsd": (md.get("market_cap") or {}).get("usd"),
        "fullyDilutedValuationUsd": (md.get("fully_diluted_valuation") or {}).get("usd"),
        "totalVolumeUsd": (md.get("total_volume") or {}).get("usd"),
        "priceChangePercentage24h": md.get("price_change_percentage_24h"),
        "priceChangePercentage7d": md.get("price_change_percentage_7d"),
        "priceChangePercentage30d": md.get("price_change_percentage_30d"),
        "circulatingSupply": md.get("circulating_supply"),
        "totalSupply": md.get("total_supply"),
        "maxSupply": md.get("max_supply"),
        "homepage": (payload.get("links") or {}).get("homepage", [None])[0],
    }
    print_json(out)


def birdeye_token(address: str):
    api_key = os.getenv("BIRDEYE_API_KEY") or os.getenv("BIRDEYE_APIKEY")
    if not api_key:
        raise SystemExit("Birdeye requires BIRDEYE_API_KEY in the environment")
    url = "https://public-api.birdeye.so/defi/token_overview?address=" + urllib.parse.quote(address)
    payload = fetch_json(url, headers={
        "x-api-key": api_key,
        "x-chain": "solana",
    })
    print_json({"provider": "birdeye", "address": address, "data": payload.get("data")})


def build_parser():
    parser = argparse.ArgumentParser(description="Coin Hunter market data probe")
    sub = parser.add_subparsers(dest="command", required=True)

    p = sub.add_parser("bybit-ticker", help="Fetch Bybit spot ticker")
    p.add_argument("symbol")

    p = sub.add_parser("bybit-klines", help="Fetch Bybit spot klines")
    p.add_argument("symbol")
    p.add_argument("--interval", default="60", help="Bybit interval, e.g. 1, 5, 15, 60, 240, D")
    p.add_argument("--limit", type=int, default=10)

    p = sub.add_parser("dex-search", help="Search DexScreener by query")
    p.add_argument("query")

    p = sub.add_parser("dex-token", help="Fetch DexScreener token pairs by chain/address")
    p.add_argument("chain")
    p.add_argument("address")

    p = sub.add_parser("gecko-search", help="Search CoinGecko")
    p.add_argument("query")

    p = sub.add_parser("gecko-coin", help="Fetch CoinGecko coin by id")
    p.add_argument("coin_id")

    p = sub.add_parser("birdeye-token", help="Fetch Birdeye token overview (Solana)")
    p.add_argument("address")

    return parser


def main():
    parser = build_parser()
    args = parser.parse_args()
    if args.command == "bybit-ticker":
        bybit_ticker(args.symbol)
    elif args.command == "bybit-klines":
        bybit_klines(args.symbol, args.interval, args.limit)
    elif args.command == "dex-search":
        dexscreener_search(args.query)
    elif args.command == "dex-token":
        dexscreener_token(args.chain, args.address)
    elif args.command == "gecko-search":
        coingecko_search(args.query)
    elif args.command == "gecko-coin":
        coingecko_coin(args.coin_id)
    elif args.command == "birdeye-token":
        birdeye_token(args.address)
    else:
        parser.error("Unknown command")


if __name__ == "__main__":
    main()
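`bybit_klines` above reshapes Bybit's positional kline arrays into labeled dicts. The mapping can be exercised offline (the sample row values below are illustrative, not real market data):

```python
# Field order of a Bybit v5 spot kline row, as consumed by bybit_klines above.
KLINE_FIELDS = ("startTime", "open", "high", "low", "close", "volume", "turnover")


def rows_to_candles(rows: list) -> list:
    """Map positional kline rows to labeled dicts, preserving field order."""
    return [dict(zip(KLINE_FIELDS, r)) for r in rows]
```

Keeping the field tuple in one place avoids the off-by-one index bugs that positional `r[0]`..`r[6]` access invites when an API adds or reorders columns.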
@@ -1,16 +0,0 @@
"""Print CoinHunter runtime paths."""

from __future__ import annotations

import json

from .runtime import get_runtime_paths


def main() -> int:
    print(json.dumps(get_runtime_paths().as_dict(), ensure_ascii=False, indent=2))
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
@@ -1,925 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
import json
|
||||
import os
|
||||
import re
|
||||
import sys
|
||||
import hashlib
|
||||
from datetime import datetime, timezone, timedelta
|
||||
from pathlib import Path
|
||||
from zoneinfo import ZoneInfo
|
||||
|
||||
import ccxt
|
||||
|
||||
from .runtime import get_runtime_paths, load_env_file
|
||||
|
||||
PATHS = get_runtime_paths()
|
||||
BASE_DIR = PATHS.root
|
||||
STATE_DIR = PATHS.state_dir
|
||||
STATE_FILE = PATHS.precheck_state_file
|
||||
POSITIONS_FILE = PATHS.positions_file
|
||||
CONFIG_FILE = PATHS.config_file
|
||||
ENV_FILE = PATHS.env_file
|
||||
|
||||
BASE_PRICE_MOVE_TRIGGER_PCT = 0.025
|
||||
BASE_PNL_TRIGGER_PCT = 0.03
|
||||
BASE_PORTFOLIO_MOVE_TRIGGER_PCT = 0.03
|
||||
BASE_CANDIDATE_SCORE_TRIGGER_RATIO = 1.15
|
||||
BASE_FORCE_ANALYSIS_AFTER_MINUTES = 180
|
||||
BASE_COOLDOWN_MINUTES = 45
|
||||
TOP_CANDIDATES = 10
|
||||
MIN_ACTIONABLE_USDT = 12.0
|
||||
MIN_REAL_POSITION_VALUE_USDT = 8.0
|
||||
BLACKLIST = {"USDC", "BUSD", "TUSD", "FDUSD", "USTC", "PAXG"}
|
||||
HARD_STOP_PCT = -0.08
|
||||
HARD_MOON_PCT = 0.25
|
||||
MIN_CHANGE_PCT = 1.0
|
||||
MAX_PRICE_CAP = None
|
||||
HARD_REASON_DEDUP_MINUTES = 15
|
||||
MAX_PENDING_TRIGGER_MINUTES = 30
|
||||
MAX_RUN_REQUEST_MINUTES = 20
|
||||
|
||||
|
||||
def utc_now():
|
||||
return datetime.now(timezone.utc)
|
||||
|
||||
|
||||
def utc_iso():
|
||||
return utc_now().isoformat()
|
||||
|
||||
|
||||
def parse_ts(value: str | None):
|
||||
if not value:
|
||||
return None
|
||||
try:
|
||||
ts = datetime.fromisoformat(value)
|
||||
if ts.tzinfo is None:
|
||||
ts = ts.replace(tzinfo=timezone.utc)
|
||||
return ts
|
||||
except Exception:
|
||||
return None
|
||||
|
||||
|
||||
def load_json(path: Path, default):
|
||||
if not path.exists():
|
||||
return default
|
||||
try:
|
||||
return json.loads(path.read_text(encoding="utf-8"))
|
||||
except Exception:
|
||||
return default
|
||||
|
||||
|
||||
def load_env():
|
||||
load_env_file(PATHS)
|
||||
|
||||
|
||||
def load_positions():
|
||||
return load_json(POSITIONS_FILE, {}).get("positions", [])
|
||||
|
||||
|
||||
def load_state():
|
||||
return load_json(STATE_FILE, {})
|
||||
|
||||
|
||||
def load_config():
|
||||
return load_json(CONFIG_FILE, {})
|
||||
|
||||
|
||||
def clear_run_request_fields(state: dict):
|
||||
state.pop("run_requested_at", None)
|
||||
state.pop("run_request_note", None)
|
||||
|
||||
|
||||
def sanitize_state_for_stale_triggers(state: dict):
|
||||
sanitized = dict(state)
|
||||
notes = []
|
||||
now = utc_now()
|
||||
run_requested_at = parse_ts(sanitized.get("run_requested_at"))
|
||||
last_deep_analysis_at = parse_ts(sanitized.get("last_deep_analysis_at"))
|
||||
last_triggered_at = parse_ts(sanitized.get("last_triggered_at"))
|
||||
pending_trigger = bool(sanitized.get("pending_trigger"))
|
||||
|
||||
if run_requested_at and last_deep_analysis_at and last_deep_analysis_at >= run_requested_at:
|
||||
clear_run_request_fields(sanitized)
|
||||
if pending_trigger and (not last_triggered_at or last_deep_analysis_at >= last_triggered_at):
|
||||
sanitized["pending_trigger"] = False
|
||||
sanitized["pending_reasons"] = []
|
||||
sanitized["last_ack_note"] = (
|
||||
f"auto-cleared completed trigger at {utc_iso()} because last_deep_analysis_at >= run_requested_at"
|
||||
)
|
||||
pending_trigger = False
|
||||
notes.append(
|
||||
f"自动清理已完成的 run_requested 标记:最近深度分析时间 {last_deep_analysis_at.isoformat()} >= 请求时间 {run_requested_at.isoformat()}"
|
||||
)
|
||||
run_requested_at = None
|
||||
|
||||
if run_requested_at and now - run_requested_at > timedelta(minutes=MAX_RUN_REQUEST_MINUTES):
|
||||
clear_run_request_fields(sanitized)
|
||||
notes.append(
|
||||
f"自动清理超时 run_requested 标记:已等待 {(now - run_requested_at).total_seconds() / 60:.1f} 分钟,超过 {MAX_RUN_REQUEST_MINUTES} 分钟"
|
||||
)
|
||||
run_requested_at = None
|
||||
|
||||
pending_anchor = run_requested_at or last_triggered_at or last_deep_analysis_at
|
||||
if pending_trigger and pending_anchor and now - pending_anchor > timedelta(minutes=MAX_PENDING_TRIGGER_MINUTES):
|
||||
sanitized["pending_trigger"] = False
|
||||
sanitized["pending_reasons"] = []
|
||||
sanitized["last_ack_note"] = (
|
||||
f"auto-recovered stale pending trigger at {utc_iso()} after waiting "
|
||||
f"{(now - pending_anchor).total_seconds() / 60:.1f} minutes"
|
||||
)
|
||||
notes.append(
|
||||
f"自动解除 pending_trigger:触发状态已悬挂 {(now - pending_anchor).total_seconds() / 60:.1f} 分钟,超过 {MAX_PENDING_TRIGGER_MINUTES} 分钟"
|
||||
)
|
||||
|
||||
sanitized["_stale_recovery_notes"] = notes
|
||||
return sanitized
|
||||
|
||||
|
||||
def save_state(state: dict):
|
||||
STATE_DIR.mkdir(parents=True, exist_ok=True)
|
||||
state_to_save = dict(state)
|
||||
state_to_save.pop("_stale_recovery_notes", None)
|
||||
STATE_FILE.write_text(json.dumps(state_to_save, indent=2, ensure_ascii=False), encoding="utf-8")
|
||||
|
||||
|
||||
def stable_hash(data) -> str:
|
||||
payload = json.dumps(data, sort_keys=True, ensure_ascii=False, separators=(",", ":"))
|
||||
return hashlib.sha1(payload.encode("utf-8")).hexdigest()
|
||||
|
||||
|
||||
def get_exchange():
|
||||
load_env()
|
||||
api_key = os.getenv("BINANCE_API_KEY")
|
||||
secret = os.getenv("BINANCE_API_SECRET")
|
||||
if not api_key or not secret:
|
||||
raise RuntimeError("Missing BINANCE_API_KEY or BINANCE_API_SECRET in ~/.hermes/.env")
|
||||
ex = ccxt.binance({
|
||||
"apiKey": api_key,
|
||||
"secret": secret,
|
||||
"options": {"defaultType": "spot"},
|
||||
"enableRateLimit": True,
|
||||
})
|
||||
ex.load_markets()
|
||||
return ex
|
||||
|
||||
|
||||
def fetch_ohlcv_batch(ex, symbols: set, timeframe: str, limit: int):
|
||||
results = {}
|
||||
for sym in sorted(symbols):
|
||||
try:
|
||||
ohlcv = ex.fetch_ohlcv(sym, timeframe=timeframe, limit=limit)
|
||||
if ohlcv and len(ohlcv) >= 2:
|
||||
results[sym] = ohlcv
|
||||
except Exception:
|
||||
pass
|
||||
return results
|
||||
|
||||
|
||||
def compute_ohlcv_metrics(ohlcv_1h, ohlcv_4h, current_price, volume_24h=None):
|
||||
metrics = {}
|
||||
if ohlcv_1h and len(ohlcv_1h) >= 2:
|
||||
closes = [c[4] for c in ohlcv_1h]
|
||||
volumes = [c[5] for c in ohlcv_1h]
|
||||
metrics["change_1h_pct"] = round((closes[-1] - closes[-2]) / closes[-2] * 100, 2) if closes[-2] != 0 else None
|
||||
if len(closes) >= 5:
|
||||
metrics["change_4h_pct"] = round((closes[-1] - closes[-5]) / closes[-5] * 100, 2) if closes[-5] != 0 else None
|
||||
recent_vol = sum(volumes[-4:]) / 4 if len(volumes) >= 4 else None
|
||||
metrics["volume_1h_avg"] = round(recent_vol, 2) if recent_vol else None
|
||||
highs = [c[2] for c in ohlcv_1h[-4:]]
|
||||
lows = [c[3] for c in ohlcv_1h[-4:]]
|
||||
metrics["high_4h"] = round(max(highs), 8) if highs else None
|
||||
metrics["low_4h"] = round(min(lows), 8) if lows else None
|
||||
|
||||
if ohlcv_4h and len(ohlcv_4h) >= 2:
|
||||
closes_4h = [c[4] for c in ohlcv_4h]
|
||||
volumes_4h = [c[5] for c in ohlcv_4h]
|
||||
metrics["change_4h_pct_from_4h"] = round((closes_4h[-1] - closes_4h[-2]) / closes_4h[-2] * 100, 2) if closes_4h[-2] != 0 else None
|
||||
recent_vol_4h = sum(volumes_4h[-2:]) / 2 if len(volumes_4h) >= 2 else None
|
||||
metrics["volume_4h_avg"] = round(recent_vol_4h, 2) if recent_vol_4h else None
|
||||
highs_4h = [c[2] for c in ohlcv_4h]
|
||||
lows_4h = [c[3] for c in ohlcv_4h]
|
||||
metrics["high_24h_calc"] = round(max(highs_4h), 8) if highs_4h else None
|
||||
metrics["low_24h_calc"] = round(min(lows_4h), 8) if lows_4h else None
|
||||
if highs_4h and lows_4h:
|
||||
avg_price = sum(closes_4h) / len(closes_4h)
|
||||
metrics["volatility_4h_pct"] = round((max(highs_4h) - min(lows_4h)) / avg_price * 100, 2)
|
||||
|
||||
if current_price:
|
||||
if metrics.get("high_4h"):
|
||||
metrics["distance_from_4h_high_pct"] = round((metrics["high_4h"] - current_price) / metrics["high_4h"] * 100, 2)
|
||||
if metrics.get("low_4h"):
|
||||
metrics["distance_from_4h_low_pct"] = round((current_price - metrics["low_4h"]) / metrics["low_4h"] * 100, 2)
|
||||
if metrics.get("high_24h_calc"):
|
||||
metrics["distance_from_24h_high_pct"] = round((metrics["high_24h_calc"] - current_price) / metrics["high_24h_calc"] * 100, 2)
|
||||
if metrics.get("low_24h_calc"):
|
||||
metrics["distance_from_24h_low_pct"] = round((current_price - metrics["low_24h_calc"]) / metrics["low_24h_calc"] * 100, 2)
|
||||
|
||||
if volume_24h and volume_24h > 0 and metrics.get("volume_1h_avg"):
|
||||
daily_avg_1h = volume_24h / 24
|
||||
metrics["volume_1h_multiple"] = round(metrics["volume_1h_avg"] / daily_avg_1h, 2)
|
||||
if volume_24h and volume_24h > 0 and metrics.get("volume_4h_avg"):
|
||||
daily_avg_4h = volume_24h / 6
|
||||
metrics["volume_4h_multiple"] = round(metrics["volume_4h_avg"] / daily_avg_4h, 2)
|
||||
|
||||
return metrics
|
||||
|
||||
|
||||
def enrich_candidates_and_positions(global_candidates, candidate_layers, positions_view, tickers, ex):
|
||||
symbols = set()
|
||||
for c in global_candidates:
|
||||
symbols.add(c["symbol"])
|
||||
for p in positions_view:
|
||||
sym = p.get("symbol")
|
||||
if sym:
|
||||
sym_ccxt = norm_symbol(sym)
|
||||
symbols.add(sym_ccxt)
|
||||
|
||||
ohlcv_1h = fetch_ohlcv_batch(ex, symbols, "1h", 24)
|
||||
ohlcv_4h = fetch_ohlcv_batch(ex, symbols, "4h", 12)
|
||||
|
||||
def _apply(target_list):
|
||||
for item in target_list:
|
||||
sym = item.get("symbol")
|
||||
if not sym:
|
||||
continue
|
||||
sym_ccxt = norm_symbol(sym)
|
||||
v24h = to_float(tickers.get(sym_ccxt, {}).get("quoteVolume"))
|
||||
metrics = compute_ohlcv_metrics(
|
||||
ohlcv_1h.get(sym_ccxt),
|
||||
ohlcv_4h.get(sym_ccxt),
|
||||
item.get("price") or item.get("last_price"),
|
||||
volume_24h=v24h,
|
||||
)
|
||||
item["metrics"] = metrics
|
||||
|
||||
_apply(global_candidates)
|
||||
for band_list in candidate_layers.values():
|
||||
_apply(band_list)
|
||||
_apply(positions_view)
|
||||
return global_candidates, candidate_layers, positions_view
|
||||
|
||||
|
||||
def regime_from_pct(pct: float | None) -> str:
|
||||
if pct is None:
|
||||
return "unknown"
|
||||
if pct >= 2.0:
|
||||
return "bullish"
|
||||
if pct <= -2.0:
|
||||
return "bearish"
|
||||
return "neutral"
|
||||
|
||||
|
||||
def to_float(value, default=0.0):
|
||||
try:
|
||||
if value is None:
|
||||
return default
|
||||
return float(value)
|
||||
except Exception:
|
||||
return default
|
||||
|
||||
|
||||
def norm_symbol(symbol: str) -> str:
|
||||
s = symbol.upper().replace("-", "").replace("_", "")
|
||||
if "/" in s:
|
||||
return s
|
||||
if s.endswith("USDT"):
|
||||
return s[:-4] + "/USDT"
|
||||
return s
|
||||
|
||||
|
||||
def get_local_now(config: dict):
|
||||
tz_name = config.get("timezone") or "Asia/Shanghai"
|
||||
try:
|
||||
tz = ZoneInfo(tz_name)
|
||||
except Exception:
|
||||
tz = ZoneInfo("Asia/Shanghai")
|
||||
tz_name = "Asia/Shanghai"
|
||||
return utc_now().astimezone(tz), tz_name
|
||||
|
||||
|
||||
def session_label(local_dt: datetime) -> str:
|
||||
hour = local_dt.hour
|
||||
if 0 <= hour < 7:
|
||||
return "overnight"
|
||||
if 7 <= hour < 12:
|
||||
return "asia-morning"
|
||||
if 12 <= hour < 17:
|
||||
return "asia-afternoon"
|
||||
if 17 <= hour < 21:
|
||||
return "europe-open"
|
||||
return "us-session"
|
||||
|
||||
|
||||
def _liquidity_score(volume: float) -> float:
|
||||
return min(1.0, max(0.0, volume / 50_000_000))
|
||||
|
||||
|
||||
def _breakout_score(price: float, avg_price: float | None) -> float:
|
||||
if not avg_price or avg_price <= 0:
|
||||
return 0.0
|
||||
return (price - avg_price) / avg_price
|
||||
|
||||
|
||||
def top_candidates_from_tickers(tickers: dict):
|
||||
candidates = []
|
||||
for symbol, ticker in tickers.items():
|
||||
if not symbol.endswith("/USDT"):
|
||||
continue
|
||||
base = symbol.replace("/USDT", "")
|
||||
if base in BLACKLIST:
|
||||
continue
|
||||
if not re.fullmatch(r"[A-Z0-9]{2,20}", base):
|
||||
continue
|
||||
price = to_float(ticker.get("last"))
|
||||
change_pct = to_float(ticker.get("percentage"))
|
||||
volume = to_float(ticker.get("quoteVolume"))
|
||||
high = to_float(ticker.get("high"))
|
||||
low = to_float(ticker.get("low"))
|
||||
avg_price = to_float(ticker.get("average"), None)
|
||||
if price <= 0:
|
||||
continue
|
||||
if MAX_PRICE_CAP is not None and price > MAX_PRICE_CAP:
|
||||
continue
|
||||
if volume < 500_000:
|
||||
continue
|
||||
if change_pct < MIN_CHANGE_PCT:
|
||||
continue
|
||||
momentum = change_pct / 10.0
|
||||
liquidity = _liquidity_score(volume)
|
||||
breakout = _breakout_score(price, avg_price)
|
||||
score = round(momentum * 0.5 + liquidity * 0.3 + breakout * 0.2, 4)
|
||||
band = "major" if price >= 10 else "mid" if price >= 1 else "meme"
|
||||
distance_from_high = (high - price) / max(high, 1e-9) if high else None
|
||||
candidates.append({
|
||||
"symbol": symbol,
|
||||
"base": base,
|
||||
"price": round(price, 8),
|
||||
"change_24h_pct": round(change_pct, 2),
|
||||
"volume_24h": round(volume, 2),
|
||||
"breakout_pct": round(breakout * 100, 2),
|
||||
"high_24h": round(high, 8) if high else None,
|
||||
"low_24h": round(low, 8) if low else None,
|
||||
"distance_from_high_pct": round(distance_from_high * 100, 2) if distance_from_high is not None else None,
|
||||
"score": score,
|
||||
"band": band,
|
||||
})
|
||||
candidates.sort(key=lambda x: x["score"], reverse=True)
|
||||
global_top = candidates[:TOP_CANDIDATES]
|
||||
layers = {"major": [], "mid": [], "meme": []}
|
||||
for c in candidates:
|
||||
layers[c["band"]].append(c)
|
||||
for k in layers:
|
||||
layers[k] = layers[k][:5]
|
||||
return global_top, layers
|
||||
|
||||
|
||||
def build_snapshot():
|
||||
config = load_config()
|
||||
local_dt, tz_name = get_local_now(config)
|
||||
ex = get_exchange()
|
||||
positions = load_positions()
|
||||
tickers = ex.fetch_tickers()
|
||||
balances = ex.fetch_balance()["free"]
|
||||
free_usdt = to_float(balances.get("USDT"))
|
||||
|
||||
positions_view = []
|
||||
total_position_value = 0.0
|
||||
largest_position_value = 0.0
|
||||
actionable_positions = 0
|
||||
for pos in positions:
|
||||
symbol = pos.get("symbol") or ""
|
||||
sym_ccxt = norm_symbol(symbol)
|
||||
ticker = tickers.get(sym_ccxt, {})
|
||||
last = to_float(ticker.get("last"), None)
|
||||
qty = to_float(pos.get("quantity"))
|
||||
avg_cost = to_float(pos.get("avg_cost"), None)
|
||||
value = round(qty * last, 4) if last is not None else None
|
||||
pnl_pct = round((last - avg_cost) / avg_cost, 4) if last is not None and avg_cost else None
|
||||
high = to_float(ticker.get("high"))
|
||||
low = to_float(ticker.get("low"))
|
||||
distance_from_high = (high - last) / max(high, 1e-9) if high and last else None
|
||||
if value is not None:
|
||||
total_position_value += value
|
||||
largest_position_value = max(largest_position_value, value)
|
||||
if value >= MIN_REAL_POSITION_VALUE_USDT:
|
||||
actionable_positions += 1
|
||||
positions_view.append({
|
||||
"symbol": symbol,
|
||||
"base_asset": pos.get("base_asset"),
|
||||
"quantity": qty,
|
||||
"avg_cost": avg_cost,
|
||||
"last_price": last,
|
||||
"market_value_usdt": value,
|
||||
"pnl_pct": pnl_pct,
|
||||
"high_24h": round(high, 8) if high else None,
|
||||
"low_24h": round(low, 8) if low else None,
|
||||
"distance_from_high_pct": round(distance_from_high * 100, 2) if distance_from_high is not None else None,
|
||||
})
|
||||
|
||||
btc_pct = to_float((tickers.get("BTC/USDT") or {}).get("percentage"), None)
|
||||
eth_pct = to_float((tickers.get("ETH/USDT") or {}).get("percentage"), None)
|
||||
global_candidates, candidate_layers = top_candidates_from_tickers(tickers)
|
||||
global_candidates, candidate_layers, positions_view = enrich_candidates_and_positions(
|
||||
global_candidates, candidate_layers, positions_view, tickers, ex
|
||||
)
|
||||
leader_score = global_candidates[0]["score"] if global_candidates else 0.0
|
||||
portfolio_value = round(free_usdt + total_position_value, 4)
|
||||
volatility_score = round(max(abs(to_float(btc_pct, 0)), abs(to_float(eth_pct, 0))), 2)
|
||||
|
||||
position_structure = [
|
||||
{
|
||||
"symbol": p.get("symbol"),
|
||||
"base_asset": p.get("base_asset"),
|
||||
"quantity": round(to_float(p.get("quantity"), 0), 10),
|
||||
"avg_cost": to_float(p.get("avg_cost"), None),
|
||||
}
|
||||
for p in positions_view
|
||||
]
|
||||
|
||||
snapshot = {
|
||||
"generated_at": utc_iso(),
|
||||
"timezone": tz_name,
|
||||
"local_time": local_dt.isoformat(),
|
||||
"session": session_label(local_dt),
|
||||
"free_usdt": round(free_usdt, 4),
|
||||
"portfolio_value_usdt": portfolio_value,
|
||||
"largest_position_value_usdt": round(largest_position_value, 4),
|
||||
"actionable_positions": actionable_positions,
|
||||
"positions": positions_view,
|
||||
"positions_hash": stable_hash(position_structure),
|
||||
"top_candidates": global_candidates,
|
||||
"top_candidates_layers": candidate_layers,
|
||||
"candidates_hash": stable_hash({"global": global_candidates, "layers": candidate_layers}),
|
||||
"market_regime": {
|
||||
"btc_24h_pct": round(btc_pct, 2) if btc_pct is not None else None,
|
||||
"btc_regime": regime_from_pct(btc_pct),
|
||||
"eth_24h_pct": round(eth_pct, 2) if eth_pct is not None else None,
|
||||
"eth_regime": regime_from_pct(eth_pct),
|
||||
"volatility_score": volatility_score,
|
||||
"leader_score": round(leader_score, 4),
|
||||
},
|
||||
}
|
||||
snapshot["snapshot_hash"] = stable_hash({
|
||||
"portfolio_value_usdt": snapshot["portfolio_value_usdt"],
|
||||
"positions_hash": snapshot["positions_hash"],
|
||||
"candidates_hash": snapshot["candidates_hash"],
|
||||
"market_regime": snapshot["market_regime"],
|
||||
"session": snapshot["session"],
|
||||
})
|
||||
return snapshot
|
||||
|
||||
|
||||
def build_adaptive_profile(snapshot: dict):
|
||||
portfolio_value = snapshot.get("portfolio_value_usdt", 0)
|
||||
free_usdt = snapshot.get("free_usdt", 0)
|
||||
session = snapshot.get("session")
|
||||
market = snapshot.get("market_regime", {})
|
||||
volatility_score = to_float(market.get("volatility_score"), 0)
|
||||
leader_score = to_float(market.get("leader_score"), 0)
|
||||
actionable_positions = int(snapshot.get("actionable_positions") or 0)
|
||||
largest_position_value = to_float(snapshot.get("largest_position_value_usdt"), 0)
|
||||
|
||||
capital_band = "micro" if portfolio_value < 25 else "small" if portfolio_value < 100 else "normal"
|
||||
session_mode = "quiet" if session in {"overnight", "asia-morning"} else "active"
|
||||
volatility_mode = "high" if volatility_score >= 2.5 or leader_score >= 120 else "normal"
|
||||
dust_mode = free_usdt < MIN_ACTIONABLE_USDT and largest_position_value < MIN_REAL_POSITION_VALUE_USDT
|
||||
|
||||
price_trigger = BASE_PRICE_MOVE_TRIGGER_PCT
|
||||
pnl_trigger = BASE_PNL_TRIGGER_PCT
|
||||
portfolio_trigger = BASE_PORTFOLIO_MOVE_TRIGGER_PCT
|
||||
candidate_ratio = BASE_CANDIDATE_SCORE_TRIGGER_RATIO
|
||||
force_minutes = BASE_FORCE_ANALYSIS_AFTER_MINUTES
|
||||
cooldown_minutes = BASE_COOLDOWN_MINUTES
|
||||
soft_score_threshold = 2.0
|
||||
|
||||
if capital_band == "micro":
|
||||
        price_trigger += 0.02
        pnl_trigger += 0.03
        portfolio_trigger += 0.04
        candidate_ratio += 0.25
        force_minutes += 180
        cooldown_minutes += 30
        soft_score_threshold += 1.0
    elif capital_band == "small":
        price_trigger += 0.01
        pnl_trigger += 0.01
        portfolio_trigger += 0.01
        candidate_ratio += 0.1
        force_minutes += 60
        cooldown_minutes += 10
        soft_score_threshold += 0.5

    if session_mode == "quiet":
        price_trigger += 0.01
        pnl_trigger += 0.01
        portfolio_trigger += 0.01
        candidate_ratio += 0.05
        soft_score_threshold += 0.5
    else:
        force_minutes = max(120, force_minutes - 30)

    if volatility_mode == "high":
        price_trigger = max(0.02, price_trigger - 0.01)
        pnl_trigger = max(0.025, pnl_trigger - 0.005)
        portfolio_trigger = max(0.025, portfolio_trigger - 0.005)
        candidate_ratio = max(1.1, candidate_ratio - 0.1)
        cooldown_minutes = max(20, cooldown_minutes - 10)
        soft_score_threshold = max(1.0, soft_score_threshold - 0.5)

    if dust_mode:
        candidate_ratio += 0.3
        force_minutes += 180
        cooldown_minutes += 30
        soft_score_threshold += 1.5

    return {
        "capital_band": capital_band,
        "session_mode": session_mode,
        "volatility_mode": volatility_mode,
        "dust_mode": dust_mode,
        "price_move_trigger_pct": round(price_trigger, 4),
        "pnl_trigger_pct": round(pnl_trigger, 4),
        "portfolio_move_trigger_pct": round(portfolio_trigger, 4),
        "candidate_score_trigger_ratio": round(candidate_ratio, 4),
        "force_analysis_after_minutes": int(force_minutes),
        "cooldown_minutes": int(cooldown_minutes),
        "soft_score_threshold": round(soft_score_threshold, 2),
        "new_entries_allowed": free_usdt >= MIN_ACTIONABLE_USDT and not dust_mode,
        "switching_allowed": actionable_positions > 0 or portfolio_value >= 25,
    }

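The high-volatility branch above tightens each trigger by a fixed step but floors it, so thresholds can never drift toward zero. A tiny standalone sketch of that clamp pattern (starting values here are hypothetical, not taken from the profile):

```python
# Hypothetical starting thresholds; the real values come from the
# capital-band / session adjustments in build_adaptive_profile.
price_trigger = 0.025
pnl_trigger = 0.03

# Same clamp pattern as the volatility_mode == "high" branch:
# tighten by a fixed step, but never below the floor.
price_trigger = max(0.02, price_trigger - 0.01)
pnl_trigger = max(0.025, pnl_trigger - 0.005)

print(price_trigger, pnl_trigger)  # → 0.02 0.025
```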
def _candidate_weight(snapshot: dict, profile: dict) -> float:
    if not profile.get("new_entries_allowed"):
        return 0.5
    if profile.get("volatility_mode") == "high":
        return 1.5
    if snapshot.get("session") in {"europe-open", "us-session"}:
        return 1.25
    return 1.0

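A quick sanity check of the weight tiers above, using a standalone copy of the function (the snapshot/profile dicts here are minimal hypothetical inputs, not real runtime state):

```python
# Standalone copy of the tier logic for illustration only; the real
# function takes the live snapshot and adaptive profile dicts.
def candidate_weight(snapshot: dict, profile: dict) -> float:
    if not profile.get("new_entries_allowed"):
        return 0.5  # capital too small for new entries: candidates barely count
    if profile.get("volatility_mode") == "high":
        return 1.5  # volatile market: candidate churn matters more
    if snapshot.get("session") in {"europe-open", "us-session"}:
        return 1.25  # active sessions get a modest boost
    return 1.0

print(candidate_weight({}, {"new_entries_allowed": False}))                      # → 0.5
print(candidate_weight({"session": "us-session"}, {"new_entries_allowed": True}))  # → 1.25
```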
def analyze_trigger(snapshot: dict, state: dict):
    reasons = []
    details = list(state.get("_stale_recovery_notes", []))
    hard_reasons = []
    soft_reasons = []
    soft_score = 0.0

    profile = build_adaptive_profile(snapshot)
    market = snapshot.get("market_regime", {})
    now = utc_now()

    last_positions_hash = state.get("last_positions_hash")
    last_portfolio_value = state.get("last_portfolio_value_usdt")
    last_market_regime = state.get("last_market_regime", {})
    last_positions_map = state.get("last_positions_map", {})
    last_top_candidate = state.get("last_top_candidate")
    pending_trigger = bool(state.get("pending_trigger"))
    run_requested_at = parse_ts(state.get("run_requested_at"))
    last_deep_analysis_at = parse_ts(state.get("last_deep_analysis_at"))
    last_triggered_at = parse_ts(state.get("last_triggered_at"))
    last_trigger_snapshot_hash = state.get("last_trigger_snapshot_hash")
    last_hard_reasons_at = state.get("last_hard_reasons_at", {})

    price_trigger = profile["price_move_trigger_pct"]
    pnl_trigger = profile["pnl_trigger_pct"]
    portfolio_trigger = profile["portfolio_move_trigger_pct"]
    candidate_ratio_trigger = profile["candidate_score_trigger_ratio"]
    force_minutes = profile["force_analysis_after_minutes"]
    cooldown_minutes = profile["cooldown_minutes"]
    soft_score_threshold = profile["soft_score_threshold"]

    if pending_trigger:
        reasons.append("pending-trigger-unacked")
        hard_reasons.append("pending-trigger-unacked")
        details.append("上次已触发深度分析但尚未确认完成")
    if run_requested_at:
        details.append(f"外部门控已在 {run_requested_at.isoformat()} 请求运行分析任务")

    if not last_deep_analysis_at:
        reasons.append("first-analysis")
        hard_reasons.append("first-analysis")
        details.append("尚未记录过深度分析")
    elif now - last_deep_analysis_at >= timedelta(minutes=force_minutes):
        reasons.append("stale-analysis")
        hard_reasons.append("stale-analysis")
        details.append(f"距离上次深度分析已超过 {force_minutes} 分钟")

    if last_positions_hash and snapshot["positions_hash"] != last_positions_hash:
        reasons.append("positions-changed")
        hard_reasons.append("positions-changed")
        details.append("持仓结构发生变化")

    if last_portfolio_value not in (None, 0):
        portfolio_delta = abs(snapshot["portfolio_value_usdt"] - last_portfolio_value) / max(last_portfolio_value, 1e-9)
        if portfolio_delta >= portfolio_trigger:
            if portfolio_delta >= 1.0:
                reasons.append("portfolio-extreme-move")
                hard_reasons.append("portfolio-extreme-move")
                details.append(f"组合净值剧烈变化 {portfolio_delta:.1%},超过 100%,视为硬触发")
            else:
                reasons.append("portfolio-move")
                soft_reasons.append("portfolio-move")
                soft_score += 1.0
                details.append(f"组合净值变化 {portfolio_delta:.1%},阈值 {portfolio_trigger:.1%}")

    for pos in snapshot["positions"]:
        symbol = pos["symbol"]
        prev = last_positions_map.get(symbol, {})
        cur_price = pos.get("last_price")
        prev_price = prev.get("last_price")
        cur_pnl = pos.get("pnl_pct")
        prev_pnl = prev.get("pnl_pct")
        market_value = to_float(pos.get("market_value_usdt"), 0)
        actionable_position = market_value >= MIN_REAL_POSITION_VALUE_USDT

        if cur_price and prev_price:
            price_move = abs(cur_price - prev_price) / max(prev_price, 1e-9)
            if price_move >= price_trigger:
                reasons.append(f"price-move:{symbol}")
                soft_reasons.append(f"price-move:{symbol}")
                soft_score += 1.0 if actionable_position else 0.4
                details.append(f"{symbol} 价格变化 {price_move:.1%},阈值 {price_trigger:.1%}")
        if cur_pnl is not None and prev_pnl is not None:
            pnl_move = abs(cur_pnl - prev_pnl)
            if pnl_move >= pnl_trigger:
                reasons.append(f"pnl-move:{symbol}")
                soft_reasons.append(f"pnl-move:{symbol}")
                soft_score += 1.0 if actionable_position else 0.4
                details.append(f"{symbol} 盈亏变化 {pnl_move:.1%},阈值 {pnl_trigger:.1%}")
        if cur_pnl is not None:
            stop_band = -0.06 if actionable_position else -0.12
            take_band = 0.14 if actionable_position else 0.25
            if cur_pnl <= stop_band or cur_pnl >= take_band:
                reasons.append(f"risk-band:{symbol}")
                hard_reasons.append(f"risk-band:{symbol}")
                details.append(f"{symbol} 接近执行阈值,当前盈亏 {cur_pnl:.1%}")
            if cur_pnl <= HARD_STOP_PCT:
                reasons.append(f"hard-stop:{symbol}")
                hard_reasons.append(f"hard-stop:{symbol}")
                details.append(f"{symbol} 盈亏超过 {HARD_STOP_PCT:.1%},触发紧急硬触发")

    current_market = snapshot.get("market_regime", {})
    if last_market_regime:
        if current_market.get("btc_regime") != last_market_regime.get("btc_regime"):
            reasons.append("btc-regime-change")
            hard_reasons.append("btc-regime-change")
            details.append(f"BTC 由 {last_market_regime.get('btc_regime')} 切换为 {current_market.get('btc_regime')}")
        if current_market.get("eth_regime") != last_market_regime.get("eth_regime"):
            reasons.append("eth-regime-change")
            hard_reasons.append("eth-regime-change")
            details.append(f"ETH 由 {last_market_regime.get('eth_regime')} 切换为 {current_market.get('eth_regime')}")

    # Candidate hard moon trigger
    for cand in snapshot.get("top_candidates", []):
        if cand.get("change_24h_pct", 0) >= HARD_MOON_PCT * 100:
            reasons.append(f"hard-moon:{cand['symbol']}")
            hard_reasons.append(f"hard-moon:{cand['symbol']}")
            details.append(f"候选币 {cand['symbol']} 24h 涨幅 {cand['change_24h_pct']:.1f}%,触发强势硬触发")

    current_leader = snapshot.get("top_candidates", [{}])[0] if snapshot.get("top_candidates") else None
    candidate_weight = _candidate_weight(snapshot, profile)

    # Layer leader changes
    last_layers = state.get("last_candidates_layers", {})
    current_layers = snapshot.get("top_candidates_layers", {})
    for band in ("major", "mid", "meme"):
        cur_band = current_layers.get(band, [])
        prev_band = last_layers.get(band, [])
        cur_leader = cur_band[0] if cur_band else None
        prev_leader = prev_band[0] if prev_band else None
        if cur_leader and prev_leader and cur_leader["symbol"] != prev_leader["symbol"]:
            score_ratio = cur_leader.get("score", 0) / max(prev_leader.get("score", 0.0001), 0.0001)
            if score_ratio >= candidate_ratio_trigger:
                reasons.append(f"new-leader-{band}:{cur_leader['symbol']}")
                soft_reasons.append(f"new-leader-{band}:{cur_leader['symbol']}")
                soft_score += candidate_weight * 0.7
                details.append(
                    f"{band} 层新榜首 {cur_leader['symbol']} 替代 {prev_leader['symbol']},score 比例 {score_ratio:.2f}"
                )

    current_leader = snapshot.get("top_candidates", [{}])[0] if snapshot.get("top_candidates") else None
    if last_top_candidate and current_leader:
        if current_leader.get("symbol") != last_top_candidate.get("symbol"):
            score_ratio = current_leader.get("score", 0) / max(last_top_candidate.get("score", 0.0001), 0.0001)
            if score_ratio >= candidate_ratio_trigger:
                reasons.append("new-leader")
                soft_reasons.append("new-leader")
                soft_score += candidate_weight
                details.append(
                    f"新候选币 {current_leader.get('symbol')} 领先上次榜首,score 比例 {score_ratio:.2f},阈值 {candidate_ratio_trigger:.2f}"
                )
    elif current_leader and not last_top_candidate:
        reasons.append("candidate-leader-init")
        soft_reasons.append("candidate-leader-init")
        soft_score += candidate_weight
        details.append(f"首次记录候选榜首 {current_leader.get('symbol')}")

    # --- adaptive cooldown based on signal change magnitude ---
    def _signal_delta() -> float:
        delta = 0.0
        if last_trigger_snapshot_hash and snapshot.get("snapshot_hash") != last_trigger_snapshot_hash:
            delta += 0.5
        if snapshot["positions_hash"] != last_positions_hash:
            delta += 1.5
        for pos in snapshot["positions"]:
            symbol = pos["symbol"]
            prev = last_positions_map.get(symbol, {})
            cur_price = pos.get("last_price")
            prev_price = prev.get("last_price")
            cur_pnl = pos.get("pnl_pct")
            prev_pnl = prev.get("pnl_pct")
            if cur_price and prev_price:
                if abs(cur_price - prev_price) / max(prev_price, 1e-9) >= 0.02:
                    delta += 0.5
            if cur_pnl is not None and prev_pnl is not None:
                if abs(cur_pnl - prev_pnl) >= 0.03:
                    delta += 0.5
        current_leader = snapshot.get("top_candidates", [{}])[0] if snapshot.get("top_candidates") else None
        last_leader = state.get("last_top_candidate")
        if current_leader and last_leader and current_leader.get("symbol") != last_leader.get("symbol"):
            delta += 1.0
        current_layers = snapshot.get("top_candidates_layers", {})
        last_layers = state.get("last_candidates_layers", {})
        for band in ("major", "mid", "meme"):
            cur_band = current_layers.get(band, [])
            prev_band = last_layers.get(band, [])
            cur_l = cur_band[0] if cur_band else None
            prev_l = prev_band[0] if prev_band else None
            if cur_l and prev_l and cur_l.get("symbol") != prev_l.get("symbol"):
                delta += 0.5
        if last_market_regime:
            if current_market.get("btc_regime") != last_market_regime.get("btc_regime"):
                delta += 1.5
            if current_market.get("eth_regime") != last_market_regime.get("eth_regime"):
                delta += 1.5
        if last_portfolio_value not in (None, 0):
            portfolio_delta = abs(snapshot["portfolio_value_usdt"] - last_portfolio_value) / max(last_portfolio_value, 1e-9)
            if portfolio_delta >= 0.05:
                delta += 1.0
        # fresh hard reason type not seen in last trigger
        last_trigger_hard_types = {r.split(":")[0] for r in (state.get("last_trigger_hard_reasons") or [])}
        current_hard_types = {r.split(":")[0] for r in hard_reasons}
        if current_hard_types - last_trigger_hard_types:
            delta += 2.0
        return delta

    signal_delta = _signal_delta()
    effective_cooldown = cooldown_minutes
    if signal_delta < 1.0:
        effective_cooldown = max(cooldown_minutes, 90)
    elif signal_delta >= 2.5:
        effective_cooldown = max(0, cooldown_minutes - 15)

    cooldown_active = bool(last_triggered_at and now - last_triggered_at < timedelta(minutes=effective_cooldown))

    # Dedup hard reasons within window to avoid repeated model wakeups for the same event
    dedup_window = timedelta(minutes=HARD_REASON_DEDUP_MINUTES)
    for hr in list(hard_reasons):
        last_at = parse_ts(last_hard_reasons_at.get(hr))
        if last_at and now - last_at < dedup_window:
            hard_reasons.remove(hr)
            details.append(f"{hr} 近期已触发,{HARD_REASON_DEDUP_MINUTES}分钟内去重")

    hard_trigger = bool(hard_reasons)
    if profile.get("dust_mode") and not hard_trigger and soft_score < soft_score_threshold + 1.0:
        details.append("微型资金/粉尘仓位模式:抬高软触发门槛,避免无意义分析")

    if profile.get("dust_mode") and not profile.get("new_entries_allowed") and any(r in {"new-leader", "candidate-leader-init"} for r in soft_reasons):
        details.append("当前可用资金低于可执行阈值,新候选币仅做观察,不单独触发深度分析")
        soft_score = max(0.0, soft_score - 0.75)

    should_analyze = hard_trigger or soft_score >= soft_score_threshold

    if cooldown_active and not hard_trigger and should_analyze:
        should_analyze = False
        details.append(f"处于 {cooldown_minutes} 分钟冷却窗口,软触发先记录不升级")

    if cooldown_active and not hard_trigger and reasons and soft_score < soft_score_threshold:
        details.append(f"处于 {cooldown_minutes} 分钟冷却窗口,且软信号强度不足 ({soft_score:.2f} < {soft_score_threshold:.2f})")

    status = "deep_analysis_required" if should_analyze else "stable"

    compact_lines = [
        f"状态: {status}",
        f"组合净值: ${snapshot['portfolio_value_usdt']:.4f} | 可用USDT: ${snapshot['free_usdt']:.4f}",
        f"本地时段: {snapshot['session']} | 时区: {snapshot['timezone']}",
        f"BTC/ETH: {market.get('btc_regime')} ({market.get('btc_24h_pct')}%), {market.get('eth_regime')} ({market.get('eth_24h_pct')}%) | 波动分数 {market.get('volatility_score')}",
        f"门控画像: capital={profile['capital_band']}, session={profile['session_mode']}, volatility={profile['volatility_mode']}, dust={profile['dust_mode']}",
        f"阈值: price={price_trigger:.1%}, pnl={pnl_trigger:.1%}, portfolio={portfolio_trigger:.1%}, candidate={candidate_ratio_trigger:.2f}, cooldown={effective_cooldown}m({cooldown_minutes}m基础), force={force_minutes}m",
        f"软信号分: {soft_score:.2f} / {soft_score_threshold:.2f}",
        f"信号变化度: {signal_delta:.1f}",
    ]
    if snapshot["positions"]:
        compact_lines.append("持仓:")
        for pos in snapshot["positions"][:4]:
            pnl = pos.get("pnl_pct")
            pnl_text = f"{pnl:+.1%}" if pnl is not None else "n/a"
            compact_lines.append(
                f"- {pos['symbol']}: qty={pos['quantity']}, px={pos.get('last_price')}, pnl={pnl_text}, value=${pos.get('market_value_usdt')}"
            )
    else:
        compact_lines.append("持仓: 当前无现货仓位")
    if snapshot["top_candidates"]:
        compact_lines.append("候选榜:")
        for cand in snapshot["top_candidates"]:
            compact_lines.append(
                f"- {cand['symbol']}: score={cand['score']}, 24h={cand['change_24h_pct']}%, vol=${cand['volume_24h']}"
            )
    layers = snapshot.get("top_candidates_layers", {})
    for band, band_cands in layers.items():
        if band_cands:
            compact_lines.append(f"{band} 层:")
            for cand in band_cands:
                compact_lines.append(
                    f"- {cand['symbol']}: score={cand['score']}, 24h={cand['change_24h_pct']}%, vol=${cand['volume_24h']}"
                )
    if details:
        compact_lines.append("触发说明:")
        for item in details:
            compact_lines.append(f"- {item}")

    return {
        "generated_at": snapshot["generated_at"],
        "status": status,
        "should_analyze": should_analyze,
        "pending_trigger": pending_trigger,
        "run_requested": bool(run_requested_at),
        "run_requested_at": run_requested_at.isoformat() if run_requested_at else None,
        "cooldown_active": cooldown_active,
        "effective_cooldown_minutes": effective_cooldown,
        "signal_delta": round(signal_delta, 2),
        "reasons": reasons,
        "hard_reasons": hard_reasons,
        "soft_reasons": soft_reasons,
        "soft_score": round(soft_score, 3),
        "adaptive_profile": profile,
        "portfolio_value_usdt": snapshot["portfolio_value_usdt"],
        "free_usdt": snapshot["free_usdt"],
        "market_regime": snapshot["market_regime"],
        "session": snapshot["session"],
        "positions": snapshot["positions"],
        "top_candidates": snapshot["top_candidates"],
        "top_candidates_layers": layers,
        "snapshot_hash": snapshot["snapshot_hash"],
        "compact_summary": "\n".join(compact_lines),
        "details": details,
    }

def update_state_after_observation(state: dict, snapshot: dict, analysis: dict):
    new_state = dict(state)
    new_state.update({
        "last_observed_at": snapshot["generated_at"],
        "last_snapshot_hash": snapshot["snapshot_hash"],
        "last_positions_hash": snapshot["positions_hash"],
        "last_candidates_hash": snapshot["candidates_hash"],
        "last_portfolio_value_usdt": snapshot["portfolio_value_usdt"],
        "last_market_regime": snapshot["market_regime"],
        "last_positions_map": {p["symbol"]: {"last_price": p.get("last_price"), "pnl_pct": p.get("pnl_pct")} for p in snapshot["positions"]},
        "last_top_candidate": snapshot["top_candidates"][0] if snapshot["top_candidates"] else None,
        "last_candidates_layers": snapshot.get("top_candidates_layers", {}),
        "last_adaptive_profile": analysis.get("adaptive_profile", {}),
    })
    if analysis["should_analyze"]:
        new_state["pending_trigger"] = True
        new_state["pending_reasons"] = analysis["details"]
        new_state["last_triggered_at"] = snapshot["generated_at"]
        new_state["last_trigger_snapshot_hash"] = snapshot["snapshot_hash"]
        new_state["last_trigger_hard_reasons"] = analysis.get("hard_reasons", [])
        new_state["last_trigger_signal_delta"] = analysis.get("signal_delta", 0.0)

    # Update hard-reason dedup timestamps and prune old entries
    last_hard_reasons_at = dict(state.get("last_hard_reasons_at", {}))
    for hr in analysis.get("hard_reasons", []):
        last_hard_reasons_at[hr] = snapshot["generated_at"]
    cutoff = utc_now() - timedelta(hours=24)
    pruned = {
        k: v for k, v in last_hard_reasons_at.items()
        if parse_ts(v) and parse_ts(v) > cutoff
    }
    new_state["last_hard_reasons_at"] = pruned
    return new_state

def mark_run_requested(note: str = ""):
    from .services.precheck_state import mark_run_requested as service_mark_run_requested

    return service_mark_run_requested(note)


def ack_analysis(note: str = ""):
    from .services.precheck_state import ack_analysis as service_ack_analysis

    return service_ack_analysis(note)


def main():
    from .services.precheck_service import run

    return run(sys.argv[1:])


if __name__ == "__main__":
    main()
@@ -1,32 +0,0 @@
#!/usr/bin/env python3
import json
import sys

from . import review_engine


def main():
    hours = int(sys.argv[1]) if len(sys.argv) > 1 else 12
    review = review_engine.generate_review(hours)
    compact = {
        "review_period_hours": review.get("review_period_hours", hours),
        "review_timestamp": review.get("review_timestamp"),
        "total_decisions": review.get("total_decisions", 0),
        "total_trades": review.get("total_trades", 0),
        "total_errors": review.get("total_errors", 0),
        "stats": review.get("stats", {}),
        "insights": review.get("insights", []),
        "recommendations": review.get("recommendations", []),
        "decision_quality_top": review.get("decision_quality", [])[:5],
        "should_report": bool(
            review.get("total_decisions", 0)
            or review.get("total_trades", 0)
            or review.get("total_errors", 0)
            or review.get("insights")
        ),
    }
    print(json.dumps(compact, ensure_ascii=False, indent=2))


if __name__ == "__main__":
    main()
@@ -1,312 +0,0 @@
#!/usr/bin/env python3
"""Coin Hunter hourly review engine."""
import json
import os
import sys
from datetime import datetime, timezone, timedelta
from pathlib import Path

import ccxt

from .logger import get_logs_last_n_hours, log_error
from .runtime import get_runtime_paths, load_env_file

PATHS = get_runtime_paths()
ENV_FILE = PATHS.env_file
REVIEW_DIR = PATHS.reviews_dir

CST = timezone(timedelta(hours=8))


def load_env():
    load_env_file(PATHS)


def get_exchange():
    load_env()
    ex = ccxt.binance({
        "apiKey": os.getenv("BINANCE_API_KEY"),
        "secret": os.getenv("BINANCE_API_SECRET"),
        "options": {"defaultType": "spot"},
        "enableRateLimit": True,
    })
    ex.load_markets()
    return ex


def ensure_review_dir():
    REVIEW_DIR.mkdir(parents=True, exist_ok=True)


def norm_symbol(symbol: str) -> str:
    s = symbol.upper().replace("-", "").replace("_", "")
    if "/" in s:
        return s
    if s.endswith("USDT"):
        return s[:-4] + "/USDT"
    return s
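For reference, the symbol normalizer removed here maps Binance-style pair names to ccxt's slash notation. A self-contained copy shows the behavior (the inputs are hypothetical examples):

```python
# Standalone copy of the normalizer from the deleted review engine:
# uppercase, strip separators, then insert "/" before the USDT quote.
def norm_symbol(symbol: str) -> str:
    s = symbol.upper().replace("-", "").replace("_", "")
    if "/" in s:
        return s  # already in ccxt "BASE/QUOTE" form
    if s.endswith("USDT"):
        return s[:-4] + "/USDT"
    return s

print(norm_symbol("btc_usdt"))   # → BTC/USDT
print(norm_symbol("ETH/USDT"))   # → ETH/USDT
print(norm_symbol("doge-usdt"))  # → DOGE/USDT
```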
def fetch_current_price(ex, symbol: str):
    try:
        return float(ex.fetch_ticker(norm_symbol(symbol))["last"])
    except Exception:
        return None


def analyze_trade(trade: dict, ex) -> dict:
    symbol = trade.get("symbol")
    price = trade.get("price")
    action = trade.get("action", "")
    current_price = fetch_current_price(ex, symbol) if symbol else None
    pnl_estimate = None
    outcome = "neutral"
    if price and current_price and symbol:
        change_pct = (current_price - float(price)) / float(price) * 100
        if action == "BUY":
            pnl_estimate = round(change_pct, 2)
            outcome = "good" if change_pct > 2 else "bad" if change_pct < -2 else "neutral"
        elif action == "SELL_ALL":
            pnl_estimate = round(-change_pct, 2)
            # Lowered missed threshold: >2% is a missed opportunity in short-term trading
            outcome = "good" if change_pct < -2 else "missed" if change_pct > 2 else "neutral"
    return {
        "timestamp": trade.get("timestamp"),
        "symbol": symbol,
        "action": action,
        "decision_id": trade.get("decision_id"),
        "execution_price": price,
        "current_price": current_price,
        "pnl_estimate_pct": pnl_estimate,
        "outcome_assessment": outcome,
    }
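The outcome banding above is symmetric around ±2%. A standalone copy of the BUY branch makes the classification concrete (`classify_buy` and its inputs are hypothetical, for illustration only):

```python
# Mirrors the BUY branch of analyze_trade: price rose >2% after the buy
# means a good entry, fell >2% means a bad one, otherwise neutral.
def classify_buy(change_pct: float) -> str:
    return "good" if change_pct > 2 else "bad" if change_pct < -2 else "neutral"

print(classify_buy(3.5))   # → good
print(classify_buy(-0.4))  # → neutral
print(classify_buy(-2.6))  # → bad
```

For SELL_ALL the bands flip: a further rally after selling is counted as a missed opportunity rather than a bad trade.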
def analyze_hold_passes(decisions: list, ex) -> list:
    """Check HOLD decisions where an opportunity was explicitly PASSed but later rallied."""
    misses = []
    for d in decisions:
        if d.get("decision") != "HOLD":
            continue
        analysis = d.get("analysis")
        if not isinstance(analysis, dict):
            continue
        opportunities = analysis.get("opportunities_evaluated", [])
        market_snapshot = d.get("market_snapshot", {})
        if not opportunities or not market_snapshot:
            continue
        for op in opportunities:
            verdict = op.get("verdict", "")
            if "PASS" not in verdict and "pass" not in verdict:
                continue
            symbol = op.get("symbol", "")
            # Try to extract decision-time price from market_snapshot
            snap = market_snapshot.get(symbol) or market_snapshot.get(symbol.replace("/", ""))
            if not snap:
                continue
            decision_price = None
            if isinstance(snap, dict):
                decision_price = float(snap.get("lastPrice", 0)) or float(snap.get("last", 0))
            elif isinstance(snap, (int, float, str)):
                decision_price = float(snap)
            if not decision_price:
                continue
            current_price = fetch_current_price(ex, symbol)
            if not current_price:
                continue
            change_pct = (current_price - decision_price) / decision_price * 100
            if change_pct > 3:  # >3% rally after being passed = missed watch
                misses.append({
                    "timestamp": d.get("timestamp"),
                    "symbol": symbol,
                    "decision_price": round(decision_price, 8),
                    "current_price": round(current_price, 8),
                    "change_pct": round(change_pct, 2),
                    "verdict_snippet": verdict[:80],
                })
    return misses


def analyze_cash_misses(decisions: list, ex) -> list:
    """If portfolio was mostly USDT but a watchlist coin rallied >5%, flag it."""
    misses = []
    watchlist = set()
    for d in decisions:
        snap = d.get("market_snapshot", {})
        if isinstance(snap, dict):
            for k in snap.keys():
                if k.endswith("USDT"):
                    watchlist.add(k)
    for d in decisions:
        ts = d.get("timestamp")
        balances = d.get("balances") or d.get("balances_before", {})
        if not balances:
            continue
        total = sum(float(v) if isinstance(v, (int, float, str)) else 0 for v in balances.values())
        usdt = float(balances.get("USDT", 0))
        if total == 0 or (usdt / total) < 0.9:
            continue
        # Portfolio mostly cash — check watchlist performance
        snap = d.get("market_snapshot", {})
        if not isinstance(snap, dict):
            continue
        for symbol, data in snap.items():
            if not symbol.endswith("USDT"):
                continue
            decision_price = None
            if isinstance(data, dict):
                decision_price = float(data.get("lastPrice", 0)) or float(data.get("last", 0))
            elif isinstance(data, (int, float, str)):
                decision_price = float(data)
            if not decision_price:
                continue
            current_price = fetch_current_price(ex, symbol)
            if not current_price:
                continue
            change_pct = (current_price - decision_price) / decision_price * 100
            if change_pct > 5:
                misses.append({
                    "timestamp": ts,
                    "symbol": symbol,
                    "decision_price": round(decision_price, 8),
                    "current_price": round(current_price, 8),
                    "change_pct": round(change_pct, 2),
                })
    # Deduplicate by symbol keeping the worst miss
    seen = {}
    for m in misses:
        sym = m["symbol"]
        if sym not in seen or m["change_pct"] > seen[sym]["change_pct"]:
            seen[sym] = m
    return list(seen.values())
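The deduplication at the end of `analyze_cash_misses` keeps, per symbol, the entry with the largest rally. A standalone copy of that pattern (the `dedupe_worst` name and the sample data are hypothetical):

```python
# Keep, per symbol, only the entry with the largest change_pct
# (the "worst" miss), preserving first-seen insertion order.
def dedupe_worst(misses: list[dict]) -> list[dict]:
    seen: dict[str, dict] = {}
    for m in misses:
        sym = m["symbol"]
        if sym not in seen or m["change_pct"] > seen[sym]["change_pct"]:
            seen[sym] = m
    return list(seen.values())

out = dedupe_worst([
    {"symbol": "DOGEUSDT", "change_pct": 6.1},
    {"symbol": "PEPEUSDT", "change_pct": 7.2},
    {"symbol": "DOGEUSDT", "change_pct": 9.4},
])
print(out)  # DOGEUSDT keeps 9.4, PEPEUSDT keeps 7.2
```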
def generate_review(hours: int = 1) -> dict:
    decisions = get_logs_last_n_hours("decisions", hours)
    trades = get_logs_last_n_hours("trades", hours)
    errors = get_logs_last_n_hours("errors", hours)

    review = {
        "review_period_hours": hours,
        "review_timestamp": datetime.now(CST).isoformat(),
        "total_decisions": len(decisions),
        "total_trades": len(trades),
        "total_errors": len(errors),
        "decision_quality": [],
        "stats": {},
        "insights": [],
        "recommendations": [],
    }

    if not decisions and not trades:
        review["insights"].append("本周期无决策/交易记录")
        return review

    ex = get_exchange()
    outcomes = {"good": 0, "neutral": 0, "bad": 0, "missed": 0}
    pnl_samples = []

    for trade in trades:
        analysis = analyze_trade(trade, ex)
        review["decision_quality"].append(analysis)
        outcomes[analysis["outcome_assessment"]] += 1
        if analysis["pnl_estimate_pct"] is not None:
            pnl_samples.append(analysis["pnl_estimate_pct"])

    # New: analyze missed opportunities from HOLD / cash decisions
    hold_pass_misses = analyze_hold_passes(decisions, ex)
    cash_misses = analyze_cash_misses(decisions, ex)
    total_missed = outcomes["missed"] + len(hold_pass_misses) + len(cash_misses)

    review["stats"] = {
        "good_decisions": outcomes["good"],
        "neutral_decisions": outcomes["neutral"],
        "bad_decisions": outcomes["bad"],
        "missed_opportunities": total_missed,
        "missed_sell_all": outcomes["missed"],
        "missed_hold_passes": len(hold_pass_misses),
        "missed_cash_sits": len(cash_misses),
        "avg_estimated_edge_pct": round(sum(pnl_samples) / len(pnl_samples), 2) if pnl_samples else None,
    }

    if errors:
        review["insights"].append(f"本周期出现 {len(errors)} 次执行/系统错误,健壮性需优先关注")
    if outcomes["bad"] > outcomes["good"]:
        review["insights"].append("最近交易质量偏弱,建议降低交易频率或提高入场门槛")
    if total_missed > 0:
        parts = []
        if outcomes["missed"]:
            parts.append(f"卖出后继续上涨 {outcomes['missed']} 次")
        if hold_pass_misses:
            parts.append(f"PASS 后错失 {len(hold_pass_misses)} 次")
        if cash_misses:
            parts.append(f"空仓观望错失 {len(cash_misses)} 次")
        review["insights"].append("存在错失机会: " + ",".join(parts) + ",建议放宽趋势跟随或入场条件")
    if outcomes["good"] >= max(1, outcomes["bad"] + total_missed):
        review["insights"].append("近期决策总体可接受")
    if not trades and decisions:
        review["insights"].append("有决策无成交,可能是观望、最小成交额限制或执行被拦截")
    if len(trades) < len(decisions) * 0.1 and decisions:
        review["insights"].append("大量决策未转化为交易,需检查执行门槛(最小成交额/精度/手续费缓冲)是否过高")
    if hold_pass_misses:
        for m in hold_pass_misses[:3]:
            review["insights"].append(f"HOLD 时 PASS 了 {m['symbol']},之后上涨 {m['change_pct']}%")
    if cash_misses:
        for m in cash_misses[:3]:
            review["insights"].append(f"持仓以 USDT 为主时 {m['symbol']} 上涨 {m['change_pct']}%")

    review["recommendations"] = [
        "优先检查最小成交额/精度拒单是否影响小资金执行",
        "若连续两个复盘周期 edge 为负,下一小时减少换仓频率",
        "若错误日志增加,优先进入防守模式(多持 USDT)",
    ]
    return review


def save_review(review: dict):
    ensure_review_dir()
    ts = datetime.now(CST).strftime("%Y%m%d_%H%M%S")
    path = REVIEW_DIR / f"review_{ts}.json"
    path.write_text(json.dumps(review, indent=2, ensure_ascii=False), encoding="utf-8")
    return str(path)


def print_review(review: dict):
    print("=" * 50)
    print("📊 Coin Hunter 小时复盘报告")
    print(f"复盘时间: {review['review_timestamp']}")
    print(f"统计周期: 过去 {review['review_period_hours']} 小时")
    print(f"总决策数: {review['total_decisions']} | 总交易数: {review['total_trades']} | 总错误数: {review['total_errors']}")
    stats = review.get("stats", {})
    print("\n决策质量统计:")
    print(f"  ✓ 优秀: {stats.get('good_decisions', 0)}")
    print(f"  ○ 中性: {stats.get('neutral_decisions', 0)}")
    print(f"  ✗ 失误: {stats.get('bad_decisions', 0)}")
    print(f"  ↗ 错过机会: {stats.get('missed_opportunities', 0)}")
    if stats.get("avg_estimated_edge_pct") is not None:
        print(f"  平均估计 edge: {stats['avg_estimated_edge_pct']}%")
    if review.get("insights"):
        print("\n💡 见解:")
        for item in review["insights"]:
            print(f"  • {item}")
    if review.get("recommendations"):
        print("\n🔧 优化建议:")
        for item in review["recommendations"]:
            print(f"  • {item}")
    print("=" * 50)


def main():
    try:
        hours = int(sys.argv[1]) if len(sys.argv) > 1 else 1
        review = generate_review(hours)
        path = save_review(review)
        print_review(review)
        print(f"复盘已保存至: {path}")
    except Exception as e:
        log_error("review_engine", e)
        raise


if __name__ == "__main__":
    main()
@@ -1,28 +0,0 @@
#!/usr/bin/env python3
"""Rotate external gate log using the user's logrotate config/state."""
import shutil
import subprocess

from .runtime import ensure_runtime_dirs, get_runtime_paths

PATHS = get_runtime_paths()
STATE_DIR = PATHS.state_dir
LOGROTATE_STATUS = PATHS.logrotate_status
LOGROTATE_CONF = PATHS.logrotate_config
LOGS_DIR = PATHS.logs_dir


def main():
    ensure_runtime_dirs(PATHS)
    logrotate_bin = shutil.which("logrotate") or "/usr/sbin/logrotate"
    cmd = [logrotate_bin, "-s", str(LOGROTATE_STATUS), str(LOGROTATE_CONF)]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.stdout.strip():
        print(result.stdout.strip())
    if result.stderr.strip():
        print(result.stderr.strip())
    return result.returncode


if __name__ == "__main__":
    raise SystemExit(main())
@@ -1,107 +1,561 @@
"""Runtime paths and environment helpers for CoinHunter CLI."""
"""Runtime helpers for CoinHunter V2."""

from __future__ import annotations

import argparse
import csv
import io
import json
import os
import re
import shutil
from dataclasses import asdict, dataclass
import subprocess
import sys
import threading
from collections.abc import Iterator
from contextlib import contextmanager
from dataclasses import asdict, dataclass, is_dataclass
from datetime import date, datetime
from pathlib import Path
from typing import Any

try:
    import shtab
except Exception:  # pragma: no cover
    shtab = None  # type: ignore[assignment]


@dataclass(frozen=True)
class RuntimePaths:
    root: Path
    cache_dir: Path
    state_dir: Path
    logs_dir: Path
    reviews_dir: Path
    config_file: Path
    positions_file: Path
    accounts_file: Path
    executions_file: Path
    watchlist_file: Path
    notes_file: Path
    positions_lock: Path
    executions_lock: Path
    precheck_state_file: Path
    external_gate_lock: Path
    logrotate_config: Path
    logrotate_status: Path
    hermes_home: Path
    env_file: Path
    hermes_bin: Path
    logs_dir: Path

    def as_dict(self) -> dict[str, str]:
        return {key: str(value) for key, value in asdict(self).items()}


def _default_coinhunter_home() -> Path:
    raw = os.getenv("COINHUNTER_HOME")
    return Path(raw).expanduser() if raw else Path.home() / ".coinhunter"


def _default_hermes_home() -> Path:
    raw = os.getenv("HERMES_HOME")
    return Path(raw).expanduser() if raw else Path.home() / ".hermes"


def get_runtime_paths() -> RuntimePaths:
    root = _default_coinhunter_home()
    hermes_home = _default_hermes_home()
    state_dir = root / "state"
    root = Path(os.getenv("COINHUNTER_HOME", "~/.coinhunter")).expanduser()
    return RuntimePaths(
        root=root,
        cache_dir=root / "cache",
        state_dir=state_dir,
        config_file=root / "config.toml",
        env_file=root / ".env",
        logs_dir=root / "logs",
        reviews_dir=root / "reviews",
        config_file=root / "config.json",
        positions_file=root / "positions.json",
        accounts_file=root / "accounts.json",
        executions_file=root / "executions.json",
        watchlist_file=root / "watchlist.json",
        notes_file=root / "notes.json",
        positions_lock=root / "positions.lock",
        executions_lock=root / "executions.lock",
        precheck_state_file=state_dir / "precheck_state.json",
        external_gate_lock=state_dir / "external_gate.lock",
        logrotate_config=root / "logrotate_external_gate.conf",
        logrotate_status=state_dir / "logrotate_external_gate.status",
        hermes_home=hermes_home,
        env_file=Path(os.getenv("COINHUNTER_ENV_FILE", str(hermes_home / ".env"))).expanduser(),
        hermes_bin=Path(os.getenv("HERMES_BIN", str(Path.home() / ".local" / "bin" / "hermes"))).expanduser(),
    )


def ensure_runtime_dirs(paths: RuntimePaths | None = None) -> RuntimePaths:
    paths = paths or get_runtime_paths()
    for directory in (paths.root, paths.cache_dir, paths.state_dir, paths.logs_dir, paths.reviews_dir):
        directory.mkdir(parents=True, exist_ok=True)
    paths.root.mkdir(parents=True, exist_ok=True)
    paths.logs_dir.mkdir(parents=True, exist_ok=True)
    return paths


def load_env_file(paths: RuntimePaths | None = None) -> Path:
    paths = paths or get_runtime_paths()
    if paths.env_file.exists():
        for line in paths.env_file.read_text(encoding="utf-8").splitlines():
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                os.environ.setdefault(key.strip(), value.strip())
    return paths.env_file
def json_default(value: Any) -> Any:
    if is_dataclass(value) and not isinstance(value, type):
        return asdict(value)
    if isinstance(value, (datetime, date)):
        return value.isoformat()
    if isinstance(value, Path):
        return str(value)
    raise TypeError(f"Object of type {type(value).__name__} is not JSON serializable")


def resolve_hermes_executable(paths: RuntimePaths | None = None) -> str:
    paths = paths or get_runtime_paths()
    discovered = shutil.which("hermes")
    if discovered:
        return discovered
    return str(paths.hermes_bin)
def print_json(payload: Any) -> None:
    print(json.dumps(payload, ensure_ascii=False, indent=2, sort_keys=True, default=json_default))

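Editor's aside: the serialization helper above (`json_default`) can be exercised on its own. The sketch below copies the function from the diff and invents a `Fill` dataclass purely for illustration:

```python
import json
from dataclasses import asdict, dataclass, is_dataclass
from datetime import date, datetime
from pathlib import Path


def json_default(value):
    # Mirror of the helper above: dataclasses, dates/datetimes, and Paths
    # become JSON-friendly values; anything else raises TypeError.
    if is_dataclass(value) and not isinstance(value, type):
        return asdict(value)
    if isinstance(value, (datetime, date)):
        return value.isoformat()
    if isinstance(value, Path):
        return str(value)
    raise TypeError(f"Object of type {type(value).__name__} is not JSON serializable")


@dataclass
class Fill:  # hypothetical payload type, not part of the diff
    symbol: str
    qty: float
    ts: date


payload = {"fill": Fill("BTCUSDT", 0.5, date(2024, 1, 2)), "log": Path("/tmp/x.log")}
print(json.dumps(payload, sort_keys=True, default=json_default))
# {"fill": {"qty": 0.5, "symbol": "BTCUSDT", "ts": "2024-01-02"}, "log": "/tmp/x.log"}
```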
def mask_secret(value: str | None, *, tail: int = 4) -> str:
    if not value:
def self_upgrade() -> dict[str, Any]:
    if shutil.which("pipx"):
        cmd = ["pipx", "upgrade", "coinhunter"]
    else:
        cmd = [sys.executable, "-m", "pip", "install", "--upgrade", "coinhunter"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return {
        "command": " ".join(cmd),
        "returncode": result.returncode,
        "stdout": result.stdout.strip(),
        "stderr": result.stderr.strip(),
    }


# ---------------------------------------------------------------------------
# TUI / Agent output helpers
# ---------------------------------------------------------------------------

_ANSI_RE = re.compile(r"\033\[[0-9;]*m")
_BOLD = "\033[1m"
_RESET = "\033[0m"
_CYAN = "\033[36m"
_GREEN = "\033[32m"
_YELLOW = "\033[33m"
_RED = "\033[31m"
_DIM = "\033[2m"


def _strip_ansi(text: str) -> str:
    return _ANSI_RE.sub("", text)


def _color(text: str, color: str) -> str:
    return f"{color}{text}{_RESET}"


def _cell_width(text: str) -> int:
    return len(_strip_ansi(text))


def _pad(text: str, width: int, align: str = "left") -> str:
    pad = width - _cell_width(text)
    if align == "right":
        return " " * pad + text
    return text + " " * pad


def _fmt_number(value: Any) -> str:
    if value is None:
        return "—"
    if isinstance(value, bool):
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        s = f"{value:,.4f}"
        s = s.rstrip("0").rstrip(".")
        return s
    return str(value)

def _fmt_local_ts(ts: str) -> str:
    try:
        dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
        return dt.astimezone().strftime("%Y-%m-%d %H:%M:%S")
    except Exception:
        return ts


def _event_color(event: str) -> str:
    if "failed" in event or "error" in event:
        return f"{_DIM}{_RED}"
    if event.startswith("trade"):
        return f"{_DIM}{_GREEN}"
    if event.startswith("opportunity"):
        return f"{_DIM}{_YELLOW}"
    return _DIM


def _is_large_dataset(payload: Any, threshold: int = 8) -> bool:
    if isinstance(payload, dict):
        for value in payload.values():
            if isinstance(value, list) and len(value) > threshold:
                return True
    return False


def _print_compact(payload: dict[str, Any]) -> None:
    target_key = None
    target_rows: list[Any] = []
    for key, value in payload.items():
        if isinstance(value, list) and len(value) > len(target_rows):
            target_key = key
            target_rows = value

    if target_rows and isinstance(target_rows[0], dict):
        headers = list(target_rows[0].keys())
        output = io.StringIO()
        writer = csv.writer(output, delimiter="|", lineterminator="\n")
        writer.writerow(headers)
        for row in target_rows:
            writer.writerow([str(row.get(h, "")) for h in headers])
        print(f"mode=compact|source={target_key}")
        print(output.getvalue().strip())
    else:
        for key, value in payload.items():
            print(f"{key}={value}")


def _h_line(widths: list[int], left: str, mid: str, right: str) -> str:
    parts = ["─" * (w + 2) for w in widths]
    return left + mid.join(parts) + right


def _print_box_table(
    title: str,
    headers: list[str],
    rows: list[list[str]],
    aligns: list[str] | None = None,
) -> None:
    if not rows:
        print(f"{_BOLD}{_CYAN}{title}{_RESET}")
        print("  (empty)")
        return

    aligns = aligns or ["left"] * len(headers)
    col_widths = [_cell_width(h) for h in headers]
    for row in rows:
        for i, cell in enumerate(row):
            col_widths[i] = max(col_widths[i], _cell_width(cell))

    if title:
        print(f"{_BOLD}{_CYAN}{title}{_RESET}")
    print(_h_line(col_widths, "┌", "┬", "┐"))
    header_cells = [_pad(headers[i], col_widths[i], aligns[i]) for i in range(len(headers))]
    print("│ " + " │ ".join(header_cells) + " │")
    print(_h_line(col_widths, "├", "┼", "┤"))
    for row in rows:
        cells = [_pad(row[i], col_widths[i], aligns[i]) for i in range(len(row))]
        print("│ " + " │ ".join(cells) + " │")
    print(_h_line(col_widths, "└", "┴", "┘"))


def _render_tui(payload: Any) -> None:
    if not isinstance(payload, dict):
        print(str(payload))
        return

    if "balances" in payload:
        rows = payload["balances"]
        table_rows: list[list[str]] = []
        for r in rows:
            is_dust = r.get("is_dust", False)
            dust_label = f"{_DIM}dust{_RESET}" if is_dust else ""
            table_rows.append(
                [
                    r.get("asset", ""),
                    _fmt_number(r.get("free", 0)),
                    _fmt_number(r.get("locked", 0)),
                    _fmt_number(r.get("total", 0)),
                    _fmt_number(r.get("notional_usdt", 0)),
                    dust_label,
                ]
            )
        _print_box_table(
            "BALANCES",
            ["Asset", "Free", "Locked", "Total", "Notional (USDT)", ""],
            table_rows,
            aligns=["left", "right", "right", "right", "right", "left"],
        )
        return

    if "positions" in payload:
        rows = payload["positions"]
        table_rows = []
        for r in rows:
            entry = _fmt_number(r.get("entry_price")) if r.get("entry_price") is not None else "—"
            pnl = _fmt_number(r.get("unrealized_pnl")) if r.get("unrealized_pnl") is not None else "—"
            table_rows.append(
                [
                    r.get("market_type", ""),
                    r.get("symbol", ""),
                    r.get("side", ""),
                    _fmt_number(r.get("quantity", 0)),
                    entry,
                    _fmt_number(r.get("mark_price", 0)),
                    _fmt_number(r.get("notional_usdt", 0)),
                    pnl,
                ]
            )
        _print_box_table(
            "POSITIONS",
            ["Market", "Symbol", "Side", "Qty", "Entry", "Mark", "Notional", "PnL"],
            table_rows,
            aligns=["left", "left", "left", "right", "right", "right", "right", "right"],
        )
        return

    if "tickers" in payload:
        rows = payload["tickers"]
        table_rows = []
        for r in rows:
            pct = r.get("price_change_pct", 0)
            pct_str = _color(f"{pct:+.2f}%", _GREEN if pct >= 0 else _RED)
            table_rows.append(
                [
                    r.get("symbol", ""),
                    _fmt_number(r.get("last_price", 0)),
                    pct_str,
                    _fmt_number(r.get("quote_volume", 0)),
                ]
            )
        _print_box_table(
            f"TICKERS window={payload.get('window', '1d')}",
            ["Symbol", "Last Price", "Change %", "Quote Volume"],
            table_rows,
            aligns=["left", "right", "right", "right"],
        )
        return

    if "klines" in payload:
        rows = payload["klines"]
        print(
            f"\n{_BOLD}{_CYAN} KLINES {_RESET} interval={payload.get('interval')} limit={payload.get('limit')} count={len(rows)}"
        )
        display_rows = rows[:10]
        table_rows = []
        for r in display_rows:
            table_rows.append(
                [
                    r.get("symbol", ""),
                    str(r.get("open_time", ""))[:10],
                    _fmt_number(r.get("open", 0)),
                    _fmt_number(r.get("high", 0)),
                    _fmt_number(r.get("low", 0)),
                    _fmt_number(r.get("close", 0)),
                    _fmt_number(r.get("volume", 0)),
                ]
            )
        _print_box_table(
            "",
            ["Symbol", "Time", "Open", "High", "Low", "Close", "Vol"],
            table_rows,
            aligns=["left", "left", "right", "right", "right", "right", "right"],
        )
        if len(rows) > 10:
            print(f"  {_DIM}... and {len(rows) - 10} more rows{_RESET}")
        return

    if "trade" in payload:
        t = payload["trade"]
        status = t.get("status", "UNKNOWN")
        status_color = _GREEN if status == "FILLED" else _YELLOW if status == "DRY_RUN" else _CYAN
        print(f"\n{_BOLD}{_CYAN} TRADE RESULT {_RESET}")
        print(f"  Market: {t.get('market_type', '').upper()}")
        print(f"  Symbol: {t.get('symbol', '')}")
        print(f"  Side: {t.get('side', '')}")
        print(f"  Type: {t.get('order_type', '')}")
        print(f"  Status: {_color(status, status_color)}")
        print(f"  Dry Run: {_fmt_number(t.get('dry_run', False))}")
        return

    if "recommendations" in payload:
        rows = payload["recommendations"]
        print(f"\n{_BOLD}{_CYAN} RECOMMENDATIONS {_RESET} count={len(rows)}")
        for i, r in enumerate(rows, 1):
            score = r.get("score", 0)
            action = r.get("action", "")
            action_color = (
                _GREEN
                if action in {"add", "trigger"}
                else _YELLOW
                if action in {"hold", "setup", "review"}
                else _RED
                if action in {"chase", "exit", "skip", "trim"}
                else _CYAN
            )
            print(
                f"  {i}. {_BOLD}{r.get('symbol', '')}{_RESET} action={_color(action, action_color)} score={score:.4f}"
            )
            for reason in r.get("reasons", []):
                print(f"     · {reason}")
            metrics = r.get("metrics", {})
            if metrics:
                metric_str = " ".join(f"{k}={v}" for k, v in metrics.items())
                print(f"     {_DIM}{metric_str}{_RESET}")
        return

    if "command" in payload and "returncode" in payload:
        rc = payload.get("returncode", 0)
        stdout = payload.get("stdout", "")
        stderr = payload.get("stderr", "")
        if rc == 0:
            print(f"{_GREEN}✓{_RESET} Update completed")
        else:
            print(f"{_RED}✗{_RESET} Update failed (exit code {rc})")
        if stdout:
            for line in stdout.strip().splitlines():
                print(f"  {line}")
        if rc != 0 and stderr:
            print(f"  {_YELLOW}Details:{_RESET}")
            for line in stderr.strip().splitlines():
                print(f"    {line}")
        return

    if "entries" in payload:
        rows = payload["entries"]
        print(f"\n{_BOLD}{_CYAN} AUDIT LOG {_RESET}")
        if not rows:
            print("  (no audit entries)")
            return
        for r in rows:
            ts = _fmt_local_ts(r.get("timestamp", ""))
            event = r.get("event", "")
            detail_parts: list[str] = []
            for key in ("symbol", "side", "qty", "quote_amount", "order_type", "status", "dry_run", "error"):
                val = r.get(key)
                if val is not None:
                    detail_parts.append(f"{key}={val}")
            if not detail_parts:
                for key, val in r.items():
                    if key not in ("timestamp", "event") and not isinstance(val, (dict, list)):
                        detail_parts.append(f"{key}={val}")
            print(f"\n  {_DIM}{ts}{_RESET} {_event_color(event)}{event}{_RESET}")
            if detail_parts:
                print(f"    {' '.join(detail_parts)}")
        return

    if "created_or_updated" in payload:
        print(f"\n{_BOLD}{_CYAN} INITIALIZED {_RESET}")
        print(f"  Root: {payload.get('root', '')}")
        print(f"  Config: {payload.get('config_file', '')}")
        print(f"  Env: {payload.get('env_file', '')}")
        print(f"  Logs: {payload.get('logs_dir', '')}")
        files = payload.get("created_or_updated", [])
        if files:
            action = "overwritten" if payload.get("force") else "created"
            print(f"  Files {action}: {', '.join(files)}")
        comp = payload.get("completion", {})
        if comp.get("installed"):
            print(f"\n  {_GREEN}✓{_RESET} Shell completions installed for {comp.get('shell', '')}")
            print(f"  Path: {comp.get('path', '')}")
            if comp.get("hint"):
                print(f"  Hint: {comp.get('hint', '')}")
        elif comp.get("reason"):
            print(f"\n  Shell completions: {comp.get('reason', '')}")
        return

    # Generic fallback for single-list payloads
    if len(payload) == 1:
        key, value = next(iter(payload.items()))
        if isinstance(value, list) and value and isinstance(value[0], dict):
            _render_tui({key: value})
            return

    # Simple key-value fallback
    for key, value in payload.items():
        if isinstance(value, str) and "\n" in value:
            print(f"  {key}:")
            for line in value.strip().splitlines():
                print(f"    {line}")
        else:
            print(f"  {key}: {value}")


def print_output(payload: Any, *, agent: bool = False) -> None:
    if agent:
        print_json(payload)
    else:
        _render_tui(payload)


# ---------------------------------------------------------------------------
# Spinner / loading animation
# ---------------------------------------------------------------------------

_SPINNER_FRAMES = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"]


class _SpinnerThread(threading.Thread):
    def __init__(self, message: str, interval: float = 0.08) -> None:
        super().__init__(daemon=True)
        self.message = message
        self.interval = interval
        self._stop_event = threading.Event()

    def run(self) -> None:
        i = 0
        while not self._stop_event.is_set():
            frame = _SPINNER_FRAMES[i % len(_SPINNER_FRAMES)]
            sys.stdout.write(f"\r{_CYAN}{frame}{_RESET} {self.message} ")
            sys.stdout.flush()
            self._stop_event.wait(self.interval)
            i += 1

    def stop(self) -> None:
        self._stop_event.set()
        self.join()
        sys.stdout.write("\r\033[K")
        sys.stdout.flush()


@contextmanager
def with_spinner(message: str, *, enabled: bool = True) -> Iterator[None]:
    if not enabled or not sys.stdout.isatty():
        yield
        return
    spinner = _SpinnerThread(message)
    spinner.start()
    try:
        yield
    finally:
        spinner.stop()


def _detect_shell() -> str:
    shell = os.getenv("SHELL", "")
    if "zsh" in shell:
        return "zsh"
    if "bash" in shell:
        return "bash"
    return ""
    if len(value) <= tail:
        return "*" * len(value)
    return "*" * max(4, len(value) - tail) + value[-tail:]


def _zshrc_path() -> Path:
    return Path.home() / ".zshrc"


def _bashrc_path() -> Path:
    return Path.home() / ".bashrc"


def _rc_contains(rc_path: Path, snippet: str) -> bool:
    if not rc_path.exists():
        return False
    return snippet in rc_path.read_text(encoding="utf-8")


def install_shell_completion(parser: argparse.ArgumentParser) -> dict[str, Any]:
    if shtab is None:
        return {"shell": None, "installed": False, "reason": "shtab is not installed"}

    shell = _detect_shell()
    if not shell:
        return {"shell": None, "installed": False, "reason": "unable to detect shell from $SHELL"}

    script = shtab.complete(parser, shell=shell, preamble="")
    # Also register completion for the "coinhunter" alias
    prog = parser.prog.replace("-", "_")
    func = f"_shtab_{prog}"
    if shell == "bash":
        script += f"\ncomplete -o filenames -F {func} coinhunter\n"
    elif shell == "zsh":
        script += f"\ncompdef {func} coinhunter\n"
    installed_path: Path | None = None
    hint: str | None = None

    if shell == "zsh":
        comp_dir = Path.home() / ".zsh" / "completions"
        comp_dir.mkdir(parents=True, exist_ok=True)
        installed_path = comp_dir / "_coinhunter"
        installed_path.write_text(script, encoding="utf-8")
        rc_path = _zshrc_path()
        fpath_line = "fpath+=(~/.zsh/completions)"
        if not _rc_contains(rc_path, fpath_line):
            rc_path.write_text(
                fpath_line + "\n" + rc_path.read_text(encoding="utf-8") if rc_path.exists() else fpath_line + "\n",
                encoding="utf-8",
            )
            hint = "Added fpath+=(~/.zsh/completions) to ~/.zshrc; restart your terminal or run 'compinit'"
        else:
            hint = "Run 'compinit' or restart your terminal to activate completions"
    elif shell == "bash":
        comp_dir = Path.home() / ".local" / "share" / "bash-completion" / "completions"
        comp_dir.mkdir(parents=True, exist_ok=True)
        installed_path = comp_dir / "coinhunter"
        installed_path.write_text(script, encoding="utf-8")
        rc_path = _bashrc_path()
        source_line = '[[ -r "~/.local/share/bash-completion/completions/coinhunter" ]] && . "~/.local/share/bash-completion/completions/coinhunter"'
        if not _rc_contains(rc_path, source_line):
            rc_path.write_text(
                source_line + "\n" + rc_path.read_text(encoding="utf-8") if rc_path.exists() else source_line + "\n",
                encoding="utf-8",
            )
            hint = "Added bash completion source line to ~/.bashrc; restart your terminal"
        else:
            hint = "Restart your terminal or source ~/.bashrc to activate completions"

    return {
        "shell": shell,
        "installed": True,
        "path": str(installed_path) if installed_path else None,
        "hint": hint,
    }
@@ -1 +1 @@
"""Application services for CoinHunter."""
"""Service layer for CoinHunter V2."""

src/coinhunter/services/account_service.py (new file, 120 lines)
@@ -0,0 +1,120 @@
"""Account and position services."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from dataclasses import asdict, dataclass
|
||||
from typing import Any
|
||||
|
||||
|
||||
@dataclass
|
||||
class AssetBalance:
|
||||
asset: str
|
||||
free: float
|
||||
locked: float
|
||||
total: float
|
||||
notional_usdt: float
|
||||
is_dust: bool
|
||||
|
||||
|
||||
@dataclass
|
||||
class PositionView:
|
||||
symbol: str
|
||||
quantity: float
|
||||
entry_price: float | None
|
||||
mark_price: float
|
||||
notional_usdt: float
|
||||
side: str
|
||||
|
||||
|
||||
@dataclass
|
||||
class AccountOverview:
|
||||
total_equity_usdt: float
|
||||
spot_equity_usdt: float
|
||||
spot_asset_count: int
|
||||
spot_position_count: int
|
||||
|
||||
|
||||
def _spot_price_map(spot_client: Any, quote: str, assets: list[str]) -> dict[str, float]:
|
||||
symbols = [f"{asset}{quote}" for asset in assets if asset != quote]
|
||||
price_map = {quote: 1.0}
|
||||
if not symbols:
|
||||
return price_map
|
||||
for item in spot_client.ticker_price(symbols):
|
||||
symbol = item.get("symbol", "")
|
||||
if symbol.endswith(quote):
|
||||
price_map[symbol.removesuffix(quote)] = float(item.get("price", 0.0))
|
||||
return price_map
|
||||
|
||||
|
||||
def _spot_account_data(spot_client: Any, quote: str) -> tuple[list[dict[str, Any]], list[str], dict[str, float]]:
|
||||
account = spot_client.account_info()
|
||||
balances = account.get("balances", [])
|
||||
assets = [item["asset"] for item in balances if float(item.get("free", 0)) + float(item.get("locked", 0)) > 0]
|
||||
price_map = _spot_price_map(spot_client, quote, assets)
|
||||
return balances, assets, price_map
|
||||
|
||||
|
||||
def get_balances(
|
||||
config: dict[str, Any],
|
||||
*,
|
||||
spot_client: Any,
|
||||
) -> dict[str, Any]:
|
||||
quote = str(config.get("market", {}).get("default_quote", "USDT")).upper()
|
||||
dust = float(config.get("trading", {}).get("dust_usdt_threshold", 0.0))
|
||||
rows: list[dict[str, Any]] = []
|
||||
balances, _, price_map = _spot_account_data(spot_client, quote)
|
||||
for item in balances:
|
||||
free = float(item.get("free", 0.0))
|
||||
locked = float(item.get("locked", 0.0))
|
||||
total = free + locked
|
||||
if total <= 0:
|
||||
continue
|
||||
asset = item["asset"]
|
||||
notional = total * price_map.get(asset, 0.0)
|
||||
rows.append(
|
||||
asdict(
|
||||
AssetBalance(
|
||||
asset=asset,
|
||||
free=free,
|
||||
locked=locked,
|
||||
total=total,
|
||||
notional_usdt=notional,
|
||||
is_dust=notional < dust,
|
||||
)
|
||||
)
|
||||
)
|
||||
return {"balances": rows}
|
||||
|
||||
|
||||
def get_positions(
|
||||
config: dict[str, Any],
|
||||
*,
|
||||
spot_client: Any,
|
||||
ignore_dust: bool = True,
|
||||
) -> dict[str, Any]:
|
||||
quote = str(config.get("market", {}).get("default_quote", "USDT")).upper()
|
||||
dust = float(config.get("trading", {}).get("dust_usdt_threshold", 0.0))
|
||||
rows: list[dict[str, Any]] = []
|
||||
balances, _, price_map = _spot_account_data(spot_client, quote)
|
||||
for item in balances:
|
||||
quantity = float(item.get("free", 0.0)) + float(item.get("locked", 0.0))
|
||||
if quantity <= 0:
|
||||
continue
|
||||
asset = item["asset"]
|
||||
mark_price = price_map.get(asset, 1.0 if asset == quote else 0.0)
|
||||
notional = quantity * mark_price
|
||||
if ignore_dust and notional < dust:
|
||||
continue
|
||||
rows.append(
|
||||
asdict(
|
||||
PositionView(
|
||||
symbol=quote if asset == quote else f"{asset}{quote}",
|
||||
quantity=quantity,
|
||||
entry_price=None,
|
||||
mark_price=mark_price,
|
||||
notional_usdt=notional,
|
||||
side="LONG",
|
||||
)
|
||||
)
|
||||
)
|
||||
return {"positions": rows}
|
||||
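To illustrate the balance-to-notional conversion and dust flagging that `get_balances` performs above, here is a self-contained sketch; the `balances` and `price_map` values are invented stand-ins for the spot client's account and ticker responses:

```python
from dataclasses import asdict, dataclass


@dataclass
class AssetBalance:  # same shape as the dataclass above
    asset: str
    free: float
    locked: float
    total: float
    notional_usdt: float
    is_dust: bool


# Invented stand-ins for spot_client.account_info() balances and ticker prices.
balances = [
    {"asset": "BTC", "free": "0.01", "locked": "0"},
    {"asset": "PEPE", "free": "100", "locked": "0"},
]
price_map = {"USDT": 1.0, "BTC": 60000.0, "PEPE": 0.00001}
dust_threshold = 1.0  # plays the role of trading.dust_usdt_threshold

rows = []
for item in balances:
    free, locked = float(item["free"]), float(item["locked"])
    total = free + locked
    # Notional value in quote currency; unknown assets price to zero.
    notional = total * price_map.get(item["asset"], 0.0)
    rows.append(asdict(AssetBalance(item["asset"], free, locked, total, notional, notional < dust_threshold)))

print([r["asset"] for r in rows if r["is_dust"]])
# ['PEPE']
```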
@@ -1,125 +0,0 @@
"""Exchange helpers (ccxt, markets, balances, order prep)."""
import math
import os

import ccxt

from ..runtime import get_runtime_paths, load_env_file
from .trade_common import log

PATHS = get_runtime_paths()


def load_env():
    load_env_file(PATHS)


def get_exchange():
    load_env()
    api_key = os.getenv("BINANCE_API_KEY")
    secret = os.getenv("BINANCE_API_SECRET")
    if not api_key or not secret:
        raise RuntimeError("Missing BINANCE_API_KEY or BINANCE_API_SECRET")
    ex = ccxt.binance(
        {
            "apiKey": api_key,
            "secret": secret,
            "options": {"defaultType": "spot", "createMarketBuyOrderRequiresPrice": False},
            "enableRateLimit": True,
        }
    )
    ex.load_markets()
    return ex


def norm_symbol(symbol: str) -> str:
    s = symbol.upper().replace("-", "").replace("_", "")
    if "/" in s:
        return s
    if s.endswith("USDT"):
        return s[:-4] + "/USDT"
    raise ValueError(f"Unsupported symbol: {symbol}")


def storage_symbol(symbol: str) -> str:
    return norm_symbol(symbol).replace("/", "")


def fetch_balances(ex):
    bal = ex.fetch_balance()["free"]
    return {k: float(v) for k, v in bal.items() if float(v) > 0}


def build_market_snapshot(ex):
    try:
        tickers = ex.fetch_tickers()
    except Exception:
        return {}
    snapshot = {}
    for sym, t in tickers.items():
        if not sym.endswith("/USDT"):
            continue
        price = t.get("last")
        if price is None or float(price) <= 0:
            continue
        vol = float(t.get("quoteVolume") or 0)
        if vol < 200_000:
            continue
        base = sym.replace("/", "")
        snapshot[base] = {
            "lastPrice": round(float(price), 8),
            "price24hPcnt": round(float(t.get("percentage") or 0), 4),
            "highPrice24h": round(float(t.get("high") or 0), 8) if t.get("high") else None,
            "lowPrice24h": round(float(t.get("low") or 0), 8) if t.get("low") else None,
            "turnover24h": round(float(vol), 2),
        }
    return snapshot


def market_and_ticker(ex, symbol: str):
    sym = norm_symbol(symbol)
    market = ex.market(sym)
    ticker = ex.fetch_ticker(sym)
    return sym, market, ticker


def floor_to_step(value: float, step: float) -> float:
    if not step or step <= 0:
        return value
    return math.floor(value / step) * step


def prepare_buy_quantity(ex, symbol: str, amount_usdt: float):
    from .trade_common import USDT_BUFFER_PCT

    sym, market, ticker = market_and_ticker(ex, symbol)
    ask = float(ticker.get("ask") or ticker.get("last") or 0)
    if ask <= 0:
        raise RuntimeError(f"{sym}: unable to get a valid ask price")
    budget = amount_usdt * (1 - USDT_BUFFER_PCT)
    raw_qty = budget / ask
    qty = float(ex.amount_to_precision(sym, raw_qty))
    min_amt = (market.get("limits", {}).get("amount", {}) or {}).get("min") or 0
    min_cost = (market.get("limits", {}).get("cost", {}) or {}).get("min") or 0
    if min_amt and qty < float(min_amt):
        raise RuntimeError(f"{sym}: buy quantity {qty} is below the minimum amount {min_amt}")
    est_cost = qty * ask
    if min_cost and est_cost < float(min_cost):
        raise RuntimeError(f"{sym}: buy notional ${est_cost:.4f} is below the minimum cost ${float(min_cost):.4f}")
    return sym, qty, ask, est_cost


def prepare_sell_quantity(ex, symbol: str, free_qty: float):
    sym, market, ticker = market_and_ticker(ex, symbol)
    bid = float(ticker.get("bid") or ticker.get("last") or 0)
    if bid <= 0:
        raise RuntimeError(f"{sym}: unable to get a valid bid price")
    qty = float(ex.amount_to_precision(sym, free_qty))
    min_amt = (market.get("limits", {}).get("amount", {}) or {}).get("min") or 0
    min_cost = (market.get("limits", {}).get("cost", {}) or {}).get("min") or 0
    if min_amt and qty < float(min_amt):
        raise RuntimeError(f"{sym}: sell quantity {qty} is below the minimum amount {min_amt}")
    est_cost = qty * bid
    if min_cost and est_cost < float(min_cost):
        raise RuntimeError(f"{sym}: sell notional ${est_cost:.4f} is below the minimum cost ${float(min_cost):.4f}")
    return sym, qty, bid, est_cost
@@ -1,39 +0,0 @@
"""Execution state helpers (decision deduplication, executions.json)."""
import hashlib

from ..runtime import get_runtime_paths
from .file_utils import load_json_locked, save_json_locked
from .trade_common import bj_now_iso

PATHS = get_runtime_paths()
EXECUTIONS_FILE = PATHS.executions_file
EXECUTIONS_LOCK = PATHS.executions_lock


def default_decision_id(action: str, argv_tail: list[str]) -> str:
    from datetime import datetime
    from .trade_common import CST

    now = datetime.now(CST)
    bucket_min = (now.minute // 15) * 15
    bucket = now.strftime(f"%Y%m%dT%H{bucket_min:02d}")
    raw = f"{bucket}|{action}|{'|'.join(argv_tail)}"
    return hashlib.sha1(raw.encode()).hexdigest()[:16]


def load_executions() -> dict:
    return load_json_locked(EXECUTIONS_FILE, EXECUTIONS_LOCK, {"executions": {}}).get("executions", {})


def save_executions(executions: dict):
    save_json_locked(EXECUTIONS_FILE, EXECUTIONS_LOCK, {"executions": executions})


def record_execution_state(decision_id: str, payload: dict):
    executions = load_executions()
    executions[decision_id] = payload
    save_executions(executions)


def get_execution_state(decision_id: str):
    return load_executions().get(decision_id)
@@ -1,40 +0,0 @@
"""File locking and atomic JSON helpers."""
import fcntl
import json
import os
from contextlib import contextmanager
from pathlib import Path


@contextmanager
def locked_file(path: Path):
    path.parent.mkdir(parents=True, exist_ok=True)
    with open(path, "a+", encoding="utf-8") as f:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)
        f.seek(0)
        yield f
        f.flush()
        os.fsync(f.fileno())
        fcntl.flock(f.fileno(), fcntl.LOCK_UN)


def atomic_write_json(path: Path, data: dict):
    path.parent.mkdir(parents=True, exist_ok=True)
    tmp = path.with_suffix(path.suffix + ".tmp")
    tmp.write_text(json.dumps(data, indent=2, ensure_ascii=False), encoding="utf-8")
    os.replace(tmp, path)


def load_json_locked(path: Path, lock_path: Path, default):
    with locked_file(lock_path):
        if not path.exists():
            return default
        try:
            return json.loads(path.read_text(encoding="utf-8"))
        except Exception:
            return default


def save_json_locked(path: Path, lock_path: Path, data: dict):
    with locked_file(lock_path):
        atomic_write_json(path, data)
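The write path above combines an `fcntl` lock file with a write-to-temp-then-`os.replace` swap, so concurrent readers see either the previous JSON or the new one, never a half-written file. A self-contained sketch of just the atomic half (no locking), runnable wherever `os.replace` is atomic, i.e. within one filesystem:

```python
import json
import os
import tempfile
from pathlib import Path


def atomic_write_json(path: Path, data: dict) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    tmp = path.with_suffix(path.suffix + ".tmp")
    # Write the full payload to a sibling temp file first...
    tmp.write_text(json.dumps(data, indent=2, ensure_ascii=False), encoding="utf-8")
    # ...then swap it in with a single rename; readers never observe a partial file.
    os.replace(tmp, path)


with tempfile.TemporaryDirectory() as root:
    target = Path(root) / "executions.json"
    atomic_write_json(target, {"executions": {"abc123": {"status": "filled"}}})
    loaded = json.loads(target.read_text(encoding="utf-8"))
    print(loaded["executions"]["abc123"]["status"])  # filled
```

The lock file is still needed for read-modify-write cycles such as `record_execution_state`; the rename alone only protects individual writes, not the interleaving of two writers.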
src/coinhunter/services/market_service.py (new file, 146 lines)
@@ -0,0 +1,146 @@
"""Market data services and symbol normalization."""

from __future__ import annotations

from dataclasses import asdict, dataclass
from typing import Any


def normalize_symbol(symbol: str) -> str:
    return symbol.upper().replace("/", "").replace("-", "").replace("_", "").strip()


def normalize_symbols(symbols: list[str]) -> list[str]:
    seen: set[str] = set()
    normalized: list[str] = []
    for symbol in symbols:
        value = normalize_symbol(symbol)
        if value and value not in seen:
            normalized.append(value)
            seen.add(value)
    return normalized


def base_asset(symbol: str, quote_asset: str) -> str:
    symbol = normalize_symbol(symbol)
    return symbol[: -len(quote_asset)] if symbol.endswith(quote_asset) else symbol


@dataclass
class TickerView:
    symbol: str
    last_price: float
    price_change_pct: float
    quote_volume: float


@dataclass
class KlineView:
    symbol: str
    interval: str
    open_time: int
    open: float
    high: float
    low: float
    close: float
    volume: float
    close_time: int
    quote_volume: float


def get_tickers(config: dict[str, Any], symbols: list[str], *, spot_client: Any, window: str = "1d") -> dict[str, Any]:
    normalized = normalize_symbols(symbols)
    rows = []
    for ticker in spot_client.ticker_stats(normalized, window=window):
        rows.append(
            asdict(
                TickerView(
                    symbol=normalize_symbol(ticker["symbol"]),
                    last_price=float(ticker.get("lastPrice") or ticker.get("last_price") or 0.0),
                    price_change_pct=float(
                        ticker.get("priceChangePercent") or ticker.get("price_change_percent") or 0.0
                    ),
                    quote_volume=float(ticker.get("quoteVolume") or ticker.get("quote_volume") or 0.0),
                )
            )
        )
    return {"tickers": rows, "window": window}


def get_klines(
    config: dict[str, Any],
    symbols: list[str],
    *,
    interval: str,
    limit: int,
    spot_client: Any,
) -> dict[str, Any]:
    normalized = normalize_symbols(symbols)
    rows = []
    for symbol in normalized:
        for item in spot_client.klines(symbol=symbol, interval=interval, limit=limit):
            rows.append(
                asdict(
                    KlineView(
                        symbol=symbol,
                        interval=interval,
                        open_time=int(item[0]),
                        open=float(item[1]),
                        high=float(item[2]),
                        low=float(item[3]),
                        close=float(item[4]),
                        volume=float(item[5]),
                        close_time=int(item[6]),
                        quote_volume=float(item[7]),
                    )
                )
            )
    return {"interval": interval, "limit": limit, "klines": rows}


def get_scan_universe(
    config: dict[str, Any],
    *,
    spot_client: Any,
    symbols: list[str] | None = None,
    window: str = "1d",
) -> list[dict[str, Any]]:
    market_config = config.get("market", {})
    opportunity_config = config.get("opportunity", {})
    quote = str(market_config.get("default_quote", "USDT")).upper()
    allowlist = set(normalize_symbols(market_config.get("universe_allowlist", [])))
    denylist = set(normalize_symbols(market_config.get("universe_denylist", [])))
    requested = set(normalize_symbols(symbols or []))
    min_quote_volume = float(opportunity_config.get("min_quote_volume", 0.0))

    exchange_info = spot_client.exchange_info()
    status_map = {normalize_symbol(item["symbol"]): item.get("status", "") for item in exchange_info.get("symbols", [])}

    rows: list[dict[str, Any]] = []
    for ticker in spot_client.ticker_stats(list(requested) if requested else None, window=window):
        symbol = normalize_symbol(ticker["symbol"])
        if not symbol.endswith(quote):
            continue
        if allowlist and symbol not in allowlist:
            continue
        if symbol in denylist:
            continue
        if requested and symbol not in requested:
            continue
        if status_map.get(symbol) != "TRADING":
            continue
        quote_volume = float(ticker.get("quoteVolume") or 0.0)
        if quote_volume < min_quote_volume:
            continue
        rows.append(
            {
                "symbol": symbol,
                "last_price": float(ticker.get("lastPrice") or 0.0),
                "price_change_pct": float(ticker.get("priceChangePercent") or 0.0),
                "quote_volume": quote_volume,
                "high_price": float(ticker.get("highPrice") or 0.0),
                "low_price": float(ticker.get("lowPrice") or 0.0),
            }
        )
    rows.sort(key=lambda item: float(item["quote_volume"]), reverse=True)
    return rows
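`normalize_symbol` and `normalize_symbols` are the canonicalization every other service leans on: uppercase, common separators stripped, duplicates removed in first-seen order. A standalone copy of their shape, showing how mixed user input collapses to exchange symbols:

```python
def normalize_symbol(symbol: str) -> str:
    # Uppercase and drop the separators users commonly type.
    return symbol.upper().replace("/", "").replace("-", "").replace("_", "").strip()


def normalize_symbols(symbols: list[str]) -> list[str]:
    # Dedupe while preserving the order symbols were first seen.
    seen: set[str] = set()
    out: list[str] = []
    for symbol in symbols:
        value = normalize_symbol(symbol)
        if value and value not in seen:
            out.append(value)
            seen.add(value)
    return out


result = normalize_symbols(["btc/usdt", "BTC-USDT", "eth_usdt", ""])
print(result)  # ['BTCUSDT', 'ETHUSDT']
```

Empty strings are dropped by the `if value` guard, so blank CLI arguments cannot leak into API calls.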
src/coinhunter/services/opportunity_dataset_service.py (new file, 372 lines)
@@ -0,0 +1,372 @@
"""Historical dataset collection for opportunity evaluation."""

from __future__ import annotations

import json
import time
from collections.abc import Callable
from dataclasses import asdict, dataclass
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any
from urllib.parse import parse_qs, urlencode, urlparse

import requests
from requests.exceptions import RequestException

from ..runtime import get_runtime_paths
from .market_service import normalize_symbol, normalize_symbols

HttpGet = Callable[[str, dict[str, str], float], Any]
_PUBLIC_HTTP_ATTEMPTS = 5

_INTERVAL_SECONDS = {
    "1m": 60,
    "3m": 180,
    "5m": 300,
    "15m": 900,
    "30m": 1800,
    "1h": 3600,
    "2h": 7200,
    "4h": 14400,
    "6h": 21600,
    "8h": 28800,
    "12h": 43200,
    "1d": 86400,
    "3d": 259200,
    "1w": 604800,
}


@dataclass(frozen=True)
class DatasetPlan:
    intervals: list[str]
    kline_limit: int
    reference_days: float
    simulate_days: float
    run_days: float
    total_days: float
    start: datetime
    simulation_start: datetime
    simulation_end: datetime
    end: datetime


def _as_float(value: Any, default: float = 0.0) -> float:
    try:
        return float(value)
    except (TypeError, ValueError):
        return default


def _as_int(value: Any, default: int = 0) -> int:
    try:
        return int(value)
    except (TypeError, ValueError):
        return default


def _public_http_get(url: str, headers: dict[str, str], timeout: float) -> Any:
    last_error: RequestException | None = None
    for attempt in range(_PUBLIC_HTTP_ATTEMPTS):
        try:
            response = requests.get(url, headers=headers, timeout=timeout)
            response.raise_for_status()
            return response.json()
        except RequestException as exc:
            last_error = exc
            if attempt < _PUBLIC_HTTP_ATTEMPTS - 1:
                time.sleep(0.5 * (attempt + 1))
    if last_error is not None:
        raise last_error
    raise RuntimeError("public HTTP request failed")


def _public_http_status(url: str, headers: dict[str, str], timeout: float) -> tuple[int, str]:
    last_error: RequestException | None = None
    for attempt in range(_PUBLIC_HTTP_ATTEMPTS):
        try:
            response = requests.get(url, headers=headers, timeout=timeout)
            return response.status_code, response.text
        except RequestException as exc:
            last_error = exc
            if attempt < _PUBLIC_HTTP_ATTEMPTS - 1:
                time.sleep(0.5 * (attempt + 1))
    if last_error is not None:
        raise last_error
    raise RuntimeError("public HTTP status request failed")


def _build_url(base_url: str, path: str, params: dict[str, str]) -> str:
    return f"{base_url.rstrip('/')}{path}?{urlencode(params)}"


def _iso(dt: datetime) -> str:
    return dt.astimezone(timezone.utc).replace(microsecond=0).isoformat().replace("+00:00", "Z")


def _ms(dt: datetime) -> int:
    return int(dt.timestamp() * 1000)


def _default_intervals(config: dict[str, Any]) -> list[str]:
    configured = config.get("opportunity", {}).get("lookback_intervals", ["1h", "4h", "1d"])
    intervals = [str(item).strip() for item in configured if str(item).strip()]
    return intervals or ["1h"]


def reference_days_for(config: dict[str, Any]) -> float:
    opportunity_config = config.get("opportunity", {})
    intervals = _default_intervals(config)
    kline_limit = _as_int(opportunity_config.get("kline_limit"), 48)
    seconds = [(_INTERVAL_SECONDS.get(interval) or 0) * kline_limit for interval in intervals]
    return round(max(seconds or [0]) / 86400, 4)


def build_dataset_plan(
    config: dict[str, Any],
    *,
    simulate_days: float | None = None,
    run_days: float | None = None,
    now: datetime | None = None,
) -> DatasetPlan:
    opportunity_config = config.get("opportunity", {})
    intervals = _default_intervals(config)
    kline_limit = _as_int(opportunity_config.get("kline_limit"), 48)
    reference_days = reference_days_for(config)
    simulate = _as_float(
        simulate_days if simulate_days is not None else opportunity_config.get("simulate_days"),
        7.0,
    )
    run = _as_float(run_days if run_days is not None else opportunity_config.get("run_days"), 7.0)
    end = (now or datetime.now(timezone.utc)).astimezone(timezone.utc).replace(microsecond=0)
    total = reference_days + simulate + run
    start = end - timedelta(days=total)
    simulation_start = start + timedelta(days=reference_days)
    simulation_end = simulation_start + timedelta(days=run)
    return DatasetPlan(
        intervals=intervals,
        kline_limit=kline_limit,
        reference_days=reference_days,
        simulate_days=simulate,
        run_days=run,
        total_days=round(total, 4),
        start=start,
        simulation_start=simulation_start,
        simulation_end=simulation_end,
        end=end,
    )
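With the defaults shown in `build_dataset_plan` (`lookback_intervals = ["1h", "4h", "1d"]`, `kline_limit = 48`, `simulate_days = run_days = 7`), the reference window is set by the slowest interval: 48 daily candles is 48 days, so the whole plan spans 48 + 7 + 7 = 62 days. A quick check of that arithmetic with a pinned `end` date (the values below mirror those defaults, not a real run):

```python
from datetime import datetime, timedelta, timezone

INTERVAL_SECONDS = {"1h": 3600, "4h": 14400, "1d": 86400}
kline_limit = 48
# Reference window: longest interval times the candle count, in days.
reference_days = max(s * kline_limit for s in INTERVAL_SECONDS.values()) / 86400
simulate_days, run_days = 7.0, 7.0
total_days = reference_days + simulate_days + run_days

end = datetime(2024, 3, 3, tzinfo=timezone.utc)
start = end - timedelta(days=total_days)
simulation_start = start + timedelta(days=reference_days)
simulation_end = simulation_start + timedelta(days=run_days)
print(reference_days, total_days)                      # 48.0 62.0
print(start.date(), simulation_start.date(), simulation_end.date())
```

Note that, as in the source, the simulation window is advanced by `run_days` (not `simulate_days`); candles after `simulation_end` up to `end` provide the forward horizon for scoring.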

def _binance_base_url(config: dict[str, Any]) -> str:
    return str(config.get("binance", {}).get("spot_base_url", "https://api.binance.com"))


def _select_universe(
    config: dict[str, Any],
    *,
    symbols: list[str] | None,
    http_get: HttpGet,
    timeout: float,
) -> list[str]:
    if symbols:
        return normalize_symbols(symbols)

    market_config = config.get("market", {})
    opportunity_config = config.get("opportunity", {})
    quote = str(market_config.get("default_quote", "USDT")).upper()
    allowlist = set(normalize_symbols(market_config.get("universe_allowlist", [])))
    denylist = set(normalize_symbols(market_config.get("universe_denylist", [])))
    scan_limit = _as_int(opportunity_config.get("scan_limit"), 50)
    min_quote_volume = _as_float(opportunity_config.get("min_quote_volume"), 0.0)
    base_url = _binance_base_url(config)
    headers = {"accept": "application/json", "user-agent": "coinhunter/2"}

    exchange_info = http_get(_build_url(base_url, "/api/v3/exchangeInfo", {}), headers, timeout)
    status_map = {normalize_symbol(item["symbol"]): item.get("status", "") for item in exchange_info.get("symbols", [])}
    rows = http_get(_build_url(base_url, "/api/v3/ticker/24hr", {}), headers, timeout)

    universe: list[tuple[str, float]] = []
    for ticker in rows if isinstance(rows, list) else []:
        symbol = normalize_symbol(ticker.get("symbol", ""))
        if not symbol.endswith(quote):
            continue
        if allowlist and symbol not in allowlist:
            continue
        if symbol in denylist:
            continue
        if status_map.get(symbol) != "TRADING":
            continue
        quote_volume = _as_float(ticker.get("quoteVolume"))
        if quote_volume < min_quote_volume:
            continue
        universe.append((symbol, quote_volume))
    universe.sort(key=lambda item: item[1], reverse=True)
    return [symbol for symbol, _ in universe[:scan_limit]]


def _fetch_klines(
    config: dict[str, Any],
    *,
    symbol: str,
    interval: str,
    start: datetime,
    end: datetime,
    http_get: HttpGet,
    timeout: float,
) -> list[list[Any]]:
    base_url = _binance_base_url(config)
    headers = {"accept": "application/json", "user-agent": "coinhunter/2"}
    interval_ms = (_INTERVAL_SECONDS.get(interval) or 60) * 1000
    cursor = _ms(start)
    end_ms = _ms(end)
    rows: list[list[Any]] = []
    while cursor <= end_ms:
        url = _build_url(
            base_url,
            "/api/v3/klines",
            {
                "symbol": symbol,
                "interval": interval,
                "startTime": str(cursor),
                "endTime": str(end_ms),
                "limit": "1000",
            },
        )
        chunk = http_get(url, headers, timeout)
        if not chunk:
            break
        rows.extend(chunk)
        last_open = int(chunk[-1][0])
        next_cursor = last_open + interval_ms
        if next_cursor <= cursor:
            break
        cursor = next_cursor
        if len(chunk) < 1000:
            break
    return rows


def _probe_external_history(
    config: dict[str, Any],
    *,
    plan: DatasetPlan,
    timeout: float,
    http_status: Callable[[str, dict[str, str], float], tuple[int, str]] | None = None,
) -> dict[str, Any]:
    opportunity_config = config.get("opportunity", {})
    provider = str(opportunity_config.get("research_provider", "coingecko")).lower().strip()
    if not bool(opportunity_config.get("auto_research", True)) or provider in {"", "off", "none", "disabled"}:
        return {"provider": provider or "disabled", "status": "disabled"}
    if provider != "coingecko":
        return {"provider": provider, "status": "unsupported"}

    coingecko_config = config.get("coingecko", {})
    base_url = str(coingecko_config.get("base_url", "https://api.coingecko.com/api/v3"))
    api_key = str(coingecko_config.get("api_key", "")).strip()
    headers = {"accept": "application/json", "user-agent": "coinhunter/2"}
    if api_key:
        headers["x-cg-demo-api-key"] = api_key
    sample_date = plan.simulation_start.strftime("%d-%m-%Y")
    url = _build_url(base_url, "/coins/bitcoin/history", {"date": sample_date})
    http_status = http_status or _public_http_status
    try:
        status, body = http_status(url, headers, timeout)
    except (TimeoutError, RequestException, OSError) as exc:
        return {"provider": "coingecko", "status": "failed", "sample_date": sample_date, "error": str(exc)}
    if status == 200:
        return {"provider": "coingecko", "status": "available", "sample_date": sample_date}
    lowered = body.lower()
    if "allowed time range" in lowered or "365 days" in lowered:
        result_status = "limited"
    elif status == 429:
        result_status = "rate_limited"
    elif status in {401, 403}:
        result_status = "unauthorized"
    else:
        result_status = "failed"
    return {
        "provider": "coingecko",
        "status": result_status,
        "sample_date": sample_date,
        "http_status": status,
        "message": body[:240],
    }


def _default_output_path(plan: DatasetPlan) -> Path:
    dataset_dir = get_runtime_paths().root / "datasets"
    dataset_dir.mkdir(parents=True, exist_ok=True)
    stamp = plan.end.strftime("%Y%m%dT%H%M%SZ")
    return dataset_dir / f"opportunity_dataset_{stamp}.json"


def collect_opportunity_dataset(
    config: dict[str, Any],
    *,
    symbols: list[str] | None = None,
    simulate_days: float | None = None,
    run_days: float | None = None,
    output_path: str | None = None,
    http_get: HttpGet | None = None,
    http_status: Callable[[str, dict[str, str], float], tuple[int, str]] | None = None,
    now: datetime | None = None,
) -> dict[str, Any]:
    opportunity_config = config.get("opportunity", {})
    timeout = _as_float(opportunity_config.get("dataset_timeout_seconds"), 15.0)
    plan = build_dataset_plan(config, simulate_days=simulate_days, run_days=run_days, now=now)
    http_get = http_get or _public_http_get
    selected_symbols = _select_universe(config, symbols=symbols, http_get=http_get, timeout=timeout)
    klines: dict[str, dict[str, list[list[Any]]]] = {}
    counts: dict[str, dict[str, int]] = {}
    for symbol in selected_symbols:
        klines[symbol] = {}
        counts[symbol] = {}
        for interval in plan.intervals:
            rows = _fetch_klines(
                config,
                symbol=symbol,
                interval=interval,
                start=plan.start,
                end=plan.end,
                http_get=http_get,
                timeout=timeout,
            )
            klines[symbol][interval] = rows
            counts[symbol][interval] = len(rows)

    external_history = _probe_external_history(config, plan=plan, timeout=timeout, http_status=http_status)
    path = Path(output_path).expanduser() if output_path else _default_output_path(plan)
    path.parent.mkdir(parents=True, exist_ok=True)
    metadata = {
        "created_at": _iso(datetime.now(timezone.utc)),
        "quote": str(config.get("market", {}).get("default_quote", "USDT")).upper(),
        "symbols": selected_symbols,
        "plan": {
            **{
                key: value
                for key, value in asdict(plan).items()
                if key not in {"start", "simulation_start", "simulation_end", "end"}
            },
            "start": _iso(plan.start),
            "simulation_start": _iso(plan.simulation_start),
            "simulation_end": _iso(plan.simulation_end),
            "end": _iso(plan.end),
        },
        "external_history": external_history,
    }
    dataset = {"metadata": metadata, "klines": klines}
    path.write_text(json.dumps(dataset, ensure_ascii=False, indent=2), encoding="utf-8")
    return {
        "path": str(path),
        "symbols": selected_symbols,
        "counts": counts,
        "plan": metadata["plan"],
        "external_history": external_history,
    }


def parse_query(url: str) -> dict[str, str]:
    """Test helper for fake HTTP clients."""
    parsed = urlparse(url)
    return {key: values[-1] for key, values in parse_qs(parsed.query).items()}
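`_fetch_klines` pages through `/api/v3/klines` by advancing `startTime` one interval past the open time of the last row in each 1000-row chunk, and `http_get` is injectable so tests can fake the client; the `parse_query` helper exists to let such fakes inspect the cursor. A sketch of that roundtrip, re-declaring minimal copies of `_build_url` and `parse_query` so it runs standalone:

```python
from urllib.parse import parse_qs, urlencode, urlparse


def build_url(base_url: str, path: str, params: dict[str, str]) -> str:
    # Same shape as _build_url: base + path + urlencoded query string.
    return f"{base_url.rstrip('/')}{path}?{urlencode(params)}"


def parse_query(url: str) -> dict[str, str]:
    # Same shape as parse_query: last value wins for each key.
    parsed = urlparse(url)
    return {key: values[-1] for key, values in parse_qs(parsed.query).items()}


url = build_url(
    "https://api.binance.com",
    "/api/v3/klines",
    {"symbol": "BTCUSDT", "interval": "1h", "startTime": "1700000000000", "limit": "1000"},
)
params = parse_query(url)
print(params["symbol"], params["startTime"])  # BTCUSDT 1700000000000
```

A fake `http_get` that records `parse_query(url)["startTime"]` per call can then assert the cursor advances by exactly `interval_ms` past each chunk's last open time.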
src/coinhunter/services/opportunity_evaluation_service.py (new file, 536 lines)
@@ -0,0 +1,536 @@
"""Walk-forward evaluation for historical opportunity datasets."""

from __future__ import annotations

import json
from collections import defaultdict
from copy import deepcopy
from datetime import datetime, timezone
from pathlib import Path
from statistics import mean
from typing import Any

from .market_service import normalize_symbol
from .opportunity_service import _action_for_opportunity, _opportunity_thresholds
from .signal_service import (
    get_opportunity_model_weights,
    get_signal_interval,
    score_opportunity_signal,
)

_OPTIMIZE_WEIGHT_KEYS = [
    "trend",
    "compression",
    "breakout_proximity",
    "higher_lows",
    "range_position",
    "fresh_breakout",
    "volume",
    "momentum",
    "setup",
    "trigger",
    "liquidity",
    "volatility_penalty",
    "extension_penalty",
]
_OPTIMIZE_MULTIPLIERS = [0.5, 0.75, 1.25, 1.5]


def _as_float(value: Any, default: float = 0.0) -> float:
    try:
        return float(value)
    except (TypeError, ValueError):
        return default


def _as_int(value: Any, default: int = 0) -> int:
    try:
        return int(value)
    except (TypeError, ValueError):
        return default


def _parse_dt(value: Any) -> datetime | None:
    if not value:
        return None
    try:
        return datetime.fromisoformat(str(value).replace("Z", "+00:00")).astimezone(timezone.utc)
    except ValueError:
        return None


def _iso_from_ms(value: int) -> str:
    return datetime.fromtimestamp(value / 1000, tz=timezone.utc).replace(microsecond=0).isoformat().replace("+00:00", "Z")


def _close(row: list[Any]) -> float:
    return _as_float(row[4])


def _high(row: list[Any]) -> float:
    return _as_float(row[2])


def _low(row: list[Any]) -> float:
    return _as_float(row[3])


def _volume(row: list[Any]) -> float:
    return _as_float(row[5])


def _quote_volume(row: list[Any]) -> float:
    if len(row) > 7:
        return _as_float(row[7])
    return _close(row) * _volume(row)


def _open_ms(row: list[Any]) -> int:
    return int(row[0])


def _ticker_from_window(symbol: str, rows: list[list[Any]]) -> dict[str, Any]:
    first = _close(rows[0])
    last = _close(rows[-1])
    price_change_pct = ((last - first) / first * 100.0) if first else 0.0
    return {
        "symbol": symbol,
        "price_change_pct": price_change_pct,
        "quote_volume": sum(_quote_volume(row) for row in rows),
        "high_price": max(_high(row) for row in rows),
        "low_price": min(_low(row) for row in rows),
    }


def _window_series(rows: list[list[Any]]) -> tuple[list[float], list[float]]:
    return [_close(row) for row in rows], [_volume(row) for row in rows]


def _pct(new: float, old: float) -> float:
    if old == 0:
        return 0.0
    return (new - old) / old


def _path_stats(entry: float, future_rows: list[list[Any]], take_profit: float, stop_loss: float) -> dict[str, Any]:
    if not future_rows:
        return {
            "event": "missing",
            "exit_return": 0.0,
            "final_return": 0.0,
            "max_upside": 0.0,
            "max_drawdown": 0.0,
            "bars": 0,
        }

    for row in future_rows:
        high_return = _pct(_high(row), entry)
        low_return = _pct(_low(row), entry)
        if low_return <= -stop_loss:
            return {
                "event": "stop",
                "exit_return": -stop_loss,
                "final_return": _pct(_close(future_rows[-1]), entry),
                "max_upside": max(_pct(_high(item), entry) for item in future_rows),
                "max_drawdown": min(_pct(_low(item), entry) for item in future_rows),
                "bars": len(future_rows),
            }
        if high_return >= take_profit:
            return {
                "event": "target",
                "exit_return": take_profit,
                "final_return": _pct(_close(future_rows[-1]), entry),
                "max_upside": max(_pct(_high(item), entry) for item in future_rows),
                "max_drawdown": min(_pct(_low(item), entry) for item in future_rows),
                "bars": len(future_rows),
            }

    return {
        "event": "horizon",
        "exit_return": _pct(_close(future_rows[-1]), entry),
        "final_return": _pct(_close(future_rows[-1]), entry),
        "max_upside": max(_pct(_high(item), entry) for item in future_rows),
        "max_drawdown": min(_pct(_low(item), entry) for item in future_rows),
        "bars": len(future_rows),
    }
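`_path_stats` walks forward candle by candle and, within a single candle, checks the stop before the target (the `same_candle_policy: stop_first` reported in the results), which makes the evaluation conservative when one bar spans both levels. A toy run of just that classification logic against hand-made candles (rows use the Binance kline layout `[open_time, open, high, low, close, ...]`; `path_event` is my cut-down name, not the module's):

```python
def pct(new: float, old: float) -> float:
    return (new - old) / old if old else 0.0


def path_event(entry: float, rows: list[list[float]], take_profit: float, stop_loss: float) -> str:
    for row in rows:
        # Stop is checked before target inside each candle ("stop_first" policy).
        if pct(row[3], entry) <= -stop_loss:   # row[3] = low
            return "stop"
        if pct(row[2], entry) >= take_profit:  # row[2] = high
            return "target"
    return "horizon"


entry = 100.0
wide_bar = [[0, 100.0, 103.0, 98.0, 101.0]]  # spans both the +2% target and the -1.5% stop
print(path_event(entry, wide_bar, take_profit=0.02, stop_loss=0.015))                 # stop
print(path_event(entry, [[0, 100.0, 102.5, 99.5, 102.0]], 0.02, 0.015))               # target
print(path_event(entry, [[0, 100.0, 101.0, 99.5, 100.5]], 0.02, 0.015))               # horizon
```

Since intrabar ordering is unknowable from OHLC alone, resolving the ambiguous wide bar as a stop biases the backtest against the strategy rather than for it.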

def _is_correct(action: str, trigger_path: dict[str, Any], setup_path: dict[str, Any]) -> bool:
    if action == "entry":
        return str(trigger_path["event"]) == "target"
    if action == "watch":
        return str(setup_path["event"]) == "target"
    if action == "avoid":
        return str(setup_path["event"]) != "target"
    return False


def _round_float(value: Any, digits: int = 4) -> float:
    return round(_as_float(value), digits)


def _finalize_bucket(bucket: dict[str, Any]) -> dict[str, Any]:
    count = int(bucket["count"])
    correct = int(bucket["correct"])
    returns = bucket["forward_returns"]
    trade_returns = bucket["trade_returns"]
    return {
        "count": count,
        "correct": correct,
        "incorrect": count - correct,
        "accuracy": round(correct / count, 4) if count else 0.0,
        "avg_forward_return": round(mean(returns), 4) if returns else 0.0,
        "avg_trade_return": round(mean(trade_returns), 4) if trade_returns else 0.0,
    }


def _bucket() -> dict[str, Any]:
    return {"count": 0, "correct": 0, "forward_returns": [], "trade_returns": []}


def evaluate_opportunity_dataset(
    config: dict[str, Any],
    *,
    dataset_path: str,
    horizon_hours: float | None = None,
    take_profit: float | None = None,
    stop_loss: float | None = None,
    setup_target: float | None = None,
    lookback: int | None = None,
    top_n: int | None = None,
    max_examples: int = 20,
) -> dict[str, Any]:
    """Evaluate opportunity actions using only point-in-time historical candles."""
    dataset_file = Path(dataset_path).expanduser()
    dataset = json.loads(dataset_file.read_text(encoding="utf-8"))
    metadata = dataset.get("metadata", {})
    plan = metadata.get("plan", {})
    klines = dataset.get("klines", {})
    opportunity_config = config.get("opportunity", {})

    intervals = list(plan.get("intervals") or [])
    configured_interval = get_signal_interval(config)
    primary_interval = configured_interval if configured_interval in intervals else (intervals[0] if intervals else "1h")
    simulation_start = _parse_dt(plan.get("simulation_start"))
    simulation_end = _parse_dt(plan.get("simulation_end"))
    if simulation_start is None or simulation_end is None:
        raise ValueError("dataset metadata must include plan.simulation_start and plan.simulation_end")

    horizon = _as_float(horizon_hours, 0.0)
    if horizon <= 0:
        horizon = _as_float(plan.get("simulate_days"), 0.0) * 24.0
    if horizon <= 0:
        horizon = _as_float(opportunity_config.get("evaluation_horizon_hours"), 24.0)

    take_profit_value = take_profit if take_profit is not None else _as_float(opportunity_config.get("evaluation_take_profit_pct"), 2.0) / 100.0
    stop_loss_value = stop_loss if stop_loss is not None else _as_float(opportunity_config.get("evaluation_stop_loss_pct"), 1.5) / 100.0
    setup_target_value = setup_target if setup_target is not None else _as_float(opportunity_config.get("evaluation_setup_target_pct"), 1.0) / 100.0
    lookback_bars = lookback or _as_int(opportunity_config.get("evaluation_lookback"), 24)
    selected_top_n = top_n or _as_int(opportunity_config.get("top_n"), 10)
    thresholds = _opportunity_thresholds(config)
    horizon_ms = int(horizon * 60 * 60 * 1000)
    start_ms = int(simulation_start.timestamp() * 1000)
    end_ms = int(simulation_end.timestamp() * 1000)

    rows_by_symbol: dict[str, list[list[Any]]] = {}
    index_by_symbol: dict[str, dict[int, int]] = {}
    for symbol, by_interval in klines.items():
        rows = by_interval.get(primary_interval, [])
        normalized = normalize_symbol(symbol)
        if rows:
            rows_by_symbol[normalized] = rows
            index_by_symbol[normalized] = {_open_ms(row): index for index, row in enumerate(rows)}

    decision_times = sorted(
        {
            _open_ms(row)
            for rows in rows_by_symbol.values()
            for row in rows
            if start_ms <= _open_ms(row) < end_ms
        }
    )

    judgments: list[dict[str, Any]] = []
    skipped_missing_future = 0
    skipped_warmup = 0
    for decision_time in decision_times:
        candidates: list[dict[str, Any]] = []
        for symbol, rows in rows_by_symbol.items():
            index = index_by_symbol[symbol].get(decision_time)
            if index is None:
                continue
            window = rows[max(0, index - lookback_bars + 1) : index + 1]
            if len(window) < lookback_bars:
                skipped_warmup += 1
                continue
            future_rows = [row for row in rows[index + 1 :] if _open_ms(row) <= decision_time + horizon_ms]
            if not future_rows:
                skipped_missing_future += 1
                continue
            closes, volumes = _window_series(window)
            ticker = _ticker_from_window(symbol, window)
            opportunity_score, metrics = score_opportunity_signal(closes, volumes, ticker, opportunity_config)
            score = opportunity_score
            metrics["opportunity_score"] = round(opportunity_score, 4)
            metrics["position_weight"] = 0.0
            metrics["research_score"] = 0.0
            action, reasons, _confidence = _action_for_opportunity(score, metrics, thresholds)
            candidates.append(
                {
                    "symbol": symbol,
                    "time": decision_time,
                    "action": action,
                    "score": round(score, 4),
                    "metrics": metrics,
                    "reasons": reasons,
                    "entry_price": _close(window[-1]),
                    "future_rows": future_rows,
                }
            )

        for rank, candidate in enumerate(sorted(candidates, key=lambda item: item["score"], reverse=True)[:selected_top_n], start=1):
            trigger_path = _path_stats(candidate["entry_price"], candidate["future_rows"], take_profit_value, stop_loss_value)
            setup_path = _path_stats(candidate["entry_price"], candidate["future_rows"], setup_target_value, stop_loss_value)
            correct = _is_correct(candidate["action"], trigger_path, setup_path)
            judgments.append(
                {
                    "time": _iso_from_ms(candidate["time"]),
                    "rank": rank,
                    "symbol": candidate["symbol"],
                    "action": candidate["action"],
                    "score": candidate["score"],
                    "correct": correct,
                    "entry_price": round(candidate["entry_price"], 8),
                    "forward_return": _round_float(trigger_path["final_return"]),
                    "max_upside": _round_float(trigger_path["max_upside"]),
                    "max_drawdown": _round_float(trigger_path["max_drawdown"]),
                    "trade_return": _round_float(trigger_path["exit_return"]) if candidate["action"] == "entry" else 0.0,
                    "trigger_event": trigger_path["event"],
                    "setup_event": setup_path["event"],
                    "metrics": candidate["metrics"],
                    "reason": candidate["reasons"][0] if candidate["reasons"] else "",
                }
            )

    overall = _bucket()
    by_action: dict[str, dict[str, Any]] = defaultdict(_bucket)
    trigger_returns: list[float] = []
    for judgment in judgments:
        action = judgment["action"]
        for bucket in (overall, by_action[action]):
            bucket["count"] += 1
            bucket["correct"] += 1 if judgment["correct"] else 0
            bucket["forward_returns"].append(judgment["forward_return"])
            if action == "entry":
                bucket["trade_returns"].append(judgment["trade_return"])
        if action == "entry":
            trigger_returns.append(judgment["trade_return"])

    by_action_result = {action: _finalize_bucket(bucket) for action, bucket in sorted(by_action.items())}
    incorrect_examples = [item for item in judgments if not item["correct"]][:max_examples]
    examples = judgments[:max_examples]
    trigger_count = by_action_result.get("entry", {}).get("count", 0)
    trigger_correct = by_action_result.get("entry", {}).get("correct", 0)
    return {
        "summary": {
            **_finalize_bucket(overall),
            "decision_times": len(decision_times),
            "symbols": sorted(rows_by_symbol),
            "interval": primary_interval,
            "top_n": selected_top_n,
            "skipped_warmup": skipped_warmup,
            "skipped_missing_future": skipped_missing_future,
        },
        "by_action": by_action_result,
        "trade_simulation": {
            "trigger_trades": trigger_count,
            "wins": trigger_correct,
            "losses": trigger_count - trigger_correct,
            "win_rate": round(trigger_correct / trigger_count, 4) if trigger_count else 0.0,
            "avg_trade_return": round(mean(trigger_returns), 4) if trigger_returns else 0.0,
        },
        "rules": {
            "dataset": str(dataset_file),
            "interval": primary_interval,
            "horizon_hours": round(horizon, 4),
            "lookback_bars": lookback_bars,
            "take_profit": round(take_profit_value, 4),
            "stop_loss": round(stop_loss_value, 4),
            "setup_target": round(setup_target_value, 4),
            "same_candle_policy": "stop_first",
            "research_mode": "disabled: dataset has no point-in-time research snapshots",
        },
        "examples": examples,
        "incorrect_examples": incorrect_examples,
|
||||
}
|
||||
|
||||
|
||||
def _objective(result: dict[str, Any]) -> float:
|
||||
summary = result.get("summary", {})
|
||||
by_action = result.get("by_action", {})
|
||||
trade = result.get("trade_simulation", {})
|
||||
count = _as_float(summary.get("count"))
|
||||
trigger_trades = _as_float(trade.get("trigger_trades"))
|
||||
trigger_rate = trigger_trades / count if count else 0.0
|
||||
avg_trade_return = _as_float(trade.get("avg_trade_return"))
|
||||
bounded_trade_return = max(min(avg_trade_return, 0.03), -0.03)
|
||||
trigger_coverage = min(trigger_rate / 0.08, 1.0)
|
||||
return round(
|
||||
0.45 * _as_float(summary.get("accuracy"))
|
||||
+ 0.20 * _as_float(by_action.get("watch", {}).get("accuracy"))
|
||||
+ 0.25 * _as_float(trade.get("win_rate"))
|
||||
+ 6.0 * bounded_trade_return
|
||||
+ 0.05 * trigger_coverage,
|
||||
6,
|
||||
)
|
||||
|
||||
|
||||
def _copy_config_with_weights(config: dict[str, Any], weights: dict[str, float]) -> dict[str, Any]:
|
||||
candidate = deepcopy(config)
|
||||
candidate.setdefault("opportunity", {})["model_weights"] = weights
|
||||
return candidate
|
||||
|
||||
|
||||
def _evaluation_snapshot(result: dict[str, Any], objective: float, weights: dict[str, float]) -> dict[str, Any]:
|
||||
return {
|
||||
"objective": objective,
|
||||
"weights": {key: round(value, 4) for key, value in sorted(weights.items())},
|
||||
"summary": result.get("summary", {}),
|
||||
"by_action": result.get("by_action", {}),
|
||||
"trade_simulation": result.get("trade_simulation", {}),
|
||||
}
|
||||
|
||||
|
||||
def optimize_opportunity_model(
|
||||
config: dict[str, Any],
|
||||
*,
|
||||
dataset_path: str,
|
||||
horizon_hours: float | None = None,
|
||||
take_profit: float | None = None,
|
||||
stop_loss: float | None = None,
|
||||
setup_target: float | None = None,
|
||||
lookback: int | None = None,
|
||||
top_n: int | None = None,
|
||||
passes: int = 2,
|
||||
) -> dict[str, Any]:
|
||||
"""Coordinate-search model weights against a walk-forward dataset.
|
||||
|
||||
This intentionally optimizes model feature weights only. Entry/watch policy
|
||||
thresholds remain fixed so the search improves signal construction instead
|
||||
of fitting decision cutoffs to a sample.
|
||||
"""
|
||||
base_weights = get_opportunity_model_weights(config.get("opportunity", {}))
|
||||
|
||||
def evaluate(weights: dict[str, float]) -> tuple[dict[str, Any], float]:
|
||||
result = evaluate_opportunity_dataset(
|
||||
_copy_config_with_weights(config, weights),
|
||||
dataset_path=dataset_path,
|
||||
horizon_hours=horizon_hours,
|
||||
take_profit=take_profit,
|
||||
stop_loss=stop_loss,
|
||||
setup_target=setup_target,
|
||||
lookback=lookback,
|
||||
top_n=top_n,
|
||||
max_examples=0,
|
||||
)
|
||||
return result, _objective(result)
|
||||
|
||||
baseline_result, baseline_objective = evaluate(base_weights)
|
||||
best_weights = dict(base_weights)
|
||||
best_result = baseline_result
|
||||
best_objective = baseline_objective
|
||||
evaluations = 1
|
||||
history: list[dict[str, Any]] = [
|
||||
{
|
||||
"pass": 0,
|
||||
"key": "baseline",
|
||||
"multiplier": 1.0,
|
||||
"objective": baseline_objective,
|
||||
"accuracy": baseline_result["summary"]["accuracy"],
|
||||
"trigger_win_rate": baseline_result["trade_simulation"]["win_rate"],
|
||||
}
|
||||
]
|
||||
|
||||
for pass_index in range(max(passes, 0)):
|
||||
improved = False
|
||||
for key in _OPTIMIZE_WEIGHT_KEYS:
|
||||
current_value = best_weights.get(key, 0.0)
|
||||
if current_value <= 0:
|
||||
continue
|
||||
local_best_weights = best_weights
|
||||
local_best_result = best_result
|
||||
local_best_objective = best_objective
|
||||
local_best_multiplier = 1.0
|
||||
for multiplier in _OPTIMIZE_MULTIPLIERS:
|
||||
candidate_weights = dict(best_weights)
|
||||
candidate_weights[key] = round(max(current_value * multiplier, 0.01), 4)
|
||||
candidate_result, candidate_objective = evaluate(candidate_weights)
|
||||
evaluations += 1
|
||||
history.append(
|
||||
{
|
||||
"pass": pass_index + 1,
|
||||
"key": key,
|
||||
"multiplier": multiplier,
|
||||
"objective": candidate_objective,
|
||||
"accuracy": candidate_result["summary"]["accuracy"],
|
||||
"trigger_win_rate": candidate_result["trade_simulation"]["win_rate"],
|
||||
}
|
||||
)
|
||||
if candidate_objective > local_best_objective:
|
||||
local_best_weights = candidate_weights
|
||||
local_best_result = candidate_result
|
||||
local_best_objective = candidate_objective
|
||||
local_best_multiplier = multiplier
|
||||
if local_best_objective > best_objective:
|
||||
best_weights = local_best_weights
|
||||
best_result = local_best_result
|
||||
best_objective = local_best_objective
|
||||
improved = True
|
||||
history.append(
|
||||
{
|
||||
"pass": pass_index + 1,
|
||||
"key": key,
|
||||
"multiplier": local_best_multiplier,
|
||||
"objective": best_objective,
|
||||
"accuracy": best_result["summary"]["accuracy"],
|
||||
"trigger_win_rate": best_result["trade_simulation"]["win_rate"],
|
||||
"selected": True,
|
||||
}
|
||||
)
|
||||
if not improved:
|
||||
break
|
||||
|
||||
recommended_config = {
|
||||
f"opportunity.model_weights.{key}": round(value, 4)
|
||||
for key, value in sorted(best_weights.items())
|
||||
}
|
||||
return {
|
||||
"baseline": _evaluation_snapshot(baseline_result, baseline_objective, base_weights),
|
||||
"best": _evaluation_snapshot(best_result, best_objective, best_weights),
|
||||
"improvement": {
|
||||
"objective": round(best_objective - baseline_objective, 6),
|
||||
"accuracy": round(
|
||||
_as_float(best_result["summary"].get("accuracy")) - _as_float(baseline_result["summary"].get("accuracy")),
|
||||
4,
|
||||
),
|
||||
"trigger_win_rate": round(
|
||||
_as_float(best_result["trade_simulation"].get("win_rate"))
|
||||
- _as_float(baseline_result["trade_simulation"].get("win_rate")),
|
||||
4,
|
||||
),
|
||||
"avg_trade_return": round(
|
||||
_as_float(best_result["trade_simulation"].get("avg_trade_return"))
|
||||
- _as_float(baseline_result["trade_simulation"].get("avg_trade_return")),
|
||||
4,
|
||||
),
|
||||
},
|
||||
"recommended_config": recommended_config,
|
||||
"search": {
|
||||
"passes": passes,
|
||||
"evaluations": evaluations,
|
||||
"optimized": "model_weights_only",
|
||||
"thresholds": "fixed",
|
||||
"objective": "0.45*accuracy + 0.20*setup_accuracy + 0.25*trigger_win_rate + 6*avg_trade_return + 0.05*trigger_coverage",
|
||||
},
|
||||
"history": history[-20:],
|
||||
}
|
||||
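For reference, the search objective above blends overall accuracy, watch-bucket accuracy, trigger win rate, a clipped average trade return, and trigger coverage. A minimal standalone sketch with toy numbers (the `objective` helper and `toy` dict below are illustrative restatements, not code from this repo):

```python
# Illustrative restatement of the evaluation objective; `toy` is made-up data
# shaped like the report returned by evaluate_opportunity_dataset.

def objective(result: dict) -> float:
    summary = result.get("summary", {})
    trade = result.get("trade_simulation", {})
    count = float(summary.get("count", 0) or 0)
    trigger_rate = float(trade.get("trigger_trades", 0)) / count if count else 0.0
    # Clip the return term so one outlier run cannot dominate the score.
    bounded_return = max(min(float(trade.get("avg_trade_return", 0.0)), 0.03), -0.03)
    coverage = min(trigger_rate / 0.08, 1.0)
    watch_accuracy = float(result.get("by_action", {}).get("watch", {}).get("accuracy", 0.0))
    return round(
        0.45 * float(summary.get("accuracy", 0.0))
        + 0.20 * watch_accuracy
        + 0.25 * float(trade.get("win_rate", 0.0))
        + 6.0 * bounded_return
        + 0.05 * coverage,
        6,
    )

toy = {
    "summary": {"count": 100, "accuracy": 0.6},
    "by_action": {"watch": {"accuracy": 0.5}},
    "trade_simulation": {"trigger_trades": 8, "win_rate": 0.55, "avg_trade_return": 0.01},
}
print(objective(toy))
```

With these toy inputs the trigger rate is exactly the 0.08 coverage target, so the coverage term saturates at its 0.05 maximum.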
227
src/coinhunter/services/opportunity_service.py
Normal file
@@ -0,0 +1,227 @@
"""Opportunity scanning services."""

from __future__ import annotations

from dataclasses import asdict, dataclass
from statistics import mean
from typing import Any

from ..audit import audit_event
from .account_service import get_positions
from .market_service import base_asset, get_scan_universe, normalize_symbol
from .research_service import get_external_research
from .signal_service import get_signal_interval, score_opportunity_signal


@dataclass
class OpportunityRecommendation:
    symbol: str
    action: str
    score: float
    confidence: int
    reasons: list[str]
    metrics: dict[str, float]


def _opportunity_thresholds(config: dict[str, Any]) -> dict[str, float]:
    opportunity_config = config.get("opportunity", {})
    return {
        "entry_threshold": float(opportunity_config.get("entry_threshold", 1.5)),
        "watch_threshold": float(opportunity_config.get("watch_threshold", 0.6)),
        "min_trigger_score": float(opportunity_config.get("min_trigger_score", 0.45)),
        "min_setup_score": float(opportunity_config.get("min_setup_score", 0.35)),
        "overlap_penalty": float(opportunity_config.get("overlap_penalty", 0.6)),
    }


def _clamp(value: float, low: float, high: float) -> float:
    return min(max(value, low), high)


def _as_float(value: Any, default: float = 0.0) -> float:
    try:
        return float(value)
    except (TypeError, ValueError):
        return default


def _series_from_klines(klines: list[list[Any]]) -> tuple[list[float], list[float]]:
    return [float(item[4]) for item in klines], [float(item[5]) for item in klines]


def _normalized_research_score(value: Any) -> float:
    """Normalize provider research inputs to 0..1.

    Provider values can be expressed as either 0..1 or 0..100.
    """
    score = _as_float(value)
    if score > 1.0:
        score = score / 100.0
    return _clamp(score, 0.0, 1.0)


def _research_signals(research: dict[str, Any] | None) -> dict[str, float]:
    research = research or {}
    return {
        "fundamental": _normalized_research_score(research.get("fundamental")),
        "tokenomics": _normalized_research_score(research.get("tokenomics")),
        "catalyst": _normalized_research_score(research.get("catalyst")),
        "adoption": _normalized_research_score(research.get("adoption")),
        "smart_money": _normalized_research_score(research.get("smart_money")),
        "unlock_risk": _normalized_research_score(research.get("unlock_risk")),
        "regulatory_risk": _normalized_research_score(research.get("regulatory_risk")),
        "research_confidence": _normalized_research_score(research.get("research_confidence")),
    }


def _confidence_from_edge(edge_score: float) -> int:
    return int(_clamp((edge_score + 1.0) / 2.0, 0.0, 1.0) * 100)


def _action_for_opportunity(score: float, metrics: dict[str, float], thresholds: dict[str, float]) -> tuple[str, list[str], int]:
    reasons: list[str] = []
    extension_penalty = metrics.get("extension_penalty", 0.0)
    recent_runup = metrics.get("recent_runup", 0.0)
    breakout_pct = metrics.get("breakout_pct", 0.0)
    setup_score = metrics.get("setup_score", 0.0)
    trigger_score = metrics.get("trigger_score", 0.0)
    edge_score = metrics.get("edge_score", 0.0)

    min_trigger_score = thresholds["min_trigger_score"]
    min_setup_score = thresholds["min_setup_score"]
    confidence = _confidence_from_edge(edge_score)

    # Avoid: overextended or clearly negative edge, do not enter
    if extension_penalty >= 1.0 and (recent_runup >= 0.10 or breakout_pct >= 0.03):
        reasons.append("price is already extended, chasing here is risky")
        return "avoid", reasons, confidence

    if edge_score < -0.2:
        reasons.append("overall signal quality is poor")
        return "avoid", reasons, confidence

    # Entry: high-confidence breakout with setup, trigger, and limited extension
    if (
        edge_score >= 0.3
        and trigger_score >= min_trigger_score
        and setup_score >= min_setup_score
        and extension_penalty < 0.5
    ):
        reasons.append("fresh breakout trigger with clean setup and manageable extension")
        return "entry", reasons, confidence

    # Watch: constructive but not clean enough
    if edge_score >= 0.0 and setup_score >= min_setup_score:
        reasons.append("setup is constructive but the trigger is not clean enough yet")
        return "watch", reasons, confidence

    # Default avoid
    reasons.append("setup, trigger, or overall quality is too weak")
    return "avoid", reasons, confidence


def _add_research_metrics(metrics: dict[str, float], research: dict[str, Any] | None) -> None:
    research_signals = _research_signals(research)
    for key, value in research_signals.items():
        metrics[key] = round(value, 4)
    metrics["quality"] = round(
        mean(
            [
                research_signals["fundamental"],
                research_signals["tokenomics"],
                research_signals["catalyst"],
                research_signals["adoption"],
                research_signals["smart_money"],
            ]
        ),
        4,
    )


def _research_score(research: dict[str, Any] | None, weights: dict[str, float]) -> float:
    signals = _research_signals(research)
    return (
        weights.get("fundamental", 0.8) * signals["fundamental"]
        + weights.get("tokenomics", 0.7) * signals["tokenomics"]
        + weights.get("catalyst", 0.5) * signals["catalyst"]
        + weights.get("adoption", 0.4) * signals["adoption"]
        + weights.get("smart_money", 0.3) * signals["smart_money"]
        - weights.get("unlock_penalty", 0.8) * signals["unlock_risk"]
        - weights.get("regulatory_penalty", 0.4) * signals["regulatory_risk"]
    )


def scan_opportunities(
    config: dict[str, Any],
    *,
    spot_client: Any,
    symbols: list[str] | None = None,
) -> dict[str, Any]:
    opportunity_config = config.get("opportunity", {})
    weights = opportunity_config.get("weights", {})
    ignore_dust = bool(opportunity_config.get("ignore_dust", True))
    interval = get_signal_interval(config)
    thresholds = _opportunity_thresholds(config)
    scan_limit = int(opportunity_config.get("scan_limit", 50))
    top_n = int(opportunity_config.get("top_n", 10))
    quote = str(config.get("market", {}).get("default_quote", "USDT")).upper()
    held_positions = get_positions(config, spot_client=spot_client, ignore_dust=ignore_dust)["positions"]
    concentration_map = {normalize_symbol(item["symbol"]): float(item["notional_usdt"]) for item in held_positions}
    total_held = sum(concentration_map.values()) or 1.0

    universe = get_scan_universe(config, spot_client=spot_client, symbols=symbols)[:scan_limit]
    external_research = get_external_research(
        config,
        symbols=[normalize_symbol(ticker["symbol"]) for ticker in universe],
        quote=quote,
    )
    recommendations = []
    for ticker in universe:
        symbol = normalize_symbol(ticker["symbol"])
        klines = spot_client.klines(symbol=symbol, interval=interval, limit=24)
        closes, volumes = _series_from_klines(klines)
        concentration = concentration_map.get(symbol, 0.0) / total_held
        opportunity_score, metrics = score_opportunity_signal(closes, volumes, ticker, opportunity_config)
        score = opportunity_score - thresholds["overlap_penalty"] * concentration
        metrics["opportunity_score"] = round(opportunity_score, 4)
        metrics["position_weight"] = round(concentration, 4)
        research = external_research.get(symbol, {})
        research_score = _research_score(research, weights)
        score += research_score
        metrics["research_score"] = round(research_score, 4)
        _add_research_metrics(metrics, research)
        action, reasons, confidence = _action_for_opportunity(score, metrics, thresholds)
        if symbol.endswith(quote):
            reasons.append(f"base asset {base_asset(symbol, quote)} passed liquidity and tradability filters")
        if concentration > 0:
            reasons.append("symbol is already held, so the opportunity score is discounted for overlap")
        recommendations.append(
            asdict(
                OpportunityRecommendation(
                    symbol=symbol,
                    action=action,
                    score=round(score, 4),
                    confidence=confidence,
                    reasons=reasons,
                    metrics=metrics,
                )
            )
        )
    payload = {"recommendations": sorted(recommendations, key=lambda item: item["score"], reverse=True)[:top_n]}
    audit_event(
        "opportunity_scan_generated",
        {
            "market_type": "spot",
            "symbol": None,
            "side": None,
            "qty": None,
            "quote_amount": None,
            "order_type": None,
            "dry_run": True,
            "request_payload": {"mode": "scan", "symbols": [normalize_symbol(item) for item in symbols or []]},
            "response_payload": payload,
            "status": "generated",
            "error": None,
        },
    )
    return payload
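The provider-score normalization used above accepts inputs on either a 0..1 or 0..100 scale and maps both into 0..1. A minimal standalone sketch (the `normalized_research_score` function below mirrors `_normalized_research_score` for illustration):

```python
# Standalone sketch of the provider-score normalization: values above 1.0
# are treated as percentages, non-numeric inputs fall back to 0.0, and the
# result is clamped into [0, 1].

def normalized_research_score(value) -> float:
    try:
        score = float(value)
    except (TypeError, ValueError):
        score = 0.0
    if score > 1.0:
        score = score / 100.0  # treat values above 1 as a 0..100 percentage
    return min(max(score, 0.0), 1.0)

print(normalized_research_score(0.8))   # -> 0.8 (already on the 0..1 scale)
print(normalized_research_score(80))    # -> 0.8 (percentage scale)
print(normalized_research_score(None))  # -> 0.0 (missing input)
```

One consequence of this convention: a provider cannot express a score between 1.0 and 2.0 directly, since anything above 1.0 is divided by 100.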
@@ -1,57 +1,113 @@
-"""Portfolio state helpers (positions.json, reconcile with exchange)."""
-from ..runtime import get_runtime_paths
-from .file_utils import load_json_locked, save_json_locked
-from .trade_common import bj_now_iso
+"""Portfolio analysis and position management signals."""
-
-PATHS = get_runtime_paths()
-POSITIONS_FILE = PATHS.positions_file
-POSITIONS_LOCK = PATHS.positions_lock
+
+from __future__ import annotations
+
+from dataclasses import asdict, dataclass
+from typing import Any
+
+from ..audit import audit_event
+from .account_service import get_positions
+from .market_service import normalize_symbol
+from .signal_service import (
+    get_signal_interval,
+    get_signal_weights,
+    score_portfolio_signal,
+)
-
-
-def load_positions() -> list:
-    return load_json_locked(POSITIONS_FILE, POSITIONS_LOCK, {"positions": []}).get("positions", [])
+
+
+@dataclass
+class PortfolioRecommendation:
+    symbol: str
+    action: str
+    score: float
+    reasons: list[str]
+    metrics: dict[str, float]
-
-
-def save_positions(positions: list):
-    save_json_locked(POSITIONS_FILE, POSITIONS_LOCK, {"positions": positions})
-
-
-def upsert_position(positions: list, position: dict):
-    sym = position["symbol"]
-    for i, existing in enumerate(positions):
-        if existing.get("symbol") == sym:
-            positions[i] = position
-            return positions
-    positions.append(position)
-    return positions
-
-
-def reconcile_positions_with_exchange(ex, positions: list):
-    from .exchange_service import fetch_balances
-
-    balances = fetch_balances(ex)
-    existing_by_symbol = {p.get("symbol"): p for p in positions}
-    reconciled = []
-    for asset, qty in balances.items():
-        if asset == "USDT":
-            continue
-        if qty <= 0:
-            continue
-        sym = f"{asset}USDT"
-        old = existing_by_symbol.get(sym, {})
-        reconciled.append(
-            {
-                "account_id": old.get("account_id", "binance-main"),
-                "symbol": sym,
-                "base_asset": asset,
-                "quote_asset": "USDT",
-                "market_type": "spot",
-                "quantity": qty,
-                "avg_cost": old.get("avg_cost"),
-                "opened_at": old.get("opened_at", bj_now_iso()),
-                "updated_at": bj_now_iso(),
-                "note": old.get("note", "Reconciled from Binance balances"),
+
+
+def _portfolio_thresholds(config: dict[str, Any]) -> dict[str, float]:
+    portfolio_config = config.get("portfolio", {})
+    return {
+        "add_threshold": float(portfolio_config.get("add_threshold", 1.5)),
+        "hold_threshold": float(portfolio_config.get("hold_threshold", 0.6)),
+        "trim_threshold": float(portfolio_config.get("trim_threshold", 0.2)),
+        "exit_threshold": float(portfolio_config.get("exit_threshold", -0.2)),
+        "max_position_weight": float(portfolio_config.get("max_position_weight", 0.6)),
+    }
+
+
+def _action_for_position(score: float, concentration: float, thresholds: dict[str, float]) -> tuple[str, list[str]]:
+    reasons: list[str] = []
+    max_weight = thresholds["max_position_weight"]
+    if concentration >= max_weight and score < thresholds["hold_threshold"]:
+        reasons.append("position weight is above the portfolio risk budget")
+        return "trim", reasons
+    if score >= thresholds["add_threshold"] and concentration < max_weight:
+        reasons.append("market signal is strong and position still has room")
+        return "add", reasons
+    if score >= thresholds["hold_threshold"]:
+        reasons.append("market structure remains supportive for holding")
+        return "hold", reasons
+    if score <= thresholds["exit_threshold"]:
+        reasons.append("market signal has weakened enough to justify an exit review")
+        return "exit", reasons
+    if score <= thresholds["trim_threshold"]:
+        reasons.append("edge has faded and the position should be reduced")
+        return "trim", reasons
+    reasons.append("signal is mixed and the position needs review")
+    return "review", reasons
+
+
+def analyze_portfolio(config: dict[str, Any], *, spot_client: Any) -> dict[str, Any]:
+    quote = str(config.get("market", {}).get("default_quote", "USDT")).upper()
+    signal_weights = get_signal_weights(config)
+    interval = get_signal_interval(config)
+    thresholds = _portfolio_thresholds(config)
+    positions = get_positions(config, spot_client=spot_client)["positions"]
+    positions = [item for item in positions if item["symbol"] != quote]
+    total_notional = sum(item["notional_usdt"] for item in positions) or 1.0
+    recommendations = []
+    for position in positions:
+        symbol = normalize_symbol(position["symbol"])
+        klines = spot_client.klines(symbol=symbol, interval=interval, limit=24)
+        closes = [float(item[4]) for item in klines]
+        volumes = [float(item[5]) for item in klines]
+        tickers = spot_client.ticker_stats([symbol], window="1d")
+        ticker = tickers[0] if tickers else {"priceChangePercent": "0"}
+        concentration = position["notional_usdt"] / total_notional
+        score, metrics = score_portfolio_signal(
+            closes,
+            volumes,
+            {"price_change_pct": float(ticker.get("priceChangePercent") or 0.0)},
+            signal_weights,
+        )
-            }
-        )
-    save_positions(reconciled)
-    return reconciled, balances
+        action, reasons = _action_for_position(score, concentration, thresholds)
+        metrics["position_weight"] = round(concentration, 4)
+        recommendations.append(
+            asdict(
+                PortfolioRecommendation(
+                    symbol=symbol,
+                    action=action,
+                    score=round(score, 4),
+                    reasons=reasons,
+                    metrics=metrics,
+                )
+            )
+        )
+    payload = {"recommendations": sorted(recommendations, key=lambda item: item["score"], reverse=True)}
+    audit_event(
+        "opportunity_portfolio_generated",
+        {
+            "market_type": "spot",
+            "symbol": None,
+            "side": None,
+            "qty": None,
+            "quote_amount": None,
+            "order_type": None,
+            "dry_run": True,
+            "request_payload": {"mode": "portfolio"},
+            "response_payload": payload,
+            "status": "generated",
+            "error": None,
+        },
+    )
+    return payload
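The position-action ladder in `_action_for_position` above checks the concentration guard first, then walks the thresholds from strongest to weakest signal. A minimal standalone sketch using the default thresholds from `_portfolio_thresholds` (the `action_for_position` helper below is illustrative, not repo code):

```python
# Standalone sketch of the position-action ladder, hard-coding the default
# thresholds (add 1.5, hold 0.6, trim 0.2, exit -0.2, max weight 0.6).

def action_for_position(score: float, concentration: float) -> str:
    add_t, hold_t, trim_t, exit_t, max_w = 1.5, 0.6, 0.2, -0.2, 0.6
    if concentration >= max_w and score < hold_t:
        return "trim"    # oversized position without a supportive signal
    if score >= add_t and concentration < max_w:
        return "add"     # strong signal, room left in the risk budget
    if score >= hold_t:
        return "hold"    # structure still supportive
    if score <= exit_t:
        return "exit"    # signal weak enough for an exit review
    if score <= trim_t:
        return "trim"    # edge faded, reduce
    return "review"      # mixed signal

print(action_for_position(1.8, 0.1))   # -> add
print(action_for_position(0.7, 0.7))   # -> hold (oversized, but signal supportive)
print(action_for_position(-0.5, 0.1))  # -> exit
```

Note that an oversized position with a supportive signal (second call) still falls through to "hold": the concentration guard only forces a trim when the score is below the hold threshold.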
@@ -1,25 +0,0 @@
"""Analysis helpers for precheck."""

from __future__ import annotations

from .. import precheck as precheck_module


def analyze_trigger(snapshot: dict, state: dict) -> dict:
    return precheck_module.analyze_trigger(snapshot, state)


def build_failure_payload(exc: Exception) -> dict:
    return {
        "generated_at": precheck_module.utc_iso(),
        "status": "deep_analysis_required",
        "should_analyze": True,
        "pending_trigger": True,
        "cooldown_active": False,
        "reasons": ["precheck-error"],
        "hard_reasons": ["precheck-error"],
        "soft_reasons": [],
        "soft_score": 0,
        "details": [str(exc)],
        "compact_summary": f"Precheck failed; falling back to deep analysis: {exc}",
    }
@@ -1,30 +0,0 @@
"""Service entrypoint for precheck workflows."""

from __future__ import annotations

import json
import sys

from . import precheck_analysis, precheck_snapshot, precheck_state


def run(argv: list[str] | None = None) -> int:
    argv = list(sys.argv[1:] if argv is None else argv)

    if argv and argv[0] == "--ack":
        precheck_state.ack_analysis(" ".join(argv[1:]).strip())
        return 0
    if argv and argv[0] == "--mark-run-requested":
        precheck_state.mark_run_requested(" ".join(argv[1:]).strip())
        return 0

    try:
        state = precheck_state.sanitize_state_for_stale_triggers(precheck_state.load_state())
        snapshot = precheck_snapshot.build_snapshot()
        analysis = precheck_analysis.analyze_trigger(snapshot, state)
        precheck_state.save_state(precheck_state.update_state_after_observation(state, snapshot, analysis))
        print(json.dumps(analysis, ensure_ascii=False, indent=2))
        return 0
    except Exception as exc:
        print(json.dumps(precheck_analysis.build_failure_payload(exc), ensure_ascii=False, indent=2))
        return 0
@@ -1,9 +0,0 @@
"""Snapshot construction helpers for precheck."""

from __future__ import annotations

from .. import precheck as precheck_module


def build_snapshot() -> dict:
    return precheck_module.build_snapshot()
@@ -1,47 +0,0 @@
"""State helpers for precheck orchestration."""

from __future__ import annotations

import json

from .. import precheck as precheck_module


def load_state() -> dict:
    return precheck_module.load_state()


def save_state(state: dict) -> None:
    precheck_module.save_state(state)


def sanitize_state_for_stale_triggers(state: dict) -> dict:
    return precheck_module.sanitize_state_for_stale_triggers(state)


def update_state_after_observation(state: dict, snapshot: dict, analysis: dict) -> dict:
    return precheck_module.update_state_after_observation(state, snapshot, analysis)


def mark_run_requested(note: str = "") -> dict:
    state = load_state()
    state["run_requested_at"] = precheck_module.utc_iso()
    state["run_request_note"] = note
    save_state(state)
    payload = {"ok": True, "run_requested_at": state["run_requested_at"], "note": note}
    print(json.dumps(payload, ensure_ascii=False))
    return payload


def ack_analysis(note: str = "") -> dict:
    state = load_state()
    state["last_deep_analysis_at"] = precheck_module.utc_iso()
    state["pending_trigger"] = False
    state["pending_reasons"] = []
    state["last_ack_note"] = note
    save_state(state)
    payload = {"ok": True, "acked_at": state["last_deep_analysis_at"], "note": note}
    print(json.dumps(payload, ensure_ascii=False))
    return payload
227
src/coinhunter/services/research_service.py
Normal file
@@ -0,0 +1,227 @@
"""External research signal providers for opportunity scoring."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import time
|
||||
from collections.abc import Callable
|
||||
from math import log10
|
||||
from typing import Any
|
||||
from urllib.parse import urlencode
|
||||
|
||||
import requests
|
||||
from requests.exceptions import RequestException
|
||||
|
||||
from .market_service import base_asset, normalize_symbol
|
||||
|
||||
HttpGet = Callable[[str, dict[str, str], float], Any]
|
||||
_PUBLIC_HTTP_ATTEMPTS = 5
|
||||
|
||||
|
||||
def _clamp(value: float, low: float = 0.0, high: float = 1.0) -> float:
|
||||
return min(max(value, low), high)
|
||||
|
||||
|
||||
def _as_float(value: Any, default: float = 0.0) -> float:
|
||||
try:
|
||||
return float(value)
|
||||
except (TypeError, ValueError):
|
||||
return default
|
||||
|
||||
|
||||
def _safe_ratio(numerator: float, denominator: float) -> float:
|
||||
if denominator <= 0:
|
||||
return 0.0
|
||||
return numerator / denominator
|
||||
|
||||
|
||||
def _log_score(value: float, *, floor: float, span: float) -> float:
|
||||
if value <= 0:
|
||||
return 0.0
|
||||
return _clamp((log10(value) - floor) / span)
|
||||
|
||||
|
||||
def _pct_score(value: float, *, low: float, high: float) -> float:
|
||||
if high <= low:
|
||||
return 0.0
|
||||
return _clamp((value - low) / (high - low))
|
||||
|
||||
|
||||
def _public_http_get(url: str, headers: dict[str, str], timeout: float) -> Any:
|
||||
last_error: RequestException | None = None
|
||||
for attempt in range(_PUBLIC_HTTP_ATTEMPTS):
|
||||
try:
|
||||
response = requests.get(url, headers=headers, timeout=timeout)
|
||||
response.raise_for_status()
|
||||
return response.json()
|
||||
except RequestException as exc:
|
||||
last_error = exc
|
||||
if attempt < _PUBLIC_HTTP_ATTEMPTS - 1:
|
||||
time.sleep(0.5 * (attempt + 1))
|
||||
if last_error is not None:
|
||||
raise last_error
|
||||
raise RuntimeError("public HTTP request failed")
|
||||
|
||||
|
||||
def _build_url(base_url: str, path: str, params: dict[str, str]) -> str:
|
||||
return f"{base_url.rstrip('/')}{path}?{urlencode(params)}"
|
||||
|
||||
|
||||
def _chunked(items: list[str], size: int) -> list[list[str]]:
|
||||
return [items[index : index + size] for index in range(0, len(items), size)]
|
||||
|
||||
|
||||
def _coingecko_market_to_signals(row: dict[str, Any], *, is_trending: bool = False) -> dict[str, float]:
    market_cap = _as_float(row.get("market_cap"))
    fdv = _as_float(row.get("fully_diluted_valuation"))
    volume = _as_float(row.get("total_volume"))
    rank = _as_float(row.get("market_cap_rank"), 9999.0)
    circulating = _as_float(row.get("circulating_supply"))
    total_supply = _as_float(row.get("total_supply"))
    max_supply = _as_float(row.get("max_supply"))
    supply_cap = max_supply or total_supply

    rank_score = _clamp(1.0 - (log10(max(rank, 1.0)) / 4.0))
    size_score = _log_score(market_cap, floor=7.0, span=5.0)
    volume_to_mcap = _safe_ratio(volume, market_cap)
    liquidity_quality = _clamp(volume_to_mcap / 0.10)

    fdv_ratio = _safe_ratio(fdv, market_cap) if fdv and market_cap else 1.0
    fdv_dilution_risk = _clamp((fdv_ratio - 1.0) / 4.0)
    supply_unlocked = _clamp(_safe_ratio(circulating, supply_cap)) if supply_cap else max(0.0, 1.0 - fdv_dilution_risk)
    supply_dilution_risk = 1.0 - supply_unlocked
    unlock_risk = max(fdv_dilution_risk, supply_dilution_risk * 0.8)

    pct_7d = _as_float(row.get("price_change_percentage_7d_in_currency"))
    pct_30d = _as_float(row.get("price_change_percentage_30d_in_currency"))
    pct_200d = _as_float(row.get("price_change_percentage_200d_in_currency"))
    medium_momentum = _pct_score(pct_30d, low=-15.0, high=60.0)
    long_momentum = _pct_score(pct_200d, low=-40.0, high=150.0)
    trend_catalyst = _pct_score(pct_7d, low=-5.0, high=25.0)
    trend_bonus = 1.0 if is_trending else 0.0

    tokenomics = _clamp(0.65 * supply_unlocked + 0.35 * (1.0 - fdv_dilution_risk))
    fundamental = _clamp(0.40 * rank_score + 0.35 * size_score + 0.25 * liquidity_quality)
    catalyst = _clamp(0.45 * trend_catalyst + 0.40 * medium_momentum + 0.15 * trend_bonus)
    adoption = _clamp(0.45 * rank_score + 0.35 * liquidity_quality + 0.20 * long_momentum)
    smart_money = _clamp(0.35 * rank_score + 0.35 * liquidity_quality + 0.30 * (1.0 - unlock_risk))
    regulatory_risk = 0.10 if rank <= 100 else 0.20 if rank <= 500 else 0.35

    populated_fields = sum(
        1
        for value in (market_cap, fdv, volume, rank, circulating, supply_cap, pct_7d, pct_30d, pct_200d)
        if value
    )
    confidence = _clamp(populated_fields / 9.0)

    return {
        "fundamental": round(fundamental, 4),
        "tokenomics": round(tokenomics, 4),
        "catalyst": round(catalyst, 4),
        "adoption": round(adoption, 4),
        "smart_money": round(smart_money, 4),
        "unlock_risk": round(unlock_risk, 4),
        "regulatory_risk": round(regulatory_risk, 4),
        "research_confidence": round(confidence, 4),
    }
def _coingecko_headers(config: dict[str, Any]) -> dict[str, str]:
    coingecko_config = config.get("coingecko", {})
    headers = {"accept": "application/json", "user-agent": "coinhunter/2"}
    api_key = str(coingecko_config.get("api_key", "")).strip()
    if api_key:
        headers["x-cg-demo-api-key"] = api_key
    return headers
def _fetch_coingecko_research(
    config: dict[str, Any],
    *,
    symbols: list[str],
    quote: str,
    http_get: HttpGet | None = None,
) -> dict[str, dict[str, float]]:
    if not symbols:
        return {}

    opportunity_config = config.get("opportunity", {})
    coingecko_config = config.get("coingecko", {})
    base_url = str(coingecko_config.get("base_url", "https://api.coingecko.com/api/v3"))
    timeout = _as_float(opportunity_config.get("research_timeout_seconds"), 4.0)
    headers = _coingecko_headers(config)
    http_get = http_get or _public_http_get

    base_to_symbol = {
        base_asset(normalize_symbol(symbol), quote).lower(): normalize_symbol(symbol)
        for symbol in symbols
        if normalize_symbol(symbol)
    }
    bases = sorted(base_to_symbol)
    if not bases:
        return {}

    trending_ids: set[str] = set()
    try:
        trending_url = _build_url(base_url, "/search/trending", {})
        trending_payload = http_get(trending_url, headers, timeout)
        for item in trending_payload.get("coins", []):
            coin = item.get("item", {})
            coin_id = str(coin.get("id", "")).strip()
            if coin_id:
                trending_ids.add(coin_id)
    except Exception:
        trending_ids = set()

    research: dict[str, dict[str, float]] = {}
    for chunk in _chunked(bases, 50):
        params = {
            "vs_currency": "usd",
            "symbols": ",".join(chunk),
            "include_tokens": "top",
            "order": "market_cap_desc",
            "per_page": "250",
            "page": "1",
            "sparkline": "false",
            "price_change_percentage": "7d,30d,200d",
        }
        try:
            markets_url = _build_url(base_url, "/coins/markets", params)
            rows = http_get(markets_url, headers, timeout)
        except Exception:
            continue

        seen_bases: set[str] = set()
        for row in rows if isinstance(rows, list) else []:
            symbol = str(row.get("symbol", "")).lower()
            if symbol in seen_bases or symbol not in base_to_symbol:
                continue
            seen_bases.add(symbol)
            normalized = base_to_symbol[symbol]
            research[normalized] = _coingecko_market_to_signals(
                row,
                is_trending=str(row.get("id", "")) in trending_ids,
            )
    return research
def get_external_research(
    config: dict[str, Any],
    *,
    symbols: list[str],
    quote: str,
    http_get: HttpGet | None = None,
) -> dict[str, dict[str, float]]:
    """Fetch automated research signals for symbols.

    Returns an empty map when disabled or when the configured provider is unavailable.
    Opportunity scans should continue rather than fail because a research endpoint timed out.
    """
    opportunity_config = config.get("opportunity", {})
    if not bool(opportunity_config.get("auto_research", True)):
        return {}
    provider = str(opportunity_config.get("research_provider", "coingecko")).strip().lower()
    if provider in {"", "off", "none", "disabled"}:
        return {}
    if provider != "coingecko":
        return {}
    return _fetch_coingecko_research(config, symbols=symbols, quote=quote, http_get=http_get)
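The gating at the top of `get_external_research` condenses to the following standalone predicate (a restatement for illustration, not part of the module): research runs by default, and is off when `auto_research` is false or the provider is anything other than `coingecko`.

```python
def research_enabled(config: dict) -> bool:
    opportunity = config.get("opportunity", {})
    if not bool(opportunity.get("auto_research", True)):
        return False
    provider = str(opportunity.get("research_provider", "coingecko")).strip().lower()
    return provider == "coingecko"


print(research_enabled({}))                                             # True (defaults on)
print(research_enabled({"opportunity": {"auto_research": False}}))      # False
print(research_enabled({"opportunity": {"research_provider": "off"}}))  # False
```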
292	src/coinhunter/services/signal_service.py	Normal file
@@ -0,0 +1,292 @@
"""Market signal scoring primitives and domain-specific models."""

from __future__ import annotations

from math import log10
from statistics import mean
from typing import Any
def _clamp(value: float, low: float, high: float) -> float:
    return max(low, min(value, high))


def _safe_pct(new: float, old: float) -> float:
    if old == 0:
        return 0.0
    return (new - old) / old


def _range_pct(values: list[float], denominator: float) -> float:
    if not values or denominator == 0:
        return 0.0
    return (max(values) - min(values)) / denominator
_DEFAULT_OPPORTUNITY_MODEL_WEIGHTS = {
    "trend": 0.1406,
    "compression": 0.1688,
    "breakout_proximity": 0.0875,
    "higher_lows": 0.15,
    "range_position": 0.45,
    "fresh_breakout": 0.2,
    "volume": 0.525,
    "momentum": 0.1562,
    "setup": 1.875,
    "trigger": 1.875,
    "liquidity": 0.3,
    "volatility_penalty": 0.8,
    "extension_penalty": 0.45,
}


def get_opportunity_model_weights(opportunity_config: dict[str, Any]) -> dict[str, float]:
    configured = opportunity_config.get("model_weights", {})
    return {
        key: float(configured.get(key, default))
        for key, default in _DEFAULT_OPPORTUNITY_MODEL_WEIGHTS.items()
    }
def _weighted_quality(values: dict[str, float], weights: dict[str, float]) -> float:
    weighted_sum = 0.0
    total_weight = 0.0
    for key, value in values.items():
        weight = max(float(weights.get(key, 0.0)), 0.0)
        if weight == 0:
            continue
        weighted_sum += weight * value
        total_weight += weight
    if total_weight == 0:
        return 0.0
    return _clamp(weighted_sum / total_weight, -1.0, 1.0)
def get_signal_weights(config: dict[str, Any]) -> dict[str, float]:
    signal_config = config.get("signal", {})
    return {
        "trend": float(signal_config.get("trend", 1.0)),
        "momentum": float(signal_config.get("momentum", 1.0)),
        "breakout": float(signal_config.get("breakout", 0.8)),
        "volume": float(signal_config.get("volume", 0.7)),
        "volatility_penalty": float(signal_config.get("volatility_penalty", 0.5)),
    }


def get_signal_interval(config: dict[str, Any]) -> str:
    signal_config = config.get("signal", {})
    if signal_config.get("lookback_interval"):
        return str(signal_config["lookback_interval"])
    return "1h"
def score_market_signal(
    closes: list[float],
    volumes: list[float],
    ticker: dict[str, Any],
    weights: dict[str, float],
) -> tuple[float, dict[str, float]]:
    return score_portfolio_signal(closes, volumes, ticker, weights)


def score_portfolio_signal(
    closes: list[float],
    volumes: list[float],
    ticker: dict[str, Any],
    weights: dict[str, float],
) -> tuple[float, dict[str, float]]:
    if len(closes) < 2 or not volumes:
        return 0.0, {
            "trend": 0.0,
            "momentum": 0.0,
            "breakout": 0.0,
            "volume_confirmation": 1.0,
            "volatility": 0.0,
        }

    current = closes[-1]
    sma_short = mean(closes[-5:]) if len(closes) >= 5 else current
    sma_long = mean(closes[-20:]) if len(closes) >= 20 else mean(closes)
    trend = 1.0 if current >= sma_short >= sma_long else -1.0 if current < sma_short < sma_long else 0.0
    momentum = (
        _safe_pct(closes[-1], closes[-2]) * 0.5
        + (_safe_pct(closes[-1], closes[-5]) * 0.3 if len(closes) >= 5 else 0.0)
        + float(ticker.get("price_change_pct", 0.0)) / 100.0 * 0.2
    )
    recent_high = max(closes[-20:]) if len(closes) >= 20 else max(closes)
    breakout = 1.0 - max((recent_high - current) / recent_high, 0.0)
    avg_volume = mean(volumes[:-1]) if len(volumes) > 1 else volumes[-1]
    volume_confirmation = volumes[-1] / avg_volume if avg_volume else 1.0
    volume_score = min(max(volume_confirmation - 1.0, -1.0), 2.0)
    volatility = (max(closes[-10:]) - min(closes[-10:])) / current if len(closes) >= 10 and current else 0.0

    score = (
        weights.get("trend", 1.0) * trend
        + weights.get("momentum", 1.0) * momentum
        + weights.get("breakout", 0.8) * breakout
        + weights.get("volume", 0.7) * volume_score
        - weights.get("volatility_penalty", 0.5) * volatility
    )
    metrics = {
        "trend": round(trend, 4),
        "momentum": round(momentum, 4),
        "breakout": round(breakout, 4),
        "volume_confirmation": round(volume_confirmation, 4),
        "volatility": round(volatility, 4),
    }
    return score, metrics
def score_opportunity_signal(
    closes: list[float],
    volumes: list[float],
    ticker: dict[str, Any],
    opportunity_config: dict[str, Any],
) -> tuple[float, dict[str, float]]:
    model_weights = get_opportunity_model_weights(opportunity_config)
    if len(closes) < 6 or len(volumes) < 2:
        return 0.0, {
            "setup_score": 0.0,
            "trigger_score": 0.0,
            "liquidity_score": 0.0,
            "edge_score": 0.0,
            "setup_quality": 0.0,
            "trigger_quality": 0.0,
            "liquidity_quality": 0.0,
            "risk_quality": 0.0,
            "extension_penalty": 0.0,
            "breakout_pct": 0.0,
            "recent_runup": 0.0,
            "volume_confirmation": 1.0,
            "volatility": 0.0,
        }

    current = closes[-1]
    sma_short = mean(closes[-5:])
    sma_long = mean(closes[-20:]) if len(closes) >= 20 else mean(closes)
    if current >= sma_short >= sma_long:
        trend_quality = 1.0
    elif current < sma_short < sma_long:
        trend_quality = -1.0
    else:
        trend_quality = 0.0
    prior_closes = closes[:-1]
    prev_high = max(prior_closes[-20:]) if prior_closes else current
    recent_low = min(closes[-20:])
    range_width = prev_high - recent_low
    range_position = _clamp((current - recent_low) / range_width, 0.0, 1.2) if range_width else 0.0
    range_position_quality = 2.0 * _clamp(1.0 - abs(range_position - 0.62) / 0.62, 0.0, 1.0) - 1.0
    breakout_pct = _safe_pct(current, prev_high)

    recent_range = _range_pct(closes[-6:], current)
    prior_window = closes[-20:-6] if len(closes) >= 20 else closes[:-6]
    prior_range = _range_pct(prior_window, current) if prior_window else recent_range
    compression = _clamp(1.0 - (recent_range / prior_range), -1.0, 1.0) if prior_range else 0.0

    recent_low_window = min(closes[-5:])
    prior_low_window = min(closes[-10:-5]) if len(closes) >= 10 else min(closes[:-5])
    higher_lows = 1.0 if recent_low_window > prior_low_window else -1.0
    breakout_proximity = _clamp(1.0 - abs(breakout_pct) / 0.03, 0.0, 1.0)
    breakout_proximity_quality = 2.0 * breakout_proximity - 1.0
    setup_quality = _weighted_quality(
        {
            "trend": trend_quality,
            "compression": compression,
            "breakout_proximity": breakout_proximity_quality,
            "higher_lows": higher_lows,
            "range_position": range_position_quality,
        },
        model_weights,
    )
    setup_score = _clamp((setup_quality + 1.0) / 2.0, 0.0, 1.0)

    avg_volume = mean(volumes[:-1])
    volume_confirmation = volumes[-1] / avg_volume if avg_volume else 1.0
    volume_score = _clamp((volume_confirmation - 1.0) / 1.5, -1.0, 1.0)
    momentum_3 = _safe_pct(closes[-1], closes[-4])
    if momentum_3 <= 0:
        controlled_momentum = _clamp(momentum_3 / 0.05, -1.0, 0.0)
    elif momentum_3 <= 0.05:
        controlled_momentum = momentum_3 / 0.05
    elif momentum_3 <= 0.12:
        controlled_momentum = 1.0 - ((momentum_3 - 0.05) / 0.07) * 0.5
    else:
        controlled_momentum = -0.2
    fresh_breakout = _clamp(1.0 - abs(breakout_pct) / 0.025, 0.0, 1.0)
    fresh_breakout_quality = 2.0 * fresh_breakout - 1.0
    trigger_quality = _weighted_quality(
        {
            "fresh_breakout": fresh_breakout_quality,
            "volume": volume_score,
            "momentum": controlled_momentum,
        },
        model_weights,
    )
    trigger_score = _clamp((trigger_quality + 1.0) / 2.0, 0.0, 1.0)

    extension_from_short = _safe_pct(current, sma_short)
    recent_runup = _safe_pct(current, closes[-6])
    extension_penalty = (
        _clamp((extension_from_short - 0.025) / 0.075, 0.0, 1.0)
        + _clamp((recent_runup - 0.08) / 0.12, 0.0, 1.0)
        + _clamp((float(ticker.get("price_change_pct", 0.0)) / 100.0 - 0.12) / 0.18, 0.0, 1.0)
    )
    volatility = _range_pct(closes[-10:], current)

    min_quote_volume = float(opportunity_config.get("min_quote_volume", 0.0))
    quote_volume = float(ticker.get("quote_volume") or ticker.get("quoteVolume") or 0.0)
    if min_quote_volume > 0 and quote_volume > 0:
        liquidity_score = _clamp(log10(max(quote_volume / min_quote_volume, 1.0)) / 2.0, 0.0, 1.0)
    else:
        liquidity_score = 1.0
    liquidity_quality = 2.0 * liquidity_score - 1.0
    volatility_quality = 1.0 - 2.0 * _clamp(volatility / 0.12, 0.0, 1.0)
    extension_quality = 1.0 - 2.0 * _clamp(extension_penalty / 2.0, 0.0, 1.0)
    risk_quality = _weighted_quality(
        {
            "volatility_penalty": volatility_quality,
            "extension_penalty": extension_quality,
        },
        model_weights,
    )
    edge_score = _weighted_quality(
        {
            "setup": setup_quality,
            "trigger": trigger_quality,
            "liquidity": liquidity_quality,
            "trend": trend_quality,
            "range_position": range_position_quality,
            "volatility_penalty": volatility_quality,
            "extension_penalty": extension_quality,
        },
        model_weights,
    )

    score = 1.0 + edge_score
    metrics = {
        "setup_score": round(setup_score, 4),
        "trigger_score": round(trigger_score, 4),
        "liquidity_score": round(liquidity_score, 4),
        "edge_score": round(edge_score, 4),
        "setup_quality": round(setup_quality, 4),
        "trigger_quality": round(trigger_quality, 4),
        "liquidity_quality": round(liquidity_quality, 4),
        "risk_quality": round(risk_quality, 4),
        "trend_quality": round(trend_quality, 4),
        "range_position_quality": round(range_position_quality, 4),
        "breakout_proximity_quality": round(breakout_proximity_quality, 4),
        "volume_quality": round(volume_score, 4),
        "momentum_quality": round(controlled_momentum, 4),
        "extension_quality": round(extension_quality, 4),
        "volatility_quality": round(volatility_quality, 4),
        "extension_penalty": round(extension_penalty, 4),
        "compression": round(compression, 4),
        "range_position": round(range_position, 4),
        "breakout_pct": round(breakout_pct, 4),
        "recent_runup": round(recent_runup, 4),
        "volume_confirmation": round(volume_confirmation, 4),
        "volatility": round(volatility, 4),
        "sma_short_distance": round(extension_from_short, 4),
        "sma_long_distance": round(_safe_pct(current, sma_long), 4),
    }
    return score, metrics
@@ -1,145 +0,0 @@
"""CLI parser and legacy argument normalization for smart executor."""
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="Coin Hunter Smart Executor",
        formatter_class=argparse.RawTextHelpFormatter,
        epilog=(
            "Examples:\n"
            "  python smart_executor.py hold\n"
            "  python smart_executor.py sell-all ETHUSDT\n"
            "  python smart_executor.py buy ENJUSDT 100\n"
            "  python smart_executor.py rebalance PEPEUSDT ETHUSDT\n"
            "  python smart_executor.py balances\n\n"
            "Legacy invocations (still supported):\n"
            "  python smart_executor.py HOLD\n"
            "  python smart_executor.py --decision HOLD --dry-run\n"
        ),
    )
    parser.add_argument("--decision-id", help="Override decision id (otherwise derived automatically)")
    parser.add_argument("--analysis", help="Decision analysis text to persist into logs")
    parser.add_argument("--reasoning", help="Decision reasoning text to persist into logs")
    parser.add_argument("--dry-run", action="store_true", help="Force dry-run mode for this invocation")

    subparsers = parser.add_subparsers(dest="command")

    subparsers.add_parser("hold", help="Log a HOLD decision without trading")
    subparsers.add_parser("balances", help="Print live balances as JSON")
    subparsers.add_parser("balance", help="Alias of balances")
    subparsers.add_parser("status", help="Print balances + positions + snapshot as JSON")

    sell_all = subparsers.add_parser("sell-all", help="Sell all of one symbol")
    sell_all.add_argument("symbol")
    sell_all_legacy = subparsers.add_parser("sell_all", help=argparse.SUPPRESS)
    sell_all_legacy.add_argument("symbol")

    buy = subparsers.add_parser("buy", help="Buy symbol with USDT amount")
    buy.add_argument("symbol")
    buy.add_argument("amount_usdt", type=float)

    rebalance = subparsers.add_parser("rebalance", help="Sell one symbol and rotate to another")
    rebalance.add_argument("from_symbol")
    rebalance.add_argument("to_symbol")

    return parser
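A minimal standalone reproduction of the sub-command shape `build_parser` sets up (not importing the module), showing how a global flag and positional sub-command arguments parse together:

```python
import argparse

parser = argparse.ArgumentParser(prog="smart_executor")
parser.add_argument("--dry-run", action="store_true")
sub = parser.add_subparsers(dest="command")
buy = sub.add_parser("buy")
buy.add_argument("symbol")
buy.add_argument("amount_usdt", type=float)

args = parser.parse_args(["--dry-run", "buy", "ENJUSDT", "100"])
print(args.command, args.symbol, args.amount_usdt, args.dry_run)
# buy ENJUSDT 100.0 True
```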
def normalize_legacy_argv(argv: list[str]) -> list[str]:
    if not argv:
        return argv

    action_aliases = {
        "HOLD": ["hold"],
        "hold": ["hold"],
        "SELL_ALL": ["sell-all"],
        "sell_all": ["sell-all"],
        "sell-all": ["sell-all"],
        "BUY": ["buy"],
        "buy": ["buy"],
        "REBALANCE": ["rebalance"],
        "rebalance": ["rebalance"],
        "BALANCE": ["balances"],
        "balance": ["balances"],
        "BALANCES": ["balances"],
        "balances": ["balances"],
        "STATUS": ["status"],
        "status": ["status"],
    }

    has_legacy_flag = any(t.startswith("--decision") for t in argv)
    if not has_legacy_flag:
        for idx, token in enumerate(argv):
            if token in action_aliases:
                prefix = argv[:idx]
                suffix = argv[idx + 1 :]
                return prefix + action_aliases[token] + suffix

    if argv[0].startswith("-"):
        legacy = argparse.ArgumentParser(add_help=False)
        legacy.add_argument("--decision")
        legacy.add_argument("--symbol")
        legacy.add_argument("--from-symbol")
        legacy.add_argument("--to-symbol")
        legacy.add_argument("--amount-usdt", type=float)
        legacy.add_argument("--decision-id")
        legacy.add_argument("--analysis")
        legacy.add_argument("--reasoning")
        legacy.add_argument("--dry-run", action="store_true")
        ns, unknown = legacy.parse_known_args(argv)

        if ns.decision:
            decision = (ns.decision or "").strip().upper()
            rebuilt = []
            if ns.decision_id:
                rebuilt += ["--decision-id", ns.decision_id]
            if ns.analysis:
                rebuilt += ["--analysis", ns.analysis]
            if ns.reasoning:
                rebuilt += ["--reasoning", ns.reasoning]
            if ns.dry_run:
                rebuilt += ["--dry-run"]

            if decision == "HOLD":
                rebuilt += ["hold"]
            elif decision == "SELL_ALL":
                if not ns.symbol:
                    raise RuntimeError("legacy --decision SELL_ALL requires --symbol")
                rebuilt += ["sell-all", ns.symbol]
            elif decision == "BUY":
                if not ns.symbol or ns.amount_usdt is None:
                    raise RuntimeError("legacy --decision BUY requires --symbol and --amount-usdt")
                rebuilt += ["buy", ns.symbol, str(ns.amount_usdt)]
            elif decision == "REBALANCE":
                if not ns.from_symbol or not ns.to_symbol:
                    raise RuntimeError("legacy --decision REBALANCE requires --from-symbol and --to-symbol")
                rebuilt += ["rebalance", ns.from_symbol, ns.to_symbol]
            else:
                raise RuntimeError(f"unsupported legacy decision: {decision}")

            return rebuilt + unknown

    return argv
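The first normalization path, token-to-subcommand aliasing, in isolation (a trimmed alias table for illustration): the first recognized token is swapped for its canonical subcommand and everything else is kept in place.

```python
ACTION_ALIASES = {"HOLD": ["hold"], "SELL_ALL": ["sell-all"], "BALANCE": ["balances"]}


def normalize(argv: list[str]) -> list[str]:
    for idx, token in enumerate(argv):
        if token in ACTION_ALIASES:
            return argv[:idx] + ACTION_ALIASES[token] + argv[idx + 1 :]
    return argv


print(normalize(["SELL_ALL", "ETHUSDT"]))  # ['sell-all', 'ETHUSDT']
print(normalize(["status"]))               # unchanged: ['status']
```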
def parse_cli_args(argv: list[str]):
    parser = build_parser()
    normalized = normalize_legacy_argv(argv)
    args = parser.parse_args(normalized)
    if not args.command:
        parser.print_help()
        raise SystemExit(1)
    if args.command == "sell_all":
        args.command = "sell-all"
    return args, normalized
def cli_action_args(args, action: str) -> list[str]:
    if action == "sell_all":
        return [args.symbol]
    if action == "buy":
        return [args.symbol, str(args.amount_usdt)]
    if action == "rebalance":
        return [args.from_symbol, args.to_symbol]
    return []
@@ -1,128 +0,0 @@
"""Service entrypoint for smart executor workflows."""

from __future__ import annotations

import os
import sys

from ..logger import log_decision, log_error
from .exchange_service import fetch_balances, build_market_snapshot
from .execution_state import default_decision_id, get_execution_state, record_execution_state
from .portfolio_service import load_positions
from .smart_executor_parser import parse_cli_args, cli_action_args
from .trade_common import is_dry_run, log, set_dry_run, bj_now_iso
from .trade_execution import (
    command_balances,
    command_status,
    build_decision_context,
    action_sell_all,
    action_buy,
    action_rebalance,
)
def run(argv: list[str] | None = None) -> int:
    argv = list(sys.argv[1:] if argv is None else argv)
    args, normalized_argv = parse_cli_args(argv)
    action = args.command.replace("-", "_")
    argv_tail = cli_action_args(args, action)
    decision_id = (
        args.decision_id
        or os.getenv("DECISION_ID")
        or default_decision_id(action, normalized_argv)
    )

    if args.dry_run:
        set_dry_run(True)

    previous = get_execution_state(decision_id)
    read_only_action = action in {"balance", "balances", "status"}
    if previous and previous.get("status") == "success" and not read_only_action:
        log(f"⚠️ decision_id={decision_id} already executed successfully; skipping duplicate execution")
        return 0

    try:
        from .exchange_service import get_exchange
        ex = get_exchange()

        if read_only_action:
            if action in {"balance", "balances"}:
                command_balances(ex)
            else:
                command_status(ex)
            return 0

        decision_context = build_decision_context(ex, action, argv_tail, decision_id)
        if args.analysis:
            decision_context["analysis"] = args.analysis
        elif os.getenv("DECISION_ANALYSIS"):
            decision_context["analysis"] = os.getenv("DECISION_ANALYSIS")
        if args.reasoning:
            decision_context["reasoning"] = args.reasoning
        elif os.getenv("DECISION_REASONING"):
            decision_context["reasoning"] = os.getenv("DECISION_REASONING")

        record_execution_state(
            decision_id,
            {"status": "pending", "started_at": bj_now_iso(), "action": action, "args": argv_tail},
        )

        if action == "sell_all":
            result = action_sell_all(ex, args.symbol, decision_id, decision_context)
        elif action == "buy":
            result = action_buy(ex, args.symbol, float(args.amount_usdt), decision_id, decision_context)
        elif action == "rebalance":
            result = action_rebalance(ex, args.from_symbol, args.to_symbol, decision_id, decision_context)
        elif action == "hold":
            balances = fetch_balances(ex)
            positions = load_positions()
            market_snapshot = build_market_snapshot(ex)
            log_decision(
                {
                    **decision_context,
                    "balances_after": balances,
                    "positions_after": positions,
                    "market_snapshot": market_snapshot,
                    "analysis": decision_context.get("analysis", "hold"),
                    "reasoning": decision_context.get("reasoning", "hold"),
                    "execution_result": {"status": "hold"},
                }
            )
            log("😴 Decision: keep holding, no action")
            result = {"status": "hold"}
        else:
            raise RuntimeError(f"unknown action: {action}; run --help for correct CLI usage")

        record_execution_state(
            decision_id,
            {
                "status": "success",
                "finished_at": bj_now_iso(),
                "action": action,
                "args": argv_tail,
                "result": result,
            },
        )
        log(f"✅ Execution complete decision_id={decision_id}")
        return 0

    except Exception as exc:
        record_execution_state(
            decision_id,
            {
                "status": "failed",
                "finished_at": bj_now_iso(),
                "action": action,
                "args": argv_tail,
                "error": str(exc),
            },
        )
        log_error(
            "smart_executor",
            exc,
            decision_id=decision_id,
            action=action,
            args=argv_tail,
        )
        log(f"❌ Execution failed: {exc}")
        return 1
@@ -1,25 +0,0 @@
"""Common trade utilities (time, logging, constants)."""
import os
from datetime import datetime, timezone, timedelta

CST = timezone(timedelta(hours=8))

_DRY_RUN = {"value": os.getenv("DRY_RUN", "false").lower() == "true"}
USDT_BUFFER_PCT = 0.03
MIN_REMAINING_DUST_USDT = 1.0


def is_dry_run() -> bool:
    return _DRY_RUN["value"]


def set_dry_run(value: bool):
    _DRY_RUN["value"] = value


def log(msg: str):
    print(f"[{datetime.now(CST).strftime('%Y-%m-%d %H:%M:%S')} CST] {msg}")


def bj_now_iso():
    return datetime.now(CST).isoformat()
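The `_DRY_RUN` dict is a one-slot mutable flag: because the dict object itself never gets rebound, importers can flip the value without `global` statements and every module sees the change. The pattern in isolation:

```python
import os

# Mutating the dict's slot works across modules; rebinding a bare bool would not.
_DRY_RUN = {"value": os.getenv("DRY_RUN", "false").lower() == "true"}


def is_dry_run() -> bool:
    return _DRY_RUN["value"]


def set_dry_run(value: bool) -> None:
    _DRY_RUN["value"] = value


set_dry_run(True)
print(is_dry_run())  # True
set_dry_run(False)
print(is_dry_run())  # False
```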
@@ -1,178 +0,0 @@
"""Trade execution actions (buy, sell, rebalance, hold, status)."""
from ..logger import log_decision, log_trade
from .exchange_service import (
    fetch_balances,
    norm_symbol,
    storage_symbol,
    build_market_snapshot,
    prepare_buy_quantity,
    prepare_sell_quantity,
)
from .portfolio_service import load_positions, save_positions, upsert_position, reconcile_positions_with_exchange
from .trade_common import is_dry_run, USDT_BUFFER_PCT, log, bj_now_iso


def build_decision_context(ex, action: str, argv_tail: list[str], decision_id: str):
    balances = fetch_balances(ex)
    positions = load_positions()
    return {
        "decision_id": decision_id,
        "balances_before": balances,
        "positions_before": positions,
        "decision": action.upper(),
        "action_taken": f"{action} {' '.join(argv_tail)}".strip(),
        "risk_level": "high" if len(positions) <= 1 else "medium",
        "data_sources": ["binance"],
    }
def market_sell(ex, symbol: str, qty: float, decision_id: str):
    sym, qty, bid, est_cost = prepare_sell_quantity(ex, symbol, qty)
    if is_dry_run():
        log(f"[DRY RUN] sell {sym} qty {qty}")
        return {"id": f"dry-sell-{decision_id}", "symbol": sym, "amount": qty, "price": bid, "cost": est_cost, "status": "closed"}
    order = ex.create_market_sell_order(sym, qty, params={"newClientOrderId": f"ch-{decision_id}-sell"})
    return order


def market_buy(ex, symbol: str, amount_usdt: float, decision_id: str):
    sym, qty, ask, est_cost = prepare_buy_quantity(ex, symbol, amount_usdt)
    if is_dry_run():
        log(f"[DRY RUN] buy {sym} amount ${est_cost:.4f} qty {qty}")
        return {"id": f"dry-buy-{decision_id}", "symbol": sym, "amount": qty, "price": ask, "cost": est_cost, "status": "closed"}
    order = ex.create_market_buy_order(sym, qty, params={"newClientOrderId": f"ch-{decision_id}-buy"})
    return order
def action_sell_all(ex, symbol: str, decision_id: str, decision_context: dict):
    balances_before = fetch_balances(ex)
    base = norm_symbol(symbol).split("/")[0]
    qty = float(balances_before.get(base, 0))
    if qty <= 0:
        raise RuntimeError(f"{base} balance is zero; nothing to sell")
    order = market_sell(ex, symbol, qty, decision_id)
    positions_after, balances_after = (
        reconcile_positions_with_exchange(ex, load_positions())
        if not is_dry_run()
        else (load_positions(), balances_before)
    )
    log_trade(
        "SELL_ALL",
        norm_symbol(symbol),
        qty=order.get("amount"),
        price=order.get("price"),
        amount_usdt=order.get("cost"),
        note="Smart executor sell_all",
        decision_id=decision_id,
        order_id=order.get("id"),
        status=order.get("status"),
        balances_before=balances_before,
        balances_after=balances_after,
    )
    log_decision(
        {
            **decision_context,
            "balances_after": balances_after,
            "positions_after": positions_after,
            "execution_result": {"order": order},
            "analysis": decision_context.get("analysis", ""),
            "reasoning": decision_context.get("reasoning", "sell_all execution"),
        }
    )
    return order
def action_buy(ex, symbol: str, amount_usdt: float, decision_id: str, decision_context: dict, simulated_usdt_balance: float | None = None):
    balances_before = fetch_balances(ex) if simulated_usdt_balance is None else {"USDT": simulated_usdt_balance}
    usdt = float(balances_before.get("USDT", 0))
    if usdt < amount_usdt:
        raise RuntimeError(f"insufficient USDT balance (${usdt:.4f} < ${amount_usdt:.4f})")
    order = market_buy(ex, symbol, amount_usdt, decision_id)
    positions_existing = load_positions()
    sym_store = storage_symbol(symbol)
    price = float(order.get("price") or 0)
    qty = float(order.get("amount") or 0)
    position = {
        "account_id": "binance-main",
        "symbol": sym_store,
        "base_asset": norm_symbol(symbol).split("/")[0],
        "quote_asset": "USDT",
        "market_type": "spot",
        "quantity": qty,
        "avg_cost": price,
        "opened_at": bj_now_iso(),
        "updated_at": bj_now_iso(),
        "note": "Smart executor entry",
    }
    upsert_position(positions_existing, position)
    if is_dry_run():
        balances_after = balances_before
        positions_after = positions_existing
    else:
        save_positions(positions_existing)
        positions_after, balances_after = reconcile_positions_with_exchange(ex, positions_existing)
        for p in positions_after:
            if p["symbol"] == sym_store and price:
                p["avg_cost"] = price
                p["updated_at"] = bj_now_iso()
        save_positions(positions_after)
    log_trade(
        "BUY",
        norm_symbol(symbol),
        qty=qty,
        amount_usdt=order.get("cost"),
        price=price,
        note="Smart executor buy",
        decision_id=decision_id,
        order_id=order.get("id"),
        status=order.get("status"),
        balances_before=balances_before,
        balances_after=balances_after,
    )
    log_decision(
        {
            **decision_context,
            "balances_after": balances_after,
            "positions_after": positions_after,
            "execution_result": {"order": order},
            "analysis": decision_context.get("analysis", ""),
            "reasoning": decision_context.get("reasoning", "buy execution"),
        }
    )
    return order
def action_rebalance(ex, from_symbol: str, to_symbol: str, decision_id: str, decision_context: dict):
    sell_order = action_sell_all(ex, from_symbol, decision_id + "s", decision_context)
    if is_dry_run():
        sell_cost = float(sell_order.get("cost") or 0)
        spend = sell_cost * (1 - USDT_BUFFER_PCT)
        simulated_usdt = sell_cost
    else:
        balances = fetch_balances(ex)
        usdt = float(balances.get("USDT", 0))
        spend = usdt * (1 - USDT_BUFFER_PCT)
        simulated_usdt = None
    if spend < 5:
        raise RuntimeError(f"卖出后 USDT ${spend:.4f} 不足,无法买入新币")  # post-sell USDT too low to buy the new coin
    buy_order = action_buy(ex, to_symbol, spend, decision_id + "b", decision_context, simulated_usdt_balance=simulated_usdt)
    return {"sell": sell_order, "buy": buy_order}

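The rebalance path above sizes the follow-up buy from the available USDT minus a safety buffer, and refuses amounts below the exchange minimum. A minimal standalone sketch of that sizing rule (the 0.01 default buffer and the 5-USDT floor are assumptions standing in for the real `USDT_BUFFER_PCT` constant and Binance's minimum-notional check):

```python
def size_rebalance_buy(sell_cost: float, buffer_pct: float = 0.01) -> float:
    """Spend the sell proceeds minus a safety buffer, mirroring action_rebalance.

    buffer_pct is a hypothetical stand-in for USDT_BUFFER_PCT.
    """
    spend = sell_cost * (1 - buffer_pct)
    if spend < 5:  # assumed minimum notional, roughly Binance spot's 5 USDT floor
        raise RuntimeError(f"USDT {spend:.4f} after sell is too small to buy")
    return spend

print(size_rebalance_buy(100.0))  # 99.0
```

Keeping a buffer means fee deductions and price drift between the sell fill and the buy order cannot push the account into an insufficient-balance rejection.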
def command_status(ex):
    balances = fetch_balances(ex)
    positions = load_positions()
    market_snapshot = build_market_snapshot(ex)
    payload = {
        "balances": balances,
        "positions": positions,
        "market_snapshot": market_snapshot,
    }
    print(payload)
    return payload


def command_balances(ex):
    balances = fetch_balances(ex)
    print({"balances": balances})
    return balances

157  src/coinhunter/services/trade_service.py  Normal file
@@ -0,0 +1,157 @@
"""Trade execution services."""

from __future__ import annotations

from dataclasses import asdict, dataclass
from typing import Any

from ..audit import audit_event
from .market_service import normalize_symbol


@dataclass
class TradeIntent:
    market_type: str
    symbol: str
    side: str
    order_type: str
    qty: float | None
    quote_amount: float | None
    price: float | None
    reduce_only: bool
    dry_run: bool


@dataclass
class TradeResult:
    market_type: str
    symbol: str
    side: str
    order_type: str
    status: str
    dry_run: bool
    request_payload: dict[str, Any]
    response_payload: dict[str, Any]


def _default_dry_run(config: dict[str, Any], dry_run: bool | None) -> bool:
    if dry_run is not None:
        return dry_run
    return bool(config.get("trading", {}).get("dry_run_default", False))


def _trade_log_payload(
    intent: TradeIntent, payload: dict[str, Any], *, status: str, error: str | None = None
) -> dict[str, Any]:
    return {
        "market_type": intent.market_type,
        "symbol": intent.symbol,
        "side": intent.side,
        "qty": intent.qty,
        "quote_amount": intent.quote_amount,
        "order_type": intent.order_type,
        "dry_run": intent.dry_run,
        "request_payload": payload,
        "response_payload": {} if error else payload,
        "status": status,
        "error": error,
    }

def execute_spot_trade(
    config: dict[str, Any],
    *,
    side: str,
    symbol: str,
    qty: float | None,
    quote: float | None,
    order_type: str,
    price: float | None,
    dry_run: bool | None,
    spot_client: Any,
) -> dict[str, Any]:
    normalized_symbol = normalize_symbol(symbol)
    order_type = order_type.upper()
    side = side.upper()
    is_dry_run = _default_dry_run(config, dry_run)
    if side == "BUY" and order_type == "MARKET":
        if quote is None:
            raise RuntimeError("Spot market buy requires --quote")
        if qty is not None:
            raise RuntimeError("Spot market buy accepts --quote only; do not pass --qty")
    if side == "SELL":
        if qty is None:
            raise RuntimeError("Spot sell requires --qty")
        if quote is not None:
            raise RuntimeError("Spot sell accepts --qty only; do not pass --quote")
    if order_type == "LIMIT" and (qty is None or price is None):
        raise RuntimeError("Limit orders require both --qty and --price")

    payload: dict[str, Any] = {
        "symbol": normalized_symbol,
        "side": side,
        "type": order_type,
    }
    if qty is not None:
        payload["quantity"] = qty
    if quote is not None:
        payload["quoteOrderQty"] = quote
    if price is not None:
        payload["price"] = price
        payload["timeInForce"] = "GTC"

    intent = TradeIntent(
        market_type="spot",
        symbol=normalized_symbol,
        side=side,
        order_type=order_type,
        qty=qty,
        quote_amount=quote,
        price=price,
        reduce_only=False,
        dry_run=is_dry_run,
    )

    audit_event("trade_submitted", _trade_log_payload(intent, payload, status="submitted"), dry_run=intent.dry_run)
    if is_dry_run:
        response = {"dry_run": True, "status": "DRY_RUN", "request": payload}
        result = asdict(
            TradeResult(
                market_type="spot",
                symbol=normalized_symbol,
                side=side,
                order_type=order_type,
                status="DRY_RUN",
                dry_run=True,
                request_payload=payload,
                response_payload=response,
            )
        )
        audit_event(
            "trade_filled",
            {**_trade_log_payload(intent, payload, status="DRY_RUN"), "response_payload": response},
            dry_run=intent.dry_run,
        )
        return {"trade": result}

    try:
        response = spot_client.new_order(**payload)
    except Exception as exc:
        audit_event("trade_failed", _trade_log_payload(intent, payload, status="failed", error=str(exc)), dry_run=intent.dry_run)
        raise RuntimeError(f"Spot order failed: {exc}") from exc

    result = asdict(
        TradeResult(
            market_type="spot",
            symbol=normalized_symbol,
            side=side,
            order_type=order_type,
            status=str(response.get("status", "UNKNOWN")),
            dry_run=False,
            request_payload=payload,
            response_payload=response,
        )
    )
    audit_event(
        "trade_filled",
        {**_trade_log_payload(intent, payload, status=result["status"]), "response_payload": response},
        dry_run=intent.dry_run,
    )
    return {"trade": result}
@@ -1,29 +0,0 @@
#!/usr/bin/env python3
"""Coin Hunter robust smart executor — compatibility facade."""

import sys

from .runtime import get_runtime_paths, load_env_file
from .services.trade_common import CST, is_dry_run, USDT_BUFFER_PCT, MIN_REMAINING_DUST_USDT, log, bj_now_iso, set_dry_run
from .services.file_utils import locked_file, atomic_write_json, load_json_locked, save_json_locked
from .services.smart_executor_parser import build_parser, normalize_legacy_argv, parse_cli_args, cli_action_args
from .services.execution_state import default_decision_id, record_execution_state, get_execution_state, load_executions, save_executions
from .services.portfolio_service import load_positions, save_positions, upsert_position, reconcile_positions_with_exchange
from .services.exchange_service import get_exchange, norm_symbol, storage_symbol, fetch_balances, build_market_snapshot, market_and_ticker, floor_to_step, prepare_buy_quantity, prepare_sell_quantity
from .services.trade_execution import build_decision_context, market_sell, market_buy, action_sell_all, action_buy, action_rebalance, command_status, command_balances
from .services.smart_executor_service import run as _run_service

PATHS = get_runtime_paths()
ENV_FILE = PATHS.env_file


def load_env():
    load_env_file(PATHS)


def main(argv=None):
    return _run_service(argv)


if __name__ == "__main__":
    raise SystemExit(main())
0  tests/__init__.py  Normal file
106  tests/test_account_market_services.py  Normal file
@@ -0,0 +1,106 @@
"""Account and market service tests."""

from __future__ import annotations

import unittest

from coinhunter.services import account_service, market_service


class FakeSpotClient:
    def account_info(self):
        return {
            "balances": [
                {"asset": "USDT", "free": "120.0", "locked": "0"},
                {"asset": "BTC", "free": "0.01", "locked": "0"},
                {"asset": "DOGE", "free": "1", "locked": "0"},
            ]
        }

    def ticker_price(self, symbols=None):
        prices = {
            "BTCUSDT": {"symbol": "BTCUSDT", "price": "60000"},
            "DOGEUSDT": {"symbol": "DOGEUSDT", "price": "0.1"},
        }
        if not symbols:
            return list(prices.values())
        return [prices[symbol] for symbol in symbols]

    def ticker_stats(self, symbols=None, *, window="1d"):
        rows = [
            {
                "symbol": "BTCUSDT",
                "lastPrice": "60000",
                "priceChangePercent": "4.5",
                "quoteVolume": "10000000",
                "highPrice": "61000",
                "lowPrice": "58000",
            },
            {
                "symbol": "ETHUSDT",
                "lastPrice": "3000",
                "priceChangePercent": "3.0",
                "quoteVolume": "8000000",
                "highPrice": "3050",
                "lowPrice": "2900",
            },
            {
                "symbol": "DOGEUSDT",
                "lastPrice": "0.1",
                "priceChangePercent": "1.0",
                "quoteVolume": "200",
                "highPrice": "0.11",
                "lowPrice": "0.09",
            },
        ]
        if not symbols:
            return rows
        wanted = set(symbols)
        return [row for row in rows if row["symbol"] in wanted]

    def exchange_info(self):
        return {
            "symbols": [
                {"symbol": "BTCUSDT", "status": "TRADING"},
                {"symbol": "ETHUSDT", "status": "TRADING"},
                {"symbol": "DOGEUSDT", "status": "BREAK"},
            ]
        }


class AccountMarketServicesTestCase(unittest.TestCase):
    def test_get_balances_with_dust_flag(self):
        config = {
            "market": {"default_quote": "USDT"},
            "trading": {"dust_usdt_threshold": 10.0},
        }
        payload = account_service.get_balances(
            config,
            spot_client=FakeSpotClient(),
        )
        balances = {item["asset"]: item for item in payload["balances"]}
        self.assertFalse(balances["USDT"]["is_dust"])
        self.assertFalse(balances["BTC"]["is_dust"])
        self.assertTrue(balances["DOGE"]["is_dust"])

    def test_market_tickers_and_scan_universe(self):
        config = {
            "market": {"default_quote": "USDT", "universe_allowlist": [], "universe_denylist": []},
            "opportunity": {"min_quote_volume": 1000},
        }
        tickers = market_service.get_tickers(config, ["btc/usdt", "ETH-USDT"], spot_client=FakeSpotClient())
        self.assertEqual([item["symbol"] for item in tickers["tickers"]], ["BTCUSDT", "ETHUSDT"])

        universe = market_service.get_scan_universe(config, spot_client=FakeSpotClient())
        self.assertEqual([item["symbol"] for item in universe], ["BTCUSDT", "ETHUSDT"])

    def test_get_positions_can_include_dust(self):
        config = {
            "market": {"default_quote": "USDT"},
            "trading": {"dust_usdt_threshold": 10.0},
        }
        ignored = account_service.get_positions(config, spot_client=FakeSpotClient())
        included = account_service.get_positions(config, spot_client=FakeSpotClient(), ignore_dust=False)

        self.assertEqual([item["symbol"] for item in ignored["positions"]], ["USDT", "BTCUSDT"])
        self.assertEqual([item["symbol"] for item in included["positions"]], ["USDT", "BTCUSDT", "DOGEUSDT"])
392  tests/test_cli.py  Normal file
@@ -0,0 +1,392 @@
"""CLI tests for CoinHunter V2."""

from __future__ import annotations

import io
import unittest
from unittest.mock import patch

from coinhunter import cli


class CLITestCase(unittest.TestCase):
    def test_help_includes_v2_commands(self):
        parser = cli.build_parser()
        help_text = parser.format_help()
        self.assertIn("init", help_text)
        self.assertIn("account", help_text)
        self.assertIn("buy", help_text)
        self.assertIn("sell", help_text)
        self.assertIn("portfolio", help_text)
        self.assertIn("opportunity", help_text)
        self.assertIn("--doc", help_text)

    def test_init_dispatches(self):
        captured = {}
        with (
            patch.object(cli, "ensure_init_files", return_value={"force": True, "root": "/tmp/ch"}),
            patch.object(
                cli,
                "install_shell_completion",
                return_value={"shell": "zsh", "installed": True, "path": "/tmp/ch/_coinhunter"},
            ),
            patch.object(
                cli, "print_output", side_effect=lambda payload, **kwargs: captured.setdefault("payload", payload)
            ),
        ):
            result = cli.main(["init", "--force"])
        self.assertEqual(result, 0)
        self.assertTrue(captured["payload"]["force"])
        self.assertIn("completion", captured["payload"])

    def test_old_command_is_rejected(self):
        with self.assertRaises(SystemExit):
            cli.main(["exec", "bal"])

    def test_runtime_error_is_rendered_cleanly(self):
        stderr = io.StringIO()
        with patch.object(cli, "load_config", side_effect=RuntimeError("boom")), patch("sys.stderr", stderr):
            result = cli.main(["market", "tickers", "BTCUSDT"])
        self.assertEqual(result, 1)
        self.assertIn("error: boom", stderr.getvalue())

    def test_buy_dispatches(self):
        captured = {}
        with patch.object(cli, "load_config", return_value={"binance": {"spot_base_url": "https://test", "recv_window": 5000}, "trading": {"dry_run_default": True}}), patch.object(
            cli, "get_binance_credentials", return_value={"api_key": "k", "api_secret": "s"}
        ), patch.object(
            cli, "SpotBinanceClient"
        ), patch.object(
            cli.trade_service, "execute_spot_trade", return_value={"trade": {"status": "DRY_RUN"}}
        ), patch.object(
            cli, "print_output", side_effect=lambda payload, **kwargs: captured.setdefault("payload", payload)
        ):
            result = cli.main(["buy", "BTCUSDT", "-Q", "100"])
        self.assertEqual(result, 0)
        self.assertEqual(captured["payload"]["trade"]["status"], "DRY_RUN")

    def test_sell_dispatches(self):
        captured = {}
        with patch.object(cli, "load_config", return_value={"binance": {"spot_base_url": "https://test", "recv_window": 5000}, "trading": {"dry_run_default": True}}), patch.object(
            cli, "get_binance_credentials", return_value={"api_key": "k", "api_secret": "s"}
        ), patch.object(
            cli, "SpotBinanceClient"
        ), patch.object(
            cli.trade_service, "execute_spot_trade", return_value={"trade": {"status": "DRY_RUN"}}
        ), patch.object(
            cli, "print_output", side_effect=lambda payload, **kwargs: captured.setdefault("payload", payload)
        ):
            result = cli.main(["sell", "BTCUSDT", "-q", "0.01"])
        self.assertEqual(result, 0)
        self.assertEqual(captured["payload"]["trade"]["status"], "DRY_RUN")

    def test_doc_flag_prints_tui_documentation(self):
        stdout = io.StringIO()
        with patch("sys.stdout", stdout):
            result = cli.main(["market", "tickers", "--doc"])
        self.assertEqual(result, 0)
        output = stdout.getvalue()
        self.assertIn("TUI Output", output)
        self.assertIn("Last Price", output)
        self.assertIn("BTCUSDT", output)

    def test_doc_flag_prints_json_documentation(self):
        stdout = io.StringIO()
        with patch("sys.stdout", stdout):
            result = cli.main(["market", "tickers", "--doc", "--agent"])
        self.assertEqual(result, 0)
        output = stdout.getvalue()
        self.assertIn("JSON Output", output)
        self.assertIn("last_price", output)
        self.assertIn("BTCUSDT", output)

    def test_account_dispatches(self):
        captured = {}
        with (
            patch.object(
                cli, "load_config", return_value={"binance": {"spot_base_url": "https://test", "recv_window": 5000}, "market": {"default_quote": "USDT"}, "trading": {"dust_usdt_threshold": 10.0}}
            ),
            patch.object(cli, "get_binance_credentials", return_value={"api_key": "k", "api_secret": "s"}),
            patch.object(cli, "SpotBinanceClient"),
            patch.object(
                cli.account_service, "get_balances", return_value={"balances": [{"asset": "BTC", "is_dust": False}]}
            ),
            patch.object(
                cli, "print_output", side_effect=lambda payload, **kwargs: captured.setdefault("payload", payload)
            ),
        ):
            result = cli.main(["account"])
        self.assertEqual(result, 0)
        self.assertEqual(captured["payload"]["balances"][0]["asset"], "BTC")

    def test_upgrade_dispatches(self):
        captured = {}
        with (
            patch.object(cli, "self_upgrade", return_value={"command": "pipx upgrade coinhunter", "returncode": 0}),
            patch.object(
                cli, "print_output", side_effect=lambda payload, **kwargs: captured.setdefault("payload", payload)
            ),
        ):
            result = cli.main(["upgrade"])
        self.assertEqual(result, 0)
        self.assertEqual(captured["payload"]["returncode"], 0)

    def test_portfolio_dispatches(self):
        captured = {}
        with (
            patch.object(
                cli, "load_config", return_value={"binance": {"spot_base_url": "https://test", "recv_window": 5000}, "market": {"default_quote": "USDT"}, "opportunity": {"top_n": 10}}
            ),
            patch.object(cli, "get_binance_credentials", return_value={"api_key": "k", "api_secret": "s"}),
            patch.object(cli, "SpotBinanceClient"),
            patch.object(
                cli.portfolio_service, "analyze_portfolio", return_value={"recommendations": [{"symbol": "BTCUSDT", "score": 0.75}]}
            ),
            patch.object(
                cli, "print_output", side_effect=lambda payload, **kwargs: captured.setdefault("payload", payload)
            ),
        ):
            result = cli.main(["portfolio"])
        self.assertEqual(result, 0)
        self.assertEqual(captured["payload"]["recommendations"][0]["symbol"], "BTCUSDT")

    def test_opportunity_dispatches(self):
        captured = {}
        with (
            patch.object(
                cli, "load_config", return_value={"binance": {"spot_base_url": "https://test", "recv_window": 5000}, "market": {"default_quote": "USDT"}, "opportunity": {"top_n": 10}}
            ),
            patch.object(cli, "get_binance_credentials", return_value={"api_key": "k", "api_secret": "s"}),
            patch.object(cli, "SpotBinanceClient"),
            patch.object(
                cli.opportunity_service,
                "scan_opportunities",
                return_value={"recommendations": [{"symbol": "BTCUSDT", "score": 0.82}]},
            ),
            patch.object(
                cli, "print_output", side_effect=lambda payload, **kwargs: captured.setdefault("payload", payload)
            ),
        ):
            result = cli.main(["opportunity", "-s", "BTCUSDT", "ETHUSDT"])
        self.assertEqual(result, 0)
        self.assertEqual(captured["payload"]["recommendations"][0]["symbol"], "BTCUSDT")

    def test_catlog_dispatches(self):
        captured = {}
        with (
            patch.object(
                cli, "read_audit_log", return_value=[{"timestamp": "2026-04-17T12:00:00Z", "event": "test_event"}]
            ),
            patch.object(
                cli, "print_output", side_effect=lambda payload, **kwargs: captured.setdefault("payload", payload)
            ),
        ):
            result = cli.main(["catlog", "-n", "5", "-o", "10"])
        self.assertEqual(result, 0)
        self.assertEqual(captured["payload"]["limit"], 5)
        self.assertEqual(captured["payload"]["offset"], 10)
        self.assertIn("entries", captured["payload"])
        self.assertEqual(captured["payload"]["total"], 1)

    def test_config_get_dispatches(self):
        captured = {}
        with (
            patch.object(cli, "load_config", return_value={"binance": {"recv_window": 5000}}),
            patch.object(
                cli, "print_output", side_effect=lambda payload, **kwargs: captured.setdefault("payload", payload)
            ),
        ):
            result = cli.main(["config", "get", "binance.recv_window"])
        self.assertEqual(result, 0)
        self.assertEqual(captured["payload"]["binance.recv_window"], 5000)

    def test_config_set_dispatches(self):
        import tempfile

        with tempfile.NamedTemporaryFile(mode="w", suffix=".toml", delete=False) as f:
            f.write('[binance]\nrecv_window = 5000\n')
            tmp_path = f.name

        with patch.object(cli, "get_runtime_paths") as mock_paths:
            mock_paths.return_value.config_file = __import__("pathlib").Path(tmp_path)
            result = cli.main(["config", "set", "binance.recv_window", "10000"])
        self.assertEqual(result, 0)

        # Verify the file was updated
        content = __import__("pathlib").Path(tmp_path).read_text()
        self.assertIn("recv_window = 10000", content)
        __import__("os").unlink(tmp_path)

    def test_config_key_dispatches(self):
        import tempfile

        with tempfile.NamedTemporaryFile(mode="w", suffix=".env", delete=False) as f:
            f.write("BINANCE_API_KEY=\n")
            tmp_path = f.name

        with patch.object(cli, "get_runtime_paths") as mock_paths:
            mock_paths.return_value.env_file = __import__("pathlib").Path(tmp_path)
            result = cli.main(["config", "key", "test_key_value"])
        self.assertEqual(result, 0)

        content = __import__("pathlib").Path(tmp_path).read_text()
        self.assertIn("BINANCE_API_KEY=test_key_value", content)
        __import__("os").unlink(tmp_path)

    def test_config_secret_dispatches(self):
        import tempfile

        with tempfile.NamedTemporaryFile(mode="w", suffix=".env", delete=False) as f:
            f.write("BINANCE_API_SECRET=\n")
            tmp_path = f.name

        with patch.object(cli, "get_runtime_paths") as mock_paths:
            mock_paths.return_value.env_file = __import__("pathlib").Path(tmp_path)
            result = cli.main(["config", "secret", "test_secret_value"])
        self.assertEqual(result, 0)

        content = __import__("pathlib").Path(tmp_path).read_text()
        self.assertIn("BINANCE_API_SECRET=test_secret_value", content)
        __import__("os").unlink(tmp_path)

    def test_opportunity_dataset_dispatches_without_private_client(self):
        captured = {}
        config = {"market": {"default_quote": "USDT"}, "opportunity": {}}
        with (
            patch.object(cli, "load_config", return_value=config),
            patch.object(cli, "_load_spot_client", side_effect=AssertionError("dataset should use public data")),
            patch.object(
                cli.opportunity_dataset_service,
                "collect_opportunity_dataset",
                return_value={"path": "/tmp/dataset.json", "symbols": ["BTCUSDT"]},
            ) as collect_mock,
            patch.object(
                cli,
                "print_output",
                side_effect=lambda payload, **kwargs: captured.update({"payload": payload, "agent": kwargs["agent"]}),
            ),
        ):
            result = cli.main(
                ["opportunity", "dataset", "--symbols", "BTCUSDT", "--simulate-days", "3", "--run-days", "7", "--agent"]
            )

        self.assertEqual(result, 0)
        self.assertEqual(captured["payload"]["path"], "/tmp/dataset.json")
        self.assertTrue(captured["agent"])
        collect_mock.assert_called_once_with(
            config,
            symbols=["BTCUSDT"],
            simulate_days=3.0,
            run_days=7.0,
            output_path=None,
        )

    def test_opportunity_evaluate_dispatches_without_private_client(self):
        captured = {}
        config = {"market": {"default_quote": "USDT"}, "opportunity": {}}
        with (
            patch.object(cli, "load_config", return_value=config),
            patch.object(cli, "_load_spot_client", side_effect=AssertionError("evaluate should use dataset only")),
            patch.object(
                cli.opportunity_evaluation_service,
                "evaluate_opportunity_dataset",
                return_value={"summary": {"count": 1, "correct": 1}},
            ) as evaluate_mock,
            patch.object(
                cli,
                "print_output",
                side_effect=lambda payload, **kwargs: captured.update({"payload": payload, "agent": kwargs["agent"]}),
            ),
        ):
            result = cli.main(
                [
                    "opportunity",
                    "evaluate",
                    "/tmp/dataset.json",
                    "--horizon-hours",
                    "6",
                    "--take-profit-pct",
                    "2",
                    "--stop-loss-pct",
                    "1.5",
                    "--setup-target-pct",
                    "1",
                    "--lookback",
                    "24",
                    "--top-n",
                    "3",
                    "--examples",
                    "5",
                    "--agent",
                ]
            )

        self.assertEqual(result, 0)
        self.assertEqual(captured["payload"]["summary"]["correct"], 1)
        self.assertTrue(captured["agent"])
        evaluate_mock.assert_called_once_with(
            config,
            dataset_path="/tmp/dataset.json",
            horizon_hours=6.0,
            take_profit=0.02,
            stop_loss=0.015,
            setup_target=0.01,
            lookback=24,
            top_n=3,
            max_examples=5,
        )

    def test_opportunity_optimize_dispatches_without_private_client(self):
        captured = {}
        config = {"market": {"default_quote": "USDT"}, "opportunity": {}}
        with (
            patch.object(cli, "load_config", return_value=config),
            patch.object(cli, "_load_spot_client", side_effect=AssertionError("optimize should use dataset only")),
            patch.object(
                cli.opportunity_evaluation_service,
                "optimize_opportunity_model",
                return_value={"best": {"summary": {"accuracy": 0.7}}},
            ) as optimize_mock,
            patch.object(
                cli,
                "print_output",
                side_effect=lambda payload, **kwargs: captured.update({"payload": payload, "agent": kwargs["agent"]}),
            ),
        ):
            result = cli.main(
                [
                    "opportunity",
                    "optimize",
                    "/tmp/dataset.json",
                    "--horizon-hours",
                    "6",
                    "--take-profit-pct",
                    "2",
                    "--stop-loss-pct",
                    "1.5",
                    "--setup-target-pct",
                    "1",
                    "--lookback",
                    "24",
                    "--top-n",
                    "3",
                    "--passes",
                    "1",
                    "--agent",
                ]
            )

        self.assertEqual(result, 0)
        self.assertEqual(captured["payload"]["best"]["summary"]["accuracy"], 0.7)
        self.assertTrue(captured["agent"])
        optimize_mock.assert_called_once_with(
            config,
            dataset_path="/tmp/dataset.json",
            horizon_hours=6.0,
            take_profit=0.02,
            stop_loss=0.015,
            setup_target=0.01,
            lookback=24,
            top_n=3,
            passes=1,
        )
101  tests/test_config_runtime.py  Normal file
@@ -0,0 +1,101 @@
"""Config and runtime tests."""

from __future__ import annotations

import os
import tempfile
import unittest
from pathlib import Path
from unittest.mock import patch

from coinhunter.config import (
    ensure_init_files,
    get_binance_credentials,
    load_config,
    load_env_file,
)
from coinhunter.runtime import get_runtime_paths


class ConfigRuntimeTestCase(unittest.TestCase):
    def test_init_files_created_in_coinhunter_home(self):
        with (
            tempfile.TemporaryDirectory() as tmp_dir,
            patch.dict(os.environ, {"COINHUNTER_HOME": str(Path(tmp_dir) / "home")}, clear=False),
        ):
            paths = get_runtime_paths()
            payload = ensure_init_files(paths)
            self.assertTrue(paths.config_file.exists())
            self.assertTrue(paths.env_file.exists())
            self.assertTrue(paths.logs_dir.exists())
            self.assertEqual(payload["root"], str(paths.root))

    def test_load_config_and_env(self):
        with (
            tempfile.TemporaryDirectory() as tmp_dir,
            patch.dict(
                os.environ,
                {"COINHUNTER_HOME": str(Path(tmp_dir) / "home")},
                clear=False,
            ),
        ):
            paths = get_runtime_paths()
            ensure_init_files(paths)
            paths.env_file.write_text("BINANCE_API_KEY=abc\nBINANCE_API_SECRET=def\n", encoding="utf-8")

            config = load_config(paths)
            loaded = load_env_file(paths)

            self.assertEqual(config["market"]["default_quote"], "USDT")
            self.assertEqual(loaded["BINANCE_API_KEY"], "abc")
            self.assertEqual(os.environ["BINANCE_API_SECRET"], "def")

    def test_env_file_overrides_existing_environment(self):
        with (
            tempfile.TemporaryDirectory() as tmp_dir,
            patch.dict(
                os.environ,
                {"COINHUNTER_HOME": str(Path(tmp_dir) / "home"), "BINANCE_API_KEY": "old_key"},
                clear=False,
            ),
        ):
            paths = get_runtime_paths()
            ensure_init_files(paths)
            paths.env_file.write_text("BINANCE_API_KEY=new_key\nBINANCE_API_SECRET=new_secret\n", encoding="utf-8")

            load_env_file(paths)

            self.assertEqual(os.environ["BINANCE_API_KEY"], "new_key")
            self.assertEqual(os.environ["BINANCE_API_SECRET"], "new_secret")

    def test_missing_credentials_raise(self):
        with (
            tempfile.TemporaryDirectory() as tmp_dir,
            patch.dict(
                os.environ,
                {"COINHUNTER_HOME": str(Path(tmp_dir) / "home")},
                clear=False,
            ),
        ):
            os.environ.pop("BINANCE_API_KEY", None)
            os.environ.pop("BINANCE_API_SECRET", None)
            paths = get_runtime_paths()
            ensure_init_files(paths)
            with self.assertRaisesRegex(RuntimeError, "Missing BINANCE_API_KEY"):
                get_binance_credentials(paths)

    def test_permission_error_is_explained(self):
        with (
            tempfile.TemporaryDirectory() as tmp_dir,
            patch.dict(
                os.environ,
                {"COINHUNTER_HOME": str(Path(tmp_dir) / "home")},
                clear=False,
            ),
        ):
            paths = get_runtime_paths()
            with (
                patch("coinhunter.config.ensure_runtime_dirs", side_effect=PermissionError("no write access")),
                self.assertRaisesRegex(RuntimeError, "Set COINHUNTER_HOME to a writable directory"),
            ):
                ensure_init_files(paths)
280  tests/test_opportunity_dataset_service.py  Normal file
@@ -0,0 +1,280 @@
|
||||
"""Opportunity dataset collection tests."""

from __future__ import annotations

import json
import tempfile
import unittest
from datetime import datetime, timezone
from pathlib import Path

from coinhunter.services import (
    opportunity_dataset_service,
    opportunity_evaluation_service,
)


class OpportunityDatasetServiceTestCase(unittest.TestCase):
    def test_default_plan_uses_widest_scan_reference_window(self):
        config = {"opportunity": {"lookback_intervals": ["1h", "4h", "1d"]}}
        plan = opportunity_dataset_service.build_dataset_plan(
            config,
            now=datetime(2026, 4, 21, tzinfo=timezone.utc),
        )

        self.assertEqual(plan.kline_limit, 48)
        self.assertEqual(plan.reference_days, 48.0)
        self.assertEqual(plan.simulate_days, 7.0)
        self.assertEqual(plan.run_days, 7.0)
        self.assertEqual(plan.total_days, 62.0)

    def test_collect_dataset_writes_klines_and_probe_metadata(self):
        config = {
            "binance": {"spot_base_url": "https://api.binance.test"},
            "market": {"default_quote": "USDT"},
            "opportunity": {
                "lookback_intervals": ["1d"],
                "kline_limit": 2,
                "simulate_days": 1,
                "run_days": 1,
                "auto_research": True,
                "research_provider": "coingecko",
            },
        }

        def fake_http_get(url, headers, timeout):
            query = opportunity_dataset_service.parse_query(url)
            interval_seconds = 86400
            start = int(query["startTime"])
            end = int(query["endTime"])
            rows = []
            cursor = start
            index = 0
            while cursor <= end:
                close = 100 + index
                rows.append([cursor, close - 1, close + 1, close - 2, close, 10, cursor + interval_seconds * 1000 - 1, close * 10])
                cursor += interval_seconds * 1000
                index += 1
            return rows

        def fake_http_status(url, headers, timeout):
            return 200, "{}"

        with tempfile.TemporaryDirectory() as tmpdir:
            output = Path(tmpdir) / "dataset.json"
            payload = opportunity_dataset_service.collect_opportunity_dataset(
                config,
                symbols=["BTCUSDT"],
                output_path=str(output),
                http_get=fake_http_get,
                http_status=fake_http_status,
                now=datetime(2026, 4, 21, tzinfo=timezone.utc),
            )
            dataset = json.loads(output.read_text(encoding="utf-8"))

        self.assertEqual(payload["plan"]["reference_days"], 2.0)
        self.assertEqual(payload["plan"]["total_days"], 4.0)
        self.assertEqual(payload["external_history"]["status"], "available")
        self.assertEqual(payload["counts"]["BTCUSDT"]["1d"], 5)
        self.assertEqual(len(dataset["klines"]["BTCUSDT"]["1d"]), 5)


class OpportunityEvaluationServiceTestCase(unittest.TestCase):
    def _rows(self, closes):
        start = int(datetime(2026, 4, 20, tzinfo=timezone.utc).timestamp() * 1000)
        rows = []
        for index, close in enumerate(closes):
            open_time = start + index * 60 * 60 * 1000
            rows.append(
                [
                    open_time,
                    close * 0.995,
                    close * 1.01,
                    close * 0.995,
                    close,
                    100 + index * 10,
                    open_time + 60 * 60 * 1000 - 1,
                    close * (100 + index * 10),
                ]
            )
        return rows

    def test_evaluate_dataset_counts_walk_forward_accuracy(self):
        good = [
            100, 105, 98, 106, 99, 107, 100, 106, 101, 105, 102, 104, 102.5,
            103, 102.8, 103.2, 103.0, 103.4, 103.1, 103.6, 103.3, 103.8,
            104.2, 106, 108.5, 109,
        ]
        weak = [
            100, 99, 98, 97, 96, 95, 94, 93, 92, 91, 90, 89, 88, 87, 86, 85,
            84, 83, 82, 81, 80, 79, 78, 77, 76, 75,
        ]
        good_rows = self._rows(good)
        weak_rows = self._rows(weak)
        simulation_start = datetime.fromtimestamp(good_rows[23][0] / 1000, tz=timezone.utc)
        simulation_end = datetime.fromtimestamp(good_rows[24][0] / 1000, tz=timezone.utc)
        dataset = {
            "metadata": {
                "symbols": ["GOODUSDT", "WEAKUSDT"],
                "plan": {
                    "intervals": ["1h"],
                    "simulate_days": 1 / 12,
                    "simulation_start": simulation_start.isoformat().replace("+00:00", "Z"),
                    "simulation_end": simulation_end.isoformat().replace("+00:00", "Z"),
                },
            },
            "klines": {
                "GOODUSDT": {"1h": good_rows},
                "WEAKUSDT": {"1h": weak_rows},
            },
        }
        config = {
            "signal": {"lookback_interval": "1h"},
            "opportunity": {
                "top_n": 2,
                "min_quote_volume": 0.0,
                "entry_threshold": 1.5,
                "watch_threshold": 0.6,
                "min_trigger_score": 0.45,
                "min_setup_score": 0.35,
            },
        }

        with tempfile.TemporaryDirectory() as tmpdir:
            path = Path(tmpdir) / "dataset.json"
            path.write_text(json.dumps(dataset), encoding="utf-8")
            result = opportunity_evaluation_service.evaluate_opportunity_dataset(
                config,
                dataset_path=str(path),
                take_profit=0.02,
                stop_loss=0.015,
                setup_target=0.01,
                max_examples=2,
            )

        self.assertEqual(result["summary"]["count"], 2)
        self.assertEqual(result["summary"]["correct"], 2)
        self.assertEqual(result["summary"]["accuracy"], 1.0)
        self.assertEqual(result["by_action"]["entry"]["correct"], 1)
        self.assertEqual(result["trade_simulation"]["wins"], 1)

    def test_optimize_model_reports_recommended_weights(self):
        rows = self._rows(
            [
                100, 105, 98, 106, 99, 107, 100, 106, 101, 105, 102, 104,
                102.5, 103, 102.8, 103.2, 103.0, 103.4, 103.1, 103.6, 103.3,
                103.8, 104.2, 106, 108.5, 109,
            ]
        )
        simulation_start = datetime.fromtimestamp(rows[23][0] / 1000, tz=timezone.utc)
        simulation_end = datetime.fromtimestamp(rows[24][0] / 1000, tz=timezone.utc)
        dataset = {
            "metadata": {
                "symbols": ["GOODUSDT"],
                "plan": {
                    "intervals": ["1h"],
                    "simulate_days": 1 / 12,
                    "simulation_start": simulation_start.isoformat().replace("+00:00", "Z"),
                    "simulation_end": simulation_end.isoformat().replace("+00:00", "Z"),
                },
            },
            "klines": {"GOODUSDT": {"1h": rows}},
        }
        config = {
            "signal": {"lookback_interval": "1h"},
            "opportunity": {
                "top_n": 1,
                "min_quote_volume": 0.0,
                "entry_threshold": 1.5,
                "watch_threshold": 0.6,
                "min_trigger_score": 0.45,
                "min_setup_score": 0.35,
            },
        }

        with tempfile.TemporaryDirectory() as tmpdir:
            path = Path(tmpdir) / "dataset.json"
            path.write_text(json.dumps(dataset), encoding="utf-8")
            result = opportunity_evaluation_service.optimize_opportunity_model(
                config,
                dataset_path=str(path),
                passes=1,
                take_profit=0.02,
                stop_loss=0.015,
                setup_target=0.01,
            )

        self.assertIn("baseline", result)
        self.assertIn("best", result)
        self.assertIn("opportunity.model_weights.trigger", result["recommended_config"])
        self.assertEqual(result["search"]["optimized"], "model_weights_only")
tests/test_opportunity_evaluation_service.py (new file, 90 lines)
@@ -0,0 +1,90 @@
"""Opportunity historical evaluation tests."""

from __future__ import annotations

import json
import tempfile
import unittest
from pathlib import Path

from coinhunter.services import opportunity_evaluation_service


def _rows(start_ms: int, closes: list[float]) -> list[list[float]]:
    rows = []
    for index, close in enumerate(closes):
        open_time = start_ms + index * 3_600_000
        volume = 1_000 + index * 10
        rows.append(
            [
                float(open_time),
                close * 0.99,
                close * 1.02,
                close * 0.98,
                close,
                float(volume),
                float(open_time + 3_599_999),
                close * volume,
            ]
        )
    return rows


class OpportunityEvaluationServiceTestCase(unittest.TestCase):
    def test_evaluate_opportunity_dataset_scores_historical_samples(self):
        start_ms = 1_767_225_600_000
        dataset = {
            "metadata": {
                "plan": {
                    "intervals": ["1h"],
                    "simulation_start": "2026-01-01T04:00:00Z",
                    "simulation_end": "2026-01-01T07:00:00Z",
                    "simulate_days": 1,
                }
            },
            "klines": {
                "GOODUSDT": {"1h": _rows(start_ms, [100, 101, 102, 103, 104, 106, 108, 109, 110])},
                "BADUSDT": {"1h": _rows(start_ms, [100, 99, 98, 97, 96, 95, 94, 93, 92])},
            },
        }
        config = {
            "market": {"default_quote": "USDT"},
            "opportunity": {
                "entry_threshold": 1.5,
                "watch_threshold": 0.6,
                "evaluation_horizon_hours": 2.0,
                "evaluation_take_profit_pct": 1.0,
                "evaluation_stop_loss_pct": 2.0,
                "evaluation_setup_target_pct": 0.5,
                "evaluation_lookback": 4,
                "top_n": 2,
            },
        }

        with tempfile.TemporaryDirectory() as tmp_dir:
            dataset_path = Path(tmp_dir) / "opportunity-dataset.json"
            dataset_path.write_text(json.dumps(dataset), encoding="utf-8")

            payload = opportunity_evaluation_service.evaluate_opportunity_dataset(
                config,
                dataset_path=str(dataset_path),
                horizon_hours=2.0,
                take_profit=0.01,
                stop_loss=0.02,
                setup_target=0.005,
                lookback=4,
                top_n=2,
                max_examples=3,
            )

        self.assertEqual(payload["summary"]["symbols"], ["BADUSDT", "GOODUSDT"])
        self.assertEqual(payload["summary"]["interval"], "1h")
        self.assertGreater(payload["summary"]["count"], 0)
        self.assertIn("by_action", payload)
        self.assertIn("trade_simulation", payload)
        self.assertEqual(payload["rules"]["research_mode"], "disabled: dataset has no point-in-time research snapshots")
        self.assertLessEqual(len(payload["examples"]), 3)


if __name__ == "__main__":
    unittest.main()
tests/test_opportunity_service.py (new file, 436 lines)
@@ -0,0 +1,436 @@
"""Signal, opportunity, and portfolio service tests."""

from __future__ import annotations

import unittest
from unittest.mock import patch

from coinhunter.services import (
    opportunity_service,
    portfolio_service,
    research_service,
    signal_service,
)


class FakeSpotClient:
    def account_info(self):
        return {
            "balances": [
                {"asset": "USDT", "free": "50", "locked": "0"},
                {"asset": "BTC", "free": "0.01", "locked": "0"},
                {"asset": "ETH", "free": "0.5", "locked": "0"},
                {"asset": "DOGE", "free": "1", "locked": "0"},
            ]
        }

    def ticker_price(self, symbols=None):
        mapping = {
            "BTCUSDT": {"symbol": "BTCUSDT", "price": "60000"},
            "ETHUSDT": {"symbol": "ETHUSDT", "price": "3000"},
            "DOGEUSDT": {"symbol": "DOGEUSDT", "price": "0.1"},
        }
        return [mapping[symbol] for symbol in symbols]

    def ticker_stats(self, symbols=None, *, window="1d"):
        rows = {
            "BTCUSDT": {
                "symbol": "BTCUSDT",
                "lastPrice": "60000",
                "priceChangePercent": "5",
                "quoteVolume": "9000000",
                "highPrice": "60200",
                "lowPrice": "55000",
            },
            "ETHUSDT": {
                "symbol": "ETHUSDT",
                "lastPrice": "3000",
                "priceChangePercent": "3",
                "quoteVolume": "8000000",
                "highPrice": "3100",
                "lowPrice": "2800",
            },
            "SOLUSDT": {
                "symbol": "SOLUSDT",
                "lastPrice": "150",
                "priceChangePercent": "8",
                "quoteVolume": "10000000",
                "highPrice": "152",
                "lowPrice": "130",
            },
            "DOGEUSDT": {
                "symbol": "DOGEUSDT",
                "lastPrice": "0.1",
                "priceChangePercent": "1",
                "quoteVolume": "100",
                "highPrice": "0.11",
                "lowPrice": "0.09",
            },
        }
        if not symbols:
            return list(rows.values())
        return [rows[symbol] for symbol in symbols]

    def exchange_info(self):
        return {
            "symbols": [
                {"symbol": "BTCUSDT", "status": "TRADING"},
                {"symbol": "ETHUSDT", "status": "TRADING"},
                {"symbol": "SOLUSDT", "status": "TRADING"},
                {"symbol": "DOGEUSDT", "status": "TRADING"},
            ]
        }

    def klines(self, symbol, interval, limit):
        curves = {
            "BTCUSDT": [50000, 52000, 54000, 56000, 58000, 59000, 60000],
            "ETHUSDT": [2600, 2650, 2700, 2800, 2900, 2950, 3000],
            "SOLUSDT": [120, 125, 130, 135, 140, 145, 150],
            "DOGEUSDT": [0.11, 0.108, 0.105, 0.103, 0.102, 0.101, 0.1],
        }[symbol]
        rows = []
        for index, close in enumerate(curves[-limit:]):
            rows.append(
                [
                    index,
                    close * 0.98,
                    close * 1.01,
                    close * 0.97,
                    close,
                    100 + index * 10,
                    index + 1,
                    close * (100 + index * 10),
                ]
            )
        return rows


class DustOverlapSpotClient(FakeSpotClient):
    def account_info(self):
        return {"balances": [{"asset": "XRP", "free": "5", "locked": "0"}]}

    def ticker_price(self, symbols=None):
        mapping = {"XRPUSDT": {"symbol": "XRPUSDT", "price": "1.5"}}
        return [mapping[symbol] for symbol in symbols]

    def ticker_stats(self, symbols=None, *, window="1d"):
        rows = {
            "XRPUSDT": {
                "symbol": "XRPUSDT",
                "lastPrice": "1.5",
                "priceChangePercent": "10",
                "quoteVolume": "5000000",
                "highPrice": "1.52",
                "lowPrice": "1.2",
            }
        }
        if not symbols:
            return list(rows.values())
        return [rows[symbol] for symbol in symbols]

    def exchange_info(self):
        return {"symbols": [{"symbol": "XRPUSDT", "status": "TRADING"}]}

    def klines(self, symbol, interval, limit):
        rows = []
        setup_curve = [
            1.4151, 1.4858, 1.3868, 1.5, 1.4009, 1.5142, 1.4151, 1.5, 1.4292,
            1.4858, 1.4434, 1.4717, 1.4505, 1.4575, 1.4547, 1.4604, 1.4575,
            1.4632, 1.4599, 1.466, 1.4618, 1.4698, 1.4745, 1.5,
        ]
        for index, close in enumerate(setup_curve[-limit:]):
            rows.append([index, close * 0.98, close * 1.01, close * 0.97, close, 100 + index * 10, index + 1, close * 100])
        return rows


class OpportunityPatternSpotClient:
    def account_info(self):
        return {"balances": [{"asset": "USDT", "free": "100", "locked": "0"}]}

    def ticker_price(self, symbols=None):
        return []

    def ticker_stats(self, symbols=None, *, window="1d"):
        rows = {
            "SETUPUSDT": {
                "symbol": "SETUPUSDT",
                "lastPrice": "106",
                "priceChangePercent": "4",
                "quoteVolume": "10000000",
                "highPrice": "107",
                "lowPrice": "98",
            },
            "CHASEUSDT": {
                "symbol": "CHASEUSDT",
                "lastPrice": "150",
                "priceChangePercent": "18",
                "quoteVolume": "9000000",
                "highPrice": "152",
                "lowPrice": "120",
            },
        }
        if not symbols:
            return list(rows.values())
        return [rows[symbol] for symbol in symbols]

    def exchange_info(self):
        return {
            "symbols": [
                {"symbol": "SETUPUSDT", "status": "TRADING"},
                {"symbol": "CHASEUSDT", "status": "TRADING"},
            ]
        }

    def klines(self, symbol, interval, limit):
        curves = {
            "SETUPUSDT": [
                100, 105, 98, 106, 99, 107, 100, 106, 101, 105, 102, 104,
                102.5, 103, 102.8, 103.2, 103.0, 103.4, 103.1, 103.6, 103.3,
                103.8, 104.2, 106,
            ],
            "CHASEUSDT": [120, 125, 130, 135, 140, 145, 150],
        }[symbol]
        rows = []
        for index, close in enumerate(curves[-limit:]):
            rows.append([index, close * 0.98, close * 1.01, close * 0.97, close, 100 + index * 20, index + 1, close * 100])
        return rows


class OpportunityServiceTestCase(unittest.TestCase):
    def setUp(self):
        self.config = {
            "market": {"default_quote": "USDT", "universe_allowlist": [], "universe_denylist": []},
            "trading": {"dust_usdt_threshold": 10.0},
            "signal": {
                "lookback_interval": "1h",
                "trend": 1.0,
                "momentum": 1.0,
                "breakout": 0.8,
                "volume": 0.7,
                "volatility_penalty": 0.5,
            },
            "opportunity": {
                "scan_limit": 10,
                "top_n": 5,
                "min_quote_volume": 1000.0,
                "entry_threshold": 1.5,
                "watch_threshold": 0.6,
                "overlap_penalty": 0.6,
                "auto_research": False,
                "research_provider": "coingecko",
                "research_timeout_seconds": 4.0,
                "risk_limits": {
                    "min_liquidity": 0.0,
                    "max_overextension": 0.08,
                    "max_downside_risk": 0.3,
                    "max_unlock_risk": 0.75,
                    "max_regulatory_risk": 0.75,
                    "min_quality_for_add": 0.0,
                },
                "weights": {
                    "trend": 1.0,
                    "momentum": 1.0,
                    "breakout": 0.8,
                    "pullback": 0.4,
                    "volume": 0.7,
                    "liquidity": 0.3,
                    "trend_alignment": 0.8,
                    "fundamental": 0.8,
                    "tokenomics": 0.7,
                    "catalyst": 0.5,
                    "adoption": 0.4,
                    "smart_money": 0.3,
                    "volatility_penalty": 0.5,
                    "overextension_penalty": 0.7,
                    "downside_penalty": 0.5,
                    "unlock_penalty": 0.8,
                    "regulatory_penalty": 0.4,
                    "position_concentration_penalty": 0.6,
                },
            },
            "portfolio": {
                "add_threshold": 1.5,
                "hold_threshold": 0.6,
                "trim_threshold": 0.2,
                "exit_threshold": -0.2,
                "max_position_weight": 0.6,
            },
        }

    def test_portfolio_analysis_ignores_dust_and_emits_recommendations(self):
        events = []
        with patch.object(portfolio_service, "audit_event", side_effect=lambda event, payload, **kwargs: events.append(event)):
            payload = portfolio_service.analyze_portfolio(self.config, spot_client=FakeSpotClient())
        symbols = [item["symbol"] for item in payload["recommendations"]]
        self.assertNotIn("DOGEUSDT", symbols)
        self.assertEqual(symbols, ["BTCUSDT", "ETHUSDT"])
        self.assertEqual(payload["recommendations"][0]["action"], "add")
        self.assertEqual(payload["recommendations"][1]["action"], "hold")
        self.assertEqual(events, ["opportunity_portfolio_generated"])

    def test_scan_is_deterministic(self):
        with patch.object(opportunity_service, "audit_event", return_value=None):
            payload = opportunity_service.scan_opportunities(
                self.config | {"opportunity": self.config["opportunity"] | {"top_n": 2}},
                spot_client=OpportunityPatternSpotClient(),
            )
        self.assertEqual([item["symbol"] for item in payload["recommendations"]], ["SETUPUSDT", "CHASEUSDT"])
        self.assertEqual([item["action"] for item in payload["recommendations"]], ["entry", "avoid"])
        self.assertGreater(payload["recommendations"][0]["metrics"]["setup_score"], 0.6)
        self.assertGreater(payload["recommendations"][1]["metrics"]["extension_penalty"], 1.0)

    def test_scan_respects_ignore_dust_for_overlap_penalty(self):
        client = DustOverlapSpotClient()
        base_config = self.config | {
            "opportunity": self.config["opportunity"] | {
                "top_n": 1,
                "ignore_dust": True,
                "overlap_penalty": 2.0,
            }
        }
        with patch.object(opportunity_service, "audit_event", return_value=None):
            ignored = opportunity_service.scan_opportunities(base_config, spot_client=client, symbols=["XRPUSDT"])
            included = opportunity_service.scan_opportunities(
                base_config | {"opportunity": base_config["opportunity"] | {"ignore_dust": False}},
                spot_client=client,
                symbols=["XRPUSDT"],
            )
        ignored_rec = ignored["recommendations"][0]
        included_rec = included["recommendations"][0]

        self.assertEqual(ignored_rec["action"], "entry")
        self.assertEqual(ignored_rec["metrics"]["position_weight"], 0.0)
        self.assertEqual(included_rec["action"], "entry")
        self.assertEqual(included_rec["metrics"]["position_weight"], 1.0)
        self.assertLess(included_rec["score"], ignored_rec["score"])

    def test_signal_score_handles_empty_klines(self):
        score, metrics = signal_service.score_market_signal([], [], {"price_change_pct": 1.0}, {})
        self.assertEqual(score, 0.0)
        self.assertEqual(metrics["trend"], 0.0)

    def test_scan_uses_automatic_external_research(self):
        config = self.config | {
            "opportunity": self.config["opportunity"]
            | {
                "auto_research": True,
                "top_n": 2,
            }
        }
        with (
            patch.object(opportunity_service, "audit_event", return_value=None),
            patch.object(
                opportunity_service,
                "get_external_research",
                return_value={
                    "SOLUSDT": {
                        "fundamental": 0.9,
                        "tokenomics": 0.8,
                        "catalyst": 0.9,
                        "adoption": 0.8,
                        "smart_money": 0.7,
                        "unlock_risk": 0.1,
                        "regulatory_risk": 0.1,
                        "research_confidence": 0.9,
                    }
                },
            ) as research_mock,
        ):
            payload = opportunity_service.scan_opportunities(config, spot_client=FakeSpotClient())

        research_mock.assert_called_once()
        sol = next(item for item in payload["recommendations"] if item["symbol"] == "SOLUSDT")
        self.assertEqual(sol["metrics"]["fundamental"], 0.9)
        self.assertEqual(sol["metrics"]["research_confidence"], 0.9)

    def test_weak_setup_and_trigger_becomes_avoid(self):
        metrics = {
            "extension_penalty": 0.0,
            "recent_runup": 0.0,
            "breakout_pct": -0.01,
            "setup_score": 0.12,
            "trigger_score": 0.18,
            "edge_score": 0.0,
        }
        action, reasons, confidence = opportunity_service._action_for_opportunity(
            2.5,
            metrics,
            {
                "entry_threshold": 1.5,
                "watch_threshold": 0.6,
                "min_trigger_score": 0.45,
                "min_setup_score": 0.35,
            },
        )

        self.assertEqual(action, "avoid")
        self.assertIn("setup, trigger, or overall quality is too weak", reasons[0])
        self.assertEqual(confidence, 50)


class ResearchServiceTestCase(unittest.TestCase):
    def test_coingecko_market_data_becomes_research_signals(self):
        signals = research_service._coingecko_market_to_signals(
            {
                "id": "solana",
                "symbol": "sol",
                "market_cap": 80_000_000_000,
                "fully_diluted_valuation": 95_000_000_000,
                "total_volume": 5_000_000_000,
                "market_cap_rank": 6,
                "circulating_supply": 550_000_000,
                "total_supply": 600_000_000,
                "max_supply": None,
                "price_change_percentage_7d_in_currency": 12.0,
                "price_change_percentage_30d_in_currency": 35.0,
                "price_change_percentage_200d_in_currency": 80.0,
            },
            is_trending=True,
        )

        self.assertGreater(signals["fundamental"], 0.6)
        self.assertGreater(signals["tokenomics"], 0.8)
        self.assertGreater(signals["catalyst"], 0.6)
        self.assertLess(signals["unlock_risk"], 0.2)
tests/test_trade_service.py (new file, 108 lines)
@@ -0,0 +1,108 @@
"""Trade execution tests."""

from __future__ import annotations

import unittest
from unittest.mock import patch

from coinhunter.services import trade_service


class FakeSpotClient:
    def __init__(self):
        self.calls = []

    def new_order(self, **kwargs):
        self.calls.append(kwargs)
        return {"symbol": kwargs["symbol"], "status": "FILLED", "orderId": 1}


class TradeServiceTestCase(unittest.TestCase):
    def test_spot_market_buy_dry_run_does_not_call_client(self):
        events = []
        with patch.object(
            trade_service, "audit_event", side_effect=lambda event, payload, **kwargs: events.append((event, payload))
        ):
            client = FakeSpotClient()
            payload = trade_service.execute_spot_trade(
                {"trading": {"dry_run_default": False}},
                side="buy",
                symbol="btc/usdt",
                qty=None,
                quote=100,
                order_type="market",
                price=None,
                dry_run=True,
                spot_client=client,
            )
        self.assertEqual(payload["trade"]["status"], "DRY_RUN")
        self.assertEqual(client.calls, [])
        self.assertEqual([event for event, _ in events], ["trade_submitted", "trade_filled"])

    def test_spot_limit_sell_maps_payload(self):
        with patch.object(trade_service, "audit_event", return_value=None):
            client = FakeSpotClient()
            payload = trade_service.execute_spot_trade(
                {"trading": {"dry_run_default": False}},
                side="sell",
                symbol="BTCUSDT",
                qty=0.1,
                quote=None,
                order_type="limit",
                price=90000,
                dry_run=False,
                spot_client=client,
            )
        self.assertEqual(payload["trade"]["status"], "FILLED")
        self.assertEqual(client.calls[0]["timeInForce"], "GTC")

    def test_spot_market_buy_requires_quote(self):
        with (
            patch.object(trade_service, "audit_event", return_value=None),
            self.assertRaisesRegex(RuntimeError, "requires --quote"),
        ):
            trade_service.execute_spot_trade(
                {"trading": {"dry_run_default": False}},
                side="buy",
                symbol="BTCUSDT",
                qty=None,
                quote=None,
                order_type="market",
                price=None,
                dry_run=False,
                spot_client=FakeSpotClient(),
            )

    def test_spot_market_buy_rejects_qty(self):
        with (
            patch.object(trade_service, "audit_event", return_value=None),
            self.assertRaisesRegex(RuntimeError, "accepts --quote only"),
        ):
            trade_service.execute_spot_trade(
                {"trading": {"dry_run_default": False}},
                side="buy",
                symbol="BTCUSDT",
                qty=0.1,
                quote=100,
                order_type="market",
                price=None,
                dry_run=False,
                spot_client=FakeSpotClient(),
            )

    def test_spot_market_sell_rejects_quote(self):
        with (
            patch.object(trade_service, "audit_event", return_value=None),
            self.assertRaisesRegex(RuntimeError, "accepts --qty only"),
        ):
            trade_service.execute_spot_trade(
                {"trading": {"dry_run_default": False}},
                side="sell",
                symbol="BTCUSDT",
                qty=0.1,
                quote=100,
                order_type="market",
                price=None,
                dry_run=False,
                spot_client=FakeSpotClient(),
            )