Compare commits
10 Commits

| SHA1 |
|---|
| 72f5bbcbb7 |
| da93f727e8 |
| 62c40a9776 |
| 01bb54dee5 |
| 759086ebd7 |
| 5fcdd015e1 |
| f59388f69a |
| a61c329496 |
| db981e8e5f |
| e6274d3a00 |
.gitignore (vendored, 1 line added)

```diff
@@ -5,3 +5,4 @@ __pycache__/
 dist/
 build/
 *.egg-info/
+.claude/skills/gstack/
```
CLAUDE.md (new file, 114 lines)

@@ -0,0 +1,114 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Development commands

- **Editable install:** `pip install -e .`
- **Run the CLI locally:** `python -m coinhunter --help`
- **Install for end users:** `./scripts/install_local.sh` (standard `pip install -e .` wrapper)
- **Tests:** There is no test suite yet. The README lists next priorities as adding pytest coverage for runtime paths, the state manager, and the trigger analyzer.
- **Lint / type-check:** Not configured yet.

## CLI command routing

`src/coinhunter/cli.py` is the single entrypoint. It resolves aliases to canonical commands, maps canonical commands to Python modules via `MODULE_MAP`, then imports the module and calls `module.main()` after mutating `sys.argv` to match the display name.
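The alias → `MODULE_MAP` → `module.main()` dispatch described above can be sketched as follows. This is a minimal illustration, not the real `cli.py`; the alias and module entries shown here are assumptions for the example.

```python
# Minimal illustration of the dispatch pattern in cli.py (entries are examples,
# not the real tables).
import importlib
import sys

ALIASES = {"diag": "doctor", "exec": "smart-executor"}  # alias -> canonical
MODULE_MAP = {
    "doctor": "coinhunter.commands.doctor",             # canonical -> module path
    "smart-executor": "coinhunter.commands.smart_executor",
}

def dispatch(argv: list[str]) -> int:
    command = ALIASES.get(argv[0], argv[0])    # resolve alias to canonical name
    module = importlib.import_module(MODULE_MAP[command])
    sys.argv = [command] + argv[1:]            # mutate argv to match display name
    return module.main()
```

Because dispatch goes through `sys.argv` mutation, each command module can keep its own `argparse` setup unchanged.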
Active commands live in `src/coinhunter/commands/` and are thin adapters that delegate to `src/coinhunter/services/`. Root-level backward-compat facades (e.g., `precheck.py`, `smart_executor.py`, `review_engine.py`, `review_context.py`) re-export the moved commands.

## Architecture

### Layer responsibilities

- **CLI (`cli.py`)** — argument parsing, alias resolution, module dispatch.
- **Commands (`commands/`)** — thin, stateless adapters that call into services.
- **Services (`services/`)** — orchestration, domain logic, and exchange interaction.
- **Runtime (`runtime.py`)** — path resolution, env loading, hermes binary discovery.
- **Logger (`logger.py`)** — structured JSONL logging to `~/.coinhunter/logs/`.

### Runtime and environment

`runtime.py` defines `RuntimePaths` and `get_runtime_paths()`. User data lives in `~/.coinhunter/` by default (override with `COINHUNTER_HOME`). Credentials are loaded from `~/.hermes/.env` by default (override with `COINHUNTER_ENV_FILE`). Modules should call `get_runtime_paths()` at function scope rather than eagerly at import time.
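A minimal sketch of that override behavior, assuming `RuntimePaths` exposes `root` and `env_file` fields (names other modules in this diff reference); the real dataclass may carry more fields:

```python
# Sketch of runtime path resolution with env-var overrides; fields beyond
# root/env_file are omitted and the real implementation may differ.
import os
from dataclasses import dataclass
from pathlib import Path

@dataclass(frozen=True)
class RuntimePaths:
    root: Path      # user data home, ~/.coinhunter by default
    env_file: Path  # credentials file, ~/.hermes/.env by default

def get_runtime_paths() -> RuntimePaths:
    root = Path(os.environ.get("COINHUNTER_HOME", str(Path.home() / ".coinhunter")))
    env_file = Path(os.environ.get("COINHUNTER_ENV_FILE", str(Path.home() / ".hermes" / ".env")))
    return RuntimePaths(root=root, env_file=env_file)
```

Calling this at function scope (not at import time) lets tests and wrappers change the env vars before any path is resolved.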
### Smart executor (`exec`)

`commands/smart_executor.py` → `services/smart_executor_service.py` → `services/trade_execution.py`.

Supported verbs: `bal`, `overview`, `hold`, `buy SYMBOL USDT`, `flat SYMBOL`, `rotate FROM TO`.

- `smart_executor_parser.py` normalizes legacy argv and exposes `parse_cli_args()`.
- `trade_common.py` holds a global dry-run flag (`set_dry_run` / `is_dry_run`).
- `execution_state.py` tracks decision IDs in JSON to prevent duplicate executions.
- `exchange_service.py` wraps `ccxt.binance` and handles symbol normalization.
- `portfolio_service.py` manages `positions.json` load/save/reconcile.
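The module-level dry-run flag in `trade_common.py` follows a pattern like this sketch; the `place_order` caller is hypothetical, added only to show how the flag gates live execution:

```python
# Sketch of a module-level dry-run flag (pattern of trade_common.py; the real
# implementation may differ). place_order is a hypothetical caller.
_DRY_RUN = False

def set_dry_run(enabled: bool) -> None:
    global _DRY_RUN
    _DRY_RUN = bool(enabled)

def is_dry_run() -> bool:
    return _DRY_RUN

def place_order(symbol: str, usdt: float) -> dict:
    # When dry-run is on, report a simulated fill instead of hitting the exchange.
    if is_dry_run():
        return {"symbol": symbol, "usdt": usdt, "status": "simulated"}
    raise NotImplementedError("live execution goes through the exchange service")
```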
### Precheck

`commands/precheck.py` → `services/precheck_service.py`.

The precheck workflow:

1. Load and sanitize state (`precheck_state.py` — clears stale triggers and run requests).
2. Build a market snapshot (`precheck_snapshot.py` → `snapshot_builder.py`).
3. Analyze whether to trigger deep analysis (`precheck_analysis.py` → `trigger_analyzer.py`).
4. Update and save state (`precheck_state.update_state_after_observation`).
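The four steps above can be sketched as one orchestration function. Signatures are assumptions; the real service wires the concrete modules rather than taking callables:

```python
# Sketch of the four-step precheck orchestration described above; the callable
# parameters stand in for the concrete precheck_* modules.
def run_precheck(load_state, build_snapshot, analyze_triggers, save_state) -> dict:
    state = load_state()                          # 1. load + sanitize state
    snapshot = build_snapshot(state)              # 2. build market snapshot
    decision = analyze_triggers(state, snapshot)  # 3. should deep analysis trigger?
    save_state(state, snapshot, decision)         # 4. update + persist state
    return {"snapshot": snapshot, "decision": decision}
```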
State is stored in `~/.coinhunter/state/precheck_state.json`.

### Review commands

- `review N` (`commands/review_context.py` → `services/review_service.py`) — generates review context for the last N hours.
- `recap N` (`commands/review_engine.py` → `services/review_service.py`) — generates a full review report by reading JSONL decision/trade/error logs, computing PnL estimates and missed opportunities, and saving a report to `~/.coinhunter/reviews/`.

### Logging model

`logger.py` writes JSONL to dated files in `~/.coinhunter/logs/`:

- `decisions_YYYYMMDD.jsonl`
- `trades_YYYYMMDD.jsonl`
- `errors_YYYYMMDD.jsonl`
- `snapshots_YYYYMMDD.jsonl`

Use `get_logs_last_n_hours(log_type, hours)` to query recent entries.
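A minimal sketch of the dated-JSONL pattern, assuming each entry carries a `ts` timestamp field (the real `logger.py` schema may differ):

```python
# Sketch of dated JSONL logging: one file per log type per UTC day,
# one JSON object per line. The "ts" field is an assumed schema detail.
import json
from datetime import datetime, timezone
from pathlib import Path

def log_event(log_dir: Path, log_type: str, payload: dict) -> Path:
    now = datetime.now(timezone.utc)
    path = log_dir / f"{log_type}_{now:%Y%m%d}.jsonl"  # e.g. decisions_20260101.jsonl
    log_dir.mkdir(parents=True, exist_ok=True)
    entry = {"ts": now.isoformat(), **payload}
    with path.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return path
```

Append-only files plus a per-entry timestamp are what make a time-window query like `get_logs_last_n_hours` cheap to implement.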
## Common command reference

```bash
# Diagnostics
python -m coinhunter diag
python -m coinhunter paths
python -m coinhunter api-check

# Execution (dry-run)
python -m coinhunter exec bal
python -m coinhunter exec overview
python -m coinhunter exec buy ENJUSDT 50 --dry-run
python -m coinhunter exec flat ENJUSDT --dry-run

# Precheck
python -m coinhunter precheck
python -m coinhunter precheck --ack "analysis completed"
python -m coinhunter precheck --mark-run-requested "reason"

# Review
python -m coinhunter review 12
python -m coinhunter recap 12
```

## Skill routing

When the user's request matches an available skill, ALWAYS invoke it using the Skill tool as your FIRST action. Do NOT answer directly, do NOT use other tools first. The skill has specialized workflows that produce better results than ad-hoc answers.

Key routing rules:

- Product ideas, "is this worth building", brainstorming → invoke office-hours
- Bugs, errors, "why is this broken", 500 errors → invoke investigate
- Ship, deploy, push, create PR → invoke ship
- QA, test the site, find bugs → invoke qa
- Code review, check my diff → invoke review
- Update docs after shipping → invoke document-release
- Weekly retro → invoke retro
- Design system, brand → invoke design-consultation
- Visual audit, design polish → invoke design-review
- Architecture review → invoke plan-eng-review
- Save progress, checkpoint, resume → invoke checkpoint
- Code quality, health check → invoke health
LICENSE (new file, 21 lines)

@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 Tacit Lab

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
README.md (368 lines changed)

@@ -1,222 +1,316 @@
````diff
-# coinhunter-cli
-
 <p align="center">
-  <strong>The executable CLI layer for CoinHunter.</strong><br/>
-  Runtime-safe trading operations, precheck orchestration, review tooling, and market probes.
+  <img src="https://capsule-render.vercel.app/api?type=waving&color=0:F7B93E,100:0f0f0f&height=220&section=header&text=%F0%9F%AA%99%20CoinHunter&fontSize=65&fontColor=fff&animation=fadeIn&fontAlignY=32&desc=Trade%20Smarter%20%C2%B7%20Execute%20Faster%20%C2%B7%20Sleep%20Better&descAlignY=55&descSize=18" alt="CoinHunter Banner" />
 </p>

 <p align="center">
-  <img alt="Python" src="https://img.shields.io/badge/python-3.10%2B-blue" />
-  <img alt="Status" src="https://img.shields.io/badge/status-active%20development-orange" />
-  <img alt="Architecture" src="https://img.shields.io/badge/architecture-runtime%20%2B%20commands%20%2B%20services-6f42c1" />
+  <img src="https://readme-typing-svg.demolab.com?font=JetBrains+Mono&weight=500&size=22&duration=2800&pause=800&color=F7B93E&center=true&vCenter=true&width=600&lines=Spot+Trading+Orchestration+for+Terminal+Cowboys;Precheck+%E2%86%92+Execute+%E2%86%92+Review+%E2%86%92+Repeat;JSON-first+CLI+with+Dry-run+Safety" alt="Typing SVG" />
 </p>

-## Why this repo exists
+<p align="center">
+  <strong>Runtime-safe trading operations, precheck orchestration, review tooling, and market probes.</strong>
+</p>

-CoinHunter is evolving from a loose bundle of automation scripts into a proper installable command-line tool.
+<p align="center">
+  <a href="https://pypi.org/project/coinhunter/"><img src="https://img.shields.io/pypi/v/coinhunter?style=flat-square&color=F7B93E&labelColor=1a1a1a" /></a>
+  <a href="#"><img src="https://img.shields.io/badge/python-3.10%2B-3776ab?style=flat-square&logo=python&logoColor=white&labelColor=1a1a1a" /></a>
+  <a href="#"><img src="https://img.shields.io/badge/tests-96%20passed-22c55e?style=flat-square&labelColor=1a1a1a" /></a>
+  <a href="#"><img src="https://img.shields.io/badge/lint-ruff%20%2B%20mypy-8b5cf6?style=flat-square&labelColor=1a1a1a" /></a>
+</p>

-This repository is the tooling layer:
+---

-- Code and executable behavior live here.
-- User runtime state lives in `~/.coinhunter/` by default.
-- Hermes skills can call this CLI instead of embedding large script collections.
-- Runtime paths can be overridden with `COINHUNTER_HOME`, `HERMES_HOME`, `COINHUNTER_ENV_FILE`, and `HERMES_BIN`.
+## What is this?

-In short:
+`coinhunter` is the **executable tooling layer** for CoinHunter — an installable Python CLI that handles trading operations, market probes, precheck orchestration, and review workflows.

-- `coinhunter-cli` = tool
-- CoinHunter skill = strategy / workflow / prompting layer
-- `~/.coinhunter` = user data, logs, state, reviews
+> **Note:** The old package name `coinhunter-cli` is deprecated. Please install `coinhunter` going forward.

-## Current architecture
+| Layer | Responsibility | Location |
+|-------|----------------|----------|
+| **CLI** | Top-level command router | `cli.py` |
+| **Commands** | Thin argument adapters | `commands/` |
+| **Services** | Orchestration & execution logic | `services/` |
+| **Runtime** | Paths, env, locks, config | `runtime.py` |
+| **User Data** | State, logs, reviews, positions | `~/.coinhunter/` |

+> **Separation of concerns:** Code lives in this repo. Your data lives in `~/.coinhunter/`. Strategy and prompting live in Hermes skills.

+---

+## Project Structure

````
````diff
 ```text
-coinhunter-cli/
-├── src/coinhunter/
-│   ├── cli.py         # top-level command router
-│   ├── runtime.py     # runtime paths + env loading
-│   ├── doctor.py      # diagnostics
-│   ├── paths.py       # runtime path inspection
-│   ├── commands/      # thin CLI adapters
-│   ├── services/      # orchestration / application services
-│   └── *.py           # compatibility modules + legacy logic under extraction
-└── README.md
+src/coinhunter/
+├── cli.py                       # Unified command router
+├── runtime.py                   # Runtime paths + env loading
+├── logger.py                    # Structured logging utilities
+├── commands/                    # CLI adapters (thin, stateless)
+│   ├── precheck.py
+│   ├── smart_executor.py
+│   ├── check_api.py
+│   ├── doctor.py
+│   ├── external_gate.py
+│   ├── init_user_state.py
+│   ├── market_probe.py
+│   ├── paths.py
+│   ├── review_context.py
+│   ├── review_engine.py
+│   └── rotate_external_gate_log.py
+├── services/                    # Orchestration & domain logic
+│   ├── exchange_service.py
+│   ├── portfolio_service.py
+│   ├── trade_execution.py
+│   ├── smart_executor_service.py
+│   ├── smart_executor_parser.py
+│   ├── execution_state.py
+│   ├── precheck_service.py
+│   ├── review_service.py        # review generation logic
+│   ├── precheck_constants.py    # thresholds
+│   ├── time_utils.py            # UTC/local time helpers
+│   ├── data_utils.py            # json, hash, float, symbol norm
+│   ├── state_manager.py         # state load/save/sanitize
+│   ├── market_data.py           # exchange, OHLCV, metrics
+│   ├── candidate_scoring.py     # coin selection & scoring
+│   ├── snapshot_builder.py      # precheck snapshot construction
+│   ├── adaptive_profile.py      # trigger profile builder
+│   ├── trigger_analyzer.py      # trigger analysis core
+│   ├── precheck_analysis.py     # failure payloads
+│   ├── precheck_snapshot.py     # snapshot facade
+│   ├── precheck_state.py        # state facade
+│   └── precheck_core.py         # backward-compat export facade
+├── precheck.py                  # Backward-compat root facade
+├── smart_executor.py            # Backward-compat root facade
+└── *.py                         # Other compat / utility modules
 ```

-The repo is actively being refactored toward a cleaner split:
+---

-- `commands/` → argument / CLI adapters
-- `services/` → orchestration and application workflows
-- `runtime/` → paths, env, files, locks, config
-- future `domain/` → trading and precheck core logic

-## Implemented command/service splits

-The first extraction pass is already live:

-- `smart-executor` → `commands.smart_executor` + `services.smart_executor_service`
-- `precheck` → `commands.precheck` + `services.precheck_service`
-- `precheck` internals now also have dedicated service modules for:
-  - `services.precheck_state`
-  - `services.precheck_snapshot`
-  - `services.precheck_analysis`

-This keeps behavior stable while giving the codebase a cleaner landing zone for deeper refactors.

 ## Installation
````
````diff
-Editable install:
+### From PyPI (recommended)

 ```bash
-pip install -e .
+pip install coinhunter
 ```

-Run directly after install:
+This installs the latest stable release and creates the `coinhunter` console script entry point.

+Verify:

 ```bash
 coinhunter --help
 coinhunter --version
 ```

+### Development install (editable)

+If you're working on this repo locally:

+```bash
+pip install -e ".[dev]"
+```

+Or use the convenience script:

+```bash
+./scripts/install_local.sh
+```

+A thin wrapper that runs `pip install -e .` and verifies the entrypoint is on your PATH.

+---

+## Command Reference

+### Short aliases (recommended)

+```bash
+coinhunter diag                              # runtime diagnostics
+coinhunter paths                             # print resolved paths
+coinhunter api-check                         # validate exchange credentials
+coinhunter precheck                          # run precheck snapshot + trigger analysis
+coinhunter exec bal                          # print balances as JSON
+coinhunter exec overview                     # account overview as JSON
+coinhunter exec hold                         # record a HOLD decision
+coinhunter exec buy ENJUSDT 50               # buy $50 of ENJUSDT
+coinhunter exec flat ENJUSDT                 # sell entire ENJUSDT position
+coinhunter exec rotate PEPEUSDT ETHUSDT      # rotate exposure
+coinhunter exec orders                       # list open spot orders
+coinhunter exec order-status ENJUSDT 123456  # check specific order
+coinhunter exec cancel ENJUSDT 123456        # cancel an open order
+coinhunter gate                              # external gate orchestration
+coinhunter review 12                         # generate review context (last 12h)
+coinhunter recap 12                          # generate review report (last 12h)
+coinhunter probe bybit-ticker BTCUSDT        # market probe
+coinhunter rotate-log                        # rotate external gate logs
+```

+### Legacy long forms (still supported)

+```bash
+coinhunter doctor
+coinhunter check-api
+coinhunter smart-executor bal
+coinhunter review-context 12
+coinhunter review-engine 12
+coinhunter market-probe bybit-ticker BTCUSDT
+coinhunter external-gate
+coinhunter rotate-external-gate-log
+```

+### All supported commands

+| Canonical | Aliases |
+|-----------|---------|
+| `check-api` | `api-check` |
+| `doctor` | `diag` |
+| `external-gate` | `gate` |
+| `init` | — |
+| `market-probe` | `probe` |
+| `paths` | — |
+| `precheck` | — |
+| `review-context` | `review` |
+| `review-engine` | `recap` |
+| `rotate-external-gate-log` | `rotate-gate-log`, `rotate-log` |
+| `smart-executor` | `exec` |

+---
````
````diff
 ## Quickstart

-Initialize user state:
+Initialize runtime state:

 ```bash
 coinhunter init
 ```

-Inspect runtime wiring:
+Inspect the environment:

 ```bash
 coinhunter paths
-coinhunter doctor
+coinhunter diag
 ```

-Validate exchange credentials:
+Validate API keys:

 ```bash
-coinhunter check-api
+coinhunter api-check
 ```

-Run precheck / gate plumbing:
+Run the precheck workflow:

 ```bash
 coinhunter precheck
-coinhunter precheck --mark-run-requested "external-gate queued cron run"
-coinhunter precheck --ack "analysis finished"
+coinhunter precheck --ack "analysis completed"
 ```

-Inspect balances or execute trading actions:
+Run the external gate:

 ```bash
-coinhunter smart-executor balances
-coinhunter smart-executor status
-coinhunter smart-executor hold
-coinhunter smart-executor buy ENJUSDT 50
-coinhunter smart-executor sell-all ENJUSDT
+coinhunter gate
 ```

-Generate review data:
+The gate reads `trigger_command` from `~/.coinhunter/config.json` under `external_gate`.

-```bash
-coinhunter review-context 12
-coinhunter review-engine 12
-```
+- By default, no external trigger is configured — gate runs precheck and marks state, then exits cleanly.
+- Set `trigger_command` to a command list to integrate with your own scheduler:

-Probe external market data:
+```json
+{
+  "external_gate": {
+    "trigger_command": ["hermes", "cron", "run", "JOB_ID"]
+  }
+}
+```

-```bash
-coinhunter market-probe bybit-ticker BTCUSDT
-coinhunter market-probe bybit-klines BTCUSDT 60 20
-```
+- Set to `null` or `[]` to explicitly disable the external trigger.

-## Runtime model
+### Dynamic tuning via `config.json`

+You can override internal defaults without editing code by adding keys to `~/.coinhunter/config.json`:

+```json
+{
+  "external_gate": {
+    "trigger_command": ["hermes", "cron", "run", "JOB_ID"]
+  },
+  "exchange": {
+    "min_quote_volume": 200000,
+    "cache_ttl_seconds": 3600
+  },
+  "logging": {
+    "schema_version": 2
+  }
+}
+```

+| Key | Default | Effect |
+|-----|---------|--------|
+| `exchange.min_quote_volume` | `200000` | Minimum 24h quote volume for a symbol to appear in market snapshots |
+| `exchange.cache_ttl_seconds` | `3600` | How long the ccxt exchange instance (and `load_markets()` result) is cached |
+| `logging.schema_version` | `2` | Schema version stamped on every JSONL log entry |
````
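The defaults-plus-override lookup implied by the table above can be sketched as follows. This is a hypothetical helper, not the project's actual loader; only the default values come from the table.

```python
# Sketch of config lookup with fall-back to built-in defaults; the real loader
# in the codebase may differ. Defaults mirror the table above.
import json
from pathlib import Path

DEFAULTS = {
    "exchange": {"min_quote_volume": 200000, "cache_ttl_seconds": 3600},
    "logging": {"schema_version": 2},
}

def get_setting(config_path: Path, section: str, key: str):
    overrides = {}
    if config_path.exists():
        overrides = json.loads(config_path.read_text(encoding="utf-8"))
    # A user-supplied value wins; otherwise fall back to the built-in default.
    return overrides.get(section, {}).get(key, DEFAULTS.get(section, {}).get(key))
```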
````diff
-Default layout:
+---

+## Runtime Model

+Default data layout:

 ```text
 ~/.coinhunter/
-├── accounts.json
 ├── config.json
-├── executions.json
-├── notes.json
 ├── positions.json
+├── accounts.json
 ├── watchlist.json
+├── notes.json
+├── executions.json
 ├── logs/
 ├── reviews/
+├── cache/
 └── state/
+    ├── precheck_state.json
+    └── external_gate.lock
 ```

-Credential loading:
+Credential resolution:

-- Binance credentials are read from `~/.hermes/.env` by default.
-- `COINHUNTER_ENV_FILE` can point to a different env file.
-- `hermes` is resolved from `PATH` first, then `~/.local/bin/hermes`, unless `HERMES_BIN` overrides it.
+- Binance API keys are read from `~/.hermes/.env` by default.
+- Override with `COINHUNTER_ENV_FILE`.
+- Override home with `COINHUNTER_HOME` or `HERMES_HOME`.
+- `hermes` binary is resolved from `PATH`, then `~/.local/bin/hermes`, unless `HERMES_BIN` is set.

-## Useful commands
+---

-### Diagnostics
+## Development Status

-```bash
-coinhunter doctor
-coinhunter paths
-coinhunter check-api
-```
+The codebase is actively maintained and refactored in small, safe steps.

-### Trading and execution
+**Recently completed:**

-```bash
-coinhunter smart-executor balances
-coinhunter smart-executor status
-coinhunter smart-executor hold
-coinhunter smart-executor rebalance FROMUSDT TOUSDT
-```
+- ✅ Unified CLI entrypoint with short aliases
+- ✅ Extracted `smart-executor` into `commands/` + `services/`
+- ✅ Extracted `precheck` into 9 focused service modules
+- ✅ Migrated all active command modules into `commands/`
+- ✅ Extracted `review_engine.py` core logic into `services/review_service.py`
+- ✅ Removed eager `PATHS` instantiation across services and commands
+- ✅ Fixed `smart_executor.py` lazy-loading facade
+- ✅ Standardized install to use `pip install -e .`
+- ✅ Made `external_gate` trigger_command configurable (no longer hardcodes hermes)
+- ✅ Removed dead `auto-trader` command
+- ✅ Backward-compatible root facades preserved

-### Precheck and orchestration
+**Next priorities:**

-```bash
-coinhunter precheck
-coinhunter external-gate
-coinhunter rotate-external-gate-log
-```
+- 🔧 Add basic CI (lint + compileall + pytest)
+- 🔧 Unify output contract (JSON-first with `--pretty` option)

-### Review and market research

-```bash
-coinhunter review-context 12
-coinhunter review-engine 12
-coinhunter market-probe bybit-ticker BTCUSDT
-```

-## Development notes

-This project is intentionally moving in small, safe refactor steps:

-1. Separate runtime concerns from hardcoded paths.
-2. Move command dispatch into thin adapters.
-3. Introduce orchestration services.
-4. Extract reusable domain logic from large compatibility modules.
-5. Keep cron / Hermes integration stable during migration.

-That means some compatibility modules still exist, but the direction is deliberate.

-## Near-term roadmap

-- Extract more logic from `smart_executor.py` into dedicated execution / portfolio services.
-- Continue shrinking `precheck.py` by moving snapshot and analysis internals into reusable modules.
-- Add `domain/` models for positions, signals, and trigger analysis.
-- Add tests for runtime paths, precheck service behavior, and CLI stability.
-- Evolve toward a more polished installable CLI workflow.

+---

 ## Philosophy

-CoinHunter should become:
+CoinHunter is evolving toward:

-- more professional
-- more maintainable
-- safer to operate
-- easier for humans and agents to call
-- less dependent on prompt-only correctness
+- **Professional execution** — scientific position sizing, not moonshot gambling
+- **Maintainable architecture** — clear boundaries between CLI, services, and domain logic
+- **Safer operations** — dry-run, validation gates, and explicit decision logging
+- **Agent-friendly interfaces** — stable JSON outputs and predictable command contracts
+- **Less dependence on prompt-only correctness** — logic belongs in code, not just in system prompts

 This repo is where that evolution happens.
````
```diff
@@ -3,10 +3,11 @@ requires = ["setuptools>=68", "wheel"]
 build-backend = "setuptools.build_meta"

 [project]
-name = "coinhunter-cli"
-version = "0.1.0"
+name = "coinhunter"
+version = "1.0.0"
 description = "CoinHunter trading CLI with user runtime data in ~/.coinhunter"
 readme = "README.md"
+license = {text = "MIT"}
 requires-python = ">=3.10"
 dependencies = [
     "ccxt>=4.4.0"
@@ -15,6 +16,9 @@ authors = [
     {name = "Tacit Lab", email = "ouyangcarlos@gmail.com"}
 ]

+[project.optional-dependencies]
+dev = ["pytest>=8.0", "ruff>=0.4.0", "mypy>=1.0"]
+
 [project.scripts]
 coinhunter = "coinhunter.cli:main"

@@ -23,3 +27,17 @@ package-dir = {"" = "src"}

 [tool.setuptools.packages.find]
 where = ["src"]
+
+[tool.ruff]
+line-length = 120
+target-version = "py310"
+
+[tool.ruff.lint]
+select = ["E", "F", "W", "I"]
+ignore = ["E501"]
+
+[tool.mypy]
+python_version = "3.10"
+warn_return_any = true
+warn_unused_configs = true
+ignore_missing_imports = true
```
scripts/install_local.sh (new executable file, 30 lines)

@@ -0,0 +1,30 @@

```bash
#!/usr/bin/env bash
set -euo pipefail

# Standard local install using pip editable mode.
# This creates the 'coinhunter' console script entry point as defined in pyproject.toml.

BIN_DIR="${COINHUNTER_BIN_DIR:-$HOME/.local/bin}"
PYTHON_BIN="${PYTHON:-}"

if [[ -z "$PYTHON_BIN" ]]; then
  if command -v python3 >/dev/null 2>&1; then
    PYTHON_BIN="$(command -v python3)"
  elif command -v python >/dev/null 2>&1; then
    PYTHON_BIN="$(command -v python)"
  else
    echo "error: python3/python not found in PATH" >&2
    exit 1
  fi
fi

mkdir -p "$BIN_DIR"

"$PYTHON_BIN" -m pip install --upgrade pip setuptools wheel
"$PYTHON_BIN" -m pip install --upgrade -e "$(pwd)[dev]"

echo "Installed coinhunter in editable mode."
echo "  python: $PYTHON_BIN"
echo "  entrypoint: $(command -v coinhunter || echo 'not in PATH')"
echo ""
echo "Make sure '$BIN_DIR' is in your PATH if the entrypoint is not found."
```
```diff
@@ -1,2 +1,3 @@
 from .cli import main
+
 raise SystemExit(main())
```
@@ -1,289 +0,0 @@
-#!/usr/bin/env python3
-"""
-Coin Hunter Auto Trader
-Fully automated meme-coin hunter + Binance executor
-
-Before running, configure in ~/.hermes/.env:
-  BINANCE_API_KEY=your_api_key
-  BINANCE_API_SECRET=your_api_secret
-
-On first run, test the logic with DRY_RUN=True.
-"""
-import json
-import os
-import sys
-import time
-from datetime import datetime, timezone, timedelta
-from pathlib import Path
-
-import ccxt
-
-from .runtime import get_runtime_paths, load_env_file
-
-# ============== Configuration ==============
-PATHS = get_runtime_paths()
-COINS_DIR = PATHS.root
-POSITIONS_FILE = PATHS.positions_file
-ENV_FILE = PATHS.env_file
-
-CST = timezone(timedelta(hours=8))
-
-# Risk-control parameters
-DRY_RUN = os.getenv("DRY_RUN", "true").lower() == "true"  # test mode by default
-MAX_POSITIONS = 2  # maximum concurrent positions
-
-# Capital allocation (computed dynamically from total assets)
-CAPITAL_ALLOCATION_PCT = 0.95  # run the strategy on 95% of total assets (keep a 5% buffer for fees and slippage)
-MIN_POSITION_USDT = 50  # minimum order size per trade (avoid dust orders)
-
-MIN_VOLUME_24H = 1_000_000  # minimum 24h quote volume ($)
-MIN_PRICE_CHANGE_24H = 0.05  # minimum 24h gain, 5%
-MAX_PRICE = 1.0  # only trade low-priced coins (meme characteristic)
-STOP_LOSS_PCT = -0.07  # stop loss -7%
-TAKE_PROFIT_1_PCT = 0.15  # take profit 1 at +15%
-TAKE_PROFIT_2_PCT = 0.30  # take profit 2 at +30%
-BLACKLIST = {"USDC", "BUSD", "TUSD", "FDUSD", "USTC", "PAXG", "XRP", "ETH", "BTC"}
-
-# ============== Helpers ==============
-def log(msg: str):
-    print(f"[{datetime.now(CST).strftime('%Y-%m-%d %H:%M:%S')} CST] {msg}")
-
-
-def load_positions() -> list:
-    if POSITIONS_FILE.exists():
-        return json.loads(POSITIONS_FILE.read_text(encoding="utf-8")).get("positions", [])
-    return []
-
-
-def save_positions(positions: list):
-    COINS_DIR.mkdir(parents=True, exist_ok=True)
-    POSITIONS_FILE.write_text(json.dumps({"positions": positions}, indent=2, ensure_ascii=False), encoding="utf-8")
-
-
-def load_env():
-    load_env_file(PATHS)
-
-
-def calculate_position_size(total_usdt: float, available_usdt: float, open_slots: int) -> float:
-    """
-    Dynamically compute the per-trade order size from total assets.
-    Logic: determine the strategy cap first, then split evenly across the remaining open slots.
-    """
-    strategy_cap = total_usdt * CAPITAL_ALLOCATION_PCT
-    # Capital already deployed in the strategy is approximately the cap minus the available balance
-    used_in_strategy = max(0, strategy_cap - available_usdt)
-    remaining_strategy_cap = max(0, strategy_cap - used_in_strategy)
-
-    if open_slots <= 0 or remaining_strategy_cap < MIN_POSITION_USDT:
-        return 0
-
-    size = remaining_strategy_cap / open_slots
-    # Never exceed the currently available balance
-    size = min(size, available_usdt)
-    # Round to two decimal places
-    size = max(0, round(size, 2))
-    return size if size >= MIN_POSITION_USDT else 0
-
-
-# ============== Binance client ==============
-class BinanceTrader:
-    def __init__(self):
-        api_key = os.getenv("BINANCE_API_KEY")
-        secret = os.getenv("BINANCE_API_SECRET")
-        if not api_key or not secret:
-            raise RuntimeError("Missing BINANCE_API_KEY or BINANCE_API_SECRET; configure ~/.hermes/.env")
-        self.exchange = ccxt.binance({
-            "apiKey": api_key,
-            "secret": secret,
-            "options": {"defaultType": "spot"},
-            "enableRateLimit": True,
-        })
-        self.exchange.load_markets()
-
-    def get_balance(self, asset: str = "USDT") -> float:
-        bal = self.exchange.fetch_balance()["free"].get(asset, 0)
-        return float(bal)
-
-    def fetch_tickers(self) -> dict:
-        return self.exchange.fetch_tickers()
-
-    def create_market_buy_order(self, symbol: str, amount_usdt: float):
-        if DRY_RUN:
-            log(f"[DRY RUN] simulated buy {symbol}, amount ${amount_usdt}")
-            return {"id": "dry-run-buy", "price": None, "amount": amount_usdt}
-        ticker = self.exchange.fetch_ticker(symbol)
-        price = float(ticker["last"])
-        qty = amount_usdt / price
-        order = self.exchange.create_market_buy_order(symbol, qty)
-        log(f"✅ Bought {symbol} | qty {qty:.4f} | price ~${price}")
-        return order
-
-    def create_market_sell_order(self, symbol: str, qty: float):
-        if DRY_RUN:
-            log(f"[DRY RUN] simulated sell {symbol}, qty {qty}")
-            return {"id": "dry-run-sell"}
-        order = self.exchange.create_market_sell_order(symbol, qty)
-        log(f"✅ Sold {symbol} | qty {qty:.4f}")
-        return order
-
-
-# ============== Coin picker ==============
-class CoinPicker:
-    def __init__(self, exchange: ccxt.binance):
-        self.exchange = exchange
-
-    def scan(self) -> list:
-        tickers = self.exchange.fetch_tickers()
-        candidates = []
-        for symbol, t in tickers.items():
-            if not symbol.endswith("/USDT"):
-                continue
-            base = symbol.replace("/USDT", "")
-            if base in BLACKLIST:
-                continue
-
-            price = float(t["last"] or 0)
-            change = float(t.get("percentage", 0)) / 100
-            volume = float(t.get("quoteVolume", 0))
-
-            if price <= 0 or price > MAX_PRICE:
-                continue
-            if volume < MIN_VOLUME_24H:
-                continue
-            if change < MIN_PRICE_CHANGE_24H:
-                continue
-
-            score = change * (volume / MIN_VOLUME_24H)
-            candidates.append({
-                "symbol": symbol,
-                "base": base,
-                "price": price,
-                "change_24h": change,
-                "volume_24h": volume,
-                "score": score,
-            })
-
-        candidates.sort(key=lambda x: x["score"], reverse=True)
-        return candidates[:5]
-
-
-# ============== Main controller ==============
-def run_cycle():
-    load_env()
-    trader = BinanceTrader()
-    picker = CoinPicker(trader.exchange)
-    positions = load_positions()
-
-    log(f"Open positions: {len(positions)} | max allowed: {MAX_POSITIONS} | DRY_RUN={DRY_RUN}")
-
-    # 1. Check existing positions (take profit / stop loss)
-    tickers = trader.fetch_tickers()
-    new_positions = []
-    for pos in positions:
-        sym = pos["symbol"]
-        qty = float(pos["quantity"])
-        cost = float(pos["avg_cost"])
-        # ccxt tickers use the slash format, e.g. PENGU/USDT
-        sym_ccxt = sym.replace("USDT", "/USDT") if "/" not in sym else sym
-        ticker = tickers.get(sym_ccxt)
-        if not ticker:
-            new_positions.append(pos)
-            continue
-
-        price = float(ticker["last"])
-        pnl_pct = (price - cost) / cost
-        log(f"Monitoring {sym} | price ${price:.8f} | cost ${cost:.8f} | PnL {pnl_pct:+.2%}")
-
-        action = None
-        if pnl_pct <= STOP_LOSS_PCT:
-            action = "STOP_LOSS"
-        elif pnl_pct >= TAKE_PROFIT_2_PCT:
-            action = "TAKE_PROFIT_2"
-        elif pnl_pct >= TAKE_PROFIT_1_PCT:
-            # Check whether partial profit has already been taken
-            sold_pct = float(pos.get("take_profit_1_sold_pct", 0))
-            if sold_pct == 0:
-                action = "TAKE_PROFIT_1"
-
-        if action == "STOP_LOSS":
-            trader.create_market_sell_order(sym, qty)
-            log(f"🛑 {sym} hit stop loss; closing entire position")
-            continue
-
-        if action == "TAKE_PROFIT_1":
-            sell_qty = qty * 0.5
-            trader.create_market_sell_order(sym, sell_qty)
-            pos["quantity"] = qty - sell_qty
-            pos["take_profit_1_sold_pct"] = 50
-            pos["updated_at"] = datetime.now(CST).isoformat()
-            log(f"🎯 {sym} hit take-profit 1; sold 50%, remaining {pos['quantity']:.4f}")
-            new_positions.append(pos)
-            continue
-
-        if action == "TAKE_PROFIT_2":
-            trader.create_market_sell_order(sym, float(pos["quantity"]))
-            log(f"🚀 {sym} hit take-profit 2; closing entire position")
-            continue
-
-        new_positions.append(pos)
-
-    # 2. Open new positions
-    if len(new_positions) < MAX_POSITIONS:
-        candidates = picker.scan()
-        held_bases = {p["base_asset"] for p in new_positions}
-        total_usdt = trader.get_balance("USDT")
-        # Add the market value of open positions to total assets
-        for pos in new_positions:
-            sym_ccxt = pos["symbol"].replace("USDT", "/USDT") if "/" not in pos["symbol"] else pos["symbol"]
-            ticker = tickers.get(sym_ccxt)
-            if ticker:
-                total_usdt += float(pos["quantity"]) * float(ticker["last"])
-
-        available_usdt = trader.get_balance("USDT")
-        open_slots = MAX_POSITIONS - len(new_positions)
-        position_size = calculate_position_size(total_usdt, available_usdt, open_slots)
-
-        log(f"Total USDT assets: ${total_usdt:.2f} | strategy cap ({CAPITAL_ALLOCATION_PCT:.0%}): ${total_usdt*CAPITAL_ALLOCATION_PCT:.2f} | suggested size per slot: ${position_size:.2f}")
-
-        for cand in candidates:
-            if len(new_positions) >= MAX_POSITIONS:
-                break
-            base = cand["base"]
-            if base in held_bases:
-                continue
-            if position_size <= 0:
-                log("Strategy capital exhausted or balance insufficient; stop opening new positions")
-                break
-
-            symbol = cand["symbol"]
-            order = trader.create_market_buy_order(symbol, position_size)
-            avg_price = float(order.get("price") or cand["price"])
-            qty = position_size / avg_price if avg_price else 0
-
-            new_positions.append({
-                "account_id": "binance-main",
-                "symbol": symbol.replace("/", ""),
-                "base_asset": base,
-                "quote_asset": "USDT",
-                "market_type": "spot",
-                "quantity": qty,
-                "avg_cost": avg_price,
-                "opened_at": datetime.now(CST).isoformat(),
-                "updated_at": datetime.now(CST).isoformat(),
-                "note": "Auto-trader entry",
-            })
-            held_bases.add(base)
-            available_usdt -= position_size
-            position_size = calculate_position_size(total_usdt, available_usdt, MAX_POSITIONS - len(new_positions))
-            log(f"📈 Opened {symbol} | entry ${avg_price:.8f} | qty {qty:.2f}")
-
-    save_positions(new_positions)
-    log("Cycle complete; positions saved")
-
-
-if __name__ == "__main__":
-    try:
-        run_cycle()
-    except Exception as e:
-        log(f"❌ Error: {e}")
-        sys.exit(1)
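The position-sizing logic deleted above splits the remaining strategy cap evenly across open slots, subject to the available balance and a minimum order size. A minimal standalone sketch of that calculation (constants copied from the removed `auto_trader` module; values in the usage lines are illustrative):

```python
# Sketch of the deleted calculate_position_size logic from auto_trader.py.
# Constants mirror the removed module's risk-control section.
CAPITAL_ALLOCATION_PCT = 0.95  # strategy uses 95% of total assets
MIN_POSITION_USDT = 50         # orders smaller than this are skipped


def calculate_position_size(total_usdt: float, available_usdt: float, open_slots: int) -> float:
    strategy_cap = total_usdt * CAPITAL_ALLOCATION_PCT
    # Capital already deployed is approximately the cap minus the available balance.
    used_in_strategy = max(0, strategy_cap - available_usdt)
    remaining_strategy_cap = max(0, strategy_cap - used_in_strategy)
    if open_slots <= 0 or remaining_strategy_cap < MIN_POSITION_USDT:
        return 0
    # Split evenly across slots, but never exceed the free balance.
    size = min(remaining_strategy_cap / open_slots, available_usdt)
    size = max(0, round(size, 2))
    return size if size >= MIN_POSITION_USDT else 0


print(calculate_position_size(1000, 1000, 2))  # fresh account, two free slots
print(calculate_position_size(1000, 30, 1))    # free balance below the 50 USDT minimum
```

With a fresh 1000 USDT account and two slots, the cap is 950 USDT and each slot is sized at 475 USDT; once the free balance drops below `MIN_POSITION_USDT`, the function returns 0 and no new position is opened.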
@@ -1,26 +1,8 @@
-#!/usr/bin/env python3
-"""Check that the auto-trading environment configuration is ready."""
-
-import os
-
-from .runtime import load_env_file
-
-
-def main():
-    load_env_file()
-
-    api_key = os.getenv("BINANCE_API_KEY", "")
-    secret = os.getenv("BINANCE_API_SECRET", "")
-
-    if not api_key or api_key.startswith("***") or api_key.startswith("your_"):
-        print("❌ BINANCE_API_KEY not configured")
-        return 1
-    if not secret or secret.startswith("***") or secret.startswith("your_"):
-        print("❌ BINANCE_API_SECRET not configured")
-        return 1
-
-    print("✅ API configuration OK")
-    return 0
-
-
+"""Backward-compatible facade for check_api."""
+
+from __future__ import annotations
+
+from .commands.check_api import main
+
 if __name__ == "__main__":
     raise SystemExit(main())
@@ -9,20 +9,57 @@ import sys
 from . import __version__
 
 MODULE_MAP = {
-    "check-api": "check_api",
-    "doctor": "doctor",
-    "external-gate": "external_gate",
-    "init": "init_user_state",
-    "market-probe": "market_probe",
-    "paths": "paths",
+    "check-api": "commands.check_api",
+    "doctor": "commands.doctor",
+    "external-gate": "commands.external_gate",
+    "init": "commands.init_user_state",
+    "market-probe": "commands.market_probe",
+    "paths": "commands.paths",
     "precheck": "commands.precheck",
-    "review-context": "review_context",
-    "review-engine": "review_engine",
-    "rotate-external-gate-log": "rotate_external_gate_log",
+    "review-context": "commands.review_context",
+    "review-engine": "commands.review_engine",
+    "rotate-external-gate-log": "commands.rotate_external_gate_log",
     "smart-executor": "commands.smart_executor",
-    "auto-trader": "auto_trader",
 }
 
+ALIASES = {
+    "api-check": "check-api",
+    "diag": "doctor",
+    "env": "paths",
+    "gate": "external-gate",
+    "pre": "precheck",
+    "probe": "market-probe",
+    "review": "review-context",
+    "recap": "review-engine",
+    "rotate-gate-log": "rotate-external-gate-log",
+    "rotate-log": "rotate-external-gate-log",
+    "scan": "precheck",
+    "setup": "init",
+    "exec": "smart-executor",
+}
+
+COMMAND_HELP = [
+    ("api-check", "check-api", "Validate exchange/API connectivity"),
+    ("diag", "doctor", "Inspect runtime wiring and diagnostics"),
+    ("gate", "external-gate", "Run external gate orchestration"),
+    ("setup", "init", "Initialize user runtime state"),
+    ("env", "paths", "Print runtime path resolution"),
+    ("pre, scan", "precheck", "Run precheck workflow"),
+    ("probe", "market-probe", "Query external market data"),
+    ("review", "review-context", "Generate review context"),
+    ("recap", "review-engine", "Generate review recap/engine output"),
+    ("rotate-gate-log, rotate-log", "rotate-external-gate-log", "Rotate external gate logs"),
+    ("exec", "smart-executor", "Trading and execution actions"),
+]
+
+
+def _command_listing() -> str:
+    lines = []
+    for names, canonical, summary in COMMAND_HELP:
+        label = names if canonical is None else f"{names} (alias for {canonical})"
+        lines.append(f"  {label:<45} {summary}")
+    return "\n".join(lines)
+
+
 class VersionAction(argparse.Action):
     def __call__(self, parser, namespace, values, option_string=None):
@@ -36,34 +73,49 @@ def build_parser() -> argparse.ArgumentParser:
         description="CoinHunter trading operations CLI",
         formatter_class=argparse.RawTextHelpFormatter,
         epilog=(
+            "Commands:\n"
+            f"{_command_listing()}\n\n"
             "Examples:\n"
-            "  coinhunter doctor\n"
-            "  coinhunter paths\n"
-            "  coinhunter check-api\n"
-            "  coinhunter smart-executor balances\n"
-            "  coinhunter smart-executor hold\n"
-            "  coinhunter smart-executor --analysis '...' --reasoning '...' buy ENJUSDT 50\n"
-            "  coinhunter precheck\n"
-            "  coinhunter precheck --ack 'Analysis complete: HOLD'\n"
-            "  coinhunter external-gate\n"
-            "  coinhunter review-context 12\n"
-            "  coinhunter market-probe bybit-ticker BTCUSDT\n"
-            "  coinhunter init\n"
+            "  coinhunter diag\n"
+            "  coinhunter env\n"
+            "  coinhunter setup\n"
+            "  coinhunter api-check\n"
+            "  coinhunter exec bal\n"
+            "  coinhunter exec overview\n"
+            "  coinhunter exec hold\n"
+            "  coinhunter exec --analysis '...' --reasoning '...' buy ENJUSDT 50\n"
+            "  coinhunter exec orders\n"
+            "  coinhunter exec order-status ENJUSDT 123456\n"
+            "  coinhunter exec cancel ENJUSDT 123456\n"
+            "  coinhunter pre\n"
+            "  coinhunter pre --ack 'Analysis complete: HOLD'\n"
+            "  coinhunter gate\n"
+            "  coinhunter review 12\n"
+            "  coinhunter recap 12\n"
+            "  coinhunter probe bybit-ticker BTCUSDT\n"
+            "\n"
+            "Preferred exec verbs are bal, overview, hold, buy, flat, rotate, orders, order-status, and cancel.\n"
+            "Legacy command names remain supported for backward compatibility.\n"
         ),
     )
     parser.add_argument("--version", nargs=0, action=VersionAction, help="Print installed version and exit")
-    parser.add_argument("command", nargs="?", choices=sorted(MODULE_MAP.keys()), help="CoinHunter command to run")
+    parser.add_argument(
+        "command",
+        nargs="?",
+        metavar="COMMAND",
+        help="Command to run. Use --help to see canonical names and short aliases.",
+    )
     parser.add_argument("args", nargs=argparse.REMAINDER)
     return parser
 
 
-def run_python_module(module_name: str, argv: list[str]) -> int:
+def run_python_module(module_name: str, argv: list[str], display_name: str) -> int:
     module = importlib.import_module(f".{module_name}", package="coinhunter")
     if not hasattr(module, "main"):
         raise RuntimeError(f"Module {module_name} has no main()")
     old_argv = sys.argv[:]
     try:
-        sys.argv = [f"coinhunter {module_name}", *argv]
+        sys.argv = [display_name, *argv]
         result = module.main()
         return int(result) if isinstance(result, int) else 0
     except SystemExit as exc:
@@ -78,11 +130,16 @@ def main() -> int:
     if not parsed.command:
         parser.print_help()
         return 0
-    module_name = MODULE_MAP[parsed.command]
+    command = ALIASES.get(parsed.command, parsed.command)
+    if command not in MODULE_MAP:
+        parser.error(
+            f"invalid command: {parsed.command!r}. Use `coinhunter --help` to see supported commands and aliases."
+        )
+    module_name = MODULE_MAP[command]
     argv = list(parsed.args)
     if argv and argv[0] == "--":
         argv = argv[1:]
-    return run_python_module(module_name, argv)
+    return run_python_module(module_name, argv, f"coinhunter {parsed.command}")
 
 
 if __name__ == "__main__":
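The alias resolution added to `cli.py` maps short names onto canonical commands before the `MODULE_MAP` lookup. A minimal standalone sketch of that dispatch flow (the dicts here are abbreviated copies of the ones in the diff, not the full tables):

```python
# Sketch of the alias-then-dispatch flow introduced in cli.py.
# Abbreviated copies of MODULE_MAP and ALIASES from this diff.
MODULE_MAP = {"doctor": "commands.doctor", "precheck": "commands.precheck"}
ALIASES = {"diag": "doctor", "pre": "precheck", "scan": "precheck"}


def resolve(command: str) -> str:
    # Aliases resolve first; canonical names pass through unchanged.
    canonical = ALIASES.get(command, command)
    if canonical not in MODULE_MAP:
        # cli.py calls parser.error() here; SystemExit has the same effect.
        raise SystemExit(f"invalid command: {command!r}")
    return MODULE_MAP[canonical]


print(resolve("diag"))      # alias -> commands.doctor
print(resolve("precheck"))  # canonical name passes through
```

Because aliases map to canonical names rather than directly to modules, two aliases (`pre` and `scan`) can share one target without duplicating module paths.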
src/coinhunter/commands/check_api.py (Executable file, 56 lines)
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+"""Check whether the trading environment is ready and API permissions are sufficient."""
+import json
+import os
+
+import ccxt
+
+from ..runtime import load_env_file
+
+
+def main():
+    load_env_file()
+
+    api_key = os.getenv("BINANCE_API_KEY", "")
+    secret = os.getenv("BINANCE_API_SECRET", "")
+
+    if not api_key or api_key.startswith("***") or api_key.startswith("your_"):
+        print(json.dumps({"ok": False, "error": "BINANCE_API_KEY not configured"}, ensure_ascii=False))
+        return 1
+    if not secret or secret.startswith("***") or secret.startswith("your_"):
+        print(json.dumps({"ok": False, "error": "BINANCE_API_SECRET not configured"}, ensure_ascii=False))
+        return 1
+
+    try:
+        ex = ccxt.binance({
+            "apiKey": api_key,
+            "secret": secret,
+            "options": {"defaultType": "spot"},
+            "enableRateLimit": True,
+        })
+        balance = ex.fetch_balance()
+    except Exception as e:
+        print(json.dumps({"ok": False, "error": f"Failed to connect or fetch balance: {e}"}, ensure_ascii=False))
+        return 1
+
+    read_permission = bool(balance and isinstance(balance, dict))
+
+    spot_trading_enabled = None
+    try:
+        restrictions = ex.sapi_get_account_api_restrictions()
+        spot_trading_enabled = restrictions.get("enableSpotAndMarginTrading") or restrictions.get("enableSpotTrading")
+    except Exception:
+        pass
+
+    report = {
+        "ok": read_permission,
+        "read_permission": read_permission,
+        "spot_trading_enabled": spot_trading_enabled,
+        "note": "spot_trading_enabled may be null if the key lacks permission to query restrictions; it does not necessarily mean trading is disabled.",
+    }
+    print(json.dumps(report, ensure_ascii=False, indent=2))
+    return 0 if read_permission else 1
+
+
+if __name__ == "__main__":
+    raise SystemExit(main())
src/coinhunter/commands/doctor.py (Normal file, 65 lines)
@@ -0,0 +1,65 @@
+"""Runtime diagnostics for CoinHunter CLI."""
+
+from __future__ import annotations
+
+import json
+import os
+import platform
+import shutil
+import sys
+
+from ..runtime import ensure_runtime_dirs, get_runtime_paths, load_env_file, resolve_hermes_executable
+
+REQUIRED_ENV_VARS = ["BINANCE_API_KEY", "BINANCE_API_SECRET"]
+
+
+def main() -> int:
+    paths = ensure_runtime_dirs(get_runtime_paths())
+    env_file = load_env_file(paths)
+    hermes_executable = resolve_hermes_executable(paths)
+
+    env_checks = {}
+    missing_env = []
+    for name in REQUIRED_ENV_VARS:
+        present = bool(os.getenv(name))
+        env_checks[name] = present
+        if not present:
+            missing_env.append(name)
+
+    file_checks = {
+        "env_file_exists": env_file.exists(),
+        "config_exists": paths.config_file.exists(),
+        "positions_exists": paths.positions_file.exists(),
+        "logrotate_config_exists": paths.logrotate_config.exists(),
+    }
+    dir_checks = {
+        "root_exists": paths.root.exists(),
+        "state_dir_exists": paths.state_dir.exists(),
+        "logs_dir_exists": paths.logs_dir.exists(),
+        "reviews_dir_exists": paths.reviews_dir.exists(),
+        "cache_dir_exists": paths.cache_dir.exists(),
+    }
+    command_checks = {
+        "hermes": bool(shutil.which("hermes") or paths.hermes_bin.exists()),
+        "logrotate": bool(shutil.which("logrotate") or shutil.which("/usr/sbin/logrotate")),
+    }
+
+    report = {
+        "ok": not missing_env,
+        "python": sys.version.split()[0],
+        "platform": platform.platform(),
+        "env_file": str(env_file),
+        "hermes_executable": hermes_executable,
+        "paths": paths.as_dict(),
+        "env_checks": env_checks,
+        "missing_env": missing_env,
+        "file_checks": file_checks,
+        "dir_checks": dir_checks,
+        "command_checks": command_checks,
+    }
+    print(json.dumps(report, ensure_ascii=False, indent=2))
+    return 0 if report["ok"] else 1
+
+
+if __name__ == "__main__":
+    raise SystemExit(main())
src/coinhunter/commands/external_gate.py (Executable file, 161 lines)
@@ -0,0 +1,161 @@
+#!/usr/bin/env python3
+import fcntl
+import json
+import subprocess
+import sys
+from datetime import datetime, timezone
+
+from ..runtime import ensure_runtime_dirs, get_runtime_paths
+
+
+def _paths():
+    return get_runtime_paths()
+
+
+COINHUNTER_MODULE = [sys.executable, "-m", "coinhunter"]
+
+
+def utc_now():
+    return datetime.now(timezone.utc).isoformat()
+
+
+def log(message: str):
+    print(f"[{utc_now()}] {message}", file=sys.stderr)
+
+
+def run_cmd(args: list[str]) -> subprocess.CompletedProcess:
+    return subprocess.run(args, capture_output=True, text=True)
+
+
+def parse_json_output(text: str) -> dict:
+    text = (text or "").strip()
+    if not text:
+        return {}
+    return json.loads(text)  # type: ignore[no-any-return]
+
+
+def _load_config() -> dict:
+    config_path = _paths().config_file
+    if not config_path.exists():
+        return {}
+    try:
+        return json.loads(config_path.read_text(encoding="utf-8"))  # type: ignore[no-any-return]
+    except Exception:
+        return {}
+
+
+def _resolve_trigger_command(paths) -> list[str] | None:
+    config = _load_config()
+    gate_config = config.get("external_gate", {})
+
+    if "trigger_command" not in gate_config:
+        return None
+
+    trigger = gate_config["trigger_command"]
+
+    if trigger is None:
+        return None
+
+    if isinstance(trigger, str):
+        return [trigger]
+
+    if isinstance(trigger, list):
+        if not trigger:
+            return None
+        return [str(item) for item in trigger]
+
+    log(f"warn: unexpected trigger_command type {type(trigger).__name__}; skipping trigger")
+    return None
+
+
+def main():
+    paths = _paths()
+    ensure_runtime_dirs(paths)
+    result = {"ok": False, "triggered": False, "reason": "", "logs": []}
+    lock_file = paths.external_gate_lock
+
+    def append_log(msg: str):
+        log(msg)
+        result["logs"].append(msg)
+
+    with open(lock_file, "w", encoding="utf-8") as lockf:
+        try:
+            fcntl.flock(lockf.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
+        except BlockingIOError:
+            append_log("gate already running; skip")
+            result["reason"] = "already_running"
+            print(json.dumps(result, ensure_ascii=False))
+            return 0
+
+        precheck = run_cmd(COINHUNTER_MODULE + ["precheck"])
+        if precheck.returncode != 0:
+            append_log(f"precheck returned non-zero ({precheck.returncode}); stdout={precheck.stdout.strip()} stderr={precheck.stderr.strip()}")
+            result["reason"] = "precheck_failed"
+            print(json.dumps(result, ensure_ascii=False))
+            return 1
+
+        try:
+            data = parse_json_output(precheck.stdout)
+        except Exception as e:
+            append_log(f"failed to parse precheck JSON: {e}; raw={precheck.stdout.strip()[:1000]}")
+            result["reason"] = "precheck_parse_error"
+            print(json.dumps(result, ensure_ascii=False))
+            return 1
+
+        if not data.get("ok"):
+            append_log("precheck reported failure; skip model run")
+            result["reason"] = "precheck_not_ok"
+            print(json.dumps(result, ensure_ascii=False))
+            return 1
+
+        if not data.get("should_analyze"):
+            append_log("no trigger; skip model run")
+            result["ok"] = True
+            result["reason"] = "no_trigger"
+            print(json.dumps(result, ensure_ascii=False))
+            return 0
+
+        if data.get("run_requested"):
+            append_log(f"trigger already queued at {data.get('run_requested_at')}; skip duplicate")
+            result["ok"] = True
+            result["reason"] = "already_queued"
+            print(json.dumps(result, ensure_ascii=False))
+            return 0
+
+        mark = run_cmd(COINHUNTER_MODULE + ["precheck", "--mark-run-requested", "external-gate queued cron run"])
+        if mark.returncode != 0:
+            append_log(f"failed to mark run requested; stdout={mark.stdout.strip()} stderr={mark.stderr.strip()}")
+            result["reason"] = "mark_failed"
+            print(json.dumps(result, ensure_ascii=False))
+            return 1
+
+        trigger_cmd = _resolve_trigger_command(paths)
+        if trigger_cmd is None:
+            append_log("trigger_command is disabled; skipping external trigger")
+            result["ok"] = True
+            result["reason"] = "trigger_disabled"
+            print(json.dumps(result, ensure_ascii=False))
+            return 0
+
+        trigger = run_cmd(trigger_cmd)
+        if trigger.returncode != 0:
+            append_log(f"failed to trigger trade job; cmd={' '.join(trigger_cmd)}; stdout={trigger.stdout.strip()} stderr={trigger.stderr.strip()}")
+            result["reason"] = "trigger_failed"
+            print(json.dumps(result, ensure_ascii=False))
+            return 1
+
+        reasons = ", ".join(data.get("reasons", [])) or "unknown"
+        append_log(f"queued trade job via {' '.join(trigger_cmd)}; reasons={reasons}")
+        if trigger.stdout.strip():
+            append_log(trigger.stdout.strip())
+
+        result["ok"] = True
+        result["triggered"] = True
+        result["reason"] = reasons
+        result["command"] = trigger_cmd
+        print(json.dumps(result, ensure_ascii=False))
+        return 0
+
+
+if __name__ == "__main__":
+    raise SystemExit(main())
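The gate above guards against overlapping cron runs with a non-blocking `fcntl.flock` on a lock file. A minimal sketch of that single-instance pattern (the lock path here is a throwaway temp file, not the real runtime path):

```python
import fcntl
import os
import tempfile

# Sketch of the single-instance lock used in external_gate.main().
# A hypothetical demo lock path; the real code uses paths.external_gate_lock.
lock_path = os.path.join(tempfile.gettempdir(), "gate-demo.lock")

with open(lock_path, "w", encoding="utf-8") as lockf:
    try:
        # LOCK_NB makes the acquire fail immediately instead of blocking,
        # so a second invocation can report "already running" and exit 0.
        fcntl.flock(lockf.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        acquired = True
    except BlockingIOError:
        acquired = False
    print("lock acquired" if acquired else "gate already running; skip")
```

The lock is released automatically when the file object is closed, so no explicit unlock step is needed at the end of the run.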
95
src/coinhunter/commands/init_user_state.py
Executable file
95
src/coinhunter/commands/init_user_state.py
Executable file
@@ -0,0 +1,95 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
import json
|
||||||
|
from datetime import datetime, timezone
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
from ..runtime import ensure_runtime_dirs, get_runtime_paths
|
||||||
|
|
||||||
|
|
||||||
|
def _paths():
|
||||||
|
return get_runtime_paths()
|
||||||
|
|
||||||
|
|
||||||
|
def now_iso():
|
||||||
|
return datetime.now(timezone.utc).replace(microsecond=0).isoformat()
|
||||||
|
|
||||||
|
|
||||||
|
def ensure_file(path: Path, payload: dict):
|
||||||
|
if path.exists():
|
||||||
|
return False
|
||||||
|
    path.write_text(json.dumps(payload, ensure_ascii=False, indent=2) + "\n", encoding="utf-8")
    return True


def main():
    paths = _paths()
    ensure_runtime_dirs(paths)

    created = []
    ts = now_iso()

    templates = {
        paths.root / "config.json": {
            "default_exchange": "bybit",
            "default_quote_currency": "USDT",
            "timezone": "Asia/Shanghai",
            "preferred_chains": ["solana", "base"],
            "external_gate": {
                "trigger_command": None,
                "_comment": "Set to a command list like ['hermes', 'cron', 'run', 'JOB_ID'] or null to disable"
            },
            "trading": {
                "usdt_buffer_pct": 0.03,
                "min_remaining_dust_usdt": 1.0,
                "_comment": "Adjust buffer and dust thresholds for your account size"
            },
            "precheck": {
                "base_price_move_trigger_pct": 0.025,
                "base_pnl_trigger_pct": 0.03,
                "base_portfolio_move_trigger_pct": 0.03,
                "base_candidate_score_trigger_ratio": 1.15,
                "base_force_analysis_after_minutes": 180,
                "base_cooldown_minutes": 45,
                "top_candidates": 10,
                "min_actionable_usdt": 12.0,
                "min_real_position_value_usdt": 8.0,
                "blacklist": ["USDC", "BUSD", "TUSD", "FDUSD", "USTC", "PAXG"],
                "hard_stop_pct": -0.08,
                "hard_moon_pct": 0.25,
                "min_change_pct": 1.0,
                "max_price_cap": None,
                "hard_reason_dedup_minutes": 15,
                "max_pending_trigger_minutes": 30,
                "max_run_request_minutes": 20,
                "_comment": "Tune trigger sensitivity without redeploying code"
            },
            "created_at": ts,
            "updated_at": ts,
        },
        paths.root / "accounts.json": {
            "accounts": []
        },
        paths.root / "positions.json": {
            "positions": []
        },
        paths.root / "watchlist.json": {
            "watchlist": []
        },
        paths.root / "notes.json": {
            "notes": []
        },
    }

    for path, payload in templates.items():
        if ensure_file(path, payload):
            created.append(str(path))

    print(json.dumps({
        "root": str(paths.root),
        "created": created,
        "cache_dir": str(paths.cache_dir),
    }, ensure_ascii=False, indent=2))


if __name__ == "__main__":
    main()
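The `ensure_file` helper used by `main()` above makes initialization idempotent: re-running init never overwrites a state file the user has already edited. A minimal self-contained sketch of that contract (using a hypothetical temp directory instead of the real runtime root):

```python
import json
import tempfile
from pathlib import Path


def ensure_file(path: Path, payload: dict) -> bool:
    # Mirror of the helper above: only seed the file when it is missing.
    if path.exists():
        return False
    path.write_text(json.dumps(payload, ensure_ascii=False, indent=2) + "\n", encoding="utf-8")
    return True


with tempfile.TemporaryDirectory() as root:
    cfg = Path(root) / "config.json"
    first = ensure_file(cfg, {"default_exchange": "bybit"})   # seeds the file
    second = ensure_file(cfg, {"default_exchange": "okx"})    # no-op: file already exists
    kept = json.loads(cfg.read_text())["default_exchange"]

print(first, second, kept)  # True False bybit
```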
242
src/coinhunter/commands/market_probe.py
Executable file
@@ -0,0 +1,242 @@
#!/usr/bin/env python3
import argparse
import json
import os
import urllib.parse
import urllib.request

DEFAULT_TIMEOUT = 20


def fetch_json(url, headers=None, timeout=DEFAULT_TIMEOUT):
    merged_headers = {
        "Accept": "application/json",
        "User-Agent": "Mozilla/5.0 (compatible; OpenClaw Coin Hunter/1.0)",
    }
    if headers:
        merged_headers.update(headers)
    req = urllib.request.Request(url, headers=merged_headers)
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        data = resp.read()
    return json.loads(data.decode("utf-8"))


def print_json(data):
    print(json.dumps(data, ensure_ascii=False, indent=2))


def bybit_ticker(symbol: str):
    url = (
        "https://api.bybit.com/v5/market/tickers?category=spot&symbol="
        + urllib.parse.quote(symbol.upper())
    )
    payload = fetch_json(url)
    items = payload.get("result", {}).get("list", [])
    if not items:
        raise SystemExit(f"No Bybit spot ticker found for {symbol}")
    item = items[0]
    out = {
        "provider": "bybit",
        "symbol": symbol.upper(),
        "lastPrice": item.get("lastPrice"),
        "price24hPcnt": item.get("price24hPcnt"),
        "highPrice24h": item.get("highPrice24h"),
        "lowPrice24h": item.get("lowPrice24h"),
        "turnover24h": item.get("turnover24h"),
        "volume24h": item.get("volume24h"),
        "bid1Price": item.get("bid1Price"),
        "ask1Price": item.get("ask1Price"),
    }
    print_json(out)


def bybit_klines(symbol: str, interval: str, limit: int):
    params = urllib.parse.urlencode({
        "category": "spot",
        "symbol": symbol.upper(),
        "interval": interval,
        "limit": str(limit),
    })
    url = f"https://api.bybit.com/v5/market/kline?{params}"
    payload = fetch_json(url)
    rows = payload.get("result", {}).get("list", [])
    out = {
        "provider": "bybit",
        "symbol": symbol.upper(),
        "interval": interval,
        "candles": [
            {
                "startTime": r[0],
                "open": r[1],
                "high": r[2],
                "low": r[3],
                "close": r[4],
                "volume": r[5],
                "turnover": r[6],
            }
            for r in rows
        ],
    }
    print_json(out)


def dexscreener_search(query: str):
    url = "https://api.dexscreener.com/latest/dex/search/?q=" + urllib.parse.quote(query)
    payload = fetch_json(url)
    pairs = payload.get("pairs") or []
    out = []
    for p in pairs[:10]:
        out.append({
            "chainId": p.get("chainId"),
            "dexId": p.get("dexId"),
            "pairAddress": p.get("pairAddress"),
            "url": p.get("url"),
            "baseToken": p.get("baseToken"),
            "quoteToken": p.get("quoteToken"),
            "priceUsd": p.get("priceUsd"),
            "liquidityUsd": (p.get("liquidity") or {}).get("usd"),
            "fdv": p.get("fdv"),
            "marketCap": p.get("marketCap"),
            "volume24h": (p.get("volume") or {}).get("h24"),
            "buys24h": ((p.get("txns") or {}).get("h24") or {}).get("buys"),
            "sells24h": ((p.get("txns") or {}).get("h24") or {}).get("sells"),
        })
    print_json({"provider": "dexscreener", "query": query, "pairs": out})


def dexscreener_token(chain: str, address: str):
    url = f"https://api.dexscreener.com/tokens/v1/{urllib.parse.quote(chain)}/{urllib.parse.quote(address)}"
    payload = fetch_json(url)
    pairs = payload if isinstance(payload, list) else payload.get("pairs") or []
    out = []
    for p in pairs[:10]:
        out.append({
            "chainId": p.get("chainId"),
            "dexId": p.get("dexId"),
            "pairAddress": p.get("pairAddress"),
            "baseToken": p.get("baseToken"),
            "quoteToken": p.get("quoteToken"),
            "priceUsd": p.get("priceUsd"),
            "liquidityUsd": (p.get("liquidity") or {}).get("usd"),
            "fdv": p.get("fdv"),
            "marketCap": p.get("marketCap"),
            "volume24h": (p.get("volume") or {}).get("h24"),
        })
    print_json({"provider": "dexscreener", "chain": chain, "address": address, "pairs": out})


def coingecko_search(query: str):
    url = "https://api.coingecko.com/api/v3/search?query=" + urllib.parse.quote(query)
    payload = fetch_json(url)
    coins = payload.get("coins") or []
    out = []
    for c in coins[:10]:
        out.append({
            "id": c.get("id"),
            "name": c.get("name"),
            "symbol": c.get("symbol"),
            "marketCapRank": c.get("market_cap_rank"),
            "thumb": c.get("thumb"),
        })
    print_json({"provider": "coingecko", "query": query, "coins": out})


def coingecko_coin(coin_id: str):
    params = urllib.parse.urlencode({
        "localization": "false",
        "tickers": "false",
        "market_data": "true",
        "community_data": "false",
        "developer_data": "false",
        "sparkline": "false",
    })
    url = f"https://api.coingecko.com/api/v3/coins/{urllib.parse.quote(coin_id)}?{params}"
    payload = fetch_json(url)
    md = payload.get("market_data") or {}
    out = {
        "provider": "coingecko",
        "id": payload.get("id"),
        "symbol": payload.get("symbol"),
        "name": payload.get("name"),
        "marketCapRank": payload.get("market_cap_rank"),
        "currentPriceUsd": (md.get("current_price") or {}).get("usd"),
        "marketCapUsd": (md.get("market_cap") or {}).get("usd"),
        "fullyDilutedValuationUsd": (md.get("fully_diluted_valuation") or {}).get("usd"),
        "totalVolumeUsd": (md.get("total_volume") or {}).get("usd"),
        "priceChangePercentage24h": md.get("price_change_percentage_24h"),
        "priceChangePercentage7d": md.get("price_change_percentage_7d"),
        "priceChangePercentage30d": md.get("price_change_percentage_30d"),
        "circulatingSupply": md.get("circulating_supply"),
        "totalSupply": md.get("total_supply"),
        "maxSupply": md.get("max_supply"),
        "homepage": (payload.get("links") or {}).get("homepage", [None])[0],
    }
    print_json(out)


def birdeye_token(address: str):
    api_key = os.getenv("BIRDEYE_API_KEY") or os.getenv("BIRDEYE_APIKEY")
    if not api_key:
        raise SystemExit("Birdeye requires BIRDEYE_API_KEY in the environment")
    url = "https://public-api.birdeye.so/defi/token_overview?address=" + urllib.parse.quote(address)
    payload = fetch_json(url, headers={
        "x-api-key": api_key,
        "x-chain": "solana",
    })
    print_json({"provider": "birdeye", "address": address, "data": payload.get("data")})


def build_parser():
    parser = argparse.ArgumentParser(description="Coin Hunter market data probe")
    sub = parser.add_subparsers(dest="command", required=True)

    p = sub.add_parser("bybit-ticker", help="Fetch Bybit spot ticker")
    p.add_argument("symbol")

    p = sub.add_parser("bybit-klines", help="Fetch Bybit spot klines")
    p.add_argument("symbol")
    p.add_argument("--interval", default="60", help="Bybit interval, e.g. 1, 5, 15, 60, 240, D")
    p.add_argument("--limit", type=int, default=10)

    p = sub.add_parser("dex-search", help="Search DexScreener by query")
    p.add_argument("query")

    p = sub.add_parser("dex-token", help="Fetch DexScreener token pairs by chain/address")
    p.add_argument("chain")
    p.add_argument("address")

    p = sub.add_parser("gecko-search", help="Search CoinGecko")
    p.add_argument("query")

    p = sub.add_parser("gecko-coin", help="Fetch CoinGecko coin by id")
    p.add_argument("coin_id")

    p = sub.add_parser("birdeye-token", help="Fetch Birdeye token overview (Solana)")
    p.add_argument("address")

    return parser


def main():
    parser = build_parser()
    args = parser.parse_args()
    if args.command == "bybit-ticker":
        bybit_ticker(args.symbol)
    elif args.command == "bybit-klines":
        bybit_klines(args.symbol, args.interval, args.limit)
    elif args.command == "dex-search":
        dexscreener_search(args.query)
    elif args.command == "dex-token":
        dexscreener_token(args.chain, args.address)
    elif args.command == "gecko-search":
        coingecko_search(args.query)
    elif args.command == "gecko-coin":
        coingecko_coin(args.coin_id)
    elif args.command == "birdeye-token":
        birdeye_token(args.address)
    else:
        parser.error("Unknown command")


if __name__ == "__main__":
    main()
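The `build_parser` above reuses one variable `p` for each subparser; with `dest="command"` and `required=True`, every successful parse yields `args.command` plus that subcommand's own flags, which is what `main()` dispatches on. A self-contained sketch of the same wiring, trimmed to two subcommands and parsed offline (no network calls; names here are illustrative only):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="market data probe (sketch)")
    # required=True makes argparse reject an invocation with no subcommand.
    sub = parser.add_subparsers(dest="command", required=True)

    p = sub.add_parser("bybit-klines", help="Fetch Bybit spot klines")
    p.add_argument("symbol")
    p.add_argument("--interval", default="60")
    p.add_argument("--limit", type=int, default=10)

    # The same `p` variable is reused; each add_parser returns a fresh object.
    p = sub.add_parser("dex-search", help="Search DexScreener by query")
    p.add_argument("query")
    return parser


args = build_parser().parse_args(["bybit-klines", "BTCUSDT", "--limit", "5"])
print(args.command, args.symbol, args.interval, args.limit)  # bybit-klines BTCUSDT 60 5
```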
16
src/coinhunter/commands/paths.py
Normal file
@@ -0,0 +1,16 @@
"""Print CoinHunter runtime paths."""

from __future__ import annotations

import json

from ..runtime import get_runtime_paths


def main() -> int:
    print(json.dumps(get_runtime_paths().as_dict(), ensure_ascii=False, indent=2))
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
34
src/coinhunter/commands/review_context.py
Normal file
@@ -0,0 +1,34 @@
#!/usr/bin/env python3
"""CLI adapter for review context."""

import json
import sys

from ..services.review_service import generate_review


def main():
    hours = int(sys.argv[1]) if len(sys.argv) > 1 else 12
    review = generate_review(hours)
    compact = {
        "review_period_hours": review.get("review_period_hours", hours),
        "review_timestamp": review.get("review_timestamp"),
        "total_decisions": review.get("total_decisions", 0),
        "total_trades": review.get("total_trades", 0),
        "total_errors": review.get("total_errors", 0),
        "stats": review.get("stats", {}),
        "insights": review.get("insights", []),
        "recommendations": review.get("recommendations", []),
        "decision_quality_top": review.get("decision_quality", [])[:5],
        "should_report": bool(
            review.get("total_decisions", 0)
            or review.get("total_trades", 0)
            or review.get("total_errors", 0)
            or review.get("insights")
        ),
    }
    print(json.dumps(compact, ensure_ascii=False, indent=2))


if __name__ == "__main__":
    main()
24
src/coinhunter/commands/review_engine.py
Normal file
@@ -0,0 +1,24 @@
#!/usr/bin/env python3
"""CLI adapter for review engine."""

import json
import sys

from ..services.review_service import generate_review, save_review


def main():
    try:
        hours = int(sys.argv[1]) if len(sys.argv) > 1 else 1
        review = generate_review(hours)
        path = save_review(review)
        print(json.dumps({"ok": True, "saved_path": path, "review": review}, ensure_ascii=False, indent=2))
    except Exception as e:
        from ..logger import log_error
        log_error("review_engine", e)
        print(json.dumps({"ok": False, "error": str(e)}, ensure_ascii=False), file=sys.stderr)
        raise SystemExit(1)


if __name__ == "__main__":
    raise SystemExit(main())
31
src/coinhunter/commands/rotate_external_gate_log.py
Executable file
@@ -0,0 +1,31 @@
#!/usr/bin/env python3
"""Rotate external gate log using the user's logrotate config/state."""
import json
import shutil
import subprocess

from ..runtime import ensure_runtime_dirs, get_runtime_paths


def _paths():
    return get_runtime_paths()


def main():
    paths = _paths()
    ensure_runtime_dirs(paths)
    logrotate_bin = shutil.which("logrotate") or "/usr/sbin/logrotate"
    cmd = [logrotate_bin, "-s", str(paths.logrotate_status), str(paths.logrotate_config)]
    result = subprocess.run(cmd, capture_output=True, text=True)
    output = {
        "ok": result.returncode == 0,
        "returncode": result.returncode,
        "stdout": result.stdout.strip(),
        "stderr": result.stderr.strip(),
    }
    print(json.dumps(output, ensure_ascii=False, indent=2))
    return 0 if result.returncode == 0 else 1


if __name__ == "__main__":
    raise SystemExit(main())
@@ -1,66 +1,8 @@
-"""Runtime diagnostics for CoinHunter CLI."""
+"""Backward-compatible facade for doctor."""
 
 from __future__ import annotations
 
-import json
-import os
-import platform
-import shutil
-import sys
-
-from .runtime import ensure_runtime_dirs, get_runtime_paths, load_env_file, resolve_hermes_executable
-
-
-REQUIRED_ENV_VARS = ["BINANCE_API_KEY", "BINANCE_API_SECRET"]
-
-
-def main() -> int:
-    paths = ensure_runtime_dirs(get_runtime_paths())
-    env_file = load_env_file(paths)
-    hermes_executable = resolve_hermes_executable(paths)
-
-    env_checks = {}
-    missing_env = []
-    for name in REQUIRED_ENV_VARS:
-        present = bool(os.getenv(name))
-        env_checks[name] = present
-        if not present:
-            missing_env.append(name)
-
-    file_checks = {
-        "env_file_exists": env_file.exists(),
-        "config_exists": paths.config_file.exists(),
-        "positions_exists": paths.positions_file.exists(),
-        "logrotate_config_exists": paths.logrotate_config.exists(),
-    }
-    dir_checks = {
-        "root_exists": paths.root.exists(),
-        "state_dir_exists": paths.state_dir.exists(),
-        "logs_dir_exists": paths.logs_dir.exists(),
-        "reviews_dir_exists": paths.reviews_dir.exists(),
-        "cache_dir_exists": paths.cache_dir.exists(),
-    }
-    command_checks = {
-        "hermes": bool(shutil.which("hermes") or paths.hermes_bin.exists()),
-        "logrotate": bool(shutil.which("logrotate") or shutil.which("/usr/sbin/logrotate")),
-    }
-
-    report = {
-        "ok": not missing_env,
-        "python": sys.version.split()[0],
-        "platform": platform.platform(),
-        "env_file": str(env_file),
-        "hermes_executable": hermes_executable,
-        "paths": paths.as_dict(),
-        "env_checks": env_checks,
-        "missing_env": missing_env,
-        "file_checks": file_checks,
-        "dir_checks": dir_checks,
-        "command_checks": command_checks,
-    }
-    print(json.dumps(report, ensure_ascii=False, indent=2))
-    return 0 if report["ok"] else 1
-
+from .commands.doctor import main
 
 if __name__ == "__main__":
     raise SystemExit(main())
@@ -1,82 +1,8 @@
-#!/usr/bin/env python3
+"""Backward-compatible facade for external_gate."""
-import fcntl
-import json
-import subprocess
-import sys
-from datetime import datetime, timezone
-
-from .runtime import ensure_runtime_dirs, get_runtime_paths, resolve_hermes_executable
+
+from __future__ import annotations
-
-PATHS = get_runtime_paths()
-STATE_DIR = PATHS.state_dir
-LOCK_FILE = PATHS.external_gate_lock
-COINHUNTER_MODULE = [sys.executable, "-m", "coinhunter"]
-TRADE_JOB_ID = "4e6593fff158"
-
-
-def utc_now():
-    return datetime.now(timezone.utc).isoformat()
-
-
-def log(message: str):
-    print(f"[{utc_now()}] {message}")
-
-
-def run_cmd(args: list[str]) -> subprocess.CompletedProcess:
-    return subprocess.run(args, capture_output=True, text=True)
-
-
-def parse_json_output(text: str) -> dict:
-    text = (text or "").strip()
-    if not text:
-        return {}
-    return json.loads(text)
-
-
-def main():
-    ensure_runtime_dirs(PATHS)
-    with open(LOCK_FILE, "w", encoding="utf-8") as lockf:
-        try:
-            fcntl.flock(lockf.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
-        except BlockingIOError:
-            log("gate already running; skip")
-            return 0
-
-        precheck = run_cmd(COINHUNTER_MODULE + ["precheck"])
-        if precheck.returncode != 0:
-            log(f"precheck returned non-zero ({precheck.returncode}); stdout={precheck.stdout.strip()} stderr={precheck.stderr.strip()}")
-            return 1
-
-        try:
-            data = parse_json_output(precheck.stdout)
-        except Exception as e:
-            log(f"failed to parse precheck JSON: {e}; raw={precheck.stdout.strip()[:1000]}")
-            return 1
-
-        if not data.get("should_analyze"):
-            log("no trigger; skip model run")
-            return 0
-
-        if data.get("run_requested"):
-            log(f"trigger already queued at {data.get('run_requested_at')}; skip duplicate")
-            return 0
-
-        mark = run_cmd(COINHUNTER_MODULE + ["precheck", "--mark-run-requested", "external-gate queued cron run"])
-        if mark.returncode != 0:
-            log(f"failed to mark run requested; stdout={mark.stdout.strip()} stderr={mark.stderr.strip()}")
-            return 1
-
-        trigger = run_cmd([resolve_hermes_executable(PATHS), "cron", "run", TRADE_JOB_ID])
-        if trigger.returncode != 0:
-            log(f"failed to trigger trade cron job; stdout={trigger.stdout.strip()} stderr={trigger.stderr.strip()}")
-            return 1
-
-        reasons = ", ".join(data.get("reasons", [])) or "unknown"
-        log(f"queued trade job {TRADE_JOB_ID}; reasons={reasons}")
-        if trigger.stdout.strip():
-            log(trigger.stdout.strip())
-        return 0
-
+from .commands.external_gate import main
 
 if __name__ == "__main__":
     raise SystemExit(main())
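The removed `main()` above guarded against overlapping cron invocations with a non-blocking exclusive `flock`: a second instance hits `BlockingIOError` and exits quietly. A minimal sketch of that pattern, assuming a Unix platform (flock treats separately opened descriptors of the same file as independent lock holders, so two opens in one process are enough to demonstrate the conflict):

```python
import fcntl
import os
import tempfile

# Hypothetical lock file standing in for PATHS.external_gate_lock.
lock_path = tempfile.NamedTemporaryFile(delete=False).name

f1 = open(lock_path, "w")
fcntl.flock(f1.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)  # first holder acquires the gate

f2 = open(lock_path, "w")  # a second "instance" via a separate open()
try:
    fcntl.flock(f2.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
    second_acquired = True
except BlockingIOError:
    # This is the "gate already running; skip" branch of the old main().
    second_acquired = False

f2.close()
f1.close()
os.unlink(lock_path)
print(second_acquired)  # False: the gate was already held
```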
@@ -1,65 +1,8 @@
-#!/usr/bin/env python3
+"""Backward-compatible facade for init_user_state."""
-import json
-from datetime import datetime, timezone
-from pathlib import Path
-
-from .runtime import ensure_runtime_dirs, get_runtime_paths
+
+from __future__ import annotations
-
-PATHS = get_runtime_paths()
-ROOT = PATHS.root
-CACHE_DIR = PATHS.cache_dir
-
-
-def now_iso():
-    return datetime.now(timezone.utc).replace(microsecond=0).isoformat()
-
-
-def ensure_file(path: Path, payload: dict):
-    if path.exists():
-        return False
-    path.write_text(json.dumps(payload, ensure_ascii=False, indent=2) + "\n", encoding="utf-8")
-    return True
-
-
-def main():
-    ensure_runtime_dirs(PATHS)
-
-    created = []
-    ts = now_iso()
-
-    templates = {
-        ROOT / "config.json": {
-            "default_exchange": "bybit",
-            "default_quote_currency": "USDT",
-            "timezone": "Asia/Shanghai",
-            "preferred_chains": ["solana", "base"],
-            "created_at": ts,
-            "updated_at": ts,
-        },
-        ROOT / "accounts.json": {
-            "accounts": []
-        },
-        ROOT / "positions.json": {
-            "positions": []
-        },
-        ROOT / "watchlist.json": {
-            "watchlist": []
-        },
-        ROOT / "notes.json": {
-            "notes": []
-        },
-    }
-
-    for path, payload in templates.items():
-        if ensure_file(path, payload):
-            created.append(str(path))
-
-    print(json.dumps({
-        "root": str(ROOT),
-        "created": created,
-        "cache_dir": str(CACHE_DIR),
-    }, ensure_ascii=False, indent=2))
-
+from .commands.init_user_state import main
 
 if __name__ == "__main__":
-    main()
+    raise SystemExit(main())
@@ -2,28 +2,41 @@
 """Coin Hunter structured logger."""
 import json
 import traceback
-from datetime import datetime, timezone, timedelta
+
+__all__ = [
+    "SCHEMA_VERSION",
+    "log_decision",
+    "log_trade",
+    "log_snapshot",
+    "log_error",
+    "get_logs_by_date",
+    "get_logs_last_n_hours",
+]
+
+from datetime import datetime, timedelta, timezone
 
-from .runtime import get_runtime_paths
+from .runtime import get_runtime_paths, get_user_config
 
-LOG_DIR = get_runtime_paths().logs_dir
-SCHEMA_VERSION = 2
+SCHEMA_VERSION = get_user_config("logging.schema_version", 2)
 
 CST = timezone(timedelta(hours=8))
 
 
+def _log_dir():
+    return get_runtime_paths().logs_dir
+
+
 def bj_now():
     return datetime.now(CST)
 
 
 def ensure_dir():
-    LOG_DIR.mkdir(parents=True, exist_ok=True)
+    _log_dir().mkdir(parents=True, exist_ok=True)
 
 
 def _append_jsonl(prefix: str, payload: dict):
     ensure_dir()
     date_str = bj_now().strftime("%Y%m%d")
-    log_file = LOG_DIR / f"{prefix}_{date_str}.jsonl"
+    log_file = _log_dir() / f"{prefix}_{date_str}.jsonl"
     with open(log_file, "a", encoding="utf-8") as f:
         f.write(json.dumps(payload, ensure_ascii=False) + "\n")
@@ -42,8 +55,8 @@ def log_decision(data: dict):
     return log_event("decisions", data)
 
 
-def log_trade(action: str, symbol: str, qty: float = None, amount_usdt: float = None,
-              price: float = None, note: str = "", **extra):
+def log_trade(action: str, symbol: str, qty: float | None = None, amount_usdt: float | None = None,
+              price: float | None = None, note: str = "", **extra):
     payload = {
         "action": action,
         "symbol": symbol,
@@ -71,10 +84,10 @@ def log_error(where: str, error: Exception | str, **extra):
     return log_event("errors", payload)
 
 
-def get_logs_by_date(log_type: str, date_str: str = None) -> list:
+def get_logs_by_date(log_type: str, date_str: str | None = None) -> list:
     if date_str is None:
         date_str = bj_now().strftime("%Y%m%d")
-    log_file = LOG_DIR / f"{log_type}_{date_str}.jsonl"
+    log_file = _log_dir() / f"{log_type}_{date_str}.jsonl"
     if not log_file.exists():
         return []
     entries = []
@@ -1,243 +1,8 @@
-#!/usr/bin/env python3
+"""Backward-compatible facade for market_probe."""
-import argparse
-import json
-import os
-import sys
-import urllib.parse
-import urllib.request
-
-DEFAULT_TIMEOUT = 20
+
+from __future__ import annotations
-
-
-def fetch_json(url, headers=None, timeout=DEFAULT_TIMEOUT):
-    merged_headers = {
-        "Accept": "application/json",
-        "User-Agent": "Mozilla/5.0 (compatible; OpenClaw Coin Hunter/1.0)",
-    }
-    if headers:
-        merged_headers.update(headers)
-    req = urllib.request.Request(url, headers=merged_headers)
-    with urllib.request.urlopen(req, timeout=timeout) as resp:
-        data = resp.read()
-    return json.loads(data.decode("utf-8"))
-
-
-def print_json(data):
-    print(json.dumps(data, ensure_ascii=False, indent=2))
-
-
-def bybit_ticker(symbol: str):
-    url = (
-        "https://api.bybit.com/v5/market/tickers?category=spot&symbol="
-        + urllib.parse.quote(symbol.upper())
-    )
-    payload = fetch_json(url)
-    items = payload.get("result", {}).get("list", [])
-    if not items:
-        raise SystemExit(f"No Bybit spot ticker found for {symbol}")
-    item = items[0]
-    out = {
-        "provider": "bybit",
-        "symbol": symbol.upper(),
-        "lastPrice": item.get("lastPrice"),
-        "price24hPcnt": item.get("price24hPcnt"),
-        "highPrice24h": item.get("highPrice24h"),
-        "lowPrice24h": item.get("lowPrice24h"),
-        "turnover24h": item.get("turnover24h"),
-        "volume24h": item.get("volume24h"),
-        "bid1Price": item.get("bid1Price"),
-        "ask1Price": item.get("ask1Price"),
-    }
-    print_json(out)
-
-
-def bybit_klines(symbol: str, interval: str, limit: int):
-    params = urllib.parse.urlencode({
-        "category": "spot",
-        "symbol": symbol.upper(),
-        "interval": interval,
-        "limit": str(limit),
-    })
-    url = f"https://api.bybit.com/v5/market/kline?{params}"
-    payload = fetch_json(url)
-    rows = payload.get("result", {}).get("list", [])
-    out = {
-        "provider": "bybit",
-        "symbol": symbol.upper(),
-        "interval": interval,
-        "candles": [
-            {
-                "startTime": r[0],
-                "open": r[1],
-                "high": r[2],
-                "low": r[3],
-                "close": r[4],
-                "volume": r[5],
-                "turnover": r[6],
-            }
-            for r in rows
-        ],
-    }
-    print_json(out)
-
-
-def dexscreener_search(query: str):
-    url = "https://api.dexscreener.com/latest/dex/search/?q=" + urllib.parse.quote(query)
-    payload = fetch_json(url)
-    pairs = payload.get("pairs") or []
-    out = []
-    for p in pairs[:10]:
-        out.append({
-            "chainId": p.get("chainId"),
-            "dexId": p.get("dexId"),
-            "pairAddress": p.get("pairAddress"),
-            "url": p.get("url"),
-            "baseToken": p.get("baseToken"),
-            "quoteToken": p.get("quoteToken"),
-            "priceUsd": p.get("priceUsd"),
-            "liquidityUsd": (p.get("liquidity") or {}).get("usd"),
-            "fdv": p.get("fdv"),
-            "marketCap": p.get("marketCap"),
-            "volume24h": (p.get("volume") or {}).get("h24"),
-            "buys24h": ((p.get("txns") or {}).get("h24") or {}).get("buys"),
-            "sells24h": ((p.get("txns") or {}).get("h24") or {}).get("sells"),
-        })
-    print_json({"provider": "dexscreener", "query": query, "pairs": out})
-
-
-def dexscreener_token(chain: str, address: str):
-    url = f"https://api.dexscreener.com/tokens/v1/{urllib.parse.quote(chain)}/{urllib.parse.quote(address)}"
-    payload = fetch_json(url)
-    pairs = payload if isinstance(payload, list) else payload.get("pairs") or []
-    out = []
-    for p in pairs[:10]:
-        out.append({
-            "chainId": p.get("chainId"),
-            "dexId": p.get("dexId"),
-            "pairAddress": p.get("pairAddress"),
-            "baseToken": p.get("baseToken"),
-            "quoteToken": p.get("quoteToken"),
-            "priceUsd": p.get("priceUsd"),
-            "liquidityUsd": (p.get("liquidity") or {}).get("usd"),
-            "fdv": p.get("fdv"),
-            "marketCap": p.get("marketCap"),
-            "volume24h": (p.get("volume") or {}).get("h24"),
-        })
-    print_json({"provider": "dexscreener", "chain": chain, "address": address, "pairs": out})
-
-
-def coingecko_search(query: str):
-    url = "https://api.coingecko.com/api/v3/search?query=" + urllib.parse.quote(query)
-    payload = fetch_json(url)
-    coins = payload.get("coins") or []
-    out = []
-    for c in coins[:10]:
-        out.append({
-            "id": c.get("id"),
-            "name": c.get("name"),
-            "symbol": c.get("symbol"),
-            "marketCapRank": c.get("market_cap_rank"),
-            "thumb": c.get("thumb"),
-        })
-    print_json({"provider": "coingecko", "query": query, "coins": out})
-
-
-def coingecko_coin(coin_id: str):
-    params = urllib.parse.urlencode({
-        "localization": "false",
-        "tickers": "false",
-        "market_data": "true",
-        "community_data": "false",
-        "developer_data": "false",
-        "sparkline": "false",
-    })
-    url = f"https://api.coingecko.com/api/v3/coins/{urllib.parse.quote(coin_id)}?{params}"
-    payload = fetch_json(url)
-    md = payload.get("market_data") or {}
-    out = {
-        "provider": "coingecko",
-        "id": payload.get("id"),
-        "symbol": payload.get("symbol"),
-        "name": payload.get("name"),
-        "marketCapRank": payload.get("market_cap_rank"),
-        "currentPriceUsd": (md.get("current_price") or {}).get("usd"),
-        "marketCapUsd": (md.get("market_cap") or {}).get("usd"),
-        "fullyDilutedValuationUsd": (md.get("fully_diluted_valuation") or {}).get("usd"),
-        "totalVolumeUsd": (md.get("total_volume") or {}).get("usd"),
-        "priceChangePercentage24h": md.get("price_change_percentage_24h"),
-        "priceChangePercentage7d": md.get("price_change_percentage_7d"),
-        "priceChangePercentage30d": md.get("price_change_percentage_30d"),
-        "circulatingSupply": md.get("circulating_supply"),
-        "totalSupply": md.get("total_supply"),
-        "maxSupply": md.get("max_supply"),
-        "homepage": (payload.get("links") or {}).get("homepage", [None])[0],
-    }
-    print_json(out)
-
-
-def birdeye_token(address: str):
-    api_key = os.getenv("BIRDEYE_API_KEY") or os.getenv("BIRDEYE_APIKEY")
-    if not api_key:
-        raise SystemExit("Birdeye requires BIRDEYE_API_KEY in the environment")
-    url = "https://public-api.birdeye.so/defi/token_overview?address=" + urllib.parse.quote(address)
-    payload = fetch_json(url, headers={
-        "x-api-key": api_key,
-        "x-chain": "solana",
-    })
-    print_json({"provider": "birdeye", "address": address, "data": payload.get("data")})
|
|
||||||
def build_parser():
|
|
||||||
parser = argparse.ArgumentParser(description="Coin Hunter market data probe")
|
|
||||||
sub = parser.add_subparsers(dest="command", required=True)
|
|
||||||
|
|
||||||
p = sub.add_parser("bybit-ticker", help="Fetch Bybit spot ticker")
|
|
||||||
p.add_argument("symbol")
|
|
||||||
|
|
||||||
p = sub.add_parser("bybit-klines", help="Fetch Bybit spot klines")
|
|
||||||
p.add_argument("symbol")
|
|
||||||
p.add_argument("--interval", default="60", help="Bybit interval, e.g. 1, 5, 15, 60, 240, D")
|
|
||||||
p.add_argument("--limit", type=int, default=10)
|
|
||||||
|
|
||||||
p = sub.add_parser("dex-search", help="Search DexScreener by query")
|
|
||||||
p.add_argument("query")
|
|
||||||
|
|
||||||
p = sub.add_parser("dex-token", help="Fetch DexScreener token pairs by chain/address")
|
|
||||||
p.add_argument("chain")
|
|
||||||
p.add_argument("address")
|
|
||||||
|
|
||||||
p = sub.add_parser("gecko-search", help="Search CoinGecko")
|
|
||||||
p.add_argument("query")
|
|
||||||
|
|
||||||
p = sub.add_parser("gecko-coin", help="Fetch CoinGecko coin by id")
|
|
||||||
p.add_argument("coin_id")
|
|
||||||
|
|
||||||
p = sub.add_parser("birdeye-token", help="Fetch Birdeye token overview (Solana)")
|
|
||||||
p.add_argument("address")
|
|
||||||
|
|
||||||
return parser
|
|
||||||
|
|
||||||
|
|
||||||
def main():
|
|
||||||
parser = build_parser()
|
|
||||||
args = parser.parse_args()
|
|
||||||
if args.command == "bybit-ticker":
|
|
||||||
bybit_ticker(args.symbol)
|
|
||||||
elif args.command == "bybit-klines":
|
|
||||||
bybit_klines(args.symbol, args.interval, args.limit)
|
|
||||||
elif args.command == "dex-search":
|
|
||||||
dexscreener_search(args.query)
|
|
||||||
elif args.command == "dex-token":
|
|
||||||
dexscreener_token(args.chain, args.address)
|
|
||||||
elif args.command == "gecko-search":
|
|
||||||
coingecko_search(args.query)
|
|
||||||
elif args.command == "gecko-coin":
|
|
||||||
coingecko_coin(args.coin_id)
|
|
||||||
elif args.command == "birdeye-token":
|
|
||||||
birdeye_token(args.address)
|
|
||||||
else:
|
|
||||||
parser.error("Unknown command")
|
|
||||||
|
|
||||||
|
from .commands.market_probe import main
|
||||||
|
|
||||||
if __name__ == "__main__":
|
if __name__ == "__main__":
|
||||||
main()
|
raise SystemExit(main())
|
||||||
|
|||||||
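The probe's deleted `main()` routes each subcommand to its handler through an if/elif chain. A minimal standalone sketch, not from this repo (the `greet` subcommand and handler are hypothetical), of the same argparse subcommand pattern using `set_defaults(func=...)`, which removes the need for such a chain:

```python
import argparse


def greet(args):
    # Handler for the hypothetical "greet" subcommand.
    return f"hello {args.name}"


def build_parser():
    parser = argparse.ArgumentParser(description="dispatch sketch")
    sub = parser.add_subparsers(dest="command", required=True)
    p = sub.add_parser("greet", help="say hello")
    p.add_argument("name")
    p.set_defaults(func=greet)  # attach the handler to its subparser
    return parser


def main(argv=None):
    args = build_parser().parse_args(argv)
    return args.func(args)  # dispatch directly, no if/elif chain
```

Each subparser carries its own handler, so adding a command touches only `build_parser()`.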
@@ -1,16 +1,8 @@
-"""Print CoinHunter runtime paths."""
+"""Backward-compatible facade for paths."""
 
 from __future__ import annotations
 
-import json
-
-from .runtime import get_runtime_paths
-
-
-def main() -> int:
-    print(json.dumps(get_runtime_paths().as_dict(), ensure_ascii=False, indent=2))
-    return 0
-
+from .commands.paths import main
 
 if __name__ == "__main__":
     raise SystemExit(main())
1005 src/coinhunter/precheck.py (Executable file → Normal file)
File diff suppressed because it is too large
@@ -1,32 +1,12 @@
 #!/usr/bin/env python3
-import json
-import sys
+"""Backward-compatible facade for review context.
 
-from . import review_engine
+The executable implementation lives in ``coinhunter.commands.review_context``.
+"""
 
+from __future__ import annotations
 
-def main():
-    hours = int(sys.argv[1]) if len(sys.argv) > 1 else 12
-    review = review_engine.generate_review(hours)
-    compact = {
-        "review_period_hours": review.get("review_period_hours", hours),
-        "review_timestamp": review.get("review_timestamp"),
-        "total_decisions": review.get("total_decisions", 0),
-        "total_trades": review.get("total_trades", 0),
-        "total_errors": review.get("total_errors", 0),
-        "stats": review.get("stats", {}),
-        "insights": review.get("insights", []),
-        "recommendations": review.get("recommendations", []),
-        "decision_quality_top": review.get("decision_quality", [])[:5],
-        "should_report": bool(
-            review.get("total_decisions", 0)
-            or review.get("total_trades", 0)
-            or review.get("total_errors", 0)
-            or review.get("insights")
-        ),
-    }
-    print(json.dumps(compact, ensure_ascii=False, indent=2))
-
+from .commands.review_context import main
 
 if __name__ == "__main__":
     main()
@@ -1,312 +1,48 @@
 #!/usr/bin/env python3
-"""Coin Hunter hourly review engine."""
-import json
-import os
-import sys
-from datetime import datetime, timezone, timedelta
-from pathlib import Path
-
-import ccxt
-
-from .logger import get_logs_last_n_hours, log_error
-from .runtime import get_runtime_paths, load_env_file
-
-PATHS = get_runtime_paths()
-ENV_FILE = PATHS.env_file
-REVIEW_DIR = PATHS.reviews_dir
-
-CST = timezone(timedelta(hours=8))
-
-
-def load_env():
-    load_env_file(PATHS)
-
-
-def get_exchange():
-    load_env()
-    ex = ccxt.binance({
-        "apiKey": os.getenv("BINANCE_API_KEY"),
-        "secret": os.getenv("BINANCE_API_SECRET"),
-        "options": {"defaultType": "spot"},
-        "enableRateLimit": True,
-    })
-    ex.load_markets()
-    return ex
-
-
-def ensure_review_dir():
-    REVIEW_DIR.mkdir(parents=True, exist_ok=True)
-
-
-def norm_symbol(symbol: str) -> str:
-    s = symbol.upper().replace("-", "").replace("_", "")
-    if "/" in s:
-        return s
-    if s.endswith("USDT"):
-        return s[:-4] + "/USDT"
-    return s
-
-
-def fetch_current_price(ex, symbol: str):
-    try:
-        return float(ex.fetch_ticker(norm_symbol(symbol))["last"])
-    except Exception:
-        return None
-
-
-def analyze_trade(trade: dict, ex) -> dict:
-    symbol = trade.get("symbol")
-    price = trade.get("price")
-    action = trade.get("action", "")
-    current_price = fetch_current_price(ex, symbol) if symbol else None
-    pnl_estimate = None
-    outcome = "neutral"
-    if price and current_price and symbol:
-        change_pct = (current_price - float(price)) / float(price) * 100
-        if action == "BUY":
-            pnl_estimate = round(change_pct, 2)
-            outcome = "good" if change_pct > 2 else "bad" if change_pct < -2 else "neutral"
-        elif action == "SELL_ALL":
-            pnl_estimate = round(-change_pct, 2)
-            # Lowered missed threshold: >2% is a missed opportunity in short-term trading
-            outcome = "good" if change_pct < -2 else "missed" if change_pct > 2 else "neutral"
-    return {
-        "timestamp": trade.get("timestamp"),
-        "symbol": symbol,
-        "action": action,
-        "decision_id": trade.get("decision_id"),
-        "execution_price": price,
-        "current_price": current_price,
-        "pnl_estimate_pct": pnl_estimate,
-        "outcome_assessment": outcome,
-    }
-
-
-def analyze_hold_passes(decisions: list, ex) -> list:
-    """Check HOLD decisions where an opportunity was explicitly PASSed but later rallied."""
-    misses = []
-    for d in decisions:
-        if d.get("decision") != "HOLD":
-            continue
-        analysis = d.get("analysis")
-        if not isinstance(analysis, dict):
-            continue
-        opportunities = analysis.get("opportunities_evaluated", [])
-        market_snapshot = d.get("market_snapshot", {})
-        if not opportunities or not market_snapshot:
-            continue
-        for op in opportunities:
-            verdict = op.get("verdict", "")
-            if "PASS" not in verdict and "pass" not in verdict:
-                continue
-            symbol = op.get("symbol", "")
-            # Try to extract decision-time price from market_snapshot
-            snap = market_snapshot.get(symbol) or market_snapshot.get(symbol.replace("/", ""))
-            if not snap:
-                continue
-            decision_price = None
-            if isinstance(snap, dict):
-                decision_price = float(snap.get("lastPrice", 0)) or float(snap.get("last", 0))
-            elif isinstance(snap, (int, float, str)):
-                decision_price = float(snap)
-            if not decision_price:
-                continue
-            current_price = fetch_current_price(ex, symbol)
-            if not current_price:
-                continue
-            change_pct = (current_price - decision_price) / decision_price * 100
-            if change_pct > 3:  # >3% rally after being passed = missed watch
-                misses.append({
-                    "timestamp": d.get("timestamp"),
-                    "symbol": symbol,
-                    "decision_price": round(decision_price, 8),
-                    "current_price": round(current_price, 8),
-                    "change_pct": round(change_pct, 2),
-                    "verdict_snippet": verdict[:80],
-                })
-    return misses
-
-
-def analyze_cash_misses(decisions: list, ex) -> list:
-    """If portfolio was mostly USDT but a watchlist coin rallied >5%, flag it."""
-    misses = []
-    watchlist = set()
-    for d in decisions:
-        snap = d.get("market_snapshot", {})
-        if isinstance(snap, dict):
-            for k in snap.keys():
-                if k.endswith("USDT"):
-                    watchlist.add(k)
-    for d in decisions:
-        ts = d.get("timestamp")
-        balances = d.get("balances") or d.get("balances_before", {})
-        if not balances:
-            continue
-        total = sum(float(v) if isinstance(v, (int, float, str)) else 0 for v in balances.values())
-        usdt = float(balances.get("USDT", 0))
-        if total == 0 or (usdt / total) < 0.9:
-            continue
-        # Portfolio mostly cash, check watchlist performance
-        snap = d.get("market_snapshot", {})
-        if not isinstance(snap, dict):
-            continue
-        for symbol, data in snap.items():
-            if not symbol.endswith("USDT"):
-                continue
-            decision_price = None
-            if isinstance(data, dict):
-                decision_price = float(data.get("lastPrice", 0)) or float(data.get("last", 0))
-            elif isinstance(data, (int, float, str)):
-                decision_price = float(data)
-            if not decision_price:
-                continue
-            current_price = fetch_current_price(ex, symbol)
-            if not current_price:
-                continue
-            change_pct = (current_price - decision_price) / decision_price * 100
-            if change_pct > 5:
-                misses.append({
-                    "timestamp": ts,
-                    "symbol": symbol,
-                    "decision_price": round(decision_price, 8),
-                    "current_price": round(current_price, 8),
-                    "change_pct": round(change_pct, 2),
-                })
-    # Deduplicate by symbol keeping the worst miss
-    seen = {}
-    for m in misses:
-        sym = m["symbol"]
-        if sym not in seen or m["change_pct"] > seen[sym]["change_pct"]:
-            seen[sym] = m
-    return list(seen.values())
-
-
-def generate_review(hours: int = 1) -> dict:
-    decisions = get_logs_last_n_hours("decisions", hours)
-    trades = get_logs_last_n_hours("trades", hours)
-    errors = get_logs_last_n_hours("errors", hours)
-
-    review = {
-        "review_period_hours": hours,
-        "review_timestamp": datetime.now(CST).isoformat(),
-        "total_decisions": len(decisions),
-        "total_trades": len(trades),
-        "total_errors": len(errors),
-        "decision_quality": [],
-        "stats": {},
-        "insights": [],
-        "recommendations": [],
-    }
-
-    if not decisions and not trades:
-        review["insights"].append("No decisions or trades recorded in this period")
-        return review
-
-    ex = get_exchange()
-    outcomes = {"good": 0, "neutral": 0, "bad": 0, "missed": 0}
-    pnl_samples = []
-
-    for trade in trades:
-        analysis = analyze_trade(trade, ex)
-        review["decision_quality"].append(analysis)
-        outcomes[analysis["outcome_assessment"]] += 1
-        if analysis["pnl_estimate_pct"] is not None:
-            pnl_samples.append(analysis["pnl_estimate_pct"])
-
-    # New: analyze missed opportunities from HOLD / cash decisions
-    hold_pass_misses = analyze_hold_passes(decisions, ex)
-    cash_misses = analyze_cash_misses(decisions, ex)
-    total_missed = outcomes["missed"] + len(hold_pass_misses) + len(cash_misses)
-
-    review["stats"] = {
-        "good_decisions": outcomes["good"],
-        "neutral_decisions": outcomes["neutral"],
-        "bad_decisions": outcomes["bad"],
-        "missed_opportunities": total_missed,
-        "missed_sell_all": outcomes["missed"],
-        "missed_hold_passes": len(hold_pass_misses),
-        "missed_cash_sits": len(cash_misses),
-        "avg_estimated_edge_pct": round(sum(pnl_samples) / len(pnl_samples), 2) if pnl_samples else None,
-    }
-
-    if errors:
-        review["insights"].append(f"{len(errors)} execution/system errors this period; robustness needs priority attention")
-    if outcomes["bad"] > outcomes["good"]:
-        review["insights"].append("Recent trade quality is weak; consider lowering trade frequency or raising entry thresholds")
-    if total_missed > 0:
-        parts = []
-        if outcomes["missed"]:
-            parts.append(f"price kept rising after a sell {outcomes['missed']} times")
-        if hold_pass_misses:
-            parts.append(f"missed {len(hold_pass_misses)} rallies after a PASS")
-        if cash_misses:
-            parts.append(f"missed {len(cash_misses)} rallies while sitting in cash")
-        review["insights"].append("Missed opportunities: " + ", ".join(parts) + "; consider loosening trend-following or entry conditions")
-    if outcomes["good"] >= max(1, outcomes["bad"] + total_missed):
-        review["insights"].append("Recent decisions are acceptable overall")
-    if not trades and decisions:
-        review["insights"].append("Decisions but no fills; possibly waiting, minimum-notional limits, or blocked execution")
-    if len(trades) < len(decisions) * 0.1 and decisions:
-        review["insights"].append("Many decisions did not convert to trades; check whether execution thresholds (min notional/precision/fee buffer) are too high")
-    if hold_pass_misses:
-        for m in hold_pass_misses[:3]:
-            review["insights"].append(f"PASSed {m['symbol']} during HOLD; it later rose {m['change_pct']}%")
-    if cash_misses:
-        for m in cash_misses[:3]:
-            review["insights"].append(f"{m['symbol']} rose {m['change_pct']}% while the portfolio was mostly USDT")
-
-    review["recommendations"] = [
-        "First check whether min-notional/precision rejections affect small-capital execution",
-        "If edge is negative for two consecutive review periods, reduce rebalancing frequency for the next hour",
-        "If error logs increase, prefer defensive mode (hold more USDT)",
-    ]
-    return review
-
-
-def save_review(review: dict):
-    ensure_review_dir()
-    ts = datetime.now(CST).strftime("%Y%m%d_%H%M%S")
-    path = REVIEW_DIR / f"review_{ts}.json"
-    path.write_text(json.dumps(review, indent=2, ensure_ascii=False), encoding="utf-8")
-    return str(path)
-
-
-def print_review(review: dict):
-    print("=" * 50)
-    print("📊 Coin Hunter hourly review report")
-    print(f"Review time: {review['review_timestamp']}")
-    print(f"Period: past {review['review_period_hours']} hours")
-    print(f"Decisions: {review['total_decisions']} | Trades: {review['total_trades']} | Errors: {review['total_errors']}")
-    stats = review.get("stats", {})
-    print("\nDecision quality stats:")
-    print(f"  ✓ Good: {stats.get('good_decisions', 0)}")
-    print(f"  ○ Neutral: {stats.get('neutral_decisions', 0)}")
-    print(f"  ✗ Bad: {stats.get('bad_decisions', 0)}")
-    print(f"  ↗ Missed opportunities: {stats.get('missed_opportunities', 0)}")
-    if stats.get("avg_estimated_edge_pct") is not None:
-        print(f"  Avg estimated edge: {stats['avg_estimated_edge_pct']}%")
-    if review.get("insights"):
-        print("\n💡 Insights:")
-        for item in review["insights"]:
-            print(f"  • {item}")
-    if review.get("recommendations"):
-        print("\n🔧 Recommendations:")
-        for item in review["recommendations"]:
-            print(f"  • {item}")
-    print("=" * 50)
-
-
-def main():
-    try:
-        hours = int(sys.argv[1]) if len(sys.argv) > 1 else 1
-        review = generate_review(hours)
-        path = save_review(review)
-        print_review(review)
-        print(f"Review saved to: {path}")
-    except Exception as e:
-        log_error("review_engine", e)
-        raise
+"""Backward-compatible facade for review engine.
+
+The executable implementation lives in ``coinhunter.commands.review_engine``.
+Core logic is in ``coinhunter.services.review_service``.
+"""
+
+from __future__ import annotations
+
+from importlib import import_module
+
+# Re-export service functions for backward compatibility
+_EXPORT_MAP = {
+    "load_env": (".services.review_service", "load_env"),
+    "get_exchange": (".services.review_service", "get_exchange"),
+    "ensure_review_dir": (".services.review_service", "ensure_review_dir"),
+    "norm_symbol": (".services.review_service", "norm_symbol"),
+    "fetch_current_price": (".services.review_service", "fetch_current_price"),
+    "analyze_trade": (".services.review_service", "analyze_trade"),
+    "analyze_hold_passes": (".services.review_service", "analyze_hold_passes"),
+    "analyze_cash_misses": (".services.review_service", "analyze_cash_misses"),
+    "generate_review": (".services.review_service", "generate_review"),
+    "save_review": (".services.review_service", "save_review"),
+    "print_review": (".services.review_service", "print_review"),
+}
+
+__all__ = sorted(set(_EXPORT_MAP) | {"main"})
+
+
+def __getattr__(name: str):
+    if name not in _EXPORT_MAP:
+        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
+    module_name, attr_name = _EXPORT_MAP[name]
+    module = import_module(module_name, __package__)
+    return getattr(module, attr_name)
+
+
+def __dir__():
+    return sorted(set(globals()) | set(__all__))
+
+
+def main():
+    from .commands.review_engine import main as _main
+    return _main()
 
 
 if __name__ == "__main__":
-    main()
+    raise SystemExit(main())
@@ -1,28 +1,8 @@
-#!/usr/bin/env python3
-"""Rotate external gate log using the user's logrotate config/state."""
+"""Backward-compatible facade for rotate_external_gate_log."""
 
-import shutil
-import subprocess
+from __future__ import annotations
 
-from .runtime import ensure_runtime_dirs, get_runtime_paths
-
-PATHS = get_runtime_paths()
-STATE_DIR = PATHS.state_dir
-LOGROTATE_STATUS = PATHS.logrotate_status
-LOGROTATE_CONF = PATHS.logrotate_config
-LOGS_DIR = PATHS.logs_dir
-
-
-def main():
-    ensure_runtime_dirs(PATHS)
-    logrotate_bin = shutil.which("logrotate") or "/usr/sbin/logrotate"
-    cmd = [logrotate_bin, "-s", str(LOGROTATE_STATUS), str(LOGROTATE_CONF)]
-    result = subprocess.run(cmd, capture_output=True, text=True)
-    if result.stdout.strip():
-        print(result.stdout.strip())
-    if result.stderr.strip():
-        print(result.stderr.strip())
-    return result.returncode
-
+from .commands.rotate_external_gate_log import main
 
 if __name__ == "__main__":
     raise SystemExit(main())
@@ -2,6 +2,7 @@
 
 from __future__ import annotations
 
+import json
 import os
 import shutil
 from dataclasses import asdict, dataclass
@@ -24,6 +25,7 @@ class RuntimePaths:
     positions_lock: Path
     executions_lock: Path
     precheck_state_file: Path
+    precheck_state_lock: Path
     external_gate_lock: Path
     logrotate_config: Path
     logrotate_status: Path
@@ -64,6 +66,7 @@ def get_runtime_paths() -> RuntimePaths:
         positions_lock=root / "positions.lock",
         executions_lock=root / "executions.lock",
         precheck_state_file=state_dir / "precheck_state.json",
+        precheck_state_lock=state_dir / "precheck_state.lock",
         external_gate_lock=state_dir / "external_gate.lock",
         logrotate_config=root / "logrotate_external_gate.conf",
         logrotate_status=state_dir / "logrotate_external_gate.status",
@@ -105,3 +108,20 @@ def mask_secret(value: str | None, *, tail: int = 4) -> str:
     if len(value) <= tail:
         return "*" * len(value)
     return "*" * max(4, len(value) - tail) + value[-tail:]
+
+
+def get_user_config(key: str, default=None):
+    """Read a dotted key from the user config file."""
+    paths = get_runtime_paths()
+    try:
+        config = json.loads(paths.config_file.read_text(encoding="utf-8"))
+    except Exception:
+        return default
+    for part in key.split("."):
+        if isinstance(config, dict):
+            config = config.get(part)
+            if config is None:
+                return default
+        else:
+            return default
+    return config if config is not None else default
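`get_user_config` walks a dotted key like `"precheck.cooldown_minutes"` through nested dicts, returning `default` at the first miss. A sketch of just the traversal, run against an in-memory dict instead of the config file on disk (`lookup` and `cfg` are illustrative names):

```python
def lookup(config, key, default=None):
    # Walk "a.b.c" one segment at a time through nested dicts.
    for part in key.split("."):
        if isinstance(config, dict):
            config = config.get(part)
            if config is None:
                return default  # missing segment
        else:
            return default  # tried to descend into a non-dict leaf
    return config if config is not None else default


cfg = {"precheck": {"cooldown_minutes": 45}}
# lookup(cfg, "precheck.cooldown_minutes") -> 45
# lookup(cfg, "precheck.missing", default=10) -> 10
```

Descending past a leaf value (e.g. `"precheck.cooldown_minutes.extra"`) also falls back to `default`, so malformed keys never raise.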
105 src/coinhunter/services/adaptive_profile.py (Normal file)
@@ -0,0 +1,105 @@
+"""Adaptive trigger profile builder for precheck."""
+
+from __future__ import annotations
+
+from .data_utils import to_float
+from .precheck_constants import (
+    BASE_CANDIDATE_SCORE_TRIGGER_RATIO,
+    BASE_COOLDOWN_MINUTES,
+    BASE_FORCE_ANALYSIS_AFTER_MINUTES,
+    BASE_PNL_TRIGGER_PCT,
+    BASE_PORTFOLIO_MOVE_TRIGGER_PCT,
+    BASE_PRICE_MOVE_TRIGGER_PCT,
+    MIN_ACTIONABLE_USDT,
+    MIN_REAL_POSITION_VALUE_USDT,
+)
+
+
+def build_adaptive_profile(snapshot: dict):
+    portfolio_value = snapshot.get("portfolio_value_usdt", 0)
+    free_usdt = snapshot.get("free_usdt", 0)
+    session = snapshot.get("session")
+    market = snapshot.get("market_regime", {})
+    volatility_score = to_float(market.get("volatility_score"), 0)
+    leader_score = to_float(market.get("leader_score"), 0)
+    actionable_positions = int(snapshot.get("actionable_positions") or 0)
+    largest_position_value = to_float(snapshot.get("largest_position_value_usdt"), 0)
+
+    capital_band = "micro" if portfolio_value < 25 else "small" if portfolio_value < 100 else "normal"
+    session_mode = "quiet" if session in {"overnight", "asia-morning"} else "active"
+    volatility_mode = "high" if volatility_score >= 2.5 or leader_score >= 120 else "normal"
+    dust_mode = free_usdt < MIN_ACTIONABLE_USDT and largest_position_value < MIN_REAL_POSITION_VALUE_USDT
+
+    price_trigger = BASE_PRICE_MOVE_TRIGGER_PCT
+    pnl_trigger = BASE_PNL_TRIGGER_PCT
+    portfolio_trigger = BASE_PORTFOLIO_MOVE_TRIGGER_PCT
+    candidate_ratio = BASE_CANDIDATE_SCORE_TRIGGER_RATIO
+    force_minutes = BASE_FORCE_ANALYSIS_AFTER_MINUTES
+    cooldown_minutes = BASE_COOLDOWN_MINUTES
+    soft_score_threshold = 2.0
+
+    if capital_band == "micro":
+        price_trigger += 0.02
+        pnl_trigger += 0.03
+        portfolio_trigger += 0.04
+        candidate_ratio += 0.25
+        force_minutes += 180
+        cooldown_minutes += 30
+        soft_score_threshold += 1.0
+    elif capital_band == "small":
+        price_trigger += 0.01
+        pnl_trigger += 0.01
+        portfolio_trigger += 0.01
+        candidate_ratio += 0.1
+        force_minutes += 60
+        cooldown_minutes += 10
+        soft_score_threshold += 0.5
+
+    if session_mode == "quiet":
+        price_trigger += 0.01
+        pnl_trigger += 0.01
+        portfolio_trigger += 0.01
+        candidate_ratio += 0.05
+        soft_score_threshold += 0.5
+    else:
+        force_minutes = max(120, force_minutes - 30)
+
+    if volatility_mode == "high":
+        price_trigger = max(0.02, price_trigger - 0.01)
+        pnl_trigger = max(0.025, pnl_trigger - 0.005)
+        portfolio_trigger = max(0.025, portfolio_trigger - 0.005)
+        candidate_ratio = max(1.1, candidate_ratio - 0.1)
+        cooldown_minutes = max(20, cooldown_minutes - 10)
+        soft_score_threshold = max(1.0, soft_score_threshold - 0.5)
+
+    if dust_mode:
+        candidate_ratio += 0.3
+        force_minutes += 180
+        cooldown_minutes += 30
+        soft_score_threshold += 1.5
+
+    return {
+        "capital_band": capital_band,
+        "session_mode": session_mode,
+        "volatility_mode": volatility_mode,
+        "dust_mode": dust_mode,
+        "price_move_trigger_pct": round(price_trigger, 4),
+        "pnl_trigger_pct": round(pnl_trigger, 4),
+        "portfolio_move_trigger_pct": round(portfolio_trigger, 4),
+        "candidate_score_trigger_ratio": round(candidate_ratio, 4),
+        "force_analysis_after_minutes": int(force_minutes),
+        "cooldown_minutes": int(cooldown_minutes),
+        "soft_score_threshold": round(soft_score_threshold, 2),
+        "new_entries_allowed": free_usdt >= MIN_ACTIONABLE_USDT and not dust_mode,
+        "switching_allowed": actionable_positions > 0 or portfolio_value >= 25,
+    }
+
+
+def _candidate_weight(snapshot: dict, profile: dict) -> float:
+    if not profile.get("new_entries_allowed"):
+        return 0.5
+    if profile.get("volatility_mode") == "high":
+        return 1.5
+    if snapshot.get("session") in {"europe-open", "us-session"}:
+        return 1.25
+    return 1.0
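The chained conditional that sets `capital_band` is dense; read as two thresholds (25 and 100 USDT) it is easier to check. An equivalent standalone sketch with the same cutoffs (`capital_band` here is a free function, not the module's local variable):

```python
def capital_band(portfolio_value: float) -> str:
    # Below 25 USDT: "micro"; 25 up to (but not including) 100: "small";
    # 100 and above: "normal". Same boundaries as the chained expression.
    if portfolio_value < 25:
        return "micro"
    if portfolio_value < 100:
        return "small"
    return "normal"
```

Note the boundaries are half-open: a portfolio of exactly 25 USDT is already "small", and exactly 100 is "normal".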
71 src/coinhunter/services/candidate_scoring.py (Normal file)
@@ -0,0 +1,71 @@
+"""Candidate coin scoring and selection for precheck."""
+
+from __future__ import annotations
+
+import re
+
+from .data_utils import to_float
+from .precheck_constants import BLACKLIST, MAX_PRICE_CAP, MIN_CHANGE_PCT, TOP_CANDIDATES
+
+
+def _liquidity_score(volume: float) -> float:
+    return min(1.0, max(0.0, volume / 50_000_000))
+
+
+def _breakout_score(price: float, avg_price: float | None) -> float:
+    if not avg_price or avg_price <= 0:
+        return 0.0
+    return (price - avg_price) / avg_price
+
+
+def top_candidates_from_tickers(tickers: dict):
+    candidates = []
+    for symbol, ticker in tickers.items():
+        if not symbol.endswith("/USDT"):
+            continue
+        base = symbol.replace("/USDT", "")
+        if base in BLACKLIST:
+            continue
+        if not re.fullmatch(r"[A-Z0-9]{2,20}", base):
+            continue
+        price = to_float(ticker.get("last"))
+        change_pct = to_float(ticker.get("percentage"))
+        volume = to_float(ticker.get("quoteVolume"))
+        high = to_float(ticker.get("high"))
+        low = to_float(ticker.get("low"))
+        avg_price = to_float(ticker.get("average"), None)
+        if price <= 0:
+            continue
+        if MAX_PRICE_CAP is not None and price > MAX_PRICE_CAP:
+            continue
+        if volume < 500_000:
+            continue
+        if change_pct < MIN_CHANGE_PCT:
+            continue
+        momentum = change_pct / 10.0
+        liquidity = _liquidity_score(volume)
+        breakout = _breakout_score(price, avg_price)
+        score = round(momentum * 0.5 + liquidity * 0.3 + breakout * 0.2, 4)
+        band = "major" if price >= 10 else "mid" if price >= 1 else "meme"
+        distance_from_high = (high - price) / max(high, 1e-9) if high else None
+        candidates.append({
+            "symbol": symbol,
+            "base": base,
+            "price": round(price, 8),
+            "change_24h_pct": round(change_pct, 2),
+            "volume_24h": round(volume, 2),
+            "breakout_pct": round(breakout * 100, 2),
+            "high_24h": round(high, 8) if high else None,
+            "low_24h": round(low, 8) if low else None,
+            "distance_from_high_pct": round(distance_from_high * 100, 2) if distance_from_high is not None else None,
+            "score": score,
+            "band": band,
+        })
+    candidates.sort(key=lambda x: x["score"], reverse=True)
+    global_top = candidates[:TOP_CANDIDATES]
+    layers: dict[str, list[dict]] = {"major": [], "mid": [], "meme": []}
+    for c in candidates:
+        layers[c["band"]].append(c)
+    for k in layers:
+        layers[k] = layers[k][:5]
+    return global_top, layers
39 src/coinhunter/services/data_utils.py Normal file
@@ -0,0 +1,39 @@
"""Generic data helpers for precheck."""

from __future__ import annotations

import hashlib
import json
from pathlib import Path


def load_json(path: Path, default):
    if not path.exists():
        return default
    try:
        return json.loads(path.read_text(encoding="utf-8"))
    except Exception:
        return default


def stable_hash(data) -> str:
    payload = json.dumps(data, sort_keys=True, ensure_ascii=False, separators=(",", ":"))
    return hashlib.sha1(payload.encode("utf-8")).hexdigest()


def to_float(value, default=0.0):
    try:
        if value is None:
            return default
        return float(value)
    except Exception:
        return default


def norm_symbol(symbol: str) -> str:
    s = symbol.upper().replace("-", "").replace("_", "")
    if "/" in s:
        return s
    if s.endswith("USDT"):
        return s[:-4] + "/USDT"
    return s
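The point of `stable_hash` is that logically equal payloads hash identically regardless of dict insertion order, which is what makes it usable for deduplication. A quick self-contained check, using the same function body as above:

```python
import hashlib
import json


def stable_hash(data) -> str:
    # Canonical JSON (sorted keys, no whitespace) makes the digest
    # independent of dict insertion order.
    payload = json.dumps(data, sort_keys=True, ensure_ascii=False, separators=(",", ":"))
    return hashlib.sha1(payload.encode("utf-8")).hexdigest()


a = stable_hash({"x": 1, "y": 2})
b = stable_hash({"y": 2, "x": 1})
assert a == b  # key order does not matter
print(len(a))  # 40 hex chars (SHA-1)
```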
@@ -1,25 +1,47 @@
 """Exchange helpers (ccxt, markets, balances, order prep)."""
 import math
 import os
+import time
+
+__all__ = [
+    "load_env",
+    "get_exchange",
+    "norm_symbol",
+    "storage_symbol",
+    "fetch_balances",
+    "build_market_snapshot",
+    "market_and_ticker",
+    "floor_to_step",
+    "prepare_buy_quantity",
+    "prepare_sell_quantity",
+]
 
 import ccxt
 
-from ..runtime import get_runtime_paths, load_env_file
-from .trade_common import log
+from ..runtime import get_runtime_paths, get_user_config, load_env_file
 
-PATHS = get_runtime_paths()
+_exchange_cache = None
+_exchange_cached_at = None
+
+CACHE_TTL_SECONDS = 3600
 
 
 def load_env():
-    load_env_file(PATHS)
+    load_env_file(get_runtime_paths())
 
 
-def get_exchange():
+def get_exchange(force_new: bool = False):
+    global _exchange_cache, _exchange_cached_at
+    now = time.time()
+    if not force_new and _exchange_cache is not None and _exchange_cached_at is not None:
+        ttl = get_user_config("exchange.cache_ttl_seconds", CACHE_TTL_SECONDS)
+        if now - _exchange_cached_at < ttl:
+            return _exchange_cache
     load_env()
     api_key = os.getenv("BINANCE_API_KEY")
     secret = os.getenv("BINANCE_API_SECRET")
     if not api_key or not secret:
-        raise RuntimeError("缺少 BINANCE_API_KEY 或 BINANCE_API_SECRET")
+        raise RuntimeError("Missing BINANCE_API_KEY or BINANCE_API_SECRET")
     ex = ccxt.binance(
         {
             "apiKey": api_key,
@@ -29,6 +51,8 @@ def get_exchange():
         }
     )
     ex.load_markets()
+    _exchange_cache = ex
+    _exchange_cached_at = now
     return ex
@@ -38,7 +62,7 @@ def norm_symbol(symbol: str) -> str:
         return s
     if s.endswith("USDT"):
         return s[:-4] + "/USDT"
-    raise ValueError(f"不支持的 symbol: {symbol}")
+    raise ValueError(f"Unsupported symbol: {symbol}")
 
 
 def storage_symbol(symbol: str) -> str:
@@ -63,7 +87,8 @@ def build_market_snapshot(ex):
         if price is None or float(price) <= 0:
             continue
         vol = float(t.get("quoteVolume") or 0)
-        if vol < 200_000:
+        min_volume = get_user_config("exchange.min_quote_volume", 200_000)
+        if vol < min_volume:
             continue
         base = sym.replace("/", "")
         snapshot[base] = {
@@ -95,17 +120,17 @@ def prepare_buy_quantity(ex, symbol: str, amount_usdt: float):
     sym, market, ticker = market_and_ticker(ex, symbol)
     ask = float(ticker.get("ask") or ticker.get("last") or 0)
     if ask <= 0:
-        raise RuntimeError(f"{sym} 无法获取有效 ask 价格")
+        raise RuntimeError(f"No valid ask price for {sym}")
     budget = amount_usdt * (1 - USDT_BUFFER_PCT)
     raw_qty = budget / ask
     qty = float(ex.amount_to_precision(sym, raw_qty))
     min_amt = (market.get("limits", {}).get("amount", {}) or {}).get("min") or 0
     min_cost = (market.get("limits", {}).get("cost", {}) or {}).get("min") or 0
     if min_amt and qty < float(min_amt):
-        raise RuntimeError(f"{sym} 买入数量 {qty} 小于最小数量 {min_amt}")
+        raise RuntimeError(f"Buy quantity {qty} for {sym} below minimum {min_amt}")
     est_cost = qty * ask
     if min_cost and est_cost < float(min_cost):
-        raise RuntimeError(f"{sym} 买入金额 ${est_cost:.4f} 小于最小成交额 ${float(min_cost):.4f}")
+        raise RuntimeError(f"Buy cost ${est_cost:.4f} for {sym} below minimum ${float(min_cost):.4f}")
     return sym, qty, ask, est_cost
@@ -113,13 +138,13 @@ def prepare_sell_quantity(ex, symbol: str, free_qty: float):
     sym, market, ticker = market_and_ticker(ex, symbol)
     bid = float(ticker.get("bid") or ticker.get("last") or 0)
     if bid <= 0:
-        raise RuntimeError(f"{sym} 无法获取有效 bid 价格")
+        raise RuntimeError(f"No valid bid price for {sym}")
     qty = float(ex.amount_to_precision(sym, free_qty))
     min_amt = (market.get("limits", {}).get("amount", {}) or {}).get("min") or 0
     min_cost = (market.get("limits", {}).get("cost", {}) or {}).get("min") or 0
     if min_amt and qty < float(min_amt):
-        raise RuntimeError(f"{sym} 卖出数量 {qty} 小于最小数量 {min_amt}")
+        raise RuntimeError(f"Sell quantity {qty} for {sym} below minimum {min_amt}")
     est_cost = qty * bid
     if min_cost and est_cost < float(min_cost):
-        raise RuntimeError(f"{sym} 卖出金额 ${est_cost:.4f} 小于最小成交额 ${float(min_cost):.4f}")
+        raise RuntimeError(f"Sell cost ${est_cost:.4f} for {sym} below minimum ${float(min_cost):.4f}")
     return sym, qty, bid, est_cost
@@ -1,17 +1,25 @@
 """Execution state helpers (decision deduplication, executions.json)."""
 import hashlib
 
-from ..runtime import get_runtime_paths
-from .file_utils import load_json_locked, save_json_locked
-from .trade_common import bj_now_iso
+__all__ = [
+    "default_decision_id",
+    "load_executions",
+    "save_executions",
+    "record_execution_state",
+    "get_execution_state",
+]
 
-PATHS = get_runtime_paths()
-EXECUTIONS_FILE = PATHS.executions_file
-EXECUTIONS_LOCK = PATHS.executions_lock
+from ..runtime import get_runtime_paths
+from .file_utils import load_json_locked, read_modify_write_json, save_json_locked
+
+
+def _paths():
+    return get_runtime_paths()
 
 
 def default_decision_id(action: str, argv_tail: list[str]) -> str:
     from datetime import datetime
 
     from .trade_common import CST
 
     now = datetime.now(CST)
@@ -22,17 +30,24 @@ def default_decision_id(action: str, argv_tail: list[str]) -> str:
 
 
 def load_executions() -> dict:
-    return load_json_locked(EXECUTIONS_FILE, EXECUTIONS_LOCK, {"executions": {}}).get("executions", {})
+    paths = _paths()
+    data = load_json_locked(paths.executions_file, paths.executions_lock, {"executions": {}})
+    return data.get("executions", {})  # type: ignore[no-any-return]
 
 
 def save_executions(executions: dict):
-    save_json_locked(EXECUTIONS_FILE, EXECUTIONS_LOCK, {"executions": executions})
+    paths = _paths()
+    save_json_locked(paths.executions_file, paths.executions_lock, {"executions": executions})
 
 
 def record_execution_state(decision_id: str, payload: dict):
-    executions = load_executions()
-    executions[decision_id] = payload
-    save_executions(executions)
+    paths = _paths()
+    read_modify_write_json(
+        paths.executions_file,
+        paths.executions_lock,
+        {"executions": {}},
+        lambda data: data.setdefault("executions", {}).__setitem__(decision_id, payload) or data,
+    )
 
 
 def get_execution_state(decision_id: str):
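The modifier lambda in the new `record_execution_state()` leans on two Python details: `dict.__setitem__` returns `None`, so `... or data` evaluates to `data`, and `setdefault` creates the `"executions"` bucket on first use. The dispatch can be demonstrated without any file I/O:

```python
def make_modifier(decision_id: str, payload: dict):
    # Same shape as the lambda passed to read_modify_write_json():
    # mutate data in place, then hand data back via the `or` trick.
    return lambda data: data.setdefault("executions", {}).__setitem__(decision_id, payload) or data


data = {}
result = make_modifier("d-1", {"status": "done"})(data)
assert result is data
assert data == {"executions": {"d-1": {"status": "done"}}}
```

Compared with the old load/mutate/save sequence, the single-lock read-modify-write closes the window where two concurrent writers could each load, then overwrite the other's update.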
@@ -2,6 +2,14 @@
 import fcntl
 import json
 import os
+
+__all__ = [
+    "locked_file",
+    "atomic_write_json",
+    "load_json_locked",
+    "save_json_locked",
+    "read_modify_write_json",
+]
 from contextlib import contextmanager
 from pathlib import Path
@@ -9,13 +17,22 @@ from pathlib import Path
 @contextmanager
 def locked_file(path: Path):
     path.parent.mkdir(parents=True, exist_ok=True)
-    with open(path, "a+", encoding="utf-8") as f:
-        fcntl.flock(f.fileno(), fcntl.LOCK_EX)
-        f.seek(0)
-        yield f
-        f.flush()
-        os.fsync(f.fileno())
-        fcntl.flock(f.fileno(), fcntl.LOCK_UN)
+    fd = None
+    try:
+        fd = os.open(path, os.O_RDWR | os.O_CREAT)
+        fcntl.flock(fd, fcntl.LOCK_EX)
+        yield fd
+    finally:
+        if fd is not None:
+            try:
+                os.fsync(fd)
+            except Exception:
+                pass
+            try:
+                fcntl.flock(fd, fcntl.LOCK_UN)
+            except Exception:
+                pass
+            os.close(fd)
 
 
 def atomic_write_json(path: Path, data: dict):
@@ -38,3 +55,22 @@ def load_json_locked(path: Path, lock_path: Path, default):
 def save_json_locked(path: Path, lock_path: Path, data: dict):
     with locked_file(lock_path):
         atomic_write_json(path, data)
+
+
+def read_modify_write_json(path: Path, lock_path: Path, default, modifier):
+    """Atomic read-modify-write under a single file lock.
+
+    Loads JSON from *path* (or uses *default* if missing/invalid),
+    calls ``modifier(data)``, then atomically writes the result back.
+    If *modifier* returns None, the mutated *data* is written.
+    """
+    with locked_file(lock_path):
+        if path.exists():
+            try:
+                data = json.loads(path.read_text(encoding="utf-8"))
+            except Exception:
+                data = default
+        else:
+            data = default
+        result = modifier(data)
+        atomic_write_json(path, result if result is not None else data)
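The `atomic_write_json` half of these helpers (whose body is outside this hunk) conventionally works by writing to a temp file in the same directory, fsyncing, then renaming. A minimal self-contained sketch of that technique, under the assumption the real module does something similar; the fcntl writer lock from `locked_file` is omitted here:

```python
import json
import os
import tempfile
from pathlib import Path


def atomic_write_json(path: Path, data: dict) -> None:
    # Write to a sibling temp file so os.replace() stays on one filesystem,
    # then rename: readers only ever see the old or the new file, never a
    # partially written one.
    fd, tmp = tempfile.mkstemp(dir=str(path.parent), suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            json.dump(data, f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic rename on POSIX and Windows
    except BaseException:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise


with tempfile.TemporaryDirectory() as d:
    target = Path(d) / "state.json"
    atomic_write_json(target, {"positions": []})
    assert json.loads(target.read_text(encoding="utf-8")) == {"positions": []}
```

Note the rename protects readers against torn writes, but only the separate flock serializes concurrent writers; that is why `read_modify_write_json` holds the lock across both the read and the write.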
133 src/coinhunter/services/market_data.py Normal file
@@ -0,0 +1,133 @@
"""Market data fetching and metric computation for precheck."""

from __future__ import annotations

import os

import ccxt

from .data_utils import norm_symbol, to_float


def get_exchange():
    from ..runtime import load_env_file

    load_env_file()
    api_key = os.getenv("BINANCE_API_KEY")
    secret = os.getenv("BINANCE_API_SECRET")
    if not api_key or not secret:
        raise RuntimeError("Missing BINANCE_API_KEY or BINANCE_API_SECRET in ~/.hermes/.env")
    ex = ccxt.binance({
        "apiKey": api_key,
        "secret": secret,
        "options": {"defaultType": "spot"},
        "enableRateLimit": True,
    })
    ex.load_markets()
    return ex


def fetch_ohlcv_batch(ex, symbols: set, timeframe: str, limit: int):
    results = {}
    for sym in sorted(symbols):
        try:
            ohlcv = ex.fetch_ohlcv(sym, timeframe=timeframe, limit=limit)
            if ohlcv and len(ohlcv) >= 2:
                results[sym] = ohlcv
        except Exception:
            pass
    return results


def compute_ohlcv_metrics(ohlcv_1h, ohlcv_4h, current_price, volume_24h=None):
    metrics = {}
    if ohlcv_1h and len(ohlcv_1h) >= 2:
        closes = [c[4] for c in ohlcv_1h]
        volumes = [c[5] for c in ohlcv_1h]
        metrics["change_1h_pct"] = round((closes[-1] - closes[-2]) / closes[-2] * 100, 2) if closes[-2] != 0 else None
        if len(closes) >= 5:
            metrics["change_4h_pct"] = round((closes[-1] - closes[-5]) / closes[-5] * 100, 2) if closes[-5] != 0 else None
        recent_vol = sum(volumes[-4:]) / 4 if len(volumes) >= 4 else None
        metrics["volume_1h_avg"] = round(recent_vol, 2) if recent_vol else None
        highs = [c[2] for c in ohlcv_1h[-4:]]
        lows = [c[3] for c in ohlcv_1h[-4:]]
        metrics["high_4h"] = round(max(highs), 8) if highs else None
        metrics["low_4h"] = round(min(lows), 8) if lows else None

    if ohlcv_4h and len(ohlcv_4h) >= 2:
        closes_4h = [c[4] for c in ohlcv_4h]
        volumes_4h = [c[5] for c in ohlcv_4h]
        metrics["change_4h_pct_from_4h"] = round((closes_4h[-1] - closes_4h[-2]) / closes_4h[-2] * 100, 2) if closes_4h[-2] != 0 else None
        recent_vol_4h = sum(volumes_4h[-2:]) / 2 if len(volumes_4h) >= 2 else None
        metrics["volume_4h_avg"] = round(recent_vol_4h, 2) if recent_vol_4h else None
        highs_4h = [c[2] for c in ohlcv_4h]
        lows_4h = [c[3] for c in ohlcv_4h]
        metrics["high_24h_calc"] = round(max(highs_4h), 8) if highs_4h else None
        metrics["low_24h_calc"] = round(min(lows_4h), 8) if lows_4h else None
        if highs_4h and lows_4h:
            avg_price = sum(closes_4h) / len(closes_4h)
            metrics["volatility_4h_pct"] = round((max(highs_4h) - min(lows_4h)) / avg_price * 100, 2)

    if current_price:
        if metrics.get("high_4h"):
            metrics["distance_from_4h_high_pct"] = round((metrics["high_4h"] - current_price) / metrics["high_4h"] * 100, 2)
        if metrics.get("low_4h"):
            metrics["distance_from_4h_low_pct"] = round((current_price - metrics["low_4h"]) / metrics["low_4h"] * 100, 2)
        if metrics.get("high_24h_calc"):
            metrics["distance_from_24h_high_pct"] = round((metrics["high_24h_calc"] - current_price) / metrics["high_24h_calc"] * 100, 2)
        if metrics.get("low_24h_calc"):
            metrics["distance_from_24h_low_pct"] = round((current_price - metrics["low_24h_calc"]) / metrics["low_24h_calc"] * 100, 2)

    if volume_24h and volume_24h > 0 and metrics.get("volume_1h_avg"):
        daily_avg_1h = volume_24h / 24
        metrics["volume_1h_multiple"] = round(metrics["volume_1h_avg"] / daily_avg_1h, 2)
    if volume_24h and volume_24h > 0 and metrics.get("volume_4h_avg"):
        daily_avg_4h = volume_24h / 6
        metrics["volume_4h_multiple"] = round(metrics["volume_4h_avg"] / daily_avg_4h, 2)

    return metrics


def enrich_candidates_and_positions(global_candidates, candidate_layers, positions_view, tickers, ex):
    symbols = set()
    for c in global_candidates:
        symbols.add(c["symbol"])
    for p in positions_view:
        sym = p.get("symbol")
        if sym:
            sym_ccxt = norm_symbol(sym)
            symbols.add(sym_ccxt)

    ohlcv_1h = fetch_ohlcv_batch(ex, symbols, "1h", 24)
    ohlcv_4h = fetch_ohlcv_batch(ex, symbols, "4h", 12)

    def _apply(target_list):
        for item in target_list:
            sym = item.get("symbol")
            if not sym:
                continue
            sym_ccxt = norm_symbol(sym)
            v24h = to_float(tickers.get(sym_ccxt, {}).get("quoteVolume"))
            metrics = compute_ohlcv_metrics(
                ohlcv_1h.get(sym_ccxt),
                ohlcv_4h.get(sym_ccxt),
                item.get("price") or item.get("last_price"),
                volume_24h=v24h,
            )
            item["metrics"] = metrics

    _apply(global_candidates)
    for band_list in candidate_layers.values():
        _apply(band_list)
    _apply(positions_view)
    return global_candidates, candidate_layers, positions_view


def regime_from_pct(pct: float | None) -> str:
    if pct is None:
        return "unknown"
    if pct >= 2.0:
        return "bullish"
    if pct <= -2.0:
        return "bearish"
    return "neutral"
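`compute_ohlcv_metrics()` indexes ccxt OHLCV rows positionally: `[timestamp, open, high, low, close, volume]`. A two-candle toy series showing the 1h change calculation (the numbers are invented):

```python
# ccxt OHLCV row layout: [timestamp_ms, open, high, low, close, volume]
ohlcv_1h = [
    [1700000000000, 100.0, 102.0, 99.0, 100.0, 1000.0],
    [1700003600000, 100.0, 105.0, 100.0, 104.0, 1500.0],
]

closes = [c[4] for c in ohlcv_1h]
# Last close vs previous close, as a percentage.
change_1h_pct = round((closes[-1] - closes[-2]) / closes[-2] * 100, 2)
print(change_1h_pct)  # 4.0
```

The same positional convention drives every other metric in the function: index 2 for highs, 3 for lows, 5 for volumes.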
@@ -1,19 +1,47 @@
 """Portfolio state helpers (positions.json, reconcile with exchange)."""
 from ..runtime import get_runtime_paths
-from .file_utils import load_json_locked, save_json_locked
+
+__all__ = [
+    "load_positions",
+    "save_positions",
+    "update_positions",
+    "upsert_position",
+    "reconcile_positions_with_exchange",
+]
+
+from .file_utils import load_json_locked, read_modify_write_json, save_json_locked
 from .trade_common import bj_now_iso
 
-PATHS = get_runtime_paths()
-POSITIONS_FILE = PATHS.positions_file
-POSITIONS_LOCK = PATHS.positions_lock
+
+def _paths():
+    return get_runtime_paths()
 
 
 def load_positions() -> list:
-    return load_json_locked(POSITIONS_FILE, POSITIONS_LOCK, {"positions": []}).get("positions", [])
+    paths = _paths()
+    data = load_json_locked(paths.positions_file, paths.positions_lock, {"positions": []})
+    return data.get("positions", [])  # type: ignore[no-any-return]
 
 
 def save_positions(positions: list):
-    save_json_locked(POSITIONS_FILE, POSITIONS_LOCK, {"positions": positions})
+    paths = _paths()
+    save_json_locked(paths.positions_file, paths.positions_lock, {"positions": positions})
+
+
+def update_positions(modifier):
+    """Atomic read-modify-write for positions under a single lock.
+
+    *modifier* receives the current positions list and may mutate it in-place
+    or return a new list. If it returns None, the mutated list is saved.
+    """
+    paths = _paths()
+
+    def _modify(data):
+        positions = data.get("positions", [])
+        result = modifier(positions)
+        data["positions"] = result if result is not None else positions
+        return data
+
+    read_modify_write_json(paths.positions_file, paths.positions_lock, {"positions": []}, _modify)
 
 
 def upsert_position(positions: list, position: dict):
@@ -26,32 +54,43 @@ def upsert_position(positions: list, position: dict):
     return positions
 
 
-def reconcile_positions_with_exchange(ex, positions: list):
+def reconcile_positions_with_exchange(ex, positions_hint: list | None = None):
     from .exchange_service import fetch_balances
 
     balances = fetch_balances(ex)
-    existing_by_symbol = {p.get("symbol"): p for p in positions}
     reconciled = []
-    for asset, qty in balances.items():
-        if asset == "USDT":
-            continue
-        if qty <= 0:
-            continue
-        sym = f"{asset}USDT"
-        old = existing_by_symbol.get(sym, {})
-        reconciled.append(
-            {
-                "account_id": old.get("account_id", "binance-main"),
-                "symbol": sym,
-                "base_asset": asset,
-                "quote_asset": "USDT",
-                "market_type": "spot",
-                "quantity": qty,
-                "avg_cost": old.get("avg_cost"),
-                "opened_at": old.get("opened_at", bj_now_iso()),
-                "updated_at": bj_now_iso(),
-                "note": old.get("note", "Reconciled from Binance balances"),
-            }
-        )
-    save_positions(reconciled)
+
+    def _modify(data):
+        nonlocal reconciled
+        existing = data.get("positions", [])
+        existing_by_symbol = {p.get("symbol"): p for p in existing}
+        if positions_hint is not None:
+            existing_by_symbol.update({p.get("symbol"): p for p in positions_hint})
+        reconciled = []
+        for asset, qty in balances.items():
+            if asset == "USDT":
+                continue
+            if qty <= 0:
+                continue
+            sym = f"{asset}USDT"
+            old = existing_by_symbol.get(sym, {})
+            reconciled.append(
+                {
+                    "account_id": old.get("account_id", "binance-main"),
+                    "symbol": sym,
+                    "base_asset": asset,
+                    "quote_asset": "USDT",
+                    "market_type": "spot",
+                    "quantity": qty,
+                    "avg_cost": old.get("avg_cost"),
+                    "opened_at": old.get("opened_at", bj_now_iso()),
+                    "updated_at": bj_now_iso(),
+                    "note": old.get("note", "Reconciled from Binance balances"),
+                }
+            )
+        data["positions"] = reconciled
+        return data
+
+    paths = _paths()
+    read_modify_write_json(paths.positions_file, paths.positions_lock, {"positions": []}, _modify)
    return reconciled, balances
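The `update_positions()` contract above allows the modifier to either mutate the list in place and return `None`, or return a replacement list. A file-free sketch of just that dispatch logic:

```python
def apply_modifier(positions: list, modifier) -> list:
    # Mirrors update_positions(): honor a returned list, otherwise keep
    # the (possibly mutated) original.
    result = modifier(positions)
    return result if result is not None else positions


def add_btc(positions):
    # In-place mutation; implicitly returns None.
    positions.append({"symbol": "BTCUSDT"})


def drop_all(positions):
    # Returns a brand-new list instead.
    return []


state = apply_modifier([], add_btc)
assert state == [{"symbol": "BTCUSDT"}]
assert apply_modifier(state, drop_all) == []
```

Supporting both styles keeps call sites short: small tweaks can append or pop in place, while wholesale rewrites (like reconciliation) return a fresh list.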
@@ -2,16 +2,12 @@
 
 from __future__ import annotations
 
-from .. import precheck as precheck_module
-
-
-def analyze_trigger(snapshot: dict, state: dict) -> dict:
-    return precheck_module.analyze_trigger(snapshot, state)
+from .time_utils import utc_iso
 
 
 def build_failure_payload(exc: Exception) -> dict:
     return {
-        "generated_at": precheck_module.utc_iso(),
+        "generated_at": utc_iso(),
         "status": "deep_analysis_required",
         "should_analyze": True,
         "pending_trigger": True,
@@ -21,5 +17,5 @@ def build_failure_payload(exc: Exception) -> dict:
         "soft_reasons": [],
         "soft_score": 0,
         "details": [str(exc)],
-        "compact_summary": f"预检查失败,转入深度分析兜底: {exc}",
+        "compact_summary": f"Precheck failed, falling back to deep analysis: {exc}",
     }
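`build_failure_payload()` encodes a fail-open design: any precheck exception yields a payload that forces deep analysis rather than silently skipping a cycle. A minimal copy with the timestamp passed in instead of calling `utc_iso()` (so it runs standalone):

```python
def build_failure_payload(exc: Exception, generated_at: str) -> dict:
    # Trimmed to the fields that drive the fail-open behavior; the real
    # payload carries a few more bookkeeping keys.
    return {
        "generated_at": generated_at,
        "status": "deep_analysis_required",
        "should_analyze": True,
        "pending_trigger": True,
        "details": [str(exc)],
        "compact_summary": f"Precheck failed, falling back to deep analysis: {exc}",
    }


payload = build_failure_payload(RuntimeError("ticker fetch timed out"), "2024-01-01T00:00:00Z")
assert payload["should_analyze"] is True
assert "ticker fetch timed out" in payload["details"][0]
```

Defaulting to "analyze anyway" trades extra LLM/analysis cost for never missing a trigger because of a transient exchange error.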
25 src/coinhunter/services/precheck_constants.py Normal file
@@ -0,0 +1,25 @@
"""Precheck constants and thresholds."""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
from ..runtime import get_user_config
|
||||||
|
|
||||||
|
_BASE = "precheck"
|
||||||
|
|
||||||
|
BASE_PRICE_MOVE_TRIGGER_PCT = get_user_config(f"{_BASE}.base_price_move_trigger_pct", 0.025)
|
||||||
|
BASE_PNL_TRIGGER_PCT = get_user_config(f"{_BASE}.base_pnl_trigger_pct", 0.03)
|
||||||
|
BASE_PORTFOLIO_MOVE_TRIGGER_PCT = get_user_config(f"{_BASE}.base_portfolio_move_trigger_pct", 0.03)
|
||||||
|
BASE_CANDIDATE_SCORE_TRIGGER_RATIO = get_user_config(f"{_BASE}.base_candidate_score_trigger_ratio", 1.15)
|
||||||
|
BASE_FORCE_ANALYSIS_AFTER_MINUTES = get_user_config(f"{_BASE}.base_force_analysis_after_minutes", 180)
|
||||||
|
BASE_COOLDOWN_MINUTES = get_user_config(f"{_BASE}.base_cooldown_minutes", 45)
|
||||||
|
TOP_CANDIDATES = get_user_config(f"{_BASE}.top_candidates", 10)
|
||||||
|
MIN_ACTIONABLE_USDT = get_user_config(f"{_BASE}.min_actionable_usdt", 12.0)
|
||||||
|
MIN_REAL_POSITION_VALUE_USDT = get_user_config(f"{_BASE}.min_real_position_value_usdt", 8.0)
|
||||||
|
BLACKLIST = set(get_user_config(f"{_BASE}.blacklist", ["USDC", "BUSD", "TUSD", "FDUSD", "USTC", "PAXG"]))
|
||||||
|
HARD_STOP_PCT = get_user_config(f"{_BASE}.hard_stop_pct", -0.08)
|
||||||
|
HARD_MOON_PCT = get_user_config(f"{_BASE}.hard_moon_pct", 0.25)
|
||||||
|
MIN_CHANGE_PCT = get_user_config(f"{_BASE}.min_change_pct", 1.0)
|
||||||
|
MAX_PRICE_CAP = get_user_config(f"{_BASE}.max_price_cap", None)
|
||||||
|
HARD_REASON_DEDUP_MINUTES = get_user_config(f"{_BASE}.hard_reason_dedup_minutes", 15)
|
||||||
|
MAX_PENDING_TRIGGER_MINUTES = get_user_config(f"{_BASE}.max_pending_trigger_minutes", 30)
|
||||||
|
MAX_RUN_REQUEST_MINUTES = get_user_config(f"{_BASE}.max_run_request_minutes", 20)
|
||||||
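Every threshold above is read through `get_user_config(key, default)` with a dotted key like `"precheck.top_candidates"`. The real implementation lives in `..runtime` and is not shown here; a plausible sketch of such a dotted-key lookup over a nested config dict (the extra `config` parameter is for demonstration only):

```python
def get_user_config(key: str, default, config: dict):
    # Walk the nested dict one dotted segment at a time; fall back to the
    # default as soon as any segment is missing.
    node = config
    for part in key.split("."):
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node


config = {"precheck": {"top_candidates": 5}}
assert get_user_config("precheck.top_candidates", 10, config) == 5   # user override wins
assert get_user_config("precheck.min_change_pct", 1.0, config) == 1.0  # default applies
```

Because the constants are evaluated at import time, changing the user config requires a restart (or a reimport) to take effect.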
107 src/coinhunter/services/precheck_core.py Normal file
@@ -0,0 +1,107 @@
"""Backward-compatible facade for precheck internals.
|
||||||
|
|
||||||
|
The reusable implementation has been split into smaller modules:
|
||||||
|
- precheck_constants : paths and thresholds
|
||||||
|
- time_utils : UTC/local time helpers
|
||||||
|
- data_utils : json, hash, float, symbol normalization
|
||||||
|
- state_manager : load/save/sanitize state
|
||||||
|
- market_data : exchange, ohlcv, metrics
|
||||||
|
- candidate_scoring : top candidate selection
|
||||||
|
- snapshot_builder : build_snapshot
|
||||||
|
- adaptive_profile : trigger profile builder
|
||||||
|
- trigger_analyzer : analyze_trigger
|
||||||
|
|
||||||
|
Keep this module importable so older entrypoints continue to work.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from __future__ import annotations
|
||||||
|
|
||||||
|
from importlib import import_module
|
||||||
|
|
||||||
|
from ..runtime import get_runtime_paths
|
||||||
|
|
||||||
|
_PATH_ALIASES = {
|
||||||
|
"PATHS": lambda: get_runtime_paths(),
|
||||||
|
"BASE_DIR": lambda: get_runtime_paths().root,
|
||||||
|
"STATE_DIR": lambda: get_runtime_paths().state_dir,
|
||||||
|
"STATE_FILE": lambda: get_runtime_paths().precheck_state_file,
|
||||||
|
"POSITIONS_FILE": lambda: get_runtime_paths().positions_file,
|
||||||
|
"CONFIG_FILE": lambda: get_runtime_paths().config_file,
|
||||||
|
"ENV_FILE": lambda: get_runtime_paths().env_file,
|
||||||
|
}
|
||||||
|
|
||||||
|
_MODULE_MAP = {
|
||||||
|
"BASE_PRICE_MOVE_TRIGGER_PCT": ".precheck_constants",
|
||||||
|
"BASE_PNL_TRIGGER_PCT": ".precheck_constants",
|
||||||
|
"BASE_PORTFOLIO_MOVE_TRIGGER_PCT": ".precheck_constants",
|
||||||
|
"BASE_CANDIDATE_SCORE_TRIGGER_RATIO": ".precheck_constants",
|
||||||
|
"BASE_FORCE_ANALYSIS_AFTER_MINUTES": ".precheck_constants",
|
||||||
|
"BASE_COOLDOWN_MINUTES": ".precheck_constants",
|
||||||
|
"TOP_CANDIDATES": ".precheck_constants",
|
||||||
|
"MIN_ACTIONABLE_USDT": ".precheck_constants",
|
||||||
|
"MIN_REAL_POSITION_VALUE_USDT": ".precheck_constants",
|
||||||
|
"BLACKLIST": ".precheck_constants",
|
||||||
|
"HARD_STOP_PCT": ".precheck_constants",
|
||||||
|
"HARD_MOON_PCT": ".precheck_constants",
|
||||||
|
"MIN_CHANGE_PCT": ".precheck_constants",
|
||||||
|
"MAX_PRICE_CAP": ".precheck_constants",
|
||||||
|
"HARD_REASON_DEDUP_MINUTES": ".precheck_constants",
|
||||||
|
"MAX_PENDING_TRIGGER_MINUTES": ".precheck_constants",
|
||||||
|
"MAX_RUN_REQUEST_MINUTES": ".precheck_constants",
|
||||||
|
"utc_now": ".time_utils",
|
||||||
|
"utc_iso": ".time_utils",
|
||||||
|
"parse_ts": ".time_utils",
|
||||||
|
"get_local_now": ".time_utils",
|
||||||
|
"session_label": ".time_utils",
|
||||||
|
"load_json": ".data_utils",
|
||||||
|
"stable_hash": ".data_utils",
|
||||||
|
"to_float": ".data_utils",
|
||||||
|
"norm_symbol": ".data_utils",
|
||||||
|
"load_env": ".state_manager",
|
||||||
|
"load_positions": ".state_manager",
|
||||||
|
"load_state": ".state_manager",
|
||||||
|
"load_config": ".state_manager",
|
||||||
|
"clear_run_request_fields": ".state_manager",
|
||||||
|
"sanitize_state_for_stale_triggers": ".state_manager",
|
||||||
|
"save_state": ".state_manager",
|
||||||
|
"update_state_after_observation": ".state_manager",
|
||||||
|
"get_exchange": ".market_data",
|
||||||
|
"fetch_ohlcv_batch": ".market_data",
|
||||||
|
"compute_ohlcv_metrics": ".market_data",
|
||||||
|
"enrich_candidates_and_positions": ".market_data",
|
||||||
|
"regime_from_pct": ".market_data",
|
||||||
|
"_liquidity_score": ".candidate_scoring",
|
||||||
|
"_breakout_score": ".candidate_scoring",
|
||||||
|
"top_candidates_from_tickers": ".candidate_scoring",
|
||||||
|
"build_snapshot": ".snapshot_builder",
|
||||||
|
"build_adaptive_profile": ".adaptive_profile",
|
||||||
|
"_candidate_weight": ".adaptive_profile",
|
||||||
|
"analyze_trigger": ".trigger_analyzer",
|
||||||
|
}
|
||||||
|
|
||||||
|
__all__ = sorted(set(_MODULE_MAP) | set(_PATH_ALIASES) | {"main"})
|
||||||
|
|
||||||
|
|
||||||
|
def __getattr__(name: str):
|
||||||
|
if name in _PATH_ALIASES:
|
||||||
|
return _PATH_ALIASES[name]()
|
||||||
|
if name not in _MODULE_MAP:
|
||||||
|
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
|
||||||
|
module_name = _MODULE_MAP[name]
|
||||||
|
module = import_module(module_name, __package__)
|
||||||
|
return getattr(module, name)
|
||||||
|
|
||||||
|
|
||||||
|
def __dir__():
|
||||||
|
return sorted(set(globals()) | set(__all__))
|
||||||
|
|
||||||
|
|
||||||
|
def main():
|
||||||
|
import sys
|
||||||
|
|
||||||
|
from .precheck_service import run as _run_service
|
||||||
|
return _run_service(sys.argv[1:])
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
raise SystemExit(main())
|
||||||
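The facade above resolves every name in `_MODULE_MAP` lazily through a module-level `__getattr__` (PEP 562), so importing the facade does not import any of its sibling modules until a name is actually touched. A minimal self-contained sketch of the same pattern; the mapping here targets stdlib modules purely for illustration, whereas the real module maps names to relative siblings like `".precheck_constants"`:

```python
# Minimal sketch of the lazy re-export pattern used by the facade above.
# The mapping targets stdlib modules purely for illustration.
from importlib import import_module

_MODULE_MAP = {
    "sqrt": "math",
    "dumps": "json",
}


def module_getattr(name: str):
    # In the real facade this function is the module-level __getattr__
    # (PEP 562), so the target module is imported only on first access.
    if name not in _MODULE_MAP:
        raise AttributeError(f"module has no attribute {name!r}")
    module = import_module(_MODULE_MAP[name])
    return getattr(module, name)


print(module_getattr("sqrt")(9.0))  # → 3.0
```

Unknown names raise `AttributeError`, which keeps `hasattr()` and tooling that probes attributes behaving normally.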
@@ -3,6 +3,8 @@
 from __future__ import annotations
 
 import json
+
+__all__ = ["run"]
 import sys
 
 from . import precheck_analysis, precheck_snapshot, precheck_state

@@ -19,12 +21,24 @@ def run(argv: list[str] | None = None) -> int:
         return 0
 
     try:
-        state = precheck_state.sanitize_state_for_stale_triggers(precheck_state.load_state())
-        snapshot = precheck_snapshot.build_snapshot()
-        analysis = precheck_analysis.analyze_trigger(snapshot, state)
-        precheck_state.save_state(precheck_state.update_state_after_observation(state, snapshot, analysis))
-        print(json.dumps(analysis, ensure_ascii=False, indent=2))
+        captured = {}
+
+        def _modifier(state):
+            state = precheck_state.sanitize_state_for_stale_triggers(state)
+            snapshot = precheck_snapshot.build_snapshot()
+            analysis = precheck_analysis.analyze_trigger(snapshot, state)
+            new_state = precheck_state.update_state_after_observation(state, snapshot, analysis)
+            state.clear()
+            state.update(new_state)
+            captured["analysis"] = analysis
+            return state
+
+        precheck_state.modify_state(_modifier)
+        result = {"ok": True, **captured["analysis"]}
+        print(json.dumps(result, ensure_ascii=False, indent=2))
         return 0
     except Exception as exc:
-        print(json.dumps(precheck_analysis.build_failure_payload(exc), ensure_ascii=False, indent=2))
-        return 0
+        payload = precheck_analysis.build_failure_payload(exc)
+        result = {"ok": False, **payload}
+        print(json.dumps(result, ensure_ascii=False, indent=2))
+        return 1
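The rewritten `run` funnels all state mutation through `precheck_state.modify_state`, which applies a modifier callback inside a single read-modify-write cycle instead of separate `load_state`/`save_state` calls. A hedged sketch of what such a helper might look like; the real implementation lives in `state_manager` and presumably adds locking, and `STATE_PATH` here is a hypothetical temp-file stand-in:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical state file; the real path comes from the runtime config.
STATE_PATH = Path(tempfile.mkdtemp()) / "state.json"


def load_state() -> dict:
    if STATE_PATH.exists():
        return json.loads(STATE_PATH.read_text(encoding="utf-8"))
    return {}


def modify_state(modifier) -> dict:
    # Single read-modify-write cycle: load, apply the callback, write back
    # atomically via rename. The real helper presumably also takes a lock.
    state = modifier(load_state())
    tmp = STATE_PATH.with_suffix(".tmp")
    tmp.write_text(json.dumps(state), encoding="utf-8")
    tmp.replace(STATE_PATH)
    return state


def _modifier(state: dict) -> dict:
    state["pending_trigger"] = False
    return state


modify_state(_modifier)
print(load_state())  # → {'pending_trigger': False}
```

Centralizing the cycle in one helper is what makes the `state.clear()` / `state.update(new_state)` dance in `_modifier` above safe: the dict object the helper hands out is the one it persists.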
@@ -2,8 +2,3 @@
 
 from __future__ import annotations
 
-from .. import precheck as precheck_module
-
-
-def build_snapshot() -> dict:
-    return precheck_module.build_snapshot()
@@ -4,44 +4,38 @@ from __future__ import annotations
 
 import json
 
-from .. import precheck as precheck_module
-
-
-def load_state() -> dict:
-    return precheck_module.load_state()
-
-
-def save_state(state: dict) -> None:
-    precheck_module.save_state(state)
-
-
-def sanitize_state_for_stale_triggers(state: dict) -> dict:
-    return precheck_module.sanitize_state_for_stale_triggers(state)
-
-
-def update_state_after_observation(state: dict, snapshot: dict, analysis: dict) -> dict:
-    return precheck_module.update_state_after_observation(state, snapshot, analysis)
+from .state_manager import (
+    load_state,
+    modify_state,
+)
+from .time_utils import utc_iso
 
 
 def mark_run_requested(note: str = "") -> dict:
+    def _modifier(state):
+        state["run_requested_at"] = utc_iso()
+        state["run_request_note"] = note
+        return state
+
+    modify_state(_modifier)
     state = load_state()
-    state["run_requested_at"] = precheck_module.utc_iso()
-    state["run_request_note"] = note
-    save_state(state)
     payload = {"ok": True, "run_requested_at": state["run_requested_at"], "note": note}
     print(json.dumps(payload, ensure_ascii=False))
     return payload
 
 
 def ack_analysis(note: str = "") -> dict:
+    def _modifier(state):
+        state["last_deep_analysis_at"] = utc_iso()
+        state["pending_trigger"] = False
+        state["pending_reasons"] = []
+        state["last_ack_note"] = note
+        state.pop("run_requested_at", None)
+        state.pop("run_request_note", None)
+        return state
+
+    modify_state(_modifier)
     state = load_state()
-    state["last_deep_analysis_at"] = precheck_module.utc_iso()
-    state["pending_trigger"] = False
-    state["pending_reasons"] = []
-    state["last_ack_note"] = note
-    state.pop("run_requested_at", None)
-    state.pop("run_request_note", None)
-    save_state(state)
     payload = {"ok": True, "acked_at": state["last_deep_analysis_at"], "note": note}
     print(json.dumps(payload, ensure_ascii=False))
     return payload
292	src/coinhunter/services/review_service.py	Normal file
@@ -0,0 +1,292 @@
"""Review generation service for CoinHunter."""

from __future__ import annotations

import json

__all__ = [
    "ensure_review_dir",
    "norm_symbol",
    "fetch_current_price",
    "analyze_trade",
    "analyze_hold_passes",
    "analyze_cash_misses",
    "generate_review",
    "save_review",
    "print_review",
]

from datetime import datetime, timedelta, timezone
from pathlib import Path

from ..logger import get_logs_last_n_hours
from ..runtime import get_runtime_paths
from .exchange_service import get_exchange

CST = timezone(timedelta(hours=8))


def _paths():
    return get_runtime_paths()


def _review_dir() -> Path:
    return _paths().reviews_dir  # type: ignore[no-any-return]


def _exchange():
    return get_exchange()


def ensure_review_dir():
    _review_dir().mkdir(parents=True, exist_ok=True)


def norm_symbol(symbol: str) -> str:
    s = symbol.upper().replace("-", "").replace("_", "")
    if "/" in s:
        return s
    if s.endswith("USDT"):
        return s[:-4] + "/USDT"
    return s


def fetch_current_price(ex, symbol: str):
    try:
        return float(ex.fetch_ticker(norm_symbol(symbol))["last"])
    except Exception:
        return None


def analyze_trade(trade: dict, ex) -> dict:
    symbol = trade.get("symbol")
    price = trade.get("price")
    action = trade.get("action", "")
    current_price = fetch_current_price(ex, symbol) if symbol else None
    pnl_estimate = None
    outcome = "neutral"
    if price and current_price and symbol:
        change_pct = (current_price - float(price)) / float(price) * 100
        if action == "BUY":
            pnl_estimate = round(change_pct, 2)
            outcome = "good" if change_pct > 2 else "bad" if change_pct < -2 else "neutral"
        elif action == "SELL_ALL":
            pnl_estimate = round(-change_pct, 2)
            outcome = "good" if change_pct < -2 else "missed" if change_pct > 2 else "neutral"
    return {
        "timestamp": trade.get("timestamp"),
        "symbol": symbol,
        "action": action,
        "decision_id": trade.get("decision_id"),
        "execution_price": price,
        "current_price": current_price,
        "pnl_estimate_pct": pnl_estimate,
        "outcome_assessment": outcome,
    }


def analyze_hold_passes(decisions: list, ex) -> list:
    misses = []
    for d in decisions:
        if d.get("decision") != "HOLD":
            continue
        analysis = d.get("analysis")
        if not isinstance(analysis, dict):
            continue
        opportunities = analysis.get("opportunities_evaluated", [])
        market_snapshot = d.get("market_snapshot", {})
        if not opportunities or not market_snapshot:
            continue
        for op in opportunities:
            verdict = op.get("verdict", "")
            if "PASS" not in verdict and "pass" not in verdict:
                continue
            symbol = op.get("symbol", "")
            snap = market_snapshot.get(symbol) or market_snapshot.get(symbol.replace("/", ""))
            if not snap:
                continue
            decision_price = None
            if isinstance(snap, dict):
                decision_price = float(snap.get("lastPrice", 0)) or float(snap.get("last", 0))
            elif isinstance(snap, (int, float, str)):
                decision_price = float(snap)
            if not decision_price:
                continue
            current_price = fetch_current_price(ex, symbol)
            if not current_price:
                continue
            change_pct = (current_price - decision_price) / decision_price * 100
            if change_pct > 3:
                misses.append({
                    "timestamp": d.get("timestamp"),
                    "symbol": symbol,
                    "decision_price": round(decision_price, 8),
                    "current_price": round(current_price, 8),
                    "change_pct": round(change_pct, 2),
                    "verdict_snippet": verdict[:80],
                })
    return misses


def analyze_cash_misses(decisions: list, ex) -> list:
    misses = []
    watchlist = set()
    for d in decisions:
        snap = d.get("market_snapshot", {})
        if isinstance(snap, dict):
            for k in snap.keys():
                if k.endswith("USDT"):
                    watchlist.add(k)
    for d in decisions:
        ts = d.get("timestamp")
        balances = d.get("balances") or d.get("balances_before", {})
        if not balances:
            continue
        total = sum(float(v) if isinstance(v, (int, float, str)) else 0 for v in balances.values())
        usdt = float(balances.get("USDT", 0))
        if total == 0 or (usdt / total) < 0.9:
            continue
        snap = d.get("market_snapshot", {})
        if not isinstance(snap, dict):
            continue
        for symbol, data in snap.items():
            if not symbol.endswith("USDT"):
                continue
            decision_price = None
            if isinstance(data, dict):
                decision_price = float(data.get("lastPrice", 0)) or float(data.get("last", 0))
            elif isinstance(data, (int, float, str)):
                decision_price = float(data)
            if not decision_price:
                continue
            current_price = fetch_current_price(ex, symbol)
            if not current_price:
                continue
            change_pct = (current_price - decision_price) / decision_price * 100
            if change_pct > 5:
                misses.append({
                    "timestamp": ts,
                    "symbol": symbol,
                    "decision_price": round(decision_price, 8),
                    "current_price": round(current_price, 8),
                    "change_pct": round(change_pct, 2),
                })
    seen: dict[str, dict] = {}
    for m in misses:
        sym = m["symbol"]
        if sym not in seen or m["change_pct"] > seen[sym]["change_pct"]:
            seen[sym] = m
    return list(seen.values())


def generate_review(hours: int = 1) -> dict:
    decisions = get_logs_last_n_hours("decisions", hours)
    trades = get_logs_last_n_hours("trades", hours)
    errors = get_logs_last_n_hours("errors", hours)

    review: dict = {
        "review_period_hours": hours,
        "review_timestamp": datetime.now(CST).isoformat(),
        "total_decisions": len(decisions),
        "total_trades": len(trades),
        "total_errors": len(errors),
        "decision_quality": [],
        "stats": {},
        "insights": [],
        "recommendations": [],
    }

    if not decisions and not trades:
        review["insights"].append("No decisions or trades in this period")
        return review

    ex = get_exchange()
    outcomes = {"good": 0, "neutral": 0, "bad": 0, "missed": 0}
    pnl_samples = []

    for trade in trades:
        analysis = analyze_trade(trade, ex)
        review["decision_quality"].append(analysis)
        outcomes[analysis["outcome_assessment"]] += 1
        if analysis["pnl_estimate_pct"] is not None:
            pnl_samples.append(analysis["pnl_estimate_pct"])

    hold_pass_misses = analyze_hold_passes(decisions, ex)
    cash_misses = analyze_cash_misses(decisions, ex)
    total_missed = outcomes["missed"] + len(hold_pass_misses) + len(cash_misses)

    review["stats"] = {
        "good_decisions": outcomes["good"],
        "neutral_decisions": outcomes["neutral"],
        "bad_decisions": outcomes["bad"],
        "missed_opportunities": total_missed,
        "missed_sell_all": outcomes["missed"],
        "missed_hold_passes": len(hold_pass_misses),
        "missed_cash_sits": len(cash_misses),
        "avg_estimated_edge_pct": round(sum(pnl_samples) / len(pnl_samples), 2) if pnl_samples else None,
    }

    if errors:
        review["insights"].append(f"{len(errors)} execution/system errors this period; robustness needs attention")
    if outcomes["bad"] > outcomes["good"]:
        review["insights"].append("Recent trade quality is weak; consider lowering frequency or raising entry threshold")
    if total_missed > 0:
        parts = []
        if outcomes["missed"]:
            parts.append(f"sold then rallied {outcomes['missed']} times")
        if hold_pass_misses:
            parts.append(f"missed after PASS {len(hold_pass_misses)} times")
        if cash_misses:
            parts.append(f"missed while sitting in cash {len(cash_misses)} times")
        review["insights"].append("Opportunities missed: " + ", ".join(parts) + "; consider relaxing trend-following or entry conditions")
    if outcomes["good"] >= max(1, outcomes["bad"] + total_missed):
        review["insights"].append("Recent decisions are generally acceptable")
    if not trades and decisions:
        review["insights"].append("Decisions without trades; may be due to waiting on sidelines, minimum notional limits, or execution interception")
    if len(trades) < len(decisions) * 0.1 and decisions:
        review["insights"].append("Many decisions did not convert to trades; check if minimum notional/step-size/fee buffer thresholds are too high")
    if hold_pass_misses:
        for m in hold_pass_misses[:3]:
            review["insights"].append(f"PASS'd {m['symbol']} during HOLD, then it rose {m['change_pct']}%")
    if cash_misses:
        for m in cash_misses[:3]:
            review["insights"].append(f"{m['symbol']} rose {m['change_pct']}% while portfolio was mostly USDT")

    review["recommendations"] = [
        "Check whether minimum-notional/precision rejections are blocking small-capital execution",
        "If estimated edge is negative for two consecutive review periods, reduce rebalancing frequency next hour",
        "If error logs are increasing, prioritize defensive mode (hold more USDT)",
    ]
    return review


def save_review(review: dict) -> str:
    ensure_review_dir()
    ts = datetime.now(CST).strftime("%Y%m%d_%H%M%S")
    path = _review_dir() / f"review_{ts}.json"
    path.write_text(json.dumps(review, indent=2, ensure_ascii=False), encoding="utf-8")
    return str(path)


def print_review(review: dict):
    print("=" * 50)
    print("📊 Coin Hunter Review Report")
    print(f"Review time: {review['review_timestamp']}")
    print(f"Period: last {review['review_period_hours']} hours")
    print(f"Total decisions: {review['total_decisions']} | Total trades: {review['total_trades']} | Total errors: {review['total_errors']}")
    stats = review.get("stats", {})
    print("\nDecision quality:")
    print(f"  ✓ Good: {stats.get('good_decisions', 0)}")
    print(f"  ○ Neutral: {stats.get('neutral_decisions', 0)}")
    print(f"  ✗ Bad: {stats.get('bad_decisions', 0)}")
    print(f"  ↗ Missed opportunities: {stats.get('missed_opportunities', 0)}")
    if stats.get("avg_estimated_edge_pct") is not None:
        print(f"  Avg estimated edge: {stats['avg_estimated_edge_pct']}%")
    if review.get("insights"):
        print("\n💡 Insights:")
        for item in review["insights"]:
            print(f"  • {item}")
    if review.get("recommendations"):
        print("\n🔧 Recommendations:")
        for item in review["recommendations"]:
            print(f"  • {item}")
    print("=" * 50)
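The `analyze_trade` function in the new file grades each trade by how the price moved after execution, with a ±2% neutral band: a BUY that is up more than 2% is good and down more than 2% is bad, while a SELL_ALL is mirrored, so a rally of more than 2% after selling counts as a missed opportunity. The two branches in isolation:

```python
def classify_buy(change_pct: float) -> str:
    # Mirrors the BUY branch of analyze_trade: graded by the price change
    # since execution, with a ±2% neutral band.
    return "good" if change_pct > 2 else "bad" if change_pct < -2 else "neutral"


def classify_sell_all(change_pct: float) -> str:
    # Mirrors the SELL_ALL branch: a drop after selling is good, while a
    # rally above +2% counts as a missed opportunity.
    return "good" if change_pct < -2 else "missed" if change_pct > 2 else "neutral"


print(classify_buy(3.5), classify_sell_all(3.5))  # → good missed
```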
@@ -1,48 +1,136 @@
 """CLI parser and legacy argument normalization for smart executor."""
 import argparse
 
+__all__ = [
+    "COMMAND_CANONICAL",
+    "add_shared_options",
+    "build_parser",
+    "normalize_legacy_argv",
+    "parse_cli_args",
+    "cli_action_args",
+]
+
+
+COMMAND_CANONICAL = {
+    "bal": "balances",
+    "balances": "balances",
+    "balance": "balances",
+    "acct": "status",
+    "overview": "status",
+    "status": "status",
+    "hold": "hold",
+    "buy": "buy",
+    "flat": "sell-all",
+    "sell-all": "sell-all",
+    "sell_all": "sell-all",
+    "rotate": "rebalance",
+    "rebalance": "rebalance",
+    "orders": "orders",
+    "cancel": "cancel",
+    "order-status": "order-status",
+    "order_status": "order-status",
+}
+
+
+def add_shared_options(parser: argparse.ArgumentParser) -> None:
+    parser.add_argument("--decision-id", help="Override the decision ID; otherwise one is derived automatically")
+    parser.add_argument("--analysis", help="Persist analysis text with the execution record")
+    parser.add_argument("--reasoning", help="Persist reasoning text with the execution record")
+    parser.add_argument("--dry-run", action="store_true", help="Simulate the command without placing live orders")
+
+
 def build_parser() -> argparse.ArgumentParser:
+    shared = argparse.ArgumentParser(add_help=False)
+    add_shared_options(shared)
     parser = argparse.ArgumentParser(
-        description="Coin Hunter Smart Executor",
+        prog="coinhunter exec",
+        description="Professional execution console for account inspection and spot trading workflows",
         formatter_class=argparse.RawTextHelpFormatter,
+        parents=[shared],
         epilog=(
-            "示例:\n"
-            "  python smart_executor.py hold\n"
-            "  python smart_executor.py sell-all ETHUSDT\n"
-            "  python smart_executor.py buy ENJUSDT 100\n"
-            "  python smart_executor.py rebalance PEPEUSDT ETHUSDT\n"
-            "  python smart_executor.py balances\n\n"
-            "兼容旧调用:\n"
-            "  python smart_executor.py HOLD\n"
-            "  python smart_executor.py --decision HOLD --dry-run\n"
+            "Preferred verbs:\n"
+            "  bal                           Print live balances as stable JSON\n"
+            "  overview                      Print balances, positions, and market snapshot as stable JSON\n"
+            "  hold                          Record a hold decision without trading\n"
+            "  buy SYMBOL USDT               Buy a symbol using a USDT notional amount\n"
+            "  flat SYMBOL                   Exit an entire symbol position\n"
+            "  rotate FROM TO                Rotate exposure from one symbol into another\n"
+            "  orders                        List open spot orders\n"
+            "  order-status SYMBOL ORDER_ID  Get status of a specific order\n"
+            "  cancel SYMBOL [ORDER_ID]      Cancel an open order (cancels newest if ORDER_ID omitted)\n\n"
+            "Examples:\n"
+            "  coinhunter exec bal\n"
+            "  coinhunter exec overview\n"
+            "  coinhunter exec hold\n"
+            "  coinhunter exec buy ENJUSDT 100\n"
+            "  coinhunter exec flat ENJUSDT --dry-run\n"
+            "  coinhunter exec rotate PEPEUSDT ETHUSDT\n"
+            "  coinhunter exec orders\n"
+            "  coinhunter exec order-status ENJUSDT 123456\n"
+            "  coinhunter exec cancel ENJUSDT 123456\n\n"
+            "Legacy forms remain supported for backward compatibility:\n"
+            "  balances, balance -> bal\n"
+            "  acct, status -> overview\n"
+            "  sell-all, sell_all -> flat\n"
+            "  rebalance -> rotate\n"
+            "  order_status -> order-status\n"
+            "  HOLD / BUY / SELL_ALL / REBALANCE via --decision are still accepted\n"
         ),
     )
-    parser.add_argument("--decision-id", help="Override decision id (otherwise derived automatically)")
-    parser.add_argument("--analysis", help="Decision analysis text to persist into logs")
-    parser.add_argument("--reasoning", help="Decision reasoning text to persist into logs")
-    parser.add_argument("--dry-run", action="store_true", help="Force dry-run mode for this invocation")
-
-    subparsers = parser.add_subparsers(dest="command")
-
-    subparsers.add_parser("hold", help="Log a HOLD decision without trading")
-    subparsers.add_parser("balances", help="Print live balances as JSON")
-    subparsers.add_parser("balance", help="Alias of balances")
-    subparsers.add_parser("status", help="Print balances + positions + snapshot as JSON")
-
-    sell_all = subparsers.add_parser("sell-all", help="Sell all of one symbol")
-    sell_all.add_argument("symbol")
-    sell_all_legacy = subparsers.add_parser("sell_all", help=argparse.SUPPRESS)
-    sell_all_legacy.add_argument("symbol")
-
-    buy = subparsers.add_parser("buy", help="Buy symbol with USDT amount")
+    subparsers = parser.add_subparsers(
+        dest="command",
+        metavar="{bal,overview,hold,buy,flat,rotate,orders,order-status,cancel,...}",
+    )
+
+    subparsers.add_parser("bal", parents=[shared], help="Preferred: print live balances as stable JSON")
+    subparsers.add_parser("overview", parents=[shared], help="Preferred: print the account overview as stable JSON")
+    subparsers.add_parser("hold", parents=[shared], help="Preferred: record a hold decision without trading")
+
+    buy = subparsers.add_parser("buy", parents=[shared], help="Preferred: buy a symbol with a USDT notional amount")
     buy.add_argument("symbol")
     buy.add_argument("amount_usdt", type=float)
 
-    rebalance = subparsers.add_parser("rebalance", help="Sell one symbol and rotate to another")
+    flat = subparsers.add_parser("flat", parents=[shared], help="Preferred: exit an entire symbol position")
+    flat.add_argument("symbol")
+
+    rebalance = subparsers.add_parser("rotate", parents=[shared], help="Preferred: rotate exposure from one symbol into another")
     rebalance.add_argument("from_symbol")
     rebalance.add_argument("to_symbol")
 
+    subparsers.add_parser("orders", parents=[shared], help="List open spot orders")
+
+    order_status = subparsers.add_parser("order-status", parents=[shared], help="Get status of a specific order")
+    order_status.add_argument("symbol")
+    order_status.add_argument("order_id")
+
+    cancel = subparsers.add_parser("cancel", parents=[shared], help="Cancel an open order")
+    cancel.add_argument("symbol")
+    cancel.add_argument("order_id", nargs="?")
+
+    subparsers.add_parser("balances", parents=[shared], help=argparse.SUPPRESS)
+    subparsers.add_parser("balance", parents=[shared], help=argparse.SUPPRESS)
+    subparsers.add_parser("acct", parents=[shared], help=argparse.SUPPRESS)
+    subparsers.add_parser("status", parents=[shared], help=argparse.SUPPRESS)
+
+    sell_all = subparsers.add_parser("sell-all", parents=[shared], help=argparse.SUPPRESS)
+    sell_all.add_argument("symbol")
+    sell_all_legacy = subparsers.add_parser("sell_all", parents=[shared], help=argparse.SUPPRESS)
+    sell_all_legacy.add_argument("symbol")
+
+    rebalance_legacy = subparsers.add_parser("rebalance", parents=[shared], help=argparse.SUPPRESS)
+    rebalance_legacy.add_argument("from_symbol")
+    rebalance_legacy.add_argument("to_symbol")
+
+    order_status_legacy = subparsers.add_parser("order_status", parents=[shared], help=argparse.SUPPRESS)
+    order_status_legacy.add_argument("symbol")
+    order_status_legacy.add_argument("order_id")
+
+    subparsers._choices_actions = [
+        action
+        for action in subparsers._choices_actions
+        if action.help != argparse.SUPPRESS
+    ]
+
     return parser

@@ -53,6 +141,11 @@ def normalize_legacy_argv(argv: list[str]) -> list[str]:
     action_aliases = {
         "HOLD": ["hold"],
         "hold": ["hold"],
+        "bal": ["balances"],
+        "acct": ["status"],
+        "overview": ["status"],
+        "flat": ["sell-all"],
+        "rotate": ["rebalance"],
         "SELL_ALL": ["sell-all"],
         "sell_all": ["sell-all"],
         "sell-all": ["sell-all"],

@@ -66,6 +159,14 @@ def normalize_legacy_argv(argv: list[str]) -> list[str]:
         "balances": ["balances"],
         "STATUS": ["status"],
         "status": ["status"],
+        "OVERVIEW": ["status"],
+        "ORDERS": ["orders"],
+        "orders": ["orders"],
+        "CANCEL": ["cancel"],
+        "cancel": ["cancel"],
+        "ORDER_STATUS": ["order-status"],
+        "order_status": ["order-status"],
+        "order-status": ["order-status"],
     }
 
     has_legacy_flag = any(t.startswith("--decision") for t in argv)

@@ -105,18 +206,18 @@ def normalize_legacy_argv(argv: list[str]) -> list[str]:
         rebuilt += ["hold"]
     elif decision == "SELL_ALL":
         if not ns.symbol:
-            raise RuntimeError("旧式 --decision SELL_ALL 需要搭配 --symbol")
+            raise RuntimeError("Legacy --decision SELL_ALL requires --symbol")
         rebuilt += ["sell-all", ns.symbol]
     elif decision == "BUY":
         if not ns.symbol or ns.amount_usdt is None:
-            raise RuntimeError("旧式 --decision BUY 需要 --symbol 和 --amount-usdt")
+            raise RuntimeError("Legacy --decision BUY requires --symbol and --amount-usdt")
         rebuilt += ["buy", ns.symbol, str(ns.amount_usdt)]
     elif decision == "REBALANCE":
         if not ns.from_symbol or not ns.to_symbol:
-            raise RuntimeError("旧式 --decision REBALANCE 需要 --from-symbol 和 --to-symbol")
+            raise RuntimeError("Legacy --decision REBALANCE requires --from-symbol and --to-symbol")
         rebuilt += ["rebalance", ns.from_symbol, ns.to_symbol]
     else:
-        raise RuntimeError(f"不支持的旧式 decision: {decision}")
+        raise RuntimeError(f"Unsupported legacy decision: {decision}")
 
     return rebuilt + unknown

@@ -130,8 +231,7 @@ def parse_cli_args(argv: list[str]):
     if not args.command:
         parser.print_help()
         raise SystemExit(1)
-    if args.command == "sell_all":
-        args.command = "sell-all"
+    args.command = COMMAND_CANONICAL.get(args.command, args.command)
     return args, normalized

@@ -142,4 +242,8 @@ def cli_action_args(args, action: str) -> list[str]:
         return [args.symbol, str(args.amount_usdt)]
     if action == "rebalance":
         return [args.from_symbol, args.to_symbol]
+    if action == "order_status":
+        return [args.symbol, args.order_id]
+    if action == "cancel":
+        return [args.symbol, args.order_id] if args.order_id else [args.symbol]
     return []
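The rewritten parser threads the bookkeeping flags through every subcommand with argparse's parent-parser mechanism: a `shared` parser is built with `add_help=False`, then passed as `parents=[shared]` both to the top-level parser and to each `add_parser` call, so every verb accepts `--dry-run`, `--decision-id`, and friends without redeclaring them. A minimal self-contained sketch with demo names only:

```python
import argparse

# Sketch of the shared-options pattern: a help-less parent parser donates
# the same flags to the top-level parser and to every subcommand.
shared = argparse.ArgumentParser(add_help=False)
shared.add_argument("--dry-run", action="store_true")

parser = argparse.ArgumentParser(prog="demo", parents=[shared])
sub = parser.add_subparsers(dest="command")
buy = sub.add_parser("buy", parents=[shared])
buy.add_argument("symbol")

# The subcommand accepts the shared flag without redeclaring it.
args = parser.parse_args(["buy", "ENJUSDT", "--dry-run"])
print(args.command, args.symbol, args.dry_run)  # → buy ENJUSDT True
```

`add_help=False` on the parent is essential: without it, the parent's `-h` would collide with the one each child parser adds itself.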
@@ -3,21 +3,27 @@
 from __future__ import annotations
 
 import os
 
+__all__ = ["run"]
 import sys
 
 from ..logger import log_decision, log_error
-from .exchange_service import fetch_balances, build_market_snapshot
+from .exchange_service import build_market_snapshot, fetch_balances
 from .execution_state import default_decision_id, get_execution_state, record_execution_state
 from .portfolio_service import load_positions
-from .smart_executor_parser import parse_cli_args, cli_action_args
+from .smart_executor_parser import cli_action_args, parse_cli_args
-from .trade_common import is_dry_run, log, set_dry_run, bj_now_iso
+from .trade_common import bj_now_iso, log, set_dry_run
 from .trade_execution import (
-    command_balances,
-    command_status,
-    build_decision_context,
-    action_sell_all,
     action_buy,
     action_rebalance,
+    action_sell_all,
+    build_decision_context,
+    command_balances,
+    command_cancel,
+    command_order_status,
+    command_orders,
+    command_status,
+    print_json,
 )
 
 
@@ -36,9 +42,9 @@ def run(argv: list[str] | None = None) -> int:
         set_dry_run(True)
 
     previous = get_execution_state(decision_id)
-    read_only_action = action in {"balance", "balances", "status"}
+    read_only_action = action in {"balance", "balances", "status", "orders", "order_status", "cancel"}
     if previous and previous.get("status") == "success" and not read_only_action:
-        log(f"⚠️ decision_id={decision_id} 已执行成功,跳过重复执行")
+        log(f"⚠️ decision_id={decision_id} already executed successfully, skipping duplicate")
         return 0
 
     try:
@@ -48,8 +54,14 @@ def run(argv: list[str] | None = None) -> int:
         if read_only_action:
             if action in {"balance", "balances"}:
                 command_balances(ex)
-            else:
+            elif action == "status":
                 command_status(ex)
+            elif action == "orders":
+                command_orders(ex)
+            elif action == "order_status":
+                command_order_status(ex, args.symbol, args.order_id)
+            elif action == "cancel":
+                command_cancel(ex, args.symbol, getattr(args, "order_id", None))
             return 0
 
         decision_context = build_decision_context(ex, action, argv_tail, decision_id)
@@ -88,10 +100,10 @@ def run(argv: list[str] | None = None) -> int:
                     "execution_result": {"status": "hold"},
                 }
             )
-            log("😴 决策: 持续持有,无操作")
+            log("😴 Decision: hold, no action")
            result = {"status": "hold"}
         else:
-            raise RuntimeError(f"未知动作: {action};请运行 --help 查看正确 CLI 用法")
+            raise RuntimeError(f"Unknown action: {action}; run --help for valid CLI usage")
 
         record_execution_state(
             decision_id,
@@ -103,7 +115,9 @@ def run(argv: list[str] | None = None) -> int:
                 "result": result,
             },
         )
-        log(f"✅ 执行完成 decision_id={decision_id}")
+        if not read_only_action:
+            print_json({"ok": True, "decision_id": decision_id, "action": action, "result": result})
+        log(f"Execution completed decision_id={decision_id}")
         return 0
 
     except Exception as exc:
@@ -124,5 +138,5 @@ def run(argv: list[str] | None = None) -> int:
             action=action,
             args=argv_tail,
         )
-        log(f"❌ 执行失败: {exc}")
+        log(f"❌ Execution failed: {exc}")
         return 1
110 src/coinhunter/services/snapshot_builder.py Normal file
@@ -0,0 +1,110 @@
+"""Snapshot construction for precheck."""
+
+from __future__ import annotations
+
+from .candidate_scoring import top_candidates_from_tickers
+from .data_utils import norm_symbol, stable_hash, to_float
+from .market_data import enrich_candidates_and_positions, get_exchange, regime_from_pct
+from .precheck_constants import MIN_REAL_POSITION_VALUE_USDT
+from .state_manager import load_config, load_positions
+from .time_utils import get_local_now, utc_iso
+
+
+def build_snapshot():
+    config = load_config()
+    local_dt, tz_name = get_local_now(config)
+    ex = get_exchange()
+    positions = load_positions()
+    tickers = ex.fetch_tickers()
+    balances = ex.fetch_balance()["free"]
+    free_usdt = to_float(balances.get("USDT"))
+
+    positions_view = []
+    total_position_value = 0.0
+    largest_position_value = 0.0
+    actionable_positions = 0
+    for pos in positions:
+        symbol = pos.get("symbol") or ""
+        sym_ccxt = norm_symbol(symbol)
+        ticker = tickers.get(sym_ccxt, {})
+        last = to_float(ticker.get("last"), None)
+        qty = to_float(pos.get("quantity"))
+        avg_cost = to_float(pos.get("avg_cost"), None)
+        value = round(qty * last, 4) if last is not None else None
+        pnl_pct = round((last - avg_cost) / avg_cost, 4) if last is not None and avg_cost else None
+        high = to_float(ticker.get("high"))
+        low = to_float(ticker.get("low"))
+        distance_from_high = (high - last) / max(high, 1e-9) if high and last else None
+        if value is not None:
+            total_position_value += value
+            largest_position_value = max(largest_position_value, value)
+            if value >= MIN_REAL_POSITION_VALUE_USDT:
+                actionable_positions += 1
+        positions_view.append({
+            "symbol": symbol,
+            "base_asset": pos.get("base_asset"),
+            "quantity": qty,
+            "avg_cost": avg_cost,
+            "last_price": last,
+            "market_value_usdt": value,
+            "pnl_pct": pnl_pct,
+            "high_24h": round(high, 8) if high else None,
+            "low_24h": round(low, 8) if low else None,
+            "distance_from_high_pct": round(distance_from_high * 100, 2) if distance_from_high is not None else None,
+        })
+
+    btc_pct = to_float((tickers.get("BTC/USDT") or {}).get("percentage"), None)
+    eth_pct = to_float((tickers.get("ETH/USDT") or {}).get("percentage"), None)
+    global_candidates, candidate_layers = top_candidates_from_tickers(tickers)
+    global_candidates, candidate_layers, positions_view = enrich_candidates_and_positions(
+        global_candidates, candidate_layers, positions_view, tickers, ex
+    )
+    leader_score = global_candidates[0]["score"] if global_candidates else 0.0
+    portfolio_value = round(free_usdt + total_position_value, 4)
+    volatility_score = round(max(abs(to_float(btc_pct, 0)), abs(to_float(eth_pct, 0))), 2)
+
+    position_structure = [
+        {
+            "symbol": p.get("symbol"),
+            "base_asset": p.get("base_asset"),
+            "quantity": round(to_float(p.get("quantity"), 0), 10),
+            "avg_cost": to_float(p.get("avg_cost"), None),
+        }
+        for p in positions_view
+    ]
+
+    snapshot = {
+        "generated_at": utc_iso(),
+        "timezone": tz_name,
+        "local_time": local_dt.isoformat(),
+        "session": None,  # filled in below via session_label
+        "free_usdt": round(free_usdt, 4),
+        "portfolio_value_usdt": portfolio_value,
+        "largest_position_value_usdt": round(largest_position_value, 4),
+        "actionable_positions": actionable_positions,
+        "positions": positions_view,
+        "positions_hash": stable_hash(position_structure),
+        "top_candidates": global_candidates,
+        "top_candidates_layers": candidate_layers,
+        "candidates_hash": stable_hash({"global": global_candidates, "layers": candidate_layers}),
+        "market_regime": {
+            "btc_24h_pct": round(btc_pct, 2) if btc_pct is not None else None,
+            "btc_regime": regime_from_pct(btc_pct),
+            "eth_24h_pct": round(eth_pct, 2) if eth_pct is not None else None,
+            "eth_regime": regime_from_pct(eth_pct),
+            "volatility_score": volatility_score,
+            "leader_score": round(leader_score, 4),
+        },
+    }
+    # fill in session after the fact to avoid re-fetching config
+    from .time_utils import session_label
+    snapshot["session"] = session_label(local_dt)
+    snapshot["snapshot_hash"] = stable_hash({
+        "portfolio_value_usdt": snapshot["portfolio_value_usdt"],
+        "positions_hash": snapshot["positions_hash"],
+        "candidates_hash": snapshot["candidates_hash"],
+        "market_regime": snapshot["market_regime"],
+        "session": snapshot["session"],
+    })
+    return snapshot
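The snapshot above fingerprints position and candidate structures with an internal `stable_hash` helper so later runs can detect change. The project's implementation is not shown in this diff; the sketch below is an assumed stand-in (canonical JSON hashed with SHA-256) illustrating the one property the callers rely on: equal dicts hash equally regardless of key order.

```python
import hashlib
import json


def stable_hash(obj) -> str:
    # Assumed stand-in for the project's stable_hash helper:
    # serialize to canonical JSON (sorted keys, fixed separators),
    # then hash the bytes with SHA-256.
    payload = json.dumps(obj, sort_keys=True, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


a = stable_hash({"symbol": "BTC/USDT", "quantity": 1.5})
b = stable_hash({"quantity": 1.5, "symbol": "BTC/USDT"})
print(a == b)  # True: key order does not affect the hash
```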
160 src/coinhunter/services/state_manager.py Normal file
@@ -0,0 +1,160 @@
+"""State management for precheck workflows."""
+
+from __future__ import annotations
+
+from datetime import timedelta
+
+__all__ = [
+    "load_env",
+    "load_positions",
+    "load_state",
+    "modify_state",
+    "load_config",
+    "clear_run_request_fields",
+    "sanitize_state_for_stale_triggers",
+    "save_state",
+    "update_state_after_observation",
+]
+
+from ..runtime import get_runtime_paths, load_env_file
+from .data_utils import load_json
+from .file_utils import load_json_locked, read_modify_write_json, save_json_locked
+from .precheck_constants import (
+    MAX_PENDING_TRIGGER_MINUTES,
+    MAX_RUN_REQUEST_MINUTES,
+)
+from .time_utils import parse_ts, utc_iso, utc_now
+
+
+def _paths():
+    return get_runtime_paths()
+
+
+def load_env() -> None:
+    load_env_file(_paths())
+
+
+def load_positions():
+    return load_json(_paths().positions_file, {}).get("positions", [])
+
+
+def load_state():
+    paths = _paths()
+    return load_json_locked(paths.precheck_state_file, paths.precheck_state_lock, {})
+
+
+def modify_state(modifier):
+    """Atomic read-modify-write for precheck state."""
+    paths = _paths()
+    read_modify_write_json(
+        paths.precheck_state_file,
+        paths.precheck_state_lock,
+        {},
+        modifier,
+    )
+
+
+def load_config():
+    return load_json(_paths().config_file, {})
+
+
+def clear_run_request_fields(state: dict):
+    state.pop("run_requested_at", None)
+    state.pop("run_request_note", None)
+
+
+def sanitize_state_for_stale_triggers(state: dict):
+    sanitized = dict(state)
+    notes = []
+    now = utc_now()
+    run_requested_at = parse_ts(sanitized.get("run_requested_at"))
+    last_deep_analysis_at = parse_ts(sanitized.get("last_deep_analysis_at"))
+    last_triggered_at = parse_ts(sanitized.get("last_triggered_at"))
+    pending_trigger = bool(sanitized.get("pending_trigger"))
+
+    if run_requested_at and last_deep_analysis_at and last_deep_analysis_at >= run_requested_at:
+        clear_run_request_fields(sanitized)
+        if pending_trigger and (not last_triggered_at or last_deep_analysis_at >= last_triggered_at):
+            sanitized["pending_trigger"] = False
+            sanitized["pending_reasons"] = []
+            sanitized["last_ack_note"] = (
+                f"auto-cleared completed trigger at {utc_iso()} because last_deep_analysis_at >= run_requested_at"
+            )
+            pending_trigger = False
+        notes.append(
+            f"Auto-cleared completed run_requested marker: last_deep_analysis_at {last_deep_analysis_at.isoformat()} >= run_requested_at {run_requested_at.isoformat()}"
+        )
+        run_requested_at = None
+
+    if run_requested_at and now - run_requested_at > timedelta(minutes=MAX_RUN_REQUEST_MINUTES):
+        clear_run_request_fields(sanitized)
+        notes.append(
+            f"Auto-cleared stale run_requested marker: waited {(now - run_requested_at).total_seconds() / 60:.1f} minutes, exceeding {MAX_RUN_REQUEST_MINUTES} minutes"
+        )
+        run_requested_at = None
+
+    pending_anchor = run_requested_at or last_triggered_at or last_deep_analysis_at
+    if pending_trigger and pending_anchor and now - pending_anchor > timedelta(minutes=MAX_PENDING_TRIGGER_MINUTES):
+        sanitized["pending_trigger"] = False
+        sanitized["pending_reasons"] = []
+        sanitized["last_ack_note"] = (
+            f"auto-recovered stale pending trigger at {utc_iso()} after waiting "
+            f"{(now - pending_anchor).total_seconds() / 60:.1f} minutes"
+        )
+        notes.append(
+            f"Auto-recovered stale pending_trigger: trigger was dangling for {(now - pending_anchor).total_seconds() / 60:.1f} minutes, exceeding {MAX_PENDING_TRIGGER_MINUTES} minutes"
+        )
+
+    sanitized["_stale_recovery_notes"] = notes
+    return sanitized
+
+
+def save_state(state: dict):
+    paths = _paths()
+    paths.state_dir.mkdir(parents=True, exist_ok=True)
+    state_to_save = dict(state)
+    state_to_save.pop("_stale_recovery_notes", None)
+    save_json_locked(
+        paths.precheck_state_file,
+        paths.precheck_state_lock,
+        state_to_save,
+    )
+
+
+def update_state_after_observation(state: dict, snapshot: dict, analysis: dict):
+    new_state = dict(state)
+    new_state.update({
+        "last_observed_at": snapshot["generated_at"],
+        "last_snapshot_hash": snapshot["snapshot_hash"],
+        "last_positions_hash": snapshot["positions_hash"],
+        "last_candidates_hash": snapshot["candidates_hash"],
+        "last_portfolio_value_usdt": snapshot["portfolio_value_usdt"],
+        "last_market_regime": snapshot["market_regime"],
+        "last_positions_map": {
+            p["symbol"]: {"last_price": p.get("last_price"), "pnl_pct": p.get("pnl_pct")}
+            for p in snapshot["positions"]
+        },
+        "last_top_candidate": snapshot["top_candidates"][0] if snapshot["top_candidates"] else None,
+        "last_candidates_layers": snapshot.get("top_candidates_layers", {}),
+        "last_adaptive_profile": analysis.get("adaptive_profile", {}),
+    })
+    if analysis["should_analyze"]:
+        new_state["pending_trigger"] = True
+        new_state["pending_reasons"] = analysis["details"]
+        new_state["last_triggered_at"] = snapshot["generated_at"]
+        new_state["last_trigger_snapshot_hash"] = snapshot["snapshot_hash"]
+        new_state["last_trigger_hard_reasons"] = analysis.get("hard_reasons", [])
+        new_state["last_trigger_signal_delta"] = analysis.get("signal_delta", 0.0)
+
+    last_hard_reasons_at = dict(state.get("last_hard_reasons_at", {}))
+    for hr in analysis.get("hard_reasons", []):
+        last_hard_reasons_at[hr] = snapshot["generated_at"]
+    cutoff = utc_now() - timedelta(hours=24)
+    pruned: dict[str, str] = {}
+    for k, v in last_hard_reasons_at.items():
+        ts = parse_ts(v)
+        if ts and ts > cutoff:
+            pruned[k] = v
+    new_state["last_hard_reasons_at"] = pruned
+    return new_state
49 src/coinhunter/services/time_utils.py Normal file
@@ -0,0 +1,49 @@
+"""Time utilities for precheck."""
+
+from __future__ import annotations
+
+from datetime import datetime, timezone
+from zoneinfo import ZoneInfo
+
+
+def utc_now() -> datetime:
+    return datetime.now(timezone.utc)
+
+
+def utc_iso() -> str:
+    return utc_now().isoformat()
+
+
+def parse_ts(value: str | None) -> datetime | None:
+    if not value:
+        return None
+    try:
+        ts = datetime.fromisoformat(value)
+        if ts.tzinfo is None:
+            ts = ts.replace(tzinfo=timezone.utc)
+        return ts
+    except Exception:
+        return None
+
+
+def get_local_now(config: dict) -> tuple[datetime, str]:
+    tz_name = config.get("timezone") or "Asia/Shanghai"
+    try:
+        tz = ZoneInfo(tz_name)
+    except Exception:
+        tz = ZoneInfo("Asia/Shanghai")
+        tz_name = "Asia/Shanghai"
+    return utc_now().astimezone(tz), tz_name
+
+
+def session_label(local_dt: datetime) -> str:
+    hour = local_dt.hour
+    if 0 <= hour < 7:
+        return "overnight"
+    if 7 <= hour < 12:
+        return "asia-morning"
+    if 12 <= hour < 17:
+        return "asia-afternoon"
+    if 17 <= hour < 21:
+        return "europe-open"
+    return "us-session"
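The `session_label` boundaries added in time_utils.py above can be exercised with a short standalone sketch (re-declaring the function locally rather than importing the package, so it runs outside the repo):

```python
from datetime import datetime


def session_label(local_dt: datetime) -> str:
    # Mirrors src/coinhunter/services/time_utils.py: map local hour
    # to a trading-session bucket; boundaries are half-open [lo, hi).
    hour = local_dt.hour
    if 0 <= hour < 7:
        return "overnight"
    if 7 <= hour < 12:
        return "asia-morning"
    if 12 <= hour < 17:
        return "asia-afternoon"
    if 17 <= hour < 21:
        return "europe-open"
    return "us-session"


print(session_label(datetime(2024, 1, 1, 6, 59)))   # overnight
print(session_label(datetime(2024, 1, 1, 7, 0)))    # asia-morning
print(session_label(datetime(2024, 1, 1, 23, 30)))  # us-session
```

Note the edges: 7:00 already counts as asia-morning, and anything from 21:00 onward falls through to us-session.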
@@ -1,12 +1,15 @@
 """Common trade utilities (time, logging, constants)."""
 import os
-from datetime import datetime, timezone, timedelta
+import sys
+from datetime import datetime, timedelta, timezone
 
+from ..runtime import get_user_config
+
 CST = timezone(timedelta(hours=8))
 
 _DRY_RUN = {"value": os.getenv("DRY_RUN", "false").lower() == "true"}
-USDT_BUFFER_PCT = 0.03
-MIN_REMAINING_DUST_USDT = 1.0
+USDT_BUFFER_PCT = get_user_config("trading.usdt_buffer_pct", 0.03)
+MIN_REMAINING_DUST_USDT = get_user_config("trading.min_remaining_dust_usdt", 1.0)
 
 
 def is_dry_run() -> bool:
@@ -18,7 +21,7 @@ def set_dry_run(value: bool):
 
 
 def log(msg: str):
-    print(f"[{datetime.now(CST).strftime('%Y-%m-%d %H:%M:%S')} CST] {msg}")
+    print(f"[{datetime.now(CST).strftime('%Y-%m-%d %H:%M:%S')} CST] {msg}", file=sys.stderr)
 
 
 def bj_now_iso():
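The `_DRY_RUN = {"value": ...}` line kept by the trade_common.py hunk above uses a single-element dict as module-level mutable state, so `set_dry_run` can flip the shared flag without a `global` statement. A self-contained sketch of the same pattern:

```python
import os

# One shared, mutable flag; the dict object itself is never reassigned.
_DRY_RUN = {"value": os.getenv("DRY_RUN", "false").lower() == "true"}


def is_dry_run() -> bool:
    return _DRY_RUN["value"]


def set_dry_run(value: bool) -> None:
    # Mutating the dict in place avoids `global` while keeping
    # every caller looking at the same flag.
    _DRY_RUN["value"] = bool(value)


set_dry_run(True)
print(is_dry_run())  # True
```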
@@ -1,15 +1,41 @@
 """Trade execution actions (buy, sell, rebalance, hold, status)."""
+import json
+
+__all__ = [
+    "print_json",
+    "build_decision_context",
+    "market_sell",
+    "market_buy",
+    "action_sell_all",
+    "action_buy",
+    "action_rebalance",
+    "command_status",
+    "command_balances",
+    "command_orders",
+    "command_order_status",
+    "command_cancel",
+]
+
 from ..logger import log_decision, log_trade
 from .exchange_service import (
+    build_market_snapshot,
     fetch_balances,
     norm_symbol,
-    storage_symbol,
-    build_market_snapshot,
     prepare_buy_quantity,
     prepare_sell_quantity,
+    storage_symbol,
 )
-from .portfolio_service import load_positions, save_positions, upsert_position, reconcile_positions_with_exchange
-from .trade_common import is_dry_run, USDT_BUFFER_PCT, log, bj_now_iso
+from .portfolio_service import (
+    load_positions,
+    reconcile_positions_with_exchange,
+    update_positions,
+    upsert_position,
+)
+from .trade_common import USDT_BUFFER_PCT, bj_now_iso, is_dry_run, log
+
+
+def print_json(payload: dict) -> None:
+    print(json.dumps(payload, ensure_ascii=False, indent=2, sort_keys=True))
 
 
 def build_decision_context(ex, action: str, argv_tail: list[str], decision_id: str):
@@ -29,7 +55,7 @@ def build_decision_context(ex, action: str, argv_tail: list[str], decision_id: s
 def market_sell(ex, symbol: str, qty: float, decision_id: str):
     sym, qty, bid, est_cost = prepare_sell_quantity(ex, symbol, qty)
     if is_dry_run():
-        log(f"[DRY RUN] 卖出 {sym} 数量 {qty}")
+        log(f"[DRY RUN] SELL {sym} qty {qty}")
         return {"id": f"dry-sell-{decision_id}", "symbol": sym, "amount": qty, "price": bid, "cost": est_cost, "status": "closed"}
     order = ex.create_market_sell_order(sym, qty, params={"newClientOrderId": f"ch-{decision_id}-sell"})
     return order
@@ -38,7 +64,7 @@ def market_sell(ex, symbol: str, qty: float, decision_id: str):
 def market_buy(ex, symbol: str, amount_usdt: float, decision_id: str):
     sym, qty, ask, est_cost = prepare_buy_quantity(ex, symbol, amount_usdt)
     if is_dry_run():
-        log(f"[DRY RUN] 买入 {sym} 金额 ${est_cost:.4f} 数量 {qty}")
+        log(f"[DRY RUN] BUY {sym} amount ${est_cost:.4f} qty {qty}")
         return {"id": f"dry-buy-{decision_id}", "symbol": sym, "amount": qty, "price": ask, "cost": est_cost, "status": "closed"}
     order = ex.create_market_buy_order(sym, qty, params={"newClientOrderId": f"ch-{decision_id}-buy"})
     return order
@@ -49,13 +75,13 @@ def action_sell_all(ex, symbol: str, decision_id: str, decision_context: dict):
     base = norm_symbol(symbol).split("/")[0]
     qty = float(balances_before.get(base, 0))
     if qty <= 0:
-        raise RuntimeError(f"{base} 余额为0,无法卖出")
+        raise RuntimeError(f"{base} balance is zero, cannot sell")
     order = market_sell(ex, symbol, qty, decision_id)
-    positions_after, balances_after = (
-        reconcile_positions_with_exchange(ex, load_positions())
-        if not is_dry_run()
-        else (load_positions(), balances_before)
-    )
+    if is_dry_run():
+        positions_after = load_positions()
+        balances_after = balances_before
+    else:
+        positions_after, balances_after = reconcile_positions_with_exchange(ex)
     log_trade(
         "SELL_ALL",
         norm_symbol(symbol),
@@ -82,13 +108,12 @@ def action_sell_all(ex, symbol: str, decision_id: str, decision_context: dict):
     return order
 
 
-def action_buy(ex, symbol: str, amount_usdt: float, decision_id: str, decision_context: dict, simulated_usdt_balance: float = None):
+def action_buy(ex, symbol: str, amount_usdt: float, decision_id: str, decision_context: dict, simulated_usdt_balance: float | None = None):
     balances_before = fetch_balances(ex) if simulated_usdt_balance is None else {"USDT": simulated_usdt_balance}
     usdt = float(balances_before.get("USDT", 0))
     if usdt < amount_usdt:
-        raise RuntimeError(f"USDT 余额不足(${usdt:.4f} < ${amount_usdt:.4f})")
+        raise RuntimeError(f"Insufficient USDT balance (${usdt:.4f} < ${amount_usdt:.4f})")
     order = market_buy(ex, symbol, amount_usdt, decision_id)
-    positions_existing = load_positions()
     sym_store = storage_symbol(symbol)
     price = float(order.get("price") or 0)
     qty = float(order.get("amount") or 0)
@@ -104,18 +129,13 @@ def action_buy(ex, symbol: str, amount_usdt: float, decision_id: str, decision_c
         "updated_at": bj_now_iso(),
         "note": "Smart executor entry",
     }
-    upsert_position(positions_existing, position)
     if is_dry_run():
         balances_after = balances_before
-        positions_after = positions_existing
+        positions_after = load_positions()
+        upsert_position(positions_after, position)
     else:
-        save_positions(positions_existing)
-        positions_after, balances_after = reconcile_positions_with_exchange(ex, positions_existing)
-        for p in positions_after:
-            if p["symbol"] == sym_store and price:
-                p["avg_cost"] = price
-                p["updated_at"] = bj_now_iso()
-        save_positions(positions_after)
+        update_positions(lambda p: upsert_position(p, position))
+        positions_after, balances_after = reconcile_positions_with_exchange(ex, [position])
     log_trade(
         "BUY",
         norm_symbol(symbol),
@@ -154,7 +174,7 @@ def action_rebalance(ex, from_symbol: str, to_symbol: str, decision_id: str, dec
     spend = usdt * (1 - USDT_BUFFER_PCT)
     simulated_usdt = None
     if spend < 5:
-        raise RuntimeError(f"卖出后 USDT ${spend:.4f} 不足,无法买入新币")
+        raise RuntimeError(f"USDT ${spend:.4f} insufficient after sell, cannot buy new token")
     buy_order = action_buy(ex, to_symbol, spend, decision_id + "b", decision_context, simulated_usdt_balance=simulated_usdt)
     return {"sell": sell_order, "buy": buy_order}
 
@@ -168,11 +188,56 @@ def command_status(ex):
         "positions": positions,
         "market_snapshot": market_snapshot,
     }
-    print(payload)
+    print_json(payload)
     return payload
 
 
 def command_balances(ex):
     balances = fetch_balances(ex)
-    print({"balances": balances})
+    payload = {"balances": balances}
+    print_json(payload)
     return balances
+
+
+def command_orders(ex):
+    sym = None
+    try:
+        orders = ex.fetch_open_orders(symbol=sym) if sym else ex.fetch_open_orders()
+    except Exception as e:
+        raise RuntimeError(f"Failed to fetch open orders: {e}")
+    payload = {"orders": orders}
+    print_json(payload)
+    return orders
+
+
+def command_order_status(ex, symbol: str, order_id: str):
+    sym = norm_symbol(symbol)
+    try:
+        order = ex.fetch_order(order_id, sym)
+    except Exception as e:
+        raise RuntimeError(f"Failed to fetch order {order_id}: {e}")
+    payload = {"order": order}
+    print_json(payload)
+    return order
+
+
+def command_cancel(ex, symbol: str, order_id: str | None):
+    sym = norm_symbol(symbol)
+    if is_dry_run():
+        log(f"[DRY RUN] Would cancel order {order_id or '(newest)'} on {sym}")
+        return {"dry_run": True, "symbol": sym, "order_id": order_id}
+    if not order_id:
+        try:
+            open_orders = ex.fetch_open_orders(sym)
+        except Exception as e:
+            raise RuntimeError(f"Failed to fetch open orders for {sym}: {e}")
+        if not open_orders:
+            raise RuntimeError(f"No open orders to cancel for {sym}")
+        order_id = str(open_orders[-1]["id"])
+    try:
+        result = ex.cancel_order(order_id, sym)
+    except Exception as e:
+        raise RuntimeError(f"Failed to cancel order {order_id} on {sym}: {e}")
+    payload = {"cancelled": True, "symbol": sym, "order_id": order_id, "result": result}
+    print_json(payload)
+    return result
317 src/coinhunter/services/trigger_analyzer.py Normal file
@@ -0,0 +1,317 @@
+"""Trigger analysis logic for precheck."""
+
+from __future__ import annotations
+
+from datetime import timedelta
+
+from .adaptive_profile import _candidate_weight, build_adaptive_profile
+from .data_utils import to_float
+from .precheck_constants import (
+    HARD_MOON_PCT,
+    HARD_REASON_DEDUP_MINUTES,
+    HARD_STOP_PCT,
+    MIN_REAL_POSITION_VALUE_USDT,
+)
+from .time_utils import parse_ts, utc_now
+
+
+def analyze_trigger(snapshot: dict, state: dict):
+    reasons = []
+    details = list(state.get("_stale_recovery_notes", []))
+    hard_reasons = []
+    soft_reasons = []
+    soft_score = 0.0
+
+    profile = build_adaptive_profile(snapshot)
+    market = snapshot.get("market_regime", {})
+    now = utc_now()
+
+    last_positions_hash = state.get("last_positions_hash")
+    last_portfolio_value = state.get("last_portfolio_value_usdt")
+    last_market_regime = state.get("last_market_regime", {})
+    last_positions_map = state.get("last_positions_map", {})
+    last_top_candidate = state.get("last_top_candidate")
+    pending_trigger = bool(state.get("pending_trigger"))
+    run_requested_at = parse_ts(state.get("run_requested_at"))
+    last_deep_analysis_at = parse_ts(state.get("last_deep_analysis_at"))
+    last_triggered_at = parse_ts(state.get("last_triggered_at"))
+    last_trigger_snapshot_hash = state.get("last_trigger_snapshot_hash")
+    last_hard_reasons_at = state.get("last_hard_reasons_at", {})
+
+    price_trigger = profile["price_move_trigger_pct"]
+    pnl_trigger = profile["pnl_trigger_pct"]
+    portfolio_trigger = profile["portfolio_move_trigger_pct"]
+    candidate_ratio_trigger = profile["candidate_score_trigger_ratio"]
+    force_minutes = profile["force_analysis_after_minutes"]
+    cooldown_minutes = profile["cooldown_minutes"]
+    soft_score_threshold = profile["soft_score_threshold"]
+
+    if pending_trigger:
+        reasons.append("pending-trigger-unacked")
+        hard_reasons.append("pending-trigger-unacked")
+        details.append("Previous deep analysis trigger has not been acknowledged yet")
+        if run_requested_at:
+            details.append(f"External gate requested analysis at {run_requested_at.isoformat()}")
+
+    if not last_deep_analysis_at:
+        reasons.append("first-analysis")
+        hard_reasons.append("first-analysis")
+        details.append("No deep analysis has been recorded yet")
+    elif now - last_deep_analysis_at >= timedelta(minutes=force_minutes):
+        reasons.append("stale-analysis")
+        hard_reasons.append("stale-analysis")
+        details.append(f"Time since last deep analysis exceeds {force_minutes} minutes")
+
+    if last_positions_hash and snapshot["positions_hash"] != last_positions_hash:
+        reasons.append("positions-changed")
+        hard_reasons.append("positions-changed")
+        details.append("Position structure has changed")
+
+    if last_portfolio_value not in (None, 0):
+        lpf = float(str(last_portfolio_value))
|
portfolio_delta = abs(snapshot["portfolio_value_usdt"] - lpf) / max(lpf, 1e-9)
|
||||||
|
if portfolio_delta >= portfolio_trigger:
|
||||||
|
if portfolio_delta >= 1.0:
|
||||||
|
reasons.append("portfolio-extreme-move")
|
||||||
|
hard_reasons.append("portfolio-extreme-move")
|
||||||
|
details.append(f"Portfolio value moved extremely {portfolio_delta:.1%}, exceeding 100%, treated as hard trigger")
|
||||||
|
else:
|
||||||
|
reasons.append("portfolio-move")
|
||||||
|
soft_reasons.append("portfolio-move")
|
||||||
|
soft_score += 1.0
|
||||||
|
details.append(f"Portfolio value moved {portfolio_delta:.1%}, threshold {portfolio_trigger:.1%}")
|
||||||
|
|
||||||
|
for pos in snapshot["positions"]:
|
||||||
|
symbol = pos["symbol"]
|
||||||
|
prev = last_positions_map.get(symbol, {})
|
||||||
|
cur_price = pos.get("last_price")
|
||||||
|
prev_price = prev.get("last_price")
|
||||||
|
cur_pnl = pos.get("pnl_pct")
|
||||||
|
prev_pnl = prev.get("pnl_pct")
|
||||||
|
market_value = to_float(pos.get("market_value_usdt"), 0)
|
||||||
|
actionable_position = market_value >= MIN_REAL_POSITION_VALUE_USDT
|
||||||
|
|
||||||
|
if cur_price and prev_price:
|
||||||
|
price_move = abs(cur_price - prev_price) / max(prev_price, 1e-9)
|
||||||
|
if price_move >= price_trigger:
|
||||||
|
reasons.append(f"price-move:{symbol}")
|
||||||
|
soft_reasons.append(f"price-move:{symbol}")
|
||||||
|
soft_score += 1.0 if actionable_position else 0.4
|
||||||
|
details.append(f"{symbol} price moved {price_move:.1%}, threshold {price_trigger:.1%}")
|
||||||
|
if cur_pnl is not None and prev_pnl is not None:
|
||||||
|
pnl_move = abs(cur_pnl - prev_pnl)
|
||||||
|
if pnl_move >= pnl_trigger:
|
||||||
|
reasons.append(f"pnl-move:{symbol}")
|
||||||
|
soft_reasons.append(f"pnl-move:{symbol}")
|
||||||
|
soft_score += 1.0 if actionable_position else 0.4
|
||||||
|
details.append(f"{symbol} PnL moved {pnl_move:.1%}, threshold {pnl_trigger:.1%}")
|
||||||
|
if cur_pnl is not None:
|
||||||
|
stop_band = -0.06 if actionable_position else -0.12
|
||||||
|
take_band = 0.14 if actionable_position else 0.25
|
||||||
|
if cur_pnl <= stop_band or cur_pnl >= take_band:
|
||||||
|
reasons.append(f"risk-band:{symbol}")
|
||||||
|
hard_reasons.append(f"risk-band:{symbol}")
|
||||||
|
details.append(f"{symbol} near execution threshold, current PnL {cur_pnl:.1%}")
|
||||||
|
if cur_pnl <= HARD_STOP_PCT:
|
||||||
|
reasons.append(f"hard-stop:{symbol}")
|
||||||
|
hard_reasons.append(f"hard-stop:{symbol}")
|
||||||
|
details.append(f"{symbol} PnL exceeded {HARD_STOP_PCT:.1%}, emergency hard trigger")
|
||||||
|
|
||||||
|
current_market = snapshot.get("market_regime", {})
|
||||||
|
if last_market_regime:
|
||||||
|
if current_market.get("btc_regime") != last_market_regime.get("btc_regime"):
|
||||||
|
reasons.append("btc-regime-change")
|
||||||
|
hard_reasons.append("btc-regime-change")
|
||||||
|
details.append(f"BTC regime changed from {last_market_regime.get('btc_regime')} to {current_market.get('btc_regime')}")
|
||||||
|
if current_market.get("eth_regime") != last_market_regime.get("eth_regime"):
|
||||||
|
reasons.append("eth-regime-change")
|
||||||
|
hard_reasons.append("eth-regime-change")
|
||||||
|
details.append(f"ETH regime changed from {last_market_regime.get('eth_regime')} to {current_market.get('eth_regime')}")
|
||||||
|
|
||||||
|
for cand in snapshot.get("top_candidates", []):
|
||||||
|
if cand.get("change_24h_pct", 0) >= HARD_MOON_PCT * 100:
|
||||||
|
reasons.append(f"hard-moon:{cand['symbol']}")
|
||||||
|
hard_reasons.append(f"hard-moon:{cand['symbol']}")
|
||||||
|
details.append(f"Candidate {cand['symbol']} 24h change {cand['change_24h_pct']:.1f}%, hard moon trigger")
|
||||||
|
|
||||||
|
candidate_weight = _candidate_weight(snapshot, profile)
|
||||||
|
|
||||||
|
last_layers = state.get("last_candidates_layers", {})
|
||||||
|
current_layers = snapshot.get("top_candidates_layers", {})
|
||||||
|
for band in ("major", "mid", "meme"):
|
||||||
|
cur_band = current_layers.get(band, [])
|
||||||
|
prev_band = last_layers.get(band, [])
|
||||||
|
cur_leader = cur_band[0] if cur_band else None
|
||||||
|
prev_leader = prev_band[0] if prev_band else None
|
||||||
|
if cur_leader and prev_leader and cur_leader["symbol"] != prev_leader["symbol"]:
|
||||||
|
score_ratio = cur_leader.get("score", 0) / max(prev_leader.get("score", 0.0001), 0.0001)
|
||||||
|
if score_ratio >= candidate_ratio_trigger:
|
||||||
|
reasons.append(f"new-leader-{band}:{cur_leader['symbol']}")
|
||||||
|
soft_reasons.append(f"new-leader-{band}:{cur_leader['symbol']}")
|
||||||
|
soft_score += candidate_weight * 0.7
|
||||||
|
details.append(
|
||||||
|
f"{band} tier new leader {cur_leader['symbol']} replaced {prev_leader['symbol']}, score ratio {score_ratio:.2f}"
|
||||||
|
)
|
||||||
|
|
||||||
|
current_leader = snapshot.get("top_candidates", [{}])[0] if snapshot.get("top_candidates") else None
|
||||||
|
if last_top_candidate and current_leader:
|
||||||
|
if current_leader.get("symbol") != last_top_candidate.get("symbol"):
|
||||||
|
score_ratio = current_leader.get("score", 0) / max(last_top_candidate.get("score", 0.0001), 0.0001)
|
||||||
|
if score_ratio >= candidate_ratio_trigger:
|
||||||
|
reasons.append("new-leader")
|
||||||
|
soft_reasons.append("new-leader")
|
||||||
|
soft_score += candidate_weight
|
||||||
|
details.append(
|
||||||
|
f"New candidate {current_leader.get('symbol')} leads previous top, score ratio {score_ratio:.2f}, threshold {candidate_ratio_trigger:.2f}"
|
||||||
|
)
|
||||||
|
elif current_leader and not last_top_candidate:
|
||||||
|
reasons.append("candidate-leader-init")
|
||||||
|
soft_reasons.append("candidate-leader-init")
|
||||||
|
soft_score += candidate_weight
|
||||||
|
details.append(f"First recorded candidate leader {current_leader.get('symbol')}")
|
||||||
|
|
||||||
|
def _signal_delta() -> float:
|
||||||
|
delta = 0.0
|
||||||
|
if last_trigger_snapshot_hash and snapshot.get("snapshot_hash") != last_trigger_snapshot_hash:
|
||||||
|
delta += 0.5
|
||||||
|
if snapshot["positions_hash"] != last_positions_hash:
|
||||||
|
delta += 1.5
|
||||||
|
for pos in snapshot["positions"]:
|
||||||
|
symbol = pos["symbol"]
|
||||||
|
prev = last_positions_map.get(symbol, {})
|
||||||
|
cur_price = pos.get("last_price")
|
||||||
|
prev_price = prev.get("last_price")
|
||||||
|
cur_pnl = pos.get("pnl_pct")
|
||||||
|
prev_pnl = prev.get("pnl_pct")
|
||||||
|
if cur_price and prev_price and abs(cur_price - prev_price) / max(prev_price, 1e-9) >= 0.02:
|
||||||
|
delta += 0.5
|
||||||
|
if cur_pnl is not None and prev_pnl is not None and abs(cur_pnl - prev_pnl) >= 0.03:
|
||||||
|
delta += 0.5
|
||||||
|
last_leader = state.get("last_top_candidate")
|
||||||
|
if current_leader and last_leader and current_leader.get("symbol") != last_leader.get("symbol"):
|
||||||
|
delta += 1.0
|
||||||
|
for band in ("major", "mid", "meme"):
|
||||||
|
cur_band = current_layers.get(band, [])
|
||||||
|
prev_band = last_layers.get(band, [])
|
||||||
|
cur_l = cur_band[0] if cur_band else None
|
||||||
|
prev_l = prev_band[0] if prev_band else None
|
||||||
|
if cur_l and prev_l and cur_l.get("symbol") != prev_l.get("symbol"):
|
||||||
|
delta += 0.5
|
||||||
|
if last_market_regime:
|
||||||
|
if current_market.get("btc_regime") != last_market_regime.get("btc_regime"):
|
||||||
|
delta += 1.5
|
||||||
|
if current_market.get("eth_regime") != last_market_regime.get("eth_regime"):
|
||||||
|
delta += 1.5
|
||||||
|
if last_portfolio_value not in (None, 0):
|
||||||
|
lpf = float(str(last_portfolio_value))
|
||||||
|
portfolio_delta = abs(snapshot["portfolio_value_usdt"] - lpf) / max(lpf, 1e-9)
|
||||||
|
if portfolio_delta >= 0.05:
|
||||||
|
delta += 1.0
|
||||||
|
last_trigger_hard_types = {r.split(":")[0] for r in (state.get("last_trigger_hard_reasons") or [])}
|
||||||
|
current_hard_types = {r.split(":")[0] for r in hard_reasons}
|
||||||
|
if current_hard_types - last_trigger_hard_types:
|
||||||
|
delta += 2.0
|
||||||
|
return delta
|
||||||
|
|
||||||
|
signal_delta = _signal_delta()
|
||||||
|
effective_cooldown = cooldown_minutes
|
||||||
|
if signal_delta < 1.0:
|
||||||
|
effective_cooldown = max(cooldown_minutes, 90)
|
||||||
|
elif signal_delta >= 2.5:
|
||||||
|
effective_cooldown = max(0, cooldown_minutes - 15)
|
||||||
|
|
||||||
|
cooldown_active = bool(last_triggered_at and now - last_triggered_at < timedelta(minutes=effective_cooldown))
|
||||||
|
|
||||||
|
dedup_window = timedelta(minutes=HARD_REASON_DEDUP_MINUTES)
|
||||||
|
for hr in list(hard_reasons):
|
||||||
|
last_at = parse_ts(last_hard_reasons_at.get(hr))
|
||||||
|
if last_at and now - last_at < dedup_window:
|
||||||
|
hard_reasons.remove(hr)
|
||||||
|
details.append(f"{hr} triggered recently, deduplicated within {HARD_REASON_DEDUP_MINUTES} minutes")
|
||||||
|
|
||||||
|
hard_trigger = bool(hard_reasons)
|
||||||
|
if profile.get("dust_mode") and not hard_trigger and soft_score < soft_score_threshold + 1.0:
|
||||||
|
details.append("Dust-mode portfolio: raising soft-trigger threshold to avoid noise")
|
||||||
|
|
||||||
|
if profile.get("dust_mode") and not profile.get("new_entries_allowed") and any(
|
||||||
|
r in {"new-leader", "candidate-leader-init"} for r in soft_reasons
|
||||||
|
):
|
||||||
|
details.append("Available capital below executable threshold; new candidates are observation-only")
|
||||||
|
soft_score = max(0.0, soft_score - 0.75)
|
||||||
|
|
||||||
|
should_analyze = hard_trigger or soft_score >= soft_score_threshold
|
||||||
|
|
||||||
|
if cooldown_active and not hard_trigger and should_analyze:
|
||||||
|
should_analyze = False
|
||||||
|
details.append(f"In {cooldown_minutes} minute cooldown window; soft trigger logged but not escalated")
|
||||||
|
|
||||||
|
if cooldown_active and not hard_trigger and reasons and soft_score < soft_score_threshold:
|
||||||
|
details.append(f"In {cooldown_minutes} minute cooldown window with insufficient soft signal ({soft_score:.2f} < {soft_score_threshold:.2f})")
|
||||||
|
|
||||||
|
status = "deep_analysis_required" if should_analyze else "stable"
|
||||||
|
|
||||||
|
compact_lines = [
|
||||||
|
f"Status: {status}",
|
||||||
|
f"Portfolio: ${snapshot['portfolio_value_usdt']:.4f} | Free USDT: ${snapshot['free_usdt']:.4f}",
|
||||||
|
f"Session: {snapshot['session']} | TZ: {snapshot['timezone']}",
|
||||||
|
f"BTC/ETH: {market.get('btc_regime')} ({market.get('btc_24h_pct')}%), {market.get('eth_regime')} ({market.get('eth_24h_pct')}%) | Volatility score {market.get('volatility_score')}",
|
||||||
|
f"Profile: capital={profile['capital_band']}, session={profile['session_mode']}, volatility={profile['volatility_mode']}, dust={profile['dust_mode']}",
|
||||||
|
f"Thresholds: price={price_trigger:.1%}, pnl={pnl_trigger:.1%}, portfolio={portfolio_trigger:.1%}, candidate={candidate_ratio_trigger:.2f}, cooldown={effective_cooldown}m({cooldown_minutes}m base), force={force_minutes}m",
|
||||||
|
f"Soft score: {soft_score:.2f} / {soft_score_threshold:.2f}",
|
||||||
|
f"Signal delta: {signal_delta:.1f}",
|
||||||
|
]
|
||||||
|
if snapshot["positions"]:
|
||||||
|
compact_lines.append("Positions:")
|
||||||
|
for pos in snapshot["positions"][:4]:
|
||||||
|
pnl = pos.get("pnl_pct")
|
||||||
|
pnl_text = f"{pnl:+.1%}" if pnl is not None else "n/a"
|
||||||
|
compact_lines.append(
|
||||||
|
f"- {pos['symbol']}: qty={pos['quantity']}, px={pos.get('last_price')}, pnl={pnl_text}, value=${pos.get('market_value_usdt')}"
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
compact_lines.append("Positions: no spot positions currently")
|
||||||
|
if snapshot["top_candidates"]:
|
||||||
|
compact_lines.append("Candidates:")
|
||||||
|
for cand in snapshot["top_candidates"]:
|
||||||
|
compact_lines.append(
|
||||||
|
f"- {cand['symbol']}: score={cand['score']}, 24h={cand['change_24h_pct']}%, vol=${cand['volume_24h']}"
|
||||||
|
)
|
||||||
|
layers = snapshot.get("top_candidates_layers", {})
|
||||||
|
for band, band_cands in layers.items():
|
||||||
|
if band_cands:
|
||||||
|
compact_lines.append(f"{band} tier:")
|
||||||
|
for cand in band_cands:
|
||||||
|
compact_lines.append(
|
||||||
|
f"- {cand['symbol']}: score={cand['score']}, 24h={cand['change_24h_pct']}%, vol=${cand['volume_24h']}"
|
||||||
|
)
|
||||||
|
if details:
|
||||||
|
compact_lines.append("Trigger notes:")
|
||||||
|
for item in details:
|
||||||
|
compact_lines.append(f"- {item}")
|
||||||
|
|
||||||
|
return {
|
||||||
|
"generated_at": snapshot["generated_at"],
|
||||||
|
"status": status,
|
||||||
|
"should_analyze": should_analyze,
|
||||||
|
"pending_trigger": pending_trigger,
|
||||||
|
"run_requested": bool(run_requested_at),
|
||||||
|
"run_requested_at": run_requested_at.isoformat() if run_requested_at else None,
|
||||||
|
"cooldown_active": cooldown_active,
|
||||||
|
"effective_cooldown_minutes": effective_cooldown,
|
||||||
|
"signal_delta": round(signal_delta, 2),
|
||||||
|
"reasons": reasons,
|
||||||
|
"hard_reasons": hard_reasons,
|
||||||
|
"soft_reasons": soft_reasons,
|
||||||
|
"soft_score": round(soft_score, 3),
|
||||||
|
"adaptive_profile": profile,
|
||||||
|
"portfolio_value_usdt": snapshot["portfolio_value_usdt"],
|
||||||
|
"free_usdt": snapshot["free_usdt"],
|
||||||
|
"market_regime": snapshot["market_regime"],
|
||||||
|
"session": snapshot["session"],
|
||||||
|
"positions": snapshot["positions"],
|
||||||
|
"top_candidates": snapshot["top_candidates"],
|
||||||
|
"top_candidates_layers": layers,
|
||||||
|
"snapshot_hash": snapshot["snapshot_hash"],
|
||||||
|
"compact_summary": "\n".join(compact_lines),
|
||||||
|
"details": details,
|
||||||
|
}
|
||||||
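The hard/soft gating that `analyze_trigger` implements can be easier to see in isolation. The sketch below is a reduced model with hypothetical names, not repository code: hard reasons escalate unconditionally, while soft signals accumulate a weighted score that must clear the profile threshold outside the cooldown window.

```python
def should_escalate(hard_reasons, soft_scores, threshold, cooldown_active):
    """Reduced model of the precheck gate: hard reasons always escalate;
    soft signals must clear the threshold and be outside the cooldown."""
    if hard_reasons:  # e.g. hard-stop, regime change, positions-changed
        return True
    # e.g. 1.0 per actionable-position move, 0.4 per dust-position move
    soft_total = sum(soft_scores)
    return soft_total >= threshold and not cooldown_active


print(should_escalate(["hard-stop:BTCUSDT"], [], 2.0, True))   # True: hard reason ignores cooldown
print(should_escalate([], [1.0, 0.4, 1.0], 2.0, False))        # True: 2.4 clears the 2.0 threshold
print(should_escalate([], [1.0, 0.4, 1.0], 2.0, True))         # False: soft-only, suppressed by cooldown
```

The real function additionally dedupes repeated hard reasons, scales the cooldown by a signal delta, and discounts soft signals in dust mode, but the escalation decision reduces to this shape.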
102
src/coinhunter/smart_executor.py
Executable file → Normal file
@@ -1,28 +1,102 @@
#!/usr/bin/env python3
-"""Coin Hunter robust smart executor — compatibility facade."""
+"""Backward-compatible facade for smart executor workflows.
+
+The executable implementation lives in ``coinhunter.services.smart_executor_service``.
+This module stays importable for older callers without importing the whole trading
+stack up front.
+"""
+
+from __future__ import annotations

import sys
+from importlib import import_module

-from .runtime import get_runtime_paths, load_env_file
-from .services.trade_common import CST, is_dry_run, USDT_BUFFER_PCT, MIN_REMAINING_DUST_USDT, log, bj_now_iso, set_dry_run
-from .services.file_utils import locked_file, atomic_write_json, load_json_locked, save_json_locked
-from .services.smart_executor_parser import build_parser, normalize_legacy_argv, parse_cli_args, cli_action_args
-from .services.execution_state import default_decision_id, record_execution_state, get_execution_state, load_executions, save_executions
-from .services.portfolio_service import load_positions, save_positions, upsert_position, reconcile_positions_with_exchange
-from .services.exchange_service import get_exchange, norm_symbol, storage_symbol, fetch_balances, build_market_snapshot, market_and_ticker, floor_to_step, prepare_buy_quantity, prepare_sell_quantity
-from .services.trade_execution import build_decision_context, market_sell, market_buy, action_sell_all, action_buy, action_rebalance, command_status, command_balances
-from .services.smart_executor_service import run as _run_service
-
-PATHS = get_runtime_paths()
-ENV_FILE = PATHS.env_file
+_EXPORT_MAP = {
+    "PATHS": (".runtime", "get_runtime_paths"),
+    "ENV_FILE": (".runtime", "get_runtime_paths"),
+    "load_env_file": (".runtime", "load_env_file"),
+    "CST": (".services.trade_common", "CST"),
+    "USDT_BUFFER_PCT": (".services.trade_common", "USDT_BUFFER_PCT"),
+    "MIN_REMAINING_DUST_USDT": (".services.trade_common", "MIN_REMAINING_DUST_USDT"),
+    "is_dry_run": (".services.trade_common", "is_dry_run"),
+    "log": (".services.trade_common", "log"),
+    "bj_now_iso": (".services.trade_common", "bj_now_iso"),
+    "set_dry_run": (".services.trade_common", "set_dry_run"),
+    "locked_file": (".services.file_utils", "locked_file"),
+    "atomic_write_json": (".services.file_utils", "atomic_write_json"),
+    "load_json_locked": (".services.file_utils", "load_json_locked"),
+    "save_json_locked": (".services.file_utils", "save_json_locked"),
+    "build_parser": (".services.smart_executor_parser", "build_parser"),
+    "normalize_legacy_argv": (".services.smart_executor_parser", "normalize_legacy_argv"),
+    "parse_cli_args": (".services.smart_executor_parser", "parse_cli_args"),
+    "cli_action_args": (".services.smart_executor_parser", "cli_action_args"),
+    "default_decision_id": (".services.execution_state", "default_decision_id"),
+    "record_execution_state": (".services.execution_state", "record_execution_state"),
+    "get_execution_state": (".services.execution_state", "get_execution_state"),
+    "load_executions": (".services.execution_state", "load_executions"),
+    "save_executions": (".services.execution_state", "save_executions"),
+    "load_positions": (".services.portfolio_service", "load_positions"),
+    "save_positions": (".services.portfolio_service", "save_positions"),
+    "update_positions": (".services.portfolio_service", "update_positions"),
+    "upsert_position": (".services.portfolio_service", "upsert_position"),
+    "reconcile_positions_with_exchange": (".services.portfolio_service", "reconcile_positions_with_exchange"),
+    "get_exchange": (".services.exchange_service", "get_exchange"),
+    "norm_symbol": (".services.exchange_service", "norm_symbol"),
+    "storage_symbol": (".services.exchange_service", "storage_symbol"),
+    "fetch_balances": (".services.exchange_service", "fetch_balances"),
+    "build_market_snapshot": (".services.exchange_service", "build_market_snapshot"),
+    "market_and_ticker": (".services.exchange_service", "market_and_ticker"),
+    "floor_to_step": (".services.exchange_service", "floor_to_step"),
+    "prepare_buy_quantity": (".services.exchange_service", "prepare_buy_quantity"),
+    "prepare_sell_quantity": (".services.exchange_service", "prepare_sell_quantity"),
+    "build_decision_context": (".services.trade_execution", "build_decision_context"),
+    "market_sell": (".services.trade_execution", "market_sell"),
+    "market_buy": (".services.trade_execution", "market_buy"),
+    "action_sell_all": (".services.trade_execution", "action_sell_all"),
+    "action_buy": (".services.trade_execution", "action_buy"),
+    "action_rebalance": (".services.trade_execution", "action_rebalance"),
+    "command_status": (".services.trade_execution", "command_status"),
+    "command_balances": (".services.trade_execution", "command_balances"),
+    "command_orders": (".services.trade_execution", "command_orders"),
+    "command_order_status": (".services.trade_execution", "command_order_status"),
+    "command_cancel": (".services.trade_execution", "command_cancel"),
+}
+
+__all__ = sorted(set(_EXPORT_MAP) | {"ENV_FILE", "PATHS", "load_env", "main"})
+
+
+def __getattr__(name: str):
+    if name == "PATHS":
+        runtime = import_module(".runtime", __package__)
+        return runtime.get_runtime_paths()
+    if name == "ENV_FILE":
+        runtime = import_module(".runtime", __package__)
+        return runtime.get_runtime_paths().env_file
+    if name == "load_env":
+        return load_env
+    if name not in _EXPORT_MAP:
+        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
+    module_name, attr_name = _EXPORT_MAP[name]
+    module = import_module(module_name, __package__)
+    if name == "PATHS":
+        return getattr(module, attr_name)()
+    if name == "ENV_FILE":
+        return getattr(module, attr_name)().env_file
+    return getattr(module, attr_name)
+
+
+def __dir__():
+    return sorted(set(globals()) | set(__all__))


def load_env():
-    load_env_file(PATHS)
+    runtime = import_module(".runtime", __package__)
+    runtime.load_env_file(runtime.get_runtime_paths())


def main(argv=None):
-    return _run_service(argv)
+    from .services.smart_executor_service import run as _run_service
+    return _run_service(sys.argv[1:] if argv is None else argv)


if __name__ == "__main__":
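The rewritten facade above relies on PEP 562 module-level `__getattr__` to defer each import until the exported name is first touched. A minimal, self-contained sketch of that lazy-export pattern (the `facade_demo` module and its export map are hypothetical, not part of the repository):

```python
import importlib
import sys
import types

# Build a throwaway module that lazily resolves its exports, mirroring the
# facade's _EXPORT_MAP + module __getattr__ approach (PEP 562).
facade = types.ModuleType("facade_demo")
facade._EXPORT_MAP = {"sqrt": ("math", "sqrt")}


def _module_getattr(name):
    try:
        module_name, attr_name = facade._EXPORT_MAP[name]
    except KeyError:
        raise AttributeError(f"module 'facade_demo' has no attribute {name!r}")
    module = importlib.import_module(module_name)  # imported only on first access
    value = getattr(module, attr_name)
    setattr(facade, name, value)  # cache so later lookups bypass __getattr__
    return value


facade.__getattr__ = _module_getattr  # the PEP 562 hook lives in the module dict
sys.modules["facade_demo"] = facade

import facade_demo

print(facade_demo.sqrt(9.0))  # math.sqrt resolved lazily; prints 3.0
```

The real facade goes further, special-casing `PATHS`/`ENV_FILE` (which must be computed, not just re-exported) and deriving `__all__` from the map, but the lookup-import-cache cycle is the same.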
0
tests/__init__.py
Normal file
70
tests/test_check_api.py
Normal file
@@ -0,0 +1,70 @@
"""Tests for check_api command."""

import json
from unittest.mock import MagicMock, patch

from coinhunter.commands import check_api


class TestMain:
    def test_missing_api_key(self, monkeypatch, capsys):
        monkeypatch.setenv("BINANCE_API_KEY", "")
        monkeypatch.setenv("BINANCE_API_SECRET", "secret")
        rc = check_api.main()
        assert rc == 1
        out = json.loads(capsys.readouterr().out)
        assert out["ok"] is False
        assert "BINANCE_API_KEY" in out["error"]

    def test_missing_api_secret(self, monkeypatch, capsys):
        monkeypatch.setenv("BINANCE_API_KEY", "key")
        monkeypatch.setenv("BINANCE_API_SECRET", "")
        rc = check_api.main()
        assert rc == 1
        out = json.loads(capsys.readouterr().out)
        assert out["ok"] is False
        assert "BINANCE_API_SECRET" in out["error"]

    def test_placeholder_api_key(self, monkeypatch, capsys):
        monkeypatch.setenv("BINANCE_API_KEY", "your_api_key")
        monkeypatch.setenv("BINANCE_API_SECRET", "secret")
        rc = check_api.main()
        assert rc == 1

    def test_balance_fetch_failure(self, monkeypatch, capsys):
        monkeypatch.setenv("BINANCE_API_KEY", "key")
        monkeypatch.setenv("BINANCE_API_SECRET", "secret")
        mock_ex = MagicMock()
        mock_ex.fetch_balance.side_effect = Exception("Network error")
        with patch("coinhunter.commands.check_api.ccxt.binance", return_value=mock_ex):
            rc = check_api.main()
        assert rc == 1
        out = json.loads(capsys.readouterr().out)
        assert "Failed to connect" in out["error"]

    def test_success_with_spot_trading(self, monkeypatch, capsys):
        monkeypatch.setenv("BINANCE_API_KEY", "key")
        monkeypatch.setenv("BINANCE_API_SECRET", "secret")
        mock_ex = MagicMock()
        mock_ex.fetch_balance.return_value = {"USDT": 100.0}
        mock_ex.sapi_get_account_api_restrictions.return_value = {"enableSpotTrading": True}
        with patch("coinhunter.commands.check_api.ccxt.binance", return_value=mock_ex):
            rc = check_api.main()
        assert rc == 0
        out = json.loads(capsys.readouterr().out)
        assert out["ok"] is True
        assert out["read_permission"] is True
        assert out["spot_trading_enabled"] is True

    def test_success_restrictions_query_fails(self, monkeypatch, capsys):
        monkeypatch.setenv("BINANCE_API_KEY", "key")
        monkeypatch.setenv("BINANCE_API_SECRET", "secret")
        mock_ex = MagicMock()
        mock_ex.fetch_balance.return_value = {"USDT": 100.0}
        mock_ex.sapi_get_account_api_restrictions.side_effect = Exception("no permission")
        with patch("coinhunter.commands.check_api.ccxt.binance", return_value=mock_ex):
            rc = check_api.main()
        assert rc == 0
        out = json.loads(capsys.readouterr().out)
        assert out["spot_trading_enabled"] is None
        assert "may be null" in out["note"]
58
tests/test_cli.py
Normal file
@@ -0,0 +1,58 @@
"""Tests for CLI routing and parser behavior."""

import pytest

from coinhunter.cli import ALIASES, MODULE_MAP, build_parser, run_python_module


class TestAliases:
    def test_all_aliases_resolve_to_canonical(self):
        for alias, canonical in ALIASES.items():
            assert canonical in MODULE_MAP, f"alias {alias!r} points to missing canonical {canonical!r}"

    def test_no_alias_is_itself_an_alias_loop(self):
        for alias in ALIASES:
            assert alias not in ALIASES.values() or alias in MODULE_MAP


class TestModuleMap:
    def test_all_modules_exist(self):
        import importlib

        for command, module_name in MODULE_MAP.items():
            full = f"coinhunter.{module_name}"
            mod = importlib.import_module(full)
            assert hasattr(mod, "main"), f"{full} missing main()"


class TestBuildParser:
    def test_help_includes_commands(self):
        parser = build_parser()
        help_text = parser.format_help()
        assert "coinhunter diag" in help_text
        assert "coinhunter exec" in help_text

    def test_parses_command_and_args(self):
        parser = build_parser()
        ns = parser.parse_args(["exec", "bal"])
        assert ns.command == "exec"
        assert "bal" in ns.args

    def test_version_action_exits(self):
        parser = build_parser()
        with pytest.raises(SystemExit) as exc:
            parser.parse_args(["--version"])
        assert exc.value.code == 0


class TestRunPythonModule:
    def test_runs_module_main_and_returns_int(self):
        result = run_python_module("commands.paths", [], "coinhunter paths")
        assert result == 0

    def test_mutates_sys_argv(self):
        import sys

        original = sys.argv[:]
        run_python_module("commands.paths", ["--help"], "coinhunter paths")
        assert sys.argv == original
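The argv contract these dispatcher tests pin down (swap `sys.argv` in for the duration of the call, restore it afterwards) can be sketched standalone. `dispatch` and the throwaway `demo_cmd` module below are hypothetical, not the repository's `run_python_module`:

```python
import importlib
import sys
import types


def dispatch(module_name, args, display_name):
    """Swap argv so the target's argument parser sees the display name,
    call its main(), and restore the original argv even if main() raises."""
    module = importlib.import_module(module_name)
    saved = sys.argv[:]
    sys.argv = [display_name, *args]
    try:
        rc = module.main()
        return int(rc) if rc is not None else 0
    finally:
        sys.argv = saved


# Register a throwaway command module to dispatch into.
demo = types.ModuleType("demo_cmd")
demo.main = lambda: 0
sys.modules["demo_cmd"] = demo

before = sys.argv[:]
print(dispatch("demo_cmd", ["--flag"], "coinhunter demo"))  # prints 0
print(sys.argv == before)  # argv restored: prints True
```

The `finally` block is what makes `test_mutates_sys_argv` pass: restoration happens regardless of the return path.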
98
tests/test_exchange_service.py
Normal file
@@ -0,0 +1,98 @@
"""Tests for exchange helpers and caching."""

import pytest

from coinhunter.services import exchange_service as es


class TestNormSymbol:
    def test_already_normalized(self):
        assert es.norm_symbol("BTC/USDT") == "BTC/USDT"

    def test_raw_symbol(self):
        assert es.norm_symbol("BTCUSDT") == "BTC/USDT"

    def test_lowercase(self):
        assert es.norm_symbol("btcusdt") == "BTC/USDT"

    def test_dash_separator(self):
        assert es.norm_symbol("BTC-USDT") == "BTC/USDT"

    def test_underscore_separator(self):
        assert es.norm_symbol("BTC_USDT") == "BTC/USDT"

    def test_unsupported_symbol(self):
        with pytest.raises(ValueError):
            es.norm_symbol("BTC")


class TestStorageSymbol:
    def test_converts_to_flat(self):
        assert es.storage_symbol("BTC/USDT") == "BTCUSDT"

    def test_raw_input(self):
        assert es.storage_symbol("ETHUSDT") == "ETHUSDT"


class TestFloorToStep:
    def test_no_step(self):
        assert es.floor_to_step(1.234, 0) == 1.234

    def test_floors_to_step(self):
        assert es.floor_to_step(1.234, 0.1) == pytest.approx(1.2)

    def test_exact_multiple(self):
        assert es.floor_to_step(12, 1) == 12


class TestGetExchangeCaching:
    def test_returns_cached_instance(self, monkeypatch):
        monkeypatch.setenv("BINANCE_API_KEY", "test_key")
        monkeypatch.setenv("BINANCE_API_SECRET", "test_secret")

        class FakeEx:
            def load_markets(self):
                pass

        # Reset cache and patch ccxt.binance
        original_cache = es._exchange_cache
        original_cached_at = es._exchange_cached_at
        try:
            es._exchange_cache = None
            es._exchange_cached_at = None
            monkeypatch.setattr(es.ccxt, "binance", lambda *a, **kw: FakeEx())

            first = es.get_exchange()
            second = es.get_exchange()
            assert first is second

            third = es.get_exchange(force_new=True)
            assert third is not first
        finally:
            es._exchange_cache = original_cache
            es._exchange_cached_at = original_cached_at

    def test_cache_expires_after_ttl(self, monkeypatch):
        monkeypatch.setenv("BINANCE_API_KEY", "test_key")
        monkeypatch.setenv("BINANCE_API_SECRET", "test_secret")

        class FakeEx:
            def load_markets(self):
                pass

        original_cache = es._exchange_cache
        original_cached_at = es._exchange_cached_at
        try:
            es._exchange_cache = None
            es._exchange_cached_at = None
            monkeypatch.setattr(es.ccxt, "binance", lambda *a, **kw: FakeEx())
            monkeypatch.setattr(es, "load_env", lambda: None)

            first = es.get_exchange()
            # Simulate time passing beyond TTL
            es._exchange_cached_at -= es.CACHE_TTL_SECONDS + 1
            second = es.get_exchange()
            assert first is not second
        finally:
            es._exchange_cache = original_cache
            es._exchange_cached_at = original_cached_at
203
tests/test_external_gate.py
Normal file
@@ -0,0 +1,203 @@
"""Tests for external_gate configurable hook."""

import json
from unittest.mock import MagicMock, patch

from coinhunter.commands import external_gate


class TestResolveTriggerCommand:
    def test_missing_config_means_disabled(self, tmp_path, monkeypatch):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        # no config file exists
        cmd = external_gate._resolve_trigger_command(paths)
        assert cmd is None

    def test_uses_explicit_list_from_config(self, tmp_path, monkeypatch):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.root.mkdir(parents=True, exist_ok=True)
        paths.config_file.write_text(json.dumps({
            "external_gate": {"trigger_command": ["my-scheduler", "run", "job-123"]}
        }), encoding="utf-8")
        cmd = external_gate._resolve_trigger_command(paths)
        assert cmd == ["my-scheduler", "run", "job-123"]

    def test_null_means_disabled(self, tmp_path, monkeypatch):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.root.mkdir(parents=True, exist_ok=True)
        paths.config_file.write_text(json.dumps({
            "external_gate": {"trigger_command": None}
        }), encoding="utf-8")
        cmd = external_gate._resolve_trigger_command(paths)
        assert cmd is None

    def test_empty_list_means_disabled(self, tmp_path, monkeypatch):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.root.mkdir(parents=True, exist_ok=True)
        paths.config_file.write_text(json.dumps({
            "external_gate": {"trigger_command": []}
        }), encoding="utf-8")
        cmd = external_gate._resolve_trigger_command(paths)
        assert cmd is None

    def test_string_gets_wrapped(self, tmp_path, monkeypatch):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.root.mkdir(parents=True, exist_ok=True)
        paths.config_file.write_text(json.dumps({
            "external_gate": {"trigger_command": "my-script.sh"}
        }), encoding="utf-8")
        cmd = external_gate._resolve_trigger_command(paths)
        assert cmd == ["my-script.sh"]

    def test_unexpected_type_returns_none_and_warns(self, tmp_path, monkeypatch, capsys):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.root.mkdir(parents=True, exist_ok=True)
        paths.config_file.write_text(json.dumps({
            "external_gate": {"trigger_command": 42}
        }), encoding="utf-8")
        cmd = external_gate._resolve_trigger_command(paths)
        assert cmd is None


class TestMain:
    def test_already_running(self, tmp_path, monkeypatch, capsys):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.state_dir.mkdir(parents=True, exist_ok=True)
        # Acquire the lock in this process so main() sees it as busy
        import fcntl
        with open(paths.external_gate_lock, "w", encoding="utf-8") as f:
            fcntl.flock(f.fileno(), fcntl.LOCK_EX)
            rc = external_gate.main()
        assert rc == 0
        out = json.loads(capsys.readouterr().out)
        assert out["reason"] == "already_running"

    def test_precheck_failure(self, tmp_path, monkeypatch, capsys):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.root.mkdir(parents=True, exist_ok=True)
        fake_result = MagicMock(returncode=1, stdout="err", stderr="")
        with patch("coinhunter.commands.external_gate.run_cmd", return_value=fake_result):
            rc = external_gate.main()
        assert rc == 1
        out = json.loads(capsys.readouterr().out)
        assert out["reason"] == "precheck_failed"

    def test_precheck_parse_error(self, tmp_path, monkeypatch, capsys):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.root.mkdir(parents=True, exist_ok=True)
        fake_result = MagicMock(returncode=0, stdout="not-json", stderr="")
        with patch("coinhunter.commands.external_gate.run_cmd", return_value=fake_result):
            rc = external_gate.main()
        assert rc == 1
        out = json.loads(capsys.readouterr().out)
        assert out["reason"] == "precheck_parse_error"

    def test_precheck_not_ok(self, tmp_path, monkeypatch, capsys):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.root.mkdir(parents=True, exist_ok=True)
        fake_result = MagicMock(returncode=0, stdout=json.dumps({"ok": False}), stderr="")
        with patch("coinhunter.commands.external_gate.run_cmd", return_value=fake_result):
            rc = external_gate.main()
        assert rc == 1
        out = json.loads(capsys.readouterr().out)
        assert out["reason"] == "precheck_not_ok"

    def test_no_trigger(self, tmp_path, monkeypatch, capsys):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.root.mkdir(parents=True, exist_ok=True)
        fake_result = MagicMock(returncode=0, stdout=json.dumps({"ok": True, "should_analyze": False}), stderr="")
        with patch("coinhunter.commands.external_gate.run_cmd", return_value=fake_result):
            rc = external_gate.main()
        assert rc == 0
        out = json.loads(capsys.readouterr().out)
        assert out["reason"] == "no_trigger"

    def test_already_queued(self, tmp_path, monkeypatch, capsys):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.root.mkdir(parents=True, exist_ok=True)
        fake_result = MagicMock(
            returncode=0,
            stdout=json.dumps({"ok": True, "should_analyze": True, "run_requested": True, "run_requested_at": "2024-01-01T00:00:00Z"}),
            stderr="",
        )
        with patch("coinhunter.commands.external_gate.run_cmd", return_value=fake_result):
            rc = external_gate.main()
        assert rc == 0
        out = json.loads(capsys.readouterr().out)
        assert out["reason"] == "already_queued"

    def test_trigger_disabled(self, tmp_path, monkeypatch, capsys):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.root.mkdir(parents=True, exist_ok=True)
        paths.config_file.write_text(json.dumps({"external_gate": {"trigger_command": None}}), encoding="utf-8")
        # First call is precheck, second is mark-run-requested
        responses = [
            MagicMock(returncode=0, stdout=json.dumps({"ok": True, "should_analyze": True}), stderr=""),
            MagicMock(returncode=0, stdout=json.dumps({"ok": True}), stderr=""),
        ]
        with patch("coinhunter.commands.external_gate.run_cmd", side_effect=responses):
            rc = external_gate.main()
        assert rc == 0
        out = json.loads(capsys.readouterr().out)
        assert out["reason"] == "trigger_disabled"

    def test_trigger_success(self, tmp_path, monkeypatch, capsys):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.root.mkdir(parents=True, exist_ok=True)
        paths.config_file.write_text(json.dumps({"external_gate": {"trigger_command": ["echo", "ok"]}}), encoding="utf-8")
        responses = [
            MagicMock(returncode=0, stdout=json.dumps({"ok": True, "should_analyze": True, "reasons": ["price-move"]}), stderr=""),
            MagicMock(returncode=0, stdout=json.dumps({"ok": True}), stderr=""),
            MagicMock(returncode=0, stdout="triggered", stderr=""),
        ]
        with patch("coinhunter.commands.external_gate.run_cmd", side_effect=responses):
            rc = external_gate.main()
        assert rc == 0
        out = json.loads(capsys.readouterr().out)
        assert out["triggered"] is True
        assert out["reason"] == "price-move"

    def test_trigger_command_failure(self, tmp_path, monkeypatch, capsys):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.root.mkdir(parents=True, exist_ok=True)
        paths.config_file.write_text(json.dumps({"external_gate": {"trigger_command": ["false"]}}), encoding="utf-8")
        responses = [
            MagicMock(returncode=0, stdout=json.dumps({"ok": True, "should_analyze": True}), stderr=""),
            MagicMock(returncode=0, stdout=json.dumps({"ok": True}), stderr=""),
            MagicMock(returncode=1, stdout="", stderr="fail"),
        ]
        with patch("coinhunter.commands.external_gate.run_cmd", side_effect=responses):
            rc = external_gate.main()
        assert rc == 1
        out = json.loads(capsys.readouterr().out)
        assert out["reason"] == "trigger_failed"

    def test_mark_run_requested_failure(self, tmp_path, monkeypatch, capsys):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = external_gate._paths()
        paths.root.mkdir(parents=True, exist_ok=True)
        paths.config_file.write_text(json.dumps({"external_gate": {"trigger_command": ["echo", "ok"]}}), encoding="utf-8")
        responses = [
            MagicMock(returncode=0, stdout=json.dumps({"ok": True, "should_analyze": True}), stderr=""),
            MagicMock(returncode=1, stdout="err", stderr=""),
        ]
        with patch("coinhunter.commands.external_gate.run_cmd", side_effect=responses):
            rc = external_gate.main()
        assert rc == 1
        out = json.loads(capsys.readouterr().out)
        assert out["reason"] == "mark_failed"
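Taken together, the `TestResolveTriggerCommand` cases describe one small normalization rule for `trigger_command`: a missing config, explicit `null`, empty list, or unrecognized type all disable the hook, while a bare string is wrapped into a one-element argv. A sketch of that rule as a standalone helper (a hypothetical name — not the real `_resolve_trigger_command`, which also reads the config file):

```python
def normalize_trigger_command(value):
    """Normalize a configured trigger_command into an argv list, or None when disabled."""
    if value is None:
        return None                 # explicit null disables the hook
    if isinstance(value, str):
        return [value]              # bare string is wrapped into a one-element argv
    if isinstance(value, list):
        # empty list disables; a non-empty list is used as-is
        return list(value) or None
    # any other type (e.g. an int) is treated as disabled; the real command also warns
    return None
```

A missing config file maps to the `value is None` branch, so every "disabled" case funnels through the same `None` result.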
194
tests/test_review_service.py
Normal file
@@ -0,0 +1,194 @@
"""Tests for review_service."""

import json
from pathlib import Path
from unittest.mock import patch

from coinhunter.services import review_service as rs


class TestAnalyzeTrade:
    def test_buy_good_when_price_up(self):
        ex = None
        trade = {"symbol": "BTC/USDT", "price": 50000.0, "action": "BUY"}
        with patch.object(rs, "fetch_current_price", return_value=52000.0):
            result = rs.analyze_trade(trade, ex)
        assert result["action"] == "BUY"
        assert result["pnl_estimate_pct"] == 4.0
        assert result["outcome_assessment"] == "good"

    def test_buy_bad_when_price_down(self):
        trade = {"symbol": "BTC/USDT", "price": 50000.0, "action": "BUY"}
        with patch.object(rs, "fetch_current_price", return_value=48000.0):
            result = rs.analyze_trade(trade, None)
        assert result["pnl_estimate_pct"] == -4.0
        assert result["outcome_assessment"] == "bad"

    def test_sell_all_missed_when_price_up(self):
        trade = {"symbol": "BTC/USDT", "price": 50000.0, "action": "SELL_ALL"}
        with patch.object(rs, "fetch_current_price", return_value=53000.0):
            result = rs.analyze_trade(trade, None)
        assert result["pnl_estimate_pct"] == -6.0
        assert result["outcome_assessment"] == "missed"

    def test_neutral_when_small_move(self):
        trade = {"symbol": "BTC/USDT", "price": 50000.0, "action": "BUY"}
        with patch.object(rs, "fetch_current_price", return_value=50100.0):
            result = rs.analyze_trade(trade, None)
        assert result["outcome_assessment"] == "neutral"

    def test_none_when_no_price(self):
        trade = {"symbol": "BTC/USDT", "action": "BUY"}
        result = rs.analyze_trade(trade, None)
        assert result["pnl_estimate_pct"] is None
        assert result["outcome_assessment"] == "neutral"


class TestAnalyzeHoldPasses:
    def test_finds_passed_opportunities_that_rise(self):
        decisions = [
            {
                "timestamp": "2024-01-01T00:00:00Z",
                "decision": "HOLD",
                "analysis": {"opportunities_evaluated": [{"symbol": "ETH/USDT", "verdict": "PASS"}]},
                "market_snapshot": {"ETHUSDT": {"lastPrice": 100.0}},
            }
        ]
        with patch.object(rs, "fetch_current_price", return_value=110.0):
            result = rs.analyze_hold_passes(decisions, None)
        assert len(result) == 1
        assert result[0]["symbol"] == "ETH/USDT"
        assert result[0]["change_pct"] == 10.0

    def test_ignores_non_pass_verdicts(self):
        decisions = [
            {
                "decision": "HOLD",
                "analysis": {"opportunities_evaluated": [{"symbol": "ETH/USDT", "verdict": "BUY"}]},
                "market_snapshot": {"ETHUSDT": {"lastPrice": 100.0}},
            }
        ]
        with patch.object(rs, "fetch_current_price", return_value=110.0):
            result = rs.analyze_hold_passes(decisions, None)
        assert result == []

    def test_ignores_small_moves(self):
        decisions = [
            {
                "decision": "HOLD",
                "analysis": {"opportunities_evaluated": [{"symbol": "ETH/USDT", "verdict": "PASS"}]},
                "market_snapshot": {"ETHUSDT": {"lastPrice": 100.0}},
            }
        ]
        with patch.object(rs, "fetch_current_price", return_value=102.0):
            result = rs.analyze_hold_passes(decisions, None)
        assert result == []


class TestAnalyzeCashMisses:
    def test_finds_cash_sit_misses(self):
        decisions = [
            {
                "timestamp": "2024-01-01T00:00:00Z",
                "balances": {"USDT": 100.0},
                "market_snapshot": {"BTCUSDT": {"lastPrice": 50000.0}},
            }
        ]
        with patch.object(rs, "fetch_current_price", return_value=54000.0):
            result = rs.analyze_cash_misses(decisions, None)
        assert len(result) == 1
        assert result[0]["symbol"] == "BTCUSDT"
        assert result[0]["change_pct"] == 8.0

    def test_requires_mostly_usdt(self):
        decisions = [
            {
                "balances": {"USDT": 10.0, "BTC": 90.0},
                "market_snapshot": {"ETHUSDT": {"lastPrice": 100.0}},
            }
        ]
        with patch.object(rs, "fetch_current_price", return_value=200.0):
            result = rs.analyze_cash_misses(decisions, None)
        assert result == []

    def test_dedupes_by_best_change(self):
        decisions = [
            {
                "timestamp": "2024-01-01T00:00:00Z",
                "balances": {"USDT": 100.0},
                "market_snapshot": {"BTCUSDT": {"lastPrice": 50000.0}},
            },
            {
                "timestamp": "2024-01-01T01:00:00Z",
                "balances": {"USDT": 100.0},
                "market_snapshot": {"BTCUSDT": {"lastPrice": 52000.0}},
            },
        ]
        with patch.object(rs, "fetch_current_price", return_value=56000.0):
            result = rs.analyze_cash_misses(decisions, None)
        assert len(result) == 1
        # best change is from 50000 -> 56000 = 12%
        assert result[0]["change_pct"] == 12.0


class TestGenerateReview:
    def test_empty_period(self, monkeypatch):
        monkeypatch.setattr(rs, "get_logs_last_n_hours", lambda log_type, hours: [])
        review = rs.generate_review(1)
        assert review["total_decisions"] == 0
        assert review["total_trades"] == 0
        assert any("No decisions or trades" in i for i in review["insights"])

    def test_with_trades_and_decisions(self, monkeypatch):
        trades = [
            {"symbol": "BTC/USDT", "price": 50000.0, "action": "BUY", "timestamp": "2024-01-01T00:00:00Z"}
        ]
        decisions = [
            {
                "timestamp": "2024-01-01T00:00:00Z",
                "decision": "HOLD",
                "analysis": {"opportunities_evaluated": [{"symbol": "ETH/USDT", "verdict": "PASS"}]},
                "market_snapshot": {"ETHUSDT": {"lastPrice": 100.0}},
            }
        ]
        errors = [{"message": "oops"}]

        def _logs(log_type, hours):
            return {"decisions": decisions, "trades": trades, "errors": errors}.get(log_type, [])

        monkeypatch.setattr(rs, "get_logs_last_n_hours", _logs)
        monkeypatch.setattr(rs, "fetch_current_price", lambda ex, sym: 110.0 if "ETH" in sym else 52000.0)
        review = rs.generate_review(1)
        assert review["total_trades"] == 1
        assert review["total_decisions"] == 1
        assert review["stats"]["good_decisions"] == 1
        assert review["stats"]["missed_hold_passes"] == 1
        assert any("execution/system errors" in i for i in review["insights"])


class TestSaveReview:
    def test_writes_file(self, tmp_path, monkeypatch):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        review = {"review_timestamp": "2024-01-01T00:00:00+08:00", "review_period_hours": 1}
        path = rs.save_review(review)
        saved = json.loads(Path(path).read_text(encoding="utf-8"))
        assert saved["review_period_hours"] == 1


class TestPrintReview:
    def test_outputs_report(self, capsys):
        review = {
            "review_timestamp": "2024-01-01T00:00:00+08:00",
            "review_period_hours": 1,
            "total_decisions": 2,
            "total_trades": 1,
            "total_errors": 0,
            "stats": {"good_decisions": 1, "neutral_decisions": 0, "bad_decisions": 0, "missed_opportunities": 0},
            "insights": ["Insight A"],
            "recommendations": ["Rec B"],
        }
        rs.print_review(review)
        captured = capsys.readouterr()
        assert "Coin Hunter Review Report" in captured.out
        assert "Insight A" in captured.out
        assert "Rec B" in captured.out
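The `TestAnalyzeTrade` cases imply a simple classification rule: compute the percent move since the trade, take it as-is for a BUY and inverted for a SELL (a sell gives up subsequent upside), then label the result good/bad/missed outside a neutral band. A sketch of that rule under assumed names — `classify_trade` and the 2% band are illustrative guesses that happen to separate the tests' 0.2% "neutral" case from the ±4% cases, not the real `analyze_trade` internals:

```python
NEUTRAL_BAND_PCT = 2.0  # assumed threshold; the real service defines its own band


def classify_trade(action, entry_price, current_price):
    """Estimate pnl% and an outcome label, mirroring the cases the tests assert."""
    if entry_price is None or current_price is None:
        return None, "neutral"  # no price data -> no judgement
    change_pct = (current_price - entry_price) / entry_price * 100.0
    # a sell forgoes the subsequent move, so its pnl estimate is the inverted change
    pnl = change_pct if action == "BUY" else -change_pct
    if abs(pnl) < NEUTRAL_BAND_PCT:
        return round(pnl, 2), "neutral"
    if pnl > 0:
        return round(pnl, 2), "good"
    # a losing buy is "bad"; a sell that forfeited a rally is "missed"
    return round(pnl, 2), "bad" if action == "BUY" else "missed"
```

Run against the tests' fixtures this reproduces their expectations: a BUY at 50000 with the price now 52000 scores +4.0/"good", and a SELL_ALL at 50000 with the price now 53000 scores -6.0/"missed".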
63
tests/test_runtime.py
Normal file
@@ -0,0 +1,63 @@
"""Tests for runtime path resolution."""

from pathlib import Path

import pytest

from coinhunter.runtime import RuntimePaths, ensure_runtime_dirs, get_runtime_paths, mask_secret


class TestGetRuntimePaths:
    def test_defaults_point_to_home_dot_coinhunter(self):
        paths = get_runtime_paths()
        assert paths.root == Path.home() / ".coinhunter"

    def test_respects_coinhunter_home_env(self, monkeypatch):
        monkeypatch.setenv("COINHUNTER_HOME", "~/custom_ch")
        paths = get_runtime_paths()
        assert paths.root == Path.home() / "custom_ch"

    def test_respects_hermes_home_env(self, monkeypatch):
        monkeypatch.setenv("HERMES_HOME", "~/custom_hermes")
        paths = get_runtime_paths()
        assert paths.hermes_home == Path.home() / "custom_hermes"
        assert paths.env_file == Path.home() / "custom_hermes" / ".env"

    def test_returns_frozen_dataclass(self):
        paths = get_runtime_paths()
        assert isinstance(paths, RuntimePaths)
        with pytest.raises(AttributeError):
            paths.root = Path("/tmp")

    def test_as_dict_returns_strings(self):
        paths = get_runtime_paths()
        d = paths.as_dict()
        assert isinstance(d, dict)
        assert all(isinstance(v, str) for v in d.values())
        assert "root" in d


class TestEnsureRuntimeDirs:
    def test_creates_directories(self, tmp_path, monkeypatch):
        monkeypatch.setenv("COINHUNTER_HOME", str(tmp_path))
        paths = get_runtime_paths()
        returned = ensure_runtime_dirs(paths)
        assert returned.root.exists()
        assert returned.state_dir.exists()
        assert returned.logs_dir.exists()
        assert returned.cache_dir.exists()
        assert returned.reviews_dir.exists()


class TestMaskSecret:
    def test_empty_string(self):
        assert mask_secret("") == ""

    def test_short_value(self):
        assert mask_secret("ab") == "**"

    def test_masks_all_but_tail(self):
        assert mask_secret("supersecret", tail=4) == "*******cret"

    def test_none_returns_empty(self):
        assert mask_secret(None) == ""
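The `TestMaskSecret` assertions fully determine the masking rule: empty or `None` maps to `""`, a value no longer than the tail is masked entirely, and otherwise only the last `tail` characters stay visible. A sketch consistent with every case (assuming the default is `tail=4`, which the `"ab"` test implies):

```python
def mask_secret(value, tail=4):
    """Mask a secret, keeping only the last `tail` characters visible."""
    if not value:
        return ""                         # None or empty -> empty string
    if len(value) <= tail:
        return "*" * len(value)           # too short to reveal anything safely
    return "*" * (len(value) - tail) + value[-tail:]
```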
98
tests/test_smart_executor_service.py
Normal file
@@ -0,0 +1,98 @@
"""Tests for smart executor deduplication and routing."""


from coinhunter.services import exchange_service
from coinhunter.services import smart_executor_service as ses


class FakeArgs:
    def __init__(self, command="buy", symbol="BTCUSDT", amount_usdt=50, dry_run=False,
                 decision_id=None, analysis=None, reasoning=None, order_id=None,
                 from_symbol=None, to_symbol=None):
        self.command = command
        self.symbol = symbol
        self.amount_usdt = amount_usdt
        self.dry_run = dry_run
        self.decision_id = decision_id
        self.analysis = analysis
        self.reasoning = reasoning
        self.order_id = order_id
        self.from_symbol = from_symbol
        self.to_symbol = to_symbol


class TestDeduplication:
    def test_skips_duplicate_mutating_action(self, monkeypatch, capsys):
        monkeypatch.setattr(ses, "parse_cli_args", lambda argv: (FakeArgs(command="buy"), ["BTCUSDT", "50"]))
        monkeypatch.setattr(ses, "get_execution_state", lambda did: {"status": "success"})
        get_exchange_calls = []
        monkeypatch.setattr(exchange_service, "get_exchange", lambda: get_exchange_calls.append(1) or "ex")

        result = ses.run(["buy", "BTCUSDT", "50"])
        assert result == 0
        assert get_exchange_calls == []
        captured = capsys.readouterr()
        assert "already executed successfully" in captured.err

    def test_allows_duplicate_read_only_action(self, monkeypatch):
        monkeypatch.setattr(ses, "parse_cli_args", lambda argv: (FakeArgs(command="balances"), ["balances"]))
        monkeypatch.setattr(ses, "get_execution_state", lambda did: {"status": "success"})
        monkeypatch.setattr(ses, "command_balances", lambda ex: None)
        get_exchange_calls = []
        monkeypatch.setattr(exchange_service, "get_exchange", lambda: get_exchange_calls.append(1) or "ex")

        result = ses.run(["bal"])
        assert result == 0
        assert len(get_exchange_calls) == 1

    def test_allows_retry_after_failure(self, monkeypatch):
        monkeypatch.setattr(ses, "parse_cli_args", lambda argv: (FakeArgs(command="buy"), ["BTCUSDT", "50"]))
        monkeypatch.setattr(ses, "get_execution_state", lambda did: {"status": "failed"})
        monkeypatch.setattr(ses, "record_execution_state", lambda did, state: None)
        monkeypatch.setattr(ses, "build_decision_context", lambda ex, action, tail, did: {})
        monkeypatch.setattr(ses, "action_buy", lambda *a, **k: {"id": "123"})
        monkeypatch.setattr(ses, "print_json", lambda d: None)
        get_exchange_calls = []
        monkeypatch.setattr(exchange_service, "get_exchange", lambda: get_exchange_calls.append(1) or "ex")

        result = ses.run(["buy", "BTCUSDT", "50"])
        assert result == 0
        assert len(get_exchange_calls) == 1


class TestReadOnlyRouting:
    def test_routes_balances(self, monkeypatch):
        monkeypatch.setattr(ses, "parse_cli_args", lambda argv: (FakeArgs(command="balances"), ["balances"]))
        monkeypatch.setattr(ses, "get_execution_state", lambda did: None)
        routed = []
        monkeypatch.setattr(ses, "command_balances", lambda ex: routed.append("balances"))
        monkeypatch.setattr(exchange_service, "get_exchange", lambda: "ex")

        result = ses.run(["balances"])
        assert result == 0
        assert routed == ["balances"]

    def test_routes_orders(self, monkeypatch):
        monkeypatch.setattr(ses, "parse_cli_args", lambda argv: (FakeArgs(command="orders"), ["orders"]))
        monkeypatch.setattr(ses, "get_execution_state", lambda did: None)
        routed = []
        monkeypatch.setattr(ses, "command_orders", lambda ex: routed.append("orders"))
        monkeypatch.setattr(exchange_service, "get_exchange", lambda: "ex")

        result = ses.run(["orders"])
        assert result == 0
        assert routed == ["orders"]

    def test_routes_order_status(self, monkeypatch):
        monkeypatch.setattr(ses, "parse_cli_args", lambda argv: (
            FakeArgs(command="order-status", symbol="BTCUSDT", order_id="123"),
            ["order-status", "BTCUSDT", "123"]
        ))
        monkeypatch.setattr(ses, "get_execution_state", lambda did: None)
        routed = []
        monkeypatch.setattr(ses, "command_order_status", lambda ex, sym, oid: routed.append((sym, oid)))
        monkeypatch.setattr(exchange_service, "get_exchange", lambda: "ex")

        result = ses.run(["order-status", "BTCUSDT", "123"])
        assert result == 0
        assert routed == [("BTCUSDT", "123")]
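The deduplication these tests exercise amounts to a small gate in front of execution: a mutating command whose decision already succeeded is skipped, read-only commands always run, and a recorded failure permits a retry. A sketch of that gate (both the helper name and the read-only command set are assumptions; the real `smart_executor_service.run` does far more):

```python
READ_ONLY_COMMANDS = {"balances", "orders", "order-status"}  # assumed set, from the routed tests


def should_execute(command, prior_state):
    """Return False only for a mutating command whose decision already succeeded."""
    if command in READ_ONLY_COMMANDS:
        return True                       # read-only actions are always safe to repeat
    if prior_state and prior_state.get("status") == "success":
        return False                      # duplicate mutating action -> skip
    return True                           # no record, or a prior failure worth retrying
```

Skipping before the exchange is even built matches the first test's assertion that `get_exchange` is never called for a duplicate buy.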
99
tests/test_state_manager.py
Normal file
@@ -0,0 +1,99 @@
"""Tests for state_manager precheck utilities."""

from datetime import timedelta, timezone

from coinhunter.services.state_manager import (
    clear_run_request_fields,
    sanitize_state_for_stale_triggers,
    update_state_after_observation,
)
from coinhunter.services.time_utils import utc_iso, utc_now


class TestClearRunRequestFields:
    def test_removes_run_fields(self):
        state = {"run_requested_at": utc_iso(), "run_request_note": "test"}
        clear_run_request_fields(state)
        assert "run_requested_at" not in state
        assert "run_request_note" not in state


class TestSanitizeStateForStaleTriggers:
    def test_no_changes_when_clean(self):
        state = {"pending_trigger": False}
        result = sanitize_state_for_stale_triggers(state)
        assert result["pending_trigger"] is False
        assert result["_stale_recovery_notes"] == []

    def test_clears_completed_run_request(self):
        state = {
            "pending_trigger": True,
            "run_requested_at": utc_iso(),
            "last_deep_analysis_at": utc_iso(),
        }
        result = sanitize_state_for_stale_triggers(state)
        assert result["pending_trigger"] is False
        assert "run_requested_at" not in result
        assert any("completed run_requested" in note for note in result["_stale_recovery_notes"])

    def test_clears_stale_run_request(self):
        old = (utc_now() - timedelta(minutes=60)).replace(tzinfo=timezone.utc).isoformat()
        state = {
            "pending_trigger": False,
            "run_requested_at": old,
        }
        result = sanitize_state_for_stale_triggers(state)
        assert "run_requested_at" not in result
        assert any("stale run_requested" in note for note in result["_stale_recovery_notes"])

    def test_recovers_stale_pending_trigger(self):
        old = (utc_now() - timedelta(minutes=60)).replace(tzinfo=timezone.utc).isoformat()
        state = {
            "pending_trigger": True,
            "last_triggered_at": old,
        }
        result = sanitize_state_for_stale_triggers(state)
        assert result["pending_trigger"] is False
        assert any("stale pending_trigger" in note for note in result["_stale_recovery_notes"])


class TestUpdateStateAfterObservation:
    def test_updates_last_observed_fields(self):
        state = {}
        snapshot = {
            "generated_at": "2024-01-01T00:00:00Z",
            "snapshot_hash": "abc",
            "positions_hash": "pos123",
            "candidates_hash": "can456",
            "portfolio_value_usdt": 100.0,
            "market_regime": "neutral",
            "positions": [],
            "top_candidates": [],
        }
        analysis = {"should_analyze": False, "details": [], "adaptive_profile": {}}
        result = update_state_after_observation(state, snapshot, analysis)
        assert result["last_observed_at"] == snapshot["generated_at"]
        assert result["last_snapshot_hash"] == "abc"

    def test_sets_pending_trigger_when_should_analyze(self):
        state = {}
        snapshot = {
            "generated_at": "2024-01-01T00:00:00Z",
            "snapshot_hash": "abc",
            "positions_hash": "pos123",
            "candidates_hash": "can456",
            "portfolio_value_usdt": 100.0,
            "market_regime": "neutral",
            "positions": [],
            "top_candidates": [],
        }
        analysis = {
            "should_analyze": True,
            "details": ["price move"],
            "adaptive_profile": {},
            "hard_reasons": ["moon"],
            "signal_delta": 1.5,
        }
        result = update_state_after_observation(state, snapshot, analysis)
        assert result["pending_trigger"] is True
        assert result["pending_reasons"] == ["price move"]
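The first helper under test is small enough to sketch from its assertions alone. This is a sketch under the assumption that it mutates the dict in place (which the test's call-then-inspect usage implies), not the real `state_manager` code:

```python
def clear_run_request_fields(state):
    """Drop the run-request bookkeeping keys from a state dict, in place."""
    state.pop("run_requested_at", None)   # pop with a default so absent keys are fine
    state.pop("run_request_note", None)
```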
160
tests/test_trade_execution.py
Normal file
@@ -0,0 +1,160 @@
"""Tests for trade execution dry-run paths."""

import pytest

from coinhunter.services import trade_execution as te
from coinhunter.services.trade_common import set_dry_run


class TestMarketSellDryRun:
    def test_returns_dry_run_order(self, monkeypatch):
        set_dry_run(True)
        monkeypatch.setattr(
            te, "prepare_sell_quantity",
            lambda ex, sym, qty: ("BTC/USDT", 0.5, 60000.0, 30000.0)
        )
        order = te.market_sell(None, "BTCUSDT", 0.5, "dec-123")
        assert order["id"] == "dry-sell-dec-123"
        assert order["symbol"] == "BTC/USDT"
        assert order["amount"] == 0.5
        assert order["status"] == "closed"
        set_dry_run(False)


class TestMarketBuyDryRun:
    def test_returns_dry_run_order(self, monkeypatch):
        set_dry_run(True)
        monkeypatch.setattr(
            te, "prepare_buy_quantity",
            lambda ex, sym, amt: ("ETH/USDT", 1.0, 3000.0, 3000.0)
        )
        order = te.market_buy(None, "ETHUSDT", 100, "dec-456")
        assert order["id"] == "dry-buy-dec-456"
        assert order["symbol"] == "ETH/USDT"
        assert order["amount"] == 1.0
        assert order["status"] == "closed"
        set_dry_run(False)


class TestActionSellAll:
    def test_raises_when_balance_zero(self, monkeypatch):
        monkeypatch.setattr(te, "fetch_balances", lambda ex: {"BTC": 0})
        with pytest.raises(RuntimeError, match="balance is zero"):
            te.action_sell_all(None, "BTCUSDT", "dec-789", {})

    def test_dry_run_does_not_reconcile(self, monkeypatch):
        set_dry_run(True)
        monkeypatch.setattr(te, "fetch_balances", lambda ex: {"BTC": 0.5})
        monkeypatch.setattr(
            te, "market_sell",
            lambda ex, sym, qty, did: {"id": "dry-1", "amount": qty, "price": 60000.0, "cost": 30000.0, "status": "closed"}
        )
        monkeypatch.setattr(te, "reconcile_positions_with_exchange", lambda ex, hint=None: ([], {}))
        monkeypatch.setattr(te, "log_trade", lambda *a, **k: None)
        monkeypatch.setattr(te, "log_decision", lambda *a, **k: None)

        result = te.action_sell_all(None, "BTCUSDT", "dec-789", {"analysis": "test"})
        assert result["id"] == "dry-1"
        set_dry_run(False)


class TestActionBuy:
    def test_raises_when_insufficient_usdt(self, monkeypatch):
        monkeypatch.setattr(te, "fetch_balances", lambda ex: {"USDT": 10.0})
        with pytest.raises(RuntimeError, match="Insufficient USDT"):
            te.action_buy(None, "BTCUSDT", 50, "dec-abc", {})

    def test_dry_run_skips_save(self, monkeypatch):
        set_dry_run(True)
        monkeypatch.setattr(te, "fetch_balances", lambda ex: {"USDT": 100.0})
        monkeypatch.setattr(
            te, "market_buy",
            lambda ex, sym, amt, did: {"id": "dry-2", "amount": 0.01, "price": 50000.0, "cost": 500.0, "status": "closed"}
        )
        monkeypatch.setattr(te, "load_positions", lambda: [])
        monkeypatch.setattr(te, "upsert_position", lambda positions, pos: positions.append(pos))
        monkeypatch.setattr(te, "reconcile_positions_with_exchange", lambda ex, hint=None: ([], {}))
        monkeypatch.setattr(te, "log_trade", lambda *a, **k: None)
        monkeypatch.setattr(te, "log_decision", lambda *a, **k: None)

        result = te.action_buy(None, "BTCUSDT", 50, "dec-abc", {})
        assert result["id"] == "dry-2"
        set_dry_run(False)


class TestCommandBalances:
    def test_returns_balances(self, monkeypatch, capsys):
        monkeypatch.setattr(te, "fetch_balances", lambda ex: {"USDT": 100.0, "BTC": 0.5})
        result = te.command_balances(None)
        assert result == {"USDT": 100.0, "BTC": 0.5}
        captured = capsys.readouterr()
        assert '"USDT": 100.0' in captured.out


class TestCommandStatus:
    def test_returns_snapshot(self, monkeypatch, capsys):
        monkeypatch.setattr(te, "fetch_balances", lambda ex: {"USDT": 100.0})
        monkeypatch.setattr(te, "load_positions", lambda: [])
        monkeypatch.setattr(te, "build_market_snapshot", lambda ex: {"BTC/USDT": 50000.0})
        result = te.command_status(None)
        assert result["balances"]["USDT"] == 100.0
        captured = capsys.readouterr()
        assert '"balances"' in captured.out


class TestCommandOrders:
    def test_returns_orders(self, monkeypatch, capsys):
        orders = [{"id": "1", "symbol": "BTC/USDT"}]
        monkeypatch.setattr(te, "print_json", lambda d: None)
        ex = type("Ex", (), {"fetch_open_orders": lambda self: orders})()
        result = te.command_orders(ex)
        assert result == orders


class TestCommandOrderStatus:
    def test_returns_order(self, monkeypatch, capsys):
        order = {"id": "123", "status": "open"}
        monkeypatch.setattr(te, "print_json", lambda d: None)
        ex = type("Ex", (), {"fetch_order": lambda self, oid, sym: order})()
        result = te.command_order_status(ex, "BTCUSDT", "123")
        assert result == order


class TestCommandCancel:
    def test_dry_run_returns_early(self, monkeypatch):
        set_dry_run(True)
        monkeypatch.setattr(te, "log", lambda msg: None)
        result = te.command_cancel(None, "BTCUSDT", "123")
        assert result["dry_run"] is True
        set_dry_run(False)

    def test_cancel_by_order_id(self, monkeypatch):
        set_dry_run(False)
        monkeypatch.setattr(te, "print_json", lambda d: None)
        ex = type("Ex", (), {"cancel_order": lambda self, oid, sym: {"id": oid, "status": "canceled"}})()
        result = te.command_cancel(ex, "BTCUSDT", "123")
        assert result["status"] == "canceled"

    def test_cancel_latest_when_no_order_id(self, monkeypatch):
        set_dry_run(False)
        monkeypatch.setattr(te, "print_json", lambda d: None)
        ex = type("Ex", (), {
            "fetch_open_orders": lambda self, sym: [{"id": "999"}, {"id": "888"}],
            "cancel_order": lambda self, oid, sym: {"id": oid, "status": "canceled"},
        })()
        result = te.command_cancel(ex, "BTCUSDT", None)
        assert result["id"] == "888"

    def test_cancel_raises_when_no_open_orders(self, monkeypatch):
        set_dry_run(False)
        ex = type("Ex", (), {"fetch_open_orders": lambda self, sym: []})()
        with pytest.raises(RuntimeError, match="No open orders"):
            te.command_cancel(ex, "BTCUSDT", None)


class TestPrintJson:
    def test_outputs_sorted_json(self, capsys):
        te.print_json({"b": 2, "a": 1})
        captured = capsys.readouterr()
        assert '"a": 1' in captured.out
        assert '"b": 2' in captured.out
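Every dry-run test in this file follows the same pattern: flip a module-level flag via `set_dry_run`, and the order functions short-circuit into synthetic "closed" orders instead of touching an exchange. A minimal sketch of that pattern, using hypothetical stand-ins that mirror the test surface (`set_dry_run`, `market_sell`) rather than the repository's actual `trade_common`/`trade_execution` code:

```python
# Module-level dry-run flag, toggled by tests around each case.
_DRY_RUN = False


def set_dry_run(enabled: bool) -> None:
    global _DRY_RUN
    _DRY_RUN = enabled


def market_sell(exchange, symbol: str, qty: float, decision_id: str) -> dict:
    if _DRY_RUN:
        # Return a synthetic order shaped like a filled exchange order,
        # with an id derived from the decision id for traceability.
        return {
            "id": f"dry-sell-{decision_id}",
            "symbol": symbol,
            "amount": qty,
            "status": "closed",
        }
    # Real path: would place an actual order via the exchange client.
    return exchange.create_market_sell_order(symbol, qty)
```

Because tests call `set_dry_run(True)` at the top of each case and `set_dry_run(False)` at the end, forgetting the reset would leak state into later tests; a pytest fixture with a `yield`-based teardown would be a more robust home for that toggle.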