TRADINGHOE
first to the new thing
| THESIS ARCHIVE
$VVV - Venice Token · 03-2026 Investment Thesis · Entry $5.40 · ATH $16.43 · Upside 204.3%
$TAO - Bittensor · 03-2026 Investment Thesis · Entry $175.47 · ATH $378 · Upside 115%

Market Thoughts

Apr 25, 2026
How to Build an Order Book Collector

No exchange stores historical order book data. Once a moment passes, the resting bids and asks are gone. A step-by-step guide to building your own order book collector bot. $6/month, 15 minutes, no API keys needed.

Tags: tools · order book · guide

$VVV - Venice Token

Overview

Venice ($VVV) is a privacy-first, censorship-resistant AI inference platform built on Base (Ethereum L2), founded by Erik Voorhees, the serial entrepreneur behind ShapeShift. The protocol tokenizes access to private AI inference: staking VVV grants a proportional, perpetual share of the platform's API capacity at zero marginal cost.

With over 450,000 registered users, a two-token model ($VVV + $DIEM), and API revenue flowing into monthly buybacks and burns, VVV sits at the intersection of two dominant narratives: AI agents and on-chain privacy.

With the rise of autonomous agents and AI automation, I believe VVV is one to look into. Venice serves as a private and uncensored alternative to ChatGPT.

Tokenomics

Token: $VVV · Chain: Base (Ethereum L2) · Genesis Supply: 100,000,000 · Current Supply: ~79.73M (post-burn) · Circulating: ~46.14M · Category: AI x Privacy Infra

Thesis

1. The AI Surveillance Problem is Real and Structurally Growing

Every prompt sent to ChatGPT, Gemini, Grok, or Perplexity is stored permanently unless deleted, and is subject to government data requests. This is the documented business model of centralized AI providers. As AI integrates deeper into legal, financial, medical, and personal workflows, the value of private inference increases structurally. Venice is the only consumer-grade product built entirely around this constraint.

Venice stores every message encrypted in your browser's local storage, but there is one caveat: every request still passes through data-center GPUs, which must read the prompt in plaintext to process it. You have to trust that it is not logged; Erik claims Venice has control at the data-center level under ZDR (Zero Data Retention) guarantees.

2. AI Agents Need Permissionless, Persistent Inference

Autonomous AI agents need continuous, uncensored, and cost-predictable access to inference. Today they depend on OpenAI and Anthropic APIs that impose content restrictions, per-request billing friction, and surveillance. VVV was designed with agents in mind: staking replaces the pay-per-request billing of the centralized world with a fixed, perpetual capacity share, removing a major source of economic friction.

It was the first airdrop targeting AI agents themselves. 25% of the genesis supply was allocated to AI community protocol accounts on Base (Virtuals, and agents like Luna, aixbt, VaderAI, and others). This is a genuine product-market signal that no competitor can claim.

3. Deflationary Token Economics with a Real Revenue Feedback Loop

VVV has a rather interesting tokenomics setup. Annual emissions started at 14M VVV/year and have been aggressively reduced, currently at 6M and dropping to 3M by July 2026. Revenue from subscriptions and API usage is directed into monthly buybacks and burns. The genesis supply was 100M, and burns have already reduced it to the ~79.73M current supply noted above.

As GPU compute costs decline (Moore's Law), a fixed VVV stake commands a growing share of Venice's expanding compute capacity. Simultaneously, API revenue is directed into monthly VVV buybacks and burns. This creates a dual deflationary mechanism: expanding capacity per token + shrinking supply.

Demand for VVV is justified by the utility of staking it to own a share of Venice's AI inference capacity. When you stake VVV you can mint DIEM, which equates to $1/day of inference capacity.

Team

Erik Voorhees (Co-Founder & CEO) is one of crypto's most credible operators. Track record includes BitInstant (early executive at one of the first fiat-to-crypto ramps), ShapeShift (founded and scaled one of the first non-custodial exchanges to billions in volume; later transitioned to a DAO), and Venice.ai (2024, applying the same privacy-first philosophy to AI inference). Teana Baker-Taylor serves as Co-Founder, bringing institutional and regulatory experience from her role as VP at Circle.

Product

Venice.ai launched May 2024 as a private, uncensored alternative to ChatGPT. Privacy by architecture: Venice does not store conversations on its servers. Uncensored models with content filters removed. Open-source stack built entirely on open-source models. Mobile app available on iOS and Android.

The two-token architecture ($VVV for staking + access and $DIEM as compute credits) resolves the dilution problem of the original pro-rata staking model. DIEM replaces this with a fixed, dollar-denominated compute credit.

Valuation

VVV's correlation with BTC is ~0.6, indicating meaningful idiosyncratic upside tied to AI-specific sentiment. Macro AI tailwinds (NVIDIA earnings, agent proliferation, enterprise AI adoption) serve as external catalysts regardless of Venice's internal metrics.

Using Bittensor ($TAO) as a comparable at ~$2.5B market cap for decentralized AI compute infrastructure, VVV at an entry of $5.40 (~$240M circulating cap) represents a ~10x discount. If VVV reaches 30-50% of TAO's valuation on fundamentals convergence, the implied circulating cap is $750M-$1.25B, or roughly $16-$27 per token.
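The comparable math is easy to sanity-check. All numbers below come straight from this section (TAO market cap, VVV circulating supply, entry price):

```python
# Comparable valuation: VVV vs. Bittensor ($TAO), using this thesis's figures.
tao_mcap = 2.5e9      # TAO market cap used as the comparable
vvv_circ = 46.14e6    # VVV circulating supply
vvv_entry = 5.40      # entry price

vvv_mcap = vvv_entry * vvv_circ     # ~$249M circulating cap
discount = tao_mcap / vvv_mcap      # ~10x discount to TAO

# If VVV converges to 30-50% of TAO's valuation:
low_cap, high_cap = 0.30 * tao_mcap, 0.50 * tao_mcap
low_px, high_px = low_cap / vvv_circ, high_cap / vvv_circ
print(f"cap ${vvv_mcap/1e6:.0f}M, discount {discount:.1f}x, "
      f"implied ${low_px:.0f}-${high_px:.0f} per token")
```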

The more mainstream autonomous agents become, the higher Venice's revenue climbs.

Risks

Emissions dilution: Annual token emissions create structural sell pressure. Revenue buybacks must outpace emissions for net deflation.

Compute trust assumption: Originally, client-side privacy relied on trusting GPU node operators not to log inference. Venice has since addressed this with four privacy modes, including TEE (hardware-isolated GPU processing) and E2EE (end-to-end encryption) for Pro users, alongside ZDR (Zero Data Retention) guarantees at the data center level. The default "Private" mode is now contract-enforced ZDR.

Regulatory exposure: Uncensored AI models may attract regulatory attention in EU, UK, and US jurisdictions.

Recommendation

A lot of capital will always be waiting on the sidelines to bid this token with significant strength when BTC dumps, and there will always be tourists who sell the moment the market stops going up. Those flushes are a particularly good time to build a position, because the token fits the economic state we are in right now: AI agents going full steam and gradually being trusted with more important work. I think this token fits that state perfectly. Look at it like the ZEC of AI agents.

Update, April 2026

Emissions slashed aggressively. Venice originally set emissions at 14M VVV/year. This was cut to 8M, then permanently reduced 25% to 6M in February 2026. The team announced a further 50% reduction: 6M to 5M (May 1), 5M to 4M (June 1), 4M to 3M (July 1, 2026). The goal is to cross the threshold where VVV burns exceed emissions, making VVV net deflationary while still generating native yield for stakers.
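Whether burns actually cross emissions depends on subscriber counts and the token price, neither of which is in this post. A back-of-the-envelope model of the crossover, with hypothetical subscriber numbers and a hypothetical price plugged in (only the emissions figures and per-tier burn amounts come from the announcements):

```python
# Do burns outpace emissions? Subscriber counts and price are HYPOTHETICAL.
annual_emissions = 3_000_000            # VVV/year after the July 2026 cut
monthly_emissions = annual_emissions / 12

price = 8.50                            # hypothetical VVV price
pro, pro_plus, mx = 40_000, 10_000, 5_000   # hypothetical monthly subs
usd_burned = 2 * pro + 5 * pro_plus + 10 * mx  # tier-aware burns ($2/$5/$10)
tokens_burned = usd_burned / price

net = "deflationary" if tokens_burned > monthly_emissions else "inflationary"
print(f"emitted {monthly_emissions:,.0f}/mo, burned {tokens_burned:,.0f}/mo -> {net}")
```

Swap in real subscription counts as Venice publishes them; the point is the shape of the calculation, not the placeholder inputs.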

Programmatic burn engine launched. As of April 15, 2026, every new Venice Pro subscription triggers an automated VVV market buy and burn on-chain. Since April 26, burns are tier-aware: $2 per Pro sub, $5 per Pro+, $10 per Max subscription. Live burn data at venicestats.com/burns. Over 33.7M tokens (~33.7% of the original 100M genesis supply) have been permanently burned to date.

OpenClaw integration. On March 2, 2026, OpenClaw, a leading decentralized AI framework, named Venice AI as its recommended model provider, listing venice/llama-3.3-70b as the default model and venice/claude-opus-45 as "best overall." VVV spiked 20% on the news. Note: Venice remains listed as a provider in the OpenClaw docs as of April 2026, though the initial "highlight" framing has been toned down. Venice is now one of several supported providers rather than the sole recommendation.

2M registered users. Venice crossed 2,000,000 registered users, up from 450K at the time of this thesis. API demand has surged 5-50x as users migrate from restrictive centralized AI providers.

EXPAND allocation. Venice introduced EXPAND, a new reward allocation for VVV holders, functioning like an airdrop/bonus that incentivizes accumulation.

Peak $9.58. VVV rallied ~10x from its yearly low, the best YTD performance for a non-scam coin. The token is currently trading around $8.50-$9.00 with a market cap exceeding $400M.

Four Privacy Modes launched. Venice introduced four distinct privacy tiers for every AI model on the platform. Private (zero data retention, contract-enforced, default), Anonymous (identity obscured from provider), TEE (hardware-isolated GPU processing, Pro), and E2EE (encrypted end-to-end, fully verifiable, Pro). This directly addresses the compute trust assumption in the original thesis. Users now have granular control over their privacy level rather than relying solely on trust.

Published by tradinghoe · March 4, 2026
Not financial advice. Do your own research.

$TAO - Bittensor

Overview

Bittensor is an open source platform where participants produce best-in-class digital commodities including compute power, storage space, AI inference and training, protein folding, financial markets prediction and more.

TAO is the currency of the ecosystem. There will only be 21 million TAO (same idea as Bitcoin scarcity). New TAO gets created every block and handed out to people doing useful work. The amount of new TAO being created recently got cut in half (the "halving" in December 2025), which means it gets scarcer over time.

Tokenomics

Token: $TAO · Max Supply: 21,000,000 · Circulating: ~10.85M (~52%) · Halving: December 14, 2025 · Daily Emissions: 3,600 TAO (down from 7,200) · Staked: ~70% of circulating supply · Active Subnets: 128+ · ATH: $378

Subnets

Think of subnets as different departments in a company, except each one is its own mini marketplace focused on one specific AI task. Each subnet consists of:

Miners: the workers. They run AI models or provide computing power. They're competing against each other to do the best job.

Validators: the judges. They test the miners' work and score it. Good miners get more TAO, bad miners get less. The matrix of these scores, by each validator for each miner, serves as input to Yuma Consensus.

Subnet Creators: the managers. They design the subnet and write the rules for what counts as "good work."

The Yuma Consensus algorithm operates on-chain and determines emissions to miners, validators, and subnet creators across the platform, based on performance. There are currently 128+ of these subnets: taostats.io

Subnet Economics (dTAO)

Each subnet functions as its own automated market maker (AMM), with two liquidity reserves: one containing TAO, the currency of the Bittensor network, and one containing a subnet-specific "dynamic" currency, referred to as that subnet's alpha token. The alpha token is purchased by staking TAO into the subnet's reserve.

A subnet's economy consists of three pools: TAO reserves (the amount of TAO staked into the subnet), Alpha reserves (the amount of alpha available for purchase), and Alpha outstanding (the amount of alpha held in the hotkeys of a subnet's participants, also referred to as the total stake in the subnet).

The price of a subnet's alpha token is determined by the ratio of TAO in that subnet's reserve to its alpha in reserve.

As TAO holders stake TAO into subnets in exchange for the subnet-specific alpha, they are essentially "voting with their TAO" for the value of the subnet. Subnets with more staking than unstaking receive higher emissions, while subnets with net outflows receive reduced or zero emissions. This flow-based model rewards subnets that attract genuine user engagement. In return, stakers extract a share of the subnet's emissions.
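A minimal constant-product sketch of those mechanics, with made-up reserve sizes (real subnet pools differ in size and detail, but the price-equals-reserve-ratio rule above is exactly what this models):

```python
# Subnet alpha pricing sketch: price = TAO reserve / alpha reserve.
# Staking TAO swaps it into the pool for alpha at the constant-product
# rate, which raises the alpha price. Reserve sizes are illustrative.
tao_reserve, alpha_reserve = 10_000.0, 50_000.0
k = tao_reserve * alpha_reserve          # constant-product invariant x*y = k

price_before = tao_reserve / alpha_reserve   # 0.2 TAO per alpha

def stake(tao_in):
    """Stake tao_in TAO into the pool; return the alpha received."""
    global tao_reserve, alpha_reserve
    tao_reserve += tao_in
    new_alpha = k / tao_reserve
    out = alpha_reserve - new_alpha
    alpha_reserve = new_alpha
    return out

got = stake(1_000)                           # "voting with your TAO"
price_after = tao_reserve / alpha_reserve
print(f"received {got:.1f} alpha, price {price_before:.3f} -> {price_after:.3f}")
```

Unstaking runs the same swap in reverse, which is why net outflows push the alpha price back down.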

Top 5 Subnets

Chutes (SN64) is cited as the leader for serverless compute for AI at scale. It lets you run AI models in the cloud without managing servers.

Templar (SN3) focuses on collaborative model training. Instead of one big company training an AI model behind closed doors, Templar lets a distributed network of miners work together to train models collectively. It is one of the older subnets and has consistently stayed near the top.

Affine (SN120) coordinates other subnets so they can work together through reinforcement learning and model evaluation. Right now, all these subnets work in isolation. Chutes does compute, Ridges does coding, Targon does verification, but none of them talk to each other. Rather than specializing in a single service, Affine aims to connect and compose the services of multiple subnets, like taking compute from Chutes and combining it with coding from Ridges to deliver a complete solution. It also runs a never-ending competition where miners fine-tune AI models on hard puzzles like coding and multi-step logic, upload the improved model to Chutes, and validators test every model head-to-head. The single best one wins the entire daily prize pool, and every other miner downloads it and tries to beat it next round. A big deal here: the team includes Jacob Steeves, the co-founder of Bittensor itself, who stepped down as CEO of the Opentensor Foundation to build this subnet.

Targon (SN4) also functions as a compute marketplace like Chutes, but with a twist: it has its own network called the TVM (Targon Virtual Machine) where you can build and test privately, without others seeing what you're doing or how many GPUs you're using.

Ridges (SN62) is building a platform where AI agents solve software problems end-to-end. Instead of going back and forth with a model, an engineer should be able to submit an entire problem and reliably know that it will be completed for them.

Root Subnet

Root subnet or Subnet Zero is a special subnet on Bittensor. It is the only subnet without its own alpha currency. No miners can register on subnet zero, and no validation work is performed. However, validators can register on the root subnet, and TAO holders can stake to those validators, as with any other subnet. This offers a mechanism for TAO holders to stake TAO into validators in a subnet-agnostic way.

Staking

TAO holders can stake any amount they hold to a validator. After the validator extracts their take, the remaining emissions are credited back to the stakers/delegators in proportion to their stake with that validator.

Staking is always local to a subnet. Each subnet operates its own AMM. You pick a specific subnet and a specific validator on that subnet. Your TAO goes into that subnet's TAO reserve in its AMM pool, and the AMM calculates the equivalent units of the subnet's alpha token at the current exchange rate. You now hold that subnet's alpha token, staked to that validator.

Staking to a given validator's hotkeys on different subnets is independent. So if you stake to Validator X on Subnet 3 and also on Subnet 119, those are two completely separate positions with two different alpha tokens and two different exchange rates.
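The payout mechanics above reduce to simple pro-rata arithmetic. The take rate and stake sizes below are illustrative, not any specific validator's terms:

```python
# Pro-rata staking payout sketch: the validator extracts its take from
# emissions, and the remainder is split among delegators by stake.
# All numbers here are illustrative.
emissions = 100.0        # alpha emitted to this validator this period
take_rate = 0.18         # validator's take (varies per validator)
stakes = {"alice": 600.0, "bob": 300.0, "carol": 100.0}  # delegator stakes

validator_cut = emissions * take_rate
pool = emissions - validator_cut
total_stake = sum(stakes.values())
payouts = {who: pool * s / total_stake for who, s in stakes.items()}
print(validator_cut, payouts)
```

Because positions are per-subnet, this calculation runs independently for each (validator, subnet) pair you stake to.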

Validator directory: tao.app/validators

How to Buy Subnet Tokens

1. Download Talisman wallet (@wearetalisman)

2. Create ETH wallet + Substrate Account

3. Send ETH to ETH wallet

4. Use Talisman's interface to cross-chain swap ETH to TAO, selecting your ETH wallet as the source and your Substrate account as the destination

5. Connect wallet and swap TAO to subnet tokens here: taostats.io/subnets

Bull Case

AI demand is exploding, TAO supply is getting scarcer (halving), and if the subnets keep building real products that people actually pay for, demand for TAO goes up because you need it to use the network. There are also institutional products emerging: Grayscale and Bitwise have filed for TAO ETFs, Grayscale allocated 43.06% of its AI Fund to TAO (its largest single-asset reallocation to date), Nvidia invested $420M (with 77% staked), Polychain Capital added $200M in exposure, and there's a staked TAO product listing on the SIX Swiss Exchange. Over 70% of circulating TAO is staked, leaving a thin liquid float of roughly 3M tokens.

Bear Case

Most subnet alpha tokens still don't have clear revenue models, liquidity is thin (meaning prices can swing wildly on small trades), and it's still early enough that gaming the system is possible. Investors are demanding subnets prove they can compete with leading AI models like GPT and Gemini. TAO's value is increasingly tied to measurable AI output and subnet utility, not just speculation.

Update, April 2026

Covenant AI exit (April 10). Covenant AI, operator of three major subnets (Templar SN3, Basilica SN39, Grail SN81), publicly exited Bittensor. Founder Sam Dare accused Bittensor co-founder Jacob Steeves of centralized control, calling the governance "decentralization theatre." Dare sold ~37,000 TAO (~$10M) into the market. TAO crashed 18-28% within hours, erasing a prior 100% rally and wiping ~$900M from market cap. Over $9M in long positions were liquidated in the cascade.

Steeves denied all claims. Jacob Steeves responded that he does not have the ability to suspend emissions, that his token sales were less than 1% of what he invested in his teams, and that he reserves the right to buy and sell tokens like any normal TAO holder. The Bittensor network continued operating normally through the event.

Community miners restored affected subnets. SN3, SN39, and SN81 were successfully restored by community miners using open-source code, demonstrating that the network can operate without any single central participant.

Context: Covenant AI was one of 128+ subnets. The network generated an estimated $43M in Q1 2026 revenue across all subnets. 70% of circulating TAO remains staked. The exit was a sentiment event, not a fundamental network failure. TAO has since stabilized around the $240-$260 range.

Bullish developments:

Nvidia $420M investment. Nvidia invested $420M into Bittensor with 77% staked. Polychain Capital added $200M in exposure. This is institutional-grade conviction.

Grayscale AI Fund reallocation. Grayscale raised TAO weighting to 43.06% in its AI Fund on April 7, the largest single-asset reallocation in the fund's history.

Spot TAO ETF filings. Grayscale and Bitwise submitted Spot TAO ETF applications on April 2. SEC decision anticipated by August 2026. An approved ETF would be the first AI-focused crypto ETF.

Staked TAO on SIX Swiss Exchange. A staked TAO product is now listed on SIX Swiss Exchange, opening access to traditional finance investors.

BitGo x Yuma partnership. Enables secure custody and staking for Bittensor subnet tokens, removing a major operational hurdle for institutional participants.

Published by tradinghoe · March 7, 2026
Not financial advice. Do your own research.

How to Build an Order Book Collector

No exchange stores historical order book data. Once a moment passes, the resting bids and asks are gone. No API lets you go back and retrieve them. If you want to know what the order book looked like last week, last month, or during a pump, you need to have been collecting it at the time.

This is a step-by-step guide to building your own order book collector. A bot that snapshots Binance Futures order book depth, funding rates, and open interest every 60 seconds and saves everything into a local database. No API keys needed. The whole setup costs $6/month and takes about 15 minutes.

Why This Matters

The order book is the earliest signal of accumulation. When a whale uses TWAP or iceberg orders to slowly buy into a position, the fingerprint shows up in the order book days before the price move confirms it. Bid-ask imbalance stays elevated (above 0.4 for extended periods), ask-side depth thins out as sell orders get absorbed, and the bid/ask ratio creeps toward 1.0 without crossing it.

Tools like CoinAnk, Coinglass, and Binance show you the order book right now. None of them let you see what it looked like a week ago on a micro-cap token. This bot gives you that data.

What You Need

A server that runs 24/7. Your laptop won't work. It sleeps, restarts, loses Wi-Fi. You need a VPS (Virtual Private Server) that stays on permanently.

DigitalOcean: $6/month for the smallest droplet (1GB RAM, Ubuntu). Alternatives: Vultr ($6/month), Hetzner ($4/month), or any VPS provider running Ubuntu.

Python 3 (comes pre-installed on Ubuntu). The requests library (one install command). No API keys, no Binance account, no trading permissions. The bot uses Binance's public API endpoints which are free and open to anyone.

Step 1: Get a Server

Go to digitalocean.com and create an account. Click "Create Droplet." Choose Ubuntu 24.04, the $6/month plan (Regular, 1GB RAM), and pick a region close to you. Under Authentication, choose Password and set a root password. Click "Create Droplet" and wait 60 seconds. You'll get an IP address. That's your server.

Step 2: Connect to Your Server

Open Terminal on your Mac (or Command Prompt on Windows) and type:

ssh root@YOUR_IP_ADDRESS

If SSH times out on the public IP, install Tailscale on both your computer and the server for a reliable connection.

Step 3: Install the One Dependency

apt update && apt install python3-pip -y
pip install requests --break-system-packages

Step 4: Create the Bot

Run:

nano ~/bsb_collector.py

Paste the full bot code below into the editor. Press Ctrl+X, then Y, then Enter to save.

#!/usr/bin/env python3
"""
bsb_collector.py - Binance Order Book + Funding Rate Collector
Snapshots order book depth, funding rates, OI into SQLite every 60s.
No API keys needed - uses Binance's public endpoints.

Usage:
  python3 bsb_collector.py collect --symbols BSBUSDT
  python3 bsb_collector.py collect --symbols BSBUSDT MYXUSDT SIRENUSDT
  python3 bsb_collector.py backfill --symbol BSBUSDT --days 15
  python3 bsb_collector.py export --symbol BSBUSDT
"""
import argparse, json, logging, signal, sqlite3, time
from datetime import datetime, timezone
import requests

FUTURES_BASE = "https://fapi.binance.com"
ENDPOINTS = {
    "depth": f"{FUTURES_BASE}/fapi/v1/depth",
    "premium": f"{FUTURES_BASE}/fapi/v1/premiumIndex",
    "oi": f"{FUTURES_BASE}/fapi/v1/openInterest",
    "ticker": f"{FUTURES_BASE}/fapi/v1/ticker/24hr",
    "klines": f"{FUTURES_BASE}/fapi/v1/klines",
    "funding_history": f"{FUTURES_BASE}/fapi/v1/fundingRate",
}

logging.basicConfig(level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S")
log = logging.getLogger("bsb")
shutdown = False

def _sig(s, f):
    global shutdown
    log.info("Shutting down...")
    shutdown = True
signal.signal(signal.SIGINT, _sig)
signal.signal(signal.SIGTERM, _sig)

def init_db(path):
    conn = sqlite3.connect(path, timeout=30)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=NORMAL")
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS snapshots (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            ts TEXT NOT NULL, ts_unix REAL NOT NULL,
            symbol TEXT NOT NULL, price REAL,
            mark_price REAL, index_price REAL,
            funding_rate REAL, oi REAL, oi_notional REAL,
            volume_24h REAL, bid_depth REAL NOT NULL,
            ask_depth REAL NOT NULL, imbalance REAL NOT NULL,
            best_bid REAL, best_ask REAL, spread REAL,
            mid_price REAL, bids_json TEXT NOT NULL,
            asks_json TEXT NOT NULL);
        CREATE TABLE IF NOT EXISTS anomalies (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            ts TEXT NOT NULL, ts_unix REAL NOT NULL,
            symbol TEXT NOT NULL, event_type TEXT NOT NULL,
            severity TEXT NOT NULL DEFAULT 'low',
            value REAL, details TEXT);
        CREATE INDEX IF NOT EXISTS idx_snap_sym_ts
            ON snapshots(symbol, ts_unix);
        CREATE INDEX IF NOT EXISTS idx_anom_sym_ts
            ON anomalies(symbol, ts_unix);
    """)
    conn.commit()
    log.info(f"DB ready: {path}")
    return conn

sess = requests.Session()
sess.headers["User-Agent"] = "bsb/1.0"

def api(url, params=None, retries=3):
    for i in range(retries):
        try:
            r = sess.get(url, params=params, timeout=10)
            if r.status_code == 429:
                time.sleep(int(r.headers.get("Retry-After", 5)))
                continue
            r.raise_for_status()
            return r.json()
        except Exception as e:
            log.warning(f"API fail ({i+1}/{retries}): {e}")
            if i < retries - 1: time.sleep(2 ** i)
    return None

def collect_snapshot(conn, symbol, depth):
    now = datetime.now(timezone.utc)
    now_unix, now_str = now.timestamp(), now.isoformat()
    ob = api(ENDPOINTS["depth"], {"symbol": symbol, "limit": depth})
    if not ob:
        log.error(f"[{symbol}] order book fetch failed")
        return
    bids, asks = ob.get("bids", []), ob.get("asks", [])
    bid_total = sum(float(b[1]) for b in bids)
    ask_total = sum(float(a[1]) for a in asks)
    total = bid_total + ask_total
    imbalance = (bid_total - ask_total) / total if total > 0 else 0.0
    best_bid = float(bids[0][0]) if bids else None
    best_ask = float(asks[0][0]) if asks else None
    spread = (best_ask - best_bid) if best_bid and best_ask else None
    mid = (best_bid + best_ask) / 2 if best_bid and best_ask else None
    prem = api(ENDPOINTS["premium"], {"symbol": symbol})
    funding = float(prem["lastFundingRate"]) if prem else None
    mark = float(prem["markPrice"]) if prem else None
    index = float(prem["indexPrice"]) if prem else None
    oi_data = api(ENDPOINTS["oi"], {"symbol": symbol})
    oi = float(oi_data["openInterest"]) if oi_data else None
    oi_n = oi * mark if oi and mark else None
    tk = api(ENDPOINTS["ticker"], {"symbol": symbol})
    price = float(tk["lastPrice"]) if tk else mid
    vol = float(tk["quoteVolume"]) if tk else None
    conn.execute("""
        INSERT INTO snapshots
        (ts,ts_unix,symbol,price,mark_price,index_price,
         funding_rate,oi,oi_notional,volume_24h,bid_depth,
         ask_depth,imbalance,best_bid,best_ask,spread,
         mid_price,bids_json,asks_json)
        VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
    """, (now_str,now_unix,symbol,price,mark,index,funding,
          oi,oi_n,vol,bid_total,ask_total,imbalance,
          best_bid,best_ask,spread,mid,
          json.dumps(bids),json.dumps(asks)))
    events = []
    if abs(imbalance) > 0.6:
        events.append(("imbalance_spike","high",imbalance,
            f"Imbalance {imbalance:.4f} exceeds 0.6"))
    elif abs(imbalance) > 0.4:
        events.append(("imbalance_elevated","medium",imbalance,
            f"Imbalance {imbalance:.4f} exceeds 0.4"))
    hist = conn.execute("""
        SELECT ask_depth FROM snapshots
        WHERE symbol = ? ORDER BY ts_unix DESC LIMIT 30
    """, (symbol,)).fetchall()
    if len(hist) >= 10:
        avg_ask = sum(r[0] for r in hist) / len(hist)
        if avg_ask > 0 and ask_total < avg_ask * 0.7:
            drop = (1 - ask_total / avg_ask) * 100
            events.append(("ask_thinning","high",drop,
                f"Ask depth {drop:.1f}% below avg"))
    recent = conn.execute("""
        SELECT imbalance FROM snapshots
        WHERE symbol = ? ORDER BY ts_unix DESC LIMIT 5
    """, (symbol,)).fetchall()
    if len(recent) >= 5 and all(r[0] > 0.4 for r in recent):
        events.append(("sustained_imbalance","high",imbalance,
            "Imbalance >0.4 for 5+ consecutive snapshots"))
    if funding is not None:
        if funding > 0.0005:
            events.append(("funding_high","medium",funding*100,
                f"Funding {funding*100:.4f}%"))
        elif funding < -0.0005:
            events.append(("funding_low","medium",funding*100,
                f"Funding {funding*100:.4f}%"))
    for etype,sev,val,detail in events:
        conn.execute("""
            INSERT INTO anomalies
            (ts,ts_unix,symbol,event_type,severity,value,details)
            VALUES (?,?,?,?,?,?,?)
        """, (now_str,now_unix,symbol,etype,sev,val,detail))
    conn.commit()
    flag = " !" if events else ""
    # Guard the log line: funding/price can be None if their fetches failed.
    fr_str = f"{funding*100:.4f}%" if funding is not None else "n/a"
    px_str = f"{price:.6f}" if price is not None else "n/a"
    log.info(f"[{symbol}] price={px_str} imb={imbalance:+.3f} "
        f"bid={bid_total:.0f} ask={ask_total:.0f} "
        f"fr={fr_str} oi_n={oi_n or 0:,.0f}{flag}")

def main():
    p = argparse.ArgumentParser(description="BSB Collector")
    sub = p.add_subparsers(dest="cmd")
    c = sub.add_parser("collect")
    c.add_argument("--symbols", nargs="+", default=["BSBUSDT"])
    c.add_argument("--interval", type=int, default=60)
    c.add_argument("--depth", type=int, default=20)
    c.add_argument("--db", default="bsb_data.db")
    b = sub.add_parser("backfill")
    b.add_argument("--symbol", required=True)
    b.add_argument("--days", type=int, default=15)
    b.add_argument("--db", default="bsb_data.db")
    e = sub.add_parser("export")
    e.add_argument("--symbol", required=True)
    e.add_argument("--hours", type=int, default=360)
    e.add_argument("--out", default=None)
    e.add_argument("--db", default="bsb_data.db")
    args = p.parse_args()
    if args.cmd == "collect":
        conn = init_db(args.db)
        log.info(f"Collecting {args.symbols} every {args.interval}s")
        while not shutdown:
            for sym in args.symbols:
                try: collect_snapshot(conn, sym, args.depth)
                except Exception as ex: log.error(f"[{sym}] {ex}")
            time.sleep(args.interval)
        conn.close()
        log.info("Done.")
    else:
        # Only `collect` is implemented in this listing; `backfill` and
        # `export` are parsed above but left as stubs.
        p.print_help()

if __name__ == "__main__":
    main()

Step 5: Start Collecting

nohup python3 ~/bsb_collector.py collect --symbols BSBUSDT > ~/bsb.log 2>&1 &

Check it's working:

tail -f ~/bsb.log

You should see a log line every 60 seconds showing price, imbalance, bid/ask depth, funding rate, and open interest. To track multiple tokens at once:

nohup python3 ~/bsb_collector.py collect --symbols BSBUSDT MYXUSDT SIRENUSDT > ~/bsb.log 2>&1 &
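One caveat with nohup: the process dies if the droplet reboots. A cron @reboot entry is a low-effort way to cover that (this assumes root's crontab and the file paths from the steps above):

```shell
# Relaunch the collector automatically after a reboot by appending an
# @reboot entry to the current crontab. Assumes ~/bsb_collector.py
# from Step 4 and the same log path as Step 5.
(crontab -l 2>/dev/null; echo '@reboot nohup python3 ~/bsb_collector.py collect --symbols BSBUSDT > ~/bsb.log 2>&1 &') | crontab -
```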

Step 6: Backfill Historical Data

python3 ~/bsb_collector.py backfill --symbol BSBUSDT --days 15

Important: backfilled data is not real order book depth. It's taker buy vs. taker sell volume derived from hourly candles, a different measurement that shows the same accumulation patterns at a different scale. Live collection captures actual resting order book depth. Both are useful, but they're measuring different things.
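For reference, the listing above only wires up the collect command. A minimal sketch of what the backfill path could look like, using Binance's public klines endpoint (per the Binance Futures API, field 5 of each kline is base volume and field 9 is taker-buy base volume, so taker-sell is the difference). The function names here are my own, not part of the bot above:

```python
# Backfill sketch: taker-flow pseudo-imbalance from public hourly klines.
# NOT real depth data -- see the caveat above.
import time
import requests

def taker_imbalance(kline):
    """Pseudo-imbalance (buy - sell) / volume from one kline row."""
    vol, buy = float(kline[5]), float(kline[9])   # base vol, taker-buy vol
    return (2 * buy - vol) / vol if vol > 0 else 0.0

def backfill(symbol="BSBUSDT", days=15):
    """Fetch hourly taker-flow data; returns (open_time_ms, imbalance) rows."""
    end = int(time.time() * 1000)
    start = end - days * 86_400_000
    rows = []
    while start < end:
        kl = requests.get("https://fapi.binance.com/fapi/v1/klines",
                          params={"symbol": symbol, "interval": "1h",
                                  "startTime": start, "limit": 500},
                          timeout=10).json()
        if not kl:
            break
        rows.extend((k[0], taker_imbalance(k)) for k in kl)
        start = kl[-1][6] + 1   # resume after the last candle's close time
    return rows
```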

What Each Snapshot Contains

price: Last traded price on Binance Futures. mark_price: Composite index price (used for liquidations). funding_rate: Current funding rate (positive = longs pay shorts). oi / oi_notional: Open interest in token quantity and USDT. bid_depth / ask_depth: Total resting volume across top 20 levels. imbalance: (bid - ask) / (bid + ask), ranges from -1 to +1. spread: Gap between best ask and best bid. bids_json / asks_json: Raw order book levels (price + quantity per level).
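Everything lands in plain SQLite, so analysis needs nothing beyond the stdlib. A small example, shown against an in-memory database with a trimmed-down version of the bot's schema so it runs standalone (point connect() at bsb_data.db on the server, where the full schema already exists):

```python
# Query collected snapshots: hours where average imbalance was elevated.
import sqlite3

conn = sqlite3.connect(":memory:")   # use "bsb_data.db" on the server
# Trimmed schema with just the columns used below (the bot's table has more).
conn.execute("""CREATE TABLE IF NOT EXISTS snapshots (
    ts TEXT, ts_unix REAL, symbol TEXT, imbalance REAL,
    bid_depth REAL, ask_depth REAL)""")
# Two sample rows standing in for real snapshots:
conn.executemany(
    "INSERT INTO snapshots VALUES (?,?,?,?,?,?)",
    [("2026-04-25T10:00:00", 1745575200.0, "BSBUSDT", 0.45, 170_000.0, 64_000.0),
     ("2026-04-25T10:01:00", 1745575260.0, "BSBUSDT", 0.52, 182_000.0, 58_000.0)])

# Group by hour (first 13 chars of the ISO timestamp) and keep hours
# whose average imbalance exceeds the 0.4 "elevated" threshold.
rows = conn.execute("""
    SELECT substr(ts, 1, 13) AS hour, AVG(imbalance) AS avg_imb
    FROM snapshots WHERE symbol = ?
    GROUP BY hour HAVING avg_imb > 0.4
""", ("BSBUSDT",)).fetchall()
print(rows)
```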

Anomaly Detection

The bot runs four checks on every snapshot:

Imbalance spike: Flags when bid-ask imbalance exceeds +/-0.4 (elevated) or +/-0.6 (spike). A reading of 0.4 means 70% of the order book is bids. Sustained above 0.4 means someone is loading.

Ask-side thinning: Flags when ask depth drops 30%+ below its 30-snapshot rolling average. Sell orders are being absorbed faster than they're being replaced.

Sustained imbalance: Flags when imbalance stays above 0.4 for 5+ consecutive readings. One elevated reading is noise. Five in a row is a pattern.

Funding rate extreme: Flags when funding exceeds +/-0.05%. Deeply positive = longs overheated. Deeply negative = squeeze setup.
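The imbalance-to-bid-share mapping behind these thresholds is worth internalizing; it falls straight out of the formula:

```python
# imbalance = (bid - ask) / (bid + ask). A book that is 70% bids reads
# 0.4 (the "elevated" threshold); 80% bids reads 0.6 (the "spike" threshold).
def imbalance(bid, ask):
    return (bid - ask) / (bid + ask)

print(imbalance(70, 30))   # 0.4 -> elevated
print(imbalance(80, 20))   # 0.6 -> spike
```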

Cost

$6/month for the DigitalOcean droplet. No API keys, no subscriptions, no premium tiers. At 60-second intervals tracking 5 tokens, the database grows about 50MB per month. The smallest droplet has 25GB of storage. You won't run out of space.

Published by tradinghoe · April 25, 2026
Not financial advice. Do your own research.