diff --git a/README.md b/README.md
index a6558fd..f779719 100644
--- a/README.md
+++ b/README.md
@@ -1 +1,308 @@
-# warnet-demo-project
+# Warnet Demo Project
+
+A hands-on Warnet project to help you learn Bitcoin network analysis with a simple three-node network and propagation scenario.
+
+## What is this?
+
+This is a complete Warnet project that demonstrates how to:
+
+- **Deploy a three-node Bitcoin network** using Warnet
+- **Run a network propagation test** to see how transactions flow
+- **Monitor network behavior** and observe node responses
+- **Understand basic Bitcoin network interactions**
+
+The project includes a pre-configured network (`simple-3node-network`) and a beginner-friendly scenario (`propagation_check.py`) that you can run immediately.
+
+## What is Network Propagation?
+
+**Network propagation** is how information (like new transactions and blocks) spreads through the Bitcoin network. When one node receives a new transaction, it shares that information with the other nodes it's connected to. This is how the entire Bitcoin network stays synchronized.
+
+In this demo, you'll see:
+
+- How transactions move from one node to another
+- How blocks are shared across the network
+- How nodes communicate and stay in sync
+
+## Prerequisites
+
+Before you start, you'll need:
+
+### 1. **Python 3.8 or higher**
+
+- [Download Python](https://www.python.org/downloads/)
+- Make sure you can run `python3 --version` in your terminal
+
+### 2. **A Kubernetes backend (Docker Desktop or Minikube)**
+
+Choose one of these options:
+
+**Option A: Docker Desktop (recommended for beginners)**
+
+- [Download Docker Desktop](https://www.docker.com/products/docker-desktop/)
+- Enable Kubernetes in Docker Desktop settings
+- Good for beginners because it has a graphical interface
+
+**Option B: Minikube (good for Linux users)**
+
+- [Install Minikube](https://minikube.sigs.k8s.io/docs/start/)
+- Runs a local Kubernetes cluster
+- Works well on Linux systems
+
+### 3. **Basic Terminal Knowledge**
+
+- Know how to open a terminal/command prompt
+- Understand basic commands like `cd`, `ls`, `mkdir`
+- No Bitcoin knowledge required - we'll explain everything!
+
+## Quick Start
+
+### 1. Set up Warnet
+
+Follow the [Warnet installation guide](https://github.com/bitcoin-dev-project/warnet/blob/main/README.md#quick-start) for detailed setup instructions:
+
+```sh
+# Create virtual environment (isolated Python environment)
+python3 -m venv .venv
+source .venv/bin/activate  # On Windows: .venv\Scripts\activate
+
+# Install Warnet
+pip install warnet
+
+# Set up dependencies
+warnet setup
+```
+
+When prompted, choose your preferred backend:
+
+- **Docker Desktop** - if you have Docker Desktop with Kubernetes enabled
+- **Minikube** - a local Kubernetes cluster
+
+**Recommendation**: Docker Desktop is generally easier for beginners because of its graphical interface, while Minikube works well on Linux systems.
+
+### 2. Deploy the three-node network
+
+This project includes a pre-configured three-node network. Deploy it with:
+
+```sh
+warnet deploy warnet-demo/networks/simple-3node-network
+```
+
+**What happens**: Warnet will create three Bitcoin nodes (tank-0000, tank-0001, tank-0002) that can communicate with each other.
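+
+Optionally, you can sanity-check that the nodes found each other before moving on. This is a quick check, assuming the default node names from this project; `getconnectioncount` is a standard Bitcoin Core RPC:
+
+```sh
+# Each node should report 2 connections once the network is up
+warnet bitcoin rpc tank-0000 getconnectioncount
+warnet bitcoin rpc tank-0001 getconnectioncount
+warnet bitcoin rpc tank-0002 getconnectioncount
+```
+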
+### 3. Run the propagation test
+
+Execute the included scenario to see the network in action:
+
+```sh
+warnet run warnet-demo/scenarios/propagation_check.py --debug
+```
+
+**Note**: The `--debug` flag streams detailed runtime output, making it easier to follow what's happening, but it keeps the scenario in your terminal's foreground. If you want to run other commands (like `warnet dashboard`) at the same time, omit the `--debug` flag and the scenario will run in the background.
+
+**What the scenario does**:
+
+- **Step 1**: Waits for all nodes to be ready and connected
+- **Step 2**: Generates initial blocks to create funds (like mining Bitcoin)
+- **Step 3**: Creates test transactions (like sending Bitcoin between addresses)
+- **Step 4**: Monitors transaction propagation across the 3-node network
+- **Step 5**: Shows you how nodes communicate and synchronize
+
+### 4. Monitor your network and watch the propagation
+
+#### **Option A: Web Dashboard (recommended)**
+
+Open the Warnet dashboard to visualize your network in real time:
+
+```sh
+warnet dashboard
+```
+
+This opens a web interface where you can:
+
+- See each node's status and connections
+- Monitor network activity in real time
+- View logs and metrics
+- Watch the propagation scenario execute step by step
+
+**Note**: If using Minikube, you may need to run `minikube tunnel` in another terminal first.
+
+#### **Option B: Command Line Monitoring**
+
+Monitor your network from the terminal:
+
+```sh
+# Check network status
+warnet status
+
+# Follow logs in real time
+warnet logs -f
+
+# View logs from a specific node
+warnet logs tank-0000
+```
+
+#### **Option C: Direct Node Access**
+
+Connect directly to your Bitcoin nodes using RPC (Remote Procedure Call) commands:
+
+```sh
+# Get node info
+warnet bitcoin rpc tank-0000 getnetworkinfo
+
+# Check the mempool (pending transactions waiting to be included in a block)
+warnet bitcoin rpc tank-0000 getmempoolinfo
+
+# View the current block height
+warnet bitcoin rpc tank-0000 getblockcount
+```
+
+#### **What You'll See:**
+
+When running the propagation scenario, you'll see output like this:
+
+```
+🚀 Starting Network Propagation Check scenario
+Step 1: Waiting for all nodes to be ready...
+✓ All nodes are ready and connected
+Step 2: Checking initial network state...
+Node 0 (tank-0000): Block count: 0, Connections: 2
+Node 1 (tank-0001): Block count: 0, Connections: 2
+Node 2 (tank-0002): Block count: 0, Connections: 2
+Step 3: Generating initial blocks...
+Generating 101 blocks on node 0...
+✓ All nodes synchronized at block 101
+Step 4: Creating test transactions...
+Created transaction 1: d537f2d2c9948a60...
+Created transaction 2: db6e229bd3bcc0ee...
+Step 5: Monitoring transaction propagation...
+Node 0 mempool: 5 transactions
+Node 1 mempool: 5 transactions
+Node 2 mempool: 5 transactions
+🎉 Network Propagation Check completed successfully!
+```
+
+The dashboard will show these activities happening in real time!
+
+### 5. Clean up
+
+When you're done experimenting:
+
+```sh
+warnet down
+```
+
+This stops all the nodes and frees up system resources.
+
+## What You'll Learn
+
+By running this project, you'll understand:
+
+1. **Network Deployment**: How Warnet creates and manages Bitcoin networks
+2. **Node Communication**: How Bitcoin nodes connect and share information
+3. **Transaction Propagation**: How transactions flow through the network
+4. **Network Monitoring**: How to observe network behavior in real time
+5. **Scenario Execution**: How to run and create network tests (see the sketch below)
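+
+To make "Scenario Execution" concrete, here is a minimal sketch of what a custom scenario could look like. It follows the same pattern as this project's `propagation_check.py` (subclass `Commander`, implement `set_test_params` and `run_test`); the class name and log text are illustrative, not part of the project:
+
+```python
+#!/usr/bin/env python3
+"""A minimal Warnet scenario sketch (illustrative only)."""
+
+from commander import Commander
+
+
+class BlockCountCheck(Commander):
+    def set_test_params(self):
+        # Warnet populates self.nodes from the deployed network at runtime
+        self.num_nodes = 0
+
+    def run_test(self):
+        # Ask every tank for its block height over RPC
+        for i, node in enumerate(self.nodes):
+            self.log.info(f"Node {i} ({node.tank}) is at block {node.getblockcount()}")
+
+
+def main():
+    BlockCountCheck().main()
+
+
+if __name__ == "__main__":
+    main()
+```
+
+You would save it under `warnet-demo/scenarios/` and run it the same way as the included scenario, e.g. `warnet run warnet-demo/scenarios/block_count_check.py --debug`.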
+
+## Key Bitcoin Concepts Explained
+
+### **Regtest Mode**
+
+- A testing environment where you can create your own Bitcoin network
+- No real Bitcoin is involved - it's all simulated
+- Perfect for learning and experimentation
+
+### **Blocks**
+
+- Groups of transactions that are added to the Bitcoin blockchain
+- In regtest mode, you can create blocks instantly for testing
+
+### **Mempool**
+
+- A temporary storage area for transactions waiting to be included in a block
+- Think of it as a "waiting room" for transactions
+
+### **RPC (Remote Procedure Call)**
+
+- A way to send commands to Bitcoin nodes
+- Allows you to ask nodes for information or tell them to do something
+
+## Project Structure
+
+```
+warnet-demo/
+├── warnet.yaml                    # Project configuration
+├── networks/
+│   └── simple-3node-network/      # Three-node network configuration
+│       ├── network.yaml           # Network topology
+│       └── node-defaults.yaml     # Default node settings
+└── scenarios/
+    └── propagation_check.py       # Network propagation test scenario
+```
+
+## The Network: simple-3node-network
+
+This is a simple three-node Bitcoin network:
+
+```
+tank-0000
+tank-0001
+tank-0002
+```
+
+These three nodes run Bitcoin Core 26.0 in regtest mode, allowing you to observe how transactions propagate between nodes in a small network.
+
+## The Scenario: propagation_check.py
+
+This scenario demonstrates network propagation:
+
+1. **Network Readiness Check**: Ensures all nodes are online and connected
+2. **Block Generation**: Creates initial funds by mining blocks
+3. **Transaction Creation**: Sends test transactions across the network
+4. **Propagation Monitoring**: Observes how transactions spread between nodes
+5. **Network State Analysis**: Shows the final state of all nodes
+
+## Optional: Real Attack Scenario
+
+If you want to see how nodes can be affected by malicious behavior, you can also run the attack scenario from the [Battle of Galen Erso](https://github.com/bitcoin-dev-project/battle-of-galen-erso) project:
+
+```sh
+# Clone the Galen Erso project
+git clone https://github.com/bitcoin-dev-project/battle-of-galen-erso.git
+cd battle-of-galen-erso
+
+# Run the attack scenario
+warnet run scenarios/my_first_attack_5kinv.py
+```
+
+This demonstrates a real attack, where you can watch a node "die" or become unresponsive due to malicious network behavior.
+
+## Troubleshooting
+
+### **Common Issues:**
+
+- **Network won't start**: Make sure your chosen backend (Docker Desktop/Minikube) is running
+- **Scenario fails**: Check that the network is fully deployed first with `warnet status`
+- **Dashboard not accessible**:
+  - For Minikube: Run `minikube tunnel` in another terminal
+  - For Docker Desktop: Check that Kubernetes is enabled
+- **Setup fails**: Make sure you have Docker installed and running
+- **"Command not found"**: Make sure you've activated the virtual environment with `source .venv/bin/activate`
+
+### **Getting Help:**
+
+If you encounter issues:
+
+1. Check the [Warnet documentation](https://github.com/bitcoin-dev-project/warnet/blob/main/README.md)
+2. Look at the logs with `warnet logs -f`
+3. Make sure all prerequisites are installed correctly
+
+## Next Steps
+
+After completing this tutorial, you can:
+
+1. **Try different wait times**: Run the scenario with `--wait-time 20` to give transactions more time to propagate before the mempools are checked
+2. **Add more nodes**: Modify `networks/simple-3node-network/network.yaml` to create larger networks (see the sketch below)
+3. **Explore the dashboard**: Try different monitoring options to see network activity
+4. **Learn more**: Check out the main [Warnet documentation](https://github.com/bitcoin-dev-project/warnet/blob/main/README.md) for advanced features
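+
+For example, to add a fourth node you could append another entry under `nodes:`. This is a sketch only - the name `tank-0003` is illustrative, and the settings simply mirror the existing entries in this project's `network.yaml`:
+
+```yaml
+nodes:
+  # ...existing tank-0000 through tank-0002 entries...
+  - name: tank-0003
+    bitcoin_config: |
+      server=1
+      rpcuser=warnet
+      rpcpassword=warnet
+      rpcallowip=0.0.0.0/0
+      regtest=1
+      listen=1
+      bind=0.0.0.0
+```
+
+Then redeploy with `warnet deploy warnet-demo/networks/simple-3node-network`.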
diff --git a/warnet-demo/networks/simple-3node-network/network.yaml b/warnet-demo/networks/simple-3node-network/network.yaml
new file mode 100644
index 0000000..f6f09ad
--- /dev/null
+++ b/warnet-demo/networks/simple-3node-network/network.yaml
@@ -0,0 +1,56 @@
+# Simple three-node Bitcoin network for learning Warnet
+# This creates a small network topology to demonstrate propagation
+
+nodes:
+  - name: tank-0000
+    bitcoin_config: |
+      server=1
+      rpcuser=warnet
+      rpcpassword=warnet
+      rpcallowip=0.0.0.0/0
+      regtest=1
+      listen=1
+      bind=0.0.0.0
+      blockmintxfee=0.00001
+      debug=net
+      logips=1
+  - name: tank-0001
+    bitcoin_config: |
+      server=1
+      rpcuser=warnet
+      rpcpassword=warnet
+      rpcallowip=0.0.0.0/0
+      regtest=1
+      listen=1
+      bind=0.0.0.0
+      blockmintxfee=0.00001
+      debug=net
+      logips=1
+  - name: tank-0002
+    bitcoin_config: |
+      server=1
+      rpcuser=warnet
+      rpcpassword=warnet
+      rpcallowip=0.0.0.0/0
+      regtest=1
+      listen=1
+      bind=0.0.0.0
+      blockmintxfee=0.00001
+      debug=net
+      logips=1
+
+
+
+# Network-wide settings
+chain: regtest
+
+# Dashboard and monitoring settings
+caddy:
+  enabled: true
+grafana:
+  enabled: true
+prometheus:
+  enabled: true
+fork_observer:
+  configQueryInterval: 20
+  enabled: true
\ No newline at end of file
diff --git a/warnet-demo/networks/simple-3node-network/node-defaults.yaml b/warnet-demo/networks/simple-3node-network/node-defaults.yaml
new file mode 100644
index 0000000..e206f6f
--- /dev/null
+++ b/warnet-demo/networks/simple-3node-network/node-defaults.yaml
@@ -0,0 +1,17 @@
+# Default configuration for all nodes in this Warnet project
+# These settings will be applied to all nodes unless overridden
+
+bitcoin_config: |
+  server=1
+  rpcuser=warnet
+  rpcpassword=warnet
+  rpcallowip=0.0.0.0/0
+  regtest=1
+  listen=1
+  bind=0.0.0.0
+  blockmintxfee=0.00001
+  debug=net
+  logips=1
+
+# Default chain for all networks
+chain: regtest
\ No newline at end of file
diff --git a/warnet-demo/scenarios/__init__.py b/warnet-demo/scenarios/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/warnet-demo/scenarios/commander.py b/warnet-demo/scenarios/commander.py
new file mode 100644
index 0000000..4a44ab4
--- /dev/null
+++ b/warnet-demo/scenarios/commander.py
@@ -0,0 +1,500 @@
+import argparse
+import base64
+import configparser
+import json
+import logging
+import os
+import pathlib
+import random
+import signal
+import sys
+import tempfile
+import threading
+from time import sleep
+
+from kubernetes import client, config
+from ln_framework.ln import CLN, LND, LNNode
+from test_framework.authproxy import AuthServiceProxy
+from test_framework.p2p import NetworkThread
+from test_framework.test_framework import (
+    TMPDIR_PREFIX,
+    BitcoinTestFramework,
+    TestStatus,
+)
+from test_framework.test_node import TestNode
+from test_framework.util import PortSeed, get_rpc_proxy
+
+NAMESPACE = None
+pods = client.V1PodList(items=[])
+cmaps = client.V1ConfigMapList(items=[])
+
+try:
+    # Get the in-cluster k8s client to determine what we have access to
+    config.load_incluster_config()
+    sclient = client.CoreV1Api()
+
+    # Figure out what namespace we are in
+    with open("/var/run/secrets/kubernetes.io/serviceaccount/namespace") as f:
+        NAMESPACE = 
f.read().strip() + + try: + # An admin with cluster access can list everything. + # A wargames player with namespaced access will get a FORBIDDEN error here + pods = sclient.list_pod_for_all_namespaces() + cmaps = sclient.list_config_map_for_all_namespaces() + except Exception: + # Just get whatever we have access to in this namespace only + pods = sclient.list_namespaced_pod(namespace=NAMESPACE) + cmaps = sclient.list_namespaced_config_map(namespace=NAMESPACE) +except Exception: + # If there is no cluster config, the user might just be + # running the scenario file locally with --help + pass + +WARNET = {"tanks": [], "lightning": [], "channels": []} +for pod in pods.items: + if "mission" not in pod.metadata.labels: + continue + + if pod.metadata.labels["mission"] == "tank": + WARNET["tanks"].append( + { + "tank": pod.metadata.name, + "chain": pod.metadata.labels["chain"], + "rpc_host": pod.status.pod_ip, + "rpc_port": int(pod.metadata.labels["RPCPort"]), + "rpc_user": "user", + "rpc_password": pod.metadata.labels["rpcpassword"], + "init_peers": pod.metadata.annotations["init_peers"], + } + ) + + if pod.metadata.labels["mission"] == "lightning": + lnnode = LND(pod.metadata.name, pod.status.pod_ip) + if "cln" in pod.metadata.labels["app.kubernetes.io/name"]: + lnnode = CLN(pod.metadata.name, pod.status.pod_ip) + WARNET["lightning"].append(lnnode) + +for cm in cmaps.items: + if not cm.metadata.labels or "channels" not in cm.metadata.labels: + continue + channel_jsons = json.loads(cm.data["channels"]) + for channel_json in channel_jsons: + channel_json["source"] = cm.data["source"] + WARNET["channels"].append(channel_json) + + +# Ensure that all RPC calls are made with brand new http connections +def auth_proxy_request(self, method, path, postdata): + self._set_conn() # creates new http client connection + return self.oldrequest(method, path, postdata) + + +AuthServiceProxy.oldrequest = AuthServiceProxy._request +AuthServiceProxy._request = auth_proxy_request + + +# Create a custom formatter +class ColorFormatter(logging.Formatter): + """Custom formatter to add color based on log level.""" + + # Define ANSI color codes + RED = "\033[91m" + YELLOW = "\033[93m" + GREEN = "\033[92m" + RESET = "\033[0m" + + FORMATS = { + logging.DEBUG: f"{RESET}%(name)-8s - Thread-%(thread)d - %(message)s{RESET}", + logging.INFO: f"{RESET}%(name)-8s - %(message)s{RESET}", + logging.WARNING: f"{YELLOW}%(name)-8s - %(message)s{RESET}", + logging.ERROR: f"{RED}%(name)-8s - %(message)s{RESET}", + logging.CRITICAL: f"{RED}##%(name)-8s - %(message)s##{RESET}", + } + + def format(self, record): + log_fmt = self.FORMATS.get(record.levelno) + formatter = logging.Formatter(log_fmt) + return formatter.format(record) + + +class Commander(BitcoinTestFramework): + # required by subclasses of BitcoinTestFramework + def set_test_params(self): + pass + + def run_test(self): + pass + + # Utility functions for Warnet scenarios + @staticmethod + def ensure_miner(node): + wallets = node.listwallets() + if "miner" not in wallets: + node.createwallet("miner", descriptors=True) + return node.get_wallet_rpc("miner") + + @staticmethod + def hex_to_b64(hex): + return base64.b64encode(bytes.fromhex(hex)).decode() + + @staticmethod + def b64_to_hex(b64, reverse=False): + if reverse: + return base64.b64decode(b64)[::-1].hex() + else: + return base64.b64decode(b64).hex() + + def wait_for_tanks_connected(self): + def tank_connected(self, tank): + while True: + peers = tank.getpeerinfo() + count = sum( + 1 + for peer in peers + if 
peer.get("connection_type") == "manual" or peer.get("addnode") is True + ) + self.log.info(f"Tank {tank.tank} connected to {count}/{tank.init_peers} peers") + if count >= tank.init_peers: + break + else: + sleep(5) + + conn_threads = [ + threading.Thread(target=tank_connected, args=(self, tank)) for tank in self.nodes + ] + for thread in conn_threads: + thread.start() + + all(thread.join() is None for thread in conn_threads) + self.log.info("Network connected") + + def handle_sigterm(self, signum, frame): + print("SIGTERM received, stopping...") + self.shutdown() + sys.exit(0) + + # The following functions are chopped-up hacks of + # the original methods from BitcoinTestFramework + + def setup(self): + signal.signal(signal.SIGTERM, self.handle_sigterm) + + # hacked from _start_logging() + # Scenarios will log plain messages to stdout only, which will can redirected by warnet + self.log = logging.getLogger(self.__class__.__name__) + self.log.setLevel(logging.INFO) # set this to DEBUG to see ALL RPC CALLS + + # Because scenarios run in their own subprocess, the logger here + # is not the same as the warnet server or other global loggers. + # Scenarios log directly to stdout which gets picked up by the + # subprocess manager in the server, and reprinted to the global log. + ch = logging.StreamHandler(sys.stdout) + ch.setFormatter(ColorFormatter()) + self.log.addHandler(ch) + + # Keep a separate index of tanks by pod name + self.tanks: dict[str, TestNode] = {} + self.lns: dict[str, LNNode] = {} + self.channels = WARNET["channels"] + + for i, tank in enumerate(WARNET["tanks"]): + self.log.info( + f"Adding TestNode #{i} from pod {tank['tank']} with IP {tank['rpc_host']}" + ) + node = TestNode( + i, + pathlib.Path(), # datadir path + chain=tank["chain"], + rpchost=tank["rpc_host"], + timewait=60, + timeout_factor=self.options.timeout_factor, + bitcoind=None, + bitcoin_cli=None, + cwd=self.options.tmpdir, + coverage_dir=self.options.coveragedir, + ) + node.tank = tank["tank"] + node.rpc = get_rpc_proxy( + f"http://{tank['rpc_user']}:{tank['rpc_password']}@{tank['rpc_host']}:{tank['rpc_port']}", + i, + timeout=60, + coveragedir=self.options.coveragedir, + ) + node.rpc_connected = True + node.init_peers = int(tank["init_peers"]) + + self.nodes.append(node) + self.tanks[tank["tank"]] = node + + for ln in WARNET["lightning"]: + self.lns[ln.name] = ln + + self.num_nodes = len(self.nodes) + + # Set up temp directory and start logging + if self.options.tmpdir: + self.options.tmpdir = os.path.abspath(self.options.tmpdir) + os.makedirs(self.options.tmpdir, exist_ok=False) + else: + self.options.tmpdir = tempfile.mkdtemp(prefix=TMPDIR_PREFIX) + + seed = self.options.randomseed + if seed is None: + seed = random.randrange(sys.maxsize) + else: + self.log.info(f"User supplied random seed {seed}") + random.seed(seed) + self.log.info(f"PRNG seed is: {seed}") + + self.log.debug("Setting up network thread") + self.network_thread = NetworkThread() + self.network_thread.start() + + self.success = TestStatus.PASSED + + def parse_args(self): + # Only print "outer" args from parent class when using --help + help_parser = argparse.ArgumentParser(usage="%(prog)s [options]") + self.add_options(help_parser) + help_args, _ = help_parser.parse_known_args() + # Check if 'help' attribute exists in help_args before accessing it + if hasattr(help_args, "help") and help_args.help: + help_parser.print_help() + sys.exit(0) + + previous_releases_path = "" + parser = argparse.ArgumentParser(usage="%(prog)s [options]") + 
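+        # These options come from Bitcoin Core's BitcoinTestFramework argument
+        # parser; several of them (e.g. --valgrind, --perf, --usecli) are kept
+        # mainly for compatibility and have little or no effect under Warnet.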
parser.add_argument( + "--nocleanup", + dest="nocleanup", + default=False, + action="store_true", + help="Leave bitcoinds and test.* datadir on exit or error", + ) + parser.add_argument( + "--nosandbox", + dest="nosandbox", + default=False, + action="store_true", + help="Don't use the syscall sandbox", + ) + parser.add_argument( + "--noshutdown", + dest="noshutdown", + default=False, + action="store_true", + help="Don't stop bitcoinds after the test execution", + ) + parser.add_argument( + "--cachedir", + dest="cachedir", + default=None, + help="Directory for caching pregenerated datadirs (default: %(default)s)", + ) + parser.add_argument( + "--tmpdir", dest="tmpdir", default=None, help="Root directory for datadirs" + ) + parser.add_argument( + "-l", + "--loglevel", + dest="loglevel", + default="DEBUG", + help="log events at this level and higher to the console. Can be set to DEBUG, INFO, WARNING, ERROR or CRITICAL. Passing --loglevel DEBUG will output all logs to console. Note that logs at all levels are always written to the test_framework.log file in the temporary test directory.", + ) + parser.add_argument( + "--tracerpc", + dest="trace_rpc", + default=False, + action="store_true", + help="Print out all RPC calls as they are made", + ) + parser.add_argument( + "--portseed", + dest="port_seed", + default=0, + help="The seed to use for assigning port numbers (default: current process id)", + ) + parser.add_argument( + "--previous-releases", + dest="prev_releases", + default=None, + action="store_true", + help="Force test of previous releases (default: %(default)s)", + ) + parser.add_argument( + "--coveragedir", + dest="coveragedir", + default=None, + help="Write tested RPC commands into this directory", + ) + parser.add_argument( + "--configfile", + dest="configfile", + default=None, + help="Location of the test framework config file (default: %(default)s)", + ) + parser.add_argument( + "--pdbonfailure", + dest="pdbonfailure", + default=False, + action="store_true", + help="Attach a python debugger if test fails", + ) + parser.add_argument( + "--usecli", + dest="usecli", + default=False, + action="store_true", + help="use bitcoin-cli instead of RPC for all commands", + ) + parser.add_argument( + "--perf", + dest="perf", + default=False, + action="store_true", + help="profile running nodes with perf for the duration of the test", + ) + parser.add_argument( + "--valgrind", + dest="valgrind", + default=False, + action="store_true", + help="run nodes under the valgrind memory error detector: expect at least a ~10x slowdown. valgrind 3.14 or later required.", + ) + parser.add_argument( + "--randomseed", + default=0x7761726E6574, # "warnet" ascii + help="set a random seed for deterministically reproducing a previous test run", + ) + parser.add_argument( + "--timeout-factor", + dest="timeout_factor", + default=1, + help="adjust test timeouts by a factor. 
Setting it to 0 disables all timeouts", + ) + parser.add_argument( + "--network", + dest="network", + default="warnet", + help="Designate which warnet this should run on (default: warnet)", + ) + parser.add_argument( + "--v2transport", + dest="v2transport", + default=False, + action="store_true", + help="use BIP324 v2 connections between all nodes by default", + ) + + self.add_options(parser) + # Running TestShell in a Jupyter notebook causes an additional -f argument + # To keep TestShell from failing with an "unrecognized argument" error, we add a dummy "-f" argument + # source: https://stackoverflow.com/questions/48796169/how-to-fix-ipykernel-launcher-py-error-unrecognized-arguments-in-jupyter/56349168#56349168 + parser.add_argument("-f", "--fff", help="a dummy argument to fool ipython", default="1") + self.options = parser.parse_args() + if self.options.timeout_factor == 0: + self.options.timeout_factor = 99999 + self.options.timeout_factor = self.options.timeout_factor or ( + 4 if self.options.valgrind else 1 + ) + self.options.previous_releases_path = previous_releases_path + config = configparser.ConfigParser() + if self.options.configfile is not None: + with open(self.options.configfile) as f: + config.read_file(f) + + config["environment"] = {"PACKAGE_BUGREPORT": ""} + + self.config = config + + if "descriptors" not in self.options: + # Wallet is not required by the test at all and the value of self.options.descriptors won't matter. + # It still needs to exist and be None in order for tests to work however. + # So set it to None to force -disablewallet, because the wallet is not needed. + self.options.descriptors = None + elif self.options.descriptors is None: + # Some wallet is either required or optionally used by the test. + # Prefer SQLite unless it isn't available + if self.is_sqlite_compiled(): + self.options.descriptors = True + elif self.is_bdb_compiled(): + self.options.descriptors = False + else: + # If neither are compiled, tests requiring a wallet will be skipped and the value of self.options.descriptors won't matter + # It still needs to exist and be None in order for tests to work however. + # So set it to None, which will also set -disablewallet. + self.options.descriptors = None + + PortSeed.n = self.options.port_seed + + def connect_nodes(self, a, b, *, peer_advertises_v2=None, wait_for_connect: bool = True): + """ + Kwargs: + wait_for_connect: if True, block until the nodes are verified as connected. You might + want to disable this when using -stopatheight with one of the connected nodes, + since there will be a race between the actual connection and performing + the assertions before one node shuts down. 
+ """ + from_connection = self.nodes[a] + to_connection = self.nodes[b] + from_num_peers = 1 + len(from_connection.getpeerinfo()) + to_num_peers = 1 + len(to_connection.getpeerinfo()) + ip_port = self.nodes[b].rpchost + ":18444" + + if peer_advertises_v2 is None: + peer_advertises_v2 = self.options.v2transport + + if peer_advertises_v2: + from_connection.addnode(node=ip_port, command="onetry", v2transport=True) + else: + # skip the optional third argument (default false) for + # compatibility with older clients + from_connection.addnode(ip_port, "onetry") + + if not wait_for_connect: + return + + # poll until version handshake complete to avoid race conditions + # with transaction relaying + # See comments in net_processing: + # * Must have a version message before anything else + # * Must have a verack message before anything else + self.wait_until( + lambda: sum(peer["version"] != 0 for peer in from_connection.getpeerinfo()) + == from_num_peers + ) + self.wait_until( + lambda: sum(peer["version"] != 0 for peer in to_connection.getpeerinfo()) + == to_num_peers + ) + self.wait_until( + lambda: sum( + peer["bytesrecv_per_msg"].pop("verack", 0) >= 21 + for peer in from_connection.getpeerinfo() + ) + == from_num_peers + ) + self.wait_until( + lambda: sum( + peer["bytesrecv_per_msg"].pop("verack", 0) >= 21 + for peer in to_connection.getpeerinfo() + ) + == to_num_peers + ) + # The message bytes are counted before processing the message, so make + # sure it was fully processed by waiting for a ping. + self.wait_until( + lambda: sum( + peer["bytesrecv_per_msg"].pop("pong", 0) >= 29 + for peer in from_connection.getpeerinfo() + ) + == from_num_peers + ) + self.wait_until( + lambda: sum( + peer["bytesrecv_per_msg"].pop("pong", 0) >= 29 + for peer in to_connection.getpeerinfo() + ) + == to_num_peers + ) diff --git a/warnet-demo/scenarios/ln_framework/__init__.py b/warnet-demo/scenarios/ln_framework/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/warnet-demo/scenarios/ln_framework/ln.py b/warnet-demo/scenarios/ln_framework/ln.py new file mode 100644 index 0000000..f2ec4cb --- /dev/null +++ b/warnet-demo/scenarios/ln_framework/ln.py @@ -0,0 +1,529 @@ +import base64 +import http.client +import json +import logging +import ssl +from abc import ABC, abstractmethod +from time import sleep + +import requests + +# hard-coded deterministic lnd credentials +ADMIN_MACAROON_HEX = "0201036c6e6402f801030a1062beabbf2a614b112128afa0c0b4fdd61201301a160a0761646472657373120472656164120577726974651a130a04696e666f120472656164120577726974651a170a08696e766f69636573120472656164120577726974651a210a086d616361726f6f6e120867656e6572617465120472656164120577726974651a160a076d657373616765120472656164120577726974651a170a086f6666636861696e120472656164120577726974651a160a076f6e636861696e120472656164120577726974651a140a057065657273120472656164120577726974651a180a067369676e6572120867656e657261746512047265616400000620b17be53e367290871681055d0de15587f6d1cd47d1248fe2662ae27f62cfbdc6" +# Don't worry about lnd's self-signed certificates +INSECURE_CONTEXT = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT) +INSECURE_CONTEXT.check_hostname = False +INSECURE_CONTEXT.verify_mode = ssl.CERT_NONE + + +# https://github.com/lightningcn/lightning-rfc/blob/master/07-routing-gossip.md#the-channel_update-message +# We use the field names as written in the BOLT as our canonical, internal field names. 
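+# (BOLT #7 channel_update fields: cltv_expiry_delta, htlc_minimum_msat,
+# fee_base_msat, fee_proportional_millionths, htlc_maximum_msat.)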
+# In LND, Policy objects returned by DescribeGraph have completely different labels +# than policy objects expected by the UpdateChannelPolicy API, and neither +# of these are the names used in the BOLT... +class Policy: + def __init__( + self, + cltv_expiry_delta: int, + htlc_minimum_msat: int, + fee_base_msat: int, + fee_proportional_millionths: int, + htlc_maximum_msat: int, + ): + self.cltv_expiry_delta = cltv_expiry_delta + self.htlc_minimum_msat = htlc_minimum_msat + self.fee_base_msat = fee_base_msat + self.fee_proportional_millionths = fee_proportional_millionths + self.htlc_maximum_msat = htlc_maximum_msat + + @classmethod + def from_lnd_describegraph(cls, policy: dict): + return cls( + cltv_expiry_delta=int(policy.get("time_lock_delta")), + htlc_minimum_msat=int(policy.get("min_htlc")), + fee_base_msat=int(policy.get("fee_base_msat")), + fee_proportional_millionths=int(policy.get("fee_rate_milli_msat")), + htlc_maximum_msat=int(policy.get("max_htlc_msat")), + ) + + @classmethod + def from_dict(cls, policy: dict): + return cls( + cltv_expiry_delta=policy.get("cltv_expiry_delta"), + htlc_minimum_msat=policy.get("htlc_minimum_msat"), + fee_base_msat=policy.get("fee_base_msat"), + fee_proportional_millionths=policy.get("fee_proportional_millionths"), + htlc_maximum_msat=policy.get("htlc_maximum_msat"), + ) + + def to_dict(self): + return { + "cltv_expiry_delta": self.cltv_expiry_delta, + "htlc_minimum_msat": self.htlc_minimum_msat, + "fee_base_msat": self.fee_base_msat, + "fee_proportional_millionths": self.fee_proportional_millionths, + "htlc_maximum_msat": self.htlc_maximum_msat, + } + + def to_lnd_chanpolicy(self, capacity): + # LND requires a 1% reserve + reserve = ((capacity * 99) // 100) * 1000 + return { + "time_lock_delta": self.cltv_expiry_delta, + "min_htlc_msat": self.htlc_minimum_msat, + "base_fee_msat": self.fee_base_msat, + "fee_rate_ppm": self.fee_proportional_millionths, + "max_htlc_msat": min(self.htlc_maximum_msat, reserve), + "min_htlc_msat_specified": True, + } + + +class LNNode(ABC): + @abstractmethod + def __init__(self, pod_name, ip_address): + self.name = pod_name + self.ip_address = ip_address + self.log = logging.getLogger(pod_name) + handler = logging.StreamHandler() + formatter = logging.Formatter("%(name)-8s - %(levelname)s: %(message)s") + handler.setFormatter(formatter) + self.log.addHandler(handler) + self.log.setLevel(logging.INFO) + + @staticmethod + def hex_to_b64(hex): + return base64.b64encode(bytes.fromhex(hex)).decode() + + @staticmethod + def b64_to_hex(b64, reverse=False): + if reverse: + return base64.b64decode(b64)[::-1].hex() + else: + return base64.b64decode(b64).hex() + + @abstractmethod + def newaddress(self, max_tries=10) -> tuple[bool, str]: + pass + + @abstractmethod + def uri(self) -> str: + pass + + @abstractmethod + def walletbalance(self) -> int: + pass + + @abstractmethod + def connect(self, target_uri) -> dict: + pass + + @abstractmethod + def channel(self, pk, capacity, push_amt, fee_rate) -> dict: + pass + + @abstractmethod + def graph(self) -> dict: + pass + + @abstractmethod + def update(self, txid_hex: str, policy: dict, capacity: int) -> dict: + pass + + +class CLN(LNNode): + def __init__(self, pod_name, ip_address): + super().__init__(pod_name, ip_address) + self.conn = None + self.headers = {} + self.impl = "cln" + self.reset_connection() + + def reset_connection(self): + self.conn = http.client.HTTPSConnection( + host=self.name, port=3010, timeout=5, context=INSECURE_CONTEXT + ) + + def setRune(self, rune): + 
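+        # CLN's REST API authenticates each request with a rune token
+        # passed in the "Rune" HTTP header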
self.headers = {"Rune": rune} + + def get(self, uri): + attempt = 0 + while True: + try: + self.log.warning(f"headers: {self.headers}") + self.conn.request( + method="GET", + url=uri, + headers=self.headers, + ) + return self.conn.getresponse().read().decode("utf8") + except Exception as e: + self.reset_connection() + attempt += 1 + if attempt > 5: + self.log.error(f"Error CLN GET, Abort: {e}") + return None + sleep(1) + + def post(self, uri, data=None): + if not data: + data = {} + body = json.dumps(data) + post_header = self.headers + post_header["Content-Length"] = str(len(body)) + post_header["Content-Type"] = "application/json" + attempt = 0 + while True: + try: + self.conn.request( + method="POST", + url=uri, + body=body, + headers=post_header, + ) + # Stream output, otherwise we get a timeout error + res = self.conn.getresponse() + stream = "" + while True: + try: + data = res.read(1) + if len(data) == 0: + break + else: + stream += data.decode("utf8") + except Exception: + break + return stream + except Exception as e: + self.reset_connection() + attempt += 1 + if attempt > 5: + self.log.error(f"Error CLN POST, Abort: {e}") + return None + sleep(1) + + def createrune(self, max_tries=2): + attempt = 0 + while attempt < max_tries: + attempt += 1 + response = requests.get(f"http://{self.ip_address}:8080/rune.json", timeout=5).text + if not response: + sleep(2) + continue + self.log.debug(response) + res = json.loads(response) + self.setRune(res["rune"]) + return + raise Exception(f"Unable to fetch rune from {self.name}") + + def newaddress(self, max_tries=2): + self.createrune() + attempt = 0 + while attempt < max_tries: + attempt += 1 + response = self.post("/v1/newaddr") + if not response: + sleep(2) + continue + res = json.loads(response) + if "bech32" in res: + return True, res["bech32"] + else: + self.log.warning( + f"Couldn't get wallet address from {self.name}:\n {res}\n wait and retry..." 
+ ) + sleep(2) + return False, "" + + def uri(self): + res = json.loads(self.post("/v1/getinfo")) + if len(res["address"]) < 1: + return None + return f"{res['id']}@{res['address'][0]['address']}:{res['address'][0]['port']}" + + def walletbalance(self, max_tries=2) -> int: + attempt = 0 + while attempt < max_tries: + attempt += 1 + response = self.post("/v1/listfunds") + if not response: + sleep(2) + continue + res = json.loads(response) + return int(sum(o["amount_msat"] for o in res["outputs"]) / 1000) + return 0 + + def channelbalance(self, max_tries=2) -> int: + attempt = 0 + while attempt < max_tries: + attempt += 1 + response = self.post("/v1/listfunds") + if not response: + sleep(2) + continue + res = json.loads(response) + return int(sum(o["our_amount_msat"] for o in res["channels"]) / 1000) + return 0 + + def connect(self, target_uri, max_tries=5) -> dict: + attempt = 0 + while attempt < max_tries: + attempt += 1 + response = self.post("/v1/connect", {"id": target_uri}) + if response: + res = json.loads(response) + if "id" in res: + return {} + elif "code" in res and res["code"] == 402: + self.log.warning(f"failed connect 402: {response}, wait and retry...") + sleep(5) + else: + return res + else: + self.log.debug(f"connect response: {response}, wait and retry...") + sleep(2) + return None + + def channel(self, pk, capacity, push_amt, fee_rate, max_tries=5) -> dict: + data = { + "amount": capacity, + "push_msat": push_amt, + "id": pk, + "feerate": fee_rate, + } + attempt = 0 + while attempt < max_tries: + attempt += 1 + response = self.post("/v1/fundchannel", data) + if response: + res = json.loads(response) + if "txid" in res: + return {"txid": res["txid"], "outpoint": f"{res['txid']}:{res['outnum']}"} + else: + self.log.warning(f"unable to open channel: {res}, wait and retry...") + sleep(1) + else: + self.log.debug(f"channel response: {response}, wait and retry...") + sleep(2) + return None + + def createinvoice(self, sats, label, description="new invoice") -> str: + response = self.post( + "invoice", {"amount_msat": sats * 1000, "label": label, "description": description} + ) + if response: + res = json.loads(response) + return res["bolt11"] + return None + + def payinvoice(self, payment_request) -> str: + response = self.post("/v1/pay", {"bolt11": payment_request}) + if response: + res = json.loads(response) + if "code" in res: + return res["message"] + else: + return res["payment_hash"] + return None + + def graph(self, max_tries=2) -> dict: + attempt = 0 + while attempt < max_tries: + attempt += 1 + response = self.post("/v1/listchannels") + if response: + res = json.loads(response) + if "channels" in res: + # Map to desired output + filtered_channels = [ch for ch in res["channels"] if ch["direction"] == 1] + # Sort by short_channel_id - block -> index -> output + sorted_channels = sorted(filtered_channels, key=lambda x: x["short_channel_id"]) + # Add capacity by dividing amount_msat by 1000 + for channel in sorted_channels: + channel["capacity"] = channel["amount_msat"] // 1000 + return {"edges": sorted_channels} + else: + self.log.warning(f"unable to open channel: {res}, wait and retry...") + sleep(1) + else: + self.log.debug(f"channel response: {response}, wait and retry...") + sleep(2) + return None + + def update(self, txid_hex: str, policy: dict, capacity: int, max_tries=2) -> dict: + self.log.warning("Channel Policy Updates not supported by CLN yet!") + return None + + +class LND(LNNode): + def __init__(self, pod_name, ip_address): + super().__init__(pod_name, 
ip_address) + self.conn = http.client.HTTPSConnection( + host=pod_name, port=8080, timeout=5, context=INSECURE_CONTEXT + ) + self.headers = { + "Grpc-Metadata-macaroon": ADMIN_MACAROON_HEX, + "Connection": "close", + } + self.impl = "lnd" + + def reset_connection(self): + self.conn = http.client.HTTPSConnection( + host=self.name, port=8080, timeout=5, context=INSECURE_CONTEXT + ) + + def get(self, uri): + attempt = 0 + while True: + try: + self.conn.request( + method="GET", + url=uri, + headers=self.headers, + ) + return self.conn.getresponse().read().decode("utf8") + except Exception as e: + self.reset_connection() + attempt += 1 + if attempt > 5: + self.log.error(f"Error LND POST, Abort: {e}") + return None + sleep(1) + + def post(self, uri, data): + body = json.dumps(data) + post_header = self.headers + post_header["Content-Length"] = str(len(body)) + post_header["Content-Type"] = "application/json" + attempt = 0 + while True: + try: + self.conn.request( + method="POST", + url=uri, + body=body, + headers=post_header, + ) + # Stream output, otherwise we get a timeout error + res = self.conn.getresponse() + stream = "" + while True: + try: + data = res.read(1) + if len(data) == 0: + break + else: + stream += data.decode("utf8") + except Exception: + break + return stream + except Exception as e: + self.reset_connection() + attempt += 1 + if attempt > 5: + self.log.error(f"Error LND POST, Abort: {e}") + return None + sleep(1) + + def newaddress(self, max_tries=10): + attempt = 0 + while attempt < max_tries: + attempt += 1 + response = self.get("/v1/newaddress") + res = json.loads(response) + if "address" in res: + return True, res["address"] + else: + self.log.warning( + f"Couldn't get wallet address from {self.name}:\n {res}\n wait and retry..." + ) + sleep(1) + return False, "" + + def walletbalance(self) -> int: + res = self.get("/v1/balance/blockchain") + return int(json.loads(res)["confirmed_balance"]) + + def channelbalance(self) -> int: + res = self.get("/v1/balance/channels") + return int(json.loads(res)["balance"]) + + def uri(self): + res = self.get("/v1/getinfo") + info = json.loads(res) + if "uris" not in info or len(info["uris"]) == 0: + return None + return info["uris"][0] + + def connect(self, target_uri): + pk, host = target_uri.split("@") + res = self.post("/v1/peers", data={"addr": {"pubkey": pk, "host": host}}) + return json.loads(res) + + def channel(self, pk, capacity, push_amt, fee_rate, max_tries=2): + b64_pk = self.hex_to_b64(pk) + attempt = 0 + while attempt < max_tries: + attempt += 1 + response = self.post( + "/v1/channels/stream", + data={ + "local_funding_amount": capacity, + "push_sat": push_amt, + "node_pubkey": b64_pk, + "sat_per_vbyte": fee_rate, + }, + ) + try: + res = json.loads(response) + if "result" in res: + res["txid"] = self.b64_to_hex( + res["result"]["chan_pending"]["txid"], reverse=True + ) + res["outpoint"] = ( + f"{res['txid']}:{res['result']['chan_pending']['output_index']}" + ) + return res + self.log.warning(f"Open LND channel error: {res}") + except Exception as e: + self.log.error(f"Error opening LND channel: {e}") + sleep(2) + return None + + def update(self, txid_hex: str, policy: dict, capacity: int): + ln_policy = Policy.from_dict(policy).to_lnd_chanpolicy(capacity) + data = {"chan_point": {"funding_txid_str": txid_hex, "output_index": 0}, **ln_policy} + res = self.post( + "/v1/chanpolicy", + # Policy objects returned by DescribeGraph have + # completely different labels than policy objects expected + # by the UpdateChannelPolicy API. 
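+            # Policy.to_lnd_chanpolicy() above performs that field-name
+            # translation before the request is sent.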
+ data=data, + ) + return json.loads(res) + + def createinvoice(self, sats, label, description="new invoice") -> str: + b64_desc = base64.b64encode(description.encode("utf-8")) + response = self.post( + "/v1/invoices", data={"value": sats, "memo": label, "description_hash": b64_desc} + ) + if response: + res = json.loads(response) + return res["payment_request"] + return None + + def payinvoice(self, payment_request) -> str: + response = self.post( + "/v1/channels/transaction-stream", data={"payment_request": payment_request} + ) + if response: + res = json.loads(response) + if "payment_error" in res: + return res["payment_error"] + else: + return res["payment_hash"] + return None + + def graph(self): + res = self.get("/v1/graph") + return json.loads(res) diff --git a/warnet-demo/scenarios/propagation_check.py b/warnet-demo/scenarios/propagation_check.py new file mode 100644 index 0000000..5b28959 --- /dev/null +++ b/warnet-demo/scenarios/propagation_check.py @@ -0,0 +1,175 @@ +#!/usr/bin/env python3 +""" +Network Propagation Check Scenario + +A simple scenario to demonstrate network propagation with a 3-node network. +This scenario shows how to: +1. Connect to nodes +2. Monitor network state +3. Perform basic network operations +4. Observe transaction propagation between nodes + +Usage: + warnet run scenarios/propagation_check.py +""" + +import time +from commander import Commander + + +class PropagationCheck(Commander): + def set_test_params(self): + # This scenario works with any number of nodes + self.num_nodes = 0 + + def add_options(self, parser): + parser.description = "Network propagation check - observe transaction flow between nodes" + parser.usage = "warnet run scenarios/propagation_check.py [options]" + parser.add_argument( + "--wait-time", + dest="wait_time", + default=5, + type=int, + help="Time to wait between operations (default 5 seconds)", + ) + + def run_test(self): + """Main test logic""" + self.log.info("🚀 Starting Network Propagation Check scenario") + + # Step 1: Wait for all nodes to be ready and connect them + self.log.info("Step 1: Waiting for all nodes to be ready...") + if len(self.nodes) > 1: + self.wait_for_tanks_connected() + self.log.info("✓ All nodes are ready and connected") + + # Manually connect nodes to each other (if not already connected) + self.log.info("Connecting nodes to each other...") + + # Check if nodes are already connected before connecting + # Get current peer counts + node0_peers = len(self.nodes[0].getpeerinfo()) + node1_peers = len(self.nodes[1].getpeerinfo()) + node2_peers = len(self.nodes[2].getpeerinfo()) + + # Only connect if nodes don't have enough peers + if node0_peers < 2: + self.connect_nodes(0, 1) + self.connect_nodes(0, 2) + if node1_peers < 2: + self.connect_nodes(1, 2) + + # Wait for connections to establish + time.sleep(2) + self.log.info("✓ Nodes connected to each other") + else: + self.log.info("✓ Single node is ready") + + # Step 2: Check initial network state + self.log.info("Step 2: Checking initial network state...") + self.check_network_state() + + # Step 3: Generate some blocks to create initial funds + self.log.info("Step 3: Generating initial blocks...") + self.generate_initial_blocks() + + # Step 4: Create some transactions + self.log.info("Step 4: Creating test transactions...") + self.create_test_transactions() + + # Step 5: Monitor network propagation + self.log.info("Step 5: Monitoring transaction propagation...") + self.monitor_transaction_propagation() + + # Step 6: Check final network state + 
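+        # check_network_state() is the same helper used in Step 2, so the
+        # before/after states are directly comparable in the logs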
+        self.log.info("Step 6: Checking final network state...")
+        self.check_network_state()
+
+        self.log.info("🎉 Network Propagation Check completed successfully!")
+
+    def check_network_state(self):
+        """Check the current state of all nodes in the network"""
+        for i, node in enumerate(self.nodes):
+            # Get basic node info
+            block_count = node.getblockcount()
+            connection_count = node.getconnectioncount()
+            mempool_size = len(node.getrawmempool())
+
+            self.log.info(f"Node {i} ({node.tank}):")
+            self.log.info(f"  - Block count: {block_count}")
+            self.log.info(f"  - Connections: {connection_count}")
+            self.log.info(f"  - Mempool size: {mempool_size}")
+
+            # Show peer connections
+            if connection_count > 0:
+                self.log.info(f"  - Connected to {connection_count} peers")
+            else:
+                self.log.info("  - No peer connections yet")
+
+    def generate_initial_blocks(self):
+        """Generate some initial blocks to create funds"""
+        # Use the first node to generate blocks
+        node0 = self.nodes[0]
+        miner_wallet = self.ensure_miner(node0)
+        address = miner_wallet.getnewaddress()
+
+        # Generate 101 blocks so the first coinbase output matures and is spendable
+        self.log.info("Generating 101 blocks on node 0...")
+        self.generatetoaddress(node0, 101, address, sync_fun=self.sync_all)
+
+        # Check that all nodes have the same block count
+        block_counts = [node.getblockcount() for node in self.nodes]
+        if len(set(block_counts)) == 1:
+            self.log.info(f"✓ All nodes synchronized at block {block_counts[0]}")
+        else:
+            self.log.warning(f"⚠️ Nodes have different block counts: {block_counts}")
+
+    def create_test_transactions(self):
+        """Create some test transactions to demonstrate network activity"""
+        # Get (or create) the miner wallet on node 0
+        node0 = self.nodes[0]
+        miner_wallet = self.ensure_miner(node0)
+
+        # Get a new address
+        address = miner_wallet.getnewaddress()
+
+        # Send some transactions to demonstrate network activity
+        self.log.info("Creating test transactions...")
+
+        for i in range(5):
+            # Send a small amount to the address
+            txid = miner_wallet.sendtoaddress(address, 0.1)
+            self.log.info(f"Created transaction {i+1}: {txid[:16]}...")
+
+            # Wait a bit between transactions
+            time.sleep(0.5)
+
+        # Wait for transactions to propagate
+        self.sync_all()
+
+    def monitor_transaction_propagation(self):
+        """Monitor how transactions propagate through the network"""
+        self.log.info("Monitoring transaction propagation...")
+
+        # Wait a bit for transactions to propagate
+        time.sleep(self.options.wait_time)
+
+        # Check mempool on all nodes
+        for i, node in enumerate(self.nodes):
+            mempool = node.getrawmempool()
+            self.log.info(f"Node {i} mempool: {len(mempool)} transactions")
+
+            # Show some transaction details
+            if mempool:
+                txid = list(mempool)[0]
+                tx_info = node.getmempoolentry(txid)
+                # Newer Bitcoin Core versions report fees under a "fees" object
+                fee = tx_info.get("fees", {}).get("base", "unknown")
+                self.log.info(f"  Sample transaction: {txid[:16]}... (fee: {fee})")
+
+
+def main():
+    PropagationCheck().main()
+
+
+if __name__ == "__main__":
+    main()
\ No newline at end of file
diff --git a/warnet-demo/scenarios/test_framework/__init__.py b/warnet-demo/scenarios/test_framework/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/warnet-demo/scenarios/test_framework/address.py b/warnet-demo/scenarios/test_framework/address.py
new file mode 100644
index 0000000..5b2e328
--- /dev/null
+++ b/warnet-demo/scenarios/test_framework/address.py
@@ -0,0 +1,227 @@
+#!/usr/bin/env python3
+# Copyright (c) 2016-2022 The Bitcoin Core developers
+# Distributed under the MIT software license, see the accompanying
+# file COPYING or http://www.opensource.org/licenses/mit-license.php.
+"""Encode and decode Bitcoin addresses.
+
+- base58 P2PKH and P2SH addresses.
+- bech32 segwit v0 P2WPKH and P2WSH addresses.
+- bech32m segwit v1 P2TR addresses."""
+
+import enum
+import unittest
+
+from .script import (
+    CScript,
+    OP_0,
+    OP_TRUE,
+    hash160,
+    hash256,
+    sha256,
+    taproot_construct,
+)
+from .util import assert_equal
+from test_framework.script_util import (
+    keyhash_to_p2pkh_script,
+    program_to_witness_script,
+    scripthash_to_p2sh_script,
+)
+from test_framework.segwit_addr import (
+    decode_segwit_address,
+    encode_segwit_address,
+)
+
+
+ADDRESS_BCRT1_UNSPENDABLE = 'bcrt1qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq3xueyj'
+ADDRESS_BCRT1_UNSPENDABLE_DESCRIPTOR = 'addr(bcrt1qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq3xueyj)#juyq9d97'
+# Coins sent to this address can be spent with a witness stack of just OP_TRUE
+ADDRESS_BCRT1_P2WSH_OP_TRUE = 'bcrt1qft5p2uhsdcdc3l2ua4ap5qqfg4pjaqlp250x7us7a8qqhrxrxfsqseac85'
+
+
+class AddressType(enum.Enum):
+    bech32 = 'bech32'
+    p2sh_segwit = 'p2sh-segwit'
+    legacy = 'legacy'  # P2PKH
+
+
+b58chars = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
+
+
+def create_deterministic_address_bcrt1_p2tr_op_true():
+    """
+    Generates a deterministic bech32m address (segwit v1 output) that
+    can be spent with a witness stack of OP_TRUE and the control block
+    with internal public key (script-path spending).
+
+    Returns a tuple with the generated address and the internal key.
+    """
+    internal_key = (1).to_bytes(32, 'big')
+    address = output_key_to_p2tr(taproot_construct(internal_key, [(None, CScript([OP_TRUE]))]).output_pubkey)
+    assert_equal(address, 'bcrt1p9yfmy5h72durp7zrhlw9lf7jpwjgvwdg0jr0lqmmjtgg83266lqsekaqka')
+    return (address, internal_key)
+
+
+def byte_to_base58(b, version):
+    result = ''
+    b = bytes([version]) + b  # prepend version
+    b += hash256(b)[:4]  # append checksum
+    value = int.from_bytes(b, 'big')
+    while value > 0:
+        result = b58chars[value % 58] + result
+        value //= 58
+    while b[0] == 0:
+        result = b58chars[0] + result
+        b = b[1:]
+    return result
+
+
+def base58_to_byte(s):
+    """Converts a base58-encoded string to its data and version.
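+    Returns a (data, version) tuple, where data excludes the version
+    byte and checksum.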
+ + Throws if the base58 checksum is invalid.""" + if not s: + return b'' + n = 0 + for c in s: + n *= 58 + assert c in b58chars + digit = b58chars.index(c) + n += digit + h = '%x' % n + if len(h) % 2: + h = '0' + h + res = n.to_bytes((n.bit_length() + 7) // 8, 'big') + pad = 0 + for c in s: + if c == b58chars[0]: + pad += 1 + else: + break + res = b'\x00' * pad + res + + if hash256(res[:-4])[:4] != res[-4:]: + raise ValueError('Invalid Base58Check checksum') + + return res[1:-4], int(res[0]) + + +def keyhash_to_p2pkh(hash, main=False): + assert len(hash) == 20 + version = 0 if main else 111 + return byte_to_base58(hash, version) + +def scripthash_to_p2sh(hash, main=False): + assert len(hash) == 20 + version = 5 if main else 196 + return byte_to_base58(hash, version) + +def key_to_p2pkh(key, main=False): + key = check_key(key) + return keyhash_to_p2pkh(hash160(key), main) + +def script_to_p2sh(script, main=False): + script = check_script(script) + return scripthash_to_p2sh(hash160(script), main) + +def key_to_p2sh_p2wpkh(key, main=False): + key = check_key(key) + p2shscript = CScript([OP_0, hash160(key)]) + return script_to_p2sh(p2shscript, main) + +def program_to_witness(version, program, main=False): + if (type(program) is str): + program = bytes.fromhex(program) + assert 0 <= version <= 16 + assert 2 <= len(program) <= 40 + assert version > 0 or len(program) in [20, 32] + return encode_segwit_address("bc" if main else "bcrt", version, program) + +def script_to_p2wsh(script, main=False): + script = check_script(script) + return program_to_witness(0, sha256(script), main) + +def key_to_p2wpkh(key, main=False): + key = check_key(key) + return program_to_witness(0, hash160(key), main) + +def script_to_p2sh_p2wsh(script, main=False): + script = check_script(script) + p2shscript = CScript([OP_0, sha256(script)]) + return script_to_p2sh(p2shscript, main) + +def output_key_to_p2tr(key, main=False): + assert len(key) == 32 + return program_to_witness(1, key, main) + +def check_key(key): + if (type(key) is str): + key = bytes.fromhex(key) # Assuming this is hex string + if (type(key) is bytes and (len(key) == 33 or len(key) == 65)): + return key + assert False + +def check_script(script): + if (type(script) is str): + script = bytes.fromhex(script) # Assuming this is hex string + if (type(script) is bytes or type(script) is CScript): + return script + assert False + + +def bech32_to_bytes(address): + hrp = address.split('1')[0] + if hrp not in ['bc', 'tb', 'bcrt']: + return (None, None) + version, payload = decode_segwit_address(hrp, address) + if version is None: + return (None, None) + return version, bytearray(payload) + + +def address_to_scriptpubkey(address): + """Converts a given address to the corresponding output script (scriptPubKey).""" + version, payload = bech32_to_bytes(address) + if version is not None: + return program_to_witness_script(version, payload) # testnet segwit scriptpubkey + payload, version = base58_to_byte(address) + if version == 111: # testnet pubkey hash + return keyhash_to_p2pkh_script(payload) + elif version == 196: # testnet script hash + return scripthash_to_p2sh_script(payload) + # TODO: also support other address formats + else: + assert False + + +class TestFrameworkScript(unittest.TestCase): + def test_base58encodedecode(self): + def check_base58(data, version): + self.assertEqual(base58_to_byte(byte_to_base58(data, version)), (data, version)) + + check_base58(bytes.fromhex('1f8ea1702a7bd4941bca0941b852c4bbfedb2e05'), 111) + 
check_base58(bytes.fromhex('3a0b05f4d7f66c3ba7009f453530296c845cc9cf'), 111) + check_base58(bytes.fromhex('41c1eaf111802559bad61b60d62b1f897c63928a'), 111) + check_base58(bytes.fromhex('0041c1eaf111802559bad61b60d62b1f897c63928a'), 111) + check_base58(bytes.fromhex('000041c1eaf111802559bad61b60d62b1f897c63928a'), 111) + check_base58(bytes.fromhex('00000041c1eaf111802559bad61b60d62b1f897c63928a'), 111) + check_base58(bytes.fromhex('1f8ea1702a7bd4941bca0941b852c4bbfedb2e05'), 0) + check_base58(bytes.fromhex('3a0b05f4d7f66c3ba7009f453530296c845cc9cf'), 0) + check_base58(bytes.fromhex('41c1eaf111802559bad61b60d62b1f897c63928a'), 0) + check_base58(bytes.fromhex('0041c1eaf111802559bad61b60d62b1f897c63928a'), 0) + check_base58(bytes.fromhex('000041c1eaf111802559bad61b60d62b1f897c63928a'), 0) + check_base58(bytes.fromhex('00000041c1eaf111802559bad61b60d62b1f897c63928a'), 0) + + + def test_bech32_decode(self): + def check_bech32_decode(payload, version): + hrp = "tb" + self.assertEqual(bech32_to_bytes(encode_segwit_address(hrp, version, payload)), (version, payload)) + + check_bech32_decode(bytes.fromhex('36e3e2a33f328de12e4b43c515a75fba2632ecc3'), 0) + check_bech32_decode(bytes.fromhex('823e9790fc1d1782321140d4f4aa61aabd5e045b'), 0) + check_bech32_decode(bytes.fromhex('79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798'), 1) + check_bech32_decode(bytes.fromhex('39cf8ebd95134f431c39db0220770bd127f5dd3cc103c988b7dcd577ae34e354'), 1) + check_bech32_decode(bytes.fromhex('708244006d27c757f6f1fc6f853b6ec26268b727866f7ce632886e34eb5839a3'), 1) + check_bech32_decode(bytes.fromhex('616211ab00dffe0adcb6ce258d6d3fd8cbd901e2'), 0) + check_bech32_decode(bytes.fromhex('b6a7c98b482d7fb21c9fa8e65692a0890410ff22'), 0) + check_bech32_decode(bytes.fromhex('f0c2109cb1008cfa7b5a09cc56f7267cd8e50929'), 0) diff --git a/warnet-demo/scenarios/test_framework/authproxy.py b/warnet-demo/scenarios/test_framework/authproxy.py new file mode 100644 index 0000000..0304287 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/authproxy.py @@ -0,0 +1,189 @@ +# Copyright (c) 2011 Jeff Garzik +# +# Previous copyright, from python-jsonrpc/jsonrpc/proxy.py: +# +# Copyright (c) 2007 Jan-Klaas Kollhof +# +# This file is part of jsonrpc. +# +# jsonrpc is free software; you can redistribute it and/or modify +# it under the terms of the GNU Lesser General Public License as published by +# the Free Software Foundation; either version 2.1 of the License, or +# (at your option) any later version. +# +# This software is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public License +# along with this software; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +"""HTTP proxy for opening RPC connection to bitcoind. 
+ +AuthServiceProxy has the following improvements over python-jsonrpc's +ServiceProxy class: + +- HTTP connections persist for the life of the AuthServiceProxy object + (if server supports HTTP/1.1) +- sends protocol 'version', per JSON-RPC 1.1 +- sends proper, incrementing 'id' +- sends Basic HTTP authentication headers +- parses all JSON numbers that look like floats as Decimal +- uses standard Python json lib +""" + +import base64 +import decimal +from http import HTTPStatus +import http.client +import json +import logging +import pathlib +import socket +import time +import urllib.parse + +HTTP_TIMEOUT = 30 +USER_AGENT = "AuthServiceProxy/0.1" + +log = logging.getLogger("BitcoinRPC") + +class JSONRPCException(Exception): + def __init__(self, rpc_error, http_status=None): + try: + errmsg = '%(message)s (%(code)i)' % rpc_error + except (KeyError, TypeError): + errmsg = '' + super().__init__(errmsg) + self.error = rpc_error + self.http_status = http_status + + +def serialization_fallback(o): + if isinstance(o, decimal.Decimal): + return str(o) + if isinstance(o, pathlib.Path): + return str(o) + raise TypeError(repr(o) + " is not JSON serializable") + +class AuthServiceProxy(): + __id_count = 0 + + # ensure_ascii: escape unicode as \uXXXX, passed to json.dumps + def __init__(self, service_url, service_name=None, timeout=HTTP_TIMEOUT, connection=None, ensure_ascii=True): + self.__service_url = service_url + self._service_name = service_name + self.ensure_ascii = ensure_ascii # can be toggled on the fly by tests + self.__url = urllib.parse.urlparse(service_url) + user = None if self.__url.username is None else self.__url.username.encode('utf8') + passwd = None if self.__url.password is None else self.__url.password.encode('utf8') + authpair = user + b':' + passwd + self.__auth_header = b'Basic ' + base64.b64encode(authpair) + # clamp the socket timeout, since larger values can cause an + # "Invalid argument" exception in Python's HTTP(S) client + # library on some operating systems (e.g. OpenBSD, FreeBSD) + self.timeout = min(timeout, 2147483) + self._set_conn(connection) + + def __getattr__(self, name): + if name.startswith('__') and name.endswith('__'): + # Python internal stuff + raise AttributeError + if self._service_name is not None: + name = "%s.%s" % (self._service_name, name) + return AuthServiceProxy(self.__service_url, name, connection=self.__conn) + + def _request(self, method, path, postdata): + ''' + Do a HTTP request. 
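+        Uses the persistent connection set up in _set_conn() and the Basic
+        authentication header prepared in __init__.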
+ ''' + headers = {'Host': self.__url.hostname, + 'User-Agent': USER_AGENT, + 'Authorization': self.__auth_header, + 'Content-type': 'application/json'} + self.__conn.request(method, path, postdata, headers) + return self._get_response() + + def get_request(self, *args, **argsn): + AuthServiceProxy.__id_count += 1 + + log.debug("-{}-> {} {}".format( + AuthServiceProxy.__id_count, + self._service_name, + json.dumps(args or argsn, default=serialization_fallback, ensure_ascii=self.ensure_ascii), + )) + if args and argsn: + params = dict(args=args, **argsn) + else: + params = args or argsn + return {'version': '1.1', + 'method': self._service_name, + 'params': params, + 'id': AuthServiceProxy.__id_count} + + def __call__(self, *args, **argsn): + postdata = json.dumps(self.get_request(*args, **argsn), default=serialization_fallback, ensure_ascii=self.ensure_ascii) + response, status = self._request('POST', self.__url.path, postdata.encode('utf-8')) + if response['error'] is not None: + raise JSONRPCException(response['error'], status) + elif 'result' not in response: + raise JSONRPCException({ + 'code': -343, 'message': 'missing JSON-RPC result'}, status) + elif status != HTTPStatus.OK: + raise JSONRPCException({ + 'code': -342, 'message': 'non-200 HTTP status code but no JSON-RPC error'}, status) + else: + return response['result'] + + def batch(self, rpc_call_list): + postdata = json.dumps(list(rpc_call_list), default=serialization_fallback, ensure_ascii=self.ensure_ascii) + log.debug("--> " + postdata) + response, status = self._request('POST', self.__url.path, postdata.encode('utf-8')) + if status != HTTPStatus.OK: + raise JSONRPCException({ + 'code': -342, 'message': 'non-200 HTTP status code but no JSON-RPC error'}, status) + return response + + def _get_response(self): + req_start_time = time.time() + try: + http_response = self.__conn.getresponse() + except socket.timeout: + raise JSONRPCException({ + 'code': -344, + 'message': '%r RPC took longer than %f seconds. Consider ' + 'using larger timeout for calls that take ' + 'longer to return.' 
% (self._service_name, + self.__conn.timeout)}) + if http_response is None: + raise JSONRPCException({ + 'code': -342, 'message': 'missing HTTP response from server'}) + + content_type = http_response.getheader('Content-Type') + if content_type != 'application/json': + raise JSONRPCException( + {'code': -342, 'message': 'non-JSON HTTP response with \'%i %s\' from server' % (http_response.status, http_response.reason)}, + http_response.status) + + responsedata = http_response.read().decode('utf8') + response = json.loads(responsedata, parse_float=decimal.Decimal) + elapsed = time.time() - req_start_time + if "error" in response and response["error"] is None: + log.debug("<-%s- [%.6f] %s" % (response["id"], elapsed, json.dumps(response["result"], default=serialization_fallback, ensure_ascii=self.ensure_ascii))) + else: + log.debug("<-- [%.6f] %s" % (elapsed, responsedata)) + return response, http_response.status + + def __truediv__(self, relative_uri): + return AuthServiceProxy("{}/{}".format(self.__service_url, relative_uri), self._service_name, connection=self.__conn) + + def _set_conn(self, connection=None): + port = 80 if self.__url.port is None else self.__url.port + if connection: + self.__conn = connection + self.timeout = connection.timeout + elif self.__url.scheme == 'https': + self.__conn = http.client.HTTPSConnection(self.__url.hostname, port, timeout=self.timeout) + else: + self.__conn = http.client.HTTPConnection(self.__url.hostname, port, timeout=self.timeout) diff --git a/warnet-demo/scenarios/test_framework/bdb.py b/warnet-demo/scenarios/test_framework/bdb.py new file mode 100644 index 0000000..41886c0 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/bdb.py @@ -0,0 +1,151 @@ +#!/usr/bin/env python3 +# Copyright (c) 2020-2021 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +""" +Utilities for working directly with the wallet's BDB database file + +This is specific to the configuration of BDB used in this project: + - pagesize: 4096 bytes + - Outer database contains single subdatabase named 'main' + - btree + - btree leaf pages + +Each key-value pair is two entries in a btree leaf. The first is the key, the one that follows +is the value. And so on. Note that the entry data is itself not in the correct order. Instead +entry offsets are stored in the correct order and those offsets are needed to then retrieve +the data itself. + +Page format can be found in BDB source code dbinc/db_page.h +This only implements the deserialization of btree metadata pages and normal btree pages. Overflow +pages are not implemented but may be needed in the future if dealing with wallets with large +transactions. + +`db_dump -da wallet.dat` is useful to see the data in a wallet.dat BDB file +""" + +import struct + +# Important constants +PAGESIZE = 4096 +OUTER_META_PAGE = 0 +INNER_META_PAGE = 2 + +# Page type values +BTREE_INTERNAL = 3 +BTREE_LEAF = 5 +BTREE_META = 9 + +# Some magic numbers for sanity checking +BTREE_MAGIC = 0x053162 +DB_VERSION = 9 + +# Deserializes a leaf page into a dict. +# Btree internal pages have the same header, for those, return None. 
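+# Both page kinds share the 26-byte header unpacked below as 'QIIIHHBB':
+# LSN, pgno, prev_pgno, next_pgno, entry count, hf_offset, btree level, page type.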
+# For the btree leaf pages, deserialize them and put all the data into a dict +def dump_leaf_page(data): + page_info = {} + page_header = data[0:26] + _, pgno, prev_pgno, next_pgno, entries, hf_offset, level, pg_type = struct.unpack('QIIIHHBB', page_header) + page_info['pgno'] = pgno + page_info['prev_pgno'] = prev_pgno + page_info['next_pgno'] = next_pgno + page_info['hf_offset'] = hf_offset + page_info['level'] = level + page_info['pg_type'] = pg_type + page_info['entry_offsets'] = struct.unpack('{}H'.format(entries), data[26:26 + entries * 2]) + page_info['entries'] = [] + + if pg_type == BTREE_INTERNAL: + # Skip internal pages. These are the internal nodes of the btree and don't contain anything relevant to us + return None + + assert pg_type == BTREE_LEAF, 'A non-btree leaf page has been encountered while dumping leaves' + + for i in range(0, entries): + offset = page_info['entry_offsets'][i] + entry = {'offset': offset} + page_data_header = data[offset:offset + 3] + e_len, pg_type = struct.unpack('HB', page_data_header) + entry['len'] = e_len + entry['pg_type'] = pg_type + entry['data'] = data[offset + 3:offset + 3 + e_len] + page_info['entries'].append(entry) + + return page_info + +# Deserializes a btree metadata page into a dict. +# Does a simple sanity check on the magic value, type, and version +def dump_meta_page(page): + # metadata page + # general metadata + metadata = {} + meta_page = page[0:72] + _, pgno, magic, version, pagesize, encrypt_alg, pg_type, metaflags, _, free, last_pgno, nparts, key_count, record_count, flags, uid = struct.unpack('QIIIIBBBBIIIIII20s', meta_page) + metadata['pgno'] = pgno + metadata['magic'] = magic + metadata['version'] = version + metadata['pagesize'] = pagesize + metadata['encrypt_alg'] = encrypt_alg + metadata['pg_type'] = pg_type + metadata['metaflags'] = metaflags + metadata['free'] = free + metadata['last_pgno'] = last_pgno + metadata['nparts'] = nparts + metadata['key_count'] = key_count + metadata['record_count'] = record_count + metadata['flags'] = flags + metadata['uid'] = uid.hex().encode() + + assert magic == BTREE_MAGIC, 'bdb magic does not match bdb btree magic' + assert pg_type == BTREE_META, 'Metadata page is not a btree metadata page' + assert version == DB_VERSION, 'Database too new' + + # btree metadata + btree_meta_page = page[72:512] + _, minkey, re_len, re_pad, root, _, crypto_magic, _, iv, chksum = struct.unpack('IIIII368sI12s16s20s', btree_meta_page) + metadata['minkey'] = minkey + metadata['re_len'] = re_len + metadata['re_pad'] = re_pad + metadata['root'] = root + metadata['crypto_magic'] = crypto_magic + metadata['iv'] = iv.hex().encode() + metadata['chksum'] = chksum.hex().encode() + + return metadata + +# Given the dict from dump_leaf_page, get the key-value pairs and put them into a dict +def extract_kv_pairs(page_data): + out = {} + last_key = None + for i, entry in enumerate(page_data['entries']): + # By virtue of these all being pairs, even number entries are keys, and odd are values + if i % 2 == 0: + out[entry['data']] = b'' + last_key = entry['data'] + else: + out[last_key] = entry['data'] + return out + +# Extract the key-value pairs of the BDB file given in filename +def dump_bdb_kv(filename): + # Read in the BDB file and start deserializing it + pages = [] + with open(filename, 'rb') as f: + data = f.read(PAGESIZE) + while len(data) > 0: + pages.append(data) + data = f.read(PAGESIZE) + + # Sanity check the meta pages + dump_meta_page(pages[OUTER_META_PAGE]) + dump_meta_page(pages[INNER_META_PAGE]) + + # 
Fetch the kv pairs from the leaf pages + kv = {} + for i in range(3, len(pages)): + info = dump_leaf_page(pages[i]) + if info is not None: + info_kv = extract_kv_pairs(info) + kv = {**kv, **info_kv} + return kv diff --git a/warnet-demo/scenarios/test_framework/bip340_test_vectors.csv b/warnet-demo/scenarios/test_framework/bip340_test_vectors.csv new file mode 100644 index 0000000..e068322 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/bip340_test_vectors.csv @@ -0,0 +1,16 @@ +index,secret key,public key,aux_rand,message,signature,verification result,comment +0,0000000000000000000000000000000000000000000000000000000000000003,F9308A019258C31049344F85F89D5229B531C845836F99B08601F113BCE036F9,0000000000000000000000000000000000000000000000000000000000000000,0000000000000000000000000000000000000000000000000000000000000000,E907831F80848D1069A5371B402410364BDF1C5F8307B0084C55F1CE2DCA821525F66A4A85EA8B71E482A74F382D2CE5EBEEE8FDB2172F477DF4900D310536C0,TRUE, +1,B7E151628AED2A6ABF7158809CF4F3C762E7160F38B4DA56A784D9045190CFEF,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,0000000000000000000000000000000000000000000000000000000000000001,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,6896BD60EEAE296DB48A229FF71DFE071BDE413E6D43F917DC8DCF8C78DE33418906D11AC976ABCCB20B091292BFF4EA897EFCB639EA871CFA95F6DE339E4B0A,TRUE, +2,C90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B14E5C9,DD308AFEC5777E13121FA72B9CC1B7CC0139715309B086C960E18FD969774EB8,C87AA53824B4D7AE2EB035A2B5BBBCCC080E76CDC6D1692C4B0B62D798E6D906,7E2D58D8B3BCDF1ABADEC7829054F90DDA9805AAB56C77333024B9D0A508B75C,5831AAEED7B44BB74E5EAB94BA9D4294C49BCF2A60728D8B4C200F50DD313C1BAB745879A5AD954A72C45A91C3A51D3C7ADEA98D82F8481E0E1E03674A6F3FB7,TRUE, +3,0B432B2677937381AEF05BB02A66ECD012773062CF3FA2549E44F58ED2401710,25D1DFF95105F5253C4022F628A996AD3A0D95FBF21D468A1B33F8C160D8F517,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF,7EB0509757E246F19449885651611CB965ECC1A187DD51B64FDA1EDC9637D5EC97582B9CB13DB3933705B32BA982AF5AF25FD78881EBB32771FC5922EFC66EA3,TRUE,test fails if msg is reduced modulo p or n +4,,D69C3509BB99E412E68B0FE8544E72837DFA30746D8BE2AA65975F29D22DC7B9,,4DF3C3F68FCC83B27E9D42C90431A72499F17875C81A599B566C9889B9696703,00000000000000000000003B78CE563F89A0ED9414F5AA28AD0D96D6795F9C6376AFB1548AF603B3EB45C9F8207DEE1060CB71C04E80F593060B07D28308D7F4,TRUE, +5,,EEFDEA4CDB677750A420FEE807EACF21EB9898AE79B9768766E4FAA04A2D4A34,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,6CFF5C3BA86C69EA4B7376F31A9BCB4F74C1976089B2D9963DA2E5543E17776969E89B4C5564D00349106B8497785DD7D1D713A8AE82B32FA79D5F7FC407D39B,FALSE,public key not on the curve +6,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,FFF97BD5755EEEA420453A14355235D382F6472F8568A18B2F057A14602975563CC27944640AC607CD107AE10923D9EF7A73C643E166BE5EBEAFA34B1AC553E2,FALSE,has_even_y(R) is false +7,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,1FA62E331EDBC21C394792D2AB1100A7B432B013DF3F6FF4F99FCB33E0E1515F28890B3EDB6E7189B630448B515CE4F8622A954CFE545735AAEA5134FCCDB2BD,FALSE,negated message 
+8,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,6CFF5C3BA86C69EA4B7376F31A9BCB4F74C1976089B2D9963DA2E5543E177769961764B3AA9B2FFCB6EF947B6887A226E8D7C93E00C5ED0C1834FF0D0C2E6DA6,FALSE,negated s value +9,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,0000000000000000000000000000000000000000000000000000000000000000123DDA8328AF9C23A94C1FEECFD123BA4FB73476F0D594DCB65C6425BD186051,FALSE,sG - eP is infinite. Test fails in single verification if has_even_y(inf) is defined as true and x(inf) as 0 +10,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,00000000000000000000000000000000000000000000000000000000000000017615FBAF5AE28864013C099742DEADB4DBA87F11AC6754F93780D5A1837CF197,FALSE,sG - eP is infinite. Test fails in single verification if has_even_y(inf) is defined as true and x(inf) as 1 +11,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,4A298DACAE57395A15D0795DDBFD1DCB564DA82B0F269BC70A74F8220429BA1D69E89B4C5564D00349106B8497785DD7D1D713A8AE82B32FA79D5F7FC407D39B,FALSE,sig[0:32] is not an X coordinate on the curve +12,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F69E89B4C5564D00349106B8497785DD7D1D713A8AE82B32FA79D5F7FC407D39B,FALSE,sig[0:32] is equal to field size +13,,DFF1D77F2A671C5F36183726DB2341BE58FEAE1DA2DECED843240F7B502BA659,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,6CFF5C3BA86C69EA4B7376F31A9BCB4F74C1976089B2D9963DA2E5543E177769FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141,FALSE,sig[32:64] is equal to curve order +14,,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC30,,243F6A8885A308D313198A2E03707344A4093822299F31D0082EFA98EC4E6C89,6CFF5C3BA86C69EA4B7376F31A9BCB4F74C1976089B2D9963DA2E5543E17776969E89B4C5564D00349106B8497785DD7D1D713A8AE82B32FA79D5F7FC407D39B,FALSE,public key is not a valid X coordinate because it exceeds the field size diff --git a/warnet-demo/scenarios/test_framework/blockfilter.py b/warnet-demo/scenarios/test_framework/blockfilter.py new file mode 100644 index 0000000..a30e37e --- /dev/null +++ b/warnet-demo/scenarios/test_framework/blockfilter.py @@ -0,0 +1,49 @@ +#!/usr/bin/env python3 +# Copyright (c) 2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""Helper routines relevant for compact block filters (BIP158). +""" +from .siphash import siphash + + +def bip158_basic_element_hash(script_pub_key, N, block_hash): + """ Calculates the ranged hash of a filter element as defined in BIP158: + + 'The first step in the filter construction is hashing the variable-sized + raw items in the set to the range [0, F), where F = N * M.' + + 'The items are first passed through the pseudorandom function SipHash, which takes a + 128-bit key k and a variable-sized byte vector and produces a uniformly random 64-bit + output. Implementations of this BIP MUST use the SipHash parameters c = 2 and d = 4.' 
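+
+    (The multiply-and-shift in the code below implements this range reduction:
+    the 64-bit SipHash output h is mapped to (h * F) >> 64, i.e.
+    floor(h * F / 2^64), which lies in [0, F).)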
+
+    'The parameter k MUST be set to the first 16 bytes of the hash (in standard
+    little-endian representation) of the block for which the filter is constructed. This
+    ensures the key is deterministic while still varying from block to block.'
+    """
+    M = 784931
+    block_hash_bytes = bytes.fromhex(block_hash)[::-1]
+    k0 = int.from_bytes(block_hash_bytes[0:8], 'little')
+    k1 = int.from_bytes(block_hash_bytes[8:16], 'little')
+    return (siphash(k0, k1, script_pub_key) * (N * M)) >> 64
+
+
+def bip158_relevant_scriptpubkeys(node, block_hash):
+    """ Determines the basic filter relevant scriptPubKeys as defined in BIP158:
+
+    'A basic filter MUST contain exactly the following items for each transaction in a block:
+      - The previous output script (the script being spent) for each input, except for
+        the coinbase transaction.
+      - The scriptPubKey of each output, aside from all OP_RETURN output scripts.'
+    """
+    spks = set()
+    for tx in node.getblock(blockhash=block_hash, verbosity=3)['tx']:
+        # gather prevout scripts
+        for i in tx['vin']:
+            if 'prevout' in i:
+                spks.add(bytes.fromhex(i['prevout']['scriptPubKey']['hex']))
+        # gather output scripts
+        for o in tx['vout']:
+            if o['scriptPubKey']['type'] != 'nulldata':
+                spks.add(bytes.fromhex(o['scriptPubKey']['hex']))
+    return spks
diff --git a/warnet-demo/scenarios/test_framework/blocktools.py b/warnet-demo/scenarios/test_framework/blocktools.py
new file mode 100644
index 0000000..cfd923b
--- /dev/null
+++ b/warnet-demo/scenarios/test_framework/blocktools.py
@@ -0,0 +1,236 @@
+#!/usr/bin/env python3
+# Copyright (c) 2015-2022 The Bitcoin Core developers
+# Distributed under the MIT software license, see the accompanying
+# file COPYING or http://www.opensource.org/licenses/mit-license.php.
+"""Utilities for manipulating blocks and transactions."""
+
+import struct
+import time
+import unittest
+
+from .address import (
+    address_to_scriptpubkey,
+    key_to_p2sh_p2wpkh,
+    key_to_p2wpkh,
+    script_to_p2sh_p2wsh,
+    script_to_p2wsh,
+)
+from .messages import (
+    CBlock,
+    COIN,
+    COutPoint,
+    CTransaction,
+    CTxIn,
+    CTxInWitness,
+    CTxOut,
+    SEQUENCE_FINAL,
+    hash256,
+    ser_uint256,
+    tx_from_hex,
+    uint256_from_str,
+)
+from .script import (
+    CScript,
+    CScriptNum,
+    CScriptOp,
+    OP_1,
+    OP_RETURN,
+    OP_TRUE,
+)
+from .script_util import (
+    key_to_p2pk_script,
+    key_to_p2wpkh_script,
+    keys_to_multisig_script,
+    script_to_p2wsh_script,
+)
+from .util import assert_equal
+
+WITNESS_SCALE_FACTOR = 4
+MAX_BLOCK_SIGOPS = 20000
+MAX_BLOCK_SIGOPS_WEIGHT = MAX_BLOCK_SIGOPS * WITNESS_SCALE_FACTOR
+
+# Genesis block time (regtest)
+TIME_GENESIS_BLOCK = 1296688602
+
+MAX_FUTURE_BLOCK_TIME = 2 * 60 * 60
+
+# Coinbase transaction outputs can only be spent after this number of new blocks (network rule)
+COINBASE_MATURITY = 100
+
+# From BIP141
+WITNESS_COMMITMENT_HEADER = b"\xaa\x21\xa9\xed"
+
+NORMAL_GBT_REQUEST_PARAMS = {"rules": ["segwit"]}
+VERSIONBITS_LAST_OLD_BLOCK_VERSION = 4
+MIN_BLOCKS_TO_KEEP = 288
+
+
+def create_block(hashprev=None, coinbase=None, ntime=None, *, version=None, tmpl=None, txlist=None):
+    """Create a block (with regtest difficulty)."""
+    block = CBlock()
+    if tmpl is None:
+        tmpl = {}
+    block.nVersion = version or tmpl.get('version') or VERSIONBITS_LAST_OLD_BLOCK_VERSION
+    block.nTime = ntime or tmpl.get('curtime') or int(time.time() + 600)
+    block.hashPrevBlock = hashprev or int(tmpl['previousblockhash'], 0x10)
+    if tmpl and tmpl.get('bits') is not None:
+        block.nBits = struct.unpack('>I', bytes.fromhex(tmpl['bits']))[0]
+    else:
+        block.nBits = 0x207fffff # difficulty retargeting is disabled in REGTEST chainparams
+    if coinbase is None:
+        coinbase = create_coinbase(height=tmpl['height'])
+    block.vtx.append(coinbase)
+    if txlist:
+        for tx in txlist:
+            if not hasattr(tx, 'calc_sha256'):
+                tx = tx_from_hex(tx)
+            block.vtx.append(tx)
+    block.hashMerkleRoot = block.calc_merkle_root()
+    block.calc_sha256()
+    return block
+
+def get_witness_script(witness_root, witness_nonce):
+    witness_commitment = uint256_from_str(hash256(ser_uint256(witness_root) + ser_uint256(witness_nonce)))
+    output_data = WITNESS_COMMITMENT_HEADER + ser_uint256(witness_commitment)
+    return CScript([OP_RETURN, output_data])
+
+def add_witness_commitment(block, nonce=0):
+    """Add a witness commitment to the block's coinbase transaction.
+
+    According to BIP141, blocks with witness rules active must commit to the
+    hash of all in-block transactions including witness."""
+    # First calculate the merkle root of the block's
+    # transactions, with witnesses.
+    witness_nonce = nonce
+    witness_root = block.calc_witness_merkle_root()
+    # witness_nonce should go to coinbase witness.
+    block.vtx[0].wit.vtxinwit = [CTxInWitness()]
+    block.vtx[0].wit.vtxinwit[0].scriptWitness.stack = [ser_uint256(witness_nonce)]
+
+    # witness commitment is the last OP_RETURN output in coinbase
+    block.vtx[0].vout.append(CTxOut(0, get_witness_script(witness_root, witness_nonce)))
+    block.vtx[0].rehash()
+    block.hashMerkleRoot = block.calc_merkle_root()
+    block.rehash()
+
+
+def script_BIP34_coinbase_height(height):
+    if height <= 16:
+        res = CScriptOp.encode_op_n(height)
+        # Append dummy to increase scriptSig size above 2 (see bad-cb-length consensus rule)
+        return CScript([res, OP_1])
+    return CScript([CScriptNum(height)])
+
+
+def create_coinbase(height, pubkey=None, *, script_pubkey=None, extra_output_script=None, fees=0, nValue=50):
+    """Create a coinbase transaction.
+
+    If pubkey is passed in, the coinbase output will be a P2PK output;
+    otherwise an anyone-can-spend output.
+
+    If extra_output_script is given, make a 0-value output to that
+    script. This is useful to pad block weight/sigops as needed. """
+    coinbase = CTransaction()
+    coinbase.vin.append(CTxIn(COutPoint(0, 0xffffffff), script_BIP34_coinbase_height(height), SEQUENCE_FINAL))
+    coinbaseoutput = CTxOut()
+    coinbaseoutput.nValue = nValue * COIN
+    if nValue == 50:
+        halvings = int(height / 150) # regtest
+        coinbaseoutput.nValue >>= halvings
+    coinbaseoutput.nValue += fees
+    if pubkey is not None:
+        coinbaseoutput.scriptPubKey = key_to_p2pk_script(pubkey)
+    elif script_pubkey is not None:
+        coinbaseoutput.scriptPubKey = script_pubkey
+    else:
+        coinbaseoutput.scriptPubKey = CScript([OP_TRUE])
+    coinbase.vout = [coinbaseoutput]
+    if extra_output_script is not None:
+        coinbaseoutput2 = CTxOut()
+        coinbaseoutput2.nValue = 0
+        coinbaseoutput2.scriptPubKey = extra_output_script
+        coinbase.vout.append(coinbaseoutput2)
+    coinbase.calc_sha256()
+    return coinbase
+
+def create_tx_with_script(prevtx, n, script_sig=b"", *, amount, script_pub_key=CScript()):
+    """Return one-input, one-output transaction object
+       spending the prevtx's n-th output with the given amount.
+
+       Can optionally pass scriptPubKey and scriptSig, default is anyone-can-spend output.
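+
+       Note: prevtx must already have its sha256 computed (via calc_sha256() or
+       rehash()), since it is used to build the input's COutPoint.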
+    """
+    tx = CTransaction()
+    assert n < len(prevtx.vout)
+    tx.vin.append(CTxIn(COutPoint(prevtx.sha256, n), script_sig, SEQUENCE_FINAL))
+    tx.vout.append(CTxOut(amount, script_pub_key))
+    tx.calc_sha256()
+    return tx
+
+def get_legacy_sigopcount_block(block, accurate=True):
+    count = 0
+    for tx in block.vtx:
+        count += get_legacy_sigopcount_tx(tx, accurate)
+    return count
+
+def get_legacy_sigopcount_tx(tx, accurate=True):
+    count = 0
+    for i in tx.vout:
+        count += i.scriptPubKey.GetSigOpCount(accurate)
+    for j in tx.vin:
+        # scriptSig might be of type bytes, so convert to CScript for the moment
+        count += CScript(j.scriptSig).GetSigOpCount(accurate)
+    return count
+
+def witness_script(use_p2wsh, pubkey):
+    """Create a scriptPubKey for a pay-to-witness TxOut.
+
+    This is either a P2WPKH output for the given pubkey, or a P2WSH output of a
+    1-of-1 multisig for the given pubkey. Returns the hex encoding of the
+    scriptPubKey."""
+    if not use_p2wsh:
+        # P2WPKH instead
+        pkscript = key_to_p2wpkh_script(pubkey)
+    else:
+        # 1-of-1 multisig
+        witness_script = keys_to_multisig_script([pubkey])
+        pkscript = script_to_p2wsh_script(witness_script)
+    return pkscript.hex()
+
+def create_witness_tx(node, use_p2wsh, utxo, pubkey, encode_p2sh, amount):
+    """Return a transaction (in hex) that spends the given utxo to a segwit output.
+
+    Optionally wrap the segwit output using P2SH."""
+    if use_p2wsh:
+        program = keys_to_multisig_script([pubkey])
+        addr = script_to_p2sh_p2wsh(program) if encode_p2sh else script_to_p2wsh(program)
+    else:
+        addr = key_to_p2sh_p2wpkh(pubkey) if encode_p2sh else key_to_p2wpkh(pubkey)
+    if not encode_p2sh:
+        assert_equal(address_to_scriptpubkey(addr).hex(), witness_script(use_p2wsh, pubkey))
+    return node.createrawtransaction([utxo], {addr: amount})
+
+def send_to_witness(use_p2wsh, node, utxo, pubkey, encode_p2sh, amount, sign=True, insert_redeem_script=""):
+    """Create a transaction spending a given utxo to a segwit output.
+
+    The output corresponds to the given pubkey: use_p2wsh determines whether to
+    use P2WPKH or P2WSH; encode_p2sh determines whether to wrap in P2SH.
+    sign=True will have the given node sign the transaction.
+    insert_redeem_script will be added to the scriptSig, if given."""
+    tx_to_witness = create_witness_tx(node, use_p2wsh, utxo, pubkey, encode_p2sh, amount)
+    if (sign):
+        signed = node.signrawtransactionwithwallet(tx_to_witness)
+        assert "errors" not in signed or len(signed["errors"]) == 0
+        return node.sendrawtransaction(signed["hex"])
+    else:
+        if (insert_redeem_script):
+            tx = tx_from_hex(tx_to_witness)
+            tx.vin[0].scriptSig += CScript([bytes.fromhex(insert_redeem_script)])
+            tx_to_witness = tx.serialize().hex()
+
+        return node.sendrawtransaction(tx_to_witness)
+
+class TestFrameworkBlockTools(unittest.TestCase):
+    def test_create_coinbase(self):
+        height = 20
+        coinbase_tx = create_coinbase(height=height)
+        assert_equal(CScriptNum.decode(coinbase_tx.vin[0].scriptSig), height)
diff --git a/warnet-demo/scenarios/test_framework/coverage.py b/warnet-demo/scenarios/test_framework/coverage.py
new file mode 100644
index 0000000..912a945
--- /dev/null
+++ b/warnet-demo/scenarios/test_framework/coverage.py
@@ -0,0 +1,113 @@
+#!/usr/bin/env python3
+# Copyright (c) 2015-2021 The Bitcoin Core developers
+# Distributed under the MIT software license, see the accompanying
+# file COPYING or http://www.opensource.org/licenses/mit-license.php.
+"""Utilities for doing coverage analysis on the RPC interface.
+ +Provides a way to track which RPC commands are exercised during +testing. +""" + +import os + +from .authproxy import AuthServiceProxy +from typing import Optional + +REFERENCE_FILENAME = 'rpc_interface.txt' + + +class AuthServiceProxyWrapper(): + """ + An object that wraps AuthServiceProxy to record specific RPC calls. + + """ + def __init__(self, auth_service_proxy_instance: AuthServiceProxy, rpc_url: str, coverage_logfile: Optional[str]=None): + """ + Kwargs: + auth_service_proxy_instance: the instance being wrapped. + rpc_url: url of the RPC instance being wrapped + coverage_logfile: if specified, write each service_name + out to a file when called. + + """ + self.auth_service_proxy_instance = auth_service_proxy_instance + self.rpc_url = rpc_url + self.coverage_logfile = coverage_logfile + + def __getattr__(self, name): + return_val = getattr(self.auth_service_proxy_instance, name) + if not isinstance(return_val, type(self.auth_service_proxy_instance)): + # If proxy getattr returned an unwrapped value, do the same here. + return return_val + return AuthServiceProxyWrapper(return_val, self.rpc_url, self.coverage_logfile) + + def __call__(self, *args, **kwargs): + """ + Delegates to AuthServiceProxy, then writes the particular RPC method + called to a file. + + """ + return_val = self.auth_service_proxy_instance.__call__(*args, **kwargs) + self._log_call() + return return_val + + def _log_call(self): + rpc_method = self.auth_service_proxy_instance._service_name + + if self.coverage_logfile: + with open(self.coverage_logfile, 'a+', encoding='utf8') as f: + f.write("%s\n" % rpc_method) + + def __truediv__(self, relative_uri): + return AuthServiceProxyWrapper(self.auth_service_proxy_instance / relative_uri, + self.rpc_url, + self.coverage_logfile) + + def get_request(self, *args, **kwargs): + self._log_call() + return self.auth_service_proxy_instance.get_request(*args, **kwargs) + +def get_filename(dirname, n_node): + """ + Get a filename unique to the test process ID and node. + + This file will contain a list of RPC commands covered. + """ + pid = str(os.getpid()) + return os.path.join( + dirname, "coverage.pid%s.node%s.txt" % (pid, str(n_node))) + + +def write_all_rpc_commands(dirname: str, node: AuthServiceProxy) -> bool: + """ + Write out a list of all RPC functions available in `bitcoin-cli` for + coverage comparison. This will only happen once per coverage + directory. + + Args: + dirname: temporary test dir + node: client + + Returns: + if the RPC interface file was written. + + """ + filename = os.path.join(dirname, REFERENCE_FILENAME) + + if os.path.isfile(filename): + return False + + help_output = node.help().split('\n') + commands = set() + + for line in help_output: + line = line.strip() + + # Ignore blanks and headers + if line and not line.startswith('='): + commands.add("%s\n" % line.split()[0]) + + with open(filename, 'w', encoding='utf8') as f: + f.writelines(list(commands)) + + return True diff --git a/warnet-demo/scenarios/test_framework/descriptors.py b/warnet-demo/scenarios/test_framework/descriptors.py new file mode 100644 index 0000000..46b4057 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/descriptors.py @@ -0,0 +1,64 @@ +#!/usr/bin/env python3 +# Copyright (c) 2019 Pieter Wuille +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. 
+"""Utility functions related to output descriptors"""
+
+import re
+
+INPUT_CHARSET = "0123456789()[],'/*abcdefgh@:$%{}IJKLMNOPQRSTUVWXYZ&+-.;<=>?!^_|~ijklmnopqrstuvwxyzABCDEFGH`#\"\\ "
+CHECKSUM_CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"
+GENERATOR = [0xf5dee51989, 0xa9fdca3312, 0x1bab10e32d, 0x3706b1677a, 0x644d626ffd]
+
+def descsum_polymod(symbols):
+    """Internal function that computes the descriptor checksum."""
+    chk = 1
+    for value in symbols:
+        top = chk >> 35
+        chk = (chk & 0x7ffffffff) << 5 ^ value
+        for i in range(5):
+            chk ^= GENERATOR[i] if ((top >> i) & 1) else 0
+    return chk
+
+def descsum_expand(s):
+    """Internal function that does the character to symbol expansion."""
+    groups = []
+    symbols = []
+    for c in s:
+        if c not in INPUT_CHARSET:
+            return None
+        v = INPUT_CHARSET.find(c)
+        symbols.append(v & 31)
+        groups.append(v >> 5)
+        if len(groups) == 3:
+            symbols.append(groups[0] * 9 + groups[1] * 3 + groups[2])
+            groups = []
+    if len(groups) == 1:
+        symbols.append(groups[0])
+    elif len(groups) == 2:
+        symbols.append(groups[0] * 3 + groups[1])
+    return symbols
+
+def descsum_create(s):
+    """Add a checksum to a descriptor without one."""
+    symbols = descsum_expand(s) + [0, 0, 0, 0, 0, 0, 0, 0]
+    checksum = descsum_polymod(symbols) ^ 1
+    return s + '#' + ''.join(CHECKSUM_CHARSET[(checksum >> (5 * (7 - i))) & 31] for i in range(8))
+
+def descsum_check(s, require=True):
+    """Verify that the checksum is correct in a descriptor."""
+    if '#' not in s:
+        return not require
+    if s[-9] != '#':
+        return False
+    if not all(x in CHECKSUM_CHARSET for x in s[-8:]):
+        return False
+    symbols = descsum_expand(s[:-9]) + [CHECKSUM_CHARSET.find(x) for x in s[-8:]]
+    return descsum_polymod(symbols) == 1
+
+def drop_origins(s):
+    '''Drop the key origins from a descriptor.'''
+    desc = re.sub(r'\[.+?\]', '', s)
+    if '#' in s:
+        desc = desc[:desc.index('#')]
+    return descsum_create(desc)
diff --git a/warnet-demo/scenarios/test_framework/ellswift.py b/warnet-demo/scenarios/test_framework/ellswift.py
new file mode 100644
index 0000000..97b1011
--- /dev/null
+++ b/warnet-demo/scenarios/test_framework/ellswift.py
@@ -0,0 +1,163 @@
+#!/usr/bin/env python3
+# Copyright (c) 2022 The Bitcoin Core developers
+# Distributed under the MIT software license, see the accompanying
+# file COPYING or http://www.opensource.org/licenses/mit-license.php.
+"""Test-only Elligator Swift implementation
+
+WARNING: This code is slow and uses bad randomness.
+Do not use for anything but tests."""
+
+import csv
+import os
+import random
+import unittest
+
+from test_framework.secp256k1 import FE, G, GE
+
+# Precomputed constant square root of -3 (mod p).
+MINUS_3_SQRT = FE(-3).sqrt()
+
+def xswiftec(u, t):
+    """Decode field elements (u, t) to an X coordinate on the curve."""
+    if u == 0:
+        u = FE(1)
+    if t == 0:
+        t = FE(1)
+    if u**3 + t**2 + 7 == 0:
+        t = 2 * t
+    X = (u**3 + 7 - t**2) / (2 * t)
+    Y = (X + t) / (MINUS_3_SQRT * u)
+    for x in (u + 4 * Y**2, (-X / Y - u) / 2, (X / Y - u) / 2):
+        if GE.is_valid_x(x):
+            return x
+    assert False
+
+def xswiftec_inv(x, u, case):
+    """Given x and u, find t such that xswiftec(u, t) = x, or return None.
+ + Case selects which of the up to 8 results to return.""" + + if case & 2 == 0: + if GE.is_valid_x(-x - u): + return None + v = x + s = -(u**3 + 7) / (u**2 + u*v + v**2) + else: + s = x - u + if s == 0: + return None + r = (-s * (4 * (u**3 + 7) + 3 * s * u**2)).sqrt() + if r is None: + return None + if case & 1 and r == 0: + return None + v = (-u + r / s) / 2 + w = s.sqrt() + if w is None: + return None + if case & 5 == 0: + return -w * (u * (1 - MINUS_3_SQRT) / 2 + v) + if case & 5 == 1: + return w * (u * (1 + MINUS_3_SQRT) / 2 + v) + if case & 5 == 4: + return w * (u * (1 - MINUS_3_SQRT) / 2 + v) + if case & 5 == 5: + return -w * (u * (1 + MINUS_3_SQRT) / 2 + v) + +def xelligatorswift(x): + """Given a field element X on the curve, find (u, t) that encode them.""" + assert GE.is_valid_x(x) + while True: + u = FE(random.randrange(1, FE.SIZE)) + case = random.randrange(0, 8) + t = xswiftec_inv(x, u, case) + if t is not None: + return u, t + +def ellswift_create(): + """Generate a (privkey, ellswift_pubkey) pair.""" + priv = random.randrange(1, GE.ORDER) + u, t = xelligatorswift((priv * G).x) + return priv.to_bytes(32, 'big'), u.to_bytes() + t.to_bytes() + +def ellswift_ecdh_xonly(pubkey_theirs, privkey): + """Compute X coordinate of shared ECDH point between ellswift pubkey and privkey.""" + u = FE(int.from_bytes(pubkey_theirs[:32], 'big')) + t = FE(int.from_bytes(pubkey_theirs[32:], 'big')) + d = int.from_bytes(privkey, 'big') + return (d * GE.lift_x(xswiftec(u, t))).x.to_bytes() + + +class TestFrameworkEllSwift(unittest.TestCase): + def test_xswiftec(self): + """Verify that xswiftec maps all inputs to the curve.""" + for _ in range(32): + u = FE(random.randrange(0, FE.SIZE)) + t = FE(random.randrange(0, FE.SIZE)) + x = xswiftec(u, t) + self.assertTrue(GE.is_valid_x(x)) + + # Check that inputs which are considered undefined in the original + # SwiftEC paper can also be decoded successfully (by remapping) + undefined_inputs = [ + (FE(0), FE(23)), # u = 0 + (FE(42), FE(0)), # t = 0 + (FE(5), FE(-132).sqrt()), # u^3 + t^2 + 7 = 0 + ] + assert undefined_inputs[-1][0]**3 + undefined_inputs[-1][1]**2 + 7 == 0 + for u, t in undefined_inputs: + x = xswiftec(u, t) + self.assertTrue(GE.is_valid_x(x)) + + def test_elligator_roundtrip(self): + """Verify that encoding using xelligatorswift decodes back using xswiftec.""" + for _ in range(32): + while True: + # Loop until we find a valid X coordinate on the curve. + x = FE(random.randrange(1, FE.SIZE)) + if GE.is_valid_x(x): + break + # Encoding it to (u, t), decode it back, and compare. 
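+            # xswiftec_inv has up to eight branches (cases 0-7), so x can have
+            # several (u, t) encodings; any one of them must decode back to x.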
+ u, t = xelligatorswift(x) + x2 = xswiftec(u, t) + self.assertEqual(x2, x) + + def test_ellswift_ecdh_xonly(self): + """Verify that shared secret computed by ellswift_ecdh_xonly match.""" + for _ in range(32): + privkey1, encoding1 = ellswift_create() + privkey2, encoding2 = ellswift_create() + shared_secret1 = ellswift_ecdh_xonly(encoding1, privkey2) + shared_secret2 = ellswift_ecdh_xonly(encoding2, privkey1) + self.assertEqual(shared_secret1, shared_secret2) + + def test_elligator_encode_testvectors(self): + """Implement the BIP324 test vectors for ellswift encoding (read from xswiftec_inv_test_vectors.csv).""" + vectors_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'xswiftec_inv_test_vectors.csv') + with open(vectors_file, newline='', encoding='utf8') as csvfile: + reader = csv.DictReader(csvfile) + for row in reader: + u = FE.from_bytes(bytes.fromhex(row['u'])) + x = FE.from_bytes(bytes.fromhex(row['x'])) + for case in range(8): + ret = xswiftec_inv(x, u, case) + if ret is None: + self.assertEqual(row[f"case{case}_t"], "") + else: + self.assertEqual(row[f"case{case}_t"], ret.to_bytes().hex()) + self.assertEqual(xswiftec(u, ret), x) + + def test_elligator_decode_testvectors(self): + """Implement the BIP324 test vectors for ellswift decoding (read from ellswift_decode_test_vectors.csv).""" + vectors_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'ellswift_decode_test_vectors.csv') + with open(vectors_file, newline='', encoding='utf8') as csvfile: + reader = csv.DictReader(csvfile) + for row in reader: + encoding = bytes.fromhex(row['ellswift']) + assert len(encoding) == 64 + expected_x = FE(int(row['x'], 16)) + u = FE(int.from_bytes(encoding[:32], 'big')) + t = FE(int.from_bytes(encoding[32:], 'big')) + x = xswiftec(u, t) + self.assertEqual(x, expected_x) + self.assertTrue(GE.is_valid_x(x)) diff --git a/warnet-demo/scenarios/test_framework/ellswift_decode_test_vectors.csv b/warnet-demo/scenarios/test_framework/ellswift_decode_test_vectors.csv new file mode 100644 index 0000000..1bab96b --- /dev/null +++ b/warnet-demo/scenarios/test_framework/ellswift_decode_test_vectors.csv @@ -0,0 +1,77 @@ +ellswift,x,comment +00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000,edd1fd3e327ce90cc7a3542614289aee9682003e9cf7dcc9cf2ca9743be5aa0c,u%p=0;t%p=0;valid_x(x2) +000000000000000000000000000000000000000000000000000000000000000001d3475bf7655b0fb2d852921035b2ef607f49069b97454e6795251062741771,b5da00b73cd6560520e7c364086e7cd23a34bf60d0e707be9fc34d4cd5fdfa2c,u%p=0;valid_x(x1) +000000000000000000000000000000000000000000000000000000000000000082277c4a71f9d22e66ece523f8fa08741a7c0912c66a69ce68514bfd3515b49f,f482f2e241753ad0fb89150d8491dc1e34ff0b8acfbb442cfe999e2e5e6fd1d2,u%p=0;valid_x(x3);valid_x(x2);valid_x(x1) +00000000000000000000000000000000000000000000000000000000000000008421cc930e77c9f514b6915c3dbe2a94c6d8f690b5b739864ba6789fb8a55dd0,9f59c40275f5085a006f05dae77eb98c6fd0db1ab4a72ac47eae90a4fc9e57e0,u%p=0;valid_x(x2) +0000000000000000000000000000000000000000000000000000000000000000bde70df51939b94c9c24979fa7dd04ebd9b3572da7802290438af2a681895441,aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa9fffffd6b,u%p=0;(u'^3-t'^2+7)%p=0;valid_x(x3) +0000000000000000000000000000000000000000000000000000000000000000d19c182d2759cd99824228d94799f8c6557c38a1c0d6779b9d4b729c6f1ccc42,70720db7e238d04121f5b1afd8cc5ad9d18944c6bdc94881f502b7a3af3aecff,u%p=0;valid_x(x3) 
+0000000000000000000000000000000000000000000000000000000000000000fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f,edd1fd3e327ce90cc7a3542614289aee9682003e9cf7dcc9cf2ca9743be5aa0c,u%p=0;t%p=0;valid_x(x2);t>=p +0000000000000000000000000000000000000000000000000000000000000000ffffffffffffffffffffffffffffffffffffffffffffffffffffffff2664bbd5,50873db31badcc71890e4f67753a65757f97aaa7dd5f1e82b753ace32219064b,u%p=0;valid_x(x3);valid_x(x2);valid_x(x1);t>=p +0000000000000000000000000000000000000000000000000000000000000000ffffffffffffffffffffffffffffffffffffffffffffffffffffffff7028de7d,1eea9cc59cfcf2fa151ac6c274eea4110feb4f7b68c5965732e9992e976ef68e,u%p=0;valid_x(x2);t>=p +0000000000000000000000000000000000000000000000000000000000000000ffffffffffffffffffffffffffffffffffffffffffffffffffffffffcbcfb7e7,12303941aedc208880735b1f1795c8e55be520ea93e103357b5d2adb7ed59b8e,u%p=0;valid_x(x1);t>=p +0000000000000000000000000000000000000000000000000000000000000000fffffffffffffffffffffffffffffffffffffffffffffffffffffffff3113ad9,7eed6b70e7b0767c7d7feac04e57aa2a12fef5e0f48f878fcbb88b3b6b5e0783,u%p=0;valid_x(x3);t>=p +0a2d2ba93507f1df233770c2a797962cc61f6d15da14ecd47d8d27ae1cd5f8530000000000000000000000000000000000000000000000000000000000000000,532167c11200b08c0e84a354e74dcc40f8b25f4fe686e30869526366278a0688,t%p=0;(u'^3+t'^2+7)%p=0;valid_x(x3);valid_x(x2);valid_x(x1) +0a2d2ba93507f1df233770c2a797962cc61f6d15da14ecd47d8d27ae1cd5f853fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f,532167c11200b08c0e84a354e74dcc40f8b25f4fe686e30869526366278a0688,t%p=0;(u'^3+t'^2+7)%p=0;valid_x(x3);valid_x(x2);valid_x(x1);t>=p +0ffde9ca81d751e9cdaffc1a50779245320b28996dbaf32f822f20117c22fbd6c74d99efceaa550f1ad1c0f43f46e7ff1ee3bd0162b7bf55f2965da9c3450646,74e880b3ffd18fe3cddf7902522551ddf97fa4a35a3cfda8197f947081a57b8f,valid_x(x3) +0ffde9ca81d751e9cdaffc1a50779245320b28996dbaf32f822f20117c22fbd6ffffffffffffffffffffffffffffffffffffffffffffffffffffffff156ca896,377b643fce2271f64e5c8101566107c1be4980745091783804f654781ac9217c,valid_x(x2);t>=p +123658444f32be8f02ea2034afa7ef4bbe8adc918ceb49b12773b625f490b368ffffffffffffffffffffffffffffffffffffffffffffffffffffffff8dc5fe11,ed16d65cf3a9538fcb2c139f1ecbc143ee14827120cbc2659e667256800b8142,(u'^3-t'^2+7)%p=0;valid_x(x3);valid_x(x2);valid_x(x1);t>=p +146f92464d15d36e35382bd3ca5b0f976c95cb08acdcf2d5b3570617990839d7ffffffffffffffffffffffffffffffffffffffffffffffffffffffff3145e93b,0d5cd840427f941f65193079ab8e2e83024ef2ee7ca558d88879ffd879fb6657,(u'^3+t'^2+7)%p=0;valid_x(x3);t>=p +15fdf5cf09c90759add2272d574d2bb5fe1429f9f3c14c65e3194bf61b82aa73ffffffffffffffffffffffffffffffffffffffffffffffffffffffff04cfd906,16d0e43946aec93f62d57eb8cde68951af136cf4b307938dd1447411e07bffe1,(u'^3+t'^2+7)%p=0;valid_x(x2);t>=p +1f67edf779a8a649d6def60035f2fa22d022dd359079a1a144073d84f19b92d50000000000000000000000000000000000000000000000000000000000000000,025661f9aba9d15c3118456bbe980e3e1b8ba2e047c737a4eb48a040bb566f6c,t%p=0;valid_x(x2) +1f67edf779a8a649d6def60035f2fa22d022dd359079a1a144073d84f19b92d5fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f,025661f9aba9d15c3118456bbe980e3e1b8ba2e047c737a4eb48a040bb566f6c,t%p=0;valid_x(x2);t>=p +1fe1e5ef3fceb5c135ab7741333ce5a6e80d68167653f6b2b24bcbcfaaaff507fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f,98bec3b2a351fa96cfd191c1778351931b9e9ba9ad1149f6d9eadca80981b801,t%p=0;(u'^3-t'^2+7)%p=0;valid_x(x3);valid_x(x2);valid_x(x1);t>=p 
+4056a34a210eec7892e8820675c860099f857b26aad85470ee6d3cf1304a9dcf375e70374271f20b13c9986ed7d3c17799698cfc435dbed3a9f34b38c823c2b4,868aac2003b29dbcad1a3e803855e078a89d16543ac64392d122417298cec76e,(u'^3-t'^2+7)%p=0;valid_x(x3) +4197ec3723c654cfdd32ab075506648b2ff5070362d01a4fff14b336b78f963fffffffffffffffffffffffffffffffffffffffffffffffffffffffffb3ab1e95,ba5a6314502a8952b8f456e085928105f665377a8ce27726a5b0eb7ec1ac0286,(u'^3+t'^2+7)%p=0;valid_x(x1);t>=p +47eb3e208fedcdf8234c9421e9cd9a7ae873bfbdbc393723d1ba1e1e6a8e6b24ffffffffffffffffffffffffffffffffffffffffffffffffffffffff7cd12cb1,d192d52007e541c9807006ed0468df77fd214af0a795fe119359666fdcf08f7c,(u'^3+t'^2+7)%p=0;valid_x(x3);valid_x(x2);valid_x(x1);t>=p +5eb9696a2336fe2c3c666b02c755db4c0cfd62825c7b589a7b7bb442e141c1d693413f0052d49e64abec6d5831d66c43612830a17df1fe4383db896468100221,ef6e1da6d6c7627e80f7a7234cb08a022c1ee1cf29e4d0f9642ae924cef9eb38,(u'^3+t'^2+7)%p=0;valid_x(x1) +7bf96b7b6da15d3476a2b195934b690a3a3de3e8ab8474856863b0de3af90b0e0000000000000000000000000000000000000000000000000000000000000000,50851dfc9f418c314a437295b24feeea27af3d0cd2308348fda6e21c463e46ff,t%p=0;valid_x(x1) +7bf96b7b6da15d3476a2b195934b690a3a3de3e8ab8474856863b0de3af90b0efffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f,50851dfc9f418c314a437295b24feeea27af3d0cd2308348fda6e21c463e46ff,t%p=0;valid_x(x1);t>=p +851b1ca94549371c4f1f7187321d39bf51c6b7fb61f7cbf027c9da62021b7a65fc54c96837fb22b362eda63ec52ec83d81bedd160c11b22d965d9f4a6d64d251,3e731051e12d33237eb324f2aa5b16bb868eb49a1aa1fadc19b6e8761b5a5f7b,(u'^3+t'^2+7)%p=0;valid_x(x2) +943c2f775108b737fe65a9531e19f2fc2a197f5603e3a2881d1d83e4008f91250000000000000000000000000000000000000000000000000000000000000000,311c61f0ab2f32b7b1f0223fa72f0a78752b8146e46107f8876dd9c4f92b2942,t%p=0;valid_x(x3);valid_x(x2);valid_x(x1) +943c2f775108b737fe65a9531e19f2fc2a197f5603e3a2881d1d83e4008f9125fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f,311c61f0ab2f32b7b1f0223fa72f0a78752b8146e46107f8876dd9c4f92b2942,t%p=0;valid_x(x3);valid_x(x2);valid_x(x1);t>=p +a0f18492183e61e8063e573606591421b06bc3513631578a73a39c1c3306239f2f32904f0d2a33ecca8a5451705bb537d3bf44e071226025cdbfd249fe0f7ad6,97a09cf1a2eae7c494df3c6f8a9445bfb8c09d60832f9b0b9d5eabe25fbd14b9,valid_x(x1) +a1ed0a0bd79d8a23cfe4ec5fef5ba5cccfd844e4ff5cb4b0f2e71627341f1c5b17c499249e0ac08d5d11ea1c2c8ca7001616559a7994eadec9ca10fb4b8516dc,65a89640744192cdac64b2d21ddf989cdac7500725b645bef8e2200ae39691f2,valid_x(x2) +ba94594a432721aa3580b84c161d0d134bc354b690404d7cd4ec57c16d3fbe98ffffffffffffffffffffffffffffffffffffffffffffffffffffffffea507dd7,5e0d76564aae92cb347e01a62afd389a9aa401c76c8dd227543dc9cd0efe685a,valid_x(x1);t>=p +bcaf7219f2f6fbf55fe5e062dce0e48c18f68103f10b8198e974c184750e1be3932016cbf69c4471bd1f656c6a107f1973de4af7086db897277060e25677f19a,2d97f96cac882dfe73dc44db6ce0f1d31d6241358dd5d74eb3d3b50003d24c2b,valid_x(x3);valid_x(x2);valid_x(x1) +bcaf7219f2f6fbf55fe5e062dce0e48c18f68103f10b8198e974c184750e1be3ffffffffffffffffffffffffffffffffffffffffffffffffffffffff6507d09a,e7008afe6e8cbd5055df120bd748757c686dadb41cce75e4addcc5e02ec02b44,valid_x(x3);valid_x(x2);valid_x(x1);t>=p +c5981bae27fd84401c72a155e5707fbb811b2b620645d1028ea270cbe0ee225d4b62aa4dca6506c1acdbecc0552569b4b21436a5692e25d90d3bc2eb7ce24078,948b40e7181713bc018ec1702d3d054d15746c59a7020730dd13ecf985a010d7,(u'^3+t'^2+7)%p=0;valid_x(x3) 
+c894ce48bfec433014b931a6ad4226d7dbd8eaa7b6e3faa8d0ef94052bcf8cff336eeb3919e2b4efb746c7f71bbca7e9383230fbbc48ffafe77e8bcc69542471,f1c91acdc2525330f9b53158434a4d43a1c547cff29f15506f5da4eb4fe8fa5a,(u'^3-t'^2+7)%p=0;valid_x(x3);valid_x(x2);valid_x(x1) +cbb0deab125754f1fdb2038b0434ed9cb3fb53ab735391129994a535d925f6730000000000000000000000000000000000000000000000000000000000000000,872d81ed8831d9998b67cb7105243edbf86c10edfebb786c110b02d07b2e67cd,t%p=0;(u'^3-t'^2+7)%p=0;valid_x(x3);valid_x(x2);valid_x(x1) +d917b786dac35670c330c9c5ae5971dfb495c8ae523ed97ee2420117b171f41effffffffffffffffffffffffffffffffffffffffffffffffffffffff2001f6f6,e45b71e110b831f2bdad8651994526e58393fde4328b1ec04d59897142584691,valid_x(x3);t>=p +e28bd8f5929b467eb70e04332374ffb7e7180218ad16eaa46b7161aa679eb4260000000000000000000000000000000000000000000000000000000000000000,66b8c980a75c72e598d383a35a62879f844242ad1e73ff12edaa59f4e58632b5,t%p=0;valid_x(x3) +e28bd8f5929b467eb70e04332374ffb7e7180218ad16eaa46b7161aa679eb426fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f,66b8c980a75c72e598d383a35a62879f844242ad1e73ff12edaa59f4e58632b5,t%p=0;valid_x(x3);t>=p +e7ee5814c1706bf8a89396a9b032bc014c2cac9c121127dbf6c99278f8bb53d1dfd04dbcda8e352466b6fcd5f2dea3e17d5e133115886eda20db8a12b54de71b,e842c6e3529b234270a5e97744edc34a04d7ba94e44b6d2523c9cf0195730a50,(u'^3+t'^2+7)%p=0;valid_x(x3);valid_x(x2);valid_x(x1) +f292e46825f9225ad23dc057c1d91c4f57fcb1386f29ef10481cb1d22518593fffffffffffffffffffffffffffffffffffffffffffffffffffffffff7011c989,3cea2c53b8b0170166ac7da67194694adacc84d56389225e330134dab85a4d55,(u'^3-t'^2+7)%p=0;valid_x(x3);t>=p +fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f0000000000000000000000000000000000000000000000000000000000000000,edd1fd3e327ce90cc7a3542614289aee9682003e9cf7dcc9cf2ca9743be5aa0c,u%p=0;t%p=0;valid_x(x2);u>=p +fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f01d3475bf7655b0fb2d852921035b2ef607f49069b97454e6795251062741771,b5da00b73cd6560520e7c364086e7cd23a34bf60d0e707be9fc34d4cd5fdfa2c,u%p=0;valid_x(x1);u>=p +fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f4218f20ae6c646b363db68605822fb14264ca8d2587fdd6fbc750d587e76a7ee,aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa9fffffd6b,u%p=0;(u'^3-t'^2+7)%p=0;valid_x(x3);u>=p +fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f82277c4a71f9d22e66ece523f8fa08741a7c0912c66a69ce68514bfd3515b49f,f482f2e241753ad0fb89150d8491dc1e34ff0b8acfbb442cfe999e2e5e6fd1d2,u%p=0;valid_x(x3);valid_x(x2);valid_x(x1);u>=p +fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f8421cc930e77c9f514b6915c3dbe2a94c6d8f690b5b739864ba6789fb8a55dd0,9f59c40275f5085a006f05dae77eb98c6fd0db1ab4a72ac47eae90a4fc9e57e0,u%p=0;valid_x(x2);u>=p +fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2fd19c182d2759cd99824228d94799f8c6557c38a1c0d6779b9d4b729c6f1ccc42,70720db7e238d04121f5b1afd8cc5ad9d18944c6bdc94881f502b7a3af3aecff,u%p=0;valid_x(x3);u>=p +fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2ffffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f,edd1fd3e327ce90cc7a3542614289aee9682003e9cf7dcc9cf2ca9743be5aa0c,u%p=0;t%p=0;valid_x(x2);u>=p;t>=p +fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2fffffffffffffffffffffffffffffffffffffffffffffffffffffffff2664bbd5,50873db31badcc71890e4f67753a65757f97aaa7dd5f1e82b753ace32219064b,u%p=0;valid_x(x3);valid_x(x2);valid_x(x1);u>=p;t>=p 
+fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2fffffffffffffffffffffffffffffffffffffffffffffffffffffffff7028de7d,1eea9cc59cfcf2fa151ac6c274eea4110feb4f7b68c5965732e9992e976ef68e,u%p=0;valid_x(x2);u>=p;t>=p +fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2fffffffffffffffffffffffffffffffffffffffffffffffffffffffffcbcfb7e7,12303941aedc208880735b1f1795c8e55be520ea93e103357b5d2adb7ed59b8e,u%p=0;valid_x(x1);u>=p;t>=p +fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2ffffffffffffffffffffffffffffffffffffffffffffffffffffffffff3113ad9,7eed6b70e7b0767c7d7feac04e57aa2a12fef5e0f48f878fcbb88b3b6b5e0783,u%p=0;valid_x(x3);u>=p;t>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffff13cea4a70000000000000000000000000000000000000000000000000000000000000000,649984435b62b4a25d40c6133e8d9ab8c53d4b059ee8a154a3be0fcf4e892edb,t%p=0;valid_x(x1);u>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffff13cea4a7fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f,649984435b62b4a25d40c6133e8d9ab8c53d4b059ee8a154a3be0fcf4e892edb,t%p=0;valid_x(x1);u>=p;t>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffff15028c590063f64d5a7f1c14915cd61eac886ab295bebd91992504cf77edb028bdd6267f,3fde5713f8282eead7d39d4201f44a7c85a5ac8a0681f35e54085c6b69543374,(u'^3+t'^2+7)%p=0;valid_x(x2);u>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffff2715de860000000000000000000000000000000000000000000000000000000000000000,3524f77fa3a6eb4389c3cb5d27f1f91462086429cd6c0cb0df43ea8f1e7b3fb4,t%p=0;valid_x(x3);valid_x(x2);valid_x(x1);u>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffff2715de86fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f,3524f77fa3a6eb4389c3cb5d27f1f91462086429cd6c0cb0df43ea8f1e7b3fb4,t%p=0;valid_x(x3);valid_x(x2);valid_x(x1);u>=p;t>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffff2c2c5709e7156c417717f2feab147141ec3da19fb759575cc6e37b2ea5ac9309f26f0f66,d2469ab3e04acbb21c65a1809f39caafe7a77c13d10f9dd38f391c01dc499c52,(u'^3-t'^2+7)%p=0;valid_x(x3);valid_x(x2);valid_x(x1);u>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffff3a08cc1efffffffffffffffffffffffffffffffffffffffffffffffffffffffff760e9f0,38e2a5ce6a93e795e16d2c398bc99f0369202ce21e8f09d56777b40fc512bccc,valid_x(x3);u>=p;t>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffff3e91257d932016cbf69c4471bd1f656c6a107f1973de4af7086db897277060e25677f19a,864b3dc902c376709c10a93ad4bbe29fce0012f3dc8672c6286bba28d7d6d6fc,valid_x(x3);u>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffff795d6c1c322cadf599dbb86481522b3cc55f15a67932db2afa0111d9ed6981bcd124bf44,766dfe4a700d9bee288b903ad58870e3d4fe2f0ef780bcac5c823f320d9a9bef,(u'^3+t'^2+7)%p=0;valid_x(x1);u>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffff8e426f0392389078c12b1a89e9542f0593bc96b6bfde8224f8654ef5d5cda935a3582194,faec7bc1987b63233fbc5f956edbf37d54404e7461c58ab8631bc68e451a0478,valid_x(x1);u>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffff91192139ffffffffffffffffffffffffffffffffffffffffffffffffffffffff45f0f1eb,ec29a50bae138dbf7d8e24825006bb5fc1a2cc1243ba335bc6116fb9e498ec1f,valid_x(x2);u>=p;t>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffff98eb9ab76e84499c483b3bf06214abfe065dddf43b8601de596d63b9e45a166a580541fe,1e0ff2dee9b09b136292a9e910f0d6ac3e552a644bba39e64e9dd3e3bbd3d4d4,(u'^3-t'^2+7)%p=0;valid_x(x3);u>=p 
+ffffffffffffffffffffffffffffffffffffffffffffffffffffffff9b77b7f2c74d99efceaa550f1ad1c0f43f46e7ff1ee3bd0162b7bf55f2965da9c3450646,8b7dd5c3edba9ee97b70eff438f22dca9849c8254a2f3345a0a572ffeaae0928,valid_x(x2);u>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffff9b77b7f2ffffffffffffffffffffffffffffffffffffffffffffffffffffffff156ca896,0881950c8f51d6b9a6387465d5f12609ef1bb25412a08a74cb2dfb200c74bfbf,valid_x(x3);valid_x(x2);valid_x(x1);u>=p;t>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffffa2f5cd838816c16c4fe8a1661d606fdb13cf9af04b979a2e159a09409ebc8645d58fde02,2f083207b9fd9b550063c31cd62b8746bd543bdc5bbf10e3a35563e927f440c8,(u'^3+t'^2+7)%p=0;valid_x(x3);valid_x(x2);valid_x(x1);u>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffffb13f75c00000000000000000000000000000000000000000000000000000000000000000,4f51e0be078e0cddab2742156adba7e7a148e73157072fd618cd60942b146bd0,t%p=0;valid_x(x3);u>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffffb13f75c0fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f,4f51e0be078e0cddab2742156adba7e7a148e73157072fd618cd60942b146bd0,t%p=0;valid_x(x3);u>=p;t>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffffe7bc1f8d0000000000000000000000000000000000000000000000000000000000000000,16c2ccb54352ff4bd794f6efd613c72197ab7082da5b563bdf9cb3edaafe74c2,t%p=0;valid_x(x2);u>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffffe7bc1f8dfffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f,16c2ccb54352ff4bd794f6efd613c72197ab7082da5b563bdf9cb3edaafe74c2,t%p=0;valid_x(x2);u>=p;t>=p +ffffffffffffffffffffffffffffffffffffffffffffffffffffffffef64d162750546ce42b0431361e52d4f5242d8f24f33e6b1f99b591647cbc808f462af51,d41244d11ca4f65240687759f95ca9efbab767ededb38fd18c36e18cd3b6f6a9,(u'^3+t'^2+7)%p=0;valid_x(x3);u>=p +fffffffffffffffffffffffffffffffffffffffffffffffffffffffff0e5be52372dd6e894b2a326fc3605a6e8f3c69c710bf27d630dfe2004988b78eb6eab36,64bf84dd5e03670fdb24c0f5d3c2c365736f51db6c92d95010716ad2d36134c8,valid_x(x3);valid_x(x2);valid_x(x1);u>=p +fffffffffffffffffffffffffffffffffffffffffffffffffffffffffefbb982fffffffffffffffffffffffffffffffffffffffffffffffffffffffff6d6db1f,1c92ccdfcf4ac550c28db57cff0c8515cb26936c786584a70114008d6c33a34b,valid_x(x1);u>=p;t>=p diff --git a/warnet-demo/scenarios/test_framework/key.py b/warnet-demo/scenarios/test_framework/key.py new file mode 100644 index 0000000..6c18925 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/key.py @@ -0,0 +1,351 @@ +# Copyright (c) 2019-2020 Pieter Wuille +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""Test-only secp256k1 elliptic curve protocols implementation + +WARNING: This code is slow, uses bad randomness, does not properly protect +keys, and is trivially vulnerable to side channel attacks. Do not use for +anything but tests.""" +import csv +import hashlib +import hmac +import os +import random +import unittest + +from test_framework import secp256k1 + +# Point with no known discrete log. 
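+# (BIP341 uses this as its "nothing-up-my-sleeve" x coordinate: it is the
+# SHA256 of the standard uncompressed encoding of the secp256k1 base point G.)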
+H_POINT = "50929b74c1a04954b78b4b6035e97a5e078a5a0f28ec96d547bfee9ace803ac0" + +# Order of the secp256k1 curve +ORDER = secp256k1.GE.ORDER + +def TaggedHash(tag, data): + ss = hashlib.sha256(tag.encode('utf-8')).digest() + ss += ss + ss += data + return hashlib.sha256(ss).digest() + + +class ECPubKey: + """A secp256k1 public key""" + + def __init__(self): + """Construct an uninitialized public key""" + self.p = None + + def set(self, data): + """Construct a public key from a serialization in compressed or uncompressed format""" + self.p = secp256k1.GE.from_bytes(data) + self.compressed = len(data) == 33 + + @property + def is_compressed(self): + return self.compressed + + @property + def is_valid(self): + return self.p is not None + + def get_bytes(self): + assert self.is_valid + if self.compressed: + return self.p.to_bytes_compressed() + else: + return self.p.to_bytes_uncompressed() + + def verify_ecdsa(self, sig, msg, low_s=True): + """Verify a strictly DER-encoded ECDSA signature against this pubkey. + + See https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm for the + ECDSA verifier algorithm""" + assert self.is_valid + + # Extract r and s from the DER formatted signature. Return false for + # any DER encoding errors. + if (sig[1] + 2 != len(sig)): + return False + if (len(sig) < 4): + return False + if (sig[0] != 0x30): + return False + if (sig[2] != 0x02): + return False + rlen = sig[3] + if (len(sig) < 6 + rlen): + return False + if rlen < 1 or rlen > 33: + return False + if sig[4] >= 0x80: + return False + if (rlen > 1 and (sig[4] == 0) and not (sig[5] & 0x80)): + return False + r = int.from_bytes(sig[4:4+rlen], 'big') + if (sig[4+rlen] != 0x02): + return False + slen = sig[5+rlen] + if slen < 1 or slen > 33: + return False + if (len(sig) != 6 + rlen + slen): + return False + if sig[6+rlen] >= 0x80: + return False + if (slen > 1 and (sig[6+rlen] == 0) and not (sig[7+rlen] & 0x80)): + return False + s = int.from_bytes(sig[6+rlen:6+rlen+slen], 'big') + + # Verify that r and s are within the group order + if r < 1 or s < 1 or r >= ORDER or s >= ORDER: + return False + if low_s and s >= secp256k1.GE.ORDER_HALF: + return False + z = int.from_bytes(msg, 'big') + + # Run verifier algorithm on r, s + w = pow(s, -1, ORDER) + R = secp256k1.GE.mul((z * w, secp256k1.G), (r * w, self.p)) + if R.infinity or (int(R.x) % ORDER) != r: + return False + return True + +def generate_privkey(): + """Generate a valid random 32-byte private key.""" + return random.randrange(1, ORDER).to_bytes(32, 'big') + +def rfc6979_nonce(key): + """Compute signing nonce using RFC6979.""" + v = bytes([1] * 32) + k = bytes([0] * 32) + k = hmac.new(k, v + b"\x00" + key, 'sha256').digest() + v = hmac.new(k, v, 'sha256').digest() + k = hmac.new(k, v + b"\x01" + key, 'sha256').digest() + v = hmac.new(k, v, 'sha256').digest() + return hmac.new(k, v, 'sha256').digest() + +class ECKey: + """A secp256k1 private key""" + + def __init__(self): + self.valid = False + + def set(self, secret, compressed): + """Construct a private key object with given 32-byte secret and compressed flag.""" + assert len(secret) == 32 + secret = int.from_bytes(secret, 'big') + self.valid = (secret > 0 and secret < ORDER) + if self.valid: + self.secret = secret + self.compressed = compressed + + def generate(self, compressed=True): + """Generate a random private key (compressed or uncompressed).""" + self.set(generate_privkey(), compressed) + + def get_bytes(self): + """Retrieve the 32-byte representation of this key.""" + assert 
self.valid + return self.secret.to_bytes(32, 'big') + + @property + def is_valid(self): + return self.valid + + @property + def is_compressed(self): + return self.compressed + + def get_pubkey(self): + """Compute an ECPubKey object for this secret key.""" + assert self.valid + ret = ECPubKey() + ret.p = self.secret * secp256k1.G + ret.compressed = self.compressed + return ret + + def sign_ecdsa(self, msg, low_s=True, rfc6979=False): + """Construct a DER-encoded ECDSA signature with this key. + + See https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm for the + ECDSA signer algorithm.""" + assert self.valid + z = int.from_bytes(msg, 'big') + # Note: no RFC6979 by default, but a simple random nonce (some tests rely on distinct transactions for the same operation) + if rfc6979: + k = int.from_bytes(rfc6979_nonce(self.secret.to_bytes(32, 'big') + msg), 'big') + else: + k = random.randrange(1, ORDER) + R = k * secp256k1.G + r = int(R.x) % ORDER + s = (pow(k, -1, ORDER) * (z + self.secret * r)) % ORDER + if low_s and s > secp256k1.GE.ORDER_HALF: + s = ORDER - s + # Represent in DER format. The byte representations of r and s have + # length rounded up (255 bits becomes 32 bytes and 256 bits becomes 33 + # bytes). + rb = r.to_bytes((r.bit_length() + 8) // 8, 'big') + sb = s.to_bytes((s.bit_length() + 8) // 8, 'big') + return b'\x30' + bytes([4 + len(rb) + len(sb), 2, len(rb)]) + rb + bytes([2, len(sb)]) + sb + +def compute_xonly_pubkey(key): + """Compute an x-only (32 byte) public key from a (32 byte) private key. + + This also returns whether the resulting public key was negated. + """ + + assert len(key) == 32 + x = int.from_bytes(key, 'big') + if x == 0 or x >= ORDER: + return (None, None) + P = x * secp256k1.G + return (P.to_bytes_xonly(), not P.y.is_even()) + +def tweak_add_privkey(key, tweak): + """Tweak a private key (after negating it if needed).""" + + assert len(key) == 32 + assert len(tweak) == 32 + + x = int.from_bytes(key, 'big') + if x == 0 or x >= ORDER: + return None + if not (x * secp256k1.G).y.is_even(): + x = ORDER - x + t = int.from_bytes(tweak, 'big') + if t >= ORDER: + return None + x = (x + t) % ORDER + if x == 0: + return None + return x.to_bytes(32, 'big') + +def tweak_add_pubkey(key, tweak): + """Tweak a public key and return whether the result had to be negated.""" + + assert len(key) == 32 + assert len(tweak) == 32 + + P = secp256k1.GE.from_bytes_xonly(key) + if P is None: + return None + t = int.from_bytes(tweak, 'big') + if t >= ORDER: + return None + Q = t * secp256k1.G + P + if Q.infinity: + return None + return (Q.to_bytes_xonly(), not Q.y.is_even()) + +def verify_schnorr(key, sig, msg): + """Verify a Schnorr signature (see BIP 340). + + - key is a 32-byte xonly pubkey (computed using compute_xonly_pubkey). 
+ - sig is a 64-byte Schnorr signature + - msg is a 32-byte message + """ + assert len(key) == 32 + assert len(msg) == 32 + assert len(sig) == 64 + + P = secp256k1.GE.from_bytes_xonly(key) + if P is None: + return False + r = int.from_bytes(sig[0:32], 'big') + if r >= secp256k1.FE.SIZE: + return False + s = int.from_bytes(sig[32:64], 'big') + if s >= ORDER: + return False + e = int.from_bytes(TaggedHash("BIP0340/challenge", sig[0:32] + key + msg), 'big') % ORDER + R = secp256k1.GE.mul((s, secp256k1.G), (-e, P)) + if R.infinity or not R.y.is_even(): + return False + if r != R.x: + return False + return True + +def sign_schnorr(key, msg, aux=None, flip_p=False, flip_r=False): + """Create a Schnorr signature (see BIP 340).""" + + if aux is None: + aux = bytes(32) + + assert len(key) == 32 + assert len(msg) == 32 + assert len(aux) == 32 + + sec = int.from_bytes(key, 'big') + if sec == 0 or sec >= ORDER: + return None + P = sec * secp256k1.G + if P.y.is_even() == flip_p: + sec = ORDER - sec + t = (sec ^ int.from_bytes(TaggedHash("BIP0340/aux", aux), 'big')).to_bytes(32, 'big') + kp = int.from_bytes(TaggedHash("BIP0340/nonce", t + P.to_bytes_xonly() + msg), 'big') % ORDER + assert kp != 0 + R = kp * secp256k1.G + k = kp if R.y.is_even() != flip_r else ORDER - kp + e = int.from_bytes(TaggedHash("BIP0340/challenge", R.to_bytes_xonly() + P.to_bytes_xonly() + msg), 'big') % ORDER + return R.to_bytes_xonly() + ((k + e * sec) % ORDER).to_bytes(32, 'big') + + +class TestFrameworkKey(unittest.TestCase): + def test_ecdsa_and_schnorr(self): + """Test the Python ECDSA and Schnorr implementations.""" + def random_bitflip(sig): + sig = list(sig) + sig[random.randrange(len(sig))] ^= (1 << (random.randrange(8))) + return bytes(sig) + + byte_arrays = [generate_privkey() for _ in range(3)] + [v.to_bytes(32, 'big') for v in [0, ORDER - 1, ORDER, 2**256 - 1]] + keys = {} + for privkey_bytes in byte_arrays: # build array of key/pubkey pairs + privkey = ECKey() + privkey.set(privkey_bytes, compressed=True) + if privkey.is_valid: + keys[privkey] = privkey.get_pubkey() + for msg in byte_arrays: # test every combination of message, signing key, verification key + for sign_privkey, _ in keys.items(): + sig_ecdsa = sign_privkey.sign_ecdsa(msg) + sig_schnorr = sign_schnorr(sign_privkey.get_bytes(), msg) + for verify_privkey, verify_pubkey in keys.items(): + verify_xonly_pubkey = verify_pubkey.get_bytes()[1:] + if verify_privkey == sign_privkey: + self.assertTrue(verify_pubkey.verify_ecdsa(sig_ecdsa, msg)) + self.assertTrue(verify_schnorr(verify_xonly_pubkey, sig_schnorr, msg)) + sig_ecdsa = random_bitflip(sig_ecdsa) # damaging signature should break things + sig_schnorr = random_bitflip(sig_schnorr) + self.assertFalse(verify_pubkey.verify_ecdsa(sig_ecdsa, msg)) + self.assertFalse(verify_schnorr(verify_xonly_pubkey, sig_schnorr, msg)) + + def test_schnorr_testvectors(self): + """Implement the BIP340 test vectors (read from bip340_test_vectors.csv).""" + num_tests = 0 + vectors_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'bip340_test_vectors.csv') + with open(vectors_file, newline='', encoding='utf8') as csvfile: + reader = csv.reader(csvfile) + next(reader) + for row in reader: + (i_str, seckey_hex, pubkey_hex, aux_rand_hex, msg_hex, sig_hex, result_str, comment) = row + i = int(i_str) + pubkey = bytes.fromhex(pubkey_hex) + msg = bytes.fromhex(msg_hex) + sig = bytes.fromhex(sig_hex) + result = result_str == 'TRUE' + if seckey_hex != '': + seckey = bytes.fromhex(seckey_hex) + pubkey_actual = 
compute_xonly_pubkey(seckey)[0] + self.assertEqual(pubkey.hex(), pubkey_actual.hex(), "BIP340 test vector %i (%s): pubkey mismatch" % (i, comment)) + aux_rand = bytes.fromhex(aux_rand_hex) + try: + sig_actual = sign_schnorr(seckey, msg, aux_rand) + self.assertEqual(sig.hex(), sig_actual.hex(), "BIP340 test vector %i (%s): sig mismatch" % (i, comment)) + except RuntimeError as e: + self.fail("BIP340 test vector %i (%s): signing raised exception %s" % (i, comment, e)) + result_actual = verify_schnorr(pubkey, sig, msg) + if result: + self.assertEqual(result, result_actual, "BIP340 test vector %i (%s): verification failed" % (i, comment)) + else: + self.assertEqual(result, result_actual, "BIP340 test vector %i (%s): verification succeeded unexpectedly" % (i, comment)) + num_tests += 1 + self.assertTrue(num_tests >= 15) # expect at least 15 test vectors diff --git a/warnet-demo/scenarios/test_framework/messages.py b/warnet-demo/scenarios/test_framework/messages.py new file mode 100644 index 0000000..8f3aea8 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/messages.py @@ -0,0 +1,1906 @@ +#!/usr/bin/env python3 +# Copyright (c) 2010 ArtForz -- public domain half-a-node +# Copyright (c) 2012 Jeff Garzik +# Copyright (c) 2010-2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""Bitcoin test framework primitive and message structures + +CBlock, CTransaction, CBlockHeader, CTxIn, CTxOut, etc....: + data structures that should map to corresponding structures in + bitcoin/primitives + +msg_block, msg_tx, msg_headers, etc.: + data structures that represent network messages + +ser_*, deser_*: functions that handle serialization/deserialization. + +Classes use __slots__ to ensure extraneous attributes aren't accidentally added +by tests, compromising their intended effect. +""" +from base64 import b32decode, b32encode +import copy +import hashlib +from io import BytesIO +import math +import random +import socket +import struct +import time +import unittest + +from test_framework.siphash import siphash256 +from test_framework.util import assert_equal + +MAX_LOCATOR_SZ = 101 +MAX_BLOCK_WEIGHT = 4000000 +MAX_BLOOM_FILTER_SIZE = 36000 +MAX_BLOOM_HASH_FUNCS = 50 + +COIN = 100000000 # 1 btc in satoshis +MAX_MONEY = 21000000 * COIN + +MAX_BIP125_RBF_SEQUENCE = 0xfffffffd # Sequence number that is rbf-opt-in (BIP 125) and csv-opt-out (BIP 68) +SEQUENCE_FINAL = 0xffffffff # Sequence number that disables nLockTime if set for every input of a tx + +MAX_PROTOCOL_MESSAGE_LENGTH = 4000000 # Maximum length of incoming protocol messages +MAX_HEADERS_RESULTS = 2000 # Number of headers sent in one getheaders result +MAX_INV_SIZE = 50000 # Maximum number of entries in an 'inv' protocol message + +NODE_NETWORK = (1 << 0) +NODE_BLOOM = (1 << 2) +NODE_WITNESS = (1 << 3) +NODE_COMPACT_FILTERS = (1 << 6) +NODE_NETWORK_LIMITED = (1 << 10) +NODE_P2P_V2 = (1 << 11) + +MSG_TX = 1 +MSG_BLOCK = 2 +MSG_FILTERED_BLOCK = 3 +MSG_CMPCT_BLOCK = 4 +MSG_WTX = 5 +MSG_WITNESS_FLAG = 1 << 30 +MSG_TYPE_MASK = 0xffffffff >> 2 +MSG_WITNESS_TX = MSG_TX | MSG_WITNESS_FLAG + +FILTER_TYPE_BASIC = 0 + +WITNESS_SCALE_FACTOR = 4 + +DEFAULT_ANCESTOR_LIMIT = 25 # default max number of in-mempool ancestors +DEFAULT_DESCENDANT_LIMIT = 25 # default max number of in-mempool descendants + +# Default setting for -datacarriersize. 80 bytes of data, +1 for OP_RETURN, +2 for the pushdata opcodes. 
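+# That is, 80 + 1 + 2 = 83 bytes in total.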
+MAX_OP_RETURN_RELAY = 83
+
+DEFAULT_MEMPOOL_EXPIRY_HOURS = 336  # hours
+
+def sha256(s):
+    return hashlib.sha256(s).digest()
+
+
+def sha3(s):
+    return hashlib.sha3_256(s).digest()
+
+
+def hash256(s):
+    return sha256(sha256(s))
+
+
+def ser_compact_size(l):
+    r = b""
+    if l < 253:
+        r = struct.pack("B", l)
+    elif l < 0x10000:
+        r = struct.pack("<BH", 253, l)
+    elif l < 0x100000000:
+        r = struct.pack("<BI", 254, l)
+    else:
+        r = struct.pack("<BQ", 255, l)
+    return r
+
+
+def deser_compact_size(f):
+    nit = struct.unpack("<B", f.read(1))[0]
+    if nit == 253:
+        nit = struct.unpack("<H", f.read(2))[0]
+    elif nit == 254:
+        nit = struct.unpack("<I", f.read(4))[0]
+    elif nit == 255:
+        nit = struct.unpack("<Q", f.read(8))[0]
+    return nit
+
+
+def deser_string(f):
+    nit = deser_compact_size(f)
+    return f.read(nit)
+
+
+def ser_string(s):
+    return ser_compact_size(len(s)) + s
+
+
+def deser_uint256(f):
+    return int.from_bytes(f.read(32), 'little')
+
+
+def ser_uint256(u):
+    return u.to_bytes(32, 'little')
+
+
+def uint256_from_str(s):
+    return int.from_bytes(s[:32], 'little')
+
+
+def uint256_from_compact(c):
+    nbytes = (c >> 24) & 0xFF
+    v = (c & 0xFFFFFF) << (8 * (nbytes - 3))
+    return v
+
+
+# deser_function_name: Allow for an alternate deserialization function on the
+# entries in the vector.
+def deser_vector(f, c, deser_function_name=None):
+    nit = deser_compact_size(f)
+    r = []
+    for _ in range(nit):
+        t = c()
+        if deser_function_name:
+            getattr(t, deser_function_name)(f)
+        else:
+            t.deserialize(f)
+        r.append(t)
+    return r
+
+
+# ser_function_name: Allow for an alternate serialization function on the
+# entries in the vector (we use this for serializing the vector of transactions
+# for a witness block).
+def ser_vector(l, ser_function_name=None):
+    r = ser_compact_size(len(l))
+    for i in l:
+        if ser_function_name:
+            r += getattr(i, ser_function_name)()
+        else:
+            r += i.serialize()
+    return r
+
+
+def deser_uint256_vector(f):
+    nit = deser_compact_size(f)
+    r = []
+    for _ in range(nit):
+        t = deser_uint256(f)
+        r.append(t)
+    return r
+
+
+def ser_uint256_vector(l):
+    r = ser_compact_size(len(l))
+    for i in l:
+        r += ser_uint256(i)
+    return r
+
+
+def deser_string_vector(f):
+    nit = deser_compact_size(f)
+    r = []
+    for _ in range(nit):
+        t = deser_string(f)
+        r.append(t)
+    return r
+
+
+def ser_string_vector(l):
+    r = ser_compact_size(len(l))
+    for sv in l:
+        r += ser_string(sv)
+    return r
+
+
+def from_hex(obj, hex_string):
+    """Deserialize from a hex string representation (e.g. from RPC)
+
+    Note that there is no complementary helper like e.g. `to_hex` for the
+    inverse operation.
To serialize a message object to a hex string, simply + use obj.serialize().hex()""" + obj.deserialize(BytesIO(bytes.fromhex(hex_string))) + return obj + + +def tx_from_hex(hex_string): + """Deserialize from hex string to a transaction object""" + return from_hex(CTransaction(), hex_string) + + +# like from_hex, but without the hex part +def from_binary(cls, stream): + """deserialize a binary stream (or bytes object) into an object""" + # handle bytes object by turning it into a stream + was_bytes = isinstance(stream, bytes) + if was_bytes: + stream = BytesIO(stream) + obj = cls() + obj.deserialize(stream) + if was_bytes: + assert len(stream.read()) == 0 + return obj + + +# Objects that map to bitcoind objects, which can be serialized/deserialized + + +class CAddress: + __slots__ = ("net", "ip", "nServices", "port", "time") + + # see https://github.com/bitcoin/bips/blob/master/bip-0155.mediawiki + NET_IPV4 = 1 + NET_IPV6 = 2 + NET_TORV3 = 4 + NET_I2P = 5 + NET_CJDNS = 6 + + ADDRV2_NET_NAME = { + NET_IPV4: "IPv4", + NET_IPV6: "IPv6", + NET_TORV3: "TorV3", + NET_I2P: "I2P", + NET_CJDNS: "CJDNS" + } + + ADDRV2_ADDRESS_LENGTH = { + NET_IPV4: 4, + NET_IPV6: 16, + NET_TORV3: 32, + NET_I2P: 32, + NET_CJDNS: 16 + } + + I2P_PAD = "====" + + def __init__(self): + self.time = 0 + self.nServices = 1 + self.net = self.NET_IPV4 + self.ip = "0.0.0.0" + self.port = 0 + + def __eq__(self, other): + return self.net == other.net and self.ip == other.ip and self.nServices == other.nServices and self.port == other.port and self.time == other.time + + def deserialize(self, f, *, with_time=True): + """Deserialize from addrv1 format (pre-BIP155)""" + if with_time: + # VERSION messages serialize CAddress objects without time + self.time = struct.unpack("H", f.read(2))[0] + + def serialize(self, *, with_time=True): + """Serialize in addrv1 format (pre-BIP155)""" + assert self.net == self.NET_IPV4 + r = b"" + if with_time: + # VERSION messages serialize CAddress objects without time + r += struct.pack("H", self.port) + return r + + def deserialize_v2(self, f): + """Deserialize from addrv2 format (BIP155)""" + self.time = struct.unpack("H", f.read(2))[0] + + def serialize_v2(self): + """Serialize in addrv2 format (BIP155)""" + assert self.net in self.ADDRV2_NET_NAME + r = b"" + r += struct.pack("H", self.port) + return r + + def __repr__(self): + return ("CAddress(nServices=%i net=%s addr=%s port=%i)" + % (self.nServices, self.ADDRV2_NET_NAME[self.net], self.ip, self.port)) + + +class CInv: + __slots__ = ("hash", "type") + + typemap = { + 0: "Error", + MSG_TX: "TX", + MSG_BLOCK: "Block", + MSG_TX | MSG_WITNESS_FLAG: "WitnessTx", + MSG_BLOCK | MSG_WITNESS_FLAG: "WitnessBlock", + MSG_FILTERED_BLOCK: "filtered Block", + MSG_CMPCT_BLOCK: "CompactBlock", + MSG_WTX: "WTX", + } + + def __init__(self, t=0, h=0): + self.type = t + self.hash = h + + def deserialize(self, f): + self.type = struct.unpack(" 21000000 * COIN: + return False + return True + + # Calculate the transaction weight using witness and non-witness + # serialization size (does NOT use sigops). 
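+    # For example, a transaction that serializes to 200 bytes without witness
+    # data and 300 bytes with it has weight 3 * 200 + 300 = 900, and a
+    # virtual size of ceil(900 / 4) = 225 vbytes (see get_vsize below).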
+ def get_weight(self): + with_witness_size = len(self.serialize_with_witness()) + without_witness_size = len(self.serialize_without_witness()) + return (WITNESS_SCALE_FACTOR - 1) * without_witness_size + with_witness_size + + def get_vsize(self): + return math.ceil(self.get_weight() / WITNESS_SCALE_FACTOR) + + def __repr__(self): + return "CTransaction(nVersion=%i vin=%s vout=%s wit=%s nLockTime=%i)" \ + % (self.nVersion, repr(self.vin), repr(self.vout), repr(self.wit), self.nLockTime) + + +class CBlockHeader: + __slots__ = ("hash", "hashMerkleRoot", "hashPrevBlock", "nBits", "nNonce", + "nTime", "nVersion", "sha256") + + def __init__(self, header=None): + if header is None: + self.set_null() + else: + self.nVersion = header.nVersion + self.hashPrevBlock = header.hashPrevBlock + self.hashMerkleRoot = header.hashMerkleRoot + self.nTime = header.nTime + self.nBits = header.nBits + self.nNonce = header.nNonce + self.sha256 = header.sha256 + self.hash = header.hash + self.calc_sha256() + + def set_null(self): + self.nVersion = 4 + self.hashPrevBlock = 0 + self.hashMerkleRoot = 0 + self.nTime = 0 + self.nBits = 0 + self.nNonce = 0 + self.sha256 = None + self.hash = None + + def deserialize(self, f): + self.nVersion = struct.unpack(" 1: + newhashes = [] + for i in range(0, len(hashes), 2): + i2 = min(i+1, len(hashes)-1) + newhashes.append(hash256(hashes[i] + hashes[i2])) + hashes = newhashes + return uint256_from_str(hashes[0]) + + def calc_merkle_root(self): + hashes = [] + for tx in self.vtx: + tx.calc_sha256() + hashes.append(ser_uint256(tx.sha256)) + return self.get_merkle_root(hashes) + + def calc_witness_merkle_root(self): + # For witness root purposes, the hash of the + # coinbase, with witness, is defined to be 0...0 + hashes = [ser_uint256(0)] + + for tx in self.vtx[1:]: + # Calculate the hashes with witness data + hashes.append(ser_uint256(tx.calc_sha256(True))) + + return self.get_merkle_root(hashes) + + def is_valid(self): + self.calc_sha256() + target = uint256_from_compact(self.nBits) + if self.sha256 > target: + return False + for tx in self.vtx: + if not tx.is_valid(): + return False + if self.calc_merkle_root() != self.hashMerkleRoot: + return False + return True + + def solve(self): + self.rehash() + target = uint256_from_compact(self.nBits) + while self.sha256 > target: + self.nNonce += 1 + self.rehash() + + # Calculate the block weight using witness and non-witness + # serialization size (does NOT use sigops). 
+ def get_weight(self): + with_witness_size = len(self.serialize(with_witness=True)) + without_witness_size = len(self.serialize(with_witness=False)) + return (WITNESS_SCALE_FACTOR - 1) * without_witness_size + with_witness_size + + def __repr__(self): + return "CBlock(nVersion=%i hashPrevBlock=%064x hashMerkleRoot=%064x nTime=%s nBits=%08x nNonce=%08x vtx=%s)" \ + % (self.nVersion, self.hashPrevBlock, self.hashMerkleRoot, + time.ctime(self.nTime), self.nBits, self.nNonce, repr(self.vtx)) + + +class PrefilledTransaction: + __slots__ = ("index", "tx") + + def __init__(self, index=0, tx = None): + self.index = index + self.tx = tx + + def deserialize(self, f): + self.index = deser_compact_size(f) + self.tx = CTransaction() + self.tx.deserialize(f) + + def serialize(self, with_witness=True): + r = b"" + r += ser_compact_size(self.index) + if with_witness: + r += self.tx.serialize_with_witness() + else: + r += self.tx.serialize_without_witness() + return r + + def serialize_without_witness(self): + return self.serialize(with_witness=False) + + def serialize_with_witness(self): + return self.serialize(with_witness=True) + + def __repr__(self): + return "PrefilledTransaction(index=%d, tx=%s)" % (self.index, repr(self.tx)) + + +# This is what we send on the wire, in a cmpctblock message. +class P2PHeaderAndShortIDs: + __slots__ = ("header", "nonce", "prefilled_txn", "prefilled_txn_length", + "shortids", "shortids_length") + + def __init__(self): + self.header = CBlockHeader() + self.nonce = 0 + self.shortids_length = 0 + self.shortids = [] + self.prefilled_txn_length = 0 + self.prefilled_txn = [] + + def deserialize(self, f): + self.header.deserialize(f) + self.nonce = struct.unpack(" +class msg_headers: + __slots__ = ("headers",) + msgtype = b"headers" + + def __init__(self, headers=None): + self.headers = headers if headers is not None else [] + + def deserialize(self, f): + # comment in bitcoind indicates these should be deserialized as blocks + blocks = deser_vector(f, CBlock) + for x in blocks: + self.headers.append(CBlockHeader(x)) + + def serialize(self): + blocks = [CBlock(x) for x in self.headers] + return ser_vector(blocks) + + def __repr__(self): + return "msg_headers(headers=%s)" % repr(self.headers) + + +class msg_merkleblock: + __slots__ = ("merkleblock",) + msgtype = b"merkleblock" + + def __init__(self, merkleblock=None): + if merkleblock is None: + self.merkleblock = CMerkleBlock() + else: + self.merkleblock = merkleblock + + def deserialize(self, f): + self.merkleblock.deserialize(f) + + def serialize(self): + return self.merkleblock.serialize() + + def __repr__(self): + return "msg_merkleblock(merkleblock=%s)" % (repr(self.merkleblock)) + + +class msg_filterload: + __slots__ = ("data", "nHashFuncs", "nTweak", "nFlags") + msgtype = b"filterload" + + def __init__(self, data=b'00', nHashFuncs=0, nTweak=0, nFlags=0): + self.data = data + self.nHashFuncs = nHashFuncs + self.nTweak = nTweak + self.nFlags = nFlags + + def deserialize(self, f): + self.data = deser_string(f) + self.nHashFuncs = struct.unpack("> (32 - bits)) + +def chacha20_doubleround(s): + """Apply a ChaCha20 double round to 16-element state array s. 
+ + See https://cr.yp.to/chacha/chacha-20080128.pdf and https://tools.ietf.org/html/rfc8439 + """ + QUARTER_ROUNDS = [(0, 4, 8, 12), + (1, 5, 9, 13), + (2, 6, 10, 14), + (3, 7, 11, 15), + (0, 5, 10, 15), + (1, 6, 11, 12), + (2, 7, 8, 13), + (3, 4, 9, 14)] + + for a, b, c, d in QUARTER_ROUNDS: + s[a] = (s[a] + s[b]) & 0xffffffff + s[d] = rot32(s[d] ^ s[a], 16) + s[c] = (s[c] + s[d]) & 0xffffffff + s[b] = rot32(s[b] ^ s[c], 12) + s[a] = (s[a] + s[b]) & 0xffffffff + s[d] = rot32(s[d] ^ s[a], 8) + s[c] = (s[c] + s[d]) & 0xffffffff + s[b] = rot32(s[b] ^ s[c], 7) + +def chacha20_32_to_384(key32): + """Specialized ChaCha20 implementation with 32-byte key, 0 IV, 384-byte output.""" + # See RFC 8439 section 2.3 for chacha20 parameters + CONSTANTS = [0x61707865, 0x3320646e, 0x79622d32, 0x6b206574] + + key_bytes = [0]*8 + for i in range(8): + key_bytes[i] = int.from_bytes(key32[(4 * i):(4 * (i+1))], 'little') + + INITIALIZATION_VECTOR = [0] * 4 + init = CONSTANTS + key_bytes + INITIALIZATION_VECTOR + out = bytearray() + for counter in range(6): + init[12] = counter + s = init.copy() + for _ in range(10): + chacha20_doubleround(s) + for i in range(16): + out.extend(((s[i] + init[i]) & 0xffffffff).to_bytes(4, 'little')) + return bytes(out) + +def data_to_num3072(data): + """Hash a 32-byte array data to a 3072-bit number using 6 Chacha20 operations.""" + bytes384 = chacha20_32_to_384(data) + return int.from_bytes(bytes384, 'little') + +class MuHash3072: + """Class representing the MuHash3072 computation of a set. + + See https://cseweb.ucsd.edu/~mihir/papers/inchash.pdf and https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-May/014337.html + """ + + MODULUS = 2**3072 - 1103717 + + def __init__(self): + """Initialize for an empty set.""" + self.numerator = 1 + self.denominator = 1 + + def insert(self, data): + """Insert a byte array data in the set.""" + data_hash = hashlib.sha256(data).digest() + self.numerator = (self.numerator * data_to_num3072(data_hash)) % self.MODULUS + + def remove(self, data): + """Remove a byte array from the set.""" + data_hash = hashlib.sha256(data).digest() + self.denominator = (self.denominator * data_to_num3072(data_hash)) % self.MODULUS + + def digest(self): + """Extract the final hash. Does not modify this object.""" + val = (self.numerator * pow(self.denominator, -1, self.MODULUS)) % self.MODULUS + bytes384 = val.to_bytes(384, 'little') + return hashlib.sha256(bytes384).digest() + +class TestFrameworkMuhash(unittest.TestCase): + def test_muhash(self): + muhash = MuHash3072() + muhash.insert(b'\x00' * 32) + muhash.insert((b'\x01' + b'\x00' * 31)) + muhash.remove((b'\x02' + b'\x00' * 31)) + finalized = muhash.digest() + # This mirrors the result in the C++ MuHash3072 unit test + self.assertEqual(finalized[::-1].hex(), "10d312b100cbd32ada024a6646e40d3482fcff103668d2625f10002a607d5863") + + def test_chacha20(self): + def chacha_check(key, result): + self.assertEqual(chacha20_32_to_384(key)[:64].hex(), result) + + # Test vectors from https://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-04#section-7 + # Since the nonce is hardcoded to 0 in our function we only use those vectors. 
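+        # Each check compares the first 64 bytes (one ChaCha20 block) of the
+        # 384-byte keystream produced by chacha20_32_to_384 against the
+        # reference keystream for that key.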
+ chacha_check([0]*32, "76b8e0ada0f13d90405d6ae55386bd28bdd219b8a08ded1aa836efcc8b770dc7da41597c5157488d7724e03fb8d84a376a43b8f41518a11cc387b669b2ee6586") + chacha_check([0]*31 + [1], "4540f05a9f1fb296d7736e7b208e3c96eb4fe1834688d2604f450952ed432d41bbe2a0b6ea7566d2a5d1e7e20d42af2c53d792b1c43fea817e9ad275ae546963") diff --git a/warnet-demo/scenarios/test_framework/netutil.py b/warnet-demo/scenarios/test_framework/netutil.py new file mode 100644 index 0000000..838f40f --- /dev/null +++ b/warnet-demo/scenarios/test_framework/netutil.py @@ -0,0 +1,160 @@ +#!/usr/bin/env python3 +# Copyright (c) 2014-2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""Linux network utilities. + +Roughly based on http://voorloopnul.com/blog/a-python-netstat-in-less-than-100-lines-of-code/ by Ricardo Pascal +""" + +import sys +import socket +import struct +import array +import os + +# STATE_ESTABLISHED = '01' +# STATE_SYN_SENT = '02' +# STATE_SYN_RECV = '03' +# STATE_FIN_WAIT1 = '04' +# STATE_FIN_WAIT2 = '05' +# STATE_TIME_WAIT = '06' +# STATE_CLOSE = '07' +# STATE_CLOSE_WAIT = '08' +# STATE_LAST_ACK = '09' +STATE_LISTEN = '0A' +# STATE_CLOSING = '0B' + +# Address manager size constants as defined in addrman_impl.h +ADDRMAN_NEW_BUCKET_COUNT = 1 << 10 +ADDRMAN_TRIED_BUCKET_COUNT = 1 << 8 +ADDRMAN_BUCKET_SIZE = 1 << 6 + +def get_socket_inodes(pid): + ''' + Get list of socket inodes for process pid. + ''' + base = '/proc/%i/fd' % pid + inodes = [] + for item in os.listdir(base): + target = os.readlink(os.path.join(base, item)) + if target.startswith('socket:'): + inodes.append(int(target[8:-1])) + return inodes + +def _remove_empty(array): + return [x for x in array if x !=''] + +def _convert_ip_port(array): + host,port = array.split(':') + # convert host from mangled-per-four-bytes form as used by kernel + host = bytes.fromhex(host) + host_out = '' + for x in range(0, len(host) // 4): + (val,) = struct.unpack('=I', host[x*4:(x+1)*4]) + host_out += '%08x' % val + + return host_out,int(port,16) + +def netstat(typ='tcp'): + ''' + Function to return a list with status of tcp connections at linux systems + To get pid of all network process running on system, you must run this script + as superuser + ''' + with open('/proc/net/'+typ,'r',encoding='utf8') as f: + content = f.readlines() + content.pop(0) + result = [] + for line in content: + line_array = _remove_empty(line.split(' ')) # Split lines and remove empty spaces. + tcp_id = line_array[0] + l_addr = _convert_ip_port(line_array[1]) + r_addr = _convert_ip_port(line_array[2]) + state = line_array[3] + inode = int(line_array[9]) # Need the inode to match with process pid. + nline = [tcp_id, l_addr, r_addr, state, inode] + result.append(nline) + return result + +def get_bind_addrs(pid): + ''' + Get bind addresses as (host,port) tuples for process pid. 
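+    Only TCP sockets in the LISTEN state are reported, and the information
+    comes from /proc, so this only works on Linux.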
+ ''' + inodes = get_socket_inodes(pid) + bind_addrs = [] + for conn in netstat('tcp') + netstat('tcp6'): + if conn[3] == STATE_LISTEN and conn[4] in inodes: + bind_addrs.append(conn[1]) + return bind_addrs + +# from: https://code.activestate.com/recipes/439093/ +def all_interfaces(): + ''' + Return all interfaces that are up + ''' + import fcntl # Linux only, so only import when required + + is_64bits = sys.maxsize > 2**32 + struct_size = 40 if is_64bits else 32 + s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) + max_possible = 8 # initial value + while True: + bytes = max_possible * struct_size + names = array.array('B', b'\0' * bytes) + outbytes = struct.unpack('iL', fcntl.ioctl( + s.fileno(), + 0x8912, # SIOCGIFCONF + struct.pack('iL', bytes, names.buffer_info()[0]) + ))[0] + if outbytes == bytes: + max_possible *= 2 + else: + break + namestr = names.tobytes() + return [(namestr[i:i+16].split(b'\0', 1)[0], + socket.inet_ntoa(namestr[i+20:i+24])) + for i in range(0, outbytes, struct_size)] + +def addr_to_hex(addr): + ''' + Convert string IPv4 or IPv6 address to binary address as returned by + get_bind_addrs. + Very naive implementation that certainly doesn't work for all IPv6 variants. + ''' + if '.' in addr: # IPv4 + addr = [int(x) for x in addr.split('.')] + elif ':' in addr: # IPv6 + sub = [[], []] # prefix, suffix + x = 0 + addr = addr.split(':') + for i,comp in enumerate(addr): + if comp == '': + if i == 0 or i == (len(addr)-1): # skip empty component at beginning or end + continue + x += 1 # :: skips to suffix + assert x < 2 + else: # two bytes per component + val = int(comp, 16) + sub[x].append(val >> 8) + sub[x].append(val & 0xff) + nullbytes = 16 - len(sub[0]) - len(sub[1]) + assert (x == 0 and nullbytes == 0) or (x == 1 and nullbytes > 0) + addr = sub[0] + ([0] * nullbytes) + sub[1] + else: + raise ValueError('Could not parse address %s' % addr) + return bytearray(addr).hex() + +def test_ipv6_local(): + ''' + Check for (local) IPv6 support. + ''' + # By using SOCK_DGRAM this will not actually make a connection, but it will + # fail if there is no route to IPv6 localhost. + have_ipv6 = True + try: + s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) + s.connect(('::1', 1)) + except socket.error: + have_ipv6 = False + return have_ipv6 diff --git a/warnet-demo/scenarios/test_framework/p2p.py b/warnet-demo/scenarios/test_framework/p2p.py new file mode 100644 index 0000000..be4ed62 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/p2p.py @@ -0,0 +1,807 @@ +#!/usr/bin/env python3 +# Copyright (c) 2010 ArtForz -- public domain half-a-node +# Copyright (c) 2012 Jeff Garzik +# Copyright (c) 2010-2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""Test objects for interacting with a bitcoind node over the p2p protocol. + +The P2PInterface objects interact with the bitcoind nodes under test using the +node's p2p interface. They can be used to send messages to the node, and +callbacks can be registered that execute when messages are received from the +node. Messages are sent to/received from the node on an asyncio event loop. +State held inside the objects must be guarded by the p2p_lock to avoid data +races between the main testing thread and the event loop. 
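+(The lock in question, p2p_lock, is defined near the bottom of this module.)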
+ +P2PConnection: A low-level connection object to a node's P2P interface +P2PInterface: A high-level interface object for communicating to a node over P2P +P2PDataStore: A p2p interface class that keeps a store of transactions and blocks + and can respond correctly to getdata and getheaders messages +P2PTxInvStore: A p2p interface class that inherits from P2PDataStore, and keeps + a count of how many times each txid has been announced.""" + +import asyncio +from collections import defaultdict +from io import BytesIO +import logging +import struct +import sys +import threading + +from test_framework.messages import ( + CBlockHeader, + MAX_HEADERS_RESULTS, + msg_addr, + msg_addrv2, + msg_block, + MSG_BLOCK, + msg_blocktxn, + msg_cfcheckpt, + msg_cfheaders, + msg_cfilter, + msg_cmpctblock, + msg_feefilter, + msg_filteradd, + msg_filterclear, + msg_filterload, + msg_getaddr, + msg_getblocks, + msg_getblocktxn, + msg_getcfcheckpt, + msg_getcfheaders, + msg_getcfilters, + msg_getdata, + msg_getheaders, + msg_headers, + msg_inv, + msg_mempool, + msg_merkleblock, + msg_notfound, + msg_ping, + msg_pong, + msg_sendaddrv2, + msg_sendcmpct, + msg_sendheaders, + msg_sendtxrcncl, + msg_tx, + MSG_TX, + MSG_TYPE_MASK, + msg_verack, + msg_version, + MSG_WTX, + msg_wtxidrelay, + NODE_NETWORK, + NODE_WITNESS, + sha256, +) +from test_framework.util import ( + MAX_NODES, + p2p_port, + wait_until_helper_internal, +) + +logger = logging.getLogger("TestFramework.p2p") + +# The minimum P2P version that this test framework supports +MIN_P2P_VERSION_SUPPORTED = 60001 +# The P2P version that this test framework implements and sends in its `version` message +# Version 70016 supports wtxid relay +P2P_VERSION = 70016 +# The services that this test framework offers in its `version` message +P2P_SERVICES = NODE_NETWORK | NODE_WITNESS +# The P2P user agent string that this test framework sends in its `version` message +P2P_SUBVERSION = "/python-p2p-tester:0.0.3/" +# Value for relay that this test framework sends in its `version` message +P2P_VERSION_RELAY = 1 +# Delay after receiving a tx inv before requesting transactions from non-preferred peers, in seconds +NONPREF_PEER_TX_DELAY = 2 +# Delay for requesting transactions via txids if we have wtxid-relaying peers, in seconds +TXID_RELAY_DELAY = 2 +# Delay for requesting transactions if the peer has MAX_PEER_TX_REQUEST_IN_FLIGHT or more requests +OVERLOADED_PEER_TX_DELAY = 2 +# How long to wait before downloading a transaction from an additional peer +GETDATA_TX_INTERVAL = 60 + +MESSAGEMAP = { + b"addr": msg_addr, + b"addrv2": msg_addrv2, + b"block": msg_block, + b"blocktxn": msg_blocktxn, + b"cfcheckpt": msg_cfcheckpt, + b"cfheaders": msg_cfheaders, + b"cfilter": msg_cfilter, + b"cmpctblock": msg_cmpctblock, + b"feefilter": msg_feefilter, + b"filteradd": msg_filteradd, + b"filterclear": msg_filterclear, + b"filterload": msg_filterload, + b"getaddr": msg_getaddr, + b"getblocks": msg_getblocks, + b"getblocktxn": msg_getblocktxn, + b"getcfcheckpt": msg_getcfcheckpt, + b"getcfheaders": msg_getcfheaders, + b"getcfilters": msg_getcfilters, + b"getdata": msg_getdata, + b"getheaders": msg_getheaders, + b"headers": msg_headers, + b"inv": msg_inv, + b"mempool": msg_mempool, + b"merkleblock": msg_merkleblock, + b"notfound": msg_notfound, + b"ping": msg_ping, + b"pong": msg_pong, + b"sendaddrv2": msg_sendaddrv2, + b"sendcmpct": msg_sendcmpct, + b"sendheaders": msg_sendheaders, + b"sendtxrcncl": msg_sendtxrcncl, + b"tx": msg_tx, + b"verack": msg_verack, + b"version": msg_version, + 
b"wtxidrelay": msg_wtxidrelay, +} + +MAGIC_BYTES = { + "mainnet": b"\xf9\xbe\xb4\xd9", # mainnet + "testnet3": b"\x0b\x11\x09\x07", # testnet3 + "regtest": b"\xfa\xbf\xb5\xda", # regtest + "signet": b"\x0a\x03\xcf\x40", # signet +} + + +class P2PConnection(asyncio.Protocol): + """A low-level connection object to a node's P2P interface. + + This class is responsible for: + + - opening and closing the TCP connection to the node + - reading bytes from and writing bytes to the socket + - deserializing and serializing the P2P message header + - logging messages as they are sent and received + + This class contains no logic for handing the P2P message payloads. It must be + sub-classed and the on_message() callback overridden.""" + + def __init__(self): + # The underlying transport of the connection. + # Should only call methods on this from the NetworkThread, c.f. call_soon_threadsafe + self._transport = None + + @property + def is_connected(self): + return self._transport is not None + + def peer_connect_helper(self, dstaddr, dstport, net, timeout_factor): + assert not self.is_connected + self.timeout_factor = timeout_factor + self.dstaddr = dstaddr + self.dstport = dstport + # The initial message to send after the connection was made: + self.on_connection_send_msg = None + self.recvbuf = b"" + self.magic_bytes = MAGIC_BYTES[net] + + def peer_connect(self, dstaddr, dstport, *, net, timeout_factor): + self.peer_connect_helper(dstaddr, dstport, net, timeout_factor) + + loop = NetworkThread.network_event_loop + logger.debug('Connecting to Bitcoin Node: %s:%d' % (self.dstaddr, self.dstport)) + coroutine = loop.create_connection(lambda: self, host=self.dstaddr, port=self.dstport) + return lambda: loop.call_soon_threadsafe(loop.create_task, coroutine) + + def peer_accept_connection(self, connect_id, connect_cb=lambda: None, *, net, timeout_factor): + self.peer_connect_helper('0', 0, net, timeout_factor) + + logger.debug('Listening for Bitcoin Node with id: {}'.format(connect_id)) + return lambda: NetworkThread.listen(self, connect_cb, idx=connect_id) + + def peer_disconnect(self): + # Connection could have already been closed by other end. + NetworkThread.network_event_loop.call_soon_threadsafe(lambda: self._transport and self._transport.abort()) + + # Connection and disconnection methods + + def connection_made(self, transport): + """asyncio callback when a connection is opened.""" + assert not self._transport + logger.debug("Connected & Listening: %s:%d" % (self.dstaddr, self.dstport)) + self._transport = transport + if self.on_connection_send_msg: + self.send_message(self.on_connection_send_msg) + self.on_connection_send_msg = None # Never used again + self.on_open() + + def connection_lost(self, exc): + """asyncio callback when a connection is closed.""" + if exc: + logger.warning("Connection lost to {}:{} due to {}".format(self.dstaddr, self.dstport, exc)) + else: + logger.debug("Closed connection to: %s:%d" % (self.dstaddr, self.dstport)) + self._transport = None + self.recvbuf = b"" + self.on_close() + + # Socket read methods + + def data_received(self, t): + """asyncio callback when data is read from the socket.""" + if len(t) > 0: + self.recvbuf += t + self._on_data() + + def _on_data(self): + """Try to read P2P messages from the recv buffer. + + This method reads data from the buffer in a loop. 
It deserializes, + parses and verifies the P2P header, then passes the P2P payload to + the on_message callback for processing.""" + try: + while True: + if len(self.recvbuf) < 4: + return + if self.recvbuf[:4] != self.magic_bytes: + raise ValueError("magic bytes mismatch: {} != {}".format(repr(self.magic_bytes), repr(self.recvbuf))) + if len(self.recvbuf) < 4 + 12 + 4 + 4: + return + msgtype = self.recvbuf[4:4+12].split(b"\x00", 1)[0] + msglen = struct.unpack(" 500: + log_message += "... (msg truncated)" + logger.debug(log_message) + + +class P2PInterface(P2PConnection): + """A high-level P2P interface class for communicating with a Bitcoin node. + + This class provides high-level callbacks for processing P2P message + payloads, as well as convenience methods for interacting with the + node over P2P. + + Individual testcases should subclass this and override the on_* methods + if they want to alter message handling behaviour.""" + def __init__(self, support_addrv2=False, wtxidrelay=True): + super().__init__() + + # Track number of messages of each type received. + # Should be read-only in a test. + self.message_count = defaultdict(int) + + # Track the most recent message of each type. + # To wait for a message to be received, pop that message from + # this and use self.wait_until. + self.last_message = {} + + # A count of the number of ping messages we've sent to the node + self.ping_counter = 1 + + # The network services received from the peer + self.nServices = 0 + + self.support_addrv2 = support_addrv2 + + # If the peer supports wtxid-relay + self.wtxidrelay = wtxidrelay + + def peer_connect_send_version(self, services): + # Send a version msg + vt = msg_version() + vt.nVersion = P2P_VERSION + vt.strSubVer = P2P_SUBVERSION + vt.relay = P2P_VERSION_RELAY + vt.nServices = services + vt.addrTo.ip = self.dstaddr + vt.addrTo.port = self.dstport + vt.addrFrom.ip = "0.0.0.0" + vt.addrFrom.port = 0 + self.on_connection_send_msg = vt # Will be sent in connection_made callback + + def peer_connect(self, *args, services=P2P_SERVICES, send_version=True, **kwargs): + create_conn = super().peer_connect(*args, **kwargs) + + if send_version: + self.peer_connect_send_version(services) + + return create_conn + + def peer_accept_connection(self, *args, services=P2P_SERVICES, **kwargs): + create_conn = super().peer_accept_connection(*args, **kwargs) + self.peer_connect_send_version(services) + + return create_conn + + # Message receiving methods + + def on_message(self, message): + """Receive message and dispatch message to appropriate callback. + + We keep a count of how many of each message type has been received + and the most recent message of each type.""" + with p2p_lock: + try: + msgtype = message.msgtype.decode('ascii') + self.message_count[msgtype] += 1 + self.last_message[msgtype] = message + getattr(self, 'on_' + msgtype)(message) + except Exception: + print("ERROR delivering %s (%s)" % (repr(message), sys.exc_info()[0])) + raise + + # Callback methods. Can be overridden by subclasses in individual test + # cases to provide custom message handling behaviour. 
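+    #
+    # For example, a test that wants to count ping messages could override
+    # the handler in a subclass (hypothetical sketch, not part of the
+    # framework):
+    #
+    #     class PingCounter(P2PInterface):
+    #         def on_ping(self, message):
+    #             super().on_ping(message)  # still reply with a pong
+    #             self.ping_count = getattr(self, 'ping_count', 0) + 1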
+ + def on_open(self): + pass + + def on_close(self): + pass + + def on_addr(self, message): pass + def on_addrv2(self, message): pass + def on_block(self, message): pass + def on_blocktxn(self, message): pass + def on_cfcheckpt(self, message): pass + def on_cfheaders(self, message): pass + def on_cfilter(self, message): pass + def on_cmpctblock(self, message): pass + def on_feefilter(self, message): pass + def on_filteradd(self, message): pass + def on_filterclear(self, message): pass + def on_filterload(self, message): pass + def on_getaddr(self, message): pass + def on_getblocks(self, message): pass + def on_getblocktxn(self, message): pass + def on_getdata(self, message): pass + def on_getheaders(self, message): pass + def on_headers(self, message): pass + def on_mempool(self, message): pass + def on_merkleblock(self, message): pass + def on_notfound(self, message): pass + def on_pong(self, message): pass + def on_sendaddrv2(self, message): pass + def on_sendcmpct(self, message): pass + def on_sendheaders(self, message): pass + def on_sendtxrcncl(self, message): pass + def on_tx(self, message): pass + def on_wtxidrelay(self, message): pass + + def on_inv(self, message): + want = msg_getdata() + for i in message.inv: + if i.type != 0: + want.inv.append(i) + if len(want.inv): + self.send_message(want) + + def on_ping(self, message): + self.send_message(msg_pong(message.nonce)) + + def on_verack(self, message): + pass + + def on_version(self, message): + assert message.nVersion >= MIN_P2P_VERSION_SUPPORTED, "Version {} received. Test framework only supports versions greater than {}".format(message.nVersion, MIN_P2P_VERSION_SUPPORTED) + if message.nVersion >= 70016 and self.wtxidrelay: + self.send_message(msg_wtxidrelay()) + if self.support_addrv2: + self.send_message(msg_sendaddrv2()) + self.send_message(msg_verack()) + self.nServices = message.nServices + self.relay = message.relay + self.send_message(msg_getaddr()) + + # Connection helper methods + + def wait_until(self, test_function_in, *, timeout=60, check_connected=True): + def test_function(): + if check_connected: + assert self.is_connected + return test_function_in() + + wait_until_helper_internal(test_function, timeout=timeout, lock=p2p_lock, timeout_factor=self.timeout_factor) + + def wait_for_connect(self, timeout=60): + test_function = lambda: self.is_connected + self.wait_until(test_function, timeout=timeout, check_connected=False) + + def wait_for_disconnect(self, timeout=60): + test_function = lambda: not self.is_connected + self.wait_until(test_function, timeout=timeout, check_connected=False) + + # Message receiving helper methods + + def wait_for_tx(self, txid, timeout=60): + def test_function(): + if not self.last_message.get('tx'): + return False + return self.last_message['tx'].tx.rehash() == txid + + self.wait_until(test_function, timeout=timeout) + + def wait_for_block(self, blockhash, timeout=60): + def test_function(): + return self.last_message.get("block") and self.last_message["block"].block.rehash() == blockhash + + self.wait_until(test_function, timeout=timeout) + + def wait_for_header(self, blockhash, timeout=60): + def test_function(): + last_headers = self.last_message.get('headers') + if not last_headers: + return False + return last_headers.headers[0].rehash() == int(blockhash, 16) + + self.wait_until(test_function, timeout=timeout) + + def wait_for_merkleblock(self, blockhash, timeout=60): + def test_function(): + last_filtered_block = self.last_message.get('merkleblock') + if not last_filtered_block: 
+ return False + return last_filtered_block.merkleblock.header.rehash() == int(blockhash, 16) + + self.wait_until(test_function, timeout=timeout) + + def wait_for_getdata(self, hash_list, timeout=60): + """Waits for a getdata message. + + The object hashes in the inventory vector must match the provided hash_list.""" + def test_function(): + last_data = self.last_message.get("getdata") + if not last_data: + return False + return [x.hash for x in last_data.inv] == hash_list + + self.wait_until(test_function, timeout=timeout) + + def wait_for_getheaders(self, timeout=60): + """Waits for a getheaders message. + + Receiving any getheaders message will satisfy the predicate. the last_message["getheaders"] + value must be explicitly cleared before calling this method, or this will return + immediately with success. TODO: change this method to take a hash value and only + return true if the correct block header has been requested.""" + def test_function(): + return self.last_message.get("getheaders") + + self.wait_until(test_function, timeout=timeout) + + def wait_for_inv(self, expected_inv, timeout=60): + """Waits for an INV message and checks that the first inv object in the message was as expected.""" + if len(expected_inv) > 1: + raise NotImplementedError("wait_for_inv() will only verify the first inv object") + + def test_function(): + return self.last_message.get("inv") and \ + self.last_message["inv"].inv[0].type == expected_inv[0].type and \ + self.last_message["inv"].inv[0].hash == expected_inv[0].hash + + self.wait_until(test_function, timeout=timeout) + + def wait_for_verack(self, timeout=60): + def test_function(): + return "verack" in self.last_message + + self.wait_until(test_function, timeout=timeout) + + # Message sending helper functions + + def send_and_ping(self, message, timeout=60): + self.send_message(message) + self.sync_with_ping(timeout=timeout) + + def sync_with_ping(self, timeout=60): + """Ensure ProcessMessages and SendMessages is called on this connection""" + # Sending two pings back-to-back, requires that the node calls + # `ProcessMessage` twice, and thus ensures `SendMessages` must have + # been called at least once + self.send_message(msg_ping(nonce=0)) + self.send_message(msg_ping(nonce=self.ping_counter)) + + def test_function(): + return self.last_message.get("pong") and self.last_message["pong"].nonce == self.ping_counter + + self.wait_until(test_function, timeout=timeout) + self.ping_counter += 1 + + +# One lock for synchronizing all data access between the network event loop (see +# NetworkThread below) and the thread running the test logic. For simplicity, +# P2PConnection acquires this lock whenever delivering a message to a P2PInterface. +# This lock should be acquired in the thread running the test logic to synchronize +# access to any data shared with the P2PInterface or P2PConnection. 
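+# For example (hypothetical usage), test code reading a peer's message
+# counters should do so under the lock:
+#
+#     with p2p_lock:
+#         inv_count = peer.message_count["inv"]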
+p2p_lock = threading.Lock() + + +class NetworkThread(threading.Thread): + network_event_loop = None + + def __init__(self): + super().__init__(name="NetworkThread") + # There is only one event loop and no more than one thread must be created + assert not self.network_event_loop + + NetworkThread.listeners = {} + NetworkThread.protos = {} + if sys.platform == 'win32': + asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) + NetworkThread.network_event_loop = asyncio.new_event_loop() + + def run(self): + """Start the network thread.""" + self.network_event_loop.run_forever() + + def close(self, timeout=10): + """Close the connections and network event loop.""" + self.network_event_loop.call_soon_threadsafe(self.network_event_loop.stop) + wait_until_helper_internal(lambda: not self.network_event_loop.is_running(), timeout=timeout) + self.network_event_loop.close() + self.join(timeout) + # Safe to remove event loop. + NetworkThread.network_event_loop = None + + @classmethod + def listen(cls, p2p, callback, port=None, addr=None, idx=1): + """ Ensure a listening server is running on the given port, and run the + protocol specified by `p2p` on the next connection to it. Once ready + for connections, call `callback`.""" + + if port is None: + assert 0 < idx <= MAX_NODES + port = p2p_port(MAX_NODES - idx) + if addr is None: + addr = '127.0.0.1' + + coroutine = cls.create_listen_server(addr, port, callback, p2p) + cls.network_event_loop.call_soon_threadsafe(cls.network_event_loop.create_task, coroutine) + + @classmethod + async def create_listen_server(cls, addr, port, callback, proto): + def peer_protocol(): + """Returns a function that does the protocol handling for a new + connection. To allow different connections to have different + behaviors, the protocol function is first put in the cls.protos + dict. When the connection is made, the function removes the + protocol function from that dict, and returns it so the event loop + can start executing it.""" + response = cls.protos.get((addr, port)) + cls.protos[(addr, port)] = None + return response + + if (addr, port) not in cls.listeners: + # When creating a listener on a given (addr, port) we only need to + # do it once. If we want different behaviors for different + # connections, we can accomplish this by providing different + # `proto` functions + + listener = await cls.network_event_loop.create_server(peer_protocol, addr, port) + logger.debug("Listening server on %s:%d should be started" % (addr, port)) + cls.listeners[(addr, port)] = listener + + cls.protos[(addr, port)] = proto + callback(addr, port) + + +class P2PDataStore(P2PInterface): + """A P2P data store class. + + Keeps a block and transaction store and responds correctly to getdata and getheaders requests.""" + + def __init__(self): + super().__init__() + # store of blocks. key is block hash, value is a CBlock object + self.block_store = {} + self.last_block_hash = '' + # store of txs. 
key is txid, value is a CTransaction object + self.tx_store = {} + self.getdata_requests = [] + + def on_getdata(self, message): + """Check for the tx/block in our stores and if found, reply with an inv message.""" + for inv in message.inv: + self.getdata_requests.append(inv.hash) + if (inv.type & MSG_TYPE_MASK) == MSG_TX and inv.hash in self.tx_store.keys(): + self.send_message(msg_tx(self.tx_store[inv.hash])) + elif (inv.type & MSG_TYPE_MASK) == MSG_BLOCK and inv.hash in self.block_store.keys(): + self.send_message(msg_block(self.block_store[inv.hash])) + else: + logger.debug('getdata message type {} received.'.format(hex(inv.type))) + + def on_getheaders(self, message): + """Search back through our block store for the locator, and reply with a headers message if found.""" + + locator, hash_stop = message.locator, message.hashstop + + # Assume that the most recent block added is the tip + if not self.block_store: + return + + headers_list = [self.block_store[self.last_block_hash]] + while headers_list[-1].sha256 not in locator.vHave: + # Walk back through the block store, adding headers to headers_list + # as we go. + prev_block_hash = headers_list[-1].hashPrevBlock + if prev_block_hash in self.block_store: + prev_block_header = CBlockHeader(self.block_store[prev_block_hash]) + headers_list.append(prev_block_header) + if prev_block_header.sha256 == hash_stop: + # if this is the hashstop header, stop here + break + else: + logger.debug('block hash {} not found in block store'.format(hex(prev_block_hash))) + break + + # Truncate the list if there are too many headers + headers_list = headers_list[:-MAX_HEADERS_RESULTS - 1:-1] + response = msg_headers(headers_list) + + if response is not None: + self.send_message(response) + + def send_blocks_and_test(self, blocks, node, *, success=True, force_send=False, reject_reason=None, expect_disconnect=False, timeout=60): + """Send blocks to test node and test whether the tip advances. + + - add all blocks to our block_store + - send a headers message for the final block + - the on_getheaders handler will ensure that any getheaders are responded to + - if force_send is False: wait for getdata for each of the blocks. The on_getdata handler will + ensure that any getdata messages are responded to. Otherwise send the full block unsolicited. + - if success is True: assert that the node's tip advances to the most recent block + - if success is False: assert that the node's tip doesn't advance + - if reject_reason is set: assert that the correct reject message is logged""" + + with p2p_lock: + for block in blocks: + self.block_store[block.sha256] = block + self.last_block_hash = block.sha256 + + reject_reason = [reject_reason] if reject_reason else [] + with node.assert_debug_log(expected_msgs=reject_reason): + if force_send: + for b in blocks: + self.send_message(msg_block(block=b)) + else: + self.send_message(msg_headers([CBlockHeader(block) for block in blocks])) + self.wait_until( + lambda: blocks[-1].sha256 in self.getdata_requests, + timeout=timeout, + check_connected=success, + ) + + if expect_disconnect: + self.wait_for_disconnect(timeout=timeout) + else: + self.sync_with_ping(timeout=timeout) + + if success: + self.wait_until(lambda: node.getbestblockhash() == blocks[-1].hash, timeout=timeout) + else: + assert node.getbestblockhash() != blocks[-1].hash + + def send_txs_and_test(self, txs, node, *, success=True, expect_disconnect=False, reject_reason=None): + """Send txs to test node and test whether they're accepted to the mempool. 
+ + - add all txs to our tx_store + - send tx messages for all txs + - if success is True/False: assert that the txs are/are not accepted to the mempool + - if expect_disconnect is True: Skip the sync with ping + - if reject_reason is set: assert that the correct reject message is logged.""" + + with p2p_lock: + for tx in txs: + self.tx_store[tx.sha256] = tx + + reject_reason = [reject_reason] if reject_reason else [] + with node.assert_debug_log(expected_msgs=reject_reason): + for tx in txs: + self.send_message(msg_tx(tx)) + + if expect_disconnect: + self.wait_for_disconnect() + else: + self.sync_with_ping() + + raw_mempool = node.getrawmempool() + if success: + # Check that all txs are now in the mempool + for tx in txs: + assert tx.hash in raw_mempool, "{} not found in mempool".format(tx.hash) + else: + # Check that none of the txs are now in the mempool + for tx in txs: + assert tx.hash not in raw_mempool, "{} tx found in mempool".format(tx.hash) + +class P2PTxInvStore(P2PInterface): + """A P2PInterface which stores a count of how many times each txid has been announced.""" + def __init__(self): + super().__init__() + self.tx_invs_received = defaultdict(int) + + def on_inv(self, message): + super().on_inv(message) # Send getdata in response. + # Store how many times invs have been received for each tx. + for i in message.inv: + if (i.type == MSG_TX) or (i.type == MSG_WTX): + # save txid + self.tx_invs_received[i.hash] += 1 + + def get_invs(self): + with p2p_lock: + return list(self.tx_invs_received.keys()) + + def wait_for_broadcast(self, txns, timeout=60): + """Waits for the txns (list of txids) to complete initial broadcast. + The mempool should mark unbroadcast=False for these transactions. + """ + # Wait until invs have been received (and getdatas sent) for each txid. + self.wait_until(lambda: set(self.tx_invs_received.keys()) == set([int(tx, 16) for tx in txns]), timeout=timeout) + # Flush messages and wait for the getdatas to be processed + self.sync_with_ping() diff --git a/warnet-demo/scenarios/test_framework/psbt.py b/warnet-demo/scenarios/test_framework/psbt.py new file mode 100644 index 0000000..1eff4a2 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/psbt.py @@ -0,0 +1,140 @@ +#!/usr/bin/env python3 +# Copyright (c) 2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. 
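+"""Minimal classes for serializing and deserializing partially signed
+bitcoin transactions (PSBTs, see BIP 174), as used by the test framework."""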
+ +import base64 + +from .messages import ( + CTransaction, + deser_string, + from_binary, + ser_compact_size, +) + + +# global types +PSBT_GLOBAL_UNSIGNED_TX = 0x00 +PSBT_GLOBAL_XPUB = 0x01 +PSBT_GLOBAL_TX_VERSION = 0x02 +PSBT_GLOBAL_FALLBACK_LOCKTIME = 0x03 +PSBT_GLOBAL_INPUT_COUNT = 0x04 +PSBT_GLOBAL_OUTPUT_COUNT = 0x05 +PSBT_GLOBAL_TX_MODIFIABLE = 0x06 +PSBT_GLOBAL_VERSION = 0xfb +PSBT_GLOBAL_PROPRIETARY = 0xfc + +# per-input types +PSBT_IN_NON_WITNESS_UTXO = 0x00 +PSBT_IN_WITNESS_UTXO = 0x01 +PSBT_IN_PARTIAL_SIG = 0x02 +PSBT_IN_SIGHASH_TYPE = 0x03 +PSBT_IN_REDEEM_SCRIPT = 0x04 +PSBT_IN_WITNESS_SCRIPT = 0x05 +PSBT_IN_BIP32_DERIVATION = 0x06 +PSBT_IN_FINAL_SCRIPTSIG = 0x07 +PSBT_IN_FINAL_SCRIPTWITNESS = 0x08 +PSBT_IN_POR_COMMITMENT = 0x09 +PSBT_IN_RIPEMD160 = 0x0a +PSBT_IN_SHA256 = 0x0b +PSBT_IN_HASH160 = 0x0c +PSBT_IN_HASH256 = 0x0d +PSBT_IN_PREVIOUS_TXID = 0x0e +PSBT_IN_OUTPUT_INDEX = 0x0f +PSBT_IN_SEQUENCE = 0x10 +PSBT_IN_REQUIRED_TIME_LOCKTIME = 0x11 +PSBT_IN_REQUIRED_HEIGHT_LOCKTIME = 0x12 +PSBT_IN_TAP_KEY_SIG = 0x13 +PSBT_IN_TAP_SCRIPT_SIG = 0x14 +PSBT_IN_TAP_LEAF_SCRIPT = 0x15 +PSBT_IN_TAP_BIP32_DERIVATION = 0x16 +PSBT_IN_TAP_INTERNAL_KEY = 0x17 +PSBT_IN_TAP_MERKLE_ROOT = 0x18 +PSBT_IN_PROPRIETARY = 0xfc + +# per-output types +PSBT_OUT_REDEEM_SCRIPT = 0x00 +PSBT_OUT_WITNESS_SCRIPT = 0x01 +PSBT_OUT_BIP32_DERIVATION = 0x02 +PSBT_OUT_AMOUNT = 0x03 +PSBT_OUT_SCRIPT = 0x04 +PSBT_OUT_TAP_INTERNAL_KEY = 0x05 +PSBT_OUT_TAP_TREE = 0x06 +PSBT_OUT_TAP_BIP32_DERIVATION = 0x07 +PSBT_OUT_PROPRIETARY = 0xfc + + +class PSBTMap: + """Class for serializing and deserializing PSBT maps""" + + def __init__(self, map=None): + self.map = map if map is not None else {} + + def deserialize(self, f): + m = {} + while True: + k = deser_string(f) + if len(k) == 0: + break + v = deser_string(f) + if len(k) == 1: + k = k[0] + assert k not in m + m[k] = v + self.map = m + + def serialize(self): + m = b"" + for k,v in self.map.items(): + if isinstance(k, int) and 0 <= k and k <= 255: + k = bytes([k]) + m += ser_compact_size(len(k)) + k + m += ser_compact_size(len(v)) + v + m += b"\x00" + return m + +class PSBT: + """Class for serializing and deserializing PSBTs""" + + def __init__(self, *, g=None, i=None, o=None): + self.g = g if g is not None else PSBTMap() + self.i = i if i is not None else [] + self.o = o if o is not None else [] + self.tx = None + + def deserialize(self, f): + assert f.read(5) == b"psbt\xff" + self.g = from_binary(PSBTMap, f) + assert PSBT_GLOBAL_UNSIGNED_TX in self.g.map + self.tx = from_binary(CTransaction, self.g.map[PSBT_GLOBAL_UNSIGNED_TX]) + self.i = [from_binary(PSBTMap, f) for _ in self.tx.vin] + self.o = [from_binary(PSBTMap, f) for _ in self.tx.vout] + return self + + def serialize(self): + assert isinstance(self.g, PSBTMap) + assert isinstance(self.i, list) and all(isinstance(x, PSBTMap) for x in self.i) + assert isinstance(self.o, list) and all(isinstance(x, PSBTMap) for x in self.o) + assert PSBT_GLOBAL_UNSIGNED_TX in self.g.map + tx = from_binary(CTransaction, self.g.map[PSBT_GLOBAL_UNSIGNED_TX]) + assert len(tx.vin) == len(self.i) + assert len(tx.vout) == len(self.o) + + psbt = [x.serialize() for x in [self.g] + self.i + self.o] + return b"psbt\xff" + b"".join(psbt) + + def make_blank(self): + """ + Remove all fields except for PSBT_GLOBAL_UNSIGNED_TX + """ + for m in self.i + self.o: + m.map.clear() + + self.g = PSBTMap(map={PSBT_GLOBAL_UNSIGNED_TX: self.g.map[PSBT_GLOBAL_UNSIGNED_TX]}) + + def to_base64(self): + return base64.b64encode(self.serialize()).decode("utf8") + + 
@classmethod + def from_base64(cls, b64psbt): + return from_binary(cls, base64.b64decode(b64psbt)) diff --git a/warnet-demo/scenarios/test_framework/ripemd160.py b/warnet-demo/scenarios/test_framework/ripemd160.py new file mode 100644 index 0000000..1280136 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/ripemd160.py @@ -0,0 +1,130 @@ +# Copyright (c) 2021 Pieter Wuille +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""Test-only pure Python RIPEMD160 implementation.""" + +import unittest + +# Message schedule indexes for the left path. +ML = [ + 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, + 7, 4, 13, 1, 10, 6, 15, 3, 12, 0, 9, 5, 2, 14, 11, 8, + 3, 10, 14, 4, 9, 15, 8, 1, 2, 7, 0, 6, 13, 11, 5, 12, + 1, 9, 11, 10, 0, 8, 12, 4, 13, 3, 7, 15, 14, 5, 6, 2, + 4, 0, 5, 9, 7, 12, 2, 10, 14, 1, 3, 8, 11, 6, 15, 13 +] + +# Message schedule indexes for the right path. +MR = [ + 5, 14, 7, 0, 9, 2, 11, 4, 13, 6, 15, 8, 1, 10, 3, 12, + 6, 11, 3, 7, 0, 13, 5, 10, 14, 15, 8, 12, 4, 9, 1, 2, + 15, 5, 1, 3, 7, 14, 6, 9, 11, 8, 12, 2, 10, 0, 4, 13, + 8, 6, 4, 1, 3, 11, 15, 0, 5, 12, 2, 13, 9, 7, 10, 14, + 12, 15, 10, 4, 1, 5, 8, 7, 6, 2, 13, 14, 0, 3, 9, 11 +] + +# Rotation counts for the left path. +RL = [ + 11, 14, 15, 12, 5, 8, 7, 9, 11, 13, 14, 15, 6, 7, 9, 8, + 7, 6, 8, 13, 11, 9, 7, 15, 7, 12, 15, 9, 11, 7, 13, 12, + 11, 13, 6, 7, 14, 9, 13, 15, 14, 8, 13, 6, 5, 12, 7, 5, + 11, 12, 14, 15, 14, 15, 9, 8, 9, 14, 5, 6, 8, 6, 5, 12, + 9, 15, 5, 11, 6, 8, 13, 12, 5, 12, 13, 14, 11, 8, 5, 6 +] + +# Rotation counts for the right path. +RR = [ + 8, 9, 9, 11, 13, 15, 15, 5, 7, 7, 8, 11, 14, 14, 12, 6, + 9, 13, 15, 7, 12, 8, 9, 11, 7, 7, 12, 7, 6, 15, 13, 11, + 9, 7, 15, 11, 8, 6, 6, 14, 12, 13, 5, 14, 13, 13, 7, 5, + 15, 5, 8, 11, 14, 14, 6, 14, 6, 9, 12, 9, 12, 5, 15, 8, + 8, 5, 12, 9, 12, 5, 14, 6, 8, 13, 6, 5, 15, 13, 11, 11 +] + +# K constants for the left path. +KL = [0, 0x5a827999, 0x6ed9eba1, 0x8f1bbcdc, 0xa953fd4e] + +# K constants for the right path. +KR = [0x50a28be6, 0x5c4dd124, 0x6d703ef3, 0x7a6d76e9, 0] + + +def fi(x, y, z, i): + """The f1, f2, f3, f4, and f5 functions from the specification.""" + if i == 0: + return x ^ y ^ z + elif i == 1: + return (x & y) | (~x & z) + elif i == 2: + return (x | ~y) ^ z + elif i == 3: + return (x & z) | (y & ~z) + elif i == 4: + return x ^ (y | ~z) + else: + assert False + + +def rol(x, i): + """Rotate the bottom 32 bits of x left by i bits.""" + return ((x << i) | ((x & 0xffffffff) >> (32 - i))) & 0xffffffff + + +def compress(h0, h1, h2, h3, h4, block): + """Compress state (h0, h1, h2, h3, h4) with block.""" + # Left path variables. + al, bl, cl, dl, el = h0, h1, h2, h3, h4 + # Right path variables. + ar, br, cr, dr, er = h0, h1, h2, h3, h4 + # Message variables. + x = [int.from_bytes(block[4*i:4*(i+1)], 'little') for i in range(16)] + + # Iterate over the 80 rounds of the compression. + for j in range(80): + rnd = j >> 4 + # Perform left side of the transformation. + al = rol(al + fi(bl, cl, dl, rnd) + x[ML[j]] + KL[rnd], RL[j]) + el + al, bl, cl, dl, el = el, al, bl, rol(cl, 10), dl + # Perform right side of the transformation. + ar = rol(ar + fi(br, cr, dr, 4 - rnd) + x[MR[j]] + KR[rnd], RR[j]) + er + ar, br, cr, dr, er = er, ar, br, rol(cr, 10), dr + + # Compose old state, left transform, and right transform into new state. 
+ return h1 + cl + dr, h2 + dl + er, h3 + el + ar, h4 + al + br, h0 + bl + cr + + +def ripemd160(data): + """Compute the RIPEMD-160 hash of data.""" + # Initialize state. + state = (0x67452301, 0xefcdab89, 0x98badcfe, 0x10325476, 0xc3d2e1f0) + # Process full 64-byte blocks in the input. + for b in range(len(data) >> 6): + state = compress(*state, data[64*b:64*(b+1)]) + # Construct final blocks (with padding and size). + pad = b"\x80" + b"\x00" * ((119 - len(data)) & 63) + fin = data[len(data) & ~63:] + pad + (8 * len(data)).to_bytes(8, 'little') + # Process final blocks. + for b in range(len(fin) >> 6): + state = compress(*state, fin[64*b:64*(b+1)]) + # Produce output. + return b"".join((h & 0xffffffff).to_bytes(4, 'little') for h in state) + + +class TestFrameworkKey(unittest.TestCase): + def test_ripemd160(self): + """RIPEMD-160 test vectors.""" + # See https://homes.esat.kuleuven.be/~bosselae/ripemd160.html + for msg, hexout in [ + (b"", "9c1185a5c5e9fc54612808977ee8f548b2258d31"), + (b"a", "0bdc9d2d256b3ee9daae347be6f4dc835a467ffe"), + (b"abc", "8eb208f7e05d987a9b044a8e98c6b087f15a0bfc"), + (b"message digest", "5d0689ef49d2fae572b881b123a85ffa21595f36"), + (b"abcdefghijklmnopqrstuvwxyz", + "f71c27109c692c1b56bbdceb5b9d2865b3708dbc"), + (b"abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq", + "12a053384a9c0c88e405a06c27dcf49ada62eb2b"), + (b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789", + "b0e20b6e3116640286ed3a87a5713079b21f5189"), + (b"1234567890" * 8, "9b752e45573d4b39f4dbd3323cab82bf63326bfb"), + (b"a" * 1000000, "52783243c1697bdbe16d37f97f68f08325dc1528") + ]: + self.assertEqual(ripemd160(msg).hex(), hexout) diff --git a/warnet-demo/scenarios/test_framework/script.py b/warnet-demo/scenarios/test_framework/script.py new file mode 100644 index 0000000..17a954c --- /dev/null +++ b/warnet-demo/scenarios/test_framework/script.py @@ -0,0 +1,928 @@ +#!/usr/bin/env python3 +# Copyright (c) 2015-2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""Functionality to build scripts, as well as signature hash functions. + +This file is modified from python-bitcoinlib. +""" + +from collections import namedtuple +import struct +import unittest +from typing import List, Dict + +from .key import TaggedHash, tweak_add_pubkey, compute_xonly_pubkey + +from .messages import ( + CTransaction, + CTxOut, + hash256, + ser_string, + ser_uint256, + sha256, + uint256_from_str, +) + +from .ripemd160 import ripemd160 + +MAX_SCRIPT_ELEMENT_SIZE = 520 +MAX_PUBKEYS_PER_MULTI_A = 999 +LOCKTIME_THRESHOLD = 500000000 +ANNEX_TAG = 0x50 + +LEAF_VERSION_TAPSCRIPT = 0xc0 + +def hash160(s): + return ripemd160(sha256(s)) + +def bn2vch(v): + """Convert number to bitcoin-specific little endian format.""" + # We need v.bit_length() bits, plus a sign bit for every nonzero number. + n_bits = v.bit_length() + (v != 0) + # The number of bytes for that is: + n_bytes = (n_bits + 7) // 8 + # Convert number to absolute value + sign in top bit. 
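+    # For example, v=255 needs 8 value bits plus a sign bit (two bytes,
+    # serialized little-endian as b'\xff\x00'), while v=-1 needs one value bit
+    # plus a sign bit (a single byte, 0x81).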
+    encoded_v = 0 if v == 0 else abs(v) | ((v < 0) << (n_bytes * 8 - 1))
+    # Serialize to bytes
+    return encoded_v.to_bytes(n_bytes, 'little')
+
+class CScriptOp(int):
+    """A single script opcode"""
+    __slots__ = ()
+
+    @staticmethod
+    def encode_op_pushdata(d):
+        """Encode a PUSHDATA op, returning bytes"""
+        if len(d) < 0x4c:
+            return b'' + bytes([len(d)]) + d  # OP_PUSHDATA
+        elif len(d) <= 0xff:
+            return b'\x4c' + bytes([len(d)]) + d  # OP_PUSHDATA1
+        elif len(d) <= 0xffff:
+            return b'\x4d' + struct.pack(b'<H', len(d)) + d  # OP_PUSHDATA2
+        elif len(d) <= 0xffffffff:
+            return b'\x4e' + struct.pack(b'<I', len(d)) + d  # OP_PUSHDATA4
+        else:
+            raise ValueError("Data too long to encode in a PUSHDATA op")
+
+# The OP_* opcode constants, the OPCODE_NAMES table, the remaining CScriptOp
+# helpers (encode_op_n, decode_op_n, is_small_int), and the script exception
+# classes (CScriptInvalidError, CScriptTruncatedPushDataError) follow here as
+# in Bitcoin Core's test_framework/script.py; they are referenced by the code
+# below.
+
+class CScriptNum:
+    __slots__ = ("value",)
+
+    def __init__(self, d=0):
+        self.value = d
+
+    @staticmethod
+    def encode(obj):
+        r = bytearray(0)
+        if obj.value == 0:
+            return bytes(r)
+        neg = obj.value < 0
+        absvalue = -obj.value if neg else obj.value
+        while absvalue:
+            r.append(absvalue & 0xff)
+            absvalue >>= 8
+        if r[-1] & 0x80:
+            r.append(0x80 if neg else 0)
+        elif neg:
+            r[-1] |= 0x80
+        return bytes([len(r)]) + r
+
+    @staticmethod
+    def decode(vch):
+        result = 0
+        # We assume valid push_size and minimal encoding
+        value = vch[1:]
+        if len(value) == 0:
+            return result
+        for i, byte in enumerate(value):
+            result |= int(byte) << 8 * i
+        if value[-1] >= 0x80:
+            # Mask for all but the highest result bit
+            num_mask = (2**(len(value) * 8) - 1) >> 1
+            result &= num_mask
+            result *= -1
+        return result
+
+
+class CScript(bytes):
+    """Serialized script
+
+    A bytes subclass, so you can use this directly whenever bytes are accepted.
+    Note that this means that indexing does *not* work - you'll get an index by
+    byte rather than opcode. This format was chosen for efficiency so that the
+    general case would not require creating a lot of little CScriptOp objects.
+
+    iter(script) however does iterate by opcode.
+    """
+    __slots__ = ()
+
+    @classmethod
+    def __coerce_instance(cls, other):
+        # Coerce other into bytes
+        if isinstance(other, CScriptOp):
+            other = bytes([other])
+        elif isinstance(other, CScriptNum):
+            if (other.value == 0):
+                other = bytes([CScriptOp(OP_0)])
+            else:
+                other = CScriptNum.encode(other)
+        elif isinstance(other, int):
+            if 0 <= other <= 16:
+                other = bytes([CScriptOp.encode_op_n(other)])
+            elif other == -1:
+                other = bytes([OP_1NEGATE])
+            else:
+                other = CScriptOp.encode_op_pushdata(bn2vch(other))
+        elif isinstance(other, (bytes, bytearray)):
+            other = CScriptOp.encode_op_pushdata(other)
+        return other
+
+    def __add__(self, other):
+        # add makes no sense for a CScript()
+        raise NotImplementedError
+
+    def join(self, iterable):
+        # join makes no sense for a CScript()
+        raise NotImplementedError
+
+    def __new__(cls, value=b''):
+        if isinstance(value, bytes) or isinstance(value, bytearray):
+            return super().__new__(cls, value)
+        else:
+            def coerce_iterable(iterable):
+                for instance in iterable:
+                    yield cls.__coerce_instance(instance)
+            # Annoyingly on both python2 and python3 bytes.join() always
+            # returns a bytes instance even when subclassed.
+            return super().__new__(cls, b''.join(coerce_iterable(value)))
+
+    def raw_iter(self):
+        """Raw iteration
+
+        Yields tuples of (opcode, data, sop_idx) so that the different possible
+        PUSHDATA encodings can be accurately distinguished, as well as
+        determining the exact opcode byte indexes. (sop_idx)
+        """
+        i = 0
+        while i < len(self):
+            sop_idx = i
+            opcode = self[i]
+            i += 1
+
+            if opcode > OP_PUSHDATA4:
+                yield (opcode, None, sop_idx)
+            else:
+                datasize = None
+                pushdata_type = None
+                if opcode < OP_PUSHDATA1:
+                    pushdata_type = 'PUSHDATA(%d)' % opcode
+                    datasize = opcode
+
+                elif opcode == OP_PUSHDATA1:
+                    pushdata_type = 'PUSHDATA1'
+                    if i >= len(self):
+                        raise CScriptInvalidError('PUSHDATA1: missing data length')
+                    datasize = self[i]
+                    i += 1
+
+                elif opcode == OP_PUSHDATA2:
+                    pushdata_type = 'PUSHDATA2'
+                    if i + 1 >= len(self):
+                        raise CScriptInvalidError('PUSHDATA2: missing data length')
+                    datasize = self[i] + (self[i + 1] << 8)
+                    i += 2
+
+                elif opcode == OP_PUSHDATA4:
+                    pushdata_type = 'PUSHDATA4'
+                    if i + 3 >= len(self):
+                        raise CScriptInvalidError('PUSHDATA4: missing data length')
+                    datasize = self[i] + (self[i + 1] << 8) + (self[i + 2] << 16) + (self[i + 3] << 24)
+                    i += 4
+
+                else:
+                    assert False  # shouldn't happen
+
+                data = bytes(self[i:i + datasize])
+
+                # Check for truncation
+                if len(data) < datasize:
+                    raise CScriptTruncatedPushDataError('%s: truncated data' % pushdata_type, data)
+
+                i += datasize
+
+                yield (opcode, data, sop_idx)
+
+    def __iter__(self):
+        """'Cooked' iteration
+
+        Returns either a CScriptOp instance, an integer, or bytes, as
+        appropriate.
+
+        See raw_iter() if you need to distinguish the different possible
+        PUSHDATA encodings.
+        """
+        for (opcode, data, sop_idx) in self.raw_iter():
+            if data is not None:
+                yield data
+            else:
+                opcode = CScriptOp(opcode)
+
+                if opcode.is_small_int():
+                    yield opcode.decode_op_n()
+                else:
+                    yield CScriptOp(opcode)
+
+    def __repr__(self):
+        def _repr(o):
+            if isinstance(o, bytes):
+                return "x('%s')" % o.hex()
+            else:
+                return repr(o)
+
+        ops = []
+        i = iter(self)
+        while True:
+            op = None
+            try:
+                op = _repr(next(i))
+            except CScriptTruncatedPushDataError as err:
+                op = '%s...<ERROR: %s>' % (_repr(err.data), err)
+                break
+            except CScriptInvalidError as err:
+                op = '<ERROR: %s>' % err
+                break
+            except StopIteration:
+                break
+            finally:
+                if op is not None:
+                    ops.append(op)
+
+        return "CScript([%s])" % ', '.join(ops)
+
+    def GetSigOpCount(self, fAccurate):
+        """Get the SigOp count.
+
+        fAccurate - Accurately count CHECKMULTISIG, see BIP16 for details.
+
+        Note that this is consensus-critical.
+ """ + n = 0 + lastOpcode = OP_INVALIDOPCODE + for (opcode, data, sop_idx) in self.raw_iter(): + if opcode in (OP_CHECKSIG, OP_CHECKSIGVERIFY): + n += 1 + elif opcode in (OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY): + if fAccurate and (OP_1 <= lastOpcode <= OP_16): + n += opcode.decode_op_n() + else: + n += 20 + lastOpcode = opcode + return n + + def IsWitnessProgram(self): + """A witness program is any valid CScript that consists of a 1-byte + push opcode followed by a data push between 2 and 40 bytes.""" + return ((4 <= len(self) <= 42) and + (self[0] == OP_0 or (OP_1 <= self[0] <= OP_16)) and + (self[1] + 2 == len(self))) + + +SIGHASH_DEFAULT = 0 # Taproot-only default, semantics same as SIGHASH_ALL +SIGHASH_ALL = 1 +SIGHASH_NONE = 2 +SIGHASH_SINGLE = 3 +SIGHASH_ANYONECANPAY = 0x80 + +def FindAndDelete(script, sig): + """Consensus critical, see FindAndDelete() in Satoshi codebase""" + r = b'' + last_sop_idx = sop_idx = 0 + skip = True + for (opcode, data, sop_idx) in script.raw_iter(): + if not skip: + r += script[last_sop_idx:sop_idx] + last_sop_idx = sop_idx + if script[sop_idx:sop_idx + len(sig)] == sig: + skip = True + else: + skip = False + if not skip: + r += script[last_sop_idx:] + return CScript(r) + +def LegacySignatureMsg(script, txTo, inIdx, hashtype): + """Preimage of the signature hash, if it exists. + + Returns either (None, err) to indicate error (which translates to sighash 1), + or (msg, None). + """ + + if inIdx >= len(txTo.vin): + return (None, "inIdx %d out of range (%d)" % (inIdx, len(txTo.vin))) + txtmp = CTransaction(txTo) + + for txin in txtmp.vin: + txin.scriptSig = b'' + txtmp.vin[inIdx].scriptSig = FindAndDelete(script, CScript([OP_CODESEPARATOR])) + + if (hashtype & 0x1f) == SIGHASH_NONE: + txtmp.vout = [] + + for i in range(len(txtmp.vin)): + if i != inIdx: + txtmp.vin[i].nSequence = 0 + + elif (hashtype & 0x1f) == SIGHASH_SINGLE: + outIdx = inIdx + if outIdx >= len(txtmp.vout): + return (None, "outIdx %d out of range (%d)" % (outIdx, len(txtmp.vout))) + + tmp = txtmp.vout[outIdx] + txtmp.vout = [] + for _ in range(outIdx): + txtmp.vout.append(CTxOut(-1)) + txtmp.vout.append(tmp) + + for i in range(len(txtmp.vin)): + if i != inIdx: + txtmp.vin[i].nSequence = 0 + + if hashtype & SIGHASH_ANYONECANPAY: + tmp = txtmp.vin[inIdx] + txtmp.vin = [] + txtmp.vin.append(tmp) + + s = txtmp.serialize_without_witness() + s += struct.pack(b" TaprootLeafInfo objects for all known leaves +# - merkle_root: the script tree's Merkle root, or bytes() if no leaves are present +TaprootInfo = namedtuple("TaprootInfo", "scriptPubKey,internal_pubkey,negflag,tweak,leaves,merkle_root,output_pubkey") + +# A TaprootLeafInfo object has the following fields: +# - script: the leaf script (CScript or bytes) +# - version: the leaf version (0xc0 for BIP342 tapscript) +# - merklebranch: the merkle branch to use for this leaf (32*N bytes) +TaprootLeafInfo = namedtuple("TaprootLeafInfo", "script,version,merklebranch,leaf_hash") + +def taproot_construct(pubkey, scripts=None, treat_internal_as_infinity=False): + """Construct a tree of Taproot spending conditions + + pubkey: a 32-byte xonly pubkey for the internal pubkey (bytes) + scripts: a list of items; each item is either: + - a (name, CScript or bytes, leaf version) tuple + - a (name, CScript or bytes) tuple (defaulting to leaf version 0xc0) + - another list of items (with the same structure) + - a list of two items; the first of which is an item itself, and the + second is a function. 
The function takes as input the Merkle root of the + first item, and produces a (fictitious) partner to hash with. + + Returns: a TaprootInfo object + """ + if scripts is None: + scripts = [] + + ret, h = taproot_tree_helper(scripts) + tweak = TaggedHash("TapTweak", pubkey + h) + if treat_internal_as_infinity: + tweaked, negated = compute_xonly_pubkey(tweak) + else: + tweaked, negated = tweak_add_pubkey(pubkey, tweak) + leaves = dict((name, TaprootLeafInfo(script, version, merklebranch, leaf)) for name, version, script, merklebranch, leaf in ret) + return TaprootInfo(CScript([OP_1, tweaked]), pubkey, negated + 0, tweak, leaves, h, tweaked) + +def is_op_success(o): + return o == 0x50 or o == 0x62 or o == 0x89 or o == 0x8a or o == 0x8d or o == 0x8e or (o >= 0x7e and o <= 0x81) or (o >= 0x83 and o <= 0x86) or (o >= 0x95 and o <= 0x99) or (o >= 0xbb and o <= 0xfe) diff --git a/warnet-demo/scenarios/test_framework/script_util.py b/warnet-demo/scenarios/test_framework/script_util.py new file mode 100644 index 0000000..62894cc --- /dev/null +++ b/warnet-demo/scenarios/test_framework/script_util.py @@ -0,0 +1,127 @@ +#!/usr/bin/env python3 +# Copyright (c) 2019-2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""Useful Script constants and utils.""" +from test_framework.script import ( + CScript, + CScriptOp, + OP_0, + OP_CHECKMULTISIG, + OP_CHECKSIG, + OP_DUP, + OP_EQUAL, + OP_EQUALVERIFY, + OP_HASH160, + OP_RETURN, + hash160, + sha256, +) + +# To prevent a "tx-size-small" policy rule error, a transaction has to have a +# non-witness size of at least 65 bytes (MIN_STANDARD_TX_NONWITNESS_SIZE in +# src/policy/policy.h). Considering a Tx with the smallest possible single +# input (blank, empty scriptSig), and with an output omitting the scriptPubKey, +# we get to a minimum size of 60 bytes: +# +# Tx Skeleton: 4 [Version] + 1 [InCount] + 1 [OutCount] + 4 [LockTime] = 10 bytes +# Blank Input: 32 [PrevTxHash] + 4 [Index] + 1 [scriptSigLen] + 4 [SeqNo] = 41 bytes +# Output: 8 [Amount] + 1 [scriptPubKeyLen] = 9 bytes +# +# Hence, the scriptPubKey of the single output has to have a size of at +# least 5 bytes. 
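+# With the 60-byte minimum above, 65 - 60 = 5 bytes of scriptPubKey padding
+# are required, which is exactly what the MIN_PADDING assertion below checks.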
+MIN_STANDARD_TX_NONWITNESS_SIZE = 65 +MIN_PADDING = MIN_STANDARD_TX_NONWITNESS_SIZE - 10 - 41 - 9 +assert MIN_PADDING == 5 + +# This script cannot be spent, allowing dust output values under +# standardness checks +DUMMY_MIN_OP_RETURN_SCRIPT = CScript([OP_RETURN] + ([OP_0] * (MIN_PADDING - 1))) +assert len(DUMMY_MIN_OP_RETURN_SCRIPT) == MIN_PADDING + +def key_to_p2pk_script(key): + key = check_key(key) + return CScript([key, OP_CHECKSIG]) + + +def keys_to_multisig_script(keys, *, k=None): + n = len(keys) + if k is None: # n-of-n multisig by default + k = n + assert k <= n + op_k = CScriptOp.encode_op_n(k) + op_n = CScriptOp.encode_op_n(n) + checked_keys = [check_key(key) for key in keys] + return CScript([op_k] + checked_keys + [op_n, OP_CHECKMULTISIG]) + + +def keyhash_to_p2pkh_script(hash): + assert len(hash) == 20 + return CScript([OP_DUP, OP_HASH160, hash, OP_EQUALVERIFY, OP_CHECKSIG]) + + +def scripthash_to_p2sh_script(hash): + assert len(hash) == 20 + return CScript([OP_HASH160, hash, OP_EQUAL]) + + +def key_to_p2pkh_script(key): + key = check_key(key) + return keyhash_to_p2pkh_script(hash160(key)) + + +def script_to_p2sh_script(script): + script = check_script(script) + return scripthash_to_p2sh_script(hash160(script)) + + +def key_to_p2sh_p2wpkh_script(key): + key = check_key(key) + p2shscript = CScript([OP_0, hash160(key)]) + return script_to_p2sh_script(p2shscript) + + +def program_to_witness_script(version, program): + if isinstance(program, str): + program = bytes.fromhex(program) + assert 0 <= version <= 16 + assert 2 <= len(program) <= 40 + assert version > 0 or len(program) in [20, 32] + return CScript([version, program]) + + +def script_to_p2wsh_script(script): + script = check_script(script) + return program_to_witness_script(0, sha256(script)) + + +def key_to_p2wpkh_script(key): + key = check_key(key) + return program_to_witness_script(0, hash160(key)) + + +def script_to_p2sh_p2wsh_script(script): + script = check_script(script) + p2shscript = CScript([OP_0, sha256(script)]) + return script_to_p2sh_script(p2shscript) + + +def output_key_to_p2tr_script(key): + assert len(key) == 32 + return program_to_witness_script(1, key) + + +def check_key(key): + if isinstance(key, str): + key = bytes.fromhex(key) # Assuming this is hex string + if isinstance(key, bytes) and (len(key) == 33 or len(key) == 65): + return key + assert False + + +def check_script(script): + if isinstance(script, str): + script = bytes.fromhex(script) # Assuming this is hex string + if isinstance(script, bytes) or isinstance(script, CScript): + return script + assert False diff --git a/warnet-demo/scenarios/test_framework/secp256k1.py b/warnet-demo/scenarios/test_framework/secp256k1.py new file mode 100644 index 0000000..2e9e419 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/secp256k1.py @@ -0,0 +1,346 @@ +# Copyright (c) 2022-2023 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. + +"""Test-only implementation of low-level secp256k1 field and group arithmetic + +It is designed for ease of understanding, not performance. + +WARNING: This code is slow and trivially vulnerable to side channel attacks. Do not use for +anything but tests. + +Exports: +* FE: class for secp256k1 field elements +* GE: class for secp256k1 group elements +* G: the secp256k1 generator point +""" + + +class FE: + """Objects of this class represent elements of the field GF(2**256 - 2**32 - 977). 
+ + They are represented internally in numerator / denominator form, in order to delay inversions. + """ + + # The size of the field (also its modulus and characteristic). + SIZE = 2**256 - 2**32 - 977 + + def __init__(self, a=0, b=1): + """Initialize a field element a/b; both a and b can be ints or field elements.""" + if isinstance(a, FE): + num = a._num + den = a._den + else: + num = a % FE.SIZE + den = 1 + if isinstance(b, FE): + den = (den * b._num) % FE.SIZE + num = (num * b._den) % FE.SIZE + else: + den = (den * b) % FE.SIZE + assert den != 0 + if num == 0: + den = 1 + self._num = num + self._den = den + + def __add__(self, a): + """Compute the sum of two field elements (second may be int).""" + if isinstance(a, FE): + return FE(self._num * a._den + self._den * a._num, self._den * a._den) + return FE(self._num + self._den * a, self._den) + + def __radd__(self, a): + """Compute the sum of an integer and a field element.""" + return FE(a) + self + + def __sub__(self, a): + """Compute the difference of two field elements (second may be int).""" + if isinstance(a, FE): + return FE(self._num * a._den - self._den * a._num, self._den * a._den) + return FE(self._num - self._den * a, self._den) + + def __rsub__(self, a): + """Compute the difference of an integer and a field element.""" + return FE(a) - self + + def __mul__(self, a): + """Compute the product of two field elements (second may be int).""" + if isinstance(a, FE): + return FE(self._num * a._num, self._den * a._den) + return FE(self._num * a, self._den) + + def __rmul__(self, a): + """Compute the product of an integer with a field element.""" + return FE(a) * self + + def __truediv__(self, a): + """Compute the ratio of two field elements (second may be int).""" + return FE(self, a) + + def __pow__(self, a): + """Raise a field element to an integer power.""" + return FE(pow(self._num, a, FE.SIZE), pow(self._den, a, FE.SIZE)) + + def __neg__(self): + """Negate a field element.""" + return FE(-self._num, self._den) + + def __int__(self): + """Convert a field element to an integer in range 0..p-1. The result is cached.""" + if self._den != 1: + self._num = (self._num * pow(self._den, -1, FE.SIZE)) % FE.SIZE + self._den = 1 + return self._num + + def sqrt(self): + """Compute the square root of a field element if it exists (None otherwise). + + Due to the fact that our modulus is of the form (p % 4) == 3, the Tonelli-Shanks + algorithm (https://en.wikipedia.org/wiki/Tonelli-Shanks_algorithm) is simply + raising the argument to the power (p + 1) / 4. + + To see why: (p-1) % 2 = 0, so 2 divides the order of the multiplicative group, + and thus only half of the non-zero field elements are squares. An element a is + a (nonzero) square when Euler's criterion, a^((p-1)/2) = 1 (mod p), holds. We're + looking for x such that x^2 = a (mod p). Given a^((p-1)/2) = 1, that is equivalent + to x^2 = a^(1 + (p-1)/2) mod p. As (1 + (p-1)/2) is even, this is equivalent to + x = a^((1 + (p-1)/2)/2) mod p, or x = a^((p+1)/4) mod p.""" + v = int(self) + s = pow(v, (FE.SIZE + 1) // 4, FE.SIZE) + if s**2 % FE.SIZE == v: + return FE(s) + return None + + def is_square(self): + """Determine if this field element has a square root.""" + # A more efficient algorithm is possible here (Jacobi symbol). 
+ return self.sqrt() is not None + + def is_even(self): + """Determine whether this field element, represented as integer in 0..p-1, is even.""" + return int(self) & 1 == 0 + + def __eq__(self, a): + """Check whether two field elements are equal (second may be an int).""" + if isinstance(a, FE): + return (self._num * a._den - self._den * a._num) % FE.SIZE == 0 + return (self._num - self._den * a) % FE.SIZE == 0 + + def to_bytes(self): + """Convert a field element to a 32-byte array (BE byte order).""" + return int(self).to_bytes(32, 'big') + + @staticmethod + def from_bytes(b): + """Convert a 32-byte array to a field element (BE byte order, no overflow allowed).""" + v = int.from_bytes(b, 'big') + if v >= FE.SIZE: + return None + return FE(v) + + def __str__(self): + """Convert this field element to a 64 character hex string.""" + return f"{int(self):064x}" + + def __repr__(self): + """Get a string representation of this field element.""" + return f"FE(0x{int(self):x})" + + +class GE: + """Objects of this class represent secp256k1 group elements (curve points or infinity) + + Normal points on the curve have fields: + * x: the x coordinate (a field element) + * y: the y coordinate (a field element, satisfying y^2 = x^3 + 7) + * infinity: False + + The point at infinity has field: + * infinity: True + """ + + # Order of the group (number of points on the curve, plus 1 for infinity) + ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141 + + # Number of valid distinct x coordinates on the curve. + ORDER_HALF = ORDER // 2 + + def __init__(self, x=None, y=None): + """Initialize a group element with specified x and y coordinates, or infinity.""" + if x is None: + # Initialize as infinity. + assert y is None + self.infinity = True + else: + # Initialize as point on the curve (and check that it is). + fx = FE(x) + fy = FE(y) + assert fy**2 == fx**3 + 7 + self.infinity = False + self.x = fx + self.y = fy + + def __add__(self, a): + """Add two group elements together.""" + # Deal with infinity: a + infinity == infinity + a == a. + if self.infinity: + return a + if a.infinity: + return self + if self.x == a.x: + if self.y != a.y: + # A point added to its own negation is infinity. + assert self.y + a.y == 0 + return GE() + else: + # For identical inputs, use the tangent (doubling formula). + lam = (3 * self.x**2) / (2 * self.y) + else: + # For distinct inputs, use the line through both points (adding formula). + lam = (self.y - a.y) / (self.x - a.x) + # Determine point opposite to the intersection of that line with the curve. + x = lam**2 - (self.x + a.x) + y = lam * (self.x - x) - self.y + return GE(x, y) + + @staticmethod + def mul(*aps): + """Compute a (batch) scalar group element multiplication. + + GE.mul((a1, p1), (a2, p2), (a3, p3)) is identical to a1*p1 + a2*p2 + a3*p3, + but more efficient.""" + # Reduce all the scalars modulo order first (so we can deal with negatives etc). + naps = [(a % GE.ORDER, p) for a, p in aps] + # Start with point at infinity. + r = GE() + # Iterate over all bit positions, from high to low. + for i in range(255, -1, -1): + # Double what we have so far. + r = r + r + # Add then add the points for which the corresponding scalar bit is set. 
+ for (a, p) in naps: + if (a >> i) & 1: + r += p + return r + + def __rmul__(self, a): + """Multiply an integer with a group element.""" + if self == G: + return FAST_G.mul(a) + return GE.mul((a, self)) + + def __neg__(self): + """Compute the negation of a group element.""" + if self.infinity: + return self + return GE(self.x, -self.y) + + def to_bytes_compressed(self): + """Convert a non-infinite group element to 33-byte compressed encoding.""" + assert not self.infinity + return bytes([3 - self.y.is_even()]) + self.x.to_bytes() + + def to_bytes_uncompressed(self): + """Convert a non-infinite group element to 65-byte uncompressed encoding.""" + assert not self.infinity + return b'\x04' + self.x.to_bytes() + self.y.to_bytes() + + def to_bytes_xonly(self): + """Convert (the x coordinate of) a non-infinite group element to 32-byte xonly encoding.""" + assert not self.infinity + return self.x.to_bytes() + + @staticmethod + def lift_x(x): + """Return group element with specified field element as x coordinate (and even y).""" + y = (FE(x)**3 + 7).sqrt() + if y is None: + return None + if not y.is_even(): + y = -y + return GE(x, y) + + @staticmethod + def from_bytes(b): + """Convert a compressed or uncompressed encoding to a group element.""" + assert len(b) in (33, 65) + if len(b) == 33: + if b[0] != 2 and b[0] != 3: + return None + x = FE.from_bytes(b[1:]) + if x is None: + return None + r = GE.lift_x(x) + if r is None: + return None + if b[0] == 3: + r = -r + return r + else: + if b[0] != 4: + return None + x = FE.from_bytes(b[1:33]) + y = FE.from_bytes(b[33:]) + if y**2 != x**3 + 7: + return None + return GE(x, y) + + @staticmethod + def from_bytes_xonly(b): + """Convert a point given in xonly encoding to a group element.""" + assert len(b) == 32 + x = FE.from_bytes(b) + if x is None: + return None + return GE.lift_x(x) + + @staticmethod + def is_valid_x(x): + """Determine whether the provided field element is a valid X coordinate.""" + return (FE(x)**3 + 7).is_square() + + def __str__(self): + """Convert this group element to a string.""" + if self.infinity: + return "(inf)" + return f"({self.x},{self.y})" + + def __repr__(self): + """Get a string representation for this group element.""" + if self.infinity: + return "GE()" + return f"GE(0x{int(self.x):x},0x{int(self.y):x})" + +# The secp256k1 generator point +G = GE.lift_x(0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798) + + +class FastGEMul: + """Table for fast multiplication with a constant group element. + + Speed up scalar multiplication with a fixed point P by using a precomputed lookup table with + its powers of 2: + + table = [P, 2*P, 4*P, (2^3)*P, (2^4)*P, ..., (2^255)*P] + + During multiplication, the points corresponding to each bit set in the scalar are added up, + i.e. on average ~128 point additions take place. 
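+
+    Building the table costs 255 point doublings up front, so it only pays off
+    for a point that is multiplied many times, such as the generator G below.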
+ """ + + def __init__(self, p): + self.table = [p] # table[i] = (2^i) * p + for _ in range(255): + p = p + p + self.table.append(p) + + def mul(self, a): + result = GE() + a = a % GE.ORDER + for bit in range(a.bit_length()): + if a & (1 << bit): + result += self.table[bit] + return result + +# Precomputed table with multiples of G for fast multiplication +FAST_G = FastGEMul(G) diff --git a/warnet-demo/scenarios/test_framework/segwit_addr.py b/warnet-demo/scenarios/test_framework/segwit_addr.py new file mode 100644 index 0000000..861ca2b --- /dev/null +++ b/warnet-demo/scenarios/test_framework/segwit_addr.py @@ -0,0 +1,141 @@ +#!/usr/bin/env python3 +# Copyright (c) 2017 Pieter Wuille +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""Reference implementation for Bech32/Bech32m and segwit addresses.""" +import unittest +from enum import Enum + +CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l" +BECH32_CONST = 1 +BECH32M_CONST = 0x2bc830a3 + +class Encoding(Enum): + """Enumeration type to list the various supported encodings.""" + BECH32 = 1 + BECH32M = 2 + + +def bech32_polymod(values): + """Internal function that computes the Bech32 checksum.""" + generator = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3] + chk = 1 + for value in values: + top = chk >> 25 + chk = (chk & 0x1ffffff) << 5 ^ value + for i in range(5): + chk ^= generator[i] if ((top >> i) & 1) else 0 + return chk + + +def bech32_hrp_expand(hrp): + """Expand the HRP into values for checksum computation.""" + return [ord(x) >> 5 for x in hrp] + [0] + [ord(x) & 31 for x in hrp] + + +def bech32_verify_checksum(hrp, data): + """Verify a checksum given HRP and converted data characters.""" + check = bech32_polymod(bech32_hrp_expand(hrp) + data) + if check == BECH32_CONST: + return Encoding.BECH32 + elif check == BECH32M_CONST: + return Encoding.BECH32M + else: + return None + +def bech32_create_checksum(encoding, hrp, data): + """Compute the checksum values given HRP and data.""" + values = bech32_hrp_expand(hrp) + data + const = BECH32M_CONST if encoding == Encoding.BECH32M else BECH32_CONST + polymod = bech32_polymod(values + [0, 0, 0, 0, 0, 0]) ^ const + return [(polymod >> 5 * (5 - i)) & 31 for i in range(6)] + + +def bech32_encode(encoding, hrp, data): + """Compute a Bech32 or Bech32m string given HRP and data values.""" + combined = data + bech32_create_checksum(encoding, hrp, data) + return hrp + '1' + ''.join([CHARSET[d] for d in combined]) + + +def bech32_decode(bech): + """Validate a Bech32/Bech32m string, and determine HRP and data.""" + if ((any(ord(x) < 33 or ord(x) > 126 for x in bech)) or + (bech.lower() != bech and bech.upper() != bech)): + return (None, None, None) + bech = bech.lower() + pos = bech.rfind('1') + if pos < 1 or pos + 7 > len(bech) or len(bech) > 90: + return (None, None, None) + if not all(x in CHARSET for x in bech[pos+1:]): + return (None, None, None) + hrp = bech[:pos] + data = [CHARSET.find(x) for x in bech[pos+1:]] + encoding = bech32_verify_checksum(hrp, data) + if encoding is None: + return (None, None, None) + return (encoding, hrp, data[:-6]) + + +def convertbits(data, frombits, tobits, pad=True): + """General power-of-2 base conversion.""" + acc = 0 + bits = 0 + ret = [] + maxv = (1 << tobits) - 1 + max_acc = (1 << (frombits + tobits - 1)) - 1 + for value in data: + if value < 0 or (value >> frombits): + return None + acc = ((acc << frombits) | value) & max_acc + bits += frombits + while bits 
>= tobits: + bits -= tobits + ret.append((acc >> bits) & maxv) + if pad: + if bits: + ret.append((acc << (tobits - bits)) & maxv) + elif bits >= frombits or ((acc << (tobits - bits)) & maxv): + return None + return ret + + +def decode_segwit_address(hrp, addr): + """Decode a segwit address.""" + encoding, hrpgot, data = bech32_decode(addr) + if hrpgot != hrp: + return (None, None) + decoded = convertbits(data[1:], 5, 8, False) + if decoded is None or len(decoded) < 2 or len(decoded) > 40: + return (None, None) + if data[0] > 16: + return (None, None) + if data[0] == 0 and len(decoded) != 20 and len(decoded) != 32: + return (None, None) + if (data[0] == 0 and encoding != Encoding.BECH32) or (data[0] != 0 and encoding != Encoding.BECH32M): + return (None, None) + return (data[0], decoded) + + +def encode_segwit_address(hrp, witver, witprog): + """Encode a segwit address.""" + encoding = Encoding.BECH32 if witver == 0 else Encoding.BECH32M + ret = bech32_encode(encoding, hrp, [witver] + convertbits(witprog, 8, 5)) + if decode_segwit_address(hrp, ret) == (None, None): + return None + return ret + +class TestFrameworkScript(unittest.TestCase): + def test_segwit_encode_decode(self): + def test_python_bech32(addr): + hrp = addr[:4] + self.assertEqual(hrp, "bcrt") + (witver, witprog) = decode_segwit_address(hrp, addr) + self.assertEqual(encode_segwit_address(hrp, witver, witprog), addr) + + # P2WPKH + test_python_bech32('bcrt1qthmht0k2qnh3wy7336z05lu2km7emzfpm3wg46') + # P2WSH + test_python_bech32('bcrt1qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq3xueyj') + test_python_bech32('bcrt1qft5p2uhsdcdc3l2ua4ap5qqfg4pjaqlp250x7us7a8qqhrxrxfsqseac85') + # P2TR + test_python_bech32('bcrt1p0xlxvlhemja6c4dqv22uapctqupfhlxm9h8z3k2e72q4k9hcz7vqc8gma6') diff --git a/warnet-demo/scenarios/test_framework/siphash.py b/warnet-demo/scenarios/test_framework/siphash.py new file mode 100644 index 0000000..bd13b2c --- /dev/null +++ b/warnet-demo/scenarios/test_framework/siphash.py @@ -0,0 +1,65 @@ +#!/usr/bin/env python3 +# Copyright (c) 2016-2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""SipHash-2-4 implementation. + +This implements SipHash-2-4. For convenience, an interface taking 256-bit +integers is provided in addition to the one accepting generic data. 
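+
+Bitcoin Core uses SipHash-2-4 for, among other things, the short transaction
+IDs of compact block relay (BIP 152); this pure-Python version lets tests
+reproduce those hashes without the C++ implementation.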
+""" + +def rotl64(n, b): + return n >> (64 - b) | (n & ((1 << (64 - b)) - 1)) << b + + +def siphash_round(v0, v1, v2, v3): + v0 = (v0 + v1) & ((1 << 64) - 1) + v1 = rotl64(v1, 13) + v1 ^= v0 + v0 = rotl64(v0, 32) + v2 = (v2 + v3) & ((1 << 64) - 1) + v3 = rotl64(v3, 16) + v3 ^= v2 + v0 = (v0 + v3) & ((1 << 64) - 1) + v3 = rotl64(v3, 21) + v3 ^= v0 + v2 = (v2 + v1) & ((1 << 64) - 1) + v1 = rotl64(v1, 17) + v1 ^= v2 + v2 = rotl64(v2, 32) + return (v0, v1, v2, v3) + + +def siphash(k0, k1, data): + assert type(data) is bytes + v0 = 0x736f6d6570736575 ^ k0 + v1 = 0x646f72616e646f6d ^ k1 + v2 = 0x6c7967656e657261 ^ k0 + v3 = 0x7465646279746573 ^ k1 + c = 0 + t = 0 + for d in data: + t |= d << (8 * (c % 8)) + c = (c + 1) & 0xff + if (c & 7) == 0: + v3 ^= t + v0, v1, v2, v3 = siphash_round(v0, v1, v2, v3) + v0, v1, v2, v3 = siphash_round(v0, v1, v2, v3) + v0 ^= t + t = 0 + t = t | (c << 56) + v3 ^= t + v0, v1, v2, v3 = siphash_round(v0, v1, v2, v3) + v0, v1, v2, v3 = siphash_round(v0, v1, v2, v3) + v0 ^= t + v2 ^= 0xff + v0, v1, v2, v3 = siphash_round(v0, v1, v2, v3) + v0, v1, v2, v3 = siphash_round(v0, v1, v2, v3) + v0, v1, v2, v3 = siphash_round(v0, v1, v2, v3) + v0, v1, v2, v3 = siphash_round(v0, v1, v2, v3) + return v0 ^ v1 ^ v2 ^ v3 + + +def siphash256(k0, k1, num): + assert type(num) is int + return siphash(k0, k1, num.to_bytes(32, 'little')) diff --git a/warnet-demo/scenarios/test_framework/socks5.py b/warnet-demo/scenarios/test_framework/socks5.py new file mode 100644 index 0000000..0ca06a7 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/socks5.py @@ -0,0 +1,162 @@ +#!/usr/bin/env python3 +# Copyright (c) 2015-2019 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. 
+"""Dummy Socks5 server for testing.""" + +import socket +import threading +import queue +import logging + +logger = logging.getLogger("TestFramework.socks5") + +# Protocol constants +class Command: + CONNECT = 0x01 + +class AddressType: + IPV4 = 0x01 + DOMAINNAME = 0x03 + IPV6 = 0x04 + +# Utility functions +def recvall(s, n): + """Receive n bytes from a socket, or fail.""" + rv = bytearray() + while n > 0: + d = s.recv(n) + if not d: + raise IOError('Unexpected end of stream') + rv.extend(d) + n -= len(d) + return rv + +# Implementation classes +class Socks5Configuration(): + """Proxy configuration.""" + def __init__(self): + self.addr = None # Bind address (must be set) + self.af = socket.AF_INET # Bind address family + self.unauth = False # Support unauthenticated + self.auth = False # Support authentication + self.keep_alive = False # Do not automatically close connections + +class Socks5Command(): + """Information about an incoming socks5 command.""" + def __init__(self, cmd, atyp, addr, port, username, password): + self.cmd = cmd # Command (one of Command.*) + self.atyp = atyp # Address type (one of AddressType.*) + self.addr = addr # Address + self.port = port # Port to connect to + self.username = username + self.password = password + def __repr__(self): + return 'Socks5Command(%s,%s,%s,%s,%s,%s)' % (self.cmd, self.atyp, self.addr, self.port, self.username, self.password) + +class Socks5Connection(): + def __init__(self, serv, conn): + self.serv = serv + self.conn = conn + + def handle(self): + """Handle socks5 request according to RFC192.""" + try: + # Verify socks version + ver = recvall(self.conn, 1)[0] + if ver != 0x05: + raise IOError('Invalid socks version %i' % ver) + # Choose authentication method + nmethods = recvall(self.conn, 1)[0] + methods = bytearray(recvall(self.conn, nmethods)) + method = None + if 0x02 in methods and self.serv.conf.auth: + method = 0x02 # username/password + elif 0x00 in methods and self.serv.conf.unauth: + method = 0x00 # unauthenticated + if method is None: + raise IOError('No supported authentication method was offered') + # Send response + self.conn.sendall(bytearray([0x05, method])) + # Read authentication (optional) + username = None + password = None + if method == 0x02: + ver = recvall(self.conn, 1)[0] + if ver != 0x01: + raise IOError('Invalid auth packet version %i' % ver) + ulen = recvall(self.conn, 1)[0] + username = str(recvall(self.conn, ulen)) + plen = recvall(self.conn, 1)[0] + password = str(recvall(self.conn, plen)) + # Send authentication response + self.conn.sendall(bytearray([0x01, 0x00])) + + # Read connect request + ver, cmd, _, atyp = recvall(self.conn, 4) + if ver != 0x05: + raise IOError('Invalid socks version %i in connect request' % ver) + if cmd != Command.CONNECT: + raise IOError('Unhandled command %i in connect request' % cmd) + + if atyp == AddressType.IPV4: + addr = recvall(self.conn, 4) + elif atyp == AddressType.DOMAINNAME: + n = recvall(self.conn, 1)[0] + addr = recvall(self.conn, n) + elif atyp == AddressType.IPV6: + addr = recvall(self.conn, 16) + else: + raise IOError('Unknown address type %i' % atyp) + port_hi,port_lo = recvall(self.conn, 2) + port = (port_hi << 8) | port_lo + + # Send dummy response + self.conn.sendall(bytearray([0x05, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])) + + cmdin = Socks5Command(cmd, atyp, addr, port, username, password) + self.serv.queue.put(cmdin) + logger.debug('Proxy: %s', cmdin) + # Fall through to disconnect + except Exception as e: + logger.exception("socks5 
request handling failed.") + self.serv.queue.put(e) + finally: + if not self.serv.keep_alive: + self.conn.close() + +class Socks5Server(): + def __init__(self, conf): + self.conf = conf + self.s = socket.socket(conf.af) + self.s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) + self.s.bind(conf.addr) + self.s.listen(5) + self.running = False + self.thread = None + self.queue = queue.Queue() # report connections and exceptions to client + self.keep_alive = conf.keep_alive + + def run(self): + while self.running: + (sockconn, _) = self.s.accept() + if self.running: + conn = Socks5Connection(self, sockconn) + thread = threading.Thread(None, conn.handle) + thread.daemon = True + thread.start() + + def start(self): + assert not self.running + self.running = True + self.thread = threading.Thread(None, self.run) + self.thread.daemon = True + self.thread.start() + + def stop(self): + self.running = False + # connect to self to end run loop + s = socket.socket(self.conf.af) + s.connect(self.conf.addr) + s.close() + self.thread.join() diff --git a/warnet-demo/scenarios/test_framework/test_framework.py b/warnet-demo/scenarios/test_framework/test_framework.py new file mode 100644 index 0000000..4e6d245 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/test_framework.py @@ -0,0 +1,1010 @@ +#!/usr/bin/env python3 +# Copyright (c) 2014-2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""Base class for RPC testing.""" + +import configparser +from enum import Enum +import argparse +import logging +import os +import platform +import pdb +import random +import re +import shutil +import subprocess +import sys +import tempfile +import time + +from typing import List +from .address import create_deterministic_address_bcrt1_p2tr_op_true +from .authproxy import JSONRPCException +from . import coverage +from .p2p import NetworkThread +from .test_node import TestNode +from .util import ( + MAX_NODES, + PortSeed, + assert_equal, + check_json_precision, + get_datadir_path, + initialize_datadir, + p2p_port, + wait_until_helper_internal, +) + + +class TestStatus(Enum): + PASSED = 1 + FAILED = 2 + SKIPPED = 3 + +TEST_EXIT_PASSED = 0 +TEST_EXIT_FAILED = 1 +TEST_EXIT_SKIPPED = 77 + +TMPDIR_PREFIX = "bitcoin_func_test_" + + +class SkipTest(Exception): + """This exception is raised to skip a test""" + + def __init__(self, message): + self.message = message + + +class BitcoinTestMetaClass(type): + """Metaclass for BitcoinTestFramework. + + Ensures that any attempt to register a subclass of `BitcoinTestFramework` + adheres to a standard whereby the subclass overrides `set_test_params` and + `run_test` but DOES NOT override either `__init__` or `main`. If any of + those standards are violated, a ``TypeError`` is raised.""" + + def __new__(cls, clsname, bases, dct): + if not clsname == 'BitcoinTestFramework': + if not ('run_test' in dct and 'set_test_params' in dct): + raise TypeError("BitcoinTestFramework subclasses must override " + "'run_test' and 'set_test_params'") + if '__init__' in dct or 'main' in dct: + raise TypeError("BitcoinTestFramework subclasses may not override " + "'__init__' or 'main'") + + return super().__new__(cls, clsname, bases, dct) + + +class BitcoinTestFramework(metaclass=BitcoinTestMetaClass): + """Base class for a bitcoin test script. + + Individual bitcoin test scripts should subclass this class and override the set_test_params() and run_test() methods. 
+ + Individual tests can also override the following methods to customize the test setup: + + - add_options() + - setup_chain() + - setup_network() + - setup_nodes() + + The __init__() and main() methods should not be overridden. + + This class also contains various public and private helper methods.""" + + def __init__(self) -> None: + """Sets test framework defaults. Do not override this method. Instead, override the set_test_params() method""" + self.chain: str = 'regtest' + self.setup_clean_chain: bool = False + self.nodes: List[TestNode] = [] + self.extra_args = None + self.network_thread = None + self.rpc_timeout = 60 # Wait for up to 60 seconds for the RPC server to respond + self.supports_cli = True + self.bind_to_localhost_only = True + self.parse_args() + self.default_wallet_name = "default_wallet" if self.options.descriptors else "" + self.wallet_data_filename = "wallet.dat" + # Optional list of wallet names that can be set in set_test_params to + # create and import keys to. If unset, default is len(nodes) * + # [default_wallet_name]. If wallet names are None, wallet creation is + # skipped. If list is truncated, wallet creation is skipped and keys + # are not imported. + self.wallet_names = None + # By default the wallet is not required. Set to true by skip_if_no_wallet(). + # When False, we ignore wallet_names regardless of what it is. + self._requires_wallet = False + # Disable ThreadOpenConnections by default, so that adding entries to + # addrman will not result in automatic connections to them. + self.disable_autoconnect = True + self.set_test_params() + assert self.wallet_names is None or len(self.wallet_names) <= self.num_nodes + self.rpc_timeout = int(self.rpc_timeout * self.options.timeout_factor) # optionally, increase timeout by a factor + + def main(self): + """Main function. 
This should not be overridden by the subclass test scripts.""" + + assert hasattr(self, "num_nodes"), "Test must set self.num_nodes in set_test_params()" + + try: + self.setup() + self.run_test() + except JSONRPCException: + self.log.exception("JSONRPC error") + self.success = TestStatus.FAILED + except SkipTest as e: + self.log.warning("Test Skipped: %s" % e.message) + self.success = TestStatus.SKIPPED + except AssertionError: + self.log.exception("Assertion failed") + self.success = TestStatus.FAILED + except KeyError: + self.log.exception("Key error") + self.success = TestStatus.FAILED + except subprocess.CalledProcessError as e: + self.log.exception("Called Process failed with '{}'".format(e.output)) + self.success = TestStatus.FAILED + except Exception: + self.log.exception("Unexpected exception caught during testing") + self.success = TestStatus.FAILED + except KeyboardInterrupt: + self.log.warning("Exiting after keyboard interrupt") + self.success = TestStatus.FAILED + finally: + exit_code = self.shutdown() + sys.exit(exit_code) + + def parse_args(self): + previous_releases_path = os.getenv("PREVIOUS_RELEASES_DIR") or os.getcwd() + "/releases" + parser = argparse.ArgumentParser(usage="%(prog)s [options]") + parser.add_argument("--nocleanup", dest="nocleanup", default=False, action="store_true", + help="Leave bitcoinds and test.* datadir on exit or error") + parser.add_argument("--noshutdown", dest="noshutdown", default=False, action="store_true", + help="Don't stop bitcoinds after the test execution") + parser.add_argument("--cachedir", dest="cachedir", default=os.path.abspath(os.path.dirname(os.path.realpath(__file__)) + "/../../cache"), + help="Directory for caching pregenerated datadirs (default: %(default)s)") + parser.add_argument("--tmpdir", dest="tmpdir", help="Root directory for datadirs") + parser.add_argument("-l", "--loglevel", dest="loglevel", default="INFO", + help="log events at this level and higher to the console. Can be set to DEBUG, INFO, WARNING, ERROR or CRITICAL. Passing --loglevel DEBUG will output all logs to console. 
Note that logs at all levels are always written to the test_framework.log file in the temporary test directory.") + parser.add_argument("--tracerpc", dest="trace_rpc", default=False, action="store_true", + help="Print out all RPC calls as they are made") + parser.add_argument("--portseed", dest="port_seed", default=os.getpid(), type=int, + help="The seed to use for assigning port numbers (default: current process id)") + parser.add_argument("--previous-releases", dest="prev_releases", action="store_true", + default=os.path.isdir(previous_releases_path) and bool(os.listdir(previous_releases_path)), + help="Force test of previous releases (default: %(default)s)") + parser.add_argument("--coveragedir", dest="coveragedir", + help="Write tested RPC commands into this directory") + parser.add_argument("--configfile", dest="configfile", + default=os.path.abspath(os.path.dirname(os.path.realpath(__file__)) + "/../../config.ini"), + help="Location of the test framework config file (default: %(default)s)") + parser.add_argument("--pdbonfailure", dest="pdbonfailure", default=False, action="store_true", + help="Attach a python debugger if test fails") + parser.add_argument("--usecli", dest="usecli", default=False, action="store_true", + help="use bitcoin-cli instead of RPC for all commands") + parser.add_argument("--perf", dest="perf", default=False, action="store_true", + help="profile running nodes with perf for the duration of the test") + parser.add_argument("--valgrind", dest="valgrind", default=False, action="store_true", + help="run nodes under the valgrind memory error detector: expect at least a ~10x slowdown. valgrind 3.14 or later required. Does not apply to previous release binaries.") + parser.add_argument("--randomseed", type=int, + help="set a random seed for deterministically reproducing a previous test run") + parser.add_argument("--timeout-factor", dest="timeout_factor", type=float, help="adjust test timeouts by a factor. Setting it to 0 disables all timeouts") + parser.add_argument("--v2transport", dest="v2transport", default=False, action="store_true", + help="use BIP324 v2 connections between all nodes by default") + + self.add_options(parser) + # Running TestShell in a Jupyter notebook causes an additional -f argument + # To keep TestShell from failing with an "unrecognized argument" error, we add a dummy "-f" argument + # source: https://stackoverflow.com/questions/48796169/how-to-fix-ipykernel-launcher-py-error-unrecognized-arguments-in-jupyter/56349168#56349168 + parser.add_argument("-f", "--fff", help="a dummy argument to fool ipython", default="1") + self.options = parser.parse_args() + if self.options.timeout_factor == 0: + self.options.timeout_factor = 99999 + self.options.timeout_factor = self.options.timeout_factor or (4 if self.options.valgrind else 1) + self.options.previous_releases_path = previous_releases_path + + config = configparser.ConfigParser() + config.read_file(open(self.options.configfile)) + self.config = config + + if "descriptors" not in self.options: + # Wallet is not required by the test at all and the value of self.options.descriptors won't matter. + # It still needs to exist and be None in order for tests to work however. + # So set it to None to force -disablewallet, because the wallet is not needed. + self.options.descriptors = None + elif self.options.descriptors is None: + # Some wallet is either required or optionally used by the test. 
+ # Prefer SQLite unless it isn't available + if self.is_sqlite_compiled(): + self.options.descriptors = True + elif self.is_bdb_compiled(): + self.options.descriptors = False + else: + # If neither are compiled, tests requiring a wallet will be skipped and the value of self.options.descriptors won't matter + # It still needs to exist and be None in order for tests to work however. + # So set it to None, which will also set -disablewallet. + self.options.descriptors = None + + PortSeed.n = self.options.port_seed + + def set_binary_paths(self): + """Update self.options with the paths of all binaries from environment variables or their default values""" + + binaries = { + "bitcoind": ("bitcoind", "BITCOIND"), + "bitcoin-cli": ("bitcoincli", "BITCOINCLI"), + "bitcoin-util": ("bitcoinutil", "BITCOINUTIL"), + "bitcoin-wallet": ("bitcoinwallet", "BITCOINWALLET"), + } + for binary, [attribute_name, env_variable_name] in binaries.items(): + default_filename = os.path.join( + self.config["environment"]["BUILDDIR"], + "src", + binary + self.config["environment"]["EXEEXT"], + ) + setattr(self.options, attribute_name, os.getenv(env_variable_name, default=default_filename)) + + def setup(self): + """Call this method to start up the test framework object with options set.""" + + check_json_precision() + + self.options.cachedir = os.path.abspath(self.options.cachedir) + + config = self.config + + self.set_binary_paths() + + os.environ['PATH'] = os.pathsep.join([ + os.path.join(config['environment']['BUILDDIR'], 'src'), + os.path.join(config['environment']['BUILDDIR'], 'src', 'qt'), os.environ['PATH'] + ]) + + # Set up temp directory and start logging + if self.options.tmpdir: + self.options.tmpdir = os.path.abspath(self.options.tmpdir) + os.makedirs(self.options.tmpdir, exist_ok=False) + else: + self.options.tmpdir = tempfile.mkdtemp(prefix=TMPDIR_PREFIX) + self._start_logging() + + # Seed the PRNG. Note that test runs are reproducible if and only if + # a single thread accesses the PRNG. For more information, see + # https://docs.python.org/3/library/random.html#notes-on-reproducibility. + # The network thread shouldn't access random. If we need to change the + # network thread to access randomness, it should instantiate its own + # random.Random object. + seed = self.options.randomseed + + if seed is None: + seed = random.randrange(sys.maxsize) + else: + self.log.info("User supplied random seed {}".format(seed)) + + random.seed(seed) + self.log.info("PRNG seed is: {}".format(seed)) + + self.log.debug('Setting up network thread') + self.network_thread = NetworkThread() + self.network_thread.start() + + if self.options.usecli: + if not self.supports_cli: + raise SkipTest("--usecli specified but test does not support using CLI") + self.skip_if_no_cli() + self.skip_test_if_missing_module() + self.setup_chain() + self.setup_network() + + self.success = TestStatus.PASSED + + def shutdown(self): + """Call this method to shut down the test framework object.""" + + if self.success == TestStatus.FAILED and self.options.pdbonfailure: + print("Testcase failed. Attaching python debugger. Enter ? 
for help") + pdb.set_trace() + + self.log.debug('Closing down network thread') + self.network_thread.close() + if not self.options.noshutdown: + self.log.info("Stopping nodes") + if self.nodes: + self.stop_nodes() + else: + for node in self.nodes: + node.cleanup_on_exit = False + self.log.info("Note: bitcoinds were not stopped and may still be running") + + should_clean_up = ( + not self.options.nocleanup and + not self.options.noshutdown and + self.success != TestStatus.FAILED and + not self.options.perf + ) + if should_clean_up: + self.log.info("Cleaning up {} on exit".format(self.options.tmpdir)) + cleanup_tree_on_exit = True + elif self.options.perf: + self.log.warning("Not cleaning up dir {} due to perf data".format(self.options.tmpdir)) + cleanup_tree_on_exit = False + else: + self.log.warning("Not cleaning up dir {}".format(self.options.tmpdir)) + cleanup_tree_on_exit = False + + if self.success == TestStatus.PASSED: + self.log.info("Tests successful") + exit_code = TEST_EXIT_PASSED + elif self.success == TestStatus.SKIPPED: + self.log.info("Test skipped") + exit_code = TEST_EXIT_SKIPPED + else: + self.log.error("Test failed. Test logging available at %s/test_framework.log", self.options.tmpdir) + self.log.error("") + self.log.error("Hint: Call {} '{}' to consolidate all logs".format(os.path.normpath(os.path.dirname(os.path.realpath(__file__)) + "/../combine_logs.py"), self.options.tmpdir)) + self.log.error("") + self.log.error("If this failure happened unexpectedly or intermittently, please file a bug and provide a link or upload of the combined log.") + self.log.error(self.config['environment']['PACKAGE_BUGREPORT']) + self.log.error("") + exit_code = TEST_EXIT_FAILED + # Logging.shutdown will not remove stream- and filehandlers, so we must + # do it explicitly. Handlers are removed so the next test run can apply + # different log handler settings. + # See: https://docs.python.org/3/library/logging.html#logging.shutdown + for h in list(self.log.handlers): + h.flush() + h.close() + self.log.removeHandler(h) + rpc_logger = logging.getLogger("BitcoinRPC") + for h in list(rpc_logger.handlers): + h.flush() + rpc_logger.removeHandler(h) + if cleanup_tree_on_exit: + shutil.rmtree(self.options.tmpdir) + + self.nodes.clear() + return exit_code + + # Methods to override in subclass test scripts. + def set_test_params(self): + """Tests must override this method to change default values for number of nodes, topology, etc""" + raise NotImplementedError + + def add_options(self, parser): + """Override this method to add command-line options to the test""" + pass + + def skip_test_if_missing_module(self): + """Override this method to skip a test if a module is not compiled""" + pass + + def setup_chain(self): + """Override this method to customize blockchain setup""" + self.log.info("Initializing test directory " + self.options.tmpdir) + if self.setup_clean_chain: + self._initialize_chain_clean() + else: + self._initialize_chain() + + def setup_network(self): + """Override this method to customize test network topology""" + self.setup_nodes() + + # Connect the nodes as a "chain". This allows us + # to split the network between nodes 1 and 2 to get + # two halves that can work on competing chains. + # + # Topology looks like this: + # node0 <-- node1 <-- node2 <-- node3 + # + # If all nodes are in IBD (clean chain from genesis), node0 is assumed to be the source of blocks (miner). To + # ensure block propagation, all nodes will establish outgoing connections toward node0. 
+ # See fPreferredDownload in net_processing. + # + # If further outbound connections are needed, they can be added at the beginning of the test with e.g. + # self.connect_nodes(1, 2) + for i in range(self.num_nodes - 1): + self.connect_nodes(i + 1, i) + self.sync_all() + + def setup_nodes(self): + """Override this method to customize test node setup""" + self.add_nodes(self.num_nodes, self.extra_args) + self.start_nodes() + if self._requires_wallet: + self.import_deterministic_coinbase_privkeys() + if not self.setup_clean_chain: + for n in self.nodes: + assert_equal(n.getblockchaininfo()["blocks"], 199) + # To ensure that all nodes are out of IBD, the most recent block + # must have a timestamp not too old (see IsInitialBlockDownload()). + self.log.debug('Generate a block with current time') + block_hash = self.generate(self.nodes[0], 1, sync_fun=self.no_op)[0] + block = self.nodes[0].getblock(blockhash=block_hash, verbosity=0) + for n in self.nodes: + n.submitblock(block) + chain_info = n.getblockchaininfo() + assert_equal(chain_info["blocks"], 200) + assert_equal(chain_info["initialblockdownload"], False) + + def import_deterministic_coinbase_privkeys(self): + for i in range(self.num_nodes): + self.init_wallet(node=i) + + def init_wallet(self, *, node): + wallet_name = self.default_wallet_name if self.wallet_names is None else self.wallet_names[node] if node < len(self.wallet_names) else False + if wallet_name is not False: + n = self.nodes[node] + if wallet_name is not None: + n.createwallet(wallet_name=wallet_name, descriptors=self.options.descriptors, load_on_startup=True) + n.importprivkey(privkey=n.get_deterministic_priv_key().key, label='coinbase', rescan=True) + + def run_test(self): + """Tests must override this method to define test logic""" + raise NotImplementedError + + # Public helper methods. These can be accessed by the subclass test scripts. + + def add_wallet_options(self, parser, *, descriptors=True, legacy=True): + kwargs = {} + if descriptors + legacy == 1: + # If only one type can be chosen, set it as default + kwargs["default"] = descriptors + group = parser.add_mutually_exclusive_group( + # If only one type is allowed, require it to be set in test_runner.py + required=os.getenv("REQUIRE_WALLET_TYPE_SET") == "1" and "default" in kwargs) + if descriptors: + group.add_argument("--descriptors", action='store_const', const=True, **kwargs, + help="Run test using a descriptor wallet", dest='descriptors') + if legacy: + group.add_argument("--legacy-wallet", action='store_const', const=False, **kwargs, + help="Run test using legacy wallets", dest='descriptors') + + def add_nodes(self, num_nodes: int, extra_args=None, *, rpchost=None, binary=None, binary_cli=None, versions=None): + """Instantiate TestNode objects. + + Should only be called once after the nodes have been specified in + set_test_params().""" + def get_bin_from_version(version, bin_name, bin_default): + if not version: + return bin_default + if version > 219999: + # Starting at client version 220000 the first two digits represent + # the major version, e.g. v22.0 instead of v0.22.0. + version *= 100 + return os.path.join( + self.options.previous_releases_path, + re.sub( + r'\.0$' if version <= 219999 else r'(\.0){1,2}$', + '', # Remove trailing dot for point releases, after 22.0 also remove double trailing dot. 
+ 'v{}.{}.{}.{}'.format( + (version % 100000000) // 1000000, + (version % 1000000) // 10000, + (version % 10000) // 100, + (version % 100) // 1, + ), + ), + 'bin', + bin_name, + ) + + if self.bind_to_localhost_only: + extra_confs = [["bind=127.0.0.1"]] * num_nodes + else: + extra_confs = [[]] * num_nodes + if extra_args is None: + extra_args = [[]] * num_nodes + if versions is None: + versions = [None] * num_nodes + if binary is None: + binary = [get_bin_from_version(v, 'bitcoind', self.options.bitcoind) for v in versions] + if binary_cli is None: + binary_cli = [get_bin_from_version(v, 'bitcoin-cli', self.options.bitcoincli) for v in versions] + assert_equal(len(extra_confs), num_nodes) + assert_equal(len(extra_args), num_nodes) + assert_equal(len(versions), num_nodes) + assert_equal(len(binary), num_nodes) + assert_equal(len(binary_cli), num_nodes) + for i in range(num_nodes): + args = list(extra_args[i]) + if self.options.v2transport and ("-v2transport=0" not in args): + args.append("-v2transport=1") + test_node_i = TestNode( + i, + get_datadir_path(self.options.tmpdir, i), + chain=self.chain, + rpchost=rpchost, + timewait=self.rpc_timeout, + timeout_factor=self.options.timeout_factor, + bitcoind=binary[i], + bitcoin_cli=binary_cli[i], + version=versions[i], + coverage_dir=self.options.coveragedir, + cwd=self.options.tmpdir, + extra_conf=extra_confs[i], + extra_args=args, + use_cli=self.options.usecli, + start_perf=self.options.perf, + use_valgrind=self.options.valgrind, + descriptors=self.options.descriptors, + ) + self.nodes.append(test_node_i) + if not test_node_i.version_is_at_least(170000): + # adjust conf for pre 17 + test_node_i.replace_in_config([('[regtest]', '')]) + + def start_node(self, i, *args, **kwargs): + """Start a bitcoind""" + + node = self.nodes[i] + + node.start(*args, **kwargs) + node.wait_for_rpc_connection() + + if self.options.coveragedir is not None: + coverage.write_all_rpc_commands(self.options.coveragedir, node.rpc) + + def start_nodes(self, extra_args=None, *args, **kwargs): + """Start multiple bitcoinds""" + + if extra_args is None: + extra_args = [None] * self.num_nodes + assert_equal(len(extra_args), self.num_nodes) + try: + for i, node in enumerate(self.nodes): + node.start(extra_args[i], *args, **kwargs) + for node in self.nodes: + node.wait_for_rpc_connection() + except Exception: + # If one node failed to start, stop the others + self.stop_nodes() + raise + + if self.options.coveragedir is not None: + for node in self.nodes: + coverage.write_all_rpc_commands(self.options.coveragedir, node.rpc) + + def stop_node(self, i, expected_stderr='', wait=0): + """Stop a bitcoind test node""" + self.nodes[i].stop_node(expected_stderr, wait=wait) + + def stop_nodes(self, wait=0): + """Stop multiple bitcoind test nodes""" + for node in self.nodes: + # Issue RPC to stop nodes + node.stop_node(wait=wait, wait_until_stopped=False) + + for node in self.nodes: + # Wait for nodes to stop + node.wait_until_stopped() + + def restart_node(self, i, extra_args=None): + """Stop and start a test node""" + self.stop_node(i) + self.start_node(i, extra_args) + + def wait_for_node_exit(self, i, timeout): + self.nodes[i].process.wait(timeout) + + def connect_nodes(self, a, b, *, peer_advertises_v2=None, wait_for_connect: bool = True): + """ + Kwargs: + wait_for_connect: if True, block until the nodes are verified as connected. 
You might + want to disable this when using -stopatheight with one of the connected nodes, + since there will be a race between the actual connection and performing + the assertions before one node shuts down. + """ + from_connection = self.nodes[a] + to_connection = self.nodes[b] + from_num_peers = 1 + len(from_connection.getpeerinfo()) + to_num_peers = 1 + len(to_connection.getpeerinfo()) + ip_port = "127.0.0.1:" + str(p2p_port(b)) + + if peer_advertises_v2 is None: + peer_advertises_v2 = self.options.v2transport + + if peer_advertises_v2: + from_connection.addnode(node=ip_port, command="onetry", v2transport=True) + else: + # skip the optional third argument (default false) for + # compatibility with older clients + from_connection.addnode(ip_port, "onetry") + + if not wait_for_connect: + return + + # poll until version handshake complete to avoid race conditions + # with transaction relaying + # See comments in net_processing: + # * Must have a version message before anything else + # * Must have a verack message before anything else + self.wait_until(lambda: sum(peer['version'] != 0 for peer in from_connection.getpeerinfo()) == from_num_peers) + self.wait_until(lambda: sum(peer['version'] != 0 for peer in to_connection.getpeerinfo()) == to_num_peers) + self.wait_until(lambda: sum(peer['bytesrecv_per_msg'].pop('verack', 0) >= 21 for peer in from_connection.getpeerinfo()) == from_num_peers) + self.wait_until(lambda: sum(peer['bytesrecv_per_msg'].pop('verack', 0) >= 21 for peer in to_connection.getpeerinfo()) == to_num_peers) + # The message bytes are counted before processing the message, so make + # sure it was fully processed by waiting for a ping. + self.wait_until(lambda: sum(peer["bytesrecv_per_msg"].pop("pong", 0) >= 29 for peer in from_connection.getpeerinfo()) == from_num_peers) + self.wait_until(lambda: sum(peer["bytesrecv_per_msg"].pop("pong", 0) >= 29 for peer in to_connection.getpeerinfo()) == to_num_peers) + + def disconnect_nodes(self, a, b): + def disconnect_nodes_helper(node_a, node_b): + def get_peer_ids(from_connection, node_num): + result = [] + for peer in from_connection.getpeerinfo(): + if "testnode{}".format(node_num) in peer['subver']: + result.append(peer['id']) + return result + + peer_ids = get_peer_ids(node_a, node_b.index) + if not peer_ids: + self.log.warning("disconnect_nodes: {} and {} were not connected".format( + node_a.index, + node_b.index, + )) + return + for peer_id in peer_ids: + try: + node_a.disconnectnode(nodeid=peer_id) + except JSONRPCException as e: + # If this node is disconnected between calculating the peer id + # and issuing the disconnect, don't worry about it. + # This avoids a race condition if we're mass-disconnecting peers. + if e.error['code'] != -29: # RPC_CLIENT_NODE_NOT_CONNECTED + raise + + # wait to disconnect + self.wait_until(lambda: not get_peer_ids(node_a, node_b.index), timeout=5) + self.wait_until(lambda: not get_peer_ids(node_b, node_a.index), timeout=5) + + disconnect_nodes_helper(self.nodes[a], self.nodes[b]) + + def split_network(self): + """ + Split the network of four nodes into nodes 0/1 and 2/3. + """ + self.disconnect_nodes(1, 2) + self.sync_all(self.nodes[:2]) + self.sync_all(self.nodes[2:]) + + def join_network(self): + """ + Join the (previously split) network halves together. 
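+
+        A minimal sketch of how a fork test might use this together with
+        split_network() (illustrative only; generate() and no_op() are the
+        helpers defined below):
+
+            self.split_network()                                  # halves: 0/1 and 2/3
+            self.generate(self.nodes[0], 1, sync_fun=self.no_op)  # short branch
+            self.generate(self.nodes[2], 2, sync_fun=self.no_op)  # longer branch
+            self.join_network()                                   # reconnect and sync all nodes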
+ """ + self.connect_nodes(1, 2) + self.sync_all() + + def no_op(self): + pass + + def generate(self, generator, *args, sync_fun=None, **kwargs): + blocks = generator.generate(*args, invalid_call=False, **kwargs) + sync_fun() if sync_fun else self.sync_all() + return blocks + + def generateblock(self, generator, *args, sync_fun=None, **kwargs): + blocks = generator.generateblock(*args, invalid_call=False, **kwargs) + sync_fun() if sync_fun else self.sync_all() + return blocks + + def generatetoaddress(self, generator, *args, sync_fun=None, **kwargs): + blocks = generator.generatetoaddress(*args, invalid_call=False, **kwargs) + sync_fun() if sync_fun else self.sync_all() + return blocks + + def generatetodescriptor(self, generator, *args, sync_fun=None, **kwargs): + blocks = generator.generatetodescriptor(*args, invalid_call=False, **kwargs) + sync_fun() if sync_fun else self.sync_all() + return blocks + + def sync_blocks(self, nodes=None, wait=1, timeout=60): + """ + Wait until everybody has the same tip. + sync_blocks needs to be called with an rpc_connections set that has least + one node already synced to the latest, stable tip, otherwise there's a + chance it might return before all nodes are stably synced. + """ + rpc_connections = nodes or self.nodes + timeout = int(timeout * self.options.timeout_factor) + stop_time = time.time() + timeout + while time.time() <= stop_time: + best_hash = [x.getbestblockhash() for x in rpc_connections] + if best_hash.count(best_hash[0]) == len(rpc_connections): + return + # Check that each peer has at least one connection + assert (all([len(x.getpeerinfo()) for x in rpc_connections])) + time.sleep(wait) + raise AssertionError("Block sync timed out after {}s:{}".format( + timeout, + "".join("\n {!r}".format(b) for b in best_hash), + )) + + def sync_mempools(self, nodes=None, wait=1, timeout=60, flush_scheduler=True): + """ + Wait until everybody has the same transactions in their memory + pools + """ + rpc_connections = nodes or self.nodes + timeout = int(timeout * self.options.timeout_factor) + stop_time = time.time() + timeout + while time.time() <= stop_time: + pool = [set(r.getrawmempool()) for r in rpc_connections] + if pool.count(pool[0]) == len(rpc_connections): + if flush_scheduler: + for r in rpc_connections: + r.syncwithvalidationinterfacequeue() + return + # Check that each peer has at least one connection + assert (all([len(x.getpeerinfo()) for x in rpc_connections])) + time.sleep(wait) + raise AssertionError("Mempool sync timed out after {}s:{}".format( + timeout, + "".join("\n {!r}".format(m) for m in pool), + )) + + def sync_all(self, nodes=None): + self.sync_blocks(nodes) + self.sync_mempools(nodes) + + def wait_until(self, test_function, timeout=60): + return wait_until_helper_internal(test_function, timeout=timeout, timeout_factor=self.options.timeout_factor) + + # Private helper methods. These should not be accessed by the subclass test scripts. + + def _start_logging(self): + # Add logger and logging handlers + self.log = logging.getLogger('TestFramework') + self.log.setLevel(logging.DEBUG) + # Create file handler to log all messages + fh = logging.FileHandler(self.options.tmpdir + '/test_framework.log', encoding='utf-8') + fh.setLevel(logging.DEBUG) + # Create console handler to log messages to stderr. By default this logs only error messages, but can be configured with --loglevel. + ch = logging.StreamHandler(sys.stdout) + # User can provide log level as a number or string (eg DEBUG). 
loglevel was caught as a string, so try to convert it to an int + ll = int(self.options.loglevel) if self.options.loglevel.isdigit() else self.options.loglevel.upper() + ch.setLevel(ll) + # Format logs the same as bitcoind's debug.log with microprecision (so log files can be concatenated and sorted) + formatter = logging.Formatter(fmt='%(asctime)s.%(msecs)03d000Z %(name)s (%(levelname)s): %(message)s', datefmt='%Y-%m-%dT%H:%M:%S') + formatter.converter = time.gmtime + fh.setFormatter(formatter) + ch.setFormatter(formatter) + # add the handlers to the logger + self.log.addHandler(fh) + self.log.addHandler(ch) + + if self.options.trace_rpc: + rpc_logger = logging.getLogger("BitcoinRPC") + rpc_logger.setLevel(logging.DEBUG) + rpc_handler = logging.StreamHandler(sys.stdout) + rpc_handler.setLevel(logging.DEBUG) + rpc_logger.addHandler(rpc_handler) + + def _initialize_chain(self): + """Initialize a pre-mined blockchain for use by the test. + + Create a cache of a 199-block-long chain + Afterward, create num_nodes copies from the cache.""" + + CACHE_NODE_ID = 0 # Use node 0 to create the cache for all other nodes + cache_node_dir = get_datadir_path(self.options.cachedir, CACHE_NODE_ID) + assert self.num_nodes <= MAX_NODES + + if not os.path.isdir(cache_node_dir): + self.log.debug("Creating cache directory {}".format(cache_node_dir)) + + initialize_datadir(self.options.cachedir, CACHE_NODE_ID, self.chain, self.disable_autoconnect) + self.nodes.append( + TestNode( + CACHE_NODE_ID, + cache_node_dir, + chain=self.chain, + extra_conf=["bind=127.0.0.1"], + extra_args=['-disablewallet'], + rpchost=None, + timewait=self.rpc_timeout, + timeout_factor=self.options.timeout_factor, + bitcoind=self.options.bitcoind, + bitcoin_cli=self.options.bitcoincli, + coverage_dir=None, + cwd=self.options.tmpdir, + descriptors=self.options.descriptors, + )) + self.start_node(CACHE_NODE_ID) + cache_node = self.nodes[CACHE_NODE_ID] + + # Wait for RPC connections to be ready + cache_node.wait_for_rpc_connection() + + # Set a time in the past, so that blocks don't end up in the future + cache_node.setmocktime(cache_node.getblockheader(cache_node.getbestblockhash())['time']) + + # Create a 199-block-long chain; each of the 3 first nodes + # gets 25 mature blocks and 25 immature. + # The 4th address gets 25 mature and only 24 immature blocks so that the very last + # block in the cache does not age too much (have an old tip age). + # This is needed so that we are out of IBD when the test starts, + # see the tip age check in IsInitialBlockDownload(). 
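+        # Put differently: the loop below runs 8 rounds over the 4 addresses,
+        # so each address is used twice; every round generates 25 blocks except
+        # the final one, which generates 24, for 8 * 25 - 1 = 199 blocks total.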
+ gen_addresses = [k.address for k in TestNode.PRIV_KEYS][:3] + [create_deterministic_address_bcrt1_p2tr_op_true()[0]] + assert_equal(len(gen_addresses), 4) + for i in range(8): + self.generatetoaddress( + cache_node, + nblocks=25 if i != 7 else 24, + address=gen_addresses[i % len(gen_addresses)], + ) + + assert_equal(cache_node.getblockchaininfo()["blocks"], 199) + + # Shut it down, and clean up cache directories: + self.stop_nodes() + self.nodes = [] + + def cache_path(*paths): + return os.path.join(cache_node_dir, self.chain, *paths) + + os.rmdir(cache_path('wallets')) # Remove empty wallets dir + for entry in os.listdir(cache_path()): + if entry not in ['chainstate', 'blocks', 'indexes']: # Only indexes, chainstate and blocks folders + os.remove(cache_path(entry)) + + for i in range(self.num_nodes): + self.log.debug("Copy cache directory {} to node {}".format(cache_node_dir, i)) + to_dir = get_datadir_path(self.options.tmpdir, i) + shutil.copytree(cache_node_dir, to_dir) + initialize_datadir(self.options.tmpdir, i, self.chain, self.disable_autoconnect) # Overwrite port/rpcport in bitcoin.conf + + def _initialize_chain_clean(self): + """Initialize empty blockchain for use by the test. + + Create an empty blockchain and num_nodes wallets. + Useful if a test case wants complete control over initialization.""" + for i in range(self.num_nodes): + initialize_datadir(self.options.tmpdir, i, self.chain, self.disable_autoconnect) + + def skip_if_no_py3_zmq(self): + """Attempt to import the zmq package and skip the test if the import fails.""" + try: + import zmq # noqa + except ImportError: + raise SkipTest("python3-zmq module not available.") + + def skip_if_no_py_sqlite3(self): + """Attempt to import the sqlite3 package and skip the test if the import fails.""" + try: + import sqlite3 # noqa + except ImportError: + raise SkipTest("sqlite3 module not available.") + + def skip_if_no_python_bcc(self): + """Attempt to import the bcc package and skip the tests if the import fails.""" + try: + import bcc # type: ignore[import] # noqa: F401 + except ImportError: + raise SkipTest("bcc python module not available") + + def skip_if_no_bitcoind_tracepoints(self): + """Skip the running test if bitcoind has not been compiled with USDT tracepoint support.""" + if not self.is_usdt_compiled(): + raise SkipTest("bitcoind has not been built with USDT tracepoints enabled.") + + def skip_if_no_bpf_permissions(self): + """Skip the running test if we don't have permissions to do BPF syscalls and load BPF maps.""" + # check for 'root' permissions + if os.geteuid() != 0: + raise SkipTest("no permissions to use BPF (please review the tests carefully before running them with higher privileges)") + + def skip_if_platform_not_linux(self): + """Skip the running test if we are not on a Linux platform""" + if platform.system() != "Linux": + raise SkipTest("not on a Linux system") + + def skip_if_platform_not_posix(self): + """Skip the running test if we are not on a POSIX platform""" + if os.name != 'posix': + raise SkipTest("not on a POSIX system") + + def skip_if_no_bitcoind_zmq(self): + """Skip the running test if bitcoind has not been compiled with zmq support.""" + if not self.is_zmq_compiled(): + raise SkipTest("bitcoind has not been built with zmq enabled.") + + def skip_if_no_wallet(self): + """Skip the running test if wallet has not been compiled.""" + self._requires_wallet = True + if not self.is_wallet_compiled(): + raise SkipTest("wallet has not been compiled.") + if self.options.descriptors: + 
self.skip_if_no_sqlite() + else: + self.skip_if_no_bdb() + + def skip_if_no_sqlite(self): + """Skip the running test if sqlite has not been compiled.""" + if not self.is_sqlite_compiled(): + raise SkipTest("sqlite has not been compiled.") + + def skip_if_no_bdb(self): + """Skip the running test if BDB has not been compiled.""" + if not self.is_bdb_compiled(): + raise SkipTest("BDB has not been compiled.") + + def skip_if_no_wallet_tool(self): + """Skip the running test if bitcoin-wallet has not been compiled.""" + if not self.is_wallet_tool_compiled(): + raise SkipTest("bitcoin-wallet has not been compiled") + + def skip_if_no_bitcoin_util(self): + """Skip the running test if bitcoin-util has not been compiled.""" + if not self.is_bitcoin_util_compiled(): + raise SkipTest("bitcoin-util has not been compiled") + + def skip_if_no_cli(self): + """Skip the running test if bitcoin-cli has not been compiled.""" + if not self.is_cli_compiled(): + raise SkipTest("bitcoin-cli has not been compiled.") + + def skip_if_no_previous_releases(self): + """Skip the running test if previous releases are not available.""" + if not self.has_previous_releases(): + raise SkipTest("previous releases not available or disabled") + + def has_previous_releases(self): + """Checks whether previous releases are present and enabled.""" + if not os.path.isdir(self.options.previous_releases_path): + if self.options.prev_releases: + raise AssertionError("Force test of previous releases but releases missing: {}".format( + self.options.previous_releases_path)) + return self.options.prev_releases + + def skip_if_no_external_signer(self): + """Skip the running test if external signer support has not been compiled.""" + if not self.is_external_signer_compiled(): + raise SkipTest("external signer support has not been compiled.") + + def is_cli_compiled(self): + """Checks whether bitcoin-cli was compiled.""" + return self.config["components"].getboolean("ENABLE_CLI") + + def is_external_signer_compiled(self): + """Checks whether external signer support was compiled.""" + return self.config["components"].getboolean("ENABLE_EXTERNAL_SIGNER") + + def is_wallet_compiled(self): + """Checks whether the wallet module was compiled.""" + return self.config["components"].getboolean("ENABLE_WALLET") + + def is_specified_wallet_compiled(self): + """Checks whether wallet support for the specified type + (legacy or descriptor wallet) was compiled.""" + if self.options.descriptors: + return self.is_sqlite_compiled() + else: + return self.is_bdb_compiled() + + def is_wallet_tool_compiled(self): + """Checks whether bitcoin-wallet was compiled.""" + return self.config["components"].getboolean("ENABLE_WALLET_TOOL") + + def is_bitcoin_util_compiled(self): + """Checks whether bitcoin-util was compiled.""" + return self.config["components"].getboolean("ENABLE_BITCOIN_UTIL") + + def is_zmq_compiled(self): + """Checks whether the zmq module was compiled.""" + return self.config["components"].getboolean("ENABLE_ZMQ") + + def is_usdt_compiled(self): + """Checks whether the USDT tracepoints were compiled.""" + return self.config["components"].getboolean("ENABLE_USDT_TRACEPOINTS") + + def is_sqlite_compiled(self): + """Checks whether the wallet module was compiled with Sqlite support.""" + return self.config["components"].getboolean("USE_SQLITE") + + def is_bdb_compiled(self): + """Checks whether the wallet module was compiled with BDB support.""" + return self.config["components"].getboolean("USE_BDB") + + def has_blockfile(self, node, filenum: str): + 
blocksdir = node.datadir_path / self.chain / 'blocks' + return (blocksdir / f"blk{filenum}.dat").is_file() diff --git a/warnet-demo/scenarios/test_framework/test_node.py b/warnet-demo/scenarios/test_framework/test_node.py new file mode 100644 index 0000000..efbb900 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/test_node.py @@ -0,0 +1,892 @@ +#!/usr/bin/env python3 +# Copyright (c) 2017-2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""Class for bitcoind node under test""" + +import contextlib +import decimal +import errno +from enum import Enum +import http.client +import json +import logging +import os +import re +import subprocess +import tempfile +import time +import urllib.parse +import collections +import shlex +import sys +from pathlib import Path + +from .authproxy import ( + JSONRPCException, + serialization_fallback, +) +from .descriptors import descsum_create +from .p2p import P2P_SUBVERSION +from .util import ( + MAX_NODES, + assert_equal, + append_config, + delete_cookie_file, + get_auth_cookie, + get_rpc_proxy, + rpc_url, + wait_until_helper_internal, + p2p_port, +) + +BITCOIND_PROC_WAIT_TIMEOUT = 60 + + +class FailedToStartError(Exception): + """Raised when a node fails to start correctly.""" + + +class ErrorMatch(Enum): + FULL_TEXT = 1 + FULL_REGEX = 2 + PARTIAL_REGEX = 3 + + +class TestNode(): + """A class for representing a bitcoind node under test. + + This class contains: + + - state about the node (whether it's running, etc) + - a Python subprocess.Popen object representing the running process + - an RPC connection to the node + - one or more P2P connections to the node + + + To make things easier for the test writer, any unrecognised messages will + be dispatched to the RPC connection.""" + + def __init__(self, i, datadir_path, *, chain, rpchost, timewait, timeout_factor, bitcoind, bitcoin_cli, coverage_dir, cwd, extra_conf=None, extra_args=None, use_cli=False, start_perf=False, use_valgrind=False, version=None, descriptors=False): + """ + Kwargs: + start_perf (bool): If True, begin profiling the node with `perf` as soon as + the node starts. + """ + + self.index = i + self.p2p_conn_index = 1 + self.datadir_path = datadir_path + self.bitcoinconf = self.datadir_path / "bitcoin.conf" + self.stdout_dir = self.datadir_path / "stdout" + self.stderr_dir = self.datadir_path / "stderr" + self.chain = chain + self.rpchost = rpchost + self.rpc_timeout = timewait + self.binary = bitcoind + self.coverage_dir = coverage_dir + self.cwd = cwd + self.descriptors = descriptors + if extra_conf is not None: + append_config(self.datadir_path, extra_conf) + # Most callers will just need to add extra args to the standard list below. + # For those callers that need more flexibility, they can just set the args property directly. + # Note that common args are set in the config file (see initialize_datadir) + self.extra_args = extra_args + self.version = version + # Configuration for logging is set as command-line args rather than in the bitcoin.conf file. + # This means that starting a bitcoind using the temp dir to debug a failed test won't + # spam debug.log. 
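+        # As an illustration (hypothetical datadir path), node 0 ends up being
+        # launched roughly as:
+        #   bitcoind -datadir=/tmp/<tmpdir>/node0 -logtimemicros -debug \
+        #     -debugexclude=libevent -debugexclude=leveldb -debugexclude=rand \
+        #     -uacomment=testnode0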
+        self.args = [
+            self.binary,
+            f"-datadir={self.datadir_path}",
+            "-logtimemicros",
+            "-debug",
+            "-debugexclude=libevent",
+            "-debugexclude=leveldb",
+            "-debugexclude=rand",
+            "-uacomment=testnode%d" % i,
+        ]
+        if self.descriptors is None:
+            self.args.append("-disablewallet")
+
+        # Use valgrind, except for previous release binaries
+        if use_valgrind and version is None:
+            default_suppressions_file = Path(__file__).parents[3] / "contrib" / "valgrind.supp"
+            suppressions_file = os.getenv("VALGRIND_SUPPRESSIONS_FILE",
+                                          default_suppressions_file)
+            self.args = ["valgrind", "--suppressions={}".format(suppressions_file),
+                         "--gen-suppressions=all", "--exit-on-first-error=yes",
+                         "--error-exitcode=1", "--quiet"] + self.args
+
+        if self.version_is_at_least(190000):
+            self.args.append("-logthreadnames")
+        if self.version_is_at_least(219900):
+            self.args.append("-logsourcelocations")
+        if self.version_is_at_least(239000):
+            self.args.append("-loglevel=trace")
+
+        self.cli = TestNodeCLI(bitcoin_cli, self.datadir_path)
+        self.use_cli = use_cli
+        self.start_perf = start_perf
+
+        self.running = False
+        self.process = None
+        self.rpc_connected = False
+        self.rpc = None
+        self.miniwallet = None
+        self.url = None
+        self.log = logging.getLogger('TestFramework.node%d' % i)
+        self.cleanup_on_exit = True  # Whether to kill the node when this object goes away
+        # Cache perf subprocesses here by their data output filename.
+        self.perf_subprocesses = {}
+
+        self.p2ps = []
+        self.timeout_factor = timeout_factor
+
+        self.mocktime = None
+
+    AddressKeyPair = collections.namedtuple('AddressKeyPair', ['address', 'key'])
+    PRIV_KEYS = [
+        # address , privkey
+        AddressKeyPair('mjTkW3DjgyZck4KbiRusZsqTgaYTxdSz6z', 'cVpF924EspNh8KjYsfhgY96mmxvT6DgdWiTYMtMjuM74hJaU5psW'),
+        AddressKeyPair('msX6jQXvxiNhx3Q62PKeLPrhrqZQdSimTg', 'cUxsWyKyZ9MAQTaAhUQWJmBbSvHMwSmuv59KgxQV7oZQU3PXN3KE'),
+        AddressKeyPair('mnonCMyH9TmAsSj3M59DsbH8H63U3RKoFP', 'cTrh7dkEAeJd6b3MRX9bZK8eRmNqVCMH3LSUkE3dSFDyzjU38QxK'),
+        AddressKeyPair('mqJupas8Dt2uestQDvV2NH3RU8uZh2dqQR', 'cVuKKa7gbehEQvVq717hYcbE9Dqmq7KEBKqWgWrYBa2CKKrhtRim'),
+        AddressKeyPair('msYac7Rvd5ywm6pEmkjyxhbCDKqWsVeYws', 'cQDCBuKcjanpXDpCqacNSjYfxeQj8G6CAtH1Dsk3cXyqLNC4RPuh'),
+        AddressKeyPair('n2rnuUnwLgXqf9kk2kjvVm8R5BZK1yxQBi', 'cQakmfPSLSqKHyMFGwAqKHgWUiofJCagVGhiB4KCainaeCSxeyYq'),
+        AddressKeyPair('myzuPxRwsf3vvGzEuzPfK9Nf2RfwauwYe6', 'cQMpDLJwA8DBe9NcQbdoSb1BhmFxVjWD5gRyrLZCtpuF9Zi3a9RK'),
+        AddressKeyPair('mumwTaMtbxEPUswmLBBN3vM9oGRtGBrys8', 'cSXmRKXVcoouhNNVpcNKFfxsTsToY5pvB9DVsFksF1ENunTzRKsy'),
+        AddressKeyPair('mpV7aGShMkJCZgbW7F6iZgrvuPHjZjH9qg', 'cSoXt6tm3pqy43UMabY6eUTmR3eSUYFtB2iNQDGgb3VUnRsQys2k'),
+        AddressKeyPair('mq4fBNdckGtvY2mijd9am7DRsbRB4KjUkf', 'cN55daf1HotwBAgAKWVgDcoppmUNDtQSfb7XLutTLeAgVc3u8hik'),
+        AddressKeyPair('mpFAHDjX7KregM3rVotdXzQmkbwtbQEnZ6', 'cT7qK7g1wkYEMvKowd2ZrX1E5f6JQ7TM246UfqbCiyF7kZhorpX3'),
+        AddressKeyPair('mzRe8QZMfGi58KyWCse2exxEFry2sfF2Y7', 'cPiRWE8KMjTRxH1MWkPerhfoHFn5iHPWVK5aPqjW8NxmdwenFinJ'),
+    ]
+
+    def get_deterministic_priv_key(self):
+        """Return a deterministic priv key in base58, that only depends on the node's index"""
+        assert len(self.PRIV_KEYS) == MAX_NODES
+        return self.PRIV_KEYS[self.index]
+
+    def _node_msg(self, msg: str) -> str:
+        """Return a modified msg that identifies this node by its index as a debugging aid."""
+        return "[node %d] %s" % (self.index, msg)
+
+    def _raise_assertion_error(self, msg: str):
+        """Raise an AssertionError with msg modified to identify this node."""
+        raise
AssertionError(self._node_msg(msg)) + + def __del__(self): + # Ensure that we don't leave any bitcoind processes lying around after + # the test ends + if self.process and self.cleanup_on_exit: + # Should only happen on test failure + # Avoid using logger, as that may have already been shutdown when + # this destructor is called. + print(self._node_msg("Cleaning up leftover process")) + self.process.kill() + + def __getattr__(self, name): + """Dispatches any unrecognised messages to the RPC connection or a CLI instance.""" + if self.use_cli: + return getattr(RPCOverloadWrapper(self.cli, True, self.descriptors), name) + else: + assert self.rpc_connected and self.rpc is not None, self._node_msg("Error: no RPC connection") + return getattr(RPCOverloadWrapper(self.rpc, descriptors=self.descriptors), name) + + def start(self, extra_args=None, *, cwd=None, stdout=None, stderr=None, env=None, **kwargs): + """Start the node.""" + if extra_args is None: + extra_args = self.extra_args + + # Add a new stdout and stderr file each time bitcoind is started + if stderr is None: + stderr = tempfile.NamedTemporaryFile(dir=self.stderr_dir, delete=False) + if stdout is None: + stdout = tempfile.NamedTemporaryFile(dir=self.stdout_dir, delete=False) + self.stderr = stderr + self.stdout = stdout + + if cwd is None: + cwd = self.cwd + + # Delete any existing cookie file -- if such a file exists (eg due to + # unclean shutdown), it will get overwritten anyway by bitcoind, and + # potentially interfere with our attempt to authenticate + delete_cookie_file(self.datadir_path, self.chain) + + # add environment variable LIBC_FATAL_STDERR_=1 so that libc errors are written to stderr and not the terminal + subp_env = dict(os.environ, LIBC_FATAL_STDERR_="1") + if env is not None: + subp_env.update(env) + + self.process = subprocess.Popen(self.args + extra_args, env=subp_env, stdout=stdout, stderr=stderr, cwd=cwd, **kwargs) + + self.running = True + self.log.debug("bitcoind started, waiting for RPC to come up") + + if self.start_perf: + self._start_perf() + + def wait_for_rpc_connection(self): + """Sets up an RPC connection to the bitcoind process. Returns False if unable to connect.""" + # Poll at a rate of four times per second + poll_per_s = 4 + for _ in range(poll_per_s * self.rpc_timeout): + if self.process.poll() is not None: + # Attach abrupt shutdown error/s to the exception message + self.stderr.seek(0) + str_error = ''.join(line.decode('utf-8') for line in self.stderr) + str_error += "************************\n" if str_error else '' + + raise FailedToStartError(self._node_msg( + f'bitcoind exited with status {self.process.returncode} during initialization. {str_error}')) + try: + rpc = get_rpc_proxy( + rpc_url(self.datadir_path, self.index, self.chain, self.rpchost), + self.index, + timeout=self.rpc_timeout // 2, # Shorter timeout to allow for one retry in case of ETIMEDOUT + coveragedir=self.coverage_dir, + ) + rpc.getblockcount() + # If the call to getblockcount() succeeds then the RPC connection is up + if self.version_is_at_least(190000): + # getmempoolinfo.loaded is available since commit + # bb8ae2c (version 0.19.0) + wait_until_helper_internal(lambda: rpc.getmempoolinfo()['loaded'], timeout_factor=self.timeout_factor) + # Wait for the node to finish reindex, block import, and + # loading the mempool. Usually importing happens fast or + # even "immediate" when the node is started. However, there + # is no guarantee and sometimes ImportBlocks might finish + # later. 
This is going to cause intermittent test failures, + # because generally the tests assume the node is fully + # ready after being started. + # + # For example, the node will reject block messages from p2p + # when it is still importing with the error "Unexpected + # block message received" + # + # The wait is done here to make tests as robust as possible + # and prevent racy tests and intermittent failures as much + # as possible. Some tests might not need this, but the + # overhead is trivial, and the added guarantees are worth + # the minimal performance cost. + self.log.debug("RPC successfully started") + if self.use_cli: + return + self.rpc = rpc + self.rpc_connected = True + self.url = self.rpc.rpc_url + return + except JSONRPCException as e: # Initialization phase + # -28 RPC in warmup + # -342 Service unavailable, RPC server started but is shutting down due to error + if e.error['code'] != -28 and e.error['code'] != -342: + raise # unknown JSON RPC exception + except ConnectionResetError: + # This might happen when the RPC server is in warmup, but shut down before the call to getblockcount + # succeeds. Try again to properly raise the FailedToStartError + pass + except OSError as e: + if e.errno == errno.ETIMEDOUT: + pass # Treat identical to ConnectionResetError + elif e.errno == errno.ECONNREFUSED: + pass # Port not yet open? + else: + raise # unknown OS error + except ValueError as e: # cookie file not found and no rpcuser or rpcpassword; bitcoind is still starting + if "No RPC credentials" not in str(e): + raise + time.sleep(1.0 / poll_per_s) + self._raise_assertion_error("Unable to connect to bitcoind after {}s".format(self.rpc_timeout)) + + def wait_for_cookie_credentials(self): + """Ensures auth cookie credentials can be read, e.g. for testing CLI with -rpcwait before RPC connection is up.""" + self.log.debug("Waiting for cookie credentials") + # Poll at a rate of four times per second. + poll_per_s = 4 + for _ in range(poll_per_s * self.rpc_timeout): + try: + get_auth_cookie(self.datadir_path, self.chain) + self.log.debug("Cookie credentials successfully retrieved") + return + except ValueError: # cookie file not found and no rpcuser or rpcpassword; bitcoind is still starting + pass # so we continue polling until RPC credentials are retrieved + time.sleep(1.0 / poll_per_s) + self._raise_assertion_error("Unable to retrieve cookie credentials after {}s".format(self.rpc_timeout)) + + def generate(self, nblocks, maxtries=1000000, **kwargs): + self.log.debug("TestNode.generate() dispatches `generate` call to `generatetoaddress`") + return self.generatetoaddress(nblocks=nblocks, address=self.get_deterministic_priv_key().address, maxtries=maxtries, **kwargs) + + def generateblock(self, *args, invalid_call, **kwargs): + assert not invalid_call + return self.__getattr__('generateblock')(*args, **kwargs) + + def generatetoaddress(self, *args, invalid_call, **kwargs): + assert not invalid_call + return self.__getattr__('generatetoaddress')(*args, **kwargs) + + def generatetodescriptor(self, *args, invalid_call, **kwargs): + assert not invalid_call + return self.__getattr__('generatetodescriptor')(*args, **kwargs) + + def setmocktime(self, timestamp): + """Wrapper for setmocktime RPC, sets self.mocktime""" + if timestamp == 0: + # setmocktime(0) resets to system time. 
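+            # A typical (hypothetical) sequence is node.setmocktime(t) to freeze
+            # the node clock, node.bumpmocktime(60) to advance it by a minute,
+            # and node.setmocktime(0) to return to system time; bumpmocktime()
+            # below relies on the self.mocktime bookkeeping done here.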
+ self.mocktime = None + else: + self.mocktime = timestamp + return self.__getattr__('setmocktime')(timestamp) + + def get_wallet_rpc(self, wallet_name): + if self.use_cli: + return RPCOverloadWrapper(self.cli("-rpcwallet={}".format(wallet_name)), True, self.descriptors) + else: + assert self.rpc_connected and self.rpc, self._node_msg("RPC not connected") + wallet_path = "wallet/{}".format(urllib.parse.quote(wallet_name)) + return RPCOverloadWrapper(self.rpc / wallet_path, descriptors=self.descriptors) + + def version_is_at_least(self, ver): + return self.version is None or self.version >= ver + + def stop_node(self, expected_stderr='', *, wait=0, wait_until_stopped=True): + """Stop the node.""" + if not self.running: + return + self.log.debug("Stopping node") + try: + # Do not use wait argument when testing older nodes, e.g. in wallet_backwards_compatibility.py + if self.version_is_at_least(180000): + self.stop(wait=wait) + else: + self.stop() + except http.client.CannotSendRequest: + self.log.exception("Unable to stop node.") + + # If there are any running perf processes, stop them. + for profile_name in tuple(self.perf_subprocesses.keys()): + self._stop_perf(profile_name) + + del self.p2ps[:] + + assert (not expected_stderr) or wait_until_stopped # Must wait to check stderr + if wait_until_stopped: + self.wait_until_stopped(expected_stderr=expected_stderr) + + def is_node_stopped(self, *, expected_stderr="", expected_ret_code=0): + """Checks whether the node has stopped. + + Returns True if the node has stopped. False otherwise. + This method is responsible for freeing resources (self.process).""" + if not self.running: + return True + return_code = self.process.poll() + if return_code is None: + return False + + # process has stopped. Assert that it didn't return an error code. + assert return_code == expected_ret_code, self._node_msg( + f"Node returned unexpected exit code ({return_code}) vs ({expected_ret_code}) when stopping") + # Check that stderr is as expected + self.stderr.seek(0) + stderr = self.stderr.read().decode('utf-8').strip() + if stderr != expected_stderr: + raise AssertionError("Unexpected stderr {} != {}".format(stderr, expected_stderr)) + + self.stdout.close() + self.stderr.close() + + self.running = False + self.process = None + self.rpc_connected = False + self.rpc = None + self.log.debug("Node stopped") + return True + + def wait_until_stopped(self, *, timeout=BITCOIND_PROC_WAIT_TIMEOUT, expect_error=False, **kwargs): + expected_ret_code = 1 if expect_error else 0 # Whether node shutdown return EXIT_FAILURE or EXIT_SUCCESS + wait_until_helper_internal(lambda: self.is_node_stopped(expected_ret_code=expected_ret_code, **kwargs), timeout=timeout, timeout_factor=self.timeout_factor) + + def replace_in_config(self, replacements): + """ + Perform replacements in the configuration file. + The substitutions are passed as a list of search-replace-tuples, e.g. + [("old", "new"), ("foo", "bar"), ...] 
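+        For instance, add_nodes() calls
+        replace_in_config([('[regtest]', '')]) to drop the [regtest] section
+        header from bitcoin.conf for pre-0.17 binaries.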
+ """ + with open(self.bitcoinconf, 'r', encoding='utf8') as conf: + conf_data = conf.read() + for replacement in replacements: + assert_equal(len(replacement), 2) + old, new = replacement[0], replacement[1] + conf_data = conf_data.replace(old, new) + with open(self.bitcoinconf, 'w', encoding='utf8') as conf: + conf.write(conf_data) + + @property + def chain_path(self) -> Path: + return self.datadir_path / self.chain + + @property + def debug_log_path(self) -> Path: + return self.chain_path / 'debug.log' + + @property + def blocks_path(self) -> Path: + return self.chain_path / "blocks" + + @property + def wallets_path(self) -> Path: + return self.chain_path / "wallets" + + def debug_log_size(self, **kwargs) -> int: + with open(self.debug_log_path, **kwargs) as dl: + dl.seek(0, 2) + return dl.tell() + + @contextlib.contextmanager + def assert_debug_log(self, expected_msgs, unexpected_msgs=None, timeout=2): + if unexpected_msgs is None: + unexpected_msgs = [] + assert_equal(type(expected_msgs), list) + assert_equal(type(unexpected_msgs), list) + + time_end = time.time() + timeout * self.timeout_factor + prev_size = self.debug_log_size(encoding="utf-8") # Must use same encoding that is used to read() below + + yield + + while True: + found = True + with open(self.debug_log_path, encoding="utf-8", errors="replace") as dl: + dl.seek(prev_size) + log = dl.read() + print_log = " - " + "\n - ".join(log.splitlines()) + for unexpected_msg in unexpected_msgs: + if re.search(re.escape(unexpected_msg), log, flags=re.MULTILINE): + self._raise_assertion_error('Unexpected message "{}" partially matches log:\n\n{}\n\n'.format(unexpected_msg, print_log)) + for expected_msg in expected_msgs: + if re.search(re.escape(expected_msg), log, flags=re.MULTILINE) is None: + found = False + if found: + return + if time.time() >= time_end: + break + time.sleep(0.05) + self._raise_assertion_error('Expected messages "{}" does not partially match log:\n\n{}\n\n'.format(str(expected_msgs), print_log)) + + @contextlib.contextmanager + def wait_for_debug_log(self, expected_msgs, timeout=60): + """ + Block until we see a particular debug log message fragment or until we exceed the timeout. + Return: + the number of log lines we encountered when matching + """ + time_end = time.time() + timeout * self.timeout_factor + prev_size = self.debug_log_size(mode="rb") # Must use same mode that is used to read() below + + yield + + while True: + found = True + with open(self.debug_log_path, "rb") as dl: + dl.seek(prev_size) + log = dl.read() + + for expected_msg in expected_msgs: + if expected_msg not in log: + found = False + + if found: + return + + if time.time() >= time_end: + print_log = " - " + "\n - ".join(log.decode("utf8", errors="replace").splitlines()) + break + + # No sleep here because we want to detect the message fragment as fast as + # possible. + + self._raise_assertion_error( + 'Expected messages "{}" does not partially match log:\n\n{}\n\n'.format( + str(expected_msgs), print_log)) + + @contextlib.contextmanager + def profile_with_perf(self, profile_name: str): + """ + Context manager that allows easy profiling of node activity using `perf`. + + See `test/functional/README.md` for details on perf usage. + + Args: + profile_name: This string will be appended to the + profile data filename generated by perf. + """ + subp = self._start_perf(profile_name) + + yield + + if subp: + self._stop_perf(profile_name) + + def _start_perf(self, profile_name=None): + """Start a perf process to profile this node. 
+ + Returns the subprocess running perf.""" + subp = None + + def test_success(cmd): + return subprocess.call( + # shell=True required for pipe use below + cmd, shell=True, + stderr=subprocess.DEVNULL, stdout=subprocess.DEVNULL) == 0 + + if not sys.platform.startswith('linux'): + self.log.warning("Can't profile with perf; only available on Linux platforms") + return None + + if not test_success('which perf'): + self.log.warning("Can't profile with perf; must install perf-tools") + return None + + if not test_success('readelf -S {} | grep .debug_str'.format(shlex.quote(self.binary))): + self.log.warning( + "perf output won't be very useful without debug symbols compiled into bitcoind") + + output_path = tempfile.NamedTemporaryFile( + dir=self.datadir_path, + prefix="{}.perf.data.".format(profile_name or 'test'), + delete=False, + ).name + + cmd = [ + 'perf', 'record', + '-g', # Record the callgraph. + '--call-graph', 'dwarf', # Compatibility for gcc's --fomit-frame-pointer. + '-F', '101', # Sampling frequency in Hz. + '-p', str(self.process.pid), + '-o', output_path, + ] + subp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + self.perf_subprocesses[profile_name] = subp + + return subp + + def _stop_perf(self, profile_name): + """Stop (and pop) a perf subprocess.""" + subp = self.perf_subprocesses.pop(profile_name) + output_path = subp.args[subp.args.index('-o') + 1] + + subp.terminate() + subp.wait(timeout=10) + + stderr = subp.stderr.read().decode() + if 'Consider tweaking /proc/sys/kernel/perf_event_paranoid' in stderr: + self.log.warning( + "perf couldn't collect data! Try " + "'sudo sysctl -w kernel.perf_event_paranoid=-1'") + else: + report_cmd = "perf report -i {}".format(output_path) + self.log.info("See perf output by running '{}'".format(report_cmd)) + + def assert_start_raises_init_error(self, extra_args=None, expected_msg=None, match=ErrorMatch.FULL_TEXT, *args, **kwargs): + """Attempt to start the node and expect it to raise an error. + + extra_args: extra arguments to pass through to bitcoind + expected_msg: regex that stderr should match when bitcoind fails + + Will throw if bitcoind starts without an error. 
+ Will throw if an expected_msg is provided and it does not match bitcoind's stdout.""" + assert not self.running + with tempfile.NamedTemporaryFile(dir=self.stderr_dir, delete=False) as log_stderr, \ + tempfile.NamedTemporaryFile(dir=self.stdout_dir, delete=False) as log_stdout: + try: + self.start(extra_args, stdout=log_stdout, stderr=log_stderr, *args, **kwargs) + ret = self.process.wait(timeout=self.rpc_timeout) + self.log.debug(self._node_msg(f'bitcoind exited with status {ret} during initialization')) + assert ret != 0 # Exit code must indicate failure + self.running = False + self.process = None + # Check stderr for expected message + if expected_msg is not None: + log_stderr.seek(0) + stderr = log_stderr.read().decode('utf-8').strip() + if match == ErrorMatch.PARTIAL_REGEX: + if re.search(expected_msg, stderr, flags=re.MULTILINE) is None: + self._raise_assertion_error( + 'Expected message "{}" does not partially match stderr:\n"{}"'.format(expected_msg, stderr)) + elif match == ErrorMatch.FULL_REGEX: + if re.fullmatch(expected_msg, stderr) is None: + self._raise_assertion_error( + 'Expected message "{}" does not fully match stderr:\n"{}"'.format(expected_msg, stderr)) + elif match == ErrorMatch.FULL_TEXT: + if expected_msg != stderr: + self._raise_assertion_error( + 'Expected message "{}" does not fully match stderr:\n"{}"'.format(expected_msg, stderr)) + except subprocess.TimeoutExpired: + self.process.kill() + self.running = False + self.process = None + assert_msg = f'bitcoind should have exited within {self.rpc_timeout}s ' + if expected_msg is None: + assert_msg += "with an error" + else: + assert_msg += "with expected error " + expected_msg + self._raise_assertion_error(assert_msg) + + def add_p2p_connection(self, p2p_conn, *, wait_for_verack=True, **kwargs): + """Add an inbound p2p connection to the node. + + This method adds the p2p connection to the self.p2ps list and also + returns the connection to the caller.""" + if 'dstport' not in kwargs: + kwargs['dstport'] = p2p_port(self.index) + if 'dstaddr' not in kwargs: + kwargs['dstaddr'] = '127.0.0.1' + + p2p_conn.peer_connect(**kwargs, net=self.chain, timeout_factor=self.timeout_factor)() + self.p2ps.append(p2p_conn) + p2p_conn.wait_until(lambda: p2p_conn.is_connected, check_connected=False) + if wait_for_verack: + # Wait for the node to send us the version and verack + p2p_conn.wait_for_verack() + # At this point we have sent our version message and received the version and verack, however the full node + # has not yet received the verack from us (in reply to their version). So, the connection is not yet fully + # established (fSuccessfullyConnected). + # + # This shouldn't lead to any issues when sending messages, since the verack will be in-flight before the + # message we send. However, it might lead to races where we are expecting to receive a message. E.g. a + # transaction that will be added to the mempool as soon as we return here. + # + # So syncing here is redundant when we only want to send a message, but the cost is low (a few milliseconds) + # in comparison to the upside of making tests less fragile and unexpected intermittent errors less likely. + p2p_conn.sync_with_ping() + + # Consistency check that the node received our user agent string. + # Find our connection in getpeerinfo by our address:port and theirs, as this combination is unique. 
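+        # For example (illustrative values): the test connection's ephemeral
+        # socket 127.0.0.1:51678 appears as peer["addr"], while peer["addrbind"]
+        # is the node's listening address, 127.0.0.1:<its p2p port>.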
+ sockname = p2p_conn._transport.get_extra_info("socket").getsockname() + our_addr_and_port = f"{sockname[0]}:{sockname[1]}" + dst_addr_and_port = f"{p2p_conn.dstaddr}:{p2p_conn.dstport}" + info = [peer for peer in self.getpeerinfo() if peer["addr"] == our_addr_and_port and peer["addrbind"] == dst_addr_and_port] + assert_equal(len(info), 1) + assert_equal(info[0]["subver"], P2P_SUBVERSION) + + return p2p_conn + + def add_outbound_p2p_connection(self, p2p_conn, *, wait_for_verack=True, p2p_idx, connection_type="outbound-full-relay", **kwargs): + """Add an outbound p2p connection from node. Must be an + "outbound-full-relay", "block-relay-only", "addr-fetch" or "feeler" connection. + + This method adds the p2p connection to the self.p2ps list and returns + the connection to the caller. + + p2p_idx must be different for simultaneously connected peers. When reusing it for the next peer + after disconnecting the previous one, it is necessary to wait for the disconnect to finish to avoid + a race condition. + """ + + def addconnection_callback(address, port): + self.log.debug("Connecting to %s:%d %s" % (address, port, connection_type)) + self.addconnection('%s:%d' % (address, port), connection_type) + + p2p_conn.peer_accept_connection(connect_cb=addconnection_callback, connect_id=p2p_idx + 1, net=self.chain, timeout_factor=self.timeout_factor, **kwargs)() + + if connection_type == "feeler": + # feeler connections are closed as soon as the node receives a `version` message + p2p_conn.wait_until(lambda: p2p_conn.message_count["version"] == 1, check_connected=False) + p2p_conn.wait_until(lambda: not p2p_conn.is_connected, check_connected=False) + else: + p2p_conn.wait_for_connect() + self.p2ps.append(p2p_conn) + + if wait_for_verack: + p2p_conn.wait_for_verack() + p2p_conn.sync_with_ping() + + return p2p_conn + + def num_test_p2p_connections(self): + """Return number of test framework p2p connections to the node.""" + return len([peer for peer in self.getpeerinfo() if peer['subver'] == P2P_SUBVERSION]) + + def disconnect_p2ps(self): + """Close all p2p connections to the node. + Use only after each p2p has sent a version message to ensure the wait works.""" + for p in self.p2ps: + p.peer_disconnect() + del self.p2ps[:] + + wait_until_helper_internal(lambda: self.num_test_p2p_connections() == 0, timeout_factor=self.timeout_factor) + + def bumpmocktime(self, seconds): + """Fast forward using setmocktime to self.mocktime + seconds. 
Requires setmocktime to have + been called at some point in the past.""" + assert self.mocktime + self.mocktime += seconds + self.setmocktime(self.mocktime) + + +class TestNodeCLIAttr: + def __init__(self, cli, command): + self.cli = cli + self.command = command + + def __call__(self, *args, **kwargs): + return self.cli.send_cli(self.command, *args, **kwargs) + + def get_request(self, *args, **kwargs): + return lambda: self(*args, **kwargs) + + +def arg_to_cli(arg): + if isinstance(arg, bool): + return str(arg).lower() + elif arg is None: + return 'null' + elif isinstance(arg, dict) or isinstance(arg, list): + return json.dumps(arg, default=serialization_fallback) + else: + return str(arg) + + +class TestNodeCLI(): + """Interface to bitcoin-cli for an individual node""" + def __init__(self, binary, datadir): + self.options = [] + self.binary = binary + self.datadir = datadir + self.input = None + self.log = logging.getLogger('TestFramework.bitcoincli') + + def __call__(self, *options, input=None): + # TestNodeCLI is callable with bitcoin-cli command-line options + cli = TestNodeCLI(self.binary, self.datadir) + cli.options = [str(o) for o in options] + cli.input = input + return cli + + def __getattr__(self, command): + return TestNodeCLIAttr(self, command) + + def batch(self, requests): + results = [] + for request in requests: + try: + results.append(dict(result=request())) + except JSONRPCException as e: + results.append(dict(error=e)) + return results + + def send_cli(self, command=None, *args, **kwargs): + """Run bitcoin-cli command. Deserializes returned string as python object.""" + pos_args = [arg_to_cli(arg) for arg in args] + named_args = [str(key) + "=" + arg_to_cli(value) for (key, value) in kwargs.items()] + p_args = [self.binary, f"-datadir={self.datadir}"] + self.options + if named_args: + p_args += ["-named"] + if command is not None: + p_args += [command] + p_args += pos_args + named_args + self.log.debug("Running bitcoin-cli {}".format(p_args[2:])) + process = subprocess.Popen(p_args, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True) + cli_stdout, cli_stderr = process.communicate(input=self.input) + returncode = process.poll() + if returncode: + match = re.match(r'error code: ([-0-9]+)\nerror message:\n(.*)', cli_stderr) + if match: + code, message = match.groups() + raise JSONRPCException(dict(code=int(code), message=message)) + # Ignore cli_stdout, raise with cli_stderr + raise subprocess.CalledProcessError(returncode, self.binary, output=cli_stderr) + try: + return json.loads(cli_stdout, parse_float=decimal.Decimal) + except (json.JSONDecodeError, decimal.InvalidOperation): + return cli_stdout.rstrip("\n") + +class RPCOverloadWrapper(): + def __init__(self, rpc, cli=False, descriptors=False): + self.rpc = rpc + self.is_cli = cli + self.descriptors = descriptors + + def __getattr__(self, name): + return getattr(self.rpc, name) + + def createwallet_passthrough(self, *args, **kwargs): + return self.__getattr__("createwallet")(*args, **kwargs) + + def createwallet(self, wallet_name, disable_private_keys=None, blank=None, passphrase='', avoid_reuse=None, descriptors=None, load_on_startup=None, external_signer=None): + if descriptors is None: + descriptors = self.descriptors + return self.__getattr__('createwallet')(wallet_name, disable_private_keys, blank, passphrase, avoid_reuse, descriptors, load_on_startup, external_signer) + + def importprivkey(self, privkey, label=None, rescan=None): + wallet_info = self.getwalletinfo() + if 'descriptors' 
not in wallet_info or ('descriptors' in wallet_info and not wallet_info['descriptors']): + return self.__getattr__('importprivkey')(privkey, label, rescan) + desc = descsum_create('combo(' + privkey + ')') + req = [{ + 'desc': desc, + 'timestamp': 0 if rescan else 'now', + 'label': label if label else '' + }] + import_res = self.importdescriptors(req) + if not import_res[0]['success']: + raise JSONRPCException(import_res[0]['error']) + + def addmultisigaddress(self, nrequired, keys, label=None, address_type=None): + wallet_info = self.getwalletinfo() + if 'descriptors' not in wallet_info or ('descriptors' in wallet_info and not wallet_info['descriptors']): + return self.__getattr__('addmultisigaddress')(nrequired, keys, label, address_type) + cms = self.createmultisig(nrequired, keys, address_type) + req = [{ + 'desc': cms['descriptor'], + 'timestamp': 0, + 'label': label if label else '' + }] + import_res = self.importdescriptors(req) + if not import_res[0]['success']: + raise JSONRPCException(import_res[0]['error']) + return cms + + def importpubkey(self, pubkey, label=None, rescan=None): + wallet_info = self.getwalletinfo() + if 'descriptors' not in wallet_info or ('descriptors' in wallet_info and not wallet_info['descriptors']): + return self.__getattr__('importpubkey')(pubkey, label, rescan) + desc = descsum_create('combo(' + pubkey + ')') + req = [{ + 'desc': desc, + 'timestamp': 0 if rescan else 'now', + 'label': label if label else '' + }] + import_res = self.importdescriptors(req) + if not import_res[0]['success']: + raise JSONRPCException(import_res[0]['error']) + + def importaddress(self, address, label=None, rescan=None, p2sh=None): + wallet_info = self.getwalletinfo() + if 'descriptors' not in wallet_info or ('descriptors' in wallet_info and not wallet_info['descriptors']): + return self.__getattr__('importaddress')(address, label, rescan, p2sh) + is_hex = False + try: + int(address ,16) + is_hex = True + desc = descsum_create('raw(' + address + ')') + except Exception: + desc = descsum_create('addr(' + address + ')') + reqs = [{ + 'desc': desc, + 'timestamp': 0 if rescan else 'now', + 'label': label if label else '' + }] + if is_hex and p2sh: + reqs.append({ + 'desc': descsum_create('p2sh(raw(' + address + '))'), + 'timestamp': 0 if rescan else 'now', + 'label': label if label else '' + }) + import_res = self.importdescriptors(reqs) + for res in import_res: + if not res['success']: + raise JSONRPCException(res['error']) diff --git a/warnet-demo/scenarios/test_framework/test_shell.py b/warnet-demo/scenarios/test_framework/test_shell.py new file mode 100644 index 0000000..09ccec2 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/test_shell.py @@ -0,0 +1,78 @@ +#!/usr/bin/env python3 +# Copyright (c) 2019-2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. + +from test_framework.test_framework import BitcoinTestFramework + +class TestShell: + """Wrapper Class for BitcoinTestFramework. + + The TestShell class extends the BitcoinTestFramework + rpc & daemon process management functionality to external + python environments. 
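For reference, a minimal sketch of what the wrapper above does on a descriptor wallet: the legacy import call is rewritten into an `importdescriptors` request. The `address` value and the `node` handle here are hypothetical:

```python
# Sketch only: mirrors the importaddress -> importdescriptors translation
# performed by RPCOverloadWrapper above. `node` is a hypothetical test node.
from test_framework.descriptors import descsum_create

address = "bcrt1qhypotheticaladdress"  # hypothetical regtest address
req = [{
    "desc": descsum_create("addr(" + address + ")"),  # checksummed descriptor
    "timestamp": "now",                               # no rescan requested
    "label": "",
}]
# On a descriptor wallet, the wrapper submits this instead of importaddress:
# node.importdescriptors(req)
```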
+ + It is a singleton class, which ensures that users only + start a single TestShell at a time.""" + + class __TestShell(BitcoinTestFramework): + def add_options(self, parser): + self.add_wallet_options(parser) + + def set_test_params(self): + pass + + def run_test(self): + pass + + def setup(self, **kwargs): + if self.running: + print("TestShell is already running!") + return + + # Num_nodes parameter must be set + # by BitcoinTestFramework child class. + self.num_nodes = 1 + + # User parameters override default values. + for key, value in kwargs.items(): + if hasattr(self, key): + setattr(self, key, value) + elif hasattr(self.options, key): + setattr(self.options, key, value) + else: + raise KeyError(key + " not a valid parameter key!") + + super().setup() + self.running = True + return self + + def shutdown(self): + if not self.running: + print("TestShell is not running!") + else: + super().shutdown() + self.running = False + + def reset(self): + if self.running: + print("Shutdown TestShell before resetting!") + else: + self.num_nodes = None + super().__init__() + + instance = None + + def __new__(cls): + # This implementation enforces singleton pattern, and will return the + # previously initialized instance if available + if not TestShell.instance: + TestShell.instance = TestShell.__TestShell() + TestShell.instance.running = False + return TestShell.instance + + def __getattr__(self, name): + return getattr(self.instance, name) + + def __setattr__(self, name, value): + return setattr(self.instance, name, value) diff --git a/warnet-demo/scenarios/test_framework/util.py b/warnet-demo/scenarios/test_framework/util.py new file mode 100644 index 0000000..61346e9 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/util.py @@ -0,0 +1,551 @@ +#!/usr/bin/env python3 +# Copyright (c) 2014-2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""Helpful routines for regression testing.""" + +from base64 import b64encode +from decimal import Decimal, ROUND_DOWN +from subprocess import CalledProcessError +import hashlib +import inspect +import json +import logging +import os +import pathlib +import random +import re +import sys +import time + +from . import coverage +from .authproxy import AuthServiceProxy, JSONRPCException +from typing import Callable, Optional, Tuple + +logger = logging.getLogger("TestFramework.utils") + +# Assert functions +################## + + +def assert_approx(v, vexp, vspan=0.00001): + """Assert that `v` is within `vspan` of `vexp`""" + if isinstance(v, Decimal) or isinstance(vexp, Decimal): + v=Decimal(v) + vexp=Decimal(vexp) + vspan=Decimal(vspan) + if v < vexp - vspan: + raise AssertionError("%s < [%s..%s]" % (str(v), str(vexp - vspan), str(vexp + vspan))) + if v > vexp + vspan: + raise AssertionError("%s > [%s..%s]" % (str(v), str(vexp - vspan), str(vexp + vspan))) + + +def assert_fee_amount(fee, tx_size, feerate_BTC_kvB): + """Assert the fee is in range.""" + assert isinstance(tx_size, int) + target_fee = get_fee(tx_size, feerate_BTC_kvB) + if fee < target_fee: + raise AssertionError("Fee of %s BTC too low! (Should be %s BTC)" % (str(fee), str(target_fee))) + # allow the wallet's estimation to be at most 2 bytes off + high_fee = get_fee(tx_size + 2, feerate_BTC_kvB) + if fee > high_fee: + raise AssertionError("Fee of %s BTC too high! 
(Should be %s BTC)" % (str(fee), str(target_fee))) + + +def assert_equal(thing1, thing2, *args): + if thing1 != thing2 or any(thing1 != arg for arg in args): + raise AssertionError("not(%s)" % " == ".join(str(arg) for arg in (thing1, thing2) + args)) + + +def assert_greater_than(thing1, thing2): + if thing1 <= thing2: + raise AssertionError("%s <= %s" % (str(thing1), str(thing2))) + + +def assert_greater_than_or_equal(thing1, thing2): + if thing1 < thing2: + raise AssertionError("%s < %s" % (str(thing1), str(thing2))) + + +def assert_raises(exc, fun, *args, **kwds): + assert_raises_message(exc, None, fun, *args, **kwds) + + +def assert_raises_message(exc, message, fun, *args, **kwds): + try: + fun(*args, **kwds) + except JSONRPCException: + raise AssertionError("Use assert_raises_rpc_error() to test RPC failures") + except exc as e: + if message is not None and message not in e.error['message']: + raise AssertionError( + "Expected substring not found in error message:\nsubstring: '{}'\nerror message: '{}'.".format( + message, e.error['message'])) + except Exception as e: + raise AssertionError("Unexpected exception raised: " + type(e).__name__) + else: + raise AssertionError("No exception raised") + + +def assert_raises_process_error(returncode: int, output: str, fun: Callable, *args, **kwds): + """Execute a process and asserts the process return code and output. + + Calls function `fun` with arguments `args` and `kwds`. Catches a CalledProcessError + and verifies that the return code and output are as expected. Throws AssertionError if + no CalledProcessError was raised or if the return code and output are not as expected. + + Args: + returncode: the process return code. + output: [a substring of] the process output. + fun: the function to call. This should execute a process. + args*: positional arguments for the function. + kwds**: named arguments for the function. + """ + try: + fun(*args, **kwds) + except CalledProcessError as e: + if returncode != e.returncode: + raise AssertionError("Unexpected returncode %i" % e.returncode) + if output not in e.output: + raise AssertionError("Expected substring not found:" + e.output) + else: + raise AssertionError("No exception raised") + + +def assert_raises_rpc_error(code: Optional[int], message: Optional[str], fun: Callable, *args, **kwds): + """Run an RPC and verify that a specific JSONRPC exception code and message is raised. + + Calls function `fun` with arguments `args` and `kwds`. Catches a JSONRPCException + and verifies that the error code and message are as expected. Throws AssertionError if + no JSONRPCException was raised or if the error code/message are not as expected. + + Args: + code: the error code returned by the RPC call (defined in src/rpc/protocol.h). + Set to None if checking the error code is not required. + message: [a substring of] the error string returned by the RPC call. + Set to None if checking the error string is not required. + fun: the function to call. This should be the name of an RPC. + args*: positional arguments for the function. + kwds**: named arguments for the function. + """ + assert try_rpc(code, message, fun, *args, **kwds), "No exception raised" + + +def try_rpc(code, message, fun, *args, **kwds): + """Tries to run an rpc command. + + Test against error code and message if the rpc fails. + Returns whether a JSONRPCException was raised.""" + try: + fun(*args, **kwds) + except JSONRPCException as e: + # JSONRPCException was thrown as expected. Check the code and message values are correct. 
+ if (code is not None) and (code != e.error["code"]): + raise AssertionError("Unexpected JSONRPC error code %i" % e.error["code"]) + if (message is not None) and (message not in e.error['message']): + raise AssertionError( + "Expected substring not found in error message:\nsubstring: '{}'\nerror message: '{}'.".format( + message, e.error['message'])) + return True + except Exception as e: + raise AssertionError("Unexpected exception raised: " + type(e).__name__) + else: + return False + + +def assert_is_hex_string(string): + try: + int(string, 16) + except Exception as e: + raise AssertionError("Couldn't interpret %r as hexadecimal; raised: %s" % (string, e)) + + +def assert_is_hash_string(string, length=64): + if not isinstance(string, str): + raise AssertionError("Expected a string, got type %r" % type(string)) + elif length and len(string) != length: + raise AssertionError("String of length %d expected; got %d" % (length, len(string))) + elif not re.match('[abcdef0-9]+$', string): + raise AssertionError("String %r contains invalid characters for a hash." % string) + + +def assert_array_result(object_array, to_match, expected, should_not_find=False): + """ + Pass in array of JSON objects, a dictionary with key/value pairs + to match against, and another dictionary with expected key/value + pairs. + If the should_not_find flag is true, to_match should not be found + in object_array + """ + if should_not_find: + assert_equal(expected, {}) + num_matched = 0 + for item in object_array: + all_match = True + for key, value in to_match.items(): + if item[key] != value: + all_match = False + if not all_match: + continue + elif should_not_find: + num_matched = num_matched + 1 + for key, value in expected.items(): + if item[key] != value: + raise AssertionError("%s : expected %s=%s" % (str(item), str(key), str(value))) + num_matched = num_matched + 1 + if num_matched == 0 and not should_not_find: + raise AssertionError("No objects matched %s" % (str(to_match))) + if num_matched > 0 and should_not_find: + raise AssertionError("Objects were found %s" % (str(to_match))) + + +# Utility functions +################### + + +def check_json_precision(): + """Make sure json library being used does not lose precision converting BTC values""" + n = Decimal("20000000.00000003") + satoshis = int(json.loads(json.dumps(float(n))) * 1.0e8) + if satoshis != 2000000000000003: + raise RuntimeError("JSON encode/decode loses precision") + + +def count_bytes(hex_string): + return len(bytearray.fromhex(hex_string)) + + +def str_to_b64str(string): + return b64encode(string.encode('utf-8')).decode('ascii') + + +def ceildiv(a, b): + """ + Divide 2 ints and round up to next int rather than round down + Implementation requires python integers, which have a // operator that does floor division. + Other types like decimal.Decimal whose // operator truncates towards 0 will not work. + """ + assert isinstance(a, int) + assert isinstance(b, int) + return -(-a // b) + + +def get_fee(tx_size, feerate_btc_kvb): + """Calculate the fee in BTC given a feerate is BTC/kvB. 
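A short sketch of how these assertion helpers are typically used, assuming a running regtest node object `node` that exposes RPCs as methods:

```python
# Sketch: using the assertion helpers defined above. `node` is assumed
# to be a running regtest node; the chain height is illustrative.
from test_framework.util import assert_equal, assert_raises_rpc_error

assert_equal(node.getblockcount(), 101)

# Fails unless getblockhash raises JSONRPC error -8 with a message
# containing the given substring.
assert_raises_rpc_error(-8, "Block height out of range",
                        node.getblockhash, 1_000_000)
```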
Reflects CFeeRate::GetFee""" + feerate_sat_kvb = int(feerate_btc_kvb * Decimal(1e8)) # Fee in sat/kvb as an int to avoid float precision errors + target_fee_sat = ceildiv(feerate_sat_kvb * tx_size, 1000) # Round calculated fee up to nearest sat + return target_fee_sat / Decimal(1e8) # Return result in BTC + + +def satoshi_round(amount): + return Decimal(amount).quantize(Decimal('0.00000001'), rounding=ROUND_DOWN) + + +def wait_until_helper_internal(predicate, *, attempts=float('inf'), timeout=float('inf'), lock=None, timeout_factor=1.0): + """Sleep until the predicate resolves to be True. + + Warning: Note that this method is not recommended to be used in tests as it is + not aware of the context of the test framework. Using the `wait_until()` members + from `BitcoinTestFramework` or `P2PInterface` class ensures the timeout is + properly scaled. Furthermore, `wait_until()` from `P2PInterface` class in + `p2p.py` has a preset lock. + """ + if attempts == float('inf') and timeout == float('inf'): + timeout = 60 + timeout = timeout * timeout_factor + attempt = 0 + time_end = time.time() + timeout + + while attempt < attempts and time.time() < time_end: + if lock: + with lock: + if predicate(): + return + else: + if predicate(): + return + attempt += 1 + time.sleep(0.05) + + # Print the cause of the timeout + predicate_source = "''''\n" + inspect.getsource(predicate) + "'''" + logger.error("wait_until() failed. Predicate: {}".format(predicate_source)) + if attempt >= attempts: + raise AssertionError("Predicate {} not true after {} attempts".format(predicate_source, attempts)) + elif time.time() >= time_end: + raise AssertionError("Predicate {} not true after {} seconds".format(predicate_source, timeout)) + raise RuntimeError('Unreachable') + + +def sha256sum_file(filename): + h = hashlib.sha256() + with open(filename, 'rb') as f: + d = f.read(4096) + while len(d) > 0: + h.update(d) + d = f.read(4096) + return h.digest() + + +# TODO: Remove and use random.randbytes(n) instead, available in Python 3.9 +def random_bytes(n): + """Return a random bytes object of length n.""" + return bytes(random.getrandbits(8) for i in range(n)) + + +# RPC/P2P connection constants and functions +############################################ + +# The maximum number of nodes a single test can spawn +MAX_NODES = 12 +# Don't assign rpc or p2p ports lower than this +PORT_MIN = int(os.getenv('TEST_RUNNER_PORT_MIN', default=11000)) +# The number of ports to "reserve" for p2p and rpc, each +PORT_RANGE = 5000 + + +class PortSeed: + # Must be initialized with a unique integer for each process + n = None + + +def get_rpc_proxy(url: str, node_number: int, *, timeout: Optional[int]=None, coveragedir: Optional[str]=None) -> coverage.AuthServiceProxyWrapper: + """ + Args: + url: URL of the RPC server to call + node_number: the node number (or id) that this calls to + + Kwargs: + timeout: HTTP timeout in seconds + coveragedir: Directory + + Returns: + AuthServiceProxy. convenience object for making RPC calls. 
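Since `get_fee` works in integer satoshis and rounds up, a quick worked example may help (values chosen purely for illustration):

```python
# Worked example of the fee helpers above: 101 vbytes at 250 sat/kvB.
from decimal import Decimal

from test_framework.util import ceildiv, get_fee

assert ceildiv(7, 2) == 4  # rounds toward +infinity, unlike //
# 101 * 250 = 25250 sat*vB/kvB -> ceil(25.25) = 26 sat = 0.00000026 BTC
assert get_fee(101, Decimal("0.0000025")) == Decimal("0.00000026")
```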
+ + """ + proxy_kwargs = {} + if timeout is not None: + proxy_kwargs['timeout'] = int(timeout) + + proxy = AuthServiceProxy(url, **proxy_kwargs) + + coverage_logfile = coverage.get_filename(coveragedir, node_number) if coveragedir else None + + return coverage.AuthServiceProxyWrapper(proxy, url, coverage_logfile) + + +def p2p_port(n): + assert n <= MAX_NODES + return PORT_MIN + n + (MAX_NODES * PortSeed.n) % (PORT_RANGE - 1 - MAX_NODES) + + +def rpc_port(n): + return PORT_MIN + PORT_RANGE + n + (MAX_NODES * PortSeed.n) % (PORT_RANGE - 1 - MAX_NODES) + + +def rpc_url(datadir, i, chain, rpchost): + rpc_u, rpc_p = get_auth_cookie(datadir, chain) + host = '127.0.0.1' + port = rpc_port(i) + if rpchost: + parts = rpchost.split(':') + if len(parts) == 2: + host, port = parts + else: + host = rpchost + return "http://%s:%s@%s:%d" % (rpc_u, rpc_p, host, int(port)) + + +# Node functions +################ + + +def initialize_datadir(dirname, n, chain, disable_autoconnect=True): + datadir = get_datadir_path(dirname, n) + if not os.path.isdir(datadir): + os.makedirs(datadir) + write_config(os.path.join(datadir, "bitcoin.conf"), n=n, chain=chain, disable_autoconnect=disable_autoconnect) + os.makedirs(os.path.join(datadir, 'stderr'), exist_ok=True) + os.makedirs(os.path.join(datadir, 'stdout'), exist_ok=True) + return datadir + + +def write_config(config_path, *, n, chain, extra_config="", disable_autoconnect=True): + # Translate chain subdirectory name to config name + if chain == 'testnet3': + chain_name_conf_arg = 'testnet' + chain_name_conf_section = 'test' + else: + chain_name_conf_arg = chain + chain_name_conf_section = chain + with open(config_path, 'w', encoding='utf8') as f: + if chain_name_conf_arg: + f.write("{}=1\n".format(chain_name_conf_arg)) + if chain_name_conf_section: + f.write("[{}]\n".format(chain_name_conf_section)) + f.write("port=" + str(p2p_port(n)) + "\n") + f.write("rpcport=" + str(rpc_port(n)) + "\n") + # Disable server-side timeouts to avoid intermittent issues + f.write("rpcservertimeout=99000\n") + f.write("rpcdoccheck=1\n") + f.write("fallbackfee=0.0002\n") + f.write("server=1\n") + f.write("keypool=1\n") + f.write("discover=0\n") + f.write("dnsseed=0\n") + f.write("fixedseeds=0\n") + f.write("listenonion=0\n") + # Increase peertimeout to avoid disconnects while using mocktime. + # peertimeout is measured in mock time, so setting it large enough to + # cover any duration in mock time is sufficient. It can be overridden + # in tests. 
+ f.write("peertimeout=999999999\n") + f.write("printtoconsole=0\n") + f.write("upnp=0\n") + f.write("natpmp=0\n") + f.write("shrinkdebugfile=0\n") + f.write("deprecatedrpc=create_bdb\n") # Required to run the tests + # To improve SQLite wallet performance so that the tests don't timeout, use -unsafesqlitesync + f.write("unsafesqlitesync=1\n") + if disable_autoconnect: + f.write("connect=0\n") + f.write(extra_config) + + +def get_datadir_path(dirname, n): + return pathlib.Path(dirname) / f"node{n}" + + +def get_temp_default_datadir(temp_dir: pathlib.Path) -> Tuple[dict, pathlib.Path]: + """Return os-specific environment variables that can be set to make the + GetDefaultDataDir() function return a datadir path under the provided + temp_dir, as well as the complete path it would return.""" + if sys.platform == "win32": + env = dict(APPDATA=str(temp_dir)) + datadir = temp_dir / "Bitcoin" + else: + env = dict(HOME=str(temp_dir)) + if sys.platform == "darwin": + datadir = temp_dir / "Library/Application Support/Bitcoin" + else: + datadir = temp_dir / ".bitcoin" + return env, datadir + + +def append_config(datadir, options): + with open(os.path.join(datadir, "bitcoin.conf"), 'a', encoding='utf8') as f: + for option in options: + f.write(option + "\n") + + +def get_auth_cookie(datadir, chain): + user = None + password = None + if os.path.isfile(os.path.join(datadir, "bitcoin.conf")): + with open(os.path.join(datadir, "bitcoin.conf"), 'r', encoding='utf8') as f: + for line in f: + if line.startswith("rpcuser="): + assert user is None # Ensure that there is only one rpcuser line + user = line.split("=")[1].strip("\n") + if line.startswith("rpcpassword="): + assert password is None # Ensure that there is only one rpcpassword line + password = line.split("=")[1].strip("\n") + try: + with open(os.path.join(datadir, chain, ".cookie"), 'r', encoding="ascii") as f: + userpass = f.read() + split_userpass = userpass.split(':') + user = split_userpass[0] + password = split_userpass[1] + except OSError: + pass + if user is None or password is None: + raise ValueError("No RPC credentials") + return user, password + + +# If a cookie file exists in the given datadir, delete it. +def delete_cookie_file(datadir, chain): + if os.path.isfile(os.path.join(datadir, chain, ".cookie")): + logger.debug("Deleting leftover cookie file") + os.remove(os.path.join(datadir, chain, ".cookie")) + + +def softfork_active(node, key): + """Return whether a softfork is active.""" + return node.getdeploymentinfo()['deployments'][key]['active'] + + +def set_node_times(nodes, t): + for node in nodes: + node.setmocktime(t) + + +def check_node_connections(*, node, num_in, num_out): + info = node.getnetworkinfo() + assert_equal(info["connections_in"], num_in) + assert_equal(info["connections_out"], num_out) + + +# Transaction/Block functions +############################# + + +def find_output(node, txid, amount, *, blockhash=None): + """ + Return index to output of txid with value amount + Raises exception if there is none. + """ + txdata = node.getrawtransaction(txid, 1, blockhash) + for i in range(len(txdata["vout"])): + if txdata["vout"][i]["value"] == amount: + return i + raise RuntimeError("find_output txid %s : %s not found" % (txid, str(amount))) + + +# Create large OP_RETURN txouts that can be appended to a transaction +# to make it large (helper for constructing large transactions). The +# total serialized size of the txouts is about 66k vbytes. 
+def gen_return_txouts(): + from .messages import CTxOut + from .script import CScript, OP_RETURN + txouts = [CTxOut(nValue=0, scriptPubKey=CScript([OP_RETURN, b'\x01'*67437]))] + assert_equal(sum([len(txout.serialize()) for txout in txouts]), 67456) + return txouts + + +# Create a spend of each passed-in utxo, splicing in "txouts" to each raw +# transaction to make it large. See gen_return_txouts() above. +def create_lots_of_big_transactions(mini_wallet, node, fee, tx_batch_size, txouts, utxos=None): + txids = [] + use_internal_utxos = utxos is None + for _ in range(tx_batch_size): + tx = mini_wallet.create_self_transfer( + utxo_to_spend=None if use_internal_utxos else utxos.pop(), + fee=fee, + )["tx"] + tx.vout.extend(txouts) + res = node.testmempoolaccept([tx.serialize().hex()])[0] + assert_equal(res['fees']['base'], fee) + txids.append(node.sendrawtransaction(tx.serialize().hex())) + return txids + + +def mine_large_block(test_framework, mini_wallet, node): + # generate a 66k transaction, + # and 14 of them is close to the 1MB block limit + txouts = gen_return_txouts() + fee = 100 * node.getnetworkinfo()["relayfee"] + create_lots_of_big_transactions(mini_wallet, node, fee, 14, txouts) + test_framework.generate(node, 1) + + +def find_vout_for_address(node, txid, addr): + """ + Locate the vout index of the given transaction sending to the + given address. Raises runtime error exception if not found. + """ + tx = node.getrawtransaction(txid, True) + for i in range(len(tx["vout"])): + if addr == tx["vout"][i]["scriptPubKey"]["address"]: + return i + raise RuntimeError("Vout not found for address: txid=%s, addr=%s" % (txid, addr)) diff --git a/warnet-demo/scenarios/test_framework/wallet.py b/warnet-demo/scenarios/test_framework/wallet.py new file mode 100644 index 0000000..035a482 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/wallet.py @@ -0,0 +1,426 @@ +#!/usr/bin/env python3 +# Copyright (c) 2020-2022 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""A limited-functionality wallet, which may replace a real wallet in tests""" + +from copy import deepcopy +from decimal import Decimal +from enum import Enum +from typing import ( + Any, + List, + Optional, +) +from test_framework.address import ( + address_to_scriptpubkey, + create_deterministic_address_bcrt1_p2tr_op_true, + key_to_p2pkh, + key_to_p2sh_p2wpkh, + key_to_p2wpkh, + output_key_to_p2tr, +) +from test_framework.blocktools import COINBASE_MATURITY +from test_framework.descriptors import descsum_create +from test_framework.key import ( + ECKey, + compute_xonly_pubkey, +) +from test_framework.messages import ( + COIN, + COutPoint, + CTransaction, + CTxIn, + CTxInWitness, + CTxOut, +) +from test_framework.script import ( + CScript, + LEAF_VERSION_TAPSCRIPT, + OP_NOP, + OP_RETURN, + OP_TRUE, + sign_input_legacy, + taproot_construct, +) +from test_framework.script_util import ( + key_to_p2pk_script, + key_to_p2pkh_script, + key_to_p2sh_p2wpkh_script, + key_to_p2wpkh_script, +) +from test_framework.util import ( + assert_equal, + assert_greater_than_or_equal, +) +from test_framework.wallet_util import generate_keypair + +DEFAULT_FEE = Decimal("0.0001") + +class MiniWalletMode(Enum): + """Determines the transaction type the MiniWallet is creating and spending. 
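A sketch of how the large-transaction helpers combine to stuff a near-1MB block; it assumes a `BitcoinTestFramework` subclass instance (`self`) and a MiniWallet that already holds mature UTXOs:

```python
# Sketch: mine a block filled with large OP_RETURN transactions.
# `self` is assumed to be a BitcoinTestFramework subclass instance and the
# MiniWallet must already hold mature UTXOs to spend.
from test_framework.util import mine_large_block
from test_framework.wallet import MiniWallet

wallet = MiniWallet(self.nodes[0])
mine_large_block(self, wallet, self.nodes[0])  # ~14 x 66k-vbyte txs
```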
+ + For most purposes, the default mode ADDRESS_OP_TRUE should be sufficient; + it simply uses a fixed bech32m P2TR address whose coins are spent with a + witness stack of OP_TRUE, i.e. following an anyone-can-spend policy. + However, if the transactions need to be modified by the user (e.g. prepending + scriptSig for testing opcodes that are activated by a soft-fork), or the txs + should contain an actual signature, the raw modes RAW_OP_TRUE and RAW_P2PK + can be useful. Summary of modes: + + | output | | tx is | can modify | needs + mode | description | address | standard | scriptSig | signing + ----------------+-------------------+-----------+----------+------------+---------- + ADDRESS_OP_TRUE | anyone-can-spend | bech32m | yes | no | no + RAW_OP_TRUE | anyone-can-spend | - (raw) | no | yes | no + RAW_P2PK | pay-to-public-key | - (raw) | yes | yes | yes + """ + ADDRESS_OP_TRUE = 1 + RAW_OP_TRUE = 2 + RAW_P2PK = 3 + + +class MiniWallet: + def __init__(self, test_node, *, mode=MiniWalletMode.ADDRESS_OP_TRUE): + self._test_node = test_node + self._utxos = [] + self._mode = mode + + assert isinstance(mode, MiniWalletMode) + if mode == MiniWalletMode.RAW_OP_TRUE: + self._scriptPubKey = bytes(CScript([OP_TRUE])) + elif mode == MiniWalletMode.RAW_P2PK: + # use simple deterministic private key (k=1) + self._priv_key = ECKey() + self._priv_key.set((1).to_bytes(32, 'big'), True) + pub_key = self._priv_key.get_pubkey() + self._scriptPubKey = key_to_p2pk_script(pub_key.get_bytes()) + elif mode == MiniWalletMode.ADDRESS_OP_TRUE: + self._address, self._internal_key = create_deterministic_address_bcrt1_p2tr_op_true() + self._scriptPubKey = address_to_scriptpubkey(self._address) + + # When the pre-mined test framework chain is used, it contains coinbase + # outputs to the MiniWallet's default address in blocks 76-100 + # (see method BitcoinTestFramework._initialize_chain()) + # The MiniWallet needs to rescan_utxos() in order to account + # for those mature UTXOs, so that all txs spend confirmed coins + self.rescan_utxos() + + def _create_utxo(self, *, txid, vout, value, height, coinbase, confirmations): + return {"txid": txid, "vout": vout, "value": value, "height": height, "coinbase": coinbase, "confirmations": confirmations} + + def _bulk_tx(self, tx, target_weight): + """Pad a transaction with extra outputs until it reaches a target weight (or higher). 
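Selecting a mode is just a constructor argument; for example, a sketch of a P2PK wallet that produces standard, signed transactions (`node` is hypothetical):

```python
# Sketch: MiniWallet in RAW_P2PK mode (standard txs, needs signing),
# versus the default ADDRESS_OP_TRUE anyone-can-spend mode.
from test_framework.wallet import MiniWallet, MiniWalletMode

wallet = MiniWallet(node, mode=MiniWalletMode.RAW_P2PK)
```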
+ returns the tx + """ + tx.vout.append(CTxOut(nValue=0, scriptPubKey=CScript([OP_RETURN, b'a']))) + dummy_vbytes = (target_weight - tx.get_weight() + 3) // 4 + tx.vout[-1].scriptPubKey = CScript([OP_RETURN, b'a' * dummy_vbytes]) + # Lower bound should always be off by at most 3 + assert_greater_than_or_equal(tx.get_weight(), target_weight) + # Higher bound should always be off by at most 3 + 12 weight (for encoding the length) + assert_greater_than_or_equal(target_weight + 15, tx.get_weight()) + + def get_balance(self): + return sum(u['value'] for u in self._utxos) + + def rescan_utxos(self, *, include_mempool=True): + """Drop all utxos and rescan the utxo set""" + self._utxos = [] + res = self._test_node.scantxoutset(action="start", scanobjects=[self.get_descriptor()]) + assert_equal(True, res['success']) + for utxo in res['unspents']: + self._utxos.append( + self._create_utxo(txid=utxo["txid"], + vout=utxo["vout"], + value=utxo["amount"], + height=utxo["height"], + coinbase=utxo["coinbase"], + confirmations=res["height"] - utxo["height"] + 1)) + if include_mempool: + mempool = self._test_node.getrawmempool(verbose=True) + # Sort tx by ancestor count. See BlockAssembler::SortForBlock in src/node/miner.cpp + sorted_mempool = sorted(mempool.items(), key=lambda item: (item[1]["ancestorcount"], int(item[0], 16))) + for txid, _ in sorted_mempool: + self.scan_tx(self._test_node.getrawtransaction(txid=txid, verbose=True)) + + def scan_tx(self, tx): + """Scan the tx and adjust the internal list of owned utxos""" + for spent in tx["vin"]: + # Mark spent. This may happen when the caller has ownership of a + # utxo that remained in this wallet. For example, by passing + # mark_as_spent=False to get_utxo or by using an utxo returned by a + # create_self_transfer* call. + try: + self.get_utxo(txid=spent["txid"], vout=spent["vout"]) + except StopIteration: + pass + for out in tx['vout']: + if out['scriptPubKey']['hex'] == self._scriptPubKey.hex(): + self._utxos.append(self._create_utxo(txid=tx["txid"], vout=out["n"], value=out["value"], height=0, coinbase=False, confirmations=0)) + + def scan_txs(self, txs): + for tx in txs: + self.scan_tx(tx) + + def sign_tx(self, tx, fixed_length=True): + if self._mode == MiniWalletMode.RAW_P2PK: + # for exact fee calculation, create only signatures with fixed size by default (>49.89% probability): + # 65 bytes: high-R val (33 bytes) + low-S val (32 bytes) + # with the DER header/skeleton data of 6 bytes added, plus 2 bytes scriptSig overhead + # (OP_PUSHn and SIGHASH_ALL), this leads to a scriptSig target size of 73 bytes + tx.vin[0].scriptSig = b'' + while not len(tx.vin[0].scriptSig) == 73: + tx.vin[0].scriptSig = b'' + sign_input_legacy(tx, 0, self._scriptPubKey, self._priv_key) + if not fixed_length: + break + elif self._mode == MiniWalletMode.RAW_OP_TRUE: + for i in tx.vin: + i.scriptSig = CScript([OP_NOP] * 43) # pad to identical size + elif self._mode == MiniWalletMode.ADDRESS_OP_TRUE: + tx.wit.vtxinwit = [CTxInWitness()] * len(tx.vin) + for i in tx.wit.vtxinwit: + i.scriptWitness.stack = [CScript([OP_TRUE]), bytes([LEAF_VERSION_TAPSCRIPT]) + self._internal_key] + else: + assert False + + def generate(self, num_blocks, **kwargs): + """Generate blocks with coinbase outputs to the internal address, and call rescan_utxos""" + blocks = self._test_node.generatetodescriptor(num_blocks, self.get_descriptor(), **kwargs) + # Calling rescan_utxos here makes sure that after a generate the utxo + # set is in a clean state. 
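Putting the pieces above together, a minimal funding flow might look like this (sketch; `node` is a hypothetical regtest node):

```python
# Sketch: fund a MiniWallet by mining blocks to its descriptor, then
# inspect the tracked balance. 101 blocks mature one coinbase output.
from test_framework.wallet import MiniWallet

wallet = MiniWallet(node)
wallet.generate(101)          # mines to the wallet and rescans utxos
print(wallet.get_balance())   # sum of all tracked utxo values
```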
For example, the wallet will update + # - if the caller consumed utxos, but never used them + # - if the caller sent a transaction that is not mined or got rbf'd + # - after block re-orgs + # - the utxo height for mined mempool txs + # - However, the wallet will not consider remaining mempool txs + self.rescan_utxos() + return blocks + + def get_scriptPubKey(self): + return self._scriptPubKey + + def get_descriptor(self): + return descsum_create(f'raw({self._scriptPubKey.hex()})') + + def get_address(self): + assert_equal(self._mode, MiniWalletMode.ADDRESS_OP_TRUE) + return self._address + + def get_utxo(self, *, txid: str = '', vout: Optional[int] = None, mark_as_spent=True, confirmed_only=False) -> dict: + """ + Returns a utxo and marks it as spent (pops it from the internal list) + + Args: + txid: get the first utxo we find from a specific transaction + """ + self._utxos = sorted(self._utxos, key=lambda k: (k['value'], -k['height'])) # Put the largest utxo last + blocks_height = self._test_node.getblockchaininfo()['blocks'] + mature_coins = list(filter(lambda utxo: not utxo['coinbase'] or COINBASE_MATURITY - 1 <= blocks_height - utxo['height'], self._utxos)) + if txid: + utxo_filter: Any = filter(lambda utxo: txid == utxo['txid'], self._utxos) + else: + utxo_filter = reversed(mature_coins) # By default the largest utxo + if vout is not None: + utxo_filter = filter(lambda utxo: vout == utxo['vout'], utxo_filter) + if confirmed_only: + utxo_filter = filter(lambda utxo: utxo['confirmations'] > 0, utxo_filter) + index = self._utxos.index(next(utxo_filter)) + if mark_as_spent: + return self._utxos.pop(index) + else: + return self._utxos[index] + + def get_utxos(self, *, include_immature_coinbase=False, mark_as_spent=True, confirmed_only=False): + """Returns the list of all utxos and optionally mark them as spent""" + if not include_immature_coinbase: + blocks_height = self._test_node.getblockchaininfo()['blocks'] + utxo_filter = filter(lambda utxo: not utxo['coinbase'] or COINBASE_MATURITY - 1 <= blocks_height - utxo['height'], self._utxos) + else: + utxo_filter = self._utxos + if confirmed_only: + utxo_filter = filter(lambda utxo: utxo['confirmations'] > 0, utxo_filter) + utxos = deepcopy(list(utxo_filter)) + if mark_as_spent: + self._utxos = [] + return utxos + + def send_self_transfer(self, *, from_node, **kwargs): + """Call create_self_transfer and send the transaction.""" + tx = self.create_self_transfer(**kwargs) + self.sendrawtransaction(from_node=from_node, tx_hex=tx['hex']) + return tx + + def send_to(self, *, from_node, scriptPubKey, amount, fee=1000): + """ + Create and send a tx with an output to a given scriptPubKey/amount, + plus a change output to our internal address. To keep things simple, a + fixed fee given in Satoshi is used. + + Note that this method fails if there is no single internal utxo + available that can cover the cost for the amount and the fixed fee + (the utxo with the largest value is taken). 
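A sketch of the spend side: `get_utxo` pops the largest mature UTXO by default and marks it spent in the wallet's internal list (continuing with the hypothetical `wallet` and `node` from above):

```python
# Sketch: explicit utxo selection followed by a self-transfer.
utxo = wallet.get_utxo()  # largest mature utxo; removed from tracking
tx = wallet.send_self_transfer(from_node=node, utxo_to_spend=utxo)
print(tx["txid"], tx["new_utxo"]["value"])  # change tracked automatically
```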
+ """ + tx = self.create_self_transfer(fee_rate=0)["tx"] + assert_greater_than_or_equal(tx.vout[0].nValue, amount + fee) + tx.vout[0].nValue -= (amount + fee) # change output -> MiniWallet + tx.vout.append(CTxOut(amount, scriptPubKey)) # arbitrary output -> to be returned + txid = self.sendrawtransaction(from_node=from_node, tx_hex=tx.serialize().hex()) + return { + "sent_vout": 1, + "txid": txid, + "wtxid": tx.getwtxid(), + "hex": tx.serialize().hex(), + "tx": tx, + } + + def send_self_transfer_multi(self, *, from_node, **kwargs): + """Call create_self_transfer_multi and send the transaction.""" + tx = self.create_self_transfer_multi(**kwargs) + self.sendrawtransaction(from_node=from_node, tx_hex=tx["hex"]) + return tx + + def create_self_transfer_multi( + self, + *, + utxos_to_spend: Optional[List[dict]] = None, + num_outputs=1, + amount_per_output=0, + locktime=0, + sequence=0, + fee_per_output=1000, + target_weight=0, + confirmed_only=False + ): + """ + Create and return a transaction that spends the given UTXOs and creates a + certain number of outputs with equal amounts. The output amounts can be + set by amount_per_output or automatically calculated with a fee_per_output. + """ + utxos_to_spend = utxos_to_spend or [self.get_utxo(confirmed_only=confirmed_only)] + sequence = [sequence] * len(utxos_to_spend) if type(sequence) is int else sequence + assert_equal(len(utxos_to_spend), len(sequence)) + + # calculate output amount + inputs_value_total = sum([int(COIN * utxo['value']) for utxo in utxos_to_spend]) + outputs_value_total = inputs_value_total - fee_per_output * num_outputs + amount_per_output = amount_per_output or (outputs_value_total // num_outputs) + assert amount_per_output > 0 + outputs_value_total = amount_per_output * num_outputs + fee = Decimal(inputs_value_total - outputs_value_total) / COIN + + # create tx + tx = CTransaction() + tx.vin = [CTxIn(COutPoint(int(utxo_to_spend['txid'], 16), utxo_to_spend['vout']), nSequence=seq) for utxo_to_spend, seq in zip(utxos_to_spend, sequence)] + tx.vout = [CTxOut(amount_per_output, bytearray(self._scriptPubKey)) for _ in range(num_outputs)] + tx.nLockTime = locktime + + self.sign_tx(tx) + + if target_weight: + self._bulk_tx(tx, target_weight) + + txid = tx.rehash() + return { + "new_utxos": [self._create_utxo( + txid=txid, + vout=i, + value=Decimal(tx.vout[i].nValue) / COIN, + height=0, + coinbase=False, + confirmations=0, + ) for i in range(len(tx.vout))], + "fee": fee, + "txid": txid, + "wtxid": tx.getwtxid(), + "hex": tx.serialize().hex(), + "tx": tx, + } + + def create_self_transfer(self, *, + fee_rate=Decimal("0.003"), + fee=Decimal("0"), + utxo_to_spend=None, + locktime=0, + sequence=0, + target_weight=0, + confirmed_only=False + ): + """Create and return a tx with the specified fee. 
If fee is 0, use fee_rate, where the resulting fee may be exact or at most one satoshi higher than needed.""" + utxo_to_spend = utxo_to_spend or self.get_utxo(confirmed_only=confirmed_only) + assert fee_rate >= 0 + assert fee >= 0 + # calculate fee + if self._mode in (MiniWalletMode.RAW_OP_TRUE, MiniWalletMode.ADDRESS_OP_TRUE): + vsize = Decimal(104) # anyone-can-spend + elif self._mode == MiniWalletMode.RAW_P2PK: + vsize = Decimal(168) # P2PK (73 bytes scriptSig + 35 bytes scriptPubKey + 60 bytes other) + else: + assert False + send_value = utxo_to_spend["value"] - (fee or (fee_rate * vsize / 1000)) + + # create tx + tx = self.create_self_transfer_multi(utxos_to_spend=[utxo_to_spend], locktime=locktime, sequence=sequence, amount_per_output=int(COIN * send_value), target_weight=target_weight) + if not target_weight: + assert_equal(tx["tx"].get_vsize(), vsize) + tx["new_utxo"] = tx.pop("new_utxos")[0] + + return tx + + def sendrawtransaction(self, *, from_node, tx_hex, maxfeerate=0, **kwargs): + txid = from_node.sendrawtransaction(hexstring=tx_hex, maxfeerate=maxfeerate, **kwargs) + self.scan_tx(from_node.decoderawtransaction(tx_hex)) + return txid + + def create_self_transfer_chain(self, *, chain_length, utxo_to_spend=None): + """ + Create a "chain" of chain_length transactions. The nth transaction in + the chain is a child of the n-1th transaction and parent of the n+1th transaction. + """ + chaintip_utxo = utxo_to_spend or self.get_utxo() + chain = [] + + for _ in range(chain_length): + tx = self.create_self_transfer(utxo_to_spend=chaintip_utxo) + chaintip_utxo = tx["new_utxo"] + chain.append(tx) + + return chain + + def send_self_transfer_chain(self, *, from_node, **kwargs): + """Create and send a "chain" of chain_length transactions. The nth transaction in + the chain is a child of the n-1th transaction and parent of the n+1th transaction. + + Returns a list of objects for each tx (see create_self_transfer_multi). + """ + chain = self.create_self_transfer_chain(**kwargs) + for t in chain: + self.sendrawtransaction(from_node=from_node, tx_hex=t["hex"]) + return chain + + +def getnewdestination(address_type='bech32m'): + """Generate a random destination of the specified type and return the + corresponding public key, scriptPubKey and address. Supported types are + 'legacy', 'p2sh-segwit', 'bech32' and 'bech32m'. Can be used when a random + destination is needed, but no compiled wallet is available (e.g. 
as + replacement to the getnewaddress/getaddressinfo RPCs).""" + key, pubkey = generate_keypair() + if address_type == 'legacy': + scriptpubkey = key_to_p2pkh_script(pubkey) + address = key_to_p2pkh(pubkey) + elif address_type == 'p2sh-segwit': + scriptpubkey = key_to_p2sh_p2wpkh_script(pubkey) + address = key_to_p2sh_p2wpkh(pubkey) + elif address_type == 'bech32': + scriptpubkey = key_to_p2wpkh_script(pubkey) + address = key_to_p2wpkh(pubkey) + elif address_type == 'bech32m': + tap = taproot_construct(compute_xonly_pubkey(key.get_bytes())[0]) + pubkey = tap.output_pubkey + scriptpubkey = tap.scriptPubKey + address = output_key_to_p2tr(pubkey) + else: + assert False + return pubkey, scriptpubkey, address diff --git a/warnet-demo/scenarios/test_framework/wallet_util.py b/warnet-demo/scenarios/test_framework/wallet_util.py new file mode 100644 index 0000000..4481191 --- /dev/null +++ b/warnet-demo/scenarios/test_framework/wallet_util.py @@ -0,0 +1,143 @@ +#!/usr/bin/env python3 +# Copyright (c) 2018-2021 The Bitcoin Core developers +# Distributed under the MIT software license, see the accompanying +# file COPYING or http://www.opensource.org/licenses/mit-license.php. +"""Useful util functions for testing the wallet""" +from collections import namedtuple + +from test_framework.address import ( + byte_to_base58, + key_to_p2pkh, + key_to_p2sh_p2wpkh, + key_to_p2wpkh, + script_to_p2sh, + script_to_p2sh_p2wsh, + script_to_p2wsh, +) +from test_framework.key import ECKey +from test_framework.script_util import ( + key_to_p2pkh_script, + key_to_p2wpkh_script, + keys_to_multisig_script, + script_to_p2sh_script, + script_to_p2wsh_script, +) + +Key = namedtuple('Key', ['privkey', + 'pubkey', + 'p2pkh_script', + 'p2pkh_addr', + 'p2wpkh_script', + 'p2wpkh_addr', + 'p2sh_p2wpkh_script', + 'p2sh_p2wpkh_redeem_script', + 'p2sh_p2wpkh_addr']) + +Multisig = namedtuple('Multisig', ['privkeys', + 'pubkeys', + 'p2sh_script', + 'p2sh_addr', + 'redeem_script', + 'p2wsh_script', + 'p2wsh_addr', + 'p2sh_p2wsh_script', + 'p2sh_p2wsh_addr']) + +def get_key(node): + """Generate a fresh key on node + + Returns a named tuple of privkey, pubkey and all address and scripts.""" + addr = node.getnewaddress() + pubkey = node.getaddressinfo(addr)['pubkey'] + return Key(privkey=node.dumpprivkey(addr), + pubkey=pubkey, + p2pkh_script=key_to_p2pkh_script(pubkey).hex(), + p2pkh_addr=key_to_p2pkh(pubkey), + p2wpkh_script=key_to_p2wpkh_script(pubkey).hex(), + p2wpkh_addr=key_to_p2wpkh(pubkey), + p2sh_p2wpkh_script=script_to_p2sh_script(key_to_p2wpkh_script(pubkey)).hex(), + p2sh_p2wpkh_redeem_script=key_to_p2wpkh_script(pubkey).hex(), + p2sh_p2wpkh_addr=key_to_p2sh_p2wpkh(pubkey)) + +def get_generate_key(): + """Generate a fresh key + + Returns a named tuple of privkey, pubkey and all address and scripts.""" + privkey, pubkey = generate_keypair(wif=True) + return Key(privkey=privkey, + pubkey=pubkey.hex(), + p2pkh_script=key_to_p2pkh_script(pubkey).hex(), + p2pkh_addr=key_to_p2pkh(pubkey), + p2wpkh_script=key_to_p2wpkh_script(pubkey).hex(), + p2wpkh_addr=key_to_p2wpkh(pubkey), + p2sh_p2wpkh_script=script_to_p2sh_script(key_to_p2wpkh_script(pubkey)).hex(), + p2sh_p2wpkh_redeem_script=key_to_p2wpkh_script(pubkey).hex(), + p2sh_p2wpkh_addr=key_to_p2sh_p2wpkh(pubkey)) + +def get_multisig(node): + """Generate a fresh 2-of-3 multisig on node + + Returns a named tuple of privkeys, pubkeys and all address and scripts.""" + addrs = [] + pubkeys = [] + for _ in range(3): + addr = node.getaddressinfo(node.getnewaddress()) + 
addrs.append(addr['address']) + pubkeys.append(addr['pubkey']) + script_code = keys_to_multisig_script(pubkeys, k=2) + witness_script = script_to_p2wsh_script(script_code) + return Multisig(privkeys=[node.dumpprivkey(addr) for addr in addrs], + pubkeys=pubkeys, + p2sh_script=script_to_p2sh_script(script_code).hex(), + p2sh_addr=script_to_p2sh(script_code), + redeem_script=script_code.hex(), + p2wsh_script=witness_script.hex(), + p2wsh_addr=script_to_p2wsh(script_code), + p2sh_p2wsh_script=script_to_p2sh_script(witness_script).hex(), + p2sh_p2wsh_addr=script_to_p2sh_p2wsh(script_code)) + +def test_address(node, address, **kwargs): + """Get address info for `address` and test whether the returned values are as expected.""" + addr_info = node.getaddressinfo(address) + for key, value in kwargs.items(): + if value is None: + if key in addr_info.keys(): + raise AssertionError("key {} unexpectedly returned in getaddressinfo.".format(key)) + elif addr_info[key] != value: + raise AssertionError("key {} value {} did not match expected value {}".format(key, addr_info[key], value)) + +def bytes_to_wif(b, compressed=True): + if compressed: + b += b'\x01' + return byte_to_base58(b, 239) + +def generate_keypair(compressed=True, wif=False): + """Generate a new random keypair and return the corresponding ECKey / + bytes objects. The private key can also be provided as WIF (wallet + import format) string instead, which is often useful for wallet RPC + interaction.""" + privkey = ECKey() + privkey.generate(compressed) + pubkey = privkey.get_pubkey().get_bytes() + if wif: + privkey = bytes_to_wif(privkey.get_bytes(), compressed) + return privkey, pubkey + +class WalletUnlock(): + """ + A context manager for unlocking a wallet with a passphrase and automatically locking it afterward. 
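A sketch of the standalone key helpers, useful when no compiled wallet is available:

```python
# Sketch: generate keys without a wallet; wif=True returns the private key
# as a regtest WIF string suitable for importprivkey.
from test_framework.wallet_util import generate_keypair

privkey, pubkey = generate_keypair()       # ECKey object, 33-byte pubkey
wif, pubkey2 = generate_keypair(wif=True)  # WIF string instead of ECKey
```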
+ """ + + MAXIMUM_TIMEOUT = 999000 + + def __init__(self, wallet, passphrase, timeout=MAXIMUM_TIMEOUT): + self.wallet = wallet + self.passphrase = passphrase + self.timeout = timeout + + def __enter__(self): + self.wallet.walletpassphrase(self.passphrase, self.timeout) + + def __exit__(self, *args): + _ = args + self.wallet.walletlock() diff --git a/warnet-demo/scenarios/test_framework/xswiftec_inv_test_vectors.csv b/warnet-demo/scenarios/test_framework/xswiftec_inv_test_vectors.csv new file mode 100644 index 0000000..138c4cf --- /dev/null +++ b/warnet-demo/scenarios/test_framework/xswiftec_inv_test_vectors.csv @@ -0,0 +1,33 @@ +u,x,case0_t,case1_t,case2_t,case3_t,case4_t,case5_t,case6_t,case7_t,comment +05ff6bdad900fc3261bc7fe34e2fb0f569f06e091ae437d3a52e9da0cbfb9590,80cdf63774ec7022c89a5a8558e373a279170285e0ab27412dbce510bdfe23fc,,,45654798ece071ba79286d04f7f3eb1c3f1d17dd883610f2ad2efd82a287466b,0aeaa886f6b76c7158452418cbf5033adc5747e9e9b5d3b2303db96936528557,,,ba9ab867131f8e4586d792fb080c14e3c0e2e82277c9ef0d52d1027c5d78b5c4,f51557790948938ea7badbe7340afcc523a8b816164a2c4dcfc24695c9ad76d8,case0:bad[valid_x(-x-u)];case1:bad[valid_x(-x-u)];case2:info[v=0]&ok;case3:ok;case4:bad[valid_x(-x-u)];case5:bad[valid_x(-x-u)];case6:info[v=0]&ok;case7:ok +1737a85f4c8d146cec96e3ffdca76d9903dcf3bd53061868d478c78c63c2aa9e,39e48dd150d2f429be088dfd5b61882e7e8407483702ae9a5ab35927b15f85ea,1be8cc0b04be0c681d0c6a68f733f82c6c896e0c8a262fcd392918e303a7abf4,605b5814bf9b8cb066667c9e5480d22dc5b6c92f14b4af3ee0a9eb83b03685e3,,,e41733f4fb41f397e2f3959708cc07d3937691f375d9d032c6d6e71bfc58503b,9fa4a7eb4064734f99998361ab7f2dd23a4936d0eb4b50c11f56147b4fc9764c,,,case0:ok;case1:ok;case2:info[v=0]&bad[non_square(s)];case3:bad[non_square(s)];case4:ok;case5:ok;case6:info[v=0]&bad[non_square(s)];case7:bad[non_square(s)] +1aaa1ccebf9c724191033df366b36f691c4d902c228033ff4516d122b2564f68,c75541259d3ba98f207eaa30c69634d187d0b6da594e719e420f4898638fc5b0,,,,,,,,,case0:bad[valid_x(-x-u)];case1:bad[valid_x(-x-u)];case2:bad[non_square(q)];case3:bad[non_square(q)];case4:bad[valid_x(-x-u)];case5:bad[valid_x(-x-u)];case6:bad[non_square(q)];case7:bad[non_square(q)] +2323a1d079b0fd72fc8bb62ec34230a815cb0596c2bfac998bd6b84260f5dc26,239342dfb675500a34a196310b8d87d54f49dcac9da50c1743ceab41a7b249ff,f63580b8aa49c4846de56e39e1b3e73f171e881eba8c66f614e67e5c975dfc07,b6307b332e699f1cf77841d90af25365404deb7fed5edb3090db49e642a156b6,,,09ca7f4755b63b7b921a91c61e4c18c0e8e177e145739909eb1981a268a20028,49cf84ccd19660e30887be26f50dac9abfb2148012a124cf6f24b618bd5ea579,,,case0:ok;case1:ok;case2:bad[non_square(q)];case3:bad[non_square(q)];case4:ok;case5:ok;case6:bad[non_square(q)];case7:bad[non_square(q)] +2dc90e640cb646ae9164c0b5a9ef0169febe34dc4437d6e46acb0e27e219d1e8,d236f19bf349b9516e9b3f4a5610fe960141cb23bbc8291b9534f1d71de62a47,e69df7d9c026c36600ebdf588072675847c0c431c8eb730682533e964b6252c9,4f18bbdf7c2d6c5f818c18802fa35cd069eaa79fff74e4fc837c80d93fece2f8,,,196208263fd93c99ff1420a77f8d98a7b83f3bce37148cf97dacc168b49da966,b0e7442083d293a07e73e77fd05ca32f96155860008b1b037c837f25c0131937,,,case0:ok;case1:info[v=0]&ok;case2:bad[non_square(q)];case3:bad[non_square(q)];case4:ok;case5:info[v=0]&ok;case6:bad[non_square(q)];case7:bad[non_square(q)] 
+3edd7b3980e2f2f34d1409a207069f881fda5f96f08027ac4465b63dc278d672,053a98de4a27b1961155822b3a3121f03b2a14458bd80eb4a560c4c7a85c149c,,,b3dae4b7dcf858e4c6968057cef2b156465431526538199cf52dc1b2d62fda30,4aa77dd55d6b6d3cfa10cc9d0fe42f79232e4575661049ae36779c1d0c666d88,,,4c251b482307a71b39697fa8310d4ea9b9abcead9ac7e6630ad23e4c29d021ff,b558822aa29492c305ef3362f01bd086dcd1ba8a99efb651c98863e1f3998ea7,case0:bad[valid_x(-x-u)];case1:bad[valid_x(-x-u)];case2:ok;case3:ok;case4:bad[valid_x(-x-u)];case5:bad[valid_x(-x-u)];case6:ok;case7:ok +4295737efcb1da6fb1d96b9ca7dcd1e320024b37a736c4948b62598173069f70,fa7ffe4f25f88362831c087afe2e8a9b0713e2cac1ddca6a383205a266f14307,,,,,,,,,case0:bad[non_square(s)];case1:bad[non_square(s)];case2:bad[non_square(s)];case3:bad[non_square(s)];case4:bad[non_square(s)];case5:bad[non_square(s)];case6:bad[non_square(s)];case7:bad[non_square(s)] +587c1a0cee91939e7f784d23b963004a3bf44f5d4e32a0081995ba20b0fca59e,2ea988530715e8d10363907ff25124524d471ba2454d5ce3be3f04194dfd3a3c,cfd5a094aa0b9b8891b76c6ab9438f66aa1c095a65f9f70135e8171292245e74,a89057d7c6563f0d6efa19ae84412b8a7b47e791a191ecdfdf2af84fd97bc339,475d0ae9ef46920df07b34117be5a0817de1023e3cc32689e9be145b406b0aef,a0759178ad80232454f827ef05ea3e72ad8d75418e6d4cc1cd4f5306c5e7c453,302a5f6b55f464776e48939546bc709955e3f6a59a0608feca17e8ec6ddb9dbb,576fa82839a9c0f29105e6517bbed47584b8186e5e6e132020d507af268438f6,b8a2f51610b96df20f84cbee841a5f7e821efdc1c33cd9761641eba3bf94f140,5f8a6e87527fdcdbab07d810fa15c18d52728abe7192b33e32b0acf83a1837dc,case0:ok;case1:ok;case2:ok;case3:ok;case4:ok;case5:ok;case6:ok;case7:ok +5fa88b3365a635cbbcee003cce9ef51dd1a310de277e441abccdb7be1e4ba249,79461ff62bfcbcac4249ba84dd040f2cec3c63f725204dc7f464c16bf0ff3170,,,6bb700e1f4d7e236e8d193ff4a76c1b3bcd4e2b25acac3d51c8dac653fe909a0,f4c73410633da7f63a4f1d55aec6dd32c4c6d89ee74075edb5515ed90da9e683,,,9448ff1e0b281dc9172e6c00b5893e4c432b1d4da5353c2ae3725399c016f28f,0b38cbef9cc25809c5b0e2aa513922cd3b39276118bf8a124aaea125f25615ac,case0:bad[non_square(s)];case1:bad[non_square(s)];case2:ok;case3:info[v=0]&ok;case4:bad[non_square(s)];case5:bad[non_square(s)];case6:ok;case7:info[v=0]&ok +6fb31c7531f03130b42b155b952779efbb46087dd9807d241a48eac63c3d96d6,56f81be753e8d4ae4940ea6f46f6ec9fda66a6f96cc95f506cb2b57490e94260,,,59059774795bdb7a837fbe1140a5fa59984f48af8df95d57dd6d1c05437dcec1,22a644db79376ad4e7b3a009e58b3f13137c54fdf911122cc93667c47077d784,,,a6fa688b86a424857c8041eebf5a05a667b0b7507206a2a82292e3f9bc822d6e,dd59bb2486c8952b184c5ff61a74c0ecec83ab0206eeedd336c9983a8f8824ab,case0:bad[valid_x(-x-u)];case1:bad[valid_x(-x-u)];case2:ok;case3:info[v=0]&ok;case4:bad[valid_x(-x-u)];case5:bad[valid_x(-x-u)];case6:ok;case7:info[v=0]&ok +704cd226e71cb6826a590e80dac90f2d2f5830f0fdf135a3eae3965bff25ff12,138e0afa68936ee670bd2b8db53aedbb7bea2a8597388b24d0518edd22ad66ec,,,,,,,,,case0:bad[non_square(s)];case1:bad[non_square(s)];case2:bad[non_square(q)];case3:bad[non_square(q)];case4:bad[non_square(s)];case5:bad[non_square(s)];case6:bad[non_square(q)];case7:bad[non_square(q)] 
+725e914792cb8c8949e7e1168b7cdd8a8094c91c6ec2202ccd53a6a18771edeb,8da16eb86d347376b6181ee9748322757f6b36e3913ddfd332ac595d788e0e44,dd357786b9f6873330391aa5625809654e43116e82a5a5d82ffd1d6624101fc4,a0b7efca01814594c59c9aae8e49700186ca5d95e88bcc80399044d9c2d8613d,,,22ca8879460978cccfc6e55a9da7f69ab1bcee917d5a5a27d002e298dbefdc6b,5f481035fe7eba6b3a63655171b68ffe7935a26a1774337fc66fbb253d279af2,,,case0:ok;case1:info[v=0]&ok;case2:bad[non_square(s)];case3:bad[non_square(s)];case4:ok;case5:info[v=0]&ok;case6:bad[non_square(s)];case7:bad[non_square(s)] +78fe6b717f2ea4a32708d79c151bf503a5312a18c0963437e865cc6ed3f6ae97,8701948e80d15b5cd8f72863eae40afc5aced5e73f69cbc8179a33902c094d98,,,,,,,,,case0:bad[non_square(s)];case1:info[v=0]&bad[non_square(s)];case2:bad[non_square(q)];case3:bad[non_square(q)];case4:bad[non_square(s)];case5:info[v=0]&bad[non_square(s)];case6:bad[non_square(q)];case7:bad[non_square(q)] +7c37bb9c5061dc07413f11acd5a34006e64c5c457fdb9a438f217255a961f50d,5c1a76b44568eb59d6789a7442d9ed7cdc6226b7752b4ff8eaf8e1a95736e507,,,b94d30cd7dbff60b64620c17ca0fafaa40b3d1f52d077a60a2e0cafd145086c2,,,,46b2cf32824009f49b9df3e835f05055bf4c2e0ad2f8859f5d1f3501ebaf756d,,case0:bad[non_square(s)];case1:bad[non_square(s)];case2:info[q=0]&info[X=0]&ok;case3:info[q=0]&bad[r=0];case4:bad[non_square(s)];case5:bad[non_square(s)];case6:info[q=0]&info[X=0]&ok;case7:info[q=0]&bad[r=0] +82388888967f82a6b444438a7d44838e13c0d478b9ca060da95a41fb94303de6,29e9654170628fec8b4972898b113cf98807f4609274f4f3140d0674157c90a0,,,,,,,,,case0:bad[non_square(s)];case1:bad[non_square(s)];case2:bad[non_square(s)];case3:info[v=0]&bad[non_square(s)];case4:bad[non_square(s)];case5:bad[non_square(s)];case6:bad[non_square(s)];case7:info[v=0]&bad[non_square(s)] +91298f5770af7a27f0a47188d24c3b7bf98ab2990d84b0b898507e3c561d6472,144f4ccbd9a74698a88cbf6fd00ad886d339d29ea19448f2c572cac0a07d5562,e6a0ffa3807f09dadbe71e0f4be4725f2832e76cad8dc1d943ce839375eff248,837b8e68d4917544764ad0903cb11f8615d2823cefbb06d89049dbabc69befda,,,195f005c7f80f6252418e1f0b41b8da0d7cd189352723e26bc317c6b8a1009e7,7c8471972b6e8abb89b52f6fc34ee079ea2d7dc31044f9276fb6245339640c55,,,case0:ok;case1:ok;case2:bad[non_square(s)];case3:info[v=0]&bad[non_square(s)];case4:ok;case5:ok;case6:bad[non_square(s)];case7:info[v=0]&bad[non_square(s)] +b682f3d03bbb5dee4f54b5ebfba931b4f52f6a191e5c2f483c73c66e9ace97e1,904717bf0bc0cb7873fcdc38aa97f19e3a62630972acff92b24cc6dda197cb96,,,,,,,,,case0:bad[valid_x(-x-u)];case1:bad[valid_x(-x-u)];case2:bad[non_square(s)];case3:bad[non_square(s)];case4:bad[valid_x(-x-u)];case5:bad[valid_x(-x-u)];case6:bad[non_square(s)];case7:bad[non_square(s)] +c17ec69e665f0fb0dbab48d9c2f94d12ec8a9d7eacb58084833091801eb0b80b,147756e66d96e31c426d3cc85ed0c4cfbef6341dd8b285585aa574ea0204b55e,6f4aea431a0043bdd03134d6d9159119ce034b88c32e50e8e36c4ee45eac7ae9,fd5be16d4ffa2690126c67c3ef7cb9d29b74d397c78b06b3605fda34dc9696a6,5e9c60792a2f000e45c6250f296f875e174efc0e9703e628706103a9dd2d82c7,,90b515bce5ffbc422fcecb2926ea6ee631fcb4773cd1af171c93b11aa1538146,02a41e92b005d96fed93983c1083462d648b2c683874f94c9fa025ca23696589,a1639f86d5d0fff1ba39daf0d69078a1e8b103f168fc19d78f9efc5522d27968,,case0:ok;case1:ok;case2:info[q=0]&info[X=0]&ok;case3:info[q=0]&bad[r=0];case4:ok;case5:ok;case6:info[q=0]&info[X=0]&ok;case7:info[q=0]&bad[r=0] 
+c25172fc3f29b6fc4a1155b8575233155486b27464b74b8b260b499a3f53cb14,1ea9cbdb35cf6e0329aa31b0bb0a702a65123ed008655a93b7dcd5280e52e1ab,,,7422edc7843136af0053bb8854448a8299994f9ddcefd3a9a92d45462c59298a,78c7774a266f8b97ea23d05d064f033c77319f923f6b78bce4e20bf05fa5398d,,,8bdd12387bcec950ffac4477abbb757d6666b06223102c5656d2bab8d3a6d2a5,873888b5d990746815dc2fa2f9b0fcc388ce606dc09487431b1df40ea05ac2a2,case0:bad[non_square(s)];case1:bad[non_square(s)];case2:ok;case3:ok;case4:bad[non_square(s)];case5:bad[non_square(s)];case6:ok;case7:ok +cab6626f832a4b1280ba7add2fc5322ff011caededf7ff4db6735d5026dc0367,2b2bef0852c6f7c95d72ac99a23802b875029cd573b248d1f1b3fc8033788eb6,,,,,,,,,case0:bad[non_square(s)];case1:bad[non_square(s)];case2:info[v=0]&bad[non_square(s)];case3:bad[non_square(s)];case4:bad[non_square(s)];case5:bad[non_square(s)];case6:info[v=0]&bad[non_square(s)];case7:bad[non_square(s)] +d8621b4ffc85b9ed56e99d8dd1dd24aedcecb14763b861a17112dc771a104fd2,812cabe972a22aa67c7da0c94d8a936296eb9949d70c37cb2b2487574cb3ce58,fbc5febc6fdbc9ae3eb88a93b982196e8b6275a6d5a73c17387e000c711bd0e3,8724c96bd4e5527f2dd195a51c468d2d211ba2fac7cbe0b4b3434253409fb42d,,,043a014390243651c147756c467de691749d8a592a58c3e8c781fff28ee42b4c,78db36942b1aad80d22e6a5ae3b972d2dee45d0538341f4b4cbcbdabbf604802,,,case0:ok;case1:ok;case2:bad[non_square(s)];case3:bad[non_square(s)];case4:ok;case5:ok;case6:bad[non_square(s)];case7:bad[non_square(s)] +da463164c6f4bf7129ee5f0ec00f65a675a8adf1bd931b39b64806afdcda9a22,25b9ce9b390b408ed611a0f13ff09a598a57520e426ce4c649b7f94f2325620d,,,,,,,,,case0:bad[non_square(s)];case1:info[v=0]&bad[non_square(s)];case2:bad[non_square(s)];case3:bad[non_square(s)];case4:bad[non_square(s)];case5:info[v=0]&bad[non_square(s)];case6:bad[non_square(s)];case7:bad[non_square(s)] +dafc971e4a3a7b6dcfb42a08d9692d82ad9e7838523fcbda1d4827e14481ae2d,250368e1b5c58492304bd5f72696d27d526187c7adc03425e2b7d81dbb7e4e02,,,370c28f1be665efacde6aa436bf86fe21e6e314c1e53dd040e6c73a46b4c8c49,cd8acee98ffe56531a84d7eb3e48fa4034206ce825ace907d0edf0eaeb5e9ca2,,,c8f3d70e4199a105321955bc9407901de191ceb3e1ac22fbf1938c5a94b36fe6,327531167001a9ace57b2814c1b705bfcbdf9317da5316f82f120f1414a15f8d,case0:bad[non_square(s)];case1:info[v=0]&bad[non_square(s)];case2:ok;case3:ok;case4:bad[non_square(s)];case5:info[v=0]&bad[non_square(s)];case6:ok;case7:ok +e0294c8bc1a36b4166ee92bfa70a5c34976fa9829405efea8f9cd54dcb29b99e,ae9690d13b8d20a0fbbf37bed8474f67a04e142f56efd78770a76b359165d8a1,,,dcd45d935613916af167b029058ba3a700d37150b9df34728cb05412c16d4182,,,,232ba26ca9ec6e950e984fd6fa745c58ff2c8eaf4620cb8d734fabec3e92baad,,case0:bad[valid_x(-x-u)];case1:bad[valid_x(-x-u)];case2:info[q=0]&info[X=0]&ok;case3:info[q=0]&bad[r=0];case4:bad[valid_x(-x-u)];case5:bad[valid_x(-x-u)];case6:info[q=0]&info[X=0]&ok;case7:info[q=0]&bad[r=0] +e148441cd7b92b8b0e4fa3bd68712cfd0d709ad198cace611493c10e97f5394e,164a639794d74c53afc4d3294e79cdb3cd25f99f6df45c000f758aba54d699c0,,,,,,,,,case0:bad[valid_x(-x-u)];case1:bad[valid_x(-x-u)];case2:bad[non_square(s)];case3:info[v=0]&bad[non_square(s)];case4:bad[valid_x(-x-u)];case5:bad[valid_x(-x-u)];case6:bad[non_square(s)];case7:info[v=0]&bad[non_square(s)] 
+e4b00ec97aadcca97644d3b0c8a931b14ce7bcf7bc8779546d6e35aa5937381c,94e9588d41647b3fcc772dc8d83c67ce3be003538517c834103d2cd49d62ef4d,c88d25f41407376bb2c03a7fffeb3ec7811cc43491a0c3aac0378cdc78357bee,51c02636ce00c2345ecd89adb6089fe4d5e18ac924e3145e6669501cd37a00d4,205b3512db40521cb200952e67b46f67e09e7839e0de44004138329ebd9138c5,58aab390ab6fb55c1d1b80897a207ce94a78fa5b4aa61a33398bcae9adb20d3e,3772da0bebf8c8944d3fc5800014c1387ee33bcb6e5f3c553fc8732287ca8041,ae3fd9c931ff3dcba132765249f7601b2a1e7536db1ceba19996afe22c85fb5b,dfa4caed24bfade34dff6ad1984b90981f6187c61f21bbffbec7cd60426ec36a,a7554c6f54904aa3e2e47f7685df8316b58705a4b559e5ccc6743515524deef1,case0:ok;case1:ok;case2:ok;case3:info[v=0]&ok;case4:ok;case5:ok;case6:ok;case7:info[v=0]&ok +e5bbb9ef360d0a501618f0067d36dceb75f5be9a620232aa9fd5139d0863fde5,e5bbb9ef360d0a501618f0067d36dceb75f5be9a620232aa9fd5139d0863fde5,,,,,,,,,case0:bad[valid_x(-x-u)];case1:bad[valid_x(-x-u)];case2:bad[s=0];case3:bad[s=0];case4:bad[valid_x(-x-u)];case5:bad[valid_x(-x-u)];case6:bad[s=0];case7:bad[s=0] +e6bcb5c3d63467d490bfa54fbbc6092a7248c25e11b248dc2964a6e15edb1457,19434a3c29cb982b6f405ab04439f6d58db73da1ee4db723d69b591da124e7d8,67119877832ab8f459a821656d8261f544a553b89ae4f25c52a97134b70f3426,ffee02f5e649c07f0560eff1867ec7b32d0e595e9b1c0ea6e2a4fc70c97cd71f,b5e0c189eb5b4bacd025b7444d74178be8d5246cfa4a9a207964a057ee969992,5746e4591bf7f4c3044609ea372e908603975d279fdef8349f0b08d32f07619d,98ee67887cd5470ba657de9a927d9e0abb5aac47651b0da3ad568eca48f0c809,0011fd0a19b63f80fa9f100e7981384cd2f1a6a164e3f1591d5b038e36832510,4a1f3e7614a4b4532fda48bbb28be874172adb9305b565df869b5fa71169629d,a8b91ba6e4080b3cfbb9f615c8d16f79fc68a2d8602107cb60f4f72bd0f89a92,case0:ok;case1:info[v=0]&ok;case2:ok;case3:ok;case4:ok;case5:info[v=0]&ok;case6:ok;case7:ok +f28fba64af766845eb2f4302456e2b9f8d80affe57e7aae42738d7cddb1c2ce6,f28fba64af766845eb2f4302456e2b9f8d80affe57e7aae42738d7cddb1c2ce6,4f867ad8bb3d840409d26b67307e62100153273f72fa4b7484becfa14ebe7408,5bbc4f59e452cc5f22a99144b10ce8989a89a995ec3cea1c91ae10e8f721bb5d,,,b079852744c27bfbf62d9498cf819deffeacd8c08d05b48b7b41305db1418827,a443b0a61bad33a0dd566ebb4ef317676576566a13c315e36e51ef1608de40d2,,,case0:ok;case1:ok;case2:bad[s=0];case3:bad[s=0];case4:ok;case5:ok;case6:bad[s=0];case7:bad[s=0] +f455605bc85bf48e3a908c31023faf98381504c6c6d3aeb9ede55f8dd528924d,d31fbcd5cdb798f6c00db6692f8fe8967fa9c79dd10958f4a194f01374905e99,,,0c00c5715b56fe632d814ad8a77f8e66628ea47a6116834f8c1218f3a03cbd50,df88e44fac84fa52df4d59f48819f18f6a8cd4151d162afaf773166f57c7ff46,,,f3ff3a8ea4a9019cd27eb527588071999d715b859ee97cb073ede70b5fc33edf,20771bb0537b05ad20b2a60b77e60e7095732beae2e9d505088ce98fa837fce9,case0:bad[non_square(s)];case1:bad[non_square(s)];case2:info[v=0]&ok;case3:ok;case4:bad[non_square(s)];case5:bad[non_square(s)];case6:info[v=0]&ok;case7:ok 
+f58cd4d9830bad322699035e8246007d4be27e19b6f53621317b4f309b3daa9d,78ec2b3dc0948de560148bbc7c6dc9633ad5df70a5a5750cbed721804f082a3b,6c4c580b76c7594043569f9dae16dc2801c16a1fbe12860881b75f8ef929bce5,94231355e7385c5f25ca436aa64191471aea4393d6e86ab7a35fe2afacaefd0d,dff2a1951ada6db574df834048149da3397a75b829abf58c7e69db1b41ac0989,a52b66d3c907035548028bf804711bf422aba95f1a666fc86f4648e05f29caae,93b3a7f48938a6bfbca9606251e923d7fe3e95e041ed79f77e48a07006d63f4a,6bdcecaa18c7a3a0da35bc9559be6eb8e515bc6c291795485ca01d4f5350ff22,200d5e6ae525924a8b207cbfb7eb625cc6858a47d6540a73819624e3be53f2a6,5ad4992c36f8fcaab7fd7407fb8ee40bdd5456a0e599903790b9b71ea0d63181,case0:ok;case1:ok;case2:info[v=0]&ok;case3:ok;case4:ok;case5:ok;case6:info[v=0]&ok;case7:ok
+fd7d912a40f182a3588800d69ebfb5048766da206fd7ebc8d2436c81cbef6421,8d37c862054debe731694536ff46b273ec122b35a9bf1445ac3c4ff9f262c952,,,,,,,,,case0:bad[valid_x(-x-u)];case1:bad[valid_x(-x-u)];case2:info[v=0]&bad[non_square(s)];case3:bad[non_square(s)];case4:bad[valid_x(-x-u)];case5:bad[valid_x(-x-u)];case6:info[v=0]&bad[non_square(s)];case7:bad[non_square(s)]
diff --git a/warnet-demo/warnet.yaml b/warnet-demo/warnet.yaml
new file mode 100644
index 0000000..b03ff82
--- /dev/null
+++ b/warnet-demo/warnet.yaml
@@ -0,0 +1,25 @@
+name: "warnet-demo"
+version: "1.0.0"
+description: "A simple Warnet project for learning Bitcoin network analysis"
+
+# Default network to use
+default_network: "simple-3node-network"
+
+# Project metadata
+metadata:
+  author: "Warnet Demo Project"
+  tags: ["bitcoin", "network-analysis", "learning", "demo"]
+  difficulty: "beginner"
+
+# Available networks in this project
+networks:
+  - name: "simple-3node-network"
+    description: "Simple 3-node Bitcoin network for learning propagation"
+    file: "networks/simple-3node-network/network.yaml"
+
+# Available scenarios in this project
+scenarios:
+  - name: "propagation_check"
+    description: "Network propagation test - observe how transactions flow between nodes"
+    file: "scenarios/propagation_check.py"
+    difficulty: "beginner"
\ No newline at end of file
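
Since `warnet.yaml` is plain YAML, you can explore the project manifest programmatically before deploying anything. Below is a minimal sketch, assuming PyYAML is installed and the command is run from the repository root; the key names mirror the file above, and this is not an official Warnet API:

```python
import yaml  # requires PyYAML: pip install pyyaml

# Illustrative only: read the project manifest defined above and list what it
# declares. The keys mirror warnet.yaml itself, not any Warnet library call.
with open("warnet-demo/warnet.yaml") as f:
    project = yaml.safe_load(f)

print(f"{project['name']}: {project['description']}")
print(f"default network: {project['default_network']}")
for net in project["networks"]:
    print(f"  network : {net['name']} ({net['file']})")
for scenario in project["scenarios"]:
    print(f"  scenario: {scenario['name']} ({scenario['file']})")
```

This is purely for exploration; the `warnet deploy` and `warnet run` commands shown earlier consume these network and scenario paths directly.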
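
The test-vector CSV earlier in this diff has a self-checking structure: each row pairs two inputs with eight per-case candidate values (blank where that case is expected to fail) and a final expected-outcome field. Here is a minimal consistency-check sketch under that assumption; the column layout is inferred from the rows themselves, and the filename is a placeholder for wherever the CSV lives in your checkout:

```python
import csv

# Sketch: verify that each non-blank candidate column lines up with a case
# whose expected outcome ends in "ok". Column layout is an assumption
# inferred from the data: u, x, eight case candidates, expected outcomes.
with open("xswiftec_inv_test_vectors.csv", newline="") as f:  # placeholder path
    for row in csv.reader(f):
        if len(row) != 11 or row[0].strip("0123456789abcdef"):
            continue  # skip a header row or anything that is not a data row
        u, x, candidates, expected = row[0], row[1], row[2:10], row[10]
        outcomes = dict(item.split(":", 1) for item in expected.split(";"))
        for case in range(8):
            # "ok" (possibly after info[...] markers) means a value exists.
            assert bool(candidates[case]) == outcomes[f"case{case}"].endswith("ok"), (u, case)

print("all rows internally consistent")
```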