feat(tests): Move the RPC tests framework from zcashd (#8866)

* move the rpc-tests framework from zcashd

* ignore pycache

* remove all tests from the list except getmininginfo

* improve the readme a bit

* change some env variable names

* add cache, add reindex test

* fix the parallel framework

* fix env variables

* change tests order

* update docs with env variable name change

* fix binary location

* reduce base config

* restore env var

* ignore stderr in the output
Alfredo Garcia 2024-09-20 13:36:20 -03:00 committed by GitHub
parent 6951988456
commit c8280d488f
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
27 changed files with 7377 additions and 0 deletions

.gitignore

@@ -158,3 +158,9 @@ $RECYCLE.BIN/
# Windows shortcuts
*.lnk
# Python pycache
__pycache__/
# RPC tests cache
zebra-rpc/qa/cache/

zebra-rpc/qa/README.md

@@ -0,0 +1,88 @@
The [pull-tester](/pull-tester/) folder contains a script to call
multiple tests from the [rpc-tests](/rpc-tests/) folder.
Every pull request to the zebra repository is built and run through
the regression test suite. You can also run all or only individual
tests locally.
Test dependencies
=================
Before running the tests, the following must be installed.
Unix
----
The `zmq`, `toml` and `base58` Python libraries are required. On Ubuntu or Debian-based
distributions they can be installed via:
```
sudo apt-get install python3-zmq python3-base58 python3-toml
```
OS X
------
```
pip3 install pyzmq base58 toml
```
Running tests locally
=====================
Make sure the `zebrad` binary exists in the `../target/debug/` folder, or set the binary path with:
```
export CARGO_BIN_EXE_zebrad=/path/to/zebrad
```
You can run any single test by calling
```
./qa/pull-tester/rpc-tests.py <testname1>
```
Run the regression test suite with
```
./qa/pull-tester/rpc-tests.py
```
By default, tests will be run in parallel. To specify how many jobs to run,
append `--jobs=n` (default n=4).
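For example, to run the two current base tests eight at a time (test names may be given with or without the `.py` extension):
```
./qa/pull-tester/rpc-tests.py --jobs=8 reindex getmininginfo
```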
If you want to create a basic coverage report for the RPC test suite, append `--coverage`.
Possible options, which apply to each individual test run:
```
-h, --help show this help message and exit
--nocleanup Leave zcashds and test.* datadir on exit or error
--noshutdown Don't stop zcashds after the test execution
--srcdir=SRCDIR Source directory containing zcashd/zcash-cli
(default: ../../src)
--tmpdir=TMPDIR Root directory for datadirs
--tracerpc Print out all RPC calls as they are made
--coveragedir=COVERAGEDIR
Write tested RPC commands into this directory
```
If you set the environment variable `PYTHON_DEBUG=1` you will get some debug
output (example: `PYTHON_DEBUG=1 qa/pull-tester/rpc-tests.py wallet`).
A 200-block -regtest blockchain and wallets for four nodes
are created the first time a regression test is run and
are stored in the cache/ directory. Each node has the miner
subsidy from 25 mature blocks (25*10=250 ZEC) in its wallet.
After the first run, the cache/ blockchain and wallets are
copied into a temporary directory and used as the initial
test state.
If you get into a bad state, you should be able
to recover with:
```bash
rm -rf cache
killall zebrad
```
Writing tests
=============
You are encouraged to write tests for new or existing features.
Further information about the test framework and individual RPC
tests is found in [rpc-tests](rpc-tests).

zebra-rpc/qa/base_config.toml

@@ -0,0 +1,12 @@
[mining]
miner_address = "t27eWDgjFYJGVXmzrXeVjnb5J3uXDM9xH9v"
[network]
listen_addr = "127.0.0.1:0"
network = "Regtest"
[rpc]
listen_addr = "127.0.0.1:0"
[state]
cache_dir = ""

zebra-rpc/qa/pull-tester/rpc-tests.py

@@ -0,0 +1,401 @@
#!/usr/bin/env python3
# Copyright (c) 2014-2016 The Bitcoin Core developers
# Copyright (c) 2020-2022 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
"""
rpc-tests.py - run regression test suite
This module calls down into individual test cases via subprocess. It will
forward all unrecognized arguments onto the individual test scripts.
RPC tests are disabled on Windows by default. Use --force to run them anyway.
For a description of arguments recognized by test scripts, see
`qa/rpc-tests/test_framework/test_framework.py:BitcoinTestFramework.main`.
"""
import argparse
import configparser
import os
import time
import shutil
import sys
import subprocess
import tempfile
import re
SERIAL_SCRIPTS = [
# These tests involve enough shielded spends (consuming all CPU
# cores) that we can't run them in parallel.
]
FLAKY_SCRIPTS = [
# These tests have intermittent failures that we haven't diagnosed yet.
]
BASE_SCRIPTS = [
# Scripts that are run by the travis build process
# Longest test should go first, to favor running tests in parallel
'reindex.py',
'getmininginfo.py']
ZMQ_SCRIPTS = [
# ZMQ tests can only be run if the node was built with ZMQ enabled.
# Call rpc-tests.py with --nozmq to explicitly exclude these tests.
]
EXTENDED_SCRIPTS = [
# These tests are not run by the travis build process.
# Longest test should go first, to favor running tests in parallel
]
ALL_SCRIPTS = SERIAL_SCRIPTS + FLAKY_SCRIPTS + BASE_SCRIPTS + ZMQ_SCRIPTS + EXTENDED_SCRIPTS
def main():
# Parse arguments and pass through unrecognised args
parser = argparse.ArgumentParser(add_help=False,
usage='%(prog)s [rpc-test.py options] [script options] [scripts]',
description=__doc__,
epilog='''
Help text and arguments for individual test script:''',
formatter_class=argparse.RawTextHelpFormatter)
parser.add_argument('--coverage', action='store_true', help='generate a basic coverage report for the RPC interface')
parser.add_argument('--deterministic', '-d', action='store_true', help='make the output a bit closer to deterministic in order to compare runs.')
parser.add_argument('--exclude', '-x', help='specify a comma-separated-list of scripts to exclude. Do not include the .py extension in the name.')
parser.add_argument('--extended', action='store_true', help='run the extended test suite in addition to the basic tests')
parser.add_argument('--force', '-f', action='store_true', help='run tests even on platforms where they are disabled by default (e.g. windows).')
parser.add_argument('--help', '-h', '-?', action='store_true', help='print help text and exit')
parser.add_argument('--jobs', '-j', type=int, default=4, help='how many test scripts to run in parallel. Default=4.')
parser.add_argument('--machines', '-m', type=int, default=-1, help='how many machines to shard the tests over. must also provide individual shard index. Default=-1 (no sharding).')
parser.add_argument('--rpcgroup', '-r', type=int, default=-1, help='individual shard index. must also provide how many machines to shard the tests over. Default=-1 (no sharding).')
parser.add_argument('--nozmq', action='store_true', help='do not run the zmq tests')
args, unknown_args = parser.parse_known_args()
# Create a set to store arguments and create the passon string
tests = set(arg for arg in unknown_args if arg[:2] != "--")
passon_args = [arg for arg in unknown_args if arg[:2] == "--"]
# Read config generated by configure.
config = configparser.ConfigParser()
config.read_file(open(os.path.dirname(__file__) + "/tests_config.ini"))
enable_wallet = config["components"].getboolean("ENABLE_WALLET")
enable_utils = config["components"].getboolean("ENABLE_UTILS")
enable_bitcoind = config["components"].getboolean("ENABLE_BITCOIND")
enable_zmq = config["components"].getboolean("ENABLE_ZMQ") and not args.nozmq
if config["environment"]["EXEEXT"] == ".exe" and not args.force:
# https://github.com/bitcoin/bitcoin/commit/d52802551752140cf41f0d9a225a43e84404d3e9
# https://github.com/bitcoin/bitcoin/pull/5677#issuecomment-136646964
print("Tests currently disabled on Windows by default. Use --force option to enable")
sys.exit(0)
if not (enable_wallet and enable_utils and enable_bitcoind):
print("No rpc tests to run. Wallet, utils, and bitcoind must all be enabled")
print("Rerun `configure` with -enable-wallet, -with-utils and -with-daemon and rerun make")
sys.exit(0)
# python3-zmq may not be installed. Handle this gracefully and with some helpful info
if enable_zmq:
try:
import zmq
zmq # Silences pyflakes
except ImportError:
print("ERROR: \"import zmq\" failed. Use --nozmq to run without the ZMQ tests."
"To run zmq tests, see dependency info in /qa/README.md.")
raise
# Build list of tests
if tests:
# Individual tests have been specified. Run specified tests that exist
# in the ALL_SCRIPTS list. Accept the name with or without .py extension.
test_list = [t for t in ALL_SCRIPTS if
(t in tests or re.sub(".py$", "", t) in tests)]
print("Running individually selected tests: ")
for t in test_list:
print("\t" + t)
else:
# No individual tests have been specified. Run base tests, and
# optionally ZMQ tests and extended tests.
test_list = SERIAL_SCRIPTS + FLAKY_SCRIPTS + BASE_SCRIPTS
if enable_zmq:
test_list += ZMQ_SCRIPTS
if args.extended:
test_list += EXTENDED_SCRIPTS
# TODO: BASE_SCRIPTS and EXTENDED_SCRIPTS are sorted by runtime
# (for parallel running efficiency). This combined list is no
# longer sorted.
# Remove the test cases that the user has explicitly asked to exclude.
if args.exclude:
for exclude_test in args.exclude.split(','):
if exclude_test + ".py" in test_list:
test_list.remove(exclude_test + ".py")
if not test_list:
print("No valid test scripts specified. Check that your test is in one "
"of the test lists in rpc-tests.py, or run rpc-tests.py with no arguments to run all tests")
sys.exit(0)
if args.help:
# Print help for rpc-tests.py, then print help of the first script and exit.
parser.print_help()
subprocess.check_call((config["environment"]["SRCDIR"] + '/qa/rpc-tests/' + test_list[0]).split() + ['-h'])
sys.exit(0)
if (args.rpcgroup == -1) != (args.machines == -1):
print("ERROR: Please use both -m and -r options when using parallel rpc_groups.")
sys.exit(0)
if args.machines == 0:
print("ERROR: -m/--machines must be greater than 0")
sys.exit(0)
if args.machines > 0 and (args.rpcgroup >= args.machines):
print("ERROR: -r/--rpcgroup must be less than -m/--machines")
sys.exit(0)
if args.rpcgroup != -1 and args.machines != -1 and args.machines > args.rpcgroup:
# Ceiling division using floor division, by inverting the world.
# https://stackoverflow.com/a/17511341
k = -(len(test_list) // -args.machines)
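# e.g. 7 tests over 3 machines: k = -(7 // -3) = 3, giving shards of 3, 3 and 1 tests.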
split_list = list(test_list[i*k:(i+1)*k] for i in range(args.machines))
tests_to_run = split_list[args.rpcgroup]
else:
tests_to_run = test_list
all_passed = run_tests(
RPCTestHandler,
tests_to_run,
config["environment"]["SRCDIR"],
config["environment"]["BUILDDIR"],
config["environment"]["EXEEXT"],
args.jobs,
args.coverage,
args.deterministic,
passon_args)
sys.exit(not all_passed)
def run_tests(test_handler, test_list, src_dir, build_dir, exeext, jobs=1, enable_coverage=False, deterministic=False, args=[]):
BOLD = ("","")
if os.name == 'posix':
# primitive formatting on supported
# terminal via ANSI escape sequences:
BOLD = ('\033[0m', '\033[1m')
#Set env vars
if "CARGO_BIN_EXE_zebrad" not in os.environ:
os.environ["CARGO_BIN_EXE_zebrad"] = os.path.join("..", "target", "debug", "zebrad")
tests_dir = src_dir + '/qa/rpc-tests/'
flags = ["--srcdir={}/src".format(build_dir)] + args
flags.append("--cachedir=%s/qa/cache" % build_dir)
if enable_coverage:
coverage = RPCCoverage()
flags.append(coverage.flag)
print("Initializing coverage directory at %s\n" % coverage.dir)
else:
coverage = None
if len(test_list) > 1 and jobs > 1:
# Populate cache
subprocess.check_output([tests_dir + 'create_cache.py'] + flags)
#Run Tests
time_sum = 0
time0 = time.time()
job_queue = test_handler(jobs, tests_dir, test_list, flags)
max_len_name = len(max(test_list, key=len))
total_count = 0
passed_count = 0
results = []
try:
for _ in range(len(test_list)):
(name, stdout, stderr, passed, duration) = job_queue.get_next(deterministic)
time_sum += duration
print('\n' + BOLD[1] + name + BOLD[0] + ":")
print('' if passed else stdout + '\n', end='')
# TODO: Zebrad always produces the welcome message on stderr.
# Ignoring stderr output here until that is fixed.
#print('' if stderr == '' else 'stderr:\n' + stderr + '\n', end='')
print("Pass: %s%s%s" % (BOLD[1], passed, BOLD[0]), end='')
if deterministic:
print("\n", end='')
else:
print(", Duration: %s s" % (duration,))
total_count += 1
if passed:
passed_count += 1
new_result = "%s | %s" % (name.ljust(max_len_name), str(passed).ljust(6))
if not deterministic:
new_result += (" | %s s" % (duration,))
results.append(new_result)
except (InterruptedError, KeyboardInterrupt):
print('\nThe following tests were running when interrupted:')
for j in job_queue.jobs:
print("", j[0])
print('\n', end='')
all_passed = passed_count == total_count
if all_passed:
success_rate = "True"
else:
success_rate = "%d/%d" % (passed_count, total_count)
header = "%s | PASSED" % ("TEST".ljust(max_len_name),)
footer = "%s | %s" % ("ALL".ljust(max_len_name), str(success_rate).ljust(6))
if not deterministic:
header += " | DURATION"
footer += " | %s s (accumulated)\nRuntime: %s s" % (time_sum, int(time.time() - time0))
print(
BOLD[1] + header + BOLD[0] + "\n\n"
+ "\n".join(sorted(results)) + "\n"
+ BOLD[1] + footer + BOLD[0])
if coverage:
coverage.report_rpc_coverage()
print("Cleaning up coverage data")
coverage.cleanup()
return all_passed
class RPCTestHandler:
"""
Trigger the test scripts passed in via the list.
"""
def __init__(self, num_tests_parallel, tests_dir, test_list=None, flags=None):
assert(num_tests_parallel >= 1)
self.num_jobs = num_tests_parallel
self.tests_dir = tests_dir
self.test_list = test_list
self.flags = flags
self.num_running = 0
# In case there is a graveyard of zombie bitcoinds, we can apply a
# pseudorandom offset to hopefully jump over them.
# (625 is PORT_RANGE/MAX_NODES)
self.portseed_offset = int(time.time() * 1000) % 625
self.jobs = []
def start_test(self, args, stdout, stderr):
return subprocess.Popen(
args,
universal_newlines=True,
stdout=stdout,
stderr=stderr)
def get_next(self, deterministic):
while self.num_running < self.num_jobs and self.test_list:
# Add tests
self.num_running += 1
t = self.test_list.pop(0)
port_seed = ["--portseed={}".format(len(self.test_list) + self.portseed_offset)]
log_stdout = tempfile.SpooledTemporaryFile(max_size=2**16)
log_stderr = tempfile.SpooledTemporaryFile(max_size=2**16)
self.jobs.append((t,
time.time(),
self.start_test((self.tests_dir + t).split() + self.flags + port_seed,
log_stdout,
log_stderr),
log_stdout,
log_stderr))
# Run serial scripts on their own. We always run these first,
# so we won't have added any other jobs yet.
if t in SERIAL_SCRIPTS:
break
if not self.jobs:
raise IndexError('pop from empty list')
while True:
# Return first proc that finishes
time.sleep(.5)
for j in self.jobs:
(name, time0, proc, log_out, log_err) = j
if proc.poll() is not None:
log_out.seek(0), log_err.seek(0)
[stdout, stderr] = [l.read().decode('utf-8') for l in (log_out, log_err)]
log_out.close(), log_err.close()
# We can't check for an empty stderr in Zebra so we just check for the return code.
passed = proc.returncode == 0
self.num_running -= 1
self.jobs.remove(j)
return name, stdout, stderr, passed, int(time.time() - time0)
if not deterministic:
print('.', end='', flush=True)
class RPCCoverage(object):
"""
Coverage reporting utilities for pull-tester.
Coverage calculation works by having each test script subprocess write
coverage files into a particular directory. These files contain the RPC
commands invoked during testing, as well as a complete listing of RPC
commands per `bitcoin-cli help` (`rpc_interface.txt`).
After all tests complete, the commands run are combined and diff'd against
the complete list to calculate uncovered RPC commands.
See also: qa/rpc-tests/test_framework/coverage.py
"""
def __init__(self):
self.dir = tempfile.mkdtemp(prefix="coverage")
self.flag = '--coveragedir=%s' % self.dir
def report_rpc_coverage(self):
"""
Print out RPC commands that were unexercised by tests.
"""
uncovered = self._get_uncovered_rpc_commands()
if uncovered:
print("Uncovered RPC commands:")
print("".join((" - %s\n" % i) for i in sorted(uncovered)))
else:
print("All RPC commands covered.")
def cleanup(self):
return shutil.rmtree(self.dir)
def _get_uncovered_rpc_commands(self):
"""
Return a set of currently untested RPC commands.
"""
# This is shared from `qa/rpc-tests/test_framework/coverage.py`
reference_filename = 'rpc_interface.txt'
coverage_file_prefix = 'coverage.'
coverage_ref_filename = os.path.join(self.dir, reference_filename)
coverage_filenames = set()
all_cmds = set()
covered_cmds = set()
if not os.path.isfile(coverage_ref_filename):
raise RuntimeError("No coverage reference found")
with open(coverage_ref_filename, 'r', encoding='utf8') as f:
all_cmds.update([i.strip() for i in f.readlines()])
for root, dirs, files in os.walk(self.dir):
for filename in files:
if filename.startswith(coverage_file_prefix):
coverage_filenames.add(os.path.join(root, filename))
for filename in coverage_filenames:
with open(filename, 'r', encoding='utf8') as f:
covered_cmds.update([i.strip() for i in f.readlines()])
return all_cmds - covered_cmds
if __name__ == '__main__':
main()

zebra-rpc/qa/pull-tester/tests_config.ini

@@ -0,0 +1,19 @@
# Copyright (c) 2013-2016 The Bitcoin Core developers
# Copyright (c) 2020-2022 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
# These environment variables are set by the build process and read by
# rpc-tests.py
[environment]
SRCDIR=.
BUILDDIR=.
EXEEXT=
[components]
# Which components are enabled. These are commented out by `configure` if they were disabled when it was run.
ENABLE_WALLET=true
ENABLE_UTILS=true
ENABLE_BITCOIND=true
ENABLE_ZMQ=false

zebra-rpc/qa/rpc-tests/create_cache.py

@@ -0,0 +1,31 @@
#!/usr/bin/env python3
# Copyright (c) 2016 The Bitcoin Core developers
# Copyright (c) 2020-2022 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
#
# Helper script to create the cache
# (see BitcoinTestFramework.setup_chain)
#
from test_framework.test_framework import BitcoinTestFramework
class CreateCache(BitcoinTestFramework):
def __init__(self):
super().__init__()
# Test network and test nodes are not required:
self.num_nodes = 0
self.nodes = []
def setup_network(self):
pass
def run_test(self):
pass
if __name__ == '__main__':
CreateCache().main()

zebra-rpc/qa/rpc-tests/getmininginfo.py

@@ -0,0 +1,47 @@
#!/usr/bin/env python3
# Copyright (c) 2021 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import start_nodes
class GetMiningInfoTest(BitcoinTestFramework):
'''
Test getmininginfo.
'''
def __init__(self):
super().__init__()
self.num_nodes = 1
self.cache_behavior = 'clean'
def setup_network(self, split=False):
self.nodes = start_nodes(self.num_nodes, self.options.tmpdir)
self.is_network_split = False
self.sync_all()
def run_test(self):
node = self.nodes[0]
info = node.getmininginfo()
assert(info['blocks'] == 0)
# No blocks have been mined yet, so these fields should not be present.
assert('currentblocksize' not in info)
assert('currentblocktx' not in info)
node.generate(1)
info = node.getmininginfo()
assert(info['blocks'] == 1)
# One block has been mined, so these fields should now be present.
assert('currentblocksize' in info)
assert('currentblocktx' in info)
assert(info['currentblocksize'] > 0)
# The transaction count doesn't include the coinbase
assert(info['currentblocktx'] == 0)
if __name__ == '__main__':
GetMiningInfoTest().main()

zebra-rpc/qa/rpc-tests/reindex.py

@@ -0,0 +1,54 @@
#!/usr/bin/env python3
# Copyright (c) 2014-2016 The Bitcoin Core developers
# Copyright (c) 2017-2022 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
#
# Test -reindex and -reindex-chainstate with CheckBlockIndex
#
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import assert_equal, \
start_node, stop_node, wait_bitcoinds
import time
class ReindexTest(BitcoinTestFramework):
def __init__(self):
super().__init__()
self.cache_behavior = 'clean'
self.num_nodes = 1
def setup_network(self):
self.nodes = []
self.is_network_split = False
self.nodes.append(start_node(0, self.options.tmpdir))
def reindex(self, justchainstate=False):
# When zebra reindexes, it will only do it up to the finalized chain height.
# This happens after the first 100 blocks, so we need to generate 100 blocks
# for the reindex to be able to catch block 1.
finalized_height = 100
self.nodes[0].generate(finalized_height)
blockcount = self.nodes[0].getblockcount() - (finalized_height - 1)
stop_node(self.nodes[0], 0)
wait_bitcoinds()
self.nodes[0]=start_node(0, self.options.tmpdir)
while self.nodes[0].getblockcount() < blockcount:
time.sleep(0.1)
assert_equal(self.nodes[0].getblockcount(), blockcount)
print("Success")
def run_test(self):
self.reindex(False)
self.reindex(True)
self.reindex(False)
self.reindex(True)
if __name__ == '__main__':
ReindexTest().main()

zebra-rpc/qa/rpc-tests/test_framework/authproxy.py

@@ -0,0 +1,166 @@
"""
Copyright 2011 Jeff Garzik
AuthServiceProxy has the following improvements over python-jsonrpc's
ServiceProxy class:
- HTTP connections persist for the life of the AuthServiceProxy object
(if server supports HTTP/1.1)
- sends protocol 'version', per JSON-RPC 1.1
- sends proper, incrementing 'id'
- sends Basic HTTP authentication headers
- parses all JSON numbers that look like floats as Decimal
- uses standard Python json lib
Previous copyright, from python-jsonrpc/jsonrpc/proxy.py:
Copyright (c) 2007 Jan-Klaas Kollhof
This file is part of jsonrpc.
jsonrpc is free software; you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation; either version 2.1 of the License, or
(at your option) any later version.
This software is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License
along with this software; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
"""
import base64
import decimal
import json
import logging
from http.client import HTTPConnection, HTTPSConnection, BadStatusLine
from urllib.parse import urlparse
USER_AGENT = "AuthServiceProxy/0.1"
HTTP_TIMEOUT = 600
log = logging.getLogger("BitcoinRPC")
class JSONRPCException(Exception):
def __init__(self, rpc_error):
Exception.__init__(self, rpc_error.get("message"))
self.error = rpc_error
def EncodeDecimal(o):
if isinstance(o, decimal.Decimal):
return str(o)
raise TypeError(repr(o) + " is not JSON serializable")
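# e.g. json.dumps({'amount': decimal.Decimal('0.1')}, default=EncodeDecimal)
# yields '{"amount": "0.1"}' -- Decimals are serialized as strings.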
class AuthServiceProxy():
__id_count = 0
def __init__(self, service_url, service_name=None, timeout=HTTP_TIMEOUT, connection=None):
self.__service_url = service_url
self._service_name = service_name
self.__url = urlparse(service_url)
(user, passwd) = (self.__url.username, self.__url.password)
try:
user = user.encode('utf8')
except AttributeError:
pass
try:
passwd = passwd.encode('utf8')
except AttributeError:
pass
authpair = user + b':' + passwd
self.__auth_header = b'Basic ' + base64.b64encode(authpair)
self.timeout = timeout
self._set_conn(connection)
def _set_conn(self, connection=None):
port = 80 if self.__url.port is None else self.__url.port
if connection:
self.__conn = connection
self.timeout = connection.timeout
elif self.__url.scheme == 'https':
self.__conn = HTTPSConnection(self.__url.hostname, port, timeout=self.timeout)
else:
self.__conn = HTTPConnection(self.__url.hostname, port, timeout=self.timeout)
def __getattr__(self, name):
if name.startswith('__') and name.endswith('__'):
# Python internal stuff
raise AttributeError
if self._service_name is not None:
name = "%s.%s" % (self._service_name, name)
return AuthServiceProxy(self.__service_url, name, connection=self.__conn)
def _request(self, method, path, postdata):
'''
Do a HTTP request, with retry if we get disconnected (e.g. due to a timeout).
This is a workaround for https://bugs.python.org/issue3566 which is fixed in Python 3.5.
'''
headers = {'Host': self.__url.hostname,
'User-Agent': USER_AGENT,
'Authorization': self.__auth_header,
'Content-type': 'application/json'}
try:
self.__conn.request(method, path, postdata, headers)
return self._get_response()
except Exception as e:
# If connection was closed, try again.
# Python 3.5+ raises BrokenPipeError instead of BadStatusLine when the connection was reset.
# ConnectionResetError happens on FreeBSD with Python 3.4.
# This can be simplified now that we depend on Python 3 (previously, we could not
# refer to BrokenPipeError or ConnectionResetError which did not exist on Python 2)
if ((isinstance(e, BadStatusLine) and e.line == "''")
or e.__class__.__name__ in ('BrokenPipeError', 'ConnectionResetError')):
self.__conn.close()
self.__conn.request(method, path, postdata, headers)
return self._get_response()
else:
raise
def __call__(self, *args):
AuthServiceProxy.__id_count += 1
log.debug("-%s-> %s %s"%(AuthServiceProxy.__id_count, self._service_name,
json.dumps(args, default=EncodeDecimal)))
postdata = json.dumps({'version': '1.1',
'method': self._service_name,
'params': args,
'id': AuthServiceProxy.__id_count}, default=EncodeDecimal)
response = self._request('POST', self.__url.path, postdata)
if response['error'] is not None:
raise JSONRPCException(response['error'])
elif 'result' not in response:
raise JSONRPCException({
'code': -343, 'message': 'missing JSON-RPC result'})
else:
return response['result']
def _batch(self, rpc_call_list):
postdata = json.dumps(list(rpc_call_list), default=EncodeDecimal)
log.debug("--> "+postdata)
return self._request('POST', self.__url.path, postdata)
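# Sketch (assumption; mirrors the payload built in __call__):
#   proxy._batch([{'version': '1.1', 'method': 'getblockcount', 'params': [], 'id': 1}])
# posts a JSON array in one HTTP request and returns the parsed response list.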
def _get_response(self):
http_response = self.__conn.getresponse()
if http_response is None:
raise JSONRPCException({
'code': -342, 'message': 'missing HTTP response from server'})
content_type = http_response.getheader('Content-Type')
if content_type != 'application/json':
raise JSONRPCException({
'code': -342, 'message': 'non-JSON HTTP response with \'%i %s\' from server' % (http_response.status, http_response.reason)})
responsedata = http_response.read().decode('utf8')
response = json.loads(responsedata, parse_float=decimal.Decimal)
if "error" in response and response["error"] is None:
log.debug("<-%s- %s"%(response["id"], json.dumps(response["result"], default=EncodeDecimal)))
else:
log.debug("<-- "+responsedata)
return response

zebra-rpc/qa/rpc-tests/test_framework/bignum.py

@@ -0,0 +1,100 @@
#!/usr/bin/env python3
#
# bignum.py
#
# This file is copied from python-bitcoinlib.
#
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
#
"""Bignum routines"""
import struct
# generic big endian MPI format
def bn_bytes(v, have_ext=False):
ext = 0
if have_ext:
ext = 1
return ((v.bit_length()+7)//8) + ext
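# e.g. bn_bytes(0xff) == 1 and bn_bytes(0x100) == 2; have_ext=True reserves one
# extra byte so a sign bit can be stored when the value fills whole bytes.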
def bn2bin(v):
s = bytearray()
i = bn_bytes(v)
while i > 0:
s.append((v >> ((i-1) * 8)) & 0xff)
i -= 1
return s
def bin2bn(s):
l = 0
for ch in s:
l = (l << 8) | ch
return l
def bn2mpi(v):
have_ext = False
if v.bit_length() > 0:
have_ext = (v.bit_length() & 0x07) == 0
neg = False
if v < 0:
neg = True
v = -v
s = struct.pack(b">I", bn_bytes(v, have_ext))
ext = bytearray()
if have_ext:
ext.append(0)
v_bin = bn2bin(v)
if neg:
if have_ext:
ext[0] |= 0x80
else:
v_bin[0] |= 0x80
return s + ext + v_bin
def mpi2bn(s):
if len(s) < 4:
return None
s_size = bytes(s[:4])
v_len = struct.unpack(b">I", s_size)[0]
if len(s) != (v_len + 4):
return None
if v_len == 0:
return 0
v_str = bytearray(s[4:])
neg = False
i = v_str[0]
if i & 0x80:
neg = True
i &= ~0x80
v_str[0] = i
v = bin2bn(v_str)
if neg:
return -v
return v
# bitcoin-specific little endian format, with implicit size
def mpi2vch(s):
r = s[4:] # strip size
r = r[::-1] # reverse string, converting BE->LE
return r
def bn2vch(v):
return bytes(mpi2vch(bn2mpi(v)))
def vch2mpi(s):
r = struct.pack(b">I", len(s)) # size
r += s[::-1] # reverse string, converting LE->BE
return r
def vch2bn(s):
return mpi2bn(vch2mpi(s))
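# Round-trip sanity check (illustrative, not part of the original file):
#   assert vch2bn(bn2vch(1000)) == 1000 and mpi2bn(bn2mpi(-5)) == -5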

zebra-rpc/qa/rpc-tests/test_framework/blockstore.py

@@ -0,0 +1,142 @@
#!/usr/bin/env python3
# BlockStore: a helper class that keeps a map of blocks and implements
# helper functions for responding to getheaders and getdata,
# and for constructing a getheaders message
#
from .mininode import CBlock, CBlockHeader, CBlockLocator, CTransaction, msg_block, msg_headers, msg_tx
import sys
from io import BytesIO
import dbm.ndbm
class BlockStore():
def __init__(self, datadir):
self.blockDB = dbm.ndbm.open(datadir + "/blocks", 'c')
self.currentBlock = 0
self.headers_map = dict()
def close(self):
self.blockDB.close()
def get(self, blockhash):
serialized_block = None
try:
serialized_block = self.blockDB[repr(blockhash)]
except KeyError:
return None
f = BytesIO(serialized_block)
ret = CBlock()
ret.deserialize(f)
ret.calc_sha256()
return ret
def get_header(self, blockhash):
try:
return self.headers_map[blockhash]
except KeyError:
return None
# Note: this pulls full blocks out of the database just to retrieve
# the headers -- perhaps we could keep a separate data structure
# to avoid this overhead.
def headers_for(self, locator, hash_stop, current_tip=None):
if current_tip is None:
current_tip = self.currentBlock
current_block_header = self.get_header(current_tip)
if current_block_header is None:
return None
response = msg_headers()
headersList = [ current_block_header ]
maxheaders = 2000
while (headersList[0].sha256 not in locator.vHave):
prevBlockHash = headersList[0].hashPrevBlock
prevBlockHeader = self.get_header(prevBlockHash)
if prevBlockHeader is not None:
headersList.insert(0, prevBlockHeader)
else:
break
headersList = headersList[:maxheaders] # truncate if we have too many
hashList = [x.sha256 for x in headersList]
index = len(headersList)
if (hash_stop in hashList):
index = hashList.index(hash_stop)+1
response.headers = headersList[:index]
return response
def add_block(self, block):
block.calc_sha256()
try:
self.blockDB[repr(block.sha256)] = bytes(block.serialize())
except TypeError as e:
print("Unexpected error: ", sys.exc_info()[0], e.args)
self.currentBlock = block.sha256
self.headers_map[block.sha256] = CBlockHeader(block)
def add_header(self, header):
self.headers_map[header.sha256] = header
def get_blocks(self, inv):
responses = []
for i in inv:
if (i.type == 2): # MSG_BLOCK
block = self.get(i.hash)
if block is not None:
responses.append(msg_block(block))
return responses
def get_locator(self, current_tip=None):
if current_tip is None:
current_tip = self.currentBlock
r = []
counter = 0
step = 1
lastBlock = self.get(current_tip)
while lastBlock is not None:
r.append(lastBlock.hashPrevBlock)
for i in range(step):
lastBlock = self.get(lastBlock.hashPrevBlock)
if lastBlock is None:
break
counter += 1
if counter > 10:
step *= 2
locator = CBlockLocator()
locator.vHave = r
return locator
class TxStore(object):
def __init__(self, datadir):
self.txDB = dbm.ndbm.open(datadir + "/transactions", 'c')
def close(self):
self.txDB.close()
def get(self, txhash):
serialized_tx = None
try:
serialized_tx = self.txDB[repr(txhash)]
except KeyError:
return None
f = BytesIO(serialized_tx)
ret = CTransaction()
ret.deserialize(f)
ret.calc_sha256()
return ret
def add_transaction(self, tx):
tx.calc_sha256()
try:
self.txDB[repr(tx.sha256)] = bytes(tx.serialize())
except TypeError as e:
print("Unexpected error: ", sys.exc_info()[0], e.args)
def get_transactions(self, inv):
responses = []
for i in inv:
if (i.type == 1): # MSG_TX
tx = self.get(i.hash)
if tx is not None:
responses.append(msg_tx(tx))
return responses

zebra-rpc/qa/rpc-tests/test_framework/blocktools.py

@@ -0,0 +1,110 @@
#!/usr/bin/env python3
# blocktools.py - utilities for manipulating blocks and transactions
# Copyright (c) 2015-2016 The Bitcoin Core developers
# Copyright (c) 2017-2022 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
from hashlib import blake2b
from .mininode import (
CBlock, CTransaction, CTxIn, CTxOut, COutPoint,
BLOSSOM_POW_TARGET_SPACING_RATIO,
)
from .script import CScript, OP_0, OP_EQUAL, OP_HASH160, OP_TRUE, OP_CHECKSIG
# Create a block (with regtest difficulty)
def create_block(hashprev, coinbase, nTime=None, nBits=None, hashBlockCommitments=None):
block = CBlock()
if nTime is None:
import time
block.nTime = int(time.time()+600)
else:
block.nTime = nTime
block.hashPrevBlock = hashprev
if hashBlockCommitments is None:
# By default NUs up to Sapling are active from block 1, so we set this to the empty root.
hashBlockCommitments = 0x3e49b5f954aa9d3545bc6c37744661eea48d7c34e3000d82b7f0010c30f4c2fb
block.hashBlockCommitments = hashBlockCommitments
if nBits is None:
block.nBits = 0x200f0f0f # difficulty retargeting is disabled in REGTEST chainparams
else:
block.nBits = nBits
block.vtx.append(coinbase)
block.hashMerkleRoot = block.calc_merkle_root()
block.hashAuthDataRoot = block.calc_auth_data_root()
block.calc_sha256()
return block
def derive_block_commitments_hash(chain_history_root, auth_data_root):
digest = blake2b(
digest_size=32,
person=b'ZcashBlockCommit')
digest.update(chain_history_root)
digest.update(auth_data_root)
digest.update(b'\x00' * 32)
return digest.digest()
def serialize_script_num(value):
r = bytearray(0)
if value == 0:
return r
neg = value < 0
absvalue = -value if neg else value
while (absvalue):
r.append(int(absvalue & 0xff))
absvalue >>= 8
if r[-1] & 0x80:
r.append(0x80 if neg else 0)
elif neg:
r[-1] |= 0x80
return r
# Create a coinbase transaction, assuming no miner fees.
# If pubkey is passed in, the coinbase output will be a P2PK output;
# otherwise an anyone-can-spend output.
def create_coinbase(height, pubkey=None, after_blossom=False, outputs=[], lockboxvalue=0):
coinbase = CTransaction()
coinbase.nExpiryHeight = height
coinbase.vin.append(CTxIn(COutPoint(0, 0xffffffff),
CScript([height, OP_0]), 0xffffffff))
coinbaseoutput = CTxOut()
coinbaseoutput.nValue = int(12.5*100000000)
if after_blossom:
coinbaseoutput.nValue //= BLOSSOM_POW_TARGET_SPACING_RATIO
halvings = height // 150 # regtest
coinbaseoutput.nValue >>= halvings
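# e.g. height 300 on regtest (pre-Blossom): halvings = 2, so the 12.5 ZEC
# subsidy becomes 3.125 ZEC.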
coinbaseoutput.nValue -= lockboxvalue
if (pubkey != None):
coinbaseoutput.scriptPubKey = CScript([pubkey, OP_CHECKSIG])
else:
coinbaseoutput.scriptPubKey = CScript([OP_TRUE])
coinbase.vout = [ coinbaseoutput ]
if len(outputs) == 0 and halvings == 0: # regtest
froutput = CTxOut()
froutput.nValue = coinbaseoutput.nValue // 5
# regtest
fraddr = bytearray([0x67, 0x08, 0xe6, 0x67, 0x0d, 0xb0, 0xb9, 0x50,
0xda, 0xc6, 0x80, 0x31, 0x02, 0x5c, 0xc5, 0xb6,
0x32, 0x13, 0xa4, 0x91])
froutput.scriptPubKey = CScript([OP_HASH160, fraddr, OP_EQUAL])
coinbaseoutput.nValue -= froutput.nValue
coinbase.vout.append(froutput)
coinbaseoutput.nValue -= sum(output.nValue for output in outputs)
assert coinbaseoutput.nValue >= 0, coinbaseoutput.nValue
coinbase.vout.extend(outputs)
coinbase.calc_sha256()
return coinbase
# Create a transaction with an anyone-can-spend output, that spends the
# nth output of prevtx.
def create_transaction(prevtx, n, sig, value):
tx = CTransaction()
assert(n < len(prevtx.vout))
tx.vin.append(CTxIn(COutPoint(prevtx.sha256, n), sig, 0xffffffff))
tx.vout.append(CTxOut(value, b""))
tx.calc_sha256()
return tx

zebra-rpc/qa/rpc-tests/test_framework/comptool.py

@@ -0,0 +1,446 @@
#!/usr/bin/env python3
# Copyright (c) 2015-2016 The Bitcoin Core developers
# Copyright (c) 2017-2022 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
from .blockstore import BlockStore, TxStore
from .mininode import (
CBlock,
CBlockHeader,
CTransaction,
CInv,
msg_block,
msg_getheaders,
msg_headers,
msg_inv,
msg_mempool,
msg_ping,
mininode_lock,
MAX_INV_SZ,
NodeConn,
NodeConnCB,
)
from .util import p2p_port
import time
'''
This is a tool for comparing two or more bitcoinds to each other
using a script provided.
To use, create a class that implements get_tests(), and pass it in
as the test generator to TestManager. get_tests() should be a python
generator that returns TestInstance objects. See below for definition.
In practice get_tests is always implemented on a subclass of ComparisonTestFramework.
'''
# TestNode behaves as follows:
# Configure with a BlockStore and TxStore
# on_inv: log the message but don't request
# on_headers: log the chain tip
# on_pong: update ping response map (for synchronization)
# on_getheaders: provide headers via BlockStore
# on_getdata: provide blocks via BlockStore
def wait_until(predicate, attempts=float('inf'), timeout=float('inf')):
attempt = 0
elapsed = 0
while attempt < attempts and elapsed < timeout:
with mininode_lock:
if predicate():
return True
attempt += 1
elapsed += 0.05
time.sleep(0.05)
return False
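# e.g. wait_until(lambda: node.verack_received, timeout=10) polls the predicate
# under mininode_lock every 50 ms until it holds or the bound is reached.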
class RejectResult(object):
'''
Outcome that expects rejection of a transaction or block.
'''
def __init__(self, code, reason=b''):
self.code = code
self.reason = reason
def match(self, other):
if self.code != other.code:
return False
return other.reason.startswith(self.reason)
def __repr__(self):
return '%i:%s' % (self.code,self.reason or '*')
class TestNode(NodeConnCB):
def __init__(self, block_store, tx_store):
NodeConnCB.__init__(self)
self.create_callback_map()
self.conn = None
self.bestblockhash = None
self.block_store = block_store
self.block_request_map = {}
self.tx_store = tx_store
self.tx_request_map = {}
self.block_reject_map = {}
self.tx_reject_map = {}
# When the pingmap is non-empty we're waiting for
# a response
self.pingMap = {}
self.lastInv = []
self.closed = False
def on_close(self, conn):
self.closed = True
def add_connection(self, conn):
self.conn = conn
def on_headers(self, conn, message):
if len(message.headers) > 0:
best_header = message.headers[-1]
best_header.calc_sha256()
self.bestblockhash = best_header.sha256
def on_getheaders(self, conn, message):
response = self.block_store.headers_for(message.locator, message.hashstop)
if response is not None:
conn.send_message(response)
def on_getdata(self, conn, message):
[conn.send_message(r) for r in self.block_store.get_blocks(message.inv)]
[conn.send_message(r) for r in self.tx_store.get_transactions(message.inv)]
for i in message.inv:
if i.type == 1:
self.tx_request_map[i.hash] = True
elif i.type == 2:
self.block_request_map[i.hash] = True
def on_inv(self, conn, message):
self.lastInv = [x.hash for x in message.inv]
def on_pong(self, conn, message):
try:
del self.pingMap[message.nonce]
except KeyError:
raise AssertionError("Got pong for unknown ping [%s]" % repr(message))
def on_reject(self, conn, message):
if message.message == b'tx':
self.tx_reject_map[message.data] = RejectResult(message.code, message.reason)
if message.message == b'block':
self.block_reject_map[message.data] = RejectResult(message.code, message.reason)
def send_inv(self, obj):
mtype = 2 if isinstance(obj, CBlock) else 1
self.conn.send_message(msg_inv([CInv(mtype, obj.sha256)]))
def send_getheaders(self):
# We ask for headers from their last tip.
m = msg_getheaders()
m.locator = self.block_store.get_locator(self.bestblockhash)
self.conn.send_message(m)
def send_header(self, header):
m = msg_headers()
m.headers.append(header)
self.conn.send_message(m)
# This assumes BIP31
def send_ping(self, nonce):
self.pingMap[nonce] = True
self.conn.send_message(msg_ping(nonce))
def received_ping_response(self, nonce):
return nonce not in self.pingMap
def send_mempool(self):
self.lastInv = []
self.conn.send_message(msg_mempool())
# TestInstance:
#
# Instances of these are generated by the test generator, and fed into the
# comptool.
#
# "blocks_and_transactions" should be an array of
# [obj, True/False/None, hash/None]:
# - obj is either a CBlock, CBlockHeader, or a CTransaction, and
# - the second value indicates whether the object should be accepted
# into the blockchain or mempool (for tests where we expect a certain
# answer), or "None" if we don't expect a certain answer and are just
# comparing the behavior of the nodes being tested.
# - the third value is the hash to test the tip against (if None or omitted,
# use the hash of the block)
# - NOTE: if a block header, no test is performed; instead the header is
# just added to the block_store. This is to facilitate block delivery
# when communicating with headers-first clients (when withholding an
# intermediate block).
# sync_every_block: if True, then each block will be inv'ed, synced, and
# nodes will be tested based on the outcome for the block. If False,
# then inv's accumulate until all blocks are processed (or max inv size
# is reached) and then sent out in one inv message. Then the final block
# will be synced across all connections, and the outcome of the final
# block will be tested.
# sync_every_tx: analogous to behavior for sync_every_block, except if outcome
# on the final tx is None, then contents of entire mempool are compared
# across all connections. (If outcome of final tx is specified as true
# or false, then only the last tx is tested against outcome.)
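# Illustrative sketch (not from the original file) of a get_tests() generator
# on a ComparisonTestFramework subclass; self.tip, self.height and bad_block
# are hypothetical names:
#   def get_tests(self):
#       block = create_block(self.tip, create_coinbase(self.height))
#       yield TestInstance([[block, True]])                 # expect acceptance
#       yield TestInstance([[bad_block, RejectResult(16)]]) # expect rejection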
class TestInstance(object):
def __init__(self, objects=None, sync_every_block=True, sync_every_tx=False):
self.blocks_and_transactions = objects if objects else []
self.sync_every_block = sync_every_block
self.sync_every_tx = sync_every_tx
class TestManager(object):
def __init__(self, testgen, datadir):
self.test_generator = testgen
self.connections = []
self.test_nodes = []
self.block_store = BlockStore(datadir)
self.tx_store = TxStore(datadir)
self.ping_counter = 1
def add_all_connections(self, nodes):
for i in range(len(nodes)):
# Create a p2p connection to each node
test_node = TestNode(self.block_store, self.tx_store)
self.test_nodes.append(test_node)
self.connections.append(NodeConn('127.0.0.1', p2p_port(i), nodes[i], test_node))
# Make sure the TestNode (callback class) has a reference to its
# associated NodeConn
test_node.add_connection(self.connections[-1])
def wait_for_disconnections(self):
def disconnected():
return all(node.closed for node in self.test_nodes)
return wait_until(disconnected, timeout=10)
def wait_for_verack(self):
def veracked():
return all(node.verack_received for node in self.test_nodes)
return wait_until(veracked, timeout=10)
def wait_for_pings(self, counter):
def received_pongs():
return all(node.received_ping_response(counter) for node in self.test_nodes)
return wait_until(received_pongs)
# sync_blocks: Wait for all connections to request the blockhash given
# then send get_headers to find out the tip of each node, and synchronize
# the response by using a ping (and waiting for pong with same nonce).
def sync_blocks(self, blockhash, num_blocks):
def blocks_requested():
return all(
blockhash in node.block_request_map and node.block_request_map[blockhash]
for node in self.test_nodes
)
# --> error if not requested
if not wait_until(blocks_requested, attempts=20*num_blocks):
# print [ c.cb.block_request_map for c in self.connections ]
raise AssertionError("Not all nodes requested block")
# Send getheaders message
[ c.cb.send_getheaders() for c in self.connections ]
# Send ping and wait for response -- synchronization hack
[ c.cb.send_ping(self.ping_counter) for c in self.connections ]
self.wait_for_pings(self.ping_counter)
self.ping_counter += 1
# Analogous to sync_blocks (see above)
def sync_transaction(self, txhash, num_events):
# Wait for nodes to request transaction (50ms sleep * 20 tries * num_events)
def transaction_requested():
return all(
txhash in node.tx_request_map and node.tx_request_map[txhash]
for node in self.test_nodes
)
# --> error if not requested
if not wait_until(transaction_requested, attempts=20*num_events):
# print [ c.cb.tx_request_map for c in self.connections ]
raise AssertionError("Not all nodes requested transaction")
# Get the mempool
[ c.cb.send_mempool() for c in self.connections ]
# Send ping and wait for response -- synchronization hack
[ c.cb.send_ping(self.ping_counter) for c in self.connections ]
self.wait_for_pings(self.ping_counter)
self.ping_counter += 1
# Sort inv responses from each node
with mininode_lock:
[ c.cb.lastInv.sort() for c in self.connections ]
# Verify that the tip of each connection all agree with each other, and
# with the expected outcome (if given)
def check_results(self, blockhash, outcome):
with mininode_lock:
for c in self.connections:
if outcome is None:
if c.cb.bestblockhash != self.connections[0].cb.bestblockhash:
return False
elif isinstance(outcome, RejectResult): # Check that block was rejected w/ code
if c.cb.bestblockhash == blockhash:
return False
if blockhash not in c.cb.block_reject_map:
print('Block not in reject map: %064x' % (blockhash))
return False
if not outcome.match(c.cb.block_reject_map[blockhash]):
print('Block rejected with %s instead of expected %s: %064x' % (c.cb.block_reject_map[blockhash], outcome, blockhash))
return False
elif ((c.cb.bestblockhash == blockhash) != outcome):
if outcome is True and blockhash in c.cb.block_reject_map:
print('Block rejected with %s instead of accepted: %064x' % (c.cb.block_reject_map[blockhash], blockhash))
return False
return True
# Either check that the mempools all agree with each other, or that
# txhash's presence in the mempool matches the outcome specified.
# This is somewhat of a strange comparison, in that we're either comparing
# a particular tx to an outcome, or the entire mempools altogether;
# perhaps it would be useful to add the ability to check explicitly that
# a particular tx's existence in the mempool is the same across all nodes.
def check_mempool(self, txhash, outcome):
with mininode_lock:
for c in self.connections:
if outcome is None:
# Make sure the mempools agree with each other
if c.cb.lastInv != self.connections[0].cb.lastInv:
# print c.rpc.getrawmempool()
return False
elif isinstance(outcome, RejectResult): # Check that tx was rejected w/ code
if txhash in c.cb.lastInv:
return False
if txhash not in c.cb.tx_reject_map:
print('Tx not in reject map: %064x' % (txhash))
return False
if not outcome.match(c.cb.tx_reject_map[txhash]):
print('Tx rejected with %s instead of expected %s: %064x' % (c.cb.tx_reject_map[txhash], outcome, txhash))
return False
elif ((txhash in c.cb.lastInv) != outcome):
# print c.rpc.getrawmempool(), c.cb.lastInv
return False
return True
def run(self):
# Wait until verack is received
self.wait_for_verack()
test_number = 1
for test_instance in self.test_generator.get_tests():
# We use these variables to keep track of the last block
# and last transaction in the tests, which are used
# if we're not syncing on every block or every tx.
[ block, block_outcome, tip ] = [ None, None, None ]
[ tx, tx_outcome ] = [ None, None ]
invqueue = []
for test_obj in test_instance.blocks_and_transactions:
b_or_t = test_obj[0]
outcome = test_obj[1]
# Determine if we're dealing with a block or tx
if isinstance(b_or_t, CBlock): # Block test runner
block = b_or_t
block_outcome = outcome
tip = block.sha256
# each test_obj can have an optional third argument
# to specify the tip we should compare with
# (default is to use the block being tested)
if len(test_obj) >= 3:
tip = test_obj[2]
# Add to shared block_store, set as current block
# If there was an open getdata request for the block
# previously, and we didn't have an entry in the
# block_store, then immediately deliver, because the
# node wouldn't send another getdata request while
# the earlier one is outstanding.
first_block_with_hash = True
if self.block_store.get(block.sha256) is not None:
first_block_with_hash = False
with mininode_lock:
self.block_store.add_block(block)
for c in self.connections:
if first_block_with_hash and block.sha256 in c.cb.block_request_map and c.cb.block_request_map[block.sha256] == True:
# There was a previous request for this block hash
# Most likely, we delivered a header for this block
# but never had the block to respond to the getdata
c.send_message(msg_block(block))
else:
c.cb.block_request_map[block.sha256] = False
# Either send inv's to each node and sync, or add
# to invqueue for later inv'ing.
if (test_instance.sync_every_block):
# if we expect success, send inv and sync every block
# if we expect failure, just push the block and see what happens.
if outcome == True:
[ c.cb.send_inv(block) for c in self.connections ]
self.sync_blocks(block.sha256, 1)
else:
[ c.send_message(msg_block(block)) for c in self.connections ]
[ c.cb.send_ping(self.ping_counter) for c in self.connections ]
self.wait_for_pings(self.ping_counter)
self.ping_counter += 1
if (not self.check_results(tip, outcome)):
raise AssertionError("Test failed at test %d" % test_number)
else:
invqueue.append(CInv(2, block.sha256))
elif isinstance(b_or_t, CBlockHeader):
block_header = b_or_t
self.block_store.add_header(block_header)
[ c.cb.send_header(block_header) for c in self.connections ]
else: # Tx test runner
assert(isinstance(b_or_t, CTransaction))
tx = b_or_t
tx_outcome = outcome
# Add to shared tx store and clear map entry
with mininode_lock:
self.tx_store.add_transaction(tx)
for c in self.connections:
c.cb.tx_request_map[tx.sha256] = False
# Again, either inv to all nodes or save for later
if (test_instance.sync_every_tx):
[ c.cb.send_inv(tx) for c in self.connections ]
self.sync_transaction(tx.sha256, 1)
if (not self.check_mempool(tx.sha256, outcome)):
raise AssertionError("Test failed at test %d" % test_number)
else:
invqueue.append(CInv(1, tx.sha256))
# Ensure we're not overflowing the inv queue
if len(invqueue) == MAX_INV_SZ:
[ c.send_message(msg_inv(invqueue)) for c in self.connections ]
invqueue = []
# Do final sync if we weren't syncing on every block or every tx.
if (not test_instance.sync_every_block and block is not None):
if len(invqueue) > 0:
[ c.send_message(msg_inv(invqueue)) for c in self.connections ]
invqueue = []
self.sync_blocks(block.sha256, len(test_instance.blocks_and_transactions))
if (not self.check_results(tip, block_outcome)):
raise AssertionError("Block test failed at test %d" % test_number)
if (not test_instance.sync_every_tx and tx is not None):
if len(invqueue) > 0:
[ c.send_message(msg_inv(invqueue)) for c in self.connections ]
invqueue = []
self.sync_transaction(tx.sha256, len(test_instance.blocks_and_transactions))
if (not self.check_mempool(tx.sha256, tx_outcome)):
raise AssertionError("Mempool test failed at test %d" % test_number)
print("Test %d: PASS" % test_number, [ c.rpc.getblockcount() for c in self.connections ])
test_number += 1
[ c.disconnect_node() for c in self.connections ]
self.wait_for_disconnections()
self.block_store.close()
self.tx_store.close()

zebra-rpc/qa/rpc-tests/test_framework/coverage.py

@@ -0,0 +1,107 @@
#!/usr/bin/env python3
# Copyright (c) 2015-2016 The Bitcoin Core developers
# Copyright (c) 2020-2022 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
"""
This module contains utilities for doing coverage analysis on the RPC
interface.
It provides a way to track which RPC commands are exercised during
testing.
"""
import os
REFERENCE_FILENAME = 'rpc_interface.txt'
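# Typical wiring (sketch; the framework-side call sites are assumptions):
#   logfile = get_filename(coverage_dir, node_index)
#   node = AuthServiceProxyWrapper(proxy, logfile)
# Every RPC call made through `node` then appends its method name to logfile.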
class AuthServiceProxyWrapper(object):
"""
An object that wraps AuthServiceProxy to record specific RPC calls.
"""
def __init__(self, auth_service_proxy_instance, coverage_logfile=None):
"""
Kwargs:
auth_service_proxy_instance (AuthServiceProxy): the instance
being wrapped.
coverage_logfile (str): if specified, write each service_name
out to a file when called.
"""
self.auth_service_proxy_instance = auth_service_proxy_instance
self.coverage_logfile = coverage_logfile
def __getattr__(self, *args, **kwargs):
return_val = self.auth_service_proxy_instance.__getattr__(
*args, **kwargs)
return AuthServiceProxyWrapper(return_val, self.coverage_logfile)
def __call__(self, *args, **kwargs):
"""
Delegates to AuthServiceProxy, then writes the particular RPC method
called to a file.
"""
return_val = self.auth_service_proxy_instance.__call__(*args, **kwargs)
rpc_method = self.auth_service_proxy_instance._service_name
if self.coverage_logfile:
with open(self.coverage_logfile, 'a+', encoding='utf8') as f:
f.write("%s\n" % rpc_method)
return return_val
@property
def url(self):
return self.auth_service_proxy_instance.url
def get_filename(dirname, n_node):
"""
Get a filename unique to the test process ID and node.
This file will contain a list of RPC commands covered.
"""
pid = str(os.getpid())
return os.path.join(
dirname, "coverage.pid%s.node%s.txt" % (pid, str(n_node)))
def write_all_rpc_commands(dirname, node):
"""
Write out a list of all RPC functions available in `bitcoin-cli` for
coverage comparison. This will only happen once per coverage
directory.
Args:
dirname (str): temporary test dir
node (AuthServiceProxy): client
Returns:
bool. if the RPC interface file was written.
"""
filename = os.path.join(dirname, REFERENCE_FILENAME)
if os.path.isfile(filename):
return False
help_output = node.help().split('\n')
commands = set()
for line in help_output:
line = line.strip()
# Ignore blanks and headers
if line and not line.startswith('='):
commands.add("%s\n" % line.split()[0])
with open(filename, 'w', encoding='utf8') as f:
f.writelines(list(commands))
return True

zebra-rpc/qa/rpc-tests/test_framework/equihash.py

@@ -0,0 +1,294 @@
from operator import itemgetter
import struct
from functools import reduce
DEBUG = False
VERBOSE = False
word_size = 32
word_mask = (1<<word_size)-1
def expand_array(inp, out_len, bit_len, byte_pad=0):
assert bit_len >= 8 and word_size >= 7+bit_len
out_width = (bit_len+7)//8 + byte_pad
assert out_len == 8*out_width*len(inp)//bit_len
out = bytearray(out_len)
bit_len_mask = (1 << bit_len) - 1
# The acc_bits least-significant bits of acc_value represent a bit sequence
# in big-endian order.
acc_bits = 0
acc_value = 0
j = 0
for i in range(len(inp)):
acc_value = ((acc_value << 8) & word_mask) | inp[i]
acc_bits += 8
# When we have bit_len or more bits in the accumulator, write the next
# output element.
if acc_bits >= bit_len:
acc_bits -= bit_len
for x in range(byte_pad, out_width):
out[j+x] = (
# Big-endian
acc_value >> (acc_bits+(8*(out_width-x-1)))
) & (
# Apply bit_len_mask across byte boundaries
(bit_len_mask >> (8*(out_width-x-1))) & 0xFF
)
j += out_width
return out
def compress_array(inp, out_len, bit_len, byte_pad=0):
assert bit_len >= 8 and word_size >= 7+bit_len
in_width = (bit_len+7)//8 + byte_pad
assert out_len == bit_len*len(inp)//(8*in_width)
out = bytearray(out_len)
bit_len_mask = (1 << bit_len) - 1
# The acc_bits least-significant bits of acc_value represent a bit sequence
# in big-endian order.
acc_bits = 0
acc_value = 0
j = 0
for i in range(out_len):
# When we have fewer than 8 bits left in the accumulator, read the next
# input element.
if acc_bits < 8:
acc_value = ((acc_value << bit_len) & word_mask) | inp[j]
for x in range(byte_pad, in_width):
acc_value = acc_value | (
(
# Apply bit_len_mask across byte boundaries
inp[j+x] & ((bit_len_mask >> (8*(in_width-x-1))) & 0xFF)
) << (8*(in_width-x-1))) # Big-endian
j += in_width
acc_bits += bit_len
acc_bits -= 8
out[i] = (acc_value >> acc_bits) & 0xFF
return out
def get_indices_from_minimal(minimal, bit_len):
eh_index_size = 4
assert (bit_len+7)//8 <= eh_index_size
len_indices = 8*eh_index_size*len(minimal)//bit_len
byte_pad = eh_index_size - (bit_len+7)//8
expanded = expand_array(minimal, len_indices, bit_len, byte_pad)
return [struct.unpack('>I', expanded[i:i+4])[0] for i in range(0, len_indices, eh_index_size)]
def get_minimal_from_indices(indices, bit_len):
eh_index_size = 4
assert (bit_len+7)//8 <= eh_index_size
len_indices = len(indices)*eh_index_size
min_len = bit_len*len_indices//(8*eh_index_size)
byte_pad = eh_index_size - (bit_len+7)//8
byte_indices = bytearray(b''.join([struct.pack('>I', i) for i in indices]))
return compress_array(byte_indices, min_len, bit_len, byte_pad)
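# These two functions are inverses for valid parameters; e.g. with the Zcash
# production parameters n=200, k=9 (bit_len = collision_length+1 = 21, 512 indices):
#   get_indices_from_minimal(get_minimal_from_indices(idx, 21), 21) == idx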
def hash_nonce(digest, nonce):
for i in range(8):
digest.update(struct.pack('<I', nonce >> (32*i)))
def hash_xi(digest, xi):
digest.update(struct.pack('<I', xi))
return digest # For chaining
def count_zeroes(h):
# Convert to binary string
if type(h) == bytearray:
h = ''.join('{0:08b}'.format(x) for x in h)
else:
h = ''.join('{0:08b}'.format(ord(x)) for x in h)
# Count leading zeroes
return (h+'1').index('1')
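# e.g. count_zeroes(bytearray([0x0f])) == 4: '00001111' has four leading zero bits.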
def has_collision(ha, hb, i, l):
res = [ha[j] == hb[j] for j in range((i-1)*l//8, i*l//8)]
return reduce(lambda x, y: x and y, res)
def distinct_indices(a, b):
for i in a:
for j in b:
if i == j:
return False
return True
def xor(ha, hb):
return bytearray(a^b for a,b in zip(ha,hb))
def gbp_basic(digest, n, k):
'''Implementation of Basic Wagner's algorithm for the GBP.'''
validate_params(n, k)
collision_length = n//(k+1)
hash_length = (k+1)*((collision_length+7)//8)
indices_per_hash_output = 512//n
# 1) Generate first list
if DEBUG: print('Generating first list')
X = []
tmp_hash = b''
for i in range(0, 2**(collision_length+1)):
r = i % indices_per_hash_output
if r == 0:
# X_i = H(I||V||x_i)
curr_digest = digest.copy()
hash_xi(curr_digest, i//indices_per_hash_output)
tmp_hash = curr_digest.digest()
X.append((
expand_array(bytearray(tmp_hash[r*n//8:(r+1)*n//8]),
hash_length, collision_length),
(i,)
))
# 3) Repeat step 2 until 2n/(k+1) bits remain
for i in range(1, k):
if DEBUG: print('Round %d:' % i)
# 2a) Sort the list
if DEBUG: print('- Sorting list')
X.sort(key=itemgetter(0))
if DEBUG and VERBOSE:
for Xi in X[-32:]:
print('%s %s' % (print_hash(Xi[0]), Xi[1]))
if DEBUG: print('- Finding collisions')
Xc = []
while len(X) > 0:
# 2b) Find next set of unordered pairs with collisions on first n/(k+1) bits
j = 1
while j < len(X):
if not has_collision(X[-1][0], X[-1-j][0], i, collision_length):
break
j += 1
# 2c) Store tuples (X_i ^ X_j, (i, j)) on the table
for l in range(0, j-1):
for m in range(l+1, j):
# Check that there are no duplicate indices in tuples i and j
if distinct_indices(X[-1-l][1], X[-1-m][1]):
if X[-1-l][1][0] < X[-1-m][1][0]:
concat = X[-1-l][1] + X[-1-m][1]
else:
concat = X[-1-m][1] + X[-1-l][1]
Xc.append((xor(X[-1-l][0], X[-1-m][0]), concat))
# 2d) Drop this set
while j > 0:
X.pop(-1)
j -= 1
# 2e) Replace previous list with new list
X = Xc
# k+1) Find a collision on the last 2n/(k+1) bits
if DEBUG:
print('Final round:')
print('- Sorting list')
X.sort(key=itemgetter(0))
if DEBUG and VERBOSE:
for Xi in X[-32:]:
print('%s %s' % (print_hash(Xi[0]), Xi[1]))
if DEBUG: print('- Finding collisions')
solns = []
while len(X) > 0:
j = 1
while j < len(X):
if not (has_collision(X[-1][0], X[-1-j][0], k, collision_length) and
has_collision(X[-1][0], X[-1-j][0], k+1, collision_length)):
break
j += 1
for l in range(0, j-1):
for m in range(l+1, j):
res = xor(X[-1-l][0], X[-1-m][0])
if count_zeroes(res) == 8*hash_length and distinct_indices(X[-1-l][1], X[-1-m][1]):
if DEBUG and VERBOSE:
print('Found solution:')
print('- %s %s' % (print_hash(X[-1-l][0]), X[-1-l][1]))
print('- %s %s' % (print_hash(X[-1-m][0]), X[-1-m][1]))
if X[-1-l][1][0] < X[-1-m][1][0]:
solns.append(list(X[-1-l][1] + X[-1-m][1]))
else:
solns.append(list(X[-1-m][1] + X[-1-l][1]))
# 2d) Drop this set
while j > 0:
X.pop(-1)
j -= 1
return [get_minimal_from_indices(soln, collision_length+1) for soln in solns]
def gbp_validate(digest, minimal, n, k):
validate_params(n, k)
collision_length = n//(k+1)
hash_length = (k+1)*((collision_length+7)//8)
indices_per_hash_output = 512//n
solution_width = (1 << k)*(collision_length+1)//8
if len(minimal) != solution_width:
print('Invalid solution length: %d (expected %d)' % \
(len(minimal), solution_width))
return False
X = []
for i in get_indices_from_minimal(minimal, collision_length+1):
r = i % indices_per_hash_output
# X_i = H(I||V||x_i)
curr_digest = digest.copy()
hash_xi(curr_digest, i//indices_per_hash_output)
tmp_hash = curr_digest.digest()
X.append((
expand_array(bytearray(tmp_hash[r*n//8:(r+1)*n//8]),
hash_length, collision_length),
(i,)
))
for r in range(1, k+1):
Xc = []
for i in range(0, len(X), 2):
if not has_collision(X[i][0], X[i+1][0], r, collision_length):
print('Invalid solution: invalid collision length between StepRows')
return False
if X[i+1][1][0] < X[i][1][0]:
print('Invalid solution: Index tree incorrectly ordered')
return False
if not distinct_indices(X[i][1], X[i+1][1]):
print('Invalid solution: duplicate indices')
return False
Xc.append((xor(X[i][0], X[i+1][0]), X[i][1] + X[i+1][1]))
X = Xc
if len(X) != 1:
print('Invalid solution: incorrect length after end of rounds: %d' % len(X))
return False
if count_zeroes(X[0][0]) != 8*hash_length:
print('Invalid solution: incorrect number of zeroes: %d' % count_zeroes(X[0][0]))
return False
return True
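# End-to-end usage sketch (illustrative, with toy parameters and a made-up
# header; it mirrors how callers are expected to drive gbp_basic and
# gbp_validate, but is not invoked by the framework itself):
def _demo_solve_and_validate():
    from hashlib import blake2b
    n, k = 48, 5
    # Digest must hold indices_per_hash_output (512//n) hash outputs of n bits.
    digest = blake2b(digest_size=(512//n)*n//8, person=zcash_person(n, k))
    digest.update(b'example block header')  # hypothetical input
    hash_nonce(digest, 0)
    for soln in gbp_basic(digest, n, k):
        assert gbp_validate(digest, soln, n, k)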
def zcash_person(n, k):
return b'ZcashPoW' + struct.pack('<II', n, k)
def print_hash(h):
if type(h) == bytearray:
return ''.join('{0:02x}'.format(x) for x in h)
else:
return ''.join('{0:02x}'.format(ord(x)) for x in h)
def validate_params(n, k):
if (k >= n):
raise ValueError('n must be larger than k')
if (((n//(k+1))+1) >= 32):
raise ValueError('Parameters must satisfy n/(k+1)+1 < 32')

View File

@@ -0,0 +1,207 @@
from hashlib import blake2b
import struct
from typing import (List, Optional)
from .mininode import (CBlockHeader, block_work_from_compact, ser_compactsize, ser_uint256)
from .util import (NU5_BRANCH_ID, NU6_BRANCH_ID)
def H(msg: bytes, consensusBranchId: int) -> bytes:
digest = blake2b(
digest_size=32,
person=b'ZcashHistory' + struct.pack("<I", consensusBranchId))
digest.update(msg)
return digest.digest()
class ZcashMMRNode():
# leaf nodes have no children
left_child: Optional['ZcashMMRNode']
right_child: Optional['ZcashMMRNode']
# commitments
hashSubtreeCommitment: bytes
nEarliestTimestamp: int
nLatestTimestamp: int
nEarliestTargetBits: int
nLatestTargetBits: int
hashEarliestSaplingRoot: bytes # left child's sapling root
hashLatestSaplingRoot: bytes # right child's sapling root
nSubTreeTotalWork: int # total difficulty accumulated within each subtree
nEarliestHeight: int
nLatestHeight: int
nSaplingTxCount: int # number of Sapling transactions in block
# NU5 only.
hashEarliestOrchardRoot: Optional[bytes] # left child's Orchard root
hashLatestOrchardRoot: Optional[bytes] # right child's Orchard root
nOrchardTxCount: Optional[int] # number of Orchard transactions in block
consensusBranchId: bytes
@classmethod
def from_block(
cls, block: CBlockHeader, height,
sapling_root, sapling_tx_count,
consensusBranchId,
v2_data=None
) -> 'ZcashMMRNode':
'''Create a leaf node from a block'''
if v2_data is not None:
assert consensusBranchId in [NU5_BRANCH_ID, NU6_BRANCH_ID]
orchard_root = v2_data[0]
orchard_tx_count = v2_data[1]
else:
orchard_root = None
orchard_tx_count = None
node = cls()
node.left_child = None
node.right_child = None
node.hashSubtreeCommitment = ser_uint256(block.rehash())
node.nEarliestTimestamp = block.nTime
node.nLatestTimestamp = block.nTime
node.nEarliestTargetBits = block.nBits
node.nLatestTargetBits = block.nBits
node.hashEarliestSaplingRoot = sapling_root
node.hashLatestSaplingRoot = sapling_root
node.nSubTreeTotalWork = block_work_from_compact(block.nBits)
node.nEarliestHeight = height
node.nLatestHeight = height
node.nSaplingTxCount = sapling_tx_count
node.hashEarliestOrchardRoot = orchard_root
node.hashLatestOrchardRoot = orchard_root
node.nOrchardTxCount = orchard_tx_count
node.consensusBranchId = consensusBranchId
return node
def serialize(self) -> bytes:
'''serializes a node'''
buf = b''
buf += self.hashSubtreeCommitment
buf += struct.pack("<I", self.nEarliestTimestamp)
buf += struct.pack("<I", self.nLatestTimestamp)
buf += struct.pack("<I", self.nEarliestTargetBits)
buf += struct.pack("<I", self.nLatestTargetBits)
buf += self.hashEarliestSaplingRoot
buf += self.hashLatestSaplingRoot
buf += ser_uint256(self.nSubTreeTotalWork)
buf += ser_compactsize(self.nEarliestHeight)
buf += ser_compactsize(self.nLatestHeight)
buf += ser_compactsize(self.nSaplingTxCount)
if self.hashEarliestOrchardRoot is not None:
buf += self.hashEarliestOrchardRoot
buf += self.hashLatestOrchardRoot
buf += ser_compactsize(self.nOrchardTxCount)
return buf
def make_parent(
left_child: ZcashMMRNode,
right_child: ZcashMMRNode) -> ZcashMMRNode:
parent = ZcashMMRNode()
parent.left_child = left_child
parent.right_child = right_child
parent.hashSubtreeCommitment = H(
left_child.serialize() + right_child.serialize(),
left_child.consensusBranchId,
)
parent.nEarliestTimestamp = left_child.nEarliestTimestamp
parent.nLatestTimestamp = right_child.nLatestTimestamp
parent.nEarliestTargetBits = left_child.nEarliestTargetBits
parent.nLatestTargetBits = right_child.nLatestTargetBits
parent.hashEarliestSaplingRoot = left_child.hashEarliestSaplingRoot
parent.hashLatestSaplingRoot = right_child.hashLatestSaplingRoot
parent.nSubTreeTotalWork = left_child.nSubTreeTotalWork + right_child.nSubTreeTotalWork
parent.nEarliestHeight = left_child.nEarliestHeight
parent.nLatestHeight = right_child.nLatestHeight
parent.nSaplingTxCount = left_child.nSaplingTxCount + right_child.nSaplingTxCount
parent.hashEarliestOrchardRoot = left_child.hashEarliestOrchardRoot
parent.hashLatestOrchardRoot = right_child.hashLatestOrchardRoot
parent.nOrchardTxCount = (
left_child.nOrchardTxCount + right_child.nOrchardTxCount
if left_child.nOrchardTxCount is not None and right_child.nOrchardTxCount is not None
else None)
parent.consensusBranchId = left_child.consensusBranchId
return parent
def make_root_commitment(root: ZcashMMRNode) -> bytes:
'''Makes the root commitment for a blockheader'''
return H(root.serialize(), root.consensusBranchId)
def get_peaks(node: ZcashMMRNode) -> List[ZcashMMRNode]:
peaks: List[ZcashMMRNode] = []
# Get number of leaves.
leaves = node.nLatestHeight - (node.nEarliestHeight - 1)
assert(leaves > 0)
# Check if the number of leaves in this subtree is a power of two.
if (leaves & (leaves - 1)) == 0:
# This subtree is full, and therefore a single peak. This also covers
# the case of a single isolated leaf.
peaks.append(node)
else:
# This is one of the generated nodes; search within its children.
peaks.extend(get_peaks(node.left_child))
peaks.extend(get_peaks(node.right_child))
return peaks
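# Illustrative helper (a sketch added here, not used by the functions above):
# an MMR over n leaves has one peak per set bit of n, with subtree sizes given
# by n's binary decomposition, e.g. 11 = 0b1011 -> peaks of 8, 2 and 1 leaves.
def _peak_sizes(n_leaves):
    return [1 << b for b in range(n_leaves.bit_length() - 1, -1, -1)
            if n_leaves & (1 << b)]
assert _peak_sizes(11) == [8, 2, 1]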
def bag_peaks(peaks: List[ZcashMMRNode]) -> ZcashMMRNode:
'''
"Bag" a list of peaks, and return the final root
'''
root = peaks[0]
for i in range(1, len(peaks)):
root = make_parent(root, peaks[i])
return root
def append(root: ZcashMMRNode, leaf: ZcashMMRNode) -> ZcashMMRNode:
'''Append a leaf to an existing tree, return the new tree root'''
# recursively find a list of peaks in the current tree
peaks: List[ZcashMMRNode] = get_peaks(root)
merged: List[ZcashMMRNode] = []
# Merge peaks from right to left.
# This will produce a list of peaks in reverse order
current = leaf
for peak in peaks[::-1]:
current_leaves = current.nLatestHeight - (current.nEarliestHeight - 1)
peak_leaves = peak.nLatestHeight - (peak.nEarliestHeight - 1)
if current_leaves == peak_leaves:
current = make_parent(peak, current)
else:
merged.append(current)
current = peak
merged.append(current)
# finally, bag the merged peaks
return bag_peaks(merged[::-1])
def delete(root: ZcashMMRNode) -> ZcashMMRNode:
'''
Delete the rightmost leaf node from an existing MMR
Return the new tree root
'''
n_leaves = root.nLatestHeight - (root.nEarliestHeight - 1)
# If there is an odd number of leaves, the rightmost leaf is a peak of its
# own, so simply replace the root with its left child.
if n_leaves & 1:
return root.left_child
# otherwise, we need to re-bag the peaks.
else:
# first peak
peaks = [root.left_child]
# we do this traversing the right (unbalanced) side of the tree
# we keep the left side (balanced subtree or leaf) of each subtree
# until we reach a leaf
subtree_root = root.right_child
while subtree_root.left_child:
peaks.append(subtree_root.left_child)
subtree_root = subtree_root.right_child
new_root = bag_peaks(peaks)
return new_root

View File

@@ -0,0 +1,215 @@
# Copyright (c) 2011 Sam Rushing
#
# key.py - OpenSSL wrapper
#
# This file is modified from python-bitcoinlib.
#
"""ECC secp256k1 crypto routines
WARNING: This module does not mlock() secrets; your private keys may end up on
disk in swap! Use with caution!
"""
import ctypes
import ctypes.util
import hashlib
import sys
ssl = ctypes.cdll.LoadLibrary(ctypes.util.find_library('ssl') or 'libeay32')
ssl.BN_new.restype = ctypes.c_void_p
ssl.BN_new.argtypes = []
ssl.BN_bin2bn.restype = ctypes.c_void_p
ssl.BN_bin2bn.argtypes = [ctypes.c_char_p, ctypes.c_int, ctypes.c_void_p]
ssl.BN_CTX_free.restype = None
ssl.BN_CTX_free.argtypes = [ctypes.c_void_p]
ssl.BN_CTX_new.restype = ctypes.c_void_p
ssl.BN_CTX_new.argtypes = []
ssl.ECDH_compute_key.restype = ctypes.c_int
ssl.ECDH_compute_key.argtypes = [ctypes.c_void_p, ctypes.c_int, ctypes.c_void_p, ctypes.c_void_p]
ssl.ECDSA_sign.restype = ctypes.c_int
ssl.ECDSA_sign.argtypes = [ctypes.c_int, ctypes.c_void_p, ctypes.c_int, ctypes.c_void_p, ctypes.c_void_p, ctypes.c_void_p]
ssl.ECDSA_verify.restype = ctypes.c_int
ssl.ECDSA_verify.argtypes = [ctypes.c_int, ctypes.c_void_p, ctypes.c_int, ctypes.c_void_p, ctypes.c_int, ctypes.c_void_p]
ssl.EC_KEY_free.restype = None
ssl.EC_KEY_free.argtypes = [ctypes.c_void_p]
ssl.EC_KEY_new_by_curve_name.restype = ctypes.c_void_p
ssl.EC_KEY_new_by_curve_name.argtypes = [ctypes.c_int]
ssl.EC_KEY_get0_group.restype = ctypes.c_void_p
ssl.EC_KEY_get0_group.argtypes = [ctypes.c_void_p]
ssl.EC_KEY_get0_public_key.restype = ctypes.c_void_p
ssl.EC_KEY_get0_public_key.argtypes = [ctypes.c_void_p]
ssl.EC_KEY_set_private_key.restype = ctypes.c_int
ssl.EC_KEY_set_private_key.argtypes = [ctypes.c_void_p, ctypes.c_void_p]
ssl.EC_KEY_set_conv_form.restype = None
ssl.EC_KEY_set_conv_form.argtypes = [ctypes.c_void_p, ctypes.c_int]
ssl.EC_KEY_set_public_key.restype = ctypes.c_int
ssl.EC_KEY_set_public_key.argtypes = [ctypes.c_void_p, ctypes.c_void_p]
ssl.i2o_ECPublicKey.restype = ctypes.c_void_p
ssl.i2o_ECPublicKey.argtypes = [ctypes.c_void_p, ctypes.c_void_p]
ssl.EC_POINT_new.restype = ctypes.c_void_p
ssl.EC_POINT_new.argtypes = [ctypes.c_void_p]
ssl.EC_POINT_free.restype = None
ssl.EC_POINT_free.argtypes = [ctypes.c_void_p]
ssl.EC_POINT_mul.restype = ctypes.c_int
ssl.EC_POINT_mul.argtypes = [ctypes.c_void_p, ctypes.c_void_p, ctypes.c_void_p, ctypes.c_void_p, ctypes.c_void_p, ctypes.c_void_p]
# this specifies the curve used with ECDSA.
NID_secp256k1 = 714 # from openssl/obj_mac.h
# Thx to Sam Devlin for the ctypes magic 64-bit fix.
def _check_result(val, func, args):
if val == 0:
raise ValueError
else:
return ctypes.c_void_p(val)
ssl.EC_KEY_new_by_curve_name.restype = ctypes.c_void_p
ssl.EC_KEY_new_by_curve_name.errcheck = _check_result
class CECKey(object):
"""Wrapper around OpenSSL's EC_KEY"""
POINT_CONVERSION_COMPRESSED = 2
POINT_CONVERSION_UNCOMPRESSED = 4
def __init__(self):
self.k = ssl.EC_KEY_new_by_curve_name(NID_secp256k1)
def __del__(self):
if ssl:
ssl.EC_KEY_free(self.k)
self.k = None
def set_secretbytes(self, secret):
priv_key = ssl.BN_bin2bn(secret, 32, ssl.BN_new())
group = ssl.EC_KEY_get0_group(self.k)
pub_key = ssl.EC_POINT_new(group)
ctx = ssl.BN_CTX_new()
if not ssl.EC_POINT_mul(group, pub_key, priv_key, None, None, ctx):
raise ValueError("Could not derive public key from the supplied secret.")
ssl.EC_KEY_set_private_key(self.k, priv_key)
ssl.EC_KEY_set_public_key(self.k, pub_key)
ssl.EC_POINT_free(pub_key)
ssl.BN_CTX_free(ctx)
return self.k
def set_privkey(self, key):
self.mb = ctypes.create_string_buffer(key)
return ssl.d2i_ECPrivateKey(ctypes.byref(self.k), ctypes.byref(ctypes.pointer(self.mb)), len(key))
def set_pubkey(self, key):
self.mb = ctypes.create_string_buffer(key)
return ssl.o2i_ECPublicKey(ctypes.byref(self.k), ctypes.byref(ctypes.pointer(self.mb)), len(key))
def get_privkey(self):
size = ssl.i2d_ECPrivateKey(self.k, 0)
mb_pri = ctypes.create_string_buffer(size)
ssl.i2d_ECPrivateKey(self.k, ctypes.byref(ctypes.pointer(mb_pri)))
return mb_pri.raw
def get_pubkey(self):
size = ssl.i2o_ECPublicKey(self.k, 0)
mb = ctypes.create_string_buffer(size)
ssl.i2o_ECPublicKey(self.k, ctypes.byref(ctypes.pointer(mb)))
return mb.raw
def get_raw_ecdh_key(self, other_pubkey):
ecdh_keybuffer = ctypes.create_string_buffer(32)
r = ssl.ECDH_compute_key(ctypes.pointer(ecdh_keybuffer), 32,
ssl.EC_KEY_get0_public_key(other_pubkey.k),
self.k, 0)
if r != 32:
raise Exception('CKey.get_ecdh_key(): ECDH_compute_key() failed')
return ecdh_keybuffer.raw
def get_ecdh_key(self, other_pubkey, kdf=lambda k: hashlib.sha256(k).digest()):
# FIXME: be warned it's not clear what the kdf should be as a default
r = self.get_raw_ecdh_key(other_pubkey)
return kdf(r)
def sign(self, hash):
# FIXME: need unit tests for below cases
if not isinstance(hash, bytes):
raise TypeError('Hash must be bytes instance; got %r' % hash.__class__)
if len(hash) != 32:
raise ValueError('Hash must be exactly 32 bytes long')
sig_size0 = ctypes.c_uint32()
sig_size0.value = ssl.ECDSA_size(self.k)
mb_sig = ctypes.create_string_buffer(sig_size0.value)
result = ssl.ECDSA_sign(0, hash, len(hash), mb_sig, ctypes.byref(sig_size0), self.k)
assert 1 == result
return mb_sig.raw[:sig_size0.value]
def verify(self, hash, sig):
"""Verify a DER signature"""
return ssl.ECDSA_verify(0, hash, len(hash), sig, len(sig), self.k) == 1
def set_compressed(self, compressed):
if compressed:
form = self.POINT_CONVERSION_COMPRESSED
else:
form = self.POINT_CONVERSION_UNCOMPRESSED
ssl.EC_KEY_set_conv_form(self.k, form)
class CPubKey(bytes):
"""An encapsulated public key
Attributes:
is_valid - Corresponds to CPubKey.IsValid()
is_fullyvalid - Corresponds to CPubKey.IsFullyValid()
is_compressed - Corresponds to CPubKey.IsCompressed()
"""
def __new__(cls, buf, _cec_key=None):
self = super(CPubKey, cls).__new__(cls, buf)
if _cec_key is None:
_cec_key = CECKey()
self._cec_key = _cec_key
self.is_fullyvalid = _cec_key.set_pubkey(self) != 0
return self
@property
def is_valid(self):
return len(self) > 0
@property
def is_compressed(self):
return len(self) == 33
def verify(self, hash, sig):
return self._cec_key.verify(hash, sig)
def __str__(self):
return repr(self)
def __repr__(self):
# Always have represent as b'<secret>' so test cases don't have to
# change for py2/3
if sys.version > '3':
return '%s(%s)' % (self.__class__.__name__, super(CPubKey, self).__repr__())
else:
return '%s(b%s)' % (self.__class__.__name__, super(CPubKey, self).__repr__())
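# Round-trip sketch (hypothetical all-0x01 secret; requires an OpenSSL build
# that still exposes the legacy EC_KEY interface, so it is defined here but
# not run on import):
def _demo_sign_verify():
    key = CECKey()
    key.set_secretbytes(b'\x01' * 32)
    sighash = hashlib.sha256(b'message').digest()  # sign() requires 32 bytes
    sig = key.sign(sighash)
    assert key.verify(sighash, sig)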

File diff suppressed because it is too large

View File

@@ -0,0 +1,157 @@
#!/usr/bin/env python3
# Copyright (c) 2014-2016 The Bitcoin Core developers
# Copyright (c) 2019-2022 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
# Linux network utilities
import sys
import socket
import struct
import array
import os
from binascii import unhexlify, hexlify
# Roughly based on https://web.archive.org/web/20190424172231/http://voorloopnul.com:80/blog/a-python-netstat-in-less-than-100-lines-of-code/ by Ricardo Pascal
STATE_ESTABLISHED = '01'
STATE_SYN_SENT = '02'
STATE_SYN_RECV = '03'
STATE_FIN_WAIT1 = '04'
STATE_FIN_WAIT2 = '05'
STATE_TIME_WAIT = '06'
STATE_CLOSE = '07'
STATE_CLOSE_WAIT = '08'
STATE_LAST_ACK = '09'
STATE_LISTEN = '0A'
STATE_CLOSING = '0B'
def get_socket_inodes(pid):
'''
Get list of socket inodes for process pid.
'''
base = '/proc/%i/fd' % pid
inodes = []
for item in os.listdir(base):
target = os.readlink(os.path.join(base, item))
if target.startswith('socket:'):
inodes.append(int(target[8:-1]))
return inodes
def _remove_empty(array):
return [x for x in array if x !='']
def _convert_ip_port(array):
host,port = array.split(':')
# convert host from mangled-per-four-bytes form as used by kernel
host = unhexlify(host)
host_out = ''
for x in range(0, len(host) // 4):
(val,) = struct.unpack('=I', host[x*4:(x+1)*4])
host_out += '%08x' % val
return host_out,int(port,16)
def netstat(typ='tcp'):
'''
Function to return a list with status of tcp connections at linux systems
To get pid of all network process running on system, you must run this script
as superuser
'''
with open('/proc/net/'+typ,'r',encoding='utf8') as f:
content = f.readlines()
content.pop(0)
result = []
for line in content:
line_array = _remove_empty(line.split(' ')) # Split lines and remove empty spaces.
tcp_id = line_array[0]
l_addr = _convert_ip_port(line_array[1])
r_addr = _convert_ip_port(line_array[2])
state = line_array[3]
inode = int(line_array[9]) # Need the inode to match with process pid.
nline = [tcp_id, l_addr, r_addr, state, inode]
result.append(nline)
return result
def get_bind_addrs(pid):
'''
Get bind addresses as (host,port) tuples for process pid.
'''
inodes = get_socket_inodes(pid)
bind_addrs = []
for conn in netstat('tcp') + netstat('tcp6'):
if conn[3] == STATE_LISTEN and conn[4] in inodes:
bind_addrs.append(conn[1])
return bind_addrs
# from: https://code.activestate.com/recipes/439093/
def all_interfaces():
'''
Return all interfaces that are up
'''
import fcntl
is_64bits = sys.maxsize > 2**32
struct_size = 40 if is_64bits else 32
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
max_possible = 8 # initial value
while True:
buf_size = max_possible * struct_size
names = array.array('B', b'\0' * buf_size)
outbytes = struct.unpack('iL', fcntl.ioctl(
s.fileno(),
0x8912, # SIOCGIFCONF
struct.pack('iL', buf_size, names.buffer_info()[0])
))[0]
if outbytes == buf_size:
max_possible *= 2
else:
break
namestr = names.tobytes()
return [(namestr[i:i+16].split(b'\0', 1)[0],
socket.inet_ntoa(namestr[i+20:i+24]))
for i in range(0, outbytes, struct_size)]
def addr_to_hex(addr):
'''
Convert string IPv4 or IPv6 address to binary address as returned by
get_bind_addrs.
Very naive implementation that certainly doesn't work for all IPv6 variants.
'''
if '.' in addr: # IPv4
addr = [int(x) for x in addr.split('.')]
elif ':' in addr: # IPv6
sub = [[], []] # prefix, suffix
x = 0
addr = addr.split(':')
for i,comp in enumerate(addr):
if comp == '':
if i == 0 or i == (len(addr)-1): # skip empty component at beginning or end
continue
x += 1 # :: skips to suffix
assert(x < 2)
else: # two bytes per component
val = int(comp, 16)
sub[x].append(val >> 8)
sub[x].append(val & 0xff)
nullbytes = 16 - len(sub[0]) - len(sub[1])
assert((x == 0 and nullbytes == 0) or (x == 1 and nullbytes > 0))
addr = sub[0] + ([0] * nullbytes) + sub[1]
else:
raise ValueError('Could not parse address %s' % addr)
return hexlify(bytearray(addr)).decode('ascii')
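# Quick self-checks (added for this port; note the caveat in the docstring:
# the IPv6 form is naive and does not account for the kernel's per-4-byte
# mangling):
assert addr_to_hex('127.0.0.1') == '7f000001'
assert addr_to_hex('::1') == '00000000000000000000000000000001'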
def test_ipv6_local():
'''
Check for (local) IPv6 support.
'''
# By using SOCK_DGRAM this will not actually make a connection, but it will
# fail if there is no route to IPv6 localhost.
have_ipv6 = True
try:
s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
s.connect(('::1', 0))
except socket.error:
have_ipv6 = False
return have_ipv6

View File

@@ -0,0 +1,157 @@
"""
Copyright 2024 Zcash Foundation
ServiceProxy is just AuthServiceProxy without the auth part.
Previous copyright, from authproxy.py:
Copyright 2011 Jeff Garzik
AuthServiceProxy has the following improvements over python-jsonrpc's
ServiceProxy class:
- HTTP connections persist for the life of the AuthServiceProxy object
(if server supports HTTP/1.1)
- sends protocol 'version', per JSON-RPC 1.1
- sends proper, incrementing 'id'
- sends Basic HTTP authentication headers
- parses all JSON numbers that look like floats as Decimal
- uses standard Python json lib
Previous copyright, from python-jsonrpc/jsonrpc/proxy.py:
Copyright (c) 2007 Jan-Klaas Kollhof
This file is part of jsonrpc.
jsonrpc is free software; you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation; either version 2.1 of the License, or
(at your option) any later version.
This software is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License
along with this software; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
"""
import decimal
import json
import logging
from http.client import HTTPConnection, HTTPSConnection, BadStatusLine
from urllib.parse import urlparse
USER_AGENT = "ServiceProxy/0.1"
HTTP_TIMEOUT = 600
log = logging.getLogger("BitcoinRPC")
class JSONRPCException(Exception):
def __init__(self, rpc_error):
Exception.__init__(self, rpc_error.get("message"))
self.error = rpc_error
def EncodeDecimal(o):
if isinstance(o, decimal.Decimal):
return str(o)
raise TypeError(repr(o) + " is not JSON serializable")
class ServiceProxy():
__id_count = 0
def __init__(self, service_url, service_name=None, timeout=HTTP_TIMEOUT, connection=None):
self.__service_url = service_url
self._service_name = service_name
self.__url = urlparse(service_url)
self.timeout = timeout
self._set_conn(connection)
def _set_conn(self, connection=None):
port = 80 if self.__url.port is None else self.__url.port
if connection:
self.__conn = connection
self.timeout = connection.timeout
elif self.__url.scheme == 'https':
self.__conn = HTTPSConnection(self.__url.hostname, port, timeout=self.timeout)
else:
self.__conn = HTTPConnection(self.__url.hostname, port, timeout=self.timeout)
def __getattr__(self, name):
if name.startswith('__') and name.endswith('__'):
# Python internal stuff
raise AttributeError
if self._service_name is not None:
name = "%s.%s" % (self._service_name, name)
return ServiceProxy(self.__service_url, name, connection=self.__conn)
def _request(self, method, path, postdata):
'''
Do an HTTP request, with retry if we get disconnected (e.g. due to a timeout).
This is a workaround for https://bugs.python.org/issue3566 which is fixed in Python 3.5.
'''
headers = {'Host': self.__url.hostname,
'User-Agent': USER_AGENT,
'Content-type': 'application/json'}
try:
self.__conn.request(method, path, postdata, headers)
return self._get_response()
except Exception as e:
# If connection was closed, try again.
# Python 3.5+ raises BrokenPipeError instead of BadStatusLine when the connection was reset.
# ConnectionResetError happens on FreeBSD with Python 3.4.
# This can be simplified now that we depend on Python 3 (previously, we could not
# refer to BrokenPipeError or ConnectionResetError which did not exist on Python 2)
if ((isinstance(e, BadStatusLine) and e.line == "''")
or e.__class__.__name__ in ('BrokenPipeError', 'ConnectionResetError')):
self.__conn.close()
self.__conn.request(method, path, postdata, headers)
return self._get_response()
else:
raise
def __call__(self, *args):
ServiceProxy.__id_count += 1
log.debug("-%s-> %s %s"%(ServiceProxy.__id_count, self._service_name,
json.dumps(args, default=EncodeDecimal)))
postdata = json.dumps({'jsonrpc': '1.0',
'method': self._service_name,
'params': args,
'id': ServiceProxy.__id_count}, default=EncodeDecimal)
response = self._request('POST', self.__url.path, postdata)
if 'result' not in response:
raise JSONRPCException({
'code': -343, 'message': 'missing JSON-RPC result'})
else:
return response['result']
def _batch(self, rpc_call_list):
postdata = json.dumps(list(rpc_call_list), default=EncodeDecimal)
log.debug("--> "+postdata)
return self._request('POST', self.__url.path, postdata)
def _get_response(self):
http_response = self.__conn.getresponse()
if http_response is None:
raise JSONRPCException({
'code': -342, 'message': 'missing HTTP response from server'})
content_type = http_response.getheader('Content-Type')
if content_type != 'application/json; charset=utf-8':
raise JSONRPCException({
'code': -342, 'message': 'non-JSON HTTP response with \'%i %s\' from server' % (http_response.status, http_response.reason)})
responsedata = http_response.read().decode('utf8')
response = json.loads(responsedata, parse_float=decimal.Decimal)
if "error" in response and response["error"] is None:
log.debug("<-%s- %s"%(response["id"], json.dumps(response["result"], default=EncodeDecimal)))
else:
log.debug("<-- "+responsedata)
return response
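# Usage sketch (hypothetical endpoint and port; needs a running node, so it
# is shown as comments only):
#
#   proxy = ServiceProxy("http://127.0.0.1:18232")
#   count = proxy.getblockcount()      # attribute access names the method
#   info = proxy.getmininginfo()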

View File

@@ -0,0 +1,979 @@
#!/usr/bin/env python3
# Copyright (c) 2015-2016 The Bitcoin Core developers
# Copyright (c) 2017-2022 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
#
# script.py
#
# This file is modified from python-bitcoinlib.
#
"""Scripts
Functionality to build scripts, as well as SignatureHash().
"""
import sys
bchr = chr
bord = ord
if sys.version > '3':
long = int
bchr = lambda x: bytes([x])
bord = lambda x: x
from hashlib import blake2b
from binascii import hexlify
import struct
from test_framework.bignum import bn2vch
from test_framework.mininode import (CTransaction, CTxOut, hash256, ser_string, ser_uint256)
MAX_SCRIPT_SIZE = 10000
MAX_SCRIPT_ELEMENT_SIZE = 520
MAX_SCRIPT_OPCODES = 201
OPCODE_NAMES = {}
_opcode_instances = []
class CScriptOp(int):
"""A single script opcode"""
__slots__ = []
@staticmethod
def encode_op_pushdata(d):
"""Encode a PUSHDATA op, returning bytes"""
if len(d) < 0x4c:
return b'' + struct.pack('B', len(d)) + d # OP_PUSHDATA
elif len(d) <= 0xff:
return b'\x4c' + struct.pack('B', len(d)) + d # OP_PUSHDATA1
elif len(d) <= 0xffff:
return b'\x4d' + struct.pack(b'<H', len(d)) + d # OP_PUSHDATA2
elif len(d) <= 0xffffffff:
return b'\x4e' + struct.pack(b'<I', len(d)) + d # OP_PUSHDATA4
else:
raise ValueError("Data too long to encode in a PUSHDATA op")
@staticmethod
def encode_op_n(n):
"""Encode a small integer op, returning an opcode"""
if not (0 <= n <= 16):
raise ValueError('Integer must be in range 0 <= n <= 16, got %d' % n)
if n == 0:
return OP_0
else:
return CScriptOp(OP_1 + n-1)
def decode_op_n(self):
"""Decode a small integer opcode, returning an integer"""
if self == OP_0:
return 0
if not (self == OP_0 or OP_1 <= self <= OP_16):
raise ValueError('op %r is not an OP_N' % self)
return int(self - OP_1+1)
def is_small_int(self):
"""Return true if the op pushes a small integer to the stack"""
if 0x51 <= self <= 0x60 or self == 0:
return True
else:
return False
def __str__(self):
return repr(self)
def __repr__(self):
if self in OPCODE_NAMES:
return OPCODE_NAMES[self]
else:
return 'CScriptOp(0x%x)' % self
def __new__(cls, n):
try:
return _opcode_instances[n]
except IndexError:
assert len(_opcode_instances) == n
_opcode_instances.append(super(CScriptOp, cls).__new__(cls, n))
return _opcode_instances[n]
# Populate opcode instance table
for n in range(0xff+1):
CScriptOp(n)
# push value
OP_0 = CScriptOp(0x00)
OP_FALSE = OP_0
OP_PUSHDATA1 = CScriptOp(0x4c)
OP_PUSHDATA2 = CScriptOp(0x4d)
OP_PUSHDATA4 = CScriptOp(0x4e)
OP_1NEGATE = CScriptOp(0x4f)
OP_RESERVED = CScriptOp(0x50)
OP_1 = CScriptOp(0x51)
OP_TRUE=OP_1
OP_2 = CScriptOp(0x52)
OP_3 = CScriptOp(0x53)
OP_4 = CScriptOp(0x54)
OP_5 = CScriptOp(0x55)
OP_6 = CScriptOp(0x56)
OP_7 = CScriptOp(0x57)
OP_8 = CScriptOp(0x58)
OP_9 = CScriptOp(0x59)
OP_10 = CScriptOp(0x5a)
OP_11 = CScriptOp(0x5b)
OP_12 = CScriptOp(0x5c)
OP_13 = CScriptOp(0x5d)
OP_14 = CScriptOp(0x5e)
OP_15 = CScriptOp(0x5f)
OP_16 = CScriptOp(0x60)
# control
OP_NOP = CScriptOp(0x61)
OP_VER = CScriptOp(0x62)
OP_IF = CScriptOp(0x63)
OP_NOTIF = CScriptOp(0x64)
OP_VERIF = CScriptOp(0x65)
OP_VERNOTIF = CScriptOp(0x66)
OP_ELSE = CScriptOp(0x67)
OP_ENDIF = CScriptOp(0x68)
OP_VERIFY = CScriptOp(0x69)
OP_RETURN = CScriptOp(0x6a)
# stack ops
OP_TOALTSTACK = CScriptOp(0x6b)
OP_FROMALTSTACK = CScriptOp(0x6c)
OP_2DROP = CScriptOp(0x6d)
OP_2DUP = CScriptOp(0x6e)
OP_3DUP = CScriptOp(0x6f)
OP_2OVER = CScriptOp(0x70)
OP_2ROT = CScriptOp(0x71)
OP_2SWAP = CScriptOp(0x72)
OP_IFDUP = CScriptOp(0x73)
OP_DEPTH = CScriptOp(0x74)
OP_DROP = CScriptOp(0x75)
OP_DUP = CScriptOp(0x76)
OP_NIP = CScriptOp(0x77)
OP_OVER = CScriptOp(0x78)
OP_PICK = CScriptOp(0x79)
OP_ROLL = CScriptOp(0x7a)
OP_ROT = CScriptOp(0x7b)
OP_SWAP = CScriptOp(0x7c)
OP_TUCK = CScriptOp(0x7d)
# splice ops
OP_CAT = CScriptOp(0x7e)
OP_SUBSTR = CScriptOp(0x7f)
OP_LEFT = CScriptOp(0x80)
OP_RIGHT = CScriptOp(0x81)
OP_SIZE = CScriptOp(0x82)
# bit logic
OP_INVERT = CScriptOp(0x83)
OP_AND = CScriptOp(0x84)
OP_OR = CScriptOp(0x85)
OP_XOR = CScriptOp(0x86)
OP_EQUAL = CScriptOp(0x87)
OP_EQUALVERIFY = CScriptOp(0x88)
OP_RESERVED1 = CScriptOp(0x89)
OP_RESERVED2 = CScriptOp(0x8a)
# numeric
OP_1ADD = CScriptOp(0x8b)
OP_1SUB = CScriptOp(0x8c)
OP_2MUL = CScriptOp(0x8d)
OP_2DIV = CScriptOp(0x8e)
OP_NEGATE = CScriptOp(0x8f)
OP_ABS = CScriptOp(0x90)
OP_NOT = CScriptOp(0x91)
OP_0NOTEQUAL = CScriptOp(0x92)
OP_ADD = CScriptOp(0x93)
OP_SUB = CScriptOp(0x94)
OP_MUL = CScriptOp(0x95)
OP_DIV = CScriptOp(0x96)
OP_MOD = CScriptOp(0x97)
OP_LSHIFT = CScriptOp(0x98)
OP_RSHIFT = CScriptOp(0x99)
OP_BOOLAND = CScriptOp(0x9a)
OP_BOOLOR = CScriptOp(0x9b)
OP_NUMEQUAL = CScriptOp(0x9c)
OP_NUMEQUALVERIFY = CScriptOp(0x9d)
OP_NUMNOTEQUAL = CScriptOp(0x9e)
OP_LESSTHAN = CScriptOp(0x9f)
OP_GREATERTHAN = CScriptOp(0xa0)
OP_LESSTHANOREQUAL = CScriptOp(0xa1)
OP_GREATERTHANOREQUAL = CScriptOp(0xa2)
OP_MIN = CScriptOp(0xa3)
OP_MAX = CScriptOp(0xa4)
OP_WITHIN = CScriptOp(0xa5)
# crypto
OP_RIPEMD160 = CScriptOp(0xa6)
OP_SHA1 = CScriptOp(0xa7)
OP_SHA256 = CScriptOp(0xa8)
OP_HASH160 = CScriptOp(0xa9)
OP_HASH256 = CScriptOp(0xaa)
OP_CODESEPARATOR = CScriptOp(0xab)
OP_CHECKSIG = CScriptOp(0xac)
OP_CHECKSIGVERIFY = CScriptOp(0xad)
OP_CHECKMULTISIG = CScriptOp(0xae)
OP_CHECKMULTISIGVERIFY = CScriptOp(0xaf)
# expansion
OP_NOP1 = CScriptOp(0xb0)
OP_NOP2 = CScriptOp(0xb1)
OP_NOP3 = CScriptOp(0xb2)
OP_NOP4 = CScriptOp(0xb3)
OP_NOP5 = CScriptOp(0xb4)
OP_NOP6 = CScriptOp(0xb5)
OP_NOP7 = CScriptOp(0xb6)
OP_NOP8 = CScriptOp(0xb7)
OP_NOP9 = CScriptOp(0xb8)
OP_NOP10 = CScriptOp(0xb9)
# template matching params
OP_SMALLINTEGER = CScriptOp(0xfa)
OP_PUBKEYS = CScriptOp(0xfb)
OP_PUBKEYHASH = CScriptOp(0xfd)
OP_PUBKEY = CScriptOp(0xfe)
OP_INVALIDOPCODE = CScriptOp(0xff)
VALID_OPCODES = {
OP_1NEGATE,
OP_RESERVED,
OP_1,
OP_2,
OP_3,
OP_4,
OP_5,
OP_6,
OP_7,
OP_8,
OP_9,
OP_10,
OP_11,
OP_12,
OP_13,
OP_14,
OP_15,
OP_16,
OP_NOP,
OP_VER,
OP_IF,
OP_NOTIF,
OP_VERIF,
OP_VERNOTIF,
OP_ELSE,
OP_ENDIF,
OP_VERIFY,
OP_RETURN,
OP_TOALTSTACK,
OP_FROMALTSTACK,
OP_2DROP,
OP_2DUP,
OP_3DUP,
OP_2OVER,
OP_2ROT,
OP_2SWAP,
OP_IFDUP,
OP_DEPTH,
OP_DROP,
OP_DUP,
OP_NIP,
OP_OVER,
OP_PICK,
OP_ROLL,
OP_ROT,
OP_SWAP,
OP_TUCK,
OP_CAT,
OP_SUBSTR,
OP_LEFT,
OP_RIGHT,
OP_SIZE,
OP_INVERT,
OP_AND,
OP_OR,
OP_XOR,
OP_EQUAL,
OP_EQUALVERIFY,
OP_RESERVED1,
OP_RESERVED2,
OP_1ADD,
OP_1SUB,
OP_2MUL,
OP_2DIV,
OP_NEGATE,
OP_ABS,
OP_NOT,
OP_0NOTEQUAL,
OP_ADD,
OP_SUB,
OP_MUL,
OP_DIV,
OP_MOD,
OP_LSHIFT,
OP_RSHIFT,
OP_BOOLAND,
OP_BOOLOR,
OP_NUMEQUAL,
OP_NUMEQUALVERIFY,
OP_NUMNOTEQUAL,
OP_LESSTHAN,
OP_GREATERTHAN,
OP_LESSTHANOREQUAL,
OP_GREATERTHANOREQUAL,
OP_MIN,
OP_MAX,
OP_WITHIN,
OP_RIPEMD160,
OP_SHA1,
OP_SHA256,
OP_HASH160,
OP_HASH256,
OP_CHECKSIG,
OP_CHECKSIGVERIFY,
OP_CHECKMULTISIG,
OP_CHECKMULTISIGVERIFY,
OP_NOP1,
OP_NOP2,
OP_NOP3,
OP_NOP4,
OP_NOP5,
OP_NOP6,
OP_NOP7,
OP_NOP8,
OP_NOP9,
OP_NOP10,
OP_SMALLINTEGER,
OP_PUBKEYS,
OP_PUBKEYHASH,
OP_PUBKEY,
}
OPCODE_NAMES.update({
OP_0 : 'OP_0',
OP_PUSHDATA1 : 'OP_PUSHDATA1',
OP_PUSHDATA2 : 'OP_PUSHDATA2',
OP_PUSHDATA4 : 'OP_PUSHDATA4',
OP_1NEGATE : 'OP_1NEGATE',
OP_RESERVED : 'OP_RESERVED',
OP_1 : 'OP_1',
OP_2 : 'OP_2',
OP_3 : 'OP_3',
OP_4 : 'OP_4',
OP_5 : 'OP_5',
OP_6 : 'OP_6',
OP_7 : 'OP_7',
OP_8 : 'OP_8',
OP_9 : 'OP_9',
OP_10 : 'OP_10',
OP_11 : 'OP_11',
OP_12 : 'OP_12',
OP_13 : 'OP_13',
OP_14 : 'OP_14',
OP_15 : 'OP_15',
OP_16 : 'OP_16',
OP_NOP : 'OP_NOP',
OP_VER : 'OP_VER',
OP_IF : 'OP_IF',
OP_NOTIF : 'OP_NOTIF',
OP_VERIF : 'OP_VERIF',
OP_VERNOTIF : 'OP_VERNOTIF',
OP_ELSE : 'OP_ELSE',
OP_ENDIF : 'OP_ENDIF',
OP_VERIFY : 'OP_VERIFY',
OP_RETURN : 'OP_RETURN',
OP_TOALTSTACK : 'OP_TOALTSTACK',
OP_FROMALTSTACK : 'OP_FROMALTSTACK',
OP_2DROP : 'OP_2DROP',
OP_2DUP : 'OP_2DUP',
OP_3DUP : 'OP_3DUP',
OP_2OVER : 'OP_2OVER',
OP_2ROT : 'OP_2ROT',
OP_2SWAP : 'OP_2SWAP',
OP_IFDUP : 'OP_IFDUP',
OP_DEPTH : 'OP_DEPTH',
OP_DROP : 'OP_DROP',
OP_DUP : 'OP_DUP',
OP_NIP : 'OP_NIP',
OP_OVER : 'OP_OVER',
OP_PICK : 'OP_PICK',
OP_ROLL : 'OP_ROLL',
OP_ROT : 'OP_ROT',
OP_SWAP : 'OP_SWAP',
OP_TUCK : 'OP_TUCK',
OP_CAT : 'OP_CAT',
OP_SUBSTR : 'OP_SUBSTR',
OP_LEFT : 'OP_LEFT',
OP_RIGHT : 'OP_RIGHT',
OP_SIZE : 'OP_SIZE',
OP_INVERT : 'OP_INVERT',
OP_AND : 'OP_AND',
OP_OR : 'OP_OR',
OP_XOR : 'OP_XOR',
OP_EQUAL : 'OP_EQUAL',
OP_EQUALVERIFY : 'OP_EQUALVERIFY',
OP_RESERVED1 : 'OP_RESERVED1',
OP_RESERVED2 : 'OP_RESERVED2',
OP_1ADD : 'OP_1ADD',
OP_1SUB : 'OP_1SUB',
OP_2MUL : 'OP_2MUL',
OP_2DIV : 'OP_2DIV',
OP_NEGATE : 'OP_NEGATE',
OP_ABS : 'OP_ABS',
OP_NOT : 'OP_NOT',
OP_0NOTEQUAL : 'OP_0NOTEQUAL',
OP_ADD : 'OP_ADD',
OP_SUB : 'OP_SUB',
OP_MUL : 'OP_MUL',
OP_DIV : 'OP_DIV',
OP_MOD : 'OP_MOD',
OP_LSHIFT : 'OP_LSHIFT',
OP_RSHIFT : 'OP_RSHIFT',
OP_BOOLAND : 'OP_BOOLAND',
OP_BOOLOR : 'OP_BOOLOR',
OP_NUMEQUAL : 'OP_NUMEQUAL',
OP_NUMEQUALVERIFY : 'OP_NUMEQUALVERIFY',
OP_NUMNOTEQUAL : 'OP_NUMNOTEQUAL',
OP_LESSTHAN : 'OP_LESSTHAN',
OP_GREATERTHAN : 'OP_GREATERTHAN',
OP_LESSTHANOREQUAL : 'OP_LESSTHANOREQUAL',
OP_GREATERTHANOREQUAL : 'OP_GREATERTHANOREQUAL',
OP_MIN : 'OP_MIN',
OP_MAX : 'OP_MAX',
OP_WITHIN : 'OP_WITHIN',
OP_RIPEMD160 : 'OP_RIPEMD160',
OP_SHA1 : 'OP_SHA1',
OP_SHA256 : 'OP_SHA256',
OP_HASH160 : 'OP_HASH160',
OP_HASH256 : 'OP_HASH256',
OP_CODESEPARATOR : 'OP_CODESEPARATOR',
OP_CHECKSIG : 'OP_CHECKSIG',
OP_CHECKSIGVERIFY : 'OP_CHECKSIGVERIFY',
OP_CHECKMULTISIG : 'OP_CHECKMULTISIG',
OP_CHECKMULTISIGVERIFY : 'OP_CHECKMULTISIGVERIFY',
OP_NOP1 : 'OP_NOP1',
OP_NOP2 : 'OP_NOP2',
OP_NOP3 : 'OP_NOP3',
OP_NOP4 : 'OP_NOP4',
OP_NOP5 : 'OP_NOP5',
OP_NOP6 : 'OP_NOP6',
OP_NOP7 : 'OP_NOP7',
OP_NOP8 : 'OP_NOP8',
OP_NOP9 : 'OP_NOP9',
OP_NOP10 : 'OP_NOP10',
OP_SMALLINTEGER : 'OP_SMALLINTEGER',
OP_PUBKEYS : 'OP_PUBKEYS',
OP_PUBKEYHASH : 'OP_PUBKEYHASH',
OP_PUBKEY : 'OP_PUBKEY',
OP_INVALIDOPCODE : 'OP_INVALIDOPCODE',
})
OPCODES_BY_NAME = {
'OP_0' : OP_0,
'OP_PUSHDATA1' : OP_PUSHDATA1,
'OP_PUSHDATA2' : OP_PUSHDATA2,
'OP_PUSHDATA4' : OP_PUSHDATA4,
'OP_1NEGATE' : OP_1NEGATE,
'OP_RESERVED' : OP_RESERVED,
'OP_1' : OP_1,
'OP_2' : OP_2,
'OP_3' : OP_3,
'OP_4' : OP_4,
'OP_5' : OP_5,
'OP_6' : OP_6,
'OP_7' : OP_7,
'OP_8' : OP_8,
'OP_9' : OP_9,
'OP_10' : OP_10,
'OP_11' : OP_11,
'OP_12' : OP_12,
'OP_13' : OP_13,
'OP_14' : OP_14,
'OP_15' : OP_15,
'OP_16' : OP_16,
'OP_NOP' : OP_NOP,
'OP_VER' : OP_VER,
'OP_IF' : OP_IF,
'OP_NOTIF' : OP_NOTIF,
'OP_VERIF' : OP_VERIF,
'OP_VERNOTIF' : OP_VERNOTIF,
'OP_ELSE' : OP_ELSE,
'OP_ENDIF' : OP_ENDIF,
'OP_VERIFY' : OP_VERIFY,
'OP_RETURN' : OP_RETURN,
'OP_TOALTSTACK' : OP_TOALTSTACK,
'OP_FROMALTSTACK' : OP_FROMALTSTACK,
'OP_2DROP' : OP_2DROP,
'OP_2DUP' : OP_2DUP,
'OP_3DUP' : OP_3DUP,
'OP_2OVER' : OP_2OVER,
'OP_2ROT' : OP_2ROT,
'OP_2SWAP' : OP_2SWAP,
'OP_IFDUP' : OP_IFDUP,
'OP_DEPTH' : OP_DEPTH,
'OP_DROP' : OP_DROP,
'OP_DUP' : OP_DUP,
'OP_NIP' : OP_NIP,
'OP_OVER' : OP_OVER,
'OP_PICK' : OP_PICK,
'OP_ROLL' : OP_ROLL,
'OP_ROT' : OP_ROT,
'OP_SWAP' : OP_SWAP,
'OP_TUCK' : OP_TUCK,
'OP_CAT' : OP_CAT,
'OP_SUBSTR' : OP_SUBSTR,
'OP_LEFT' : OP_LEFT,
'OP_RIGHT' : OP_RIGHT,
'OP_SIZE' : OP_SIZE,
'OP_INVERT' : OP_INVERT,
'OP_AND' : OP_AND,
'OP_OR' : OP_OR,
'OP_XOR' : OP_XOR,
'OP_EQUAL' : OP_EQUAL,
'OP_EQUALVERIFY' : OP_EQUALVERIFY,
'OP_RESERVED1' : OP_RESERVED1,
'OP_RESERVED2' : OP_RESERVED2,
'OP_1ADD' : OP_1ADD,
'OP_1SUB' : OP_1SUB,
'OP_2MUL' : OP_2MUL,
'OP_2DIV' : OP_2DIV,
'OP_NEGATE' : OP_NEGATE,
'OP_ABS' : OP_ABS,
'OP_NOT' : OP_NOT,
'OP_0NOTEQUAL' : OP_0NOTEQUAL,
'OP_ADD' : OP_ADD,
'OP_SUB' : OP_SUB,
'OP_MUL' : OP_MUL,
'OP_DIV' : OP_DIV,
'OP_MOD' : OP_MOD,
'OP_LSHIFT' : OP_LSHIFT,
'OP_RSHIFT' : OP_RSHIFT,
'OP_BOOLAND' : OP_BOOLAND,
'OP_BOOLOR' : OP_BOOLOR,
'OP_NUMEQUAL' : OP_NUMEQUAL,
'OP_NUMEQUALVERIFY' : OP_NUMEQUALVERIFY,
'OP_NUMNOTEQUAL' : OP_NUMNOTEQUAL,
'OP_LESSTHAN' : OP_LESSTHAN,
'OP_GREATERTHAN' : OP_GREATERTHAN,
'OP_LESSTHANOREQUAL' : OP_LESSTHANOREQUAL,
'OP_GREATERTHANOREQUAL' : OP_GREATERTHANOREQUAL,
'OP_MIN' : OP_MIN,
'OP_MAX' : OP_MAX,
'OP_WITHIN' : OP_WITHIN,
'OP_RIPEMD160' : OP_RIPEMD160,
'OP_SHA1' : OP_SHA1,
'OP_SHA256' : OP_SHA256,
'OP_HASH160' : OP_HASH160,
'OP_HASH256' : OP_HASH256,
'OP_CODESEPARATOR' : OP_CODESEPARATOR,
'OP_CHECKSIG' : OP_CHECKSIG,
'OP_CHECKSIGVERIFY' : OP_CHECKSIGVERIFY,
'OP_CHECKMULTISIG' : OP_CHECKMULTISIG,
'OP_CHECKMULTISIGVERIFY' : OP_CHECKMULTISIGVERIFY,
'OP_NOP1' : OP_NOP1,
'OP_NOP2' : OP_NOP2,
'OP_NOP3' : OP_NOP3,
'OP_NOP4' : OP_NOP4,
'OP_NOP5' : OP_NOP5,
'OP_NOP6' : OP_NOP6,
'OP_NOP7' : OP_NOP7,
'OP_NOP8' : OP_NOP8,
'OP_NOP9' : OP_NOP9,
'OP_NOP10' : OP_NOP10,
'OP_SMALLINTEGER' : OP_SMALLINTEGER,
'OP_PUBKEYS' : OP_PUBKEYS,
'OP_PUBKEYHASH' : OP_PUBKEYHASH,
'OP_PUBKEY' : OP_PUBKEY,
}
class CScriptInvalidError(Exception):
"""Base class for CScript exceptions"""
pass
class CScriptTruncatedPushDataError(CScriptInvalidError):
"""Invalid pushdata due to truncation"""
def __init__(self, msg, data):
self.data = data
super(CScriptTruncatedPushDataError, self).__init__(msg)
# This is used, eg, for blockchain heights in coinbase scripts (bip34)
class CScriptNum(object):
def __init__(self, d=0):
self.value = d
@staticmethod
def encode(obj):
r = bytearray(0)
if obj.value == 0:
return bytes(r)
neg = obj.value < 0
absvalue = -obj.value if neg else obj.value
while (absvalue):
r.append(absvalue & 0xff)
absvalue >>= 8
if r[-1] & 0x80:
r.append(0x80 if neg else 0)
elif neg:
r[-1] |= 0x80
return struct.pack("B", len(r)) + r
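# Worked examples (added for this port): values are minimally encoded
# little-endian behind a length prefix, with the sign carried in the top bit
# of the last byte, so -1 becomes 0x81.
assert CScriptNum.encode(CScriptNum(1000)) == b'\x02\xe8\x03'
assert CScriptNum.encode(CScriptNum(-1)) == b'\x01\x81'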
class CScript(bytes):
"""Serialized script
A bytes subclass, so you can use this directly whenever bytes are accepted.
Note that this means that indexing does *not* work - you'll get an index by
byte rather than opcode. This format was chosen for efficiency so that the
general case would not require creating a lot of little CScriptOP objects.
iter(script) however does iterate by opcode.
"""
@classmethod
def __coerce_instance(cls, other):
# Coerce other into bytes
if isinstance(other, CScriptOp):
other = bytes([other])
elif isinstance(other, CScriptNum):
if (other.value == 0):
other = bytes([CScriptOp(OP_0)])
else:
other = CScriptNum.encode(other)
elif isinstance(other, int):
if 0 <= other <= 16:
other = bytes([CScriptOp.encode_op_n(other)])
elif other == -1:
other = bytes([OP_1NEGATE])
else:
other = CScriptOp.encode_op_pushdata(bn2vch(other))
elif isinstance(other, (bytes, bytearray)):
other = bytes(CScriptOp.encode_op_pushdata(other))
return other
def __add__(self, other):
# Do the coercion outside of the try block so that errors in it are
# noticed.
other = self.__coerce_instance(other)
try:
# bytes.__add__ always returns bytes instances unfortunately
return CScript(super(CScript, self).__add__(other))
except TypeError:
raise TypeError('Can not add a %r instance to a CScript' % other.__class__)
def join(self, iterable):
# join makes no sense for a CScript()
raise NotImplementedError
def __new__(cls, value=b''):
if isinstance(value, bytes) or isinstance(value, bytearray):
return super(CScript, cls).__new__(cls, value)
else:
def coerce_iterable(iterable):
for instance in iterable:
yield cls.__coerce_instance(instance)
# Annoyingly on both python2 and python3 bytes.join() always
# returns a bytes instance even when subclassed.
return super(CScript, cls).__new__(cls, b''.join(coerce_iterable(value)))
def raw_iter(self):
"""Raw iteration
Yields tuples of (opcode, data, sop_idx) so that the different possible
PUSHDATA encodings can be accurately distinguished, as well as
determining the exact opcode byte indexes. (sop_idx)
"""
i = 0
while i < len(self):
sop_idx = i
opcode = bord(self[i])
i += 1
if opcode > OP_PUSHDATA4:
yield (opcode, None, sop_idx)
else:
datasize = None
pushdata_type = None
if opcode < OP_PUSHDATA1:
pushdata_type = 'PUSHDATA(%d)' % opcode
datasize = opcode
elif opcode == OP_PUSHDATA1:
pushdata_type = 'PUSHDATA1'
if i >= len(self):
raise CScriptInvalidError('PUSHDATA1: missing data length')
datasize = bord(self[i])
i += 1
elif opcode == OP_PUSHDATA2:
pushdata_type = 'PUSHDATA2'
if i + 1 >= len(self):
raise CScriptInvalidError('PUSHDATA2: missing data length')
datasize = bord(self[i]) + (bord(self[i+1]) << 8)
i += 2
elif opcode == OP_PUSHDATA4:
pushdata_type = 'PUSHDATA4'
if i + 3 >= len(self):
raise CScriptInvalidError('PUSHDATA4: missing data length')
datasize = bord(self[i]) + (bord(self[i+1]) << 8) + (bord(self[i+2]) << 16) + (bord(self[i+3]) << 24)
i += 4
else:
assert False # shouldn't happen
data = bytes(self[i:i+datasize])
# Check for truncation
if len(data) < datasize:
raise CScriptTruncatedPushDataError('%s: truncated data' % pushdata_type, data)
i += datasize
yield (opcode, data, sop_idx)
def __iter__(self):
"""'Cooked' iteration
Returns either a CScriptOP instance, an integer, or bytes, as
appropriate.
See raw_iter() if you need to distinguish the different possible
PUSHDATA encodings.
"""
for (opcode, data, sop_idx) in self.raw_iter():
if data is not None:
yield data
else:
opcode = CScriptOp(opcode)
if opcode.is_small_int():
yield opcode.decode_op_n()
else:
yield CScriptOp(opcode)
def __repr__(self):
# Format data pushes as x('<hex>') strings; this must return str (not
# bytes) so the join() below works on Python 3
def _repr(o):
if isinstance(o, bytes):
return "x('%s')" % hexlify(o).decode('ascii')
else:
return repr(o)
ops = []
i = iter(self)
while True:
op = None
try:
op = _repr(next(i))
except CScriptTruncatedPushDataError as err:
op = '%s...<ERROR: %s>' % (_repr(err.data), err)
break
except CScriptInvalidError as err:
op = '<ERROR: %s>' % err
break
except StopIteration:
break
finally:
if op is not None:
ops.append(op)
return "CScript([%s])" % ', '.join(ops)
def GetSigOpCount(self, fAccurate):
"""Get the SigOp count.
fAccurate - Accurately count CHECKMULTISIG, see BIP16 for details.
Note that this is consensus-critical.
"""
n = 0
lastOpcode = OP_INVALIDOPCODE
for (opcode, data, sop_idx) in self.raw_iter():
if opcode in (OP_CHECKSIG, OP_CHECKSIGVERIFY):
n += 1
elif opcode in (OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY):
if fAccurate and (OP_1 <= lastOpcode <= OP_16):
n += opcode.decode_op_n()
else:
n += 20
lastOpcode = opcode
return n
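# Composition sketch (hypothetical zeroed 20-byte key hash): CScript coerces
# small ints to OP_N opcodes and bytes to PUSHDATA pushes, so a standard
# P2PKH scriptPubKey is simply:
_example_p2pkh = CScript([OP_DUP, OP_HASH160, b'\x00'*20, OP_EQUALVERIFY, OP_CHECKSIG])
assert _example_p2pkh.GetSigOpCount(False) == 1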
SIGHASH_ALL = 1
SIGHASH_NONE = 2
SIGHASH_SINGLE = 3
SIGHASH_ANYONECANPAY = 0x80
def getHashPrevouts(tx, person=b'ZcashPrevoutHash'):
digest = blake2b(digest_size=32, person=person)
for x in tx.vin:
digest.update(x.prevout.serialize())
return digest.digest()
def getHashSequence(tx, person=b'ZcashSequencHash'):
digest = blake2b(digest_size=32, person=person)
for x in tx.vin:
digest.update(struct.pack('<I', x.nSequence))
return digest.digest()
def getHashOutputs(tx, person=b'ZcashOutputsHash'):
digest = blake2b(digest_size=32, person=person)
for x in tx.vout:
digest.update(x.serialize())
return digest.digest()
def getHashJoinSplits(tx):
digest = blake2b(digest_size=32, person=b'ZcashJSplitsHash')
for jsdesc in tx.vJoinSplit:
digest.update(jsdesc.serialize())
digest.update(tx.joinSplitPubKey)
return digest.digest()
def getHashShieldedSpends(tx):
digest = blake2b(digest_size=32, person=b'ZcashSSpendsHash')
for desc in tx.shieldedSpends:
# We don't pass in serialized form of desc as spendAuthSig is not part of the hash
digest.update(ser_uint256(desc.cv))
digest.update(ser_uint256(desc.anchor))
digest.update(ser_uint256(desc.nullifier))
digest.update(ser_uint256(desc.rk))
digest.update(desc.proof)
return digest.digest()
def getHashShieldedOutputs(tx):
digest = blake2b(digest_size=32, person=b'ZcashSOutputHash')
for desc in tx.shieldedOutputs:
digest.update(desc.serialize())
return digest.digest()
def SignatureHash(script, txTo, inIdx, hashtype, amount, consensusBranchId):
"""Consensus-correct SignatureHash"""
if inIdx >= len(txTo.vin):
raise ValueError("inIdx %d out of range (%d)" % (inIdx, len(txTo.vin)))
if consensusBranchId != 0:
# ZIP 243
hashPrevouts = b'\x00'*32
hashSequence = b'\x00'*32
hashOutputs = b'\x00'*32
hashJoinSplits = b'\x00'*32
hashShieldedSpends = b'\x00'*32
hashShieldedOutputs = b'\x00'*32
if not (hashtype & SIGHASH_ANYONECANPAY):
hashPrevouts = getHashPrevouts(txTo)
if (not (hashtype & SIGHASH_ANYONECANPAY)) and \
(hashtype & 0x1f) != SIGHASH_SINGLE and \
(hashtype & 0x1f) != SIGHASH_NONE:
hashSequence = getHashSequence(txTo)
if (hashtype & 0x1f) != SIGHASH_SINGLE and \
(hashtype & 0x1f) != SIGHASH_NONE:
hashOutputs = getHashOutputs(txTo)
elif (hashtype & 0x1f) == SIGHASH_SINGLE and \
0 <= inIdx and inIdx < len(txTo.vout):
digest = blake2b(digest_size=32, person=b'ZcashOutputsHash')
digest.update(txTo.vout[inIdx].serialize())
hashOutputs = digest.digest()
if len(txTo.vJoinSplit) > 0:
hashJoinSplits = getHashJoinSplits(txTo)
if len(txTo.shieldedSpends) > 0:
hashShieldedSpends = getHashShieldedSpends(txTo)
if len(txTo.shieldedOutputs) > 0:
hashShieldedOutputs = getHashShieldedOutputs(txTo)
digest = blake2b(
digest_size=32,
person=b'ZcashSigHash' + struct.pack('<I', consensusBranchId),
)
digest.update(struct.pack('<I', (int(txTo.fOverwintered)<<31) | txTo.nVersion))
digest.update(struct.pack('<I', txTo.nVersionGroupId))
digest.update(hashPrevouts)
digest.update(hashSequence)
digest.update(hashOutputs)
digest.update(hashJoinSplits)
digest.update(hashShieldedSpends)
digest.update(hashShieldedOutputs)
digest.update(struct.pack('<I', txTo.nLockTime))
digest.update(struct.pack('<I', txTo.nExpiryHeight))
digest.update(struct.pack('<Q', txTo.valueBalance))
digest.update(struct.pack('<I', hashtype))
if inIdx is not None:
digest.update(txTo.vin[inIdx].prevout.serialize())
digest.update(ser_string(script))
digest.update(struct.pack('<Q', amount))
digest.update(struct.pack('<I', txTo.vin[inIdx].nSequence))
return (digest.digest(), None)
else:
# Pre-Overwinter
txtmp = CTransaction(txTo)
for txin in txtmp.vin:
txin.scriptSig = b''
txtmp.vin[inIdx].scriptSig = script
if (hashtype & 0x1f) == SIGHASH_NONE:
txtmp.vout = []
for i in range(len(txtmp.vin)):
if i != inIdx:
txtmp.vin[i].nSequence = 0
elif (hashtype & 0x1f) == SIGHASH_SINGLE:
outIdx = inIdx
if outIdx >= len(txtmp.vout):
raise ValueError("outIdx %d out of range (%d)" % (outIdx, len(txtmp.vout)))
tmp = txtmp.vout[outIdx]
txtmp.vout = []
for i in range(outIdx):
txtmp.vout.append(CTxOut())
txtmp.vout.append(tmp)
for i in range(len(txtmp.vin)):
if i != inIdx:
txtmp.vin[i].nSequence = 0
if hashtype & SIGHASH_ANYONECANPAY:
tmp = txtmp.vin[inIdx]
txtmp.vin = []
txtmp.vin.append(tmp)
s = txtmp.serialize()
s += struct.pack(b"<I", hashtype)
hash = hash256(s)
return (hash, None)

View File

@@ -0,0 +1,162 @@
#!/usr/bin/env python3
# Copyright (c) 2015-2016 The Bitcoin Core developers
# Copyright (c) 2019-2022 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
'''
Dummy Socks5 server for testing.
'''
import socket, threading, queue
import traceback, sys
### Protocol constants
class Command:
CONNECT = 0x01
class AddressType:
IPV4 = 0x01
DOMAINNAME = 0x03
IPV6 = 0x04
### Utility functions
def recvall(s, n):
'''Receive n bytes from a socket, or fail'''
rv = bytearray()
while n > 0:
d = s.recv(n)
if not d:
raise IOError('Unexpected end of stream')
rv.extend(d)
n -= len(d)
return rv
### Implementation classes
class Socks5Configuration(object):
'''Proxy configuration'''
def __init__(self):
self.addr = None # Bind address (must be set)
self.af = socket.AF_INET # Bind address family
self.unauth = False # Support unauthenticated
self.auth = False # Support authentication
class Socks5Command(object):
'''Information about an incoming socks5 command'''
def __init__(self, cmd, atyp, addr, port, username, password):
self.cmd = cmd # Command (one of Command.*)
self.atyp = atyp # Address type (one of AddressType.*)
self.addr = addr # Address
self.port = port # Port to connect to
self.username = username
self.password = password
def __repr__(self):
return 'Socks5Command(%s,%s,%s,%s,%s,%s)' % (self.cmd, self.atyp, self.addr, self.port, self.username, self.password)
class Socks5Connection(object):
def __init__(self, serv, conn, peer):
self.serv = serv
self.conn = conn
self.peer = peer
def handle(self):
'''
Handle socks5 request according to RFC1928
'''
try:
# Verify socks version
ver = recvall(self.conn, 1)[0]
if ver != 0x05:
raise IOError('Invalid socks version %i' % ver)
# Choose authentication method
nmethods = recvall(self.conn, 1)[0]
methods = bytearray(recvall(self.conn, nmethods))
method = None
if 0x02 in methods and self.serv.conf.auth:
method = 0x02 # username/password
elif 0x00 in methods and self.serv.conf.unauth:
method = 0x00 # unauthenticated
if method is None:
raise IOError('No supported authentication method was offered')
# Send response
self.conn.sendall(bytearray([0x05, method]))
# Read authentication (optional)
username = None
password = None
if method == 0x02:
ver = recvall(self.conn, 1)[0]
if ver != 0x01:
raise IOError('Invalid auth packet version %i' % ver)
ulen = recvall(self.conn, 1)[0]
username = str(recvall(self.conn, ulen))
plen = recvall(self.conn, 1)[0]
password = str(recvall(self.conn, plen))
# Send authentication response
self.conn.sendall(bytearray([0x01, 0x00]))
# Read connect request
(ver,cmd,rsv,atyp) = recvall(self.conn, 4)
if ver != 0x05:
raise IOError('Invalid socks version %i in connect request' % ver)
if cmd != Command.CONNECT:
raise IOError('Unhandled command %i in connect request' % cmd)
if atyp == AddressType.IPV4:
addr = recvall(self.conn, 4)
elif atyp == AddressType.DOMAINNAME:
n = recvall(self.conn, 1)[0]
addr = recvall(self.conn, n)
elif atyp == AddressType.IPV6:
addr = recvall(self.conn, 16)
else:
raise IOError('Unknown address type %i' % atyp)
port_hi,port_lo = recvall(self.conn, 2)
port = (port_hi << 8) | port_lo
# Send dummy response
self.conn.sendall(bytearray([0x05, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00]))
cmdin = Socks5Command(cmd, atyp, addr, port, username, password)
self.serv.queue.put(cmdin)
print('Proxy: ', cmdin)
# Fall through to disconnect
except Exception as e:
traceback.print_exc(file=sys.stderr)
self.serv.queue.put(e)
finally:
self.conn.close()
class Socks5Server(object):
def __init__(self, conf):
self.conf = conf
self.s = socket.socket(conf.af)
self.s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.s.bind(conf.addr)
self.s.listen(5)
self.running = False
self.thread = None
self.queue = queue.Queue() # report connections and exceptions to client
def run(self):
while self.running:
(sockconn, peer) = self.s.accept()
if self.running:
conn = Socks5Connection(self, sockconn, peer)
thread = threading.Thread(None, conn.handle)
thread.daemon = True
thread.start()
def start(self):
assert(not self.running)
self.running = True
self.thread = threading.Thread(None, self.run)
self.thread.daemon = True
self.thread.start()
def stop(self):
self.running = False
# connect to self to end run loop
s = socket.socket(self.conf.af)
s.connect(self.conf.addr)
s.close()
self.thread.join()
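# Usage sketch (the port number below is a placeholder; real tests pick
# their own). Defined but not invoked here:
def _demo_socks5():
    conf = Socks5Configuration()
    conf.addr = ('127.0.0.1', 13000)  # hypothetical free port
    conf.unauth = True
    serv = Socks5Server(conf)
    serv.start()
    # ...point a client's proxy setting at conf.addr, then read the accepted
    # Socks5Command objects (or exceptions) from serv.queue...
    serv.stop()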

View File

@@ -0,0 +1,211 @@
#!/usr/bin/env python3
# Copyright (c) 2014-2016 The Bitcoin Core developers
# Copyright (c) 2016-2022 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
# Base class for RPC testing
import logging
import optparse
import os
import sys
import shutil
import tempfile
import traceback
from .proxy import JSONRPCException
from .util import (
zcashd_binary,
initialize_chain,
start_nodes,
connect_nodes_bi,
sync_blocks,
sync_mempools,
stop_nodes,
wait_bitcoinds,
enable_coverage,
check_json_precision,
PortSeed,
)
class BitcoinTestFramework(object):
def __init__(self):
self.num_nodes = 4
self.cache_behavior = 'current'
self.nodes = None
def run_test(self):
raise NotImplementedError
def add_options(self, parser):
pass
def setup_chain(self):
print("Initializing test directory "+self.options.tmpdir)
initialize_chain(self.options.tmpdir, self.num_nodes, self.options.cachedir, self.cache_behavior)
def setup_nodes(self):
return start_nodes(self.num_nodes, self.options.tmpdir)
def setup_network(self, split = False, do_mempool_sync = True):
self.nodes = self.setup_nodes()
# Connect the nodes as a "chain". This allows us
# to split the network between nodes 1 and 2 to get
# two halves that can work on competing chains.
connect_nodes_bi(self.nodes, 0, 1)
# If we joined network halves, connect the nodes from the joint
# on outward. This ensures that chains are properly reorganised.
if len(self.nodes) >= 4:
connect_nodes_bi(self.nodes, 2, 3)
if not split:
connect_nodes_bi(self.nodes, 1, 2)
sync_blocks(self.nodes[1:3])
if do_mempool_sync:
sync_mempools(self.nodes[1:3])
self.is_network_split = split
self.sync_all(do_mempool_sync)
def split_network(self):
"""
Split the network of four nodes into nodes 0/1 and 2/3.
"""
assert not self.is_network_split
stop_nodes(self.nodes)
wait_bitcoinds()
self.setup_network(True)
def sync_all(self, do_mempool_sync = True):
if self.is_network_split:
sync_blocks(self.nodes[:2])
sync_blocks(self.nodes[2:])
if do_mempool_sync:
sync_mempools(self.nodes[:2])
sync_mempools(self.nodes[2:])
else:
sync_blocks(self.nodes)
if do_mempool_sync:
sync_mempools(self.nodes)
def join_network(self):
"""
Join the (previously split) network halves together.
"""
assert self.is_network_split
stop_nodes(self.nodes)
wait_bitcoinds()
self.setup_network(False, False)
def main(self):
parser = optparse.OptionParser(usage="%prog [options]")
parser.add_option("--nocleanup", dest="nocleanup", default=False, action="store_true",
help="Leave bitcoinds and test.* datadir on exit or error")
parser.add_option("--noshutdown", dest="noshutdown", default=False, action="store_true",
help="Don't stop bitcoinds after the test execution")
parser.add_option("--srcdir", dest="srcdir", default="../../src",
help="Source directory containing bitcoind/bitcoin-cli (default: %default)")
parser.add_option("--cachedir", dest="cachedir", default=os.path.normpath(os.path.dirname(os.path.realpath(__file__))+"/../../cache"),
help="Directory for caching pregenerated datadirs")
parser.add_option("--tmpdir", dest="tmpdir", default=tempfile.mkdtemp(prefix="test"),
help="Root directory for datadirs")
parser.add_option("--tracerpc", dest="trace_rpc", default=False, action="store_true",
help="Print out all RPC calls as they are made")
parser.add_option("--portseed", dest="port_seed", default=os.getpid(), type='int',
help="The seed to use for assigning port numbers (default: current process id)")
parser.add_option("--coveragedir", dest="coveragedir",
help="Write tested RPC commands into this directory")
self.add_options(parser)
(self.options, self.args) = parser.parse_args()
self.options.tmpdir += '/' + str(self.options.port_seed)
if self.options.trace_rpc:
logging.basicConfig(level=logging.DEBUG, stream=sys.stdout)
if self.options.coveragedir:
enable_coverage(self.options.coveragedir)
PortSeed.n = self.options.port_seed
os.environ['PATH'] = self.options.srcdir+":"+os.environ['PATH']
check_json_precision()
success = False
try:
os.makedirs(self.options.tmpdir, exist_ok=False)
self.setup_chain()
self.setup_network()
self.run_test()
success = True
except JSONRPCException as e:
print("JSONRPC error: "+e.error['message'])
traceback.print_tb(sys.exc_info()[2])
except AssertionError as e:
print("Assertion failed: " + str(e))
traceback.print_tb(sys.exc_info()[2])
except KeyError as e:
print("key not found: "+ str(e))
traceback.print_tb(sys.exc_info()[2])
except Exception as e:
print("Unexpected exception caught during testing: "+str(e))
traceback.print_tb(sys.exc_info()[2])
except KeyboardInterrupt as e:
print("Exiting after " + repr(e))
if not self.options.noshutdown:
print("Stopping nodes")
stop_nodes(self.nodes)
wait_bitcoinds()
else:
print("Note: bitcoinds were not stopped and may still be running")
if not self.options.nocleanup and not self.options.noshutdown:
print("Cleaning up")
shutil.rmtree(self.options.tmpdir)
if success:
print("Tests successful")
sys.exit(0)
else:
print("Failed")
sys.exit(1)
# Test framework for doing p2p comparison testing, which sets up some bitcoind
# binaries:
# 1 binary: test binary
# 2 binaries: 1 test binary, 1 ref binary
# n>2 binaries: 1 test binary, n-1 ref binaries
class ComparisonTestFramework(BitcoinTestFramework):
def __init__(self):
super().__init__()
self.num_nodes = 1
self.cache_behavior = 'clean'
self.additional_args = []
def add_options(self, parser):
parser.add_option("--testbinary", dest="testbinary",
default=os.getenv("CARGO_BIN_EXE_zebrad", zcashd_binary()),
help="zebrad binary to test")
parser.add_option("--refbinary", dest="refbinary",
default=os.getenv("CARGO_BIN_EXE_zebrad", zcashd_binary()),
help="zebrad binary to use for reference nodes (if any)")
def setup_network(self):
self.nodes = start_nodes(
self.num_nodes, self.options.tmpdir,
extra_args=[['-debug', '-whitelist=127.0.0.1'] + self.additional_args] * self.num_nodes,
binary=[self.options.testbinary] +
[self.options.refbinary]*(self.num_nodes-1))
def get_tests(self):
raise NotImplementedError
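# Minimal usage sketch (illustrative, not part of this module): a concrete
# test subclasses BitcoinTestFramework, overrides run_test(), and calls
# main() when executed as a script.
#
#   from test_framework.test_framework import BitcoinTestFramework
#   from test_framework.util import assert_equal
#
#   class ExampleTest(BitcoinTestFramework):
#       def __init__(self):
#           super().__init__()
#           self.num_nodes = 2
#           self.cache_behavior = 'clean'
#
#       def run_test(self):
#           assert_equal(self.nodes[0].getblockcount(),
#                        self.nodes[1].getblockcount())
#
#   if __name__ == '__main__':
#       ExampleTest().main()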

802 zebra-rpc/qa/rpc-tests/test_framework/util.py Normal file
View File

@ -0,0 +1,802 @@
#!/usr/bin/env python3
# Copyright (c) 2014-2016 The Bitcoin Core developers
# Copyright (c) 2016-2022 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
#
# Helpful routines for regression testing
#
import os
import sys
from binascii import hexlify, unhexlify
from base64 import b64encode
from decimal import Decimal, ROUND_DOWN
import json
import http.client
import random
import shutil
import subprocess
import tarfile
import tempfile
import time
import re
import errno
from . import coverage
from .proxy import ServiceProxy, JSONRPCException
LEGACY_DEFAULT_FEE = Decimal('0.00001')
COVERAGE_DIR = None
PRE_BLOSSOM_BLOCK_TARGET_SPACING = 150
POST_BLOSSOM_BLOCK_TARGET_SPACING = 75
SPROUT_BRANCH_ID = 0x00000000
OVERWINTER_BRANCH_ID = 0x5BA81B19
SAPLING_BRANCH_ID = 0x76B809BB
BLOSSOM_BRANCH_ID = 0x2BB40E60
HEARTWOOD_BRANCH_ID = 0xF5B9230B
CANOPY_BRANCH_ID = 0xE9FF75A6
NU5_BRANCH_ID = 0xC2D6D0B4
NU6_BRANCH_ID = 0xC8E71055
# The maximum number of nodes a single test can spawn
MAX_NODES = 8
# Don't assign rpc or p2p ports lower than this
PORT_MIN = 11000
# The number of ports to "reserve" for p2p and rpc, each
PORT_RANGE = 5000
def zcashd_binary():
return os.getenv("CARGO_BIN_EXE_zebrad", os.path.join("..", "target", "debug", "zebrad"))
def zebrad_config(datadir):
base_location = os.path.join('qa', 'base_config.toml')
new_location = os.path.join(datadir, "config.toml")
shutil.copyfile(base_location, new_location)
return new_location
class PortSeed:
# Must be initialized with a unique integer for each process
n = None
def enable_coverage(dirname):
"""Maintain a log of which RPC calls are made during testing."""
global COVERAGE_DIR
COVERAGE_DIR = dirname
def get_rpc_proxy(url, node_number, timeout=None):
"""
Args:
url (str): URL of the RPC server to call
        node_number (int): the node number (or id) this proxy connects to
Kwargs:
timeout (int): HTTP timeout in seconds
Returns:
        AuthServiceProxy: a convenience object for making RPC calls.
"""
proxy_kwargs = {}
if timeout is not None:
proxy_kwargs['timeout'] = timeout
proxy = ServiceProxy(url, **proxy_kwargs)
proxy.url = url # store URL on proxy for info
coverage_logfile = coverage.get_filename(
COVERAGE_DIR, node_number) if COVERAGE_DIR else None
return coverage.AuthServiceProxyWrapper(proxy, coverage_logfile)
def p2p_port(n):
assert(n <= MAX_NODES)
return PORT_MIN + n + (MAX_NODES * PortSeed.n) % (PORT_RANGE - 1 - MAX_NODES)
def rpc_port(n):
return PORT_MIN + PORT_RANGE + n + (MAX_NODES * PortSeed.n) % (PORT_RANGE - 1 - MAX_NODES)
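# Worked example (illustrative): with PortSeed.n = 1000,
#   p2p_port(0) = 11000 + 0 + (8 * 1000) % (5000 - 1 - 8) = 11000 + 3009 = 14009
#   rpc_port(0) = 11000 + 5000 + 0 + 3009 = 19009
# so concurrent test runs keyed by different port seeds normally land on
# disjoint port ranges.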
def check_json_precision():
"""Make sure json library being used does not lose precision converting ZEC values"""
n = Decimal("20000000.00000003")
zatoshis = int(json.loads(json.dumps(float(n)))*1.0e8)
if zatoshis != 2000000000000003:
raise RuntimeError("JSON encode/decode loses precision")
def bytes_to_hex_str(byte_str):
return hexlify(byte_str).decode('ascii')
def hex_str_to_bytes(hex_str):
return unhexlify(hex_str.encode('ascii'))
def str_to_b64str(string):
return b64encode(string.encode('utf-8')).decode('ascii')
def sync_blocks(rpc_connections, wait=0.125, timeout=60, allow_different_tips=False):
"""
Wait until everybody has the same tip, and has notified
all internal listeners of them.
If allow_different_tips is True, waits until everyone has
the same block count.
"""
while timeout > 0:
if allow_different_tips:
tips = [ x.getblockcount() for x in rpc_connections ]
else:
tips = [ x.getbestblockhash() for x in rpc_connections ]
if tips == [ tips[0] ]*len(tips):
break
time.sleep(wait)
timeout -= wait
""" Zebra does not support the `fullyNotified` field in the `blockchaininfo` RPC
# Now that the block counts are in sync, wait for the internal
# notifications to finish
while timeout > 0:
notified = [ x.getblockchaininfo()['fullyNotified'] for x in rpc_connections ]
if notified == [ True ] * len(notified):
return True
time.sleep(wait)
timeout -= wait
raise AssertionError("Block sync failed")
"""
return True
def sync_mempools(rpc_connections, wait=0.5, timeout=60):
"""
Wait until everybody has the same transactions in their memory
pools, and has notified all internal listeners of them
"""
while timeout > 0:
pool = set(rpc_connections[0].getrawmempool())
num_match = 1
for i in range(1, len(rpc_connections)):
if set(rpc_connections[i].getrawmempool()) == pool:
num_match = num_match+1
if num_match == len(rpc_connections):
break
time.sleep(wait)
timeout -= wait
""" Zebra does not support the `fullyNotified` field in the `getmempoolinfo` RPC
# Now that the mempools are in sync, wait for the internal
# notifications to finish
while timeout > 0:
notified = [ x.getmempoolinfo()['fullyNotified'] for x in rpc_connections ]
if notified == [ True ] * len(notified):
return True
time.sleep(wait)
timeout -= wait
raise AssertionError("Mempool sync failed")
"""
return True
bitcoind_processes = {}
def initialize_datadir(dirname, n, clock_offset=0):
datadir = os.path.join(dirname, "node"+str(n))
if not os.path.isdir(datadir):
os.makedirs(datadir)
rpc_u, rpc_p = rpc_auth_pair(n)
config_rpc_port = rpc_port(n)
config_p2p_port = p2p_port(n)
with open(os.path.join(datadir, "zcash.conf"), 'w', encoding='utf8') as f:
f.write("regtest=1\n")
f.write("showmetrics=0\n")
f.write("rpcuser=" + rpc_u + "\n")
f.write("rpcpassword=" + rpc_p + "\n")
f.write("port="+str(config_p2p_port)+"\n")
f.write("rpcport="+str(config_rpc_port)+"\n")
f.write("listenonion=0\n")
if clock_offset != 0:
f.write('clockoffset='+str(clock_offset)+'\n')
update_zebrad_conf(datadir, config_rpc_port, config_p2p_port)
return datadir
def update_zebrad_conf(datadir, rpc_port, p2p_port):
import toml
config_path = zebrad_config(datadir)
with open(config_path, 'r') as f:
config_file = toml.load(f)
config_file['rpc']['listen_addr'] = '127.0.0.1:'+str(rpc_port)
config_file['network']['listen_addr'] = '127.0.0.1:'+str(p2p_port)
config_file['state']['cache_dir'] = datadir
with open(config_path, 'w') as f:
toml.dump(config_file, f)
return config_path
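# Example (illustrative): after update_zebrad_conf(datadir, 18232, 18233),
# the node's config.toml would contain entries along the lines of
#
#   [rpc]
#   listen_addr = "127.0.0.1:18232"
#
#   [network]
#   listen_addr = "127.0.0.1:18233"
#
#   [state]
#   cache_dir = "/path/to/datadir"   # hypothetical datadir path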
def rpc_auth_pair(n):
return 'rpcuser💻' + str(n), 'rpcpass🔑' + str(n)
def rpc_url(i, rpchost=None):
rpc_u, rpc_p = rpc_auth_pair(i)
host = '127.0.0.1'
port = rpc_port(i)
if rpchost:
parts = rpchost.split(':')
if len(parts) == 2:
host, port = parts
else:
host = rpchost
# For zebra, we just use a non-authenticated endpoint.
return "http://%s:%d" % (host, int(port))
# We might want to get back to authenticated endpoints after #8864:
#return "http://%s:%s@%s:%d" % (rpc_u, rpc_p, host, int(port))
def wait_for_bitcoind_start(process, url, i):
'''
Wait for bitcoind to start. This means that RPC is accessible and fully initialized.
Raise an exception if bitcoind exits during initialization.
'''
time.sleep(1) # give zebrad a moment to start
while True:
if process.poll() is not None:
raise Exception('%s node %d exited with status %i during initialization' % (zcashd_binary(), i, process.returncode))
try:
rpc = get_rpc_proxy(url, i)
rpc.getblockcount()
break # break out of loop on success
except IOError as e:
if e.errno != errno.ECONNREFUSED: # Port not yet open?
raise # unknown IO error
except JSONRPCException as e: # Initialization phase
if e.error['code'] != -28: # RPC in warmup?
raise # unknown JSON RPC exception
time.sleep(0.25)
def initialize_chain(test_dir, num_nodes, cachedir, cache_behavior='current'):
"""
Create a set of node datadirs in `test_dir`, based upon the specified
`cache_behavior` value. The following values are recognized for
`cache_behavior`:
* 'current': create a 200-block-long chain (with wallet) for MAX_NODES
in `cachedir` if necessary. Afterward, create num_nodes copies in
`test_dir` from the cache. The resulting nodes will be configured to
use the -clockoffset config argument when starting to ensure that
the cached chain is not treated as being excessively out-of-date.
* 'sprout': use persisted chain data containing known amounts of Sprout
funds from the files in `qa/rpc-tests/cache/sprout`. This allows
testing of Sprout spends even though Sprout outputs can no longer
be created by zcashd software. The resulting nodes will be configured to
use the -clockoffset config argument when starting to ensure that
the cached chain is not treated as being excessively out-of-date.
* 'fresh': force re-creation of the cache, and then start as for `current`.
* 'clean': start the nodes without cached chain data, allowing the test
to take full control of chain setup.
"""
assert num_nodes <= MAX_NODES
def rebuild_cache():
        # Find and delete old cache directories, if any exist
for i in range(MAX_NODES):
if os.path.isdir(os.path.join(cachedir,"node"+str(i))):
shutil.rmtree(os.path.join(cachedir,"node"+str(i)))
# Create cache directories, run bitcoinds:
block_time = int(time.time()) - (200 * PRE_BLOSSOM_BLOCK_TARGET_SPACING)
for i in range(MAX_NODES):
datadir = initialize_datadir(cachedir, i)
config = update_zebrad_conf(datadir, rpc_port(i), p2p_port(i))
binary = zcashd_binary()
args = [ binary, "-c="+config, "start" ]
bitcoind_processes[i] = subprocess.Popen(args)
if os.getenv("PYTHON_DEBUG", ""):
print("initialize_chain: %s started, waiting for RPC to come up" % (zcashd_binary(),))
wait_for_bitcoind_start(bitcoind_processes[i], rpc_url(i), i)
if os.getenv("PYTHON_DEBUG", ""):
print("initialize_chain: RPC successfully started")
rpcs = []
for i in range(MAX_NODES):
try:
rpcs.append(get_rpc_proxy(rpc_url(i), i))
except:
sys.stderr.write("Error connecting to "+rpc_url(i)+"\n")
sys.exit(1)
        # Create a 200-block-long chain; each of the first 4 nodes
        # gets 25 mature blocks and 25 immature.
# Note: To preserve compatibility with older versions of
# initialize_chain, only 4 nodes will generate coins.
#
# Blocks are created with timestamps 2.5 minutes apart (matching the
# chain defaulting above to Sapling active), starting 200 * 2.5 minutes
# before the current time.
for i in range(2):
for peer in range(4):
for j in range(25):
                    # Removed because zebrad does not have this RPC method:
#set_node_times(rpcs, block_time)
rpcs[peer].generate(1)
block_time += PRE_BLOSSOM_BLOCK_TARGET_SPACING
# Must sync before next peer starts generating blocks
sync_blocks(rpcs)
# Check that local time isn't going backwards
assert_greater_than(time.time() + 1, block_time)
# Shut them down, and clean up cache directories:
stop_nodes(rpcs)
wait_bitcoinds()
for i in range(MAX_NODES):
# record the system time at which the cache was regenerated
with open(node_file(cachedir, i, 'cache_config.json'), "w", encoding="utf8") as cache_conf_file:
cache_config = { "cache_time": time.time() }
cache_conf_json = json.dumps(cache_config, indent=4)
cache_conf_file.write(cache_conf_json)
            # Removed because zebrad does not create these files:
#os.remove(node_file(cachedir, i, "debug.log"))
#os.remove(node_file(cachedir, i, "db.log"))
#os.remove(node_file(cachedir, i, "peers.dat"))
def init_from_cache():
for i in range(num_nodes):
from_dir = os.path.join(cachedir, "node"+str(i))
to_dir = os.path.join(test_dir, "node"+str(i))
shutil.copytree(from_dir, to_dir)
with open(os.path.join(to_dir, 'cache_config.json'), "r", encoding="utf8") as cache_conf_file:
cache_conf = json.load(cache_conf_file)
# obtain the clock offset as a negative number of seconds
offset = round(cache_conf['cache_time']) - round(time.time())
# overwrite port/rpcport and clock offset in zcash.conf
initialize_datadir(test_dir, i, clock_offset=offset)
def init_persistent(cache_behavior):
assert num_nodes <= 4 # only 4 nodes with Sprout funds are supported
cache_path = persistent_cache_path(cache_behavior)
if not os.path.isdir(cache_path):
raise Exception('No cache available for cache behavior %s' % cache_behavior)
chain_cache_filename = os.path.join(cache_path, "chain_cache.tar.gz")
if not os.path.exists(chain_cache_filename):
raise Exception('Chain cache missing for cache behavior %s' % cache_behavior)
for i in range(num_nodes):
to_dir = os.path.join(test_dir, "node"+str(i), "regtest")
os.makedirs(to_dir)
# Copy the same chain data to all nodes
with tarfile.open(chain_cache_filename, "r:gz") as chain_cache_file:
tarfile_extractall(chain_cache_file, to_dir)
# Copy in per-node wallet data
wallet_tgz_filename = os.path.join(cache_path, "node"+str(i)+"_wallet.tar.gz")
if not os.path.exists(wallet_tgz_filename):
raise Exception('Wallet cache missing for cache behavior %s, node %d' % (cache_behavior, i))
with tarfile.open(wallet_tgz_filename, "r:gz") as wallet_tgz_file:
tarfile_extractall(wallet_tgz_file, os.path.join(to_dir, "wallet.dat"))
# Copy in per-node wallet config and update zcash.conf to set the
# clock offsets correctly.
cache_conf_filename = os.path.join(to_dir, 'cache_config.json')
if not os.path.exists(cache_conf_filename):
raise Exception('Cache config missing for cache behavior %s, node %d' % (cache_behavior, i))
with open(cache_conf_filename, "r", encoding="utf8") as cache_conf_file:
cache_conf = json.load(cache_conf_file)
# obtain the clock offset as a negative number of seconds
offset = round(cache_conf['cache_time']) - round(time.time())
# overwrite port/rpcport and clock offset in zcash.conf
initialize_datadir(test_dir, i, clock_offset=offset)
def cache_rebuild_required():
for i in range(MAX_NODES):
node_path = os.path.join(cachedir, 'node'+str(i))
if os.path.isdir(node_path):
if not os.path.isfile(node_file(cachedir, i, 'cache_config.json')):
return True
else:
return True
return False
if cache_behavior == 'current':
if cache_rebuild_required(): rebuild_cache()
init_from_cache()
elif cache_behavior == 'fresh':
rebuild_cache()
init_from_cache()
elif cache_behavior == 'clean':
initialize_chain_clean(test_dir, num_nodes)
else:
init_persistent(cache_behavior)
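# Example (illustrative): a test that needs full control over chain setup sets
# `self.cache_behavior = 'clean'` in its framework subclass; the dispatch
# above then routes to initialize_chain_clean() below and skips the shared
# 200-block cache entirely.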
def initialize_chain_clean(test_dir, num_nodes):
"""
Create an empty blockchain and num_nodes wallets.
Useful if a test case wants complete control over initialization.
"""
for i in range(num_nodes):
initialize_datadir(test_dir, i)
def persistent_cache_path(cache_behavior):
return os.path.join(
os.path.dirname(os.path.dirname(os.path.realpath(__file__))),
'cache',
cache_behavior
)
def persistent_cache_exists(cache_behavior):
cache_path = persistent_cache_path(cache_behavior)
return os.path.isdir(cache_path)
# Clean up, zip, and persist the generated datadirs. Record the generation
# time so that we can correctly set the system clock offset in tests that
# restore their node states using the resulting files.
def persist_node_caches(tmpdir, cache_behavior, num_nodes):
cache_path = persistent_cache_path(cache_behavior)
if os.path.exists(cache_path):
raise Exception('Cache already exists for cache behavior %s' % cache_behavior)
os.mkdir(cache_path)
for i in range(num_nodes):
node_path = os.path.join(tmpdir, 'node' + str(i))
# Clean up the files that we don't want to persist
os.remove(os.path.join(node_path, 'debug.log'))
os.remove(os.path.join(node_path, 'db.log'))
os.remove(os.path.join(node_path, 'peers.dat'))
# Persist the wallet file for the node to the cache
wallet_tgz_filename = os.path.join(cache_path, 'node' + str(i) + '_wallet.tar.gz')
with tarfile.open(wallet_tgz_filename, "w:gz") as wallet_tgz_file:
wallet_tgz_file.add(os.path.join(node_path, 'wallet.dat'), arcname="")
# Persist the chain data and cache config just once; it will be reused
# for all of the nodes when loading from the cache.
if i == 0:
# Move the wallet.dat file out of the way so that it doesn't
# pollute the chain cache tarfile
shutil.move(
os.path.join(node_path, 'wallet.dat'),
os.path.join(tmpdir, 'wallet.dat.0'))
# Store the current time so that we can correctly set the clock
# offset when restoring from the cache.
cache_config = { "cache_time": time.time() }
cache_conf_filename = os.path.join(cache_path, 'cache_config.json')
with open(cache_conf_filename, "w", encoding="utf8") as cache_conf_file:
cache_conf_json = json.dumps(cache_config, indent=4)
cache_conf_file.write(cache_conf_json)
# Persist the chain data.
chain_cache_filename = os.path.join(cache_path, 'chain_cache.tar.gz')
with tarfile.open(chain_cache_filename, "w:gz") as chain_cache_file:
chain_cache_file.add(node_path, arcname="")
# Move the wallet file back into place
shutil.move(
os.path.join(tmpdir, 'wallet.dat.0'),
os.path.join(node_path, 'wallet.dat'))
def _rpchost_to_args(rpchost):
'''Convert optional IP:port spec to rpcconnect/rpcport args'''
if rpchost is None:
return []
    match = re.match(r'(\[[0-9a-fA-F:]+\]|[^:]+)(?::([0-9]+))?$', rpchost)
if not match:
raise ValueError('Invalid RPC host spec ' + rpchost)
rpcconnect = match.group(1)
rpcport = match.group(2)
if rpcconnect.startswith('['): # remove IPv6 [...] wrapping
rpcconnect = rpcconnect[1:-1]
rv = ['-rpcconnect=' + rpcconnect]
if rpcport:
rv += ['-rpcport=' + rpcport]
return rv
def start_node(i, dirname, extra_args=None, rpchost=None, timewait=None, binary=None, stderr=None):
"""
    Start a bitcoind and return an RPC connection to it
"""
datadir = os.path.join(dirname, "node"+str(i))
if binary is None:
binary = zcashd_binary()
config = update_zebrad_conf(datadir, rpc_port(i), p2p_port(i))
args = [ binary, "-c="+config, "start" ]
if extra_args is not None: args.extend(extra_args)
bitcoind_processes[i] = subprocess.Popen(args, stderr=stderr)
if os.getenv("PYTHON_DEBUG", ""):
print("start_node: bitcoind started, waiting for RPC to come up")
url = rpc_url(i, rpchost)
wait_for_bitcoind_start(bitcoind_processes[i], url, i)
if os.getenv("PYTHON_DEBUG", ""):
print("start_node: RPC successfully started for node {} with pid {}".format(i, bitcoind_processes[i].pid))
proxy = get_rpc_proxy(url, i, timeout=timewait)
if COVERAGE_DIR:
coverage.write_all_rpc_commands(COVERAGE_DIR, proxy)
return proxy
def assert_start_raises_init_error(i, dirname, extra_args=None, expected_msg=None):
with tempfile.SpooledTemporaryFile(max_size=2**16) as log_stderr:
try:
node = start_node(i, dirname, extra_args, stderr=log_stderr)
stop_node(node, i)
except Exception as e:
assert ("%s node %d exited" % (zcashd_binary(), i)) in str(e) # node must have shutdown
if expected_msg is not None:
log_stderr.seek(0)
stderr = log_stderr.read().decode('utf-8')
if expected_msg not in stderr:
raise AssertionError("Expected error \"" + expected_msg + "\" not found in:\n" + stderr)
else:
if expected_msg is None:
assert_msg = "%s should have exited with an error" % (zcashd_binary(),)
else:
assert_msg = "%s should have exited with expected error %r" % (zcashd_binary(), expected_msg)
raise AssertionError(assert_msg)
def start_nodes(num_nodes, dirname, extra_args=None, rpchost=None, binary=None):
"""
Start multiple bitcoinds, return RPC connections to them
"""
if extra_args is None: extra_args = [ None for _ in range(num_nodes) ]
if binary is None: binary = [ None for _ in range(num_nodes) ]
rpcs = []
try:
for i in range(num_nodes):
rpcs.append(start_node(i, dirname, extra_args[i], rpchost, binary=binary[i]))
except: # If one node failed to start, stop the others
stop_nodes(rpcs)
raise
return rpcs
def node_file(dirname, n_node, filename):
return os.path.join(dirname, "node"+str(n_node), filename)
def check_node(i):
bitcoind_processes[i].poll()
return bitcoind_processes[i].returncode
def stop_node(node, i):
try:
node.stop()
except http.client.CannotSendRequest as e:
print("WARN: Unable to stop node: " + repr(e))
bitcoind_processes[i].wait()
del bitcoind_processes[i]
def stop_nodes(nodes):
for node in nodes:
try:
node.stop()
except http.client.CannotSendRequest as e:
print("WARN: Unable to stop node: " + repr(e))
del nodes[:] # Emptying array closes connections as a side effect
def set_node_times(nodes, t):
for node in nodes:
node.setmocktime(t)
def wait_bitcoinds():
# Wait for all bitcoinds to cleanly exit
for bitcoind in list(bitcoind_processes.values()):
bitcoind.wait()
bitcoind_processes.clear()
def connect_nodes(from_connection, node_num):
ip_port = "127.0.0.1:"+str(p2p_port(node_num))
from_connection.addnode(ip_port, "onetry")
# poll until version handshake complete to avoid race conditions
# with transaction relaying
while any(peer['version'] == 0 for peer in from_connection.getpeerinfo()):
time.sleep(0.1)
def connect_nodes_bi(nodes, a, b):
connect_nodes(nodes[a], b)
connect_nodes(nodes[b], a)
def find_output(node, txid, amount):
"""
    Return the index of the output of txid with the value amount.
    Raises an exception if there is none.
"""
txdata = node.getrawtransaction(txid, 1)
for i in range(len(txdata["vout"])):
if txdata["vout"][i]["value"] == amount:
return i
raise RuntimeError("find_output txid %s : %s not found"%(txid,str(amount)))
def gather_inputs(from_node, amount_needed, confirmations_required=1):
"""
Return a random set of unspent txouts that are enough to pay amount_needed
"""
    assert(confirmations_required >= 0)
utxo = from_node.listunspent(confirmations_required)
random.shuffle(utxo)
inputs = []
total_in = Decimal("0.00000000")
while total_in < amount_needed and len(utxo) > 0:
t = utxo.pop()
total_in += t["amount"]
inputs.append({ "txid" : t["txid"], "vout" : t["vout"], "address" : t["address"] } )
if total_in < amount_needed:
raise RuntimeError("Insufficient funds: need %d, have %d"%(amount_needed, total_in))
return (total_in, inputs)
def make_change(from_node, amount_in, amount_out, fee):
"""
Create change output(s), return them
"""
outputs = {}
amount = amount_out+fee
change = amount_in - amount
if change > amount*2:
# Create an extra change output to break up big inputs
change_address = from_node.getnewaddress()
# Split change in two, being careful of rounding:
outputs[change_address] = Decimal(change/2).quantize(Decimal('0.00000001'), rounding=ROUND_DOWN)
change = amount_in - amount - outputs[change_address]
if change > 0:
outputs[from_node.getnewaddress()] = change
return outputs
def random_transaction(nodes, amount, min_fee, fee_increment, fee_variants):
"""
Create a random transaction.
Returns (txid, hex-encoded-transaction-data, fee)
"""
from_node = random.choice(nodes)
to_node = random.choice(nodes)
fee = min_fee + fee_increment*random.randint(0,fee_variants)
(total_in, inputs) = gather_inputs(from_node, amount+fee)
outputs = make_change(from_node, total_in, amount, fee)
outputs[to_node.getnewaddress()] = float(amount)
rawtx = from_node.createrawtransaction(inputs, outputs)
signresult = from_node.signrawtransaction(rawtx)
txid = from_node.sendrawtransaction(signresult["hex"], True)
return (txid, signresult["hex"], fee)
def assert_equal(expected, actual, message=""):
if expected != actual:
if message:
message = "; %s" % message
raise AssertionError("(left == right)%s\n left: <%s>\n right: <%s>" % (message, str(expected), str(actual)))
def assert_true(condition, message = ""):
if not condition:
raise AssertionError(message)
def assert_false(condition, message = ""):
assert_true(not condition, message)
def assert_greater_than(thing1, thing2):
if thing1 <= thing2:
raise AssertionError("%s <= %s"%(str(thing1),str(thing2)))
def assert_raises(exc, fun, *args, **kwds):
assert_raises_message(exc, None, fun, *args, **kwds)
def assert_raises_message(ExceptionType, errstr, func, *args, **kwargs):
"""
Asserts that func throws and that the exception contains 'errstr'
in its message.
"""
try:
func(*args, **kwargs)
except ExceptionType as e:
if errstr is not None and errstr not in str(e):
raise AssertionError("Invalid exception string: Couldn't find %r in %r" % (
errstr, str(e)))
except Exception as e:
raise AssertionError("Unexpected exception raised: " + type(e).__name__)
else:
raise AssertionError("No exception raised")
def fail(message=""):
raise AssertionError(message)
# Returns an async operation result
def wait_and_assert_operationid_status_result(node, myopid, in_status='success', in_errormsg=None, timeout=300):
print('waiting for async operation {}'.format(myopid))
result = None
for _ in range(1, timeout):
results = node.z_getoperationresult([myopid])
if len(results) > 0:
result = results[0]
break
time.sleep(1)
assert_true(result is not None, "timeout occurred")
status = result['status']
debug = os.getenv("PYTHON_DEBUG", "")
if debug:
print('...returned status: {}'.format(status))
errormsg = None
if status == "failed":
errormsg = result['error']['message']
if debug:
print('...returned error: {}'.format(errormsg))
assert_equal(in_errormsg, errormsg)
assert_equal(in_status, status, "Operation returned mismatched status. Error Message: {}".format(errormsg))
return result
# Returns the txid if the operation was a success, otherwise None
def wait_and_assert_operationid_status(node, myopid, in_status='success', in_errormsg=None, timeout=300):
result = wait_and_assert_operationid_status_result(node, myopid, in_status, in_errormsg, timeout)
if result['status'] == "success":
return result['result']['txid']
else:
return None
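# Usage sketch (illustrative, assuming a wallet-enabled node and test-defined
# addresses `source` and `dest`):
#   opid = node.z_sendmany(source, [{'address': dest, 'amount': Decimal('1.0')}])
#   txid = wait_and_assert_operationid_status(node, opid)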
# Find a coinbase address on the node, filtering by the number of UTXOs it has.
# If no filter is provided, returns the coinbase address on the node containing
# the greatest number of spendable UTXOs.
# The default cached chain has one address per coinbase output.
def get_coinbase_address(node, expected_utxos=None):
addrs = [utxo['address'] for utxo in node.listunspent() if utxo['generated']]
assert(len(set(addrs)) > 0)
if expected_utxos is None:
addrs = [(addrs.count(a), a) for a in set(addrs)]
return sorted(addrs, reverse=True)[0][1]
addrs = [a for a in set(addrs) if addrs.count(a) == expected_utxos]
assert(len(addrs) > 0)
return addrs[0]
def check_node_log(self, node_number, line_to_check, stop_node = True):
print("Checking node " + str(node_number) + " logs")
if stop_node:
self.nodes[node_number].stop()
bitcoind_processes[node_number].wait()
logpath = self.options.tmpdir + "/node" + str(node_number) + "/regtest/debug.log"
with open(logpath, "r", encoding="utf8") as myfile:
logdata = myfile.readlines()
for (n, logline) in enumerate(logdata):
if line_to_check in logline:
return n
raise AssertionError(repr(line_to_check) + " not found")
def nustr(branch_id):
return '%08x' % branch_id
def nuparams(branch_id, height):
return '-nuparams=%s:%d' % (nustr(branch_id), height)
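# Wrap TarFile.extractall so that the 'data' extraction filter is applied on
# Python versions that support it (3.11.4+), limiting what extracted archive
# members may do; older interpreters fall back to unfiltered extraction.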
def tarfile_extractall(tar, path):
    if sys.version_info >= (3, 11, 4):
        tar.extractall(path=path, filter='data')
    else:
        tar.extractall(path=path)

294 zebra-rpc/qa/rpc-tests/test_framework/zip244.py Normal file
View File

@ -0,0 +1,294 @@
#!/usr/bin/env python3
# Copyright (c) 2021 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
#
# zip244.py
#
# Functionality to create txids, auth digests, and signature digests.
#
# This file is modified from zcash/zcash-test-vectors.
#
import struct
from hashlib import blake2b
from .mininode import ser_string, ser_uint256
from .script import (
SIGHASH_ANYONECANPAY,
SIGHASH_NONE,
SIGHASH_SINGLE,
getHashOutputs,
getHashPrevouts,
getHashSequence,
)
# Transparent
def transparent_digest(tx):
digest = blake2b(digest_size=32, person=b'ZTxIdTranspaHash')
if len(tx.vin) + len(tx.vout) > 0:
digest.update(getHashPrevouts(tx, b'ZTxIdPrevoutHash'))
digest.update(getHashSequence(tx, b'ZTxIdSequencHash'))
digest.update(getHashOutputs(tx, b'ZTxIdOutputsHash'))
return digest.digest()
def transparent_scripts_digest(tx):
digest = blake2b(digest_size=32, person=b'ZTxAuthTransHash')
for x in tx.vin:
digest.update(ser_string(x.scriptSig))
return digest.digest()
# Sapling
def sapling_digest(saplingBundle):
digest = blake2b(digest_size=32, person=b'ZTxIdSaplingHash')
if len(saplingBundle.spends) + len(saplingBundle.outputs) > 0:
digest.update(sapling_spends_digest(saplingBundle))
digest.update(sapling_outputs_digest(saplingBundle))
digest.update(struct.pack('<q', saplingBundle.valueBalance))
return digest.digest()
def sapling_auth_digest(saplingBundle):
digest = blake2b(digest_size=32, person=b'ZTxAuthSapliHash')
if len(saplingBundle.spends) + len(saplingBundle.outputs) > 0:
for desc in saplingBundle.spends:
digest.update(desc.zkproof.serialize())
for desc in saplingBundle.spends:
digest.update(desc.spendAuthSig.serialize())
for desc in saplingBundle.outputs:
digest.update(desc.zkproof.serialize())
digest.update(saplingBundle.bindingSig.serialize())
return digest.digest()
# - Spends
def sapling_spends_digest(saplingBundle):
digest = blake2b(digest_size=32, person=b'ZTxIdSSpendsHash')
if len(saplingBundle.spends) > 0:
digest.update(sapling_spends_compact_digest(saplingBundle))
digest.update(sapling_spends_noncompact_digest(saplingBundle))
return digest.digest()
def sapling_spends_compact_digest(saplingBundle):
digest = blake2b(digest_size=32, person=b'ZTxIdSSpendCHash')
for desc in saplingBundle.spends:
digest.update(ser_uint256(desc.nullifier))
return digest.digest()
def sapling_spends_noncompact_digest(saplingBundle):
digest = blake2b(digest_size=32, person=b'ZTxIdSSpendNHash')
for desc in saplingBundle.spends:
digest.update(ser_uint256(desc.cv))
digest.update(ser_uint256(saplingBundle.anchor))
digest.update(ser_uint256(desc.rk))
return digest.digest()
# - Outputs
def sapling_outputs_digest(saplingBundle):
digest = blake2b(digest_size=32, person=b'ZTxIdSOutputHash')
if len(saplingBundle.outputs) > 0:
digest.update(sapling_outputs_compact_digest(saplingBundle))
digest.update(sapling_outputs_memos_digest(saplingBundle))
digest.update(sapling_outputs_noncompact_digest(saplingBundle))
return digest.digest()
def sapling_outputs_compact_digest(saplingBundle):
digest = blake2b(digest_size=32, person=b'ZTxIdSOutC__Hash')
for desc in saplingBundle.outputs:
digest.update(ser_uint256(desc.cmu))
digest.update(ser_uint256(desc.ephemeralKey))
digest.update(desc.encCiphertext[:52])
return digest.digest()
def sapling_outputs_memos_digest(saplingBundle):
digest = blake2b(digest_size=32, person=b'ZTxIdSOutM__Hash')
for desc in saplingBundle.outputs:
digest.update(desc.encCiphertext[52:564])
return digest.digest()
def sapling_outputs_noncompact_digest(saplingBundle):
digest = blake2b(digest_size=32, person=b'ZTxIdSOutN__Hash')
for desc in saplingBundle.outputs:
digest.update(ser_uint256(desc.cv))
digest.update(desc.encCiphertext[564:])
digest.update(desc.outCiphertext)
return digest.digest()
# Orchard
def orchard_digest(orchardBundle):
digest = blake2b(digest_size=32, person=b'ZTxIdOrchardHash')
if len(orchardBundle.actions) > 0:
digest.update(orchard_actions_compact_digest(orchardBundle))
digest.update(orchard_actions_memos_digest(orchardBundle))
digest.update(orchard_actions_noncompact_digest(orchardBundle))
digest.update(struct.pack('B', orchardBundle.flags()))
digest.update(struct.pack('<q', orchardBundle.valueBalance))
digest.update(ser_uint256(orchardBundle.anchor))
return digest.digest()
def orchard_auth_digest(orchardBundle):
digest = blake2b(digest_size=32, person=b'ZTxAuthOrchaHash')
if len(orchardBundle.actions) > 0:
digest.update(bytes(orchardBundle.proofs))
for desc in orchardBundle.actions:
digest.update(desc.spendAuthSig.serialize())
digest.update(orchardBundle.bindingSig.serialize())
return digest.digest()
# - Actions
def orchard_actions_compact_digest(orchardBundle):
digest = blake2b(digest_size=32, person=b'ZTxIdOrcActCHash')
for desc in orchardBundle.actions:
digest.update(ser_uint256(desc.nullifier))
digest.update(ser_uint256(desc.cmx))
digest.update(ser_uint256(desc.ephemeralKey))
digest.update(desc.encCiphertext[:52])
return digest.digest()
def orchard_actions_memos_digest(orchardBundle):
digest = blake2b(digest_size=32, person=b'ZTxIdOrcActMHash')
for desc in orchardBundle.actions:
digest.update(desc.encCiphertext[52:564])
return digest.digest()
def orchard_actions_noncompact_digest(orchardBundle):
digest = blake2b(digest_size=32, person=b'ZTxIdOrcActNHash')
for desc in orchardBundle.actions:
digest.update(ser_uint256(desc.cv))
digest.update(ser_uint256(desc.rk))
digest.update(desc.encCiphertext[564:])
digest.update(desc.outCiphertext)
return digest.digest()
# Transaction
def header_digest(tx):
digest = blake2b(digest_size=32, person=b'ZTxIdHeadersHash')
digest.update(struct.pack('<I', (int(tx.fOverwintered)<<31) | tx.nVersion))
digest.update(struct.pack('<I', tx.nVersionGroupId))
digest.update(struct.pack('<I', tx.nConsensusBranchId))
digest.update(struct.pack('<I', tx.nLockTime))
digest.update(struct.pack('<I', tx.nExpiryHeight))
return digest.digest()
def txid_digest(tx):
digest = blake2b(
digest_size=32,
person=b'ZcashTxHash_' + struct.pack('<I', tx.nConsensusBranchId),
)
digest.update(header_digest(tx))
digest.update(transparent_digest(tx))
digest.update(sapling_digest(tx.saplingBundle))
digest.update(orchard_digest(tx.orchardBundle))
return digest.digest()
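# Illustrative note: per ZIP 244 the txid commits to exactly four branch
# digests (header, transparent, sapling, orchard), each a 32-byte personalized
# BLAKE2b hash, so txids are stable under changes to witness data (scriptSigs,
# proofs, signatures); witness data is committed by auth_digest below instead.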
# Authorizing Data Commitment
def auth_digest(tx):
digest = blake2b(
digest_size=32,
person=b'ZTxAuthHash_' + struct.pack('<I', tx.nConsensusBranchId),
)
digest.update(transparent_scripts_digest(tx))
digest.update(sapling_auth_digest(tx.saplingBundle))
digest.update(orchard_auth_digest(tx.orchardBundle))
return digest.digest()
# Signatures
def signature_digest(tx, nHashType, txin):
digest = blake2b(
digest_size=32,
person=b'ZcashTxHash_' + struct.pack('<I', tx.nConsensusBranchId),
)
digest.update(header_digest(tx))
digest.update(transparent_sig_digest(tx, nHashType, txin))
digest.update(sapling_digest(tx.saplingBundle))
digest.update(orchard_digest(tx.orchardBundle))
return digest.digest()
def transparent_sig_digest(tx, nHashType, txin):
# Sapling Spend or Orchard Action
if txin is None:
return transparent_digest(tx)
digest = blake2b(digest_size=32, person=b'ZTxIdTranspaHash')
digest.update(prevouts_sig_digest(tx, nHashType))
digest.update(sequence_sig_digest(tx, nHashType))
digest.update(outputs_sig_digest(tx, nHashType, txin))
digest.update(txin_sig_digest(tx, txin))
return digest.digest()
def prevouts_sig_digest(tx, nHashType):
# If the SIGHASH_ANYONECANPAY flag is not set:
if not (nHashType & SIGHASH_ANYONECANPAY):
return getHashPrevouts(tx, b'ZTxIdPrevoutHash')
else:
return blake2b(digest_size=32, person=b'ZTxIdPrevoutHash').digest()
def sequence_sig_digest(tx, nHashType):
# if the SIGHASH_ANYONECANPAY flag is not set, and the sighash type is neither
# SIGHASH_SINGLE nor SIGHASH_NONE:
if (
(not (nHashType & SIGHASH_ANYONECANPAY)) and \
(nHashType & 0x1f) != SIGHASH_SINGLE and \
(nHashType & 0x1f) != SIGHASH_NONE
):
return getHashSequence(tx, b'ZTxIdSequencHash')
else:
return blake2b(digest_size=32, person=b'ZTxIdSequencHash').digest()
def outputs_sig_digest(tx, nHashType, txin):
# If the sighash type is neither SIGHASH_SINGLE nor SIGHASH_NONE:
if (nHashType & 0x1f) != SIGHASH_SINGLE and (nHashType & 0x1f) != SIGHASH_NONE:
return getHashOutputs(tx, b'ZTxIdOutputsHash')
# If the sighash type is SIGHASH_SINGLE and the signature hash is being computed for
# the transparent input at a particular index, and a transparent output appears in the
# transaction at that index:
elif (nHashType & 0x1f) == SIGHASH_SINGLE and 0 <= txin.nIn and txin.nIn < len(tx.vout):
digest = blake2b(digest_size=32, person=b'ZTxIdOutputsHash')
digest.update(bytes(tx.vout[txin.nIn]))
return digest.digest()
else:
return blake2b(digest_size=32, person=b'ZTxIdOutputsHash').digest()
def txin_sig_digest(tx, txin):
digest = blake2b(digest_size=32, person=b'Zcash___TxInHash')
digest.update(bytes(tx.vin[txin.nIn].prevout))
digest.update(ser_string(txin.scriptCode))
digest.update(struct.pack('<Q', txin.amount))
digest.update(struct.pack('<I', tx.vin[txin.nIn].nSequence))
return digest.digest()
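# Quick illustration (not part of the ZIP 244 test vectors): every digest in
# this file is a 32-byte personalized BLAKE2b hash whose personalization
# string is exactly 16 bytes, e.g.
#
#   from hashlib import blake2b
#   d = blake2b(digest_size=32, person=b'ZTxIdTranspaHash')  # 16-byte person
#   assert len(d.digest()) == 32  # digest of an empty transparent bundle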

39 zebra-rpc/qa/rpc-tests/test_framework/zip317.py Normal file
View File

@ -0,0 +1,39 @@
#!/usr/bin/env python3
# Copyright (c) 2023 The Zcash developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or https://www.opensource.org/licenses/mit-license.php .
#
# zip317.py
#
# Utilities for ZIP 317 conventional fee specification, as defined in https://zips.z.cash/zip-0317.
#
from test_framework.mininode import COIN
from decimal import Decimal
# The fee per logical action, in zatoshis. See https://zips.z.cash/zip-0317#fee-calculation.
MARGINAL_FEE = 5000
# The lower bound on the number of logical actions in a tx, for purposes of fee calculation. See
# https://zips.z.cash/zip-0317#fee-calculation.
GRACE_ACTIONS = 2
# Limits the relative probability of picking a given transaction to be at most `WEIGHT_RATIO_CAP`
# times greater than a transaction that pays exactly the conventional fee. See
# https://zips.z.cash/zip-0317#recommended-algorithm-for-block-template-construction
WEIGHT_RATIO_CAP = 4
# Default limit on the number of unpaid actions in a block. See
# https://zips.z.cash/zip-0317#recommended-algorithm-for-block-template-construction
DEFAULT_BLOCK_UNPAID_ACTION_LIMIT = 50
# The zcashd RPC sentinel value to indicate the conventional_fee when a positional argument is
# required.
ZIP_317_FEE = None
def conventional_fee_zats(logical_actions):
return MARGINAL_FEE * max(GRACE_ACTIONS, logical_actions)
def conventional_fee(logical_actions):
return Decimal(conventional_fee_zats(logical_actions)) / COIN
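# Worked example (illustrative): a transaction with 3 logical actions pays
# conventional_fee_zats(3) = 5000 * max(2, 3) = 15000 zatoshis, i.e.
# conventional_fee(3) == Decimal('0.00015') ZEC; a 1-action transaction is
# still charged for GRACE_ACTIONS = 2 actions (10000 zatoshis).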