//! # Dynamic Honey Badger
//!
//! Like Honey Badger, this protocol allows a network of `N` nodes with at most `f` faulty ones,
//! where `3 * f < N`, to input "transactions" (any kind of data) and to agree on a sequence of
//! _batches_ of transactions. The protocol proceeds in _epochs_, starting at number 0, and outputs
//! one batch in each epoch. It never terminates: it handles a continuous stream of incoming
//! transactions and keeps producing new batches from them. All correct nodes will output the same
//! batch for each epoch.
//!
//! Unlike Honey Badger, this algorithm allows dynamically adding new validators from the pool of
//! observer nodes, and turning validators back into observers. As a signal to initiate that
//! process, it defines a special `Change` input variant, which contains either a vote
//! `Add(node_id, public_key)`, to add a new validator, or `Remove(node_id)`, to remove it. Each
//! validator can have at most one active vote, and casting another vote revokes the previous one.
//! Once a simple majority of validators has the same active vote, a reconfiguration process begins
//! (they need to create new cryptographic key shares for the new composition).
//!
//! The state of that process after each epoch is communicated via the `Batch::change` field. When
//! this contains an `InProgress(Add(..))` value, all nodes need to send every future `Target::All`
//! message to the new node, too. Once the value is `Complete`, the votes will be reset, and the
//! next epoch will run using the new set of validators.
//!
//! ## How it works
//!
//! Dynamic Honey Badger runs a regular Honey Badger instance internally, which in addition to the
//! user's transactions contains special transactions for the change votes and the key generation
//! messages. Running votes and key generation "on-chain", as transactions, greatly simplifies
//! these processes, since it is guaranteed that every node will see the same sequence of votes and
//! messages.
//!
//! Every time Honey Badger outputs a new batch, Dynamic Honey Badger outputs the user transactions
//! in its own batch. The other transactions are processed: votes are counted and key generation
//! messages are passed into a `SyncKeyGen` instance.
//!
//! If, after an epoch, key generation has completed, the Honey Badger instance (including all
//! pending batches) is dropped and replaced by a new one with the new set of participants.
//!
//! Otherwise, we check whether the majority of votes has changed. If a new change has a majority,
//! the `SyncKeyGen` instance is dropped, and a new one is started to create keys according to the
//! new pending change.

// TODO: Document how to add observers, once that is supported.

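The vote-counting rule described above can be illustrated in isolation. The following is a minimal, standalone sketch with hypothetical string stand-ins: real votes are `Change<NodeUid>` values keyed by validator ID, and the crate's actual logic lives in `handle_vote` and `current_majority` below.

```rust
use std::collections::{BTreeMap, HashMap};

// Hypothetical stand-ins: validator IDs and votes are plain strings here.
// Returns the vote shared by a strict majority of the `num_nodes` nodes, if any.
fn majority_vote(votes: &BTreeMap<&str, &str>, num_nodes: usize) -> Option<String> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for vote in votes.values() {
        let count = counts.entry(*vote).or_insert(0);
        *count += 1;
        // Strictly more than half of all nodes share this vote.
        if *count * 2 > num_nodes {
            return Some(vote.to_string());
        }
    }
    None
}

fn main() {
    let mut votes = BTreeMap::new();
    votes.insert("alice", "Add(dave)");
    votes.insert("bob", "Add(dave)");
    // Casting another vote revokes the previous one:
    votes.insert("bob", "Remove(carol)");
    assert_eq!(majority_vote(&votes, 4), None); // 1 + 1 votes: no majority of 4
    votes.insert("bob", "Add(dave)");
    votes.insert("carol", "Add(dave)");
    assert_eq!(majority_vote(&votes, 4), Some("Add(dave)".to_string())); // 3 of 4
}
```

Because the map is keyed by validator, inserting a second vote for the same validator replaces (revokes) the first one, exactly as the collected-votes map in this module behaves.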
use std::collections::{BTreeMap, HashMap, VecDeque};
use std::fmt::Debug;
use std::hash::Hash;
use std::mem;
use std::rc::Rc;

use bincode;
use clear_on_drop::ClearOnDrop;
use serde::{Deserialize, Serialize};

use crypto::{PublicKey, PublicKeySet, SecretKey, Signature};
use honey_badger::{self, HoneyBadger};
use messaging::{DistAlgorithm, NetworkInfo, Target, TargetedMessage};
use sync_key_gen::{Accept, Propose, SyncKeyGen};

type KeyGenOutput = (PublicKeySet, Option<ClearOnDrop<Box<SecretKey>>>);

error_chain!{
    links {
        HoneyBadger(honey_badger::Error, honey_badger::ErrorKind);
    }

    foreign_links {
        Bincode(Box<bincode::ErrorKind>);
    }

    errors {
        UnknownSender
    }
}

/// A node change action: adding or removing a node.
#[derive(Clone, Eq, PartialEq, Serialize, Deserialize, Hash, Debug)]
pub enum Change<NodeUid> {
    /// Add a node. The public key is used only temporarily, for key generation.
    Add(NodeUid, PublicKey),
    /// Remove a node.
    Remove(NodeUid),
}

impl<NodeUid> Change<NodeUid> {
    /// Returns the ID of the current candidate for being added, if any.
    fn candidate(&self) -> Option<&NodeUid> {
        match *self {
            Change::Add(ref id, _) => Some(id),
            Change::Remove(_) => None,
        }
    }
}

/// A change status: whether a node addition or removal is currently in progress or completed.
#[derive(Clone, Eq, PartialEq, Serialize, Deserialize, Hash, Debug)]
pub enum ChangeState<NodeUid> {
    /// No node is currently being considered for addition or removal.
    None,
    /// A change is currently in progress. If it is an addition, all broadcast messages must be
    /// sent to the new node, too.
    InProgress(Change<NodeUid>),
    /// A change has been completed in this epoch. From the next epoch on, the new composition of
    /// the network will perform the consensus process.
    Complete(Change<NodeUid>),
}

/// The user input for `DynamicHoneyBadger`.
#[derive(Clone, Debug)]
pub enum Input<Tx, NodeUid> {
    /// A user-defined transaction.
    User(Tx),
    /// A vote to change the set of nodes.
    Change(Change<NodeUid>),
}

/// A Honey Badger instance that can handle adding and removing nodes.
pub struct DynamicHoneyBadger<Tx, NodeUid>
where
    Tx: Eq + Serialize + for<'r> Deserialize<'r> + Debug + Hash,
    NodeUid: Ord + Clone + Serialize + for<'r> Deserialize<'r> + Debug,
{
    /// Shared network data.
    netinfo: NetworkInfo<NodeUid>,
    /// The target number of transactions per batch.
    batch_size: usize,
    /// The first epoch after the latest node change.
    start_epoch: u64,
    /// Collected votes for adding or removing nodes. Each node has one vote, and casting another
    /// vote revokes the previous one. Resets whenever the set of validators is successfully
    /// changed.
    votes: BTreeMap<NodeUid, Change<NodeUid>>,
    /// The `HoneyBadger` instance with the current set of nodes.
    honey_badger: HoneyBadger<Transaction<Tx, NodeUid>, NodeUid>,
    /// The current key generation process, and the change it applies to.
    key_gen: Option<(SyncKeyGen<NodeUid>, Change<NodeUid>)>,
    /// A queue for messages from future epochs that cannot be handled yet.
    incoming_queue: Vec<(NodeUid, Message<NodeUid>)>,
    /// The messages that need to be sent to other nodes.
    messages: MessageQueue<NodeUid>,
    /// The outputs from completed epochs.
    output: VecDeque<Batch<Tx, NodeUid>>,
}

impl<Tx, NodeUid> DistAlgorithm for DynamicHoneyBadger<Tx, NodeUid>
where
    Tx: Eq + Serialize + for<'r> Deserialize<'r> + Debug + Hash,
    NodeUid: Eq + Ord + Clone + Serialize + for<'r> Deserialize<'r> + Debug + Hash,
{
    type NodeUid = NodeUid;
    type Input = Input<Tx, NodeUid>;
    type Output = Batch<Tx, NodeUid>;
    type Message = Message<NodeUid>;
    type Error = Error;

    fn input(&mut self, input: Self::Input) -> Result<()> {
        // User transactions are forwarded to `HoneyBadger` right away. Internal messages are
        // additionally signed and broadcast.
        match input {
            Input::User(tx) => {
                self.honey_badger.input(Transaction::User(tx))?;
                self.process_output()
            }
            Input::Change(change) => self.send_transaction(NodeTransaction::Change(change)),
        }
    }

    fn handle_message(&mut self, sender_id: &NodeUid, message: Self::Message) -> Result<()> {
        let epoch = message.epoch();
        if epoch < self.start_epoch {
            return Ok(()); // Obsolete message.
        }
        if epoch > self.start_epoch {
            // Message cannot be handled yet. Save it for later.
            let entry = (sender_id.clone(), message);
            self.incoming_queue.push(entry);
            return Ok(());
        }
        match message {
            Message::HoneyBadger(_, hb_msg) => self.handle_honey_badger_message(sender_id, hb_msg),
            Message::Signed(_, node_tx, sig) => self.handle_signed_message(sender_id, node_tx, sig),
        }
    }

    fn next_message(&mut self) -> Option<TargetedMessage<Self::Message, NodeUid>> {
        self.messages.pop_front()
    }

    fn next_output(&mut self) -> Option<Self::Output> {
        self.output.pop_front()
    }

    fn terminated(&self) -> bool {
        false
    }

    fn our_id(&self) -> &NodeUid {
        self.netinfo.our_uid()
    }
}

impl<Tx, NodeUid> DynamicHoneyBadger<Tx, NodeUid>
where
    Tx: Eq + Serialize + for<'r> Deserialize<'r> + Debug + Hash,
    NodeUid: Eq + Ord + Clone + Debug + Serialize + for<'r> Deserialize<'r> + Hash,
{
    /// Returns a new instance with the given parameters, starting at epoch `0`.
    pub fn new(netinfo: NetworkInfo<NodeUid>, batch_size: usize) -> Result<Self> {
        let honey_badger = HoneyBadger::new(Rc::new(netinfo.clone()), batch_size, None)?;
        let dyn_hb = DynamicHoneyBadger {
            netinfo,
            batch_size,
            start_epoch: 0,
            votes: BTreeMap::new(),
            honey_badger,
            key_gen: None,
            incoming_queue: Vec::new(),
            messages: MessageQueue(VecDeque::new()),
            output: VecDeque::new(),
        };
        Ok(dyn_hb)
    }

    /// Handles a message for the `HoneyBadger` instance.
    fn handle_honey_badger_message(
        &mut self,
        sender_id: &NodeUid,
        message: honey_badger::Message<NodeUid>,
    ) -> Result<()> {
        if !self.netinfo.all_uids().contains(sender_id) {
            info!("Unknown sender {:?} of message {:?}", sender_id, message);
            return Err(ErrorKind::UnknownSender.into());
        }
        // Handle the message and put the outgoing messages into the queue.
        self.honey_badger.handle_message(sender_id, message)?;
        self.process_output()
    }

    /// Handles a vote or key generation message and tries to commit it as a transaction. These
    /// messages are only handled once they appear in a batch output from Honey Badger.
    fn handle_signed_message(
        &mut self,
        sender_id: &NodeUid,
        node_tx: NodeTransaction<NodeUid>,
        sig: Box<Signature>,
    ) -> Result<()> {
        self.verify_signature(sender_id, &*sig, &node_tx)?;
        let tx = Transaction::Signed(self.start_epoch, sender_id.clone(), node_tx, sig);
        self.honey_badger.input(tx)?;
        self.process_output()
    }

    /// Processes all pending batches output by Honey Badger.
    fn process_output(&mut self) -> Result<()> {
        let start_epoch = self.start_epoch;
        while let Some(hb_batch) = self.honey_badger.next_output() {
            // Create the batch we output ourselves. It will contain the _user_ transactions of
            // `hb_batch`, and the current change state.
            let mut batch = Batch::new(hb_batch.epoch + self.start_epoch);
            // Add the user transactions to `batch` and handle votes and DKG messages.
            for (id, tx_vec) in hb_batch.transactions {
                let entry = batch.transactions.entry(id);
                let id_txs = entry.or_insert_with(Vec::new);
                for tx in tx_vec {
                    match tx {
                        Transaction::User(tx) => id_txs.push(tx),
                        Transaction::Signed(epoch, s_id, node_tx, sig) => {
                            if epoch < self.start_epoch {
                                info!("Obsolete node transaction: {:?}.", node_tx);
                                continue;
                            }
                            if !self.verify_signature(&s_id, &sig, &node_tx)? {
                                info!("Invalid signature from {:?} for: {:?}.", s_id, node_tx);
                                continue;
                            }
                            use self::NodeTransaction::*;
                            match node_tx {
                                Change(change) => self.handle_vote(s_id, change),
                                Propose(propose) => self.handle_propose(&s_id, propose)?,
                                Accept(accept) => self.handle_accept(&s_id, accept)?,
                            }
                        }
                    }
                }
            }
            if let Some(((pub_key_set, sk), change)) = self.take_key_gen_output() {
                // If DKG completed, apply the change.
                debug!("{:?} DKG for {:?} complete!", self.our_id(), change);
                // If we are a validator, we received a new secret key. Otherwise keep the old one.
                let sk = sk.unwrap_or_else(|| {
                    ClearOnDrop::new(Box::new(self.netinfo.secret_key().clone()))
                });
                // Restart Honey Badger in the next epoch, and inform the user about the change.
                self.start_epoch = batch.epoch + 1;
                self.apply_change(&change, pub_key_set, sk)?;
                batch.change = ChangeState::Complete(change);
            } else {
                // If the majority changed, restart DKG. Inform the user about the current change.
                self.update_key_gen()?;
                if let Some((_, ref change)) = self.key_gen {
                    batch.change = ChangeState::InProgress(change.clone());
                }
            }
            self.output.push_back(batch);
        }
        self.messages
            .extend_with_epoch(self.start_epoch, &mut self.honey_badger);
        // If `start_epoch` changed, we can now handle some queued messages.
        if start_epoch < self.start_epoch {
            let queue = mem::replace(&mut self.incoming_queue, Vec::new());
            for (sender_id, msg) in queue {
                self.handle_message(&sender_id, msg)?;
            }
        }
        Ok(())
    }

    /// Restarts Honey Badger with a new set of nodes, and resets the key generation state.
    fn apply_change(
        &mut self,
        change: &Change<NodeUid>,
        pub_key_set: PublicKeySet,
        sk: ClearOnDrop<Box<SecretKey>>,
    ) -> Result<()> {
        self.votes.clear();
        self.key_gen = None;
        let mut all_uids = self.netinfo.all_uids().clone();
        if !match *change {
            Change::Remove(ref id) => all_uids.remove(id),
            Change::Add(ref id, _) => all_uids.insert(id.clone()),
        } {
            info!("No-op change: {:?}", change);
        }
        let netinfo = NetworkInfo::new(self.our_id().clone(), all_uids, sk, pub_key_set);
        self.netinfo = netinfo.clone();
        // TODO: If there are more pending outputs, maybe their transactions should be added, too?
        // They will have been removed from the buffer already.
        let old_buffer = self.honey_badger.drain_buffer().into_iter();
        let new_buffer = old_buffer.filter(Transaction::is_user);
        self.honey_badger = HoneyBadger::new(Rc::new(netinfo), self.batch_size, new_buffer)?;
        Ok(())
    }
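The no-op detection in `apply_change` relies on the return values of the set operations: `BTreeSet::insert` reports whether the element was newly added, and `remove` reports whether it was present. A minimal standalone sketch, with hypothetical string IDs in place of `NodeUid`:

```rust
use std::collections::BTreeSet;

/// `insert` returns `false` if the ID was already present, so an `Add` of an
/// existing node changes nothing.
fn is_noop_add<'a>(nodes: &mut BTreeSet<&'a str>, id: &'a str) -> bool {
    !nodes.insert(id)
}

/// `remove` returns `false` if the ID was absent, so a `Remove` of an unknown
/// node changes nothing.
fn is_noop_remove(nodes: &mut BTreeSet<&str>, id: &str) -> bool {
    !nodes.remove(id)
}

fn main() {
    let mut nodes: BTreeSet<&str> = ["alice", "bob"].iter().cloned().collect();
    assert!(is_noop_add(&mut nodes, "alice")); // already a validator
    assert!(!is_noop_remove(&mut nodes, "bob")); // actually removes "bob"
    assert!(is_noop_remove(&mut nodes, "bob")); // "bob" is already gone
}
```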
    /// If the majority of votes has changed, restarts key generation for the set of nodes implied
    /// by the current change.
    fn update_key_gen(&mut self) -> Result<()> {
        let change = match current_majority(&self.votes, &self.netinfo) {
            None => {
                self.key_gen = None;
                return Ok(());
            }
            Some(change) => {
                if self.key_gen.as_ref().map(|&(_, ref ch)| ch) == Some(change) {
                    return Ok(()); // The change is the same as last epoch. Continue DKG as is.
                }
                change.clone()
            }
        };
        debug!("{:?} Restarting DKG for {:?}.", self.our_id(), change);
        // Use the existing key shares - with the change applied - as keys for DKG.
        let mut pub_keys = self.netinfo.public_key_map().clone();
        if match change {
            Change::Remove(ref id) => pub_keys.remove(id).is_none(),
            Change::Add(ref id, ref pk) => pub_keys.insert(id.clone(), pk.clone()).is_some(),
        } {
            info!("{:?} No-op change: {:?}", self.our_id(), change);
        }
        // TODO: This needs to be the same as `num_faulty` will be in the _new_
        // `NetworkInfo` if the change goes through. It would be safer to deduplicate.
        let threshold = (pub_keys.len() - 1) / 3;
        let sk = self.netinfo.secret_key().clone();
        let our_uid = self.our_id().clone();
        let (key_gen, propose) = SyncKeyGen::new(&our_uid, sk, pub_keys, threshold);
        self.key_gen = Some((key_gen, change));
        if let Some(propose) = propose {
            self.send_transaction(NodeTransaction::Propose(propose))?;
        }
        Ok(())
    }

    /// Handles a `Propose` message that was output by Honey Badger.
    fn handle_propose(&mut self, sender_id: &NodeUid, propose: Propose) -> Result<()> {
        let handle = |&mut (ref mut key_gen, _): &mut (SyncKeyGen<NodeUid>, _)| {
            key_gen.handle_propose(&sender_id, propose)
        };
        match self.key_gen.as_mut().and_then(handle) {
            Some(accept) => self.send_transaction(NodeTransaction::Accept(accept)),
            None => Ok(()),
        }
    }

    /// Handles an `Accept` message that was output by Honey Badger.
    fn handle_accept(&mut self, sender_id: &NodeUid, accept: Accept) -> Result<()> {
        if let Some(&mut (ref mut key_gen, _)) = self.key_gen.as_mut() {
            key_gen.handle_accept(&sender_id, accept);
        }
        Ok(())
    }

    /// Signs and sends a `NodeTransaction` and also tries to commit it.
    fn send_transaction(&mut self, node_tx: NodeTransaction<NodeUid>) -> Result<()> {
        let sig = self.sign(&node_tx)?;
        let msg = Message::Signed(self.start_epoch, node_tx.clone(), sig.clone());
        self.messages.push_back(Target::All.message(msg));
        if !self.netinfo.is_validator() {
            return Ok(());
        }
        let our_uid = self.netinfo.our_uid().clone();
        let hb_tx = Transaction::Signed(self.start_epoch, our_uid, node_tx, sig);
        self.honey_badger.input(hb_tx)?;
        self.process_output()
    }

    /// If the current key generation process is ready, returns the generated key set.
    ///
    /// We require the minimum number of completed proposals (`SyncKeyGen::is_ready`) and, if a
    /// new node is joining, additionally that the new node's proposal is complete. That way the
    /// new node knows that its key is secret, without having to trust any number of nodes.
    fn take_key_gen_output(&mut self) -> Option<(KeyGenOutput, Change<NodeUid>)> {
        let is_ready = |&(ref key_gen, ref change): &(SyncKeyGen<_>, Change<_>)| {
            let candidate_ready = |id: &NodeUid| key_gen.is_node_ready(id);
            key_gen.is_ready() && change.candidate().map_or(true, candidate_ready)
        };
        if self.key_gen.as_ref().map_or(false, is_ready) {
            let generate = |(key_gen, change): (SyncKeyGen<_>, _)| (key_gen.generate(), change);
            self.key_gen.take().map(generate)
        } else {
            None
        }
    }
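The readiness rule in `take_key_gen_output` combines two conditions via `Option::map_or`. A standalone sketch, with hypothetical string IDs and a boolean flag standing in for `SyncKeyGen::is_ready`; `candidate` models `Change::candidate()` (`Some(id)` for an addition, `None` for a removal):

```rust
// Key generation is complete when enough proposals finished AND, if a
// candidate node is joining, that candidate's own proposal is also complete.
fn ready(enough_proposals: bool, candidate: Option<&str>, node_ready: impl Fn(&str) -> bool) -> bool {
    enough_proposals && candidate.map_or(true, |id| node_ready(id))
}

fn main() {
    let done = |id: &str| id == "new";
    assert!(ready(true, None, done)); // removal: no candidate to wait for
    assert!(ready(true, Some("new"), done)); // addition: candidate is ready
    assert!(!ready(true, Some("other"), done)); // addition: candidate not ready
    assert!(!ready(false, None, done)); // not enough proposals yet
}
```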
    /// Returns a signature of `node_tx`, or an error if serialization fails.
    fn sign(&self, node_tx: &NodeTransaction<NodeUid>) -> Result<Box<Signature>> {
        let ser = bincode::serialize(node_tx)?;
        Ok(Box::new(self.netinfo.secret_key().sign(ser)))
    }

    /// Returns `true` if the signature of `node_tx` by the node with the specified ID is valid.
    /// Returns an error if the payload fails to serialize.
    fn verify_signature(
        &self,
        node_id: &NodeUid,
        sig: &Signature,
        node_tx: &NodeTransaction<NodeUid>,
    ) -> Result<bool> {
        let ser = bincode::serialize(node_tx)?;
        let pk_opt = (self.netinfo.public_key_share(node_id)).or_else(|| {
            self.key_gen
                .iter()
                .filter_map(|&(_, ref change): &(_, Change<_>)| match *change {
                    Change::Add(ref id, ref pk) if id == node_id => Some(pk),
                    Change::Add(_, _) | Change::Remove(_) => None,
                })
                .next()
        });
        Ok(pk_opt.map_or(false, |pk| pk.verify(&sig, ser)))
    }

    /// Adds a vote for a node change by the node with the given `sender_id`.
    fn handle_vote(&mut self, sender_id: NodeUid, change: Change<NodeUid>) {
        let obsolete = match change {
            Change::Add(ref id, _) => self.netinfo.all_uids().contains(id),
            Change::Remove(ref id) => !self.netinfo.all_uids().contains(id),
        };
        if !obsolete {
            self.votes.insert(sender_id, change);
        }
    }
}

/// Returns the change that currently has a majority of votes, if any.
fn current_majority<'a, NodeUid: Ord + Clone + Hash + Eq>(
    votes: &'a BTreeMap<NodeUid, Change<NodeUid>>,
    netinfo: &'a NetworkInfo<NodeUid>,
) -> Option<&'a Change<NodeUid>> {
    let mut vote_counts: HashMap<&Change<NodeUid>, usize> = HashMap::new();
    for change in votes.values() {
        let entry = vote_counts.entry(change).or_insert(0);
        *entry += 1;
        if *entry * 2 > netinfo.num_nodes() {
            return Some(change);
        }
    }
    None
}

/// The transactions for the internal `HoneyBadger` instance: this includes both user-defined
/// "regular" transactions as well as internal transactions for coordinating node additions and
/// removals and key generation.
#[derive(Eq, PartialEq, Debug, Serialize, Deserialize, Hash)]
enum Transaction<Tx, NodeUid> {
    /// A user-defined transaction.
    User(Tx),
    /// A signed internal message that gets committed via Honey Badger to communicate synchronously.
    Signed(u64, NodeUid, NodeTransaction<NodeUid>, Box<Signature>),
}

impl<Tx, NodeUid> Transaction<Tx, NodeUid> {
    /// Returns `true` if this is a user transaction.
    fn is_user(&self) -> bool {
        match *self {
            Transaction::User(_) => true,
            Transaction::Signed(_, _, _, _) => false,
        }
    }
}

/// A batch of transactions the algorithm has output.
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct Batch<Tx, NodeUid> {
    /// The sequence number: there is exactly one batch in each epoch.
    pub epoch: u64,
    /// The user transactions committed in this epoch.
    pub transactions: BTreeMap<NodeUid, Vec<Tx>>,
    /// The current state of adding or removing a node: whether any is in progress, or completed
    /// this epoch.
    pub change: ChangeState<NodeUid>,
}

impl<Tx, NodeUid: Ord> Batch<Tx, NodeUid> {
    /// Returns a new, empty batch with the given epoch.
    pub fn new(epoch: u64) -> Self {
        Batch {
            epoch,
            transactions: BTreeMap::new(),
            change: ChangeState::None,
        }
    }

    /// Returns an iterator over all transactions included in the batch.
    pub fn iter(&self) -> impl Iterator<Item = &Tx> {
        self.transactions.values().flat_map(|vec| vec)
    }

    /// Returns the number of transactions in the batch (without detecting duplicates).
    pub fn len(&self) -> usize {
        self.transactions.values().map(Vec::len).sum()
    }

    /// Returns `true` if the batch contains no transactions.
    pub fn is_empty(&self) -> bool {
        self.transactions.values().all(Vec::is_empty)
    }

    /// Returns the state of any change to the set of participating nodes: whether one is in
    /// progress or was completed in this epoch.
    pub fn change(&self) -> &ChangeState<NodeUid> {
        &self.change
    }
}

/// An internal message that gets committed via Honey Badger to communicate synchronously.
#[derive(Eq, PartialEq, Debug, Serialize, Deserialize, Hash, Clone)]
pub enum NodeTransaction<NodeUid> {
    /// A vote to add or remove a node.
    Change(Change<NodeUid>),
    /// A `SyncKeyGen::Propose` message for key generation.
    Propose(Propose),
    /// A `SyncKeyGen::Accept` message for key generation.
    Accept(Accept),
}

/// A message sent to or received from another node's Honey Badger instance.
#[cfg_attr(feature = "serialization-serde", derive(Serialize, Deserialize))]
#[derive(Debug, Clone)]
pub enum Message<NodeUid> {
    /// A message belonging to the `HoneyBadger` algorithm started in the given epoch.
    HoneyBadger(u64, honey_badger::Message<NodeUid>),
    /// A transaction to be committed, signed by a node.
    Signed(u64, NodeTransaction<NodeUid>, Box<Signature>),
}

impl<NodeUid> Message<NodeUid> {
    /// Returns the epoch this message belongs to.
    pub fn epoch(&self) -> u64 {
        match *self {
            Message::HoneyBadger(epoch, _) => epoch,
            Message::Signed(epoch, _, _) => epoch,
        }
    }
}

/// The queue of outgoing messages in a `HoneyBadger` instance.
#[derive(Deref, DerefMut)]
struct MessageQueue<NodeUid>(VecDeque<TargetedMessage<Message<NodeUid>, NodeUid>>);

impl<NodeUid> MessageQueue<NodeUid>
where
    NodeUid: Eq + Hash + Ord + Clone + Debug + Serialize + for<'r> Deserialize<'r>,
{
    /// Appends to the queue the messages from `hb`, wrapped with `epoch`.
    fn extend_with_epoch<Tx>(&mut self, epoch: u64, hb: &mut HoneyBadger<Tx, NodeUid>)
    where
        Tx: Eq + Serialize + for<'r> Deserialize<'r> + Debug + Hash,
    {
        let convert = |msg: TargetedMessage<honey_badger::Message<NodeUid>, NodeUid>| {
            msg.map(|hb_msg| Message::HoneyBadger(epoch, hb_msg))
        };
        self.extend(hb.message_iter().map(convert));
    }
}