mirror of https://github.com/BTCPrivate/z-nomp.git

commit 952c7105cc (parent 35772fd780)
Replaced various listeners (coinswitch and blocknotify) with unified NOMP CLI (command-line interface)
README.md (37 changed lines)

@@ -113,7 +113,8 @@ If your pool uses NOMP let us know and we will list your website here.
 * http://kryptochaos.com
 * http://pool.uberpools.org
 * http://onebtcplace.com
-* https://minr.es
+* http://minr.es
+* http://mining.theminingpools.com
 
 Usage
 =====
@@ -166,7 +167,12 @@ Explanation for each field:
     /* Specifies the level of log output verbosity. Anything more severe than the level specified
        will also be logged. */
     "logLevel": "debug", //or "warning", "error"
 
+    /* The NOMP CLI (command-line interface) will listen for commands on this port. For example,
+       blocknotify messages are sent to NOMP through this. */
+    "cliPort": 17117,
+
     /* By default 'forks' is set to "auto" which will spawn one process/fork/worker for each CPU
        core in your system. Each of these workers will run a separate instance of your pool(s),
        and the kernel will load balance miners using these forks. Optionally, the 'forks' field
@@ -205,33 +211,12 @@ Explanation for each field:
         "port": 6379
     },
 
-    /* With this enabled, the master process listens on the configured port for messages from the
-       'scripts/blockNotify.js' script which your coin daemons can be configured to run when a
-       new block is available. When a blocknotify message is received, the master process uses
-       IPC (inter-process communication) to notify each thread about the message. Each thread
-       then sends the message to the appropriate coin pool. See "Setting up blocknotify" below to
-       set up your daemon to use this feature. */
-    "blockNotifyListener": {
-        "enabled": true,
-        "port": 8117,
-        "password": "test"
-    },
-
-    /* With this enabled, the master process will listen on the configured port for messages from
-       the 'scripts/coinSwitch.js' script which will trigger your proxy pools to switch to the
-       specified coin (non-case-sensitive). This setting is used in conjunction with the proxy
-       feature below. */
-    "coinSwitchListener": {
-        "enabled": false,
-        "port": 8118,
-        "password": "test"
-    },
-
     /* In a proxy configuration, you can setup ports that accept miners for work based on a
        specific algorithm instead of a specific coin. Miners that connect to these ports are
        automatically switched to a coin determined by the server. The default coin is the first
       configured pool for each algorithm and coin switching can be triggered using the
-       coinSwitch.js script in the scripts folder.
+       cli.js script in the scripts folder.
 
        Please note miner address authentication must be disabled when using NOMP in a proxy
        configuration and that payout processing is left up to the server administrator. */
@@ -505,11 +490,11 @@ For more information on these configuration options see the [pool module documen
 1. In `config.json` set the port and password for `blockNotifyListener`
 2. In your daemon conf file set the `blocknotify` command to use:
    ```
-   node [path to scripts/blockNotify.js] [listener host]:[listener port] [listener password] [coin name in config] %s
+   node [path to cli.js] [coin name in config] [block hash symbol]
    ```
    Example: inside `dogecoin.conf` add the line
    ```
-   blocknotify=node scripts/blockNotify.js 127.0.0.1:8117 mySuperSecurePassword dogecoin %s
+   blocknotify=node /home/nomp/scripts/cli.js blocknotify dogecoin %s
    ```
 
 Alternatively, you can use a more efficient block notify script written in pure C. Build and usage instructions
@@ -1,7 +0,0 @@
-{
-    "name": "Ecoin",
-    "symbol": "ECN",
-    "algorithm": "keccak",
-    "normalHashing": true,
-    "diffShift": 32
-}
@@ -1,6 +1,8 @@
 {
     "logLevel": "debug",
 
+    "cliPort": 17117,
+
     "clustering": {
         "enabled": true,
         "forks": "auto"

@@ -26,47 +28,47 @@
         "port": 6379
     },
 
-    "blockNotifyListener": {
-        "enabled": false,
-        "port": 8117,
-        "password": "test"
-    },
-
-    "coinSwitchListener": {
-        "enabled": false,
-        "host": "127.0.0.1",
-        "port": 8118,
-        "password": "test"
-    },
-
-    "proxy": {
-        "sha256": {
+    "switching": {
+        "switch1": {
             "enabled": false,
-            "port": "3333",
-            "diff": 10,
-            "varDiff": {
-                "minDiff": 16,
-                "maxDiff": 512,
-                "targetTime": 15,
-                "retargetTime": 90,
-                "variancePercent": 30
+            "algorithm": "sha256",
+            "ports": {
+                "3333": {
+                    "diff": 10,
+                    "varDiff": {
+                        "minDiff": 16,
+                        "maxDiff": 512,
+                        "targetTime": 15,
+                        "retargetTime": 90,
+                        "variancePercent": 30
+                    }
+                }
             }
         },
-        "scrypt": {
+        "switch2": {
             "enabled": false,
-            "port": "4444",
-            "diff": 10,
-            "varDiff": {
-                "minDiff": 16,
-                "maxDiff": 512,
-                "targetTime": 15,
-                "retargetTime": 90,
-                "variancePercent": 30
+            "algorithm": "scrypt",
+            "ports": {
+                "4444": {
+                    "diff": 10,
+                    "varDiff": {
+                        "minDiff": 16,
+                        "maxDiff": 512,
+                        "targetTime": 15,
+                        "retargetTime": 90,
+                        "variancePercent": 30
+                    }
+                }
+            }
         },
-        "scrypt-n": {
+        "switch3": {
             "enabled": false,
-            "port": "5555"
+            "algorithm": "x11",
+            "ports": {
+                "5555": {
+                    "diff": 0.001
+                }
+            }
         }
     },
 
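The new `switching` section groups ports under named switches instead of keying the old `proxy` object by algorithm. A sketch of how a consumer can walk this layout, mirroring the `Object.keys(portalConfig.switching)` iteration added to `libs/poolWorker.js` (the sample values are illustrative, with `switch1` enabled unlike the all-disabled example above):

```javascript
// Illustrative config fragment in the new "switching" shape.
var switching = {
    "switch1": { "enabled": true,  "algorithm": "sha256", "ports": { "3333": { "diff": 10 } } },
    "switch2": { "enabled": false, "algorithm": "scrypt", "ports": { "4444": { "diff": 10 } } }
};

// Collect the listening ports of every enabled switch, keyed by algorithm,
// the same way poolWorker derives proxyPorts via Object.keys(...ports).
function enabledPortsByAlgorithm(switching) {
    var result = {};
    Object.keys(switching).forEach(function(switchName) {
        var sw = switching[switchName];
        if (sw.enabled !== true) return;
        result[sw.algorithm] = Object.keys(sw.ports); // port numbers as strings
    });
    return result;
}
```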
init.js (107 changed lines)

@@ -5,8 +5,7 @@ var cluster = require('cluster');
 
 var async = require('async');
 var PoolLogger = require('./libs/logUtil.js');
-var BlocknotifyListener = require('./libs/blocknotifyListener.js');
-var CoinswitchListener = require('./libs/coinswitchListener.js');
+var CliListener = require('./libs/cliListener.js');
 var RedisBlocknotifyListener = require('./libs/redisblocknotifyListener.js');
 var PoolWorker = require('./libs/poolWorker.js');
 var PaymentProcessor = require('./libs/paymentProcessor.js');
@@ -82,18 +81,68 @@ if (cluster.isWorker){
 var buildPoolConfigs = function(){
     var configs = {};
     var configDir = 'pool_configs/';
 
+    var poolConfigFiles = [];
+
+    /* Get filenames of pool config json files that are enabled */
     fs.readdirSync(configDir).forEach(function(file){
         if (!fs.existsSync(configDir + file) || path.extname(configDir + file) !== '.json') return;
         var poolOptions = JSON.parse(JSON.minify(fs.readFileSync(configDir + file, {encoding: 'utf8'})));
         if (!poolOptions.enabled) return;
-        var coinFilePath = 'coins/' + poolOptions.coin;
+        poolOptions.fileName = file;
+        poolConfigFiles.push(poolOptions);
+    });
+
+    /* Ensure no pool uses any of the same ports as another pool */
+    for (var i = 0; i < poolConfigFiles.length; i++){
+        var ports = Object.keys(poolConfigFiles[i].ports);
+        for (var f = 0; f < poolConfigFiles.length; f++){
+            if (f === i) continue;
+            var portsF = Object.keys(poolConfigFiles[f].ports);
+            for (var g = 0; g < portsF.length; g++){
+                if (ports.indexOf(portsF[g]) !== -1){
+                    logger.error('Master', poolConfigFiles[f].fileName, 'Has same configured port of ' + portsF[g] + ' as ' + poolConfigFiles[i].fileName);
+                    process.exit(1);
+                    return;
+                }
+            }
+
+            if (poolConfigFiles[f].coin === poolConfigFiles[i].coin){
+                logger.error('Master', poolConfigFiles[f].fileName, 'Pool has same configured coin file coins/' + poolConfigFiles[f].coin + ' as ' + poolConfigFiles[i].fileName + ' pool');
+                process.exit(1);
+                return;
+            }
+
+        }
+    }
+
+
+    poolConfigFiles.forEach(function(poolOptions){
+
+        poolOptions.coinFileName = poolOptions.coin;
+
+        var coinFilePath = 'coins/' + poolOptions.coinFileName;
         if (!fs.existsSync(coinFilePath)){
-            logger.error('Master', poolOptions.coin, 'could not find file: ' + coinFilePath);
+            logger.error('Master', poolOptions.coinFileName, 'could not find file: ' + coinFilePath);
             return;
         }
 
         var coinProfile = JSON.parse(JSON.minify(fs.readFileSync(coinFilePath, {encoding: 'utf8'})));
         poolOptions.coin = coinProfile;
+
+        if (poolOptions.coin.name in configs){
+
+            logger.error('Master', poolOptions.fileName, 'coins/' + poolOptions.coinFileName
+                + ' has same configured coin name ' + poolOptions.coin.name + ' as coins/'
+                + configs[poolOptions.coin.name].coinFileName + ' used by pool config '
+                + configs[poolOptions.coin.name].fileName);
+
+            process.exit(1);
+            return;
+        }
+
         configs[poolOptions.coin.name] = poolOptions;
+
         if (!(coinProfile.algorithm in algos)){
@@ -130,6 +179,10 @@ var spawnPoolWorkers = function(portalConfig, poolConfigs){
         return;
     }
 
+    for (var p in poolConfigs){
+
+    }
+
     var serializedConfigs = JSON.stringify(poolConfigs);
 
     var numForks = (function(){
@@ -185,25 +238,34 @@ var spawnPoolWorkers = function(portalConfig, poolConfigs){
 };
 
 
-var startBlockListener = function(portalConfig){
-    //block notify options
-    //setup block notify here and use IPC to tell appropriate pools
-    var listener = new BlocknotifyListener(portalConfig.blockNotifyListener);
+var startCliListener = function(cliPort){
+    var listener = new CliListener(cliPort);
     listener.on('log', function(text){
-        logger.debug('Master', 'Blocknotify', text);
-    });
-    listener.on('hash', function(message){
-
-        var ipcMessage = {type:'blocknotify', coin: message.coin, hash: message.hash};
-        Object.keys(cluster.workers).forEach(function(id) {
-            cluster.workers[id].send(ipcMessage);
-        });
-
-    });
-    listener.start();
+        logger.debug('Master', 'CLI', text);
+    }).on('command', function(command, params, options){
+
+        switch(command){
+            case 'blocknotify':
+                Object.keys(cluster.workers).forEach(function(id) {
+                    cluster.workers[id].send({type: 'blocknotify', coin: params[0], hash: params[1]});
+                });
+                break;
+            case 'coinswitch':
+                Object.keys(cluster.workers).forEach(function(id) {
+                    cluster.workers[id].send({type: 'coinswitch', switchName: params[0], coin: params[1] });
+                });
+                break;
+            case 'restartpool':
+                Object.keys(cluster.workers).forEach(function(id) {
+                    cluster.workers[id].send({type: 'restartpool', coin: params[0] });
+                });
+        }
+
+        console.log('command: ' + JSON.stringify([command, params, options]));
+    }).start();
 };
 
+/*
 //
 // Receives authenticated events from coin switch listener and triggers proxy
 // to swtich to a new coin.
@@ -219,7 +281,7 @@ var startCoinswitchListener = function(portalConfig){
             cluster.workers[id].send(ipcMessage);
         });
         var ipcMessage = {
-            type:'switch',
+            type:'coinswitch',
             coin: message.coin
         };
         Object.keys(cluster.workers).forEach(function(id) {

@@ -228,6 +290,7 @@ var startCoinswitchListener = function(portalConfig){
     });
     listener.start();
 };
+*/
 
 var startRedisBlockListener = function(portalConfig){
     //block notify options
@@ -324,14 +387,12 @@ var startProfitSwitch = function(portalConfig, poolConfigs){
 
     startPaymentProcessor(poolConfigs);
 
-    startBlockListener(portalConfig);
-
-    startCoinswitchListener(portalConfig);
-
     startRedisBlockListener(portalConfig);
 
     startWebsite(portalConfig, poolConfigs);
 
     startProfitSwitch(portalConfig, poolConfigs);
 
+    startCliListener(portalConfig.cliPort);
+
 })();
@@ -1,69 +0,0 @@
-var events = require('events');
-var net = require('net');
-
-
-var listener = module.exports = function listener(options){
-
-    var _this = this;
-
-    var emitLog = function(text){
-        _this.emit('log', text);
-    };
-
-
-    this.start = function(){
-        if (!options || !options.enabled){
-            emitLog('Blocknotify listener disabled');
-            return;
-        }
-
-        var blockNotifyServer = net.createServer(function(c) {
-
-            emitLog('Block listener has incoming connection');
-            var data = '';
-            try {
-                c.on('data', function (d) {
-                    emitLog('Block listener received blocknotify data');
-                    data += d;
-                    if (data.slice(-1) === '\n') {
-                        c.end();
-                    }
-                });
-                c.on('end', function () {
-
-                    emitLog('Block listener connection ended');
-
-                    var message;
-
-                    try{
-                        message = JSON.parse(data);
-                    }
-                    catch(e){
-                        emitLog('Block listener failed to parse message ' + data);
-                        return;
-                    }
-
-                    if (message.password === options.password) {
-                        _this.emit('hash', message);
-                    }
-                    else
-                        emitLog('Block listener received notification with incorrect password');
-
-                });
-            }
-            catch(e){
-                emitLog('Block listener had an error: ' + e);
-            }
-
-        });
-        blockNotifyServer.listen(options.port, function() {
-            emitLog('Block notify listener server started on port ' + options.port)
-        });
-
-        emitLog("Block listener is enabled, starting server on port " + options.port);
-    }
-
-
-
-};
-
-listener.prototype.__proto__ = events.EventEmitter.prototype;
@@ -0,0 +1,40 @@
+var events = require('events');
+var net = require('net');
+
+var listener = module.exports = function listener(port){
+
+    var _this = this;
+
+    var emitLog = function(text){
+        _this.emit('log', text);
+    };
+
+
+    this.start = function(){
+        net.createServer(function(c) {
+
+            var data = '';
+            try {
+                c.on('data', function (d) {
+                    data += d;
+                    if (data.slice(-1) === '\n') {
+                        c.end();
+                    }
+                });
+                c.on('end', function () {
+                    var message = JSON.parse(data);
+                    _this.emit('command', message.command, message.params, message.options);
+                });
+            }
+            catch(e){
+                emitLog('CLI listener failed to parse message ' + data);
+            }
+
+        }).listen(port, '127.0.0.1', function() {
+            emitLog('CLI listening on port ' + port)
+        });
+    }
+
+};
+
+listener.prototype.__proto__ = events.EventEmitter.prototype;
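The new listener accepts one newline-terminated JSON object per connection, parses it on `end`, and re-emits it as a `command` event. The helper below is a standalone equivalent of that `end` handler, shown for illustration; it is not part of the module:

```javascript
// Standalone equivalent of the 'end' handler in libs/cliListener.js:
// parse the accumulated data as one JSON command message.
function parseCliData(data) {
    var message = JSON.parse(data);
    return {
        command: message.command,
        params: message.params,
        options: message.options
    };
}

// What a client (such as scripts/cli.js) would write over TCP to
// 127.0.0.1:cliPort before closing the connection:
var wire = JSON.stringify({
    command: 'coinswitch',
    params: ['switch1', 'dogecoin'],
    options: {}
}) + '\n';
```

Note that `JSON.parse` tolerates the trailing newline, so the listener can hand the raw buffered string straight to the parser.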
@@ -1,56 +0,0 @@
-var events = require('events');
-var net = require('net');
-
-
-var listener = module.exports = function listener(options){
-
-    var _this = this;
-
-    var emitLog = function(text){
-        _this.emit('log', text);
-    };
-
-
-    this.start = function(){
-        if (!options || !options.enabled){
-            emitLog('Coinswitch listener disabled');
-            return;
-        }
-
-        var coinswitchServer = net.createServer(function(c) {
-
-            emitLog('Coinswitch listener has incoming connection');
-            var data = '';
-            try {
-                c.on('data', function (d) {
-                    emitLog('Coinswitch listener received switch request');
-                    data += d;
-                    if (data.slice(-1) === '\n') {
-                        c.end();
-                    }
-                });
-                c.on('end', function () {
-
-                    var message = JSON.parse(data);
-                    if (message.password === options.password) {
-                        _this.emit('switchcoin', message);
-                    }
-                    else
-                        emitLog('Coinswitch listener received notification with incorrect password');
-
-                });
-            }
-            catch(e){
-                emitLog('Coinswitch listener failed to parse message ' + data);
-            }
-
-        });
-        coinswitchServer.listen(options.port, function() {
-            emitLog('Coinswitch notify listener server started on port ' + options.port)
-        });
-
-        emitLog("Coinswitch listener is enabled, starting server on port " + options.port);
-    }
-
-};
-
-listener.prototype.__proto__ = events.EventEmitter.prototype;
@@ -18,6 +18,8 @@ module.exports = function(logger){
 
     var proxySwitch = {};
 
+    var redisClient = redis.createClient(portalConfig.redis.port, portalConfig.redis.host);
+
     //Handle messages from master process sent via IPC
     process.on('message', function(message) {
         switch(message.type){
@@ -42,26 +44,39 @@ module.exports = function(logger){
                 break;
 
             // IPC message for pool switching
-            case 'switch':
+            case 'coinswitch':
                 var logSystem = 'Proxy';
                 var logComponent = 'Switch';
                 var logSubCat = 'Thread ' + (parseInt(forkId) + 1);
 
+                var switchName = message.switchName;
+                if (!portalConfig.switching[switchName]) {
+                    logger.error(logSystem, logComponent, logSubCat, 'Switching key not recognized: ' + switchName);
+                }
+
                 var messageCoin = message.coin.toLowerCase();
                 var newCoin = Object.keys(pools).filter(function(p){
                     return p.toLowerCase() === messageCoin;
                 })[0];
 
                 if (!newCoin){
-                    logger.debug(logSystem, logComponent, logSubCat, 'Switch message to coin that is not recognized: ' + messageCoin);
+                    logger.error(logSystem, logComponent, logSubCat, 'Switch message to coin that is not recognized: ' + messageCoin);
                     break;
                 }
 
                 var algo = poolConfigs[newCoin].coin.algorithm;
 
+                if (algo !== proxySwitch[switchName].algorithm){
+                    logger.error(logSystem, logComponent, logSubCat, 'Cannot switch a '
+                        + proxySwitch[switchName].algorithm
+                        + ' algo pool to coin ' + newCoin + ' with ' + algo + ' algo');
+                    break;
+                }
+
                 var newPool = pools[newCoin];
-                var oldCoin = proxySwitch[algo].currentPool;
+                var oldCoin = proxySwitch[switchName].currentPool;
                 var oldPool = pools[oldCoin];
-                var proxyPort = proxySwitch[algo].port;
+                var proxyPorts = Object.keys(proxySwitch[switchName].ports);
 
                 if (newCoin == oldCoin) {
                     logger.debug(logSystem, logComponent, logSubCat, 'Switch message would have no effect - ignoring ' + newCoin);
@@ -74,25 +89,23 @@ module.exports = function(logger){
                 oldPool.relinquishMiners(
                     function (miner, cback) {
                         // relinquish miners that are attached to one of the "Auto-switch" ports and leave the others there.
-                        cback(miner.client.socket.localPort == proxyPort)
+                        cback(proxyPorts.indexOf(miner.client.socket.localPort.toString()) !== -1)
                     },
                     function (clients) {
                         newPool.attachMiners(clients);
                     }
                 );
-                proxySwitch[algo].currentPool = newCoin;
+                proxySwitch[switchName].currentPool = newCoin;
 
-                var redisClient = redis.createClient(portalConfig.redis.port, portalConfig.redis.host)
-                redisClient.on('ready', function(){
-                    redisClient.hset('proxyState', algo, newCoin, function(error, obj) {
-                        if (error) {
-                            logger.error(logSystem, logComponent, logSubCat, 'Redis error writing proxy config: ' + JSON.stringify(err))
-                        }
-                        else {
-                            logger.debug(logSystem, logComponent, logSubCat, 'Last proxy state saved to redis for ' + algo);
-                        }
-                    });
+                redisClient.hset('proxyState', algo, newCoin, function(error, obj) {
+                    if (error) {
+                        logger.error(logSystem, logComponent, logSubCat, 'Redis error writing proxy config: ' + JSON.stringify(err))
+                    }
+                    else {
+                        logger.debug(logSystem, logComponent, logSubCat, 'Last proxy state saved to redis for ' + algo);
+                    }
                 });
 
             }
             break;
         }
@@ -119,7 +132,7 @@ module.exports = function(logger){
         if (shareProcessing && shareProcessing.mpos && shareProcessing.mpos.enabled){
             var mposCompat = new MposCompatibility(logger, poolOptions);
 
-            handlers.auth = function(workerName, password, authCallback){
+            handlers.auth = function(port, workerName, password, authCallback){
                 mposCompat.handleAuth(workerName, password, authCallback);
             };
 
@@ -137,10 +150,30 @@ module.exports = function(logger){
 
             var shareProcessor = new ShareProcessor(logger, poolOptions);
 
-            handlers.auth = function(workerName, password, authCallback){
+            handlers.auth = function(port, workerName, password, authCallback){
                 if (shareProcessing.internal.validateWorkerAddress !== true)
                     authCallback(true);
                 else {
+                    port = port.toString();
+                    if (portalConfig.switching) {
+                        for (var switchName in portalConfig.switching) {
+                            if (portalConfig.switching[switchName].enabled && Object.keys(portalConfig.switching[switchName].ports).indexOf(port) !== -1) {
+                                if (workerName.length === 40) {
+                                    try {
+                                        new Buffer(workerName, 'hex');
+                                        authCallback(true);
+                                    }
+                                    catch (e) {
+                                        authCallback(false);
+                                    }
+                                }
+                                else
+                                    authCallback(false);
+                                return;
+                            }
+                        }
+                    }
+
                     pool.daemon.cmd('validateaddress', [workerName], function(results){
                         var isValid = results.filter(function(r){return r.response.isvalid}).length > 0;
                         authCallback(isValid);
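The switching-port branch above authorizes a worker name only if it is exactly 40 characters and round-trips through `new Buffer(workerName, 'hex')` inside a try/catch. A standalone equivalent of that check is sketched below; it uses a regex instead of the Buffer round-trip because (as an assumption about newer Node versions) `Buffer.from` truncates at the first invalid hex character rather than throwing:

```javascript
// Accept a worker name on an auto-switching port only if it is the
// 40-character hex identifier shape the commit's auth handler checks for.
function isHexWorkerName(workerName) {
    return typeof workerName === 'string' && /^[0-9a-fA-F]{40}$/.test(workerName);
}
```

On non-switching ports the handler still falls through to the daemon's `validateaddress` call, so this shortcut only applies to miners on proxy ports.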
@@ -153,8 +186,8 @@ module.exports = function(logger){
             };
         }
 
-        var authorizeFN = function (ip, workerName, password, callback) {
-            handlers.auth(workerName, password, function(authorized){
+        var authorizeFN = function (ip, port, workerName, password, callback) {
+            handlers.auth(port, workerName, password, function(authorized){
 
                 var authString = authorized ? 'Authorized' : 'Unauthorized ';
 
@@ -202,9 +235,9 @@ module.exports = function(logger){
         });
 
 
-        if (typeof(portalConfig.proxy) !== 'undefined') {
+        if (portalConfig.switching) {
 
-            var logSystem = 'Proxy';
+            var logSystem = 'Switching';
             var logComponent = 'Setup';
             var logSubCat = 'Thread ' + (parseInt(forkId) + 1);
 
@@ -215,73 +248,93 @@ module.exports = function(logger){
             // on the last pool it was using when reloaded or restarted
             //
             logger.debug(logSystem, logComponent, logSubCat, 'Loading last proxy state from redis');
-            var redisClient = redis.createClient(portalConfig.redis.port, portalConfig.redis.host);
-            redisClient.on('ready', function(){
-                redisClient.hgetall("proxyState", function(error, obj) {
-                    if (error || obj == null) {
-                        //logger.debug(logSystem, logComponent, logSubCat, 'No last proxy state found in redis');
-                    }
-                    else {
-                        proxyState = obj;
-                        logger.debug(logSystem, logComponent, logSubCat, 'Last proxy state loaded from redis');
-                    }
-
-                    //
-                    // Setup proxySwitch object to control proxy operations from configuration and any restored
-                    // state. Each algorithm has a listening port, current coin name, and an active pool to
-                    // which traffic is directed when activated in the config.
-                    //
-                    // In addition, the proxy config also takes diff and varDiff parmeters the override the
-                    // defaults for the standard config of the coin.
-                    //
-                    Object.keys(portalConfig.proxy).forEach(function(algorithm) {
-
-                        if (portalConfig.proxy[algorithm].enabled === true) {
-                            var initalPool = proxyState.hasOwnProperty(algorithm) ? proxyState[algorithm] : _this.getFirstPoolForAlgorithm(algorithm);
-                            proxySwitch[algorithm] = {
-                                port: portalConfig.proxy[algorithm].port,
-                                currentPool: initalPool,
-                                proxy: {}
-                            };
-
-
-                            // Copy diff and vardiff configuation into pools that match our algorithm so the stratum server can pick them up
-                            //
-                            // Note: This seems a bit wonky and brittle - better if proxy just used the diff config of the port it was
-                            // routed into instead.
-                            //
-                            if (portalConfig.proxy[algorithm].hasOwnProperty('varDiff')) {
-                                proxySwitch[algorithm].varDiff = new Stratum.varDiff(proxySwitch[algorithm].port, portalConfig.proxy[algorithm].varDiff);
-                                proxySwitch[algorithm].diff = portalConfig.proxy[algorithm].diff;
-                            }
-                            Object.keys(pools).forEach(function (coinName) {
-                                var a = poolConfigs[coinName].coin.algorithm;
-                                var p = pools[coinName];
-                                if (a === algorithm) {
-                                    p.setVarDiff(proxySwitch[algorithm].port, proxySwitch[algorithm].varDiff);
+            /*redisClient.on('error', function(err){
+                logger.debug(logSystem, logComponent, logSubCat, 'Pool configuration failed: ' + err);
+            });*/
+
+            redisClient.hgetall("proxyState", function(error, obj) {
+                if (error || obj == null) {
+                    //logger.debug(logSystem, logComponent, logSubCat, 'No last proxy state found in redis');
+                }
+                else {
+                    proxyState = obj;
+                    logger.debug(logSystem, logComponent, logSubCat, 'Last proxy state loaded from redis');
+                }
+
+                //
+                // Setup proxySwitch object to control proxy operations from configuration and any restored
+                // state. Each algorithm has a listening port, current coin name, and an active pool to
+                // which traffic is directed when activated in the config.
+                //
+                // In addition, the proxy config also takes diff and varDiff parmeters the override the
+                // defaults for the standard config of the coin.
+                //
+                Object.keys(portalConfig.switching).forEach(function(switchName) {
+
+                    var algorithm = portalConfig.switching[switchName].algorithm;
+
+                    if (portalConfig.switching[switchName].enabled === true) {
+                        var initalPool = proxyState.hasOwnProperty(algorithm) ? proxyState[algorithm] : _this.getFirstPoolForAlgorithm(algorithm);
+                        proxySwitch[switchName] = {
+                            algorithm: algorithm,
+                            ports: portalConfig.switching[switchName].ports,
+                            currentPool: initalPool,
+                            servers: []
+                        };
+
+
+                        // Copy diff and vardiff configuation into pools that match our algorithm so the stratum server can pick them up
+                        //
+                        // Note: This seems a bit wonky and brittle - better if proxy just used the diff config of the port it was
+                        // routed into instead.
+                        //
+                        /*if (portalConfig.proxy[algorithm].hasOwnProperty('varDiff')) {
|
||||||
|
proxySwitch[algorithm].varDiff = new Stratum.varDiff(proxySwitch[algorithm].port, portalConfig.proxy[algorithm].varDiff);
|
||||||
|
proxySwitch[algorithm].diff = portalConfig.proxy[algorithm].diff;
|
||||||
|
}*/
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
Object.keys(pools).forEach(function (coinName) {
|
||||||
|
var p = pools[coinName];
|
||||||
|
if (poolConfigs[coinName].coin.algorithm === algorithm) {
|
||||||
|
for (var port in portalConfig.switching[switchName].ports) {
|
||||||
|
if (portalConfig.switching[switchName].ports[port].vardiff)
|
||||||
|
p.setVarDiff(port, portalConfig.switching[switchName].ports[port].vardiff);
|
||||||
}
|
}
|
||||||
});
|
}
|
||||||
|
});
|
||||||
|
|
||||||
proxySwitch[algorithm].proxy = net.createServer(function(socket) {
|
|
||||||
var currentPool = proxySwitch[algorithm].currentPool;
|
|
||||||
var logSubCat = 'Thread ' + (parseInt(forkId) + 1);
|
|
||||||
|
|
||||||
logger.debug(logSystem, 'Connect', logSubCat, 'Proxy connect from ' + socket.remoteAddress + ' on ' + proxySwitch[algorithm].port
|
Object.keys(proxySwitch[switchName].ports).forEach(function(port){
|
||||||
+ ' routing to ' + currentPool);
|
var f = net.createServer(function(socket) {
|
||||||
|
var currentPool = proxySwitch[switchName].currentPool;
|
||||||
|
|
||||||
|
logger.debug(logSystem, 'Connect', logSubCat, 'Connection to '
|
||||||
|
+ switchName + ' from '
|
||||||
|
+ socket.remoteAddress + ' on '
|
||||||
|
+ port + ' routing to ' + currentPool);
|
||||||
|
|
||||||
pools[currentPool].getStratumServer().handleNewClient(socket);
|
pools[currentPool].getStratumServer().handleNewClient(socket);
|
||||||
|
|
||||||
}).listen(parseInt(proxySwitch[algorithm].port), function() {
|
}).listen(parseInt(port), function() {
|
||||||
logger.debug(logSystem, logComponent, logSubCat, 'Proxy listening for ' + algorithm + ' on port ' + proxySwitch[algorithm].port
|
logger.debug(logSystem, logComponent, logSubCat, 'Switching "' + switchName
|
||||||
+ ' into ' + proxySwitch[algorithm].currentPool);
|
+ '" listening for ' + algorithm
|
||||||
|
+ ' on port ' + port
|
||||||
|
+ ' into ' + proxySwitch[switchName].currentPool);
|
||||||
});
|
});
|
||||||
}
|
proxySwitch[switchName].servers.push(f);
|
||||||
else {
|
});
|
||||||
//logger.debug(logSystem, logComponent, logSubCat, 'Proxy pool for ' + algorithm + ' disabled.');
|
|
||||||
}
|
|
||||||
});
|
}
|
||||||
|
else {
|
||||||
|
//logger.debug(logSystem, logComponent, logSubCat, 'Proxy pool for ' + algorithm + ' disabled.');
|
||||||
|
}
|
||||||
});
|
});
|
||||||
}).on('error', function(err){
|
|
||||||
logger.debug(logSystem, logComponent, logSubCat, 'Pool configuration failed: ' + err);
|
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
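The switching loop above reads per-switch settings from the "switching" section of the portal config. A sketch of the shape that code expects — the key names ("enabled", "algorithm", "ports", "vardiff") come from the code above, while the switch name, port number, and vardiff values are illustrative, not from this commit:

```json
{
    "switching": {
        "switch1": {
            "enabled": true,
            "algorithm": "scrypt",
            "ports": {
                "3333": {
                    "diff": 10,
                    "vardiff": {
                        "minDiff": 8,
                        "maxDiff": 512,
                        "targetTime": 15,
                        "retargetTime": 90,
                        "variancePercent": 30
                    }
                }
            }
        }
    }
}
```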
@ -148,7 +148,11 @@ module.exports = function(logger, portalConfig, poolConfigs){
                        symbol: poolConfigs[coinName].coin.symbol.toUpperCase(),
                        algorithm: poolConfigs[coinName].coin.algorithm,
                        hashrates: replies[i + 1],
                        poolStats: {
                            validShares: replies[i + 2] ? (replies[i + 2].validShares || 0) : 0,
                            validBlocks: replies[i + 2] ? (replies[i + 2].validBlocks || 0) : 0,
                            invalidShares: replies[i + 2] ? (replies[i + 2].invalidShares || 0) : 0
                        },
                        blocks: {
                            pending: replies[i + 3],
                            confirmed: replies[i + 4],
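The new poolStats block above defaults each counter to zero when the redis reply is missing or partial. A minimal standalone sketch of that defaulting pattern (the function name is ours, not from the commit):

```javascript
// Mirror the defensive defaulting the stats hunk performs on a redis hgetall
// reply: a null reply or a missing field yields 0 instead of undefined.
function normalizePoolStats(reply) {
    return {
        validShares: reply ? (reply.validShares || 0) : 0,
        validBlocks: reply ? (reply.validBlocks || 0) : 0,
        invalidShares: reply ? (reply.invalidShares || 0) : 0
    };
}

console.log(normalizePoolStats(null));
console.log(normalizePoolStats({ validShares: 12 }));
```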
@ -1,34 +0,0 @@
/*
This script should be hooked to the coin daemon as follows:

litecoind -blocknotify="node /path/to/this/script/blockNotify.js 127.0.0.1:8117 password litecoin %s"

The above tells litecoind to launch this script with those parameters every time a block is found.

This script then sends the block hash along with other information to a listening TCP socket.
*/

var net = require('net');
var config = process.argv[2];
var parts = config.split(':');
var host = parts[0];
var port = parts[1];
var password = process.argv[3];
var coin = process.argv[4];
var blockHash = process.argv[5];

var client = net.connect(port, host, function () {
    console.log('client connected');
    client.write(JSON.stringify({
        password: password,
        coin: coin,
        hash: blockHash
    }) + '\n');
});

client.on('data', function (data) {
    console.log(data.toString());
    //client.end();
});

client.on('end', function () {
    console.log('client disconnected');
    //process.exit();
});
@ -16,76 +16,69 @@ Simple lightweight & fast - a more efficient block notify script in pure C.
(may also work as coin switch)

Platforms : Linux, BSD, Solaris (mostly OS independent)

Build with:
 gcc blocknotify.c -o blocknotify

Example usage in daemon coin.conf using default NOMP CLI port of 17117
 blocknotify="/bin/blocknotify 127.0.0.1:17117 dogecoin %s"

*/

int main(int argc, char **argv)
{
    int sockfd,n;
    struct sockaddr_in servaddr, cliaddr;
    char sendline[1000];
    char recvline[1000];
    char host[200];
    char *p, *arg, *errptr;
    int port;

    if (argc < 3)
    {
        // print help
        printf("NOMP pool block notify\n usage: <host:port> <coin> <block>\n");
        exit(1);
    }

    strncpy(host, argv[1], (sizeof(host)-1));
    p = host;

    if ( (arg = strchr(p,':')) )
    {
        *arg = '\0';

        errno = 0; // reset errno
        port = strtol(++arg, &errptr, 10);

        if ( (errno != 0) || (errptr == arg) )
        {
            fprintf(stderr, "port number fail [%s]\n", errptr);
        }
    }

    snprintf(sendline, sizeof(sendline) - 1, "{\"command\":\"blocknotify\",\"params\":[\"%s\",\"%s\"]}\n", argv[2], argv[3]);

    sockfd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    bzero(&servaddr, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_addr.s_addr = inet_addr(host);
    servaddr.sin_port = htons(port);
    connect(sockfd, (struct sockaddr *)&servaddr, sizeof(servaddr));

    int result = send(sockfd, sendline, strlen(sendline), 0);
    close(sockfd);

    if(result == -1) {
        printf("Error sending: %i\n", errno);
        exit(-1);
    }
    exit(0);
}
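The C program above now sends a NOMP CLI command instead of a password-authenticated payload; the wire format is a single newline-terminated JSON line. A sketch in JavaScript of the same payload the snprintf call builds (the coin name and hash below are example values, not from the commit):

```javascript
// Build the same newline-terminated JSON line that blocknotify.c sends
// over its TCP socket to the NOMP CLI port.
var coin = 'dogecoin';            // example value
var blockHash = 'd2191a8b644c9cd9'; // example value
var line = JSON.stringify({ command: 'blocknotify', params: [coin, blockHash] }) + '\n';
console.log(line);
```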
@ -0,0 +1,38 @@
var net = require('net');

var defaultPort = 17117;
var defaultHost = '127.0.0.1';

var args = process.argv.slice(2);
var params = [];
var options = {};

for(var i = 0; i < args.length; i++){
    if (args[i].indexOf('-') === 0 && args[i].indexOf('=') !== -1){
        var s = args[i].substr(1).split('=');
        options[s[0]] = s[1];
    }
    else
        params.push(args[i]);
}

var command = params.shift();

var client = net.connect(options.port || defaultPort, options.host || defaultHost, function () {
    client.write(JSON.stringify({
        command: command,
        params: params,
        options: options
    }) + '\n');
}).on('error', function(error){
    if (error.code === 'ECONNREFUSED')
        console.log('Could not connect to NOMP instance at ' + defaultHost + ':' + defaultPort);
    else
        console.log('Socket error ' + JSON.stringify(error));
}).on('data', function(data) {
    console.log(data.toString());
}).on('close', function () {

});
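The argument handling in the CLI above can be exercised on its own. A minimal sketch of that logic as a pure function (the function name and sample arguments are ours, not from the commit): "-key=value" tokens become options, every other token is a positional param, and the first param is taken as the command.

```javascript
// Re-implements the CLI's argv handling: dash-prefixed "key=value" tokens
// go into options, the rest into params, and the first param is the command.
function parseCliArgs(args) {
    var params = [];
    var options = {};
    for (var i = 0; i < args.length; i++) {
        if (args[i].indexOf('-') === 0 && args[i].indexOf('=') !== -1) {
            var s = args[i].substr(1).split('=');
            options[s[0]] = s[1];
        }
        else
            params.push(args[i]);
    }
    return { command: params.shift(), params: params, options: options };
}

var parsed = parseCliArgs(['-host=127.0.0.1', 'blocknotify', 'dogecoin', 'deadbeef']);
console.log(JSON.stringify(parsed));
```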
@ -1,37 +0,0 @@
/*
This script demonstrates sending a coin switch request and can be invoked from the command line
with:

"node coinSwitch.js 127.0.0.1:8118 password %s"

where <%s> is the name of the coin proxy miners will be switched onto.

If the coin name is not configured, is disabled, or matches the existing proxy setting, no action
will be taken by NOMP on receipt of the message.
*/

var net = require('net');
var config = process.argv[2];
var parts = config.split(':');
var host = parts[0];
var port = parts[1];
var password = process.argv[3];
var coin = process.argv[4];

var client = net.connect(port, host, function () {
    console.log('client connected');
    client.write(JSON.stringify({
        password: password,
        coin: coin
    }) + '\n');
});

client.on('data', function (data) {
    console.log(data.toString());
    //client.end();
});

client.on('end', function () {
    console.log('client disconnected');
    //process.exit();
});
@ -1,56 +1,59 @@
<style>

    #topCharts {
        padding: 18px;
    }

    #topCharts > div > div > svg {
        display: block;
        height: 280px;
    }

    .chartWrapper {
        border: solid 1px #c7c7c7;
        border-radius: 5px;
        padding: 5px;
        margin-bottom: 18px;
    }

    .chartLabel {
        font-size: 1.2em;
        text-align: center;
        padding: 4px;
    }

    .chartHolder {

    }

    table {
        width: 100%;
    }

</style>
<table class="pure-table">
    <thead>
        <tr>
            <th>Pool</th>
            <th>Algo</th>
            <th>Workers</th>
            <th>Valid Shares</th>
            <th>Invalid Shares</th>
            <th>Blocks</th>
            <th>Hashrate</th>
        </tr>
    </thead>
    {{ for(var pool in it.stats.pools) { }}
    <tr class="pure-table-odd">
        <td>{{=it.stats.pools[pool].name}}</td>
        <td>{{=it.stats.pools[pool].algorithm}}</td>
        <td>{{=Object.keys(it.stats.pools[pool].workers).length}}</td>
        <td>{{=it.stats.pools[pool].poolStats.validShares}}</td>
        <td>{{=it.stats.pools[pool].poolStats.invalidShares}}</td>
        <td>{{=it.stats.pools[pool].poolStats.validBlocks}}</td>
        <td>{{=it.stats.pools[pool].hashrateString}}</td>
    </tr>
    {{ } }}
</table>