loss of significance in difficulty (by lfm)

For instance, any nBits compressed value from 0x1a44b800 through
0x1a44b9ff will show as difficulty 244139.4816. This patch converts
the nBits compressed values to the double difficulty more
accurately.
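
To see why (a minimal standalone sketch, not code from this patch; the
helper name OldDifficulty and the hard-coded shifts are illustrative
assumptions), note that the old code shifted the full 256-bit target
right by 256 - 32 - 31 = 193 bits before reading it as an integer. A
target with compact exponent 0x1a is the 24-bit mantissa shifted left
by 8 * (0x1a - 3) = 184 bits, so the net effect is mantissa >> 9, and
all 0x200 compact values sharing the upper mantissa bits collapse to
the same integer and hence the same difficulty:

    #include <cstdint>
    #include <cstdio>

    // Sketch of the old conversion, without CBigNum, valid for compact
    // values with exponent 0x1a: target = mantissa << 184, so the old
    // code's >> 193 keeps only mantissa >> 9.
    double OldDifficulty(uint32_t nBits)
    {
        uint32_t nMantissa = nBits & 0x00ffffff;
        int nExponent = nBits >> 24;                  // 0x1a here
        int nNetShift = 193 - 8 * (nExponent - 3);    // 9 for exponent 0x1a
        uint32_t nTruncated = nMantissa >> nNetShift; // low 9 bits discarded
        // Minimum-difficulty target 0x1d00ffff after the same >> 193:
        uint32_t nMinimum = 0xffffu << 15;            // 2147450880
        return (double)nMinimum / (double)nTruncated;
    }

    int main()
    {
        printf("%f\n", OldDifficulty(0x1a44b800));    // ~244139.4816
        printf("%f\n", OldDifficulty(0x1a44b9ff));    // identical output
        return 0;
    }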

This will display recent difficulty levels slightly differently,
though. Early difficulties and testnet difficulties are not large
enough to trigger this bug.

None of the actual targets or compressed targets are changed; only
the conversion to the floating-point difficulty is changed, and as
far as I know it is only ever displayed, never converted back, so the
patch does not affect the target calculations, binary files,
databases, or the binary protocol.
Pieter Wuille 2011-05-26 23:35:00 +02:00
parent db69432dcd
commit 5e1e458ecb
1 changed file with 18 additions and 4 deletions

@@ -199,12 +199,26 @@ double GetDifficulty()
 {
     // Floating point number that is a multiple of the minimum difficulty,
     // minimum difficulty = 1.0.
     if (pindexBest == NULL)
         return 1.0;
-    int nShift = 256 - 32 - 31; // to fit in a uint
-    double dMinimum = (CBigNum().SetCompact(bnProofOfWorkLimit.GetCompact()) >> nShift).getuint();
-    double dCurrently = (CBigNum().SetCompact(pindexBest->nBits) >> nShift).getuint();
-    return dMinimum / dCurrently;
+    int nShift = (pindexBest->nBits >> 24) & 0xff;
+
+    double dDiff =
+        (double)0x0000ffff / (double)(pindexBest->nBits & 0x00ffffff);
+
+    while (nShift < 29)
+    {
+        dDiff *= 256.0;
+        nShift++;
+    }
+
+    while (nShift > 29)
+    {
+        dDiff /= 256.0;
+        nShift--;
+    }
+
+    return dDiff;
 }
 
 Value getdifficulty(const Array& params, bool fHelp)
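
The pivot value 29 in the loops is the base-256 exponent of the
difficulty-1 target: bnProofOfWorkLimit has compact form 0x1d00ffff,
i.e. exponent 0x1d (29) and mantissa 0x0000ffff, so when nBits carries
exponent 29 the difficulty is exactly 0xffff divided by the mantissa,
and every exponent step away from 29 scales the result by a factor of
256. Below is a minimal standalone sketch of the same arithmetic
(DifficultyFromCompact is a hypothetical name, and nBits is passed as
a parameter instead of read from pindexBest) that can be compiled to
check the values from the commit message:

    #include <cstdint>
    #include <cstdio>

    // Standalone copy of the new conversion logic.
    double DifficultyFromCompact(uint32_t nBits)
    {
        int nShift = (nBits >> 24) & 0xff;            // base-256 exponent
        double dDiff =
            (double)0x0000ffff / (double)(nBits & 0x00ffffff);
        while (nShift < 29) { dDiff *= 256.0; nShift++; }
        while (nShift > 29) { dDiff /= 256.0; nShift--; }
        return dDiff;
    }

    int main()
    {
        printf("%f\n", DifficultyFromCompact(0x1a44b800)); // ~244139.4816
        printf("%f\n", DifficultyFromCompact(0x1a44b9ff)); // now slightly lower
        printf("%f\n", DifficultyFromCompact(0x1d00ffff)); // 1.000000
        return 0;
    }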