Satoshi Nakamoto – Bitcoin Wiki

"Eppur, si muove." | It's not even about the specifics of the specs. It's about the fact that (for the first time since Blockstream hijacked the "One True Repo"), *we* can now actually once again *specify* those specs. It's about Bitcoin Classic.

Lately, there's been a lot of buzz about Bitcoin Classic.
For the first time since Blockstream hijacked the "one true repo" (which they basically inherited from Satoshi), we now also appear to have another real, serious repo - based almost 100% on Core, but already starting to deviate ever-so-slightly from it - and with a long-term roadmap that also promises to be both responsive and robust.
The Bitcoin Classic project already has some major advantages, including:
"When in the course of Bitcoin development ... it becomes necessary (and possible) to set up a new (real, serious) repo with a dev and a miner and a payment processor who are able to really understand the code at the mathematical and economical level, and really interact with the users at the social and political level...
(unlike the triad of tone-deaf pinheads at Blockstream, fueled by fiat, coddled by censorship, and pathologically attached to their pet projects: Adam Back and Gregory Maxwell and Peter Todd - brilliant though these devs may be as C/C++ programmers)
...then this will be a major turning point in the history of Bitcoin."
Bitcoin Classic
What is it?
Right now, it's probably more like just an "MVP" (Minimum Viable Product) for:
  • governance or
  • decentralized development or
  • a new codebase with a good chance of being adopted - a kind of Schelling point of development, thanks to having a top miner/researcher (JToomim) on board, plus a top dev/researcher (Gavin Andresen) on board, plus a really simple and robust max-blocksize algorithm (BitPay's Adaptive Block Size Limit) which empowers miners rather than developers
Call it what you will.
But that's what we need at this point: a new repo which is:
  • a minimal departure from the existing One True repo
  • safe and sane in the sense that it empowers miners over devs
Paraphrasing the words of Paul Sztorc on "Measuring Decentralization", "decentralization" means "a very low cost for anyone to add...":
  • one more block,
  • one more verifying node,
  • one more mining node,
  • one more developer,
  • one more (real, serious) repo.
And this last item is probably what Bitcoin Classic is really about.
It's about finally being able to add one more (real, serious) repo...
...knowing that to a certain degree, some of the specific specs are still-to-be-specified
...but that's ok, because we can see that the proper social-political-economic requirements for responsibly doing so finally appear to be in place: ie, we are starting to see the coalescence of a team...
...who experiment and observe - and communicate and listen - and respond and react accordingly
...so that they can faithfully (but conservatively) translate users' needs & requirements into code that can achieve consensus on the network.
As it's turned out, it has been surprisingly challenging to create this kind of bridge between users and devs (centered around a new, real, serious codebase with a good chance of adoption)...
...because (sorry for the stereotype) most users can't code, and many devs can't communicate (well enough)
...so, many devs can't (optimally) figure out what to code.
We've seen how out-of-touch the devs can be (particularly when shielded by censors and funded by venture capitalists), not only in the "blocksize wars", but also in decisions such as the insistence of Blockstream's devs on prioritizing things like RBF and LN over the protests of many users.
But now it looks like, for the first time since Blockstream hijacked the one real, serious repo, we now have a new real, serious repo where...
(due to being a kind of "Schelling point of development" - ie a focal point many people can, well, "focus" on)
(due to having a responsive expert scientific miner like JToomim on-board - and a responsive expert scientific dev like Gavin on-board - with stated preference for a simple, robust, miner-empowering approach to block size - eg: BitPay's Adaptive Block Size)
... this repo actually has a very good chance of achieving:
  • rough consensus among the community (the "social" community of discussing and debating and developing), and
  • actual consensus on the network (eg 750 / 1000 of previous blocks, or whatever ends up being defined).
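As an illustration, an activation trigger of the "750 of the previous 1000 blocks" kind can be sketched in a few lines. This is only a sketch of the general mechanism, assuming the commonly discussed window and threshold values - the actual parameters are, as the post says, still to be defined:

```python
# Hedged sketch (not any client's actual code): checking a miner-signalled
# activation trigger such as "750 of the last 1000 blocks".

def trigger_reached(block_versions, new_version, window=1000, threshold=750):
    """Return True if at least `threshold` of the last `window` blocks
    signal `new_version`. `block_versions` is ordered oldest-to-newest."""
    recent = block_versions[-window:]
    signalling = sum(1 for v in recent if v == new_version)
    return signalling >= threshold

# Example: 800 of the last 1000 blocks signal the new version.
versions = [1] * 200 + [2] * 800
assert trigger_reached(versions, new_version=2)          # activates
assert not trigger_reached([1] * 1000, new_version=2)    # no signalling
```

The appeal of this kind of trigger is precisely that it is "easy to observe": anyone can count block versions in the recent chain and verify whether consensus has been reached.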
In the above, the words "responsive" and "scientific" have very concrete meanings:
  • responsive: they elicit-verify-implement actual users' needs & requirements
  • scientific: they use the scientific method of proposing-testing-and-accepting-or-rejecting a hypothesis
  • (in particular, they don't have hangups about shifting priorities among projects and proposals when new information becomes available - ie, they have the maturity and the self-awareness and the egolessness to not become pathologically over-attached to proving irrelevant points or pursuing pet projects)
So we could have the following definition of "decentralization of development" (à la Paul Sztorc):
The "cost" of anyone adding a new (real, serious) repo must be kept as minimal as possible.
(But of course with the caveat or condition that: the repo still must be "real and serious" - which implies that it will have to overcome a high hurdle in order to be seriously entertained.)
And it bears repeating: As we've seen from the past year of raging debates, the costs and challenges of adding a new (real, serious) repo are largely social and political - and can be very high and exceedingly complex.
But that's probably the way it should be. Because adding a new repo is the first step on the road towards doing a hard fork.
So it is a journey which must not be embarked upon with levity, but with gravity - with all due deliberation and seriousness.
Which is one quite legitimate reason why the people against such a change have dug their heels in so determinedly. And we should actually be totally understanding and even thankful that they have done so.
As long as it's a fair fight, done in good faith.
Which I think many of us can be generous enough to say it indeed has been - for the most part.
Note: I always add the parenthetical "(real, serious)" to the phrase "a new (real, serious) repo" in the same way we add the parenthetical "(valid)" to the phrase: "the longest (valid) chain".
  • In order to add a "valid" block to this chain, there are algorithmic rules - purely mathematical.
  • In order to add a "real, serious" repo to the ecosystem - or to the website bitcoin.org for example, as we recently saw in the strange spectacle of CoinBase diplomatically bowing down to theymos - the rules (and costs) for determining whether a repo is "real and serious" are not purely mathematical but social-political and economic - and ultimately human, all too human.
But eventually, a new real serious repo does get added.
Which is what we appear to be seeing now, with this rallying of major talent around Bitcoin Classic.
It is of course probably natural and inevitable that the upholders / usurpers of the First and Only Real Serious Repo might be displeased to see any other new real serious repo(s) arising - and might tend to "unfairly" leverage any advantages they enjoy as "incumbents", in order to maintain their power. This is only human.
But all's fair in love and consensus, so we probably shouldn't hold any of these tendencies against them. =)
"Eppur, si muove."
=>
"But eventually, inexorably, a new 'real, serious' repo does get added."
(For some strange delicious reason, I hope luke-jr in particular reads the above lines. =)
So a new real serious repo does finally get set up on Github, and eventually downloaded and compiled to a new real serious binary.
And this binary gets tested on testnet and rolled out on mainnet. If enough users adopt it (as proven by some easy-to-observe "trigger" - eg 750 of the past 1000 blocks being mined with it), then this new real, serious Bitcoin client gains enough "consensus" to "activate" - and a (hard) chainfork then ensues (which we expect, and indeed endeavor to guarantee, should take only a few hours at most to resolve itself, as all hashpower should quickly move to the longest valid chain).
Yes, this process must involve intensive debate and caution and testing, because it is so very, very dangerous - because it is a "hard fork": initially a hard codefork which takes months of social-political debating to resolve, hopefully guided by the invisible hand of the market, and then a (hard) chainfork which takes only a few hours to resolve (we dearly hope & expect - actually, we try to virtually guarantee this by establishing a high enough activation trigger, eg "such-and-such percentage of the previous number of blocks must have been mined using the new program").
For analogies to a hard codefork in football and chess, you may find the same Paul Sztorc article interesting - in particular, the section on the dangers of hard forks.
So a "hard fork" is what we must do sometimes. Rarely, and with great deliberation and seriousness.
And the first step involves setting up a new (real, serious) repo.
This is why the actual details on the max-blocksize-increments themselves can be (and are being) left sort of vague for the moment.
There's a certain amount of hand-waving in the air.
Which is ok in this case.
Because this repo isn't about the specifics of any particular "max blocksize algorithm" - yet.
Although we do already have an encouraging statement from Gavin that his new favorite max blocksize proposal is BitPay's Adaptive Block Size Limit - which is very promising, since this proposal is simple, it gives miners autonomy over devs, and it is based on the median (not the average) of previous blocks, and the median is known to be a "more robust" (hence less game-able) statistic.
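To illustrate why the median matters here, a minimal sketch of a median-based adaptive limit follows. The window, multiplier, and 2 MB floor are hypothetical placeholders for illustration, not BitPay's actual parameters:

```python
import statistics

# Hedged sketch of a median-based adaptive block size limit.
# The multiplier and floor are illustrative placeholders, not
# BitPay's actual proposal parameters.

def adaptive_limit(recent_block_sizes, multiplier=2.0, floor=2_000_000):
    """New max block size = multiplier * median of recent block sizes,
    never below a floor (e.g. an initial 2 MB bump)."""
    med = statistics.median(recent_block_sizes)
    return max(int(multiplier * med), floor)

# Why the median is "more robust" (less game-able) than the mean:
# a few huge outlier blocks barely move it.
sizes = [900_000] * 99 + [8_000_000]   # one outlier block
print(statistics.median(sizes))        # 900000 - unmoved by the outlier
print(sum(sizes) / len(sizes))         # 971000.0 - mean pulled upward
```

A miner trying to game a mean-based limit could stuff a handful of enormous blocks into the window; against a median-based limit, they would need to move more than half the blocks in the window, which requires majority hashpower.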
So, in this sense, Bitcoin Classic is mainly about even being allowed to seriously propose some different "max blocksize" (and probably eventually a few other) algorithm(s) at all in the first place.
So far, in amongst all the hand-waving, here's what we do apparently know:
  • Definitely an initial bump to 2 MB.
  • Then... who knows?
Whatever.
At this point, it's not even the specificity of those specs that matter.
It's just that, for the first time, we have a repo whose devs will let us specify those specs:
  • evidently using some can-kick blocksize-bumps initially...
  • probably using some more "algorithmic" approach long-term - still probably very much TBD (to-be-determined - but that should be fine, because it will clearly be in consultation with the users and the empirical data of the network and the market!)...
  • and probably eventually also embracing many of the other "scaling" approaches which are not based on simply bumping up a parameter - eg: SegWit, IBLTs, weakblocks & subchains, thinblocks
So...
This is what Bitcoin Classic mainly seems to be about at this point.
It's one of the first real serious moves towards decentralized development.
It's a tiny step - but the fact that we can now even finally take a step - after so many months of paralysis - is probably what's really important here.
submitted by ydtm to btc

Flux: Revisiting Near Blocks for Proof-of-Work Blockchains

Cryptology ePrint Archive: Report 2018/415
Date: 2018-05-29
Author(s): Alexei Zamyatin, Nicholas Stifter, Philipp Schindler, Edgar Weippl, William J. Knottenbelt



Abstract
The term near or weak blocks describes Bitcoin blocks whose PoW does not meet the required target difficulty to be considered valid under the regular consensus rules of the protocol. Near blocks are generally associated with protocol improvement proposals striving towards shorter transaction confirmation times. Existing proposals assume miners will act rationally based solely on intrinsic incentives arising from the adoption of these changes, such as earlier detection of blockchain forks.
In this paper we present Flux, a protocol extension for proof-of-work blockchains that leverages near blocks, a new block reward distribution mechanism, and an improved branch selection policy to incentivize honest participation of miners. Our protocol reduces mining variance, improves the responsiveness of the underlying blockchain in terms of transaction processing, and can be deployed without conflicting modifications to the underlying base protocol as a velvet fork. We perform an initial analysis of selfish mining which suggests Flux not only provides security guarantees similar to pure Nakamoto consensus, but potentially renders selfish mining strategies less profitable.
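The near-block notion from the abstract can be illustrated with a toy classifier: a header whose double-SHA256 hash misses the full consensus target may still clear an easier "near" target. The targets below are illustrative toy values, not real network difficulty, and this is not Flux's actual code:

```python
import hashlib

# Toy near/weak-block classifier. FULL_TARGET and NEAR_TARGET are
# illustrative values chosen so examples run quickly, not real difficulty.

FULL_TARGET = 1 << 240   # valid block: hash must fall below this
NEAR_TARGET = 1 << 246   # near block: a target 64x easier to hit

def classify(header: bytes) -> str:
    """Classify a block header as 'full', 'near', or 'none'."""
    h = int.from_bytes(
        hashlib.sha256(hashlib.sha256(header).digest()).digest(), "big")
    if h < FULL_TARGET:
        return "full"    # meets the regular consensus rules
    if h < NEAR_TARGET:
        return "near"    # weak block: PoW too low for consensus validity
    return "none"

# Grinding nonces: near blocks turn up far more often than full blocks,
# which is what makes them useful as frequent intermediate signals.
near = [n for n in range(50_000)
        if classify(b"header" + n.to_bytes(4, "big")) != "none"]
```

Because near blocks arrive much more frequently than full blocks, proposals built on them (including, per the abstract, Flux) can use them for earlier fork detection and shorter effective confirmation latency.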

submitted by dj-gutz to myrXiv

Merged Mining: Analysis of Effects and Implications

Date: 2017-08-24
Author(s): Alexei Zamyatin, Edgar Weippl



Abstract
Merged mining refers to the concept of mining more than one cryptocurrency without necessitating additional proof-of-work effort. Merged mining was introduced in 2011 as a bootstrapping mechanism for new cryptocurrencies and a countermeasure against the fragmentation of mining power across competing systems. Although merged mining has already been adopted by a number of cryptocurrencies, to date little is known about its effects and implications.
In this thesis, we shed light on this topic area by performing a comprehensive analysis of merged mining in practice. As part of this analysis, we present a block attribution scheme for mining pools to assist in the evaluation of mining centralization. Our findings disclose that mining pools in merge-mined cryptocurrencies have operated at the edge of, and even beyond, the security guarantees offered by the underlying Nakamoto consensus for extended periods. We discuss the implications and security considerations for these cryptocurrencies and the mining ecosystem as a whole, and link our findings to the intended effects of merged mining.
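The basic merged-mining idea - a single proof-of-work satisfying two chains because the parent block commits to the child block - can be sketched as follows. The tag and commitment layout are simplified stand-ins for illustration, not the real AuxPoW wire format:

```python
import hashlib

# Heavily simplified sketch of merged mining (AuxPoW): the parent chain's
# coinbase commits to the child block's hash, so one proof-of-work can
# satisfy both chains. Field names and layout are illustrative only.

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def build_parent_coinbase(child_block_hash: bytes) -> bytes:
    # Real AuxPoW embeds the child hash (via a Merkle root) in the parent
    # coinbase; here we simply concatenate a tag and the hash.
    return b"parent-coinbase:" + child_block_hash

def child_accepts(parent_coinbase: bytes, child_block_hash: bytes,
                  parent_pow_hash: int, child_target: int) -> bool:
    """The child chain checks that (1) the parent coinbase commits to the
    child block, and (2) the parent's PoW meets the *child's* target."""
    committed = child_block_hash in parent_coinbase
    return committed and parent_pow_hash < child_target

# One unit of mining work can thus count for both chains: the same parent
# hash may also satisfy the parent's own (typically harder) target.
child_hash = sha256d(b"child block header")
coinbase = build_parent_coinbase(child_hash)
assert child_accepts(coinbase, child_hash, parent_pow_hash=1, child_target=10)
```

This is the bootstrapping appeal described in the abstract: a young chain can rent the security of an established chain's hashpower - though, as the thesis's findings indicate, it also concentrates influence in the pools doing the merged mining.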

[77] Unitus developers. Unitus reference implementation. https://github.com/unitusdev/unitus. Accessed: 2017-08-22.
[78] M. Vukolić. The quest for scalable blockchain fabric: Proof-of-work vs. bft replication. In International Workshop on Open Problems in Network Security, pages 112–125. Springer, 2015.
[79] P. Webb, D. Syer, J. Long, S. Nicoll, R. Winch, A. Wilkinson, M. Overdijk, C. Dupuis, and S. Deleuze. Spring boot reference guide. Technical report, 2013-2016.
[80] A. Zamyatin. Name-squatting in namecoin. (unpublished BSc thesis, Vienna University of Technology), 2015.
submitted by dj-gutz to myrXiv [link] [comments]

Bitcoin dev IRC meeting in layman's terms (2016-01-21)

Once again my attempt to summarize and explain the weekly bitcoin developer meeting in layman's terms. Link to last summarisation
Disclaimer
Please bear in mind I'm not a developer so some things might be incorrect or plain wrong. There are no decisions being made in these meetings, but since a fair amount of devs are present it's a good representation. Copyright: Public domain

Logs

Main topics

Short topics

0.11 backport release for chainstate obfuscation

background

As some Windows users might have experienced in the past, anti-virus software regularly flags values in the bitcoin database files as false positives, deleting those files and corrupting the database. To prevent this from happening, developers discussed a way to obfuscate the database files and implemented it last year. While downgrading after upgrading is possible, if you start from a new 0.12 installation or you've done a -reindex on 0.12, it's impossible to downgrade to 0.11 (without starting from scratch).
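For context, the obfuscation is deliberately trivial: values are XORed with a random per-database key, which defeats anti-virus pattern matching without adding real cryptography. A toy sketch of the idea (not Core's actual implementation):

```python
import os

def make_obfuscation_key() -> bytes:
    """One random 8-byte key, generated when the database is created."""
    return os.urandom(8)

def xor_obfuscate(data: bytes, key: bytes) -> bytes:
    """XOR data against the repeating key. XOR is its own inverse, so
    the same call both obfuscates on write and de-obfuscates on read --
    which is why downgrading breaks: old code reads the raw bytes."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = make_obfuscation_key()
value = b"example chainstate record"
stored = xor_obfuscate(value, key)
assert xor_obfuscate(stored, key) == value  # round-trips losslessly
```

An 0.11 node reading `stored` directly sees garbage, which is why the backport needs to at least detect the obfuscation and error out cleanly.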

meeting comments

The proposed pull request detects the obfuscation in 0.11 so it can throw a relevant error message. To avoid this in the future it would be good to have version numbers for the chainstate.

meeting conclusion

Release a 0.11 backport release right after the 0.12 final release to avoid confusion.

C++11 update

background

C++11 is an update of the C++ language. It offers new functionality, an extended standard library, etc. Zerocash had to be written with some C++11 libraries, and some IBLT simulation code was written in C++11, which they want to recycle for the eventual core commit.

meeting comments

All changes needed for C++11 have gone in and it's ready to switch. Cfields talked to the travis team and all the features needed (trusty, caching) will be ready by the end of the month, so he proposes to wait until then to flip the switch. Wangchung from f2pool indicated he would not run code that required a C++11 compiler. No one knows what his exact concerns are. Wumpus notes the gitian-built executables don't need any special OS support after the C++11 switch.

meeting conclusion

Wait for Travis update to switch to C++11. Talk to wangchung about his concerns.

EOL Policy / release cycles

background

In general, bugfixes, translations and softforks are maintained for 2 major releases. btcdrak proposed to make this official in a software life-cycle document for Bitcoin Core, in order to inform users what to expect and developers what to code for. Pull request for this document. Given the huge 0.12 changelog, jonasschnelli asks whether shorter release cycles might be a good idea. Currently there's a +/- 6 month release cycle.

meeting comments

Gmaxwell notes he doesn't know how useful the backports are given there's no feedback about them, but thinks the current policy is not bad. "I am observing the backports appear to be a waste of time. From a matter of principle, I think they are important, but the industry doesn't appear to agree." If no one is using the backports, it might not see sufficient testing. People generally agree with the 2 major releases approach.
The cycle length also contributes to frustration and pressure to get features in, as a feature won't see the light of day for 6 months if it doesn't make the new release. For users it's not really better to have more frequent major releases, as upgrading may not always be a trivial process. There's also a lot of work going into releases. If the GUI and wallet were detached, there could be more frequent releases for that part.

meeting conclusion

Policy will be: final release of 0.X means end-of-life of 0.(X-2), which means a 1 year support on the 6 month cycle.

Participants

wumpus Wladimir J. van der Laan gmaxwell Gregory Maxwell jonasshnelli Jonas Schnelli cfields Cory Fields btcdrak btcdrak sipa Pieter Wuille jtimon Jorge Timón maaku Mark Friedenbach kangx_ ??? Kang Zhang ??? sdaftuar Suhas Daftuar phantomcircuit Patrick Strateman CodeShark Eric Lombrozo bsm117532 Bob McElrath dkog ?dkog? jeremias ??? Jeremias Kangas ??? 

Comic relief

jonasschnelli maaku: refactoring? We have a main.cpp. We don't need refactoring. :) gmaxwell jonasschnelli: can we move everything back into main.cpp? I'd save a lot of time grepping. :P wumpus #endmeeting lightningbot` Meeting ended Thu Jan 21 19:55:48 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) btcdrak wumpus: hole in one maaku Did it right this time! gmaxwell Hurray! 
submitted by G1lius to Bitcoin [link] [comments]

The Big Blocks Mega Thread

Since this is a pressing and prevalent issue, I thought maybe condensing the essential arguments into one mega thread is better than rehashing everything in new threads all the time. I chose a FAQ format for this so a certain statement can be answered. I don't want to re-post everything here so where appropriate I'm just going to use links.
Disclaimer: This is biased towards big blocks (BIP 101 in particular) but still tries to mention the risks, worries and fears. I think this is fair because all other major bitcoin discussion places severely censor and discourage big block discussion.
 
What is the block size limit?
The block size limit was introduced by Satoshi on 2010-07-15 as an anti-DoS measure (though this was not stated in the commit message; more info here). Ever since, it has never been touched, because historically there was no need and raising the block size limit requires a hard fork. The block size directly limits the number of transactions in a block. Therefore, the capacity of Bitcoin is directly limited by the block size limit.
 
Why does a raise require a hard fork?
Because larger blocks are seen as invalid by old nodes, a block size increase would fork these nodes off the network. Therefore it is a hard fork. However, it is possible to downsize the block limit with a soft fork since smaller blocks would still be seen as valid from old nodes. It is considerably easier to roll out a soft fork. Therefore, it makes sense to roll out a more ambitious hard fork limit and downsize as needed with soft forks if problems arise.
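The asymmetry is easy to see in a toy validator (illustrative numbers, not a real consensus check):

```python
OLD_LIMIT = 1_000_000   # bytes old nodes accept
NEW_LIMIT = 2_000_000   # bytes upgraded nodes accept

def old_node_accepts(block_size: int) -> bool:
    return block_size <= OLD_LIMIT

def new_node_accepts(block_size: int) -> bool:
    return block_size <= NEW_LIMIT

# Raising the limit is a hard fork: a 1.5 MB block splits the network,
# because old nodes reject what upgraded nodes accept.
assert new_node_accepts(1_500_000) and not old_node_accepts(1_500_000)

# Lowering it (say to 0.5 MB) is a soft fork: anything valid under the
# new, stricter rule is still valid to old nodes, so they follow along.
SOFT_LIMIT = 500_000
assert all(old_node_accepts(s) for s in range(0, SOFT_LIMIT + 1, 100_000))
```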
 
What is the deal with soft and hard forks anyways?
See this article by Mike Hearn: https://medium.com/@octskyward/on-consensus-and-forks-c6a050c792e7#.74502eypb
 
Why do we need to increase the block size?
The Bitcoin network is reaching its imposed block size limit while the hard- and software would be able to support more transactions. Many believe that in its current phase of growth, artificially limiting the block size is stifling adoption, investment and future growth.
Read this article and all linked articles for further reading: http://gavinandresen.ninja/time-to-roll-out-bigger-blocks
Another article by Mike Hearn: https://medium.com/@octskyward/crash-landing-f5cc19908e32#.uhky4y1ua (this article is a little outdated since both Bitcoin Core and XT now have mempool limits)
 
What is the Fidelity Effect?
It is the Chicken and Egg problem applied to future growth of Bitcoin. If companies do not see how Bitcoin can scale long term, they don't invest which in turn slows down adoption and development.
See here and here.
 
Does an increase in block size limit mean that blocks immediately get larger to the point of the new block size limit?
No, blocks are as large as there is demand for transactions on the network. But one can assume that if the limit is lifted, more users and businesses will want to use the blockchain. This means that blocks will get bigger, but they will not automatically jump to the size of the block size limit. Increased usage of the blockchain also means increased adoption, investment and also price appreciation.
 
Which are the block size increase proposals?
See here.
It should be noted that BIP 101 is the only proposal which has been implemented and is ready to go.
 
What is the long term vision of BIP 101?
BIP 101 tries to be as close to hardware limitations regarding bandwidth as possible so that nodes can continue running at normal home-user grade internet connections to keep the decentralized aspect of Bitcoin alive. It is believed that it is hard to increase the block size limit, so a long term increase is beneficial to planning and investment in the Bitcoin network. Go to this article for further reading and understand what is meant by "designing for success".
BIP 101 vs actual transaction growth visualized: http://imgur.com/QoTEOO2
Note that the actual growth in BIP 101 is piece-wise linear and does not grow in steps as suggested in the picture.
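To make the schedule concrete, here is a rough calculator for the BIP 101 limit as I understand the proposal (8 MB activating 2016-01-11, doubling every two years with linear interpolation between doubling points, for ten doublings). Treat the exact constants as assumptions to check against the BIP text:

```python
from datetime import datetime, timezone

BASE = 8_000_000          # assumed 8 MB starting limit
T0 = datetime(2016, 1, 11, tzinfo=timezone.utc)   # assumed activation date
PERIOD = 63_072_000       # two years, in seconds
DOUBLINGS = 10            # growth assumed to stop after 20 years

def bip101_max_block_size(when: datetime) -> int:
    """Piecewise-linear schedule: double every two years, interpolating
    linearly between doubling points rather than jumping in steps."""
    elapsed = (when - T0).total_seconds()
    if elapsed < 0:
        return 1_000_000  # before activation: the old 1 MB limit
    periods = min(int(elapsed // PERIOD), DOUBLINGS)
    size = BASE << periods                  # size at the last doubling point
    if periods < DOUBLINGS:
        frac = (elapsed - periods * PERIOD) / PERIOD
        size += int(size * frac)            # linear ramp to the next doubling
    return size

print(bip101_max_block_size(datetime(2016, 1, 11, tzinfo=timezone.utc)))  # 8000000
print(bip101_max_block_size(datetime(2017, 1, 11, tzinfo=timezone.utc)))  # ~12 MB, halfway up the ramp
```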
 
What is up with the moderation and censorship on bitcoin.org, bitcointalk.org and /r/bitcoin?
Proponents of a more conservative approach believe that a block size increase proposal that does not have "developer/expert consensus" should not be implemented via a majority hard fork. Therefore, discussion about the full node clients which implement BIP 101 is not allowed. Since the same individuals have major influence over all three bitcoin websites (most notably theymos), discussion of Bitcoin XT is censored and/or discouraged on these websites.
 
What is Bitcoin XT anyways?
More info here.
 
What does Bitcoin Core do about the block size? What is the future plan by Bitcoin Core?
Bitcoin Core scaling plan as envisioned by Gregory Maxwell: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011865.html
 
Who governs or controls Bitcoin Core anyways? Who governs Bitcoin XT? What is Bitcoin governance?
Bitcoin Core is governed by a consensus mechanism. How it actually works is not clear. It seems that any major developer can "veto" a change. However, there is one head maintainer who pushes releases and otherwise organizes the development effort. It should be noted that the majority of the main contributors to Bitcoin Core are Blockstream employees.
BitcoinXT follows a benevolent dictator model (as Bitcoin used to follow when Satoshi and later Gavin Andresen were the lead maintainers).
It is a widespread belief that Bitcoin can be separated into protocol and full node development. This means that there can be multiple implementations of Bitcoin that all follow the same protocol and overall consensus mechanism. More reading here. By having multiple implementations of Bitcoin, single Bitcoin implementations can be run following a benevolent dictator model, while protocol development would follow an overall consensus model (which is enforced by Bitcoin's fundamental design through full nodes and miners' hash power). It is still unclear how protocol changes should actually be governed in such a model. Bitcoin governance is a research topic and evolving.
 
What are the arguments against a significant block size increase and against BIP 101 in particular?
The main arguments against a significant increase are related to decentralization and therefore robustness against commercial interests and government regulation and intervention. More here (warning: biased Wiki article).
Another main argument is that Bitcoin needs a fee market established by a low block size limit to support miners long term. There is significant evidence and game theory to doubt this claim, as can be seen here.
Finally, block propagation and verification times increase with an increased block size. This in turn increases the orphan rate of miners which means reduced profit. Some believe that this is a disadvantage to small miners because they are not as well connected to other big miners. Also, there is currently a large miner centralization in China. Since most of these miners are behind the Great Firewall of China, their bandwidth to the rest of the world is limited. There is a fear that larger block propagation times favor Chinese miners as long as they have a mining majority. However, there are solutions in development that can drastically reduce block propagation times so this problem will be less of an issue long term.
 
What is up with the fee market and what is the Lightning Network (LN)?
Major Bitcoin Core developers believe that a fee market established by a low block size is needed for future security of the bitcoin network. While many believe fundamentally this is true, there is major dispute if a fee market needs to be forced by a low block size. One of the main LN developers thinks such a fee market through low block size is needed (read here). The Lightning Network is a non-bandwidth scaling solution. It uses payment channels that can be opened and closed using Bitcoin transactions that are settled on the blockchain. By routing transactions through many of these payment channels, in theory it is possible to support a lot more transactions while a user only needs very few payment channels and therefore rarely has to use (settle on) the actual blockchain. More info here.
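A toy model of a payment channel (not the actual LN protocol: no signatures, timelocks or routing) shows why it scales — only opening and closing touch the chain:

```python
class PaymentChannel:
    """Toy two-party channel: just the accounting behind the scaling argument."""
    def __init__(self, alice_deposit: int, bob_deposit: int):
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        self.onchain_txs = 1   # the funding transaction settles on-chain
        self.updates = 0

    def pay(self, frm: str, to: str, amount: int) -> None:
        """An off-chain balance update: no blockchain transaction at all."""
        if self.balances[frm] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[frm] -= amount
        self.balances[to] += amount
        self.updates += 1

    def close(self) -> dict:
        """Settle the final balances back on-chain."""
        self.onchain_txs += 1
        return dict(self.balances)

ch = PaymentChannel(alice_deposit=100_000, bob_deposit=0)
for _ in range(1_000):
    ch.pay("alice", "bob", 10)        # a thousand payments...
final = ch.close()
print(ch.onchain_txs, final)          # ...but only 2 on-chain transactions
```

Routing payments across many such channels is what lets LN claim large multiples of on-chain capacity, at the cost of the liquidity and protocol complexity the real design has to solve.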
 
How does LN and other non-bandwidth scaling solutions relate to Bitcoin Core and its long term scaling vision?
Bitcoin Core is headed towards a future where block sizes are kept low so that a fee market is established long term that secures miner incentives. The main scaling solution propagated by Core is LN and other solutions that only sometimes settle transactions on the main Bitcoin blockchain. Essentially, Bitcoin becomes a settlement layer for solutions that are built on top of Bitcoin's core technology. Many believe that long term this might be inevitable. But forcing this off-chain development already today seems counterproductive to Bitcoin's much needed growth and adoption phase before such solutions can thrive. It should also be noted that no major non-bandwidth scaling solution (such as LN) has been tested or even implemented. It is not even clear if such off-chain solutions are needed long term scaling solutions as it might be possible to scale Bitcoin itself to handle all needed transaction volumes. Some believe that the focus on a forced fee market by major Bitcoin Core developers represents a conflict of interest since their employer is interested in pushing off-chain scaling solutions such as LN (more reading here).
 
Are there solutions in development that show the block sizes as proposed via BIP 101 are viable and block propagation times in particular are low enough?
Yes, most notably: Weak Blocks, Thin Blocks and IBLT.
 
What is Segregated Witness (SW) and how does it relate to scaling and block size increases?
See here. SW among other things is a way to increase the block size once without a hard fork (the actual block size is not increased but there is extra information exchanged separately to blocks).
 
Feedback and more of those question/answer type posts (or revised question/answer pairs) appreciated!
 
ToDo and thoughts for expansion:
@Mods: Maybe this could be stickied?
submitted by BIP-101 to btc [link] [comments]

Encoding/decoding blocks in IBLT, experiments on O(1) block propagation

I've been working on an IBLT written in Java, as well as a project to encode and decode Bitcoin blocks using this IBLT. The main inspiration comes from Gavin Andresen's (gavinandresen) excellent writeup on O(1) block propagation, https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2.
The projects are called ibltj (https://github.com/kallerosenbaum/ibltj) and bitcoin-iblt (https://github.com/kallerosenbaum/bitcoin-iblt). In bitcoin-iblt I've run some experiments to find a good value size and a good number of hash functions to use. Have a look at the results at https://github.com/kallerosenbaum/bitcoin-iblt/wiki/BlockStatsTest
I'm very interested in discussing this and listening to your comments. I also need some help to specify other tests to perform. I'm thinking it would be nice to have some kind of "Given that there are no more than 100 differing transactions, I need 867 cells of size 270 B to have <0.1% chance that decoding fails." Any thoughts on this?
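For readers unfamiliar with IBLTs, here is a minimal sketch of the core insert/delete/peel mechanics (fixed 8-byte keys, illustrative parameters; see the linked projects for a real implementation):

```python
import hashlib

K = 3  # cells per key

def _positions(key: bytes, m: int) -> list:
    """K distinct cell indices for a key, derived from SHA-256."""
    pos, i = [], 0
    while len(pos) < K:
        p = int.from_bytes(hashlib.sha256(bytes([i]) + key).digest()[:4], "big") % m
        if p not in pos:
            pos.append(p)
        i += 1
    return pos

def _checksum(key: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"chk" + key).digest()[:4], "big")

class IBLT:
    """Each cell holds a count, an XOR of keys, and an XOR of checksums."""
    def __init__(self, m: int):
        self.m = m
        self.count = [0] * m
        self.key_xor = [0] * m
        self.chk_xor = [0] * m

    def _update(self, key: bytes, delta: int) -> None:
        k = int.from_bytes(key, "big")
        for p in _positions(key, self.m):
            self.count[p] += delta
            self.key_xor[p] ^= k
            self.chk_xor[p] ^= _checksum(key)

    def insert(self, key: bytes) -> None:
        self._update(key, +1)

    def delete(self, key: bytes) -> None:
        self._update(key, -1)

    def decode(self) -> set:
        """Peel 'pure' cells (count 1, checksum matches) to recover keys
        inserted but not deleted. Destructive; may return only a partial
        set if the table is too small for the difference."""
        recovered, progress = set(), True
        while progress:
            progress = False
            for p in range(self.m):
                if self.count[p] == 1:
                    key = self.key_xor[p].to_bytes(8, "big")
                    if self.chk_xor[p] == _checksum(key):
                        recovered.add(key)
                        self.delete(key)
                        progress = True
        return recovered

# Sender inserts its txids; receiver deletes the ones it already has;
# decoding yields the difference.
t = IBLT(128)
keys = [i.to_bytes(8, "big") for i in range(20)]
for k in keys:
    t.insert(k)
for k in keys[5:]:
    t.delete(k)
recovered = t.decode()
print(len(recovered))  # the 5 keys present on one side only
```

The cell-count/failure-probability question above is exactly about sizing `m` relative to the expected number of differing keys.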
The test bench is pretty capable. I can perform tests on arbitrarily large fake blocks constructed from real world transactions. I can modify the following parameters:
submitted by kallerosenbaum to Bitcoin [link] [comments]

segwit after a 2MB hardfork

Disclaimer: My preferred plan for bitcoin is soft-forking segregated witness in asap, and scheduling a 2MB hardforked blocksize increase sometime mid-2017, and I think doing a 2MB hardfork anytime soon is pretty crazy. Also, I like micropayments, and until I learnt about the lightning network proposal, bitcoin didn't really interest me because a couple of cents in fees is way too expensive, and a few minutes is way too slow. Maybe that's enough to make everything I say uninteresting to you, dear reader, in which case I hope this disclaimer has saved you some time. :)
Anyway, there's now a good explanation of what segwit does beyond increasing the blocksize via accounting tricks or whatever you want to call it: https://bitcoincore.org/en/2016/01/26/segwit-benefits/ [0] I'm hopeful that makes it a bit easier to see why many people are more excited by segwit than a 2MB hardfork. In any event, hopefully it's easy to see why it might be a good idea to do segwit asap, even if you do a hardfork to double the blocksize first.
If you were to do a 2MB hardfork first, and then apply segwit on top of that [1], I think there are a number of changes you'd want to consider, rather than just doing a straight merge. Number one is that with the 75% discount for witness data and a 2MB blocksize, you run the risk of worst-case 8MB blocks which seems to be too large at present [2]. The obvious solution is to change the discount rate, or limit witness data by some other mechanism. The drawback is that this removes some of the benefits of segwit in reducing UTXO growth and in moving to a simpler cost formula. Not hard, but it's a tradeoff, and exactly what to do isn't obvious (to me, anyway).
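The worst-case arithmetic here is simple: if a witness byte counts as only a quarter of a base byte toward the limit, a block stuffed almost entirely with witness data can approach four times the base limit on the wire:

```python
def worst_case_total_size(base_limit: int, witness_weight: float) -> float:
    """If a witness byte counts as only `witness_weight` of a base byte
    toward the limit, a block made almost entirely of witness data can
    approach base_limit / witness_weight bytes of serialized size."""
    return base_limit / witness_weight

# Segwit's 75% discount means witness bytes count at 0.25:
print(worst_case_total_size(1_000_000, 0.25))  # 4000000.0 -> ~4 MB on a 1 MB base
print(worst_case_total_size(2_000_000, 0.25))  # 8000000.0 -> ~8 MB on a 2 MB hardfork
```

That second number is the "worst-case 8MB blocks" problem: shrinking the discount or capping witness data separately changes `witness_weight` here, but also weakens the UTXO-growth incentive the discount was chosen for.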
If IBLT or weak blocks or an improved relay network or something similar comes out after deploying segwit, does it then make sense to increase the discount or otherwise raise the limit on witness data, and is it possible to do this without another hardfork and corresponding forced upgrade? For the core roadmap, I think the answer would be "do segwit as a soft-fork now so no one has to upgrade, and after IBLT/etc is ready perhaps do a hard-fork then because it will be safer" so there's only one forced upgrade for users. Is some similar plan possible if there's an "immediate" hard fork to increase the block size, to avoid users getting hit with two hardforks in quick succession?
Number two is how to deal with sighashes -- segwit allows the hash calculation to be changed, so that for 2MB of transaction data (including witness data), you only need to hash up to around 4MB of data when verifying signatures, rather than potentially gigabytes of data. Compare that to Gavin's commits to the 0.11.2 branch in Classic which include a 1.3GB limit on sighash data to make the 2MB blocksize -- which is necessary because the quadratic scaling problem means that the 1.3GB limit can already be hit with 1MB blocks. Do you keep the new limit once you've got 2MB+segwit, or plan to phase it out as more transactions switch to segwit, or something else?
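A back-of-the-envelope model (made-up byte counts, purely illustrative) of why legacy sighashing is quadratic while BIP143-style hashing is linear:

```python
INPUT_BYTES = 150    # assumed rough size of one input
OUTPUT_BYTES = 500   # assumed rough total size of the outputs

def legacy_sighash_bytes(n_inputs: int) -> int:
    """Pre-segwit: signing each input hashes (roughly) the whole
    transaction, whose size itself grows with the number of inputs --
    hence total hashing grows quadratically."""
    tx_size = n_inputs * INPUT_BYTES + OUTPUT_BYTES
    return n_inputs * tx_size

def segwit_sighash_bytes(n_inputs: int) -> int:
    """BIP143-style hashing reuses precomputed digests of the shared
    parts, so total hashing grows roughly linearly with input count."""
    return n_inputs * INPUT_BYTES + OUTPUT_BYTES

for n in (100, 1_000, 10_000):
    print(n, legacy_sighash_bytes(n), segwit_sighash_bytes(n))
```

Doubling the inputs roughly quadruples the legacy hashing but only doubles the segwit hashing, which is why an explicit sighash-bytes cap is needed for 2MB legacy transactions but becomes less relevant as transactions migrate to segwit.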
Again, I think with the core roadmap the plan here is straightforward -- do segwit now, get as many wallets/transactions switched over to segwit asap (whether due to all the bonus features, or just that they're cheaper in fees), and then revise the sighash limits later as part of soft-forking to increase the blocksize.
Finally, and I'm probably projecting my own ideas here, I think a 2MB hardfork in 2017 would give ample opportunity to simultaneously switch to a "validation cost metric" approach, making fees simpler to calculate and avoiding people being able to make sigop attacks to force near-empty blocks and other such nonsense. I think there's even the possibility of changing the limit so that in future it can be increased by soft-forks [3], instead of needing a hard fork for increases as it does now. ie, I think if we're clever, we can get a gradual increase to 1.8MB-2MB starting in the next few months via segwit with a soft-fork, then have a single hard-fork flag day next year, that allows the blocksize to be managed in a forwards compatible way more or less indefinitely.
Anyhoo, I'd love to see more technical discussion of classic vs core, so in the spirit of "write what you want to read", voila...
[0] I wrote most of the text for that, though the content has had a lot of corrections from people who understand how it works better than I do; see the github pull request if you care --https://github.com/bitcoin-core/website/pull/67
[1] https://www.reddit.com/btc/comments/42mequ/jtoomim_192616_utc_my_plan_for_segwit_was_to_pull/
[2] I've done no research myself; jtoomim's talk at Hong Kong said 2MB/4MB seemed okay but 8MB/9MB was "pushing it" -- http://diyhpl.us/wiki/transcripts/scalingbitcoin/hong-kong/bip101-block-propagation-data-from-testnet/ and his talks with miners indicated that BIP101's 8MB blocks were "Too much too fast" https://docs.google.com/spreadsheets/d/1Cg9Qo9Vl5PdJYD4EiHnIGMV3G48pWmcWI3NFoKKfIzU/edit#gid=0 Tradeblock's stats also seem to suggest 8MB blocks is probably problematic for now: https://tradeblock.com/blog/bitcoin-network-capacity-analysis-part-6-data-propagation
[3] https://botbot.me/freenode/bitcoin-wizards/2015-12-09/?msg=55794797&page=4
submitted by ajtowns to btc [link] [comments]

Technical question: IBLT and IPC

I've been reading about Gavin's good work on invertible bloom lookup tables (IBLT) and also looking at systems to calculate the risk of accepting 0 confirmation transactions, like instant partial confirmations (IPC).
Am I right in thinking that IBLTs could make IPC more efficient/viable as well?
submitted by ej159 to Bitcoin [link] [comments]

Blocksizing = Bikeshedding

(definition of "Bikeshedding" on Wikipedia)
"Everyone can visualize a cheap, simple bicycle shed, so planning one can result in endless discussions because everyone involved wants to add a touch and show personal contribution."
Talk is cheap. Everyone has an opinion on easy, hot-button issues.
Devs ACKing on Github and users debating on Reddit get sucked into the never-ending Blocksize BIP Bikeshedding debates (even jstolfi has BIP 99.5!), because it's easy for everyone to weigh in and give their opinion on the starting value and periodic bump for a simple integer parameter. Meanwhile, almost nobody is doing the hard work involving crypto and hashing to implement practical, useful stuff like IBLT or SegWit - or other features that have been missing for so long we've forgotten we even needed them (eg: HD - hierarchical deterministic wallets - without which you can't permanently back up your wallet).
BIP 202 is just the latest example of Blocksizing = Bikeshedding
The latest episode of out-of-touch devs on Github ACKing yet another blocksize bikeshedding BIP (BIP 202 from jgarzik) is not actual "governance" and will not provide the scaling Bitcoin actually needs.
BIP 202 is wrong because it scales linearly instead of exponentially
https://np.reddit.com/btc/comments/3xf50u/bip_202_is_wrong_because_it_scales_linearly/
It would be like selling a house for $200,000 where the buyer originally offered $100,000 and then offered $100,002 - you wouldn't say they had compromised - you'd simply laugh in their face.
BIP 202 isn't even acceptable as a "compromise".
https://np.reddit.com/btc/comments/3xedu8/a_comparison_of_bip202_bip101_growth_rates_with/cy45fzz
This is one of the reasons why this Blocksize BIP Bikeshedding debate is never-ending: it's easy, lazy, high-profile "executive decision-making" for devs, and easy, ponderous, philosophical pontificating for users, and everyone feels "qualified" to offer their expertise on how to set this one little parameter (which probably doesn't even need to be there in the first place since miners already soft-limit down as needed to avoid orphaning).
Nobody has been able to convincingly answer the question, "What should the optimal block size limit be?" And the reason nobody has been able to answer that question is the same reason nobody has been able to answer the question, "What should the price today be?" – tsontar
https://np.reddit.com/btc/comments/3xdc9e/nobody_has_been_able_to_convincingly_answer_the/
Setting a parameter is easy. Adding features is hard.
It's so much easier to simply propose a parameter versus actually adding any real features which real users really need in real life. There's a long list of much-needed features which none of these devs ever roll up their sleeves and work on, such as:
  • HD: hierarchical deterministic wallets (BIP 32), without which it's impossible to back up your wallet permanently
  • simple optimizations and factorings like IBLT / Thin Blocks / Weak Blocks / SegWit
When are we going to get a pluggable policy architecture for Core?
https://np.reddit.com/btc/comments/3v4u52/when_are_we_going_to_get_a_pluggable_policy/
Bikeshedding in politics.
By the way, you can see the parallel in US electoral politics, on forums and comment threads and Facebook, where everyone has a really important opinion they urgently need to share with the world on the eternal trinity of American hot-button issues (abortion and racism and gays) - but nobody really feels like spending the time and effort to come up with solutions for the complicated stuff like education, healthcare, student loans, housing prices, or foreign policy.
It's all just bikeshedding - a way of feeling self-important and getting attention, while the more-important and less-glamorous bread-and-butter nuts-and-bolts real-life user-experience issues get left by the wayside, because they're just too "complicated" and "difficult" and not "sexy" enough for most devs to actually work on.
submitted by ydtm to btc [link] [comments]

Bitcoin dev IRC meeting in layman's terms (2016-01-21)

Once again my attempt to summarize and explain the weekly bitcoin developer meeting in layman's terms. Link to last summarisation
Disclaimer
Please bear in mind I'm not a developer so some things might be incorrect or plain wrong. There are no decisions being made in these meetings, but since a fair amount of devs are present it's a good representation. Copyright: Public domain

Logs

Main topics

Short topics

0.11 backport release for chainstate obfuscation

background

As some windows users might have experienced in the past, anti-virus software regularly detects values in the bitcoin database files which are false-positives. Thereby deleting those files and corrupting the database. To prevent this from happening developers discussed a way to obfuscate the database files and implemented it last year. While downgrading after upgrading is possible, if you start from a new 0.12 installation or you've done a -reindex on 0.12 it's impossible to downgrade to 0.11 (without starting from scratch).

meeting comments

The proposed pull-request detects the obfuscation in 0.11 so it throws a relevant error message. To avoid this in the future it would be good to have versionnumbers for the chainstate.

meeting conclusion

Release a 0.11 backport release right after the 0.12 final release to avoid confusion.

C++11 update

background

C++11 is an update of the C++ language. It offers new functionalities, an extended standard library, etc. Zerocash had to be written with some c++11 libraries and some IBLT simulation code was written in c++11, which they want to recycle for the eventual core commit.

meeting comments

All changes needed for C++11 have gone in and it's ready to switch. Cfields talked to the travis team and all the features needed (trusty, caching) will be ready by the end of the month, so he proposes to wait until then to flip the switch. Wangchung from f2pool indicated he would not run code that required a C++ compiler. No one knows what his exact concerns are. Wumpus notes the gitian-built executables don't need any special OS support after the C++11 switch.

meeting conclusion

Wait for Travis update to switch to C++11. Talk to wangchung about his concerns.

EOL Policy / release cycles

background

In general bugfixes, translations and softforks are maintained for 2 major releases. btcdrak proposed to makes this official into a software life-cycle document for bitcoin core in order to inform users what to expect and developers what to code for. Pull request for this document. Given the huge 0.12 changelog jonasschnelli asks whether shorter release cycles might be a good idea. Currently there's a +/- 6 month release cycle.

meeting comments

Gmaxwell notes he doesn't know how useful the backports are given there's no feedback about them, but thinks the current policy is not bad. "I am observing the backports appear to be a waste of time. From a matter of principle, I think they are important, but the industry doesn't appear to agree." If no one is using the backports, it might not see sufficient testing. People generally agree with the 2 major releases approach.
The cyclelength also contributes to frustration and pressure to get features in, as it won't see the light of day for 6 months if it doesn't make the new release. For users it's not really better to have more frequent major releases, as upgrading may not always be a trivial process. There's also a lot of work going into releases. If the GUI and wallet where detached there could be more frequent releases for that part.

meeting conclusion

Policy will be: the final release of 0.X means end-of-life of 0.(X-2), which means 1 year of support on the 6-month cycle.
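The proposed rule is simple enough to sketch in a few lines of illustrative Python — the version numbers below are examples of the rule, not actual release history:

```python
# Sketch of the proposed EOL rule: the final release of 0.X ends support
# for 0.(X-2). On a ~6-month major-release cadence this gives each major
# series roughly one year of bugfix/softfork support.

def is_supported(series: int, latest_series: int) -> bool:
    """A 0.<series> release line is supported while it is one of the
    two most recent major series."""
    return latest_series - series < 2

# Example: once 0.14 is the latest series, 0.12 falls out of support.
assert is_supported(13, latest_series=14)
assert not is_supported(12, latest_series=14)
```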

Participants

wumpus Wladimir J. van der Laan gmaxwell Gregory Maxwell jonasshnelli Jonas Schnelli cfields Cory Fields btcdrak btcdrak sipa Pieter Wuille jtimon Jorge Timón maaku Mark Friedenbach kangx_ ??? Kang Zhang ??? sdaftuar Suhas Daftuar phantomcircuit Patrick Strateman CodeShark Eric Lombrozo bsm117532 Bob McElrath dkog ?dkog? jeremias ??? Jeremias Kangas ??? 

Comic relief

jonasschnelli: maaku: refactoring? We have a main.cpp. We don't need refactoring. :)
gmaxwell: jonasschnelli: can we move everything back into main.cpp? I'd save a lot of time grepping. :P
wumpus: #endmeeting
lightningbot`: Meeting ended Thu Jan 21 19:55:48 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
btcdrak: wumpus: hole in one
maaku: Did it right this time!
gmaxwell: Hurray!
submitted by G1lius to btc

[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?

https://en.wikipedia.org/wiki/Embarrassingly_parallel
In parallel computing, an embarrassingly parallel workload or problem is one where little or no effort is required to separate the problem into a number of parallel tasks. This is often the case where there exists no dependency (or communication) between those parallel tasks.
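As a minimal, hedged illustration of that definition (the transaction strings and double-SHA256 "txid" are just toy stand-ins), hashing many independent transactions is embarrassingly parallel: each task depends only on its own input, so the work splits across workers with no coordination at all.

```python
# Minimal illustration of an embarrassingly parallel workload: hashing many
# independent transactions. Each task touches only its own input, so the
# work can be split across workers with no communication between tasks.
from concurrent.futures import ThreadPoolExecutor
from hashlib import sha256

txs = [f"tx-{i}".encode() for i in range(1000)]

def txid(raw: bytes) -> str:
    # Bitcoin-style double-SHA256; each call is independent of all others.
    return sha256(sha256(raw).digest()).hexdigest()

with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(txid, txs))

# The parallel result is identical to the sequential one -- the defining
# property of an embarrassingly parallel problem.
assert parallel == [txid(t) for t in txs]
```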
What if the basic, accepted, bedrock components / behaviors of the Bitcoin architecture as we currently understand it:
... actually cannot scale without significantly modifying some of them?
Going by the never-ending unresolved debates of the past year, maybe we need to seriously consider that possibility.
Maybe we're doing it wrong.
Maybe we need to think more "outside the box".
Maybe instead of thinking about "hard forks", we should be thinking about "smart forks".
Maybe we can find a scaling solution which figures out a way to exploit something "embarrassingly parallel" about the above components and behaviors.
Even the supporters of most of the current scaling approaches (XT, LN, etc.), in their less guarded moments, have admitted that all of these approaches do actually involve some tradeoffs (downsides).
We seem to be at a dead end: all solutions proposed so far involve too many tradeoffs and downsides for one group or another; no single approach is able to gain "rough consensus"; and we're starting to give up on achieving massive, natural, "embarrassingly parallel" scaling...
...despite the fact that we have 700 petahashes of mining power, and hard drive space is dirt cheap, and BitTorrent manages to distribute many gigabytes of files around the world all the time, and Google is somehow able to (appear to) search billions of web pages in seconds (on commodity hardware)...
Is there a "sane" way to open up the debate so that any hard-fork(s) will also be as "smart" as possible?
Specifically:
  • (1) Could we significantly modify some components / behaviors in the above list to provide massive scaling, while still leaving the other components / behaviors untouched?
  • (2) If so, could we prioritize the components / behaviors along various criteria or dimensions, such as:
    • (a) more or less unmodifiable / must remain untouched
    • (b) more or less expensive / bottleneck
For example, we might decide that:
  • "bandwidth" (for relaying transactions in the mempool) is probably the most-constrained bottleneck (it's scarce and hard to get without moving - just ask miners on either side of China's Great Firewall, or Luke-Jr who apparently lives in some backwater in Florida with no bandwidth)
  • "hard disk space" (for storing transactions in the blockchain) is probably the least-constrained bottleneck: hard drive space is generally much cheaper and plentiful compared to bandwidth and processing power
  • Some aspects such as the "blockchain" itself might also be considered "least modifiable" - we do want to find a way to append more transactions onto it (which might take up more space on hard drives), but we could agree that we don't want to modify the basic structure of the blockchain.
Examples:
  • SegWit would refactor the merkle trees in the blockchain to separate out validation data from address and amount data, making various types of pruning more natural, which would save on hard drive space (for SPV clients), but I'm not sure if it would save on bandwidth.
  • IBLT (Invertible Bloom Lookup Tables), Thin Blocks, and Weak Blocks are all proposals (if I understand correctly) which involve "compressing" the transactions inside a block (using some clever hashing) - at least for purposes of relaying transactions, although (if I understand correctly) the full, non-compressed block would still eventually have to be stored in the blockchain.
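A toy sketch of the "compressed relay" idea from the second bullet (purely illustrative — real proposals like Thin Blocks and Compact Blocks differ in wire format and collision handling): relay only short transaction hashes, and let the receiver rebuild the block from its own mempool, fetching only what it's missing.

```python
# Toy sketch of "thin block" relay (illustrative only). The sender transmits
# short hashes instead of full transactions; the receiver rebuilds the block
# from its mempool and requests only the transactions it doesn't have.
from hashlib import sha256

def short_id(tx: bytes) -> bytes:
    return sha256(tx).digest()[:6]      # 6-byte short hash vs. a full tx

block_txs = [b"alice->bob:1BTC", b"carol->dave:2BTC", b"erin->frank:3BTC"]
thin_block = [short_id(tx) for tx in block_txs]

# Receiver's mempool already holds most of the block's transactions.
mempool = {short_id(tx): tx for tx in block_txs[:2]}

rebuilt, missing = [], []
for sid in thin_block:
    if sid in mempool:
        rebuilt.append(mempool[sid])
    else:
        missing.append(sid)             # would be fetched from the sender

assert len(missing) == 1                # only one tx crosses the wire
```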
I keep coming up with crazy buzzwords in my head like "Hierarchical Epochs" or "Mempool Sharding" or "Multiple, Disjoint Czars" (???).
Intuitively all of these approaches might involve somehow locally mining a bunch of low-value or high-kB or in-person transactions and then globally consolidating them into larger hierarchical / sharded structures using some "embarrassingly parallel" algorithm (along the lines of MapReduce?).
But maybe I'm just being seduced by my own buzzwords - because I'm having a hard time articulating what such approaches might concretely look like.
The only aspect of any such approach which I can identify as probably being "key" (to making the problem "embarrassingly parallel") would come from the Wikipedia quote at the start of this post:
there exists no dependency (or communication) between those parallel tasks
Applying this to Bitcoin, we might get the following basic requirement for parallelization:
there exist no outputs in common between those parallel (sets of) transactions
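A minimal sketch of what that requirement could look like in code (the data shapes here are my own assumption, not any actual proposal): group transactions so that any two transactions spending a common output land in the same shard, leaving the shards themselves mutually disjoint and processable in parallel.

```python
# Sketch of "mempool sharding": group transactions so that no two shards
# spend a common output (UTXO). Transactions that share an input must land
# in the same shard; the shards themselves are then independent and can be
# processed in parallel. Uses a simple union-find over spent outpoints.

def shard_by_inputs(txs):
    """txs: list of (txid, set_of_spent_outpoints). Returns list of shards."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Merge all outpoints spent by the same transaction into one group.
    for txid, inputs in txs:
        inputs = list(inputs)
        for outpoint in inputs[1:]:
            union(inputs[0], outpoint)

    # A transaction's shard is the group of its (merged) outpoints.
    shards = {}
    for txid, inputs in txs:
        root = find(next(iter(inputs)))
        shards.setdefault(root, []).append(txid)
    return list(shards.values())

txs = [("t1", {"o1"}), ("t2", {"o1", "o2"}), ("t3", {"o3"})]
shards = shard_by_inputs(txs)
# t1 and t2 conflict on o1, so they share a shard; t3 is independent.
assert sorted(map(sorted, shards)) == [["t1", "t2"], ["t3"]]
```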
TL;DR: Could there be some fairly natural ("embarrassingly parallel") way of:
  • decomposing the massive number of transactions in the mempool / in an epoch / among miners;
  • into hierarchical trees, or non-overlapping (disjoint) "shards";
  • and then recomposing them (and serializing them) as we append them to the blockchain?
submitted by ydtm to btc

[brainstorming bitcoin scaling] Multiple Czars per Epoch: Is there some way we could better exploit miners' massive petahashes of processing power to find some approaches to massive scaling solutions?

TL;DR: During each 10-minute period, instead of appending a SINGLE block, append MULTIPLE mutually compatible ie non-overlapping blocks (eg, use IBLT to quickly and cheaply prove that the intersection of the sets of UTXOs being used in all these blocks is EMPTY).
Czar for an Epoch
The Bitcoin protocol involves solving an SHA hashing puzzle at the current mining difficulty to select one "czar" who gets to append their current block to the chain during the current "epoch".[1]
[1] This suggestive terminology of "czar" and "epoch" comes from the Cornell Bitcoin researchers who recently proposed Bitcoin-NG, where instead of electing a czar-cum-block for the current epoch the network would elect a czar-sans-block for the current epoch. This would drastically reduce the amount of network traffic for the election - but would also require "trusting" that czar in various ways (that he won't double-spend in the block he reveals now after his election, or that he won't become the target for a DDoS).
Architecturally, it seems that the most obvious bottlenecks in the existing architecture are this single czar and the single block they append to the chain.
What if we could figure out a way to append more blocks faster to the chain, while maintaining its structure?
What if we tried using something like IBLT to elect multiple czars per epoch?
Here's an approach I've been brainstorming, which I know might be totally crazy.
Hopefully some of the experts out there on stuff like IBLT (Invertible Bloom Lookup Tables) and related topics could weigh in.
What if we elected multiple czars during an epoch - where each czar is incentivized to locally do whatever work they can to minimize the "overlap" (ie, the intersection) of their block (ie, the UTXOs in their block) with any other blocks being submitted by other "czars" for this "epoch"?
This might work as follows:
  • Use a Bloom Filter / IBLT to check that the intersection of two sets of UTXOs is empty.
  • This check almost never gives a false-positive, and never gives a false-negative;
  • Every epoch, in addition to the "SHA minimum-length zero-prefix hash lottery", we would also run an "IBLT maximal-non-intersecting-UTXOs hash lottery" (after the normal lottery) to elect multiple czars (each submitting a block) per epoch / 10-minute period. The "multiple czars for this epoch" would be: all miners who submit a block which is mutually disjoint from all other submitted blocks (in terms of UTXOs used). All these non-intersecting blocks would get appended to the current chain (and the append order shouldn't matter, if there's also no intersection among the receiving addresses =).
https://en.wikipedia.org/wiki/Bloom_filter#The_union_and_intersection_of_sets
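A hedged sketch of that disjointness check using plain Bloom filters (IBLTs are more powerful, but the one-sided error property is the same; the parameters M and K below are arbitrary choices of mine): if the bitwise AND of two filters' bit arrays is zero, the underlying UTXO sets are provably disjoint, while a nonzero AND might still be a hash-collision false alarm.

```python
# Disjointness check with plain Bloom filters built over UTXO sets, using
# the same hash functions for both filters. A shared element sets identical
# bits in both filters, so a zero AND proves the sets are disjoint; a
# nonzero AND may occasionally be a false alarm from colliding bits.
from hashlib import sha256

M, K = 1 << 20, 4                    # filter size (bits) and hash count

def bloom(items):
    bits = 0
    for it in items:
        for k in range(K):
            h = int.from_bytes(sha256(b"%d:%s" % (k, it)).digest(), "big")
            bits |= 1 << (h % M)
    return bits

def maybe_overlap(a_bits, b_bits):
    return (a_bits & b_bits) != 0    # False => definitely disjoint

block_a = bloom([b"utxo-1", b"utxo-2"])
block_b = bloom([b"utxo-3", b"utxo-4"])
block_c = bloom([b"utxo-2", b"utxo-5"])   # shares utxo-2 with block_a

# A real overlap is ALWAYS detected; "no overlap" is always trustworthy.
assert maybe_overlap(block_a, block_c)
# Disjoint sets almost never collide at these parameters (~2^-14 odds).
assert not maybe_overlap(block_a, block_b)
```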
The current lone winner: the "SHA longest-zero-prefix lottery" block
Basically, the block which currently wins the lottery could still win the lottery (this is what I was calling the "SHA minimum-length zero-prefix" lottery above) - because it has so many zeros at the front of its SHA hash. Such an "SHA longest-zero-prefix lottery" block could indeed contain UTXOs which conflict with other blocks - but it would override all those other blocks, and be the only "SHA longest-zero-prefix lottery" block appended to the chain for the current epoch.
The additional new winners: multiple "IBLT biggest-non-intersecting BLOCKS" (PLURAL)
Now there could also be a bunch of other blocks (which were not the unique block winning the above SHA lottery - indeed, they might not have to do any SHA hashing at all), for which it has been proven that no other miner is submitting blocks using these same UTXOs (using IBLT to quickly and inexpensively - with low bandwidth - prove this property of non-intersection with the other blocks).
So theoretically many blocks (from many czars) could be appended during an epoch - vastly scaling the system.
Weird beneficial side-effects?
(1) "Mine your own sales"
If you're Starbucks (or some other retailer who wants to use zero-conf) you could set up a system where your customers could submit their transactions directly to you - and then you mine them yourself.
In other words, your customers wouldn't even have to broadcast the transaction from their smartphone - they could just use some kind of near-field communication to transmit the signed transaction to you, the vendor. You would then broadcast all these transactions to the network - using your better connectivity - and you would normally be 100% certain that nobody else was broadcasting blocks to the network using the same UTXOs. That assumption would be strengthened if people's smartphone wallet software generally came from reliable sources such as the Google and Apple app stores - and if we as a community discourage programmers from releasing apps which support double-spending =).
This would have the immense benefit of allowing the Starbucks Mining Pool to guarantee that its batch / block of transactions has zero intersection (is mutually disjoint) with all other blocks being mined for that period.
It would also significantly decentralize mining, and align the interests of miners and vendors (since in many cases, a vendor would also want to be a miner - under the slogan "mine your own sales").
(2) "Mine locally, append globally"
If you're on one side of the Great Firewall of China, you could give more preference to mining the transactions that are "closest" to you, and less preference to mining the transactions that are "farthest" from you (in terms of network latency).
This would impose a kind of natural "geo-sharding" on the network, where miners prefer mining the transactions which are "closest" to them.
(3) "Scale naturally"
The throughput of the overall Bitcoin network could probably "scale" very naturally. It might not even matter if we kept the 1 MB block size limit - the system could simply scale by supporting the appending of more and more of these 1 MB blocks faster and faster per 10-minute epoch - as long as the total set of blocks to be appended during the epoch all have mutually disjoint (non-intersecting) sets of UTXOs.
(4) "No IBLT false-negatives means no accidental IBLT double-spends"
IBLTs are probabilistic - ie, they do not provide a 100% safe or guaranteed algorithm for determining if the intersection of two sets contains an element, or is empty.
However, the imperfections in the probabilistic nature of IBLTs are (fortunately) tilted in our favor when it comes to trying to append multiple blocks during the same epoch while preventing double spends.
This is because:
  • False-positives are almost impossible, but
  • False-negatives are totally impossible.
So:
  • in the worst case, IBLTs might RARELY incorrectly tell us that two blocks are unsafe to both append to the chain (ie, that the intersection of their UTXOs is non-empty)
  • but IBLTs will NEVER incorrectly tell us that two blocks are both safe to append (ie, that their intersection is empty).
This is exactly the kind of behavior we want.
Bonus if we could figure out a way to harness IBLT hashing the same way we currently harness SHA hashing (eg, have miners increment a "nonce" with each IBLT hash attempt, until all IBLT false positives are eliminated which incorrectly claimed that two blocks had intersecting UTXO sets).
submitted by ydtm to btc
