Mimblewimble in IoT—Implementing privacy and anonymity in INT Transactions
2017 and 2018 were years focused on the topic of scaling. Coins forked and projects were hyped with this word as their sole mantra. What that debate brought us were solutions that, paired with a plan for the future, satisfy the current need. The focus of the years to come will be anonymity and fungibility in mass adoption. In the quickly evolving world of connected data, privacy is becoming a topic of immediate importance. As it stands, we trust our privacy to centralized corporations, where safety is ensured only by the strength of our passwords and by how much effort an attacker dedicates to breaking them. As we grow into the new age of the Internet, where all things are connected, trustless and cryptographic privacy must be at the base of everything that rests upon it. In this future, what is at risk is not just photographs and credit card numbers; it is everything you interact with and the data it collects. If the goal is to do this in a decentralized and trustless network, the challenge will be finding solutions whose range of applicability equals the diversity of the ecosystem and whose ability to scale matches what is predicted. Understanding this, INT has begun research into implementing two different privacy protocols in its network that address two of the major necessities of IoT: scalable private transactions and private smart contracts.
One of the privacy protocols INT is looking into is Mimblewimble. Mimblewimble is a fairly new and novel implementation of the same elements of elliptic-curve cryptography that serve as the basis of most cryptocurrencies. In the bitcoin-wizards IRC channel in August 2016, an anonymous user posted a Tor link to a whitepaper claiming "an idea for improving privacy in bitcoin." What followed was a blockchain proposal that uses a transaction construction radically different from anything seen before, creating one of the most elegant uses of elliptic-curve cryptography seen to date. While the whitepaper was enough to lay out the ideas and the reasoning supporting the theory, it contained no explicit mathematics or security analysis. Andrew Poelstra, a mathematician and the Director of Research at Blockstream, immediately began analyzing its merits and, over the next two months, produced a detailed whitepaper [Poel16] outlining the cryptography, fundamental theorems, and protocol involved in creating a standalone blockchain. What the protocol sets out to do is wholly conceal the values in transactions and eliminate the need for addresses, while simultaneously solving the scaling issue.
Let's say you want to hide the amount that you are sending. One great way to hide information that is well known and quick: hashing! Hashing allows you to deterministically produce a random-looking string of constant length, regardless of the size of the input, that is impossible to reverse. We could then hash the amount and send that in the transaction. X = SHA256(amount) or 4A44DC15364204A80FE80E9039455CC1608281820FE2B24F1E5233ADE6AF1DD5 = SHA256(10) But since hashing is deterministic, all someone would have to do is catalog the hashes of all possible amounts, and the whole purpose of doing this in the first place would be nullified. So instead of just hashing the amount, let's first multiply this amount by a private blinding factor. If kept private, there is no way of knowing the amount inside the hash. X = SHA256(blinding factor * amount) This is called a commitment: you are committing to a value without revealing it, and in a way that it cannot be changed without changing the resultant value of the commitment. But how then would a node validate a transaction using this commitment scheme? At the very least, we need to prove that two conditions are satisfied: one, you have enough coins, and two, you are not creating coins in the process. The way most protocols validate this is by consuming a previous input transaction (or multiple) and, in the process, creating an output that does not exceed the sum of the inputs. If we hash the values and have no way to validate this condition, one could create coins out of thin air.
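The naive hash commitment described above can be sketched in a few lines of Python (the specific blinding factor here is purely illustrative; note that hashing the decimal string "10" reproduces the hash quoted in the text):

```python
import hashlib

def commit(blinding_factor: int, amount: int) -> str:
    """Naive hash commitment: SHA-256 of (blinding_factor * amount)."""
    return hashlib.sha256(str(blinding_factor * amount).encode()).hexdigest()

# Without a blinding factor, the commitment is trivially reversible by
# cataloguing the hashes of all likely amounts:
assert hashlib.sha256(b"10").hexdigest() == \
    "4a44dc15364204a80fe80e9039455cc1608281820fe2b24f1e5233ade6af1dd5"

# With a private blinding factor, the preimage can no longer be guessed
# from a catalogue of amounts alone:
print(commit(blinding_factor=87234, amount=10))
```

As the text goes on to explain, this scheme hides the amount but leaves a node with no way to verify that inputs and outputs balance, which is why it is replaced by Pedersen commitments.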
input(commit(bf,10), Alice) -> output(commit(bf,9), Bob), outputchange(commit(bf,5), Alice) Or input(4A44DC15364204A80FE80E9039455CC1608281820FE2B24F1E5233ADE6AF1DD5, Alice) -> output(19581E27DE7CED00FF1CE50B2047E7A567C76B1CBAEBABE5EF03F7C3017BB5B7, Bob) output(EF2D127DE37B942BAAD06145E54B0C619A1F22327B2EBBCFBEC78F5564AFE39D, Alice) As shown above, the latter hashed values look just as valid as anything else, yet they result in Alice creating 4 coins and receiving them as change in her own transaction. In any transaction, the sum of the inputs must equal the sum of the outputs. We need some way of doing mathematics on these hashed values to be able to prove: commit(bf1,x) = commit(bf2,y1) + commit(bf3,y2) which, if it is a valid transaction, would give: commit(bf1,x) - commit(bf2+bf3,y1+y2) = commit(bf1-(bf2+bf3),0) or just a commit of the leftover blinding factors. By the nature of hashing algorithms, this isn't possible. To verify this we would have to make all blinding factors and amounts public, but in doing so, nothing is private. How then can we make a value public that is derived from a private value, in such a way that you cannot reverse-engineer the private value but can still validate that it satisfies some condition? It sounds a bit like public- and private-key cryptography… What we learned in our primer on elliptic-curve cryptography was that by using an elliptic curve to define our number space, we can take a point on the curve, G, and multiply it by any number, x, and what we get is another valid point, P, on the same curve. This calculation is quick, but given the resultant point and the publicly known generator point G, it is practically impossible to figure out what multiplier was used. This way we can use the point P as the public key and the number x as the private key. Interestingly, these points also have the curious property of being additive and commutative.
If you take point P, which is x • G, and add to it point Q, which is y • G, the resulting point W = P + Q is equal to the point made with the combined numbers x + y. So:

W = P + Q = x•G + y•G = (x + y)•G

This property, homomorphism, allows us to do math with numbers we do not know. So instead of using the raw amount and blinding factor in our commit, we use each of them multiplied by a known generator point on an elliptic curve. Let's call the blinding factors r and the amounts v, and use G and H as generator points on the same elliptic curve (without going deep into Schnorr signatures, we will just accept that we have to use two different points for the blinding-factor and value commits for validation purposes). Our commit can now be defined as:

commit(r, v) = r•G + v•H

This is called a Pedersen Commitment and serves as the core of all Confidential Transactions. Applying this to our previous commitments:

commit(ri, vi) = ri•G + vi•H, commit(ro, vo) = ro•G + vo•H, commit(rco, vco) = rco•G + vco•H

and using the commutative properties, the sum of the outputs minus the input is:

commit(ro, vo) + commit(rco, vco) − commit(ri, vi) = (ro + rco − ri)•G + (vo + vco − vi)•H

which, for a valid transaction, would equal:

(ro + rco − ri)•G + 0•H

with ri, vi being the values for the input, ro, vo being the values for the output, and rco, vco being the values for the change output.
This resultant difference is just a commit to the excess blinding factor, also called a commitment-to-zero:

commit(ro + rco − ri, 0) = (ro + rco − ri)•G + 0•H

You can see that in any case where the blinding factors were selected randomly, the commit-to-zero will be non-zero and, in fact, is still a valid point on the elliptic curve with public key

(ro + rco − ri)•G

and private key being the difference of the blinding factors. So, if the sum of the inputs minus the sum of the outputs produces a valid public key on the curve, you know that the values have balanced to zero and no coins were created. If the resultant difference is not of the form z•G for some excess blinding factor z, it would not be a valid public key on the curve, and we would know that it is not a balanced transaction. To prove this, the transaction is then signed with this public key, showing that the transaction is balanced and that all blinding factors are known, and in the process no information about the transaction has been revealed (the step-by-step details of the signature process can be read in [Arvan19]). All the above work assumed the numbers were positive. One could create an equally valid balanced transaction with negative numbers, allowing users to create new coins with every transaction. To prevent this, each transaction must be accompanied by a range proof: a zero-knowledge argument of knowledge proving that a private committed value lies within a predetermined range of values. Mimblewimble, as well as Monero, uses Bulletproofs, a new way of calculating the proof that cuts down the size of the transaction by 80-90%.
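To make the balance check concrete, here is a toy Pedersen-commitment sketch in Python over the tiny curve y² = x³ + 7 mod 223 (the generator points and all scalar values are illustrative choices for this demonstration; real implementations use 256-bit curves such as secp256k1 and must also enforce range proofs):

```python
# Toy Pedersen-commitment balance check on y^2 = x^3 + 7 over F_223.
P_MOD = 223  # toy-sized field prime

def ec_add(p1, p2):
    """Point addition; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    x1, y1 = p1
    x2, y2 = p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # p1 == -p2
    if p1 == p2:
        s = (3 * x1 * x1) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def ec_mul(k, point):
    """Double-and-add scalar multiplication; negative k negates the point."""
    if point is None:
        return None
    if k < 0:
        k, point = -k, (point[0], (-point[1]) % P_MOD)
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

G = (192, 105)  # generator for blinding factors (a point on the curve)
H = (17, 56)    # independent generator for values (also on the curve)

def commit(r, v):
    """Pedersen commitment r*G + v*H."""
    return ec_add(ec_mul(r, G), ec_mul(v, H))

# Alice spends a 10-coin input, sending 9 to Bob and 1 back as change.
r_in, r_out, r_chg = 54, 23, 11
c_in  = commit(r_in, 10)
c_out = commit(r_out, 9)
c_chg = commit(r_chg, 1)

# inputs - outputs: the values cancel, leaving only the excess blinding
# factor, so the difference is a valid public key (excess * G).
excess = r_in - r_out - r_chg
diff = ec_add(c_in, ec_mul(-1, ec_add(c_out, c_chg)))
assert diff == ec_mul(excess, G)  # commit-to-zero: transaction balances
```

If the output amounts summed to anything other than 10, the v•H terms would not cancel and the difference would no longer equal excess•G, so the signature check against that public key would fail.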
*Average sizes of transactions seen in current networks, or assuming a 2-input, 2.5-output average tx size for MW

Up to this point, the protocol described is more or less identical between Mimblewimble and Monero. The point of deviation is how transactions are signed. In Monero, there are two sets of keys/addresses: the spend keys and the view keys. The spend key is used to generate and sign transactions, while the view key is used to "receive" transactions. Transactions are signed with what is called a ring signature, which is derived from the output being spent, proving that one key out of the group of keys possesses the spend key. This is done by creating a combined Schnorr signature with your private key and a mix of decoy signers taken from the public keys of previous transactions. These decoy signers are all mathematically equally valid, which makes it impossible to determine which one is the real signer. Since Monero uses the Pedersen Commitments shown above, the addresses are never publicly visible but are just used for claiming and signing transactions and for generating blinding factors. Mimblewimble, on the other hand, does not use addresses of any type. Yes, that's right: no addresses. This is the true brilliance of the protocol. What Jedusor proved was that the blinding factors within the Pedersen commit and the commit-to-zero can be used as single-use public/private key pairs to create and sign transactions. All address-based protocols using elliptic-curve cryptography generate public/private key pairs in essentially the same way: by multiplying a very large random number (k_priv) by a point (G) on an elliptic curve, the result (K_pub) is another valid point on the same curve.

K_pub = k_priv • G

This serves as the core of all address generation. Does that look familiar?
Remember this commit from above:

commit(r, v) = r•G + v•H

Each blinding factor multiplied by the generator point G is exactly that! r•G is a public key with private key r! So instead of using addresses, we can use these blinding factors as proof that we own the inputs and outputs, by using these values to build the signature. This seemingly minor change removes the linkability of addresses and the need for a scriptSig process to check signature validity, which greatly simplifies the structure and size of Confidential Transactions. Of course, this means (at this time) that the transaction process requires interaction between the parties to create signatures.
Even though all addresses and amounts are now hidden, there is still some information that can be gathered from the transactions. In the above transaction format, it is still clear which outputs are consumed and what comes out of the transaction. This "transaction graph" can reveal information about the owners of the blinding factors and build a picture of the user based on observed transaction activity. In order to further hide and condense information, Mimblewimble implements an idea from Greg Maxwell called CoinJoin [Max13], which was originally developed for use in Bitcoin. CoinJoin is a trustless method for combining the inputs and outputs of multiple transactions, joining them into a single transaction. What this does is mask which sender paid which recipient. To accomplish this in Bitcoin, users or wallets must interact to join transactions of like amounts so you cannot distinguish one from the other. (In such a CoinJoin tx, for example, 3 addresses fund 4 outputs with no way of correlating who sent what.) If you were able to combine signatures without sharing private keys, you could create a combined signature for many transactions (like ring signatures) and not be bound by needing like amounts. In Mimblewimble, the balance calculation for one transaction or many transactions still works out to a valid commit-to-zero; all we would need is a combined signature for the combined transaction. Mimblewimble's Schnorr-based transaction construction innately enables these combined signatures. Using "one-way aggregate signatures" (OWAS), nodes can combine transactions, while creating the block, into a single transaction with one aggregate signature. Using this, Mimblewimble joins all transactions at the block level, effectively creating each block as one big transaction of all inputs consumed and all outputs created.
This simultaneously blurs the transaction graph and has the power to remove in-between transactions that were spent during the block, cutting down the total size of blocks and the size of the blockchain.
We can take this one step further. To validate this fully "joined" block, the node would sum all of the output commitments, then subtract all the input commitments, and validate that the result is a valid commit-to-zero. But what is stopping us from joining more than just the transactions within a block? We could theoretically combine two blocks, removing any outputs that are created and spent within them, and the result is again a valid transaction of just unspent commitments and nothing else. We could then do this all the way back to the genesis block, reducing the whole blockchain to just a state of unspent commitments. This is called cut-through. When doing this, we have no need to retain the range proofs of spent outputs; they have been verified and can be discarded. This lends itself to a massive reduction in blockchain growth, from O(number of txs) to O(number of unspent outputs). To illustrate the impact of this, imagine if Mimblewimble had been implemented in the Bitcoin network from the beginning. With the network at block 576,000, the blockchain is about 210 GB, with 413,675,000 total transactions and 55,400,000 total unspent outputs. In Mimblewimble, transaction outputs are about 5 kB (range proof ~5 kB plus Pedersen commit ~33 bytes), transaction inputs are about 32 bytes, transaction proofs are about 105 bytes (commit-to-zero and signature), block headers are about 250 bytes (Merkle proof and PoW), and non-confidential transaction data is negligible. This sums up to a staggering 5.3 TB for a full-sync blockchain of all information, with "only" 279 GB of that being the UTXOs. When we cut through, we don't want to lose the whole history of transactions, so we retain the proofs for all transactions as well as the UTXO set and all block headers. This reduces the blockchain to 322 GB, a 94% reduction in size.
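The size figures above can be reproduced with quick arithmetic (all per-item sizes and the 2-input/2.5-output average are the assumptions stated in the text, not measured values):

```python
# Size estimate for a hypothetical Mimblewimble-from-genesis Bitcoin.
TXS      = 413_675_000   # total transactions at block 576,000
UTXOS    = 55_400_000    # total unspent outputs
BLOCKS   = 576_000
OUTPUT_B = 5_000 + 33    # range proof ~5 kB + Pedersen commit ~33 B
INPUT_B  = 32
KERNEL_B = 105           # commit-to-zero + signature
HEADER_B = 250           # Merkle proof + PoW

GB = 1_000_000_000

# Full chain: ~2 inputs and ~2.5 outputs per transaction on average.
full = TXS * (2.5 * OUTPUT_B + 2 * INPUT_B + KERNEL_B) + BLOCKS * HEADER_B
utxo_set = UTXOS * OUTPUT_B

# Cut-through keeps the kernels (proofs), the UTXO set, and all headers.
cut_through = TXS * KERNEL_B + utxo_set + BLOCKS * HEADER_B

print(f"full sync:   {full / GB / 1000:.1f} TB")   # ~5.3 TB
print(f"UTXO set:    {utxo_set / GB:.0f} GB")      # ~279 GB
print(f"cut-through: {cut_through / GB:.0f} GB")   # ~322 GB
```

Running this confirms the ~5.3 TB full-sync, ~279 GB UTXO, and ~322 GB cut-through figures quoted above.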
The result is basically a total consensus state of only that which has not been spent, with a full proof history, greatly reducing the sync time for new nodes. If Bulletproofs are implemented, the range proof is reduced from over 5 kB to less than 1 kB, dropping the UTXO set in the above example from 279 GB to 57 GB. *Based on the assumptions and calculations above. There is also an interesting implication for PoS blockchains with explicit finality. Once finality has been obtained, or at some arbitrary blockchain depth beyond it, there is no longer any need to retain range proofs. Those transactions have been validated, the consensus state has been built upon them, and they make up the vast majority of the blockchain size. If we say in this example that finality happens at 100 blocks deep, and assume that 10% of the UTXO set is pre-finality, this would reduce the blockchain size by another 250 GB, resulting in a full-sync weight of 73 GB, a 98.6% reduction (down 65% even from its current state). Imagine this: a 73 GB blockchain for 10 years of fully anonymous Bitcoin transactions, at one-third the current blockchain size. It's important to note that cut-through has no impact on privacy or security. Each node may choose whether or not to store the entire chain without performing any cut-through, with the only cost being increased disk storage requirements. Cut-through is purely a scalability feature, resulting in Mimblewimble-based blockchains being on average three times smaller than Bitcoin and fifteen times smaller than Monero (even with the recent implementation of Bulletproofs).
What does this mean for INT and IoT?
Transactions within an IoT network require speed, scaling to tremendous volumes, and adaptability to a variety of uses and devices, with the ability to keep sensitive information private. Until now, IoT networks have focused solely on scaling, creating networks that can transact with tremendous volume, with varying degrees of decentralization and no focus on privacy. Without privacy, these networks will just make those who use them targets, feeding their attackers the ammunition. Mimblewimble's revolutionary use of elliptic-curve cryptography brings us a privacy protocol using Pedersen Commitments for fully confidential transactions and, in the process, removes the dependence on addresses and private keys as we are used to them. This transaction framework, combined with Bulletproofs, brings lightweight privacy and anonymity on par with Monero in a blockchain that is 15 times smaller, utilizing full cut-through. This provides a solution for private transactions that fits the scalability requirements of the INT network. The Mimblewimble protocol has been implemented in two different live networks, Grin and Beam. Both are purely transactional networks, focused on the private and anonymous transfer of value. Grin has taken a Bitcoin-like approach, with community-funded development and no pre-mine or founders' reward, while Beam has the mindset of a startup, with VC funding and a large emphasis on a user-friendly experience. INT, on the other hand, is researching implementing this protocol either on the main chain, making all INT asset transfers private, or as an optional add-on subchain, allowing users to move their INT between the non-private chain and the private chain at will.
Where it falls short?
What makes this protocol revolutionary is the same thing that limits it. Almost all protocols, like Bitcoin, Ethereum, etc., use a basic scripting language, with function calls in the actual transaction data that tell the verifier which script to use to validate it. In the simplest case, the data provided with the input is called the "scriptSig" and contains two pieces of data: the signature that matches the transaction and the public key that proves you own the private key that created it. The output scripts use this provided data, with the logic passed along with it, to show the validator how to prove the spender is allowed to spend it. Using the public key provided, the validator hashes it, checks that it matches the hashed public key in the output, and, if it does, checks that the signature provided matches the input signature. This verification protocol allows some limited scripting ability, in being able to tell validators what to do with the data provided. The Bitcoin network can be updated with new functions, allowing it to adapt to new processes or data. Using this, the Bitcoin protocol can verify multiple signatures, lock transactions for a defined timespan, and do more complex things like locking bitcoin in an account until some outside action is taken. In order to achieve more widely applicable public smart contracts like those in Ethereum, data must either be provided in a non-shielded way, or shielded proofs must be created that prove you satisfy the smart-contract conditions. In Mimblewimble, as a consequence of using the blinding factors as the key pairs, which greatly simplifies the signature verification process, there are no normal scripting opportunities in the base protocol. What is recorded on the blockchain is just:
Inputs used — which are old commits consumed
New outputs — which are new commits to publish
Transaction kernel — which contains the signature for the transaction with excess blinding factor, transaction fee, and lock_height.
And none of these items can be related to one another, and they contain no useful data to drive action. There are some proposals for creative solutions to this problem using so-called scriptless scripts. By utilizing the properties of the Schnorr signatures used, you can achieve multisig transactions and more complex condition-based transactions like atomic cross-chain swaps, and maybe even Lightning Network-style state channels. Still, this is not enough complexity to fulfill all the needs of IoT smart contracts. And on top of it all, implementing cut-through would remove transactions that might be smart contracts or that other contracts rely on. So you can see that in this design we can successfully hide values and ownership, but only for a single-dimensional data point: quantity. Doing anything more complex than transferring ownership of coins is beyond its capabilities. But the proof of ownership and commit-to-zero is really just a specific type of zero-knowledge (ZK) proof. So, what if, instead of blinding a value, we blind a proof? Part 2 of this series will cover implementing private smart contracts with zkSNARKs.
Estimating the marginal cost of a transaction on the Bitcoin (Cash) network
Recently, the mempool has not been clearing with every block found. Should we immediately raise the block size? Perhaps put plans for easy relay of sub-satoshi/byte transactions on hold? Assumptions:
The marginal cost is made up of: hard-drive space (replicated worldwide) and bandwidth for relaying (also worldwide -- partially a fixed cost?)
Expanding the UTXO set incurs a further penalty, based on the price of RAM.
A mature network needs about 1000 mining nodes worldwide. Further, these mining nodes avoid the "tragedy of the commons" problem by charging a proportionally higher fee if they are less likely to get a block.
The following are fixed costs: hashing equipment and maintenance, space rental, electricity, cooling, staff, etc.
I will neglect the CPU time required to validate a transaction since I have no good way to estimate that.
Assuming 1 CAD is 80 cents USD
Based on this discussion thread, I am going to assume 8 GB blocks to approximate the limit as the number of transactions goes to infinity. Note: the chassis I chose only supports 300 TB after parity.
Step 1: Find the price of storing a transaction. Searching NCIX:
As you can see, miners have a strong incentive to offer free UTXO-consolidation transactions, and to require bulk UTXO-fanning transactions to pay a fee of 494.86 sat/kB -- about 0.5 sat/byte ((0.01249 USD/kB) / (2523.96 USD/BCH) * 100,000,000 sat/BCH). Fees are nowhere near that high due to the block subsidy. For an 8 MB block: 1,250,000,000 satoshis / 8,000 kB -> 156,250 sat/kB, or more conventionally, about 156 sat/byte. Note that the block subsidy per kB goes down with larger block sizes. Step 2: Estimate bandwidth costs. Disclaimer: I am not too familiar with commercial bandwidth plans.
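The fee and subsidy arithmetic above can be checked directly (the storage price and BCH price are the assumed figures from the post):

```python
# Back-of-envelope check of the storage-fee and subsidy figures.
usd_per_kb_stored = 0.01249   # assumed storage cost, USD per kB
usd_per_bch = 2523.96         # assumed BCH price at the time
sat_per_bch = 100_000_000

fee_sat_per_kb = usd_per_kb_stored / usd_per_bch * sat_per_bch
print(f"{fee_sat_per_kb:.2f} sat/kB")     # ~494.86 sat/kB, ~0.5 sat/byte

# Block subsidy per kB for an 8 MB (8,000 kB) block, 12.5 BCH subsidy:
subsidy_sat = int(12.5 * sat_per_bch)     # 1,250,000,000 sat
subsidy_per_kb = subsidy_sat / 8_000
print(f"{subsidy_per_kb:,.0f} sat/kB")    # 156,250 sat/kB (~156 sat/byte)
```

This reproduces the 494.86 sat/kB storage-cost fee and the 156,250 sat/kB subsidy quoted above.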
According to slide 19 of this PDF document, you should be able to get IP transit for less than $10/Mbps in major cities (10-gigabit Ethernet pricing).
Let's assume you budget 1Gbps of IP transit for your full node. You are also sharing with at least 8 peers. -> $10,000 USD/month
8 GB blocks work out to 34.56 TB/month; x8 -> 276 TB/month
That implies a utilization of: 64GB/600s*10bits/byte -> 1.066 Gbps -> we need a 10Gbps connection.
Cost per kB: ($100,000/month) / (34.56 TB/month x 1,000,000,000 kB/TB) -> 2.89 microdollars/kB (rounding error, unless I messed up the math)
Edit: If we assume the miners average their costs (like earlier), x1000: 2.89 mills/kB (57x the storage costs)
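The bandwidth arithmetic above can be sanity-checked as follows (the $10/Mbps/month transit price, 8-peer relay, and 10-bits-per-byte overhead are the post's assumptions):

```python
# Bandwidth estimate for 8 GB blocks relayed to 8 peers.
block_gb = 8
peers = 8
blocks_per_month = 30 * 24 * 6        # one block per 600 s

own_tb_per_month = block_gb * blocks_per_month / 1000   # ~34.56 TB
relay_tb = own_tb_per_month * peers                      # ~276 TB
print(f"{own_tb_per_month:.2f} TB/month x{peers} -> {relay_tb:.0f} TB/month")

# Sustained rate: 8 GB x 8 peers per 600 s, ~10 bits/byte with overhead.
gbps = 64e9 * 10 / 600 / 1e9
print(f"{gbps:.3f} Gbps")             # ~1.07 Gbps -> budget a 10 Gbps link

# Cost per kB: $100,000/month for 10 Gbps, spread over the node's own
# 34.56 TB/month of block data (1 TB = 1e9 kB).
cost_usd_per_kb = 100_000 / (own_tb_per_month * 1e9)
print(f"{cost_usd_per_kb * 1e6:.2f} microdollars/kB")   # ~2.89
```

This reproduces the ~1.07 Gbps utilization, ~276 TB/month relay volume, and 2.89 microdollars/kB figures above.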
Exercise for the reader: redo these calculations for hobbyist hardware and internet connections. You would probably have to assume a smaller block size, such as 100 MB. Disclaimer: I later learned the site I was using for prices (NCIX) had gone bankrupt. I'm not sure how much that skews the prices.
BIP99½ - An Optimized Procedure to Increase the Block Size Limit
BIP: 99½
Title: An Optimized Procedure to Increase the Block Size Limit
Author: Jorge Stolfi (jstolfi)
Status: Crufty Draft
Created: 2015-08-30

EDIT: Changed the critical block number from 385000 to 390000 (~2016-01-02). EDIT2: Slight wording changes ("hopefully" -> "assuming", as per tsontar). EDIT3: Changed the critical block number again to 395000 (~2016-02-06). Note that the traffic has increased faster than expected, so all predictions would have to be updated.

ABSTRACT

This BIP proposes setting the maximum block size to 8 MB starting with block number 395000.

MOTIVATION

This proposal aims to postpone by a few years the imminent congestion of the Bitcoin network, which is expected to occur in 2016 if traffic continues to increase at the present rate. It also aims to reduce the risk of a crippling "spam attack" that could delay a large fraction of the legitimate traffic for hours or days at a relatively modest cost to the attacker.

Congestion

The current average traffic T is ~120'000 transactions issued by all clients per day (~1.38 tx/s, ~0.45 MB/block, ~830 tx/block assuming ~530 bytes/tx). The maximum network capacity C with 1 MB blocks, revealed by the recent "stress tests", is ~200'000 tx/day (~2.32 tx/s, ~0.75 MB/block, ~1390 tx/block). Presumably, the main reason why it is less than 1 MB/block is that certain shortcuts taken by miners often force them to mine empty blocks. Note that the traffic now is 60% of the effective capacity. Since the traffic rate has weekly, daily, and random fluctuations of several tens of percent, recurrent "traffic jams" (when T is higher than C for several tens of minutes) will start to occur when the average daily traffic is still well below the capacity -- say 80% (160'000 tx/day) or even less. For transactions issued during a traffic jam, the average wait time for first confirmation, which is normally 10-15 minutes, will jump to hours or even days.
Fee adjustments may change the order in which individual transactions are confirmed, but the average delay will not be reduced by a single second. Over the past 12 months, the traffic has approximately doubled, from ~60'000 tx/day. The growth seems to be linear, at a rate of 5000 tx/day per month. If the growth continues to be linear, it should reach 160'000 tx/day in ~8 months (before May 2016). If the growth is assumed to be exponential, it should reach that level in ~5 months, in February 2016. If the maximum block size were lifted to 8 MB, assuming that empty and partial blocks continue to be mined in about the same proportion as today, the effective capacity of the network should rise in proportion, to ~6 MB/block (1'600'000 tx/day, ~18.5 tx/s). Based on last year's growth, the 80% capacity level (1'280'000 tx/day) would be reached in ~19 years assuming linear growth, and ~3.4 years assuming exponential growth.

Spam attacks

An effective spam attack would have to generate enough spam transactions, with suitable fees, to reduce the effective capacity of the network to a fraction of the legitimate traffic. The fraction of the traffic that cannot be serviced will then pile up in the queues, forming a growing backlog until the spam attack ends; the backlog will then clear at a rate limited by the free capacity C - T. With the current capacity C (200'000 tx/day) and traffic T (120'000 tx/day), a spam attack that blocks half the legitimate traffic would require a spam rate S of at least C - T/2 = 140'000 tx/day (1.62 tx/s, 0.52 MB/block). The fee F per kB offered by those transactions would have to be larger than that of all but the top ~420 transactions in the queue. If that fee were 1 USD/tx, the attack might cost as little as 140'000 USD/day. The backlog of legitimate transactions would grow at the rate of T/2 = ~2500 tx/hour and, when the attack stops, would be cleared at the maximum rate C - T = ~3300 tx/hour.
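The congestion figures in the proposal can be reproduced from its stated assumptions (~530 bytes/tx, 144 blocks/day, 86,400 seconds/day):

```python
# Sanity check of the traffic/capacity figures in the proposal.
BYTES_PER_TX = 530
BLOCKS_PER_DAY = 144

def per_block_stats(tx_per_day):
    """Return (tx/s, MB/block, tx/block) for a given daily volume."""
    tx_per_block = tx_per_day / BLOCKS_PER_DAY
    mb_per_block = tx_per_block * BYTES_PER_TX / 1e6
    return tx_per_day / 86_400, mb_per_block, tx_per_block

print(per_block_stats(120_000))  # traffic T: ~1.39 tx/s, ~0.44 MB, ~833 tx
print(per_block_stats(200_000))  # capacity C: ~2.31 tx/s, ~0.74 MB, ~1389 tx
print(120_000 / 200_000)         # traffic is 60% of effective capacity

# Spam-attack rate to block half of current traffic: C - T/2
print(200_000 - 120_000 / 2)     # 140,000 tx/day
```

The small rounding differences (e.g. 0.44 vs. 0.45 MB/block) are just that; the figures match the proposal's.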
With an 8 MB block limit, assuming that the effective capacity C will be 1.6 M tx/day and the traffic T at 60% of capacity (like today; expected to be the case 3 years from now), a spam attack that blocks half the traffic would require C - T/2 = 1.12 M tx/day of spam (8 times what an attack would require today). If the required fee F were 1 USD/tx, the attack would cost 1.12 million USD per day (ditto).

DEPLOYMENT

The maximum block size would be programmed to be 1 MB until block number 394999, and 8 MB starting with block 395000, which, at 144 blocks/day, is expected to be mined around 2016-02-06. On the test network, the increase will start with block 390000, which is expected to be mined around 2016-01-02. In the interest of a quick and uneventful passage through that block number, major miners should publicly state their approval or rejection of it as soon as possible. If and when the plan is approved by miners comprising a majority of the hashpower, all miners and clients should be alerted and urged to upgrade or modify their software so that it accepts blocks up to 8 MB after the stated block number. If and when the plan is rejected by miners comprising a majority of the hashpower, all miners and clients should be alerted and warned that this BIP will not be implemented.

RATIONALE

The proposal should have a good chance of being approved and implemented, since the five largest Chinese miners (who have more than 50% of the total hash rate) have already stated in writing that they would agree to an increase of the limit to 8 MB by the end of the year, even though they did not approve further increases (in particular, the doublings specified by BIP101). Several major services and other miners have expressed approval of such an increase on the net.

OBJECTIONS TO THIS PROPOSAL

There have been claims that increasing the block size beyond 1 MB would have negative consequences for the health of the network.
However, no serious effects were demonstrated, by argument or experimentally. There are worrisome trends in some parameters, such as the number of full nodes and the centralization of mining; but those trends are evidently not related to the block size limit, and there is no reason to expect that they would be halted or reversed by imposing a 1 MB cap on the block size starting next year. It should be noted that the increase is only on the block size limit; the actual block sizes will continue to be determined by the traffic. Even with optimistic forecasts, the average block size should not exceed the 1 MB limit before the end of 2016. If any harmful effects of larger blocks are demonstrated before then, the limit can be reduced again by decision of a majority of the miners. It has been claimed that network congestion would be beneficial since it would create a "fee market" whereby clients would compete for space in the blocks by paying higher transaction fees. It has been claimed that those fees would compensate for the drop in miners' revenue that will follow the next reward halving in 2016. It has also been claimed that the higher fees will inhibit spam and other undesirable uses of the blockchain. However, the "fee market" would be a fundamental, totally untested change in the client view of the system. It proposes a novel pricing mechanism that is not used by any existing commercial service, physical or internet-based. There is no evidence that the "fee market" would work as claimed, or that it would achieve any of its expected results. (Rather, there are arguments that it would not.) Congestion would definitely put a cap on usage of the protocol, reduce its value as a payment system, and drive away much legitimate traffic. Congestion, and the unpredictable delays that result from it, are also unlikely to make bitcoin attractive for high-value non-payment uses, such as settlements of other networks or notarization of asset trades. 
And, mainly, there is no reason to expect that the fee market will generate enough fees to cover the 500'000 USD/day that the miners will lose with the next halving. COMPATIBILITY If this change to the Bitcoin protocol gets implemented by a majority of the miners, all players will have to replace or modify their software so that it accepts blocks up to 8 MB after block 395000. Miners who fail to do so may soon find themselves mining a minority branch of the blockchain that grows at a much slower rate, will probably be congested from the start, and will probably die soon. That branch will probably be ignored by all major services, therefore any rewards that they earn on that branch will probably be worthless and soon unspendable. Clients who fail to upgrade or fix their software will not "see" the majority-mined chain once someone creates a block with more than 1 MB. Then, those clients will either be unable to move their coins until they fix their software, or may see only the minority branch above. Transactions that they issue before the fix may get confirmed on the main branch, but may appear to remain unconfirmed on the minority chain. Use of tools like replace-by-fee or child-pays-for-parent while in that state may give confusing results. DISCLAIMER The author has never owned or used bitcoin, and has a rather negative view of it. In fact, he is a regular contributor to /buttcoin. While he sees bitcoin as a significant advance toward its stated goal ("a peer-to-peer payment system that does not depend on trusted third parties"), and finds bitcoin interesting as a computer science experiment, he is quite skeptical about its chances of widespread adoption. He also deplores the transformation of bitcoin into a negative-sum pyramid investment scheme, which has not only spread much misery and distress all over the world, but has also spoiled the experiment by turning mining into an industrial activity controlled by half a dozen large companies. 
He hopes that the pyramid will collapse as soon as possible, and that the price will drop to the level predicted by the money velocity equation, so that the aberrant mining industry will disappear. (However, he does not think that this BIP will help to achieve this goal; quite the opposite, unfortunately.)
What the hell is going on in BTC FAQ: Noobs come here!
Hello, I'm seeing a lot of confusion about what Segwit is and what it actually does for the network. Hopefully this post can clear things up for some people. This is targeted at the noobs of the subreddit. Everyone told me that Segwit would decrease the fees, wtf is going on??? So you've probably read about how when Segwit is activated we'll have an increased blocksize. This isn't entirely true. Segwit actually does away with the whole concept of a blocksize, replacing it with a new parameter, "block weight." Bitcoin blocks will now have a "block weight" limit of 4,000,000. The reason for the switch from size to weight is the way it handles the different types of data in a transaction. Inside a transaction, there are two types of data. The first is what's called "witness data." This is the signature of a transaction. The signature proves that the transaction is completely valid. The other type is the transaction data, which includes who you're sending the funds to and how much you're sending. This is going to get slightly mathy from here on in, sorry. We convert bytes of data to weight units by saying every 1 byte of transaction data is worth 4 weight units, while every 1 byte of witness data is worth only 1 weight unit. Let's give an example. Let's say the mempool (mempool - a big pool of all the transactions that are currently unconfirmed and waiting to be included in a block) has 1000 transactions in it, each transaction being 1 KB of data. Now let's have each one of these transactions be 400 bytes of witness data and 600 bytes of transaction data. If Segwit wasn't a factor here, 1000 one-kilobyte transactions would fill up a 1 MB block. There would be no room for other transactions. Let's convert these transactions to weight units. 
The transaction data would be worth 2,400 units, and since the witness data is discounted it's only worth 400 weight units, giving each transaction a combined weight of 2,800. 1000 of these transactions would give us a total weight of 2,800,000. With 1,200,000 units of space left, we can fit in a bunch more transactions! If any of this makes absolutely no sense, leave a comment down below. I'll try to help as many people understand as possible. Okay, so I get kinda how it works; why have fees this week been so high if it was activated? Why have they only been coming down in the past day or two? Segwit isn't instantly available for everyone to use right away. To send a Segwit transaction, you first need to send your coins to a Segwit-compatible wallet. From that wallet you'll be able to send Segwit transactions. To fully realize the effect of Segwit, it will probably take weeks if not months to have all the coins that are transacted regularly moved to Segwit wallets. Another problem with the network at the moment is the huge hashpower oscillations. Many of you have probably heard about the fork that happened at the beginning of August. Currently, the other network is having problems due to something called the EDA, or emergency difficulty adjustment. See, Bitcoin works so that if a bunch of people turned on mining hardware, after a certain amount of time it would become harder to create a block. This keeps the average block creation time at 10 minutes. The "other coin's" EDA system works so that if the average block creation rate is below two per hour for twelve hours, the difficulty will go down so that the average is once again 10 minutes. Here's where the problem comes in. Miners are taking advantage of this by mining the other chain when the difficulty is super low after an EDA, making it much more profitable. And once the difficulty adjusts again through normal means, they switch back to the Bitcoin chain until another EDA happens. 
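The weight arithmetic in the example above can be checked in a few lines of Python (BIP141-style accounting: each byte of transaction data counts 4 weight units, each byte of witness data counts 1, against the 4,000,000-unit limit):

```python
# The weight arithmetic from the Segwit example above.
WEIGHT_LIMIT = 4_000_000

def tx_weight(tx_data_bytes, witness_bytes):
    """Non-witness bytes count 4 weight units each; witness bytes count 1."""
    return tx_data_bytes * 4 + witness_bytes * 1

w = tx_weight(600, 400)       # 2400 + 400 = 2800 units per transaction
total = 1000 * w              # 2,800,000 units for all 1000 transactions
spare = WEIGHT_LIMIT - total  # 1,200,000 units left for more transactions

print(w, total, spare)
# → 2800 2800000 1200000
```

So the same 1000 transactions that would exactly fill a 1 MB block pre-Segwit leave 1,200,000 weight units of room under the new rule.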
Again, I'd love to help as many people as possible get informed about what the tf is going on right now in the community, because for a newcomer this is probably massively overwhelming. What's all this Segwit2x cheese I've been hearing about? So the 2x part is the second half of a scaling agreement known as the New York Agreement. It was a compromise between dozens of Bitcoin businesses and over 80% of the hashpower. This is an unprecedented amount of support, the likes of which really hasn't been seen in Bitcoin. The original agreement was to activate Segwit ASAP and, roughly three months after, increase the blocksize to 2 MB. With Segwit it would be a new block weight limit of 8,000,000. I suggest investigating the pros and cons of a 2x block weight increase for yourself. There are a LOT of conflicting opinions, and you shouldn't be blindly believing anyone. Leave a comment below if you have any further questions; I'll do my best to answer most. If you appreciated this FAQ, feel free to send a tip :) 1Gi9uberSWjPnWT6UUKePUFUWWryqUxaPk
Yesterday, CoinWallet.eu conducted a stress test of the Bitcoin blockchain. Our plan was not only to see the outcome, but also to see how easy it would be for a malicious entity or government to create havoc for the Bitcoin community. As you will see from the analysis below, delayed transactions are not the only issue that Bitcoin users experienced. Surprisingly, executing tens of thousands of transactions that correctly propagate to the network simultaneously is not as easy as we had expected. One of the methods we used to increase the kb size of our transactions was to send transactions consisting of numerous small outputs (usually 0.0001) to make a single transaction of 0.01. A simple transaction is usually 225-500 bytes, while many of our transactions were 18 kb (a number which limits the blockchain to roughly 5 transactions per minute). In our preliminary testing this was effective; however, in practice it caused our servers to crash. Throughout the day and evening, our strategy and methodology changed multiple times. Initially the plan was to spend 20 BTC on transaction fees to flood the network with as many transactions as possible. Due to technical complications the test was concluded early, with less than 2 BTC spent on fees. The events of yesterday were accomplished with less than €500. Timeline
11:57 GMT - Transaction servers initiated. Thousands of 700 kb transactions completed within the first 20 minutes. Transactions were used to break coins into small 0.0001 outputs.
12:30 GMT - Servers begin sending larger 18kb transactions.
14:20 GMT - Our servers begin to crash. It becomes apparent that BitcoinD is not well suited to crafting transactions of this size.
14:30 GMT - Our test transactions are halted while alternate solutions are created. The mempool is at 12 mb.
17:00 GMT - Alternate transaction sending methods are started. Servers are rebooted. Mempool has fallen to 4mb.
21:00 GMT - The stress test is stronger than ever. Mempool reaches 15 mb and more than 14000 transactions are backlogged. The situation is made worse by F2Pool selfishly mining two 0kb blocks in a row.
23:59 GMT - 12 hours after starting, the test is concluded. Less than 2 BTC (€434) is spent on the test in total.
The following graph depicts the entire test from start to finish: https://anduck.net/bitcoin/mempool.png Observations Delayed confirmation times and large mempool buildups were not the only observations that came from our testing. Many more services were impacted than we had initially envisioned. Blockchain.info Over the past few months, Blockchain.info has become increasingly unreliable; however, we are confident that yesterday's stress test had an impact on their website being offline or broken for 1/3 of the day. During periods where we sent excessive transactions, Blockchain.info consistently froze. It appeared as though their nodes were overwhelmed and simply crashed. Each time this occurred, the site would re-emerge 10-30 minutes later only to fail again shortly thereafter. Users of the Blockchain wallet were unable to send transactions, log in, or even view balances during the downtimes. In response to our heavy Bitcoin usage, blockchain.info began to exclude certain transactions from their block explorer. This issue is explored further by the creators of Multibit, who can confirm that some transactions sent from their software were ignored by Blockchain, but were picked up by Blockr. Bitcoin ATMs Many ATMs operate as full nodes; however, some ATMs rely on third party wallet services to send and receive transactions. The most prominent Bitcoin ATM of this type is Lamassu, which uses the blockchain.info API to push outgoing transactions from a blockchain.info wallet. Due to the blockchain.info issues, all Lamassu ATMs that use blockchain.info's wallet service were unavailable for the day. MultiBit Both versions of MultiBit suffered delayed transactions due to the test. 
Gary and Jim from MultiBit have created a full analysis from Multibit's perspective, which can be read at https://multibit.org/blog/2015/06/23/bitcoin-network-stress-test.html The outcome was that transactions with the standard fee in Multibit HD took as many as 80 blocks to confirm (approximately 13 hours). Standard 10000 satoshi fee transactions took an average of 9 blocks to get confirmed. Multibit has stated that they will be making modifications to the software to better cope with this type of event in the future. Tradeblock With Blockchain.info broken, we frequently referred to Tradeblock to track the backlog. Unfortunately Tradeblock was less than perfectly reliable and often failed to update when a new block had been mined. Regardless, at one point 15,000 unconfirmed transactions were outstanding. Bitpay Users reported issues with Bitpay not recognizing transactions during the test. Price Increase of $2. Contrary to some predictions, we did not short Bitcoin. Green Address While this app was not hindered directly by our test, we did send a series of 0.001 payments to a Green Address wallet. When attempting to craft a transaction from the wallet, an error occurs stating that it is too large. It appears that the coins that were sent to this wallet may be lost. Conclusions From a technical perspective, the test was not a success. Our goal of creating a 200mb backlog was not achieved due to miscalculations and challenges with pushing the number of transactions that we had desired. We believe that future tests may easily achieve this goal by learning from our mistakes. There are also numerous vulnerable services that could be exploited as part of a test, including Bitcoin casinos, on-chain gambling websites, wallets (Coinbase specifically pointed out that a malicious user could take advantage of their hosted wallet to contribute to the flooding), exchanges, and many others. Users could also contribute by sending small amounts to common brain wallets. 
We also learned that the situation could have been made worse by sending transactions with larger fees. We sent all transactions with the standard fee of 10000 satoshis per kb. If we had sent with 20000 satoshis per kb, normal transactions would have experienced larger delays. In our future stress tests, these lessons will be used to maximize the impact.
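The throughput and fee figures scattered through the write-up can be reproduced with simple arithmetic (assumptions: 1 MB blocks, one block every 10 minutes, fees quoted in satoshis per kb as in the post):

```python
# Rough numbers behind the stress test described above.
BLOCK_BYTES = 1_000_000
TX_BYTES = 18_000        # the large multi-output transactions the test used
STANDARD_FEE = 10_000    # satoshis per kb, the fee the post says was paid

txs_per_block = BLOCK_BYTES // TX_BYTES            # 55 such txs fill a block
txs_per_minute = txs_per_block / 10                # ~5.5/min, matching the
                                                   # "5 transactions per minute" figure
fee_per_tx = (TX_BYTES // 1_000) * STANDARD_FEE    # 180,000 satoshis per 18 kb tx

print(txs_per_block, txs_per_minute, fee_per_tx)
# → 55 5.5 180000
```

Doubling the fee rate to 20000 satoshis per kb, as the authors suggest for a future test, would simply double the per-transaction cost while outbidding more of the normal traffic.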
08-27 17:52 - 'Bitcoin Current Affairs FAQ: Segwit, fees, EDA, and 2x' (self.Bitcoin) by /u/hrones removed from /r/Bitcoin within 0-10min
While /bitcoin discusses the price, Eligius is having a meeting at #eligius discussing which transactions are to be included in Eligius blocks, i.e. the core bitcoiners are busy with work, not worrying about exchange rates. Don't get too focused on the price.
Transaction filtering: mark outputs spent in others' mempools for not mining (prevent policy abuse)
Spam pkh matching: current policies
Dust: ban regardless of fee
Bare multisig: ban
Non-softfork-safe: ban
Non-shortest-pushop: ban
Standard transactions: OP_RETURN: allow up to 80 bytes
Transaction priority: p2pkh/p2sh: deprioritise address reuse; hash randomness testing: influence priority
Non-standard transactions: P2SH, by request at admin discretion, or 100 TBC per 512 bytes rounded up
Transaction fees: 0.1 TBC per 512 bytes, without rounding, up to 128 KB block; >128 KB blocks, logarithmically increase min fee rate to 10 TBC
Tx size discounts for UTXO reduction: not for the moment
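The fee rule in those notes can be sketched as a function. Only the flat part (0.1 TBC per 512 bytes, without rounding, under 128 KB) is fully specified; the notes do not give the exact formula for the logarithmic ramp to 10 TBC, so the ramp below, including its assumed 1 MB endpoint, is purely illustrative:

```python
import math

# Sketch of the Eligius minimum-fee rule quoted in the meeting notes.
BASE_RATE = 0.1          # TBC per 512 bytes (from the notes)
MAX_RATE = 10.0          # TBC per 512 bytes at the ramp's end (from the notes)
FLAT_LIMIT = 128 * 1024  # flat rate applies up to 128 KB blocks
RAMP_END = 1024 * 1024   # assumed ramp endpoint: NOT given in the notes

def min_fee_tbc(tx_bytes, block_bytes):
    """Minimum fee in TBC for a tx of tx_bytes in a block of block_bytes."""
    if block_bytes <= FLAT_LIMIT:
        rate = BASE_RATE
    else:
        # Illustrative log-linear interpolation between 128 KB and 1 MB.
        frac = math.log(block_bytes / FLAT_LIMIT) / math.log(RAMP_END / FLAT_LIMIT)
        rate = BASE_RATE * (MAX_RATE / BASE_RATE) ** min(frac, 1.0)
    return (tx_bytes / 512) * rate

print(min_fee_tbc(1024, 64 * 1024))   # 1 KB tx in a small block: 0.2 TBC
```

The "without rounding" wording is taken literally: the fee scales continuously with size rather than per started 512-byte chunk.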
The debate is not "SHOULD THE BLOCKSIZE BE 1MB VERSUS 1.7MB?". The debate is: "WHO SHOULD DECIDE THE BLOCKSIZE?" (1) Should an obsolete temporary anti-spam hack freeze blocks at 1MB? (2) Should a centralized dev team soft-fork the blocksize to 1.7MB? (3) OR SHOULD THE MARKET DECIDE THE BLOCKSIZE? (354 points, 116 comments)
"Notice how anyone who has even remotely supported on-chain scaling has been censored, hounded, DDoS'd, attacked, slandered & removed from any area of Core influence. Community, business, Hearn, Gavin, Jeff, XT, Classic, Coinbase, Unlimited, ViaBTC, Ver, Jihan, Bitcoin.com, btc" ~ u/randy-lawnmole (176 points, 114 comments)
"You have to understand that Core and their supporters eg Theymos WANT a hardfork to be as messy as possible. This entire time they've been doing their utmost to work AGAINST consensus, and it will continue until they are simply removed from the community like the cancer they are." ~ u/singularity87 (170 points, 28 comments)
3 excellent articles highlighting some of the major problems with SegWit: (1) "Core Segwit – Thinking of upgrading? You need to read this!" by WallStreetTechnologist (2) "SegWit is not great" by Deadalnix (3) "How Software Gets Bloated: From Telephony to Bitcoin" by Emin Gün Sirer (146 points, 59 comments)
Now that BU is overtaking SW, r\bitcoin is in meltdown. The 2nd top post over there (sorted by "worst first" ie "controversial") is full of the most ignorant, confused, brainwashed comments ever seen on r\bitcoin - starting with the erroneous title: "The problem with forking and creating two coins." (142 points, 57 comments)
enough with the blockstream core propaganda: changing the blocksize IS the MORE CAUTIOUS and SAFER approach. if it was done sooner, we would have entirely avoided these unprecedented cycles of network clogging that have caused much frustration among a lot of actors (173 points, 15 comments)
Dear Theymos, you divided the Bitcoin community. Not Roger, not Gavin, not Mike. It was you. And dear Blockstream and Core team, you helped, not calling out the abhorrent censorship, the unforgivable manipulation, unbecoming of supposed cypherpunks. Or of any decent, civil persons. (566 points, 87 comments)
So, Alice is causing a problem. Alice is then trying to sell you a solution for that problem. Alice now tell that if you are not buying into her solution, you are the cause of the problem. Replace Alice with Greg & Adam.. (139 points, 28 comments)
SegWit+limited on-chain scaling: brought to you by the people that couldn't believe Bitcoin was actually a sound concept. (92 points, 47 comments)
Reality check: today's minor bug caused the bitcoin.com pool to miss out on a $12000 block reward, and was fixed within hours. Core's 1MB blocksize limit has cost the users of bitcoin >$100k per day for the past several months. (270 points, 173 comments)
Top post on /bitcoin about high transaction fees. 709 comments. Every time you click "load more comments," there is nothing there. How many posts are being censored? The manipulation of free discussion by /bitcoin moderators needs to end yesterday. (229 points, 91 comments)
Fantasy land: Thinking that a hard fork will be disastrous to the price, yet thinking that a future average fee of > $1 and average wait times of > 1 day won't be disastrous to the price. (209 points, 70 comments)
"Segwit is a permanent solution to refuse any blocksize increase in the future and move the txs and fees to the LN hubs. The chinese miners are not as stupid as the blockstream core devaluators want them to be." shock_the_stream (150 points, 83 comments)
In response to the "unbiased" ELI5 of Core vs BU and this gem: "Core values trustlessness and decentralization above all. Bitcoin Unlimited values low fees for on-chain transactions above all else." (130 points, 45 comments)
Core's own reasoning doesn't add up: If segwit requires 95% of last 2016 blocks to activate, and their fear of using a hardfork instead of a softfork is "splitting the network", then how does a hardfork with a 95% trigger even come close to potentially splitting the network? (96 points, 130 comments)
I'm more concerned that bitcoin can't change than whether or not we scale in the near future by SF or HF (26 points, 9 comments)
"The best available research right now suggested an upper bound of 4MB. This figure was considering only a subset of concerns, in particular it ignored economic impacts, long term sustainability, and impacts on synchronization time.." nullc (20 points, 4 comments)
At any point in time mining pools could have increased the block reward through forking and yet they haven't. Why? Because it is obvious that the community wouldn't like that and correspondingly the price would plummet (14 points, 14 comments)
Dear Theymos, you divided the Bitcoin community. Not Roger, not Gavin, not Mike. It was you. And dear Blockstream and Core team, you helped, not calling out the abhorrent censorship, the unforgivable manipulation, unbecoming of supposed cypherpunks. Or of any decent, civil persons. by parban333 (566 points, 87 comments)
The debate is not "SHOULD THE BLOCKSIZE BE 1MB VERSUS 1.7MB?". The debate is: "WHO SHOULD DECIDE THE BLOCKSIZE?" (1) Should an obsolete temporary anti-spam hack freeze blocks at 1MB? (2) Should a centralized dev team soft-fork the blocksize to 1.7MB? (3) OR SHOULD THE MARKET DECIDE THE BLOCKSIZE? by ydtm (354 points, 116 comments)
151 points: nicebtc's comment in "One miner loses $12k from BU bug, some Core devs scream. Users pay millions in excessive tx fees over the last year "meh, not a priority"
123 points: 1DrK44np3gMKuvcGeFVv's comment in "One miner loses $12k from BU bug, some Core devs scream. Users pay millions in excessive tx fees over the last year "meh, not a priority"
117 points: cryptovessel's comment in nullc disputes that Satoshi Nakamoto left Gavin in control of Bitcoin, asks for citation, then disappears after such citation is clearly provided. greg maxwell is blatantly a toxic troll and an enemy of Satoshi's Bitcoin.
117 points: seweso's comment in Roger Ver banned for doxing after posting the same thread Prohashing was banned for.
113 points: BitcoinIsTehFuture's comment in Dear Theymos, you divided the Bitcoin community. Not Roger, not Gavin, not Mike. It was you. And dear Blockstream and Core team, you helped, not calling out the abhorrent censorship, the unforgivable manipulation, unbecoming of supposed cypherpunks. Or of any decent, civil persons.
106 points: MagmaHindenburg's comment in bitcoin.com loses 13.2BTC trying to fork the network: Untested and buggy BU creates an oversized block, Many BU node banned, the HF fails • /Bitcoin
98 points: lon102guy's comment in bitcoin.com loses 13.2BTC trying to fork the network: Untested and buggy BU creates an oversized block, Many BU node banned, the HF fails • /Bitcoin
If you convert those hex bytes to Unicode, you get the string 3Nelson-Mandela.jpg?, representing the image filename. Similarly, the following addresses encode the data for the image. Thus, text, images, and other content can be stored in Bitcoin by using the right fake addresses. Secret message in the first Bitcoin block: it is well known that the Genesis block, the very first block of data in ... Bitcoin Core 0.11.x increases this default to 80 bytes, with the other rules remaining the same. Bitcoin Core 0.12.0 defaults to relaying and mining null data outputs with up to 83 bytes with any number of data pushes, provided the total byte limit is not exceeded. There must still only be a single null data output and it must still pay exactly 0 satoshis. Predicting bitcoin fees for transactions: fees are displayed in satoshis/byte of data, and miners usually include transactions with the highest fees first.
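The null-data relay limit described above can be sketched as a simple size check. This is a deliberate simplification: real Bitcoin Core parses the script's opcodes properly, and the function name here is mine, not Core's. The only facts taken from the text are the OP_RETURN prefix and the 83-byte 0.12.0 default:

```python
# Hedged sketch of the null-data ("OP_RETURN") relay check described above.
# Simplified: Core's -datacarriersize logic parses push opcodes; here we only
# check the OP_RETURN prefix and the total script size.
OP_RETURN = 0x6a
MAX_SCRIPT_BYTES = 83   # Bitcoin Core 0.12.0 default

def looks_relayable_null_data(script: bytes) -> bool:
    return (
        len(script) >= 1
        and script[0] == OP_RETURN
        and len(script) <= MAX_SCRIPT_BYTES
    )

# 80-byte payload + OP_RETURN + 2-byte OP_PUSHDATA1 prefix = 83 bytes: fits.
ok = bytes([OP_RETURN, 0x4c, 80]) + b"\x00" * 80
# One payload byte more pushes the script to 84 bytes: rejected.
too_big = bytes([OP_RETURN, 0x4c, 81]) + b"\x00" * 81
print(looks_relayable_null_data(ok), looks_relayable_null_data(too_big))
# → True False
```

This is why 80 bytes is the practical payload ceiling even though the script limit is 83: the OP_RETURN byte and the push opcodes consume the remainder.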