However, mining is a stochastic process. Essentially, the job of a miner is to find a partial "hash collision": the miner hashes the transaction data together with a random nonce, aiming for a hash as close to 000000000....00 as possible. The Bitcoin (or alternative-coin) network agrees a priori on what threshold counts as a "hit". The miner essentially tries random nonces until it either gets a hit, or is told that its transaction data is stale and needs to be refreshed.
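The inner loop can be sketched as follows. This is a toy illustration in Python: real Bitcoin mining double-SHA-256 hashes an 80-byte block header, and the real target is vastly harder than the one used here.

```python
import hashlib

def mine(block_data: bytes, target: int, max_tries: int = 2_000_000):
    """Try successive nonces until the hash of (data + nonce) falls
    below the agreed target. Returns None if we exhaust our tries,
    standing in for the transaction data going stale."""
    for nonce in range(max_tries):
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None

# An easy toy target: roughly 1 hit per 65,536 attempts (16 leading zero bits).
hit = mine(b"example transaction data", 1 << 240)
```

Each attempt is independent, which is why there is no way to "save up" partial progress - a miner that has searched for an hour is no closer to a hit than one that just started.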
In the case of Bitcoin, the network sets the target such that, on average, one "hit" is found every 10 minutes worldwide. This means that an individual miner might have to run for weeks or months to get a win and be awarded the (currently) 25 BTC reward for successfully computing a hash below the target. In practice, therefore, most miners operate in "pools", where a central server coordinates diverse mining hardware run by many individual operators. When the pool operator receives a 25 BTC reward, they divide it up amongst the contributors.
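The expected wait for a solo win follows directly from the miner's share of total network hashrate (the numbers below are illustrative only; the units just need to match):

```python
BLOCK_INTERVAL_MINUTES = 10  # network-wide average between "hits"

def expected_wait_days(miner_hashrate: float, network_hashrate: float) -> float:
    """Average time between wins for a solo miner, assuming hit
    probability is proportional to the miner's hashrate share."""
    share = miner_hashrate / network_hashrate
    return BLOCK_INTERVAL_MINUTES / share / (60 * 24)

# A miner contributing 0.01% of the network hashrate waits ~69 days on average:
wait = expected_wait_days(1.0, 10_000.0)
```

And because block discovery is memoryless, the actual wait scatters widely around that average - hence the appeal of pooling.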
The way individual pool servers account for hash rate is to set a much easier target (lower difficulty) than the main network, and count the number of "hits" each miner gets - e.g. while the main Bitcoin network requires a hash below a very hard target, the pool accepts "shares" at a target each miner can hit every few seconds. Because pools can only detect hashrate by the rate at which "hits" are delivered, the reported hashrate will necessarily vary, by virtue of the statistical properties of a stochastic process. The degree of variation depends upon the "difficulty" (target) set by the pool operator, the degree of "smoothing" the pool operator applies to the displayed statistics, your hash contribution (a bigger contributor will have a smaller coefficient of variation in their displayed hashrate, again for statistical reasons), etc.
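The effect of contribution size on the displayed number can be shown with a small simulation - a sketch assuming share "hits" arrive as a Poisson process, which is a good model for hash search:

```python
import random

def hashrate_cv(mean_hits_per_window: float, windows: int = 2000, seed: int = 1):
    """Estimate a miner's hashrate by counting hits in each averaging
    window, then return the coefficient of variation of the estimates.
    Hits are generated as a Poisson process via exponential gaps."""
    rng = random.Random(seed)
    counts = []
    for _ in range(windows):
        t, k = 0.0, 0
        while True:
            t += rng.expovariate(mean_hits_per_window)
            if t > 1.0:
                break
            k += 1
        counts.append(k)
    mean = sum(counts) / windows
    var = sum((c - mean) ** 2 for c in counts) / windows
    return (var ** 0.5) / mean

small_miner = hashrate_cv(4)   # few hits per window: CV around 0.5
big_miner = hashrate_cv(400)   # 100x the hashrate: CV around 0.05
```

For Poisson counts the coefficient of variation is 1/sqrt(mean), so a 100x bigger contributor sees a roughly 10x steadier reading; raising the share difficulty (fewer hits per window) has the opposite effect.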
Things are further complicated because many of the affected pools are "multi-coin" pools. The pool server automatically scans multiple cryptocoin networks, and various cryptocoin exchanges, to work out which coin is most profitable; the server then jumps between coins every few seconds or minutes as needed. For various technical reasons, different coins have different "stale" and "orphan" rates - "hits" which should have resulted in new coin creation, but where the hit was rejected either immediately (stale) or initially accepted and then rolled back (orphan). Some of the alternative coins had rather dubious technical designs which could lead to massive reject rates, and this too could make displayed hash rates fluctuate like mad.
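The coin-switching decision boils down to comparing expected revenue per hash across coins. A minimal sketch - the coin names, rewards, prices and difficulties below are made up for illustration:

```python
def most_profitable(coins):
    """coins: list of (name, block_reward, price_in_btc, difficulty).
    Expected revenue per hash scales as reward * price / difficulty."""
    return max(coins, key=lambda c: c[1] * c[2] / c[3])[0]

coins = [
    ("coinA", 50, 0.002, 900_000.0),    # hypothetical numbers
    ("coinB", 25, 0.010, 1_500_000.0),
]
best = most_profitable(coins)  # the pool points its miners at this coin
```

In practice the calculation also has to discount each coin's stale/orphan rate - which is exactly why coins with flaky designs made the hopping pools' displayed numbers so erratic.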
The final issue is that many pools were run by rank amateurs, and attracted hackers and DDoS attacks like a red rag attracts a bull. DDoSes, random server crashes, exceeded bandwidth quotas, etc. were all commonplace, and various software bugs in "multi-pool" backend software would leave miners disconnected from servers. Smarter miners would typically have several pools configured on their mining hardware, so that the software could fail over to another server. However, even that wasn't always successful. I once left my mining hardware unattended for a week, configured with 8 pools. When I checked the logs on my return, there was a period of about 24 hours when the miners sat idle, as all the servers were offline.
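Failover setups of the kind described were typically expressed as an ordered pool list in the miner's config file. A sketch in the style of cgminer's JSON config (the hostnames and worker names are placeholders):

```json
{
  "pools": [
    {"url": "stratum+tcp://pool1.example.com:3333", "user": "worker1", "pass": "x"},
    {"url": "stratum+tcp://pool2.example.com:3333", "user": "worker1", "pass": "x"},
    {"url": "stratum+tcp://pool3.example.com:3333", "user": "worker1", "pass": "x"}
  ],
  "failover-only": true
}
```

With failover-only set, the miner sticks to the first reachable pool in the list rather than load-balancing across them - but, as noted, even 8 entries are no help when every server is down at once.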
There were poor decisions and poor communication between the various designers and operators. Take, for example, the situation at reactor 1. After the generators started, the emergency reactor cooling condensers switched on to provide cooling. However, the operators found that they were so effective that, being unfamiliar with their use, they were concerned the rapid cooling would cause thermal shock to the reactor. Not being familiar with the operation of this system, the operators decided to switch the condensers off manually to arrest the temperature drop, then switch them on again as the reactor temperature rose. This worked fine, until the generators failed, removing control and monitoring of this system.
Operators at the emergency control centre, in a separate quake-proof building, asked for confirmation that the condensers were operating, but the control room could not give it. So workers went out to inspect the reactor building for steam rising from the condenser stacks. They reported some steam rising, and it was assumed that the system was operational. However, the condenser system had never been used or tested since the plant was constructed 40 years earlier. No one knew how it worked, how quickly it could cool the reactor, or how much steam it produced during operation. It turns out that the workers sent out on reconnaissance saw only faint steam trickling from the stacks, consistent with a system that had been switched off for many minutes but still contained some residual heat. Had the system been switched on, the clouds of steam would have been so profuse and so dense that it would have been impossible even to see the reactor building, let alone identify the condenser stacks.
On the assumption that the system was operational, other attempts to provide emergency cooling were suspended or delayed. A steam/battery-powered pump system was available to deliver fresh water to the reactor, but without a heatsink (condenser) available, the reactor temperature rose rapidly, and so did reactor pressure, eventually overcoming the maximum discharge pressure of the coolant injection system. After a few hours, the UPS controlling this system discharged and it too failed.
After 24 hours, reactor pressure unexpectedly dropped. Operators realised that this might permit external coolant injection, and fire engines were called in. There was a huge delay, as some of the fire engines had been destroyed by the tsunami and the rest were unable to reach the site due to debris. Subsequent investigation showed that despite massive coolant injection, the coolant level did not rise in the reactor. At the time, this was thought to be due to damage to the reactor vessel or a pipe; in retrospect, it probably indicated damage to the reactor following meltdown of the fuel.
There were also design oversights in the plants' emergency systems. One of the final backup schemes for reactor cooling was the ability to connect fire engines to the reactor to inject coolant. It subsequently became apparent that in units 2 and 3 this water never reached the reactor, and collected in a condenser unit instead. This was always going to happen, because of the way the water pipes were connected. A pump sat between the storage tank and the injection flow pipe. Under normal injection conditions the pump would have been running, and any additional water from the fire engine would likely have gone towards the reactor; this was presumably the assumption under which the water injection protocol was developed. However, under power failure conditions the pump was unpowered, and because it was a rotodynamic (impeller) pump, it offered little or no resistance to reverse flow when unpowered.
The ISP must then wait 10 days, to give the original complainant time to consider the "put back" notice and decide whether to commence a court case. After the 10-day waiting period, if the ISP has not received notice of a restraining order blocking the put back because of an impending court hearing, it is allowed to restore the content.
In order to avoid liability to their customer, the ISP must restore the content within 14 days of receiving the "put back" notice, provided that the complainant has not obtained a restraining order blocking the put back.
The change to digital data is welcome.
At least in the UK's interpretation of this EC directive (the Distance Selling Regulations), digital downloads were NOT excluded. The purchaser could cancel the purchase at any time up to 7 days after purchase and receive a full refund. Technically, you could download a software package or a movie, then change your mind and claim a full refund.
While the Distance Selling Regulations specifically excluded copyright material such as computer software, movies, music, etc., they did so only in physical form, i.e. CDs, DVDs, etc. Downloads were treated as a "contract for a service", which does not fall within the scope of this very limited exclusion.
The ambiguity over digital downloads has caused a lot of heartache for a couple of small software developers I know - albeit not enough for them to take it to court. I'm not sure there is any case law actually addressing this loophole in the current system.
The Tao is like a glob pattern: used but never used up. It is like the extern void: filled with infinite possibilities.