Comment Re:Let me guess (Score 2) 218

Antibiotics are like a pesticide. They will kill bacteria in a petri dish even without an immune system present.

Some antibiotics (like amoxicillin and cipro) are bactericidal and do work like that. Others, called bacteriostatic antibiotics (like erythromycin, tetracycline and linezolid), don't.

Bacteriostatic antibiotics temporarily halt bacterial replication, allowing the immune system to finish them off.

Comment Re:Why is this being posted now? (Score 1) 275

That's not really true. There has been definite harm at Fukushima; there have been a number of radiation injuries to workers in the emergency response (mainly local radiation burns - e.g. beta burns to feet from standing in contaminated water).

There has also been significant harm to the population around Fukushima as a result of the response to the accident. It can reasonably be argued that the population should not have been evacuated; but in that case, the population dose, while small, would have been non-negligible. For example, one recent estimate is that living in the higher-radiation parts of the Fukushima exclusion zone would result in a reduction of life expectancy of around 3 months. To put this in perspective, reducing GDP per capita by approx $2k would be expected to result in a similar reduction in life expectancy. However, while it may be justifiable to mitigate a pollutant which reduces population life expectancy by 3 months, such a threshold is not applied consistently. For example, particulate emissions from diesel road vehicles in London reduce population life expectancy by approximately 9-12 months. Yet we do not have the UK government evacuating residents and workers from London, nor do we see other European governments doing the same for even more severe pollution levels in other cities.

Of course, evacuation and displacement of people is also not without hazard. There are stress-related injuries, the stigma of being essentially a refugee, loss of support networks, loss of jobs and wealth, etc., and in turn many of these factors result in poverty, which in itself results in poor health outcomes. Using some more expansive definitions, the number of people harmed by the Fukushima evacuation runs into the thousands.

Part of the problem is that the precedent of evacuation and exclusion zones was set after Chernobyl. There are certainly parts of Ukraine where pollution levels are intolerable by any reasonable measure, but approximately 60% of the exclusion zone is arguably unnecessary. Similar overreactions to Chernobyl were made by other European governments with respect to contaminated food. In the UK, the sale of lamb and milk was restricted due to contamination of farmland. However, the restrictions remained in place despite exceedingly low levels of harm - such that a major meat eater eating only meat from the contaminated farms would suffer harm equivalent to a reduction in life expectancy measured in minutes; i.e. the harm from the contamination is minimal compared to the harm of a meat-rich diet. The problem has been that the income reduction for affected farmers has been devastating to local economies, and the tangible health outcomes of the reduction in GDP would outweigh the harm from the contamination.

While it is easy to be critical of the response at Fukushima, one must not forget hindsight bias. The Fukushima accident was rapid in its development and complicated by lack of information. Infrastructure damaged by the earthquake/tsunami meant that early warning systems were out of action; all NPPs in Japan have real-time data links back to Tokyo for accident management, such that health physicists/meteorologists/etc. can estimate risks in real time. With all data feeds absent, and only confused verbal messages arriving from multiple plants all in severe accident situations, what should the authorities have done? It is therefore probably incorrect to separate the harm from the evacuation from the accident itself, as it is very hard to argue that the decision was clearly wrong given the circumstances of the time.

Comment Re:na (Score 4, Informative) 347

It is likely a linear power response to frequency with a small dead band.

In the UK, battery-backed frequency response is an important contributor to frequency stability, and is operated with a dead band of 0.015 Hz. The power injection is required to be proportional to the frequency deviation outside the dead band, reaching 100% of rated power at 0.5 Hz deviation from nominal. Response time is a maximum of 1 s.

Additionally, in the UK, the requirement is that the frequency response is symmetrical. If frequency rises, then the system must absorb power - up to 100% of maximum rated power at 50.5 Hz, for a minimum of 15 minutes.
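As a rough illustration of that scheme, here is a minimal sketch in Python of a symmetric proportional response with a dead band, using the figures above. The parameter names are my own, and I have assumed the proportional slope runs from the edge of the dead band to 100% of rated power at 0.5 Hz deviation, which is one plausible reading of the requirement; it is not taken from any grid code text.

# Sign convention: positive output = inject power (frequency low),
# negative output = absorb power (frequency high).
NOMINAL_HZ = 50.0
DEAD_BAND_HZ = 0.015
FULL_RESPONSE_HZ = 0.5     # deviation at which 100% of rated power is reached

def response_fraction(frequency_hz):
    """Commanded power as a fraction of rated power, clamped to +/-1.0."""
    deviation = frequency_hz - NOMINAL_HZ
    if abs(deviation) <= DEAD_BAND_HZ:
        return 0.0
    # Proportional response, measured from the edge of the dead band.
    fraction = (abs(deviation) - DEAD_BAND_HZ) / (FULL_RESPONSE_HZ - DEAD_BAND_HZ)
    fraction = min(fraction, 1.0)
    return fraction if deviation < 0 else -fraction

for f in (49.4, 49.8, 49.99, 50.0, 50.01, 50.2, 50.6):
    print(f"{f:.2f} Hz -> {response_fraction(f):+.2f} x rated power")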

Comment Re:Future issues? Scalability? (Score 5, Interesting) 97

Yes. There are several problems.

The first, and most fundamental, is that blockchain technology is inherently non-scalable. It is, effectively, an ultra-redundant database system, operating on diverse hardware, in diverse regions, and with a monotonically growing, non-prunable dataset. It is estimated that there are around 100k nodes in the bitcoin network maintaining a copy of the dataset and participating in peer-to-peer replication. The total quantity of storage required for each database entry, and the network traffic to replicate it, are non-negligible.

The proximate problem is an artificial limit on transaction capacity implemented several years ago. At present, the system is designed so that the dataset cannot grow by more than 1 MB every 10 minutes. This limit was put in place to prevent spam attacks resulting in a DDOS of the network. There is a non-trivial computational cost to validate each entry cryptographically. Even my quad-core i7 CPU can take 10-15 seconds to validate a 1 MB database update message. Lesser nodes, like ARM devices, which wish to maintain a full copy of the dataset may need to dedicate 10-25% of CPU time just to handling incoming updates (i.e. each message arriving at 600-second intervals may require 60-150 seconds of CPU time).
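To put rough numbers on those two costs, here is a back-of-envelope calculation in Python using only the figures quoted above (1 MB per 10 minutes, ~100k full nodes, 10-150 seconds of validation time per block); the arithmetic is mine, not a measurement.

BLOCK_MB = 1.0
BLOCK_INTERVAL_S = 600
NODES = 100_000

blocks_per_year = 365 * 24 * 3600 / BLOCK_INTERVAL_S
growth_gb_per_year = BLOCK_MB * blocks_per_year / 1024
print(f"~{growth_gb_per_year:.0f} GB/year added to every full node's copy")
print(f"~{growth_gb_per_year * NODES / 1024:.0f} TB/year of storage across the network")

# CPU share spent just validating incoming blocks, for various node speeds.
for validate_s in (10, 60, 150):
    share = validate_s / BLOCK_INTERVAL_S
    print(f"{validate_s:>3} s per 1 MB block -> {share:.0%} of one core in steady state")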

The problem is that demand for transactions exceeds the hard limit. As a result, there is a queue of pending transactions, which currently stands at about 200 MB (or about 30 hours), with transactions removed from the queue in order of the fees they offer. This has led to spiralling fees as users try to outbid each other in order to have their transactions accepted. Transactions with low fees have recently been timing out after 2 weeks in the queue and have been discarded unprocessed.
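The fee auction falls out of how the queue is drained. Below is a deliberately simplified sketch in Python of the selection step: pending transactions compete on fee per byte, the top payers fill the 1 MB block, and anything cheap simply waits. Real node policy is considerably more involved (ancestor packages, eviction, minimum relay fees, etc.); the names and numbers here are invented for illustration.

import heapq

BLOCK_LIMIT_BYTES = 1_000_000

def select_block(pending):
    """pending: list of (txid, size_bytes, fee_satoshi).
    Greedily pick the highest fee-per-byte transactions that fit in one block."""
    # heapq is a min-heap, so negate the fee rate to pop the best payer first.
    heap = [(-fee / size, txid, size) for txid, size, fee in pending]
    heapq.heapify(heap)
    chosen, used = [], 0
    while heap:
        _, txid, size = heapq.heappop(heap)
        if used + size > BLOCK_LIMIT_BYTES:
            continue  # doesn't fit this block; it stays in the queue
        chosen.append(txid)
        used += size
    return chosen

# The large, low-fee-rate transaction "c" is crowded out and has to wait (or re-bid).
pool = [("a", 250, 50_000), ("b", 400, 20_000), ("c", 999_500, 100_000)]
print(select_block(pool))   # ['a', 'b']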

There has been an attempt to increase the transaction limit. However, because the bitcoin system is decentralised and based on consensus, such an upgrade would require 100% participation. Anyone who fails to upgrade would find their software completely broken, possibly silently so, such that they could spend bitcoins that would never be received, with no warning to this effect. A very complex solution has been developed which permits an increase in the limit with backwards compatibility (as well as fixing some minor security bugs), but as yet the reference node implementation has no GUI or meaningful CLI/API access to it; the network and accounting implementation is fully featured, but there is no practical way to use it, short of developing your own transaction bitstream generator, using some sort of middleware or a 3rd-party library with its own low-level API, or using a non-reference implementation of the software. As a result, use of this new format has been very limited, and only about 10% of the achievable capacity uplift has been realised.

This has seriously fractured the community. The incumbent developers take the view that the inherent non-scalability of the blockchain concept means that development efforts should be focused on layered solutions. Transactions in the main bitcoin blockchain should be large-value clearing transactions, which serve to aggregate large numbers of lower-value transactions made at a second layer, with microtransactions at a potential 3rd layer being aggregated into 2nd-layer transactions. Additionally, breaking changes should be avoided unless there is no other credible option, as not all participants run the reference code, and some may have made custom modifications which could require significant development time to implement new mandatory features.

Other groups have taken the view that convenience and immediately available capacity are more important, and have proposed breaking changes. One group, calling themselves bitcoin cash, changed the transaction limit to an 8 MB soft limit, based upon a configuration option settable in the node options; hence the network can be upgraded to a larger capacity simply by the majority of participants increasing the soft limit in their node configuration. It is estimated that there are around 10% as many "cash" nodes as bitcoin nodes, but transaction volume on the cash network is approximately 5% that of the bitcoin network. At the same time, the cash group has deliberately decided not to implement the bug fixes of the new transaction format, as without these fixes a layered solution is very challenging technically. Their answer to the issue of CPU and storage usage is that you shouldn't try to maintain your own copy of the database on inadequate hardware or network, and that if you have a slow CPU or slow network, you should use a thin client which connects to their servers.

A separate group proposed a 2 MB hard limit together with the new transaction format, with no other long-term plan. This project subsequently failed and was never implemented. The community rejected it, as the modified client was specifically designed to masquerade as the original client on the network, so that "crosstalk" between the various networks and databases could occur, either by accident or due to a malicious actor, potentially causing serious difficulties to users who wished to participate in both networks (e.g. exchanges, speculators/traders, adventurous users, etc.). There then followed an engineering arms race between the two factions - with countermeasures against crosstalk being developed, then counter-countermeasures from the other side so that crosstalk would remain possible to bootstrap the 2X network, then counter^3 measures against this. Finally the 2X group abandoned the project, but not before core had spent so much engineering time on countermeasures that their schedule for deploying new APIs, etc. had slipped by several months. In reality, 2X would never have worked anyway: an off-by-one error would have resulted in a fatal deadlock when the 2X rule activation was triggered.

Comment Re: Bitcoin is bound to fail (Score 2) 202

The longer-term intention is for bitcoin to act as a clearing layer on top of a separate microtransaction layer.

The specification for the microtransaction layer (known as the lightning network) is now published, and this specification uses a smaller denomination (1e-11 of 1 BTC) which is only rounded when payments are cleared to the bitcoin network.
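A minimal sketch of what that denomination choice means in practice, assuming (as stated above) that off-chain amounts are tracked in units of 1e-11 BTC and only rounded to whole on-chain units when a payment is cleared; this illustrates the unit arithmetic only, not the actual protocol.

OFFCHAIN_UNITS_PER_BTC = 100_000_000_000   # 1 BTC = 1e11 off-chain units of 1e-11 BTC each
OFFCHAIN_UNITS_PER_SATOSHI = 1_000         # smallest on-chain unit is 1e-8 BTC

def cleared_satoshi(balance_units):
    """Round an off-chain balance down to whole satoshi for the clearing transaction."""
    return balance_units // OFFCHAIN_UNITS_PER_SATOSHI

payment = 1_234_567                        # an off-chain payment in 1e-11 BTC units
print(payment / OFFCHAIN_UNITS_PER_BTC, "BTC tracked off-chain")
print(cleared_satoshi(payment), "satoshi when cleared to the bitcoin network")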

Comment Re:Bitcoin tumbles from record high (Score 1) 130

There are plenty of day traders, including some using margin.

There was a major market dislocation on some alt coins yesterday - you think bitcoin is volatile? It trades like a DJI stock compared to some of these alts, but people are jumping in with 4x leverage, then wondering why they suddenly have a zero or negative balance when there is a massive sell-off and the price crashes 90%.
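For anyone wondering how a 90% crash turns into a negative balance, the arithmetic is brutal at 4x leverage: the position is four times your own equity, so (ignoring the exchange's forced liquidation) a 25% move against you wipes the account and a 90% move leaves it deeply in the red. The numbers below are made up purely to illustrate this.

equity = 1_000.0              # trader's own funds
leverage = 4
position = equity * leverage  # total exposure

for drop in (0.10, 0.25, 0.90):
    remaining = equity - position * drop
    print(f"{drop:.0%} crash -> account balance {remaining:+,.0f}")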

Comment Re:pegged? (Score 1) 63

It provides a conduit for businesses dealing in cryptocurrencies to accept deposits from customers holding dollars who want to buy. Accepting USD or other conventional currency is governed by extensive anti-money laundering legislation and various other administrative requirements (KYC, etc.).

A cryptocurrency pegged to the dollar, and redeemable in both directions by a single entity, concentrates the legal burden on that one controlling entity.

The multiple other smaller businesses do not need to concern themselves with the legalities and technicalities of handling national currencies (particularly as many commercial banks still refuse to offer accounts to, or will close existing accounts of, businesses handling cryptocurrencies). With a pegged cryptocurrency, it is straightforward for a dealer to gain access to this: they just install the relevant cryptocurrency client software and interface it to their own software; tasks which are already fundamental to cryptocurrency dealing, and which will therefore be part of their core business skills.

Comment Re:Fukushima was older than Chernobyl (Score 1) 220

That is incorrect. The plumbing for the cooling system was damaged by the earthquake. The tsunami damage made it impossible to check it in the aftermath, and the fault went unnoticed until it was too late. That fault, specifically a key valve stuck in the wrong position, meant that the water that was pumped in to cool the reactors from fire engines was diverted to storage tanks. If it had reached the reactors then the explosions and meltdowns might have been avoided.

That's not really correct. The reason the reactors were not cooled is more complex, and more related to delays in getting equipment to the correct sites, as well as to the loss of redundancy in electrical systems (the switchgear taking mains power at multiple voltages, the generator power and the UPS systems were all located in the same room at ground level). As all key electrical switchgear and circuits were damaged at source, restoring electrical supply was very difficult - operators were carrying car batteries or portable generators around the plant, and activating valves and instruments by going directly to a target device and splicing wires to the battery or generator.

At Unit 1, the accident progressed very quickly. The emergency core cooling systems were powered by AC electricity, so once the diesel generators stopped, these systems were unavailable. The normal gravity-powered shutdown cooling system (located outside containment) was locked out by a failsafe in the containment leak detection system. Under leak conditions, the containment system is sealed and all valves/pipes penetrating the containment are automatically closed and locked. Due to the unanticipated event of UPS-protected circuits losing power while unprotected AC power circuits remained powered, the leak detection system went into failsafe mode, sealing containment and locking out the normal shutdown cooling system. (A simultaneous failure, or a failure of UPS power following failure of mains power, would not have resulted in such a lockout.) The lockout meant that even when operators built a 120 V battery out of car batteries and spliced it to the shutdown cooling system's operation valve, the system failed to operate.

The loss of cooling led to a rapid rise in reactor pressure, to a level above that at which fire pumps would be capable of injecting water. However, by the time injection started (14 hours after the loss of cooling), the reactor had depressurised itself. The likely explanation is that the core had already melted, and that the combination of molten debris, high heat and high pressure had caused the reactor pressure vessel to rupture. Of note is that a substantial amount of time was spent equipping the shutdown cooling system with auxiliary water supplies from fire pumps, in the mistaken belief that it was operating normally.

At units 2 and 3, the progression was slower. These reactors had steam-powered emergency cooling systems, which started and continued to operate long after their 4-hour design target (12 hours for unit 3, when it was manually switched off in preparation for water injection; 72 hours for unit 2, when it stopped on its own). However, as these systems were steam powered, they required the reactor to be pressurised. Because manually depressurising the reactor would stop cooling, operators waited until the cooling systems had stopped on their own before attempting injection.

At both units 2 and 3 there were long delays (nearly 12 hours) while operators tried to depressurise the reactors and verify depressurisation prior to injection commencing. This would have been sufficient to allow core melt in both cases, as in both cases valves had to be operated manually by connecting compressed air cylinders directly to the pneumatically operated valves. This was done, and injection started. However, injection was inadequate. The reason for the inadequate injection is not clear, but the most likely explanation is thought to be that the pumps used for water injection simply were not powerful enough to achieve an adequate flow rate against the expected pressures, and/or that high containment temperatures/pressures resulted in back pressure on the reactor depressurisation valves' pneumatic cylinders, causing them to close, the reactor to repressurise and the injection flow to stop.

Delays in venting the containment exacerbated this. In Japan, the government required the containment to be vented only as a last resort when containment rupture was imminent; as a result, the government ordered the containment vent system to be fitted with burst discs specifically to prevent manual venting, except when pressure was so high that the burst discs would rupture. By contrast, the US model of the same plant did not have burst discs, and operators would have been able to vent the containment at any time; indeed, recognising that the BWR I containment would rapidly pressurise under accident conditions, the NRC issued standing instructions to operators of BWR I/II plants that the containment MUST be vented immediately as soon as reactor cooling is threatened.

In reality, at units 2 and 3, the operators came reasonably close to preventing major core damage - but miscommunications (e.g. manually turning off the emergency cooling system at unit 3 in the belief that water injection was ready, when in reality there were several more hours of preparation work to do) and delays in operating valves under difficult conditions doomed the effort.

Comment Re:Fukushima was older than Chernobyl (Score 4, Informative) 220

The BWR 1 containment is a small containment. The small volume has the advantage of smaller diameters, hence supposedly the hoop stress should be smaller under pressure loads, making it relatively material-efficient and easy to build. However, this was a very early design, and when the Mk3 containment was being designed, more robust analytic techniques revealed some significant concerns about the overall containment strength. In the US, the BWR operators formed a consortium to investigate and mitigate these problems, which they subsequently incorporated into their plants. In turn, this led to a number of lawsuits against GE, as the cost of the upgrades was substantial.

Additionally, the small containment volume and the small volume of in-containment water available to act as thermal mass give very poor performance against prolonged, simultaneous failure of containment cooling and reactor cooling, which results in heat being dumped into the containment. Prolonged total electrical failure was not anticipated at design time, and led to exactly this situation at all 3 Fukushima units. This led to rapid rupture of the containments once reactor cooling was lost. The latest reactor designs currently in construction have containment volumes approaching 10x that of the BWR 1 containment; as a result, pressure rises in accidents would be substantially lower and slower.

This risk was recognised by the manufacturer and the NRC (in their document NUREG-1150), and in 1987 the NRC published a circular to all BWR plants in the US instructing plant operators that, if reactor cooling is threatened, they should initiate containment venting as a matter of the highest priority; this would result in a controlled, filtered release, but prevent containment rupture and a long-term uncontrolled release.

In Japan, this risk was not acted upon. Whether it was communicated by the manufacturer to the government is not public. However, the TEPCO management had a policy where reactor operators were not authorized to initiate containment venting on their own, and required direct authority from senior management. Due to difficulties in communication, it took hours before the request was acted upon. At that point, rather than authorize venting, senior management decided to refer the matter to the government. Logs from the plants show that in all 3 cases, containment pressure dropped substantially before venting was finally authorized, indicating that the containments had ruptured during the delay for authorization.

Comment Re:Reasons not to use cryptocurrency (Score 4, Informative) 141

The mean quantity of work required to "mine" a "block" of bitcoin transactions is given by the equation W = D * 2^32, where D is the "difficulty level" (currently 1.2e12), and W is the number of hash operations. In other words, one block requires on average 5e21 hash operations.

The most efficient hash device available on the open market (and also used internally by the manufacturer for their own mining purposes) is the Antminer S9, based on 16 nm lithography ASICs. These have a specific energy consumption (this parameter is typically quoted as the main figure of merit for bitcoin mining systems, so is widely available for almost all mining hardware) of 60 pJ/hash.

From these figures, we can calculate an energy requirement of on average 300 GJ per block - or about 83 MWh.

A full block can contain approx 2000 transactions, giving a total energy consumption of approx 40 kWh per transaction at maximum transaction throughput (specific consumption is increased if transactions per block are reduced due to a low transaction rate).

Note that the above energy consumption figures are based on the ASICs only, and do not include power supply/distribution/conversion losses, as well as miscellaneous control devices/servers/networking. Add in these losses, and you could be looking at 45-50 kWh per transaction.
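For anyone who wants to check the arithmetic, here is the calculation above in a few lines of Python; the only number not quoted in the text is the ~15% overhead factor, which is an assumption on my part.

DIFFICULTY = 1.2e12        # current difficulty level, as quoted above
JOULES_PER_HASH = 60e-12   # ~60 pJ/hash for the 16 nm ASICs
TX_PER_BLOCK = 2000        # approx. transactions in a full block
OVERHEAD = 1.15            # assumed ~15% for PSU/cooling/servers (not from the text)

hashes_per_block = DIFFICULTY * 2**32                    # W = D * 2^32, ~5e21
energy_per_block_j = hashes_per_block * JOULES_PER_HASH  # ~3e11 J
energy_per_block_kwh = energy_per_block_j / 3.6e6        # J -> kWh

print(f"{hashes_per_block:.1e} hashes per block")
print(f"{energy_per_block_j / 1e9:.0f} GJ = {energy_per_block_kwh / 1e3:.0f} MWh per block")
print(f"{energy_per_block_kwh / TX_PER_BLOCK:.0f} kWh per transaction (ASICs only)")
print(f"{energy_per_block_kwh * OVERHEAD / TX_PER_BLOCK:.0f} kWh per transaction (with assumed overheads)")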

Comment Re:Shit is about to hit the fan: (Score 5, Informative) 66

It is generally a requirement that the authors sign over the copyright to the publisher as a condition of acceptance for publication, unless they pay the "open access" fee. The authors can, of course, retain the manuscript, but in general they lose the right to distribute the manuscript or republish it. Exact terms and conditions vary, but in general the publishers have more control over post-peer-review versions of the manuscript as, technically, they have had some creative input.

For example, all but the strictest journals tend to permit personal redistribution of the original submission manuscript, before any recommendations from peer review have been incorporated, whereas reproduction of post-peer-review accepted manuscripts tends not to be permitted except via the publisher. Some journals even go so far as to state that republication of accepted papers is strictly forbidden, and that if they discover republication (e.g. to a pre-print server such as arXiv), they reserve the right to issue a public retraction of the paper stating scientific misconduct by redundant publication.

ResearchGate has been sailing pretty close to the wind. They have been advising authors that their papers may be republishable under the agreements with the journal publishers. E.g. some journals permit the authors to distribute the text and figures of the accepted manuscript (but not the final typeset document, i.e. the journal PDF file) on a "personal" basis, such as "on an author's personal web site". RG have been actively encouraging authors to do this, on the basis that their RG profile page is a "personal web site"; every time I publish a paper, I get a ton of spam from RG begging me to upload a copy. I've always regarded this as somewhat dubious legally, as it is quite clear that the main purpose of RG is precisely to facilitate this sort of sharing. Indeed, I'm somewhat surprised that it has taken this long for the publishers to begin taking it seriously.

Journals vary in their policies towards open access. Some journals demand substantial "open access" fees of $2000-$5000 per paper. Other journals are often more progressive and charge much smaller fees ($500-$750), and even if you don't pay, they release the papers under a CC license 12 months after first publication. There is also the issue of predatory new open-access journals, which are quite happy to take the open access fees upfront, but offer no real peer review and will publish any nonsense.

While major funded research will often come with a condition that the papers should be open access, and there will be provision in the funding to pay the fees, not all research is done like this. I have done several projects on small grants from educational endowments (e.g. $2500 to pay for research materials), supplemented by myself and other researchers volunteering our time for free - but in such cases, there is no funding to pay the fees.

Comment Re: Same old, same old (Score 1) 112

L1C has not been deployed yet, even in part. The first satellite with L1C hardware is expected to be launched next year. The L1C signal is similar to, and compatible with, the L1 signal used in the EU Galileo system, but with the benefit of some further modernisations.

While the L5 signal has not reached full operational capability, it is nevertheless partially available with 12 functioning satellites. Similarly, the Galileo system has 12 fully operational satellites with L5 capability, resulting in a combined 24 satellites, which could be regarded as a fully functional space segment.

Comment Re:government or technology L5. (Score 2) 112

L5 is used by multiple systems. It was first proposed for the EU system, and test signals were first transmitted in 2005. Recognising the value of multiple frequencies and chip rates for a civilian system, the US decided to implement an L5 signal in GPS, and started to test it in 2010. Both systems started transmitting functional navigation signals on L5 in 2015.

At present, neither GPS nor Galileo has deployed enough L5 capability to provide reliable operation independently. However, there are now sufficient L5 satellites that a dual-system L5 receiver would be able to work by combining both services.

Comment Re:government or technology restriction? (Score 3, Informative) 112

It used to be the case that errors were intentionally injected into the "coarse" signal. However, the encrypted "precise" signal (reserved for military use) was left untouched.

After some minor SNAFUs during the 90s Gulf war, with allies being unable to source an adequate number of military GPS receivers and needing to fall back on civilian gear, the military decided to turn off the error injection temporarily during periods of conflict. As this made a nonsense of the deliberate error injection, which was intended to prevent enemies from obtaining the strategic advantage of GPS, the US government decided to end the error injection and switched it off permanently in 2000.

There are technical differences between the "coarse" and "precise" signals, which allow for better accuracy when both can be received and processed together. (The precise signal has a higher "chip rate" which allows its phase to be measured more precisely, and by using 2 frequencies, the signal dispersion in the atmosphere can be directly measured, rather than relying on a general model).
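As a concrete illustration of the dual-frequency point, the first-order ionospheric delay scales roughly as 1/f^2, so pseudoranges taken on two frequencies can be combined to cancel it instead of relying on a broadcast model. The sketch below uses the standard "ionosphere-free" combination with the GPS L1/L2 frequencies; the delay values are invented for the example.

F_L1 = 1575.42e6   # Hz
F_L2 = 1227.60e6   # Hz

def iono_free(p1_m, p2_m, f1=F_L1, f2=F_L2):
    """First-order ionosphere-free pseudorange combination, in metres."""
    return (f1**2 * p1_m - f2**2 * p2_m) / (f1**2 - f2**2)

# Example: true range 20,000 km, with a frequency-dependent ionospheric delay.
true_range = 20_000_000.0
delay_l1 = 5.0                            # metres of delay on L1 (assumed)
delay_l2 = delay_l1 * (F_L1 / F_L2)**2    # larger delay on the lower frequency
p1 = true_range + delay_l1
p2 = true_range + delay_l2
print(f"L1-only error:  {p1 - true_range:.3f} m")
print(f"combined error: {iono_free(p1, p2) - true_range:.3f} m")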

As multiple other countries/multinational governments have developed their own GNSS technology, there has been migration of some of this "military grade" technology into the civilian space. There was some major political wrangling in the early 2000s, when the EU announced that their satellite navigation system would offer not just the classic civilian signal, but also a free-to-use, upgraded second-frequency "intermediate precision" civilian signal (giving most of the benefit of the US military signal), and an encrypted (paid subscription) commercial signal equivalent to, or better performing than, the US military signal.

However, the political objections from the US dried up, and the newest US GPS satellites now offer similar upgraded free-to-use signals to the EU systems. China has done the same with their latest satellites.

Although full roll-out of satellites offering the upgraded signals is not complete, there are now sufficient satellites offering the upgraded free-to-use signal (known as L5) that receivers with L5 capability can be expected to work out of the box.

Comment Re:Dual EC DRBG stuff...old news (Score 2) 104

New "ciphers". Same trick. However, ISO committee members didn't just approve the new ciphers like they did dual EC DRBG. They keep getting voted down, as not suitable for publication as an ISO standard, but the US keeps pushing ISO to accept them as an international standard.
