
Comment Re:Canada fully Independent (Score 1) 295

The advantage with Canada's independence is that we got it by asking nicely and without anyone having to die

Lots and lots of Canadians died prior to the Imperial Conference of 1930 (and the subsequent Statute of Westminster, 1931), with hundreds dead in actions specifically to free Canada from control by the British Cabinet. It is precisely on the basis of those deaths that Mackenzie King (cf. Mackenzie, several paragraphs below) was able to lead the Conference to the principle that all the Dominions should have both legislative and foreign-policy independence and control of their own militaries.

After 1931, even the formal ties were effectively cut: the Judicial Committee of the Privy Council had a subcommittee of Canadian judges with *exclusive* appellate jurisdiction, and the British government agreed to pass without amendment any constitutional legislation Canada required, provided the federal and provincial governments were in agreement (agreement that was conspicuously absent until 1982, and even then at least one Province claims to have withheld critical agreement on the formalization of the amending formula and the entrenchment of specific Acts).

1931 also marked the final time when Canadian troops would be summoned by the Imperial government to fight in wars directed by the Cabinet in London.

Compare this with the various post-Boer War mutinies by Canadian troops that were condoned by the Canadian government. An example: conscripted Canadian troops were held in awful conditions in Wales because of a conflict between the British and Canadian governments over demobilization and repatriation policy (the British government was fairly plainly trying to keep the Canadians in service, in part because they were cheaper and less politically connected than English troops):

"In all, between November 1918 and June 1919, there were thirteen instances or disturbances involving Canadian troops ... The most serious of these occurred in Kinmel Park on 4th and 5th March 1919, when dissatisfication over delays in sailing resulted in five men being killed and 23 being wounded. Seventy eight men were arrested, of whom 25 were convicted of mutiny and given sentences varying from 90 days' detention to ten years' penal servitude." [Nicholson, Official History of the Canadian Army in WW I]

This sort of thing led to the absence of conscription in Canada during the first part of WW II. A plebiscite/referendum on the question of conscription (in April 1942) led, through a series of compromises, to the result that few conscripts actually left Canada and fewer still (under three thousand) ended up on the front lines -- the Canadian conscripts were mostly deployed to free up volunteers (and British conscripts...) from non-combat posts. It is entirely possible that had the Canadian government caved in to British demands for troop numbers and introduced conscription early in the war, Canada would have exited WW II before Pearl Harbor. Indeed, it is mainly Pearl Harbor and the entry of the United States into the war that allowed the plebiscite to pass at all.

Earlier in Canadian history there were even small-scale uprisings -- one might even call them revolutionary or civil wars -- that led to deaths and reprisals. Among them were the rebellions of 1837-1838: William Lyon Mackenzie (Mackenzie King's grandfather) declared a Republic of Canada and led armed skirmishes in what is now southern Ontario, while Papineau, Storrow Brown, Chenier, Oklowski and the Nelsons led armed uprisings in and around Quebec City and Montreal -- a couple of hundred dead altogether, thousands wounded, and scores of executions and deportations to Australia. There were occasional low-level disturbances of the peace in Ontario, Quebec, New Brunswick and Nova Scotia more or less until the end of the U.S. Civil War. At that point the anti-Republican parties that controlled the confederating governments, together with the British government, agreed that self-rule, full representation-by-population and universal adult male suffrage (the latter two of which Britain itself still lacked in spite of the (Great) Reform Act of 1832) were the only way to head off an eventual second North American colonial revolution, with Britain and the United Empire Loyalists -- and more specifically the supporters of the British Tories -- again on the losing side.

Comment Re:What is Solaris good for? (Score 1) 99

OpenSolaris is old and discontinued. OpenIndiana is a CDDL fork of OpenSolaris, rebased onto what's now called Illumos (http://illumos.org/), and is one of several Illumos "distros".

OpenIndiana was meant to be an answer to desktop Linux. It did not do especially well in terms of uptake, for much the same reasons that desktop Linux itself has struggled. However, there are a variety of other distros which are more server-oriented, and they are fairly popular.

They include, for example, SmartOS (used by http://joyent.com/ for multitenant hosting and for their own software development), OmniOS (used mainly for single-tenant hosting and for software development at http://omniti.com/), Nexenta (used for building large storage systems), and Delphix (a database virtualization platform).

They all rely on the debuggability of Illumos (mdb, dtrace), its virtualization (zones, now including Linux branded zones, crossbow, kvm), its services (NFS and iSCSI in particular, plus various others like SMB), OpenZFS, and a variety of other useful features. One of those is the pervasive use of threading for parallelism and concurrency, even under light use, and the threading systems scale well: OpenZFS alone typically uses a couple of thousand threads, hundreds of thousands of mutexes, and many condvars, all of which grow with load; other kernel subsystems are similar.
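
As a taste of that debuggability: this classic DTrace one-liner (run as root on any Illumos distro; press Ctrl-C to stop) counts system calls by process name, live, on a production system:

dtrace -n 'syscall:::entry { @[execname] = count(); }'

The same provider model reaches into the kernel subsystems mentioned above, which is a large part of why these distros are popular for serving storage.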

It's fairly common for computer services departments in universities, laboratories and so forth to use e.g. an OmniOS server in front of a large storage pool, offering up iSCSI, NFS and other shares to clients, or alternatively SmartOS in front of a large storage pool, offering up lightweight VMs to clients.
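
For the NFS case, a minimal sketch (the pool and dataset names here are hypothetical; iSCSI targets would be configured through COMSTAR instead):

zfs create tank/export
zfs set sharenfs=on tank/export

Clients can then mount server:/tank/export, and the share persists across reboots because sharing is a dataset property rather than a line in a config file.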

Oracle's Solaris has diverged from Illumos (and vice versa). The key features are similar, but Oracle has been targeting much higher-end applications -- much larger and busier storage pools, especially very heavily random-access ones (big Oracle databases are one application). Like Illumos, it can run very well on hardware with huge numbers of cores (including hyperthread-like cores). Unlike Illumos, it's not developed in the open (and is not open source), but it is well-supported enough that expensive contracts get you fixes and sometimes features quickly. Illumos had been slower until fairly recently, for reasons including the inability to do a fully self-hosted build (it relied on nonstandard build tools) and an idiosyncratic source code repository, both of which have been fixed in the past few weeks.

Comment Re:The kilogram is based on a chunk of metal? (Score 1) 278

The metre and the second were closely related from the start; the relationship is through the hydrostatic equilibrium of an object of Earth's mass and angular momentum, both of which were fairly precisely understood in the late *17th* century, at least a hundred years before the SI metre was officially adopted.

There is a deep connection between the metre as one ten-millionth of a quarter-meridian of the Earth and the metre as the length of a pendulum arm with a half-period of one second. That connection is unfortunately distorted by gravitational anomalies arising from crustal mass concentrations, tidal effects, and nonuniformities of the Earth's rotation, which make the physical realization of a pendulum-based metre awkward (it can be done, but requires corrections based on time and location; it's rather harder to do on a ship in less-than-calm conditions, however...).

The longitudinal survey definition won out because its errors at the time were smaller and the corrections were easier to generate and tabulate in almanacs.

The gravity/arm-length/time relationship is described here:

https://en.wikipedia.org/wiki/...
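
For reference, the standard small-amplitude pendulum relation is

T = 2\pi \sqrt{L/g}

so a half-period of one second means T = 2 s and L = g/\pi^2 \approx 9.81/9.87 \approx 0.994 m -- within a percent of the survey-based metre, which is why the two candidate definitions were competitive in the first place.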

In any event, the metre is now mostly deparochialized in that it doesn't directly depend on the gravitational field sourced by the Earth or its rotation or orbit. You can make a practical realization of a metre anywhere in the universe where you can measure the speed of a massless particle.
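
Concretely, since 1983 the SI metre has been fixed by the defined constant c = 299,792,458 m/s: one metre is the distance light travels in vacuum in 1/299,792,458 of a second, so any lab that can time light over a known path (L = c t) can realize the unit.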

Most people who grow up with exposure only to SI have no problem in using decimal fractions of SI units, including metres. Indeed, people who can deal with fractions have no problem applying them to SI units casually. Half a kilometre. Three quarters of a litre. Just like people who grew up with U.S. customary units will say things like half a gallon or a quarter of a mile.

People who are exposed to a different system of measurements only in adulthood are perhaps afraid of looking ignorant or, worse, of appearing mentally unable to learn the system that is new to them.

Comment Re:This is huge (Score 1) 214

ER=EPR is designed to avoid superluminal representations of the Poincaré group (which is the symmetry group of Special Relativity, and which has "c" as its sole free parameter, corresponding to the speed of a massless particle; photons are expected to be massless).

Avoidance of non-locality even gets an explicit mention in section 3.1 of the Maldacena & Susskind paper http://arxiv.org/abs/1306.0533

So, no, ER=EPR does not imply non-locality.

(It's mostly designed to try to preserve AdS/CFT in the face of the AMPS paradox, which strongly suggests that AdS/CFT gauge/gravity duality, semiclassical gravity as an EFT outside the horizon, unitarity, and the "no drama" conjecture (and thus the strong Einstein Equivalence Principle) cannot all be simultaneously valid. However, the introduction of a truly huge number of wormholes to a model of the universe is not calculationally attractive, and does not really help with intuiting the internal state of physical black holes any more than AdS/CFT has done so far. Additionally, it requires a modification of QFTs such as the Standard Model for at least some infallers (cf. p 36 of Polchinski's http://www.slideshare.net/joep... ).)

Comment Re:So now we have a new paradox... (Score 1) 172

No, it still vanishes; however, an imprint of the egg persists on the floor (though in the short run invisible, even in principle, to anything not actually in or under the floor) such that it interferes with the thermal radiation the floor produces on a cold cold cold day in the far future. Careful examination of that thermal radiation will show that the mass-energy-momentum of the egg reached the floor at some point in the past, but will be insufficient to reconstruct the egg.

(Additionally, there's the interesting point that the dropped egg is in free-fall until it hits the floor, at which point it experiences a dramatic acceleration. The "no drama" conjecture holds that the dropped egg would pass through the event horizon without experiencing an acceleration, and it in turn is based on the (strong) Einstein Equivalence Principle. One of the reasons Hawking is even interested in this is the question of whether the EEP is preserved in a resolution of the AMPS (Polchinski et al) paradox, and his and his collaborators' solutions rely upon the BMS symmetry (and in particular supertranslations). Their argument suggests that the whole spacetime outside the black hole biases the Hawking radiation when the black hole evaporates, but this raises a number of so-far-unanswered questions (presumably this will form part of a future paper).

The biased Hawking radiation means that the entanglement energy of the swallowed half of entangled pairs ultimately escapes to infinity (they claim that this is background-independent, but that's something else which will have to be demonstrated in a forthcoming paper), and so there will be no firewall at the (inner) event horizon (of a Kerr black hole).

So there's no splattered egg. It may be (sort-of) splattered under the floor. (Both GR and semiclassical gravity predict this, but also that the splattering will be unseen above the floor, and that the precise behaviour of the microscopic components of the egg depends on the behaviour of strongly curved spacetime and quantum fields, for which current theories tend to make inconsistent or even incompatible predictions).)

Comment Re:Of course it never gets past the event horizon. (Score 1) 172

As you point out later in your post, an event horizon is mostly a concept relevant to external observers, not someone falling into a black hole

To reiterate what AC says, no, an event horizon is *especially* relevant to an infaller, because it's the boundary formed by the set of points surrounding a region of spacetime at which all null geodesics lead inside that boundary and ONLY inside that boundary (or outside it in the case of a cosmological event horizon).

Hawking's argument about apparent black holes effectively says [a] that no such point exists if the black hole can evaporate and [b] that enough quantized properties of infallen matter can be recovered from the configuration of all the fields local to the black hole (including the gravitational field) that conserved quantities stay conserved. (That's especially relevant for entangled pairs, and much less relevant for things like baryon number, in the Wilson sense of relevant.)

It's kinda interesting to think of the picture for a cosmological event horizon in a universe that transitions from expansion to contraction -- the galaxies that are being carried across our Earth-centric cosmological event horizon could, with a suitable evolution of dark energy (e.g., if the metric expansion can accelerate, why can't it decelerate and reverse?), wind up being carried back in. They would still look like ordinary galaxies when they popped back into view, just like they did when they faded out, because conditions just outside the observable universe are almost certainly very similar to those just inside. (That's also true for Schwarzschild black holes, for a very careful definition of the spacetime region "just inside" the horizon, assuming that the "no drama" conjecture is correct, but that definition doesn't practically save an infaller that is self-gravitating (like a star freefalling through a supermassive black hole horizon) or bound together electromagnetically (like a person freefalling through a stellar black hole horizon).)

So our cosmological event horizon is probably real (since the metric expansion is unlikely to reverse) -- maybe black hole event horizons are too (possibly for the same reason: background dependence, e.g. \rho_{crit} in the standard model of cosmology).

Comment Re:Of course it never gets past the event horizon. (Score 1) 172

The black hole per se is not an emitter. Unruh radiation depends on the observer-dependent aspects of the horizon (any horizon; it's also true for the cosmological horizon) and is very dim and very similar to a very cold blackbody; while some observers will see it brighter and warmer than others, this is not true for any observer free-falling towards (and through) a horizon, because the Unruh radiation is also in free-fall. You can reverse the picture and ask why a free-falling infaller does not get ionized by the light of distant stars as she approaches the horizon, and arrive at the same answer: the starlight is also in free-fall.
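
For concreteness, the standard Unruh result is that an observer with proper acceleration a sees a thermal bath at

T = \hbar a / (2 \pi c k_B)

which is tiny for any achievable acceleration (about 4 x 10^-21 K per m/s^2) and exactly zero for a free-falling (a = 0) observer -- the formula-level version of the argument above.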

The picture is a little different for non-freefalling infallers: one that sees a big dipole redshift for distant starlight (ignoring the big Einstein lens in front) will see a shift for the horizon radiation as well.

A "true" horizon is not observer-independent -- if a boundary exists at which all available null (and by extension timelike) geodesics only lead inside that boundary, it's a true event horizon. Hawking has argued that they do not exist in any background (that's contentious) and instead says that only apparent horizons exists. The geodesics that exist at an apparent horizon are awfully messy, and the topic for him and his collaborators for the next couple of days will be explaining them for all backgrounds. As of now, I think nobody but them can do anything but guess about their explanation.

(Recovering information from a black hole that has only a (possibly long-lived) apparent horizon and no real event horizon involves supertranslations, which form part of the Bondi-Metzner-Sachs symmetry group, of which the Poincaré group (the symmetry group of Minkowski spacetime) is a subgroup; there's lots of headscratching about *how* one does this exactly, but certainly the BMS symmetry group provides many (even infinitely many) extra degrees of freedom into which one can move information about a particle in curved spacetime. (But doing so classically is both hard and maybe not complete, because you can set up pretty realistic toy models in which you cannot actually extract the metric sourced by quantum particles.))

Comment Re:Of course it never gets past the event horizon. (Score 2) 172

The experience of a classical infaller (or an observer of a classical infaller) is not really relevant in this story (but please see my final paragraph). Hawking is trying to deal with the AMPS (Polchinski et al) firewall paradox, wherein an entangled (quantum) pair has one pair partner fly off towards infinity with the other remaining gravitationally bound to the compact dense object that has a horizon.

AMPS strongly suggests that at least one of the following must be false: semiclassical gravity as a valid EFT right to the horizon, gauge/gravity correspondence (in particular AdS/CFT as a useful tool in probing energies higher than the EFT limit), unitarity, and the "no drama" result from General Relativity (which is pretty solidly rooted in the EEP).

Hawking is attempting to preserve all of the above by arguing that the non-escaping pair member ultimately escapes to infinity. In an expanding universe, the non-gravitational field content dilutes away (and consequently cools), which leaves the horizon temperature relatively warm for most observers at a distance from the dense compact object. All horizons are observer-dependent (a standard result from General Relativity); all horizons emit a very nearly thermal spectrum (an accepted result from semiclassical gravity, an area in which Hawking did a lot of the work, hence the term Hawking radiation); that spectrum lifts energy away from the dense compact object (an accepted conjecture -- that's black hole evaporation); and when that spectrum is warmer on average than the temperature of the local non-gravitational field content, that evaporation is relevant.

Even in an expanding universe there are local configurations of field content in which dense compact objects persist forever, by exchanging evaporation energy with each other, directly and indirectly; the evaporation energy heats up local diffuse field content, which is then ingested by the black hole, decreasing its horizon temperature (black hole horizon temperature being inversely proportional to mass). An eternal configuration of "dark grey holes" is a possible result, and thus Hawking's proposal is incomplete, since it only resolves the 4-way conflict in particular configurations of an expanding universe. That such configurations are physically reasonable (or even more probable) does not really matter.
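
The inverse proportionality invoked here is the Hawking temperature of a Schwarzschild hole,

T_H = \hbar c^3 / (8 \pi G M k_B)

roughly 6 x 10^-8 K for a solar-mass black hole; anything much more massive than the Moon is already colder than the 2.7 K CMB and is a net absorber today.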

Your post correctly captures several aspects of the problem. You expect no drama as you reach an event horizon (specifically the point at which all available timelike geodesics lead inside the horizon), because horizons depend on the details of the configuration of events (including those of the infaller and things that can interact (say, electromagnetically or gravitationally) with the infaller). That is, while the definition of an event horizon is sharp, its coordinate location is observer dependent. As you say, a (classical) infaller crossing the real horizon may not even notice it. However, what about an entangled pair-partner?

Breaking an entanglement transfers mass-energy-momentum (in flat spacetime one would say it releases energy) and in a local theory, that must be sourced by one or both pair partners. If we have lots of such breaking pairs, we have a large amount of energy just inside the horizon -- a firewall.

Hawking tries to step around that by saying that there is no place in the universe where all timelike geodesics point inside a small region of spacetime. That is, all black holes ultimately fully evaporate. And, even if half of a pair is local to a compact dense object for a lonnnnnnng time (many trillions of years), there is no breakage of entanglement, and so no release of entanglement energy. Thus there is no conflict with "no drama", there is no breakdown of semiclassical gravity in the low energy limit (because you don't get probably-unphysical superpositions of the metric sourced by each half of the pairs), gauge/gravity remains useful (because you can still focus on the black hole surface area), and quantum fields evolve unitarily (because nothing stays local to the dense compact object forever).

But if even one black hole anywhere in the universe refuses to evaporate (for instance, because it is so large that it is always colder than the *cosmological* horizon, which also produces a very nearly thermal spectrum), Hawking's argument falls apart. The likely accelerating expansion of the universe already imperils his solution for really super super massive black holes, which are not forbidden.
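
A rough sketch of why (treating the Hubble rate H as constant): the cosmological horizon has the Gibbons-Hawking temperature T_{dS} = \hbar H / (2 \pi k_B), and setting T_H = T_{dS} gives a critical mass M = c^3/(4 G H) -- around 10^22 solar masses for today's H, vastly larger than any known black hole, but, as noted, not forbidden.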

Your analysis is pretty good (for classical infallers); you might want to think about your last long paragraph in terms of an entangled infaller (whose entanglement partner is far away from the black hole), or a classical object made up of entangled particles (again with the entanglement partners far away from the black hole). Additionally, think of the case where the temperature of the CMB alone is always equal to or hotter than the horizon temperature of the (really massive) black hole, both for classical infallers like the one your post thinks about, and for entangled infallers. (You can also consider entangled infallers that "somehow" (there are various mechanisms) appear right at the horizon, with one half going inside the horizon (for some period of time; think about short and really really long periods) and the other half going to infinity right away -- it may help to think of neutrino/antineutrino pairs as they are not likely to interact much with any accretion disk material).

Comment Re:wft ever dude! (Score 1) 215

Note that there is a difference between routing logic and forwarding logic.

The latter is arguably simplified in IPv6; the former is essentially identical.

Variable Length Addresses were demonstrated by the TUBA team in 1994, with both Cisco and Proteon demonstrating slow and fast CPU paths and hardware assistance. The cost of handling fully variable lengths was noticeable, but vanished when a common length was chosen with uncommon lengths gated, rate-limited, quenched or otherwise controlled sourcewards.

In modern forwarding engine implementations using a dual between an m-way trie and associative real memories, the cost of a full VLA is now in the noise even for arbitrary streams of random-length VLA headers; the hard part is *still* the generation of the associative arrays from the routing tries. That is, the *routing* problem is the hard problem, not the forwarding. And VLAs can simplify the routing problem if they are designed with involuntary (proxy) aggregation in mind.

The early-1990s rejection of ideas from various IPNG proposals did not anticipate a multi-decade roll-out of the minimal changes settled on in SIP+PIP (which became IPv6), nor did the settled design have any stubs whatsoever for adjusting the on-the-wire format in the future.

This exposes the biggest single problem with the ROAD/IPNG/IPv6 process: almost no thought was given in the working groups (which became increasingly detached from operators and middle-box vendors, and were dominated by systems vendors) to deployment scenarios that were very gradual and very local, with n-level enclaves of systems running just one protocol stack (e.g., an IPv6-only bubble inside an IPv4-only bubble attached to the Internet via an IPv6-only gateway), and the hacks that have been developed to deal with such situations (which have arisen in real life) are at least as awkward as IPv4 NAT+address overloading.

IOW, it was all end-system-software-think and little to no thinking about broader issues on end systems (ones that are multiply attached to the rest of the world, notably, or ones that migrate from one network to another rapidly), and even less about routers (especially not routers that are themselves mobile).

The slogan "every client is also a server" should have been extended to "... and also a first-class router", which would likely have led to a better overall design for IPv6, and faster deployment.

Comment Re:There is a balance between article 8 and 10 (Score 3, Informative) 401

"What kind of idiots actually write such things?"

In most of your particular extracts, it was mainly the administrators of the Marshall Plan, namely American and British politicians, and principally Sir David Maxwell-Fyfe MP (as he then was), taking inspiration from the work of John Peters Humphrey.

Codification was considered a good idea to avoid relitigation of common exceptions and strikings-of-balance, and to avoid imposing the need to reference foreign case law on the non-common-law countries that would agree to the document. Conflicts arose as well because some well-organized political party groupings (Christian Democrats in particular) threatened to slow ratification in a number of states simultaneously.

I think the vast majority of the people of Estonia would disagree with your assessment of the ECHR; it was a live issue in their accession referendum and is far better than the Soviet equivalent in every practical way.

Likewise, at the time it was written, Nazi laws were still on the books in the various sectors of Germany, and the legal system was a mess in all the different occupied sectors. Getting it working in *all* the sectors of occupied Germany (and Austria), including the Soviet sector, was an explicit goal of the convention, and it actually succeeded in that respect for about a decade.

Finally, the document is a live one, and PACE proposes changes to clarify conflicts, to strengthen individual rights (that's the main theme) and subsidiarity, and so forth. PACE is made up of parliamentarians from each of the COE's member-states, which means it is mostly parliamentarians from EU member-states, and since so few of their constituents engage with them on PACE, a letter written to one in an arbitrary EU member-state is likely to be looked at seriously. Maybe you could put your questions to one of them, or make some suggestions for improvement? "Just scrap it" is something they hear a lot more often from non-politicians than "fix it like this, and I'd be happier".

Comment Re:Good (Score 3, Interesting) 401

See my comment here: http://yro.slashdot.org/commen...

Roughly, and in terms of English law, the ECtHR ruling upheld the Estonian Supreme Court's ruling (and that of several lower Estonian courts) that "L" was defamed, and that Delfi AS exacerbated the defamation by its actions, incurring a small liability for damages. Delfi admits there was the equivalent of defamation in the comments and that it was fairly treated, procedurally, in the Estonian court of first instance. Its argument that the Estonian law on defamation is in conflict with the ECHR has been rejected by almost everyone who has heard the case. I'd be very surprised if its advocates at every stage had not suggested that its hands in the matter were not clean enough to pursue it through the courts with any hope of success.

The ruling is not a disaster, IMHO. It tries to balance the general right of freedom of communication against the general protection from lies calculated to injure reputation (and/or income and/or the quiet enjoyment of life without fear), and to make striking such balances easier for more local courts and legislatures.

In brutal terms, the quantum of damages assessed by the court of first instance against Delfi was very small -- a mere slap on the wrist -- and the ECtHR took that tiny figure into account in considering the reasonableness of the law and its application. Other courts should too. Nobody went to prison, lost their business, or the like. Delfi consequently should pay costs in the appeal -- they insisted on their right to have their day in court on a small matter, and lost.

Finally, since you ask in another comment below, this would not in any way prevent someone assessed a much more severe quantum of damages (or fines, incarceration or other punitive measures) even in similar circumstances from pursuing relief through the courts, including the ECtHR. Such a person could indeed point to this case in the first instance and likely achieve a better outcome than they would have absent this decision. That's why I say it's not a disaster, not even for free speech enthusiasts. Indeed, there are some newspaper publishers in England who likely will be wishing this ruling had been made before being pressured into a deal with the late coalition government on similar matters.

Comment Re:Good thing Slashdot isn't in the EU (Score 2) 401

It's far from "unrelated" to the EU.

On its accession to the European Union in 2004 (after a popular referendum in 2003), Estonia was obliged to be a member in good standing of the European Convention on Human Rights (ECHR), whose court (the ECtHR) is only nominally independent of the European Union.

The ECtHR and ECHR are administered by the Parliamentary Assembly of the Council of Europe (PACE, see below), which is dominated by EU member-states and in which, since the Treaty of Lisbon, the EU's institutions themselves directly participate. The explicit goal of the EU is ever closer cooperation with the COE; specifically, that means greater subsidiarity, i.e. that courts across the EU that are local or specialist in nature, and legislative assemblies at all levels, will take into consideration the previous decisions of the ECtHR and the guiding principles laid out by PACE with respect to human rights law. The reasoning is that this will lead to less litigation and fewer appeals (which benefits individuals), and facilitate the progression to higher courts (including the ECtHR) of novel or difficult human rights questions.

This is what the Commission of the European Union says about the ECtHR:

http://europa.eu/legislation_s...

This is what the UK Parliamentary all-party website says about the PACE. Much of the "tear up the UKHRA, withdraw from ECHR, Brexit if necessary!" faction in the UK Parliament are well aware of the work that their benchmates (and sometimes they themselves) do in PACE, which is almost always much more constructive and progressive than their public positions would suggest:

http://www.parliament.uk/mps-l...

So, it's not "unrelated" because it [a] all of the member-states of the EU participate in it at all levels, [b] the EU's principal organs (the Commission, the Council, the Courts and the Parliament) and the court in question have had a formal partnership since 2012, and [c] while there are other non-EU members and observers of the COE, the COE strongly reflects the consensus of the EU and its member-states by virtue of numbers (which makes sense, as the EU itself forms the largest part of the human-geographical area with which the COE is most directly concerned).

It is only barely safe to say that the ECtHR is independent of the EU; that's still true formally, but the lines have been deliberately blurred by the EU and the COE in recent years, because that makes for more efficient administration of human rights law in the EU itself, and opens up greater access to the ECtHR for non-EU member-states (i.e., the idea is that UK or ES cases won't clog up the ECtHR to the point where cases from UA or TR have difficulty being considered).

Comment Re: I call BS (Score 1) 184

Ah, cannot ETA, so: decent USB 3 sticks make *excellent* l2arc devices in front of spinny disk pools. They can often deliver a thousand or two IOPS each with a warm l2arc, and that would otherwise mean disk seeks. I use them regularly.
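
A minimal sketch of what that looks like (the pool name and device path are hypothetical; use whatever persistent device name your OS provides):

zpool add tank cache /dev/disk/by-id/usb-Patriot_Supersonic-0:0

Since cache devices, unlike ordinary storage vdevs, are removable, zpool remove takes the stick back out if it starts misbehaving.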

(The downside is that in almost all cases l2arc is non-persistent across reboots, crashes and pool import/export, and can take a long time to heat up to the point where a good chunk of seeks is being absorbed, so until then you're limited by the IOPS of the storage vdevs and the size of ARC. That's true for *any* l2arc device, though. Building a pool out of things you would use as l2arc devices gives you persistence across import/export, reboot, and ideally panic. :-))

(Some of these downsides are likely to fade with the eventual integration of persistent l2arc in all ports, ongoing changes to arc in opensolaris, the ability to assign specific DMU data to specific pool vdevs and/or further specialization of vdevs (i.e., in addition to log and cache), and so forth in openzfs.)

Comment Re: I call BS (Score 2) 184

I've done this, experimentally, using not-super-cheap 128GiB Patriot and HyperX USB 3 sticks.

For a USENET load, performance will depend on whether your incoming feed is predominantly batched, streaming, or effectively random -- small writes bother these devices individually, and aggregating them into a pool works best if you can maximize write size. One way to do that is to use a wide stripe (e.g. zpool create foo raidz stick0 stick1 stick2 stick3 stick4 ...), which works well if your load is mainly batched or streaming, such that there is lots of stuff for zfs's various write aggregation strategies to cut down on the number of small writes.

You'll also want a reasonably large ashift, for similar reasons.
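
On ports that let you set it at pool creation time (ZFS on Linux does, via the ashift pool property; the names below are hypothetical), that looks like:

zpool create -o ashift=12 foo raidz stick0 stick1 stick2 stick3 stick4

ashift=12 forces a 4 KiB (2^12 byte) minimum write unit, which is far kinder to flash than 512-byte sectors.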

If your workload is mostly reading (e.g. you have lots of downstream feeds, especially synchronous or synchronously-batched ones, or many client readers), then a sufficiently large ARC should make your choice of pool layout somewhat less important. If you are VERY read-dominant, you'll likely want to take the hit on write performance and use n-way mirroring vdevs. (e.g. zpool create foo mirror stick0 stick1 mirror stick2 stick3 ...).

If you're loss-intolerant, raidzN where N>1, or 3+-way mirroring.

What can go wrong? Mostly that you'll run out of space. :-) For that reason, a mirroring strategy, even though it's space-intensive and slows down writes, provides useful flexibility: you can always widen or narrow mirror vdevs, or add more mirror vdevs to a pool, and it's easier to swap out a device in a mirror for something else than it is to swap out a device in a wider individual vdev (a wide raidz stripe, for instance). (But you can't remove mirror vdevs (or raidzN vdevs) from a pool...).

Device names are likely to be important considerations. Make sure the pool devices use device names that persist across reboots, USB3 hub failures, physical or logical disconnects and reconnects, and so forth. How to do that depends on the specific zfs port and operating system.
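
On Linux, for example, that means building the pool from /dev/disk/by-id/ paths (the paths below are hypothetical) rather than from /dev/sdX names, which can reshuffle on every hotplug event:

zpool create foo mirror /dev/disk/by-id/usb-HyperX_Savage-0:0 /dev/disk/by-id/usb-HyperX_Savage-1:0

Illumos-derived systems generally have stable cXtYdZ names already, and FreeBSD offers GPT labels and glabel.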

What can go right? Negligible seek times! That makes *all* the difference compared to spinny disks. Even for super-cheap USB 3 sticks. Really.

Final thing: no zfs port (none of the openzfs ones, and not opensolaris either) is good with USB 2 in general, and USB 2 sticks are often flaky. Avoid them.

Submission + - Hum is the world's first artificially intelligent vibrator (dailydot.com)

Molly McHugh writes: Hum is a smart vibrator in the ultimate sense of the word: It learns what your body likes, and it responds accordingly, delivering varying frequencies of vibrations in response to how much pressure is exerted. Unlike most vibrators, which typically only let you cycle through a few set levels of speeds and vibration patterns, Hum responds to its user’s movements to provide a unique sexual experience that mimics what it’s like to be with an actual human partner.
