Actually, that's not true. It is fairly rare, but at least at the state level (in many states), you have the right to petition the courts for a declaration of factual innocence. In such a proceeding, the burden of proof falls on the defendant—that is, you are presumed potentially guilty until proven innocent. However, if you succeed at doing so, the arrest record is expunged completely, as though you were never arrested or tried.
Rights: You know... your right to remain silent (unless told to "start talking", or forced to talk under torture), your right to an attorney (after they get done with you), your right to a fair trial (unless you're charged under the Espionage Act, thrown into Gitmo, or blown up by a drone strike), etc. You have plenty of rights*. You live in the land of the free and home of the brave!
You forgot your right to a speedy trial, which guarantees that you'll get your day in court within a few years....
That's the right that I really want to see us get back. As far as I'm concerned, if the trial can't begin within 30 days, they should be required to let the person go, and the case should automatically be dismissed with prejudice. Such a policy would force the DAs to actually do their jobs and quit clogging the courts with penny ante crap like drug possession misdemeanors.
After all, it has been shown conclusively that the longer the delay between commission of a crime and actual punishment, the less effective the punishment is as a deterrent. Therefore, when you have districts with >3 year average time-to-trial, the entire system of law isn't really doing anything useful at that point. Abandoning 90% of those cases would therefore have little impact on the crime rate or the rate of recidivism.
This. And this is precisely the sort of monopoly abuse that led to the breakup of Ma Bell. The ISPs are offering non-connectivity services, then deliberately degrading service to companies that compete with those services. Monopolies like ISPs should absolutely not be allowed to do this. A company should either be an ISP or a content provider. As soon as you allow any company to be both, it pretty much guarantees abuse. The bigger the company, the bigger the abuse.
I am shocked
Me, too. I'm shocked that the researchers didn't know this. I knew this, I suspect that you knew this, and anybody who has ever read even a single Slashdot article about these machines knows this. The security holes in these things are so obvious that you should be able to think of at least a couple of ways around them without even trying.
Next thing you know, atmospheric researchers will discover that the sky is, in fact, predominantly blue.
Your logic is flawed a bit. You can't use the existence of speech as evidence that speech is not being restrained, because you can't know what things people decided not to post because of the policy.
The reality is that not all people have shame, so some people will be blatantly mean even with a real name policy. These people are mostly trolls. The people whose comments are most likely to go against the grain in an insightful way, by contrast, are mostly the ones who would be afraid to do so under their real names, because they actually have a verbal filter, and by consequence, a personal reputation to uphold.
For example, people who work for companies would be wary of posting anything critical of their employers for fear of reprisal. However, they are also the ones who would have the most insight into what's going on.
Anonymity is the only antidote to tyranny. Anyone who says otherwise is probably a supporter of tyranny.
If a group of unaffiliated individuals attack a country, that country has no recourse for nuclear retaliation.
Some governments, at least including the US, various Commonwealth nations such as Canada and Australia, the PRC, and some continental European powers, have had agents working full time on just getting samples of radionuclides from various fission plants, and analyzing those samples so that, if those nuclides turn up in a dirty bomb or, worse, an actual fission device, they can tell just where they were made by differences in various isotopic ratios, trace elements, and such. Knowing the source does not always mean those nations would retaliate against an attack from a group of apparently non-affiliated individuals, but it's certainly one piece of evidence in building a case for retaliation that would satisfy at least part of the international community. Nations therefore have some interest in reporting thefts of materials internationally, and various governments have some interest in setting up conditions for such reporting (e.g., in some cases, assuring the reporting country the report will be classified and not released, and so hopefully not available for political candidates' uses).
I'd say that against groups such as you describe, it may not be possible to respond with nuclear retaliation, or recourses may be limited. It may also be desirable to respond with something less damaging to innocent bystanders, other nations, and the environment, even if a nuclear option is possible. This could go anywhere from a use of actual boosted fission devices within hours of the first event, to a much more measured response, possibly weeks or even longer after the first event.
By the way, probably the most workable term for 'unaffiliated individuals' in US sources is "non-state actors": relatively short, straightforward, and to the point. In the US, emergency response teams called NEST would be responsible for the first stages of gathering samples from a dirty bomb incident or similar event, but their primary purpose is to stop such events before there is a detonation or risk to the public, if that's still possible when they become involved. NEST now stands for Nuclear Emergency Support Team, but in some older sources, the S stood for Search instead of Support. Calling one a NEST Team is redundant, but occasionally done by the media. NESTs are authorized to respond to incidents both inside and outside US borders, but just what that means in practice is unclear.
Literature and history are great things to study if you want to teach literature or history. And to an extent, they prove that you were smart enough and serious enough about learning to go to college, which might make a difference in getting certain jobs. But otherwise, yeah, they're equivalent to underwater basket weaving. College may not be a trade school, per se, but most people treat it like one. If you don't come out of college with a marketable skill that can net you a job that you otherwise couldn't get, then you spent tens of thousands of dollars solely for the love of learning. A few people might be rich enough to afford that, but not many.
Either way, my core point is that having a college degree doesn't make you a professional. Working in a field that requires a college degree or other formal education makes you a professional. As such, people working in low-end service jobs don't qualify, whether they are doing so by choice, because of the lack of better jobs, or because they lack any marketable skills.
Most metals are not ferromagnetic, and so are not held in place by magnets. I'm pretty sure neither indium nor gallium is ferromagnetic.
Most metals aren't, but the iron in your platelets is. Perhaps through carefully tuned EM fields, a natural clot could be formed in a novel way....
I wouldn't rule out the possibility that the bits of you in contact with the metal could get cooked.
I was reading an article a few years ago about doing precisely that—some kind of metal tending to bioaccumulate in tumors, and taking advantage of that in combination with semi-targeted EM fields to literally burn out the tumor.
The problem I see with any life-plus-based duration is that it selectively rewards people who have a big hit that keeps coming back into print early in their careers and then live a long time afterwards; conversely, it punishes the author who doesn't have much success until late in life, or worse, gets his or her career cut short by a fatal illness. You've suggested a system that (sort of) fixes the latter case, but it doesn't address the first half of the problem. Also, any life-plus system is going to look like a better deal if the author has heirs he or she cares about, and less of a deal if they don't. If the whole goal under the Constitution is to provide an incentive, we have to look very carefully at how some people may or may not feel "incentivised".
To show you how your system might have worked if it had been in effect all along, let's take two Fantasy/SF/Horror authors:
First, H. P. Lovecraft. His first real hit of a story was in 1926, with "The Call of Cthulhu". Just about everything that got reprinted when he first gained posthumous popularity was written after that. Then he died of Bright's disease in 1937. Under the system of that time, most, if not all, of his work was still in copyright. But it was still the Great Depression, and after that there were the wartime paper shortages, so life + 10 would leave his work coming out of copyright just about when there starts being a chance of it getting printed. With your 40-year clause, some of his original copyrights would have lasted until about 1974, by which time he was starting to be reevaluated, and effectively expired just about the time his work finally caught on. Under the system actually in effect, most of his work was still under copyright until well after the first film adaptation (Dean Stockwell and Sandra Dee in The Dunwich Horror). He did not have any direct heirs, and probably would not have believed, as he wrote his last works, that there was any chance he was leaving a literary estate that might actually become worth more than the cost of a cup of coffee. His closest heirs were a pair of aging aunts, and by the time there were payments, they went to very distant relatives indeed.
Second, Michael Moorcock. He starts writing professionally at 15, and some of his biggest successes were written by the time he was 20. In his 70s now and still going strong, he'd enjoy life + 10 on most of his work, and it's not inconceivable that life + 10 might apply even to his most recent books. I don't know if he even has direct heirs, but he has been married a couple of times and had some living relatives, so I suppose it's at least somewhat likely there are children, or perhaps nieces or nephews. Under the existing system, he would theoretically have a longer period of protection, but that may not matter in practical terms. The older US or British systems, current law, or your system are likely to leave him about the same, financially, but current law is, in theory, better for him. However, it's a mystery to many people why his work hasn't been optioned more by Hollywood, to the point of a completed film or six. Your system just might ding him financially, if there are people who are hoping to get film rights cheap after he dies: they could just wait 10 years and let copyright on such characters as Elric of Melniboné expire completely. Rationally, a shorter term may matter not at all or a great deal to him, but not just for the money.
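The term arithmetic in the two examples above can be sketched in a few lines. This is a hypothetical reading of the proposal, assuming it means "life plus 10 years, with a floor of 40 years from publication"; the function name and numbers are illustrative, taken from the Lovecraft timeline described above.

```python
def copyright_expiry(publication_year: int, death_year: int) -> int:
    """Year a work would enter the public domain under the
    assumed proposal: life + 10, but at least 40 years from
    publication."""
    life_plus_ten = death_year + 10
    forty_year_floor = publication_year + 40
    return max(life_plus_ten, forty_year_floor)

# Lovecraft: a story published in 1934, author dies in 1937.
# Life + 10 alone would expire it in 1947; the 40-year clause
# carries it to 1974, matching the timeline above.
print(copyright_expiry(1934, 1937))  # -> 1974

# A long-lived author (Moorcock-style case): a book published in
# 1963 by someone still living decades later is governed by
# life + 10 rather than the floor.
print(copyright_expiry(1963, 2015))  # -> 2025
```

The `max()` is the whole trick: the floor only matters when the author dies within 30 years of publication.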
First, you can take sample populations that 'exclusively possess' a particular feature, and they turn out not to. That is, it may be common for Danes to be blonde, but you can look at a large group of people from Denmark and see many people who don't have blonde hair, or otherwise don't fit whatever model of how that group should look someone is offering. You can try to filter your sample, for example, looking only at people who have records of descent from natives to that area going back five or ten generations, and that still will give you a population that has many exceptional examples who don't have all the features you think make up a race. This happens nearly universally: you can go to more isolated villages or look at whole regions where it is believed the inhabitants lived cut off from other races, and you will still find that there are lots of exceptions. You can test this with 'extreme' examples. If you look at 100 Zulus, maybe half will look like stereotypical Zulus, and there will be 10% that are atypically short, lighter skinned, broader faced, narrower nosed or even with a "roman" nose, etc. (And it won't be the same 10% for each feature.) Yeah, you're probably not going to find a blonde Zulu with epicanthic folds who stands 4'3" as an adult in a sample of just 100, but you will find a lot of people who look not quite like what the standard model Zulu (or Polynesian Islander, or Aboriginal Australian) is supposed to be.
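The "it won't be the same 10% for each feature" point has simple arithmetic behind it. As a back-of-the-envelope sketch (assuming, purely for illustration, that traits vary independently, which real genetics does not guarantee): if each trait matches the stereotype in 90% of individuals, the share matching every trait shrinks geometrically.

```python
def share_matching_all(per_trait_rate: float, k: int) -> float:
    """Fraction of a population matching all k stereotype traits,
    assuming each trait independently matches at per_trait_rate."""
    return per_trait_rate ** k

for k in range(1, 7):
    print(k, round(share_matching_all(0.9, k), 3))
# At k = 5 traits, only about 59% match every one -- roughly the
# "maybe half will look like stereotypical Zulus" figure above.
```

Independence is the loud assumption here; correlated traits would change the exact numbers, but not the direction of the effect.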
Second, those physical racial features have mostly evolved over periods as short as 10,000 years. You can find cases where they may have had longer periods of isolation, but even those are pretty short as regards human evolution. For example, the best estimate for when proto-Asiatic ancestors of the Native Americans crossed the land bridge between Siberia and Alaska might be as high as 20,000 years, but most ethnologists think that, a) people kept following along on that same route until much more recently, and b) the various Pacific peoples also made it to the New World, sometimes by oceanic routes. So, even the differences between a 'typical' North Korean and a "typical" Cherokee probably accrued over less than 10,000 years. (And the differences between "atypical" ones of each group? They took the exact same total time. Try to visualize that.) That's set against an evolutionary history of roughly 100 times as long for the development of tool use, fire, and other innovations that show original thinking, invention, creativity, general intelligence and what some people still call progress. The genes that let some of our ancestors figure out how to make a better clay pot than the last design have been steadily circulating among populations and leaving behind artifacts in all cultures. If those genes are still very rare, then the claim is that genes for being smart, creative, and adaptable don't have any better survival value than the others, as they get into populations the same ways as the genes for short Zulus, but somehow, they are not being selected for, over periods hundreds of times as long. In fact, it's a claim that creative intelligence has negative survival value.
The reason this "science" on racial differences is nothing but good old fashioned racism follows from these two points. The argument becomes "Intelligence has no survival value. Nature selects against it except under very special circumstances such as Ice Ages. Inferior genes water down the superior ones unless the superior ones are kept isolated from them." Ultimately, this becomes the "one drop of black blood makes you black" argument of the Civil War era American South. And none of that, from the claim that bad genes can water inherently good genes down until they vanish, to the echoes of apologetics for American slavery, is science.
Lots of times, you see something wrong, and you want to point it out, but by limiting commenting to people with rep, if you don't have rep on that particular board, you are prevented from correcting the error. That means that there's wrong information without any hint that it might be wrong. So the worst-case scenario there is pretty bad.
By contrast, if you remove those limits, the worst-case scenario is that people who don't know what they're doing might say that it is wrong, at which point you'll have to investigate to figure out who is right. And if they're wrong in saying that it is wrong, you (who also probably have no rep) can comment and explain why they're wrong about it being wrong. And if they're right, then you saved yourself a lot of swearing.
So the worst-case scenario is considerably better without those limits (ignoring spam, of course, but that can largely be taken care of by a combination of a proper reporting mechanism, disallowing links by posters without reputation, etc.).
As for whether you can trust people with more rep to know more: for the most part, people who get upmodded more are, in fact, people who do know more. Mind you, there's always the possibility of an echo chamber effect, but that's a possibility no matter what you do. By using a weighted voting scheme, people who have shown more knowledge (and thus are more likely to be correct) can overcome the voting of people who haven't (and thus are more likely to be wrong). Statistically speaking, this approach makes sense, at least on average.
For maximum effectiveness, though, such a scheme should be combined with automatic flagging of any post whose reputation changes too far or too often, for future review by other subject-matter experts.
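A minimal sketch of the weighted-voting-plus-flagging idea above, assuming a hypothetical weighting curve and thresholds (all the names and numbers here are illustrative, not any site's real algorithm):

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    score: float = 0.0
    history: list = field(default_factory=list)  # individual score deltas

def cast_vote(post: Post, voter_rep: int, direction: int) -> None:
    """Weight each vote by the voter's reputation, so established
    users can outweigh a burst of low-rep votes. The 1 + rep/100
    curve is an arbitrary illustrative choice."""
    delta = direction * (1 + voter_rep / 100)
    post.score += delta
    post.history.append(delta)

def needs_review(post: Post, swing_limit: float = 10.0,
                 churn_limit: int = 20) -> bool:
    """Flag posts whose score has moved too far, or changed too
    often, for later review by subject-matter experts."""
    return abs(sum(post.history)) > swing_limit or len(post.history) > churn_limit
```

A high-rep user's single vote counts several times a zero-rep user's, while the flagging check catches both large swings and vote-churn regardless of who caused them.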
A self-signed certificate is never more secure than a CA-signed cert. Period. The only benefit to self-signed certs is cost. Any other perceived benefits are merely side effects caused by forcing you to do extra security checks to make up for the lack of a CA—checks that you could do anyway, but probably won't.
For example, if you're paranoid about a CA issuing a cert for your organization to someone else, then you might add code in your app to do your own set of checks to decide whether a cert is valid (such as ensuring that a specific cert issued within your organization is part of the chain of trust). You can do such tests on a CA-signed cert just as easily as you can on a self-signed cert. Even if your policy is to trust only a pre-distributed set of self-signed certs, you can do the same thing by pre-distributing CA-signed certs.
Thus, in the worst-case scenario, the CA-signed cert gives you no less protection than the self-signed cert, and in the best case, it gives you additional protection.
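The "chain of trust" check described above is essentially certificate pinning, and it works identically for CA-signed and self-signed certs. A minimal sketch, assuming the application can obtain the presented chain as DER-encoded certificate bytes from whatever TLS library it uses (the function name and inputs are illustrative):

```python
import hashlib

def chain_contains_pinned(chain_der: list[bytes],
                          pinned_fingerprints: set[str]) -> bool:
    """Return True if any certificate in the presented chain matches
    a pinned SHA-256 fingerprint. Run this *in addition to* normal
    chain validation, never instead of it."""
    for cert_der in chain_der:
        if hashlib.sha256(cert_der).hexdigest() in pinned_fingerprints:
            return True
    return False
```

Because the check keys on the certificate bytes themselves, it doesn't care who signed the cert, which is exactly why the comparison above favors the CA-signed cert: you get this check plus the CA's validation.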
So for some children there may certainly be a benefit to less vacation.
This really points to a need for a less formal summer education program, where parents can send their kids while they work, but where the kids aren't penalized for being gone when the parents decide to go on vacation. Each week could be split between two classes, each for half a day every day, with the subjects varying throughout the summer. One week might be "sculpting with clay" and "iambic pentameter unleashed". Another week might be "the science of butterflies" and "math in the real world". We actually had something like that at the university in my home town, though it only ran for a week or two, IIRC. It would be great if there were something like that throughout the entire summer, rather than the mostly non-educational summer programs that are fairly common.
Well, it parses now, but it still looks as archaic as K&R C.
That's the difference between learning and memorizing. To learn something, you incorporate it into your way of thinking. You might be able to pass the test by rote memorization, but that's not the same thing as truly understanding it.
Unfortunately, schools tend to overemphasize memorizing rather than understanding, which is a big part of the reason why kids forget so much over the summer. As you said, they never really learned it to begin with, at least not in any meaningful sense of the word.