
Comment Re:Just in time. (Score 1) 219

Yeah, assuming you're not doing anything at all with the array while it's rebuilding, and that none of the sectors have been remapped, causing seeks in the middle of those long sequential reads/writes.

To throw out one more piece of advice: RAID6 is useless without periodic media scans. You don't want to discover that one of your drives has bit errors while the array is rebuilding another failed drive, because RAID6 can't correct a known-position error and an unknown-position error at the same time. raidz2 has checksums that should detect the bit flip and reconstruct the stripe from the remaining known-good blocks, but at these sizes you should probably start worrying about the possibility of two bit flips in the same stripe.
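To make that distinction concrete, here's a minimal Python sketch (a toy model, nothing to do with the actual md/ZFS implementations): XOR parity alone can rebuild a block at a known position, but it can't locate a silent bit flip on its own; a per-block checksum supplies the position, after which parity can do the repair.

```python
import hashlib

# Toy layout: three equal-sized data blocks, one XOR parity block,
# and a checksum recorded per data block (illustration only).
blocks = [bytearray(b"data-block-0"),
          bytearray(b"data-block-1"),
          bytearray(b"data-block-2")]

def xor(a, b):
    return bytearray(x ^ y for x, y in zip(a, b))

parity = xor(xor(blocks[0], blocks[1]), blocks[2])
checksums = [hashlib.sha256(bytes(b)).digest() for b in blocks]

blocks[1][0] ^= 0x01  # silent bit flip: parity alone can't tell WHICH block rotted

# The per-block checksum locates the bad block...
bad = next(i for i, b in enumerate(blocks)
           if hashlib.sha256(bytes(b)).digest() != checksums[i])

# ...and XOR parity rebuilds it from the remaining good blocks.
rebuilt = parity
for i, b in enumerate(blocks):
    if i != bad:
        rebuilt = xor(rebuilt, b)
assert hashlib.sha256(bytes(rebuilt)).digest() == checksums[bad]
print(f"block {bad} was silently corrupted; reconstructed from parity")
```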

Comment Nuclear chain reactions are just tools, too. (Score 3, Interesting) 455

Putting nuclear bombs on the tips of rockets and programming them to hit other parts of the Earth is also mere tool use. Tools are not inherently safe, and never have been. Autonomous tools are even less inherently safe. The most likely outcome of a failed singularity isn't being ruled by robot overlords, it's being dead.

Comment Re:more leisure time for humans! (Score 4, Insightful) 530

Both Capitalism and Communism are supposed to be about maintaining the work force, so guess where we all are today?

A nominally capitalist country pays a communist country for much of its manufacturing because it's cheaper, instead of employing its own citizens. So the logical next step is to buy robot factory workers from China to replace workers in the U.S. and save on shipping costs.

Comment Re: AI is always "right around the corner". (Score 1) 564

The machine has no fucking clue about what it is translating. Not the media, not the content, not even what to and from which languages it is translating (other than a variable somewhere, which is not "knowing"). None whatsoever. Until it does, it has nothing to do with AI in the sense of TAFA (the alarmist fucking article).

How would you determine this, quantitatively? Is there a series of questions you could ask a machine translator about the text that would distinguish it from a human translator? Asking questions like "How did this make you feel?" gets into Turing Test territory. Asking questions like "Why did Alice feel X?" or "Why did you choose this word over another in this sentence?" is something that machines are getting better at answering all the time.

To head off the argument that machine translation is just using a large existing corpus of human-generated text, my response is that that's pretty much what humans do: interact with a lot of other humans and their texts to learn the meaning. Clearly humans have the tremendous advantage of actually experiencing some of what is written about, which grounds their understanding of the language, but as machine translation shows, that is not a necessity for demonstrating an understanding of language.

As for the argument that meaning must be grounded in conscious experience to be considered "intelligence": I would argue that machine learning *has* experience, spread across many different research institutions and over time. Artificial selection has produced the agents and models that work well for human language translation, and this experience is real, physical experience of algorithms in the world. Not all algorithms and models survived; the survivors were shaped by this experience even though it was not tied to one body, machine, location, or time. Whether machine translation agents are consciously aware of this experience, I couldn't say. They almost certainly have no direct memory of it, but evidence of the experience exists. Once a system gets to the point that it can provide a definite answer to the question "What have machine translation agents experienced?" and integrate everything it knows about itself and the research done to create it, we'll have an answer.

Comment Re:AI is always (Score 1) 564

Everything humans do is simply a matter of following a natural-selection-generated set of instructions, bootstrapping from the physical machinery of a single cell. Neurological processes work together in the brain to produce intelligence in humans, at least as far as we can tell. Removing parts of the human brain (via disease, injury, surgery, etc.) can reduce different aspects of intelligence, so it's not unreasonable to think that humans are also a pile of algorithms united in a special way that leads to general intelligence, and that AI efforts are only lacking some of the pieces and a way of uniting them. As researchers put together more and more of the individual pieces (speech and object recognition, navigation, information gathering and association, etc.), the results probably won't look like artificial general intelligence until all the necessary pieces exist and only the integration remains to be done. For example, there's another article today about a woman whose claustrum appears to act as an effective on-off switch for her consciousness, strengthening the evidence that consciousness is an integration of various neural subsystems mediated by specific brain regions.

It's important to consider that AGI may act nothing like human or animal intelligence, either. It may not be interested in communication, exploration, or anything else that humans are interested in. Its drives or goals will be the result of its algorithms, and we shouldn't discount the possibility of very inhuman intelligence that nonetheless has a lot of power to change the world. Expecting androids or anthropomorphic robots to emerge from the first AGI is wishful thinking. The simplest AGI would probably be most similar to bacteria or other organisms we find annoying; it would understand the world well enough to improve itself with advanced technology but wouldn't consider the physical world to consist of anything but resources for its own growth. It may even lack sentient consciousness.

Producing human-equivalent AGI is a step or two beyond functional AGI. Implementing in silicon all of nature's tricks for getting humans to do the things we do will not be a trivial task. Look at The Moral Landscape or similar for ideas about how one might go about reverse-engineering what makes humans "human" so that the rules could be encoded in AGI.

Comment Re:Functionally correct, but insecure (Score 1) 199

Unless all the code running on the machine is absolutely type-safe and only allows "safe" reflection, trying to hide sensitive data from other bits of code in your address space is a lost cause. Code modification, emulation, tracing, breakpoint instructions, hardware debugger support, etc. are all viable ways for untrusted code with access to your address space to steal your data.
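A quick CPython-specific illustration of the point (a hedged toy, not a real attack tool): any code loaded into your interpreter can simply read your "secret" out of memory, no debugger required.

```python
import ctypes
import sys

# In CPython, id() happens to be the object's memory address, and ctypes
# can read arbitrary memory in our own address space -- so an in-process
# "secret" is available to any code that runs alongside it.
secret = b"hunter2"
raw = ctypes.string_at(id(secret), sys.getsizeof(secret))
print(b"hunter2" in raw)  # True: the secret is sitting right there
```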

Wiping memory is only effective for avoiding hot- or cold-boot attacks against RAM, despite its frequent use as a hack on terrible operating systems to hope/pretend that userspace software isn't leaking data into other processes, whether directly via attacks or accidentally through kernel mishandling of memory.

Comment Re:We've gone beyond bad science (Score 1) 703

Confidence bands/intervals don't make statements about the probability of certain outcomes. They make statements about the interval itself. At best 95% of the bands calculated will include the "true value". No, this is not a nitpick.

Mod up. There is quite a difference between being 95% certain of a particular outcome and a particular outcome being within a 95% confidence interval. When rolling a d20, a 10 is within the 95% interval [2, 20], but rolling a 10 sure as hell isn't 95% likely.
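A quick simulation makes the distinction obvious; this is just a sanity-check sketch of the d20 example above:

```python
import random

# [2, 20] covers 19 of the 20 equally likely faces, i.e. 95% of outcomes,
# yet any single outcome such as a 10 has probability only 1/20 = 5%.
rolls = [random.randint(1, 20) for _ in range(100_000)]
print(sum(2 <= r <= 20 for r in rolls) / len(rolls))  # ~0.95
print(sum(r == 10 for r in rolls) / len(rolls))       # ~0.05
```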

A credible interval (sometimes called a Bayesian confidence interval) is the one that states how likely it is that the true value lies within the interval.

Comment "Whitelisted" binaries are the ones 0-days target. (Score 1) 195

So, sure, whitelisting might prevent your users from running unapproved browsers at work, but it will not secure a computer system against actual attackers. Not to mention that a good chunk of would-be whitelisted binaries actually embed language environments (macros, JavaScript, shell/batch scripts, Java, VBScript, etc.) that would also need to be covered by the whitelisting framework.

Comment Re:Fuck religion. (Score 1) 903

Assuming we as a society believe that widely adopted health insurance is good, is it better to have it supplied directly by the government, or to allow wider choice via a freer but still highly regulated open market?

It makes sense to let the government compete with commercial insurance via Medicare/Medicaid (in the U.S.). I've heard claims that the overhead of administering those programs is lower than for commercial insurance, but I don't know if it's cheaper than employer self-insurance, and Medicare reimbursement is currently heavily discounted when it pays providers, with Medicaid reimbursement varying by state as far as I know. The ability to opt into Medicare/Medicaid for an additional fee might work. Something else that might make sense would be to enforce up-front pricing for medical services, since at this point it's very difficult to get accurate estimates, which of course also breaks the free market. Health care is generally an infrequent expense without much choice in where it's delivered, however, so it does make some sense for market pressure to come from insurance providers, who have better pricing information than healthcare consumers. Making that kind of meta-pricing available to consumers when they purchase insurance would probably help: a sort of TCO for the insurance.

Comment Re:Fuck religion. (Score 1) 903

The idea that employers have a right to impose their religious beliefs on their employees should make anyone who actually believes in freedom of religion puke.

It's slightly more nuanced than that. If you run an insurance company, you have to be responsible for the pharmacy formulary (deciding which drugs will be covered under the insurance plan) and for the list of covered medical services and procedures (ER visits, well-checks, mammograms, abortions, caesarean sections, chiropractic, heart surgery, plastic surgery, gender reassignment, etc.). Suppose you have a moral or financial objection to plastic surgery, which isn't too uncommon for insurance companies: most will not cover elective cosmetic procedures unless they treat an injury. This is an ethical/moral decision on the part of the insurance company; they believe that enhanced physical appearance is not important enough to the insured to cover fully. A very similar argument is used by a few insurance providers to not cover contraceptives and abortion, and in general they should be free to cover whatever they feel is appropriate and let the market decide which insurance companies prosper.

The first problem occurs when restrictive insurance providers also force their employees to use the insurance they sell, which effectively happens any time an organization opts for medical self-insurance. The second problem is when the government requires all insurance providers to provide a basic level of service, forcing entities to cover medical procedures or drugs that they don't think are morally acceptable. Both problems are infringements on free choice and the free market, but the latter is definitely closer to what the Civil Rights Act prohibited, i.e. a correction of attitudes and beliefs that are just wrong and harmful.

I think pretty much every employer would prefer not to be involved in health care. It is a stupid system. But the reason that it was necessary is that insurance does not work when the insurer knows the individual risks. The individual insurance market began to collapse in the 1980s.

It's actually surprising that employers don't do the same screening that individual insurance carriers do and refuse to hire high-risk employees, since that would greatly lower the cost of self-insurance. Maybe the ADA prevents it? The closest example I can think of is campus smoking bans, which effectively fire or cure employee smokers. Maybe campus fatty bans are next.

I agree with you that some sort of mandate is necessary so that everyone can be insured, but I am not enough of an expert to know what makes sense to mandate. Mandating that every medical procedure and drug, including cosmetic surgery and off-label experimental use, must be covered would go too far, while mandating only that insurance pay for one clinic visit a year and up to $10,000 per ICU stay would be too limited. Driving some insurance companies out of business because they can't comply with the mandate is probably the lesser harm, but it is definitely a harm if employers/employees have to pay more for equivalent insurance elsewhere. For one thing, there are presumably people who want to buy insurance that matches their ethical standards, and if the buyer and seller of the insurance aren't harming anyone else, I don't think it's right to interfere. Clearly, though, the preceding only holds if employees can freely choose the insurance they want; otherwise it would be too easy for employers to make employees a deal they couldn't refuse.

The only way to get the ACA passed though was if people who already had insurance were assured that they wouldn't lose it. Many people have subsidized insurance built into their employment package and would lose substantially if that happened. Which is why the ACA has big tax penalties for employers who drop coverage and requires the coverage to meet certain minimum standards.

Perhaps if the tax loophole were changed so that employers could only pay the subsidized insurance cost directly to employees, for use in purchasing their own insurance, there would be an incentive to make insurance marketable while getting employers out of directly providing healthcare. I haven't read the ACA directly, so perhaps that's the ultimate goal of the individual mandate.

Comment Re:Fuck religion. (Score 1) 903

Sorry, but you missed the point. Religion A says that pill X is against their religion. Insurance company is a Religion A organization, but government says that Insurance company cannot refuse to give pill X regardless of what they believe. In short, the government has decided that you must provide a service you believe is immoral.

The immorality was in coercing employers to provide insurance in the first place. In a sane world employers would pay their employees a larger salary and employees would purchase insurance, or the government would provide medical coverage directly. The ridiculous tax loopholes that give employers an incentive to provide insurance as a "benefit" led directly to the crazy individual mandate we have now, where no one is in a good position.

Comment Re:BTRFS filesystem (Score 1) 321

Yes, but earlier systems, which the OP was suggesting could be used for this purpose, lack that functionality. Also, please reset your sarcasm detector; it appears to be out of alignment -- a functional detector would have pinged on "Raid 9 Million(tm)".

Apparently ReFS will have data and metadata checksums which, combined with Storage Spaces, could detect and correct bit rot if implemented properly. While I have no idea if the OP researched the actual capabilities of ReFS, with checksums it is possible to detect bit rot without parity, and to correct it with an extra (good) copy. Sarcasm is fun, but only if it's accurate. You might argue that checksums are just a form of parity, and maybe I'd agree with you, since the error-correction codes for RAID6 are generally referred to as parity despite actually being linear error-correction codes. But the sense I got from your comment was that you didn't believe it was possible to prevent bit rot with just two copies of checksummed data, or by storing a single copy with an error-correcting code.
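As a hedged sketch of that two-copies-plus-checksum idea (toy Python, not the actual ReFS/Storage Spaces code): the stored checksum tells you which mirror copy rotted, and the surviving copy repairs it with no parity involved at all.

```python
import hashlib

# One logical extent, mirrored twice, with a checksum recorded at write time.
data = b"important file contents"
checksum = hashlib.sha256(data).digest()
copies = [bytearray(data), bytearray(data)]

copies[0][3] ^= 0x20  # silent bit rot in one mirror copy

# On read, the checksum identifies which copy is still good...
good = next(c for c in copies
            if hashlib.sha256(bytes(c)).digest() == checksum)

# ...and the good copy overwrites the rotted one.
for c in copies:
    c[:] = good
assert all(hashlib.sha256(bytes(c)).digest() == checksum for c in copies)
```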

Correct, and those that are designed for it aren't immune to human stupidity. No filesystem can save you from a guy who decides to pour beer into the storage array, or who goes to move a directory and misclicks, sending it to the trash. Disaster recovery is not a simple matter of choosing the right filesystem and then patting yourself on the back. It requires careful planning and consideration... none of which the majority of the people in this thread seem capable of. At least you seem to have some grasp of the underlying technology.

Most of your other points were spot-on. Relying on a single storage system that isn't geographically distributed is just asking for trouble. Not keeping administratively separate backups or immutable version history (read-only snapshots, revision control, etc.) is also a quick way to lose your data. I don't think there are any foolproof solutions available at the moment. Replicated git repos come close, but there was that KDE fiasco where git didn't explicitly check its cryptographic hashes during all of its operations, allowing bit rot to be replicated to other repositories. Dumb. I have never been a fan of the Linus/Linux philosophy of trusting the hardware to provide zero bit errors per yottabyte; it's just not realistic. Of course, that means the next step will be implementing lock-step (or at least consistency-point comparison) processing in software to work around CPU/RAM errors...
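For reference, git's integrity check is easy to demonstrate: an object ID is the SHA-1 of a type/size header plus the content, so rehashing a stored object detects bit rot (this is what `git fsck` verifies; the fiasco above was about not doing it on every operation).

```python
import hashlib

# A git blob's object ID: SHA-1 over "blob <size>\0" followed by the content.
content = b"hello world\n"
oid = hashlib.sha1(b"blob %d\x00" % len(content) + content).hexdigest()
print(oid)  # 3b18e512dba79e4c8300dd08aeb37f8e728b8dad for this content
```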
