
Comment Re:Traditional internal facing IT shop .. (Score 1) 193

1500 VMs isn't that crazy for 3000 people when you have to use Windows. Every individual piece of software is going to want its own VM, often two or more for redundancy/load balancing, plus an equal number for the test environment, and often a few more for dev/upgrade environments. Many software packages with a server component are big cumbersome globs of many .exes that the vendor "recommends" be run on separate VMs, because the developers have no clue how to write software and rebooting Windows is the first solution to half the issues.

Think a 3000-person company doesn't have the ~200 apps needed to reach 1500 VMs by this measure? There are usually several software applications specific to each department, and there are lots of departments: purchasing, accounting, distribution/receiving, each core business unit, HR, PR, engineering/plant ops, business office, sales, and last but not least IT, which is guaranteed to run dozens if not hundreds of separate apps to do their jobs. Sure, not all of them require a server, but many do, even if it's just a ridiculous license server.

Data? Anyone processing video or images is just going to have a crapload of data, period. Same for some raw scientific data from instrumentation. That said, it really does depend on the industry; I can imagine a 3000-person company where most employees are sales/warehouse/factory drones not needing that much software. Basically, if most employees are "knowledge workers" (or shoehorned into it, like healthcare, where doctors and nurses are required to use atrocious piles of software to record minutiae about patient care), then IT is going to be bigger than in other industries.
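As a rough illustration of how fast the arithmetic gets you there (all the per-app numbers below are made up for the sake of the estimate, not from any real inventory):

```python
# Back-of-envelope VM count for a hypothetical 3000-person Windows shop.
APPS = 200          # department-specific applications with a server component
PROD = 2            # production instances (redundancy / load balancing)
TEST = 2            # matching test environment
DEV_UPGRADE = 3     # dev, staging, and upgrade-rehearsal boxes

vms_per_app = PROD + TEST + DEV_UPGRADE   # 7 VMs per app
total = APPS * vms_per_app

print(total)  # 1400
```

Add license servers, domain controllers, and IT's own tooling on top of that 1400 and 1500 VMs is entirely plausible.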

Comment Re:One time pad (Score 4, Interesting) 128

That said, you could probably use a synchronized random number generator as the shared pad data. The other side would only be able to decrypt messages for as long as they buffer the random number data; after that, the message is lost to everyone for eternity. This could work for a TLS session where messages are exchanged with only a couple minutes (or preferably seconds) of delay, so the buffer would not need to be very big.

That's roughly the definition of a stream cipher (e.g. RC4, or a block cipher in counter mode). Only a cryptographically secure random number generator works, which is why such a thing is called a stream cipher and not just a "pseudo-random one time pad". In any case it's not a true one time pad, because the entropy of the stream of pseudorandom data is limited by the entropy of the cipher's internal state, and further limited by the entropy of the key. That means stream ciphers can in principle be broken given only the ciphertext, unlike a true one time pad. Stream ciphers also share the one time pad's big operational weakness: reusing the same stream cipher key is just as bad as reusing a one time pad (virtually automatic recovery of all plaintexts encrypted with the same pad/stream).
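A minimal sketch of both points, using a toy hash-based keystream purely for illustration (a real system would use AES-CTR or ChaCha20, not this): the "synchronized generator" is just keystream XOR plaintext, and reusing the keystream leaks the XOR of the two plaintexts exactly as with a reused pad.

```python
import hashlib
from itertools import count

def keystream(key: bytes, n: int) -> bytes:
    """Toy CTR-style keystream: hash(key || counter). Illustrative only --
    this is NOT a vetted cipher, just the stream-cipher shape."""
    out = b""
    for ctr in count():
        if len(out) >= n:
            break
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"shared secret"
p1 = b"attack at dawn!!"
p2 = b"retreat at dusk!"

c1 = xor(p1, keystream(key, len(p1)))
c2 = xor(p2, keystream(key, len(p2)))  # same key/counter reused: the mistake

# Decryption is just XORing the keystream back in:
assert xor(c1, keystream(key, len(c1))) == p1

# Keystream reuse: it cancels out, leaking the XOR of the two plaintexts,
# from which both messages are usually recoverable by crib-dragging.
assert xor(c1, c2) == xor(p1, p2)
```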

Comment What are your IOPS and throughput requirements? (Score 2) 219

For high throughput/IOPS requirements build a Lustre/Ceph/etc. cluster and mount the cluster filesystems directly on as many clients as possible. You'll have to set up gateway machines for CIFS/NFS clients that can't directly talk to the cluster, so figure out how much throughput those clients will need and build appropriate gateway boxes and hook them to the cluster. Sizing for performance depends on the type of workload, so start getting disk activity profiles and stats from any existing storage NOW to figure out what typical workloads look like. Data analysis before purchasing is your best friend.

If the IOPS and throughput requirements are especially low (guaranteed under ~50 random IOPS per spindle, leaving headroom for RAID/background-process/degraded-or-rebuilding-array overhead, and no more than what a couple of 10 Gbps ethernet ports can handle, over the entire lifetime of the system), then you can probably get away with just some SAS cards attached to SAS hotplug drive shelves and building one big FreeBSD ZFS box. Use mirror vdevs (RAID10-alike) for the higher-IOPS processing pool and RAIDZ2 or RAIDZ3 with ~15-disk vdevs for the archiving pool to save on disk costs.
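To see where the disk-cost saving comes from, here's the usable-capacity arithmetic for those layouts (ignoring ZFS metadata and slop-space overhead, which shaves a few more percent off in practice):

```python
def usable_fraction(data_disks: int, parity_disks: int) -> float:
    """Usable capacity fraction of a vdev: data disks over total disks."""
    return data_disks / (data_disks + parity_disks)

# Two-way mirror vdevs (RAID10-alike): 1 data disk per 2 total.
mirror = usable_fraction(1, 1)    # 0.50

# 15-disk RAIDZ2 vdev: 13 data + 2 parity.
raidz2 = usable_fraction(13, 2)   # ~0.867

# 15-disk RAIDZ3 vdev: 12 data + 3 parity.
raidz3 = usable_fraction(12, 3)   # 0.80
```

Mirrors burn half your spindles for the IOPS, while wide RAIDZ2/RAIDZ3 vdevs keep 80-87% of raw capacity usable, which is why they make sense for the archive tier.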

Plan for 100% more growth in the first year than anyone says they need (shiny new storage always attracts new usage). Buy server hardware capable of 3 to 5 years of growth; be sure your SAS cards and arrays will scale that high if you go with one big storage box.

Comment Re:Singularity (Score 1) 484

The only thing you're missing is support for arbitrary SIP-level proofs beyond type safety (e.g. proofs about SIP behavior such as time/space complexity, halting, semantic properties, etc.), and a formally verified, self-verifying proof-checker to make sure the compiler is generating correct code and proofs. It looks like you're looking into PCC and TAL, so once you can ship the verifier with its own proof and self-verify during the boot process, you can be fairly certain that hardware errors are the only problem left. I assume you're already executing with a subset of the x86(_64) instruction set for easier verification. I figure that limiting code generation to the smallest set of opcodes can take advantage of the formal verification done by Intel/AMD/others in processor design, while excluding all the complex protected-mode and virtualization instructions. Turning off SMM and the injection of other arbitrary BIOS/EFI code would also be handy. The hardest part to model and prove correct will probably be the multi-processor cache coherency behavior, but hopefully Intel at least has done some of that work already and can guarantee adherence to the specs.

Comment They already have quick access to social media. (Score 1) 562

With a warrant, that is. Same with webmail and any other hosted service. Warrants describing a particular place and person have a way of producing encryption keys from service providers. When warrants aren't fast enough for them, then you know they're doing something very, very wrong. Unlike the movies, where Jack Spy decrypts the terrorists' plans in real time to thwart them, our jokers can barely even share high-priority bulletins about suspected terrorists planning to board a plane in a day or two. It's ludicrous to suggest that they need faster access to information when they can't even manage what they have already.

Comment Voluntary key escrow (Score 1) 562

How about this: the 9 Supreme Court justices post their public keys on www.supremecourt.gov and keep their private keys safe, and I'll voluntarily split copies of my private keys into 5-of-9 shares using Shamir's secret-sharing scheme, encrypt each share to one justice, and post the ciphertext publicly. Then the NSA can stop introducing weaknesses into the free software I use, and heaven forbid they need to peek at my shopping list, but if they do they can convince some actual judges to let them see it.
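The 5-of-9 split itself is small enough to sketch. This is textbook Shamir over a prime field (evaluate a random degree-4 polynomial whose constant term is the secret, hand out points, recover with Lagrange interpolation at x=0); the key value is a placeholder, and a real deployment would of course use a vetted library rather than this:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than the secret

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x=0 recovers the polynomial's constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 0xDEADBEEFCAFEBABE          # stand-in for a real private key
shares = split(key, k=5, n=9)     # one share encrypted to each justice

assert combine(shares[:5]) == key                 # any 5 justices suffice
assert combine(random.sample(shares, 5)) == key
assert combine(shares[:4]) != key                 # 4 shares reveal nothing
```

With fewer than 5 shares, every possible secret is still equally consistent with the points you hold, which is the whole appeal over simply escrowing the key with one party.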

Comment Re:You have been challenged statist! (Score 2) 248

I dunno, I'm happy enough with my voluntary free association with the United States. I'm free to leave if I stop liking it, as are you.

What anti-state people don't seem to grasp is that the very same people you hate in the government, the people who want to control your life and take things from you, weren't made that way by big government. Just look at Mexico: big drug cartels (which may or may not be entirely the creation of anti-drug big government) are more powerful than the government. Wherever there is an advantage to be had by banding together and robbing the weaker or more honest people, you'll find that niche being filled. The job of government is to fill that niche with the least harmful and most inept robbers. That overpaid, uncooperative, unfriendly civil servant that you despise? Give them a gun and a posse and see how well that turns out for you.

Comment Re:Just in time. (Score 1) 219

Yeah, assuming you're not doing anything at all with the array while it's rebuilding, and none of the sectors have been remapped causing seeks in the middle of those long reads/writes.

To throw out one more piece of advice: RAID6 is useless without periodic media scans. You don't want to discover that one of your drives has bit errors while the array is rebuilding another failed drive; RAID6 can't correct a known-position error and an unknown-position error at the same time. raidz2 has checksums that should detect the bit flip and reconstruct the stripe from the N-2 known-good copies, but at these sizes you should probably start worrying about the possibility of two bit flips in the same stripe.
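The checksum point is worth making concrete. A hedged toy model (single XOR parity plus per-block checksums, much simpler than real raidz but the same idea): a silent bit flip is an *unknown*-position error, which plain parity can't locate, but a checksum scrub pinpoints the bad block and turns it into a *known*-position erasure that parity can then repair.

```python
import hashlib

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

# Toy stripe: 4 data blocks + 1 XOR parity block, checksummed at write time.
data = [bytes([i]) * 8 for i in range(1, 5)]
parity = xor_blocks(data)
checksums = [hashlib.sha256(b).digest() for b in data]

# Silent corruption: one bit flips in block 2 and no drive reports an error.
corrupted = list(data)
corrupted[2] = bytes([corrupted[2][0] ^ 0x01]) + corrupted[2][1:]

# A scrub compares checksums, locating the bad block...
bad = next(i for i, b in enumerate(corrupted)
           if hashlib.sha256(b).digest() != checksums[i])

# ...and parity reconstructs it from the remaining good blocks.
repaired = xor_blocks([b for i, b in enumerate(corrupted) if i != bad] + [parity])
assert bad == 2 and repaired == data[2]
```

Without the checksum step there's no way to tell which of the five blocks is lying, which is exactly the hole that an unscrubbed RAID6 rebuild falls into.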

Comment Nuclear chain reactions are just tools, too. (Score 3, Interesting) 455

Putting nuclear bombs on the tips of rockets and programming them to hit other parts of the Earth is also mere tool use. Tools are not inherently safe, and never have been. Autonomous tools are even less inherently safe. The most likely outcome of a failed singularity isn't being ruled by robot overlords, it's being dead.

Comment Re:more leisure time for humans! (Score 4, Insightful) 530

Both Capitalism and Communism are supposed to be about maintaining the work force, so guess where we all are today?

A nominally capitalist country pays a communist country for much of its manufacturing because it's cheaper, instead of employing its own citizens. So the logical next step is to just buy the robot factory workers from China to replace workers in the U.S. to save on shipping costs.

Comment Re: AI is always "right around the corner". (Score 1) 564

The machine has no fucking clue about what it is translating. Not the media, not the content, not even what to and from which languages it is translating (other than a variable somewhere, which is not "knowing"). None whatsoever. Until it does, it has nothing to do with AI in the sense of TAFA (the alarmist fucking article).

How would you determine this, quantitatively? Is there a series of questions you could ask a machine translator about the text that would distinguish it from a human translator? Asking questions like "How did this make you feel?" is getting into the Turing Test's territory. Asking questions like "Why did Alice feel X" or "Why did you choose this word over another word in this sentence?" is something that machines are getting better at answering all the time.

To head off the argument that machine translation is just using a large existing corpus of human-generated text: my response is that that is pretty much what humans do. We interact with a lot of other humans and their texts to understand meaning. Clearly humans have the tremendous advantage of actually experiencing some of what is written about to ground their understanding of the language, but as machine translation shows, that is not a necessity for demonstrating an understanding of language.

For the argument that meaning must be grounded in conscious experience for it to be considered "intelligence", I would argue that machine learning *has* experience, spread across many different research institutions and over time. Artificial selection has produced those agents and models which work well for human language translation, and this experience is real, physical experience of algorithms in the world. Not all algorithms and models survived; the survivors were shaped by this experience even though it was not tied to one body, machine, location, or time. Whether machine translation agents are consciously aware of this experience, I couldn't say. They almost certainly have no direct memory of it, but evidence of the experience exists. Once a system gets to the point that it can provide a definite answer to the question "What have machine translation agents experienced?" and integrate everything it knows about itself and the research done to create it, then we'll have an answer.

Comment Re:AI is always (Score 1) 564

Everything humans do is simply a matter of following a natural-selection-generated set of instructions, bootstrapping from the physical machinery of a single cell. Neurological processes work together in the brain to produce intelligence in humans, at least as far as we can tell. Removing parts of the human brain (via disease, injury, surgery, etc.) can reduce different aspects of intelligence, so it's not unreasonable to think that humans are also a pile of algorithms united in a special way that leads to general intelligence, and that AI efforts are only lacking some of the pieces and a way of uniting them. As researchers put together more and more of the individual pieces (speech and object recognition, navigation, information gathering and association, etc.), the results probably won't look like artificial general intelligence until all the necessary pieces exist and only the integration remains to be done. For example, there's another article today about the claustrum in a woman that appears to act as an effective on-off switch for her consciousness, strengthening the evidence that consciousness is an integration of various neural subsystems mediated by particular brain regions.

It's important to consider that AGI may act nothing like human or animal intelligence, either. It may not be interested in communication, exploration, or anything else that humans are interested in. Its drives or goals will be the result of its algorithms, and we shouldn't discount the possibility of very inhuman intelligence that nonetheless has a lot of power to change the world. Expecting androids or anthropomorphic robots to emerge from the first AGI is wishful thinking. The simplest AGI would probably be most similar to bacteria or other organisms we find annoying; it would understand the world well enough to improve itself with advanced technology but wouldn't consider the physical world to consist of anything but resources for its own growth. It may even lack sentient consciousness.

Producing human-equivalent AGI is a step or two beyond functional AGI. Implementing all of nature's tricks for getting humans to do the things we do in silicon will not be a trivial task. Look at The Moral Landscape or similar for ideas about how one might go about reverse engineering what makes humans "human" so that the rules could be encoded in AGI.
