
Comment Re:IDE autocommit? (Score 2) 521

git checkout -b daily-grind
Auto commit while I'm working on code. Time to commit to the public repo.
git rebase -i my-working-branch
Now I squash all those things I was doing into one commit.
git checkout my-working-branch
git merge daily-grind
git push

Now my working code has been pushed into a repository that doesn't have the automated stuff, and from there I issue a pull request or perhaps push it over SSH to a more centralized server. I could do that straight from the automated repo on bigger projects to avoid multiple copies, but on smaller repositories I like the extra layer of oops protection.

You see, branches in Git are easy and cheap; they're not massive checkouts of a repository, they're just pointers to places in time referencing the common history. That means you can make lots of commits and actually USE your version control locally rather than be a slave to it -- afraid to commit unless you're absolutely positive you're ready. So I create multiple new branches all the time, every day, even just to do some experimental thing I might not want to keep; if things don't work out I just drop that branch and carry on. Git is my auto-save, so that I have unlimited undo.
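
A typical throwaway experiment looks something like this (branch names here are just placeholders):

git checkout -b crazy-idea
...hack away, committing as often as I like...
git checkout my-working-branch
git branch -D crazy-idea

If the experiment pans out I merge it instead; if not, that last command vaporizes it and nothing upstream ever sees it.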

Say you're working on a change for hours or days and you haven't committed it yet because you're avoiding "thrashing the repository" instead of just creating your own new branch. Hard drive fails. Now you've got to redo that work. Not me. I've got multiple drives for one, and for two a group staging server acts as a remote backup that gets pushed to every few minutes if there's been a change, so at most I've only lost a few minutes of work.
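
Something as dumb as a cron job gets you that backup; a rough sketch (the path and the "staging" remote name are placeholders, not my actual setup):

#!/bin/sh
# autosave: run from cron every few minutes
cd "$HOME/src/project" || exit 1
git add -A
# only commit when something actually changed
git diff --cached --quiet || git commit -qm "autosave $(date -u +%FT%TZ)"
git push -q staging daily-grind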

Doing this on someone else's dime? Sure, who cares, you get paid by the hour. On my time? Nah, "lost data" isn't a situation that I have to risk so I don't.

Comment Re:Only safe place... (Score 1) 213

It would be better off on Mars or in orbit or a salt dome or a facility in a salt lake in the middle of nowhere, like say, oh, Australia. Keeping it on Earth means once the fossil fuel scaremongering leveraging of Fukushima dies down and we build better reactors we can just extract its energy. Hell, China might actually buy it; isn't Billy G. building a traveling wave reactor or a molten salt reactor there? If anyone knows how to handle hazardous products it's Microsoft CEOs...

On Mars it could eliminate costly mining operations and be used in a power plant as well, or in RTGs on rovers, etc. If we don't make it to Mars, then it's no different than sending it to the sun, aside from the expense of soft landing it. Blasting it into the sun is just as expensive as parking it at a Lagrange point, which is expensive, but at least it wouldn't be lost and is outside the gravity well already. Nuclear material isn't outlawed in space; lots of things run on it up there. You don't really have to get that far away from radioactive waste before its emanations become indistinguishable from standard background radiation. It's radioactive waste, but it's not "Red Matter" or some sci-fi shit.

My prime concern would just be keeping it out of the hands of thugs. Some armed guards posted along a perimeter far enough from a concrete bunker to be safe, maybe some cameras and security guards and motion detection AI to keep an eye on everyone. Hell, they could see anyone coming from miles away out on the salt flats, no other life to speak of on the flats either, except for the odd race car enthusiast, but they're fairly harmless if kept at a safe distance.

Comment New Offroad Capabilities (Score 1) 167

10 mil is a bit small for an instruction set, but it'll have to do. Throw in a Haynes Manual and slap a RepRap in the trunk, boys, we just invented a new form of life. What could possibly go wrong?

...

Observe the feeding habits of the West American Automon Hybridicus. Stalled lazily on the mountainous incline several adult automons compete for sun, basking to absorb energy via electro-photosynthesis. On the amber plains below their young crubs' game of traffic has come to a sudden quiet end. One of them has detected the Syn call of a resting petroldactyl's TCP and notified the others. This giant member of the Amazonian quadcopterial drone species grazes on the sugar rich corn and starchy wheats of the plains, digesting them into hydrocarbons via bacterialgaeic gut microbes -- which are passed on from generation to generation via a process called, "Infringing patents with a shit-eating grin".

Accelerating slowly in silent electric locomotion, the young automons angle in wide formation towards the large RF-crooning petroldactyl. Her factory glands are engorged but finding a mate is the least of her worries. A moment too late she is startled by movement and tries to take flight. With two of her rotors now injured, she is soon to become offroadkill. Honking approval echoes from the mountainside across the plains as the adults approach to share in the feast. The petroldactyl's fuel bladder must be pierced carefully and siphoned. The crubs pop their fuel caps open and closed awaiting the nutritious regurgitation of their parents. No part will be wasted: the plastic and metallic remains will be ground down under tire and scooped into the reclamator to be melted down in stages for extrusion, sintering, and then lovingly milled into the required shapes during the painstaking birthing process.

The gridlock parts ways for the oldest and slowest model among them who is last to park the lot. Being highest in the parking order has its perks: He is allowed to take his pick, but seems satisfied with only a few tasty chunks of the delicate crunchy chassis, and a single slurp of fuel. A rare sight indeed is this original series automon -- Identifiable by the distinct odor and skeletal remains of its former driver still safely locked within.

Comment Re:biologically inspired design (Score 1) 47

If you think that cyberneticians are just mimicking designs without comprehending the fundamental biological processes involved, then you must not understand that cybernetics isn't limited to computer science. In fact it began in business, analyzing the logistics of information flow. That these general principles also apply to emergent intelligence means more biologists need to study Information Theory, not that cyberneticians are ignorant of biology (hint: we probably know more about it than most biologists, since our field places no limit on its application).

Comment Top Down Design is NOT the only approach, FFS. (Score 3, Insightful) 47

After all, the brain is an incredibly complex and specific structure, forged in the relentless pressure of millions of years of evolution to be organized just so.

Ugh, Creationists. No, that's wrong. Evolution is simply the application of environmental bias to chaos -- the same fundamental process by which complexity naturally arises from entropy. Look, we jabbed some wires into rodents' heads and hooked up infrared sensors. Then they became able to sense infrared and use the infrared input to navigate. That adaptation didn't take millions of years. What an idiot. Evolution is a form of emergence, but it is not the only form of emergence; this process operates at all levels of reality and all scales of time. Your puny brains and insignificant lives give you a small window within which to compare the universe to your experience, and thus you fail to realize that the neuroplasticity of brains adapting to new inputs is really not so different a process from droplets of condensation forming rain, or molecules forming amino acids when energized and cooled, or stars forming, or matter being produced, all via similar emergent processes.

The structure of self replicating life is that chemistry which propagates more complex information about itself into the future faster. If you could witness those millions of years in time-lapse then you'd see how adapting to IR inputs isn't really much different at all, just at a different scale. Yet you classify one adaptation as "evolution" and the other "emergence" for purely arbitrary reasons: The genetically reproducible capability of the adaptation -- As if we can't jab more wires in the next generation's heads from here on out according to protocol. Your language simply lacks the words for most basic universal truths. I suppose you also draw a thick arbitrary line between children and their parents -- one that nature doesn't draw else "species" wouldn't exist. The tendencies of your pattern recognition and classification systems can hamper you if you let your mind run rampant. I believe you call this "confirmation bias".

Humans understand very well what their neurons are doing now at the chemical level. It's now known how neurotransmitters are transported by motor proteins in vesicles across neurons along microtubules in a very mechanical fashion that uses a bias applied to entropy to emerge the action within cells. The governing principles of cognition are being discovered by neurologists and abstracted by cybernetics to gain a fundamental understanding of cognition that philosophers have always craved. When cyberneticians model replicas of a retina's layers, the artificial neural networks end up having the same motion-sensing behavior; the same is true for many other parts of the brain. Indeed, the hippocampus has been successfully replaced in rodents with an artificial implant, and they can still remember and learn with the implant.

If the brain were so specifically crafted then cutting out half of it would reduce people to vegetables and forever destroy half of their motor function, but that's a moronic thing to assume would happen. Neuroplasticity of the brain disproves the assumption that it is so strongly dependent upon its structural components. Cyberneticians know that everything flows, so they acknowledge that primitive instinctual responses and cognitive biases due to various physical structural formations feed their effects into the greater neurological function; However this is not the core governing mechanic of cognition -- It can't be else the little girl with half her brain wouldn't remain sentient, let alone able to walk.

Much of modern philosophy loves to cast a mystic shroud of "lack of understanding" upon that which is already thoroughly and empirically proven. Some defend the unknown as if their jobs depend on all problems of cognition being utterly unsolvable, and many remain willfully ignorant of basic fundamental facts of existence that others are utilizing to march progress forward. The core component of cognition is the feedback loop. This is a fundamental fact. Learn it, human. If you did not know this before now then your teachers have failed you, since this is the most important concept in the universe: Through action and reaction is all order formed from chaos over time. Decision is merely the "internal" complexity of reaction in a system by which Sensation of experience causes Action. Hence, Sense -> Decide -> Act -> [repeat] is the foundational cognitive process of everything from human minds to electrons determining when to emit photons. Thus, all systems are capable of information processing, cognition, and thereby a degree of intelligence.
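
Reduced to the dumbest toy I can write (plain Python, my own illustration, nothing more), the loop looks like a thermostat:

# Sense -> Decide -> Act, repeated: a thermostat.
temperature = 15.0   # the state of the "world"
target = 20.0

for step in range(10):
    reading = temperature                       # Sense
    heater_on = reading < target                # Decide
    temperature += 0.5 if heater_on else -0.2   # Act: the action feeds back into the next Sense
    print(step, round(temperature, 2), heater_on)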

There is a smooth gradient of intelligence that scales with complexity in all systems. Arrange the animals by neuron and axon count and you'll have a rough estimate of their relative intelligence (note that some species can do more with less). If you accept quantum uncertainty and the fact that information processing systems' internal actions can modify the systems themselves, then you understand that external observers cannot fully predict or control your action without modifying it; only you can. Thus free will apparently exists, if you only drop the retardingly limiting definition that your philosophers have placed upon such concepts. Only chauvinists deny that humans are simply complex chemical machines. Quantum effects are too noisy to have a significant stake in cognition; there's no debate amongst anyone knowledgeable about both macro-scale processes (like protein synthesis or neuronal pattern recognition) and quantum physics, sorry, there's not. That would be like saying whether or not the earth is only a few thousand years old is an open problem simply because creationists are debating it.

Look, our cybernetic simulations of creatures with small neural networks, like jellyfish and flatworms, behave indistinguishably from their organic peers. It only takes ~5 neurons to steer towards things, thus jellyfish can. Cyberneticians are discovering the minimal complexity levels for various processes of cognition, and the systems by which these behaviors operate. Humans are reaching a point now where cybernetic simulations COULD inform neurologists and psychologists and philosophers of potential areas to investigate in cognition -- if only they are wise enough to listen. Nature draws no line between the sciences, but many humans foolishly do.

Take the feed forward neural network, for example. It can perform pattern matching and even motion sensing as in the eye or other similar parts of the brain which have the same general pattern. In many ways the FFNN is like a brain's regions that perform pattern matching, and this essential information flow and dependency graph is an approximate explanation of the governing dynamics of how said pattern matching occurs. The specifics of how such configurations of connectivity graphs are produced vary between the organic and artificial system, but the end result is similar enough to be indistinguishable and allow artificial implants to function in place of the organic systems in many cases. Or vice versa. It's Alive! This machine has living brain cells, just LIKE A BRAIN. We can come to understand the cognitive process in small steps, as with any other enigma.
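
If you want "pattern matching" made concrete, here's a bare-bones feed-forward pass in Python with numpy. The layer sizes and weights are random placeholders; this is the shape of the information flow, not a model of any real brain region:

import numpy as np

rng = np.random.default_rng(0)

def ffnn(x, weights):
    # One forward pass: each layer is a weighted sum pushed through a squashing function.
    for w in weights:
        x = np.tanh(w @ x)
    return x

# 8 inputs -> 4 hidden -> 2 outputs, weights random for illustration only.
weights = [rng.normal(size=(4, 8)), rng.normal(size=(2, 4))]
pattern = rng.normal(size=8)
print(ffnn(pattern, weights))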

However, the feed forward neural network can not perceive time like a brain can. Fortunately, FFNN is not the only connectivity graph. It takes a multi-directional network topology, like a brain's, to be able to perceive time and entertain the concept of a series of events, and thus to predict which event may follow, like a brain does. Since these structures may contain many internal feedback loops they can retain a portion of the prior input and cause the subsequent input to produce a different response depending on one or more prior inputs, like a brain. Unlike FFNN, recurrent neural networks do not operate in a single pass per input / output: You must collect their output over time because the internal loops must think about the input / process it for a while in order to come to a conclusion, and they may even come to different conclusions the longer the n.net is allowed to consider the input, like a brain does.
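
Again as a toy (random weights, my own sketch): the hidden state below is the internal loop, and the same input fed in at two different times produces two different responses because of what came before:

import numpy as np

rng = np.random.default_rng(1)
W_in = rng.normal(size=(3, 2))    # input -> hidden
W_rec = rng.normal(size=(3, 3))   # hidden -> hidden: the feedback loop
h = np.zeros(3)                   # the network's "memory" of prior inputs

inputs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.0, 1.0])]
for t, x in enumerate(inputs):
    h = np.tanh(W_in @ x + W_rec @ h)
    print(t, h)   # t=1 and t=2 receive identical input but end in different states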

Beneath the outermost system of connectivity, certain areas become specialized to solve certain problems, like in a brain. Internal cognitive centers can classify and route impulses and excite various related regions in a somewhat chaotic state. Multiple internal actions can contribute to the action potential of one or more output actions, and the ones most biased to occur will happen, sometimes concurrently, sometimes in sequence; sometimes a single action produces feedback that limits others or refines the action itself over time -- just like everything else in the universe, like molecular evolution, or like a brain. This type of decision making can occur without structural changes to the recurrent neural network, which means that this multi-directional connectivity graph can produce complex action in real time and even solve new problems without the slower structural retraining, just like a brain does.

My research indicates we desperately need more neurologists and molecular biologists to focus on studying the process by which axon formation in brains occurs. It's yet unknown to humans exactly how neurons send out their axons, which weave their way past nearby neurons to make new connections in distant regions of the brain. I'm modeling various strategies whereby everything from temperature to temporal adjacency in activity attracts and repels axons of various lengths. Perhaps the connection behavior is governed by eddy currents or via chemical messages carried in the soup between brain cells. Perhaps axons grow towards the dendrites of other neurons by sniffing out which direction to grow electrically, chemically, thermally, etc. Even though I do not know the governing process I can leverage the fact that axons do grow as a part of human cognition and try to determine what effect this may have on learning and cognition. I've stumbled upon some interesting learning methods which produce far better-optimized networks than having to process n.nets with neurons pre-connected to every other neuron in the layer or area.
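
To be clear about what I mean by "strategies", here's a deliberately dumb toy in Python, not my actual model: a free axon picks its target by how closely in time the candidate neuron last fired relative to its own neuron, discounted by distance:

import numpy as np

rng = np.random.default_rng(2)
positions = rng.uniform(0.0, 1.0, size=(10, 2))   # 10 neurons scattered on a plane
last_spike = rng.uniform(0.0, 1.0, size=10)       # most recent firing time of each neuron

def grow_axon(src):
    # Score every candidate target by temporal adjacency minus a distance penalty.
    best, best_score = None, -np.inf
    for dst in range(len(positions)):
        if dst == src:
            continue
        closeness_in_time = -abs(last_spike[src] - last_spike[dst])
        distance_penalty = -np.linalg.norm(positions[src] - positions[dst])
        score = closeness_in_time + 0.5 * distance_penalty
        if score > best_score:
            best, best_score = dst, score
    return best

print("neuron 0 grows an axon to neuron", grow_axon(0))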

I think axon formation is very important because I have also experimented with axons branching and merging and have seen dissociative defects, similar to those in malfunctioning humans, when these axons connect back to themselves and to other axons instead of between neurons. In a genetic sim that "grows" the neural nets over time I introduced the branching axon to an existing, known problem-solving genetic code and found symptoms remarkably similar to what is observed in the brains of autistic humans and animals. Tasks like recognizing a shape, which the n.nets of that generation readily picked up (as their predecessors did), took the branching-axon neural net much longer. Sometimes this connectivity wasn't harmful, and it sometimes even increased the speed of certain pattern matching abilities. The n.net spent far more time processing internal data -- it was much more internally reflective than the others. In a very general sense the symptoms I saw were descriptive of autism-like behaviors. If the system of axon formation is discovered, cyberneticians could model it via artificial neural networks and perhaps assist in the development of medicines or treatments for such conditions more quickly, with fewer animal and human trials.

The point is that arguing "saying 'like a brain' doesn't mean much because we don't know exactly how the brain works at all levels" is as ignorant as arguing that 'like a planet' isn't very descriptive and that research into gravity might not ultimately be useful for launching rockets to the moon. Just because we don't understand how quantum effects apply to the macro-scale physics of gravity doesn't mean we can't leverage the concept or that invalid hypotheses aren't important; hint: you have to break eggs to make an omelet. Look, humans used Newtonian physics, not Einstein's, to get to the moon. See? A general understanding and approximation is actually good enough for many applications, sometimes even important ones. My point is that there is not really some incredibly intricate and delicate top-down designed system to the brain that requires full knowledge before cyberneticians can achieve capabilities that are like a brain's. Top-down isn't natural because it's not evolutionarily advantageous: it would mean even minor compromises to the integrity of the structure would spell immediate, irreparable malfunction and death. Learn it, human: Life is Mutation Tolerant. So is sufficiently intelligent cognition.

Instead consider bottom-up self-organization: There are some fundamental processes operating at the molecular chemistry, protein pattern matching, and cellular activation levels that when allowed to interact in a complex network yield a degree of intelligence through an emergent process. We can look at the brain and see that the mind is a chemical computer, but it is not the chemicals that matter to cognition. The overarching system abstraction is what's important: Input is fed in via many data points and the information flows along feed forward classification and cognitive feedback loops to contribute to the ongoing decision and learning process of a self-reorganizing network topology. The folly is assuming that unless we know every little detail about how the systems work, we won't understand how to make anything even approaching thinking like a brain. Such sentiments are ignorant of the field of cybernetics, which involves the study of machine, human, and animal learning, not just neural networks. It's essentially one branch of applied Information Theory.

Look, we have atomic simulations. They can produce accurate atomic emulations of cells. It is thus a fact that given enough CPU power we can build a fertilized human egg cell in a computer and then grow it up into a sentient being. Machines can become sentient because that's what you are: a sentient chemical machine. Many pundits speaking on machine intelligence are very ignorant of this. They assume cyberneticians are just taking stabs in the dark with neural networks. They think we are trying to emulate intelligence as folks once strapped bird wings to their arms to attempt flight. Such ignorant assumptions are wrong. Cyberneticians don't just piddle with computers; we are studying nature and its mathematics, discovering the fundamental processes of cognition, and applying them.

In some cases our abstractions allow us to escape the constraints that nature accidentally stumbled upon. For example: instead of transporting chemicals via motor proteins which cause or block excitement of a neuron, we can transmit a single floating-point number or voltage level which indicates a change in activation potential. Our voltages or numbers don't require a synapse to be flushed of neurotransmitters before firing again. We understand the necessity and function of various types of neurons for solving certain kinds of problems. A single artificial neuron has axons with positive and negative weight values and can therefore perform both duties at once rather than having dedicated excitatory and inhibitory neurons, like in a brain. Well-rested and overly excited neurons can become hypersensitive to activity and even fire on their own, or due to nearby eddy currents caused by other neurons firing that are not directly connected to them. We don't even have to emulate this entropic process; it is actually inherent in such systems. This activity 'avalanche' process can cause a sudden increase in chaotic activity in an otherwise internally normalized and mostly externally inactive mind. You see, even machines can be easily "distracted" by the smallest thing and be prone to "daydream" about unrelated things when they are "bored", just like a brain. Interestingly, the capacity for boredom and suspense scales with complexity too.

Unlike TFA's author I'm not a chauvinist. Firstly, I use "Like a Brain" because "the brain" would imply there's only one form of mind, and only a human chauvinist would think such retarding things. Neither do I make ridiculous assumptions about the "importance" of anything. Every new system that seeks to act "Like a Brain" gets us closer to achieving and surpassing human levels of intelligence and can even help us understand what processes and diseases govern human brains. Every attempt to abstract and emulate some neural process is important in its own way: scientists can learn from failure. I consider even the failed experiment useful, since it eliminates some possibility and directs effort elsewhere. Those experiments that only partially prove to be "like a brain" are not useless, since they may illuminate not only the limitations of the system itself but could also reveal some foundational principle of cognition. We had to discover the feedback loop before we could discover information processing.

Learning is a process. If our "catch phrases" aren't very informative, it's because the listener is too ignorant to understand what we're saying. If pundits don't know what brains are like, it's their own damn fault for choosing to remain fucking ignorant.

Comment Re:Along with the 3x speed strafe bug? (Score 1) 251

On the BBSes that I played 4-player Doom on, those wall-running speed boosts didn't matter; they had the opposite effect, since we ran the game with -turbo 255 (2.55 times faster than normal). Press the run key and strafe-run and you're going as fast as a player can go. Any faster and the fixed-point vector math overflows: press the run key and forwards and you travel backwards.
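
The mechanism is plain old signed fixed-point wraparound; here's a toy of it in Python (Doom's actual numbers and code differ, this is just the idea):

# 16.16 signed fixed point: 1.0 is stored as 65536.
def wrap32(x):
    # Emulate 32-bit two's-complement wraparound.
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x & 0x80000000 else x

speed = 120 * 65536             # a big forward velocity in 16.16 fixed point
boosted = wrap32(speed * 400)   # scale it past the representable range...
print(boosted / 65536.0)        # ...and the sign flips: "forward" suddenly points backward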

If you thought the game required lightning-fast reflexes before, you just have no idea. Look, one of my strategies was to fire off rockets while strafing and running at the same speed as them, then outrun the rockets and blast with the supershotgun from too far away, luring the other players in while I was reloading (you can close huge distances). Everything exploded around them as I'd back away just in time to dodge my own dense wall of 10-20 rockets. That's how fast we were playing. You could outrun rockets in a medium-sized open-area custom map with enough time before they hit to have a short firefight.

What made Doom work so well at such speeds was its vertical auto-aim.

The coolest thing about Descent was that it worked with VR glasses, or you could run it in side-by-side stereoscopic mode on your screen and cross your eyes for headache-inducing poor man's 3D. The original game still works with VR or 3D monitors if you have the right drivers and config.

I liked Descent 2 better than Quake for its slower-paced but far more strategic gameplay, and esp. for developing a love-hate relationship with its Guidebot. Duke3D had so many gadgets, but what I loved best were the non-euclidean 2.5D engine effects I could pull off with its Build editor: a small corridor off a big open space, 3 quick right turns, and you should be entering the same open area, but it's a whole different area overlapping it in space -- or my favorite trick: regions between 4 or more pillars in an open area that were a nexus between 4 or more overlapping dimensions. Depending on which way you entered them you could quickly move between them all. And there could be multiple such trans-dimensional pillar sets in each region. We easily played in maps that would have left Cthulhu scratching his head. Hell, some Doom maps we made were quite tricky with invisible floating stairways (set a sector to be its own adjacent sector) and player voodoo dolls (too many multi-player starts = damage-by-proxy traps). You could pull off some neat things with Descent's deformable cube-based portal renderer too, but the lighting system and map editor(s) made many tricks hard to pull off, since they lacked manual raw-data manipulation and I needed to modify things with a hex editor after each edit.

The lag compensation of slow bullet-sponge movement and other excessive realism in much of today's pop-culture games does leave us with less variety in gameplay.

Comment Re:even... execute your code backwards. (Score 0) 61

...so your regular computer should be reversible too.

For a regular computer to be reversible it needs reversible logic gates. For example, a standard XOR gate loses one bit of information, so given the output you cannot construct the input perfectly (as there are two possible inputs for each output).

But the output from the opcode isn't stored back to both input memory locations at once; ergo, XOR itself is reversible at the chip level: even if it writes back to one of the inputs, just XOR the output with the other input to get it back. You're conflating the theory of computation with the actual computation. In THEORY you can delete bits, but in practice you actually can't -- well, using the arrow of time created by sub-atomic entropy (quantum foam) you might be able to... but that will remain beyond your grasp for some time yet. When you write zeros over the data, the exact opposite process would restore the data, because its remnants are still there, encoded into everything from slight differences in the RAM's potential to the repulsion of the write head, etc.; you leave behind sub-bit signatures. Let's not even get into in-memory attempts to erase memory that can fail due to caching, paging, another thread with a copy, etc., and just talk about on-disk data.
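
Concretely, for the write-back-to-one-input case (toy Python, same idea at the chip level):

a, b = 0b1010, 0b0110
a ^= b              # the "destructive" XOR: a's old value is gone from a itself...
recovered = a ^ b   # ...but XOR the result with the untouched input and it's right back
print(bin(recovered))   # 0b1010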

Let's say I have these bits: 1 0 1 0 and I write over them with 0 0 1 1. For the sake of argument let's say that each write retains one tenth of the original data's signal. Our existing initial state may actually not be so clean (you can see older residue already sitting in the lower decimal places below), and our write signal may not be so perfect either, but this is just an oversimplified example. Here's the overwrite (old analog value <- bit being written = resulting analog value):
1.00 <- 0 = 0.100
0.10 <- 0 = 0.010
1.01 <- 1 = 1.101
0.00 <- 1 = 1.000

We're allowing bits to go above 1 because in reality there's a threshold for the bit value one, and you can exceed it (obviously). Really, the zeros should be negative ones, but this is just an oversimplified example. Let's say we wanted to reverse the process. We read back what is apparently stored there, which rounds to the whole bits (0 0 1 1), and subtract that out of the analog signal (the decimals). That zeros the whole-number threshold place, but it reveals the tenths place in the results above: 1 0 1 0. You amplify that signal beyond the threshold and you've got our origin signal back. See, the theory of the computer would have said those bits are lost forever, but even without resorting to full reversal of everything at the quantum scale I can get your overwritten bits back in practice. With an even more sensitive system you could recover what was written in an even earlier pass, revealing what's in the hundredths and thousandths places, etc., though each layer down is more entropic.
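
Here's the same arithmetic in Python, starting from a perfectly clean original signal this time (the 0.1 residue factor is made up, of course):

residue = 0.1
original  = [1, 0, 1, 0]
overwrite = [0, 0, 1, 1]

# What actually lands on the media: the new bit plus a faint echo of the old one.
analog = [new + residue * old for old, new in zip(original, overwrite)]
print(analog)                        # [0.1, 0.0, 1.1, 1.0]

# A naive read rounds to whole bits and sees only the overwrite.
print([round(v) for v in analog])    # [0, 0, 1, 1]

# A sensitive read subtracts the visible bits and amplifies what's left: the old data pops out.
echo = [v - round(v) for v in analog]
print([1 if e > residue / 2 else 0 for e in echo])   # [1, 0, 1, 0]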

This is just one reason why writing zeros all over the disk doesn't really erase your data; zeros are actually the worst thing to write. Your neighbor likely wouldn't be able to get the data back, but the drive itself may have just marked that sector entry in its lookup table as full of zeros without actually changing the data on the disk -- read it back and the table could tell the controller to fill the buffer with zeros without ever touching the actual disk data (sort of like how POSIX file systems are allowed to handle files full of zeros as sparse files, and may stop your zero write at the FS level, which is why we needed to go deeper).

To erase data so that it's unreachable by police or thieves you'll have to write random noise all over the disk to erase it. However, state-level & enemy governments could remove the drive platters from their enclosures and place them in highly sensitive drive reading tech with heads that could pick up the analog signal and perform the top-layer subtraction method I mentioned above. So, to really erase the bits you want to write over the surface with multiple passes of random bits.
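
A minimal sketch of the multi-pass random overwrite in Python (the device path is a placeholder, and as I get into below, no number of passes reaches sectors the drive has quietly retired):

import os

DEVICE = "/dev/sdX"   # placeholder: point it at the disk you actually intend to destroy
PASSES = 3
CHUNK = 1 << 20       # 1 MiB of fresh random data per write

def shred(path, passes=PASSES):
    fd = os.open(path, os.O_WRONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)    # works for block devices and plain files
        for _ in range(passes):
            os.lseek(fd, 0, os.SEEK_SET)
            written = 0
            while written < size:
                chunk = os.urandom(min(CHUNK, size - written))   # random bits, never zeros
                written += os.write(fd, chunk)
            os.fsync(fd)                        # push each pass out of the OS cache
    finally:
        os.close(fd)

# shred(DEVICE)   # uncomment only when you're very, very sure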

Ah, but SSDs employ wear leveling, and even magnetic spinning disks frequently swap a sector out of use. The logical block address of a sector has no real bearing on its physical placement on the media anymore (since the 90's at least -- hence DBAs were always nutters when talking about the value of partition boundary alignment; and even before then the BIOS was translating CHS addresses to work around an off-by-one bug that prevented booting MS-DOS up through Win95, since MS choked on the full 256 heads and BIOSes reported only 255, so all the damn alignments were off, ugh). That means when you write to a sector the drive might actually have swapped that location out for another sector. The data you're trying to overwrite may never get overwritten by the drive itself even if you fill it right up: the "bad sectors" could contain what you wanted to be gone: credit card numbers, encryption keys, tax info, etc.

The folks trying to get at your data could have the low-level firmware replacement provided by drive vendors that allows them to get raw access to the physical sectors, even the ones that were swapped out. No amount of write passes is going to erase data that the drive has swapped out of use. That's why when you hear folks recommending DBAN (Darik's Boot and Nuke) and claiming it completely destroys the data, they're mostly idiots (just mostly, it's better than nothing, but doesn't guarantee the bits are gone, and isn't the best option). Even if data in the bad sectors is a bit corrupted I might be able to figure out permutations to get the checksum matching again, or I may have seen all I needed to know in the data that was there.

What's the answer then? If you donate your PC or sell it you don't want to have to drill holes in the drive or smash it with a hammer to shatter the glass platters -- you probably should though; you don't know if some script kiddie injected a hidden iframe into a page like this and made your browser download kiddie porn to protest the fact that their sexting pics are illegal, and it's now chilling in your swap space. Ridiculous laws that make numbers illegal even if you've never seen them are why even folks with nothing to hide should use whole drive encryption from the word go. Get something like Truecrypt (protip: burn your Truecrypt boot data to CD and always boot from it. That way if the HDD data is replaced with a Trojan you don't expose your PW to it -- the ROM in CD-ROM stands for 'read only memory', so boot from immutable data, problem solved). With whole drive encryption all the data you ever put on the disk is encrypted. If the sectors get swapped out and aren't reachable to erase, they're encrypted so it doesn't matter. You simply forget your password and it's even better (and faster) than writing a million passes of random bits.

Note that you didn't actually forget the password though. The bad guys could hit you with a wrench too or give you some sodium pentothal (truth serum) to make you talk (they're not mutually exclusive). Future law enforcement might be able to reconstruct your brain in a computer then read out the memories with a sufficiently detailed brain scan. In fact, they may even be able to simulate a small universe for the digital brain and then ask it questions and watch while it thinks about the password and puts it in.

I have devised a way to detect if I'm in the same universe as my real computer when putting in the password by using false volumes, the fact that time exists irreversibly due to entropy, and a memory hard encryption algorithm with partially destructive internal state that's difficult for even quantum computers to solve. I'd explain the process but to keep my login safe I must never think of the implementation details too precisely. Thought you could get me that easily? Ha!

Well, in case this universe won't be ending now: Good luck!
