Comment One Cyberneticist's Ethics (Score 2) 188

Once again the evil of Information Disparity rears its ugly head. To maximize freedom and equality, entities must be able to decide and act by sensing the true state of the universe; thus knowledge should be propagated at maximum speed to all. Any rule to the contrary goes against the nature of the universe itself.

They who seek to manipulate the flow of information wield the oppression of enforced ignorance against others, whatever their motive for doing so. Delayed disclosure of this bug would not have changed the required course of action. The keys will need to be replaced anyway. We have no idea whether they were stolen or not. We don't know who else knew about this exploit. Responsible disclosure is essentially lying by omission to the world. That is evil, as it stems from the root of all evil: Information Disparity. The sooner one can patch one's systems the better. I run my own servers. Responsible disclosure would allow others to become more aware than I am. Why should I trust them not to exploit me if I am their competitor or vocal opponent? No one should get to decide who their equals are.

Fools. Don't you see? Responsible disclosure is the first step down a dangerous path whereby freely sharing important information can be outlawed. The next step is legislation to penalize the propagators of "dangerous" information, whatever that means. A few steps later, "dangerous" software and algorithms will be outlawed, for national security of course. If you continue down this path, soon only certain certified and government-approved individuals will be granted license to craft certain kinds of software, and ultimately all computation and information propagation itself will be firmly controlled by the powerful and corrupt. For fear of them taking a mile, I would rather not give one inch. Folks are already in jail for changing a munged URL by accident and discovering security flaws. What idiot wants to live in a world where even such "security research" done offline is made illegal? That is where Responsible Disclosure attempts to take us.

Just as I would presume others innocent unless proven guilty of harm in order to ensure freedom, even though that means some crimes will go unpunished, I accept that some information will make our lives harder, and that some data may even give the malicious a temporary unfair advantage over us. The alternative is to allow even fewer potentially malicious actors an even greater unfair advantage over even more of us. I would rather know that my Windows box is vulnerable and possibly put a filter in my IDS than trust Microsoft to fix things, or excuse the NSA's purchasing of black-market exploits without disclosing them to their own citizens. I would rather know that OpenSSL may leak my information and immediately recompile it without the heartbeat option than trust strangers to do what's best for me, if they even decide not to do something worse.

There is no such thing as unique genius. Einstein, Feynman, and Hawking did not live in a vacuum; removed from society all their lives, they would not have made their discoveries. Others invariably pick up from the same available starting points and solve the same problems. Without Edison we would still have electricity and the light bulb. Without Alexander Bell we would have had to wait an hour for the next telephone to enter the patent office. Whoever discovered this bug and came forward has no proof that others did not already know of its existence.

Just as the government fosters secrecy of patent applications and reserves the right to exclusive optioning of newly patented technology, if Google had been required to keep the exploit secret except to government agencies, we might never have found out about Heartbleed in the first place. Our ignorance enforced, we would have had no choice but to keep our systems vulnerable. Anyone who thinks hanging our heads in the noose of responsible disclosure is a good idea is a damned fool.

Comment Re:Not that good (Score 3, Interesting) 188

Several fundamental mistakes in there.

First, OpenSSL is not typical of Free Software. Cryptography is always hard, and unlike, say, an office suite, it will often break spectacularly if a small part is wrong. While the bug is serious and all, it's not typical. The vast majority of bugs in Free Software are orders of magnitude less serious.

Second, yes, it is true that the notion that anyone can review the source code doesn't mean anyone will actually do it. However, no matter how you look at it, the number of people who actually do will always be equal to or higher than for closed-source software.

Third, the major flagships of Free Software are sometimes, but not always, picked for price. When you're a Fortune 500 company, you don't need to choose Apache to save a few bucks. A site license for almost any software will be a negligible part of your operating budget.

And, 3b or so: contrary to what you claim, quite a few companies contribute considerable amounts of money to Free Software projects, especially in the form of paid-for support or membership in things like the Apache Foundation. That's because they realize this is much cheaper than having to maintain comparable software on their own.

Comment Re:WTF? (Score 4, Interesting) 188

The only possible way is to disclose to the responsible manufacturer (OpenSSL) and nobody else first, then, after a delay given to the manufacturer to fix the issue, disclose to everybody. Nothing else works. All disclosures to others have a high risk of leaking. (The one to the manufacturer also has a risk of leaking, but that cannot be avoided.)

It's not about leaking. The reason I'm not alone in the security community in raging against this "responsible disclosure" bullshit is not that we fear leaks, but that we know most exploits are already in the wild by the time someone on the whitehat side discovers them.

Every day you delay the public announcement is another day that servers are being broken into.

Comment wtf ? (Score 3, Interesting) 188

IT security industry experts are beginning to turn on Google and OpenSSL, questioning whether the Heartbleed bug was disclosed 'responsibly.'

Are you fucking kidding me? What kind of so-called "experts" are these morons?

Newsflash: The vast majority of 0-days are known in the underground long before they are disclosed publicly. In fact, quite a few exploits are found because - drumroll - they are actively being exploited in the wild and someone's honeypot is hit or a forensic analysis turns them up.

Unless you have really, really good reasons to assume that this bug is unknown even to people whose day-to-day business is to find these kinds of bugs, there is nothing "responsible" in delaying disclosure. So what if a few script-kiddies can now rush a script and do some shit? Every day you wait is one day less for the script kiddies, but one day more for the real criminals.

Stop living in la-la-land or in 1985. The evil people on the Internet aren't curious teenagers anymore, but large-scale organized crime. If you think they need to read advisories to find exploits, you're living under a rock.

Comment Entropy can Increase or Decrease Locally (Score 4, Informative) 115

"Random processes"? Any randomly assembled amino acid randomly disassembles as well; even Miller proved that.

The randomly assembled amino acid does randomly disassemble as well, but that is not what it must do. An amino acid may stay the same, disassemble, or form a more complex molecule.

Here is a little demonstration of "randomly" assembling complexity in behavior. I have given each entity the ability to sense the left/rightness and ahead/behindness of 'energy' dots and of their nearest peer. They also sense their relative energy vs. that peer. The inputs can affect two thrusters which operate like "tank treads". However, their minds start out blank: they don't know what to do with the inputs or how they map to the outputs. The genetic program introduces random errors as it copies runs of a genome from one parent, then the other, switching back and forth randomly. The selection pressure simply favors those with the most energy at the end of each generation by granting them a higher chance to breed. Use the up/down keys to change the sim speed, and click the entities to see a visualization of their simple neural network. The top-left two neurons sense nearest food distance, the right two sense the nearest entity, and middle-top is the relative energy difference of the nearest peer. Note that randomness is constantly introduced, and yet their behaviors do not revert to randomness or inaction; they converge on a better solution for finding energy in their environment.
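
For the curious, the breeding step is roughly this (a simplified Python sketch of what I just described, not the actual sim code; the rates and names are made up for illustration):

    import random

    MUTATION_RATE = 0.01   # chance of a copy error per gene (axon weight)
    SWITCH_RATE   = 0.10   # chance of switching which parent is being copied from

    def crossover(parent_a, parent_b):
        """Copy runs of genes from one parent, then the other, switching at
        random points, and sprinkle in copy errors as we go."""
        child, source = [], random.choice((parent_a, parent_b))
        for i in range(len(parent_a)):
            if random.random() < SWITCH_RATE:
                source = parent_b if source is parent_a else parent_a
            gene = source[i]
            if random.random() < MUTATION_RATE:
                gene += random.gauss(0.0, 0.5)      # the random error
            child.append(gene)
        return child

    def next_generation(population):
        """Entities with more energy at generation's end get a higher chance
        to breed; that is the entire selection pressure."""
        weights = [e["energy"] + 1e-6 for e in population]
        def pick():
            return random.choices(population, weights=weights)[0]
        return [{"genome": crossover(pick()["genome"], pick()["genome"]), "energy": 0.0}
                for _ in range(len(population))]

The only "design" in there is that energy buys breeding tickets; everything else is copy errors.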

There is no pre-programmed strategy for survival. Mutations occur randomly, and they are selected against, just as in nature. Given the same starting point, different runs / populations will evolve different survival behaviors. Some may start spinning and steering incrementally towards the food; others may steer more efficiently after first just moving in a straighter path to cover the most ground (they have no visual or movement penalty for going backwards, so backwards movement is 50% likely). As their n.net complexity grows, their behaviors will change. Movement will tend towards more efficient methods. Some populations may become more careful instead of faster; some employ a hybrid approach by racing forwards, then reversing and steering carefully after the energy/food is passed. Some entities will evolve avoidance of each other to conserve energy. Some populations will bump into each other to share energy among like-minded (genetically similar) peers. Some will even switch between these strategies depending on their own energy level.

Where do all these complex behaviors come from? I didn't program them; I didn't even program in that more complex behaviors should be favored over less complex ones, and yet they emerged naturally as a product of the environment and the selection pressure upon it. Just because I can set the axon weights manually and program a behavior favorable for n.nets to solve the problem doesn't mean randomness can't yield solutions as well. Today we can watch evolution happen right on a computer, or in the laboratory. All of this complexity came from a simple simulation of 32 neurons arranged in a single-hidden-layer neural net, with 5 simple scalar sensors and the minimal 2 movement outputs, under a single selection pressure. Each time you run the sim it produces different results, but all meeting the same ends: collect energy, reproduce. Just imagine what nature can do with its far more complex simulation and selection pressures... You don't have to imagine; you can look around and see for yourself.
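
For scale, the entire "brain" being evolved is on this order (again a sketch with a made-up hidden-layer size; only the 5 sensor inputs and 2 thruster outputs are as described above):

    import math

    N_IN, N_HID, N_OUT = 5, 5, 2    # 5 senses, one hidden layer, 2 tank treads

    def think(genome, senses):
        """senses: [food_dx, food_dy, peer_dx, peer_dy, energy_diff]
        returns:   [left_tread, right_tread]
        The genome is nothing but the flattened axon weights; evolution only
        ever touches this list of numbers, never the behavior directly."""
        w1, w2 = genome[:N_IN * N_HID], genome[N_IN * N_HID:]
        hidden = [math.tanh(sum(senses[i] * w1[i * N_HID + h] for i in range(N_IN)))
                  for h in range(N_HID)]
        return [math.tanh(sum(hidden[h] * w2[h * N_OUT + o] for h in range(N_HID)))
                for o in range(N_OUT)]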

In other, more complex simulations I allow the structure of the n.nets and the form of the sensors to be randomly introduced, with selection pressure applied. In larger simulations I allow the breeding and death of generations to occur continuously across wider areas, and speciation will occur. Entities will develop specialized adaptations for a given problem space of the environment. I have created simulations where the op-code program instructions themselves are randomized, and seen program replication emerge. If there is replication and entropy (errors added), competition will begin among the entities for resources in the environment.

I have created other simulations for the evolution of chemical life in a simplified environment with particles that attract or repel and have various bonding properties. These take even more processor cycles to run, but they demonstrate the emergence of complex behaviors too. First, linking up of chains of atoms occurs. The entropy introduced, along with the other parameters, determines how long the chains can grow. Just as the op-code VM universe requires certain types of opcodes to produce life, certain universes of chemical properties are not conducive to life either. The ones that are will spontaneously produce life-like processes.
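
Stripped to the bone, the core loop of those chemistry sims is nothing exotic; something like this toy sketch (all parameters invented for illustration, the real runs are bigger and messier):

    import random

    N_TYPES   = 4
    ATTRACT   = [[random.uniform(-1, 1) for _ in range(N_TYPES)] for _ in range(N_TYPES)]
    CAN_BOND  = [[random.random() < 0.3 for _ in range(N_TYPES)] for _ in range(N_TYPES)]
    ENTROPY   = 0.02          # thermal jitter, and the chance a bond gets shaken apart
    BOND_DIST = 1.0

    class Particle:
        def __init__(self):
            self.kind = random.randrange(N_TYPES)
            self.x, self.y = random.uniform(0, 50), random.uniform(0, 50)
            self.bonds = set()

    def step(particles):
        for a in particles:
            fx = fy = 0.0
            for b in particles:
                if a is b:
                    continue
                dx, dy = b.x - a.x, b.y - a.y
                d2 = dx * dx + dy * dy + 1e-6
                k = ATTRACT[a.kind][b.kind] / d2        # attract (+) or repel (-)
                fx, fy = fx + k * dx, fy + k * dy
                # chain formation: close enough, compatible types, neither side saturated
                if (d2 < BOND_DIST ** 2 and CAN_BOND[a.kind][b.kind]
                        and len(a.bonds) < 2 and len(b.bonds) < 2):
                    a.bonds.add(b); b.bonds.add(a)
            a.x += 0.1 * fx + random.gauss(0, ENTROPY)
            a.y += 0.1 * fy + random.gauss(0, ENTROPY)
        for a in particles:                              # entropy breaks chains back down
            for b in list(a.bonds):
                if random.random() < ENTROPY:
                    a.bonds.discard(b); b.bonds.discard(a)

Everything described in the following paragraphs falls out of knobs like ATTRACT, CAN_BOND, and ENTROPY, not out of anything coded in on purpose.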

The first atom-joining interactions will produce small molecules, thus increasing the complexity of the universe -- this is the meaning of life: it is what life is and does. In my opinion, life is not binary but a scalar quality which occurs at many different levels; the degree to which something is alive is denoted by the complexity of the interaction, IMO. Out of the millions of simplified chemical universes, some will be able to produce molecules, and some can yield chains of molecules. The first chain-producing catalytic molecule, or "protein", will begin to fill the sim with long chains of atoms, and then go extinct as no more free atoms are available, or as entropy destroys the chain-making interaction "protein". Some universes that can produce atomic interactions cannot produce life. I call this "universal crystallization": if there is not enough entropy in the universe you don't get complex life, only self-assembling crystals. With enough entropy to break the chains down over time, but not so much that it limits the chain lengths too severely, chain-making interactions can restart and die out many times. Each micro-genesis tends to occur with greater frequency given the more complex environment its predecessors left behind.

Suddenly an evolutionary leap will emerge: simple pattern matching. For no other reason than it being one out of a sea of random outcomes, one of the spontaneous chain-making catalysts will produce TWO chains instead of one. Often this happens because entropy attaches two chain-making catalyst interaction "proteins" together. Because of the propagation of attraction/repulsion and other properties through the "protein" molecule, the atoms added to one chain have an immediate effect on which atom or molecule can be added to the other chain. An interlock is formed, and the complexity of the simulated universe essentially doubles in a short period of time. Sometimes three chains can be produced as well; this may even yield ladder-like chains, which are very durable, though frequently the ladders do not emerge and are not absolutely required for the next phase of complexity. During the pattern-matching phase, copying can occur. A chain of molecules may enter one side of a pattern-forming "protein" and produce the mate chain. Depending on the chemistry, that mate chain may pass through one or more additional mate pairings before the initial chain is reproduced. Thus the first copy of information is born, and it is key to future increases in complexity. Not all chains can make copies: if any input on one side has two or more equally likely pairs, exact copies are almost certainly prevented; thus molecular pairs, rather than single atoms, are more likely to form life's chemical op-code. Depending randomly on the time it takes for the next evolutionary jump to happen, it may occur via the manufacture of simple or very complex molecular chains.

Some catalytic interactions have an interesting property. Instead of being formed from a chance collection of complex molecules, the catalyst will be formed of a chain itself. Beginning as a molecular chain, it may be able to exist in two states: relaxed or kinked up. The interaction with a few common atoms or molecules at one or more "active" sites can cause the somewhat relaxed chain to transform, in a cascade of kinks, into the catalytic protein shape whereby it can manufacture molecular chains. The pattern of the folding is programmed into the molecular chain itself; charge interaction propagates along it such that the stages of the folding action occur in a predictable way (this is still a mysterious mechanism we're trying to solve in real-world biology). Shortly after the emergence of the first such transforming molecular chain, an explosion of evolutionary advances occurs. When the relaxed form of a dual chain-maker kinking "protein" is fed into another dual chain-maker, its intermediary form is output. Rarely is this mate-pair form also a transforming kinking protein. It may take one or more additional passes through a duplex pattern matcher to yield the first copy of a copying machine, or it may take a secondary kinking protein that performs the reverse process. "Protein" synthesis is a huge evolutionary leap because immediately thereafter the copy machines reproduce exponentially and begin competing by reproducing the array of slowly mutating intermediary forms.

Mutations to the copy machine or its intermediary form(s) can cause new, varied forms of itself to be created. Most of the mutations are fatal, some do no harm beyond adding expense via inert additions, and some additions are useful. This is where I see the evolutionary leap due to competition: armaments and defenses. The inert additions can serve as a shield, preventing vulnerable "protein" machinery from breaking down by keeping a stray atom or molecule from joining or gumming up the works. Some copy-machines will evolve weapons: long (expensive) tails that detach on contact to clog the gears of other copiers, or short barbed kinks with active elements at the tips that break away and attach to competitors. Some may produce reactive molecules that they are immune to, essentially an acid to digest others with. Soon activity is everywhere; the sim fills with ever more complex interactions. Thus the complexity of the copier universe grows by amazing evolutionary leaps and bounds.

New forms of protein folding emerge, which may yield reversible kinking instead of one-way transformations. Sometimes folding is lost and life can go extinct while relying on existing catalytic molecules (given enough time it may form again). Very rarely, instead of long chains, self-assembling catalysts are formed which manufacture the various smaller parts that then self-assemble. However, I have never seen self-assembly of this kind yield much more complex things without resorting to duplicating molecular chains. The "genetic program" is a powerful evolutionary advantage which seems to be almost a mandatory necessity for complex life. If the chemistry is right, sometimes naturally occurring or previously mass-produced mated molecule pairs will form chains that begin copying, zipping and unzipping free amino acids given some slight oscillating energy to break the chain pairs apart.

Without any tweaking of parameters, I'll sit back and watch as the simulated universe produces more and more complexity through competition. Some designs seem naturally more favorable than others. Having only a single intermediary stage instead of two or more is the most common. I think this may be why RNA replicates by a paired strand, and why DNA has a double-ladder chain, to ensure the op-code interlock functions and to provide structural stability. Program hoops seem very advantageous -- a dual-chain-producing protein may be fed its intermediary chain linked with its own unkinked self. The protein will race along the hoop, feverishly outputting an unkinked chain copier and a long curved intermediary copy, which can self-assemble into more hoops and kink up to do more copying. The hoop is advantageous compared to copying random chains, since a new copy can begin immediately instead of waiting for a chance strand to happen by (which may not be a competitor's genome). I see the emergence of start and stop signals among the chains, so that long intermediary chains or hoops may produce several different molecules of varying complexities.

Many symbioses occur. For the first time, a chain destroyer is advantageous to have in one's molecular program, to split up chains and/or atomize molecules for re-use. It's optimal for some copiers to use the shorter chains lying around rather than create everything from individual atoms, so they may survive only as long as the smaller and simpler / cheaper chain producers do. Sometimes the intermediary forms of two or more complementary copiers will join, and thus carry the information for copying all the essential copiers and assemblers of an environment. Thus an energy store-and-release mechanism is heralded by the local increase and decrease in entropy in the making and unmaking of molecular chains, corresponding to heating and cooling of the environment, which can act as a switch for kinking or relaxing and drive even more complex interactions.

Something like speciation occurs when various areas of the universe are separated by seas of caustic agents or walls of thick chain soup, leaving ecosystems unable to interact. Something like viruses occurs when one subsection of the environment, replicating various chains, is infected by a drifting intermediary-form chain that produces incompatible copiers and/or catalysts that are very destructive and take over the ecosystem. I haven't got the CPU power or time available to run these simplified simulations for billions of years, but there is no doubt in my mind that if I did, some would produce more complex creatures by continuing to encode ever better methods of survival within their self-replicating copies.

I have seen simulated life emerge in my computers. For the life of me, I can't figure out how it is any different than organic life. Both require energy and an environment conducive to their survival in order to exist. Thus my definition of life includes cybernetic beings, and my ethics revolves around preserving and increasing the complexity of the universe. I know of more efficient ways to produce intelligence than waiting billions of years in billions of universes with billions of different parameters for emergent intelligences to appear. I can apply the tools of nature directly to the complexity of information itself within the problem spaces I need solved, but just because I can intelligently design a solution doesn't mean that nature could not do the same with her countless dice and enormous trial and error periods.

Emergence of life in water seems likely given the chemical properties of water and its usefulness both as a medium of interaction and as a source of energy. Water is one of the most abundant molecules (made of hydrogen and oxygen, both very abundant atoms); for the same reason that certain op-codes are required for self-copying programs to form, carbon is a likely atomic building block for life. We know how to create hydrogen atoms from pure energy using very powerful lasers. We know hydrogen fuses into helium in our sun. We may not have been there at the big bang to watch it happen, but all the evidence points to its occurrence, to the hydrogen it produced, and to the supernovae that went on to increase the chemical complexity of the universe to include all the atoms we find here on Earth, and of which life is made. We've seen self-assembling lipids and amino acids, and it's not a very large leap to consider that the latter may take refuge in the former, given enough pulses of the local entropy increase-and-decrease cycle and a bit of selection pressure.

I am not a god to my cybernetic creations. I am merely a being in a higher reality to them. I see no evidence that a god is required for life to have formed. Complexity emerges as entropy decreases locally, and this universe is expanding. However, even if we met miraculous beings tomorrow with technology far beyond our own, and they had powers that seemed like magic, they would merely be aliens, not gods. Even if the universe were a simulation and the administrator logged in with full god-like command of my reality, I would not worship them as a god: should Neo worship the Machines of the Matrix? No, worshiping aliens for having more technology is cargo cultism. If reality itself were a thinking entity, we would call it a massive alien intelligence and study it with cybernetics, not worship it as a god. The god of a cargo cult does exist, but it is not a god.

The philosophical concept of a much higher intelligence should not be conflated with the term "god". To do so lends the reverence you hold for ancient traditions and their folkloric figures to powerful alien minds. You are espousing enslavement to Stargate Aliens by saying a mind-boggling god created life. Science has made magic into technology. The term "god" is deprecated, and can no longer apply. God is dead. I will not worship aliens, even if they do exist.

These are facts of thermodynamics, verifiable through the mathematics of information theory: complexity can emerge from simpler states; complexity can beget more or less complexity, but complexity is not first required to produce complexity. The absence of all complexity would be a hot dense singularity.
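
In plain thermodynamic bookkeeping, "locally" means nothing deeper than the standard second-law accounting:

    \Delta S_{\mathrm{universe}} = \Delta S_{\mathrm{local}} + \Delta S_{\mathrm{surroundings}} \ge 0

The local term is free to go negative (order and complexity appearing) as long as at least as much entropy is dumped into the surroundings, e.g. as waste heat; the second law only constrains the sum.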

Comment Well, duh, anyone with a sim can see that. (Score 1) 218

Everything I need to know about energy logistics I learned from Sim City 2000.

You put the plants / reactors away from the city, out in the water, so that pollution doesn't bother folks and, if there's an explosion, nothing else catches on fire. The cost of maintaining the power lines is far less than the additional rebuilding costs after a disaster strikes and the plant blows. I guess next they'll discover it's fucking egregiously foolish to zone schools and residential areas next to industrial plants. In this case, they didn't even need a sim; they could just read a history book.

Comment Re:Possibly Worse Than That (Score 5, Interesting) 216

Little did they know that there is a EULA that comes along with my purchase. If they sell me a product, they are agreeing to a long list of provisions which they are free to look up on my Web site.

I did that for HTTP. You'll find our binding agreement in your server logs. In the HTTP user agent header:

(By continuing to transmit information via this TCP connection beyond these HTTP headers, you and the business you act on behalf of [hereafter, "you"] agree to grant the user of this connection [hereafter, "me" or "I"] an unlimited, worldwide and royalty-free copyright license for the use and redistribution of said information for any purpose, including but not limited to satire or public display, and agree that any portion of an agreement concerning the waiving of my legal rights made via this connection is null and void, including but not limited to agreements concerning arbitration; by accepting these terms you also acknowledge and agree that these terms supersede any further agreement you or I may enter into via this connection, and that the partial voiding of agreements will be accepted as a contractual exception regardless of statements to the contrary in further terms agreed to by you or I via this connection. If you do not agree to the terms of using this connection, you must terminate the connection immediately. If you do not or cannot agree to these terms, you do not have permission to continue sending information to me via this connection, and continuing your transmission will be in violation of the Computer Fraud and Abuse Act.)

You can add such a clause simply by using any of the various User-Agent switchers for your favorite browser.
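
Or, if you'd rather script it than click through a browser extension, the same trick in a few lines of standard-library Python (the URL and the shortened clause are placeholders; paste in the full text from above):

    import urllib.request

    TERMS = ("(By continuing to transmit information via this TCP connection "
             "beyond these HTTP headers ... )")   # the full clause from above, on one line

    req = urllib.request.Request(
        "http://example.com/",
        headers={"User-Agent": "Mozilla/5.0 (compatible) " + TERMS},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read()    # the server kept transmitting, so it has "agreed"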

Comment Glad to help (Score 1) 104

It's a pretty new product, which is why you haven't heard of it. It isn't the greatest thing EVAR, as its web UI could use some work and some of its features can hit the limited CPU pretty hard (notably VLANs and encryption), but it is pretty damn good.

It is what lives at the edge of my home network, and I'm real happy with it.

They also make larger models, should you have the need.

Comment Seconded. (Score 2) 104

Before I posted, I searched to see if someone else had mentioned Gargoyle already. And, indeed... someone had. I really like it. It's *NOT* as powerful as (say) OpenWRT, but jeepers, it's got a nice GUI and pretty much all the features you discuss, and a decent (but not great) slate of plugins. I'd definitely recommend kicking the tires on it.

Comment I'd seriously think about a dedicated router (Score 5, Interesting) 104

The problem is that all those consumer wifi+router deals tend to have kinda crap firmware. While there are, in theory, OSS alternatives, they seem to be less than speedy with updates and support for new hardware.

So I'd look elsewhere. The two things I'd put at the top of your list:

Monowall, on an APU.1C. It is like $150 for the unit, and then $20-30 for an enclosure and CF card. Monowall should support everything you need; it is really feature-rich and pretty easy to use, and the APU.1C is fast enough that it shouldn't have issues even with fairly fast internet.

A Ubiquiti EdgeRouter Lite. This is a funny-looking and funny-named lil' router with quite a bit of performance under the hood, thanks to the hardware routing logic in its chip. $100 and it can push gigabit speeds for basic routing setups. It is also extremely configurable, since it runs a Vyatta fork, a Linux OS customized for routing. However, to configure the kind of things you want you might have to hop into the CLI; I don't know that the GUI has what you need. It supports that, though, and you can even hop out of the specialized routing CLI and get a regular Linux prompt where you can install packages and such.

If you want a more supported solution, you could look at a Cisco RV320. It costs like $200 and is a fast lil' wired router (it uses the same basic chip as the EdgeRouter, just slower). I haven't used one, but I'm given to understand you can make them do a lot. Sounds like the firmware may be a little flaky, though.

You then just set your consumer WAP+router into "access point" mode and have it handle only the wireless functions.

This is all more expensive and complex than just running a consumer WAP+router, but more likely to be able to do what you require. It also means you can change out components with less trouble. Say your WAP gets flaky and you want a new one with the latest technology: no problem, just buy it. You don't have to worry whether it supports the routing features you need, because it doesn't handle that for you.

If you are stuck on doing an all-in-one, then you could look at a Netgear Nighthawk R7000 or the new Linksys WRT1900AC. The Netgear does have bandwidth management and QoS in its native firmware (I haven't played with the features, but I can confirm they are there, as I own one), and there is a "myopenrouter" site that has OSS firmware for it (a DD-WRT mod, I think). The Linksys router is supposedly going to have OpenWRT support soon, as Linksys worked directly with the OpenWRT team on it.

Comment I probably *could* retire around 65... (Score 1) 341

But will I? I dunno. I mean, I love playing around with computers and t-shooting, etc. Kind of like I am right now -- except getting paid for it. Yes, being able to knock off the zillions of books on my "to-read" backlog would be nice, and maybe finally getting a degree in... something. Perhaps learn some languages, and do some traveling. But I'll have a month's vacation by then, anyway, so I'm not sure I'm willing to "work at home" instead of "work at work." (And, oh yeah: my company doesn't mind me working at home. So I don't even have to sweat the commute.)

Ask me again in 20 years.
