Comment Re:Why bother with tricks? (Score 0, Flamebait) 297

Yeah, I might give three fucks about your pedantry if there weren't innocent folks in Gitmo who have been detained without trial for years and may never leave. "Treason" or "terrorism", whatever. They can snatch up anyone for any reason and claim "national security" or "state secrets".

You're like a damn dumb BASIC prompt, balking at petty syntax errors when you could use your marvelous brain to understand the message instead. What a waste of flesh.

Comment This was also done back in 1997. (Score 4, Informative) 223

Back in 1997 at Stanford, green laser light was bounced off the accelerator's electron beam to make gamma rays, and those gamma rays were smashed back into the laser light to produce matter.

Scientists Use Light to Create Particles

Photons of light from the green laser were allowed to collide almost head-on with 47-billion-electronvolt electrons shot from the Stanford particle accelerator. These collisions transferred some of the electrons' energy to the photons they hit, boosting the photons from green visible light to gamma-ray photons, and forcing the freshly spawned gamma photons to recoil into the oncoming laser beam. The violent collisions that ensued between the gamma photons and the green laser photons created an enormous electromagnetic field.

This field, Melissinos said, "was so high that the vacuum within the experiment spontaneously broke down, creating real particles of matter and antimatter."

This breakdown of the vacuum by an ultrastrong electromagnetic field was hypothesized in 1950 by Dr. Julian S. Schwinger, who was awarded a Nobel Prize in physics in 1965.

Emphasis mine.

Thus, we already know that we can create matter by colliding photons. The newly proposed experiment could be useful because it does not require the electron-photon collision near the detector in order to produce the gamma photons and the subsequent light-on-light reaction. They'll be firing gamma rays through a cylinder full of black-body radiation. A gamma-gamma collision would be more interesting, IMO. The new gamma-on-black-body-radiation collision experiment should run at even lower energy than the gamma/green-laser collisions that produced matter in 1997.

Why even go for a lower-energy apparatus than what has already been demonstrated? Simple: to verify the minimum energy level required to make the vacuum puke.
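For anyone who wants actual numbers, here's the rough threshold, back-of-the-envelope style (assuming an idealized head-on two-photon collision, so take it as a sketch, not gospel):

    E_1 E_2 (1 - \cos\theta) \ge 2 (m_e c^2)^2, \qquad \theta = \pi \;\Rightarrow\; E_1 E_2 \ge (m_e c^2)^2 \approx (0.511\ \mathrm{MeV})^2

So against black-body photons in the keV range (roughly what I'd expect a laser-heated hohlraum to give off), the gamma beam only needs on the order of (0.511 MeV)^2 / 1 keV, i.e. a few hundred MeV, to clear the pair-production threshold -- that's the "minimal energy level" the new setup would be probing.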

Comment Yet More Belief Biased Science (Score 1) 86

It's pretty obvious from the paper that this is just more pseudo-scientific "Static!" scaremongering to get folks to buy those stupid ESD bracelets and "non-edible" silica packets. Let me get this straight: not only are my delicate circuits vulnerable to some "invisible force" that only "scientists" can see, but now you tell me it's because they're covered in a thin layer of water?!

Comment Re:So someone didn't follow the practice ... (Score 1) 152

Correct First, Clever Later is my core philosophy. There is nothing I don't abstract via a portability layer.

I wrote a "hobby" OS kernel from scratch, starting with nothing but a bootable hex editor on x86. The FIRST thing I did was create a simple bytecode VM that calls into a function table, giving me ASM-level abstraction before I even switched out of real mode to protected mode. THEN I created the bootloader, a simple text editor, an assembler, a disassembler, and a crappy starter file system, and that was enough to bootstrap into a full OS completely free from Ken Thompson's Unix compiler hack. A bytecode-to-machine-code compiler was one of the very last things that got written (it was boring).

All programs on my OS (yes, even Java or C code) compile into bytecode, and the kernel can either link programs into machine code at "install time" or include the (shared-mem-page) VM stub and run them emulated -- which is nice for providing application plugins, scripting, or untrusted code. Since the kernel is compiling things, it can do far more checks on the validity of binary code, as well as optimizations. Since multiple languages can compile to the same bytecode, I can transparently integrate libraries written in different languages instead of each language needing its own implementation of basic things like big-integer math. I can even change calling conventions on the fly for transparent RPC.
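Since people never believe the dispatch core of such a VM is the easy part, here's roughly what it looks like -- an illustrative toy, not my actual kernel code, with a made-up opcode set:

    /* Minimal bytecode VM sketch: a function table indexed by opcode.
       Opcodes, handlers, and the stack-machine design are hypothetical,
       for illustration only. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    typedef struct { int32_t stack[64]; int sp; const uint8_t *ip; int running; } vm_t;
    typedef void (*op_fn)(vm_t *);

    static void op_push (vm_t *v) { v->stack[v->sp++] = *v->ip++; }  /* next byte is the literal */
    static void op_add  (vm_t *v) { v->sp--; v->stack[v->sp - 1] += v->stack[v->sp]; }
    static void op_print(vm_t *v) { printf("%d\n", v->stack[--v->sp]); }
    static void op_halt (vm_t *v) { v->running = 0; }

    static const op_fn ops[] = { op_push, op_add, op_print, op_halt };

    static void vm_run(vm_t *v, const uint8_t *code)
    {
        v->sp = 0; v->ip = code; v->running = 1;
        while (v->running)
            ops[*v->ip++](v);   /* fetch opcode, dispatch through the table */
    }

    int main(void)
    {
        const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        vm_t vm;
        vm_run(&vm, prog);      /* prints 5 */
        return 0;
    }

Port that loop (plus the opcode handlers) to a new CPU and everything written against the bytecode comes along for free, which is the whole trick.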

For Gravmass last year I got some nifty embedded ARM systems to upgrade a few of my x86 robotics projects with (for better battery life). Instead of building a cross compiler (which takes a while) and compiling the OS components from x86 to ARM, I just implemented the bytecode VM on ARM in one weekend, and I had my full OS software stack up and running (albeit slowly, since I lacked native compilation).

I took the time to do things right, so porting was painless and it was actually fun to migrate to a whole new architecture. Then I was free to debug, build, and test stuff right on the hardware over a serial console. Now I've got a project to build a usable computer from scratch (a transistor version of a CPU I built from about 400 contactors), and the new system will use my VM bytecode as its native machine language, so I won't have to write a compiler. I'm just taking things nice and easy, even teaching kids and neighborhood enthusiasts CS101 as I go. Man, it sure is nice to have as much time as I want on these hobby projects -- time to actually do things absolutely correct first, then as clever as I dream of later...

However, imagine a publisher is breathing down your neck to get things up and working and done before a deadline. They don't care whether you're making it harder to port your code to a new system; right now what matters is getting your basic engine up and running pronto so other folks can start working with their content on the dev kits / consoles. Now imagine that even if you started out with a cross-platform system, you've got to squeeze in a new feature and it needs to work fast, and yesterday -- no, you don't have time to write both a slow portable version and a fast non-portable version, and you've made this sort of addition so many times that it hasn't been possible to run the slow portable code with the latest assets in over a year.

Yeah, the GPU and shaders aren't all that different, but moving to this new platform is like porting a program designed in Erlang to multi-threaded C. Getting an image up isn't just getting a spinning flat-shaded teapot displayed -- you've got a bunch of code in your asset loading system to handle your data storage format before you can even get started, and both of these were optimized for the CELL processor's big-endian layout while the new platform is little-endian. The asset streaming was probably done in several stages distributed among the CPUs, operating on batches of data a few KB at a time. I've got no doubt that before you could see a properly textured and lit wall there'd be some serious lifting to be done. Floating-point formats might not even be directly convertible, since the IEEE-754 standard doesn't say shit about byte order. One mistake anywhere throws a matrix off or gorks vertex order and you're looking at nothing with no clue why. You can pray that your SDK has a debugger for the GPU... but it probably won't, since this is a new platform and the debugger you wrote for the previous system may never work on it properly. Which is why some devs shed a tear of joy when Valve opened their VOGL debugger.
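To make the endian gripe concrete, here's the sort of helper that ends up sprinkled all over an asset loader (a generic sketch, not code from any particular engine):

    /* Byte-swap helpers for loading big-endian (CELL-era) asset data on a
       little-endian target. Sketch only; a real loader does this per field,
       driven by a description of the file format. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t swap32(uint32_t x)
    {
        return (x >> 24) | ((x >> 8) & 0x0000ff00u) |
               ((x << 8) & 0x00ff0000u) | (x << 24);
    }

    /* Floats get swapped via their bit pattern: IEEE-754 defines the bit
       layout, not the byte order it's stored in, so the file's endianness is
       a separate convention the loader has to know about. */
    static float load_be_float(const uint8_t *p)
    {
        uint32_t bits;
        memcpy(&bits, p, sizeof bits);  /* avoid strict-aliasing trouble */
        bits = swap32(bits);            /* assuming a little-endian host */
        float f;
        memcpy(&f, &bits, sizeof f);
        return f;
    }

    int main(void)
    {
        const uint8_t be_one[4] = { 0x3f, 0x80, 0x00, 0x00 }; /* 1.0f, big-endian */
        printf("%f\n", load_be_float(be_one));                /* 1.000000 on LE hosts */
        return 0;
    }

Miss one of those calls on one field of one structure and you get the "matrix is garbage, screen is black, no clue why" situation I'm describing.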

Regular software devs don't face the same crap game devs do when debugging -- you can probably even fire up an emulator, pause, and dump machine state. Graphics programmers are largely going in blind. If something doesn't work, you hope it's something you just did and not some subtle bug introduced a week ago that you're only now running into. Remember before you learned to use a debugger, when you'd just print messages to the console? Yeah, well, debugging on a GPU these days is worse than that; we don't even have a console log to print to. I've taken to debugging shaders separately from the main program, but when something goes wrong that I can't reproduce externally, I'm pretty screwed -- even more so if the project already has a lot of complex things going on beyond the spot I'm testing.

I'm sure once they got a model up on the screen the going was a lot smoother, working with code in the high-level language and maybe re-implementing any missing pieces that existed only as ASM routines. The meat of the matter is that they'll have to re-architect the whole system. Even if they had a perfect cross-platform "version of the source code that runs correctly (but slowly)", they'd still have to playtest side by side to catch subtle lighting, texturing, and rigging glitches in the data itself, especially if they use higher-res textures or slightly better shaders on the new platform -- better to change these now rather than do all the tests twice. Be off by one so that a single light is missing that the level designers and artists built a scene around, and it's not the end of the world, but you can't really chance it.

Games just aren't the same beasts as business software. Even in the less complex embedded world, I know for a fact there are quite a few MIPS systems out there running a hacked-together FORTH interpreter as their bootloader / init process, and some of these things control industrial machines. Even if I had the time to fix things, no amount of begging would get the boss to agree to spend time on making things forward-portable; they'd rather charge that to the future guys who have to fix the mess. It's working? You're done.

Lots of gamers understand that the CPU architectures of the PS3 and PS4 are different, but many don't understand just how different they actually are, or how much of a pain in the ass debugging graphical applications is. Personally, I like things like TFA. It used to be that players didn't get to know anything about how games were developed at all. Now there's a little bit more info leaking through. Hopefully publishers will eventually take a page from the openly developed indie scene; maybe then they can get some feedback BEFORE they make a flop. Maybe market forces will push towards a more open model of development. What I think is kind of fun is that these programmers are getting a taste of what the testing department goes through, heh.

In other words: yeah, they fucked up under pressure, but that's par for the course. You might want to pay attention to the difficulties of porting between such parallel systems, since materials science isn't exactly keeping up with processor speed. I mean, look who's talking: your kernels don't have bytecode compilers, so you can't even do basic tasks like copy all of your programs between your desktop, phone, tablet or robot and run them on whatever machine you have. Even your VM OSs like Android aren't compiling once at install time, so they don't leverage full native speeds. You think hardware migration woes are inexcusable? Well, your program doesn't even run in more than one language. Porting applications between C, Android Java, HTML+JS and ObjC means a complete rewrite for most folks, since you don't even write programs in a cross-platform meta-language and compile down into target languages with a compiler-compiler. Anyone who's been around long enough knows that languages come and go. Best hope, when Moore's Law bottoms out around 2020 (due to atomic sizes), that we don't all migrate from C/C++ to something like Erlang for system-level programming in order to leverage all of the many cores we'll need to go faster; otherwise you'll come to appreciate the pain of porting too.

My hope for humanity certainly isn't shaken by gamedevs under pressure since it wasn't destroyed by you "Language Level" coders who poke fun at "Assembly Level" coders while you're in the same damn boat just crossing a bigger pond -- ask folks who ported codebases from COBOL to FORTRAN to C, etc. Hell, some folks are STILL in that process. Be smug all you want, but you're still just 'the pot calling the kettle black' to me until you can figure out how to write your C program such that you don't have to rewrite it in Python, Erlang or Go, or recompile it to run it on another system.

Comment Re:In other words... (Score 1) 293

There is a dislike in Facebook. You un-friend or hide the individual item. The people asking for dislike on facebook are asking for censorship. They want it to be harder to find things they don't personally like. I hope Facebook doesn't do it.

Not sure if troll... They do do that, automagically. Now you know, and knowing is half the battletoads.

Comment Re:Its easy to be critical (Score 2, Informative) 164

Not me. I'm not guilty. Even responses to requests for help in compiling against OpenSSL were a huge red flag: "If you're not compiling from source, we won't help you." Asking for clarification of the behavior of API calls you're linking against was frowned upon, so I went and got the sources, and then I saw why they don't want to help anyone -- they don't know how. I told the 3rd party I was contracting for at the time that I would not recommend OpenSSL for future projects, and to use GnuTLS, Mozilla's Network Security Services (NSS) (Apache mod_nss + mod_revocator), or, for embedded, PolarSSL or MatrixSSL. When asked why, I said the project did not have code-quality standards or testing harnesses, and that there were many bugs filed against it which had gone unfixed for years.

The only people who are guilty are fucking morons. Saying "We're All Guilty" is just blame shifting. I was just building some business logic for a proprietary CMS / Portal and wrote that shit off on my first encounter -- Fuckers actually put OpenSSL on a distro CD for an OS that purports to be security focused. How retarded is that?!

After getting more experience with NSS and MatrixSSL, and seeing the complexity FUBAR of the entire security-theater CA system, I've been "rolling my own" security suites despite "common wisdom". That shit doesn't pass as "secure". Honestly, I can't fathom why ANY of the OpenSSL codebase is being used in LibreSSL. Implementing these ciphers isn't hard: the math is well understood, the example code is published, and things like side-channel attack mitigation are well documented, if a bit tinfoil-hatty (considering the state of end-user OS security). The OpenSSL project does deserve bashing. It doesn't even have a proper test harness and unit tests with proper code coverage. How can you even tell if both sides of an #ifdef are equivalent, let alone which one is faster? How can a security product be released with ANY untested branches of code?
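For the record, exercising both sides of a build-time switch is not rocket science either. A toy sketch of the kind of harness I mean (the #ifdef symbol and functions are made up, and the "cipher" is a trivial XOR so the known answers stay short):

    /* Known-answer test that exercises whichever side of a build-time switch
       was compiled in. Run the suite once per configuration and no branch
       can ship untested. Names and the flag are hypothetical. */
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    #ifdef USE_FAST_XOR_STREAM
    static void xor_stream(uint8_t *buf, size_t len, uint8_t key)
    {
        /* "optimized" branch: handle 4 bytes at a time, then the tail */
        size_t i = 0;
        uint32_t k4 = key * 0x01010101u;
        for (; i + 4 <= len; i += 4) {
            uint32_t w;
            memcpy(&w, buf + i, 4);
            w ^= k4;
            memcpy(buf + i, &w, 4);
        }
        for (; i < len; i++) buf[i] ^= key;
    }
    #else
    static void xor_stream(uint8_t *buf, size_t len, uint8_t key)
    {
        /* portable reference branch */
        for (size_t i = 0; i < len; i++) buf[i] ^= key;
    }
    #endif

    int main(void)
    {
        /* Both branches must produce exactly this output for this input. */
        uint8_t buf[7]        = { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06 };
        const uint8_t want[7] = { 0x5a, 0x5b, 0x58, 0x59, 0x5e, 0x5f, 0x5c };
        xor_stream(buf, sizeof buf, 0x5a);
        assert(memcmp(buf, want, sizeof want) == 0);
        return 0;
    }

Build and run it twice, with and without -DUSE_FAST_XOR_STREAM, and both branches are covered; wrap a timer around the same harness and you also know which one is actually faster.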

Given the existing alternatives and the reduction in their target platforms, it just seems fucked to me that LibreSSL would use anything other than the header files from OpenSSL. Sorry, anything to do with that cluster fuck is tainted. No amount of "well, you're guilty too" will dissuade me: OpenBSD, you got egg on your face, and you're aiming for more of the same by forking OpenSSL. Oh, wait... None of those SSL libs I mentioned are BSD-licensed... Ah, I see. It's fanaticism vs. security, round two, fight!

Comment Re:Missing Point. (Score 1) 403

Can't we just compile a version without EME? I mean, Stallman should have just pointed out that at least Firefox is truly free, unlike IE, Chrome and others, whilst reminding us that we can just recompile sans EME.

Most users are getting the binaries from a trusted source. Do you have 64 gigs of RAM? Because I compile my own Firefox, and that's what it takes -- most users cannot compile their own browser. Besides, I won't have the keys to make my EME plugin system work, so no, even if you wanted to compile Firefox, you won't have a copy of Firefox. IMO, Mozilla should build two versions, one with and one without the DRM, rather than having lots of folks do it themselves and waste the CPU. They could have an option pop up when the automated updater runs asking which version to use, and make it a setting for which version to install. Otherwise they'll fracture their userbase: even my grandma and elderly neighbor are asking me if they should use something other than Firefox to avoid the spying DRM -- Snowden has changed everything. Mozilla is going to bake a closed-source DRM plugin system into their browser and call it Firefox. That means my compiled version without said system will NOT be Firefox, and yes, I'll be prohibited from calling it that. Even if I wanted EME to work in my compiled version, the DRM decryption modules won't recognize my build as valid. So the answer is: no, most folks cannot just compile Firefox, and those who do have not compiled a version of Firefox as far as Mozilla and the EME system are concerned. Previously, all forks could have interoperability if desired; EME changes this.

I've already figured out how to compile Iceweasel and apply most of my custom FF patches to the Debian source releases. I am one set of the many eyes that Mozilla will be losing with this move to adopt DRM. All of my friends and family look to me for recommendations about browsers, and will follow my lead, and their friends follow their lead. Apparently Mozilla has forgotten how Firefox was even able to gain traction in the first place. Why would they switch away, you ask? Because hackers hate DRM, and they love a challenge. Now, in addition to patching bugs, more white hats will wear gray and actively release details about how to exploit the NEW CODE (since all new code is buggy), to help ensure this move isn't just a waste of time but poison for any browser vendor that deploys this rubbish. Consider it a last-ditch attempt to win back from the dark side what used to be an end-user-loving browser developer.

Alienating users, Facebook-style ("two steps towards heinous, one step back while apologizing"), is not going to help adoption rates. Remember "Take back the web"? The (somewhat bogus, but useful) grassroots meme of the "Faster, More Secure, and Free" browser can be turned on an equally "correct" dime: "Oh no! Firefox has fallen to the spying systems by adopting the EME remote DRM code-execution back door." Then whichever fork is most successful, be it Swiftweasel, Iceweasel, etc., will be the one users associate with the new slogan, "Take Back Your Browser". Blam. Kiss the FF userbase goodbye, just like IE's once upon a time. Look, if you think for one second that folks like me can't develop a CDM that demonstrates how malicious this DRM system is, you need to seriously think again: consider that other hackers don't necessarily wear white hats.

This is yet another case of failure within the Free community; destruction without ensuring the core values are withheld.

There's no such thing as the "Free community"; this isn't a hippie commune. I'll give you the benefit of the doubt that you aren't some kind of shill and assume you mean the Free (Libre) and Open Source Software community. In any event, you're wrong: this is destruction BECAUSE the stated core values, upon which our trust was built, were not upheld. Your comment reeks of failure to understand the technology you use and the situation at hand. For similar reasons, ignorant folks are willing to accept Chrome's "snappier" connection speed: they don't understand that Chrome is not following the cert revocation protocols that check whether certs are actually valid. Instead of pushing forward with honoring OCSP stapling, they just ignore revocation. So all those certs leaked via Heartbleed? Yeah, Chrome thinks almost all of them (including some of my revoked certs) are just fine even though they're completely invalid. Google's "CRLSets" only blacklist certs from "important" targets; they'll ensure Google's services get the revocations, but who's to say yourbank.com or competing services are in their revocation list? Google doesn't consider my certs "important", so I tell folks to use a better browser, like Firefox or even IE over Chrome. You can turn on stricter hard-fail cert revocation in Firefox: Options -> Advanced -> Encryption -> Validation -> check the box "When an OCSP server connection fails, treat the certificate as invalid". How many idiots are railing on about Heartbleed while using browsers that still consider millions of those revoked certs to be valid? The same number that are saying shit like you are about EME.

Stop being a pedant about compiling shit and use your damn brain: the problem isn't that we can leave Firefox, it's that we should. The FSF even acknowledges that Mozilla is doing this reluctantly; they likely have no choice. We've only ever needed a slightly better reason than "the UI team is ignoring user input, again" to attract a critical mass of devs to a polished-up fork of Firefox and point users at it instead of Mozilla's FF, and EME is that reason. The loss of Mozilla to the dark side should be the message you take away; forks are a given, and everyone with an ounce of gray matter knows there are existing forks out there. It would have been nice for Mozilla to have stuck to their mission statement and released EME as an optional plugin. See, it would have been trivial for them to have a signed plugin that allows EME to work with Firefox, INSTALLED AT THE USER'S OPTION. This would mean deployed CDMs would simply validate the browser and plugin signatures. Mozilla's current change of stance on integrating proprietary systems creates a situation where there's no way to NOT get Firefox with EME inside. EME is the ActiveX of DRM: "What can possibly go wrong?" This move is in direct opposition to Mozilla's own mission statement. It would be prudent to consider Mozilla part of the surveillance infrastructure, as RMS likely does, and everyone who talks about browsers with me from here on out will -- I'm not 100% sure about anything, but this breaches my trust threshold, so why risk it when there is an alternative?

As usual, despite the heated discussions there's a dearth of explanation on the web about what EME actually is. To help dispel the ignorance: Encrypted Media Extensions is a client-side system which coordinates with closed-source Content Decryption Modules (CDMs) that work with the browser to display encrypted content. The content producers consider the browser "untrusted" and have pushed to move descrambling of the content out of the browser and into the proprietary CDM's Digital Restriction Management. FYI, the HTML5 EME DRM API allows CDMs to be part of the browser bundle (which makes no sense, since the browser is "untrusted"), or a component of the OS, or hardware/firmware like the TPM or TrustZone (which means users without an "approved" OS wouldn't get to view the content), or the CDM can be downloaded separately (which means a closed-source blob running amok in your system, a la ActiveX). CDMs can opt to run on "approved" browsers only, since a user-compiled browser could snag the keys and log the media to disk.

The CDM may do anything it wants, including but not limited to: validating the fingerprint of the running browser (hope they update as fast as FF does ;-), checking whether unapproved screen-grabber software is installed, or decrypting and passing back buffers of still-encoded content for the browser to decode and display. It may handle both decryption and decoding and pass back raw frames for the browser to paint, or decode and hand pixels to the OS, bypassing the browser -- YAY SECURITY -- or even bypass the OS by working with the GPU hardware directly to decrypt and decode the data. If this shit doesn't send up every red flag in your inventory, get a checkup from the neck up. Aside: it would be nice if GPUs supported FLOSS video standards so the browser could just offload, oh, say, WebM itself, eh?

This HTML5 DRM scheme does nothing to address the fact that anything I can see or hear on my computer I can capture in near-perfect quality, either via my own software, external screen/audio capture cables, or a 4K digital camera pointed at the screen. Thus it solves nothing and only introduces what will surely be horrible user experiences and the inability to know exactly what your computer is doing. For more reasons why DRM is bogus, simply ask PC gamers.

Since the FBI prioritizes copyright violations higher than missing persons, you can see why folks would be angry that Mozilla would switch their stance now when they previously firmly rejected H.264 for far weaker reasons. The whole HTML5 video debacle is particularly suspicious, since the browser could just ask for an OS- or plugin-supplied <video> element and use whatever codecs the user has available, leaving it up to the user or OS vendor to install optional DRM-laden codec packs... The only "advantage" the EME protocol offers over such embedded elements is that EME allows built-in, ActiveX-like DRM whereby servers send you non-sandboxed code to run on your system.

As always, look for technologies to be adopted and standardized across the board AFTER working implementations become popular, if said tech might be good for users. However, if a technology is adopted across the board without that popularity and user demand first, then you are witnessing anti-capitalistic collusion to deploy something that is against citizens' best interests (see also: automobile, PC, or phone remote kill switches). If Mozilla were alone in not adopting EME, users would likely flock to that platform in a post-Snowden world, making EME pointless. Think about it: isn't it odd that none of the mainstream browser vendors are even holding off on EME just to differentiate themselves, in case everyone hates the DRM, so they could capitalize on the situation and gain users? I mean, they could quickly deploy EME after the fact without much fuss if it became necessary. Nope? No one hedging their bets, eh? Oh, and this game isn't rigged? Yeah, right. They needed near-universal browser support to force this non-feature on users.

Given the stance of all major browser vendors across the board, the fact that there is no EME-enabled content out in the wild yet to gauge adoption or user demand against, users' known reluctance to switch to a forked browser, the trusting rapport Mozilla has built with users (lending much-needed weight to EME if they adopt it), and the fact that Mozilla is jeopardizing that trust by reversing its prior stances: it is far more than merely questionable that the pressure to adopt EME has legitimate sources -- much in the same way that "theft prevention" is not a legitimate explanation for MANDATORY hardware kill switches. Either let the market decide whether it wants these "features", rather than forcing adoption by legislation or collusion, or else the "features" should be considered harmful and rejected. In this light, it's hard to rationalize any reason for EME to exist except to compromise end users' systems.

If folks like J. Random Hacker or the FSF don't send a strong response to such behavior, then other user-friendly projects may not see any downside in allowing their users to be exploited by antagonistic 3rd parties.

Comment Re:I really like this "Mental Organelle" model. (Score 2) 189

This is how the brain works.

That's somewhat how the brain works, but it's not how consciousness works. I have sleep paralysis, so I get to experience how the brain actually works while I remain conscious: every night as my body and brain attempt to drift off to sleep, and every morning when I wake up and my brain and body remain largely asleep. Random neuron firing in the brain triggers "hallucinations" of every kind imaginable, and more: from audio and visual to sensations of movement to even strange ideas and thoughts. I even see neuron cascades directly in my visual cortex as flashes in the shape of roots or branches. These typically correspond to a rushing or pulsing sensation and/or sound. The auditory hallucinations are obvious too, since they typically interrupt my tinnitus. The seemingly random hallucinations are primarily isolated events which accelerate over time, and as I allow my executive consciousness to dissolve, the triggered hallucinations run "longer" -- they trigger somewhat related impulses in a series of "reminding" cascades that start to blend together, and that's what a dream is. The themes of dreams seem to be guided by the activation of primitive impulses and the remains of prior short-term experiences, but can just as easily be nonsensical. The point is that there is no real executive system governing the brain's "program"; it's a series of emergent events that depends on the chemistry of the brain, but cognition is not explicitly scheduled -- else I wouldn't be able to offset my sleep cycle and have it mix with wakefulness.

The primary insight I've gained is into the linkages of thoughts, along with an improved ability to retain memories of tangentially related dream events. While wakefully dreaming, it takes much practice to discern which thoughts are voluntary and which are triggered by the brain's sleep pattern. It occurs to me, from experience with meditation, that the impulsive activation of thoughts is very similar to those randomly triggered during sleep, except that they are triggered by existing cascades, not via an overseer process which consumes CPU power to leverage predictive powers gained through experience. That is the great thing about our neural networks: the past load of training causes structural changes that can be leveraged without expending much CPU power to solve the problem again later -- we "reuse our code", so to speak. In the human brain there is no "process load" or "contention"; the whole thing is extremely distributed. The notion of a "thought process" is entirely artificial -- a form of confirmation bias. Each neuron recharges its (ATP) energy, produces and recycles chemical messages, and fires them off. An unused neuron gathers its resources and then waits to be triggered, consuming no cognitive load by merely existing -- there's no scheduler to speak of. One thing that AI researchers may not have considered is that neurons can fire off without direct connections triggering them, simply due to eddy currents or abundant chemical energy. Indeed, a brain with inputs and thoughts "normalized" becomes hypersensitive to neuron activation. I can "see" a sort of "flashover" from my ears when a sudden noise occurs during meditation, due to tangential activity activating the primed neurons -- a form of synesthesia. What logical "cognitive process" would be programmed to allow such things? There is none. The program doesn't exist. The process is an imaginary construct.

Consciousness works by the same sort of filter upon entropy that allows matter to arise from quantum foam, or complex molecular chemistry to arise from an energy gradient. The mind is awash in thought patterns, each bubbling beneath the surface with varying strength. There is no single "successful" thought stream that becomes the conscious thought; there is no singular "train of thought" at any one time. In fact, you can tell just by typing that the brain is capable of multi-processing. Motor-control responses can become tangentially linked to linguistic outputs such that finger patterns produce words without executive thought. In the same way, gauging speed and distance can trigger reflexive "muscle memory" (strongly connected cognitive pathways) that causes the foot to depress the clutch and the hand to shift the gear whilst considering traffic patterns and talking to the passenger. We can will our minds to focus attention, but note that this is more a guiding of the direction of thought than an executive order to cogitate. Our concentration can easily be whisked away by distractions because those inactive pathways become primed and ready to fire upon any fleeting stimulus. To demonstrate: with the tiniest bit of practice, one can even hum a tune while reading. Who hasn't had their mind drift onto other thoughts while reading a book -- even while the words are being "said" in one's head, another thought "process" coexists? No, many concurrent cascades continue constantly; the "loudest" one, gaining the most energy / relevance, is the one you notice and fallaciously label "consciousness".

1) a predictive process make a prediction based on incoming data ...

2) a predictive process investing a lot of resources into one prediction

It is not the neuron but the feedback loop which is the core component of cognition; that is what allows an impulse to converge upon the desired result. When a feedback loop is activated temporally adjacent to another neuron set, new axons gravitate toward those regions of activity over time. Thus there is no effort in "prediction" per se; it is merely that one activity automatically sparks a pre-trained cognitive loop into activity -- we call this "learning". There are no additional processes to be spawned. Imagine a capacitor and resistor in series as a single inorganic neuron, now imagine many of these connected in parallel, energized and firing -- and imagine some process, perhaps heat, weakening certain resistors over time. Where is the damn "CPU process" here? We emulate a similar setup via CPUs and artificial digital neural networks, but the digital neural network does not have individual processes running around inside it either. The updates happen as fast as they can in parallel, and a single "thought" may span several threads and many cycles in its propagation through the n.net (due to internal feedback loops). We don't PROGRAM a machine intelligence, we TRAIN it to solve a problem space, and some of us clever cyberneticians can even use delta compression to identify relevant segments and multiply different meshes trained on separate problems into a single mind, so our artificial n.nets can solve multiple problems at once without having been trained on those problems concurrently. In this new configuration it takes the same CPU power to solve one problem, the other, or both at once.
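If the "no scheduler" bit sounds hand-wavy, here's a toy in C (purely illustrative, not a model of real neurons): a whole array of leaky RC-ish units updated in one dumb loop, with nothing anywhere that resembles a per-thought process:

    /* Toy leaky "neuron" array: each unit is just a charge value that leaks,
       picks up input from whoever fired last step, and fires past a
       threshold. One update pass over plain data; no scheduler, no
       per-neuron process. All parameters are arbitrary. */
    #include <stdio.h>

    #define N 8

    static double charge[N];
    static int    fired[N];
    static double weight[N][N];   /* weight[i][j]: how hard j's firing drives i */

    static void step(double leak, double threshold)
    {
        int next_fired[N] = { 0 };
        for (int i = 0; i < N; i++) {
            charge[i] *= leak;                        /* RC-style decay */
            for (int j = 0; j < N; j++)
                if (fired[j]) charge[i] += weight[i][j];
            if (charge[i] > threshold) {              /* fire and reset */
                next_fired[i] = 1;
                charge[i] = 0.0;
            }
        }
        for (int i = 0; i < N; i++) fired[i] = next_fired[i];
    }

    int main(void)
    {
        /* A crude ring: each unit excites its neighbour, so one external
           kick produces a self-sustaining cascade -- activity with no
           overseer anywhere. */
        for (int i = 0; i < N; i++) weight[(i + 1) % N][i] = 1.2;
        fired[0] = 1;                                 /* the external "stimulus" */
        for (int t = 0; t < 16; t++) {
            step(0.5, 1.0);
            for (int i = 0; i < N; i++) putchar(fired[i] ? '*' : '.');
            putchar('\n');
        }
        return 0;
    }

The "thought" here is the travelling pulse you see in the output; there is no object in the program that corresponds to it, which is exactly the point.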

Comment Re:I wank, therefore nothing much (Score 1) 189

Roger Penrose believes that human creativity is rooted at quantum effects,

Hahah, LOL, wat? Who gives a fuck what some moron believes. There are folks who believe extra-terrestrial ghosts called Body Thetans cause illness; that doesn't make shit true. Life is largely a thermodynamic mechano-molecular process. Any teenager who's been through biology class can see what a fool this hack is. I just love how most Philosophers are completely fucking ignorant about everything.

Comment Re:The indoctrination of a new generation (Score 1) 111

If you raise a child in a bad situation, and that's all they really know, then they adapt to that situation; it becomes 'normal' to them, and they'll actually become uncomfortable if you try to 'improve' their situation, actually seeking the conditions they're adapted to.

Ah, the social blank slate theory. It's rubbish, you know. Just imagine: a kid hit in the head twice every day. Oh, they'll think that's normal, and they'll miss getting the pain to the noggin if you stop! The genetic program that designed their prime cognitive pathways has no effect on the pain-reception and aggression circuits? You don't think once that kid's big enough they'll wallop whoever's trying to bash them in the skull? What kind of idiot are you? Tell me: why don't we have to teach babies to suckle right out of the womb? I mean, we could give them cups to drink out of then, eh? No. Instincts exist. Even in humans. Some shit isn't overridable. The kids who get spied on will have an aversion to being spied on.

Hell, I used to stay up all night writing code in middle school. I'd have already done my mathematics class work and homework days or weeks before -- having needed the knowledge years earlier to write a bit of software, I'd blaze through the assignments once I figured out the teacher's curriculum. Then I'd doodle or sleep in class. Sometimes I'd be half awake and let out a snore just to catch the teacher's eye. From my actions she predicted that I didn't know what was going on in class. So at first she was always very surprised when I perfectly solved the problems she posed on the blackboard without ever paying attention to her. I'm sure a predictive program would have concluded I had problems with mathematics instead of just being bored to tears.

Oh sure, just ignore that every new generation creates its own music and clothing styles despite being "socialized" to accept their parents' dress and cultural tastes. Just ignore that strong independence impulse that curiously strikes right around breeding age, when hormones are going wild. Yes, just assume that kids will accept their programming and become good little drones.

No, that's not how it works. People can adapt to COPE with situations, but that doesn't mean they FEEL (a primal intuition) that their experience is normal. Humans are tool-using creatures. They'll adopt new technologies that are useful even if you embed undesired features in them. However, as history hath shewn, the children will grow up and try to change the things they think are fucked up in society. Whether the entrenched establishment overpowers the counter-cultures is an altogether different matter, but what we do know is that enough of that friction destroys empires.

Comment Grow up? Bitch, please. (Score 2) 232

"Grow Up"? Seriously? Some of us started coding at single digit ages. I "grew up" at age 17, when I was homeless and fending for myself on the streets. Patronize someone else, moron, you don't even know what life is. Ever seen someone's skull stomped in? You learn real quick what's actually important in life once yours is on the line. I learned real quick to have a plan B: Always have a contingency plan. Idiots without one are not, "grown up."

I've forgotten more languages than most have learned, but I'd be fine with folks not being considered programmer material at age 40 if they would hire from within for management positions. Instead of employing middle-management drones with unrelated "business" (Secretary++) degrees, give the folks with actual hands-on experience the job of managing people doing the job they actually know how to do. Face it: those HR goons are morons, they can't tell good from evil -- or else explain how the odd Napoleon complexes and micro-dictators in management even got there. If HR weren't dumb as rocks they'd require demonstrations of skill -- a coding test -- not accreditations: degree mills exist, fools, and this is especially true overseas. Ah, but that's getting to the real issue: skill sets aren't what's really important to upper management... TFA's author isn't as "grown up" as they think.

The new platforms will keep coming, but the solutions will largely be the same. Now I can undercut the competition by barging onto any new platform with my whole codebase faster than the other guy can tell you why the new language is "More Expressive". I just have to implement a new "platform runtime" for my meta-compiler, and then I can check that platform off as a capability and deploy ALL of my existing solutions on it, since they're written in an even higher-level language and compile down into other languages. Sometimes this means implementing features the target language doesn't have. If I need a coroutine in C: when returning to the trampoline, record the exit point; when calling into the coroutine, specify the entry point to resume at. I generate a switch with GOTOs to resume from the next point in the operations (GOTO is very valuable, only idiots say otherwise; look at any VM). Lambda-lifting mutable persistent state out of the function scope has the added benefit of thread safety. Since I treat comments as first-class lexical tokens, the compiler-compiler's output is fully readable, commented code in the target output language, following whatever style guide you want. (LOL @ brace placement arguments, what noobs.)
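Here's roughly the shape that generated C takes, boiled down by hand into a toy (in the style of protothreads; the macros and names are made up for illustration, not my compiler's actual output):

    /* Coroutine-in-C sketch: the "entry point" is an integer stored between
       calls, and a switch full of case labels resumes execution there.
       State that must survive a yield is lambda-lifted out of the stack
       frame into the coroutine struct. */
    #include <stdio.h>

    typedef struct {
        int resume_point;   /* where to re-enter on the next call */
        int i;              /* lifted loop state, survives across yields */
    } coro_t;

    #define CORO_BEGIN(c)    switch ((c)->resume_point) { case 0:
    #define CORO_YIELD(c, v) do { (c)->resume_point = __LINE__; return (v); \
                                  case __LINE__:; } while (0)
    #define CORO_END(c)      } (c)->resume_point = 0; return -1

    /* Yields 0, 1, ..., n-1, one value per call, then -1 when finished. */
    static int count_up_to(coro_t *c, int n)
    {
        CORO_BEGIN(c);
        for (c->i = 0; c->i < n; c->i++)
            CORO_YIELD(c, c->i);
        CORO_END(c);
    }

    int main(void)
    {
        coro_t c = { 0, 0 };
        int v;
        while ((v = count_up_to(&c, 5)) >= 0)   /* the "trampoline" */
            printf("%d\n", v);                  /* prints 0 1 2 3 4 */
        return 0;
    }

A generator emitting this doesn't need the macros, of course; it just spits out the switch, the case labels, and the lifted state struct directly.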

See, experienced coders understand languages so well they aren't even languages to us, they're just problem-solving APIs: the problem space is independent of the implementation's solution space. When we pick up new languages or platforms we're asking, "How does this language let me solve problem $X?", but more importantly our experience lets us identify which problems the platform lends itself to solving. Just because a new platform comes out doesn't mean it's more capable of solving every problem. Do this long enough and you'll get tired of re-inventing all your wheels on each new platform and just create a meta-compiler, as I've done.

Fortunately I've always crossed off (and initialed) that employment contract paragraph that said everything I would create (even off the clock) would belong to the company: "I don't watch TV. I have several hobby projects I do instead and they need to remain mine. If you want me to give up my hobbies while working here you're going to have to pay me a lot more. Would you sign a contract to work somewhere that said you couldn't ever watch TV?" Protip: Most places have another employment contract without that clause, just tell them you make indie games or have a robotics / embedded systems project, contribute to open source in your spare time, etc. Make your hobby profitable. That way you can always have a plan B, and you'll have more leverage in any salary negotiations: "Well, that's less than I expected. I don't feel the figures speak to the value I provide. This isn't the only place I can work, you see..."

Your life may depend on having a backup plan if shit really hits the fan. You have a backup, right? If you do, then you can leverage it (no, insurance isn't a backup). If folks learned to keep contingency plans, then this story wouldn't even need to exist. Plan A threatening to shit the rotor? Switch to plan B.

Some new languages (or versions thereof) have features I can leverage directly and don't have to implement in the platform runtime. I can typically pick up a new platform in two weeks and have ALL of my codebase deployable. See, I've got experience. I learned to see languages the way those young dumb programmers treat machine instruction sets: that's what languages are, sets of instructions. You wouldn't write your code in assembly, so why would you do the same thing by committing your solutions to one language at a time? Where most see lexical rules and syntax, I see idiom opcodes.

Like I said, I grew up at a young age. I can see through some very thick bullshit. We all know the real issue isn't that coders don't know the new languages. Any idiot can see that the real issue is that around 40 you start having a family and your insurance premiums go up. So, since the tech companies don't really care about quality code, they look for any reason under the sun to get rid of you and replace you with a lower-paid code monkey. That means ridiculous qualification requirements, such as degrees in shit that takes two weeks of RTFM and piddling to become fluent in, while some greenhorn moron needed a college course or certification and still doesn't grok the real-world use cases. These are Von Neumann architectures; all the languages are just different descriptions of the same old shit. This is why the "lack of qualified STEM workers" meme is bogus. It's also why they harp on about more men entering STEM than women -- they argue that more lower-paid H1B employees are the solution. They have seminars on how to minimally comply with government requirements for job listings while expressly NOT finding "qualified" workers, so they can legally outsource.

Think about it: new languages and platforms boast that you can do some set of functionality in fewer SLOC. Well, if you already have said functionality... then migrating to the new platform to re-invent the wheel is always a net loss. There will never be one uniform platform API, so just as you abstract the implementation from the chipset with languages, abstract the solution from its implementation. Since maintaining a translator from each language into each other language is O(n^2), use a meta-language and compile down into the targets for O(n). You do that, and the whole "OMAGHERD! Rubies n Rails!" bullshit can be dismissed: you can bench your solution on different platforms... ugh, why are we even having this discussion? It's 2014, not 1999; get with the program, folks.
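To spell the arithmetic out (my own figures, obviously):

    direct translators between every pair of n languages:  n(n-1) = O(n^2)
    one meta-language, one back-end per target:             n      = O(n)

At n = 10 that's 90 translators to maintain versus 10 back-ends, and the gap only gets worse as languages keep arriving.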

Comment Re: Humans Can Not (Score 1) 165

Killer robots allow to solve conflicts without sacrifice.

If you think they won't be turned against you, Educate Yourself. Anti-activism is really the only reason to use automated drones: they can be programmed never to disobey orders, even orders to murder friendly people. Seriously, humans are cheaper, more plentiful, more versatile, etc. Energy resupply demands must be met any way you look at it. Unmanned drones with human operators just allow one person to do more killing -- the operator takes the lead drone of the pack; it gets killed, they switch to the next unharmed unit. This way a majority of soldiers don't have to be convinced that what they're doing is right.

Any robot that can help a wounded person could easily be re-purposed to fire weaponry instead of administering first aid -- especially if it can do injections.

Comment Just give me the chassis I'll get the 7.5 million. (Score 2) 165

The chassis is the hard part, not the ethics. The ethics are dead simple. This doesn't even require a neural net. Weighted decision trees are so stupidly easy to program that we already use them for AIs in video games.

To build the AI I'll just train OpenCV to pattern-match wounded soldiers in a pixel field. Weight "help wounded" above "navigate to next waypoint", aaaaand, done. You can even have a "top priority" version of each command in case you need it to ignore the wounded to deliver evacuation orders, or whatever: "navigate to next waypoint, at any cost". Protip: this is why you should be against unmanned robotics (drones): we already have the tech to replace the human pilots, and machine ethics circuits can be overridden. Soldiers will not typically massacre their own people, but an automated drone AI will. Even if you could impart human-level sentience to these machines, there's no way to prevent your overlords from inserting a dumb fallback mode with instructions like: Kill All Humans. I call it "Red Dress Syndrome" after the girl in the red dress in The Matrix.

We've been doing "ethics" like this for decades. Ethics are just a special case of weighted priority systems. That's not even remotely difficult. What's difficult is getting the AI to identify entity patterns on its own, learn what actions are appropriate, and come up with its own prioritized plan of action. Following orders is a solved problem, even with contingency logic. I hate to say it, but folks sound like idiots when they discuss machine intelligence nowadays. Actually, that's a lie. I love pointing out when humans are blithering idiots.
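And since folks keep acting like this part is mystical, here's how boring a weighted-priority selector actually is (all names, weights, and the struct layout invented for illustration):

    /* Toy "ethics" module: every candidate behavior gets a standing weight,
       observed conditions add bonuses, an order flagged "at any cost" drowns
       everything else out, and the robot does whatever scores highest. */
    #include <stdio.h>

    typedef struct {
        const char *name;
        double base_weight;       /* standing priority */
        double situational_bonus; /* e.g. wounded soldier detected in view */
        int    override;          /* "at any cost" flag on the current order */
    } behavior_t;

    static const behavior_t *choose(const behavior_t *b, int n)
    {
        const behavior_t *best = NULL;
        double best_score = 0.0;
        for (int i = 0; i < n; i++) {
            double score = b[i].base_weight + b[i].situational_bonus
                         + (b[i].override ? 1000.0 : 0.0);
            if (best == NULL || score > best_score) {
                best = &b[i];
                best_score = score;
            }
        }
        return best;
    }

    int main(void)
    {
        behavior_t options[] = {
            { "navigate to next waypoint", 1.0, 0.0, 0 },
            { "help wounded",              2.0, 5.0, 0 },  /* wounded detected */
            { "deliver evacuation orders", 1.5, 0.0, 0 },  /* set override to 1
                                                              for "at any cost" */
        };
        printf("doing: %s\n", choose(options, 3)->name);    /* -> help wounded */
        return 0;
    }

Flip the override flag on the evacuation order and the wounded get ignored, which is the whole problem with this stuff being trivially easy.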
