
Comment Re:Problem with releasing an underpowered console (Score 4, Informative) 117

The PS3 plays a lot of games at 1080p native...

There is nothing wrong with the PS4/XB1, other than for $400/$500, they don't really offer anything new.

PS1 was the first major 3D console, it was a massive improvement over the SNES.

The PS2 offered DVD, vastly upgraded graphics, etc.

The PS3 offered Blu-Ray, 1080p, and the first serious online console (from Sony).

The PS4? Meh, it is a faster PS3, but otherwise, it doesn't offer anything new.

Um...The PS3 renders very few games at 1080p native. Maybe a dozen titles out of the entire catalog.

Don't forget the other dimension. 1080 is only 360 more than 720, but 1920 is 640 more than 1280. IMO, that's the dimension we should be talking about, since it's more significant. However, per-pixel calculation load scales with area, not half the perimeter. So if we look at total pixels: 1280x720 = 921,600 pixels and 1920x1080 = 2,073,600, a difference of 1,152,000. A lot of people don't understand that going from 720p to 1080p is MORE THAN TWICE the pixels; in pixel shader costs you might as well be rendering a full secondary screen.
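
Sanity-checking that arithmetic with a throwaway C snippet (nothing assumed here beyond the two resolutions):

```c
#include <stdio.h>

int main(void) {
    /* Total pixels at each resolution */
    const long px_720  = 1280L * 720;   /*   921,600 */
    const long px_1080 = 1920L * 1080;  /* 2,073,600 */

    printf("720p:  %ld pixels\n", px_720);
    printf("1080p: %ld pixels\n", px_1080);
    printf("extra pixels: %ld\n", px_1080 - px_720);     /* 1,152,000 */
    printf("ratio: %.2fx\n", (double)px_1080 / px_720);  /* 2.25x */
    return 0;
}
```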

Now, that's not to say the total cost of rendering will absolutely increase more than twofold. Full-screen effects like bloom or HDR are going to come in at about twice the cost. Interpolating a texture coordinate to look up pixel values is cheap compared to almost any shader program, even something like cube-map specular highlights/reflections, bump mapping (I prefer parallax mapping), shadow mapping, etc. However, the complexity of geometry calculations can be the same at both resolutions. In a ported / cross-platform game the geometry assets are rarely changed (too expensive in terms of re-rigging all the animations, testing, etc.), so given slightly better hardware, a game at the same resolution will mostly differ by adding more particle effects, increased draw distance, maybe even a few whole extra pixel shaders (perhaps the water looks way more realistic, or flesh looks fleshier, blood is bloodier, reflections are more realistic, etc.)

Jumping up to 1080p makes your pixel shaders cost a lot more frame time. Developing for 1080p vs 720p would optimally mean completely reworking the graphics, assets, and shaders to adapt to the higher shader cost -- maybe cut down on pixel shader effects and add more detailed geometry. I encounter folks who think "1080 isn't 'next gen', 4K would have been next gen" -- No, that's ridiculous. 1080p is "next gen resolution", but the new consoles are barely capable of it while carrying a significant increase in shader complexity over last gen, and we're seeing diminishing returns on increasing the resolution anyway. So I wouldn't call the consoles quite 'next-gen' in all areas. IMO, next-gen console graphics would handle significantly more shaders while running everything smoothly at 1080p, just like the above-average gaming PC I got my younger brother for his birthday, which kicks both the PS4's and Xbone's asses on those fronts. That would be the sort of leap in graphics we saw between the PS1 and PS2, or the Xbox and the 360. 4K would be a generation beyond 'next-gen' because of the way shader cost must scale with resolution.

One of the main advances this new console generation brings is in the way memory is managed. Most people don't even understand this, including many gamedevs. Traditionally we have had to keep two copies of everything in RAM: one texture loaded from storage into main memory, and another copy stored on the GPU. Same goes for geometry, and sometimes even a third, lower-detail version of the geometry is kept in RAM for the physics engine to work on. The copy in main RAM is kept ready to shove down the GPU pipeline, and the resource manager tracks which assets can be retired and which will be needed, to prevent cache misses. That's a HUGE cost in total RAM. Traditionally this bus bandwidth has been a prime limitation on interactivity. Shader programs exist because we couldn't manipulate video RAM directly (they were the first step on the return to software-rasterizer days, where the physics, logic, and graphics could all interact freely). Shoving updates to the GPU is costly, but reading any data back from the GPU is insanely expensive. With a shared memory architecture we don't have to keep that extra copy of the assets, so even without an increase in CPU/GPU speed, fully shared memory by itself would practically double the amount of geometry and detail the GPU could handle. The GPU can directly use what's in memory and the CPU can manipulate some GPU memory directly. It means we can compute stuff on the GPU and then readily use it to influence game logic, or vice versa, without paying a heavy penalty in frame time. The advance in heterogeneous computing should be amazing, if anyone knew what to do with it.
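
To make the "two copies of everything" point concrete, here's a toy C sketch of the memory accounting only; the asset names and sizes are invented, and this is bookkeeping, not a real graphics API:

```c
#include <stdio.h>

/* Hypothetical asset sizes in megabytes, purely for illustration. */
static const struct { const char *name; double mb; } assets[] = {
    { "character textures",   512.0 },
    { "environment textures", 1024.0 },
    { "geometry buffers",     256.0 },
};

int main(void) {
    double total = 0.0;
    for (size_t i = 0; i < sizeof assets / sizeof assets[0]; i++)
        total += assets[i].mb;

    /* Split memory: one copy in VRAM for rendering, plus a mirror in
       system RAM so the resource manager can re-upload without hitting disk. */
    printf("split memory footprint:   %.0f MB\n", total * 2.0);

    /* Unified memory: a single allocation both CPU and GPU can address. */
    printf("unified memory footprint: %.0f MB\n", total);
    return 0;
}
```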

Ultimately I'd like to put the whole damn game in the GPU. It's not too hard on traditional memory-model hardware (well, it's insane but not impossible): you keep all the gamestate and logic in buffers on the GPU and bounce between two state buffer objects, using shaders to compute physics and update one buffer as input for the next physics and render pass, passing in a few vectors to the programs for control / input. I've even done this with render-to-texture, but debugging VMs made of rainbow-colored noise is a bitch. The problem is that controller input, drives, and the NIC aren't available to the GPU directly, so I can't really make a networked game that streams assets from storage completely on the GPU alone; there has to be an interface, and that means the CPU feeding data in and reading data out across the bus, which is slow for any moderate amount of state I'd want to sync.

At least with everything GPU-bound I can make particle physics interact with not just static geometry, but dynamic geometry, AND even game logic: I can have each fire particle spawn more fire emitters if it touches a burnable thing, right on the GPU, and make that fire damage the players and dynamic entities; I can even have enemy AI reacting to the state updates without a round trip to the CPU if their logic runs completely on the GPU... With CPU-side logic that's not possible; the traditional route of read-back is too slow, so we have particles going through walls, and we use something like "tracer rounds" -- a few particles (if any) on the CPU -- to interact with the physics and game logic.

With the shared-memory architecture more of this becomes possible. The GPU can do calculations on memory that the CPU can read and apply to game logic without the bus bottleneck; the CPU can change some memory to provide input to the GPU without shoving it across a bus. The XBone and PS4 stand to bring a whole new type of interaction to games, but it will require a whole new type of engine to leverage the new memory model. It may even require new types of game. "New memory architecture! New types of games are possible!" Compared with GP: "Meh, it is a faster PS3, but otherwise it doesn't offer anything new." . . . wat?
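
For anyone who hasn't seen it, the "bounce between two state buffers" trick looks roughly like this; a minimal CPU-side sketch of the ping-pong pattern, with an invented particle struct and update rule (on the consoles the two buffers would be GPU buffer objects or textures and the update loop a shader pass):

```c
#include <stdio.h>

#define N 4  /* tiny particle count for the example */

typedef struct { float pos, vel; } Particle;

/* One "shader pass": read the previous state, write the next state.
   On a GPU this body would run once per particle, in parallel. */
static void update(const Particle *prev, Particle *next, float dt) {
    for (int i = 0; i < N; i++) {
        next[i].vel = prev[i].vel - 9.8f * dt;        /* gravity */
        next[i].pos = prev[i].pos + next[i].vel * dt; /* integrate */
    }
}

int main(void) {
    Particle buf[2][N] = {{{10,0},{11,0},{12,0},{13,0}}};
    int cur = 0;

    for (int frame = 0; frame < 3; frame++) {
        update(buf[cur], buf[1 - cur], 1.0f / 60.0f);
        cur = 1 - cur;  /* swap: this frame's output is next frame's input */
        printf("frame %d: particle 0 at %.3f\n", frame, buf[cur][0].pos);
    }
    return 0;
}
```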

As a cyberneticist, all these folks wanking over graphics make me cry. The AI team is allowed 1%, maybe 2%, of the budget. All those parallel FLOPS! And they're just going to PISS THEM AWAY instead of putting in actual machine intelligence that can yield more dynamism, or even learn and adapt as the game is played? You return to town and the lady pushing the wheelbarrow is pushing that SAME wheelbarrow the same way. The guy chopping wood just keeps chopping that wood forever: beat the boss, come back, still chopping that damn wood! WHAT WAS THE POINT OF WINNING? The games are all lying to you! They tell you, "Hero! Come and change the world!", and now that you've won, proceed to game over. Where's the bloody change!? Everything just stays pretty much the same!? Start it up again, you get the same game world? Game "AI" has long been a joke; it's nothing like actual machine learning. It used to be the mark of a noob gamedev to claim their AI would learn using neural networks, and we'd all just laugh or nod our heads knowingly -- but I can actually do that now, for real, on the current and this new generation of hardware... if the AI team is allowed the budget.

A game is not graphics. A game is primarily rules of interaction; without them you have a movie. Today's AAA games are closer to being movies than games. Look at board games or card games like Magic the Gathering -- it's a basic set of rules plus cards that add a massive variety of completely new rules to the game mechanics, so the game is different every time you play. I'm not saying make card games. I'm saying that mechanics (interaction between players, the simulation, and the rules) are what a game is. Physics is a rule set for simulating, fine, you can make physics games and play within simulations, but a simulation by itself isn't really a game; at the very least a world's geometry dictates how you can interact with the physics. Weapons and some spells, item effects, etc. might futz with the physics system, but it is very rare to see a game that layers on rules dynamically during the course of play in real-time 3D the way that paper-and-dice RPGs or even simple card games do. League of Legends does a very good job of adding new abilities that have game-changing ramifications, and the dynamic is great because of it, but that's a rare example and it's still not as deep as simple card games like MtG. It's such a waste, because we have the RAM and processing power to do such things, but we're just not using it.

I love a great story, but it looks like all the big-time studios are fixated on making only these interactive movies, to the exclusion of what even makes a game a game: the interaction with various rule logic. AAA games are stagnant in my opinion; it's like I'm playing the same game with a different skin, maybe a different combination of the same tired mechanics. The asset costs and casting, scripts, etc. prevent studios from really leveraging the amazing new dynamics and logic detail that are available in this generation of hardware, let alone next-gen hardware with shared memory architectures. IMO, most AAA studios don't need truly next-gen hardware because they don't know what the fuck to do with it -- mostly because they've been using other people's engines for decades. These 'next-gen' consoles ARE next gen in terms of the game advancement they enable, even rivaling PCs in that regard, but no one is showing them off. I hope that changes. Most folks are scratching their heads and asking, "How do I push more pixels with all this low latency RAM?" and forgetting that pixels make movie effects, not games. I mean, I can run my embarrassingly parallel n.net hive on this hardware, and give every enemy and NPC its own varied personality where the interactions with and between them become more deep and nuanced than Dwarf Fortress, and the towns and scenarios and physics interactions more realistic, or whimsical, or yield cascades of chaotic complexity... but... Dem not nxtGen, cuz MUH PIXZELS!!1!!1

The enemies and NPCs in your games are fucking idiots, because "AI" and rules are what games are made of, and the AI team is starving to death while watching everyone else gorge themselves at the graphics feast. It's ridiculous. It's also pointless. So what if you can play Generic Army Shooter v42 with more realistic grass? Yeah, it's nice to have new shooters to play, but you're not getting a massive leap in gameplay. You could be protecting the guys who are rigging a building to crush the enemies as you retreat and cut off their supply lines. No, the level of dynamism in an FPS today is barely above that of a team of self-interested sharpshooters honing their bullseye ability. It's boring to me. Great, I'm awesome at shooting while running now. So fucking what. Protip: that's why adding vehicles was such a big deal in FPSs -- that was a leap in game mechanics and rules. I'm picking on FPS, but I can level the same criticism at any genre: there's little in the way of basic cooperative strategy (cooperative doesn't have to mean with other players; instead of re-spawning, why not switch between the bodies of a team, having them intuitively carry on the task you initiated once you're no longer in that body). We barely have any moderate complexity available in strategy itself, let alone the manipulation of new game rules on the fly for tactical, logistical, or psychological warfare. How many pixels does it take to cut off a supply line, or flank your enemies?

Comment Re:more pseudo science (Score 4, Insightful) 869

I suppose you can't ascertain whether the universe was created 5 seconds ago either. Fortunately the laws of physics, chemistry, thermodynamics, biology, etc. allow science to make Predictions not only about the future outcome of an event, but also about the probability of circumstances which caused observable outcomes.

If you leave your sandwich near me and come back to find a bite taken out of it, would you accept the argument, "You cannot ascertain the intake of past consumption with enough precision to absolutely blame me for eating your sandwich", or would you say I'm full of shit?

You're full of shit.

Comment Re:It seems so obvious now (Score 1) 448

land sweetheart pre-IPO deals

The thing about pre-IPO is that it means IPO is in the future. Think about IPO. Now, if you're working for investors who pay you to analyze investment risk, then wouldn't having Rice on the board factor into the Risk category pretty heavily? One fucked up privacy/advertising foobar influenced by this spy-happy nutter on the board could easily end the company. It's not like everyone and their mother isn't competing in cloud storage now.

Furthermore, in a post-Snowden world the appointment of Rice doesn't reflect well on the decision-making capability of an Internet-enabled service company or its CEO. That gets tallied right away as a mark against the IPO valuation; even if it was a smart move for connections and she was out before the IPO, it's not a smart move for the owners or future shareholders. Since Dropbox proved they're not capable of figuring out that corporate decisions affect consumer perception of their image, I wouldn't invest a dime at IPO even if I had no other reason not to -- like their past deception over user data privacy (there is none; the encryption is for transport, but they can see what's stored).

With distributed solutions that have actual security becoming common, it's only a matter of time before someone makes a slick interface for Freenet and puts solutions like Dropbox out of business. The looming IPO is essentially the DB owners cashing in on their doomed business, and their only market value will be in short-term speculation on their stock price. I see this retarded Rice appointment as a poison pill to ensure the IPO goes through without anyone buying them -- you'd have to be a fool to try buying them now.

Comment Agriculture's Holy Grail: Open Source Food! (Score 1) 116

If you want me to eat something, you have to tell me exactly what it is and how it was grown; if it's something from the animal kingdom then I want to know what you're feeding them and how they're raised. We require ingredients lists on our other food products too. Before you cook shrimp or prawn you have to remove their "sand vein", AKA their digestive tract, AKA their shit tube -- guess what's in there? What they last ate. Some of that shit gets into what I eat. Now their job is to convince me that none of the "marine micro-organisms" in Novaq are harmful, and that they're free of things like, say, marine flesh-eating bacteria...

All the food I eat I've grown myself, or gotten from the farmer's market from local farmers whose farms I have visited, or at the very least it has all of its ingredients listed. I only have one life, and I should have the information available to make an informed decision about what I fuel myself with, and the cost to the environment that I am a part of. That information includes how and where things are fished, hunted, farmed, etc. This extends to other purchases too. E.g., I'd only buy lab-grown diamonds, to ensure I'm not supporting the blood-diamond trade. Electronics are often made in shitty conditions too. Just like it was unfortunate but necessary to use proprietary Unixes to make GNU/Linux, it is unfortunate that I must purchase hardware made under pitiful working conditions. When I do so I buy the fastest and most upgradeable hardware available, so as to reduce how often I have to buy hardware at all. Retired hardware goes into the server rack or my home-grown cloud cluster that serves all my AV storage, display, and streaming needs. What is decommissioned gets recycled, just like all the packaging I buy. I do the same with food waste via a compost pile for my own garden.

It's more expensive to eat free-range chickens, which keep the bugs out of the pesticide-free garden, but they produce tastier eggs and taste better themselves (yes, I've done double-blind taste tests, For Science!). It's usually more expensive, but sometimes cheaper, to go in with a few friends or family on beef from a mobile butcher and have it cut however we like from a cow of our choice at a local farm. I understand that not everyone can afford to eat the way I do. However, if I can afford to eat better or healthier, or in a way that enriches the local community or ecosystem, then I do so.

I don't eat pesticide or herbicide. It is not necessary to do so. Contrary to popular belief, these poisons have not been tested for safety on animals, humans, or the ecosystem in the form actually used. Seriously: the chemicals they test on animals and humans are then added to other "stabilizing" or "inactive" chemicals prior to use in the field, the end result does change the properties of the pesticides and herbicides -- they become more deadly -- and that end result has not been tested on animals or humans. I also don't take drugs that have been on the market for less than 10 years (and thus have had 20-25 years of testing). Did you believe the tobacco farming corporations when they valued profit over people and said smoking is good for you, or when they said it wasn't harmful, for decades? Why would you believe chemical-making corporations then? I don't eat plants covered in poison (or that produce poison internally that kills critters we need for our ecosystem), I don't eat meat that eats such poison or that is sick or raised on feed that is a "closely guarded secret", and I don't feed my family milk that has growth hormones either.

Did you know you can leave seeds in the sun to accelerate mutations, then select from the test crops to produce better yield while preserving genetic diversity, rather than use a corporation's monoculture which nature simply adapts to? You see, "exposing plants to UV light" isn't patentable and doesn't yield patentable produce. It's true that without poisons, bugs will eat some of the plants. The portion of a crop that nature reclaims is the cost of doing business in her neck of the woods. It's only common business sense to diversify, to ensure a single crop / market failure won't end your operation.

Turns out, when I look at the cost distribution of my food consumption it more closely matches the ratios of food one should consume. Less meats and fats and more fruits and vegetables. Instead of prawn, I just had some wonderful big spicy Cajun Crayfish, raised locally. The farmer showed me how they were part of a hydroponics system that scavenges (filters) the nasty things from the nitrogen fixing fish tank before the water is recirculated to feed some of the most amazing tomatoes I've ever tasted. That greenhouse eco-system also produces aphid eating ladybugs, of which I bought about a thousand to release in my own garden. Go for crawdads and come back with lady-bugs and tomatoes too. I never know what I'll buy when I go "grocery" shopping, but I know one thing: It will not have "secret ingredients". I eat open source food.

P.S. I also brew beer that is free as in freedom and free as in software...

Comment Re:Wait What??? (Score 1) 612

Heisenberg's uncertainty principle allows a small region of empty space to come into existence probabilistically due to quantum fluctuations

I don't remember that in the principle when I took physics. I think they are skipping quite a few steps in the summary.

No no, it's quite simple really:

"It it not improbable that everything suddenly sprang into existence from nothing?"

"Well, yes, that's HIGHLY unlikely!"

"So it is. Therefore, given an infinite metaverse this has certainly occurred. Thus, even if not the origin of this universe, it absolutely is the case in an infinite number of others, including an uncertain number which are indistinguishable from our own wherein we are having the same conversation. Q.E.D."

This is all uncovered extensively in "Super Fragile Improbabilistic Theoreticalidocious". The improbable motive force of creation was first theorized by none other than the esteemed Douglas Adams himself.

Comment Re:Why is he even excusing himself ? (Score 4, Insightful) 447

As an open-source dev myself, I often wonder why the fuck I do anything useful for others when they'll just turn on me the moment their toys don't work exactly as desired because -- gorsh -- I'm not perfect, though I work very hard to be.

Well, I'm a developer too. Mostly open source. Thing is, I don't bite off more than I can chew. This is a security product. They're not using basic code coverage tools on every line, or input fuzzing. They missed a unit test that should have been automatically generated. This is like offering a free oil change service boasting A+ Certified Mechanics, then forgetting to put oil in the damn car. Yeah, it was a free oil change, but come the fuck on, man. You really can't fuck up this bad unless you're stoned! I mean, if you change the oil, you check the oil level after you're done to ensure it hasn't been over-filled... You check all the code paths, and fuzz test to make sure both sides of the #ifdef validate the same, or else why even keep that code? "I can accept the responsibility of maintaining and contributing to an industry standard security product." "YIKES, I didn't fully test my contribution! Don't blame me! I never said I could accept the responsibility of contributing to or maintaining an industry standard security product!"
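
To be clear about what "test both sides of the #ifdef" means in practice, here's a hedged sketch; the option name USE_FAST_PATH and the bounded_copy function are invented for illustration. You build and run the same test once per configuration, and both builds must pass:

```c
/* copy_len.c -- toy function with a compile-time option, plus its test.
   Build and run both configurations:
     cc -DUSE_FAST_PATH copy_len.c && ./a.out
     cc copy_len.c && ./a.out                  */
#include <assert.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical routine: copy at most 'cap' bytes, return bytes copied. */
static size_t bounded_copy(char *dst, const char *src, size_t len, size_t cap) {
#ifdef USE_FAST_PATH
    size_t n = len < cap ? len : cap;   /* "optimized" branch */
    memcpy(dst, src, n);
    return n;
#else
    size_t n = 0;                       /* plain branch */
    while (n < len && n < cap) { dst[n] = src[n]; n++; }
    return n;
#endif
}

int main(void) {
    char out[8];
    /* The same assertions must hold no matter which branch was compiled in. */
    assert(bounded_copy(out, "hello", 5, sizeof out) == 5);
    assert(bounded_copy(out, "toolongforbuffer", 16, sizeof out) == sizeof out);
    assert(bounded_copy(out, "", 0, sizeof out) == 0);
    puts("ok");
    return 0;
}
```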

It's cancerous shit like you that give open source a bad name. Own up, or Fuck off.

Comment Re:code review idea (Score 5, Insightful) 447

Well, maybe this is a blessing. While it's open source, maybe multiple eye's need to look at it for final validation.

No, it's a curse. I have input fuzzing, unit tests, code coverage profiling, and Valgrind memory tests. Such a bug wouldn't have slipped past me with both eyes shut -- no, seriously! If I fuck up accidentally like this, THE COMPUTER TELLS ME SO without my ever having to do anything but make the mistake and type make test all. I test every line of code on every side of my #ifdef options, in all my projects. If you're implementing ENCRYPTION AND/OR SECURITY SOFTWARE then I expect such practices as the absolute minimum effort -- I mean, that's what I do, even when it's just me coding on dinky indie games as a hobby. I don't want to be known as the guy whose game was used to compromise users' credentials or data; that would be game over for me.

These ass-hats have just shown the world that they can't be trusted to use the fucking tools we wrote that would have prevented this shit if they'd just run them. It's really not acceptable. It's hard to comprehend the degree of unacceptable this is. It reeks of intentional disaster masquerading as a coy "accidental" screw-up: "silly me, I just didn't do anything you're supposed to do when you're developing industry standard security software." No. Just, no. An ancient optimization that was made default even though it only mattered on SOME slower platforms? Yeah, OK, that's fucking dumb, but I can buy it as an accident. However, NOT TESTING BOTH BRANCHES for that option? What the actual fuck? I could see someone missing an edge case in their unit tests, but not using input fuzzing at all? It's not hard; shit, I have a script that generates the basic unit fuzzing code from the function signatures in .H files, you know, so you don't miss a stub...
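
For the curious, a generated fuzz stub doesn't have to be fancy. This is roughly the shape of a libFuzzer harness; parse_record here is a hypothetical stand-in for whatever signature gets pulled out of the header, while the entry point itself is the real libFuzzer convention:

```c
/* Build (clang): clang -g -fsanitize=fuzzer,address fuzz_parse.c parse.c */
#include <stdint.h>
#include <stddef.h>

/* Hypothetical function under test, declared in some .H file. */
int parse_record(const uint8_t *buf, size_t len);

/* libFuzzer entry point: called with mutated inputs, forever. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);  /* crashes, leaks, and out-of-bounds reads get
                                  caught by ASan/Valgrind, not by hand-written cases */
    return 0;
}
```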

"Never attribute to malice what can be adequately explained by stupidity." -- The level of stupidity required is unexplainable. How the fuck are they this inept and in charge of THIS project? THAT'S the real issue. This isn't even the fist time OpenSSL shit the bed so bad. In <- this linked example, it was Debian maintainers and not the OpenSSL maintainers fault (directly): Instead of adding an exception to the Valgrind ignore list (which you most frequently must have in any moderately sized project, esp one that handles its own memory management) they instead commented out the source of entropy, making all the SSL connections and keys generated by OpenSSL easily exploitable since it gutted the entropy of the random number generator (which is a known prime target for breakage that's very hard to get right even if you're not evil, so any change thereto needs to be extremely well vetted). Last time the OpenSSL maintainers brazenly commented they "would have fallen about laughing, and once we had got our breath back, told them what a terrible idea this was." -- Except that they silently stopped paying attention to to the public bug tracker / questions and quietly moved to another dev area, making it nearly impossible to contact them to ask them about anything (a big no-no in Open Source dev), but it gives you a better idea about the sort of maintainers these fuck-tards are.

We don't know absolutely for sure, but we're pretty damn close to absolutely certain that OpenSSL and other security products (see: RSA's BSafe) are being targeted for anti-sec by damn near all the powers that be. So, now we find out OpenSSL has an obsolete optimization -- a custom memory pool (a red flag goes up right away if you see memory reuse in a security product; that shit MUST be even more thoroughly checked than entropy pools, since it can cause remote code execution, memory leaks, and internal state exposure... you don't say?). We find that optimization would have been caught by a basic fuzz test with Valgrind, which apparently folks had been using previously, according to the comments on the prior SNAFUBAR. Even if a unit test missed the out-of-bounds edge-case values on the alternate codepath, the alternate codepath was NEVER tested, for YEARS? That's inexcusable, almost as bad as the developers of a popular open source security product entering silent-running mode... It's pretty clear why: if that branch was compiled in, it would reveal this bug that was splaying OpenSSL wide open. Now get this: they use the excuse that the codepath hadn't been tested in so long to KEEP USING THE BUGGY CUSTOM MEMORY MANAGER?! Look, if I were writing a memory manager for a security product, guess what my #1 reason for doing so would be? To memset freed memory to zeros automatically and PREVENT this type of data leak.
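
To spell that last point out, here's a minimal sketch of a zero-on-free allocator. This is NOT OpenSSL's actual allocator, just the idea that a later over-read (hello, Heartbleed) should return zeros rather than stale keys; real code would prefer explicit_bzero / OPENSSL_cleanse over the hand-rolled scrub below:

```c
#include <stdlib.h>
#include <string.h>
#include <stddef.h>

/* Zero a buffer in a way the optimizer is less likely to elide. */
static void scrub(void *p, size_t n) {
    volatile unsigned char *v = p;
    while (n--) *v++ = 0;
}

/* Each allocation remembers its own size so it can be wiped on free. */
typedef struct { size_t size; unsigned char data[]; } secure_buf;

void *secure_malloc(size_t n) {
    secure_buf *b = malloc(sizeof *b + n);
    if (!b) return NULL;
    b->size = n;
    return b->data;
}

/* Wipe the contents before handing the memory back to the allocator. */
void secure_free(void *p) {
    if (!p) return;
    secure_buf *b = (secure_buf *)((unsigned char *)p - offsetof(secure_buf, data));
    scrub(b->data, b->size);
    free(b);
}

int main(void) {
    char *key = secure_malloc(32);
    if (key) { strcpy(key, "session key material"); secure_free(key); }
    return 0;
}
```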

If I did something like this I would fire myself; I'm being perfectly honest. I would demote myself from executive software architect all the way down to build-script code-monkey. I'd stop everything, slap myself upside the head, contact my customers and apologize for the delay, offer a refund to those who can't wait and need to go with another solution, cancel my vacation to make up for the money I won't have, and get my ass in gear putting together a proper build, test, deploy framework, and I wouldn't accept a single new feature until everything was checking out, had unit tests for edge cases and fuzzing for all functions, and I had 100% code coverage on those tests. This is a security product, for fuck's sake, so in that case, if I messed up this badly I'd have myself neutered on top of all that, so as to spare the gene pool any chance of my brain-damaged contribution.

This is a curse because in a post-Snowden world OpenSSL should be dead to us now, but it won't be; it's entrenched -- like a grandma zombie it will lumber on without anyone having the guts to bash its brains in, because it looks like something we were deeply connected to just a moment ago. The thing is, though, that zombie is not our granny anymore; it's infected. There is absolutely no telling how many more glaring fuckups exist, readily compromising the security of the entire stack, with oh-so-convenient plausible deniability if ever discovered. OpenSSL needs a full and complete security audit, and the maintainers should be banned from any open source security software that wants to be seen as credible. If I were them I'd be apologizing and stepping down, asking the community to appoint a new maintainer. The code is now cursed, and so are its maintainers, just as some of its contributors, and all of its users, are.

Fortunately for me, from the first time I looked at the design of the TLS/SSL PKI CA system and vomited at the stupidity of it all, I have been operating under the assumption that the entire thing is purpose-built to be a complex security theater that offers no security whatsoever. It really is a collection of single points of failure: any CA can create certs for any domain, so you have to trust ALL of them to never be compromised or the whole thing falls apart, to say nothing of the bad actors listed as trusted roots by default in all browsers, which ensure you cannot trust all of the CAs, ever. If the implementations could be trusted you could use trusted self-signed certs to secure endpoints for your internal business domain, but I really don't see how any CA can prove that it is trustworthy given that government gag orders exist, and I haven't yet seen an implementation of SSL that I feel I can trust.

Look: it sucks, but if you want it done right you really are going to have to do it yourself, or find a trustworthy bloke to do it for you. It's the keys to the kingdom, people; that's what's called a single point of failure. It's guaranteed to be 100% screwed if it's widespread and an unrelated 3rd party or committee is in charge, count on it. Damn near every government on the planet employs a building full of people whose job it is to make this so. The only blessing in disguise is that this is yet another nail in the coffin, not just for OpenSSL, but for the Public Key Infrastructure in general. More and more folks are jumping ship and looking for better solutions, realizing that no CAs can be trusted at all, and that trusting hundreds of them is pure stupidity.

It's a real shame no one invented a trust graph system where multiple intermediaries can vouch for identities like in PGP, oh, wait... Huh, why does TLS even exist then? Right, to make sure your shit is very exploitable.

Comment Close your eyes so the world will not exist. (Score 2) 292

Oh? Scientists are taking longer and longer to get Nobel Prizes, meanwhile our President got one just for being elected! Never mind the more competent and capable black fellow who Obama got redistricted out of office to begin his ascent... maybe Gerrymandering is a feat worth a "Nobel" prize? Ah, wait, now I remember, these prizes are just political bullshit, who gives a fuck about them? I don't.

Neurology is unlocking the mystery of the mind and cybernetics provides models for the creation of new mental lattices so that minds may escape their bodies. Information theory gives us insight into the quantification of cognition and its unification with mathematics. Philosophy may soon have epistemology verifiable through quantum physics and ethics based on rigorously provable physics equations. The theory of expansion says there are multiverses, and we haven't even colonized the moon, let alone been to the nearest planet in person, not to mention the nearest star or galaxy... and these fuckers want to claim science is winding down? Sounds like some Grade A+ Christian Fundamentalist pandering to me: "Science is almost dead! See, it didn't have all the answers. Yaaaay God!"

Hell, I can barely keep up with feeding my distributed neural network experiments ever more processing power due to the exponential increase in cheap computational capacity. For the first time on this planet a species stands poised to intelligently design and manufacture the biogenesis of a completely new form of life, and some idiots are saying we've reached the end of the road in science? Fuck that. If PCs continue their progress, then by 2050 the machine intelligences in my server racks alone will have many times more computational power than a human head, to say nothing of the Internet as a whole. We just began 3D printing new organs and regenerating existing organs too. We're making ARTIFICIAL EYES and we can even cure deafness. We've got artificial brain implants restoring and repairing the functionality of minds; we even have the first-ever telepathy by way of copying the thoughts and memories of one mouse into another. We may not only colonize the asteroid belt, but even create self-assembling minds the size of small planets with electromagnetic brain waves so powerful they can shape reality itself, concentrating energy and matter at a whim, like the most powerful coherent beams on Earth crudely do now. Science killed the old gods, deprecating the term by defining new ones like Alien Intelligence. Now we are closer than ever to creating god-like beings or becoming like gods ourselves, at the very least immortal, and yet science is "running out" of great things to discover? Really?

I could go on about discoveries and achievements to be made in every field: from education to materials science, from grief counseling to artificial flavoring, from textiles to construction, there is not a single area of research that doesn't stand to make revolutionary advances for humanity -- in everything from self-healing metals and glass, to houses that think, to transforming electro-chemically powered clothing, to vegetables and meats that grow in your fridge, to environmentally friendly, cellularly engineered, organically grown building construction, or even just candy that repairs and prevents cavities.

It would take a really small-minded and ignorant fool to claim science is running out of achievements or advancements. Try peering out from under a rock some time. With each new technology the door opens to even more progress. Just compare the last century to the century before that to refute the bullshit claim; try it with millennia to get a real grasp on progress. Machines have developed capabilities in a few short decades that took organic life billions of years to emerge. All observational evidence proves such nonsensical statements as in TFA ill-informed at best, and an indication of brain damage at worst.

The article is sensationalist anti-science garbage. Nature will grant the same fate to troglodytes as to trilobites. If you lack adequate awareness, you become a fossil. Adapt or become extinct.

Comment Re:Themes... (Score 3, Interesting) 452

Looks like and acts like are totally different things. While looking like windows might get you past the initial "it's not what I know" reaction, it's still going to take training to take windows folks into the brave new world of Linux.

As contrasted with training users to embrace the utter cluster fsck of nausea inducing purple and green bruised UI vomit that is Windows 8?

I install Debian and Gnome (2 or 3) or KDE for elderly folks at the community center. Guess what? They have less of a problem going from XP to Linux than from XP to Vista, 7, or 8. Gnome's "dead zone", which prevents shaky hands from accidentally copying when they want to double-click, is a favorite feature among the elderly. In fact, since Windows 8's release I have tripled the number of Linux installs, and instead of just extending the life of old hardware, both young and old folks just want a release from the non-communicative, anti-discoverable W8 interface bullshit. I have been met with driver issues downgrading from Win 8 to Win 7 on many occasions, whereas a Linux live CD works out of the box far more reliably. On systems where the install wouldn't work for some reason, e.g. MS Surface or Surface Pro hardware, most folks I meet would rather return it to the store or pawn it than continue using Windows, AOL Kids Edition.

If barely computer-literate fuddy-duddies can cope, then the "Linux retraining cost" is just FUD. Anyone who really can't adapt should be fired for incompetence; heaven forbid a necessary website be changed while they're employed with you.

Comment Natural Born Cancer (Score 5, Insightful) 301

Well, what you are pointing out is that a CA is a single point of failure -- something actual security-conscious engineers avoid like the plague. What you may not realize is that collectively the entire CA system is compromised by ANY ONE of those single points of failure, because any CA can create a cert for ANY domain without the domain owner's permission. See also: the DigiNotar debacle.

The thing is, nobody actually checks the cert chain, AND there's really no way to do so. How do I know if my email provider switched from Verisign to DigiCert? I don't, and there's no way to find out that's not susceptible to the same MITM attack.

So, let's take a step back for a second. Symmetric stream ciphers need a key. If you use a password as the key then you need to transmit that key back and forth without anyone knowing what it is. You have to transmit the secret, and that's where public key crypto comes in; however, it doesn't authenticate the identity of the endpoints, which is what the CA system is supposed to do. Don't you see? This whole CA PKI system just moves the problem of sharing a secret from the password to which cert the endpoint is using -- that becomes the essential "secret" you need to know, and it has far less entropy than a passphrase!

At this time I would like to point out that if we ONLY used public key crypto between a client and server to establish a shared secret upon account creation, then we could use a minor tweak to the existing HTTP Auth Hashed Message Authentication Code (HMAC) proof-of-knowledge protocol (whereby one endpoint provides a nonce, the nonce is HMAC'd with the passphrase, and the unique-per-session resultant hash proves that the endpoints know the same secret without revealing it) to secure all the connections quite simply: server and client exchange nonces and the available protocols for negotiation, the nonces are concatenated and HMAC'd with the shared secret stored at both ends, then fed to your key-stretching / key-expansion system, AND THAT KEYS THE SYMMETRIC STREAM CIPHER SIMULTANEOUSLY AT BOTH ENDS, so the connection proceeds immediately with the efficient symmetric encryption, without any PKI CA system required.
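
A hedged sketch of that key step, using OpenSSL's HMAC as the primitive; the nonce sizes and labels are invented, and a real design would feed the result through a proper KDF and authenticate the whole negotiation transcript rather than use the digest directly as the cipher key:

```c
/* cc hmac_kdf.c -lcrypto */
#include <stdio.h>
#include <string.h>
#include <openssl/hmac.h>
#include <openssl/evp.h>

int main(void) {
    /* Shared secret established once, out of band or at account creation. */
    const unsigned char secret[] = "correct horse battery staple";

    /* Fresh random nonces exchanged in the clear at connection time
       (hard-coded here only so the example is reproducible). */
    const unsigned char client_nonce[16] = "client-nonce-01";
    const unsigned char server_nonce[16] = "server-nonce-01";

    /* Concatenate the nonces; both ends do exactly the same thing. */
    unsigned char msg[32];
    memcpy(msg, client_nonce, 16);
    memcpy(msg + 16, server_nonce, 16);

    /* HMAC(secret, client_nonce || server_nonce) -> per-session key material.
       Neither side ever sends the secret or the derived key over the wire. */
    unsigned char key[EVP_MAX_MD_SIZE];
    unsigned int keylen = 0;
    HMAC(EVP_sha256(), secret, sizeof secret - 1, msg, sizeof msg, key, &keylen);

    for (unsigned int i = 0; i < keylen; i++) printf("%02x", key[i]);
    printf("\n");  /* this digest is what would seed the symmetric stream cipher */
    return 0;
}
```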

PKI doesn't really authenticate the endpoint; it just obfuscates the fact that it doesn't by going through the motions and pretending to do so. It's security theater. SSL/TLS and PKI are essentially the Emperor's New Secure Clothes. At least with the shared-secret model I mention above, there's at worst just that one-time small window of PK crypto for secret exchange (failing to intercept account creation means no MITM), and at best you would actually have the CHANCE to exchange your secret key out of band -- visit your bank in person and exchange the passphrase, etc. -- so that NO MITM could intercept the data. HTTP Auth asks for the password in a native browser dialog BEFORE showing you any page to log in (and it could remember the PW in a list, or even generate them by hashing the domain name with a master PW and some salt, so you could have one password for the entire Internet). That's how ALL security should work: it ALL relies on a shared secret, so you want the MOST entropic keyspace, not the least entropic selection (which CA did they use). If you're typing a password into a form field on a web page, it's ALREADY game over.

Do this: check the root certs in your browser. For Firefox: Preferences > Advanced > Certificates > View. See that CNNIC one? What about the Hong Kong Post? Those are Known bad actors that your country is probably at cyber war with, and THEY ARE TRUSTED ROOTS IN YOUR FUCKING BROWSER?! Not to mention all the other Russian or Turkish, etc. ones that are on the USA's official "enemy" list. Now, ANY of those can pretend to be whatever domain's CA they want, and if your traffic bounces through their neck of the woods they can MITM you and you'll be none the wiser. Very few people, if any, will even inspect the cert chain, which will show the big green bar; and even if they do, they really can't know whether the domain has a cert from those trusted CAs just to comply with that country's laws or whatever.

So, I would put it to you that this whole "Heartbleed" business is totally overblown. If you're NOT operating under the assumption that the entire TLS/SSL Certificate Authority / Public Key Infrastructure is purposefully defective by design and that all your keys are bogus as soon as they are created, then you're just an ignorant fool. Heartbleed doesn't change jack shit for me. My custom VPN uses the HMAC key-expansion protocol mentioned above. I don't do ANYTHING online that I wouldn't do on the back of a postcard, because that's the current level of security we have.

I would STRONGLY encourage you to NOT TRUST the IETF or ANY security researchers who think the SSL/TLS system was ever a secure design. It was not ever secure, and this has been abundantly clear to everyone who is not a complete and utter moron. Assuming that the entire web security field isn't completely bogus is bat-shit insane.
