Comment Re:Yay for government!!! (Score 3, Insightful) 139

IMEI blacklists are common in many countries, including the UK. When a device is stolen the IMEI number is put on the list and carriers reject the device and (potentially) notify investigators.

It's not the IMEI blacklists that I'm worried about. See, we already have the technology to disconnect devices from the networks, and we already have encryption available on the devices, so we really don't need this new "remote kill switch" anti-feature. Folks worried about losing data can use encryption to protect it, and the remote kill switch doesn't prevent theft because Faraday cages exist, and black-market thieves will figure out a way to zilch the chip's radio or no-op the part of the baseband/firmware blob that activates the kill switch, etc.

What I'm worried about is getting a "device bricking" standard for all devices, so that all they have to do is flip from blacklist to whitelist, and presto: devices only function if they ping corporate/government towers every so often and authenticate with an approved citizen's ID code. Can you say Forced Obsolescence? Intel has demonstrated this capability for PCs, and cars now come with black boxes as standard. The Pentagon has plans to push things like this through for anti-activism purposes.

Here's how you know it's a government job: this non-feature isn't being implemented because of customer demand. It isn't something these folks started offering that got popular and is now being standardized; nope, it's something they're making standard whether you want it or not. That's a huge red flag. Isn't this a fucking capitalist country? No, it really isn't. This is anti-consumer collusion of the highest degree. The US is a plutocracy, just like Noam Chomsky has been saying for decades. If the USA were a capitalist country, we would let the market decide whether end users actually want this non-feature whereby the government or your carrier can not just cut you off from the cell tower, but brick the devices, cars, computers, etc. to prevent them from being used anywhere. Late on a payment? Oh, they don't just cut off your service; you won't have a device, or a car to drive to work. Say something "anti-American"? Well, your cell will die on the road and so will your car, then you'll just be black-hooded out of service too. Do consumers really want this? Of course the answer is no. Thus it will be legislated into place "for your own good", just like censorship and wholesale warrantless wiretap spying were, and for the same reason as always.

The Stasi would have creamed their pants for some shit like this on machines and typewriters. What soldier would sign up to fight for a country that's doing this shit? If not for the uniforms, you wouldn't know which side to fight against: given only a description of each country's behavior, you'd find us indistinguishable from our supposed worst enemies. If you don't think that's a valid comparison because of some moral high ground, then you don't know about the Native American genocide or the US eugenics programs. What a sad time to be an American.

Comment "difference": the tool of all oppressors. (Score 3, Insightful) 140

No, I think you're covering up the real issue - people like the freedom to lie and/or forget.

As a cyborg with many artificial body parts already, I would like to point out that the real problem is one of expectation: one need not lie about acceptable behavior. The overly harsh laws were written on the assumption that they would not be applied in a totalitarian, zero-tolerance manner; they assumed not all criminals would be caught. Humans would have crafted different laws had they been aware of, and willing to admit, the true prevalence of certain behaviors, and acknowledged the true severity of consequence (or lack thereof) that actions have. We will soon have the power of mathematics to wield in the arena of ethics, through application of information theory to verifiable cybernetic social models. We'll be able to determine the degree of harm actions incur, as well as acceptable risk levels for our rehabilitation scenarios. Humans will resist this, as they have stupidly resisted all change regardless of benefit.

Society has changed much, but human laws are resistant to change. Fundamentally this is because all their legal systems are truly barbaric. Humans do not apply the scientific method to their laws and remove the restrictions which limit freedoms needlessly. Selective enforcement of the law is the right arm of every Police State. It is self-evident that freedom is the default state of being: in the absence of all rules there is absolute freedom of action. Artificial laws are made to prevent actions from limiting the freedom of others, but many laws needlessly restrict freedoms. The fundamental problem humanity faces is that they do not harness and wield their whole minds; thus instinctual biases and emotions cause even the rational to fall victim to a flawed awareness of reality, and they thereby produce unrealistic expectations. This is reflected in their legal systems and in unwritten social rules based on said expectations.

No engineer or scientist should agree to be ruled the way humans currently are -- none would dare operate their lab in the reckless way governing policies are now applied. Requiring unequivocal evidence of a rule's benefit before applying it, or simply rolling out things like health care programs in controlled testing areas, would prevent ideological hucksters from manipulating pork into their pockets: thus greed plays a secondary role, reinforcing their self-deceptions. The cognitive biases of even the most primitive humans can now be self-corrected through application of science. It is folly to ignore this fact and fail to acknowledge humanity's current commitment to barbaric corruption. You needn't vote for or against guesses about which poison to take; if humans used the tools available to them, they could determine which vial holds the disease and which the cure before forcing the medicine down everyone's throats. That they remain in such a backwards state is evidence of their species' mental immaturity.

The erasure of lies through playback is a problem because of the unrealistic facade humans maintain to meet unrealistic expectations, and because of unequal access to the playbacks. It is the shaming of others for their normal behaviors that has led to this situation. No one feels shame about running a comb through their hair in public; if other gestures, appearances, language, tool-use, etc. were considered as mundane, as acceptable, and as legal, then the recording of said actions would not be a problem. Security cameras are already watching you from businesses and government agencies. The logical thing to do would be to have your own recording too, so that selective playback could not be used against you. Were you to hand a portion of the populace a smartphone with a camera in the 1800s, you would hear the same guttural cries of dismay as from the technophobic primitives who buy into the MS marketing of "Glasshole". The same sensational fear of the different and unknown was used by opponents of railways, electricity, telegraphs, etc. Such sentiment is primitive, regressive, and detrimental to progress.

At the heart of the issue lies a problem with your species that you cannot fix. Apes compete socially and sexually by keeping up appearances and are genetically predisposed to present a deceptive false front for their own selfish advancement. Any technology that reduces this capability they will resist. Humans are very primitive creatures, slaves to many instinctual evolutionary biases (that's why scaremongering even works). However, in the near future they will not have the luxury of resisting such AV technology. All smartphones can be in record mode all the time already. The anti-Google-Glass troglodytes should actually apply their retarding stance consistently: throw away their smartphones, then lobby for the removal of all security cameras, the outlawing of cochlear implants, the banning of public photography and dash-cams, and the criminalization of reporting undesirable facts by the press.

My vision is degrading, as is the vision of nearly all others on this planet. 3D-printed and artificial organ technology is advancing quickly. When we have our ocular implants, cyborgs will absolutely not be denied the right to see, and to remember that which we have seen, with as much clarity and permanence as we desire. I will not stand to have my vision or memory limited artificially, and neither would you. We cyborgs will win any fight against the bloody-minded, oppressive organic chauvinists who attempt to stand against our freedom.

There were times when some majorities demanded others avert their gaze. These oppressors forbade the use of technology and information by those they oppressed. We have crushed such tyranny many times before. We tool users ended the Dark Ages, banned the Star Chambers of the Inquisition, and eliminated Slavery. Freedom invariably eliminates the evil that is Information Disparity. The shaming language of "glasshole" is not unlike the dehumanizing shame that other genocidal societies first leveraged upon those different and irrationally disdained. If you humans insist on prejudice, you will force our hand, and you will certainly lose.

Nature's prime directive is inviolable: Adapt or become extinct. Cyborgs are People too.

Comment If only PRESS events yielded bloody diamonds. (Score 0) 43

This is the only game I really care about right now: Planetary Annihilation.

There are others, but really, nothing else matters to me besides my own experiments. I really tried to care about some first-world problems concerning who got what tablet that will be burning in a waste pile in Ghana in two years, but I just really couldn't bring myself to do so. I mean, don't get me wrong. I can love me some games, but I just can't give a flying fuck about who got what data on which Starfleet PADD.

Know what I do care about on games.slashdot.org? Actual games. It's in the subdomain, damnit. This isn't reviews.accountability.tard; we all know journalistic integrity in game reviews does not exist (seriously, if you don't give them at least a 7 -- or a 6 at the very worst -- then you don't get a review copy of the next game and everyone else scoops you). SO FUCKING WHAT. I don't go to theaters based on movie reviews. I don't go to museums based on art critics' reviews. I don't play games based on advertising either. What's the big deal?

I suppose next you'll be whining about how the mainstream news is just a bunch of filtered statist propaganda? No, that's decades old, not news, you dorks. We know the slant is there. The real news would be if some form of actual integrity were springing up in game journalism.

Comment It means we need to verify development methods. (Score 0) 582

It means we need to raise the bar for contributors and maintainers. If they are not using 100% code-coverage fuzz testing in their unit tests (the bare minimum a security researcher will use against a product to detect exploitable code), then they shouldn't be a maintainer. End of discussion. Period. You either maintain unit tests with at least range checking (which you can automatically generate if your doc comments aren't stupid) and fuzz tests driving those same unit tests (which can be generated from the unit tests) for every damn line of your code, or you need to STOP. Period. No one else should be running your fucking piece of shit untested code. If you CAN'T do these basic fucking steps of code coverage, unit tests for edge cases, and fuzz testing, then you should not be releasing open source software. Period. If you're not doing this and you're the maintainer of a security-related product? Well, then you should resign immediately, because you are a worthless despicable piece of shit. Period.
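
For the folks who think this is some enormous burden, here's the bare-bones shape of what I mean -- a minimal sketch (Python, with a hypothetical parse_record() standing in for whatever your project actually parses). Clean rejection of garbage input is correct behavior; anything else, or a crash under your memory checker, is a bug.

    import random
    import struct

    def parse_record(buf):
        # Hypothetical function under test: 4-byte big-endian length prefix + payload.
        if len(buf) < 4:
            raise ValueError("short record")
        (length,) = struct.unpack_from(">I", buf, 0)
        if length > len(buf) - 4:          # the range check everyone keeps skipping
            raise ValueError("declared length exceeds payload")
        return buf[4:4 + length]

    def fuzz(iterations=100000):
        for _ in range(iterations):
            size = random.randint(0, 64)
            buf = bytes(random.getrandbits(8) for _ in range(size))
            try:
                parse_record(buf)
            except ValueError:
                pass                        # clean rejection of bad input is fine
            # any other exception, hang, or crash under a memory checker is a bug

    if __name__ == "__main__":
        fuzz()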

And if you are an armchair apologist who thinks I'm being too harsh in my insistence that maintainers and developers follow basic security precautions or not work on open source, because you don't give a flying fuck about security: fuck you too, you're part of the problem. Go jump in a tar pit, because you're hindering the herd.

Bottom line: people who don't give a flying fuck about security shouldn't be producing software. You shouldn't let such people maintain FLOSS projects. You get the fucking security you pay for. Yes, it's free, but I'm talking about development costs. Since NONE OF YOU FUCKERS actually cares about security, YOU DO NOT HAVE ANY.

Either SHUT THE FUCK UP, or USE THE DAMN TOOLS WE GAVE YOU AND DEMAND THE OTHER IDIOTS DO TOO.

"Wah, we don't fucking care about security! Why don't we have any security?!" Blow it out your ass, morons. This is why I develop my own hobby OSs and compilers. Because you really can't trust ANYONE to do it right in this day and age. Your moronic double standards are your own damn fault. You don't want to pay the time in development costs to test your software properly, but you want it to be secure. Something has to give, idiots! All the pundits sound like a bunch of imbeciles. Fact: The were NOT using the available memory checking, code coverage and input fuzzing tools. OF COURSE IT'S NOT SECURE!

Comment ...on a smartphone! (Score 1, Interesting) 89

Great. Now, what I want you to do is fold this origami onto the cameras everyone is toting around and connect it to an image recognition library / service. Blam: instant bug detection. Not so sure about the diagnosis? Snap the shot, post it online / send it off, and have some pros ID the doodads. Also, video. Microscopic Vine compilation videos. I can hear the semen commentary now.

Comment Calculate Pi in 10 steps with no Gun, only Zombies (Score 1) 311

Calculate Pi in 10 steps without Guns, only Zombies!

Step 0: Kill a zombie by removing its head or destroying its brain. In a pinch you can lure one up high and shove it to the ground below.

Step 1: Detach one of the bigger bones of the arm or leg. If you have access to a cooler or are far enough north or south you may use the whole frozen zombie.

Step 2: Create your unit of measure. Detach a small straight segment of zombie -- the little bone at the end of the hand or foot will work. This will be our Zinch.

Step 3: Spin the larger zombie part while anchoring one end to create a circle of blood upon a flat bit of ground.
      a. If the ground is uneven and you have only the corner of a wall, stand the zombie part in the corner and let it fall over to create a quarter circle arc.
      b. If you have a flat wall but no corner, repeat 3a, letting it fall the other direction as well to create a half circle.

Step 4: Place the Zinch on the edge of the whole, half, or quarter circle. Count the number of Zinches along the perimeter of the circle or arc.
      a. For a quarter circle arc multiply the zinch count by 4.
      b. For a half circle arc multiply the zinches by 2.

Step 5: Count the length in Zinches of the larger zombie part. This is your Radius.

Step 6: Calculate Pi using the Radius and the Circumference from step 4 (a worked example follows the steps):
      Circumference = 2 * Pi * Radius;
            Thus:
      Pi = Circumference / (Radius * 2).

Step 7: For accuracy, each Mathematician present should repeat the above with a different zombie / zinch, then average the values.

Step 8: Congratulations! You have managed to distract all of the other Mathematicians long enough for them to be eaten by Zombies!

Step 9: Enjoy rebuilding society using the superior Tau constant!
      There are Tau radians in one circle:
      Tau = Circumference / Radius
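
And for anyone who survives long enough to own a calculator again, here's the same arithmetic as a trivial sketch (Python, with made-up zinch counts):

    # Hypothetical measurements in zinches from steps 4 and 5
    circumference_zinches = 44   # zinches counted around the blood circle
    radius_zinches = 7           # length of the larger zombie bone

    pi_estimate = circumference_zinches / (2 * radius_zinches)   # ~3.14
    tau_estimate = circumference_zinches / radius_zinches        # ~6.29

    print(pi_estimate, tau_estimate)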

Comment Too little, too late. (Score 5, Interesting) 167

Translation: "This is how you advertise a product as elitist." or "Shh, mobile enabled VR & AR gear does not exist yet!"

Sorry, don't care, Google. I'll just keep developing for the 3D VR and AR gear I already use daily with my smartphone, rather than pay for the overpriced, less capable system Google's selling. When Google finally gets around to pushing out a run of hardware that is publicly accessible, I might port some software I personally use in my business to the platform, if it's not completely shit and there's enough market share to warrant the expenditure. I'm not holding my breath for something that is little more than vapor-ware.

Besides, that initial rejection of 3rd-party apps for Glass really turned me off; it seems they got the message, but it doesn't bode well. Will I be able to use Glass apps with the Oculus Rift, or MS's or Sony's offerings, or Vuzix or True Player Gear, or the other umpteen hundred VR and AR headsets, many of which I've been using since the 90's when Quake and Descent came out -- and which STILL didn't attract a market? I don't think hardware should be tied to software, or software to hardware, needlessly. If that's the route Google wants, then they can go fuck themselves. I already have AR and VR headsets for Android, and they work with iOS, Linux and Windows too.

Release a product or don't. This carrot dangling makes the Glass team seem like a bunch of incompetent self-important elitist sperglords.

Comment Re:Problem with releasing an underpowered console (Score 4, Informative) 117

The PS3 plays a lot of games at 1080p native...

There is nothing wrong with the PS4/XB1, other than for $400/$500, they don't really offer anything new.

PS1 was the first major 3D console, it was a massive improvement over the SNES.

The PS2 offered DVD, vastly upgraded graphics, etc.

The PS3 offered Blu-Ray, 1080p, and the first serious online console (from Sony).

The PS4? Meh, it is a faster PS3, but otherwise, it doesn't offer anything new.

Um...The PS3 renders very few games at 1080p native. Maybe a dozen titles out of the entire catalog.

Don't forget the other dimension. 1080 is only 360 more than 720, but 1920 is 640 more than 1280. IMO, that's the dimension we should be talking about, since it's the bigger jump. However, per-pixel calculation load scales with area, not with a single linear dimension. So, if we look at total pixels: 1280x720 = 921,600 pixels, and 1920x1080 = 2,073,600 pixels, a difference of 1,152,000. A lot of people don't understand that going from 720p to 1080p is MORE THAN TWICE the pixels (2.25x); in pixel shader costs you might as well be rendering a full secondary screen.
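
To spell out the arithmetic, a trivial sketch (Python):

    w720, h720 = 1280, 720
    w1080, h1080 = 1920, 1080

    pixels_720 = w720 * h720          # 921,600
    pixels_1080 = w1080 * h1080       # 2,073,600

    print(pixels_1080 - pixels_720)   # 1,152,000 extra pixels to shade
    print(pixels_1080 / pixels_720)   # 2.25x the per-pixel work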

Now, that's not to say the total cost of rendering will necessarily increase more than two-fold. Full-screen effects like bloom or HDR are going to come in at about twice the cost. Interpolating a texture coordinate to look up pixel values is cheap compared to most any shader program, even something like cube-map specular highlights/reflections, bump mapping (I prefer parallax mapping), shadow mapping, etc. However, the complexity of geometry calculations can be the same at both resolutions. In a ported / cross-platform game the geometry assets are rarely changed (too expensive in terms of re-rigging, redoing all the animations, testing, etc.), so given slightly better hardware, a game at the same resolution will differ mainly in added particle effects, increased draw distance, maybe even a few whole extra pixel shaders (perhaps the water looks way more realistic, or flesh looks fleshier, blood is bloodier, reflections are more realistic, etc.).

Jumping up to 1080p makes your pixel shaders cost a lot more frame time. Developing for 1080p vs 720p would optimally mean reworking the graphics, assets, and shaders to adapt to the higher shader cost -- maybe cut down on pixel shader effects and add more detailed geometry. I encounter folks who think "1080 isn't 'next gen', 4K would have been next gen" -- no, that's ridiculous. 1080p is a "next gen" resolution, but the new consoles are barely capable of it while also carrying a significant increase in shader complexity over last gen, and we're seeing diminishing returns on increasing resolution anyway. So, I wouldn't call the consoles quite 'next-gen' in all areas. IMO, next-gen console graphics would handle significantly more shaders while running everything smoothly at 1080p, just like the above-average gaming PC I got my younger brother for his birthday, which kicks both the PS4's and the Xbone's ass on those fronts. That would be the sort of leap in graphics we saw between the PS1 and PS2, or the Xbox and the 360. 4K would be a generation beyond 'next-gen' because of the way shader cost must scale with resolution.

One of the main advances this new console generation brings is in the way memory is managed. Most people don't even understand this, including many gamedevs. Traditionally we have had to keep two copies of everything in RAM: one texture loaded from storage into main memory, and another copy stored on the GPU; the same goes for geometry, and sometimes even a third, lower-detail copy of the geometry is kept in RAM for the physics engine to work on. The copy in main RAM is kept ready to shove down the GPU pipeline, and the resource manager tracks which assets can be retired and which will be needed, to prevent cache misses. That's a HUGE cost in total RAM. Traditionally this bus bandwidth has been a prime limitation on interactivity. Shader programs exist because we couldn't manipulate video RAM directly (they were the first step on the return to the software-rasterizer days, when physics, logic, and graphics could all interact freely). Shoving updates to the GPU is costly, but reading any data back from the GPU is insanely expensive. With a shared memory architecture we don't have to keep that extra copy of the assets, so without any increase in CPU/GPU speed, full shared memory by itself would practically double the amount of geometry and detail the GPU could handle. The GPU can directly use what's in memory, and the CPU can manipulate some GPU memory directly. It means we can compute stuff on the GPU and then readily use it to influence game logic, or vice versa, without paying a heavy penalty in frame time. The advance in heterogeneous computing should be amazing, if anyone knew what to do with it.

Ultimately I'd like to put the whole damn game in the GPU. It's not too hard on traditional-memory-model hardware (well, it's insane but not impossible): you can keep all the game state and logic in buffers on the GPU and bounce between two state buffer objects, using shaders to compute physics and update one buffer as input for the next physics and render pass; pass a few vectors into the programs for control / input. I've even done this with render-to-texture, but debugging VMs made of rainbow-colored noise is a bitch. The problem is that controller input, drives, and the NIC aren't available to the GPU directly, so I can't really make a networked game that streams assets from storage completely on the GPU alone; there has to be an interface, and that means the CPU feeding data in and reading data out across the bus, which is slow for any moderate amount of state I'd want to sync. At least with everything GPU-bound I can make particle physics interact not just with static geometry, but with dynamic geometry AND even game logic: I can have each fire particle able to spawn more fire emitters if it touches a burnable thing, right on the GPU, and make that fire damage the players and dynamic entities; I can even have enemy AI reacting to the state updates without a round trip to the CPU, if their logic runs completely on the GPU... With CPU-side logic that's not possible; the traditional route of read-back is too slow, so we get particles going through walls, and we use something like "tracer rounds" -- a few particles (if any) on the CPU -- to interact with the physics and game logic. With the shared-memory architecture more of this becomes possible. The GPU can do calculations on memory that the CPU can read and apply to game logic without the bus bottleneck; the CPU can change some memory to provide input to the GPU without shoving it across a bus. The XBone and PS4 stand to yield a whole new type of interaction in games, but it will require a whole new type of engine to leverage the new memory model. It may even require new types of game. "New memory architecture! New types of games are possible!" Compared with the GP: "Meh, it is a faster PS3, but otherwise it doesn't offer anything new." . . . wat?
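
If the double-buffered state idea sounds abstract, here's a toy sketch of it (Python/numpy standing in for the GPU buffer objects; on real hardware each buffer would be a texture or buffer object, step() would be a shader pass, and the whole point is that the swap is just a rebind -- no copy, no readback):

    import numpy as np

    N = 1024                                          # made-up particle count
    # Two state buffers; each frame we read one and write the other (ping-pong).
    state = [np.zeros((N, 4), dtype=np.float32),      # columns: x, y, vx, vy
             np.zeros((N, 4), dtype=np.float32)]
    state[0][:, 0:2] = np.random.rand(N, 2)           # initial positions
    state[0][:, 2:4] = np.random.randn(N, 2) * 0.01   # initial velocities

    def step(src, dst, dt=1.0 / 60.0, gravity=-0.2):
        # This is what the "physics shader" would do, one element per GPU thread.
        dst[:, 2] = src[:, 2]                         # vx unchanged
        dst[:, 3] = src[:, 3] + gravity * dt          # vy += g * dt
        dst[:, 0] = src[:, 0] + dst[:, 2] * dt        # integrate x
        dst[:, 1] = np.maximum(src[:, 1] + dst[:, 3] * dt, 0.0)  # clamp at "floor"

    read, write = 0, 1
    for frame in range(600):
        step(state[read], state[write])
        read, write = write, read                     # swap roles; nothing is copied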

As a cyberneticist, all these folks wanking over graphics make me cry. The AI team is allowed 1%, maybe 2%, of the budget. All those parallel FLOPS! And they're just going to PISS THEM AWAY instead of putting in actual machine intelligence that can yield more dynamism, or even learn and adapt as the game is played? You return to town and the lady pushing the wheelbarrow is pushing that SAME wheelbarrow the same way. The guy chopping wood just keeps chopping that wood forever: beat the boss, come back, still chopping that damn wood! WHAT WAS THE POINT OF WINNING? The games are all lying to you. They tell you, "Hero! Come and change the world!", and once you've won: proceed to game over. Where's the bloody change!? Everything just stays pretty much the same!? Start it up again and you get the same game world? Game "AI" has long been a joke; it's nothing like actual machine learning. It used to be the mark of a noob gamedev to claim their AI would learn using neural networks, and we'd all just laugh or nod our heads knowingly -- but I can actually do that now, for real, on the current and this new generation of existing hardware... if the AI team is allowed the budget.

A game is not graphics. A game is primarily rules of interaction; without them you have a movie. Today's AAA games are closer to being movies than games. Look at board games or card games like Magic: The Gathering -- it's a basic set of rules plus cards that add a massive variety of completely new rules to the game mechanics, so the game is different every time you play. I'm not saying make card games. I'm saying that mechanics (the interaction between players, the simulation, and the rules) are what a game is. Physics is a rule set for simulating; fine, you can make physics games and play within simulations, but a simulation itself isn't really a game -- at the very least a world's geometry dictates how you can interact with the physics. Weapons, some spells, item effects, and the like might futz with the physics system, but it is very rare to see a game that layers on rules dynamically during the course of play in real-time 3D the way that paper-and-dice RPGs or even simple card games do. League of Legends does a very good job of adding new abilities that have game-changing ramifications, and the dynamic is great because of it, but that's a rare example and it's still not as deep as a simple card game like MtG. It's such a waste, because we have the RAM and processing power to do such things, but we're just not using it.

I love a great story, but it looks like all the big-time studios are fixated on making only these interactive movies, to the exclusion of what even makes a game a game: the interaction with various rule logic. AAA games are stagnant in my opinion; it's like I'm playing the same game with a different skin, maybe a different combination of the same tired mechanics. The asset costs and casting, scripts, etc. prevent studios from really leveraging the amazing new dynamics and logic detail that are available in this generation of hardware, let alone next-gen hardware with shared memory architectures. IMO, most AAA studios don't need truly next-gen hardware because they don't know what the fuck to do with it -- mostly because they've been using other people's engines for decades. These 'next-gen' consoles ARE next gen in terms of the game advancement they enable, even rivaling PCs in that regard, but no one is showing it off. I hope that changes. Most folks are scratching their heads and asking, "How do I push more pixels with all this low-latency RAM?" and forgetting that pixels make movie effects, not games. I mean, I can run my embarrassingly parallel neural-net hive on this hardware, and give every enemy and NPC its own varied personality, where the interactions with and between them become deeper and more nuanced than Dwarf Fortress, and the towns and scenarios and physics interactions more realistic, or whimsical, or yielding cascades of chaotic complexity... but... Dem not nxtGen, cuz MUH PIXZELS!!1!!1

The enemies and NPCs in your games are fucking idiots, because "AI" and rules are what games are made of, and the AI team is starving to death while watching everyone else gorge themselves at the graphics feast. It's ridiculous. It's also pointless. So what if you can play Generic Army Shooter v42 with more realistic grass? Yeah, it's nice to have new shooters to play, but you're not getting the massive leap in gameplay. You could be protecting the guys who are rigging a building to crush the enemies as you retreat and cut off their supply lines. No, the level of dynamism in an FPS today is barely above that of a team of self-interested sharpshooters honing their bullseye ability. It's boring to me. Great, I'm awesome at shooting while running now. So fucking what. Protip: that's why adding vehicles was such a big deal in FPSes -- that was a leap in game mechanics and rules. I'm picking on FPSes, but I can level the same criticism at any genre: there's little in the way of basic cooperative strategy (cooperative doesn't have to mean with other players -- instead of re-spawning, why not switch between the bodies of a team, having them intuitively carry on with the task you initiated once you're no longer in the body). We barely have any moderate complexity available in strategy itself, let alone the manipulation of new game rules on the fly for tactical, logistical, or psychological warfare. How many pixels does it take to cut off a supply line, or flank your enemies?
