Comment Calculate Pi in 10 steps with no Gun, only Zombies (Score 1) 311

Calculate Pi in 10 steps without Guns, only Zombies!

Step 0: Kill a zombie by removing its head or destroying its brain. In a pinch you can lure one up high and shove it to the ground below.

Step 1: Detach one of the bigger bones of the arm or leg. If you have access to a cooler or are far enough north or south you may use the whole frozen zombie.

Step 2: Create your unit of measure. Detach a small straight segment of zombie -- the little bone at the end of the hand or foot will work. This will be our Zinch.

Step 3: Spin the larger zombie part while anchoring one end to create a circle of blood upon a flat bit of ground.
      a. If the ground is uneven and you have only the corner of a wall, stand the zombie part in the corner and let it fall over to create a quarter-circle arc.
      b. If you have a flat wall but no corner, repeat 3a, letting the part fall in the other direction as well to sweep out a half circle.

Step 4: Place the Zinch on the edge of the whole, half, or quarter circle. Count the number of Zinches along the perimeter of the circle or arc. This is your Circumference.
      a. For a quarter-circle arc, multiply the Zinch count by 4.
      b. For a half-circle arc, multiply the Zinch count by 2.

Step 5: Count the number of Zinches along the length of the larger zombie part. This is your Radius.

Step 6: Calculate Pi using the Circumference from Step 4 and the Radius from Step 5 (a quick worked sketch follows the steps below):
      Circumference = 2 * Pi * Radius;
            Thus:
      Pi = Circumference / (2 * Radius).

Step 7: For accuracy, each Mathematician present should repeat the above with a different zombie and Zinch, then average your values.

Step 8: Congratulations! You have managed to distract all of the other Mathematicians long enough for them to be eaten by Zombies!

Step 9: Enjoy rebuilding society using the superior Tau constant!
      There are Tau radians in one circle:
      Tau = Circumference / Radius = 2 * Pi
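
Here's the worked sketch for Steps 6 and 7, as a few lines of Python. The Zinch counts below are made-up example measurements, not anything taken from an actual zombie:

      # Estimate Pi (and Tau) from several (Circumference, Radius) measurements in Zinches.
      # The numbers are hypothetical examples, one pair per Mathematician.
      measurements = [
          (44, 7),
          (63, 10),
          (25, 4),
      ]

      estimates = [c / (2.0 * r) for c, r in measurements]   # Pi = Circumference / (2 * Radius)
      pi_estimate = sum(estimates) / len(estimates)          # Step 7: average across Mathematicians
      tau_estimate = 2 * pi_estimate                         # Tau = Circumference / Radius

      print(f"Pi  ~ {pi_estimate:.3f}")
      print(f"Tau ~ {tau_estimate:.3f}")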

Comment Too little, too late. (Score 5, Interesting) 167

Translation: "This is how you advertise a product as elitist." or "Shh, mobile enabled VR & AR gear does not exist yet!"

Sorry, don't care, Google. I'll just keep developing for the 3D VR and AR gear I already use daily with my smart-phone, rather than pay for the over-priced, less capable system Google's selling. When Google finally gets around to pushing out a run of hardware that is publicly accessible, then I might port some software I personally use in my business to the platform, if it's not completely shit and there is enough market share to warrant the expenditure. I'm not holding my breath for something that is little more than vapor-ware.

Besides, that initial rejection of 3rd-party apps for Glass really turned me off; it seems they got the message, but it doesn't bode well. Will I be able to use Glass apps with the Oculus Rift, or MS or Sony's offering, or Vuzix or True Player Gear, or the other umpteen hundred VR and AR headsets, many of which I've been using since the '90s when Quake and Descent came out, which STILL didn't attract a market? I don't think hardware should be needlessly tied to software, or software to hardware. If that's the route Google wants then they can go fuck themselves. I already have AR and VR headsets for Android, and they work with iOS, Linux and Windows too.

Release a product or don't. This carrot-dangling makes the Glass team seem like a bunch of incompetent, self-important, elitist sperglords.

Comment Re:Problem with releasing an underpowered console (Score 4, Informative) 117

The PS3 plays a lot of games at 1080p native...

There is nothing wrong with the PS4/XB1, other than for $400/$500, they don't really offer anything new.

PS1 was the first major 3D console, it was a massive improvement over the SNES.

The PS2 offered DVD, vastly upgraded graphics, etc.

The PS3 offered Blu-Ray, 1080p, and the first serious online console (from Sony).

The PS4? Meh, it is a faster PS3, but otherwise, it doesn't offer anything new.

Um...The PS3 renders very few games at 1080p native. Maybe a dozen titles out of the entire catalog.

Don't forget the other dimension. 1080 is only 360 more than 720, but 1920 is 640 more than 1280. IMO, that's the dimension we should be talking about, since it's the bigger jump. However, per-pixel calculation load scales with area, not with one edge. So, if we look at total pixels: 1280x720 = 921,600 pixels and 1920x1080 = 2,073,600 pixels, a difference of 1,152,000. A lot of people don't understand that going from 720p to 1080p is MORE THAN TWICE the pixels; in pixel shader costs you might as well be rendering a full secondary screen.
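
If you want to sanity-check that, the arithmetic fits in a few lines of Python:

      # 720p vs 1080p: per-pixel work scales with area, not with one edge.
      w720, h720 = 1280, 720
      w1080, h1080 = 1920, 1080

      px720 = w720 * h720        # 921,600 pixels
      px1080 = w1080 * h1080     # 2,073,600 pixels

      print(px1080 - px720)      # 1,152,000 extra pixels per frame
      print(px1080 / px720)      # 2.25 -- more than twice the pixel-shader invocations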

Now, that's not to say the total cost of rendering will absolutely increase more than two-fold. Full-screen effects like bloom or HDR are going to come in at about twice the cost. Interpolating a texture coordinate to look up pixel values is cheap compared to most any shader program, even one that does something like cube-map specular highlights/reflections, bump mapping (I prefer parallax mapping), shadow mapping, etc. However, the complexity of geometry calculations can be the same at both resolutions. In a ported / cross-platform game the geometry assets are rarely changed (too expensive in terms of re-rigging and all the animations, testing, etc.), so given slightly better hardware, a game at the same resolution will see the prime difference in adding more particle effects, increased draw distance, maybe even a few whole extra pixel shaders (perhaps the water looks way more realistic, or flesh looks fleshier, blood is bloodier, reflections are more realistic, etc.).

Jumping up to 1080p makes your pixel shaders cost a lot more frame time. Developing for 1080p vs 720p would optimally mean completely reworking the graphics, assets, and shaders to adapt to the higher shader cost -- maybe cutting down on pixel shader effects and adding more detailed geometry. I encounter folks who think "1080 isn't 'next gen', 4K would have been next gen" -- no, that's ridiculous. 1080p is the "next-gen resolution", but the new consoles are barely capable of it while also carrying a significant increase in shader complexity over last gen, and we're seeing diminishing returns on increasing resolution anyway. So, I wouldn't call the consoles quite 'next-gen' in all areas. IMO, next-gen console graphics would handle significantly more shaders while running everything smoothly at 1080p, just like the above-average gaming PC I got my younger brother for his birthday, which kicks both PS4 and Xbone's ass on those fronts. That would be a leap in graphics like the one between PS1 and PS2, or Xbox and the 360. 4K would be a generation beyond 'next-gen' because of the way shader cost must scale with resolution.

One of the main advances this new console generation brings is in the way memory is managed. Most people don't even understand this, including many gamedevs. Traditionally we have had to keep two copies of everything in RAM: a texture loaded from storage into main memory, and another copy stored on the GPU; same goes for geometry, and sometimes even a third, lower-detail copy of the geometry is kept in RAM for the physics engine to work on. The copy in main RAM is kept ready to shove down the GPU pipeline, and the resource manager tracks which assets can be retired and which will be needed, to prevent cache misses. That's a HUGE cost in total RAM. Traditionally this bus bandwidth has been a prime limitation on interactivity. Shader programs exist because we couldn't manipulate video RAM directly (they were the first step on the return to software-rasterizer days, where the physics, logic and graphics could all interact freely). Shoving updates to the GPU is costly, but reading back any data from the GPU is insanely expensive. With a shared memory architecture we don't have to keep that extra copy of the assets, so even without an increase in CPU/GPU speed, full shared memory by itself would practically double the amount of geometry and detail the GPU could handle. The GPU can directly use what's in memory and the CPU can manipulate some GPU memory directly. It means we can compute stuff on the GPU and then readily use it to influence game logic, or vice versa, without paying a heavy penalty in frame time. The advance in heterogeneous computing should be amazing, if anyone knew what to do with it.
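
A toy back-of-the-envelope in Python for the duplicate-copy cost; the asset sizes are invented, only the ratio matters:

      # Hypothetical asset budget (MB) to show what shadow copies cost
      # when CPU and GPU memory are separate pools vs. shared.
      textures_mb = 1500
      geometry_mb = 400
      physics_proxy_mb = 100   # lower-detail geometry copy for the physics engine

      one_copy_of_each = textures_mb + geometry_mb + physics_proxy_mb

      # Split memory: GPU holds textures + geometry, main RAM holds shadow
      # copies of both plus the physics proxy.
      split_total = (textures_mb + geometry_mb) + one_copy_of_each

      # Unified memory: one copy of each asset serves both CPU and GPU.
      unified_total = one_copy_of_each

      print(split_total, unified_total)   # 3900 vs 2000 -- nearly half the RAM freed up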

Ultimately I'd like to put the whole damn game in the GPU. It's not too hard on traditional memory-model hardware (well, it's insane but not impossible): you can keep all the gamestate and logic in buffers on the GPU and bounce between two state buffer objects, using shaders to compute physics and update one buffer as input for the next physics and render pass; pass in a few vectors to the programs for control / input. I've even done this with render-to-texture, but debugging VMs made of rainbow-colored noise is a bitch. The problem is that controller input, drives, and the NIC aren't available to the GPU directly, so I can't really make a networked game that streams assets from storage completely on the GPU alone; there has to be an interface, and that means the CPU feeding data in and reading data out across the bus, and that's slow for any moderate size of state I'd want to sync.

At least with everything GPU-bound I can make particle physics interact with not just static geometry, but dynamic geometry, AND even game logic: I can have each fire particle able to spawn more fire emitters if it touches a burnable thing, right on the GPU, and make that fire damage the players and dynamic entities; I can even have enemy AI reacting to the state updates without a round trip to the CPU if their logic runs completely on the GPU... With CPU-side logic that's not possible; the traditional route of read-back is too slow, so we have particles going through walls, and we use something like "tracer rounds", a few particles (if any) on the CPU, to interact with the physics and game logic.

With the shared-memory architecture more of this becomes possible. The GPU can do calculations on memory that the CPU can read and apply to game logic without the bus bottleneck; the CPU can change some memory to provide input to the GPU without shoving it across a bus. The XBone and PS4 stand to bring a whole new type of interaction to games, but it will require a whole new type of engine to leverage the new memory model. It may even require new types of games. "New memory architecture! New types of games are possible!" Compared with GP: "Meh, it is a faster PS3, but otherwise it doesn't offer anything new." . . . wat?
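
The buffer ping-pong idea reads roughly like this. A minimal CPU-side analogue in Python/NumPy, just to show the shape of it; on the actual hardware the two state buffers would live on the GPU and the step function would be a shader or compute kernel:

      import numpy as np

      # Two state buffers we alternate between ("ping-pong"): read from one,
      # write the next frame's state into the other, then swap.
      N = 1024                                            # number of particles
      state_a = np.zeros((N, 4), dtype=np.float32)        # columns: x, y, vx, vy
      state_b = np.empty_like(state_a)
      state_a[:, 2:] = np.random.uniform(-1, 1, (N, 2))   # random initial velocities

      def step(src, dst, dt=1.0 / 60.0, gravity=-9.8):
          # One physics pass: read src, write dst (what a shader pass would do).
          dst[:, 2] = src[:, 2]                   # vx unchanged
          dst[:, 3] = src[:, 3] + gravity * dt    # vy += g * dt
          dst[:, 0] = src[:, 0] + dst[:, 2] * dt  # x += vx * dt
          dst[:, 1] = src[:, 1] + dst[:, 3] * dt  # y += vy * dt

      read_buf, write_buf = state_a, state_b
      for frame in range(120):
          step(read_buf, write_buf)
          read_buf, write_buf = write_buf, read_buf   # last frame's output feeds the next frame

      print(read_buf[:3])   # peek at the current state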

As a cyberneticist, all these folks wanking over graphics make me cry. The AI team is allowed 1%, maybe 2%, of the budget. All those parallel FLOPS! And they're just going to PISS THEM AWAY instead of putting in actual machine intelligence that can yield more dynamism, or even learn and adapt as the game is played? You return to town and the lady pushing the wheelbarrow is pushing that SAME wheelbarrow the same way. The guy chopping wood just keeps chopping that wood forever: beat the boss, come back, still chopping that damn wood! WHAT WAS THE POINT OF WINNING? The games are all lying to you! They tell you, "Hero! Come and change the world!", and once you've won you proceed to game over. Where's the bloody change!? Everything just stays pretty much the same!? Start it up again, you get the same game world? Game "AI" has long been a joke; it's nothing like actual machine learning. It used to be the mark of a noob gamedev to claim their AI would learn using neural networks, and we'd all just laugh or nod our heads knowingly -- but I can actually do that now, for real, on the current and the new generation of existing hardware... if the AI team is allowed the budget.

A game is not graphics. A game is primarily rules of interaction; without them you have a movie. Today's AAA games are closer to being movies than games. Look at board games or card games like Magic: The Gathering -- it's a basic set of rules plus cards that add a massive variety of completely new rules to the game mechanics, so the game is different every time you play. I'm not saying make card games. I'm saying that mechanics (interaction between players, the simulation, and the rules) are what a game is. Physics is a rule set for simulating; fine, you can make physics games and play within simulations, but a simulation itself isn't really a game -- at the very least a world's geometry dictates how you can interact with the physics. Weapons and some spells, item effects, and the like might futz with the physics system, but it is very rare to see a real-time 3D game that layers on rules dynamically during the course of play the way that paper-and-dice RPGs or even simple card games do. League of Legends does a very good job of adding new abilities that have game-changing ramifications, and the dynamic is great because of it, but that's a rare example and it is still not as deep as a simple card game like MtG. It's such a waste, because we have the RAM and processing power to do such things, but we're just not using it.

I love a great story, but it looks like all the big-time studios are fixated on only making these interactive movies to the exclusion of what even makes a game a game: the interaction with various rule logic. AAA games are stagnant in my opinion; it's like I'm playing the same game with a different skin, maybe a different combination of the same tired mechanics. The asset costs, casting, scripts, etc. prevent studios from really leveraging the amazing new dynamics and logic detail that are available in this generation of hardware, let alone next-gen hardware with shared memory architectures. IMO, most AAA studios don't need truly next-gen hardware because they don't know what the fuck to do with it -- mostly because they've been using other people's engines for decades. These 'next-gen' consoles ARE next gen in terms of the game advancement they enable, even rivaling PCs in that regard, but no one is showing them off. I hope that changes. Most folks are scratching their heads and asking, "How do I push more pixels with all this low-latency RAM?" and forgetting that pixels make movie effects, not games. I mean, I can run my embarrassingly parallel neural-net hive on this hardware and give every enemy and NPC its own varied personality, where the interactions with and between them become deeper and more nuanced than Dwarf Fortress, and the towns and scenarios and physics interactions more realistic, or whimsical, or yielding cascades of chaotic complexity... but... Dem not nxtGen, cuz MUH PIXZELS!!1!!1

The enemies and NPCs in your games are fucking idiots because "AI" and rules are what games are made of, and the AI team is starving to death while watching everyone else gorge themselves at the graphics feast. It's ridiculous. It's also pointless. So what if you can play Generic Army Shooter v42 with more realistic grass? Yeah, it's nice to have new shooters to play, but you're not getting the massive leap in gameplay. You could be protecting the guys who are rigging a building to crush the enemies as you retreat and cut off their supply lines. No, the level of dynamism in an FPS today is barely above that of a team of self-interested sharpshooters honing their bullseye ability. It's boring to me. Great, I'm awesome at shooting while running now. So fucking what. Protip: that's why adding vehicles was such a big deal in FPSs -- that was a leap in game mechanics and rules. I'm picking on FPSs, but I can level the same charge at any genre: there's little in the way of basic cooperative strategy (cooperative doesn't have to mean with other players -- instead of re-spawning, why not switch between the bodies of a team, with them intuitively carrying on whatever task you initiated once you're no longer in that body?). We barely have any moderate complexity available in strategy itself, let alone the manipulation of new game rules on the fly for tactical, logistical, or psychological warfare. How many pixels does it take to cut off a supply line, or flank your enemies?

Comment Re:more pseudo science (Score 4, Insightful) 869

I suppose you can't ascertain whether the universe was created 5 seconds ago either. Fortunately, the laws of physics, chemistry, thermodynamics, biology, etc. allow science to make predictions not only about the future outcome of an event, but also about the probability of circumstances which caused observable outcomes.

If you leave your sandwich near me and come back to find a bite taken out of it, would you accept the argument, "You cannot ascertain the intake of past consumption with enough precision to absolutely blame me for eating your sandwich", or would you say I'm full of shit?

You're full of shit.

Comment Re:It seems so obvious now (Score 1) 448

land sweetheart pre-IPO deals

The thing about pre-IPO is that it means IPO is in the future. Think about IPO. Now, if you're working for investors who pay you to analyze investment risk, then wouldn't having Rice on the board factor into the Risk category pretty heavily? One fucked up privacy/advertising foobar influenced by this spy-happy nutter on the board could easily end the company. It's not like everyone and their mother isn't competing in cloud storage now.

Furthermore, in a post-Snowden world the appointment of Rice doesn't reflect well on the decision-making capability of an Internet-enabled service company or its CEO. That gets tallied straight into the graph as a mark against the IPO valuation; even if it was a smart move for connections and she were out before the IPO, it's not a smart move for the owners or future shareholders. Since Dropbox has proved they're not capable of figuring out that corporate decisions affect consumer perception of their image, I wouldn't invest a dime at IPO even if I had no other reason not to -- like their past deception over user data privacy (there is none; the encryption is for transport, but they can see what's stored).

With distributed solutions that have actual security becoming common, it's only a matter of time before someone makes a slick interface for Freenet and puts solutions like Dropbox out of business. The looming IPO is essentially the Dropbox owners cashing in on their doomed business, and their only market value will be in short-term speculation on their stock price. I see this retarded Rice appointment as a poison pill to ensure the IPO goes through without anyone buying them -- you'd have to be a fool to try buying them now.

Comment Agriculture's Holy Grail: Open Source Food! (Score 1) 116

If you want me to eat something, you have to tell me exactly what it is and how it was grown; if it's something from the animal kingdom, then I want to know what you're feeding the animals and how they're raised. We require ingredient lists on our other food products too. Before you cook shrimp or prawns you have to remove the "sand vein", AKA the digestive tract, AKA the shit tube -- guess what's in there? Whatever they last ate. Some of that shit gets into what I eat. Now their job is to convince me that none of the "marine micro-organisms" in Novaq are harmful, and that they're free of things like, say, marine flesh-eating bacteria...

All the food I eat I've grown myself, or gotten from the farmer's market from local farmers whose farms I have visited, or, at the very least, it has all of its ingredients listed. I only have one life, and I should have the information available to make an informed decision about what I fuel myself with, and the cost to the environment that I am a part of. That information includes how and where things are fished, hunted, farmed, etc. This extends to other purchases too. E.g., I'd only buy lab-grown diamonds, to ensure I'm not supporting the blood-diamond trade. Electronics are often made in shitty conditions too. Just as it was unfortunate but necessary to use proprietary Unixes to make GNU/Linux, it is unfortunate that I must purchase hardware made under pitiful working conditions. When I do so, I buy the fastest and most upgradeable hardware available so as to reduce the frequency of my hardware purchases. Retired hardware goes into the server rack or my home-grown cloud cluster that serves all my AV storage, display and streaming needs. What is decommissioned gets recycled, just like all the packaging I buy. I do the same with food waste via a compost pile for my own garden.

It's more expensive to eat free-range chickens, which keep the bugs out of the pesticide-free garden, but they produce tastier eggs and taste better themselves (yes, I've done double-blind taste tests, For Science!). It's usually more expensive, though sometimes cheaper, to go in with a few friends or family on beef from a mobile butcher and have it cut however we like from a cow of our choice at a local farm. I understand that not everyone can afford to eat the way I do. However, if I can afford to eat better or healthier, or in a way that enriches the local community or ecosystem, then I do so.

I don't eat pesticides or herbicides. It is not necessary to do so. Contrary to popular belief, these poisons as actually applied have not been tested for safety on animals, humans, or the ecosystem. Seriously: the chemicals they test on animals and humans are then mixed with other "stabilizing" or "inactive" chemicals prior to use in the field; the end result changes the properties of the pesticides and herbicides -- they become more deadly -- and that end result has not been tested on animals or humans. I also don't take drugs that have been on the market for less than 10 years (and thus have 20-25 years of testing behind them). Did you believe tobacco-farming corporations when they valued profit over people and said smoking was good for you, or when they said for decades that it wasn't harmful? Why would you believe chemical-making corporations then? I don't eat plants covered in poison (or that produce poison internally that kills critters we need for our ecosystem), I don't eat meat from animals that eat such poison, or that are sick, or that are raised on feed that is a "closely guarded secret". I don't feed my family milk that has growth hormones either.

Did you know you can leave seeds in the sun to accelerate mutations, then select among the resulting test crops for better yield while preserving genetic diversity, rather than use a corporation's monoculture which nature simply adapts to? You see, "exposing plants to UV light" isn't patentable and doesn't yield patentable produce. It's true that without poisons, bugs will eat some of the plants. The portion of a crop that nature reclaims is the cost of doing business in her neck of the woods. It's only common business sense to diversify so that a single crop or market failure won't end your operation.

It turns out that when I look at the cost distribution of my food consumption, it more closely matches the ratios of food one should consume: less meat and fat, and more fruits and vegetables. Instead of prawns, I just had some wonderful big spicy Cajun crayfish, raised locally. The farmer showed me how they were part of a hydroponics system that scavenges (filters) the nasty things from the nitrogen-fixing fish tank before the water is recirculated to feed some of the most amazing tomatoes I've ever tasted. That greenhouse ecosystem also produces aphid-eating ladybugs, of which I bought about a thousand to release in my own garden. Go for crawdads and come back with ladybugs and tomatoes too. I never know what I'll buy when I go "grocery" shopping, but I know one thing: it will not have "secret ingredients". I eat open source food.

P.S. I also brew beer that is free as in freedom and free as in software...

Comment Re:Wait What??? (Score 1) 612

Heisenberg's uncertainty principle allows a small region of empty space to come into existence probabilistically due to quantum fluctuations

I don't remember that in the principle when I took physics. I think they are skipping quite a few steps in the summary.

No no, it's quite simple really:

"It it not improbable that everything suddenly sprang into existence from nothing?"

"Well, yes, that's HIGHLY unlikely!"

"So it is. Therefore, given an infinite metaverse this has certainly occurred. Thus, even if not the origin of this universe, it absolutely is the case in an infinite number of others, including an uncertain number which are indistinguishable from our own wherein we are having the same conversation. Q.E.D."

This is all covered extensively in "Super Fragile Improbabilistic Theoreticalidocious". The improbable motive force of creation was first theorized by none other than the esteemed Douglas Adams himself.

Comment Re:Why is he even excusing himself ? (Score 4, Insightful) 447

As an open-source dev myself, I often wonder why the fuck I do anything useful for others when they'll just turn on me the moment their toys don't work exactly as desired because -- gorsh -- I'm not perfect, though I work very hard to be.

Well, I'm a developer too. Mostly open source. Thing is, I don't bite off more than I can chew. This is a security product. They're not running basic code coverage tools on every line, or input fuzzing. They missed a unit test that should have been automatically generated. This is like offering a free oil-change service boasting A+ Certified Mechanics, then forgetting to put oil in the damn car. Yeah, it was a free oil change, but come the fuck on, man. You really can't fuck up this badly unless you're stoned! I mean, if you change the oil, you check the oil level after you're done to ensure it hasn't been over-filled... You check all the code paths, and fuzz test to make sure both sides of the #ifdef validate the same, or else why even keep that code? "I can accept the responsibility of maintaining and contributing to an industry-standard security product." "YIKES, I didn't fully test my contribution! Don't blame me! I never said I could accept the responsibility of contributing to or maintaining an industry-standard security product!"
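
For what it's worth, fuzzing both sides of an #ifdef against each other doesn't take much code. A toy Python sketch; the two validate functions are hypothetical stand-ins for the two build variants, not anything from the real library:

      import random

      def validate_fast(data: bytes) -> bool:
          # Stand-in for the optimized / #ifdef'd code path.
          return len(data) >= 4 and data[0] == 0x16 and sum(data) % 256 == data[-1]

      def validate_reference(data: bytes) -> bool:
          # Stand-in for the plain reference code path.
          if len(data) < 4 or data[0] != 0x16:
              return False
          return sum(data) % 256 == data[-1]

      random.seed(0)
      for _ in range(100_000):
          blob = bytes(random.randrange(256) for _ in range(random.randrange(64)))
          # Both paths must agree on every random input, or the build is broken.
          assert validate_fast(blob) == validate_reference(blob), blob.hex()
      print("both code paths agree on 100,000 random inputs")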

It's cancerous shit like you that gives open source a bad name. Own up, or fuck off.
