Comment Re:American economy (Score 1) 232

China is already selling to everyone willing to buy. There is no missing demand in the global system, no extra demand waiting to pick up the slack if another region falters. There is no market that China has told "nope, we're busy, wait in line." That's what globalization is, and that's why recessions lately end up going global.

Comment Keep it fairly abstract (Score 1) 315

You'll get the "how do I hack?" and "how do I make games?" questions no matter what. But if you do the talk right, those will be flippant jokes rather than serious questions.

Basically you need to open with your way of saying "everything you've seen on TV or in movies is wrong. There are no falling columns of Matrix code controlling everything, and there is no 'hacking' by flying through 3D cities or typing for 30 seconds. World of Warcraft took sixty million dollars and three years to build. Whoever fried Iran's uranium centrifuges, wink wink, spent years planning it."

You don't really have time to go in depth, nor do they have the background for you to show them actual code. So don't worry about those. The key is to pick examples that they'll already be at least a little familiar with and that you're comfortable with, and realistically de-magic those examples a bit.

I'd recommend two flavors of examples before you open for questions. The first one is based around "this is what I do in real life". You can very easily tie your database stuff to, well, every big popular site the students have ever used: Facebook, Google search and maps, any webmail, eBay, Amazon, iTunes, 4chan, Slashdot, and so on. All of that is based on gathering, sorting, storing, and searching through vast amounts of information.

You can do the old dictionary example - use a physical dictionary, solicit a word from the crowd, and then look it up quickly right in front of them. Then point out how it'd take all day if you had to read every line in order from the front or the back. Then point out that Google's database, printed out as dictionaries, wouldn't fit in the entire internal volume of the school - floor to ceiling, wall to wall, all the rooms and hallways and the cafeteria and gym and auditorium and so on - and yet Google needs to do millions of lookups per second. All of that is math. Not Einstein's rocket-ship time-machine math, but stuff not much harder than what they'll see next year in Precalculus. Without the math, it's like the warehouse at the end of Indiana Jones, where cool things go to die (because no one can ever find them again).
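
For your own prep rather than the slides, the dictionary trick is really just binary search. A minimal Python sketch, with a made-up word list, showing why halving a sorted list beats reading it front to back:

```python
# A sorted "dictionary" - the halving trick only works because it's alphabetized,
# just like the printed one. (Word list is made up for illustration.)
words = sorted(["aardvark", "banana", "cat", "dog", "eagle", "fish", "zebra"])

def linear_lookup(word):
    """Read every entry from the front - like scanning every page in order."""
    steps = 0
    for entry in words:
        steps += 1
        if entry == word:
            break
    return steps

def binary_lookup(word):
    """Repeatedly split the remaining range in half - like flipping to the middle."""
    lo, hi, steps = 0, len(words), 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if words[mid] < word:
            lo = mid + 1
        else:
            hi = mid
    return steps

print(linear_lookup("zebra"), "reads vs", binary_lookup("zebra"), "halvings")
# With a billion entries, linear is up to a billion reads; binary is about 30.
```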

The second one would be any common-but-hard problem in games. Something like pathing AI. You don't need to have actually written those programs; the idea is that you can explain it's too complex to calculate every possible path and pick the best one. The gamers in the crowd will grasp this fairly well, because they've all seen games where the pathing sucked, or where the third person viewpoint kept blocking them, or where the AI enemies seemed to cheat, or where the interface was poorly designed and you could never find what you needed in its maze of nested sub-menus. Again, it's all math. But it's hard-but-interesting math. The computer can do the calculations, but you need to know what its limits are, what calculations need to be done to do the work you want, and how to tell the computer to do those. You cannot say "computer, make me a sandwich"; you've got to write it a cookbook first.
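
If you want a toy version in your back pocket, here's a minimal pathfinding sketch in Python - breadth-first search on a made-up 5x5 grid. Real games deal with maps and constraints vastly bigger than this, which is exactly why "just try every possible path" doesn't work:

```python
from collections import deque

# 0 = walkable, 1 = wall. Real game maps are millions of times larger and have
# moving obstacles, which is why brute-forcing every path is hopeless.
grid = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def shortest_path(start, goal):
    """Breadth-first search: explore outward one step at a time."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
               and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no path at all - this is when the AI "looks stupid"

print(shortest_path((0, 0), (4, 4)))
```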

Comment Re:Probably Not (Score 1) 314

> hyper accelerate us at higher than gravitational effect of earth
> 2G acceleration

I don't know why you think we need these things.
Assuming for the moment that getting to a cruising speed of 90% of lightspeed is practical and desirable, time dilation cuts the cruise part of the trip roughly in half for the crew. But even rounding down, the nearest star is 4 light years away, so even with instant acceleration and deceleration it would still take more than 2 years to get there from the crew's point of view (and, viewed from Earth, over 4 years).
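
Rough numbers, as a sketch (taking the nearest star at roughly 4.2 light years and ignoring the acceleration and deceleration phases entirely):

```python
import math

beta = 0.9                      # cruising speed as a fraction of c
distance_ly = 4.2               # roughly the distance to Proxima Centauri

earth_years = distance_ly / beta            # trip time as seen from Earth
gamma = 1 / math.sqrt(1 - beta**2)          # Lorentz factor, ~2.3 at 0.9c
crew_years = earth_years / gamma            # proper time for the crew

print(f"Earth frame: {earth_years:.1f} yr, crew frame: {crew_years:.1f} yr")
# Roughly 4.7 yr from Earth and just over 2 yr for the crew, matching the estimate above.
```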

Doing the acceleration and deceleration phases at 2G instead of 1G would cut the time needed for those parts in half, but there really isn't much benefit to it. (Checking the math on another site, it only actually saves you a bit over a year.) Since the ship already needs to take a couple years to get there, a couple years to get back, and would presumably spend a couple years exploring the other system, cutting two years off the round trip doesn't matter much. Either way, you have to have solved the crew's life support issues with heavy recycling. And if you've solved the life support issue, then if a 2G mission is viable timewise, so is a 1G mission. Likewise, even if you can aim for a higher cruising speed (for even better time dilation), whether you're at 1G or 2G you're going to spend years accelerating and decelerating.
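
For the burn phases themselves, the standard constant-proper-acceleration formulas give the flavor of it; a rough sketch (ignoring fuel entirely and assuming a clean burn from rest up to 0.9c):

```python
import math

C = 299_792_458.0               # m/s
G = 9.81                        # m/s^2
SECONDS_PER_YEAR = 365.25 * 24 * 3600
BETA = 0.9                      # target cruising speed, as a fraction of c

def burn_times(accel_g):
    """Time to go from rest to BETA at constant proper acceleration."""
    a = accel_g * G
    gamma = 1 / math.sqrt(1 - BETA**2)
    coord = (C / a) * BETA * gamma          # time in Earth's frame
    proper = (C / a) * math.atanh(BETA)     # time the crew experiences
    return coord / SECONDS_PER_YEAR, proper / SECONDS_PER_YEAR

for g_load in (1, 2):
    coord, proper = burn_times(g_load)
    print(f"{g_load}G burn to 0.9c: {coord:.2f} yr (Earth), {proper:.2f} yr (crew)")
# 1G: ~2.0 yr Earth / ~1.4 yr crew per burn; 2G halves both. A one-way trip has
# two burns, which is where the 'saves you a bit over a year' of crew time comes from.
```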

And that is enough, by itself, for humanity to outright take over the entire galaxy. Because although the galaxy is 100,000 light years wide and 10,000 light years thick, you only have to go from one star to the next. That means you can do it in easy hops of 3-10 light years at a time.

I don't mean to say that 3-10 light years is easy, just that we don't need more than 1G acceleration to do it. We do need better tech than we have right this moment - I left all of that out of the post to keep it to a reasonable length - but we don't need magical warp drives. Also, that other thread from a day or two ago - about average healthy lifespan hitting 150 years - would certainly help too.

Comment Re:What Does This Mean? (Score 1) 414

Well, with a few more digits of pi, you could get the error down to some arbitrarily small fraction of one Planck length. Then you're well below the absolute hard limit of measurability, and therefore the circle really would be perfect.

Hmm. A bit of googling, and someone else has already done the calculations. Apparently that only requires 61 digits of pi.

That's only the visible universe, though. You might want a few more digits, just to be sure. You're still not going to need ten trillion digits, though. Probably not even a hundred digits.
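
The back-of-envelope version of that check, using rough values (about 8.8e26 m for the diameter of the observable universe, about 1.6e-35 m for the Planck length):

```python
import math

diameter_m = 8.8e26        # observable universe diameter, roughly
planck_m = 1.6e-35         # Planck length, roughly

# The error in the circumference is about diameter * (error in pi), so pi needs
# to be accurate to within planck_m / diameter_m of its true value.
needed_precision = planck_m / diameter_m
digits = math.ceil(-math.log10(needed_precision))
print(digits)   # ~62 - the same ballpark as the figure above
```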

Comment Re:Loss of (or difference in) color fidelity? (Score 1) 227

You'd be able to see Imaginary Colors. Note the graph in the top right of that article; normally, two or three cone types respond to any given wavelength, and our brain does some interpretation afterwards.

So, for example, if red and blue cones were shut off, you'd only be able to see green (and, well, also the monochrome from rods). Impossibly pure green at the peak, and duller green at the extremes. And any two wavelengths with the same y value on the green curve of that graph? Those two colors would look identical to you. That means it'd be possible to make a red and blue checkerboard pattern that looks like uniform green to you.

Comment Re:Good (Score 2) 253

Or it could be that demand has grown fast enough to keep pace with growth in supply. This does happen naturally sometimes. The smartphone market has been growing for that entire time, the amount of flash in each phone has been going up, and there's also the tablet market, the ebook reader market, the digital camera market, solid state drives for all those MacBook Airs, and so on. This can keep prices high without monopoly or collusion. If the manufacturers know they're guaranteed to sell everything they make at the current price, then it doesn't benefit them to lower their prices.

If flash fabs didn't cost tens of billions of dollars to build, then yes, we might still be able to argue that manufacturers were colluding to limit production to keep prices high. But given the expense, they've got a pretty good argument that building new fabs is too risky - especially since we're bumping up against the reliability limits of flash now; they can't just double density again.

This is where HP hopes to swoop in with memristor tech and save the day / get rich. They're claiming their test runs are already competitive with flash on performance, with better reliability, and that the tech is nowhere near its limits yet. Theoretically, as soon as they start putting this into production, they'll start grabbing the high end market share, and either the price of flash will crash in response (the fabs are about paid off already, so they have a lot of flexibility to lower prices) or everyone will license from HP and make memristors.

Comment Re:And how is this different than a bank? (Score 1) 436

Banks don't have all the money on hand because it is invested (loaned out). In the long run and on average, the loans are repaid with interest. A bank can and does operate this way indefinitely. It's also regulated as to what percentage of deposits has to be kept on hand, and it's federally insured up to a certain amount per account. The money lent out hasn't vanished off the face of the earth; it exists as debt that someone owes back to the bank.

Ponzi schemes don't have all the money on hand because it doesn't all even exist. That's the key, defining feature of the scheme: some portion of the stated assets are fictitious. The owner of the scheme has been taking some of the money for himself and lying about the returns from the investments (which never happened). Payouts to other investors serve only to perpetuate the illusion that the whole thing is as profitable as claimed.

The poker thing much more closely matches the Ponzi scheme, not the bank. For starters, the players' balances were never supposed to be invested or loaned out. (Consider how the poker game works: people play against each other and the house collects its cut no matter who wins. All the money involved would normally be immediately on hand, because the total amount never changed, only how much of it belonged to whom.) The money was not there because the company owners took it! Gone from the company completely, not invested and not living on as debt owed back to the company.

They were only able to do this for so long because it was operated online; in a normal casino, every player is cashing out their chips on their way out, but on the internet they had virtual chips to leave in their digital account for the next time they played. Thus the owners could spend most of the actual underlying money, and as long as enough new players were putting money into their accounts, the few players that wanted to make withdrawals could be paid out of that new money. You know, just like in a Ponzi scheme.

Comment Re:trial and error is good (Score 1) 85

I'd think this doesn't actually eliminate trial and error. Consider:

Step 1: algorithm suggests what to try first

Step 2a: we also try small variants of everything from step 1

Step 2b: we play around with everything from step 1 and step 2a that wasn't what we wanted, to see if it has other uses.

So unless the algorithm is so good that the first try is always what we were looking for, we'll still end up doing things pretty much the same way as we had been before. We're just starting out with what is hopefully a better first guess.

Oh, and if the suggested materials take many steps to synthesize, we'll also have other stuff to play with - all the intermediate steps, plus any batch of an intermediate step that we botch and end up with something unexpected.

Comment Re:Say waht you will about MS (Score 1) 474

If your goal is to save the environment, please don't bring up batteries.

House batteries don't have to meet the same energy density requirements (both volume and weight) as, say, cell phone batteries or electric car batteries. We don't need to pick them up and move them, and we don't need to store them somewhere especially small.

That frees us to use other tech. (I assume you were referring to either the toxicity of chemical batteries, or the strip mining for lithium? Or both). For example, the stuff from this recent slashdot article: http://hardware.slashdot.org/story/11/06/01/1549209/Using-Flywheels-to-Meet-Peak-Power-Grid-Demands
Those are just carbon fiber flywheels in steel cases.
(And looking up the size of the thing: for a house, one fridge-sized unit would be enough. It's good for storing 25 kWh.)
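
Quick sanity check on that figure, assuming a typical household averages somewhere around 1.2 kW of draw (roughly 29 kWh a day):

```python
storage_kwh = 25.0          # one fridge-sized flywheel unit, per the article
avg_draw_kw = 1.2           # assumed average household draw (~29 kWh/day)

hours = storage_kwh / avg_draw_kw
print(f"~{hours:.0f} hours of typical household use")   # ~21 hours, i.e. most of a day
```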

Comment Re:Seriously - do the GenEd (Score 1) 913

You may *think* that your high school covered all of that, but honestly, they likely did not. Even if it seems like total crap, you'll likely learn things about art, philosophy, English, history and the like that a high school class could never cover.

This is something that really needs to come up more often in these slashdot college discussions. The standards in college are higher. I went to a decent high school, but even at a fairly cheap state college, a one-semester college class ends up covering what would have taken two years in high school. Yes, that means they're also going to seem hard compared to the four years of vacation you just finished in high school, and they're sometimes going to require amounts of work that look absolutely insane compared to high school homework. They're also much, much more interesting; yes, even the mandatory gen ed classes that I originally didn't want to take.

I had maybe one class that I could call fluff. But now that I think back... the final paper for that class was longer than the final paper I had to do for high school AP English. And this was a music class, and I was a CS major and Math minor. (The math for me was what programming was for Penguinisto: the thing I originally didn't want to take, and then ended up doing a lot more of.)

Comment true for a whole century. (Score 1) 453

At the risk of invoking Plato's rant about youth: this isn't very new. The last couple of waves of technology befuddled new users too. Remember all the VCRs permanently blinking 12:00 in the 1980s, followed by microwaves doing the same in the 1990s? And that's just sticking to old jokes about digital clocks. I'm sure most of us who are old enough, or knew others who were, have heard a wealth of similar complaints about other devices of those decades... and similar things about cars dating to several decades before that (not just the maintenance issues, but even simple operation issues like finding the lights and wipers on different models, or driving a manual shift vehicle at all).

Some of these even apply in the other direction in time; today's youth would not find many devices from the 1960s intuitive at first contact either. Record players and typewriters (and adding machines) come to mind as things one might have trouble using without instruction, especially if they weren't already set up and ready to use.

Comment Re:On vacuum tubes. (Score 2) 347

Well, yes and no. The physics is unforgiving; as Kaku says, we're going to hit a transistor shrink wall. At that point, the easy advances are over.

The transistor shrink wall isn't the same thing as peak computation power, only a predictor of it. We have room for advancement in how well we use the transistors we have; once we've got them as small as possible, we can still improve how densely we pack them and how efficiently we utilize them for computation. Those problems are harder to solve, so progress on them hasn't been as fast as the transistor shrinks have been. But Kaku's conclusion does partially still stand even then, because the pace of improvement is going to drastically slow down. And it will be a very interesting time to be a computer engineer or computer scientist when that happens, as we're likely to resume trying wildly different experimental architectures to eke out some more improvements.

Some paths are already kind of obvious. From ARM's successes, we can see that we can get the power requirements lower, which then means we can have more cores. From AMD's upcoming chips, we can see that cores can be partly merged (to reduce the amount of idle duplicate hardware). There's room for improvement in software, too, of course. But to put it in mathy terms: there exists a Most Efficient Computer that can be made with transistors, and the pace of advancement is going to slow down as we approach it. I don't know how much time it'll take between getting the smallest transistors and getting close enough to their optimal use; I'm guessing at least another decade (so, until 2030), but that's a very loose guess.

Note that this speculation all goes out the window if we figure out another approach entirely, like quantum computing, or replacing the transistor entirely with something that can get us a better calculation density.
