Comment different ordering (Score 1) 583

Many other posts explain why certain math-that-doesn't-matter really does matter to CS students.

Part of the problem is just that different fields need to master different math in different orders, even if at the end all of them end up having mastered almost all the same things. The math department's ordering specifically works best for math majors, though, and colleges aren't going to make a special duplicate math department for the exclusive use of the computer science department.

Therefore, we end up getting our math content in a less than ideal order; some things much too early, others too late. It can take a few semesters before all the pieces snap together well. This contributes to the attrition among CS students; certain CS classes may be much harder if you haven't mastered the right math yet, and sometimes certain math classes may be harder if you don't yet understand how you'll be applying it.

Comment Re:Overpromise and underdeliver (Score 1) 164

> I'll be happy enough when it's up to competing with rotating memory, which is a lot more likely. Serial memory is serial memory, and promising to replace Random Access Memory in latency-critical applications like main memory is just nonsense. Either the people putting out these claims are stupid or they think we are.

You won't be asking to access the one bit at the end of an 8KB track (and stalling the CPU waiting for it). Modern chips move a whole line of cache at once - a whole 64 bytes for my current chip. And according to the Wikipedia article on racetrack memory, the tracks are only 10-20 bits each - not terribly serial. If one track can be cycled around as fast as DRAM, then a bunch of tracks can be done in parallel to handle 64 bytes at once just as fast as DRAM. That's probably years away, but it's not as crazy as your instincts say it is.
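A quick back-of-the-envelope sketch of that parallelism argument, using only the numbers already cited above (64-byte cache line, 10-20 bit tracks) - not figures from any actual datasheet:

```python
# How many short racetrack "tracks" would have to cycle in parallel
# to fill one 64-byte cache line in a single pass?

CACHE_LINE_BITS = 64 * 8  # 512 bits per cache line

for bits_per_track in (10, 20):
    # ceiling division: partial tracks still count as a whole track
    tracks_needed = -(-CACHE_LINE_BITS // bits_per_track)
    print(f"{bits_per_track}-bit tracks: {tracks_needed} in parallel")
# 10-bit tracks: 52 in parallel
# 20-bit tracks: 26 in parallel
```

A few dozen tracks cycling together is well within "not terribly serial" territory.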

Comment Re:Meanwhile, in Japan (Score 1) 611

The answer is that *local* population density is not the only issue. You have to not only connect some people to each other, but also connect the people to the content. In South Korea, half the population and all the content are already in the greater Seoul metro area. In Japan, 20% of the population is in the greater Tokyo metro area. When you have a natural hub like that, there's an obvious incremental strategy; wire the core, then gradually plug in more outlying areas. Each step of the plan links more people to the total body of content. Many of the smaller European countries have some of the same advantages - though they are not as densely packed, all the native language content is in the same smallish area.

In the US, the major cities are individually fairly small fractions of the total population (New York City is the biggest at 6-7%); you'd have to not only wire them up, but connect them all to each other. Otherwise you've got a blazing fast connection to your neighbor but dialup-like speed to the server on the other side of the country. And they're not close together at all; they're spread all along thousands of miles of coastline and chunks of the interior.

And, as far as I know, that's how it was - eventually - done. The higher density areas were done sooner and hooked into a very large backbone network. And that's why it's taking so long. To *really* connect the most English speakers to the most English-language content, it also has to be tightly linked in to Canada and the UK.

Comment Re:energy density (Score 1) 570

That condition is sufficient but not necessary.

Part of why energy density (and fuel tank size) matter for combustion engines is that refueling is inconvenient. You have to go to a gas station to do it, and we don't want to do that much more often than once a week. Basically, our needs here are actually being dictated by the other limitations of the system.

Electrical availability is different, and so our needs are not exactly the same as with gas cars. Electrics need one day's worth of range, not one week's. The new ones go 100 miles on a charge, and most people drive well under 50 miles per weekday, so they've even got a very comfortable buffer of extra range.

Comment Re:What I find more interesting... (Score 3, Informative) 138

With a little bit of searching, I come up with about 20 megapixels for a perfect shot on perfect 35mm film, and 12 megapixels for a merely "good" shot. The best film scanners can also go up to 36-bit color depth per pixel.

The best DSLRs I can find on Newegg today are 21-megapixel cameras in the $6000 range and claim a true 14 bits per color channel (which would be 42-bit color), so yes, it seems they've passed 35mm film.

The camera tier under that is about 18 megapixels and 22-bit color, for $800-$1300.

Keep in mind that to get that top-quality data, you'd have to set the camera to save everything raw instead of using lossy compression, so the files will be huge. (A quick, rounded calculation says 110 MB per shot). 35mm film comes in 24-shot rolls, right? So that's 2640 MB for a roll-equivalent. For kicks, looking up the biggest and fastest flash memory card, I see a 64 GB card for over $600 that claims 90 MB/sec write speed. That's equivalent to 24 rolls of film (576 shots), though, and it's reusable. Cheap 35mm film looks to be about $10 for four rolls, so $60 for the same number of shots, but I don't know what higher quality film costs and I'm not sure how to find out. Still, you've come out ahead with the memory card if you fill it more than ten times. Oh, and I left out the cost (time and/or money) of developing and scanning the film.
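The arithmetic above, spelled out. All figures come from the comment itself (21 MP, 42-bit raw, 24-shot rolls, $600 card, $60 of film per card-fill) - none are authoritative specs:

```python
# Raw file size: 21 megapixels at 42 bits per pixel
megapixels = 21
bits_per_pixel = 42  # 14 bits x 3 color channels
raw_mb = megapixels * 1e6 * bits_per_pixel / 8 / 1e6
print(f"raw file: {raw_mb:.0f} MB per shot")  # raw file: 110 MB per shot

# One roll-equivalent of raw shots
shots_per_roll = 24
print(f"roll-equivalent: {raw_mb * shots_per_roll:.0f} MB")  # ~2646 MB (rounds to ~2640)

# Break-even point: card cost vs. cheap film per card-fill
card_cost = 600       # 64 GB card
film_per_fill = 60    # 576 shots' worth of $10-per-4-rolls film
print(f"card pays off after {card_cost / film_per_fill:.0f} fills")  # after 10 fills
```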

Comment Re:Never (Score 1) 606

The altitude plane solution isn't really as simple as it sounds, though. The xy planes are good, but the endpoints of travel require ascent and descent, and in any popular areas that means it'll be the equivalent of making lots of z-axis lane changes. In cities that means everyone is going in every direction at the same time, and there's a constant flow of z-axis lane changes over every popular place. Really good traffic control networks will be essential to pull that off. (And I'm assuming perfect hover control too, otherwise even talking about this gets complicated). You'll also have a few surprisingly densely flown corridors during the morning and evening commute. Draw a straight line over a city map; even if you have 360 xy planes of flight, one for each direction of travel, there will still be many people who both live and work at places on (or very close to) that line. If you have only 36 planes (one for each 10 degree arc) you'll have a lot of "merging" and "crossing" for routes that approach the city.

If you CAN get the whole thing automated, though, that makes for some very nice parking garages with great storage-per-ground-footprint ratios. You could have vehicle entrances and exits on every floor, on every side of the building, with no ramps hogging space inside. You might also be able to utilize a lot of roof space, too, though depending on the technology involved it might be better to use that for solar panels or something.

Comment Re:Helium (Score 1) 184

You don't have to vent or compress the lifting gas. You can use one of these: http://en.wikipedia.org/wiki/Ballonet

It's the air version of what submarines do. Subs have a few tanks that can be empty (full of air) for buoyancy or full (full of seawater) for a dive. The airships are like that, except "full" of air to lose lift, and "empty" to gain lift.

Comment Re:Statistics (Score 1) 467

> If you want to hide your data, the file must ostensibly have some other purpose... something that isn't obviously a lie

Hmm. I bet you could hide a lot in the form of actual executable machine code. Part of it would, of course, be dummy code that doesn't accomplish anything if run (but doesn't do any damage either). Considering "Hello World" C++ compiles to a 450+ KB executable, who's going to notice a few extra functions in a program that has a big executable and a bunch of libraries? Your data hider/"compiler" could be arbitrarily complex at making the hidden data look like real code. Possibly to the level where retrieving the data involves running parts of it in a VM and plucking out a few bytes here and there that get left on the stack.

Further, if the legit runnable program with data hidden in it is something really big like a game, you could probably have extra things stuffed into the non-executable resources of the game. If your big compressed glob of textures *works*, containing all the structure you'd expected out of it, who's going to notice the few MB of textures that the game loads but never uses? Maybe you use an indie game with lots of mods available, and you say some of the mods you downloaded were a bit sloppily put together. It's just that if you jump at the right spot in the right map, you fall out of the world and into the real application.

For extra kicks, it doesn't have to be code for the actual hardware you're running - it could be Java or .NET bytecode. Hell, the Java bytecode could be sitting in your browser's cache.

Comment Re:It has been obvious for years. (Score 5, Insightful) 162

It's not as obvious as it sounds. Some things get easier if you're basically still building a 2D chip but with one extra z layer for shorter routing. It quickly gets difficult if you decide you want your 6-core chip to now be a 6-layer, one-core-per-layer chip. Three issues come to mind.

First is heat. Volume (a cubic function) grows faster than surface area (a square function). It's hard enough as it is to manage the hotspots on a 2D chip with a heatsink and fan on its largest side. With a small number of z layers, you would at the very least need to make sure the hotspots don't stack. For a more powerful chip, you'll have more gates, and therefore more heat. You may need to dedicate large regions of the chip to some kind of heat transfer, but this comes at the price of more complicated routing around it. You may need to redesign the entire structure of motherboards and cases to accommodate heatsinks and fans on both large sides of the CPU. Unfortunately, the shortest paths between points on different layers run through the center, but the center is also going to be the hottest spot, and it's exactly the place that most needs a chunk of metal to carry heat away - metal that would have to displace logic and wiring. In other words, nothing is going to scale as nicely as we like.

Second is delivering power and clock pulses everywhere. This is already a problem in 2D, despite the fact that radius (a linear function) scales slower than area and volume. There's so MUCH hardware on the chip that it's actually easier to have different parts run at different clock speeds and just translate where the parts meet, even though that means we get less speed than we could in an ideal machine. IIRC, part of the benefit of the multiple-clock scheme is reduced heat, too. The more gates you add, the harder it gets to deliver a steady clock to each one, and the whole point of adding layers is so that we can add gates to make more powerful chips. Again, this means nothing will scale as nicely as we like (it already isn't going as nicely as we'd like in 2D). And you need to solve this at the same time as the heat problems.

Third is an insurmountable law of physics: the speed of light in our CPU and RAM wiring will never exceed the speed of light in vacuum. Since we're already slicing every second into 1-4 billion pieces, the amazing high speed of light ends up meaning that signals only travel a single-digit number of centimeters of wire per clock cycle. Adding z layers in order to add more gates means adding more wire, which is more distance, which means losing cycles just waiting for stuff to propagate through the chip. Oh, and with the added complexity of more layers and more gates, there's a higher number of possible paths through the chip, and they're going to be different lengths, and chip designers will need to juggle it all. Again, this means things won't scale nicely. And it's not the sort of problem that you can solve with longer pipelines - that actually adds more gates and more wiring. And trying to stuff more of the system into the same package as the CPU antagonizes the heat and power issues (while reducing our choices in buying stuff and in upgrading. Also, if the GPU and main memory performance *depend* on being inside the CPU package, replacement parts plugged into sockets on the motherboard are going to have inherent insurmountable disadvantages).
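A quick check of the light-speed numbers above: distance covered in one clock cycle, in vacuum and at a rough 0.5c signal speed in wiring (the 0.5c figure is an illustrative assumption, not a measured value - real propagation speed depends on the interconnect):

```python
C_CM_PER_S = 3.0e10  # speed of light in vacuum, cm/s

for ghz in (1, 4):  # "slicing every second into 1-4 billion pieces"
    cycle_s = 1 / (ghz * 1e9)
    vacuum_cm = C_CM_PER_S * cycle_s
    wire_cm = 0.5 * vacuum_cm  # assumed ~0.5c in wiring
    print(f"{ghz} GHz: {vacuum_cm:.1f} cm/cycle in vacuum, ~{wire_cm:.2f} cm in wire")
# 1 GHz: 30.0 cm/cycle in vacuum, ~15.00 cm in wire
# 4 GHz: 7.5 cm/cycle in vacuum, ~3.75 cm in wire
```

So at 4 GHz a signal covers only a few centimeters of wire per cycle - single digits, as the comment says - and every extra layer of wiring eats into that budget.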

Comment Re:Big deal. Radix sort works well IF ... (Score 2, Insightful) 187

It's generally not size of RAM that breaks radix sort; it's the size of cache. Modern processors are highly reliant on cache, which means they're highly reliant on things in memory being in small tight chunks that fit in cache - because cache misses are expensive enough that if you thrash cache badly enough, you may end up running slower than if you hadn't had any cache at all.

Good comparison sorts may start fragmented, but by their very nature each pass of the algorithm makes the data less so. Radix sort is the other way around: it follows pointers (so scarce cache is already being spent on them) that point in more and more fragmented patterns with every pass. That's why, even though radix sort's average speed is theoretically faster than quicksort's, quicksort still wins on real-life hardware. And that's probably why radix sort wins on GPUs - the data fits in the card's dedicated memory, which is already optimized for much more parallel access than main memory.
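A minimal LSD radix sort (base 256) to make the access pattern concrete. The comments about cache behavior are the point: each pass reads sequentially but scatters writes across 256 buckets, which is what hurts once the data outgrows cache:

```python
def radix_sort(nums, width=4):
    """Sort non-negative ints that fit in `width` bytes, least significant byte first."""
    for byte in range(width):
        shift = 8 * byte
        buckets = [[] for _ in range(256)]
        for n in nums:
            # sequential read of the input, but the append lands in whichever
            # of the 256 buckets this byte selects - scattered writes
            buckets[(n >> shift) & 0xFF].append(n)
        # stable concatenation preserves order from earlier passes
        nums = [n for bucket in buckets for n in bucket]
    return nums

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

This is a textbook formulation for illustration, not a tuned implementation; real versions use counting passes and preallocated output arrays, but the scattered-write pattern is the same.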

Comment Re:I have been expecting this for a while now (Score 2, Interesting) 104

IMO, it would be a better idea for it to be the other way around; have a microSD card and low-power Bluetooth inside the watch casing with the battery. Store things like program personalization profiles, bookmarks, ebooks, and maybe some MP3s in there, and authorize other devices to access them. This works out nicely because the watch can probably do all this and still be the same (small) size and have the same long battery life that we expect out of watches, still be waterproof, and since it's strapped to your wrist you're unlikely to lose it or run it through the washer/dryer or have it stolen.

With a decent display, it should also be able to show info from other devices. Specifically, maybe caller ID/text message/email alerts from the cellphone that's sitting in your pocket in silent mode. Since the display is very tiny, this shouldn't make it all that much more expensive, judging by what some mp3 players in the $50 range have today. Nor should it make the watch ungainly large; watches with these features can still be traditional fashion accessories for those that want them to be.

It'd be nice if it all integrated well with the blacked out... smartphone-pda and tablet/bookreader?... in the pic, as well as your laptop/desktop. Since microSD cards already come in 4/8/16 GB capacities, then in addition to automagical profile syncing, you could actually store a decently useful amount of additional data in a watch and have it more easily accessible than the USB flash drive on your keychain.

Comment "Not one Democrat voted against" (Score 4, Insightful) 341

"Not one Democrat voted against" seems an odd way to word it. Why not actually give the totals? There are 435 Representatives, but the given 357 yes / 32 no count only adds up to 389. That means the remaining 46 were conveniently absent or didn't vote. And there are currently 253 Democrats and 178 Republicans in the House, so even if all the nonvoting ones this time were Republican, fully 100 Republicans voted for it. (And I'd like to hear the excuses of the members who didn't vote, from both parties).
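Sanity-checking that arithmetic, using only the counts given above:

```python
seats, yes, no = 435, 357, 32
dems, reps = 253, 178

not_voting = seats - (yes + no)
print(not_voting)  # 46

# Worst case for the bipartisanship claim: assume every absent member was
# Republican, and every "no" vote was Republican (no Democrat voted against).
rep_yes_minimum = reps - not_voting - no
print(rep_yes_minimum)  # 100
```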

I can't call a bill that more than half of the opposition voted for anything but bipartisan, so why word the results in a partisan way? The blame should correctly fall on *all but the 32 who voted no*.

Comment Re:Churchill said it best (Score 1) 270

> Yes UO fans, UO might have been first, it might have done things no other game has done BUT it also didn't manage to get a large number of subscribers. According to wikipedia it PEAKED at 250.000. Eve claims to have reached 300.000 and that game is considered to be niche.

Eve is considered niche primarily because of its gameplay, not because of its size. It was considered niche back when its size was more respectable relative to the leaders (before the WoW juggernaut had been released to skew the charts).

> Just how can it be that so many claim UO is the best when so few played it?

You should also note that the peak of a chart isn't the same as the area under it. The 250k peak tells you the month with the most players at once, but it doesn't tell you how many people have ever played the game. There've been a lot of months since the game came out in 1997, and plenty of churn in the playerbase as new players come in and old players leave, just like any other MMO.

(And, in my opinion, most of the people who'll tell you they liked UO aren't saying they want its original PvP system in other games. For example, when the subject comes up, I mostly see comments about the level-less skill system, crafting, and housing. Historically, the anti-PvP crowd was MUCH louder than the pro-PvP crowd, which is why they changed the game early on...)

Comment Incremental upgrades (Score 1) 543

I had to vote "which component?", since I gradually swap things out over time.

I rebuilt this machine (my desktop PC, primary machine) in summer 2009, because the mobo I'd been using since 2005 finally flaked out in enough ways that I could no longer work around them. I also rotated out the oldest drive in the system (my first SATA drive, a 120GB drive I'd bought ages ago... 2004 at the latest, I think). But a lot of other parts have carried through. My secondary monitor is still the first LCD monitor I ever got, from summer 2000, still kicking though kinda low res. I think the 3.5" floppy drive is actually the one that came in the first computer I personally owned, which I got in 1996; I use floppies so rarely these days that it'll probably just never die. The case is the one I got in 2000 or 2001 and it might have to go in the next major upgrade cycle because the longer video cards are too long to fit in it; then again, I don't game THAT much anymore, so possibly whenever I next need to upgrade the video card, whatever is good at $120ish may still fit in the case. The power supply is the same one I got when I got the case; 350 watts, I believe, which means I'll have to be careful the next time I look at video cards, because I might need more power then. The front intake fans that blow over the hard drives are still the ones that came with it, also. I think I got this mouse in 2002 - a wired USB optical that still works fine, and the USB extension cable it's plugged into is about as old too. The NIC I'm currently using is from 2001 at the latest, possibly from as far back as 1997; the onboard port crashes the computer while torrenting, but this old card can handle everything fine as long as I'm using the right drivers for it. I still use the oldest USB flash drive I have, a 1 GB one I got around 2006, for walking smallish files around. My speakers are from 1997 I think.

I might rotate out the current oldest hard drive in this system this summer. But other than replacing parts that outright break, I hope to be using this for another two or three years before the next major upgrade cycle. My laptop is about two and a half years old now, too, but I expect to keep it until some non-replaceable component breaks.
