Comment Re:Combined speed? (Score 4, Informative) 496

Your physics makes no sense. Why is this modded informative? The ground is not a magical reference point!

If two cars travelling in opposite directions at 40 MPH slam into each other, that's exactly equivalent, in terms of energy dissipation and momentum transfer, to one car travelling at 80 MPH slamming into a stationary vehicle. Each vehicle, in its own reference frame, sees the other vehicle approaching at 80 MPH.

Think about it: if two identical cars crash, and one is stationary, then for a moment (before they come to a stop due to friction against the pavement) they'll be moving together at half the speed of the moving car before the crash. One car goes from 80 MPH to 40 MPH (a 40 MPH difference); the other goes from 0 MPH to 40 MPH (a 40 MPH difference).

This is exactly equivalent to each car going from 40 MPH to 0 MPH (a 40 MPH difference).

When you're working out simple kinematics like this you should be starting with momentum, which is linear with velocity. You can work out how much energy is released afterwards; you'll see that it works out:

(1/2) * (1500 kg) * (36 m/s)^2 = 972 kJ -- kinetic energy of the moving car at 80 MPH
(1/2) * (1500 kg) * (18 m/s)^2 * 2 = 486 kJ -- kinetic energy left after the crash: 2 cars at 40 MPH
972 kJ - 486 kJ = 486 kJ -- kinetic energy dissipated in the crash

(1/2) * (1500 kg) * (18 m/s)^2 * 2 = 486 kJ -- kinetic energy in 2 cars at 40 MPH
(1/2) * (1500 kg) * (0 m/s)^2 * 2 = 0 kJ -- kinetic energy left after the crash: 2 cars at 0 MPH
486 kJ - 0 kJ = 486 kJ -- kinetic energy dissipated in the crash

(Yes, kinetic energy is 1/2 mv^2, not mv^2!)
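If you want to check it yourself, here's a quick C sanity check using the same 1500 kg mass and the 36/18 m/s approximations above (the mass is just my illustrative figure, not anything official):

```c
#include <stdio.h>

/* Kinetic energy in joules: (1/2) * m * v^2 */
static double ke(double mass_kg, double v_ms) {
    return 0.5 * mass_kg * v_ms * v_ms;
}

int main(void) {
    const double m   = 1500.0;  /* per-car mass, kg (assumed, as above) */
    const double v80 = 36.0;    /* ~80 MPH in m/s */
    const double v40 = 18.0;    /* ~40 MPH in m/s */

    /* Scenario A: one car at 80 MPH hits a stationary car;
       both move off together at 40 MPH (momentum conservation). */
    double before_a = ke(m, v80) + ke(m, 0.0);
    double after_a  = 2.0 * ke(m, v40);

    /* Scenario B: two cars at 40 MPH collide head-on and stop. */
    double before_b = 2.0 * ke(m, v40);
    double after_b  = 0.0;

    printf("A: dissipated %.0f kJ\n", (before_a - after_a) / 1000.0);
    printf("B: dissipated %.0f kJ\n", (before_b - after_b) / 1000.0);
    return 0;
}
```

Both scenarios come out to 486 kJ dissipated, which is the whole point.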

Comment Re:A compelling Linux on ARM netbook will worry MS (Score 3, Insightful) 521

If the ARM had equal processing power, but five times the battery life, they'd have a compelling product.

Well, it sort of does. Battery life and CPU power are actually somewhat convertible.

When the CPU isn't doing work, its power consumption drops considerably -- if you have two CPUs with the same designed maximum consumption, but one has twice the computing power available, then for the same workload the faster one will use (a little bit more than) half the energy.
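A back-of-the-envelope sketch of that race-to-idle arithmetic in C -- the wattages and workload are made-up illustrative numbers, not measurements from any real chip:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical figures, purely for illustration. */
    const double p_active = 5.0;    /* W drawn at full load (same cap for both CPUs) */
    const double p_idle   = 0.5;    /* W drawn while idle */
    const double hour     = 3600.0; /* seconds */
    const double work     = 1800.0; /* units of work to get through in that hour */

    /* CPU A does 1 unit/s, CPU B does 2 units/s: same max draw, double the throughput. */
    double busy_a = work / 1.0;     /* seconds spent at full load */
    double busy_b = work / 2.0;

    double energy_a = busy_a * p_active + (hour - busy_a) * p_idle;
    double energy_b = busy_b * p_active + (hour - busy_b) * p_idle;

    printf("slower CPU: %.0f J over the hour\n", energy_a);  /* 9900 J */
    printf("faster CPU: %.0f J over the hour\n", energy_b);  /* 5850 J */
    return 0;
}
```

The faster CPU lands at a bit over half the slower one's energy, because the idle power never quite goes away.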

Of course the real picture is not so rosy: a CPU that uses that little power to start with probably accounts for less than half of the system's total power consumption, and the workload is likely to grow if you have more CPU available (people watch video fullscreen instead of windowed, games will generally render as fast as they can and use all available CPU, etc.).

Comment Re:Misses the point (Score 2, Insightful) 371

Actually, car trips should show a similar curve, since city driving has the highest risk of accidents. Once you get on the highway your accident risk goes down considerably. Of course, if you do get in an accident, the chance it'll be fatal for you goes up if it's on the highway -- the fact that car accidents are not usually fatal is an extra wrinkle in the whole thing...

It would be interesting to actually run the numbers.

AMD

AMD's DX11 Radeons Can Drive Six 30" Displays 439

J. Dzhugashvili writes "Whereas most current graphics cards can only drive a pair of displays, AMD has put some special sauce in its next-generation DirectX 11 GPUs to enable support for a whopping six monitors. There's no catch about supported resolutions, either. At an event yesterday, AMD demonstrated a single next-gen Radeon driving six 30" Dell monitors, each with a resolution of 2560x1600, hooked up via DisplayPort. Total resolution: 7680x3200 (or 24.6 megapixels). AMD's drivers present this setup as a single monitor to Windows, so in theory, games don't need to be updated to support it. AMD showed off Dead Space, Left 4 Dead, World of Warcraft, and DiRT 2 running at playable frame rates on the six displays."

Comment Re:Fascinated by the porting aspect (Score 1) 78

Obviously much of game design is not really "science", but other design fields still do carefully analyze existing works, try to identify which elements specifically mattered, etc.;

Not to detract from your main point, but give them some credit: game designers totally do this. The field is still relatively young, and you're right that there's not the same body of literature yet as there is for, say, graphic design, but that's got more to do with the fact that you can't yet get tenure at a major university teaching game design than with anything else.

The game designers I work with can certainly break down what makes a game addictive and fun. Give them a chance and they'll talk your ear off about compulsion loops and memorable moments...

Comment Re:And this differs how? (Score 1) 371

They're already at the mercy of the holder of the key for signing games. Unless they want their release restricted to homebrew / modchipped consoles, there would be no difference.

Indeed. Retailers and publishers have a bit of flexibility on pricing now, but in practice the console makers have a pretty big influence on how much games end up costing. Old games don't get cheaper because of some competitive thing between game retailers; it's a market segmentation strategy, and it makes just as much sense under future electronic retailing monopolies as it does in the current system.

Once you sift out the chaff, the article reduces to the last couple of paragraphs, where the author complains that he won't be able to trade in used games anymore. The archivist in me does despair a little about this, about the increasing effectiveness of DRM in games, and about the fact that of the games without serious DRM, more and more are online and require a working server -- in 100 years will anyone be able to play WoW? WoW maybe, but any of the less popular MMOs, probably not.

That said I think the author's mostly complaining because he's cheap. You got a game for 74 cents? Great! Go you. The developers that went on to not sell more copies of those three games you traded in probably love you.

Comment Re:Humour is too expensive (Score 1) 202

Why is Hollywood so much better at it?

If I had to guess, I'd say probably fewer people in the critical path -- a couple of actors, a writer, and a director, rather than a producer and a team of 5-20 designers (including lead and narrative) -- plus the fact that you're generally producing fewer hours of content with a film, so each hour can be more polished, and that you live and die on story and humour rather than gameplay.

But I'm not in film, and although I've been in games for a while and know a bit about how things are generally done, I've only seen my slice of the industry in depth.

Comment Re:Humour is too expensive (Score 4, Interesting) 202

Speaking as someone in the industry...

Nobody but the cheapest developers recycles assets. Slight differences in pipeline, technology, art direction, etc. conspire to make it not happen even if you're trying to share assets between projects.

Also, decent writers will work for peanuts. One or two narrative designers who are being paid as much as a mid-level designer make little difference to the bottom line on a team of 50-200 developers. Getting everyone to agree on who the good writer is, well, that's harder... getting a substantial team of designers who all have different senses of humour to form some kind of consensus and maintain a shared, consistent vision with the writer, that's nigh impossible.

Comment Re:When will MS learn? (Score 3, Insightful) 486

no ISO body has deprecated functions like close(2), open(2), read(2), and write(2)

That's correct, because ISO C (and C++) never included those functions in the first place. POSIX != ISO C. (Not that MSVC is on any kind of reasonable schedule for keeping up with ISO standards, but that's a whole different issue...)

Basically MS is deprecating their own terrible implementation of some POSIX compatibility. This is actually required for ISO C compliance: the compiler is not supposed to define a bunch of extraneous functions in the global namespace, because they might conflict with your names. Once those functions are removed entirely (and I believe you can #define them away right now) you can implement your own compatibility functions for software you're porting to Windows.

Now, this is all entirely separate from the SDL warnings GP is complaining about, which show up when you use standard ISO C functions like strcpy, sprintf, and apparently now memcpy. Which, honestly, I wish weren't quite so irritatingly implemented, although I'm torn because using those functions really is terrible.

It's not really worth getting up in arms about, though, because JESUS CHRIST there's a compiler flag to disable the warnings -- just put it in your makefile and quit bitching already!
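If memory serves, this is all it takes to silence the strcpy/sprintf-style deprecation warnings (define the macro before any CRT header, or pass it on the command line):

```c
/* Define before any CRT header (or pass /D_CRT_SECURE_NO_WARNINGS on the
   compiler command line) to silence MSVC's "deprecated, use strcpy_s" warnings. */
#define _CRT_SECURE_NO_WARNINGS
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[32];
    strcpy(buf, "no C4996 warning here");  /* would normally warn under MSVC */
    puts(buf);
    return 0;
}
```

There's also /wd4996 if you'd rather just turn off that specific warning number, and (if I remember right) _CRT_NONSTDC_NO_DEPRECATE for the POSIX-name deprecations the parent is talking about.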

Comment Re:So... (Score 1) 326

You don't. An adder is a piece of hardware.

Since we're being smartasses, actually you do, because the 6502 only has an 8-bit adder implemented in hardware. If you want to add 32-bit numbers you need to write a sequence of add-with-carry instructions.
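Here's a rough C sketch of what that looks like -- the byte-at-a-time loop stands in for a CLC followed by four ADC instructions, so it's illustrative, not actual 6502 code:

```c
#include <stdint.h>
#include <stdio.h>

/* Add two 32-bit values one byte at a time with an explicit carry,
   the way a 6502 would chain its 8-bit adder across four ADC steps. */
static uint32_t add32_bytewise(uint32_t a, uint32_t b) {
    uint32_t result = 0;
    unsigned carry = 0;                            /* CLC: clear carry */
    for (int i = 0; i < 4; i++) {
        unsigned byte_a = (a >> (8 * i)) & 0xFF;
        unsigned byte_b = (b >> (8 * i)) & 0xFF;
        unsigned sum = byte_a + byte_b + carry;    /* ADC: add with carry */
        result |= (uint32_t)(sum & 0xFF) << (8 * i);
        carry = sum >> 8;                          /* carry out of the 8-bit adder */
    }
    return result;
}

int main(void) {
    uint32_t a = 0x01FFFFFF, b = 0x00000001;
    printf("%08X + %08X = %08X\n",
           (unsigned)a, (unsigned)b, (unsigned)add32_bytewise(a, b));  /* 02000000 */
    return 0;
}
```

Each pass through the loop is one trip through the 8-bit adder, with the carry threaded from one byte to the next.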

Also I don't know why you'd want to use a broom at all to clean up damp pet food, I always use paper towels. Icky.

Comment Re:Well $27B buys you a lot of panels... (Score 2, Interesting) 416

Running the figures through Google math, starting with a 60" x 42" panel generating 55 W at peak, I calculate a 116-mile by 2-meter strip of solar panels would generate ~12 MW. That's an order of magnitude short... I don't know what kind of duty cycle the 110 MW is required at, but if that's continuous to run the train line, it's only going to be able to operate for an hour a day.
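Redoing that arithmetic in C (panel size and the 55 W peak are the figures I started with above; 1609.344 m per mile):

```c
#include <stdio.h>

int main(void) {
    /* One panel: 60" x 42" at 55 W peak (figures from the post above). */
    const double panel_w_m   = 60.0 * 0.0254;              /* 1.524 m  */
    const double panel_h_m   = 42.0 * 0.0254;              /* 1.0668 m */
    const double panel_area  = panel_w_m * panel_h_m;      /* ~1.63 m^2 */
    const double panel_watts = 55.0;

    /* Strip: 116 miles long, 2 m wide. */
    const double strip_area = 116.0 * 1609.344 * 2.0;      /* m^2 */

    double total_mw = strip_area / panel_area * panel_watts / 1e6;
    printf("peak output: ~%.1f MW\n", total_mw);            /* ~12.6 MW */
    return 0;
}
```

That lands around 12-13 MW of peak output against the 110 MW figure, hence the order-of-magnitude gap.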

It's enough to make one suspicious about feasibility, anyway.

Comment Re:Yes/no (Score 3, Insightful) 187

What's your point? Processors can pipeline across branches just fine, and the main effect of cache is to give a performance boost to smaller code -- code that separates and reuses functions rather than inlining them willy-nilly.

Inlining can still be a win sometimes, but compilers will do that for you automatically anyway...

Comment Re:Opera of the phantom (Score 2, Interesting) 553

Actually the idea makes sense. When they say VM, they mean like Java VM, or .NET runtime VM. The quote you pasted has the goods: "this VM has no means to convert integer to pointer". So you can't make a pointer into your neighbour process' data unless that neighbour process gives you such a pointer, because the only way to get pointers in the first place is from malloc().

This is the basis of security in sandboxed Java applications; it's not controversial or new. IIRC MS Research is working on a similar operating system that uses the .NET runtime -- ah yes, Singularity OS.

The state save on shutdown, far from being the best thing about this OS, is as far as I'm concerned the worst thing. Even if the software written for this thing is bug-free and never corrupts its own state, hardware is not 100% reliable -- memory gets corrupted, disks get corrupted, drivers end up wedged in unexpected states due to flaky hardware.

Imagine if a BSOD-equivalent occurs due to something that got corrupted 30 seconds ago, and that state got persisted to disk. From now on, every time you turn the machine on, you have less than 30 seconds before that exact same BSOD happens. Congratulations, your computer is now useless until you reinstall your OS! Brilliant.

The obvious workaround is, of course, to save program state out regularly as files in a constrained, standard format, which is independent of your program's implementation. Other reasons you might want to do this include upgrading software and interoperation between different applications.
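To make that concrete, here's a toy C sketch of saving state in a constrained, implementation-independent format -- the file name and fields are made up purely for illustration:

```c
#include <stdio.h>

/* Toy application state, saved as plain key=value text so a rewritten
   (or entirely different) program can still read it back. */
struct state { int level; int score; };

static int save_state(const char *path, const struct state *s) {
    FILE *f = fopen(path, "w");
    if (!f) return -1;
    fprintf(f, "level=%d\nscore=%d\n", s->level, s->score);
    return fclose(f);
}

static int load_state(const char *path, struct state *s) {
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    int ok = fscanf(f, "level=%d\nscore=%d\n", &s->level, &s->score) == 2;
    fclose(f);
    return ok ? 0 : -1;
}

int main(void) {
    struct state s = { 3, 1500 };
    save_state("save.txt", &s);

    struct state restored = { 0, 0 };
    if (load_state("save.txt", &restored) == 0)
        printf("restored: level %d, score %d\n", restored.level, restored.score);
    return 0;
}
```

Because the on-disk format is just documented key=value pairs, an upgraded program -- or a completely different one -- can read it back without caring how the original represented things in memory.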

But of course, as soon as you admit that, you admit that the new paradigm is not actually going to be a programming revolution at all, from an application perspective. You have to be able to save your state to a file and restore it; the only difference is that now that code will get executed less often. As a programmer, though, it makes no difference to me whether the code is executed once or a million times -- it's exactly the same effort to write it.

The filesystem is an ugly anachronism in a lot of ways, but it's also really, really technically practical.

That said, I wouldn't be surprised either if we were using VM-based operating systems in 10 years or so. There are some really interesting things you can do with JIT compilation when the OS and application code are not divided by a giant wall. But I do think they'll have filesystems of some sort.

Comment Re:Are IT embargoes even possible? (Score 1) 287

I would imagine they work pretty much the same way bans, embargoes, and tariffs work for all goods: exports and imports are declared by the sender and inspected at the border. The government bodies that deal with imports and exports have been doing it for a really long time.

That's not to say smuggling doesn't happen, but I think by now it's a pretty well-understood problem.

When the ban was put in place, the people behind it surely knew roughly how many printers were likely to be smuggled in from the US anyway, how many would come from sources in other countries, etc. I can believe there wasn't a good reason for passing the law, but assuming they were completely ignorant of the possibility of smuggling is going a little far...
