Comment: Re:Technically, it's not a "draft notice" (Score 1) 196

Interesting.

Given MAD, it's hard to imagine another WWII-type scenario (though it would be a bad day if China invaded Taiwan). But I could foresee something like Afghanistan spreading to the entire Middle East, where they couldn't nuke us (at least, not more than a couple of times, nothing like Cold War-style "nuclear winter" barrages) and we'd be strongly pressured not to nuke them. The theater would be so wide that we'd need vast, vast numbers of ground troops.

Comment: Re:Wish I could say I was surprised (Score 1) 148

by TheRaven64 (#47430255) Attached to: Peer Review Ring Broken - 60 Articles Retracted
In the UK, university research departments are assessed based on the Research Excellence Framework (REF, formerly the Research Assessment Exercise [RAE]). Each faculty member is required to submit four outputs demonstrating impact. These are typically top-tier conference or journal papers, but can also be artefacts or examples of successful technology transfer. The exercise happens every four years, so to get the top ranking you need to write one good paper a year. The only incentive for publishing in second-tier venues is meeting other people who might lead to interesting collaborations.

Comment: Re:Wish I could say I was surprised (Score 1) 148

by TheRaven64 (#47430227) Attached to: Peer Review Ring Broken - 60 Articles Retracted
Reproducing published work is often a good task to set first-year PhD students. If they reproduce something successfully, they've learned about the state of the art and are in a good position to start original research. If they can't reproduce it, they've got a paper for one of the debunking workshops that are increasingly attached to major conferences, and that's their first publication done...

Comment: Re: 2 months, but they all quit! (Score 1) 193

by pla (#47430113) Attached to: My most recent energy-saving bulbs last ...
Well maybe your richy rich multi millionaire bulbs last a long ass time

Ever heard of "moving"? I don't own two houses; I've lived in two different places in the past decade.


but the normal $2-5 per bulbs are garbage. I have to replace at least one every 6 months out of aprox 15 bulbs installed in my apt.
[...]I like the energy savings, and lower heat, but old ass bulbs are far more reliable.

First, I buy the Home Depot discount bulk packs, in the four-bulbs-for-$10 range. So yeah, we're comparing apples to apples here.

Second, you have to replace ONE out of fifteen every six months? Do you remember having incandescents at all? You had to replace all of them every six months (except maybe that one lonely attic light you only use for a total of 10 hours per year), and the highest-use ones you could expect to replace every 2-3 months. People actually used to keep a six-pack of replacement bulbs around to deal with one or three dying at the worst possible time. Today? Do people actually keep spare CFLs around? I don't; it seems like a waste of space for how often I need one.

We apparently don't define "reliable" the same way.
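For what it's worth, a rough back-of-the-envelope check bears this out. The figures below are assumptions for illustration, not measurements (roughly 1,000 hours rated life for an incandescent, roughly 8,000 for a CFL, about 3 hours a day per fixture):

    // Rough expected bulb replacements per year across 15 fixtures.
    // Rated lifetimes and daily usage are illustrative assumptions.
    const HOURS_PER_DAY = 3;
    const FIXTURES = 15;

    function replacementsPerYear(ratedHours: number): number {
      // Total burn hours per year across all fixtures, divided by rated life.
      return (FIXTURES * HOURS_PER_DAY * 365) / ratedHours;
    }

    console.log(`incandescent (~1,000 h): ~${replacementsPerYear(1000).toFixed(0)}/year`);
    console.log(`CFL          (~8,000 h): ~${replacementsPerYear(8000).toFixed(0)}/year`);
    // incandescent: ~16/year -- roughly one per fixture per year, more with heavier use
    // CFL:          ~2/year  -- about one every six months

One CFL every six months across fifteen fixtures is pretty much what the rated lifetimes predict; the same fixtures full of incandescents would have you swapping bulbs more or less constantly.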


The balast generally goes and then the bulb is toast. Sometimes they go grey first in the tube, but most are heavily yellowed from heat damage.

Ballasts go because of poor-quality power, nothing more and nothing less (or because you put a non-dimmable bulb on a dimmer circuit, which is the same thing, just self-inflicted). As for heat damage: yes, Virginia, some fixtures designed for burn-to-the-touch incandescents don't make suitable fixtures for CFLs. Specifically, if a fixture has a heat shield on the base and a completely enclosing shade, yeah, you'll cook your CFLs nicely.

Comment: Not any more. (Score 1) 436

I do not use onboard computer DACs. Never found one that I liked.

Yes, I am insanely fussy about sound quality compared to most folks.

It used to be that I would get a sound card in order to get digital sound out over SPDIF, so I could run it through a nice external DAC (typical good ones cost about $1K and up).

Nowadays that isn't needed any more. Integrated sound almost always comes with SPDIF out, and most external DACs have USB capability. So I don't need sound cards to get the sound into my DAC these days.

Comment: Re:Technically, it's not a "draft notice" (Score 1) 196

The closest we get to that is the airport, where rights have been considerably and visibly curtailed (as opposed to the comparatively invisible loss of rights due to government intrusion in electronic communications). People seem to have accepted that more or less gracefully: they bitch, but it's not seen as a massive imposition on most people's daily lives.

I don't know if we'd ever get to the point of rationing food. Even if we declared a full-scale war, technology means we grow a lot of surplus food in this country. Prices might rise, but I don't think we'd ever see "grow victory gardens" posters as we did in the last unlimited war.

Oil, however, would skyrocket, and technology might be severely curtailed. It would be interesting to see how people reacted to that. It's hard to say whether that would be a bigger factor than outrage at a draft of manpower. In Korea and Vietnam, a lot of the public seemed to take the draft with equanimity since it came without the kind of rationing we saw during World War II.

Comment: Re:As plain as the googgles on your face (Score 1) 56

by Sloppy (#47427477) Attached to: The Future of Wearables: Standalone, Unobtrusive, and Everywhere

As intrusive as the Google Glass has proven to be, it will only be worse when observation recording tech is more difficult to detect.

I disagree; I'd expect the exact opposite: when people stop noticing, they will stop caring. It won't be perceived as intrusive anymore, and people will be less annoyed by it.

It's the conspicuousness of the camera in Google Glass, the constant reminder that you might be recorded, that makes most people feel creeped out. For the decade leading up to that product, nobody cared about small, cheap camera tech itself. And people walk or drive by fixed-position cameras all the time and don't give a fuck there either. People's behavior shows that "intrusiveness" happens when a camera looks like a camera, and I suspect it also has something to do with being face-level, literally "in your face," so that you're making eye contact with it, unlike less conspicuous cameras. It was never about privacy; it's some aspect of self-consciousness related to privacy, but a different thing.

You might say, "Maybe you don't care, but I sure do. Hell yes it's about privacy." Of course you say that. But I'm talking about how people behave and the emotions they display, not the innermost secret thoughts that they're too terrified to express in voting booths or policy decisions, yet are happy to air on the Internet.

You know, the Internet, where they don't have a camera in their face making them all self-conscious! The Internet, where instead of a terrifying 1x1-pixel image that makes you think "WTF is that? That's weird! Are you watching me?" you now see a bunch of "like buttons," which are obviously for liking things, not for getting your browser to send a request to an unrelated tracking server.
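(For anyone who hasn't watched this happen, here's a minimal sketch of the mechanism; the tracker.example hostname, the query parameters, and the firePixel name are all made up for illustration. One line of third-party markup or script is enough to make the browser report the visit, cookies and all:)

    // Illustrative sketch only: how a third-party "pixel" gets a browser
    // to ping a tracking server.
    function firePixel(event: string): void {
      // Creating an <img> is enough: the browser fetches the URL (sending
      // any cookies it already holds for that domain), even though the
      // 1x1 image is invisible to the user.
      const img = new Image(1, 1);
      img.src =
        "https://tracker.example/pixel.gif" +
        `?event=${encodeURIComponent(event)}` +
        `&page=${encodeURIComponent(location.href)}`;
    }

    // A "like button" embed does the same thing less conspicuously: the
    // iframe or script it loads is itself a request to the social
    // network's servers.
    firePixel("pageview");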

In addition, there's a certain inevitability about it all. The cameras have been there a long time, there are more today, and there will be even more tomorrow. You can't do anything about it, except stay at home. So you'll either accept it or you'll go insane and get selected out. You'll handle it. (Contrast that with Google Glass, the one small camera out of the hundreds out there that you actually recognize, and one rare enough that there's little social cost to shunning it. With GG you can refuse to accept it and still stay within social norms, so GG is different.)

Comment: Re:Technically, it's not a "draft notice" (Score 1) 196

I recall some talk during the lead-up to the Afghan war about the potential for a draft. It wasn't clear at the time just how big that particular conflict would get; it wasn't impossible to imagine it turning into a World War-sized scenario against a lot of Islamic countries. The resulting conflicts were small compared to that, but we had to scale up the military substantially, and if they'd grown any bigger we'd have needed a draft.

Now that women are allowed access to combat positions, it's going to be very hard to exempt them from a draft should one be necessary. I can't conceive of the legislature passing any such bill right now (I can't imagine this Congress passing any non-trivial bill, and I don't see that changing), but a wise legislature would want to do that ahead of need rather than after the fact. If women are going to be drafted, you'd need to start registering them now.

I sincerely hope that it's never necessary. And if a war of that scope does happen again, we'll probably be a lot less selective with our weapons of war. (Afghanistan and Iraq were fought house-to-house because, as bizarre as it sounds, that was a way of reducing civilian casualties, at least compared to just flattening entire cities as was done in World War II.) So we may well not have a draft even in a bigger conflict. But I think that, while it's politically impossible, a really good pragmatic case could be made for requiring Selective Service registration for everybody right now.

Comment: Re:Most humans couldn't pass that test (Score 1) 274

by jfengel (#47426033) Attached to: The Lovelace Test Is Better Than the Turing Test At Detecting AI

To me, this seems to cut to the heart of it. AI is commonly conceived of as trying to mimic human intelligence, while there are cognitive tasks that cats and even mice can do that prove too hard for computers. A cat can recognize a mouse with essentially 100% accuracy, from any angle, in an eyeblink. No computer would come close, and the program that came closest wouldn't be a general-purpose object matcher.

Vertebrate brains are pretty remarkable. Human brains are an amazing extra step on top of that. We don't know exactly what that is in part because we don't really understand the simpler vertebrate brains. IMHO, we won't have a good mimic for sapience until we've gotten it to first do sentience. We don't have to rigorously follow the evolutionary order, but it seems to me that conversation-based tests are rewarding the wrong features, and even if they get better by that definition they're not getting us any closer to the actual goals of understanding (and reproducing) intelligence.

Comment: Re:Tannenbaum's predictions... (Score 1) 127

by TheRaven64 (#47425531) Attached to: Prof. Andy Tanenbaum Retires From Vrije University
Predicting that x86 would go away was more wishful thinking than anything else. At the time, Intel had just switched from pushing the i960 to pushing the i860, and would later push Itanium, as x86 replacements (their first attempt at producing a CPU for which it was impossible to compile efficient code, the iAPX 432, had already died). Given that Intel was on its second attempt to kill x86 (the 432 largely predated anyone caring seriously about x86), it wasn't hard to imagine that it would go away soon...

Comment: Re:A great writer (Score 2) 127

by TheRaven64 (#47425431) Attached to: Prof. Andy Tanenbaum Retires From Vrije University
I found Modern Operating Systems better than the Minix book. The Minix book tells you exactly how a toy OS works in detail. Kirk McKusick's Design and Implementation of the FreeBSD OS (new version due out in a month or two) tells you how a real modern OS works in detail. Modern Operating Systems gives you a high-level overview of how modern operating systems work and how they should work. If you want to learn about operating systems, I'd recommend reading the FreeBSD D&I book and Tanenbaum's Modern Operating Systems and skipping the Minix book (which was also a bit too heavy on code listings for my tastes).

Comment: Re:Does this mean the death of Minix3? (Score 1) 127

by TheRaven64 (#47425395) Attached to: Prof. Andy Tanenbaum Retires From Vrije University

I feel it necessary to point out, though, that OS X is not a microkernel system comparable to Minix

While this is true, it's worth noting that a lot of the compartmentalisation and sandboxing ideas that most of the userland programs on OS X employ (either directly or via standard APIs) have roots in microkernel research. OS X is in the somewhat odd situation of having a userspace that looks a lot more like a multiserver microkernel system than its kernel does...
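As a rough illustration of what "via standard APIs" means here, a minimal sketch in the spirit of those sandboxing facilities: a long-running service that, once set up, drops its ability to reach anything it wasn't handed up front and then only services requests over the channels it already holds, which is essentially the multiserver-microkernel discipline applied to an ordinary userland process. The function and type names below are invented for the example, not an actual OS X API:

    // Hypothetical sketch: a "server" process confined to the capabilities
    // it was given at startup, answering requests only over those channels.
    type Capability = { readonly name: string; call: (req: string) => string };

    class ConfinedServer {
      // Everything the server may touch is passed in explicitly; nothing is
      // reached via ambient authority (global files, sockets, etc.).
      constructor(private readonly caps: ReadonlyMap<string, Capability>) {}

      handle(target: string, request: string): string {
        const cap = this.caps.get(target);
        if (!cap) {
          // No capability means no access -- the moral equivalent of the
          // sandbox denying an open() or connect() the process never needed.
          return `denied: no capability for "${target}"`;
        }
        return cap.call(request);
      }
    }

    // Usage: the launcher decides what the server can see.
    const logFile: Capability = { name: "log", call: (r) => `logged: ${r}` };
    const server = new ConfinedServer(new Map([["log", logFile]]));
    console.log(server.handle("log", "hello"));      // allowed
    console.log(server.handle("network", "hello"));  // denied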
