
Comment Monitors != Lighting (Score 1) 328

Yes, you can't tell the difference between spectra when you are staring directly at the light, so for objects whose color is purely emissive, like TVs and monitors, you can reproduce essentially any color the human eye can see by combining three primary colors.

But this breaks down when you are looking at objects that are reflecting that light, because the way materials reflect light absolutely is wavelength-specific. If you have two lights that appear to be exactly the same color when you stare at them (or shine them against a white wall) but have different spectra, then objects illuminated by those two lights can look very different, because each object absorbs and reflects the two spectra differently. A normal person won't be able to quantify why they look different, but they will know that something is "off" and may describe the lighting in vague terms like mood or character.
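To make that concrete, here's a toy numerical sketch in Python. Everything in it is made up for illustration (Gaussian stand-ins for the cone sensitivities, invented spectra and reflectance); real colorimetry would use the CIE 1931 color matching functions and measured data. It constructs two lights that stimulate the cones identically, then shows a narrowband surface reflecting them very differently:

    # Toy sketch: made-up Gaussian "cone" curves and spectra, purely for
    # illustration (real work would use the CIE color matching functions).
    import numpy as np

    wl = np.linspace(400, 700, 301)              # wavelength grid, nm (1 nm steps)

    def gauss(center, width):
        return np.exp(-((wl - center) / width) ** 2)

    cones = np.stack([gauss(600, 40), gauss(550, 40), gauss(450, 30)])

    def response(spectrum):
        # Perceived tristimulus: integrate the spectrum against each curve.
        return cones @ spectrum * (wl[1] - wl[0])

    light_a = gauss(550, 120)                    # smooth broadband source

    # Light B: three narrow spikes weighted so the cone responses match
    # light A exactly -- a metamer of A.
    spikes = np.stack([gauss(450, 12), gauss(550, 12), gauss(620, 12)])
    m = cones @ spikes.T * (wl[1] - wl[0])
    light_b = np.linalg.solve(m, response(light_a)) @ spikes

    print(response(light_a), response(light_b))  # identical: same apparent color

    surface = gauss(500, 15)                     # reflects only a band near 500 nm
    print(response(light_a * surface))           # but off this surface the two
    print(response(light_b * surface))           # lights no longer match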

So no, you can't fake a lighting spectrum with just 3 primaries, which is why producing good LED lighting has been much harder than producing good LED monitors.

Comment Re:This is a bug not a feature (Score 4, Insightful) 328

For the entire history of the human race, nearly all the lighting we have encountered has been black-body radiation, and a black-body spectrum will always look better and more natural to us than other light spectra. So fluorescent and sodium vapor will finally die off as LEDs become less expensive, but variations in color temperature will never go away. Warm light will always feel cozy and intimate, just as campfires and candles always have. Cool light will always feel a bit dreary, like an overcast day. And daylight spectra will always feel bright and cheerful. Opinions on whether a living room should be bright and cheerful or warm and comforting may vary. But unless we somehow stop experiencing natural lighting altogether, and evolve into Morlocks, variants of black-body light will retain their historical associations.
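If you want to see the difference rather than take my word for it, here's a quick Planck's-law sketch comparing the black-body spectra behind a typical "warm white" (2700 K) and "daylight" (6500 K) rating:

    # Planck's law across the visible band for two common bulb ratings.
    import numpy as np

    h, c, k = 6.626e-34, 2.998e8, 1.381e-23     # Planck, speed of light, Boltzmann

    def planck(wl_nm, temp_k):
        """Black-body spectral radiance at wavelength wl_nm (nanometres)."""
        wl = wl_nm * 1e-9
        return (2 * h * c**2 / wl**5) / np.expm1(h * c / (wl * k * temp_k))

    wl = np.array([400.0, 500.0, 600.0, 700.0])  # blue ... red
    for temp in (2700, 6500):
        s = planck(wl, temp)
        print(temp, "K:", np.round(s / s.max(), 2))
    # At 2700 K the output climbs steeply toward the red end (warm, cozy);
    # at 6500 K it is much flatter across the band (bright, daylight-like).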

Comment Layers of imitation. (Score 3, Insightful) 328

These are really cool. But it did make me chuckle when the article talked about how current LED candelabra bulbs in particular are quite ugly. Candelabra bulbs were made to (poorly) mimic the shape of a candle flame, and now we are attempting to mimic that imitation because we have gotten used to the way it looks :)

Comment Re:No warning ? (Score 2) 204

How often do we need to repeat this mantra to people?

Not quite as often as you think, if it's just an excuse to treat an accessible, but possibly degraded, primary copy as worse than having no backup of your backup at all.

If I have in my possession a ZFS backup with some corrupt nodes, I can still have a provable hash from the Merkle tree for the content I want, which I can then recover from a corrupt primary copy (i.e. the live drive itself) with no concern whatsoever about the corruption, so long as the checksum matches.
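In sketch form, the idea looks like this (a minimal Python illustration of hash-verified recovery; not ZFS's actual on-disk format or tooling):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    # Two data blocks and a tiny two-leaf Merkle tree over them.
    blocks = [b"block 0 contents", b"block 1 contents"]
    leaves = [h(b) for b in blocks]
    root = h(leaves[0] + leaves[1])

    # Say the backup's copy of block 1 is corrupt but its hash tree
    # survived: any suspect copy (e.g. the live drive) can be verified
    # block by block against the trusted hashes.
    def recover(candidate: bytes, trusted_leaf: bytes) -> bytes:
        if h(candidate) != trusted_leaf:
            raise IOError("this copy is corrupt too; try another")
        return candidate     # a hash match proves the content is intact

    good = recover(b"block 1 contents", leaves[1])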

Anything that can go wrong in a primary copy can pretty much also go wrong in a backup copy. Hot media is more likely to fail due to write errors (or overwrite errors) whereas cold media is poor at prompt notification of physical degradation.

The rule of Occam's orthogonality says don't brick the primary device unnecessarily.

Comment Re:It is too much code to secure. (Score 1) 69

Here's the best part: they can audit the security of nearly a half a million lines of code in "several months".

You don't need to look for kidney stones in bone marrow. Most likely what they are doing is better described as "screening" rather than "auditing", even though the latter is the conventional word.

Algorithms (such as ciphers) tend to be fairly easy to cover with test suites, whereas memory management and handling of randomness sources are both fraught with peril and difficult to formally test.
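For example, a hash function can be pinned down against a published test vector in a couple of lines (this one is the well-known FIPS 180 vector for SHA-256); there is no equivalent table of right answers for memory handling or entropy sources:

    import hashlib

    # FIPS 180 test vector: SHA-256("abc").
    KNOWN = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
    assert hashlib.sha256(b"abc").hexdigest() == KNOWN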

It really helps to reduce the audit surface if your code-analysis tools can eliminate big chunks of code as purely functional, with no side effects on system state. A purely functional routine would not include code that performs heap-based memory allocation, and would exclude the vast majority of system calls.
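A toy version of that kind of screening, just to show the shape of it (real analysis tools are vastly more sophisticated, and the allowlist here is invented for the example):

    # Invented allowlist; flags any call to a function not known to be pure.
    import ast

    PURE = {"len", "min", "max", "sum", "abs", "sorted"}

    def impure_calls(source: str):
        """Names of called functions that are not on the pure allowlist."""
        return [node.func.id
                for node in ast.walk(ast.parse(source))
                if isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id not in PURE]

    print(impure_calls("def f(xs): return sum(xs) / len(xs)"))  # []
    print(impure_calls("def g(p): return open(p).read()"))      # ['open']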

Even so, I suspect there's a pretty steep gradient on where to direct your best attention to identify misguided coding constructs (approaches that are worse than wrong), if you're not determined to check for kidney stones lurking in bone marrow.

Comment Re:What's TSYNC ? (Score 1) 338

Would have been nice if TFS had included an explanation of what the TSYNC feature is.

This would be inconsistent with masses of people clicking into the discussion thread going "WTF?" and then sticking around to post a comment.

I'd quit Slashdot in a heartbeat (abandoning what limited loyalty remains) if I were willing to wade through the candidates in search of an alternative forum in which the paragraph as a unit of discourse has not yet been un-invented.

Back in grade nine, back in the 1970s, in a school where the majority of students ended up in vocational college, I already held a low opinion of people who charged ahead with the lingo-of-the-day without providing the least context. Slashdot in its current incarnation routinely falls below the personal standards I used to judge my 14-year-old classmates back when Star Wars was the hottest property in known history (I was quietly polite about it, but none of those people became my friends). Every freaking time a Slashdot story does this (i.e. pretty much daily), I have a grade-nine flashback to the least nerd-compatible environment I've ever been forced to endure.

Edge has a pretty good piece today: Yuval Noah Harari in conversation with Daniel Kahneman.

I don't have a solution, and the biggest question maybe in economics and politics of the coming decades will be what to do with all these useless people.

He merely means by "useless" the portion of the population who have no skills at anything that can't be better done by a (recently or soon-to-be-invented) machine.

There's no fixed algorithm for ensuring that one remains a viable member of the "useful" population, but I'm going to continue with my grade-nine policy of gravitating toward those who 1) employ paragraphs when engaged in written communication; and 2) provide adequate background before lapsing into the lingo-of-the-moment.

As I said, there's no fixed algorithm and I might well be wrong, but from where I presently sit, I'm voting as stated on this matter with my entire bag of skin.

Comment brain-damaged simplicity boners (Score 3, Insightful) 277

an hour earlier

An hour earlier than what?

Humans have been phase-locked to the mean solar day for just over 200 out of the last 6 million years.

1883: Railroads create the first time zones

Not even the sun is phase-locked to mean solar time. There's this little detail called the equation of time, whose discovery dates back to the Babylonians, which describes the annual wobble of apparent solar time around mean solar time. Apparent solar time just happens to be the primary zeitgeber for circadian rhythmicity in all mammals (that I've heard of) and a great deal more.
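For the curious, one common curve-fit approximation to the equation of time (the coefficients are a standard published fit, not mine) shows just how large the annual wobble is:

    import math

    def equation_of_time(day_of_year):
        """Minutes that apparent solar time runs ahead of mean solar time."""
        b = 2 * math.pi * (day_of_year - 81) / 364
        return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

    for day in (45, 135, 227, 307):      # mid-Feb, mid-May, mid-Aug, early Nov
        print(day, round(equation_of_time(day), 1))
    # Swings from about -14 minutes in February to about +16 in November.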

The majority of people feel that DST is a bad idea and want it to stop.

Majority of what population? People living north of the 49th? I doubt it.

Majority of people who wish pi were equal to 3 and the earth's orbit were circular? Almost certainly, even though I don't think these two simplicity boners are conceptually compatible.

Comment Re:I have said it before (Score 1) 384

When you begin counting the cost of nuclear, you've got to count ALL the costs. Including, as at Fukushima, basic engineering errors that ultimately cost astronomical amounts years after construction.

Do you know what the lead engineer of the GE design team for the original Fukushima reactor drove around town? A 1959 Edsel Ranger.

Certain mistakes were made back then, in the heyday of OS/360 and the Boeing 707, that we no longer make. Even the outlandish and highly inflated AI claims from the same era (which were held against the entire discipline for 50 years) are now almost becoming reality with deep learning. Times change. Even for AI. Even for nuclear.

Semi-retraction: Although I just made up that bit about the Edsel, I can't actually claim it's a false statement.

Comment Re:I'm healthy... (Score 1) 134

You're somewhat delusional if you believe this was pure fat loss. I regard it as a disservice to give people the impression that this kind of fat loss is either possible or healthy.

At the level of exercise required to sustain a caloric balance of -2700 calories per day over four months, the body would become severely protein-challenged. Even converting fat to energy increases protein demand, as the organelles doing the burning work hard and wear out.
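Run the numbers, using the common rule of thumb of roughly 3500 kcal stored per pound of body fat:

    deficit_per_day = 2700          # kcal/day below maintenance
    days = 4 * 30                   # four months
    kcal_per_lb_fat = 3500          # common rule of thumb
    print(deficit_per_day * days / kcal_per_lb_fat)   # ~92.6 lb

If the loss really were pure fat, that's over 90 pounds of it in four months.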

What happens with formerly fit individuals who become obese is that they actually retain extremely large reserves of skeletal muscle (obese people tend to have extremely strong legs for practical reasons; it just doesn't seem like it, because hefting their own body weight consumes most of that strength). As such a person goes into an endurance exercise program, they need far less muscle mass than they had starting out.

If his story is true, I bet he lost a great deal of skeletal muscle mass in addition to a lot of fat. The muscle that remained would be extremely fit and efficient, but less strong.

A similarly obese person without the muscular reserve would be flirting with death in attempting to replicate these figures. If his story is even true. And if it is true, why did he quit and put all those pounds back on again? Could it be that his body figured out that the stress of the program was unreasonable to begin with?

Did he actually measure his body composition before and after, or did he just take a weight difference and presume that anyone who exercises that much couldn't possibly have shed any muscle mass?

I don't feel like digging up particulars I last read five years ago, but I distinctly do not recall having ever read anything credible which suggests this level of weight loss can be achieved on a pure fat-burning basis.

Comment Re:Necessary, not sufficient. (Score 1) 99

I think you're misguided. The criteria for patentability has never been bad, and has actually gotten worse since the recent change to "first to file".

Yes it has been, and your following paragraphs demonstrate clearly why this is so.

The problem is it's impossible for anyone to know what can or cannot be patented without spending hundreds of thousands of dollars hiring an entire team of lawyers to search through the back catalogue of patents and inventions and court precedents.

The patent office does not have enough staff to do proper research while a patent is being filed. If they did proper research, they would only be able to approve a handful of patents per year with the number of employees currently working at the PTO.

The problem with the current system is that the PTO has taken the approach of only rejecting patents if they can find documented evidence that someone has done the exact same thing before. If there is a single independent claim for which they can't find exact prior art in a timely manner, then they approve the patent, regardless of how similar it is to other prior art. They deliberately ignore the obviousness of the patent because they don't want to have to defend subjective decisions against appeal.

The recent Supreme Court rulings have forcefully asserted that this is not acceptable. The law clearly states that obviousness is one of the criteria for patentability and therefore the USPTO and courts must take that into consideration when deciding patentability. Furthermore, they have stated that if the improvement that an invention makes on prior art is not patentable by itself, then the invention is not patentable. This is a huge decision because it rules out a ton of "on a computer" and business model patents that combined things that weren't patentable on their own into something that was patentable in aggregate. This second issue is likely to have an even bigger impact as it can be applied more objectively than the first which increases the chances that the USPTO will embrace it. Furthermore, if anything these changes decrease the amount of research the PTO has to perform for an average application.

It simply isn't possible for a small company to defend themselves at all, their only viable option is to settle out of court which inevitably means nobody actually knows whether or not the patent is valid. After years of watching this issue closely I have never seen a small company defend themselves in court. Some have tried, but every single one gives up and settles out of court half way through the process.

Agreed, which is why we need these reforms. They proposed two important changes. The first is to strictly limit how much information the plaintiff can subpoena during discovery. This prevents fishing expeditions and keeps discovery from turning into a war of attrition, which will make defending oneself against patent claims faster and less expensive. The second allows the defendant to challenge the validity of the patent before discovery has taken place, potentially avoiding the vast majority of the expense of defending oneself, if the patent is determined to be invalid under the new post-Alice standards.

Personally I don't see how any reform could possibly fix the problem. There are certainly ways to improve the situation but I don't think anything can truly fix it. I've never seen anybody suggest a viable solution.

I have no illusions that these changes will magically make the patent system perfect. In fact I expect the USPTO and the lower courts to continue to be slow to adopt them, but they address the two biggest issues with the patent system today - the low standards for patents and the cost of defending against them - which is more than I can say about any other proposed change to the patent system in the last 50 years.

Comment Necessary, not sufficient. (Score 4, Interesting) 99

Granted, the biggest problem with the patent system has been that the criteria for patentability have been so loose, and the recent Supreme Court rulings will certainly do more to fix that root cause than the recent patent reform bills. Hopefully these new rulings will improve the quality of patents approved and upheld in court going forward, which is by far the single most important reform needed in the long run.

But in the meanwhile there are more than 20 years of bad patents already granted, and the cost of defending against a patent lawsuit is still far greater than the cost of settling. We need to make it less expensive to challenge existing patents if we don't want them to remain a burden for the next 20+ years. That is exactly what the reform bills were about. They were designed to be complementary to the Supreme Court rulings, addressing a different part of the problem.

Comment Re:Cash (Score 1) 230

Yeah, but the cash registers don't record anything. That eliminates all the automated tracking of your purchases, which is 99 percent of the problem. It is still possible to track what you buy through manual investigation, but that would be true even without the ATM info (security cameras correlated with register records, etc.).

Comment choose your lens mount carefully (Score 1) 407

You can always find another language that is better at it on every single aspect you look at. Jack of all trades master of none.

Master of Jack is the one thing where no other compiled language triumphs over C++.

If you're sure on day one that there are language features your project will never need (on any project fork)—cross my heart and hope to die—then go ahead and pick a less cluttered language better suited to your constrained subdomain.

What you're really saying here is that you'd rather work in a constrained subdomain—pretty much any constrained subdomain—than hump around on crowded streets hulked up with a universal camera bag (source: Mumbai-based photojournalist Dilish Parekh).
