
Comment: This is why we can't have nice things! (Score 1) 608

by Yevoc (#47418003) Attached to: Normal Humans Effectively Excluded From Developing Software

Setting up environments and frameworks has never been more irritating. Even if I code something to be readable, I can't get anyone in my research group or circle of friends (all of whom HAD coding experience before 2005) to take the time to get their entire environment set up just so that the code runs and things start making sense. I can't say I blame them, as they've all heard me screaming for days on end whenever I have to use a new language/environment/IDE/framework. The shit is brittle as fuck and horribly unintuitive. Face it, the reason this problem is getting worse is because you "true" codemonkeys refuse to acknowledge that the barrier to getting anything worthwhile running IS getting higher on average!

The only thing I can still get noncoders to look at is LabVIEW. Because code is pictorial, even non-coders can understand what I'm doing. You are right that SOME people are not of the programming mindset, because even when the vast algorithm is shown pictorially (and of course is still nested with functions and libraries that you have to click through to understand), they can't (or refuse to) wrap their heads around what the algorithm is doing. HOWEVER, I'd say only 50% of the non-programmers I work with, friends and family included, show absolutely no interest in how the algorithm works, how to change/improve it, etc.

In my extensive experience in working with rusty/newbie coders, the algorithmic barrier of programming weeds out about 50% of the population. Maybe 80% if the algorithm gets ugly/big/bloated. But if you require setting up a kludgy/brittle environment, mountains of text, and bad documentation? Virtually NO ONE I've worked with who isn't already steeped in this shit wants to deal with it. At all. Ever. Most of the time I don't want to either.

Everyone here is already set in their ways though, so I'm not even sure why I'm posting this. My point is that once my PhD is done, I'm going to be building my own visual/pictorial/graphical language (no, not a "Visual" Studio, that is 5% visual) in small pieces so that 1) I can drastically improve my overall workflow and 2) make it so that the inevitable non-coders I work with can actually see what's going on and maybe even work on it themselves at some point. I've been waiting for the community to step up and solve this for the past 6 years, but things are instead moving in the wrong direction entirely and making the problem worse.

I've never been able to convince anyone to take action on anything, so the only alternative is to do the whole damned thing by yourself. Story of my life. At least I'll have something for my daughters to work with that isn't as ancient as I am.

Comment: Re:The irony of the 1919 data is overwhelming (Score 1) 120

by Yevoc (#47136891) Attached to: Happy 95th Anniversary, Relativity

The point is that he had enough hubris to believe that general relativity worked up to cosmic scales without any error. That there was in fact a substantial amount of experimental error, some of which didn't add up, didn't disturb him in the least.

General relativity actually does have problems on the cosmic scale, which is what led Einstein to introduce the cosmological constant, amongst other things. (Today people still don't know if/how general relativity holds in some places in the cosmos; ironically enough, right next to a large star is one of them, much like the 1919 measurement.) Between the arbitrary cosmological constant and his undying disbelief in quantum physics, you can see he had a disdain for reality and instead preferred the fiction inside his head.

I wouldn't harbor so much venom for the man if he hadn't become such a celebrity with this kind of behavior (which of course is something he actively pursued and enjoyed).

Comment: The irony of the 1919 data is overwhelming (Score 0) 120

by Yevoc (#47129975) Attached to: Happy 95th Anniversary, Relativity

According to Scientific American, the eclipse-based measurements of 1919 ended up yielding only 3 data points, all of which had very large error bars. One of the three results actually corroborated Newtonian gravity from insufficient lensing, while the other two results showed enough lensing to corroborate general relativity.
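For reference, the two competing predictions are easy to reproduce. Here's a quick back-of-the-envelope check (my own illustration with standard textbook constants, not figures from the Scientific American piece) of the general-relativistic deflection at the solar limb, 4GM/(c^2 b), with the Newtonian value being exactly half:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg
c = 2.998e8         # speed of light, m/s
R_sun = 6.963e8     # solar radius = impact parameter at the limb, m

# GR prediction: delta = 4GM / (c^2 b); Newtonian gravity gives half this
deflection_rad = 4 * G * M_sun / (c**2 * R_sun)
deflection_arcsec = math.degrees(deflection_rad) * 3600   # ~1.75 arcsec
newtonian_arcsec = deflection_arcsec / 2                  # ~0.87 arcsec
```

Sub-two-arcsecond star shifts are what the 1919 photographic plates had to resolve, which is exactly why the error bars came out so large.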

With only 2 of the 3 data points in his favor and the third validating the competing theory, Einstein still said this of the experiment: "[If the data had disproved relativity?] Then I would feel sorry for the good Lord. The theory is correct."

This is coming from a man who failed his PhD thesis more than once due to algebraic errors and other sloppiness.

It figures that this same man is also the most venerated scientist of all time (by non-scientists at least).

Comment: MIT Economists blame this on better automation (Score 1) 288

by Yevoc (#47085123) Attached to: HP Makes More Money, Cuts 16,000 Jobs

If you read "The Second Machine Age," the authors (economists) make a good case that this kind of behavior at HP has been increasing steadily since the 80s. They place most of the blame on improving technology and automation, where more work can be done by fewer people. They admit there is some amount of greed and corruption, but their analysis pegs it as accounting for less than 10% of workforce-reduction/money-consolidation behavior. The rest is just natural market forces, which pressure monetary efficiency on everyone. (Example: I didn't hire 10 people for my startup when 2 of us got the job done. It saved me money that I didn't have.)

During their research for the book, they interviewed dozens of CEOs at large companies, who first lamented that firing significant numbers of people is actually quite hard due to the regulatory environment of almost any developed nation. These CEOs went on to admit that in 2008-2010 they were finally able to show lower profits in order to fire people they'd been itching to get rid of for years, and it gave them a chance to "flex their technology muscle" (paraphrased, I don't remember the exact wording) and not lose an ounce of productivity while lopping off a huge chunk of their workers. The ability to pare one's workforce by such a huge margin without the company skipping a beat is considered a very important (and apparently rare) ability in the upper echelons of business governance. This, the CEOs said, is why rockstar CEOs take home such big compensation: as a company, you can't afford for them to screw up like a normal employee can. Firing 16k people sounds trivial, but they strongly contend that it isn't.

Based on my personal experience of (very briefly) working with a former CTO of AT&T, this info is spot on. The guy was a scumbag and wasn't too good with tech (though he sure was good at putting his name on stuff made by others), but he was *amazingly* good at judging talent and figuring out the precise minimum number of people needed for a job. Startups run by bigwigs still leverage his expertise in this area, much to the chagrin of the actual working engineers, like myself, who end up getting leaned on VERY heavily (or else you're fired) as a result. Fortunately, my talents were also exceedingly rare, so when I asked for more money and he told the startup to fire me, they found they couldn't, because there was no replacement within their critical 6-month timeframe.

That was also the big takeaway from the book: Those whose skills are topnotch and in demand with the present (and upcoming) shifts in high tech will win big. Everyone else will scrape by with less and less. Far fewer people are needed to make the next big company, after all.

Comment: This approach isn't going to be easy (Score 1) 1

by Yevoc (#47043483) Attached to: Fusion power by 2020? Researchers say yes and turn to crowdfunding.

The fundraising pitch they deliver makes me frown a bit. They claim "Our success doesn't depend on...coming up with new [laws of science]. We're just applying known scientific principles in new ways to advance fusion technology." However, the independent review of their research that they (thankfully) made public indicates the opposite.

As the independent review states:
1) The quantum effect upon magnetic fields as expected/hoped for in this vein of research was "postulated...but has not been verified in laboratory experiments."
2) The temperature of injected electrons must remain relatively low (to keep the plasma density high), and "This effect has never been seen in laboratory experiments."
3) "Ignition in pB11 (proton-boron-11 fuel) has...not been previously considered possible in other pB11 fusion concepts."

The reviewers go on to admit that other arguments provided by the researchers are "plausible" but are careful to refrain from ever stating that any of the desired effects have *ever* been demonstrated before.

Essentially, the group seeks to make tiny plasma "pinches" that manage to ignite fusion for a very brief period of time and then repeat that as many times as possible per second, leading to an overall energy production from small pinches that individually jump over the fusion ignition density threshold before quickly petering out. The difficulty boils down to getting the pinches to reliably shrink themselves to the point of igniting fusion from the resulting density/temperature rise. They admit they are 1000 times away from the necessary density and try to shrug that off as not a big deal, when people in fusion research know exactly how big a deal that is, as plasma generally refuses to be squeezed as much as they need it to without giving off its energy first. As the reviewers state, they are basically going against known experiments that show many energy transfer mechanisms which prevent plasma from *ever* holding onto its energy enough while collapsing to ignite.

It's one thing to shoot for the moon. I do it all the time in my research and fail spectacularly every time. It's another thing entirely to paint your story with irrational exuberance and sweep your upcoming monumental obstacles under the rug so that the public will unquestioningly fund you, via taxes or crowdfunding.

Maybe this is why I don't get much funding anymore...

Comment: Re:Can't Tell Them Apart (Score 1) 466

by Yevoc (#47006959) Attached to: Ask Slashdot: Minimum Programming Competence In Order To Get a Job?

I don't really agree that this is a coding question. It's a thought/personality question about what kind of person you are hiring. If someone actually took the bait and unquestioningly started brainstorming an algorithm for computing a large number of digits of pi on the whiteboard, I would say they failed in my eyes. I do have to admit you had me going there for a bit, as I was itching to devise an algorithm as well.

My initial answer would be that I have the first 11 digits of pi memorized, which I can quickly verify just by plugging them into Taylor-expanded trig functions (not a library). Computations on a universal-to-atomic scale are possible with fewer than 40 digits, and NASA's most precise calculations use 16 digits, which they admit to be unnecessary, so I would say the correct answer to the problem is "no thank you." If 20 digits were necessary for your quantum-scale-deployment-to-the-entire-solar-system business, I would just adjust each digit in turn, moving from most to least significant, until the error was minimized in the same Taylor-expanded trig functions (no libraries used). Horrible brute force, I know, but it would take an instant to complete, and I would have spent very little development time to yield a realistic solution to a potentially infinite problem.
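Both halves of that answer fit in a few lines. This is a minimal sketch of one way to do it (my own illustration, no library trig calls): since sin(pi) = 0, the residual of a hand-rolled Maclaurin series at your memorized value measures how good your digits are, and the sign of that same series (positive just below pi, negative just above) drives the digit-by-digit refinement.

```python
def taylor_sin(x, terms=25):
    # Maclaurin series: sin(x) = x - x^3/3! + x^5/5! - ...
    # (no library trig functions, as the answer requires)
    total, term = 0.0, x
    for k in range(terms):
        total += term
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

# Verify 11 memorized digits: sin(pi) should be 0, so the residual
# measures how far the memorized value is from the true pi.
pi_memorized = 3.1415926535
residual = abs(taylor_sin(pi_memorized))

# Digit-by-digit refinement: at each decimal place, keep the largest
# digit that leaves the running value just below pi, detected by the
# sign of the series (sin > 0 on (0, pi), sin < 0 just past pi).
guess = 3.0
for place in range(1, 11):
    step = 10.0 ** -place
    best = max(d for d in range(10) if taylor_sin(guess + d * step) > 0)
    guess += best * step
```

The refinement loop recovers the same 11 digits from scratch; the residual at the memorized value comes out around 1e-10, i.e. the memorized digits are as good as they claim to be.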

Comment: Big Bang Theory does us all in STEM a disservice (Score 2) 253

As a PhD in engineering who has worked with physics PhDs on cutting-edge stuff (one group got the Nobel Prize in 2001 for Bose-Einstein condensates, a field the show's characters work in), I can safely say that Big Bang Theory doesn't come ANYWHERE close to reality. The reason I know this beyond a shadow of a doubt is that when I tell people what the realm of high-level engineering and scientific research is really like in America, they are downright shocked and horrified.

One, the show should consist of 5 Chinese, 3 Indians, and MAYBE one American. The Chinese and Indians are of course desperate to get their green cards, and the American is wondering why he ever got into a STEM field to begin with. Communication is difficult, with the Chinese constantly reverting to speaking Chinese amongst themselves.
Two, the characters should constantly pull an 80-100 hour work week and get paid half of minimum wage or less (This will easily explain why the show has 9 roommates in one apartment).
Three, they spend at least 40% of their time writing proposals.
Four, their dialogue is 95% work, 5% geeky stuff.
Five, the stuff they are working on NEVER works out as planned and always fails miserably.
Six, they spend the rest of their time writing and publishing papers. Much of the science dialogue should always be colored with the attempt to publish whatever it is you are doing.

Obviously Big Bang Theory will never spawn a reality show, but the fact that STEM in the US is so horribly broken makes me really upset whenever I see Big Bang Theory now. The sheer ignorance of the public about our situation in this arena is precisely why it can be perpetuated like it is. Big Bang Theory is usually the only STEM ambassador to the general public that I hear about anymore, and it is horribly ineffective or even downright misleading in that job.

Oh, and it doesn't help that the science is commonly wrong or badly portrayed. I guess that's what you get when you have only 1 science consultant for a jillion hours of dialogue. (Or just the need to gloss everything over with a mile thick coat of Vaseline to make it sexy)

Comment: This exchange wasn't damning or even remarkable (Score 1) 396

I watched Snowden's recorded question and Putin's response on Russia Today, and neither part was terribly remarkable. I certainly don't view Snowden any less for doing this. His question was legit.

Snowden plainly stated in this phone-in that mass surveillance has several times been found ineffective in the USA. He then went on to plainly ask whether Russia surveils ordinary citizens, and for comment on its effectiveness. That was it.

Putin responded by saying Russia's special forces can "stalk" people only by court order and need special equipment to do so. He then joked that Russia doesn't have nearly as much money as the US to perform blanket mass surveillance (this I believe, considering the ridiculous server farms we have here for such a purpose) and that he hopes it never comes to pass in the future.

Granted, Snowden wasn't allowed to physically appear and play hardball, but his question was more posed as a statement about blanket spying. Putin didn't exactly deny that spying over ordinary communications channels was happening in Russia.

Comment: Appreciating the sheer impact of climate change (Score 1) 712

To address your siderant...

****TL;DR Drought. Desertification. The kind that won't stop until the remnants of humanity are only at the Arctic circle by 2100.

The parent post reflects precisely the kind of mindset I had until sometime last year, when I researched this more in depth. I think this is ultimately why first-world citizens are still ambivalent about climate change: we don't appreciate the potential magnitude of what's going to happen.

So the planet warms a few degrees and some cuddly animals go extinct, maybe even lose some ocean-front property, why should I care?

When I looked at the latest papers myself, I was shocked. The models and predictions have way more confidence than they used to, but scientists simply aren't framing the results at all for the public anymore. (This is understandable due to the fear of stirring up political backlash, but it means the public at large is becoming more and more desensitized to the biggest threat to our species) [Keep in mind that climate change nearly wiped us out 100,000 years ago. Did you know that we all come from roughly 100 surviving humans who holed up in a beach-front cave in South Africa?]

As the Hadley cells expand, the deserts near the equator will grow in size, and the temperate zones will constantly shift North. How much they grow depends fully on what we (and by we I mean China and India) do RIGHT NOW (and by right now, I really mean ten years ago, not ten years from now). If we (again, humanity as a whole, the US makes almost no impact from here on out **except by leading by example**) continue to exponentially grow our coal industry at full steam for the next few decades, then the areas in extreme drought (think worse than the US dust bowl) will basically cover the entire planet by 2100.

Comparing our current CO2 ppm to the dinosaur era isn't accurate enough to spell out our future (I wish it were). Never before has CO2 concentration increased this quickly. The climate changed gradually over millions of years in the dinosaur era; biodiversity can easily morph on a planet-wide scale over that time period. Here, we've made that same ppm transition in roughly a century, and with our exponential growth, we're likely to nearly *double* our current concentration in another 50 years. That concentration alone is unprecedented in our planet's entire history, let alone the impossibly short time scale we'll achieve it in.

Once we appreciate that history has no lessons for our new uncharted territory, we look long and hard at the best simulations we've made to date, which take into account the continents' shapes and locations (which are extremely different from Pangea, I might add), and we find a very grim story: The ocean loses its ability to carbon-sink, the permafrost begins melting and emitting methane, and you have accelerated warming even without any more added help from us. The result is a disruption of all jet streams to the point where every year looks different from the previous one. Very wet to very dry. Hot to cold. Our existing breadbaskets simply cannot handle this level of volatility. Massive crop failure will become the "new normal," and the world's impoverished will be ravaged by famine. Famine is the biggest driver of civil unrest, which probably means many third-world countries will simply cease to exist as they degrade into permanent chaos. Almost half of the entire world will become malnourished. This is to say nothing about how the extreme droughts will affect water scarcity, which is already considered the world's biggest risk in terms of likelihood and impact by the World Economic Forum. Water scarcity won't hit the more developed regions as hard, because we will burn more energy (like oil, natural gas, and coal...and yes, **eventually** solar power) to desalinate water.

That's the first whammy to humanity. As time marches on, you'll then find the steady desertification I mentioned above (Google 'future drought map' and just look at those images until it sinks in). Northern Europe, Canada, and ultimately Russia will, as you alluded to, become wonderful places to grow crops, but the rest of the world will not. Think about the deaths, wars, and sheer magnitude of refugees that this will all entail. However, unless we then manage astounding engineering feats requiring resources and coordinated efforts that dwarf World War 2, warming will continue even if we're mostly on renewable energy by that point, thanks to the long atmospheric lifetime of CO2. That warming will continue the inexorable march of northward desertification until your grandchild is buying oceanfront property in north Canada for a jillion dollars.

What happens after that is a bit speculative, but there is now evidence that mass-desertification of the oceans might explain many of our planet's previous mass-extinction events better than any other phenomenon. (Google 'Canfield ocean') It supposedly is linked to very high CO2 concentrations even beyond that which the dinosaurs enjoyed. Fortunately, all of the work in that field indicates we have at least 150 years to combat the full brunt of that kind of phenomenon. Far longer than the -5 to 15 years we have to fight climate change before it becomes virtually intractable.

To sum up, what the United States does now makes very little difference. We aren't growing anymore in terms of CO2, while developing nations are set to unleash huge exponential growth that will require massive amounts of power. Whatever we do has to take China's view of us into account: that's the only way we can make a difference. But more importantly, the trends on a log plot don't lie. What's coming to hit us like a ton of bricks isn't going to be stopped, just as you can't suddenly convince 2 billion people to do what you want. What the world needs now is better climate science and better engineering so that we can quickly and cheaply offset what's coming when the time is right, because right now, we're nowhere close to ready.

If we don't select the outcome for ourselves, nature will do it for us, whether we believe in it or not.

Comment: I did my part (Score 5, Insightful) 362

by Yevoc (#44380607) Attached to: NSA Still Funded To Spy On US Phone Records
Now that we all know we're being surveilled, I can understand why others may not make similar posts, but I'm going to risk it and say it anyway. I read the previous slashdot article on the amendment. I immediately called my representative. He voted YES! Even if the ship sinks, I still feel very good about this moment. The system may be dysfunctional, but at least some of us are still doing the right thing. The worst thing we can do is succumb to despair. It may take some really tough times to happen, but we WILL eventually emerge on the other side with a better system. It's what life always manages to do, no matter how dark the times become.

Comment: Plasmonic devices=a bit far from any practical use (Score 5, Informative) 202

by Yevoc (#41134121) Attached to: New Flat Lens Focuses Without Distortion
My colleagues work on the exact same gold-based nano-antennae used by this work. All of the nano-antennae on the lens' surface are basically arranged to absorb and re-transmit the incoming light into a near perfect spot. Because it uses metal on nanoscopic scales to manipulate light in a way other than pure reflection (like a mirror), it's in the field of plasmonics. (Below a certain frequency [of light] the electrons in a metal react like a plasma, hence the name.)

Whenever we optical engineers hear about plasmonics, we internally roll our eyes, because metal almost always absorbs far too much light to be useful. Even tens of nanometers of penetration and/or propagation can extinguish almost all of the light. This essentially relegates the entire field to the realm of theoretical curiosities and nothing more. (This work uses 60nm-thick gold.)

The authors of this paper admit that absorption is their biggest obstacle, as this lens only passes 10% of the incoming light. There are other issues for making this work a reality, but they pale in comparison to the classic brick wall you get when passing light through metal.

Comment: Re:will the public appreciate the sublteties? (Score 5, Insightful) 293

by Yevoc (#35200954) Attached to: Watch IBM's Watson On Jeopardy Tonight
You have got to be kidding, right? Any reading about Watson will quickly reveal that the subtleties of language (specifically English metaphors, similes, and irony), as well as the ingrained underpinnings of Western culture, have been its two biggest obstacles since day one, and that's precisely why IBM chose Jeopardy as its next grand AI challenge.

Having dozens of Chinese colleagues, I can assure you that the hidden meanings and references we bury in the English language are completely lost on them, even though they know the English words. Do you really think I will understand their jokes, movies, books, etc., just because I flipped a switch and heard a word-for-word translation from Chinese to English? (Even that situation is absurd, actually, because Chinese-English translators have to see a whole sentence and translate holistically; many colloquialisms and phrases lose their meaning in translation.)

Here's a quick example:
(Exact translation from Chinese to English) "Watchful caution! Avatar come!"

If you thought that meant a blue creature or a virtual representation of a person was coming for you, you'd be wrong. Chinese gamers call a bombing helicopter/hovercraft an "Avatar," because they first saw one in the movie Avatar. If Watson got that right, he'd have to know a very subtle fact about Chinese culture, and Jeopardy is replete with these cultural landmines.

If IBM can prove a machine understands the deep underpinnings of our language AND culture by correctly answering very obscure questions better than a Jeopardy champion, then the company will have effectively demonstrated the world's best language and cultural interpreter for bridging the gap between man and machine.

Comment: Too short a distance with an impossible junction (Score 1) 72

by Yevoc (#31696168) Attached to: World's Smallest Superconductor Discovered
It is worth noting that many materials (silicon included) don't scatter electrons over such a short range, so a great many could technically be classified as superconductors on this scale.
The reason such small things aren't normally touted as "superconducting" is that the contact resistance with something so small becomes so amazingly large that the whole point of having a superconductor is destroyed. This is precisely why the superconducting regimes of graphene and nanotubes aren't practical: forming a decent contact is not doable at present.

Because of this, it is far more important to create a superconducting wire of substantial length to save power, as resistance scales with length anyway.
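For a rough sense of scale (my own illustration with textbook constants, not figures from the article): even a *perfect* ballistic contact to a nanotube-like conductor has a quantum resistance floor of h/4e^2, which is already kilo-ohms and dwarfs the resistance of the ordinary macroscopic wire it would replace.

```python
# Why contact resistance kills nanoscale "superconductors":
h = 6.626e-34            # Planck constant, J*s
e = 1.602e-19            # elementary charge, C

# Ideal ballistic limit for a conductor with two spin-degenerate
# channels (e.g. a carbon nanotube): R = h / (4 e^2), ~6.5 kOhm
contact_R = h / (4 * e**2)

# Versus 1 cm of plain copper wire with a 1 square-micron cross-section
rho_cu = 1.7e-8          # resistivity of copper, Ohm*m
wire_R = rho_cu * 0.01 / 1e-12    # R = rho * L / A, ~170 Ohm
```

The "superconducting" element saves you nothing when the contacts alone cost dozens of times more resistance than the ordinary wire did, which is why a superconductor of substantial length is the goal that actually matters.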
