
Comment: Re:Two Minutes Hate (Score 5, Insightful) 667

by Lairdykinsmcgee (#46550913) Attached to: Creationists Demand Equal Airtime With 'Cosmos'
It's not hate, it's recoil. Time and time again, Creationism seeks to undermine legitimate scientific thought in order to shout its psychobabble at us and expects us to call it 'legitimate science.' Those who recoil aren't doing it out of hate or disgust, but out of well-founded fear -- the fear of what happens when religious ignorance dresses up as science for Halloween and people actually take it seriously. It's not just ignorant; it's irresponsible, because it affects public policy. Texas representative Joe Barton SERIOUSLY said that the 'great flood' from the Bible was evidence that climate change is not influenced by human activity. These are the ideas that are truly terrifying, because they poison people's minds, and any responsible scientific mind would do everything it could to help debunk them. Again -- not hate, recoil -- recoil out of fear on behalf of the whole of society.

Comment: They are obligated to behave this way? (Score 1) 406

by Lairdykinsmcgee (#46460161) Attached to: Apple Demands $40 Per Samsung Phone For 5 Software Patents
Some time ago, someone commenting on yet another patent conflict between Apple and Samsung suggested that Apple was obligated to defend its patents for fear of losing them. My understanding of this premise is that if a company owns a patent on some technology and does not defend itself against potential patent infringement, then that company could eventually lose the patent. Under these conditions, a company would have an incentive to fight anything that even resembles patent infringement, just as a way to ensure it never loses its patents. Is there any legitimacy to this claim? If so, it would simply mean that patent law needs to change (which it surely does regardless), not that Apple's behavior needs to change.

Comment: Re:There are also significant risks to old mothers (Score 2) 192

I am a 24-year-old male in a serious relationship with someone who is 38 (female). Our situation is, of course, rife with stigma and expected impracticalities, but one of the larger ones we have faced is the dilemma of at some point having a child. We are both aware of the extensive research done on maternal age in relation to congenital problems and disorders, but we were always under the assumption that paternal age doesn't present too much risk. Obviously, my being young does not exactly 'decrease' the risk of congenital disorders, but it certainly seems not to 'increase' that risk. Looking forward, it's a precarious and frightening decision for us to make, for a number of reasons, the risks involved with maternal age certainly being one of them.

Comment: The Definition of 'Human.' (Score 1) 101

by Lairdykinsmcgee (#46281841) Attached to: Are You a Competent Cyborg?
An important element of this conversation is what we do and do not consider to make us human. There are plenty of people walking around with pacemakers, prosthetic limbs, and metal in their skeletal structures. We don't question their humanity, and yet by definition they are 'cyborgs.' If I have a contact lens, or hell, a surgically implanted visual augmentation system, will my humanity come into question? Will it change who I am? If I walk around with a device implanted in my stomach that tells me how many calories I've consumed and the nutritional breakdown of those calories, will I no longer be human? The definitions of 'cyborg,' 'trans-humanism,' and 'post-humanism' aren't the only things at stake when we talk about this. The definition of 'human' is also at stake. Changes in technology have always been incremental enough for human beings to perceive them, adapt to them, and become comfortable with them. It may be that, before we notice it, we will have adjusted to the content of 'cyborg' before we adjust to the term 'cyborg.'

Comment: Turing Test. (Score 1) 241

While I find the concept of achieving eternal life for the AI version of myself rather... well, stupid, I do think this startup starts up an interesting version of the Turing Test. I would be curious to see what the version of me they could create would seem like, and whether strangers or family and friends could distinguish between the 'real me' and the 'AI me.'

Comment: Not Tweeting. (Score 5, Insightful) 57

by Lairdykinsmcgee (#44638801) Attached to: Twitter-Based Study Figures Out Saddest Spots In New York City
The saddest parts of New York City are not where people who own mobile devices and laptops convene. The saddest parts of New York City are where people are wearing trash bags, begging for food and shelter... They are not begging for attention by Tweeting their pretentious frivolity.

Comment: 5.4 Trillion Dollars. (Score 3, Interesting) 366

According to Wikipedia (http://en.wikipedia.org/wiki/Forbes_list_of_billionaires), the 1,426 billionaires in 2013 have a combined net worth of $5.4 trillion. So those people could afford to build six of these structures, plus an additional one about half the size (assuming the cost-to-size ratio is linear).
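For what it's worth, the arithmetic can be checked in a couple of lines. The $5.4 trillion total comes from the Wikipedia list cited above; the per-structure cost is simply back-derived from my own 6.5-structure estimate, not a published figure:

```python
# Back-of-the-envelope check of the "6 structures plus a half" claim.
# total_net_worth is from the Wikipedia billionaires list; the unit
# cost is merely implied by dividing by the estimated structure count.

total_net_worth = 5.4e12   # combined 2013 billionaire wealth, USD
structures = 6.5           # six full-size plus one half-size

implied_unit_cost = total_net_worth / structures
print(f"Implied cost per structure: ${implied_unit_cost / 1e9:.0f} billion")
```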

Comment: GMO is scary... for now. (Score 3, Interesting) 358

by Lairdykinsmcgee (#44406067) Attached to: GMO Oranges? Altering a Fruit's DNA To Save It
Genetic modification of crops in a formal sense scares people for now. But this is a young technology, and current genetic modifications are made, to a certain extent, blindly. While these modifications have known effects, they are also bound, or at least potentially bound, to have unknown effects as well. The reason these do not scare me so much, however, is that this technology will only progress, and we will only gain a better understanding of how these modifications are affecting our crops. Hopefully we can make decent decisions about regulating this in the meantime, but I think it won't be terribly long before we can make genetic modifications that are genuinely safe and, hopefully, better for consumers. In terms of the historical progression of agriculture, there has never been a time in human history when we have NOT modified the genes of our crops; only, we have done this by exploiting the relatively quick and convenient evolution of crops, given their short lifespans (new generations are quick to rise). Barely anything we eat today would occur naturally in actual nature. We designed these things through methods that are crude compared to GMO: bigger watermelons, redder strawberries, beefier wheat, or what have you. GMO could be the next step in this progression of healthy and nutritious foods IF done correctly. All the same, with knuckleheads controlling the direction of GMO, it could have vastly different and unknown consequences. I'm simultaneously nervous and interested to see where it goes with a little more time.

Comment: Re:Professor Moron! (Score 2) 808

by Lairdykinsmcgee (#43746387) Attached to: Rice Professor Predicts Humans Out of Work In 30 Years

I think you've misinterpreted this notion of a more robotic labor force as some sort of idealism, altruism, or prosperity, instead of simple economics. He seems to be saying that robot labor will be cheaper than human labor; and here, your thesis is correct. Humans very often value short-term gain over long-term gain, and more importantly personal gain over utilitarian gain. If a manufacturing company recognizes that it can make a bundle of profit by laying off 90% of its human workforce in order to 'employ' machines, it very well might ignore the fact that this will put a lot of human beings out of work. This already occurs when human labor in one country is outsourced to cheaper human labor in another country.

It is not necessarily a Utopian idea to say that the undeniable rise in machine intelligence could result in an incurable rise in human unemployment. There won't necessarily always be something for uneducated, unskilled workers to do in order to scrape by and make a living. We see more and more that higher qualifications are required for not-so-difficult work. A college education instead of a high school education is becoming the norm these days; how long until one must hold a master's degree to be considered economically competitive? This isn't to say that I know for sure that machine intelligence will make human labor entirely obsolete, but it seems rather plausible that if there is a cheaper (more consistent, safer) alternative to human labor, then companies, which are admittedly not altruistic entities, will not hesitate to make changes that negatively affect the humans who depend on them for employment.

The question then would become, yes, what will the world look like at that point? What if we really do see consistent 25% unemployment? How do we support those people who could not help being replaced? Will we be expected to and will we even desire to rise to that new challenge? Will it be necessary, or will abundance of resources support that burden? ... What happens if unemployment continues to rise even higher than that?

I'm not sure it's moronic to ask these questions in a serious and critical way. In fact, I think there is every reason to. Worst-case scenario, we're wrong. Best-case scenario, we've had a thoughtful discussion on a particularly meaningful and potent topic in the development of human economics and the capitalistic concept of earning one's keep.

Comment: A new model for passwords? (Score 1) 538

by Lairdykinsmcgee (#42826951) Attached to: Deloitte: Use a Longer Password In 2013. Seriously.
I can admit immediately that I know incredibly little about this subject. So, I'm wondering if the cure for this issue is not necessarily longer passwords, but a different style of password? Ignoring the sheer inconvenience of a model like any of the following, would they indeed solve the problem? 1) Require a captcha every time we enter a password. 2) Include a captcha-style word displayed on the page that is tacked onto the end of your personal password. (If my password is 'dogs1337' and the captcha is 'gelmug,' the combined password would simply be 'dogs1337gelmug.') 3) Require two distinct 8+ character passwords. Any of the above would at least allow for a significant increase in possible password combinations, if all we are worried about is the ability to brute-force 8-character passwords. But I suspect that might not be the only worry?
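The keyspace math behind those options can be sketched quickly. This is an illustrative back-of-the-envelope only: the 95-character printable-ASCII alphabet and the 1e10 guesses-per-second attacker speed are my assumptions, not figures from the article:

```python
# Brute-force keyspace sizes for the password schemes discussed above.
# Assumes a 95-character printable-ASCII alphabet; the guess rate is a
# hypothetical attacker speed chosen purely for illustration.

ALPHABET = 95
GUESSES_PER_SEC = 1e10  # hypothetical

def keyspace(length):
    """Number of possible passwords of the given length."""
    return ALPHABET ** length

def years_to_search(space):
    """Worst-case time to exhaust a keyspace, in years."""
    return space / GUESSES_PER_SEC / (60 * 60 * 24 * 365)

# One plain 8-character password.
single = keyspace(8)

# Option 2: an 8-char password plus a 6-char displayed suffix acts like
# one 14-char password only if the attacker must guess the whole string;
# since the suffix is shown on the page, it may add nothing in practice.
concatenated = keyspace(14)

# Option 3: two distinct 8-char passwords checked independently only
# roughly doubles the work (2 * 95^8); checked as one combined 16-char
# string, the work is squared (95^16) -- a vastly larger space.
independent = 2 * keyspace(8)
combined = keyspace(16)

print(f"95^8  = {single:.3e}  (~{years_to_search(single):.4f} years)")
print(f"95^16 = {combined:.3e}  (~{years_to_search(combined):.1e} years)")
```

The point the sketch makes is that how the pieces are verified matters as much as total length: two independently checked 8-character passwords buy almost nothing, while one concatenated 16-character string squares the search space.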

Comment: Neuroscience anyone? (Score 2) 470

by Lairdykinsmcgee (#42805311) Attached to: Is the Era of Groundbreaking Science Over?
Physics has gotten a lot of attention in response to this question, but what about neuroscience? As a field of science, the study of the human brain only really picked up speed about 100 years ago, and even then it was mostly in the hands of neuro-philosophy and psychology (Freud, Bergson, etc.). Neuroscience only very recently began looking like what it does today; we have only just scratched the surface of answering meaningful questions about consciousness, memory, thought, pattern recognition, emotion, and perception. We have only just begun to realistically pose these questions in the context of science, as opposed to the context of philosophy of mind. We may even find, once our understanding of our brains progresses, that we weren't asking the right questions to begin with. The one consistency in humanity's relationship with any field of science throughout history has been that, over and over again, we think we have figured mostly everything out and all that's left is the grunt work. But every time we reach that conclusion, the next generation of scientific progress flips it on its head. Humanity as a collective of thinkers will always believe it knows more than it doesn't, and it will always be sheepishly mistaken. It may be the case that we are capable of answering most of the questions we know to ask (though I doubt this as well), but the bigger truth is that we haven't yet thought to ask the questions most worth asking.
