Comment: Re:Salary versus cost of living in each city (Score 1) 129

by AthanasiusKircher (#48896195) Attached to: By the Numbers: The Highest-Paying States For Tech Professionals

I'm a home owner but I don't think there is such a huge gap between owning and renting.

Short-term (less than 5 years or so)? Not a big gap. Long-term? Absolutely. Run the numbers. There may be a few markets in the U.S. where it makes sense to rent, or you may be a person who plans to make big moves at least every 5 years or so. If you plan to stay in the same area for a decade or more, though, owning almost always wins out big time.

A lot of older owners are faced with having to sell their homes after retirement and moving somewhere cheaper when they would rather stay where they are. It's more like a safety net and less like a nest-egg, frankly.

Well, that's because many people own "too much home," and the vast majority of people are very poor at planning appropriately for retirement expenses. I don't think this says as much about home ownership in general as it does about those who buy homes they can't sustain in the long term and/or don't plan appropriately for retirement. (Sure, there are some states/areas where property taxes suddenly jump or whatever, and people can't afford to continue living where they are, but that's certainly not everywhere.)

Also, there's the not so insignificant issue of needing more space when you have kids, but not as much when you're an old couple living by yourself. That's more difficult to plan for, particularly if you have a large family -- in that case, you probably should have planned ahead to move into a smaller home when the kids move out, unless you have enough money saved to keep up the big home through old age.

Lastly, when downsizing, you most certainly have a "nest egg" if you can sell your big home and completely pay off, right away and with no mortgage, the new home you're going to live in through retirement. That's the whole point of having a nest egg. If you were a renter and didn't save, you would still have to shell out big monthly payments for all of your retirement years... whereas even if they have to downsize, these homeowners probably don't. (Aside, of course, from regular homeowning expenses, which are not insignificant, as you point out, but they're generally nowhere near as much as rental costs once the mortgage is paid off.)

Comment: Re:Salary versus cost of living in each city (Score 1) 129

by AthanasiusKircher (#48896141) Attached to: By the Numbers: The Highest-Paying States For Tech Professionals

This can be beneficial, unless house prices are as inflated as they are now. We're at the point where you'd have to rent for over 30 years now to break even.

I'd really be interested in seeing where that's true. The average break-even point for renting vs. owning is probably 5-7 years in most areas. Some areas it may be as little as 2-3 (if rents are really high), other places it may be as much as 10 years or a little more (if rents are really low, but prices are high).

Rental markets generally adjust to housing prices over time, so it's unlikely that you could have a long-term sustainable market where you'd need to take a lot more than 10 years to break even unless it was somewhere where no one EVER sells real estate. (Such things do exist, such as in old Italian cities like Rome, where it's next to impossible to buy anything, since properties have been in the same family for centuries... but it's extremely rare in the U.S.)

And even if housing prices are inflated, interest rates are still quite low now (though they may start rising), which means you may still be able to get an interest rate that roughly tracks inflation over the long term. Effectively, that means you're not really "paying interest" but getting a "zero-interest" loan on a huge sum of money for 30 years (since you pay later in constant nominal payments, which get cheaper as inflation makes the dollars worth less). Rents, on the other hand, will rise with inflation.

Take this into account, and I sincerely doubt you'll find many places where renting makes sense for much more than 10 years.
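To make the break-even claim concrete, here's a hypothetical back-of-the-envelope calculator. Every number in it (price, rates, upkeep, closing and selling costs, appreciation) is an illustrative assumption, not market data, and a real comparison would also need to account for taxes and the opportunity cost of the down payment:

```python
def break_even_year(price, down, rate, rent, rent_growth=0.03,
                    upkeep=0.02, closing=0.04, selling=0.06,
                    appreciation=0.03, horizon=31):
    """First year in which the total cost of owning (net of the equity
    you could realize by selling) drops below total rent paid, or None."""
    loan = price - down
    r = rate / 12
    n = 30 * 12
    # standard fixed-rate amortization formula for the monthly payment
    payment = loan * r / (1 - (1 + r) ** -n)

    rent_paid = 0.0
    own_paid = down + price * closing       # up-front costs of buying
    balance = loan
    home_value = price
    for year in range(1, horizon):
        rent_paid += rent * 12 * (1 + rent_growth) ** (year - 1)
        own_paid += payment * 12 + price * upkeep
        for _ in range(12):                 # amortize month by month
            balance -= payment - balance * r
        home_value *= 1 + appreciation
        # equity net of a hypothetical realtor's fee on sale
        equity = home_value * (1 - selling) - balance
        if own_paid - equity < rent_paid:
            return year
    return None

# Illustrative numbers only: $300k home, 20% down, 4% mortgage, $1,500 rent.
print(break_even_year(300_000, 60_000, 0.04, 1_500))
```

With these made-up inputs the crossover lands within the first several years; cranking rent down or prices up pushes it out toward a decade, consistent with the ranges described above.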

Comment: Re:Probe (Score 1) 170

You recall correctly; Pluto hasn't even made it half a lap around the sun since we discovered it.

It was discovered in 1906, 108 years ago

WTF MODS? This gets "+5 informative"??

Pluto was discovered in 1930, as anyone could verify anywhere. Jesus Christ. This is NOT an obscure fact, particularly for anyone who knows anything about the solar system. I've been reading Slashdot for a long time, and I've seen a lot of crap, but I'm seriously thinking of leaving now. News for "nerds" my ass.

Comment: Re:"AI" vs Strong AI (Score 1) 227

by AthanasiusKircher (#48826405) Attached to: An Open Letter To Everyone Tricked Into Fearing AI

By your own admission, AI *might* eventually be capable of the kind of "malice that people seem to be afraid of". And that malicious developers can cause destruction even sooner.

Not the GP, but yep, bad things are possible. Yay!


And the laws of physics clearly predict that strong AI is possible. or do you consider intelligence to be some kind of supernatural quality?

Invoking "the laws of physics allow it" as an argument that we should actually be worried about something happening here on earth in the near future is pretty slim evidence, no? I mean, the laws of physics allow a LOT of stuff to be possible.

That said, this isn't really about the laws of physics -- it's about basic biological systems here on earth which have intelligent properties. We already know intelligence can arise from ordinary matter, without any appeal to exotic physics. (People have babies all the time.) The question is how long it will take us humans to figure out a way to create something that has certain intelligence properties... and that could be next year, next decade, next century, next millennium....

Also it is the experts in AI who are predicting that AI will be possible and achieved in a matter of decades. Why would you even come out and pretend that it isn't?

Because the "experts in AI" have a pretty bad track record for predicting advances -- the cynic in me would say probably because many of them get their grants funded by predicting major advances.

Back in the 1950s, the "experts in AI" predicted that a group of 10 smart dudes could get together and solve all the major problems of AI (like natural language comprehension, true adaptive learning, etc.) in 2 months over the summer. Over fifty years later, we're nowhere close to solving most of their identified problems -- most of our advances are due to better searching algorithms, faster hardware, and more data. Not really significant advances in true adaptive learning.

Alan Turing, in the same era, predicted that by the year 2000 we'd have machines so fluent in natural language that we'd have to debate which word choices could be substituted in Shakespearean sonnets to tell the difference between a human and a computer. Instead, we get crap reported again and again and again that the "Turing test" was "passed" by some idiotic program that pretends to be a non-native-English-speaking teenager with the conversational skills of a 5-year-old.

How low our "bar" has sunk that we need to have such declarations every year or two to keep proving to ourselves that we have great "AI."

No -- we don't. We've barely squeaked by with any significant advances toward the kinds of goals articulated in the 50s about strong AI.

Now, I'm sure you're all going to talk about Deep Blue and chess. But how do these chess programs win? By doing exhaustive searches far beyond what humans are capable of and drawing on libraries of games and strategies far greater than any human could memorize. I'm not saying these computers aren't significant advances in SOMETHING. But they aren't exhibiting the kind of efficient adaptive intelligence that the original "strong AI" proponents thought would emerge when they proposed chess as a worthy goal for AI. It's like comparing someone with a high IQ and advanced math and logic skills who solves a complex problem in 5 steps with another guy who brute-forces the problem on a supercomputer, running quadrillions of simulations until he arrives at the right answer by eliminating all the other possibilities. Is the latter displaying anything like the "intelligence" of the former?

A similar thing is true of Watson. Natural language processing admittedly has made big strides in the past few years, but mostly because we've finally given up on the models of language and linguistic cognition that all those "experts in AI" insisted were the solution for decades. Instead, when you use Google to do something like translation, it guesses solely on the basis of huge databases... it doesn't "understand" language. Hell, we're still working on having simple language recognition figure out what the antecedent of a pronoun in a sentence is, let alone grasp the meaning of a full sentence in natural language, or paragraphs, or larger contexts. But throw a big enough database at something, and limit the forms of questions you can ask it, and it will be able to do some awesome things. "Intelligence" in the sense of true adaptive learning, concept formation, or efficient understanding anything like what humans (or even many animals) have? No way.

But go back and read the kind of crap predictions that "experts" made after Deep Blue a couple decades ago. Compare them with what happened. Compare what the "neural net" aficionados have been saying since the 1980s with what actually happened. Compare what the AI "cognitive science" weirdos have been talking about since the 1970s.

The "experts in AI" have always had predictions that were a crapshoot.

So, why would *I* even "come out and pretend that" strong AI may not be possible in the next few decades? Because, based on empirical evidence, the "experts" have often had unrealistic expectations in this area.

are you saying that people have no right to worry about problems that aren't likely to happen for 20 years? is that the cut off date?

I'd say we should worry about problems that are pressing, and/or longer-term problems where we actually have a pretty good idea that they might be happening sometime soon. Strong AI will probably happen at some point in the future; I'm with you there. But I frankly have no idea whether it's gonna be here in a year or in a thousand years. I'd like to guess, based on the acceleration of technology, that we'll be somewhere close in the next couple centuries, but I don't really know... even after following a lot of aspects of AI over the past decades.

And despite all of these crazy warnings, I think we'll have plenty of time to ramp up to the point where we want to start worrying. When you can show me a computer that has the cognitive skills and adaptive capabilities for learning of a 5-year-old, then we start worrying. Maybe even a 2-year-old. Right now, we have machines that are really good at doing certain kind of tasks -- but a machine that can brute-force a way to beat a grandmaster at chess isn't much more "intelligent" than a toaster.

When that machine that beats a grandmaster at chess can also figure out by itself how to build a toaster and serve me breakfast... then I'll be worried.

Comment: Re:People Dying in Pain (Score 1) 790

by AthanasiusKircher (#48784939) Attached to: Ask Slashdot: Sounds We Don't Hear Any More?

Unless you work at a hospital, or are a soldier in a war.

We are a people more disconnected from death than any in history.

This has to be one of the most insightful comments here. Want something more specific? How about the distinctive sound of a child with a serious and potentially fatal case of whooping cough?

Oh wait, the anti-vaccination wackos are intent on bringing that one back....

Comment: Re:Time for some leaps and not baby steps (Score 1) 142

I don't think anyone in the scientific community has any doubts that there was life there at one time. It's just a matter of proving it.

I certainly hope you're wrong in this statement. Otherwise, it implies the "scientific community" is no better than a bunch of religious wackos when it comes to evaluating evidence.

There is absolutely no reason to believe one way or the other that life should exist or should have existed on Mars. If you go back to the Drake equation, we have only one data point regarding the probability of abiogenesis. It could be that life spontaneously appears on every random planet and even exists in multiple places in our solar system. But, based on current evidence, it could also be that life appears on only 1 in 100 planets that seem to have "good" conditions according to our really vague theories about how it happened. Or it could be 1 in 1000, or 1 in a million, or 1 in a quadrillion.

We have one data point. I sincerely hope that there are some in the scientific community who still allow for the possibility that there may NOT be life on Mars... maybe even not elsewhere in our solar system... maybe even not elsewhere in our galaxy. Maybe it's really common. But we have absolutely no reason to think so at this time, and thus it is really not a very "scientific" attitude to have no "doubts" about it.
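The "one data point" problem can be made concrete with a toy Bayesian sketch, in the spirit of Spiegel and Turner's 2012 analysis of abiogenesis probabilities. Every choice below (the log-uniform prior, the grid range, the "rare" cutoff) is an arbitrary illustration. The punchline: whether our single success (Earth) makes life look common depends almost entirely on how you model the prior and the anthropic selection effect, not on the data itself:

```python
# Toy sketch: one observed success (Earth) barely constrains the
# per-planet probability p of abiogenesis. All parameters are arbitrary.

# Log-spaced grid over p in [1e-12, ~1]: 12 orders of magnitude.
grid = [10 ** (-12 + 12 * i / 1000) for i in range(1000)]

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

prior = normalize([1.0] * len(grid))   # log-uniform: rare as plausible as common

# Naive update: treat Earth as a randomly sampled planet, so the
# likelihood of observing "life" is proportional to p.
naive_post = normalize([w * p for w, p in zip(prior, grid)])

# Anthropic correction: observers only exist where life arose, so the
# probability of our observation is 1 for any p > 0; the data carry
# no information and the posterior equals the prior.
corrected_post = prior

def prob_below(post, cutoff):
    """Posterior probability that p < cutoff ("life is rare")."""
    return sum(w for p, w in zip(grid, post) if p < cutoff)

print(prob_below(prior, 1e-3))           # 0.75: prior leans "rare"
print(prob_below(naive_post, 1e-3))      # ~1e-3: naive update says "common"
print(prob_below(corrected_post, 1e-3))  # 0.75: back to the prior
```

Two defensible ways of handling the same single observation give nearly opposite conclusions, which is exactly why "no doubts" is not a scientific position here.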

Comment: Re: Its a cost decision (Score 1) 840

It's not about cost. It's about design. They used to build things to last. ... [snip] ...It lowers cost and means you buy a new electric carving knife every couple of years.

Speaking about design, why the heck are you using an electric carving knife in the first place? Just buy a decent actual (non-electric) slicing knife, and keep it sharp. You can probably use it for a half-century and then will it to the grandkids. Learning how to keep quality knives sharp is an easy skill and will save you hours in the kitchen, not to mention probable injuries. (Dull knives make any cutting or chopping take much longer and require more effort, and they are much more prone to slipping when you try to force them, thus creating accidents.)

I can't stand cooking at most other people's houses, because they often have no knives that are actually sharp. Food prep is annoying with bad tools, and I understand why most people just give up and rarely cook with tools like that.

Anyhow, the reason for this short rant about old-fashioned knife maintenance is that part of our "design" problem these days is that we think we need some "gadget" to do everything. Yes, many gadgets are helpful. But a lot of the time they replace a perfectly straightforward, non-complex tool that would last for years with a complicated electronic device, or at least something with a very special design, a bunch of breakable plastic parts, and no easy way to repair it when it fails.

If you asked me to choose a useless gadget that I'd NEVER bother replacing because I could just use a simple tool that will last for generations, the electric carving knife would be near the top of my list.

Comment: Re:Dupe (Score 2) 840

Fine, do you know how to churn your own butter or butcher your own chickens? My grandfather did all of these things, but my dad (who is still a farmer) has no idea how to do either. And even if you are one of the rare ones who knows how to do those things, I doubt almost all of your generation can.

I don't mean to be a jerk about this, but THESE are the two things you bring out regarding examples of difficult things your dad couldn't figure out how to do??

Churning butter takes only cream and agitation. You can do it in a mixer. You can even do it in a sealed jar just by shaking (though it will take longer). Eventually the butterfat will separate from the buttermilk, and you just form it into a glob, squeeze out the liquid, and you have butter. That's it. There's no "secret" to churning butter.

As for "butchering" a chicken, I don't know if you're referring to the complete act of killing and prepping, or merely the work most butchers do these days, which is mostly a small amount of prep and then perhaps cutting the whole chicken into pieces like breasts and thighs. That latter task is something any competent cook can do, and I could show you how in about five minutes.

As for slaughtering, well, chickens are relatively easy. You can go for the messy way and just chop the head off, but if you prefer less mess and a calmer chicken, just use a killing cone, hang the bird upside down, slit the throat, and let the blood drain for a couple minutes. Dunk in scalding water, pluck the feathers. The only mildly hard part is getting out the viscera, and that's only because you don't want to puncture the intestines (and get feces all over) -- so make a cut in the right place, then use your hand to gently pull them out. Cut off the feet and neck, and you're basically done.

Again, other than having someone show you how to cut to get the viscera out, this is a really easy process you can probably figure out pretty easily. It's not as hard as processing a large animal, where you actually want to produce useful cuts and such.

Sheesh. I mean, I understand that maybe your dad has no idea how to do these things, but he could basically become an expert on both of them in a couple hours. These aren't insanely complex tasks or anything that requires a lot of intuition and analysis coming from long experience.

Comment: Re:Few you say? (Score 2) 578

by AthanasiusKircher (#48724203) Attached to: What Language Will the World Speak In 2115?

I personally see no reason why a single language, and particularly English, SHOULDN'T replace other languages eventually.

Because it is inadequate for use in other cultures.

THIS. Individual languages develop around culture and then take an active role in shaping it, though most people within that culture don't realize it until they step outside of their language and culture. It can lead to concepts that are truly untranslatable, in the sense that there is no single word or short phrase that could convey the concept precisely in another language.

Most people who argue that we wouldn't lose much if we all spoke the same language also seem to believe in the "dictionary model" of meaning, where atomic words with exact meanings are combined together to make language. But that's NOT how meaning actually works; it's just an illusion created by dictionary organization. (If it were true, we would have also solved the automatic computer translation problem decades ago.)

In reality, language and meaning is a complex network of associations, where word choice often conveys subtleties of meaning because of the various connotations and connected concepts in a language. Everyone makes a big deal about mostly mythical ideas like languages that could have dozens of words for snow or something... But it's not only the specialist technical terms where the distinctive character of a language resides. (And those can often be borrowed directly into other languages.)

Instead, languages often make subtle connections in even the basic core vocabulary. For some perspective on this, take a look at a comparative dictionary of Indo-European languages sometime. You would quickly see that while many basic ideas in a language may derive from the same roots, a specific concept may have a number of different strands of development in different languages. For example, three languages may all have different primary words derived from different roots for concept X, each with their own distinctive set of connotations. While it may seem like there's a simple A=B=C equivalence between words, the meaning that is conveyed in translation could be significantly changed or lacking in nuance.

In many cases, this may be a small thing -- but the reality is that language does shape thought and even perception of the world. If it's easier to make a particular connection between concepts in one language because of this network of meaning relationships, it can actually change the way people are able to discover new things or consider new possible ideas. Of course, it's not impossible to do this in another language... It's just less intuitive and thus perhaps less easy for people in another language to see the connection.

Comment: Re:MicroSD card? (Score 1) 325

It was badly worded but it's pretty obvious that the GP was referring to the well known and documented phenomenon of iOS upgrades causing devices to slow down. You can of course not upgrade, but the real issue is why the OS gets slower over time.

And of course there are the battery issues too. I personally know three people who had an iPhone with a battery that would last them a week or more with low usage, but they made the mistake of upgrading the OS. Suddenly they had to recharge their phone multiple times per day. I and others have often refused to upgrade to avoid such potential issues. That creates its own problems, because Apple will stop supporting things unless you upgrade, leaving you in a place where either some important system apps no longer work, or you roll the dice, upgrade, and discover you have a slow brick that can't go more than 8 hours without a charge. (For the record, I don't buy iPhones myself, but I inherit them when other family members upgrade; I would never voluntarily buy an Apple product, for many of these reasons.)

I won't go so far as to say that Apple degrades battery performance deliberately. Maybe they do, or maybe they just don't give a crap anymore about testing such things on older generations before releasing the OS. Either way, all the Apple BS about how "we limit our hardware choices so we can give you a better user experience" clearly goes out the window when your device is more than a year or two old.

Comment: Re:MicroSD card? (Score 3, Insightful) 325

The quality of removable storage media, especially SD cards (and derivative formats) varies drastically. Apple likes to ensure a consistent ecosystem so that all users have as consistent an experience as possible.

Yeah, I guess that makes sense. I mean, there's no way it could have anything to do with the fact that flash memory prices have dropped significantly and the only way Apple can get away with charging its ridiculous premiums for slightly more memory is to prevent users from easily adding their own. (With micro SD prices now, I could find something costing less than $1/gigabyte, or if Apple supported USB OTG, I could even use a flash drive for about 30 cents/GB, but instead I have to pay about $2/GB if I want an iPad or whatever with more memory.)

And it couldn't possibly have anything to do with the fact that those ridiculous premiums for extra memory push consumers to buy the cheaper models rather than spend a couple hundred more dollars on an already way overpriced piece of hardware -- consumers who are then forced to upgrade to a new-generation device in a couple of years when they realize they don't have enough space.

Yeah, I'm sure you're right -- the huge profit motive here has nothing to do with it... It's just Apple being a good citizen and helping its users not have to put up with some inferior piece of freakin' flash memory they might buy.

That MUST be it. Thanks for telling us.

Comment: Re:And that's still too long (Score 1) 328

The original statute was written in 1710 with the title

No, it wasn't. TFA is talking about American copyright law, which dates to the first copyright act of 1790. The statute you're citing applied in England, and it was certainly not the first copyright statute anyway; the concept dates back to the late 1400s in various Italian cities, where certain publishers or writers were granted exclusive rights to publication, usually for periods of 7-10 years.

Comment: Re:And that's still too long (Score 5, Insightful) 328

I completely agree with you that 20ish years is plenty before a work enters the public domain. The original 1790 statute, which had a default period of 14 years, was also plenty.

However, I think there are some things overlooked in your arguments...

It sounds plenty fucking fair. Architects & engineers don't get paid royalties for years & years on work we did ages ago.

That's because you have a choice to get paid up-front. Most artists/creators don't. If someone offered you a contract: "Hey -- you can design my building for me, and I'll give you X% of the rents for the next Y years, but I'll pay you nothing now," would you do it? What if the building was in the middle of nowhere in a completely untested market? What if your design was also very unconventional and you didn't even know if it would work?

Those are the kinds of things a novelist or even a non-fiction author, say, has to deal with all the time. They invest their time and effort spending months or perhaps years generating a work, often with no money up-front. And unless they're an established author, they're often breaking new ground, perhaps trying out something new which may or may not sell well.

I suspect most architects and engineers here wouldn't take such a risky deal. They'd prefer to actually get paid when they do their work, as do most people. Most creators take much bigger risks in the hope that MAYBE some day down the line they might recoup their expenses and time.

And -- of course -- the vast majority DON'T. For every creator who makes millions of dollars off of their books or songs or screenplays or whatever, there are thousands of creators who never really make a profit. But they try anyway, and maybe they get something back.

We certainly don't continue to get paid after we're dead.

I don't know why everyone is so obsessed with deaths of authors.

Look -- copyright is broken, but it's effectively a contract between creators and the public. If you signed onto a deal like I offered you above, where you got no money up-front, but I said you'd get a share of the rents on the building you designed for 20 years, that contract generally wouldn't void at your death. The rents would be paid to your estate or your heirs for the original term of 20 years.

Why should it be any different? The few creators who do actually make money often have kids to feed. If I spent a year writing a novel while my family scraped by, expecting X years of possible revenue from it, why should they not get those expected years of revenue just because I drop dead from a heart attack the minute after my book is published? Copyright terms should be fixed and short -- whatever they are. The death of the artist is irrelevant.

And if we fuck up, things fall apart. People can get hurt. People can die. If a screenwriter fucks up, nothing of any consequence happens.

Not sure what this has to do with anything. Are you saying that we shouldn't pay anyone anything unless their work is "essential" enough? Why the heck do we pay sports players or actors? Most people spend significant portions of their days listening to music, watching TV, etc. Just because you view something as mere "entertainment" doesn't mean that it isn't hugely important to you or society -- and if we don't have a system that rewards creators, art gets worse. Good artists choose to do something else with their time. And there are also writers who contribute significantly to new ideas, knowledge, etc. -- if these people won't get compensation, they may not choose to do it. That's potentially "something of consequence" happening.

If you did the work 20 years ago, tough shit. Welcome to the world of everybody else.

Again, I think most artists/creators would LOVE to take a deal like most people and get paid up-front. But that's not generally an option, and it's not the way we seem to do things in our culture. The only people who can demand significant compensation up-front (commissions, book contracts, etc.) are generally the people who will already be guaranteed to get a significant return anyway. For artists who do things that aren't as high-profile or are less popular, they take big gambles. If they do succeed, they should be appropriately compensated -- otherwise, why the heck should anyone spend months or years of their time creating quality intellectual works?

Comment: Re:Extreme climate event: Hell freezes over (Score 2) 341

by AthanasiusKircher (#48710999) Attached to: Pope Francis To Issue Encyclical On Global Warming

What you consider 'many' is for others just a drop in the ocean.

Really? A list like this is just a "drop in the ocean"? And that's just Catholic clerics who made scientific contributions; it doesn't include other non-ordained folks supported by the church over the centuries. People who founded entire new fields and major ideas in science (Copernicus, Mendel, Mersenne, Roger Bacon, etc., and, if you include non-clerics, people like Lavoisier, Descartes, Pasteur, etc.) are just a "drop in the ocean"?

During the times you mention the 'scientific' disciveries of the catholic church is dwarfed by islamic, indian and chineese research and discoveries ...

The "times [I] mention" were the past 1000 years. It's true that European scientific advances were slower for maybe the first 500 years or so of that period, and activity outside Europe was often greater. But the Catholic Church was the "best game in town" for supporting science and the production of new research into nature, mathematics, etc. in the Europe of that time.

But you'll also notice many, many scientists (mostly Jesuits) listed in the link above from the past couple centuries too. During the "Age of Discovery" in the 1500s, 1600s, and 1700s, Catholic missionaries formed a huge network of people who shared and distributed new knowledge and findings around the world. There's also a reason why dozens of craters on the moon are named after Jesuit scientists -- who were incredibly active in astronomy for centuries (despite the common myths in the Galileo story about Catholics who supposedly refused to look through telescopes and believe what they saw).

Look -- even if you believe that all of this is just a "drop in the ocean" of scientific discovery, I wasn't trying to argue that the Catholic Church was solely responsible for scientific discovery -- only that it has not been vehemently anti-science throughout its history, as some people seem to imply.

You want to know what is really a "drop in the ocean"? Give me a list of scientists who were supposedly actively persecuted by the Catholic Church during its history for their "scientific" findings. You have Galileo and maybe Bruno (if you even count him as a "scientist" -- his ideas were pretty wacky and his "methods" were more of speculative philosophy than anything like "science"). That's two people. Maybe a few other incidents in a thousand years, but somehow that's all most people seem to know about the Catholic Church and science. How does that square with the list of people in my links above? Church persecution and suppression of science is a "drop in the ocean" compared to its consistent support of science over the centuries.
