Comment Re:I hate euphemisms.... (Score 5, Insightful) 89

The "gig" economy is a bullshit attempt to glamorize and hide the real issue, which is a population outpacing the availability stable employment that provides necessary benefits.

In some cases, this is being driven by population vs. employment. But in many cases, simple greed is a major contributing factor. It's so much cheaper to operate a business with a bunch of part-time workers. Many businesses would prefer it, if they could get away with it.

Instability should never be viewed as a good thing.

Yeah, unfortunately most folks in the past couple generations never had to see what the "gig" economies of the past were actually like. Back when you'd go to your local town square or down to the docks or whatever, and stand in line waiting for some potential employer to choose you for work FOR THE DAY. And then you'd break your back doing labor for the day, make enough money to feed your kids, and you'd be off again begging for work the next morning. If you hurt your back or got sick or whatever, you and your family were screwed. End of story.

This was what employment was like for LOTS of people for millennia. Skilled workers like craftsmen could sometimes get more stable jobs, because their skills increased the productivity of the business and employers recognized that.

But for laborer jobs or other things you could likely be trained to do in an hour or two? Not so much. And that's what many modern "gig economy" employers are exploiting again -- can you drive a car? Fine: you're a part-time Uber driver. Be sure to show up on time and be pleasant enough to keep the high ratings, or you won't have a job tomorrow.

Lots of people today criticize unions (sometimes rightly) for corruption, etc. But what unions fought so hard for for a century or so was to finally get modern civilization out of that and recognize that even laborers and unskilled workers deserve to be treated with dignity in their jobs, rather than discarded at the end of the day.

But no more -- I'm frankly shocked at how many younger folks seem brainwashed by all the hoopla over the supposed benefits of the "gig economy." People who know anything about history, on the other hand, see this as exactly what it is: an opportunity for businesses to return to a model where they make greater profits and don't have any obligations to their workers beyond today.

Comment Re:All these words (Score 1) 103

The problem is that someone (I think intentionally) co-opted "fake news" to mean "biased reporting" -- that's not originally what it meant at all, and a lot of people are still using the term to mean something else. "Fake news" originally (as in a couple of months ago) meant completely fabricated stories.

You gotta understand free expression of language these days -- I guess they're just "free wielding" the term.

(Or, at least that's what I'm guessing these two words from TFS might mean when put in sequence. But from the summary, I still can't understand exactly what Zuckerberg's note was "wielding." Didn't know Slashdot was into actually spawning NEW eggcorns.)

Comment Re:Not accidentally! (Score 2) 106

Well, as other replies have already pointed out, the strict distinction you're trying to make here doesn't really hold in English. Both "accidentally" and "inadvertently" can easily apply to something a person was trying to avoid.

But I sense a problem with the headline too, and I think the real issue is -- why are the POTTERS mentioned at all? There's a kind of implication with the way the headline is worded that the potters "recorded" information, except they had no concept of what such recording might amount to or what that information might be. Really, the potters had no intentionality here at all.

This is even worse in the original NPR headline, which is "Iron Age Potters CAREFULLY Recorded Earth's Magnetic Field -- By Accident." That's more problematic from an English usage standpoint, because "carefully" implies they did something with care... specifically they "recorded" with care. But they didn't know they were recording anything, so how could they do it with care?

The clearer way to word all of this would of course be to take the potters out of it completely, since they weren't "recording" anything -- intentionally or unintentionally. They were making pottery. A better headline might just be "Strength of Earth's Magnetic Field Recorded in Structure of Iron Age Pottery" or something. Even better, leave "record" out of it entirely, since that usually implies intentionally preserving information to begin with -- maybe "Historical Variance in Earth's Magnetic Field Strength Measured Using Iron Age Pottery" or something.

Comment Re:Management doesn't know what it wants (Score 1) 158

Those that are quite horrifying. I'm thinking call center jobs or any such service level position. Ones where you are not measured by how well you resolve the customer's issue but how many calls you get through and how quickly you do it.

Yeah, exactly. And this is NOTHING new. A couple of decades ago I took a summer job in a collections department. It was a horrid job, but it paid reasonably well for a summer position. Our productivity was measured almost solely in the number of accounts we handled and the amount of money we brought in through collections. Whether we actually handled the accounts "well" wasn't really a factor (which led to gross inefficiencies and hordes of "problem accounts" that simply went more and more delinquent as they were passed off because nobody wanted to take the time to deal with them). I managed to figure out ways to make my own account handling more efficient, so I actually processed significantly more accounts than anyone else in the office.

Anyhow, after I had been there for a couple of months, they decided we still weren't "efficient" enough and that they weren't tracking us enough. (We frequently spent more time filling in stupid, useless spreadsheets logging all the calls we made -- so that someone in management could monitor our "productivity" -- than we spent actually making the calls.)

And so they instituted a policy that we had to "log in" to our phones while we were at our desks, and log out whenever we were on break or at lunch or whatever.

About two weeks after this policy started, one day I ended up skipping lunch because I was dealing with a particularly problematic account. But then I took a longer afternoon break to make up for it and was out for 18 or 19 minutes instead of 10 minutes -- I figured this wouldn't be a problem, since I had given up my 30-minute lunch break and effectively worked "for free" for 20 extra minutes that day.

The next day, I got called over to my boss's desk, where she had been instructed by company-wide memo to reprimand me, because my name ended up at the top of some list of people who took extra-long breaks. My boss was apologetic, since she knew I was more productive than anyone else in the department, but this is what management was wasting its time doing: sorting some spreadsheet column or whatever and looking only at how long somebody was out for a single break.

Anyhow, with only a few weeks left in the summer, I simply said, "Sorry, these are unacceptable working conditions -- I'll pack up my things and leave," and walked out on the spot. (There had been other similar crap leading up to this too.) Only time in my life that I did that, but I think it was entirely justified. Last I heard, the entire collections division ended up shut down and outsourced a few months later, probably because workers were spending twice as much time filling out spreadsheets and logging into phones to prove how "productive" they were rather than actually doing work. Idiots.

Comment Re:Fake news is real (Score 3, Insightful) 892

Both items were passed off as "news" by seemingly legitimate news organizations. Both items are fake news - literally fake.

You seem not to understand the difference between "fake" and "incorrect/erroneous." If you hand a bouncer a "fake ID" at a bar, it doesn't mean you accidentally handed them someone else's ID, or handed over credentials that were expired or otherwise unacceptable for getting into a bar. A "fake ID" implies that you KNOWINGLY manufactured a false ID (or had someone do it for you) with intent to pass it off as real.

Do you have evidence that the reporters in question actually INTENTIONALLY passed along false information? If not, they were not "fake news" according to the standard definition of the English word "fake."

And they offered corrections. Here's the detailed account from Time about the MLK bust. The reporter corrected his tweets as soon as he had recognized an error. That's NOT what actual "fake news" sites do -- because fake news sites KNOW their information is false when they MAKE IT UP, so they don't offer corrections.

As for the other incident, it's yet another example of poor reporting, but only because the Olympian gave an interview that IMPLIED a connection with Trump's immigration policies and only FOUR DAYS LATER tweeted that the incident had actually occurred in December. Again, we should be critical of the poor reporting here that went on to make an EXPLICIT connection with Trump -- the reporters should have fact-checked when the event actually occurred -- but the Olympian in question was vague in her original interview and implied it had happened recently.

So, who exactly is at fault here? The Olympian was expressing concern over current immigration policies and made a vague reference to detention, which was only later clarified. Was she part of some massive media "conspiracy" to hide the truth until four days later? Or did she just innocently make reference in an interview to an unpleasant experience she'd had with immigration recently -- and some media articles misinterpreted her vague timeline?

I'm NOT going to excuse those media reporters who implied a Trump connection -- they made a serious journalistic error by not doing appropriate fact-checking. We should condemn their actions and poor journalism.

But once more detailed information became available, they corrected their stories -- once again, that's NOT the practice of "fake news."

There are various bad journalistic practices in the world. And we should condemn them, and even fire journalists sometimes for making truly egregious errors or showing unreasonable bias or whatever. BUT UNINTENTIONAL ERRORS ARE NOT "FAKE NEWS." Fake news is a separate problem -- and a serious one that we ignore by misusing the English word "fake" or redefining it to dilute its meaning.

Comment Re:Uber? (Score 2) 640

This is a very interesting and well thought-out post. Thank you.

Most posts here miss the point that almost every event has multiple contributing factors. Obviously the driver was drunk. That's probably the most major contributing factor. But could a car with unusual acceleration characteristics also be a contributing factor? Possibly. Heck, the car ran into a tree. Maybe somebody planted that tree there 40 years ago. Did that person contribute something to this accident? Yes, obviously... maybe if that tree hadn't been there, everything would be okay.

The issue isn't whether or not there were hundreds of contributing factors to an event, or whether any one of them could have prevented it ("Darn that tree planting guy!"), but rather which contributing factors may have displayed negligence and created a "hazard" -- either legally or morally.

I haven't driven a Tesla, so I don't know how it handles. Clearly there are a lot of Tesla drivers who like how they drive and don't see a problem with them. But the parent here has a valid point that at some point we may reach a place where the acceleration and handling characteristics of some cars become more hazardous for the average driver.

And that's about the only part of this story that's worthy of debate.

Should this story be on Slashdot? NO. NO. NO. Clearly, it is an attempt by the editors to play off of the libertarian sympathies of many people here who are still pissed at how Tesla has had to do battle with car dealership laws, etc., and whose ire has already been inflamed by ridiculous charges about how the media seems to be attacking Tesla whenever it can... and now here comes a grieving father who is lashing out at something that really COULD have contributed something to this crash (in addition to the alcohol, etc.).

Let's all just take a deep breath and acknowledge that none of us would want to be in a place where we are grieving for a child -- and that if we were, we'd likely want to find "someone to blame" too. And then let's move on from the TROLL NAMED BeauHD WHO POSTED THIS STORY HERE TO GET EVERYONE YELLING.

Comment Re:He does have a point... (Score 3, Interesting) 251

However, I'd like to see some discussion of his statement.
Would a better connection between humans and machines be beneficial?
What would be the benefits/problems?
How could this be achieved?

To discuss something meaningfully, you need to have a freakin' clue how it might work. At this point in time, we don't. We don't know how the brain works. We don't have anything close to strong AI. The best interfaces we're looking at now are stuff like moving artificial limbs or whatever. To speculate on what might happen IF we could do all of this would be sort of like walking up to Isaac Newton in the 18th century and saying, "Sir Isaac, what problems do you think will occur with the internet next year? What will the major benefits/problems be of new advances?" Even if you explained the basic idea of the internet to Newton, I doubt he'd have enough perspective to meaningfully debate what might happen.

But, having put forth that disclaimer, I'll just note a few complete speculations in response to Musk. First is that his argument seems premised on the idea that a faster interface from brain to world would be beneficial to humans. Maybe it would. OR maybe our brains are somewhat limited in maximum input/output in ways that we can't really understand yet because we've never tried what he's proposing. Typing is about the right "speed of thought" for me to create coherent text. I've tried dictating, and I need to pause, correct, and reword too much for it to be useful to me. That seems to be how my brain works... although if I really needed to, I probably could retrain myself to dictate better.

But what if you increased my potential output by 10-fold, 100-fold, a million-fold? Would that actually be useful for me to interact with the world better or faster? Or would it just result in gibberish because my brain literally can't adapt to working much faster than it already does in USEFUL output? Or maybe the plasticity of the adult brain isn't enough to adapt -- so we try hooking up infants from birth with these things. Maybe it works... or maybe it just drives them insane or leads to other brain development that effectively renders them LESS functional than "normal" humans. Not saying this WILL happen, but it's a possibility when you're talking about an interface where we have absolutely NO IDEA what specs might work. Human brains have spent millions of years evolving into what they are to work efficiently at the speeds they do. Just because you could theoretically hook up a device to increase input/output doesn't mean the brain can actually change and adapt enough to make use of the throughput meaningfully.

Also, I think it's important to note in a discussion like this that one of the PRIMARY hallmarks of human intelligence is FORGETTING. One of the things that makes humans so much better than machines is our ability for abstraction -- finding larger patterns so we don't have to parse the "stream of consciousness" directly all the time. And then we sleep, and our brains revisit the memories of things that we've evolved to assimilate as "important" data, while we forget millions of random little details of our day at the same time.

Effectively, we take a very TINY percentage of the "noise" that is input into our brains and actually remember it in any detail, mostly through complex pattern-matching that we're only beginning to emulate in specific cases with computer algorithms. But the point is that there's only so much that we CAN assimilate into our brains -- and that goes not just for memories, but for new skills or whatever. (Think about when you've tried to learn a skill by "cramming" for a full day or two vs. when you've practiced for a few minutes a day over weeks or months. Your brain needs the "downtime" to assimilate new skills... increasing input or output seems unlikely to make that process faster.)

My speculation is that Musk's idea is rather pointless for somehow keeping humans "relevant" or whatever. IF we develop strong AI that can actually learn as well as humans do, the increased capacity and efficiency will potentially grow exponentially and render humans obsolete within a generation or two. Human/machine interfaces might provide some interesting advantages for humans, but I sincerely doubt we'd actually maintain an "edge" over pure machine intelligence once strong AI occurs. But who knows? Perhaps there is something special about biological systems that won't be replicated easily with circuits, so maybe a "hybrid" could still have a purpose. But to speculate on that would be even worse than Newton speculating on new developments of the internet.

Comment Re: Another Black Mirror episode (Score 1) 130

Kind of like the Tina Fey comments in an awards show setting against Bill Cosby years before serious accusations became public and widespread.

Just to note, the accusations WERE public and widespread back in 2004-2006 or so. It's just that few people took them seriously... the "Cosby Show dad" mystique and the years of pitching Jell-O to kids, etc., seem to have protected him back then. Tina Fey was one of many back then who DID pick up on it, but most of the media just forgot about it.

I don't remember seeing that stuff in 2005 or whatever, but I distinctly remember when I myself discovered it about Cosby, when I happened upon a story entitled something like "How we all forgot about how Bill Cosby is a rapist" -- and that was back in 2012, I think. And that was a couple of years before it was plastered all over the news again -- but once I read about it somewhere, it was easy to find all sorts of material on it, much of it from well before 2012.

Tina Fey was reacting to something that was actually public knowledge and had been the subject of news stories at major media sites... it's just that the rest of the media didn't pick up on the "drumbeat" until a decade later.

Comment Re: but but but (Score 1) 557

Yup, and this essentially amounts to doing things the way that MS Office does them. The way you've already learnt to do things is the easy way, because doing things any other way first requires unlearning the way you've already learnt.

Your point would have greater import if Microsoft itself didn't have a history of upending the UI and forcing users to completely unlearn the way they used to do things and learn a whole new system -- see the "Ribbon" debacle.

I'll admit you have points otherwise, but at least with a FOSS solution, you're likely to see a fork if any development group attempts to change a major application so drastically. And even if they don't, with FOSS you can at least pay some developers to maintain the old code -- likely for a lot less than licensing for deploying commercial software across an entire city/corporation for decades.

Comment Re:It's not office. (Score 1) 557

I don't need formula and all the other reasons for using latex are no longer that relevant.

Word processors are not appropriate for large documents that need consistent formatting. They're not desktop publishing applications. Aside from a more "pretty" output with less work, LaTeX also ensures lots of consistency across your document without having to think about it... whereas Word tricks you into thinking you've done something right with the WYSIWYG environment... until you accidentally do something that messes up the formatting.

Also don't want to spend time learning what is essentially a new language with often cryptic build tools so I can write a document.

One word -- LyX.

It's basically a GUI word processor of sorts, with LaTeX under the hood. Click a button to get a PDF.

Yes, if you need really specialized custom formatting or unusual features, you may need to dig around a bit to figure out how to do them. But the good news is once you solve a problem in LaTeX (and LyX), your solution usually "sticks." Solve a problem in MS Word with layout, and change some other random feature in your document, and suddenly your custom formatting screws up in all sorts of unpredictable ways. That's because MS Word is NOT a desktop publishing tool. If you want proper handling of large documents with consistent formatting, etc., you want to use something appropriate -- either LaTeX or something commercial like InDesign.

With LyX, you won't have the learning curve for LaTeX in pure text form. Mostly you just choose a document class appropriate to your task, select a few options from it to customize your formatting, and you're good to go. You can even use a non-TeX font, thanks to built-in XeTeX/LuaTeX support. I'm not going to oversell this, though -- you will probably spend a couple of days setting up a custom document preamble to get everything exactly the way you want it (if you care about typography... but if you care about typography, you wouldn't be using Word or any normal word processor).
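Just to give a flavor of what "setting up a preamble" means in practice, here's a minimal sketch of the kind of thing I have in mind -- pasted into LyX's LaTeX preamble box (under the document settings) or at the top of a plain .tex file. The specific font and heading choices below are purely illustrative assumptions on my part, not recommendations or anything a school would require:

% assuming a XeTeX/LuaTeX engine, so any installed system font can be used directly
\usepackage{fontspec}
\setmainfont{TeX Gyre Pagella}  % illustrative font choice only

% nicer justification and spacing with essentially zero extra effort
\usepackage{microtype}

% set heading styles once, instead of restyling each heading by hand
\usepackage{titlesec}
\titleformat*{\section}{\large\bfseries}

And that's the point: you declare it once, and every section heading and every paragraph in the document obeys it, which is exactly the consistency Word makes you chase manually.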

But if you don't care about typography so much, you just need to conform your thesis to your university's requirements. Some schools actually have LaTeX templates available for use (officially or unofficially)... but if not, you may need to do some customizations. Luckily, you can often just ask in a TeX forum somewhere (e.g., on StackExchange) and people will frequently just give you the appropriate commands to include if you ask your question clearly.
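As a purely hypothetical example of what those forum answers tend to look like: suppose (and I'm making these requirements up) your graduate school demands a 1.5-inch binding margin on the left, double spacing, and a page number centered at the bottom of each page. The typical answer would be a few preamble lines along these lines:

% hypothetical requirements: 1.5-inch left (binding) margin, double spacing
\usepackage[left=1.5in,right=1in,top=1in,bottom=1in]{geometry}
\usepackage{setspace}
\doublespacing
\pagestyle{plain}  % page number centered in the footer

The particular values don't matter; what matters is that conforming to the requirements becomes a handful of declarations you set once, rather than formatting you have to re-check on every page.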

As someone who went through the process of writing a thesis and also helped a couple others deal with last-minute formatting problems in MS Word, let me just say: you're going to spend at least a few days dealing with formatting issues no matter what. If you go with MS Word, unless you're a wizard who knows all the possible places Word will screw things up, you're going to spend several days at the end dealing with headaches where the text just doesn't flow properly or that figure/table/image/whatever simply disappears or completely ruins the formatting for an entire chapter for no apparent reason.

LyX isn't a perfect solution, since ideally you need to be familiar with the underlying LaTeX code to fix the few things that do go wrong. But if you're just doing one document like this, you can likely get the support you need on a forum. Chances are many of the questions you may have are already answered for you out there.

Comment Re:A lunar eclipse at full moon... (Score 5, Informative) 28

A lunar eclipse at full moon is a given ??? Are you crazy? No, its not!

I think you misunderstood GP, which admittedly was a bit poorly worded. I'm pretty sure GP meant that IF a lunar eclipse is happening, it IS a full moon. So two of the "triple features" mentioned in the summary are bound to be together anyway... all the nonsense about the "snow moon" notwithstanding. (What is the sudden obsession with old moon names in the past year or two? For years, hardly anyone used these terms outside of the Farmer's Almanac, and suddenly they're all over the news... and people keep acting like they're significant -- "Ooooh a 'SNOW moon'... ooooh a 'HARVEST moon'..." -- these happen every single year and mean nothing other than what month it is.)

The fact that the summary doesn't acknowledge that full moons are just normal at lunar eclipses makes it sound silly at best, ignorant of basic astronomy at worst. Terms like "rare convergence" make it even worse. And lunar eclipses aren't exactly "rare" events to begin with, although there are some unusual features of this penumbral one... but the fact that it's penumbral will actually make it less interesting to look at to the average observer.

Is it just me, or has there been an increase in hyperbolic astronomy stories recently? There was all the "supermoon" nonsense last year -- again, mildly interesting for astronomy nerds, but not so impressive for the average Joe who barely looks up at the sky. (The moon really wasn't THAT much bigger.) Now we're billing a "triple feature" for a lackluster lunar eclipse coupled with a full moon (which would be there anyway if an eclipse is happening), and a comet that you need binoculars to see.

I'm all for getting people to look at the sky and to get interested in astronomy, but if you overbill the significance of such things, I don't think it helps.

Comment Re:"...which begs the question..." (Score 1, Insightful) 341

No matter how many people use literally to mean figuratively, no matter how many dictionaries take note of the inverse usage, it is still wrong, and anyone trying to avoid looking like a moron would be wise to steer clear of incorrect uses. Ditto "begging the question".

While I absolutely agree with you that educated speakers/writers need to simply avoid "begging the question," I also absolutely disagree with you about your use of the word "wrong" here.

Language is about communication of meaning. It's not a "game" where you get to "win" if you check off enough of the "rules." I'm not sure there is ANY English speaker out there familiar with the phrase "begging the question" who is unfamiliar with its meaning to "raise the question," and generally it's clear from context if this is the meaning intended.

Meanwhile, I can guarantee you that outside of philosophical circles and wacko grammar pedants, NO ONE will understand you if you use "begging the question" to mean petitio principii.

Hence, 98% of people will understand the phrase "begging the question" in its common meaning of "raises the question," and of the remaining 2%, the 1% of philosophers won't much care which meaning you use. And the other 1% of wacko grammar pedants actually KNOW about the modern usage, so they'll understand it too, even if they mutter under their breath.

So, if we're looking at language as successful communication, using "begging the question" to mean "raising the question" has a near 100% success rate in communication, and a 1% failure rate among the lunatics who don't realize language isn't a weird game where you keep score. But if you use "begging the question" to mean petitio principii, you'll likely only communicate with 2% of your audience at best (and that's assuming an educated audience). Communication failure.

There are all sorts of reasons "begging the question" was doomed to failure as an English phrase from the start -- it was a bizarre and archaic attempt at a translation of the Latin phrase even when it was coined hundreds of years ago, and it was based on a poor Latin translation of the original Greek. The "modern" meaning of "raising the question" has been used in learned discourse and by good writers for well over a century -- in previous threads about this, I've posted an example of a debate in Parliament from the 1820s I think where the "new" meaning was already so well established that a representative could make a pun on the two meanings.

The battle has been lost. "Wrong" is meaningless here.

That said, I'll agree with you that "literally" is a different sort of beast, since it has much greater potential for confusion between the two meanings. That doesn't mean I would condemn the new meaning as illegitimate -- but I agree that there's a good reason to stick to the original meaning there. "Begging the question" is no longer even in the running. I avoid it everywhere not because of confusion (since EVERYONE knows what it means, i.e., what you declare to be "wrong"), but because of the tiny minority of self-righteous lunatics who can't understand that educated usage has already changed... about a century ago.

Comment Re:Cheap (Score 1, Offtopic) 626

not because Trump is about to take away their cheap slave-labor pool and make them hire American workers.

You obviously have a point that many of these companies have an interest in getting cheaper workers, but I really wish you'd tone down the rhetoric of "slave labor." Slaves, typically, aren't paid at all. Yes, employers can be more abusive toward H1B workers, and that's a real problem, but "slave labor" is just hyperbole.

You want to talk about immigration and "slave labor"? Look toward what could happen under the Trump presidency and farm workers like pickers. Those are people who frequently work 12-16 hour days in fields, frequently in 100-degree weather. And get paid something like $10/hour (for SKILLED workers). Stats usually say that half of the U.S. farm labor force is illegal, and 3/4 of it is composed of immigrants -- because Americans simply refuse to work these jobs for that pay. There are actually other issues too -- because many of these jobs are really "skilled labor" in the sense that it can take a few years of picking a specific crop to get really good and fast at it, and these workers generally get paid per volume picked.

Not to go too off-topic here, but the U.S. is going to have to deal with some harsh realities if Trump actually tries to follow through with his immigration threats against Mexicans and Latin Americans. A number of states tried placing harsh restrictions on immigrants a few years ago, and farm owners ended up with worker shortages and crops that rotted in the fields. Without immigrants, we're looking at a future of either food shortages or significantly higher food prices... or both.

Anyhow, if you want to talk about "slave labor," there are plenty of jobs Americans get immigrants to do in much harsher conditions, with backbreaking work, often at significantly lower wages than any tech workers. I'm not saying tech workers aren't exploited too -- but in the grand scheme of things, it's not "slave labor."

Comment Re:Coding achieves the "expand your mind" objectiv (Score 4, Insightful) 328

I don't necessarily have an objection to some form of coding requirement. However...

So, if you look at the foreign language requirement for what it is (an "expand your mind" requirement)

"Expand your mind"? That's really vague. Just a few things foreign language requirements help with that coding doesn't:

-- English grammar and usage. Many good writers and speakers have noted that they first really understood grammar and the details of English usage when they studied a foreign language. Now, of course it's possible to refine one's language use without formal grammar training, but the process of deconstructing a foreign language is often helpful in understanding one's own.
-- English etymology and vocabulary use. Particularly if one studies a Latin-based language like Spanish, French, or Italian, one gains knowledge of Latinate roots, which are often helpful in figuring out Latin-based English words. Frequently in the first few years of language instruction, you'll learn a lot more English vocabulary through relationships with the other language. Germanic languages are also helpful in learning new English words, due to common older roots.
-- Communication skills. A lot of students who just take a couple years of a language in high school or whatever don't really get a proficient speaking level, but that's largely due to lack of practice and subsequent failure to "keep up" the training. Nevertheless, for many students who do take the oral skills seriously, languages like Spanish can be incredibly helpful for communicating with customers/users and other job contacts in many professions. If you have an opportunity, doing something like Mandarin or Japanese can open yet other doors.
-- As one learns another language, generally one learns about other cultures too. Which again is often an introspective exercise in learning about your own culture -- you often don't realize your assumptions about the world until you contrast them with someone else's. This can be a very eye-opening exercise for young people.

None of this is an argument against coding. But there are more specific things language requirements do, aside from basic skills in that language or "expanding your mind" (whatever that means).

I think that it is not too much of a stretch to think that coding will eventually become the Latin and Greek of our culture.

Huh. I'm not sure even how to begin responding to this. The reason Latin and Greek were taught in schools commonly until the mid-20th century is because they not only served as a common communication system in many fields, were the basis of many modern languages, and were the most common languages of historical documents over a span of more than 2000 years, but also were the foundation of much of Western culture and political systems. There's still a vast amount of classical, medieval, and early modern literature unavailable in translation -- and when I say "literature" I mean all documents, including scientific and technical advances, as well as cultural artifacts.

While I'm not arguing for a return to Latin or Greek requirements, I don't think it's a coincidence that the U.S. government started wildly straying from the original restrictions on federal power in the early to mid 20th century as knowledge of Latin/Greek and related Roman/Greek history (and political science) decreased. Sure, it's possible to read about these things in English in translation, but the widespread use of Latin led to a promotion of related cultural knowledge (see above), including political and philosophical questions. The Founders of the U.S. all knew their history very well and designed our government in various ways to prevent recurrence of problems that happened in ancient societies. All of this is largely forgotten these days, at best a marginal sidenote to history courses in many public school curricula.

And that's just the tip of the iceberg. Latin and Greek had even more benefits for learning about English language -- both in contrast and through the systematization learned through formal grammar classification. And the etymological aid is significant in deciphering a lot of technical English words.

In fact, the entire system of rational discourse itself has suffered greatly with the decline of Latin and Greek study. Not because these languages hold any special value in themselves, but because we lost a lot of the accompanying "culture" of these languages. For example, third-year or so Latin students traditionally would study the rhetoric of Cicero, and in the process, they themselves would learn rhetorical terminology. Similar students in Greek classes would read philosophical texts. Whether or not the students continued to read in the original languages, culturally they were imbued with the knowledge and interest to head to Plato and Aristotle, etc. In studying rhetorical and philosophical forms, they'd also learn errors of logic, as well as examples of fallacies used for rhetorical purposes.

All of those skills would be valuable for people today as they confront political speeches and oratory, trying to parse their persuasive style and rhetoric, their methods of "stirring hearts," and the fallacies they gloss over.

Obviously, back when Latin and Greek were standard high-school curricula (with all of the accompanying classical history, rhetoric, etc.), it was also a different time, when high school was still mostly something done by upper-class folks and those headed in that direction. So whether that would have ever translated well to the broader audience of today's educational systems is questionable. But we didn't really try... we just dropped it.

Finally, I'd just note that anyone who views reading a Latin sentence as an "intellectual puzzle" doesn't actually know how to read Latin. Latin was basically a living language until the early 1800s among most intellectuals, who spoke it in schools. It gradually died out over the 1800s and early 1900s, earlier in the U.S. than in many European curricula. Already in the late 1800s you can begin to see articles angry about the loss of oral/aural knowledge and how it impeded an understanding of Latin in curricula. The idea of Latin as "intellectual puzzle" is only the horrid end of a dying language tradition as it hung around for several decades after proper oral instruction ceased. Nevertheless, even in this moribund state, it still could lead to many of the linguistic and cultural benefits I mentioned above.

Again, none of this is an argument against coding. But the idea that coding would have the same cultural place as Latin and Greek just seems a bizarre claim.
