Comment Re:Too Much Hype for the Khan Academy! (Score 1) 133

I agree that watching lectures online is not the most effective way to gain true mastery of a subject. It's also true that some subjects require access to lab equipment and other expensive or rare physical materials. It's worth looking into ways to make online learning more effective, and it will still be a very long time before people such as hiring managers are convinced of the credibility of self-taught students. But it IS an excellent way to prime yourself for an upcoming class by working through some example problems, or simply to gain introductory-level knowledge through recorded survey courses requiring little technical background (iTunes U has a lot of this kind of material).

Not all learners are trying to replace a traditional university education with online lectures, or to achieve parity with a graduate student. Some are middle-aged, career-laden, family-burdened, cash-strapped people who just want to broaden their horizons, or professionals who want to gain interdisciplinary knowledge. And, of course, some learners live in places and situations where they could not dream of getting a college education. Any effort to bring knowledge closer to these people deserves praise.

There is no harm in a private company making a large donation to one of the most prolific individual contributors to the field. The money is partially going towards translating his content into many languages. If anything, that will allow this material to serve as a modern teaching aid in places where no free material exists in the region's most common languages. Much of the internet's undergraduate-level educational material is still English-centric.

I applaud Khan, the many YouTube channels dedicated to sharing knowledge, institutional projects like OpenCourseWare and, of course, Wikipedia for making free knowledge available online. I can find no good reason not to be glad that Google is making a cash contribution and maintaining YouTube as a free service, without which Khan might never have been able to host and stream this much video to this many users.

Comment Re:Stoned... (Score 1) 408

You make a lot of good points about separating the cold, rational side of science as a discipline from what is not its proper domain. But I must steal back some of the soul you have purged from it by abstracting away the person – science does not happen in the absence of a human actor.

It has not been my experience that science offers none of the pleasures and dilemmas you so neatly factor out of it. There is something to be said for luminaries whose unique insight would have been ignored if they did not have the venue of science in which to knock heads against hard facts and force cultural change. Hard work, altruism, empathy, controversy, struggle and a desire to touch lives are no less evident in the enduring life stories of many scientists whose names, long since engraved on their tombs, can still be found embedded in research papers on novel cures, in their contributions to institutions of human progress, and in the countless generations their work will continue to affect.

As for whether science can tell right from wrong, why you are here, your purpose in life, or push you to become a better human being, I doubt that any of these could so easily be factored from the scientist in the person as from the raw, disembodied notion of science. The pursuit of knowledge is a passion that has led to many deeds of both profound and dubious value, but always by the hand of people.

In the spirit of skepticism, I hope that when you met your example of an evolutionist (and chided him for the mistake of simplifying evolution down to mere randomness), you gave fair thought to the analogous religious man who defers to supernatural authority when justifying earthly violence. We can both agree that no doctrine necessarily imparts its wisdom in full to fallible humans, and that humans, being both rational and emotional beings, need guidance of many kinds.

Comment Re:Stoned... (Score 1) 408

Thank you for this rational perspective, which, for the sake of responding to a central point, I shall crudely boil down to “question everything.” At the heart of science is critical thinking, which cannot be suspended even in the presence of mountains of evidence. Knowledge is truly illuminating, but having endeavored to shine a light on a dark spot in our mind, we are too easily tempted to turn away, confident that we have explored enough of its folded surfaces to explain its true nature.

Science builds our understanding of the universe by rationalizing observations of reality and adhering to logic when arguing our conclusions. We may gain reasonable confidence that our models fit the reality we observe if our data and logic support those models. But “fit” may be the best we can do in any case. There is no shame in this – all good science rests on falsifiable experimentation. Regardless, that perpetually unresolved mystery is the dynamo that drives young minds to make attacking those shadowy folds their life’s work. A world without that mystery would be very dull to me.

Science is neither a purely academic exercise in irresolvable, tenuous conclusions, nor does it typically lead to absolute truths. We use its models to explain, predict and improve our world through engineering, medicine, commerce and any number of fields in which their application is, for most purposes, to our great benefit. It is not the fault of science that its students often come away believing that absolute truths are common (they are in fact rare). Not all minds are prepared for, or necessarily benefit from, filtering all imparted knowledge through intense critical thought. To ignore that and carry on deriding others for their misunderstandings makes us pedantic jerks, heedless of the very people our science and teaching actually affect.

It is important to recognize that our experts (good scientists) often regard their conclusions with even more scrutiny than we do, and stake their careers on them. Sometimes their motives, methods or deductive powers are suspect, but our protection from that is built directly into the scientific method, to which they must adhere if they are to be respected. Falsifiable experimentation, well-documented and repeatable methods, obtainable data, and adherence to strong logical argument and mathematics constitute the language of higher-order understanding.

It is a question for philosophers whether it is necessary to strictly rationalize everything. I believe that science, through critical thinking, is the way to raise rational humans, and that doing so will lead to a better, more plural civilization, but I will defer here, since I understand little about raising good people.

Comment Re:Here We Go Again (Score 2, Insightful) 238

http://www.vimeo.com/siai/videos/sort:oldest
http://singinst.org/media/interviews
http://www.youtube.com/user/singularityu

Well, a lack of searching is not a lack of material: you can find several hours of Ray's talks on video from the Singularity Summit (2007, 2008, 2009), TED.com, Singularity University and plenty of independent YouTube videos. He also has two movies out (I haven't seen either): Transcendent Man, which criticizes his esoteric side, and The Singularity Is Near (based on his book), which supports his ideas.

All of this talk about his figures being wrong rather misses the point. Saying we'll have conversations with virtual humans by 2030, or that we may have to cope with an AI superintelligence by 2050, is quite different from noting that either situation is entirely possible when extrapolated from current trends, and that the discussion is worth having.

As a computer scientist, I can say that it will be hard to do. As a scientist, I can say it's pretty foolish to claim that because something is hard it will never happen (nature already built humans, and building a human is pretty hard).

Comment Re:We all know about the scientific method. (Score 1) 238

You’re right that the 30,000:1 error ratio was pulled out of someone’s ass as a dogmatic argument; it’s the ratio of 90 million to 3,000. It was brought up to support the unsupported claim (it doesn’t mesh with my beliefs, therefore there must be something positively wrong with it) that radioactive decay is so inaccurate as to be unusable as a scientific tool, a claim often rooted in willful denial of scientific experimentation (which is the opposite of dogma).

The rate of radioactive decay is measured experimentally as activity, in decays per second (becquerels, or curies), but for radiometric dating it is more usefully expressed as a half-life: the period of time in which 50% of a sample decays to its daughter product. We don’t have to wait for 50% of a sample to decay (although for some particles in colliders that would only take picoseconds!). The half-life is scaled up from a much finer measurement made over a far shorter time with a much more accurate instrument (using, say, accelerator mass spectrometry). Imagine the absurdity of presuming to date an object millions of years old in terms of seconds; it would be like saying I always get 3% of the way to work in 27,128.395 milliseconds, when it is far more useful (and for all practical purposes equivalent) to say that it usually takes me 15 minutes to get to work. Note that I am being confidently more accurate than “between a fraction of a second and 313 days.”
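For anyone who wants to see the arithmetic rather than take my word for it, here is a minimal sketch (plain Python, using a commonly cited carbon-14 half-life and a made-up measured ratio) of how a short, precise activity measurement turns into an age estimate:

```python
import math

# Decay law: N(t) = N0 * exp(-lambda * t), with lambda = ln(2) / half_life.
# The decay constant can be measured precisely over a short interval by
# counting activity (decays per second); the half-life follows from it.
HALF_LIFE_C14 = 5730.0                      # years, a commonly cited C-14 value
DECAY_CONST = math.log(2) / HALF_LIFE_C14   # per year

def age_from_ratio(remaining_fraction):
    """Age in years, given the fraction of the parent isotope still present."""
    return -math.log(remaining_fraction) / DECAY_CONST

# Hypothetical measurement: mass spectrometry says 1.2% of the original C-14 remains.
print(round(age_from_ratio(0.012)))         # roughly 36,600 years
```

Swap in the half-life of whichever isotope the method actually uses; the arithmetic is identical, and nothing in it requires waiting around for millions of years.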

Far from ignoring cosmic radiation, I’ll cite research, experimentation and data: http://donuts.berkeley.edu/papers/EarthSun.pdf

Comment Re:Dating methods are accurate! (Score 2, Insightful) 238

How wildly different? In science we almost never get exactly the same answer; instead we get a statistical distribution (yet science still works!). I’m prepared to accept +/- 3% as a reasonable error for some experiments, while you might require +/- 0.1%. Or an experimenter might draw false conclusions from the data, or the error might be so large as to invalidate the correlation he or she draws, or the method might be entirely discredited. In any case, the results are rarely glaringly obvious (otherwise we wouldn’t need rigorous peer review), and you must qualify your criticism for it to be anything but speculation.

Comment Re:We all know about the scientific method. (Score 1) 238

We do not know for certain whether a particular stratum is exactly 90 million years old, but a possible error rate of 30,000:1 is not being passed off as credible research by any scientist working with radioactive decay-based dating. For practical purposes, bell curves serve as a useful indicator of probability by showing a gradient of weight around a mean, not as proof that the leftmost, infinitesimally improbable armpit of the curve represents any significant doubt about the central argument.
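To put a rough number on how far out in that armpit a 30,000:1 error would sit, here is a back-of-the-envelope sketch (hypothetical figures, assuming a normal error distribution with a generous 3% standard deviation):

```python
from math import erf, sqrt

def prob_at_or_below(x, mean, sd):
    """P(value <= x) for a normal distribution with the given mean and sd."""
    z = (x - mean) / sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mean = 90e6          # measured age in years (hypothetical)
sd = 0.03 * mean     # assume a generous 3% standard deviation
print(prob_at_or_below(3000, mean, sd))   # ~33 sigma below the mean: effectively zero
```

The point is not the exact figure but that the probability mass out there is negligible next to the bulk of the curve.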

Comment Re:This is a classic mistake in academics (Score 1) 830

I agree with your assertion that building a brain is nontrivial. The problem with the blog post is that the guy claims Kurzweil believes life processes are trivial, which is completely wrong if you open the damn book, flip to the page with this quote and start reading around it. He's (quite explicitly) making a paper-napkin estimate of human brainpower in CPU cycles and plotting it on a regression over historical data.
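For illustration only, here is roughly what that kind of napkin extrapolation looks like in code; the data points and the brain-power figure are placeholders I made up, not Kurzweil's actual numbers:

```python
import numpy as np

# Placeholder data (NOT Kurzweil's actual figures): rough computations per
# second available for $1000 in a handful of years.
years = np.array([1970, 1980, 1990, 2000, 2010])
cps = np.array([1e2, 1e4, 1e6, 1e9, 1e11])

# Exponential growth is a straight line in log space, so fit log10(cps).
slope, intercept = np.polyfit(years, np.log10(cps), 1)

BRAIN_ESTIMATE = 1e16   # assumed "cycles per second" for a brain (napkin figure)
crossover = (np.log10(BRAIN_ESTIMATE) - intercept) / slope
print("crossover year: %.0f" % crossover)   # ~2032 with these made-up inputs
```

The crossover year swings around with the inputs, which is exactly why the trend matters more than any particular date.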

But I will take your argument one further and say that a blueprint is a bad analogy for a genome, because a blueprint implies a 1:1 relationship between data and construction, which is not what we observe at all (Dawkins has a section on this in his newest book). It's actually closer to a computer program, because nobody could say that what happens at runtime correlates absolutely with what is written in the source (there are users, much to the dismay of many programmers). These days, programmers don't understand what goes on in individual blocks of memory or processor registers; they just call their Date library or some GUI code through an API method that was developed long ago to solve a class of problems programmers once had to deal with all the time. What's going on in memory is some arcane science packed into a compiler or an interpreter somewhere, solved long ago by some anonymous programmer whose code worked well enough to be considered "a good enough solution."

Nevertheless, we understand the mechanisms that drive that code even if the programmer is long gone, and with enough incredibly difficult and soul-crushing effort one could actually recreate the source with a hex editor, a stack tracer, a means of monitoring memory, and so on.

Comment Please read the abstract, mmkay? (Score 1) 830

I’m glad the headline formed an opinion for me, because clearly I’m not capable of forming one for myself. If not for such an eloquently worded preface to this wonderfully vetted blog post, I might have had to spend a few brain cycles deciding what I should think of Ray Kurzweil. I’m also glad the poster has decided to snipe away at a single context-free quote from Kurzweil’s entire book about estimating a time frame for science to apply information technology to genomics. It’s too bad Kurzweil couldn’t boil his entire research effort down into a single sentence like this guy did, because his book is just TL;DR.

Sarcasm aside, I don’t agree that the predictions Kurzweil makes are ludicrous at all if you throw away his estimates (as Kurzweil duly calls them) and start over with the same logic. Don’t make the mistake of treating Kurzweil’s numbers as literal prophecy: the metaphor is cute, but it is only a rhetorical device for the critic and useful, controversial publicity for the author. Scientists are quite happy to be rational about their assertions if you give them more than a paragraph to make the case (say, the length of a chapter). Simply quote-mining for phrases that are easily attacked is a device that preys on the reader, not on the actual topic of criticism. This entire chapter is about statistical estimates, not to be confused with irrefutable facts (as he disclaims in the text). Visit a bookstore, read the chapter while being very critical of every premise Kurzweil makes along the way, and then try to come away with the same fixation on this passage as a deal-breaker for his whole argument that this critic has.

The singularity, the point at which AI may surpass the human brain in cognitive capacity, is quite likely to occur somewhere in the expanse of time ahead of us as long as technology keeps improving (a more-than-fair assumption from historical data, which is all Kurzweil asserts). I have been aware of Kurzweil’s predictions for quite a while and am happy to trade specific dates for statistical probabilities, because the idea is rational enough without the specifics. A mathematician might extrapolate a trend from data without making hard assertions about future data points (which Kurzweil only does for illustrative purposes), and without tying the entire argument to those assertions like an anchor (sink or swim, as the critic does). The assertion that we will understand the human brain within the next 10 years may fall flat on its face without breaking the conviction that we may some day understand the human brain well enough to engineer a virtual copy with improvements.

The critic makes the error of claiming that Kurzweil thinks life processes are trivial because of his discourse on deriving a formula to estimate human-brain-level computational power for $1000. He then goes on to claim that nucleic acids “bootstrapping” into meatspace is such a mystery as to be impenetrable to computer science (what are those guys at folding@home and MIT doing?). He supports this by spicing his article with fragments of complex gene-protein interactions derived from experimentation, which likely could not have been discovered without the aid of computer science, and which will likely feed back into future genomics work and, in turn, computer science. Kurzweil is quite clear that this is a book of predictions about when human beings may achieve an understanding of genomics sufficient to simulate an organism, not a claim that we have already done so. Once we understand how those interactions take place, we can iteratively simulate them until our model produces results similar to experiment, building a model of nature (the scientific method). Each step in this process brings us closer to understanding the brain, no matter how complex the brain is or how long it takes (throw away the numbers and the logic still remains).
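That “simulate until it matches experiment” loop is mundane in practice. Here is a toy sketch of its shape (everything in it is made up: a one-parameter exponential model fit to synthetic data):

```python
import numpy as np

# Toy version of "iterate the simulation until it matches experiment":
# recover one unknown rate constant in a made-up exponential model from
# synthetic "experimental" data. Every number here is a placeholder.
np.random.seed(0)
t = np.linspace(0, 10, 50)
experiment = 5.0 * np.exp(-0.8 * t) + np.random.normal(0, 0.05, t.size)

def simulate(rate):
    return 5.0 * np.exp(-rate * t)

best_rate, best_err = None, float("inf")
for rate in np.linspace(0.1, 2.0, 200):              # crude parameter sweep
    err = np.mean((simulate(rate) - experiment) ** 2)
    if err < best_err:
        best_rate, best_err = rate, err

print("recovered rate: %.2f (true value was 0.8)" % best_rate)
```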

I regret that Kurzweil chose to compress the genome before making his calculation (as there may be no meaningful analogue to that compression in biology), but there is no (rational) doubt that the genome and a hearty soup of assorted chemicals are all that are physically required for an evolved sequence of nucleic acids to produce a living organism, brain included, from a single cell. Kurzweil explains very well that the genome embeds a mix of complex self-assembly instructions and information, which the author has comfortably omitted in order to make his accusation.
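For a sense of what “compressing the genome” means in information terms, here is a rough sketch using zlib on a random nucleotide string; the random string is only a stand-in, since a real genome is highly repetitive and compresses far better, which is precisely what shrinks the estimate:

```python
import random
import zlib

# Stand-in for a genome: a random nucleotide string. Real genomes are far
# more repetitive, which is exactly why compression shrinks the estimate.
random.seed(0)
sequence = "".join(random.choice("ACGT") for _ in range(1000000)).encode()

raw_bits = len(sequence) * 8                       # 1 ASCII byte per base
two_bit_floor = len(sequence) * 2                  # information floor: 2 bits/base
compressed_bits = len(zlib.compress(sequence, 9)) * 8

print(raw_bits, two_bit_floor, compressed_bits)    # compressed lands near 2 bits/base
```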

As another poster said quite eloquently, it’s a bunch of chemistry and physics that produces a body from an embryo, and that can (and, as Kurzweil states, ultimately will) be modeled; otherwise we might as well throw out our science and go back to wild speculation about the nature of our universe.

Comment Re:Living wages in virtual worlds (Score 1) 267

With a virtual workforce you get to leverage the labor of the entire world. You can be selective, take advantage of competition and people willing to exploit a niche.

If you have a language barrier, you can hire bilingual translators to sit in on your online conversations as a third party. Interview many of them and keep some on retainer; make a list of their available hours; rate them on translation speed, clarity and accuracy. Offer more money per session if you want better translators. If you’re a company, hire them outright if you like (they may have other marketable skills), ask them to sign an NDA for sensitive conversations, and even have them translate your company’s materials so you can operate globally. Hire a second translator to check the first one’s work on a contract basis.

Anyone can make a little money offering translation services on call. They could use their cell phone with a high-quality headset and make themselves available through an automated scheduling and queuing system. This would work in any country with cellular infrastructure, for any bilingual individual with any pair of languages. Rarer language pairings could be valued higher, and fluency would be valued over basic conversational ability. The system could automatically favor time zones that minimize lag between geolocated parties, but anyone could still gain an edge on others in their region by covering less popular hours. Recruiters could make a buck finding new qualified translators for underserved segments. It creates a few new markets.
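A sketch of the matching logic such a system might use (every field, name, rate and rating below is hypothetical, just to make the idea concrete):

```python
from dataclasses import dataclass

# All fields, names, rates and ratings below are made up for illustration.
@dataclass
class Translator:
    name: str
    languages: frozenset    # e.g. frozenset({"en", "pt"})
    rating: float           # aggregate of speed, clarity and accuracy (0-5)
    rate_per_min: float     # asking price in USD
    hours_utc: range        # availability window, UTC hours

def best_match(pool, lang_a, lang_b, hour_utc, budget_per_min):
    candidates = [t for t in pool
                  if {lang_a, lang_b} <= t.languages
                  and hour_utc in t.hours_utc
                  and t.rate_per_min <= budget_per_min]
    # Highest-rated affordable translator wins; rarer pairs simply have
    # smaller pools, which is what pushes their market rate up.
    return max(candidates, key=lambda t: t.rating, default=None)

pool = [Translator("A", frozenset({"en", "pt"}), 4.7, 0.40, range(12, 20)),
        Translator("B", frozenset({"en", "pt"}), 4.2, 0.25, range(0, 24))]
print(best_match(pool, "en", "pt", 14, 0.30))   # picks B: available and within budget
```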

Want your child to have a marketable skill? Teach them another language. Encourage educational programs that get people from one side of the world talking to the other through translation. You could even teach second, third, fourth (and so on) languages through this system, just by arranging one-on-one conversations with translators who have flagged themselves as language tutors at an agreeable rate. Maybe it will finally encourage the development of an online AI real-time audio-to-audio translation service to grab some of that market value.

Finally, why is there such a preoccupation with accommodating Western workers? Why must the system conform to the Western lifestyle and not the other way around? Part of the reason we are in this mess is that we have grown accustomed to high wages paying for outrageous lifestyles in exchange for far less work than others are willing to do. I don’t see how that entitles us Americans to greater consideration. Furthermore, the West has always been about free enterprise and the best person for the job, and I don’t see how a business owner can find a downside in any of this. Virtual operations and worldwide labor pools will lower operating expenses dramatically further than anything that could be achieved on American soil, allowing the business to make more money. Anyone can start and operate a business, or work as a contractor from home, as long as they have a product or service people actually want at a price they can actually afford. It goes both ways.

Comment Re:Living wages in virtual worlds (Score 2, Interesting) 267

*Sigh* There were line spaces when I composed this...

DISCLAIMER: I hope /. readers will apply the knowledge that technology improves over time to my rationale.

It’s tempting to view these types of job mills as unethical and exploitative, but until collaboration tools improve, this type of “click”-work is the only kind which can be trusted to essentially unskilled and untrustworthy anonymous laborers. The rates are just a product of having access to a global workforce and the trivial nature of the work. Also, the iterative trial and error lessons learned from these firms will certainly train the industry on how to manage a virtual workforce better. For all the sweatshop analogies, workers and job posters still have the choice of which firm they go to – there just aren’t many right now.

Graphic and web design jobs are actually fairly practical - most of the freelance jobs I’ve done have been for people I’ve never spoken to outside of e-mail. Although the rates are often very low for creative work, customers understand that they get what they pay for, and both parties can still choose not to participate. For a logo design job, you may want to pay 20 people a one-time throwaway fee of $75 each just for the creative diversity (the same $1,500 in total), rather than paying one professional $1,500 and rolling the dice on whether you’ll like what they come up with. You can still take the top 3 of those to the professional and say “can you do this right?”

Before too long, many of us will be working from within virtual worlds for many virtual sources at a time. Most of those sources will be other independent contractors just like us, within chains of divided labor that span the globe. “Working online” will mean putting on a head-mounted display and casually, visually conversing with your design and development team in a quick scrum session in a virtual space to mock up some ideas on a 3D ‘whiteboard,’ divide up the work and knock out a contract. Or you can join a guild of professionals with high standards and a good reputation and score decent contracts, just like design houses today minus the expensive office space and the associated geographic limitation. The key is that these organizations will be composed of independent, willful laborers from all over the world whose efficacy stands on their work quality and ethic alone, self-organized through online venues like forums and virtual worlds with next-to-zero operating expenses.

Even today, I could organize a group of graphic designers, copywriters, another web developer or two, a couple of account managers, a project coordinator, a headhunter, a contract hunter and an accountant, and we could all meet periodically to review bids and commit to a monthly project or two. It would be supplemental income for all of us. With just a little FOSS savvy, you could add a really decent collaboration system that lets us voice/video chat with a whiteboard, and host a website complete with a forum and a customer login interface. You would only need a $7/mo hosting account to run mediawiki, phpbb3, wordpress and a few external services like openerp and gmail. There are paid services you could upgrade to when the revenue kicks in.

END DISCLAIMER. To say that the technology doesn’t exist to implement this stuff is frankly a cop-out. A /. audience should understand.

I can’t wait for the Metaverse to be born (although I think it will be augmented virtual, not immersive virtual); being a gargoyle sounds like my kind of gig.

Submission + - Is there a cloud standard?

jcampbelly writes: Anyone with a sizeable IT department is trying to find ways to turn their cost center into a revenue source by selling their compute power and IT staff as ‘the cloud.’ Every colo and web host has its own cloud, because it’s really just a service built on top of some of what they already do. This is spawning thousands of unique implementations of VM farms, control panels and management APIs. The last hurdle for ‘the cloud’ to become a true utility seems to be complete abstraction on the developer side: I get essentially the same service whether I use provider A or provider B, but which one I choose dramatically changes how I build and manage that service. Does anyone know of any open source effort to standardize ‘the cloud’ on an API? Are there providers who adhere to, or pay particular attention to, such an effort and profess interoperability?

Some examples: I may have dozens of VMs built on top of images from a cloud provider. If my customer or I don’t like the provider’s edge performance, I would like to be able to extend their cloud with my own equipment at a regional datacenter or in the customer’s building. Or maybe I just like the idea of competition among the thousands of providers, because my VMs and apps are universal and so are my custom-built control tools. There is potential for a distributed platform hosted on dozens of providers around the world that doesn’t depend on a single corporate entity.
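To make the question concrete, here is the kind of provider-neutral interface I am wishing for. This is not any real or proposed standard, just a hypothetical sketch of the abstraction:

```python
from abc import ABC, abstractmethod

# Not a real or proposed standard: just a sketch of the kind of
# provider-neutral interface the question is asking about. The adapter
# and image names are hypothetical.
class CloudProvider(ABC):
    @abstractmethod
    def create_vm(self, image_id: str, size: str) -> str:
        """Boot a VM from an image and return an opaque instance id."""

    @abstractmethod
    def destroy_vm(self, instance_id: str) -> None: ...

    @abstractmethod
    def list_vms(self) -> list: ...

class ProviderA(CloudProvider):
    # In practice this would wrap provider A's proprietary HTTP API.
    def create_vm(self, image_id, size):
        return "a-%s-%s" % (image_id, size)
    def destroy_vm(self, instance_id):
        pass
    def list_vms(self):
        return []

class MyOwnRack(CloudProvider):
    # Same interface over my own equipment at a regional datacenter.
    def create_vm(self, image_id, size):
        return "rack-%s-%s" % (image_id, size)
    def destroy_vm(self, instance_id):
        pass
    def list_vms(self):
        return []

def burst(primary: CloudProvider, overflow: CloudProvider, image_id: str, size: str):
    """Because both ends speak the same interface, 'extending their cloud
    with my own equipment' is just another adapter."""
    return [primary.create_vm(image_id, size), overflow.create_vm(image_id, size)]

print(burst(ProviderA(), MyOwnRack(), "base-image", "small"))
```

With something like this, switching providers or bursting onto my own hardware would be a matter of writing one more adapter, not rebuilding my tooling.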

Comment Wake up and smell the developers (oh god, n/m) (Score 1) 605

Can anyone explain the business sense? What does Apple gain by taking such a hard stance against modifications? Why wouldn't they take a hands-off approach and simply not stand in the way of "developers"? Other technologies seem to gain so much from a formalized, open API: new users, cost-free (to Apple) additional features, and great press from supporting open source on a mobile device, which desperately needs a corporate sugar daddy. At least it's my perception that the reputation and success of a company benefit from a wide and open developer community; maybe the board of directors has a different take (slightly fewer new cars per year).

It seems to me that Apple is hoping to sell "toggle" features piecemeal at ridiculous additional cost, and to make tons of money through an exclusivity agreement which will frankly limit the money they could make with open carrier choice far more severely. But they know as well as we do that only a minute fraction of people will say "I want a different carrier so badly that I'll risk destroying my device" or "I want a custom application on there even if it means I may not be able to use the phone component." An overwhelming majority (and these people are the cash cows) will simply follow whatever Apple says on the 10-step list inside the box just to get their iPhone activated.

I don't understand why companies act like they're going to lose ridiculous money when someone steps up with an acute understanding of the technology and the willpower and skill to manipulate it. There is still a moderate technical challenge in modifying such a device as an end user, which is a significant enough barrier to keep the hordes from flocking to it in the thousands it would take to impact Apple's revenue. Why not support this subculture with tools, enable them to produce even more compelling features for normal users, and make available a Firefox-like add-on hub? Isn't the fact that the words "firmware," "patch," "flash" and "USB" are essentially gibberish to the average joe user of this device enough to dissuade the perception that the Unwashed Masses are going to hijack their product and form a new pseudo-Apple that will take over their IPs, sink their stock and put M80s in all their toilets?
