Comment WebODF seems to use Dojo? (Score 1) 91

https://github.com/kogmbh/WebO...

I like Dojo in part because it attempts to make all the core widgets accessible. From:
http://dojotoolkit.org/referen...
"Dojo has made a serious commitment to creating a toolkit that allows the development of accessible Web applications for all users, regardless of physical abilities. The core widget set of Dojo, dijit, is fully accessible since the 1.0 release, making Dojo the only fully accessible open source toolkit for Web 2.0 development. This means that users who require keyboard only navigation, need accommodations for low vision or who use an assistive technology, can interact with the dijit widgets."

Comment See also Dr. David Goodstein on the Big Crunch (Score 1) 538

http://www.its.caltech.edu/~dg...
"Although hardly anyone noticed the change at the time, it is difficult to imagine a more dramatic contrast than the decades just before 1970, and the decades since then. Those were the years in which science underwent an irreversible transformation into an entirely new regime. Let's look back at what has happened in those years in light of this historic transition. ...
    We must find a radically different social structure to organize research and education in science after The Big Crunch. That is not meant to be an exhortation. It is meant simply to be a statement of a fact known to be true with mathematical certainty, if science is to survive at all. The new structure will come about by evolution rather than design, because, for one thing, neither I nor anyone else has the faintest idea of what it will turn out to be, and for another, even if we did know where we are going to end up, we scientists have never been very good at guiding our own destiny. Only this much is sure: the era of exponential expansion will be replaced by an era of constraint. Because it will be unplanned, the transition is likely to be messy and painful for the participants. In fact, as we have seen, it already is. Ignoring the pain for the moment, however, I would like to look ahead and speculate on some conditions that must be met if science is to have a future as well as a past.
    It seems to me that there are two essential and clearly linked conditions to consider. One is that there must be a broad political consensus that pure research in basic science is a common good that must be supported from the public purse. The second is that the mining and sorting operation I've described must be discarded and replaced by genuine education in science, not just for the scientific elite, but for all the citizens who must form that broad political consensus. ..."

So, the academics you knew were from before the "Big Crunch". Such people, meaning well and speaking from their own success, advised me to get a PhD. But the world I faced was post-Big-Crunch, so their advice did not actually make much sense (although it took me a long time to figure that out).

More related links:
http://p2pfoundation.net/backu...

Comment Echoing Greenspun on academia (Score 1) 538

From: http://philip.greenspun.com/ca...
---
Why does anyone think science is a good job?

The average trajectory for a successful scientist is the following:
age 18-22: paying high tuition fees at an undergraduate college
age 22-30: graduate school, possibly with a bit of work, living on a stipend of $1800 per month
age 30-35: working as a post-doc for $30,000 to $35,000 per year
age 36-43: professor at a good, but not great, university for $65,000 per year
age 44: with (if lucky) young children at home, fired by the university ("denied tenure" is the more polite term for the folks that universities discard), begins searching for a job in a market where employers primarily wish to hire folks in their early 30s

This is how things are likely to go for the smartest kid you sat next to in college. He got into Stanford for graduate school. He got a postdoc at MIT. His experiment worked out and he was therefore fortunate to land a job at University of California, Irvine. But at the end of the day, his research wasn't quite interesting or topical enough that the university wanted to commit to paying him a salary for the rest of his life. He is now 44 years old, with a family to feed, and looking for job with a "second rate has-been" label on his forehead.

Why then, does anyone think that science is a sufficiently good career that people should debate who is privileged enough to work at it? Sample bias. ...

Does this make sense as a career for anyone? Absolutely! Just get out your atlas.

Imagine that you are a smart, but impoverished, young person in China. Your high IQ and hard work got you into one of the best undergraduate programs in China. The $1800 per month graduate stipend at University of Nebraska or University of Wisconsin will afford you a much higher standard of living than any job you could hope for in China. The desperate need for graduate student labor and lack of Americans who are interested in PhD programs in science and engineering means that you'll have no trouble getting a visa. When you finish your degree, a small amount of paperwork will suffice to ensure your continued place in the legal American work force. Science may be one of the lowest paid fields for high IQ people in the U.S., but it pays a lot better than most jobs in China or India.
---

Comment Example Vitamin D reduces cancer risk study: (Score 1) 51

http://www.ncbi.nlm.nih.gov/pu...
"This was a 4-y, population-based, double-blind, randomized placebo-controlled trial. The primary outcome was fracture incidence, and the principal secondary outcome was cancer incidence."

Eating lots of vegetables and fruits and mushrooms can also reduce cancer risk (see Dr. Joel Fuhrman's summary works like "Eat To Live", with many references). I've found that by eating more fruits and vegetables my skin tone has changed from pale to having more color (even in winter). Adequate iodine can also help prevent cancer.

Reducing risk of incidence is not the same as a cure, though. Sorry to hear about your father getting cancer. Once you get cancer, everything is iffy, so cancer is best avoided preventatively. Fasting may also help in some cancer situations, and it also helps with chemotherapy by protecting cells from the toxic chemicals (since fasting seems to cause many normal cells to go into a safe survival mode, but cancer cells generally do not). And eating better may help prevent recurrence. In general, the human body is always developing cancerous cells, but generally they are dealt with by the immune system. So boosting the immune system could help with some cancers, and there are many ways to do that -- but again, it is all iffy once cancer is established.

See also for other ideas:
http://science-beta.slashdot.o...

I agree supplements and natural sunlight are probably better choices than tanning beds -- although there may still be unknowns about how the skin reacts to sun or tanning beds and produces many compounds, versus taking supplements. I also agree conventional tanning beds are not tuned to give lots of vitamin D. That is unfortunate, even if they produce some. See also about other tanning choices (and supplement suggestions):
http://www.vitamindcouncil.org...
"If you choose to use a tanning bed, the Vitamin D Council recommends using the same common sense you use in getting sunlight. This includes:
Getting half the amount of exposure that it takes for your skin to turn pink.
Using low-pressure beds that have a good amount of UVB light, rather than high-intensity UVA light."

BTW, if you look into chemotherapy for cancer, for many cancers you'll find it is of questionable value relative to the costs in both money and suffering, where on average it may add at most a couple of months of life, if that. Chemotherapy can apparently even sometimes make cancer worse:
http://www.nydailynews.com/lif...
"The scientists found that healthy cells damaged by chemotherapy secreted more of a protein called WNT16B which boosts cancer cell survival."

It's hard to know who to trust regarding medical research results or interpretations:
http://www.pdfernhout.net/to-j...
"The problems I've discussed are not limited to psychiatry, although they reach their most florid form there. Similar conflicts of interest and biases exist in virtually every field of medicine, particularly those that rely heavily on drugs or devices. It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of The New England Journal of Medicine. (Marcia Angell)"

Good luck sorting it all out. I've suggested creating better tools for medical sensemaking, but still not time to work on them...

Comment Nutrition and longevity (Score 1) 186

While you make a good point, nutrition works right now (along with exercise, good sleep, a less stressful lifestyle, avoiding hazards like smoking, community connectedness, and so on like in "Blue Zones"). The rest of life extension is just a hope that maybe we can create new technologies. Also, for most people, if they can make it in good health to 100+ years old via such well-proved things, they will then be around for more breakthroughs in the next 50+ years.

Also, probably most invasive life-extension technologies for extreme longevity could also be turned into biological weapons (like rewriting DNA or reorganizing the brain). So, we may end up with technologies that could allow people to live in good health for thousands of years and have any skin color or nose shape they want, and people will use them to kill off everyone else who has a different skin color or nose shape than they currently have (which would be very sadly ironic). Improved nutrition does not have that existential risk associated with it, for the most part.

Comment This is why corporations should have *no* privacy (Score 1) 534

http://www.corporatecrimerepor...

Fines or imprisoning CEOs do little to change the pattern of relationships and values and policies that make an organization what it is, any more than a human body losing some skin cells or even brain cells usually changes how a person behaves very much.

Seriously, why should any corporate communications have any expectation of privacy? Corporations with "limited liability" are chartered for the public interest. 150 years ago, US Americans put such creatures on very short leashes because they had seen what trouble resulted from big British corporations in the American colonies. Individuals have now lost pretty much all informational privacy due to large corporations and the current internet. Why should bigger, more powerful creatures than humans, like corporations, have more privacy in practice than humans do? See also David Brin's "The Transparent Society". Any argument that corporations need privacy (like for salaries or payments for services) for some sort of commercial advantage is trumped by the public interest in understanding what corporations are doing, and also by the fact that if all corporations were transparent there would be a level playing field. Granted, it would require new ways of doing business, but books like "Honest Business" also extol the value of "open books". Or perhaps corporations should be forced to choose -- if they want limited liability for shareholders, then they need to be transparent; if every shareholder accepts full responsibility for all actions of the organization, then they can have privacy?

And see also my comments from 2000, the relevant section copied below (sadly a lot of links there have rotted):
http://www.dougengelbart.org/c...

========= machine intelligence is already here =========

I personally think machine evolution is unstoppable, and the best hope
for humanity is the noble cowardice of creating refugia and trying, like
the duckweed, to create human (and other) life faster than other forces
can destroy it. [Well, I now in 2014 think there are also other options, like symbiosis, maybe friendly AI, and in general trying to be nicer to each other like with a basic income in hopes that leads to a happier singularity...]

Note, I'm not saying machine evolution won't have a human component --
in that sense, a corporation or any bureaucracy is already a separate
machine intelligence, just not a very smart or resilient one. This sense
of the corporation comes out of Langdon Winner's book "Autonomous
Technology: Technics out of control as a theme in political thought".
    http://www.rpi.edu/~winner/
You may have a tough time believing this, but Winner makes a convincing
case. He suggests that all successful organizations "reverse-adapt"
their goals and their environment to ensure their continued survival.

These corporate machine intelligences are already driving for better
machine intelligences -- faster, more efficient, cheaper, and more
resilient. People forget that corporate charters used to be routinely
revoked for behavior outside the immediate public good, and that
corporations were not considered persons until around 1886 (that
decision perhaps being the first major example of a machine using the
political/social process for its own ends).
    http://www.adbusters.org/magaz...
Corporate charters are granted supposedly because society believes it is
in the best interest of *society* for corporations to exist.

But, when was the last time people were able to pull the "charter" plug
on a corporation not acting in the public interest? It's hard, and it
will get harder when corporations don't need people to run themselves.
    http://www.adbusters.org/magaz...
    http://www.adbusters.org/campa...

I'm not saying the people in corporations are evil -- just that they
often have very limited choices of actions. If corporate CEOs do not
deliver short term profits they are removed, no matter what they were
trying to do. Obviously there are exceptions for a while -- William C.
Norris of Control Data was one of them, but in general, the exception
proves the rule. Fortunately though, even in the worst machines (like in
WWII Germany) there were individuals who did what they could to make
them more humane ("Schindler's List" being an example).

Look at how much William C. Norris http://www.neii.com/wnorris.ht... of
Control Data got ridiculed in the 1970s for suggesting the then radical
notion that "business exists to meet society's unmet needs". Yet his
pioneering efforts in education, employee assistance plans, on-site
daycare, urban renewal, and socially-responsible investing are in
part what made Minneapolis/St.Paul the great area it is today. Such
efforts are now being duplicated to an extent by other companies. Even
the company that squashed CDC in the mid 1980s (IBM) has adopted some of
those policies and directions. So corporations can adapt when they feel
the need.

Obviously, corporations are not all powerful. The world still has some
individuals who have wealth to equal major corporations. There are
several governments that are as powerful or more so than major
corporations. Individuals in corporations can make persuasive pitches
about their future directions, and individuals with controlling shares
may be able to influence what a corporation does (as far as the market
allows). In the long run, many corporations are trying to coexist with
people to the extent they need to. But it is not clear what corporations
(especially large ones) will do as we approach this singularity -- where
AIs and robots are cheaper to employ than people. Today's corporation,
like any intelligent machine, is more than the sum of its parts
(equipment, goodwill, IP, cash, credit, and people). Its "plug" is not
easy to pull, and it can't be easily controlled against its short term
interests.

What sort of laws and rules will be needed then? If the threat of
corporate charter revocation is still possible by governments and
collaborations of individuals, in what new directions will corporations
have to be prodded? What should a "smart" corporation do if it sees
this coming? (Hopefully adapt to be nicer more quickly. :-) What can
individuals and governments do to ensure corporations "help meet
society's unmet needs"?

Evolution can be made to work in positive ways, by selective breeding,
the same way we got so many breeds of dogs and cats. How can we
intentionally breed "nice" corporations that are symbiotic with the
humans that inhabit them? To what extent is this happening already as
talented individuals leave various dysfunctional, misguided, or rogue
corporations (or act as "whistle blowers")? I don't say here the
individual directs the corporation against its short term interest. I
say that individuals affect the selective survival rates of
corporations with various goals (and thus corporate evolution) by where
they choose to work, what they do there, and how they interact with
groups that monitor corporations. To that extent, individuals have some
limited control over corporations even when they are not shareholders.
Someday, thousands of years from now, corporations may finally have been
bred to take the long term view and play an "infinite game".

Comment Insightful; see also "The Difference: ... (Score 1) 370

... How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies" http://www.amazon.com/Differen...
"In this landmark book, Scott Page redefines the way we understand ourselves in relation to one another. The Difference is about how we think in groups--and how our collective wisdom exceeds the sum of its parts. Why can teams of people find better solutions than brilliant individuals working alone? And why are the best group decisions and predictions those that draw upon the very qualities that make each of us unique? The answers lie in diversity--not what we look like outside, but what we look like within, our distinct tools and abilities.
    The Difference reveals that progress and innovation may depend less on lone thinkers with enormous IQs than on diverse people working together and capitalizing on their individuality. Page shows how groups that display a range of perspectives outperform groups of like-minded experts. Diversity yields superior outcomes, and Page proves it using his own cutting-edge research. Moving beyond the politics that cloud standard debates about diversity, he explains why difference beats out homogeneity, whether you're talking about citizens in a democracy or scientists in the laboratory. He examines practical ways to apply diversity's logic to a host of problems, and along the way offers fascinating and surprising examples, from the redesign of the Chicago "El" to the truth about where we store our ketchup.
    Page changes the way we understand diversity--how to harness its untapped potential, how to understand and avoid its traps, and how we can leverage our differences for the benefit of all."

An aspect of that is also that humans are adapted to argue together in small groups and find creative solutions together:
http://artsbeat.blogs.nytimes....
http://lifehacker.com/can-rati...

Of course, then to keep a group of such people motivated, they need autonomy, challenge/mastery, and purpose, like Dan Pink outlines here:
"RSA Animate - Drive: The surprising truth about what motivates us"
http://www.youtube.com/watch?v...

And until we get a basic income for all, or at least enough money for everyone to live a decent life in our society, money cannot really be taken off the table, even though it has reached the point of diminishing returns for people who like their work:
http://science.slashdot.org/st...

Comment Thanks for the reply! Bolo history & metagames (Score 1) 222

I don't seem to have "Rogue Bolo" in my sci-fi book collection, but the cover on Amazon looks familiar. I think I might have given it away one Halloween decades ago, when I was living in Princeton, NJ, and gave some trick-or-treaters the option of getting books instead of candy (several teens there seemed to prefer the books).

Your point on a Bolo singularity makes me think about the Asimov universe, and how his robots there eventually interpreted the three laws in a way ("The Zeroth Law") that gave them lots of independence, led them to see themselves as in a way more "human" than humans, and caused them to start intervening in history behind the scenes. There is no set of laws or constitution that ultimately does not need some intelligent judge to interpret the meaning or spirit of the words in a present-day context, and once some intelligent entity (including an AI) starts creatively interpreting "rules", including "metarules" about how rules can be changed, who knows where it will end?
http://en.wikipedia.org/wiki/R...

Inspired by your post, I've been looking through my Bolo books. I started rereading "Ploughshare" by Todd Johnson in "Bolos: Book 1: Honor of the Regiment", where "Das Afrika Corps" and other Mark XVI C Bolos act a bit odd due to a milkshake spilled by the "Director's son" in the "White Room" psychotronics lab and the use of "DK-41" cleaning fluid to clean up that mess. Another case of the unexpected...

I liked "Bolo Rising" novel which has a Bolo Mark XXXIII series HCT called Hector. That is an interesting novel of a Bolo regaining its operational capacity after being infiltrated and locked down by alien technology. There is another XXXIII in "Bolo Strike". But while those mega-Bolo stories are interesting in their own sort of over-the-top way (maybe your point about "the other guy"), I like the diversity in the short stories in "Honor of the Regiment" by a variety of authors covering the whole history of different Bolos of various capabilities and their unfolding increasing sentience and self-directedness. What does "Honor" or "Service" means over time and shading into a meta-level? For example, are whistleblowers like Manning, Snowden, or Kiriakou honorable and engaged in service and fulfilling their oath to defend the Constitution? Or are they traitors? Complex questions... Perhaps "Rogue Bolo" goes deeper into such issues? As a lesser example, "Bolo Brigade" explores the issue of a conflict between "rules of engagement" and a Bolo's desire to get its job done. Conflicts between priorities are not something that only humans will face...

It is not clear where the singularity of emerging AI and technologically-expanded-or-narrowed humans and so on will all lead in reality -- especially with Bolo vs. Berserker as an option. I forget the plot of "Bolo Strike" as I look at my Bolo books, but the blurb on the back says "as Bolo faces human-Bolo hybrid in a cataclysmic showdown". So there are other ways automated systems can cause change, either through their own independence or by empowering some few independent humans. As I essentially say near the end of the 2000 post to the Unrev-II Engelbart Bootstrap mailing list, corporations are like vast machine intelligences at this point. And like the present day, what is the real difference to most people if the Earth is laid waste, the seas polluted, the mountains leveled, the oceans strip-mined, and most of the people kept down in their aspirations for a decent life by "aliens from outer space" or by some 1% of vampire-like human-machine-hybrid-organizational "aliens" who have become specialized in "extracting wealth" by privatizing gains and socializing costs (including the cost to the worker of unpleasant work environments)? Even without human-Bolo hybrids, there can be vast technological/bureaucratic enterprises that make use of humans as parts, much the same as the "!*!*!" of "Bolo Rising" tried to do in their quest for "efficiency" -- "efficiency" to what end and to whose benefit? So much of sci-fi is a way that people can reflect on concerns of the day, but at a safer intellectual distance.

Anyway, this reminds me I should try playing with my kid this old metagame, inspired by the Bolo series, which I have around somewhere and have not played in decades:
http://en.wikipedia.org/wiki/O...

Comment Re:Academic pyramid scheme and basic income soluti (Score 1) 325

Well, 3D printing is a lot like magic cauldrons, so we may both be right in the end. :-)

Of course, magic cauldrons are not without their downsides: :-)
http://en.wikipedia.org/wiki/S...

Yeah, I've seen surveys that say humanity in the West can more easily imagine nuclear war or other destruction of everything we care about than significant social change... Nonetheless, as Howard Zinn wrote:
http://www.commondreams.org/vi...
"In this awful world where the efforts of caring people often pale in comparison to what is done by those who have power, how do I manage to stay involved and seemingly happy? I am totally confident not that the world will get better, but that we should not give up the game before all the cards have been played. The metaphor is deliberate; life is a gamble. Not to play is to foreclose any chance of winning.
    To play, to act, is to create at least a possibility of changing the world. There is a tendency to think that what we see in the present moment will continue. We forget how often we have been astonished by the sudden crumbling of institutions, by extraordinary changes in people's thoughts, by unexpected eruptions of rebellion against tyrannies, by the quick collapse of systems of power that seemed invincible. What leaps out from the history of the past hundred years is its utter unpredictability. This confounds us, because we are talking about exactly the period when human beings became so ingenious technologically that they could plan and predict the exact time of someone landing on the moon, or walk down the street talking to someone halfway around the earth."

Comment Thanks! (Score 1) 222

BTW, to fix a typo, one sentence should be: "While I like Iain Bank's Culture Novels, I wonder why the [AIs] there, both human-level and way-beyond-human-level take so much effort to take care of humans and cater to them."

Comment I accidentally created self-replicating... (Score 4, Interesting) 222

... simulated cannibalistic robot killers in the 1980s on a Symbolics running ZetaLisp. I gave a couple of conference talks about it, plus one at NC State (where I wrote the simulation) that I think may even have influenced Marshall Brain. I had created a simulation of self-replicating robots that reconstructed themselves to an ideal from spare parts in their simulated environment (something first proposed by von Neumann, though I may have been the first to make such a simulation). The idea was that a robot that was essentially half of an "ideal" robot would make its other half by adding parts to itself, then split in two by cutting some links, and then do it again. The very first one assembled its other half, cut the links to divide itself, and then proceeded (unexpectedly to me) to start cutting apart its offspring for parts to do it again. I had to add a sense of "smell" so robots would set the smell of parts they used and then not try to take parts that smelled the same. I also mention that simulation here:
http://www.dougengelbart.org/c...
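
For anyone curious what that dynamic looks like in miniature, here is a rough present-day Python sketch of the idea (not the original ZetaLisp code; the part counts, the "smell" tags, and the scavenging rule are all made-up illustrative assumptions): "half" robots complete themselves from a pool of spare parts, split in two when whole, and, once the spares run out, start scavenging parts from the smallest robot around, which, without the smell tags, means tearing apart their own offspring.

# Toy sketch only: a deliberately tiny model of the self-replication and
# "smell" idea described above, not the original simulation.
IDEAL_SIZE = 4               # parts in a "whole" robot (illustrative number)
HALF_SIZE = IDEAL_SIZE // 2  # a freshly split robot is half of an ideal one

class Part:
    def __init__(self):
        self.smell = None    # set to a family tag once a robot uses the part

class Robot:
    def __init__(self, parts, smell):
        self.smell = smell   # the tag this robot stamps on every part it uses
        self.parts = parts
        for p in parts:
            p.smell = smell

    def grab(self, part):
        part.smell = self.smell
        self.parts.append(part)

def step(robots, spares, use_smell):
    """One tick: whole robots split in two; incomplete robots grab a part."""
    destroyed = 0
    for robot in list(robots):
        if robot not in robots:
            continue                  # already torn apart earlier this tick
        if len(robot.parts) >= IDEAL_SIZE:
            robots.remove(robot)      # "cut the links": split into two halves
            robots.append(Robot(robot.parts[:HALF_SIZE], robot.smell))
            robots.append(Robot(robot.parts[HALF_SIZE:], robot.smell))
            continue
        if spares:                    # prefer free spare parts
            robot.grab(spares.pop())
            continue
        # No spares left: scavenge from the smallest other robot. Without
        # the smell check, this is where a robot dismantles its offspring.
        for victim in sorted((v for v in robots if v is not robot),
                             key=lambda v: len(v.parts)):
            loose = [p for p in victim.parts
                     if not use_smell or p.smell != robot.smell]
            if loose:
                victim.parts.remove(loose[0])
                robot.grab(loose[0])
                if not victim.parts:
                    robots.remove(victim)
                    destroyed += 1
                break
    return destroyed

def run(use_smell, ticks=30):
    spares = [Part() for _ in range(6)]
    robots = [Robot([Part() for _ in range(HALF_SIZE)], smell="A")]
    torn_apart = sum(step(robots, spares, use_smell) for _ in range(ticks))
    return len(robots), torn_apart

for use_smell in (False, True):
    survivors, torn_apart = run(use_smell)
    print("smell tags:", use_smell,
          "robots left:", survivors, "torn apart:", torn_apart)

Running it both ways, the version without smell tags keeps tearing robots apart to feed further replication, while the version with smell tags simply stalls at a stable population once the free parts are gone -- roughly the behavior described above, in toy form.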

Decades later, I still got a bit freaked out when our chickens would sometimes eat their own eggs...

My point, though, is that completely unintentionally, these devices I designed to be creators ended up destroying things -- even their own offspring. It was a big lesson for me, and it has informed my work and learning in various directions ever since. Things you build can act in totally unexpected ways. And since creation involves changing the universe, any change also involves, to some extent, destroying something that is already there.

James P. Hogan's 1982 book "The Two Faces of Tomorrow", which I had read earlier, should have been a warning. In it he makes clear how any AI could gain a survival instinct and then perceive things like power fluctuations as threats -- even if there was no intent on the part of the original programmers for that to happen.
http://www.jamesphogan.com/boo...

Langdon Winner's book "Autonomous Technology: Technics-out-of-control as a theme in political thought" assigned as reading in college also should have been another warning.
http://en.wikipedia.org/wiki/L...

It's been sad to watch the progression of real killer autonomous robots since the 1980s... Here is just one example, and the exciting, upbeat music in the video shows the political and social problem more than anything:
"Samsung robotic sentry (South Korea, live ammo)"
https://www.youtube.com/watch?...

Just because we can do something does not mean we should...

I was impressed that this recent Indian Bollywood film about an AI-powered robot took such a nuanced view of the problems. A bit violent for me, but otherwise an excellent and thought provoking film:
http://en.wikipedia.org/wiki/E...
"Enthiran is a 2010 Indian Tamil science fiction techno thriller, co-written and directed by Shankar.The film features Rajinikanth in dual roles, as a scientist and an andro humanoid robot, alongside Aishwarya Rai while Danny Denzongpa, Santhanam, Karunas, Kalabhavan Mani, Devadarshini, and Cochin Haneefa play supporting roles. The film's story revolves around the scientist's struggle to control his creation, the android robot whose software was upgraded to give it the ability to comprehend and generate human emotions. The plan backfires as the robot falls in love with the scientist's fiancee and is further manipulated to bring destruction to the world when it lands in the hands of a rival scientist."

But yes, the Berserker series is another signpost in that direction -- perhaps countered a bit by the Bolo series by Keith Laumer? :-)
http://en.wikipedia.org/wiki/B...

But I tend to think that, for reasons like I wrote, and Hogan wrote, and Winner wrote, we will never be able to keep all AIs on a short leash. While I like Iain Banks's Culture novels, I wonder why the AIs there, both human-level and way-beyond-human-level, take so much effort to take care of humans and cater to them. Having been in an Ecology and Evolution grad program since those robot days, I've learned there are just lots of things one can expect of any living system... And the initial Rossum's Universal Robots story that gave us the term "robot" was essentially about a slave uprising. I don't have a problem imagining we can make flexible machines that help us. But there is likely some fuzzy line (and I don't know where it is exactly) where systems that start to act intelligent or sentient begin to deserve rights -- both for their own sake and for ours, since while slavery degrades and warps the slave, it also ends up degrading and warping the master in other ways. If we build advanced AIs, the outcome may well be better if we intend them as "Mind Children" like Hans Moravec writes about (and whose lab at CMU I hung out in for a time in the 1980s when he was writing it) -- or at least as companions and friends (if also helpmates working together with us for our mutual benefit), more like the robots became in the movie "Silent Running".

A funny robot movie we watched the other day that again gets at some deep themes:
http://en.wikipedia.org/wiki/R...

Comment Academic pyramid scheme and basic income solution (Score 1) 325

Caltech Vice-Provost on pyramid scheme: http://www.its.caltech.edu/~dg...

From 2004, and it has only gotten worse: http://www.villagevoice.com/20...

Still, also problems in science for anyone: http://philip.greenspun.com/ca...

More by me from 2009:
"[p2p-research] College Daze links (was Re: : FlossedBk, "Free/Libre and Open Source Solutions for Education")"
http://p2pfoundation.net/backu...
"[p2p-research] The Higher Educational Bubble Continues to Grow"
http://p2pfoundation.net/backu...

We can and should do better than this as a society.

My proposed solution: a "basic income" (as well as an expanded gift economy and better subsistence via 3D printing and cheap solar panels and cheap agricultural robots). Then anyone can live like a graduate student and think and talk and publish all they want on whatever topic they like. Of course, if people want to afford lab space or equipment, that is more of a challenge, and they might have to do paying work. But so much can be done with cheap computers and cheap equipment now that a lot of good tabletop research can still be done on a shoestring.
http://www.basicincome.org/bie...

One example (not saying it will work, but it is tabletop physics/chemistry on the cheap):
http://www.e-catworld.com/2014...

Even most millionaires would be better off with a basic income IMHO:
http://www.pdfernhout.net/basi...

Now if only the legions of unemployed humanities PhDs (and some unemployed law school graduates too) would just collectively take up this cause for a basic income and expanded gift economy etc. and write stories about it, write persuasive essays about it, write funny viral videos about it, lobby for incremental laws about it (Social Security for All from Birth), and so on. Then we might see some accelerating movement on it... My own attempts in that direction, which I'm sure those legions could vastly improve on:
"The Richest Man in the World: A parable about structural unemployment and a basic income "
https://www.youtube.com/watch?...

Nothing short of a big social shift like that is going to solve the fix academia is in, between the student loan debt bubble about to burst and the collapsing pyramid scheme of the value of a PhD to train other PhDs. Instead we are seeing play out the ultimate folly of expanding cradle-to-grave schooling as a sort of arms race, where parents invest vast amounts of money in hopes their offspring will secure more credentials than someone else whose parents have less money, and so get some coveted job in academia or elsewhere. All the while, AI and robotics are taking on more and more jobs -- even grading student essays, and doing it so cheaply that, as in the parable above, humans need not apply.
http://tech.slashdot.org/story...

Comment 1956 story by Sturgeon inspired Nelson/Xanadu (Score 5, Informative) 90

See "The Skills of Xanadu", as text: http://books.google.com/books?...
and as audio: https://archive.org/details/pr...

Around 2001 or 2002, while working at IBM Research, I went to a talk there by Ted Nelson, and I asked him about the story, given its similar name. He said that the story had inspired him (at least partially) to do his work, and thanked me for telling him the name of the story, saying he had been looking for it for a long time. While I did not say so, his reply about looking for the story surprised me, given that there are probably not many stories with Xanadu in the title, so I would think a library search would have found it... Ted Nelson records everything around him on a tape recorder (or at least did then), so that interaction should be on one of his tapes...

The 1956 story by Theodore Sturgeon is an amazing work that features a world networked by wireless, mobile, wearable computing supporting freely shared knowledge and skills through a sort of global internet-like concept. Some of that knowledge was about advanced nanotech-based manufacturing. The system powered an economy reflecting ideas like Bob Black writes about in "The Abolition of Work", where much work had become play coordinated through this global network. The story has inspired other people as well: both me, from when I read it (and mostly forgot it for a long time, except for the surprise ending), and also a Master Inventor at IBM I worked with, who was inspired by the nanotech aspects of that story when he was young. Even almost sixty years later, that story still has things we can learn from about a vision of a new type of society (including enhanced intrinsic & mutual security) made possible through advanced computing.

A core theme is an interplay between meshwork and hierarchy, reminiscent of Manuel De Landa's writings:
http://www.egs.edu/faculty/man...
"Indeed, one must resist the temptation to make hierarchies into villains and meshworks into heroes, not only because, as I said, they are constantly turning into one another, but because in real life we find only mixtures and hybrids, and the properties of these cannot be established through theory alone but demand concrete experimentation. Certain standardizations, say, of electric outlet designs or of data-structures traveling through the Internet, may actually turn out to promote heterogenization at another level, in terms of the appliances that may be designed around the standard outlet, or of the services that a common data-structure may make possible. On the other hand, the mere presence of increased heterogeneity is no guarantee that a better state for society has been achieved. After all, the territory occupied by former Yugoslavia is more heterogeneous now than it was ten years ago, but the lack of uniformity at one level simply hides an increase of homogeneity at the level of the warring ethnic communities. But even if we managed to promote not only heterogeneity, but diversity articulated into a meshwork, that still would not be a perfect solution. After all, meshworks grow by drift and they may drift to places where we do not want to go. The goal-directedness of hierarchies is the kind of property that we may desire to keep at least for certain institutions. Hence, demonizing centralization and glorifying decentralization as the solution to all our problems would be wrong. An open and experimental attitude towards the question of different hybrids and mixtures is what the complexity of reality itself seems to call for."

See also, for other "old" ideas we could still benefit from thinking about:
"The Web That Wasn't"
https://www.youtube.com/watch?...
"Google Tech Talks October, 23 2007
    For most of us who work on the Internet, the Web is all we have ever really known. It's almost impossible to imagine a world without browsers, URLs and HTTP. But in the years leading up to Tim Berners-Lee's world-changing invention, a few visionary information scientists were exploring alternative systems that often bore little resemblance to the Web as we know it today. In this presentation, author and information architect Alex Wright will explore the heritage of these almost-forgotten systems in search of promising ideas left by the historical wayside.
    The presentation will focus on the pioneering work of Paul Otlet, Vannevar Bush, and Doug Engelbart, forebears of the 1960s and 1970s like Ted Nelson, Andries van Dam, and the Xerox PARC team, and more recent forays like Brown's Intermedia system. We'll trace the heritage of these systems and the solutions they suggest to present day Web quandaries, in hopes of finding clues to the future in the recent technological past.
    Speaker: Alex Wright
Alex Wright is an information architect at the New York Times and the author of Glut: Mastering Information Through the Ages. Previously, Alex has led projects for The Long Now Foundation, California Digital Library, Harvard University, IBM, Microsoft, Rollyo and Sun Microsystems, among others. He maintains a personal Web site at http://www.alexwright.org/"

For example, here is what people were doing in 1910:
http://en.wikipedia.org/wiki/M...
http://en.wikipedia.org/wiki/P...
"The Mundaneum was an institution created in 1910, following an initiative begun in 1895 by Belgian lawyers Paul Otlet and Henri La Fontaine, as part of their work on documentation science. It aimed to gather together all the world's knowledge and classify it according to a system they developed called the Universal Decimal Classification. Otlet and La Fontaine organized an International Conference of International Associations which was the origin of the Union of International Associations (UIA). ... Otlet regarded the project as the centerpiece of a new 'world city' -- a centrepiece which eventually became an archive with more than 12 million index cards and documents. Some consider it a forerunner of the Internet (or, perhaps more appropriately, of systematic knowledge projects such as Wikipedia and WolframAlpha) and Otlet himself had dreams that one day, somehow, all the information he collected could be accessed by people from the comfort of their own homes."
