
Comment Re:Here it comes (Score 1) 70

You're confusing the importance of avoiding Kessler syndrome in LEO with the difficulty of causing Kessler syndrome. GEO debris can potentially remain there for millions of years before interactions between the gravitational pull of the Sun, Earth, and Moon sufficiently perturb it. LEO debris remains for weeks to months. You have to have many orders of magnitude more debris in LEO to trigger Kessler Syndrome, where the rate of collisions exceeds the rate of debris loss.

The fact that a LEO Kessler Syndrome would also be short-lived comes on top of all that.

It's also worth noting that modern satellites are not only vastly better at properly disposing of themselves than they were in the 1970s when Kessler Syndrome was proposed, but also vastly better at avoiding debris strikes. All of these factors are multiplicative.
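To illustrate the rate argument, here's a toy sketch (all parameters are made up for illustration, not real debris statistics): collisions create fragments at a rate proportional to N&#178;, while drag removes debris at a rate proportional to N. The runaway only happens when creation outpaces removal.

```python
# Toy Kessler model (illustrative numbers only): collisions create debris
# at a rate ~ k * N^2 * fragments-per-collision, while atmospheric drag
# removes it at a rate ~ decay * N. Runaway = creation > removal.

def simulate(frag_per_collision, k, decay, n0, steps, dt=1.0):
    """Euler-step dN/dt = k * N^2 * frag_per_collision - decay * N."""
    n = float(n0)
    for _ in range(steps):
        n += (k * n * n * frag_per_collision - decay * n) * dt
    return n

# "LEO-like": strong drag removal -> the population shrinks toward zero
leo = simulate(frag_per_collision=100, k=1e-9, decay=0.05, n0=1000, steps=200)

# "GEO-like": essentially no removal -> the same collision physics grows
geo = simulate(frag_per_collision=100, k=1e-9, decay=1e-6, n0=1000, steps=200)
```

With identical collision physics, only the removal term differs, and that alone decides whether the population decays or grows without bound.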

Comment Re:Here it comes (Score 3, Insightful) 70

People forget that the primary concerns about Kessler Syndrome were about geosynchronous orbit, which used to be where all the most important satellites went (many of course still go there, but not the megaconstellations). It takes a long, long time for debris to leave GEO. But LEO is a very different beast.

Comment Re:Here it comes (Score 4, Informative) 70

Yeah. In particular:

with fragments likely to fall to Earth over the next few weeks

LEO FTW. Kessler Syndrome is primarily a risk if you put too much stuff with too poor an end-of-life disposal rate in GEO. Rates of end-of-life without proper disposal have declined exponentially since Kessler Syndrome was first proposed (manufacturers both understand the importance more and do a better job of decreasing the rate of failures before deorbit - in the past, there sometimes weren't even attempts to dispose of a craft at end-of-life). And now we're increasingly putting stuff in LEO, where debris falls out of orbit relatively quickly. It's not impossible in LEO, esp. with higher LEO orbits - but it's much more difficult.

Or to put it another way: fragments can't build up to hit other things if they're gone after just a couple weeks.

And this trend is likely to continue - a lower percentage of premature failures, and decreasing altitudes / reentry times. Concerning ever-decreasing altitudes, we've already been doing this by using ion engines to provide more reboost (with mission lifespans designed for only several years before running out of propellant, instead of decades like the giant GEO ones), but there's also increasing interest in "sky-skimming" satellites that function in a way somewhat reminiscent of a ramjet - instead of krypton or xenon as the propellant for the ion engine, the sparse atmospheric air itself is the propellant, so the craft can in effect fly indefinitely until it fails, whereupon it quite rapidly enters the denser atmosphere and burns up.

Comment Re:Doing the editor's job. (Score 5, Informative) 41

Relativity = gravity is represented by the curvature of spacetime. The equations treat curvature linearly - the curvature term R enters to the first power. As things get closer and curvature spikes, the math just scales at a 1:1 rate.

Quadratic gravity = Squares the curvature. Doesn't really change things much when everything is far apart, but heavily changes things when everything is close together.
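Schematically (my own shorthand; exact coefficients and sign conventions vary between formulations), the difference shows up in the gravitational action:

```latex
% Einstein-Hilbert: linear in the curvature scalar R
S_{EH} = \frac{1}{16\pi G} \int d^4x \, \sqrt{-g} \, R

% Quadratic gravity: adds curvature-squared terms
S_{quad} = \int d^4x \, \sqrt{-g} \left[ \frac{1}{16\pi G} R
    + a\, R^2 + b\, R_{\mu\nu} R^{\mu\nu} \right]
```

At low curvature the squared terms are negligible and you recover ordinary General Relativity; at high curvature they dominate, which is what tames the high-energy behavior.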

Pros: prevents infinities and other problems when trying to reconcile quantum theory with relativity ("makes the theory renormalizable"). E.g. you don't want to calculate "if I add up the probabilities of all of these possible routes to some specific event, what are the odds that it happens?" -> "Infinity percent odds". That's... a problem. Renormalization is a trick for electromagnetism that prevents this by letting the infinities cancel out. But it doesn't work with linear curvature - gravitons carry energy, which creates gravity, which carries more energy... it explodes, and renormalization attempts just create new infinities. But it does work with quadratic curvature - it weakens high-energy interactions and allows for convergence.

Cons: Creates "ghosts" (particles with negative energies or negative probabilities, which create their own problems). There are various proposed solutions, but none that's really a "eureka!" moment. They're generally along the lines of "they exist but are purely virtual and don't interact", "they exist but they're so massive that they decay before they can interact with the universe", "they don't exist, we're just using the math out of bounds and need a different representation of the same thing", or "if we don't stop at R^2 but also add in R^3, R^4, ... on to infinity, then they go away". Etc.

The theory isn't new, BTW. The idea is from 1918 (just a few years after Einstein's theory of General Relativity was published), and the work that led to the "Pros" above is from 1977.

Comment Re:And media selection of alarmist data (Score 4, Interesting) 50

A bit more about the latter. Beyond organophosphates, the main other alternative is pyrethroids. These are highly toxic to aquatic life, and they're contact poisons to pollinators just landing on a treated surface (some anti-insect clothing is soaked in pyrethrin for this effect). Also, neonicotinoids are often applied as seed coatings (which are taken up and spread through the plant), so they primarily affect just the plant itself. The alternatives are commonly foliar sprays. That means drift and non-target impacts as well, such as on your shelterbelts, private gardens, neighbors' homes, etc. You also have to use far higher total pesticide quantities with foliar sprays than with systemics, and the excess not only drifts but also washes off, etc. Neonicotinoids can affect floral visitors with adverse sublethal impacts, but e.g. large pyrethroid sprayings can cause massive, immediately fatal knockdown events across whole populations of pollinators.

Regrettable substitution is a real thing. We need to factor it in better. And that applies to nanoplastics as well.

Comment Re:And media selection of alarmist data (Score 5, Interesting) 50

So, when we say microplastics, we really mainly mean nanoplastics - the stuff made from, say, drinking hot liquids from low-melting-point plastic containers. And yeah, they very much look like a problem. The strongest evidence is for cardiovascular disease. The 2024 NEJM study, for example, found that patients with above-threshold levels of nanoplastics in carotid artery plaque were 4.5x more likely to suffer a heart attack. Neurologically, they cross the blood-brain barrier (and quite quickly). A 2023 study found that they cause alpha-synuclein to misfold and clump together, a hallmark of Parkinson's and various kinds of dementia. Broadly, they're associated with oxidative stress, neuroinflammation, protein aggregation, and neurotransmitter alterations. The oxidative stress comes from cells struggling to break down the nanoplastics inside them. They're also associated with immunotoxicity, inflammatory bowel disease, and reproductive dysfunction, including elevating inflammatory markers, impairing sperm quality, and modulating the tumor microenvironment. With respect to reproduction, they're also associated with epigenetic dysregulation, which can lead to heritable changes.

And here's one of the things that gets me - let me briefly switch to a different topic before looping back. All over, there's a rush to ban polycarbonate due to concerns over a degradation product (bisphenol-A), because it's (very weakly) estrogenic. But the effective estrogenic activity from typical levels of bisphenol-A is orders of magnitude lower than that of phytoestrogens in food and supplements; bisphenol-A is just too rare to exert much impact. Phytoestrogens have way better PR than bisphenol-A, and people spend money buying products specifically to consume more of them. Some arguments against bisphenol-A focus on what type of estrogenic activity it can promote (more proliferative activity), but that falls apart given that different phytoestrogens span the whole gamut of types of activation. Earlier research arguing for an association with estrogen-linked cancer seems to have fallen apart in more recent studies. It does seem associated with PCOS, but it's hard to call that a causal association, because PCOS is associated with all sorts of things, including diet (which could change the exposure rate vs. non-PCOS populations) and significant hormonal changes (which could change the clearance rate of bisphenol-A vs. non-PCOS populations). In short, bisphenol-A from polycarbonate is not without concern, but the concern level seems like it should be much lower than for nanoplastics.

Why bring this up? Because polycarbonate is a low-nanoplastic-emitting material. It is a quite resilient, heat-tolerant plastic, and thus - being much further from its glass transition temperature - is not particularly prone to shedding nanoplastics. By contrast, its replacements - polyethylene, polypropylene, polyethylene terephthalate, etc - are highly associated with nanoplastic release, particularly with hot liquids. So by banning polycarbonate, we increase our exposure to nanoplastics, which are much better associated with actual harms. And unlike bisphenol-A, which is rapidly eliminated from the body, nanoplastics persist. You can't get rid of them. If some big harm is discovered with bisphenol-A that suddenly makes its risk picture seem much bigger than that of nanoplastics, we can then just stop using it, and any further harm is gone. But we can't do that with nanoplastics.

People seriously need to think more about substitution risks when banning products. The EU in particular is bad about not considering it. Like, banning neonicotinoids and causing their replacement by organophosphates, etc isn't exactly some giant win. Whether it's a benefit to pollinators at all is very much up in the air, while it's almost certain that the substitution is more harmful for mammals such as ourselves (neonicotinoids have very low mammalian toxicity, unlike e.g. organophosphates, which are closely related to nerve agents).

Comment IBM: The origins of THINK (Score 1) 78

https://www.ibm.com/history/th...
"An ad hoc lecture [from 1915] from IBM's future CEO spawned a slogan to guide the company through a century and beyond"

https://humancenteredlearning....
"And we must study through reading, listening, discussing, observing and thinking. We must not neglect any one of those ways of study. The trouble with most of us is that we fall down on the latter -- thinking -- because it's hard work for people to think. And, as Dr. Nicholas Murray Butler said recently, 'all of the problems of the world could be settled easily if men were only willing to think.' (Thomas Watson, IBM)"

https://en.wikiquote.org/wiki/...
"All the problems of the world could be settled easily if men were only willing to think. The trouble is that men very often resort to all sorts of devices in order not to think, because thinking is such hard work. (Nicholas Murray Butler, often misattributed to Thomas J. Watson)"

So, yeah, echoing your point, make programmers do the hardest parts of their job all the time -- especially reviewing code from inconsistent-to-put-it-politely AI contributors -- and no wonder they feel "fried".

Does AI support for programming need to be this way? I might hope not, but we are also mainly hearing about AI used within a short-term-profit-maximizing hyper-competitive corporate social context. Like I say in my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

Comment Re:Also required reading in history of technology (Score 1) 39

A tangent on another historian of science and technology, whose course I enjoyed taking, who wrote about information technology, and who died way too young:
https://en.wikipedia.org/wiki/...
        "James Ralph Beniger (December 16, 1946 -- April 12, 2010) was an American historian and sociologist and Professor of Communications and Sociology at the Annenberg School for Communication at the University of Southern California, particularly known for his early work on the history of quantitative graphics in statistics, and his later work on the technological and economic origins of the information society."

RIP Jim. Thanks for being an excellent -- and kind -- professor. And mentioning me and other students with thanks in your Control Revolution book.

And thanks also for introducing me to the 1978 book "Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought" by Langdon Winner --- which just gets more relevant every day.
https://www.amazon.com/Autonom...
        "This study of the idea of technology out of control makes an important contribution to our understanding of the problems of civilization. The basic argument is not that some persons or groups promote technology against the public interest (true though that is), or even that our technology develops in its own way in spite of all our efforts to control it (also true in some respects). Rather, Winner is concerned with a more subtle effect: the artifacts that we have invented to satisfy our material wants have now developed, in size and complexity, to the point of delimiting or even determining our conception of the wants themselves. In that way, we as a civilization are losing mastery over our own tools.... 'As a source for readings and reflections on this problem, the book is rich and rewarding.... If it has a practical lesson, it is that of awareness: only by recognizing the boundaries of our socially constructed scientific-technological reality can we transcend them in imagination and then achieve effective human action.'"

Winner's book is perhaps a sort of antithesis of "The Soul of A New Machine"? In that sometimes technologists (I'm looking at your OpenAI and cohorts) can get so excited about problem solving and miss the big picture. Related humor:
https://www.reddit.com/r/Jokes...
        "This reminds me of the engineer who is condemned to death, but as his turn approaches they find that the mechanism on the electric chair is no longer functioning. About to send everyone home, the engineer calls out "Wait! I think i can see the problem!""

I heard that Winner did not get tenure at MIT essentially because he suggested that the method of education used there was essentially blinding the MIT students to the social implications of what they were doing. A related book:
https://web.archive.org/web/20...
        "Who are you going to be? That is the question.
        In this riveting book about the world of professional work, Jeff Schmidt demonstrates that the workplace is a battleground for the very identity of the individual, as is graduate school, where professionals are trained. He shows that professional work is inherently political, and that professionals are hired to subordinate their own vision and maintain strict "ideological discipline."
        The hidden root of much career dissatisfaction, argues Schmidt, is the professional's lack of control over the political component of his or her creative work. Many professionals set out to make a contribution to society and add meaning to their lives. Yet our system of professional education and employment abusively inculcates an acceptance of politically subordinate roles in which professionals typically do not make a significant difference, undermining the creative potential of individuals, organizations and even democracy.
        Schmidt details the battle one must fight to be an independent thinker and to pursue one's own social vision in today's corporate society. He shows how an honest reassessment of what it really means to be a professional employee can be remarkably liberating. After reading this brutally frank book, no one who works for a living will ever think the same way about his or her job."

Anyway, I am so thankful for people like Jim Beniger, Langdon Winner, Michael Mahoney, and others who provided a larger perspective to technologists on what they were doing.

If I can find fault with "The Soul of a New Machine" -- from hazy recollections 40+ years ago -- it's that I don't think it did that. It's still informative, though, on how technology -- like AI development now -- can become addictive for techies, with both good and bad aspects.
https://en.wikipedia.org/wiki/...
      "The book follows many of the designers as they give almost every waking moment of their lives to design and debug the new machine."

It's maybe kind of like the difference between books that glorify warmaking and "winning" versus those that glorify peacemaking and "win/win/win"?

An example of the latter is "To Become a Human Being: The Message of Tadodaho Chief Leon Shenandoah".
https://en.wikipedia.org/wiki/...

Or an essay by Terry Dobson on Aikido:
https://livingthepresentmoment...
https://en.wikipedia.org/wiki/...
      "My teacher taught us each morning that the art was devoted to peace. "Aikido," he said again and again, "is the art of reconciliation. Whoever has the mind to fight has broken his connection with the universe. If you try to dominate other people, you are already defeated. We study how to resolve conflict, not how to start it." ..."

Or, in a way, Theodore Sturgeon's 1956 short story "The Skills of Xanadu", about wearable networked mobile computing for sharing knowledge, which inspired Ted Nelson to invent hypertext, which in turn contributed to the creation of the web.

Or Alfie Kohn's "No Contest: The Case Against Competition":
https://www.alfiekohn.org/cont...

One of the most interesting things I overheard at lunch at IBM Research was something along the lines of "We hire the top people from the most competitive schools -- and then wonder why they can't get along and cooperate."

And I'd add, wonder why many technologists rarely take much time to think about the broader social implications of what they are doing.

And "Soul of a New Machine" -- from what I can remember of it -- exemplifies and celebrates that focused mindset.

I can thank someone (maybe Bruce Maier?) who posted this essay to the Lyrics TOPS-10 timesharing system I used in high school circa 1980 -- for helping me have some reservations about such narrow focus:
"The Hacker Papers [about some Stanford CS students]"
https://cdn.preterhuman.net/te...
      "As much as an essay, this is a story. It is a true story of people paying $9,000 a year to lose elements of their humanity. It is a story of the breaking of wills and of people. It is a story of addictions, and of misplaced values. In a large part, it is my own story. ..."

Comment Also required reading in history of technology (Score 1) 39

Also assigned reading in Michael Mahoney's course on the history of science and technology at Princeton circa 1984. Although I had read it already -- and found it inspiring.

RIP Tracy Kidder.

And also RIP Professor Mahoney who died in 2008 at the relatively young age of 69 -- just when a historian is typically getting very productive.

https://en.wikipedia.org/wiki/...

https://www.dailyprincetonian....

"Histories of Computing"
https://www.hup.harvard.edu/bo...
"Computer technology is pervasive in the modern world, its role ever more important as it becomes embedded in a myriad of physical systems and disciplinary ways of thinking. The late Michael Sean Mahoney was a pioneer scholar of the history of computing, one of the first established historians of science to take seriously the challenges and opportunities posed by information technology to our understanding of the twentieth century."

It's going to be a rough next decade, with the ongoing loss of pioneers of personal computing and those who wrote about them or inspired them. People like Steve Jobs and Doug Engelbart and Isaac Asimov and Theodore Sturgeon passed on years ago. Glad that Steve Wozniak and Alan Kay and Dan Ingalls and so on are still hanging in there! A heartfelt thanks to all of them for giving us possibilities -- even if we may not be doing great things with them right now.

"Original architects [Wozniak] of the personal computer hate what it's become..."
https://www.youtube.com/watch?...

"Steve Wozniak says he's "disappointed a lot" by AI and rarely uses it"
https://www.techspot.com/news/...

Comment Re:Robot philosopher? (Score 1) 94

Touché! Good point on "actually" and whether it is qualified, thanks. I guess that word only fits in relation to there actually being a novel which I was referring to? But I agree I should have worded that better. The word "fictionally" might have been a better choice? Glad your comment sparked some tangential discussion by others.

Comment Re:Robot philosopher? (Score 3, Insightful) 94

Children raised by robot educators/philosophers actually worked out well in the fictional sci-fi novel by James P. Hogan "Voyage from Yesteryear":
https://en.wikipedia.org/wiki/...
"The story opens early in the 21st century, as an automated space probe is being prepared for a mission to explore habitable exoplanets in the Alpha Centauri system. However, Earth appears destined for a global war which the probe designers fear that humanity may not survive. It appears that the only chance for the human species is to reestablish itself far away from the conflict but there is no time left for a crewed expedition to escape Earth. The team, led by Henry B. Congreve, change their mission priority and quickly modify the design to carry several hundred sets of electronically coded human genetic data. Also included in this mission of embryo space colonization is a databank of human knowledge, robots to convert the data into genetic material and care for the children and construct habitats when the destination is reached, and a number of artificial wombs. The probe's designers name it the Kuan-Yin after the bodhisattva of childbirth and compassion.
        Shortly after the launch, global war indeed breaks out and several decades later, Earthbound humanity is united under an authoritarian government. It is this government that receives a radio message from the fledgling "Chironian" civilization revealing that the probe found a habitable planet (Chiron) and that the first generation of children have been raised successfully.
        As the surviving power blocs of Earth before the conflict are still evident, North America, Europe and Asia each send a generation ship to Alpha Centauri to take control of the colony. By the time that the first generation ship (the American Mayflower II) arrives after 20 years, Chironian society is in its fifth generation.
        The Mayflower II has brought with it thousands of settlers, all the trappings of the authoritarian regime along with bureaucracy, religion, fascism and a military presence to keep the population in line. However, the planners behind the generation ship did not anticipate the direction that Chironian society took: in the absence of conditioning and with limitless robotic labor and fusion power, Chiron has become a post-scarcity economy. Money and material possessions are meaningless to the Chironians and social standing is determined by individual talent, which has resulted in a wealth of art and technology without any hierarchies, central authority or armed conflict.
        In an attempt to crush this anarchist adhocracy, the Mayflower II government employs every available method of control; however, in the absence of conditioning the Chironians are not even capable of comprehending the methods, let alone bowing to them. The Chironians simply use methods similar to Gandhi's satyagraha and other forms of nonviolent resistance to win over most of the Mayflower II crew members, who had never previously experienced true freedom, and isolate the die-hard authoritarians. ..."

I guess it maybe all depends how wisely the robots are programmed initially? Sadly, with several AI companies racing forward in a winner-takes-all hyper-competitive safety-ignoring way to earn a lot of fiat dollars, seems like something needs to change to build a more hopeful future. Still, there is always the "Optimism of Uncertainty" as Howard Zinn called it.
https://www.thenation.com/arti...

Comment Tools of abundance misused from scarcity mindset (Score 1) 312

Time to trot out my sig again, sigh: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
https://pdfernhout.net/recogni...
"The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream."

It was awesome ten days ago to see someone else who understands that irony:
"The Real Danger of AI Has Nothing to Do With AI" by Mo Gawdat
https://www.youtube.com/watch?...
        "Artificial intelligence is often described as a force that will shape humanity's future, yet technology itself carries no intention, morality, or agenda. The direction it takes depends entirely on the values and decisions of the people creating and using it.
        As intelligence becomes more powerful and capable of solving increasingly complex problems, the real question shifts away from what machines can do and toward what humanity chooses to prioritize. Every major technological leap reflects the ethical frameworks of the society behind it, meaning the future shaped by AI will ultimately mirror human behavior rather than machine logic.
        The challenge is not teaching machines to think -- it is learning to think more wisely ourselves."

Mo Gawdat was talking about AI, but the same applies to using advanced manufacturing and supply chains to make hypersonic missiles as in the article. That's US$99,000 that can't otherwise be used to make food, water, shelter, energy, 3D printers, and so on -- and so exacerbating the very conflicts that lead people to want to use hypersonic missiles.

Or for a more humorous take on this by me from 2009 as another "Downfall" bunker scene parody (which coincidentally starts with a mention of rockets/missiles):
https://groups.google.com/g/po...
        "Dialog of alternatively a military officer and Hitler:
"It looks like there are now local digital fabrication facilities here, here, and here."
"But we still have the rockets we need to take them out?"
"The rockets have all been used to launch seed automated machine shops for self-replicating space habitats for more living space in space."
"What about the nuclear bombs?"
"All turned into battery-style nuclear power plants for island cities in the oceans."
"What about the tanks?"
"The diesel engines have been remade to run biodiesel and are powering the internet hubs supplying technical education to the rest of the world."
"I can't believe this. What about the weaponized plagues?"
"The gene engineers turned them into antidotes for most major diseases like malaria, tuberculosis, cancer, and river blindness."
"Well, send in the Daleks."
"The Daleks have been re-outfitted to terraform Mars. They're all gone with the rockets."
"Well, use the 3D printers to print out some more grenades."
"We tried that, but they only are printing toys, food, clothes, shelters, solar panels, and more 3D printers, for some reason."
"But what about the Samsung automated machine guns?"
"They were all reprogrammed into automated bird watching platforms. The guns were taken out and melted down into parts for agricultural robots."
"I just can't believe this. We've developed the most amazing technology the world has ever known in order to create artificial scarcity so we could rule the world through managing scarcity. Where is the scarcity?"
"Gone, Mein Fuhrer, all gone. All the technologies we developed for weapons to enforce scarcity have all been used to make abundance."
"How can we rule without scarcity? Where did it all go so wrong? ... Everyone with an engineering degree leave the room ... now!" [Cue long tirade on the general incompetence of engineers. :-) Then cue long tirade on how engineers could seriously have wanted to help the German workers not have to work so hard when the whole Nazi party platform was based on providing full employment using fiat dollars. Then cue long tirade on how engineers could have taken the socialism part seriously and shared the wealth of nature and technology with everyone globally.]
"So how are the common people paying for all this?"
"Much is free, and there is a basic income given to everyone for the rest. There is so much to go around with the robots and 3D printers and solar panels and so on, that most of the old work no longer needs to be done."
"You mean people get money without working at jobs? But nobody would work?"
"Everyone does what they love. And they are producing so much just as gifts."
"Oh, so you mean people are producing so much for free that the economic system has failed?"
"Yes, the old pyramid scheme one, anyway. There is a new post-scarcity economy, where between automation and a gift economy the income-through-jobs link is almost completely broken. Everyone also gets income as a right of citizenship, as a share of all our resources, for the few things that still need to be rationed. Even you."
"Really? How much is this basic income?"
"Two thousand a month."
"Two thousand a month? Just for being me?"
"Yes."
"Well, with a basic income like that, maybe I can finally have the time and resources to get back to my painting...""

Sadly we are seeing some real bunker scenes with billionaires and they are not so funny:
https://www.vice.com/en/articl...
        "The men cited potential disasters caused by electromagnetic pulses, economic downturn, disease, or war that might "necessitate them leaving their Silicon Valley ranches and retreating to these fortified bunkers in the middle of nowhere."
        These "luxury bunkers" include features most of us could only ever dream of, like indoor pools and artificial sunlight, allowing them to remain sealed off from the world for years at a time, if necessary.
        "The billionaires understand that they're playing a dangerous game," Rushkoff said. "They are running out of room to externalize the damage of the way that their companies operate. Eventually, there's going to be the social unrest that leads to your undoing."
        Like the gated communities of the past, their biggest concern was to find ways to protect themselves from the "unruly masses," Rushkoff said. "The question we ended up spending the majority of time on was: 'How do I maintain control of my security force after my money is worthless?'"
        That is, if their money is no longer worth anything--if money no longer means power--how and why would a Navy Seal agree to guard a bunker for them?
        "Once they start talking in those terms, it's really easy to start puncturing a hole in their plan," Rushkoff said. "The most powerful people in the world see themselves as utterly incapable of actually creating a future in which everything's gonna be OK.""
