Comment Re:Off Insulin onto immunosuppressants for life... (Score 3, Informative) 39

I agree that this therapy is not without significant risks, so it's not to be taken lightly.

That said, the long-term health outcomes of T1DM are also significant. So the way I see this development is that it is one more step on the path toward finding a durable, safe, and effective cure. And if approved, it may offer some patients another choice, one that of course should involve an informed discussion with competent healthcare providers.

It's important to keep in mind that healthcare is not a "one size fits all" thing. Two patients with the same condition can respond very differently to the same therapy. Before the discovery of insulin, diabetics literally just...died. So on the path to understanding this relationship between the individual patient and the selected therapy, medical science can only offer a range of treatment options. At one time, humans believed in bloodletting, lobotomies, and arsenic to treat various illnesses. We built leper colonies. And in some places in the world, menstruation is still considered "dirty." We have made many advances, but there are still many more to be made.

Comment Energy Cost of Slashdot (Score 1) 48

Are you feeling shame for the environmental impact that your use of LLMs is having?

I doubt anyone here does; otherwise we would not be burning power on a laptop or mobile phone to post our opinions to Slashdot for others to burn more power reading. It may be less power than an LLM query, but it's not none, and it's not really necessary.

Comment Relative Risk, not Absolute (Score 3, Informative) 89

The efficacy is likely a wash, or possibly harmful.

So how do you explain the significant drop in Covid hospitalizations and deaths among those vaccinated in the trials? The 95% efficacy figure was for stopping any Covid symptoms; what mattered more was the massive drop in hospitalization and fatality rates among the vaccinated. You can recover from mild flu-like symptoms; it's a lot harder to recover from death. Yes, the vaccine was rushed out faster than normal, but once the rate of significant reactions to the vaccine was under ~1 in 100,000, any harm from the vaccine was orders of magnitude lower than the harm from catching Covid.

Any medical procedure can cause harm, so if your criterion is zero risk of harm, your only option is to never go to the doctor. The relevant question is whether the harm of the procedure is less than the harm of forgoing it, and for Covid vaccines the data show that catching Covid is overwhelmingly more harmful than any risk of harm from the vaccine.
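To make the "orders of magnitude" point concrete, here is a back-of-envelope sketch. The numbers are illustrative assumptions for the arithmetic only (the ~1 in 100,000 significant-reaction rate from the comment above, and an assumed ~0.5% infection fatality rate for Covid), not measured values:

```python
# Illustrative comparison of relative harms; both rates are
# assumptions for the sake of the arithmetic, not real data.
p_vaccine_harm = 1 / 100_000   # assumed rate of significant vaccine reactions
p_covid_death = 0.005          # assumed Covid infection fatality rate (~0.5%)

relative_risk = p_covid_death / p_vaccine_harm
print(f"Under these assumptions, catching Covid is roughly "
      f"{relative_risk:.0f}x more harmful than the vaccine.")
```

Under those assumed rates the ratio comes out in the hundreds, which is the "orders of magnitude" the comment describes; the real comparison of course depends on age, variant, and the actual measured rates.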

Comment "Is AI Apocalypse Inevitable? - Tristan Harris" (Score 1) 77

Another video echoing the point on the risks of AI combined with "bad" capitalism: https://www.youtube.com/watch?...
        "(8:54) So just to summarize: We're currently releasing the most powerful, inscrutable, uncontrollable technology that humanity has ever invented, that's already demonstrating the behaviors of self-preservation and deception that we thought only existed in sci-fi movies. We're releasing it faster than we've released any other technology in history -- and under the maximum incentive to cut corners on safety. And we're doing this because we think it will lead to utopia? Now there's a word for what we're doing right now -- which is this is insane. This situation is insane.
        Now, notice what you're feeling right now. Do you feel comfortable with this outcome? But do you think that if you're someone who's in China or in France or the Middle East, or you're part of building AI, and you're exposed to the same set of facts about the recklessness of this current race, do you think you would feel differently? There's a universal human experience to the thing that's being threatened by the way we're currently rolling out this profound technology into society. So, if this is crazy, why are we doing it? Because people believe it's inevitable. [Same argument for any arms race.] But just think for a second. Is the current way that we're rolling out AI actually inevitable? Like, if literally no one on Earth wanted this to happen, would the laws of physics force AI out into society? There's a critical difference between believing it's inevitable, which creates a self-fulfilling prophecy and leads people to being fatalistic and surrendering to this bad outcome -- versus believing it's really difficult to imagine how we would do something really different. But "it's difficult" opens up a whole new space of options and choice and possibility than simply believing "it's inevitable," which is a thought-terminating cliche. And so the ability for us to choose something else starts by stepping outside the self-fulfilling prophecy of inevitability. We can't do something else if we believe it's inevitable.
        Okay, so what would it take to choose another path? Well, I think it would take two fundamental things. The first is that we have to agree that the current path is unacceptable. And the second is that we have to commit to finding another path -- but under different incentives that offer more discernment, foresight, and where power is matched with responsibility. So, imagine if the whole world had this shared understanding about the insanity, how differently we might approach this problem..."

He also makes the point that we ignored the downsides of social media and so got the current problematic situation related to it -- so do we really want to do the same with way-more-risky AI? He calls for "global clarity" on AI issues. He provides examples from nuclear weapons, biotech, and the ozone layer of how collective understanding and then collective action made a difference in managing risks.

Tristan Harris is associated with "The Center For Humane Technology" (whose mailing list I joined a while back):
https://www.humanetech.com/
"Articulating challenges.
Identifying interventions.
Empowering humanity."

Just saw this yesterday: former President Obama talking about how concerns about AI are not hype (mostly about economic disruption), and also how cooperation between people is the biggest issue:
"ORIGINAL FULL CONVERSATION: An Evening with President Barack Obama"
https://www.youtube.com/watch?...
        "(31:43) The changes I just described are accelerating. If you ask me right now the thing that is not talked about enough but is coming to your neighborhood faster than you think, this AI revolution is not made up; it's not overhyped. ... I was talking to some people backstage who are uh associated with businesses uh here in the Hartford community. Uh, I guarantee you you're going to start seeing shifts in white collar work as a consequence of uh what these new AI models can do. And so that's going to be more disruption. And it's going to speed up. Which is why uh, one of the things I discovered as president is most of the problems we face are not simply technical problems. If we want to solve climate change, uh we probably do need some new battery technologies and we need to make progress in terms of getting to zero emission carbons. But, if we were organized right now we could reduce our emissions by 30% with existing technologies. It'd be a big deal. But getting people organized to do that is hard. Most of the problems we have, have to do with how do we cooperate and work together, uh not you know ... do we have a ten point plan or the absence of it."

I would respectfully build on what President Obama said by adding that a major reason it is hard to get people to cooperate around such technology is that we need to shift our perspective, as suggested by my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

I said much the same in the open letter to Michelle Obama from 2011:
https://pdfernhout.net/open-le...

One thing I would add to such a letter now is a mention of Dialogue Mapping using IBIS (perhaps even AI-assisted) to help people cooperate on solving "wicked" problems through visualizing the questions, options, and supporting pros and cons in their conversations:
https://cognitive-science.info...
https://pdfernhout.net/media/l...
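For those unfamiliar with IBIS, the core idea is a tree of typed nodes: questions, answering ideas, and pro/con arguments attached to those ideas. Here is a minimal sketch of that structure in Python; the class and method names are my own illustration, not from any particular Dialogue Mapping tool:

```python
# Minimal sketch of an IBIS (Issue-Based Information System) tree,
# the structure underlying Dialogue Mapping. Node kinds are
# "question", "idea", "pro", and "con"; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                      # "question", "idea", "pro", or "con"
    text: str
    children: list = field(default_factory=list)

    def add(self, kind, text):
        """Attach and return a child node (e.g. a pro under an idea)."""
        child = Node(kind, text)
        self.children.append(child)
        return child

    def render(self, depth=0):
        """Return the tree as indented lines for display."""
        lines = ["  " * depth + f"[{self.kind}] {self.text}"]
        for c in self.children:
            lines.extend(c.render(depth + 1))
        return lines

# Example: mapping one fragment of the AI-rollout debate.
q = Node("question", "How should society roll out powerful AI?")
idea = q.add("idea", "Agree on shared safety incentives first")
idea.add("pro", "Avoids a race to cut corners")
idea.add("con", "Hard to coordinate globally")

print("\n".join(q.render()))
```

The value in practice comes less from the data structure than from a group building such a map together in real time, so everyone can see where their contribution fits.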

Here is one example of people working in that general area to support human collaboration on "wicked problems" (there are others, but I am conversing with related people at the moment): "The Sensemaking Scenius," as one way to help build the "global clarity" that Tristan Harris and, indirectly, President Obama call for:
https://www.scenius.space/
        "The internet gods blessed us with an abundance of information & connectivity -- and in the process, boiled our brains. We're lost in a swirl of irrelevancy, trading our attention, at too low a price. Technology has destroyed our collective sensemaking. It's time to rebuild our sanity. But how?
Introducing The Sensemaking Scenius, a community of practice for digital builders, researchers, artists & activists who share a vision of a regenerative intentional & meaningful internet."

Something related to that by me from 2011:
http://barcamp.org/w/page/4722...
        "This workshop was led by Paul Fernhout on the theme of tools for collective sensemaking and civic engagement."

I can hope for a convergence of these AI concerns, these sorts of collaborative tools, and civic engagement.

Bucky Fuller talked about being a "trim tab", a smaller rudder on a big rudder for a ship, where the trim tab slowly turns the bigger rudder which ultimately turns the ship. Perhaps civic groups can also be "trim tabs", as in: "Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it's the only thing that ever has. (Margaret Mead)"

To circle back to the original article on what Facebook is doing: frankly, if there are some people at Facebook who really care about the future of humanity more than the next quarter's profits, this is the kind of work they could be doing related to "Artificial Super Intelligence". They could add tools for Dialogue Mapping (with IBIS or similar, perhaps supported by AI) to Facebook's platform to help people understand the risks and opportunities of AI and to support related social collaboration toward workable solutions -- rather than just rushing ahead to create ASI for some perceived short-term economic advantage. And this sort of collaboration-enhancing work is the kind of thing Facebook should be paying 100-million-dollar signing bonuses for, if such bonuses make any sense at all.

I quoted President Carter in that open letter, and the sentiment is as relevant about AI as it was then about energy:
        http://www.pbs.org/wgbh/americ...
        "We are at a turning point in our history. There are two paths to choose. One is a path I've warned about tonight, the path that leads to fragmentation and self-interest. Down that road lies a mistaken idea of freedom, the right to grasp for ourselves some advantage over others. That path would be one of constant conflict between narrow interests ending in chaos and immobility. It is a certain route to failure. All the traditions of our past, all the lessons of our heritage, all the promises of our future point to another path, the path of common purpose and the restoration of American values. That path leads to true freedom for our nation and ourselves. We can take the first steps down that path as we begin to solve our energy [or AI] problem."

Comment Re:If the shoe fits... (Score 1) 24

The two aren't actually so different. You do get to make economic arguments a lot more openly about copyright (while, when it comes to killing, we normally make them relatively quietly and circumspectly when the unpleasant question comes up of which risks to the public are just part of The March of Progress and which are negligent or reckless. We prefer not to talk about it, and have some proxies like 'VSL/ICAF' to help; but we do it); but the classifications are ultimately a policy thing and open to amendment as needed.

"Murder" superficially resembles a stable category just because of a true-by-definition trick: we call it 'murder' if a killing is unlawful and forbidden (or, rhetorically, if we think it ought to be unlawful and forbidden); so there's always a strong anti-murder consensus, because everyone is against killings that are forbidden, except a few Raskolnikov-type edgelords. What there is not is an actual consensus on which killings we are or aren't against. The people who think that every other defensive option must be exhausted and the ones who are just itching to castle-doctrine the next fool who steps over the property line are both anti-murder, but not entirely in agreement on what that means; same with the current dispute over whether euthanasia is a legitimate exercise of self-determination or nihilistic hyper-sin; or any of the wartime arguments over where 'collateral damage' stops being unfortunate-but-proportionate and goes into being bulk murder.

It is somewhat more common to find (in public, not so much remotely in the vicinity of legislative power) people who will outright claim to be against copyright, because they do not consider any derivative works to be legitimately unauthorized; but here it's a more or less straightforward fight between two entities that would both claim to be in favor of copyright, but who differ on whether setting up a data mine in the BBC's backyard is copyright infringement or not.

Comment Re: Wrong headline (Score 1) 14

Well, in a case from late last year in which I was a member of the class, I thought the claim was totally bogus. And that case did not go to trial; instead the company settled out of court. That's the best result for the scumbag lawyers: they get paid without having their bogus claims subjected to an actual trial. I think this claim against Apple is similarly bogus, and has the same goal. Settle out of court, lawyers pocket their huge payout....

The one class action where there was a legitimate concern, a meaningful settlement, and I actually suffered significant damages was a case over failed plumbing (plastic pipes). But by the time our pipes failed, the settlement was out of money.

Comment Re:"News for Nerds, Stuff that Matters" (Score 4, Informative) 52

He's the founder and CEO of one of the companies HP bought during Apotheker's...impressive...string of failures. That was in 2011, but it returned to the news: first when HP wrote down their $10.3 billion acquisition by $8.8 billion, then when HP began litigating against previous management on the theory that they must have been cooking the books a bit for things to go so wrong so fast under HP's illustrious management.

The charges stuck against the CFO, but the CEO and VP of finance were acquitted. Then the VP of finance got hit by a car; the CEO's celebratory yacht outing took a literal turn when the ship capsized and he died; and the VP of finance succumbed to his head injuries less than 48 hours later.

I'm not sure anyone thinks well enough of HP's ability to execute to seriously suspect them; but the background probably didn't reduce interest in getting a nice decisive root cause for the boat issue.
