
Comment Re:Nobody (Score 1) 43

To be fair, outside of GPUs there really isn't much need for third party cards, and arguably even GPUs aren't a show stopper with third party GPU cages.

And given Apple's total lack of third-party GPU support on Apple Silicon (beyond a few kludges that use them for AI workloads, but not for display output), losing the current Mac Pro is no great loss. :-)

And yeah, the BlackMagic stuff I've bought lately is Thunderbolt. Also, most of the software for things like real-time switching runs on Windows anyway, so I'd imagine the market for that on Mac is not huge. And so much stuff gets brought in over networks these days (NDI, SRT, etc.) that HDMI ingest probably isn't that interesting anyway. You're more likely to use a dedicated encoder box that provides streams over Ethernet. My production work has been doing it that way since the pandemic.

Comment Re:the last mac pro had an big upchange for very l (Score 1) 43

Even more than the PCI-lanes, there wasn't hardware to justify it. With Apple Silicon, the GPU is built in and you can't fill the case with cards from NVidia to make it a CUDA-monster or handle graphics beyond the (impressive) abilities of the combined CPU/GPU.

Exactly this. Apple neutered the Mac Pro by making all of its additional functionality useless.

Years ago, they announced that they were killing support for kernel-space drivers. Then they announced a user-space replacement, DriverKit, that is basically half-assed when it comes to PCI, providing no support for any of the sorts of PCIe drivers that anyone would actually want to write. The operating system already comes with built-in support for USB xHCI silicon and most major networking chipsets, nobody builds PCIe audio anymore, FireWire support is dropped in the current OS release, and video drivers can't be written because Apple didn't bother writing the hooks.

That last one is the showstopper for PCI slots on a Mac. The main reason people bought Mac Pro or bought Thunderbolt enclosures was to support high-end video cards. With Apple not supporting any non-Apple GPUs on Apple Silicon, the slots are basically useless. I'm not saying that PCIe is useless by any means, just that the neutered, broken, driverless PCIe-lite hack that Apple actually makes available on macOS is basically useless.

I suppose you could theoretically provide DriverKit support for RAID cards, but really at this point everybody just uses external RAID hardware attached over a network anyway, so the number of people who would buy a Mac Pro for something like that is negligible.

And I guess in theory, you could port Linux video card drivers over if the only thing you're doing is using GPUs for non-video purposes (e.g. for AI model training or offline 3D rendering), but tying it into the operating system as a video output device is likely impossible without additional support from Apple, and nobody is going to bother to do that for the tiny number of people who would want that when you can just run Linux on x86 and not have to do all that porting work. After all, for those sorts of tasks, you probably aren't benefitting much from the OS or the CPU or memory performance anyway.

So basically the Mac Pro was dead on arrival because of Apple dropping support for very nearly every single thing that the Mac Pro could do that couldn't be done just as easily with a Studio (without even attaching a Thunderbolt PCIe enclosure). And once the Studio came out and had a comparable CPU in a much smaller form factor, the writing was on the wall.

More than that, the Apple Silicon Mac Pro is a sad toy that was never truly worthy of the Mac Pro name by any stretch of the imagination. It doesn't even have ECC memory or upgradable RAM. IMO, Apple really should have just been honest with its pro users and said "We no longer care about you," and then they should have dropped the Mac Pro as part of the Apple Silicon transition, rather than shipping something so massively downgraded that is so many miles from being a true pro desktop machine.

Anyone who is even slightly surprised by it being discontinued was obviously not paying attention.

Comment Re:Who gave Paul modpoints? (Score 1) 85

I am not even going to assess whether Biden was compos mentis. Maybe it was some medication or some other benign reason; it doesn't matter. But what I can say is that his performance during the debate caused many die-hard Democrats to declare him incompetent and made it acceptable for media and pundits to turn on him (which they never did before).

What phrase would you use to describe that, other than "spoke incoherently"? I mean, I can think of some medical terms that might apply, but that's how I would describe his debate performance. He wandered off the subject, had trouble forming a complete thought... basically like a Trump speech, only he paused a lot when he lost his train of thought instead of rambling about illegal aliens eating pets or whatever.

Comment Re:Also required reading in history of technology (Score 1) 30

A tangent on another historian of science and technology, whose course I enjoyed taking, who wrote about information technology, and who died far too young:
https://en.wikipedia.org/wiki/...
        "James Ralph Beniger (December 16, 1946 -- April 12, 2010) was an American historian and sociologist and Professor of Communications and Sociology at the Annenberg School for Communication at the University of Southern California, particularly known for his early work on the history of quantitative graphics in statistics, and his later work on the technological and economic origins of the information society."

RIP Jim. Thanks for being an excellent -- and kind -- professor. And mentioning me and other students with thanks in your Control Revolution book.

And thanks also for introducing me to the 1978 book "Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought" by Langdon Winner -- which just gets more relevant every day.
https://www.amazon.com/Autonom...
        "This study of the idea of technology out of control makes an important contribution to our understanding of the problems of civilization. The basic argument is not that some persons or groups promote technology against the public interest (true though that is), or even that our technology develops in its own way in spite of all our efforts to control it (also true in some respects). Rather, Winner is concerned with a more subtle effect: the artifacts that we have invented to satisfy our material wants have now developed, in size and complexity, to the point of delimiting or even determining our conception of the wants themselves. In that way, we as a civilization are losing mastery over our own tools.... 'As a source for readings and reflections on this problem, the book is rich and rewarding.... If it has a practical lesson, it is that of awareness: only by recognizing the boundaries of our socially constructed scientific-technological reality can we transcend them in imagination and then achieve effective human action.'"

Winner's book is perhaps a sort of antithesis of "The Soul of A New Machine"? In that sometimes technologists (I'm looking at you, OpenAI and cohorts) can get so excited about problem solving that they miss the big picture. Related humor:
https://www.reddit.com/r/Jokes...
        "This reminds me of the engineer who is condemned to death, but as his turn approaches they find that the mechanism on the electric chair is no longer functioning. Just as they are about to send everyone home, the engineer calls out, "Wait! I think I can see the problem!""

I heard that Winner did not get tenure at MIT essentially because he suggested that the method of education used there was essentially blinding the MIT students to the social implications of what they were doing. A related book:
https://web.archive.org/web/20...
        "Who are you going to be? That is the question.
        In this riveting book about the world of professional work, Jeff Schmidt demonstrates that the workplace is a battleground for the very identity of the individual, as is graduate school, where professionals are trained. He shows that professional work is inherently political, and that professionals are hired to subordinate their own vision and maintain strict "ideological discipline."
        The hidden root of much career dissatisfaction, argues Schmidt, is the professional's lack of control over the political component of his or her creative work. Many professionals set out to make a contribution to society and add meaning to their lives. Yet our system of professional education and employment abusively inculcates an acceptance of politically subordinate roles in which professionals typically do not make a significant difference, undermining the creative potential of individuals, organizations and even democracy.
        Schmidt details the battle one must fight to be an independent thinker and to pursue one's own social vision in today's corporate society. He shows how an honest reassessment of what it really means to be a professional employee can be remarkably liberating. After reading this brutally frank book, no one who works for a living will ever think the same way about his or her job."

Anyway, I am so thankful for people like Jim Beniger, Langdon Winner, Michael Mahoney, and others who provided a larger perspective to technologists on what they were doing.

If I can find fault with "The Soul of a New Machine" based on hazy recollections from 40+ years ago, it's that I don't think it did that. It's still informative, though, on how technology -- like AI development now -- can become addictive for techies, with both good and bad aspects.
https://en.wikipedia.org/wiki/...
      "The book follows many of the designers as they give almost every waking moment of their lives to design and debug the new machine."

It's maybe kind of like the difference between books that glorify warmaking and "winning" versus those that glorify peacemaking and "win/win/win"?

An example of the latter is "To Become a Human Being: The Message of Tadodaho Chief Leon Shenandoah".
https://en.wikipedia.org/wiki/...

Or an essay by Terry Dobson on Aikido:
https://livingthepresentmoment...
https://en.wikipedia.org/wiki/...
      "My teacher taught us each morning that the art was devoted to peace. "Aikido," he said again and again, "is the art of reconciliation. Whoever has the mind to fight has broken his connection with the universe. If you try to dominate other people, you are already defeated. We study how to resolve conflict, not how to start it." ..."

Or, in a way, Theodore Sturgeon's 1956 short story "The Skills of Xanadu", about wearable networked mobile computing for sharing knowledge, which inspired Ted Nelson to invent hypertext, which in turn contributed to the creation of the web.

Or Alfie Kohn's "No Contest: The Case Against Competition":
https://www.alfiekohn.org/cont...

One of the most interesting things I overheard at lunch at IBM Research was something along the lines of "We hire the top people from the most competitive schools -- and then wonder why they can't get along and cooperate."

And I'd add, wonder why many technologists rarely take much time to think about the broader social implications of what they are doing.

And "Soul of a New Machine" -- from what I can remember of it -- exemplifies and celebrates that focused mindset.

I can thank someone (maybe Bruce Maier?) who posted this essay to the Lyrics TOPS-10 timesharing system I used in high school circa 1980 -- for helping me have some reservations about such narrow focus:
"The Hacker Papers [about some Stanford CS students]"
https://cdn.preterhuman.net/te...
      "As much as an essay, this is a story. It is a true story of people paying $9,000 a year to lose elements of their humanity. It is a story of the breaking of wills and of people. It is a story of addictions, and of misplaced values. In a large part, it is my own story. ..."

Comment Also required reading in history of technology (Score 1) 30

Also assigned reading in Michael Mahoney's course on the history of science and technology at Princeton circa 1984 -- although I had already read it, and found it inspiring.

RIP Tracy Kidder.

And also RIP Professor Mahoney who died in 2008 at the relatively young age of 69 -- just when a historian is typically getting very productive.

https://en.wikipedia.org/wiki/...

https://www.dailyprincetonian....

"Histories of Computing"
https://www.hup.harvard.edu/bo...
"Computer technology is pervasive in the modern world, its role ever more important as it becomes embedded in a myriad of physical systems and disciplinary ways of thinking. The late Michael Sean Mahoney was a pioneer scholar of the history of computing, one of the first established historians of science to take seriously the challenges and opportunities posed by information technology to our understanding of the twentieth century."

It's going to be a rough next decade, with the ongoing loss of pioneers of personal computing and of those who wrote about them or inspired them. People like Steve Jobs and Doug Engelbart and Isaac Asimov and Theodore Sturgeon passed on years ago. Glad that Steve Wozniak and Alan Kay and Dan Ingalls and so on are still hanging in there! A heartfelt thanks to all of them for giving us possibilities -- even if we may not be doing great things with them right now.

"Original architects [Wozniak] of the personal computer hate what it's become..."
https://www.youtube.com/watch?...

"Steve Wozniak says he's "disappointed a lot" by AI and rarely uses it"
https://www.techspot.com/news/...

Comment Re:Bye bye Wikipedia (Score 5, Insightful) 24

Even for authors of encyclopedia articles, there is nothing wrong with telling ChatGPT to "take this list of bullets and write it up as a paragraph."

Until it hallucinates and adds something that wasn't there or changes the meaning significantly. In my experience, AI is really good at screwing things up in ways that nobody expects. And if the people making the changes aren't subject-matter experts, but are just doing drive-by edits to try to make things more digestible, they might not notice the errors if they are subtle enough. Allowing any random person to do stuff like that could potentially cause a lot of damage really quickly.

Nor is there anything wrong with asking it to make a diagram of some process etc.

Until it steals the chart blatantly from somebody's published book, and Wikipedia gets sued for copyright infringement. Wikipedia isn't just trying to protect itself from erroneous data. It's trying to protect itself from liability. With user-uploaded content, the user can self-certify that they have the right to upload it, and apart from user incompetence, that's usually going to be good enough. With AI-generated images, it is impossible for a user to know for certain whether what they are uploading is infringing, and it would be hard to prove later which AI generated the diagram in order to transfer the liability to the AI company.

But the biggest risk, IMO, would be asking it to make a chart with numbers from some table. It could manipulate the numbers, and if someone isn't checking closely, they might not see the error, but the incorrect chart could easily mislead people. AI-based chart generation seems way more likely to introduce errors than a human copying and pasting the table into a spreadsheet and generating the chart with traditional non-AI-based tools.

Someone else is going to clone wikipedia and the authorship will no doubt migrate to where they are allowed to use contemporary tooling.

And after a few months, people will complain that the content is constantly wrong, the editors over there will give up trying to keep the error rate under control, and anyone with a clue will come running back to Wikipedia.

Comment Re:Who gave Paul modpoints? (Score 1) 85

I agree on why Trump got a lot of his votes. We have ample evidence that there is a very racist and misogynist element within the "conservative right."

The conservative right wouldn't have voted for Harris anyway. That's not why he won.

He won because the Democrats care whether their candidate stumbles over words and speaks incoherently, so Biden was pressured to step down, and Harris was forced to step up at the last second, with nobody really knowing who she was or what she stood for, thus limiting her ability to bring voters out.

He won because the Democrats weren't clueful enough to get Biden to fully step down and make Harris the next President immediately, which would have given the public months of seeing her actually lead the country.

If he really was struggling, then he won because Biden did not step down and let Harris take power before people started questioning whether he was fit to be in office.

He won because Biden did not recognize that he would have a hard time running again and allow an open primary.

He won because the Republicans were able to paint it as a coverup of Biden's feeble-mindedness, and the Democrats weren't able to show people that struggling mentally when you're physically fatigued isn't inherently a sign of dementia.

He won because Democrats had too much class to use the dementia card on Trump, either first or in retaliation.

He won because too many people conveniently forgot what a disaster he was during his first term, and too many people gave him a pass for the economic damage he did, and the folks prosecuting him for crimes were way too slow so it was still going at the next election.

He won because Kamala Harris was a center-right Democrat who tried to put on progressive clothes to get votes, then swung back towards the center again to get votes. Her time as a prosecutor calls into question her progressive bona fides. That meant the left didn't come out to vote.

I really don't understand why the only two women candidates that Democrats have run have at least appeared to be at the authoritarian end of their party. That doesn't win the presidency unless you're running as a Republican. Both of these candidates were mistakes. There are plenty of women in the Democratic party who would have been better choices.

In short, there were so many things wrong with her candidacy that it's hard to count them all. Gender and race were likely not a meaningful part of why she lost, or at least there are so many other confounding factors that it would be impossible to pin it on either of those.

Comment Re:Robot philosopher? (Score 1) 85

Touché! Good point on "actually" and whether it is qualified, thanks. I guess that word only fits in relation to there actually being a novel which I was referring to? But I agree I should have worded that better. The word "fictionally" might have been a better choice? Glad your comment sparked some tangential discussion by others.

Comment Re:Robot philosopher? (Score 3, Insightful) 85

Children raised by robot educators/philosophers actually worked out well in the fictional sci-fi novel by James P. Hogan "Voyage from Yesteryear":
https://en.wikipedia.org/wiki/...
"The story opens early in the 21st century, as an automated space probe is being prepared for a mission to explore habitable exoplanets in the Alpha Centauri system. However, Earth appears destined for a global war which the probe designers fear that humanity may not survive. It appears that the only chance for the human species is to reestablish itself far away from the conflict but there is no time left for a crewed expedition to escape Earth. The team, led by Henry B. Congreve, change their mission priority and quickly modify the design to carry several hundred sets of electronically coded human genetic data. Also included in this mission of embryo space colonization is a databank of human knowledge, robots to convert the data into genetic material and care for the children and construct habitats when the destination is reached, and a number of artificial wombs. The probe's designers name it the Kuan-Yin after the bodhisattva of childbirth and compassion.
        Shortly after the launch, global war indeed breaks out and several decades later, Earthbound humanity is united under an authoritarian government. It is this government that receives a radio message from the fledgling "Chironian" civilization revealing that the probe found a habitable planet (Chiron) and that the first generation of children have been raised successfully.
        As the surviving power blocs of Earth before the conflict are still evident, North America, Europe and Asia each send a generation ship to Alpha Centauri to take control of the colony. By the time that the first generation ship (the American Mayflower II) arrives after 20 years, Chironian society is in its fifth generation.
        The Mayflower II has brought with it thousands of settlers, all the trappings of the authoritarian regime along with bureaucracy, religion, fascism and a military presence to keep the population in line. However, the planners behind the generation ship did not anticipate the direction that Chironian society took: in the absence of conditioning and with limitless robotic labor and fusion power, Chiron has become a post-scarcity economy. Money and material possessions are meaningless to the Chironians and social standing is determined by individual talent, which has resulted in a wealth of art and technology without any hierarchies, central authority or armed conflict.
        In an attempt to crush this anarchist adhocracy, the Mayflower II government employs every available method of control; however, in the absence of conditioning the Chironians are not even capable of comprehending the methods, let alone bowing to them. The Chironians simply use methods similar to Gandhi's satyagraha and other forms of nonviolent resistance to win over most of the Mayflower II crew members, who had never previously experienced true freedom, and isolate the die-hard authoritarians. ..."

I guess it maybe all depends on how wisely the robots are programmed initially? Sadly, with several AI companies racing forward in a winner-takes-all, hyper-competitive, safety-ignoring way to earn a lot of fiat dollars, it seems like something needs to change to build a more hopeful future. Still, there is always the "Optimism of Uncertainty," as Howard Zinn called it.
https://www.thenation.com/arti...

Comment Tools of abundance misused from scarcity mindset (Score 1) 302

Time to trot out my sig again, sigh: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
https://pdfernhout.net/recogni...
"The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream."

It was awesome ten days ago to see someone else who understands that irony:
"The Real Danger of AI Has Nothing to Do With AI" by Mo Gawdat
https://www.youtube.com/watch?...
        "Artificial intelligence is often described as a force that will shape humanity's future, yet technology itself carries no intention, morality, or agenda. The direction it takes depends entirely on the values and decisions of the people creating and using it.
        As intelligence becomes more powerful and capable of solving increasingly complex problems, the real question shifts away from what machines can do and toward what humanity chooses to prioritize. Every major technological leap reflects the ethical frameworks of the society behind it, meaning the future shaped by AI will ultimately mirror human behavior rather than machine logic.
        The challenge is not teaching machines to think -- it is learning to think more wisely ourselves."

Mo Gawdat was talking about AI, but the same applies to using advanced manufacturing and supply chains to make hypersonic missiles as in the article. That's US$99,000 that can't otherwise be used to make food, water, shelter, energy, 3D printers, and so on -- thereby exacerbating the very conflicts that lead people to want to use hypersonic missiles.

Or for a more humorous take on this by me from 2009 as another "Downfall" bunker scene parody (which coincidentally starts with a mention of rockets/missiles):
https://groups.google.com/g/po...
        "Dialog, alternating between a military officer and Hitler:
"It looks like there are now local digital fabrication facilities here, here, and here."
"But we still have the rockets we need to take them out?"
"The rockets have all been used to launch seed automated machine shops for self-replicating space habitats for more living space in space."
"What about the nuclear bombs?"
"All turned into battery-style nuclear power plants for island cities in the oceans."
"What about the tanks?"
"The diesel engines have been remade to run biodiesel and are powering the internet hubs supplying technical education to the rest of the world."
"I can't believe this. What about the weaponized plagues?"
"The gene engineers turned them into antidotes for most major diseases like malaria, tuberculosis, cancer, and river blindness."
"Well, send in the Daleks."
"The Daleks have been re-outfitted to terraform Mars. They're all gone with the rockets."
"Well, use the 3D printers to print out some more grenades."
"We tried that, but they only are printing toys, food, clothes, shelters, solar panels, and more 3D printers, for some reason."
"But what about the Samsung automated machine guns?"
"They were all reprogrammed into automated bird watching platforms. The guns were taken out and melted down into parts for agricultural robots."
"I just can't believe this. We've developed the most amazing technology the world has ever known in order to create artificial scarcity so we could rule the world through managing scarcity. Where is the scarcity?"
"Gone, Mein Fuhrer, all gone. All the technologies we developed for weapons to enforce scarcity have all been used to make abundance."
"How can we rule without scarcity? Where did it all go so wrong? ... Everyone with an engineering degree leave the room ... now!" [Cue long tirade on the general incompetence of engineers. :-) Then cue long tirade on how engineers could seriously have wanted to help the German workers not have to work so hard when the whole Nazi party platform was based on providing full employment using fiat dollars. Then cue long tirade on how engineers could have taken the socialism part seriously and shared the wealth of nature and technology with everyone globally.]
"So how are the common people paying for all this?"
"Much is free, and there is a basic income given to everyone for the rest. There is so much to go around with the robots and 3D printers and solar panels and so on, that most of the old work no longer needs to be done."
"You mean people get money without working at jobs? But nobody would work?"
"Everyone does what they love. And they are producing so much just as gifts."
"Oh, so you mean people are producing so much for free that the economic system has failed?"
"Yes, the old pyramid-scheme one, anyway. There is a new post-scarcity economy, where between automation and a gift economy the income-through-jobs link is almost completely broken. Everyone also gets income as a right of citizenship, as a share of all our resources, for the few things that still need to be rationed. Even you."
"Really? How much is this basic income?"
"Two thousand a month."
"Two thousand a month? Just for being me?"
"Yes."
"Well, with a basic income like that, maybe I can finally have the time and resources to get back to my painting...""

Sadly we are seeing some real bunker scenes with billionaires and they are not so funny:
https://www.vice.com/en/articl...
        "The men cited potential disasters caused by electromagnetic pulses, economic downturn, disease, or war that might "necessitate them leaving their Silicon Valley ranches and retreating to these fortified bunkers in the middle of nowhere."
        These "luxury bunkers" include features most of us could only ever dream of, like indoor pools and artificial sunlight, allowing them to remain sealed off from the world for years at a time, if necessary.
        "The billionaires understand that they're playing a dangerous game," Rushkoff said. "They are running out of room to externalize the damage of the way that their companies operate. Eventually, there's going to be the social unrest that leads to your undoing."
        Like the gated communities of the past, their biggest concern was to find ways to protect themselves from the "unruly masses," Rushkoff said. "The question we ended up spending the majority of time on was: 'How do I maintain control of my security force after my money is worthless?'"
        That is, if their money is no longer worth anything--if money no longer means power--how and why would a Navy Seal agree to guard a bunker for them?
        "Once they start talking in those terms, it's really easy to start puncturing a hole in their plan," Rushkoff said. "The most powerful people in the world see themselves as utterly incapable of actually creating a future in which everything's gonna be OK.""

Comment Re:Coming soon off the back of this (Score 1) 111

Doesn't have to be a credit card. A class III user digital certificate requires a verification firm to be certain of a person's identity through multiple proofs. If an age verification service issued such a certificate, but anonymised it by binding the certificate to the user's selected screen name rather than their legal name, you would have a digital ID that proves your age and can optionally be used for encryption purposes to ensure your account is reachable only from devices you authorise.
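The idea can be sketched with a toy token scheme. This is a minimal stdlib sketch, not a real implementation: an actual deployment would use X.509 certificates and public-key signatures rather than a shared HMAC secret, and the issuer secret and function names here are hypothetical.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical issuer key. A real scheme would use an X.509 certificate
# and public-key signatures, not a shared HMAC secret.
ISSUER_SECRET = b"demo-secret-not-for-production"

def issue_age_token(screen_name, over_18):
    """The verification firm checks identity out-of-band, then signs
    only the pseudonym and the age claim -- the legal name never
    appears in the token."""
    claim = json.dumps({"sub": screen_name, "over18": over_18},
                       sort_keys=True).encode()
    sig = hmac.new(ISSUER_SECRET, claim, hashlib.sha256).hexdigest()
    return base64.b64encode(claim).decode() + "." + sig

def verify_age_token(token):
    """Return the claims dict if the signature checks out, else None."""
    body, sig = token.rsplit(".", 1)
    claim = base64.b64decode(body)
    expected = hmac.new(ISSUER_SECRET, claim, hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(claim)
    return None
```

The point of the design is that the verifier learns only the pseudonym and the age claim; tampering with either invalidates the signature.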

Comment Re:How would you protect children at scale? (Score 2) 111

I just saw the other one, and posted on it. The point still holds: at the scale of Facebook, how would you really keep kids safe?

You don't. That's what parents are for. Parents taking care of their own kids scales easily, because the number of parents is linear in the number of kids being monitored. Facebook taking care of everyone's kids scales quadratically in the number of users, because any user could be talking to any kid.
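A toy count makes the scaling gap concrete (the numbers are purely illustrative, not Facebook's actual figures):

```python
def platform_pairs(num_users, num_kids):
    """Interaction pairs a platform must consider when any user
    could be messaging any kid."""
    return num_users * num_kids

def parental_checks(num_kids):
    """Checks needed when each kid is watched by their own parents:
    one per kid, regardless of how big the platform gets."""
    return num_kids
```

Doubling the user base doubles the platform's workload for every kid, while the parental workload stays the same.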

What they should be suing for is better tools for parents to monitor their kids' activity on Facebook. If you give the parents that, and if you force child accounts to have an associated parent account, then the responsibility falls on the parents, as it should.

Any other approach would be insane beyond reason.

Comment Re:How would you protect children at scale? (Score 1) 111

Let's get it clear up front, that any child exploitation is terrible. At a scale the size of Facebook, what could they really do to "protect children"? Unless you take an extreme stance of not letting anyone on the platform under 18 (or 21), I don't think it's possible.

There are two articles on the home page right now about Meta losing a court case. I think you meant to post this in the other one. This one is about social media addiction.
