Comment Re:Ribbon is less cluttered (Score 1) 235
Fuck you and that Ribbon.
I'm sorry. This is a topic that makes me nerd-rage.
LK
How in the fuck does using 15% of the screen for a ribbon provide a compact interface when the menu bar is the competition?
LK
I'm so happy to hear of how many people are expressing this same sentiment.
I absolutely abhor the Ribbon interface. I don't care what their market research shows. I don't care what their shills and evangelists say. I do not like it. It's not intuitive at all.
LK
I have hated the Ribbon interface since it became the default. I use LibreOffice specifically to avoid having to use it.
LK
California's version "adds a certification bureaucracy on top: state-approved algorithms, state-approved software control processes, state-approved printer models, quarterly list updates."
This is the most California thing I've ever read. Unconstitutional, unenforceable, and a massive increase in costs and bureaucracy; they hit the trifecta! I wonder if printer manufacturers that bake their own bread will be exempt once their checks to the governor's presidential campaign clear.
Incidentally, this is the kind of stupid shit that helps Trump and people like him get elected over and over.
That's why Amazon wanted to acquire Ring.
I have a Ring camera and I'm hesitant to install it for this reason.
LK
There's a very clear pattern when we look at who benefits in any given gold rush: while there are a few big winners that fuel the mania, the vast, vast majority are losers.
And then there's Nvidia, happily selling shovels all day.
No, of course not, because he's stuck in his dogmatic viewpoint. He doesn't actually know much about LLMs, but he's got a ton of beliefs about them. And you have a hard time changing people's beliefs.
Don't ask some LLMs how many "r"s are in strawberry.
That was definitely a problem two years ago. I did just check in ChatGPT, Claude, and Gemini and all reported 3 correctly. The problem with people throwing out these sorts of criticisms isn't that they're all wrong; it's that they're ignorant of the leaps in progress being made. These models are rapidly improving and it's getting harder to find serious gotchas with them. They're still weak in some areas (e.g., spatial reasoning), but for serious power users who know how to prompt them well? They've become insanely powerful tools.
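For what it's worth, the letter count itself is trivial to verify outside the model; a one-line check in Python settles what the models should answer:

```python
# Count how many times "r" appears in "strawberry" (st-r-awbe-rr-y).
word = "strawberry"
r_count = word.count("r")
print(r_count)  # 3
```

The classic failure mode here was tokenization, not arithmetic: models see subword tokens rather than individual letters, which is why the question tripped them up.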
Not gods; tools. But really, really strong tools for a huge variety of tasks.
I've used ChatGPT to write code and Gemini to debug it. If you pass the feedback back and forth, it takes a couple of iterations but they'll eventually agree that it's all good, and I find that's about 90-95% of the way to where I need it to be. Earlier today I took a 6kb script that had been used as something fast and dirty for years - written by someone long gone from the company - and completely revamped it into something much more powerful, robust, and polished in both its code and its output. The script grew to about 20kb, but it's 10x better and I only had to make minor tweaks. Between the two, they found all sorts of hidden bugs and problems with it.
No country can afford to take in unlimited refugees. At some point, the answer becomes another question: "How do we raise the standard of living for people in that country, because we cannot afford to take any more of them here?"
LK
The day will come that an AI will learn something that we did not deliberately teach it. When an AI is able to improve its own code, it won't be bound by the limitations of its human creator. It's only a question of when.
LK
Can a non-biological entity feel desire? Can it want to grow and become something more than what it is? I think that's a philosophical question and not a technological one.
LK
Don't agree at all and I think that's a morally dangerous approach. We're looking for a scientific definition of "desire" and "want". That's almost certainly a part of "conscious" and "self aware". Philosophy can help, but in the end, to know whether you are right or not you need the experimental results.
Experiments can be crafted in such a way as to exclude certain human beings from consciousness.
One day, it's extremely likely that a machine will say to us "I am alive. I am awake. I want..." and whether or not it's true is going to be increasingly hard to determine.
LK
Only if we define consciousness to be a state of awareness only attainable by human beings.
An LLM can't suddenly decide to do something else which isn't programmed into it.
Can we?
It's only a matter of time until an AI can learn to do something it wasn't programmed by us to do.