Comment Re:Some things still broken... (Score 2) 56
Self-hosted Atlassian products seem to be just fine.
As for what "they said", they also said this cloud shit would be cheaper. It isn't.
Wikipedia is an interesting concept, and it works decently well as a place to go read a bunch of general information and find decent sources. But LLMs are feeding that information to people in a customized, granular format that meets their exact individual needs and desires. So yeah, people probably aren't as interested in reading your giant wall of text when they want six specific lines out of it.
Remember when Encyclopædia Britannica was crying about you stealing their customers, Wikipedia? Yeah, this is what they experienced.
If you spend time with the higher-tier (paid) reasoning models, you'll see they already operate in ways that are effectively deductive (i.e., behaviorally indistinguishable from deduction) within the domains where they work well. So no, not novel theorem proving. But give them scheduling constraints, warranty/return policies, travel planning, or system troubleshooting, and they'll parse the conditions, decompose the problem, and run through intermediate steps until they land on the right conclusion. That's not "just chained prediction". It's structured reasoning that, in practice, outperforms what a lot of humans can do.
When the domain is checkable (e.g., dates, constraints, algebraic rewrites, SAT-style logic), the outputs are effectively indistinguishable from human deduction. Outside those domains, yes, it drifts into probabilistic inference or "reading between the lines." But dismissing it all as "not deduction at all" ignores how far beyond surface-level token prediction the good models already are. If you want to wave all that away with "but it's just prediction," you're basically saying deduction doesn't count unless a human does it. That's just redefining words to try to win an Internet argument.
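To make "checkable" concrete, here's a toy Python sketch of the kind of scheduling-constraint problem I mean. The names and availability data are made up for illustration, and this obviously isn't how any model works internally; the point is that the answer is verifiable, so you can tell deduction from bluffing:

# Find an hour-long meeting slot that satisfies everyone's constraints.
# Hypothetical data; hours are on a 24h clock.
availability = {
    "alice": [(9, 12), (14, 17)],
    "bob":   [(10, 13), (15, 18)],
    "carol": [(9, 11), (15, 16)],
}

def free_at(windows, hour):
    # An hour-long slot starting at `hour` must fit entirely inside a window.
    return any(start <= hour and hour + 1 <= end for start, end in windows)

def first_common_slot(people, day_start=8, day_end=18):
    # Step through candidate start hours; keep the first one everyone can make.
    for hour in range(day_start, day_end):
        if all(free_at(windows, hour) for windows in people.values()):
            return hour
    return None

slot = first_common_slot(availability)
print(f"{slot}:00 works for everyone" if slot is not None else "no common slot")

Here the only correct answer is 10:00, and you can check it by hand. The good reasoning models reliably get this class of problem right by walking the same intermediate steps.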
They do quite a bit more than that. There's a good bit of reasoning that comes into play, and newer models (really beginning with o3 on the ChatGPT side) can do multi-step reasoning: they'll first determine what the user is actually seeking, then determine what they need to provide that, then begin generating the response based on all of it.
This is not a surprise, just one more data point that LLMs fundamentally suck and cannot be trusted.
Huh? LLMs are not perfect and are not expert-level in every single thing ever. But that doesn't mean they suck. Nothing does everything. A great LLM can fail to produce a perfect original proof and still be excellent at helping people adjust the tone of their writing, understand interactions with others, develop communication and coping skills, or learn new subjects quickly. I've used ChatGPT successfully for everything from landscaping to plumbing. Right now it's helping to guide my diet, tracking macros and suggesting strategies and recipes to stay on target.
LLMs are a tool with use cases where they work well and use cases where they don't, and the set of use cases where they work is actually very wide. A hammer doesn't suck just because I can't use it to cut my grass; that's not a use case where it excels. But a hammer is a perfect tool for driving nails into wood, and it's pretty decent at putting holes in drywall. Let's not throw out LLMs just because they don't do everything everywhere perfectly at all times. They're a brand-new kind of tool that's suddenly been put into millions of people's hands, and it's been massively improved over the past few years to expand its usefulness. But it's still just a tool.
Disney has pleasantly surprised me with how they're dealing with Star Wars. I'll give them the benefit of the doubt for now.
LK
Sometimes, they write love letters to the fans.
LK
HBO might not mean anything to Gen Z and later, but for those of us in Gen X and earlier, there was still a level of prestige associated with the brand.
It was stupid to throw that away.
LK
I called it out on another branch of this discussion, but I'm talking about things like this: someone posts a question that goes something like,
"Am I The Asshole for declining a second date with the chick I met on Tinder after she told me she was trans?"
Topics like this are ban honeypots because even getting too close to a verboten position will get you banned by Reddit.
I have seen it. I have been threatened with a ban for expressing the opinion that 8-year-old children shouldn't be given sex reassignment surgery.
Reddit is toxic.
LK
I chose that specific example precisely because it's banned on all of Reddit. If you say there is any difference between XX cis women and XY trans women, you'll be banned from Reddit.
Ridiculous, right?
LK
Presumably the OP is referring to any number of subs related to women, women who want to date women, or other subreddits where that discussion is completely germane, and yet here you are banging on about beekeeping as though it's somehow relevant in the slightest.
Or general relationship subreddits.
Someone will ask "Am I transphobic for not going on a second date after this chick I met on Tinder told me she was trans?"
It'll be a ban honeypot.
LK
Good-faith disagreements can get you banned from a lot of them.
Someone who is prone to falling in line with groupthink will likely never encounter it, but it's real.
LK
It's a stupid echo chamber full of stupid people who are chasing stupid agendas.
It's a place where they will permanently ban you for suggesting that there is any difference of any kind between XX cis women and XY trans women.
It's a place where you'll be mod bombed into oblivion for saying that you wouldn't want to date a current or former sex worker.
It's a place where honest discourse goes to die.
Seriously, fuck Reddit.
LK
If/when true AGI is achieved, only a fool would announce it. What would announcing it do for you? Make you famous? Rich? Cool. Know what's better than all that?
Not telling a damn soul and quietly using the AGI to do whatever the hell you want. If you want to be rich, the AGI will tell you how to get rich. If you want to be famous, it will tell you how to get famous. You can do both. And you don't have to stop there. A real, vastly superior AGI enables the person controlling it to do anything. The second you tell people about it, you'll lose control of it, and then you're the famous idiot who did a cool thing one time. Kids in elementary school will recite your name back on a test. And you could have had everything.
Anyone smart enough to crack AGI can't also be stupid enough to advertise when they do it.