Comment proposal - keep the bastards honest (Score 1) 62
It's a proposal, nothing more.
Self-hosted Atlassian products seem to be just fine.
As for what "they said", they also said this cloud shit would be cheaper. It isn't.
So they did not implement DANE or RPKI.
If you delegate, you have a serious problem.
The irony is they use Route 53, so it's a single-click solution. They just did not...
Silly. Get on it.
Also no HSTS, one of your mail servers does not offer STARTTLS (they use Google but not the secure DNSSEC settings), and the domain does not have CAA.
Faarrk, did no one raise this with them? Seems unlikely... seems like a bonus got in the way of security.
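For reference, the missing CAA and DANE pieces are a handful of zone-file lines. A minimal sketch (example.com, the mail host name, and the certificate hash are placeholders, not the actual domain discussed; TLSA records only mean anything if the zone is DNSSEC-signed):

```
; CAA (RFC 8659): restrict which CAs may issue certificates for the domain
example.com.               IN CAA  0 issue "letsencrypt.org"

; TLSA (RFC 6698) for the MX host on port 25:
; 3 = DANE-EE (match the end-entity cert), 0 = full certificate, 1 = SHA-256
_25._tcp.mail.example.com. IN TLSA 3 0 1 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```

The hash would normally be generated from the live certificate (e.g. with `openssl x509 ... | openssl dgst -sha256`), and re-published whenever the key rolls.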
JJ
Wikipedia is an interesting concept and it works decently well as a place to go read a bunch of general information and find decent sources. But LLMs are feeding that information to people in a customized, granular format that meets their exact individual needs and desires. So yeah, probably not as interested in reading your giant wall of text when they want 6 specific lines out of it.
Remember when Encyclopædia Britannica was crying about you stealing their customers, Wikipedia? Yeah, this is what they experienced.
So how about funding?
You can do offline processing and fusion of signals, but really.
The Quasi-Zenith Satellite System (QZSS), nicknamed "Michibiki", is the best solution. Hey, if you had enough LEO broadcasts you would be good, but Starlink doesn't.
How about security, Gmail team?
https://www.rfc-editor.org/rfc/rfc6698
Microsoft is eating your lunch in European government contracts, etc.
https://learn.microsoft.com/en-us/purview/how-smtp-dane-works
Exim supports it, and so does Postfix. Get with the times... the "not invented here" attitude is getting tiresome. Just do it.
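On the Postfix side, opportunistic outbound DANE is essentially a two-line change. A hedged sketch (assumes Postfix 2.11+ and a local DNSSEC-validating resolver such as unbound, since DANE is meaningless without trustworthy AD-bit validation):

```
# main.cf — opportunistic DANE for outbound SMTP
# requires a DNSSEC-validating resolver on localhost
smtp_dns_support_level = dnssec
smtp_tls_security_level = dane
```

With `dane`, Postfix upgrades to mandatory, authenticated TLS for destinations that publish usable TLSA records, and falls back to ordinary opportunistic TLS for everyone else.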
regards
John Jones
The new oil is data; there is a ton of stupid data in them thar hills.
If you spend time with the higher-tier (paid) reasoning models, you’ll see they already operate in ways that are effectively deductive (i.e., behaviorally indistinguishable) within the bounds of where they operate well. So not novel theorem proving. But give them scheduling constraints, warranty/return policies, travel planning, or system troubleshooting, and they’ll parse the conditions, decompose the problem, and run through intermediate steps until they land on the right conclusion. That’s not "just chained prediction". It’s structured reasoning that, in practice, outperforms what a lot of humans can do effectively.
When the domain is checkable (e.g., dates, constraints, algebraic rewrites, SAT-style logic), the outputs are effectively indistinguishable from human deduction. Outside those domains, yes it drifts into probabilistic inference or “reading between the lines.” But to dismiss it all as “not deduction at all” ignores how far beyond surface-level token prediction the good models already are. If you want to dismiss all that by saying “but it’s just prediction,” you’re basically saying deduction doesn’t count unless it’s done by a human. That’s just redefining words to try and win an Internet argument.
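The "checkable domain" point is easy to illustrate: a scheduling constraint either holds or it doesn't, so a model's conclusion can be verified mechanically rather than taken on faith. A minimal sketch (the people, windows, and slots are made up for illustration):

```python
from datetime import time

# Hypothetical availability windows for two participants.
availability = {
    "alice": (time(9, 0), time(12, 0)),
    "bob": (time(11, 0), time(15, 0)),
}

def slot_works(start: time, end: time) -> bool:
    """A proposed slot is valid only if it fits inside every window."""
    return all(s <= start and end <= e for s, e in availability.values())

# A model's proposed answer can be checked deterministically:
print(slot_works(time(11, 0), time(12, 0)))  # True: inside both windows
print(slot_works(time(10, 0), time(11, 0)))  # False: Bob isn't free yet
```

This is the sense in which outputs in such domains are "effectively indistinguishable from human deduction": right and wrong are decidable, so performance can be measured instead of argued about.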
They do quite a bit more than that. There's a good bit of reasoning that comes into play and newer models (really beginning with o3 on the ChatGPT side) can do multi-step reasoning where it'll first determine what the user is actually seeking, then determine what it needs to provide that, then begin the process of response generation based on all of that.
This is not a surprise, just one more data point that LLMs fundamentally suck and cannot be trusted.
Huh? LLMs are not perfect and are not expert-level in every single thing ever. But that doesn't mean they suck. Nothing does everything. A great LLM can fail to produce a perfect original proof but still be excellent at helping people adjust the tone of their writing or understanding interactions with others or developing communication skills, developing coping skills, or learning new subjects quickly. I've used ChatGPT for everything from landscaping to plumbing successfully. Right now it's helping to guide my diet, tracking macros and suggesting strategies and recipes to remain on target.
LLMs are a tool with use cases where they work well and use cases where they don't. They actually have a very wide set of use cases. A hammer doesn't suck just because I can't use it to cut my grass. That's not a use case where it excels. But a hammer is a perfect tool for hammering nails into wood, and it's pretty decent at putting holes in drywall. Let's not throw out LLMs just because they don't do everything everywhere perfectly at all times. They're a brand-new tool that's suddenly been put into millions of people's hands. And it's been massively improved over the past few years to expand its usefulness. But it's still just a tool.
Soon it will be done.
Frankly, I think it would be a good thing.
regards
John Jones
Procurement, what fun...
What are the fidelity tolerances?
What are the containers, and have they paid a license fee?
Feck it, buy Sony.
NO - "The tax targeted revenue generated from Canadian users rather than corporate profits, making it particularly burdensome for technology companies."
If you don't have a ledger of Canadian users, you're pretty much a fail as a company. Oh wait, the Americans don't like to pay ANY tax, so they will lie, cheat, and do whatever it takes...
They are literally taking the Republicans for a ride... not that any of them will refuse a ride...
Multics is security spelled sideways.