
Comment Re:My home network is nearly pure IPv6 (Score 1) 73

To me, the hoops that smoothbrains will jump through to avoid IPv6 and stay on legacy IPv4, especially when hosting, are pathetic. NAT, port forwarding, tunnels, blah blah blah blah.

I have something like ~1.1 trillion times the number of routable addresses that the entire IPv4 space has (a /56 is 2^72 addresses, which is 2^40 times the 2^32 of all of IPv4). Not all are reachable, of course; just the services that need incoming access, and each of those sits on its own isolated DMZ.

Comment My home network is nearly pure IPv6 (Score 1) 73

Started the move about 18 months ago when I decided to get off my lazy ass. My ISP gives out a /56 prefix, so that lets me run 256 /64 subnets/VLANs in the house; currently there are ~10 in use. Everything gets a GUA through SLAAC, and I use RAs (Router Advertisements) to give ULAs to everything. Any external-facing services get their own VLAN and /64 for the system(s) as needed. The firewall blocks all incoming by default, as they usually do, and I punch a hole for the external-facing systems. They can't reach back into the network; they only answer the phone. All the systems update DNS dynamically if the prefix or full address ever changes.
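For anyone curious what that policy looks like in practice, here's a minimal nftables sketch. The interface names (wan0, lan0, vlan10) and the 2001:db8 documentation prefix are placeholders, not my actual config:

```
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;

    # return traffic for connections initiated from inside
    ct state established,related accept

    # ICMPv6 is not optional on IPv6 (neighbor discovery, PMTUD)
    meta l4proto ipv6-icmp accept

    # internal VLANs may reach the internet
    iifname "lan0" oifname "wan0" accept

    # punch a hole from the internet to the DMZ /64 only
    iifname "wan0" ip6 daddr 2001:db8:ab10::/64 tcp dport { 80, 443 } accept

    # DMZ answers the phone but can't dial into the LAN:
    # no rule allows vlan10 -> lan0, so the drop policy applies
  }
}
```

With policy drop as the baseline, "can't reach back in" falls out for free: you only write accept rules for the directions you actually want.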

I have an SSH bastion set up. In all this time there has not been a single SSH attempt from the internet. On IPv4 it was constant background noise.
For those legacy IPv4-only systems on the internet, I set up NAT64. I have an IoT VLAN and IoT 2.4 GHz wireless network that are only IPv4 because a lot of IoT network stacks are junk.
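NAT64 works by embedding the IPv4 address in the low 32 bits of an IPv6 prefix, usually the well-known 64:ff9b::/96 from RFC 6052; a DNS64 resolver synthesizes these AAAA records for IPv4-only hosts. A quick illustration of the address mapping (the addresses here are documentation examples, not my hosts):

```python
import ipaddress

# Well-known NAT64 prefix (RFC 6052); DNS64 embeds the IPv4 address
# in the low 32 bits when an IPv4-only host has no AAAA record.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def to_nat64(ipv4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the NAT64 prefix."""
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

print(to_nat64("192.0.2.1"))  # -> 64:ff9b::c000:201
```

The NAT64 gateway then translates packets sent to those synthesized addresses back into plain IPv4 on the way out.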

I'm still farting around with it, but man oh man, there's no way I'd go back to IPv4. It was one of the best moves I've done in ages.

Comment Need state approved toilet paper to wipe your own (Score 1) 123

California's version "adds a certification bureaucracy on top: state-approved algorithms, state-approved software control processes, state-approved printer models, quarterly list updates"

This is the most California thing I've ever read. Unconstitutional, unenforceable, and a massive increase in costs and bureaucracy; they hit the trifecta! I wonder if printer manufacturers that bake their own bread will be exempt once their checks to the governor's presidential campaign clear.

Incidentally, this is the kind of stupid shit that helps Trump and people like him get elected over and over.

Comment Yep (Score 1) 186

The UHF app on our Apple TVs & iOS devices, plus the UHF Server in Docker acting as a PVR, gives us everything for a few $ a month paid in crypto.
We haven't had cable since ~1999-2000. Downloading and the *arrs have kept us happy, but the better half wanted to check out some live sports. So IPTV it was.

Comment Re:Calling it a lead is very generous (Score 1) 28

I've used Claude at home for ages. Work wanted to get some AI stuff for us, and the only 'blessed' one is Copilot. Everything else is blocked. All senior management seems to know about AI is "Hurrr... Copilot and ChatGPT."

Our team of ~8 (pentesting & VA) was unanimous about Copilot being crap and Claude being the top dog. So some higher-ups OK'd a Claude Teams package for work. To bypass the CorpSec tards, we use it from our lab environment, which has its own unmonitored link and IP range.

Anthropic/Claude is just so far ahead of OpenAI/ChatGPT and MS/Copilot it's not funny.

Comment Re:LLM had a head start (Score 1) 113

Don't ask some LLMs how many "r"s are in strawberry.

That was definitely a problem two years ago. I just checked ChatGPT, Claude, and Gemini, and all three correctly reported 3. The problem with people throwing out these sorts of criticisms isn't that they're all wrong; it's that they're ignorant of the leaps in progress being made. These models are rapidly improving, and it's getting harder to find serious gotchas with them. They're still weak in some areas (e.g., spatial reasoning), but for serious power users who know how to prompt them well? They've become insanely powerful tools.

Not gods; tools. But really, really strong tools for a huge variety of tasks.

Comment Re:Too late (Score 1) 65

I've used ChatGPT to write code and Gemini to debug it. If you pass the feedback back and forth, it takes a couple of iterations, but they'll eventually agree that it's all good, and I find that's about 90-95% of the way to where I need it to be. Earlier today I took a 6 KB script that had been used as something fast and dirty for years - written by someone long gone from the company - and completely revamped it into something much more powerful, robust, and polished in both its code and its output. The script grew to about 20 KB, but it's 10x better and I only had to make minor tweaks. Between the two, they found all sorts of hidden bugs and problems with it.

Comment News Flash: Farrier Very Concerned About Automobile (Score 3, Insightful) 92

Wikipedia is an interesting concept and it works decently well as a place to go read a bunch of general information and find decent sources. But LLMs are feeding that information to people in a customized, granular format that meets their exact individual needs and desires. So yeah, probably not as interested in reading your giant wall of text when they want 6 specific lines out of it.

Remember when Encyclopædia Britannica was crying about you stealing their customers, Wikipedia? Yeah, this is what they experienced.

Comment Re:"easily deducible" (Score 1) 60

If you spend time with the higher-tier (paid) reasoning models, you’ll see they already operate in ways that are effectively deductive (i.e., behaviorally indistinguishable) within the bounds of where they operate well. So not novel theorem proving. But give them scheduling constraints, warranty/return policies, travel planning, or system troubleshooting, and they’ll parse the conditions, decompose the problem, and run through intermediate steps until they land on the right conclusion. That’s not "just chained prediction". It’s structured reasoning that, in practice, outperforms what a lot of humans can do effectively.

When the domain is checkable (e.g., dates, constraints, algebraic rewrites, SAT-style logic), the outputs are effectively indistinguishable from human deduction. Outside those domains, yes it drifts into probabilistic inference or “reading between the lines.” But to dismiss it all as “not deduction at all” ignores how far beyond surface-level token prediction the good models already are. If you want to dismiss all that by saying “but it’s just prediction,” you’re basically saying deduction doesn’t count unless it’s done by a human. That’s just redefining words to try and win an Internet argument.
