Comment AI made "archiving the Internet" silly (Score 1) 73
It's harder to archive the Internet.
So what? Nothing of value will be lost.
Consider a 90-day Windows "evaluation" in a VM.
https://www.microsoft.com/en-u...
Get the image
Do your taxes
Save the results
Delete the VM
Same thing next year!
What you're wishing for is very much what WINE actually does. WINE is an (imperfect) implementation of Windows APIs. EXE files absolutely run "natively."
Sometimes it struggles with programs because it isn't an installation of Windows.
It's cute you think they'll keep the managers to do that.
The owners will hire cheap interns with AI experience to replace them.
And yes, eventually the owners will be jobless when the whole software as a service/product model falls apart. People will just ask their phones to do a thing, no app required.
What's wrong with "inefficient, verbose code?"
First, today's AI is the worst we're ever going to have. It will only get better; at worst, you'll always have today's models, so it can't get worse.
Second, compact, efficient code is often hard to read. Not always, but a good deal of the time.
Third, I'm working in TypeScript, so "efficiency" is hardly the name of the game. We have a lot of business rules to parse and a little bit of cryptography, but nothing computationally heavy.
And back to efficiency: the AI's "inefficient" code may still beat layers of libraries, each testing edge conditions that are either redundant (because the next layer down checks the same thing) or irrelevant to your use case and inputs.
This. So much this. Thank you for posting!
There are pros and cons to not using libraries.
Pros: not going to be subject to as many supply chain attacks.
Cons: probably some new vulnerabilities unique to your code.
Mitigation: who's going to know about those vulnerabilities? Run the "find vulnerabilities" AI and have it harden itself up.
Will it be perfect? No, of course not.
Is that library you were using perfect? No, of course not.
I work on the TypeScript/npm side of the world, and what I've seen recently (the latest updates made a big difference) is not the inclusion of a lot of libraries but a rewriting of boilerplate code that was probably cribbed from all the open-source libraries the model was trained on.
I agree with you that it makes code reviews difficult, but I think we'll be relying on tools for that soon enough, too.
Every time there has been a major shift in tooling, there has been the "old guard" concerned about the loss of craft. From assemblers, to C, to 4GLs, and every layer of abstraction ever added, someone has said "won't someone think of the junior devs? How will they ever learn the skills we needed?!"
Speaking as someone who started college in the late '80s and took an assembly course or two, the kids don't need those skills anymore. At least not outside of a few specialist areas.
Why do you think code reuse is good?
Maintainability? Testability? Comprehensibility?
AI has the potential to be a paradigm shift in _avoiding_ code reuse. How many times have you come across a function with a collection of parameter flags that make it behave differently? "This is almost what I want, let me tweak it." Those are a nightmare to maintain or unwind, especially if they have a lot of callers. You try to fix one bug and regress another one. You end up adding a new flag and kicking it down the road.
AI can write what it needs for that use case in-line without impacting other use cases. Would I write that much code again? Not a chance. Is it best practices? Not by today's standards, but use case isolation has some real advantages, too.
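The flag-parameter trap described above can be sketched like this (all names are illustrative, not from any real codebase):

```typescript
// The anti-pattern: one function accumulates boolean flags for every
// caller's special case, so every callers' behavior is entangled.
function formatPrice(
  amount: number,
  opts: { withCurrency?: boolean; negativeInParens?: boolean } = {}
): string {
  const s =
    amount < 0 && opts.negativeInParens
      ? `(${Math.abs(amount).toFixed(2)})`
      : amount.toFixed(2);
  return opts.withCurrency ? `$${s}` : s;
}

// The use-case-isolated alternative: each call site gets a small function
// written for exactly what it needs, so fixing one cannot regress another.
function formatInvoiceTotal(amount: number): string {
  return `$${amount.toFixed(2)}`;
}

function formatLedgerEntry(amount: number): string {
  return amount < 0 ? `(${Math.abs(amount).toFixed(2)})` : amount.toFixed(2);
}
```

The isolated versions repeat a little code, which is exactly the trade-off being argued: duplication in exchange for use cases that can't break each other.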
What if AI coding goes the other way?
AI is good at writing tons of code. We might actually move away from layers of libraries if AI directly includes all the support functions we've been too lazy to rewrite.
AI, using its training on all those libraries, might end up in-lining only the parts of the libraries that are needed.
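A minimal sketch of that in-lining idea (`chunk` here is a hypothetical stand-in for any small utility you'd otherwise import a whole library for):

```typescript
// Instead of adding an entire utility library as a dependency for one
// helper, the assistant in-lines just the function the code actually needs.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

Ten lines you own outright, versus one more entry in the dependency tree and one more supply-chain surface.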
You literally want to train AIs to know when humans are not paying attention to what the AIs do?
Well, that's one way to go...
An adult flag doesn't work.
People who aren't adults become adults depending on their birthday, their location, and the context.
The OS can know all of these things and return a flag.
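A sketch of what that OS-level flag might look like. The age-of-majority table is illustrative only, and the "context" part (e.g. content category) is omitted:

```typescript
// Illustrative age-of-majority thresholds; a real OS would ship a
// maintained, jurisdiction-complete table.
const AGE_OF_MAJORITY: Record<string, number> = {
  US: 18,
  JP: 18,
  KR: 19,
};

// Returns the "adult" flag from a birth date and location, defaulting
// to 18 for unknown jurisdictions.
function isAdult(birthDate: Date, countryCode: string, now: Date = new Date()): boolean {
  const threshold = AGE_OF_MAJORITY[countryCode] ?? 18;
  let age = now.getFullYear() - birthDate.getFullYear();
  const hadBirthdayThisYear =
    now.getMonth() > birthDate.getMonth() ||
    (now.getMonth() === birthDate.getMonth() && now.getDate() >= birthDate.getDate());
  if (!hadBirthdayThisYear) age -= 1;
  return age >= threshold;
}
```

The point of the original comment stands: the device already knows the birthday and the location, so an app only ever needs to see the boolean.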
Oh, right, delivery robot.
It IS for a densely populated area. Well done.
Same. I've been playing for years.
Did I know exactly what they were doing with those scans? No, but I didn't figure they were doing it for nothing. That's not exactly "unknowingly."
That said, the places they encourage scanning are way too far apart to get any decent coverage. If you want to deliver something to your neighborhood church fountain, it's great. But as a delivery system anywhere outside of a dense city, I think it'll have lousy coverage. Maybe it's enough to reorient a robot?
The last thing I want in an OS is constant pressure to upload all my files to its cloud service, and attempts to turn that on by default with every patch update.
I hate going through those reboot prompts pushing various services with dark patterns. I can usually deny them, but it's exhausting. And the rest of my family doesn't get it.
I believe your data and draw a different conclusion. Suicide is the biggest fraction of US gun deaths.
"Protecting your family" means not providing access to a gun. A gun is the most effective way to turn a transient suicidal impulse into an untreatable, fatal injury.
Yes, there are many ways to die and it is hard to stop someone truly determined, but few have so little time for remorse or rescue between impulse, action, and death.
Science is to computer science as hydrodynamics is to plumbing.