Comment Re:Windows is NOT a professional operating system. (Score 1) 94
How do you know your mouse isn't faulty?
That's the inherent problem with classes, you have to teach 30+ students the same but they're not all capable of learning at the same pace or in the same way.
Kids who can't keep up fall behind, while those who are faster get bored and start to misbehave, so they get labelled as troublemakers.
You also have peer pressure from other kids, who will mock or even bully those at the top and bottom of the class, discouraging them from participating.
Catering to each child and teaching them at their own pace is obviously going to work best, but it doesn't scale to a school system.
If one or both parents are free to teach the kids, that's great, but in many cases they aren't: some parents don't give a shit and are happy to send their kids off to school, many have to work and simply don't have the time even if they'd be willing and able, and some simply don't have the ability to teach.
What AI can't do is to take a whole feature off the backlog and implement it. Yet.
It can in some cases, depending on various factors like the codebase it's working with, the nature of the feature and how well you describe it.
You will often need to refine your prompts, or prompt it further to address bugs or things it decided to implement in a strange way. It also tends to work better with codebases that are smaller or more modular, and with code that was developed with an AI assistant from the start rather than with existing codebases.
You're right about it being like a junior developer: it's good for getting mundane things done, but it often needs a lot of guidance.
A current-generation LLM is not perfect and cannot replace a skilled employee; at best it can help a skilled employee do their work more efficiently.
If you understand this and have appropriate use cases, then it can absolutely be useful.
If you're trying to use it for something it's not suited for then it's going to be useless or even detrimental.
You can replace most of your developers once you have a mature product: just shift it into maintenance mode.
Because you had a specific goal in mind, knew what you were doing, knew about the different heatmap implementations available, and gave precise instructions. You could probably have written this by hand yourself; it just would have taken a bit longer.
Problems come up when you have people who don't know what they're doing giving vague instructions to the LLM, and then blindly trusting the output. For instance, if you said "draw a heatmap of $DATA", who knows what it would have come back with? It may well have tried to use the deprecated Google API, because there are likely a lot of examples of it online and in the LLM's training data.
LLMs are great when they're used to augment people who are already skilled in the art, and can generally help them save time doing a lot of the repetitive stuff. They're not some magic wand allowing someone with zero experience to achieve great results.
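To make the contrast concrete, here is the kind of small, verifiable building block a skilled user might ask for and review: binning scattered points into the 2D count grid that sits behind most heatmap plots. This is a sketch only; the original poster's data and plotting stack aren't stated, so NumPy and the synthetic data below are assumptions.

```python
# Hypothetical example: the data-aggregation step behind a heatmap.
# NumPy and the random test data are assumptions, not from the original post.
import numpy as np

def heatmap_grid(xs, ys, bins=10):
    """Bin scattered (x, y) points into a 2D count grid.

    This grid is what a plotting library (e.g. matplotlib's imshow)
    would then render as a heatmap.
    """
    grid, _, _ = np.histogram2d(xs, ys, bins=bins)
    return grid

rng = np.random.default_rng(0)
xs, ys = rng.normal(size=500), rng.normal(size=500)
grid = heatmap_grid(xs, ys, bins=10)
print(grid.shape)       # (10, 10) count grid
print(int(grid.sum()))  # every one of the 500 points is counted once
```

Because the output is a plain array, a reviewer can check it (shape, totals, edge cases) the same way they'd check a junior developer's work, which is exactly the kind of oversight the comment argues for.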
People are often wrong too...
The problem is that we are used to machines doing things that machines are good at: for predefined math calculations, a computer is expected to reliably and quickly get the correct answer every time.
The problems being targeted by LLMs are not so well defined, so errors can be made whether the work is done by a human or an LLM. But people are used to the traditional problems solved by computers and expect everything to behave the same way.
Instead of assuming an LLM is a reliable machine that follows a rigid process and produces correct output every time, treat it like a human employee and subject its results to the same processes: review, quality control, and so on. Of course then you won't get the massive cost savings you imagined from replacing employees with machines.
Good use of LLM will typically augment existing skilled employees, not replace them.
Real intelligence also gets things wrong; people are also subject to bias, and will try to cover their ass once they realise they've fucked up.
That's why people's work gets quality-controlled and reviewed, and anything machine-generated should be subject to similar processes.
"It's my estimation that every man ever got a statue made of him was one kind of a son of a bitch or another." --Malcolm Reynolds
(Ironically applies well to Joss Whedon himself. Kind of wonder if one of the show writers was thinking about Joss when they wrote that...)
The only single-source point of failure is me.
I think I saw someone swimming in some sewage en route from scraping a bear carcass off the road, let me go check.
1. I got asked once if I played World of Warcraft, since they saw a guy with the name "thegarbz" playing. I said no. By the way, I know exactly who that person is, because he impersonated me as a joke. I found that flattering and funny, but it has no impact on my life beyond that.
Reminds me of my first email account
I don't trust single points of failure.
Yeah, this. If I have to sign up to some site that I don't care at all if it gets hacked, I use a throwaway password. Oh noez, someone might compromise my WidgetGenerator.foo.bar account and generate some widgets in my name, heavens to betsy!
This.
Credit options are usually convenient, and often have other benefits like interest free periods, cashback or airmiles etc. Lots of people use them who could easily afford to pay up front.
If you end up paying the same, but get some kickback or can defer payment until later, why wouldn't you?
A lot of the users aren't people who can't manage their money; it's the opposite: people who know how to manage things optimally.
If you steal from one author it's plagiarism; if you steal from many it's research. -- Wilson Mizner