Re:That's nice, but... (Score 2)
That relies on using exactly the same tools for the entire build chain, which might be hard or impossible to get, and which of course also need to be trusted.
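For illustration (my sketch, not part of the original comment): once you do have two independently produced builds, checking whether they are bit-for-bit identical is the easy part. The artifact paths below are hypothetical.

    import hashlib
    import sys

    def sha256_of(path: str) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        # Hypothetical artifacts from two independent builds of the same source.
        a, b = sys.argv[1], sys.argv[2]
        da, db = sha256_of(a), sha256_of(b)
        print(a, da)
        print(b, db)
        sys.exit(0 if da == db else 1)

The hard part remains everything upstream of this comparison: pinning down identical compilers, linkers, and library versions, and trusting them.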
The user's experience of the many interfaces of a system. E.g., software nowadays comes with a (remote) backend and multiple frontends (apps, APIs, administrative tools). UX usually focuses on the end-user-facing frontends, but those in turn come with different interfaces, since they have to integrate into different platforms.
Strictly speaking, yes, "user interface" could mean the sum of all those interfaces. But that's not the traditional meaning of the term.
Let me guess, you're writing these lines from the comfort of your air conditioned home office?
Give the man a break, he's had more impact than nearly everyone on this site ever will. And now he's in Russian hands, and they can easily blackmail him into anything.
And the rest of the world is either inclined to sell him out to the US, or unwilling to let him immigrate in the first place.
o_O
What you're arguing against might not be what the parent stated. Yes, proving your software works correctly is hard, very hard. It's not possible for an individual developer in a real-life situation (limited time, limited resources, no formal education in mathematics; and what are you doing writing commercial code if you're a mathematician?). That's one of the issues I believe the parent wanted to mitigate with improved development tools, which of course also need to be bug-free. At some level the building blocks do need to be proven correct, you're right there. But that should not be the feature developer's job; he should be able to rely on the tools he's given.
Yes, I agree with the parent: software development is way too hard at the moment. The analogy "It's as if engineers decided to only use modeling clay for buildings, because nobody sells steel, and it's too cumbersome to smelt their own." holds true and seems to fit with what you're saying.
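As a toy illustration of the kind of tooling that could help here (my example, not the parent's): randomized property checks catch whole classes of bugs without requiring a full mathematical proof. The buggy mid() below is deliberately planted.

    import random

    def mid(lo: int, hi: int) -> int:
        # Deliberately buggy midpoint: rounds up instead of down.
        return (lo + hi + 1) // 2

    def check_mid(trials: int = 10_000) -> None:
        """Randomized property check: far cheaper than a formal proof,
        yet it reliably flags the off-by-one above."""
        for _ in range(trials):
            lo = random.randint(-10**6, 10**6)
            hi = random.randint(lo, 10**6)
            m = mid(lo, hi)
            assert lo <= m <= hi, (lo, hi, m)
            # A floor midpoint leaves the right half equal or one larger.
            assert 0 <= (hi - m) - (m - lo) <= 1, (lo, hi, m)

    if __name__ == "__main__":
        check_mid()  # raises AssertionError on the first odd-length range

The point is not that this replaces proof, but that better tools move the burden of correctness off the individual feature developer.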
It is pretty common to blame users for system malfunctions. And often, that turns out to be correct. However, since you're here, chances are you're somehow involved in software development and its processes, and have experienced many of its failures. Here, we're talking about software people bet their lives on. I do think it's warranted to look very closely and actually rule out that the failure was a technical one, and I find it difficult to imagine an argument against that.
You make some good points. However, imagine the following not-uncommon scenario:
1. A small number of experienced developers starts a project
2. The devs choose to build their own framework for the reasons you describe
3. PM wants ever more features, the project grows, more developers join
4. All new code is built on the framework made in step 2
5. Framework is extended
6. Original devs leave
So now everyone can use the framework, but its original devs have stopped maintaining it. Everyone knows how to use the framework, but nobody understands its inner workings well enough to maintain it. That's the worst of both worlds: a custom, still-lightweight framework tailored for the job, maintained by people you can't rely on to understand and fix its bugs, on top of the initial investment of building it in the first place.
In the end, you're right: there are no clearly defined criteria for when one approach beats the other, and both have a very good chance of biting you in the ass. My perspective, however, is that of a software testing engineer (writing code to break other people's code) rather than a developer of production systems.
This is hard to prove or disprove. Sure, throwing some huge framework at a small problem is not a sensible approach; that seems like a no-brainer. But where do you draw the line? A real-life application constantly grows, and has a good chance of eventually using a growing share of the features a given framework exposes. If you know your application will not outgrow its initial specs (in an unexpected way), it might make sense to opt for a framework of your own.
My personal experience, however, is that developers opt for their own solutions too often, maybe due to some kind of not-invented-here syndrome, and duplicate a lot of work. That is bad in a lot of ways, as a well-maintained framework with multiple developers and documented bugs is always preferable to completely new code.
Usually, frameworks are not written and maintained by one person. Thus, you are free to worry about your own code and its bugs, and let others worry about the framework's bugs (within reason; for crucial bugs you will of course need to find a viable workaround). Code reuse is a good thing, and no single person is smarter than > 1 person.
Also, frameworks do not usually change completely in less than a year, and the change that does happen is gradual; as long as you keep the framework you use up to date, the learning curve is flat.
I don't know about that specific case, but from your description it doesn't apply to my argument: either the car worked and the guy was joyriding, which isn't panicking, just general douchebaggery; or the car did indeed fail, and the guy killed neither himself nor anyone else, which also isn't panicking.
Generally yes, in most cases the guy operating the "failing" machine is himself the point of failure. But that doesn't mean that's always the case.
"If I do not want others to quote me, I do not speak." -- Phil Wayne