Comment Re:Why not? (Score 1) 152
Sounds a lot like applications Microsoft creates...
I agree, but the article makes it sound like the command line is such a prevalent force that simple tasks like editing documents and browsing the web require one. This simply isn't true. I know many people who shudder in fear at the thought of having to type commands at a text console, and yet use a Linux distribution with ease and satisfaction each and every day.
What twit would attempt to edit a photo with the command line without actually having a good reason?
Try building software without a command line. Sure, you could do it... wait, no you can't, because even with GUI-based compilers the buttons are just wrappers around commands.
The fact is, certain tasks are better attacked by a GUI and others by a CLI; completely favoring one over the other is a mistake.
The summary is assuming that to use an operating system built on Linux you have to be a command line wizard. This simply isn't true. I mean, unless you are setting these people up with Gentoo or Linux From Scratch, I can't see why anyone would need to look at a command line unless they were doing something particularly technical to begin with.
My mother is a perfect example; completely inept with computers, and yet she browses the web, creates documents, edits photos, makes phone calls, installs software, watches movies, listens to music, does email, shops online, and a bunch of other stuff from a Linux box. As far as she is concerned, the thing doesn't have a command line.
Why care about the command line? Because it is a whole lot easier than getting carpal tunnel clicking fifty different things when I could just type a couple of commands and get the job done.
Just because non-technical users are afraid of a particular interface does not mean you rip it out. After all, distros like Ubuntu, Debian, Linux Mint, Fedora, RedHat and I'm sure plenty of others make it very easy for Joe User to get his computing done.
I can think of many reasons why the command line is still a very important part of any operating system. If, as a developer, you are worried that Joe User needs access to your tool, then make it easy for them, rather than having the whole system cater to the computer illiterate.
Well, that explains it. I'm running nothing less than 3.3.8.
Interesting. I wonder what conditions had to be met for a crash to happen; none of my servers had so much as a hiccup.
See, now you are just twisting the whole "buffer" and "dynamically allocated memory" thing and using whichever term suits you at the moment.
When I pointed out you were going from one to the other, it was because you questioned me, asking how I thought printf("hello world\n"); uses any dynamic memory, when I never said such a thing. I specifically said "buffer".
Not necessarily. I agree that simply guessing at the max size of a buffer is incorrect. But I think it is better to start out with a max guess, with the option to calculate and adjust as needed. On most platforms it is more expensive to actually allocate new memory than to simply allocate one big chunk at program start.
You shouldn't criticize a simple call to exit(); it is impossible to think up every single thing users will throw at your program. I think the best approach is to try to figure out the most common errors and write code that adjusts for them; then a generic handler that either calls exit() or returns and lets the calling function decide what to do based on the return value is a good catch-all. Even then you aren't completely covered.
What about race conditions? Is it really better to try to hide the fact that data was incorrectly processed, or, having realized the data is incorrect, to call exit() because it wouldn't make any sense to continue? (Though we should try to avoid race conditions in the first place!)
What if I run:
yes $VERY_LONG_STRING | your_program
Your program can't just keep calculating the length of the string forever; at some point it has to say "this is too long" and exit.
I tend to stick with systems programming, I never got into hardware that much.
I completely agree with the second and third statements of your closing. However, the first one, eh. I just don't see how functions like strncpy() are that different from how things were done before. It is perfectly possible to write code with strcpy(), for example, and not leave openings for buffer overflows. I just feel like they are changing the language for the sake of changing it. And I also feel like the more functions like strncpy() that get pushed into the standard library, the closer we are to taking away the power C gives you by doing exactly what you asked for.
I can agree with that.
And as far as I know, C is the only language that gives you low-level manipulation, lots and lots of rope, and is still pretty cross-platform.
Once again, it's only one more statement: a call to exit() or return once the if statement decides your destination buffer is too small. We have a grand total of two additional statements.
It actually predates NeXT as well. XD It was developed by the founders of a company called Stepstone, who originally licensed it to NeXT; then NeXT bought the rights to Objective-C, and then Apple bought NeXT.
Somehow you went from "buffer" to "dynamically allocated memory", which, although related, are not one and the same. I'm sure you can figure out how to look up the definitions of both and see that there is a difference.
I'll admit I misinterpreted what you wrote.
But here is the problem with strncpy() and similar functions: they only copy so many characters. So if you are in a situation where more is being written into a buffer than it can hold:
1) your string isn't actually a string because it isn't null-terminated.
2) it is actually more work for you to implement the handling of the error; I can't think of too many situations where not having the whole string is useful. By explicitly checking buffer size, I can adjust my destination buffer to include enough space for the whole string and not lose any data, OR, if it is such a hideous problem, I can simply exit() or return right then and there in fewer lines of code that are easier to read. At which point, why even bother using strncpy() if I'm already checking buffer size manually?
If it is such a problem, go use another language. One that does all the thinking for you.
Are you suggesting it is possible to create a program that doesn't involve buffers?
Even the simplest Hello World program uses buffers. Even fancy languages that have run-times and virtual machines use buffers. Buffers are an integral part of designing software because they are an integral part of how the machine works at the hardware level.