Comment: Re:C Needs Bounds Checking (Score 3, Interesting) 97

by Sanians (#47763155) Attached to: Project Zero Exploits 'Unexploitable' Glibc Bug

Your idea only works if bound sizes are defined at compile time which is hardly going to be even a majority of cases.

Use your imagination...

I was imagining a special type of pointer, but one compatible with ordinary pointers. Kind of like how C99 added the "complex" data type for complex numbers, but you can still assign to them from ordinary non-complex numbers. A future version of C could add a type of pointer that includes a limit, and a future version of malloc() could return this new type of pointer. For compatibility, the compiler could just downgrade it to an ordinary pointer any time it is assigned to one, so that old code continues to work with the new malloc() return value, and new code can continue to call old code that only accepts ordinary pointers. Of course, we won't call them "new" and "ordinary," we'll call them "safe" and "dangerous" when, after several years, we grow tired of hearing of yet another buffer overflow exploit discovered in some old code that hasn't yet been updated to use the new type of pointer.
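
You can even fake the shape of the idea in today's standard C with a "fat pointer" struct. This is just a sketch I hacked up to illustrate it (the names are made up, and real compiler support would make the checks transparent instead of explicit):

    #include <stdio.h>
    #include <stdlib.h>

    /* A "fat pointer": the pointer plus the number of elements it may reach. */
    struct bounded_ptr {
        int *ptr;
        size_t len;
    };

    /* A malloc() wrapper that hands back the bound along with the pointer. */
    static struct bounded_ptr bounded_alloc(size_t count)
    {
        struct bounded_ptr bp = { malloc(count * sizeof(int)), count };
        return bp;
    }

    /* Every access goes through a check; going out of bounds aborts
       instead of silently corrupting memory. */
    static int *bounded_at(struct bounded_ptr bp, size_t i)
    {
        if (bp.ptr == NULL || i >= bp.len) {
            fprintf(stderr, "bounds violation: index %zu, length %zu\n", i, bp.len);
            abort();
        }
        return &bp.ptr[i];
    }

    int main(void)
    {
        struct bounded_ptr a = bounded_alloc(10);

        *bounded_at(a, 5) = 42;       /* fine */

        /* "Downgrading" to an ordinary pointer for old code that wants int*: */
        int *dangerous = a.ptr;
        printf("%d\n", dangerous[5]);

        *bounded_at(a, 10) = 1;       /* index 10 of 10: aborts here */

        free(a.ptr);
        return 0;
    }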

...or I'm sure there are many other possibilities. This isn't an impossible thing to do.

Comment: C Needs Bounds Checking (Score 5, Informative) 97

by Sanians (#47762223) Attached to: Project Zero Exploits 'Unexploitable' Glibc Bug

Meanwhile, sloppy programming in any language results in unintended side effects.

Yes, but the lack of bounds checking in C is kind of crazy. The compiler is now going out of its way to delete error-checking code simply because it runs into "undefined behavior," but no matter how obvious a bounds violation is, the compiler won't even mention it. Go ahead and try it. Create an array, index it with an immediate value of negative one, and compile. It won't complain at all. ...but god-forbid you accidentally write code that depends upon signed overflow to function correctly, because that's something the compiler needs to notice and do something about, namely, it needs to remove your overflow detection code because obviously you've memorized the C standards in their entirety and you're infallible, and there's no chance whatsoever that anyone ever thought that "undefined behavior" might mean "it'll just do whatever the platform the code was compiled for happens to do" rather than "it can do anything at all, no matter how little sense it makes."
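
To make that complaint concrete, here's the classic example. Exactly what happens depends on your compiler, its version, and the flags, so treat it as illustrative:

    #include <limits.h>
    #include <stdio.h>

    /* Intended as an overflow check.  But signed overflow is undefined
       behavior, so an optimizer is allowed to assume "a + 100" never
       wraps and quietly turn this whole function into "return 1". */
    int will_not_overflow(int a)
    {
        return a + 100 > a;
    }

    int main(void)
    {
        int arr[4] = {0};

        /* An out-of-bounds write with a constant index.  Whether this
           even draws a warning depends on your compiler, version, and
           flags; it is never rejected outright. */
        arr[-1] = 5;

        /* Typically prints 1 when optimized (the "check" was deleted)
           and 0 when not, because wrapping is simply what the hardware
           happens to do. */
        printf("%d %d\n", will_not_overflow(INT_MAX), arr[0]);
        return 0;
    }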

Due to just how well GCC optimizes code, bounds checking wouldn't be a huge detriment to program execution speed. In some cases the compiler could verify at compile time that bounds violations will not occur. At other times, it could find more logical ways to check, like if there's a "for (int i = 0; i < some_variable; i++)" used to index an array, the compiler would know that simply checking "some_variable" against the bounds of the array before executing the loop is sufficient. I've looked at the code GCC generates, and optimizations like these are well within its abilities. The end result is that bounds checking wouldn't hinder execution speeds as much as everyone thinks. A compare and a conditional jump isn't a whole lot of code to begin with, and with the compiler determining that a lot of those tests aren't even necessary, it simply wouldn't be a big deal.
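
Here's the hoisting I mean, written out by hand. This isn't what any compiler actually emits, just the transformation spelled out in C:

    #include <stdlib.h>

    #define ARRAY_LEN 256
    static int array[ARRAY_LEN];

    /* Naive version: one bounds check per iteration. */
    void fill_checked(size_t some_variable)
    {
        for (size_t i = 0; i < some_variable; i++) {
            if (i >= ARRAY_LEN)          /* per-iteration check */
                abort();
            array[i] = (int)i;
        }
    }

    /* What a compiler could do instead: since i only grows from 0 toward
       some_variable, one check of some_variable against the bound covers
       every iteration, and the inner loop runs check-free. */
    void fill_hoisted(size_t some_variable)
    {
        if (some_variable > ARRAY_LEN)   /* single check before the loop */
            abort();
        for (size_t i = 0; i < some_variable; i++)
            array[i] = (int)i;
    }

A single compare before the loop covers every iteration, so the per-element cost in loops like this is essentially zero.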

...but let's assume it would be a big deal. Assume bounds checking would reduce program execution speeds by 10%. How often do you worry about network services you run being exploitable, vs. worrying that they won't execute quickly enough? Personally, I never worry about code not executing quickly enough. I might wish it were faster, but worry? Hell no. On the other hand, I don't even keep an SSH server running, despite how convenient it might be to access my computer when I am away from home, because I fear it might be exploitable. I'd prefer more secure software, and if I'm then not happy with the speed at which that software executes, I'll just get a faster computer. After all, our software is clearly slower today than it was 20 years ago: I can put DOS on my PC and run the software from that era at incredible speeds, but I don't, because I like the features I get from a modern OS, even if those features mean that my software isn't as fast as it could be. Bounds checking to prevent a frequent and often exploitable programming mistake is just another feature, and it's about time we had it.

...and like everything else the compiler does, bounds checking could always be a compile-time option. Those obsessed with speed could turn it off, but I'm pretty certain that if the option existed, anyone who even thought about turning it off would quickly decide that doing so would be stupid. Maybe for some non-networked applications that have already been well-tested with the option enabled and where execution speed is a serious factor, it might make sense to turn it off, but when it comes to network services and web browsers and the like, no sane person would ever disable the bounds checking when compiling those applications because everyone believes security is more important than speed.

Comment: Re:Backward-thinking by the DMV (Score 1) 500

by Sanians (#47761985) Attached to: California DMV Told Google Cars Still Need Steering Wheels

I was kind of thinking about this the other day when I saw the YouTube video of someone with a car with lane assist who had taped something to the steering wheel to make the car think he still had his hands on it. I wondered, what does the car do if you take your hands off of the wheel?

The first thing you might suspect is that it just turns the feature off, and the car drifts out of the lane. ...but, of course, that's dangerous, since the car appears by all means to be keeping the lane on its own, so people might just take their hands off of the wheel and not notice that the feature has turned off.

A better solution is for the car to simply keep the feature enabled, and instead slow the car to a stop. Then the car safely remains in the lane, but the car prevents the feature from being used unattended. (you know, assuming that taping something to the steering wheel isn't sufficient to fool the sensors)

I can't imagine there's much reason why the self-driving cars couldn't do the same thing. It's like how everyone knows that, when it's foggy, you shouldn't drive so fast that you can't stop within the distance you can see ahead. It's the same for the car: if it can't see a safe path from its present velocity to a complete stop, then it's not operating safely. As such, when it finds a situation it doesn't know how to navigate, it should bring the car to a stop and tell the driver to proceed manually, and because it can come to a complete stop, it isn't a big deal if the driver doesn't respond immediately; it's just an inconvenience to the traffic behind the car.

I'm sure this is why they want the steering wheel to remain. Even if the car can recognize that the road no longer matches the map it has, and even if it knows how to safely stop when this occurs, if you don't have a steering wheel, you've just got a brick on the highway that's going to ruin a lot of people's day. It would be especially bad if it created a traffic jam preventing emergency vehicles from going where they need to go. The owner of the vehicle needs some way to move it in the event this occurs, even if it is unlikely. If nothing else, sensors do fail, and when the car has brought itself to a stop using whatever limited navigation it could manage from what it last knew before the sensors died, having it sit in the middle of the highway until a tow truck arrives isn't a good idea.

That said, this could be accomplished rather easily, without a full steering wheel. Just popping a little joystick out of the dashboard that allows someone to guide the car at golf-cart speeds to either get it off the road or possibly turn it around so that it can find another path would probably be sufficient. Joysticks suck, but for such a limited use case at low speeds, I'm sure one would be fine.

Comment: Re:Backward-thinking by the DMV (Score 1) 500

by Sanians (#47761911) Attached to: California DMV Told Google Cars Still Need Steering Wheels

Driverless cars have driven thousands of miles without making a single mistake.

Maybe no mistakes leading to accidents, but I'm sure there were mistakes.

I remember seeing a video where they were showing the car on a road with bicyclists. When it approached one from behind, you could see on the computer's monitor its plan: it was going to wedge itself between the bicyclist and oncoming traffic, driving over the yellow line and forcing oncoming traffic to move over, all while passing the cyclist too closely, just to avoid having to wait until oncoming traffic was clear before passing.

If I ever saw my driverless car pulling stunts like that, I'd take the controls immediately. It isn't worth taking that sort of risk just to avoid waiting a fucking minute.

Comment: Learning new shit is a pain in the ass. (Score 1) 802

by Sanians (#47753527) Attached to: Choose Your Side On the Linux Divide

When I was younger, I loved learning how to do shit in DOS, then later in Linux. It was fun. Now it's just hell.

I suppose some of the reason I thought it was fun was because once I learned to do something, I could do it whenever I wanted. So I was picking up valuable skills. However, anymore when I go to do something, I find it's no longer done the way it used to be done. No, now there's some new and "better" way to do it that is only ten times as hard to understand. So I waste a day or two learning the new way of doing things, then decide I need to upgrade, and what do you know: The last two days were a complete waste of my life because now it's done some other way.

Needless to say, I don't want to learn anything anymore. As time goes on, the new way of doing things becomes progressively more complex than before with less explanation than before. I mean, I once tried to look into how to properly put something into the startup scripts of Linux Mint 15, and all the documentation I could find was "it's easy, it's like a shell script" even though there was clearly shit in there about dependencies that was nothing like a shell script. So just how much like a shell script was it? I didn't know, it didn't say. So what do I do about that dependency stuff? I don't know, it didn't say. So I just read the whole manual, front to back? No, I don't care to spend three days trying to figure out how to do one simple fucking thing. So I just tried something that looked about right -- after all, everyone on the internet was convinced it was so easy that I couldn't possibly need help -- and ended up with a system that would boot correctly about 50% of the time, and the other 50% of the time the GUI would start up too soon and I'd have to log out and back in again before everything would work. ...and, what do you know: If I'd bothered to learn the init system of Linux Mint 15, I would have been wasting my fucking time, as they're now going to switch to systemd.

I really need to install FreeBSD in a virtual machine and start learning to use it. Maybe do Linux how Linux people used to do Windows: Just keep it in a VM for things like watching YouTube but otherwise try to stay the hell away from it. I tried it once a year or two ago, and it was generally nice in that nothing seemed as retarded as shit tends to get in Linux. The only real problem I had was that the user base is small enough that when I couldn't figure out how to do something, not only was there no help, but it was quite likely I was the first person to ever want to do what I was trying to do. E.g., I wanted to do some MIDI stuff, but found no MIDI routing subsystem, and the applications in the ports system weren't actually capable of connecting to what MIDI support there was. On the bright side, they didn't have ALSA, and so I found fixing that to be rather easy. An OSS implementation that's actually able to allow multiple applications to play audio at once is wonderful.

Comment: Re:If by "decreeses" you mean "increases", then ye (Score 2) 300

by Sanians (#47749311) Attached to: Put A Red Cross PSA In Front Of the ISIS Beheading Video

A) Don't even pretend that your response to seeing a beheading, and just hearing about it would be anywhere near the same. As the phrase goes, a picture's worth a thousand words, and this is a video.

I'm sure you're right about that.

Some time ago this same debate came up on Slashdot about another beheading video. Someone who had seen the video replied "you don't understand, this wasn't just a simple beheading where, like a guillotine, a blade comes down and then the guy simply no longer has a head." He then went on to describe what was in the video in a fair amount of detail. I don't think he used a thousand words, maybe somewhere between 200 and 400, but he definitely painted a picture. At the end he said "sometimes you just wish you could unsee something." ...and I believed him, because after reading his description, I wished I could unread it. Just his description of the video was far worse than anything I might have imagined being in the video, so I can only guess what actually watching it might have been like.

There's a huge difference between hearing that these people have cut someone's head off and realizing just how sick someone has to be to do something like that. I mean, in the description I read of that other video, there were so many aspects to it that, if I were the one cutting someone's head off, would have triggered my "jesus fucking christ what the hell am I doing" sense and forced me to stop. ...but they didn't stop, they kept right on doing it, and knowing that forces you to realize just how completely fucking insane these people are. Even in war, people take issue with killing the enemy when they find them defenseless, preferring to take them prisoner instead, and in that case you're talking about someone who was quite likely actively trying to kill you not too long ago. It's a whole different thing to pick a particularly painful method of execution for someone you know is innocent and then not even think twice about it as the reality of actually doing it gives you so many cues that it's just so wrong.

However, in this particular video, from what I hear, some people suspect the beheading occurred before the video was recorded, and the video just fakes it. Just judging from what I've heard people say is in this video vs. what I heard was in the other, I have to agree with that hypothesis, as it sounds like one of those quick and clean executions people tend to expect, like what they might see in a movie. Of course, if I've heard wrong, then that's more proof that just reading about it isn't the same thing as seeing it. Indeed, other than the one person's description of that older video that I read, I don't think anyone talked about what happened in the video any more than to say "cut his head off" and so I wouldn't be that surprised to hear that this video is actually much worse than what I've read about it so far.

Comment: Re:The show is filled with mostly nonsense (Score 2) 359

by Sanians (#47737171) Attached to: "MythBusters" Drops Kari Byron, Grant Imahara, Tory Belleci

I stopped watching when I saw an episode where they were challenging the assertion that, given a vehicle moving at 30 mph, with a rear-facing air cannon that would shoot a tennis ball at 30 mph, the ball, when fired from the moving vehicle, would simply drop.

Really? I mean, I'm not going to challenge your assertion that the show has gotten pretty bad lately, as it's certainly gotten bad since season two began, but I wouldn't criticize them for testing something everyone thinks they know just because it is actually true.

One of the most interesting episodes I saw was when they were testing something Jamie said in an earlier episode: that if two trucks collide at 55 MPH, it's like one truck hitting a brick wall at 110 MPH. At first I thought "duh, everyone knows that's true," and I continued to think that as they set up experiments, right until they were about to let two clay blocks swing into each other, at which point a light bulb lit up above my head. I quickly hit the pause button and thought about what was going to happen, and realized that since each block of clay was simply going to stop the movement of the other, each was going to end up in the same condition it would have been in had it simply slammed into the "immovable object" instead, and thus two vehicles each going 55 MPH in a head-on collision is exactly like one vehicle hitting a brick wall in a 55 MPH collision. ...and I suppose it's solvable with math too, given e = m * v^2 / 2: if two objects each slowing down by one unit of speed yield two units of energy, or one unit per object, then one object slowing down by two units of speed yields four units of energy, which is four times as much per object, even though the difference in speeds is identical in each case. ...but I was certainly misinformed about how it worked, and I don't think I was the only one, so it was totally worth doing an episode on; indeed it was one of my favorites since I actually learned something.
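
Spelling that arithmetic out with actual numbers (m is the mass of one truck; round figures, ignoring where the energy actually goes):

    kinetic energy:  e = m * v^2 / 2
    two trucks at  55 MPH:  2 * (m * 55^2 / 2)  = 3025 * m   (1512.5 * m per truck)
    one truck  at 110 MPH:       m * 110^2 / 2  = 6050 * m

So the single 110 MPH truck carries four times the energy of one 55 MPH truck, but in the head-on crash each truck still only absorbs its own 1512.5 * m, which is exactly what it would absorb hitting the wall at 55 MPH.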

Who knows, maybe the tennis ball episode was someone else's favorite, as it showed them something they either didn't know, or just hadn't really ever thought about.

What annoys me is when they start testing movie myths that I'm pretty sure no one would believe anyway, or when they perform experiments in stupid ways, or omit basic information to try to make it seem like the outcome isn't as predictable as it is. I don't mind that they do the experiments, I just hate that they play dumb about the outcome rather than look for some way to inject some intelligence into the experiment despite the predictable outcome.

Comment: Re:Ob XKCD... (Score 1) 359

by Sanians (#47736841) Attached to: "MythBusters" Drops Kari Byron, Grant Imahara, Tory Belleci

The problem is, when people look at what they do as actually being science, they end up thinking you can confirm a scientific theory with a single experiment run with 20 minutes of work.

Reminds me of the myth going around last winter about the government spraying some chemical into the air which was creating unmeltable snow. All kinds of YouTube videos of people holding up a cigarette lighter to snow, and it "not melting" and "turning black as the chemicals burn." Yes, it was bullshit, but people could go outside and grab their own snow and do the experiment, and when they got the same results, they were all like "OMG, it's true!"

My sister had enough sense to realize it was bullshit and brought it to my attention for an explanation. The snow does melt, it just doesn't melt quickly, because melting isn't as simple as "raise it from 31 degrees to 33 degrees": crossing the freezing point takes the latent heat of fusion, and the amount of heat required to do that is the same amount of heat required to warm the resulting water from 33 degrees to 177 degrees, so it isn't going to happen as instantly as people expect. As for the black, that's from the chemicals in the lighter flame, as the cold and the moisture cause incomplete combustion, depositing carbon onto the snow.
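
The numbers behind that, using round figures:

    latent heat of fusion of ice:   ~144 BTU per pound
    specific heat of liquid water:  ~1 BTU per pound per degree F
    144 / 1 = 144 degrees F of equivalent warming, i.e. 33 F + 144 F = 177 F

So melting a handful of snow over a lighter takes as much heat as warming that same water most of the way to boiling; a few seconds of flame isn't going to do it.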

And the conclusion to that thought process is looking at the weather report and dismissing global warming because it's a particularly chilly summer.

...or, as the MythBusters did, one might set up an experiment where you toss a little extra CO2 into a cubic meter of air and observe that it now captures more heat than the control, completely ignoring that the environment is a complex system with feedback loops and regulation mechanisms that make it incredibly difficult to model.

Comment: Those recaps are the worst part of the show. (Score 2) 359

by Sanians (#47736757) Attached to: "MythBusters" Drops Kari Byron, Grant Imahara, Tory Belleci

I remember years ago when I first set up MythTV, and set it to record MythBusters. It eventually recorded the episodes from the first season, when they still did several myths per show, but finished one before starting on the next one. Watching those episodes was like heaven compared to the newer format. No "this is coming up later" and "this happened earlier" segments both before and after each commercial break. You have to wonder how much interesting footage they're leaving out so that they have time for all of those recaps.

Comment: Re:so 1h every 10 day per citizen (Score 1) 336

by Sanians (#47736661) Attached to: New EU Rules Will Limit Vacuum Cleaners To 1600W

That gives about 30 vacuum-hours per year per citizen, or about 1h per 10 days (rounding in different directions).

Sounds like quite an overestimate to me, just not so far over that it is patently obvious.

Vacuuming isn't something people enjoy doing. It's a "get it done" activity. So, what, a few minutes on each floor in the house? So 15 minutes for the whole house? Then we have to figure that several people live in that house, call it five. Also, I'm not sure about every ten days. I'm sure some people vacuum that often, but others probably don't vacuum at all. (no carpet in my house; also, some people with carpet just don't care to vacuum more than once every six months) So, every three weeks maybe? So, 0.25 hours / 5 citizens / 21 days * 365 days/year = 0.9 hours/citizen/year.

I think it's a serious over-estimate, but obviously, without either of us knowing the actual numbers, neither of us really knows.

Comment: Re:My two cents (Score 1) 336

by Sanians (#47736575) Attached to: New EU Rules Will Limit Vacuum Cleaners To 1600W

Even if the initial cost goes up you can easily break even long, long before the item expires.

...and they'd do well to emphasize that, and also avoid exaggerating other claims.

Part of the problem with CFLs ten years ago was that the packaging advertised that you'd save $230/year and that they would last for a decade. Then, some of the early ones would go bad within a few months, leaving people to feel like they were a complete waste of money.

The smart thing would be to have a table on the back of the box, showing "if you use a bulb ___ hours a day" and indicating how long it takes to pay back the initial investment for the bulb, how much you'll save after that, and how long the bulb will last being used that much. I use my bulbs eight hours a day at least, which means using a 25 watt CFL vs. a 100 watt incandescent saves me $2.34/month, and so even when the bulbs were $5 apiece and dying a few months after purchase, they were still saving me money. ...but the box didn't say that, because it was focused on something ridiculous like one hour a day of use, just so that they could claim the things would last 20 years. Maybe in sunny California people only use their bulbs one hour a day, but where I live, the skies are overcast all winter long, and if you don't turn on some lights in your house during the day you're going to end up with a sleep disorder. So people buy the bulbs which claim to last at least 10 years, use them 10 times as much as the people who wrote the info on the box assumed, and when the things die in six months (still earlier than indicated even in terms of lifetime hours, because just as the manufacturer underestimated daily usage, they also overestimated how many hours the bulb would last), they assume the bulbs were a waste of money and don't buy more.
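
For what it's worth, the $2.34 figure works out if you assume an electric rate of roughly 13 cents per kWh (that rate is my assumption; plug in your own):

    (100 W - 25 W) * 8 hours/day * 30 days = 18 kWh/month
    18 kWh/month * ~$0.13/kWh ≈ $2.34/month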

Education only works if you're completely honest with people. Otherwise they detect some of your bullshit and assume everything else you said was bullshit too. Give them an expensive bulb that they're already averse to buying due to the cost, tell them it'll last 10 years, then have it last not even as long as a cheap incandescent, and they're going to instantly forget any claims you made about it also saving them money on their electric bill. You already lied to them once, so they're not going to continue to trust other claims you make.

I imagine the reason this shit always comes down to passing a new law is because people just aren't that interested in education and choice. When they try education, they're not honestly wanting to educate so that people can choose what's best for them; they're trying to force someone to make the decision they want. So they exaggerate all of the positive features and completely fail to tell people what negative features to expect. It's manipulation, perhaps manipulation that is in everyone's own best interest, but manipulation nonetheless. People don't like being manipulated, and so as soon as they catch on, they resist the "education" and go back to doing what they were doing before. So, having failed to control people with manipulation, they resort to controlling people with laws. It's the logical next step.

Comment: Re:Linux's Security (Score 1) 331

by Sanians (#47720221) Attached to: Ask Slashdot: How Dead Is Antivirus, Exactly?

Hey, you're the one who said you have no idea how to see a non-rootkit virus/trojan.

...and how does one use ps to detect malware? There's 272 processes running on my system right now. Ten years ago, when there were only 20, I knew what they all were. Now? No way in hell. So I don't even look at it anymore, and malware could just call itself "malware" in the process list and I'd never notice it.

However, even if it were still only 20 processes, here's some questions:

1. What prevents malware from choosing a legitimate-looking name? Like how in Windows there's a dozen "svchost" processes running, so malware would be smart to simply name itself "svchost," as most people are unlikely to notice that there's now one more of them than there should be. On my system, malware could hide itself pretty well just by calling itself "xterm," as there's always at least a dozen of them in there.

2. What forces traditional viruses to show up? You know, the ones that infect an ordinary program and are thereby executed every time that program is executed. Threads don't show up in a default process listing, so just infect a program and make it spawn a thread to run your malware, and now the CPU time even shows up in 'top' as being used by some legitimate application you've used for years and totally trust, even if it does occasionally do weird things like use a little more CPU time than you think it should.
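
You can see the thread-visibility point with a harmless toy program (no malware involved, just a thread that burns CPU; build with cc -pthread):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* A background thread that quietly burns CPU, the way injected code might. */
    static void *busy(void *arg)
    {
        (void)arg;
        for (;;)
            ;               /* spin forever */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, busy, NULL);

        printf("pid %d: plain 'ps' shows one entry for this process;\n"
               "you need 'ps -eLf' or 'top -H' to see the extra thread,\n"
               "and its CPU time is charged to this process either way.\n",
               (int)getpid());

        pause();            /* keep the main thread alive */
        return 0;
    }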

However, this is all moot anyway. My problem with forcing people to run applications non-root is that it only makes sense if there's some root application that is able to detect malware. When you download Linux and install it, what do you get? You get a system that will prompt you for your fucking password all the time, but otherwise not complain about a damn thing any application does. Does an application constantly use half of your internet bandwidth sending spam? Well, Linux won't tell you it's doing that. Is it indexing your files and sending them to a remote server? Linux won't tell you. Is it recording your keystrokes as you log in to your online banking web site? Linux won't tell you. ...but god-forbid you attempt to set the system time, because Linux will intervene to stop you, and insist that you authenticate yourself before you do something so bloody dangerous, because, you know, it might be malware attempting to set the system time, and we can't allow that.

It's just retarded. Linux is 100% obsessed with protecting the Linux system itself, but doesn't give a fuck about protecting the user.

So this whole thread started with me suggesting that a better solution is application sandboxing, since aside from utilities that come with the OS anyway (like file browsers, archive tools, etc.) there are very few applications that need complete access to everything the user running the application has access to.

So if you run an office application, the first time you run it, Linux asks what you expect it to do. You click "modify the occasional file I ask it to modify," and so Linux restricts its file I/O to what you give it access to via a file open/save dialogue provided by the OS, and also gives it its own little folder somewhere to store whatever data it needs to store, but doesn't grant it access to every file the user is allowed to access. It also allows it to present GUI windows and accept input from the user through them, but doesn't allow it full GUI access so that it can intercept keystrokes to other applications. If the application attempts network access, Linux tells you what it's trying to access, and you can approve or deny. If you deny, it tells the application that you did, and the application can try to make a case for why it needs that access, but you're still free to just say 'no,' and the application can simply not implement whatever feature it needs that network access for, since apparently the user isn't interested.

This is how real security works. You can download malware intentionally, run it in such sandboxing, and be in full control of what the malware does, rather than the malware being in full control of what your computer does. It's how it should be, and it pisses me off that everyone thinks that telling everyone "just don't run applications as root" and "don't run untrusted applications" and apparently now "just examine your process list now and then" is somehow good enough. It's not. People buy computers so that they can run software. Any solution that tells them not to run software is a solution that is not going to work, and I think everyone knows that, and that's why they say "well, just don't run the software as root," but they're in denial about the problem if they think that is any sort of solution.
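
The kernel already has crude primitives pointing in this direction. Here's a minimal Linux-only sketch using seccomp's strict mode, which is real but far blunter than the permission prompts I'm describing; the point is only that the OS, not the application, can be the one deciding what the application gets to do:

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/seccomp.h>

    int main(void)
    {
        /* Open the one file this program is supposed to touch... */
        int fd = open("output.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;

        /* ...then drop into seccomp strict mode.  From here on the kernel
           only allows read(), write(), _exit() and sigreturn(); trying to
           open another file, create a socket, exec something, etc. gets
           the process killed. */
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0)
            return 1;

        const char msg[] = "hello from inside the sandbox\n";
        ssize_t n = write(fd, msg, sizeof msg - 1);
        (void)n;

        /* open("/etc/passwd", O_RDONLY) here would get us SIGKILLed
           rather than a polite error.  Use the raw exit syscall, since
           it's one of the few calls still permitted. */
        syscall(SYS_exit, 0);
    }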

It's not that you're not running as root that is keeping your computer secure. It's that you're essentially not using it to its full potential. I mean, if you left it in the box, it'd be perfectly secure. Don't connect it to the internet? Still pretty secure. Connect it, but just don't ever run any software that didn't come with it? Not quite as secure, but still not too bad. Download only well-known software? Less secure, but not the worst. Download anything that claims to do anything you're interested in doing? Now you're almost doomed to get malware.

Current security advice is essentially "use your computer, but don't use it too much." It's bullshit. The purpose of computers is to run software, and our operating systems should be able to do that without it being a huge security risk. It's like running a prison. You can build individual cells, or you can house everyone in one huge room and just tell the warden "to keep your prison secure, you should keep only trusted prisoners, and avoid taking in just any random prisoner off the street." Then you build a little "root" tower in the center, and you put a good secure door on that, and indeed you manage to keep all of the prisoners out of the tower, but they're still shanking everyone in sight and climbing the prison walls to escape.

...but hey, it's all good as long as they don't get into the "root" tower, right? ...and besides, it isn't that the prison's security is poor, it's that damn warden who failed to properly screen the prisoners he accepted. So, I guess we'll just shoot all of the prisoners from the safety of the "root" tower, then pretend like the situation isn't doomed to repeat itself, because expecting the warden to screen prisoners and determine which will and won't be a problem before they're even in the prison is a perfectly acceptable thing to expect. After all, he only bought the prison to keep prisoners, so of course he expects to be able to do so, and telling him not to is essentially telling him not to use his prison for the only thing it is good for, because it quite frankly isn't engineered well enough to do it. It's just insecure to expect to be able to keep dangerous prisoners, and besides, everyone knows that as long as the "root" tower is protected, the prison is perfectly secure.

Comment: Re:Linux's Security (Score 1) 331

by Sanians (#47707875) Attached to: Ask Slashdot: How Dead Is Antivirus, Exactly?

With a little care they can then use your system for months and you won't be the wiser.

...and they can do that without root, because frankly, there's nothing to hide from. How am I going to know there's malware on my Linux system?

While you're at learning, google chkrootkit.

I've heard of it. ...but it would seem to presume the machine has been rooted, in which case, like you said, stuff can hide itself if it's root. (I also remember it being rather useless for the average user due to too many false positives, but that's beside the point.)

Where's the virus scanner that every Linux user runs, which runs as root and detects the stuff that can't hide itself because the user didn't execute it as root?

People tell you 'keep studying' because there are obvious gaping holes in your basic knowledge that no post on a forum will even make a dent in it.

In my experience, it's usually people just repeating junk they've heard but don't really understand, assuming that they know something when they don't.

Perhaps you should consider the possibility that you can be wrong and if every single expert is going in the opposite direction, you should at least look around to make damned sure they don't know something very important that you don't.

What, just because something is a popular meme means that it is good security advice? I suppose kids drown if they go swimming after eating too. I mean, if everyone says it, it must be true, right?

Comment: Re:Linux's Security (Score 1) 331

by Sanians (#47692581) Attached to: Ask Slashdot: How Dead Is Antivirus, Exactly?

So you run binaries of unknown quality and source as root and wonder what went wrong?

I actually run everything as root. First thing I do with every Linux install is configure automatic logins, then log out, delete my home directory, symlink it to /root/, and change my username's user ID to 0. Tricks most of the software that pointlessly refuses to run as root into thinking that I'm not.

Never had anything go wrong. However, if I had, I don't see how not running as root would have made a damn bit of difference. So, what, the malware wouldn't be able to affect the system? Fuck the system, I can reinstall it. What I care about are all of my personal files which are 100% accessible to my user account.

...and what's more, everything malware cares about is accessible from my user account. It wants to send spam? My user account has network access. It wants to participate in a DDOS? My user account has network access. It wants to scan my personal files for sensitive information? My user account has access to my personal files. It wants to act as a keylogger to capture my banking password? My user account has the necessary access to do that. What exactly is malware missing out on by not being run as root?

So I always run as root. That way I don't have to play "simon says" with the command line, where I type "do something" and it replies "you didn't say 'sudo'" and so I type "sudo do something" and it finally does it. It's a pointless game as it doesn't protect me from anything, especially with the default settings where, after I type in my password, any sudo executions for the next five minutes get a free pass. Seems like any malware could just keep trying to run sudo until it works, assuming it had any reason whatsoever to give a fuck about the root account.

If you don't think running as root makes any difference, keep studying.

I'm really beginning to notice a trend with people who can't back up what they're saying simply telling me that I need to learn more.

I've made some arguments that support my belief that whether you run as root is irrelevant. Can you make some arguments that support your belief that it matters?
