
Comment: Re:The commits are funny into themselves. (Score 1) 311

by arth1 (#46800879) Attached to: OpenSSL Cleanup: Hundreds of Commits In a Week

If decrementing and comparing to 0 is faster, then a modern optimizing compiler will do that automatically even if you use for(i = 0; i < 8; i++) instead of the other way around.

Do you have one example of a compiler that actually does that?

gcc, for example, might unroll the loop, but it won't rewrite the loop to use the cheaper comparison. A quick test with icc (an older version, I admit) doesn't do it either.
Which compilers are you referring to that do this?
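
For anyone who wants to check for themselves, here's a minimal test case (function names made up) to compile with something like "gcc -O2 -S" and inspect the assembly of:

    /* Two equivalent loops; compare the generated assembly to see
       whether the compiler rewrites the counting-up version into a
       decrement-and-compare-to-zero loop. With a small constant
       bound like 8, gcc will often just unroll instead. */
    int sum_up(const int *a)
    {
        int s = 0;
        for (int i = 0; i < 8; i++)
            s += a[i];
        return s;
    }

    int sum_down(const int *a)
    {
        int s = 0;
        for (int i = 8; i-- > 0; )
            s += a[i];
        return s;
    }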

Comment: Re:Since when is every search engine Google? (Score 1) 144

by arth1 (#46800651) Attached to: New 'Google' For the Dark Web Makes Buying Dope and Guns Easy

Are you sure about that? I thought Kermit and ZModem were unrelated evolutions, more in parallel than Kermit being a predecessor (or successor) of ZModem. It becomes pretty obvious when you look at features: Kermit and ZModem send filenames to the other end, while XModem and YModem do not. XModem does show its age, since unlike the others it doesn't have any sort of error detection.

They are unrelated - X/Y/Zmodem share a heritage, but Kermit is a separate lineage. However, it seems obvious that X/Y/Zmodem was an attempt to provide the file transfer capabilities of Kermit while being simpler to both install and use. BBSes embraced it, and X/Y/Zmodem had its days of glory. Nowadays, Kermit has overtaken ZModem and YModem-g, so things have possibly come full circle.

Comment: Re:I would think (Score 1) 311

by arth1 (#46800455) Attached to: OpenSSL Cleanup: Hundreds of Commits In a Week

Also... OpenVMS has not been updated in almost 4 years. If you have native servers on these machines exposed to the internet, you get what you deserve regardless of the version of OpenSSL you're running. Tell you what - you maintain the OpenVMS patches and I'm sure no one will stop you. Otherwise, stop complaining about it.

They don't need to be on the internet - they may be running back-end or internal systems. But if front-end systems or internal PC or midrange systems communicate with them using OpenSSL, the versions have to be compatible. You can't just upgrade one end of the connection, at least not without extra testing.
Or it might be a system that has to be accessible and visible to just a small part of the internal userbase, and needs to be protected from internal hacking.
So legacy systems sometimes need software upgrades too.

Comment: Re:The commits are funny into themselves. (Score 1) 311

by arth1 (#46799799) Attached to: OpenSSL Cleanup: Hundreds of Commits In a Week

Comparing to zero is faster in most architectures and still is a valid optimization.

Indeed, and you might want to take it even one step further, and test for --i == 0.
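
In C, that idiom looks something like this (just a sketch; note that it assumes the count is greater than zero on entry):

    /* Countdown loop: the decrement sets the CPU flags, so the loop
       branch can test them directly, without a separate compare
       against an upper bound. Assumes n > 0 on entry. */
    void double_all(int *a, unsigned n)
    {
        unsigned i = n;
        do {
            a[i - 1] *= 2;
        } while (--i != 0);
    }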

There's also the fact that there are plenty of older-architecture CPUs out there, still being deployed today, especially in the embedded world where product lifecycles are really long and switching to a new architecture can mean dozens of man-years of work.
Do you want your water company and cable provider to install new meters every two years to keep up with the latest technology? Guess who would pay for that!
In critical infrastructure it becomes even more important to support old hardware. Would you feel safer in a plane running hardware and software that has shown itself to work, or one with bleeding edge computers that crash as often as a typical desktop?

Optimization isn't bad. But you have to know what you're doing, and why. High level developers relying on magic abstraction layers need not apply. Their strengths lie elsewhere.

Comment: Re:I would think (Score 1) 311

by arth1 (#46799629) Attached to: OpenSSL Cleanup: Hundreds of Commits In a Week

not necessarily - when I saw a commit that said "removed use after free" (i.e. still using a structure after it had been freed), you've got to think the code is just generally sloppy.

Not necessarily - if they used their own allocation routines (which it appears they did), those could have an API allowing use after free until a new allocation occurs. If so, the bug would be replacing the memory allocation routines without also rewriting the parts that depended on the old behavior.
And before someone goes on a rant about how that's a brain-dead thing to do: it's something that pretty much every compiler does with the stack. The stack pointer isn't going to change until you change it, so if you use a private stack for memory allocation, this is perfectly fine. It's a different API from what's common, but different doesn't mean wrong. It just means that those who use it have to understand it and not make erroneous assumptions.
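
To illustrate, here's a sketch of what such an API could look like (purely illustrative; this is not OpenSSL's actual allocator):

    /* Sketch of a bump allocator over a private arena. "Freeing"
       just rewinds the high-water mark; the bytes are untouched, so
       a freed block stays readable until the next allocation reuses
       the region - much like data left below the stack pointer. */
    #include <stddef.h>

    static unsigned char arena[65536];
    static size_t top;   /* current high-water mark */

    void *arena_alloc(size_t n)
    {
        if (n > sizeof arena - top)
            return NULL;              /* arena exhausted */
        void *p = &arena[top];
        top += n;
        return p;
    }

    void arena_free(size_t n)
    {
        /* Rewind the most recent n bytes. Contents are left intact,
           so callers that (carefully) rely on the old data can still
           read it until arena_alloc() hands the region out again. */
        top -= n;
    }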

Comment: Re:I would think (Score 4, Insightful) 311

by arth1 (#46799539) Attached to: OpenSSL Cleanup: Hundreds of Commits In a Week

Yup. I can't believe that there were such dodgy trade-offs made for SPEED (at the expense of code readability and complexity) in openSSL.

At least a couple of reasons:
- First of all, OpenSSL was designed with much slower hardware in mind. MUCH slower. And much of it is still in use - embedded devices that last for 15+ years, for example.
- Then there's the problem that while you can dedicate your PC to SSL, the other end seldom can. A single server may serve hundreds or thousands of requests, and doesn't have enough CPUs to dedicate one to each client. Being frugal with resources is very important when it comes to client/server communications, both on the network side and the server side.

Certain critical functionality should be written highly optimized, in low level languages, with out-of-the-box solutions for cutting Gordian knots and reducing delays.
A problem arises when you get code contributors who think high level and write low level, like in this case. Keeping unerring mental track of what's data, what's a pointer, a pointer to a pointer, or a pointer to an array element isn't just a good idea in C - it's a must.
But doing it correctly does pay off. The oft-repeated mantra that high level language compilers do a better job than humans isn't true, and doesn't become true through repetition. A compiler can do no better than the people who programmed it, and for a finite size compiler the optimizations are generic, not specific. A good low level programmer can apply knowledge that the compiler doesn't have.
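
One example of such knowledge (my illustration, not from the OpenSSL code): if the programmer knows two buffers can never overlap, C99's restrict passes that fact along, where the compiler would otherwise have to generate conservative code:

    #include <stddef.h>

    /* Without restrict, the compiler must assume dst and src may
       alias and reload memory on every iteration. The programmer's
       knowledge that they never overlap enables vectorization and
       wider loads/stores. */
    void add_arrays(float *restrict dst, const float *restrict src, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] += src[i];
    }
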
The downside is a higher risk - the programmer has to be truly good, and understand the complete impact of any code change. And the APIs have to be written in stone, so an optimization doesn't break something when an API changes.

Comment: Re:Having to know the URL, what security! (Score 1) 144

by arth1 (#46799409) Attached to: New 'Google' For the Dark Web Makes Buying Dope and Guns Easy

DNS queries? Why didn't you simply search by IP address, which is what DNS queries resolve to?...

Because when shared web hosting ("web hotels") arrived around the turn of the millennium, web servers commonly started serving several hosts from a single IP, and the Host header in the request would determine which site was served.
Scanning IPs would then likely only get you the hosting provider.
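
For example, two requests sent to the very same IP address can return completely different sites depending on the Host header (hostnames hypothetical):

    GET / HTTP/1.1
    Host: shop.example.com

    GET / HTTP/1.1
    Host: blog.example.com

The server never sees which name you resolved - only what you claim in the Host line - so scanning by IP alone misses all the named sites.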

Comment: Re:Since when is every search engine Google? (Score 5, Informative) 144

by arth1 (#46797553) Attached to: New 'Google' For the Dark Web Makes Buying Dope and Guns Easy

zmodem was several generations newer.
kermit -> xmodem -> ymodem -> zmodem

I still use uucp, by the way. For communicating with faraway sites where the connection depends on a shaky cell phone connection that may or may not be up, it's a pretty good way of moving e-mail and logs.

Comment: Re:Good. (Score 2) 144

by arth1 (#46797477) Attached to: New 'Google' For the Dark Web Makes Buying Dope and Guns Easy

Now the FBI and the Sheriff would be able to set up stings more efficiently.

FBI and the Sheriff? You have no real insight into how law enforcement works here in the US of A, do you?

There are dozens(!) of different police forces, and they seldom cooperate on anything, but try not to step on each others' toes. A sheriff is county police and would not be involved in any international or interstate crime sting. Speeding tickets, serving divorce notices, arresting the busker in front of the strip mall, signing reports of stolen items, sitting in cars at local road work - that's the sheriff's department. Investigative work to catch internet-facilitated high crime is not going to involve the sheriff.

Crime

New 'Google' For the Dark Web Makes Buying Dope and Guns Easy 144

Posted by timothy
from the and-you'd-trust-this-because dept.
First time accepted submitter turkeydance (1266624) writes "The dark web just got a little less dark with the launch of a new search engine that lets you easily find illicit drugs and other contraband online. Grams, which launched last week and is patterned after Google, is accessible only through the Tor anonymizing browser (the address for Grams is: grams7enufi7jmdl.onion) but fills a niche for anyone seeking quick access to sites selling drugs, guns, stolen credit card numbers, counterfeit cash and fake IDs — sites that previously only could be found by users who knew the exact URL for the site."

Comment: Re:Metaphor (Score 1) 234

by arth1 (#46793043) Attached to: Bug Bounties Don't Help If Bugs Never Run Out

While you are technically correct, the reality is that the most serious security vulnerabilities are almost all directly related to buffer overruns (on read or write), allowing an attacker to read or write arbitrary memory. Everything else is a second-class citizen by comparison;

In my fairly long experience, there are ten vulnerabilities introduced at the design stage for every vulnerability caused by bad coding. Buffer overflows might be one of the more common coding errors, but certainly not the main cause of vulnerabilities.

Comment: Re:When did slashdot become a blog for Bennett? (Score 1) 234

by arth1 (#46793027) Attached to: Bug Bounties Don't Help If Bugs Never Run Out

Okay, I'm obviously missing some important details not being a security expert. Clear a couple things up for me.
1. Do security researchers spend their efforts actively searching for one particular bug using one particular method, or do they try a lot of different things and expect to find a lot of different bugs of varying levels of importance?
2. Do companies looking at their own code for bugs only concern themselves with bugs that would be worth selling on the black market, or is every bug a concern for them?
3. Bit of an opinion question, how much would you consider spending to find a bug to sell for $100k considering the potential failure of the endeavor?
4. Do you think bug bounties are the primary motivation for white hats to research bugs, and if not what effect do they have?

I don't think Mr. Haselton is qualified to answer these.

1: A little of both. I can only speak for myself, but I tend to look at a particular piece of hardware or software, and poke at it until I find something interesting. Now, "interesting" doesn't have to mean a vulnerability, but it engages the brain. Could there be an exploit in here? And if not, could there be an exploit in other products that use a fairly similar design for something?
I may start looking at product A, and find X interesting, but end up finding a defect Y in product B.

2: Both. You sell not only a product, but a perception that you care about your customers. Besides, most companies have people in decision-making positions who wouldn't be able to make an educated decision on what type of bug it is, and underlings whose opinions are tainted because they have a real need to cover their own ass. And the companies certainly won't take the word of a hacker as to what the impact is, so they'll usually err on the side of caution, i.e. treat it seriously.
Note that treating it seriously might mean it will take quite a long time to fix, because taking code seriously also means extensive testing to ensure fixes don't break anything else. A company with a very fast turnaround on security fixes is one that I wouldn't trust much - it's a prime candidate when looking for more problems.

3: You start with a premise that the hunt is to get a reward. I believe that's almost always a false premise.

4: No, I think the primary motivation is curiosity. Unless that's your primary driver, you will likely not be good at it.
A bounty might make a hacker go to the company after they've discovered the bug, instead of just sitting on it.
Which I think is what mostly happens. You know about a security flaw, but don't want to go to the company, given the high risk of being sued in best shoot-the-messenger style. And you don't want to turn blackhat either, whether for criminals or governments. But, I repeat myself. And if you're not a kid looking for notoriety, chances are you won't tell anyone.
I am quite convinced there are thousands of unreported vulnerabilities. Bounties might help with that.
