Submission + - Scary: Detecting or preventing abusive devices (antipope.org)

Keybounce writes: This isn't "new", but it's getting scarier.

Computers that can run a wifi stack are small. Tiny. And they keep getting cheaper, with ever-shrinking power requirements.

This blog post indicates that kettles can — and *DO* — contain computers that want to infect your home network.

With a little thought, there is no clear end in sight. We know that batteries are fairly big compared to the rest of the computer, and there's no reason to think that the inside of an "AA" battery couldn't be a smaller power cell plus a computer.

And it's not just wireless. Heck, any USB device -- and this is old news by now -- is given "free" power, enough to run a wifi radio. A USB device can do all sorts of things by pretending to be something else; now consider what happens with a USB device that doesn't lie about what it does, but simply sends information off elsewhere. That USB memory stick you found doesn't have to attack your computer; it just sends copies of whatever you put on it to someone else over any open wifi it finds -- say, on your trip to the coffee house.

And where does it end? Right now we have smart inventory-control tags; in the future, those could be full computers, either data gatherers by design or outright compromised.

How can this be detected?
How can this be stopped?

As far as I can tell, there's no good way to detect this; any "security" has to start with "don't plug anything into your computer" (apparently, not even a cable is safe); and the only hope of "stopping" it would be to get the entire US government's court and law-enforcement system involved -- as in, make this sort of thing illegal.

After all, illegal activities by corporate businesses for private gain always generate appropriate penalties, fines, and jail time for the people involved, right?

So what can end users do? Anything? Nothing?

Comment Peer Review (Score 1) 253

This is the sort of study that demands peer review.

It is far beyond me to understand the details of this study and its claims. But it is absolutely fascinating, if true, even taking the male-only Y-chromosome and female-only mitochondrial inheritance factors into account.

When I see things like

We used coalescent simulations ... The best-fitting models in Africa and Europe are very different. In Africa ... numbers expanded approximately 50-fold. In Europe ... as soon as the major R1b lineage entered Europe ... expanded more than a thousandfold.

then I know enough to know that the assumptions used matter, but also that I don't know enough to evaluate those assumptions.

Comment Re:Government Involvement (Score 1) 499

Actually, the 13th amendment specifically permits slavery.

Section 1. Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.

Someone who has been found guilty, and sentenced to jail, can be forced to work for the government (involuntary servitude) or lose other rights besides mobility and/or liberty (slavery).

If the "3/5th" rule were enforced, then states that take the view "you are our slave" would lose representation, and have a reason to NOT make them slaves.

Comment Re:Pathetic (Score 1) 403

Canned tuna. About 50 to 60 cents per can; about 1.5 meals per can, so roughly 33 to 40 cents a meal. Eat two-thirds of it now and keep the rest for a snack in two hours. Combine with a banana.

"Meal bars" -- some are basically soy protein plus vitamins and carbs. For about $1.25, you get a complete, but small, meal.

What do both of these approaches lack? Fat. For some reason, our society thinks "fat is bad", when fat is not only necessary; some researchers say it's actually a better fuel than sugar for 75% of the population.

Comment Re:Oh my god (Score 1) 403

And how many of those are filled either by fresh-out-of-school, too-young-to-know-their-value graduates, or by people imported on "high-tech visas"?

"We can't fill this job, so there must not be a supply in the US, so we have to import" -- that's what people say to washington.

Not: "We're not willing to pay what this is worth, and actually train people who are skilled in how to use our system, so give us cheap people from overseas instead".

Comment Re:Would probably be found (Score 1) 576

> Nothing, and I mean nothing gets into the kernel without highly skilled devs reviewing it first. Sure, they could make a mistake, but saying that it might happen because nobody is really looking is ridiculous.

The old random number generator, which I believe affected every distribution of Linux.

The bugged cryptography library / key generator that shipped for over a year, which I believe affected one distribution.

There are plenty of ways that a given section of code can be understood by only a few people. Why constant X and not Y? Why is the elliptic-curve generation done this way and not that way? Why insert a shift left by one bit?

Heck, a more down-to-earth issue: how long was it before NTFS was understood well enough to be written to in every case, given some strange features that had to be black-box reverse-engineered before they were understood -- and are you sure there is 100% compatibility today?

That's just the areas that I know about; I'm sure other people have other issues that they keep aware of.

===

A much higher-level question: why is any program allowed to call gethostbyname(), fill in a struct sockaddr, or decide to open a connection to machine X on its own, without having to go through a system policy?

That's not a silly question. Yes, I know the history -- those calls had to live in user code back when networking was changing six times a year. But that hasn't been true for at least a decade, if not more -- and there is nothing you can do to ensure that 100% of all traffic goes out through Tor, is there?

I'm not calling struct sockaddr a back door; I'm calling it a security design flaw. I'm calling the whole "no program can write to the disk without OS control, but any program can write to any place on the network without any control" a security flaw. Heck, you could argue that being able to determine your real IP address is a flaw -- even if a spy had to send it out over Tor, that spy could still reveal who you were.

[FYI, the alternative would be to eliminate the distinction between a socket descriptor and a file descriptor, and have network endpoints created by open("/dev/net/hostname:port", O_RDWR) or something similar.]
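For illustration, here's a rough user-space sketch of what that choke point could look like. net_open() and policy_allows() are hypothetical names, not a real kernel API; the point is simply that every outbound connection funnels through one policy check:

    /* Hypothetical sketch: a single choke point for outbound connections,
     * in the spirit of open("/dev/net/host:port", O_RDWR).  net_open()
     * and policy_allows() are illustrative, not a real kernel interface. */
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    /* The system policy would live here: one place to audit, log, or
     * force traffic through a proxy.  This stub allows everything. */
    static int policy_allows(const char *host, const char *port)
    {
        (void)host;
        (void)port;
        return 1;
    }

    /* Returns an ordinary file descriptor, usable with read()/write()/close(). */
    int net_open(const char *host, const char *port)
    {
        struct addrinfo hints, *res, *rp;
        int fd = -1;

        if (!policy_allows(host, port))
            return -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;

        for (rp = res; rp != NULL; rp = rp->ai_next) {
            fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0)
                break;
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;
    }

If something like this (or the /dev/net path) were the only way out of the box, policy_allows() would be the one place where "100% of traffic goes through Tor" could actually be enforced.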

Comment Make it easier to report bugs, that's how (Score 1) 205

Very often, I have absolutely minor bugs and problems that show up briefly, in passing. Not only is it hard to spend the time to replicate them and figure out exactly how/why for someone who wants a "detailed report with reproducible steps" (ick!), I'm in the middle of ... you know, _DOING_ something. So I have to go back later?

Sigh.

What have I found to be the best way to demonstrate bugs? A full screen recording of what I'm doing. I'm not being silly here.

I'm in a beta test (NDA) for a new version of a program. When I'm playing with the beta software, I turn on the video recorder, and record EVERYTHING I'm doing. That company has actually thanked me for this approach.

While using OS X, I find plenty of little things, several times a day; by the time I've realized it, the moment is past and done, and I can't give you details on how/why. Do I report it as best I can? I used to. I gave up as "pointless/why bother".

I've seen people in this thread saying "log everything, let the user hit 'bug!', and all details are sent to you". I've seen people say "record everything, snapshot everything", etc. And ... that's a good first step.

But is it enough? If I have a cursor that is supposed to change as I move around, and it only sometimes changes, sometimes does not, how do I log that? How do I report it? Reproducibility is pathetically poor.

How do you record and log your interaction with something else? In theory, the same events and responses that your program handles (or should handle) are exactly what you'd be recording, right? So if the recording/logging works, your program does. Right?

If I have a problem with a USB drive and a loose connector that leads to system disasters, and the view is "won't fix the underlying cause, because a disconnected drive has to be treated as completely dirty for the worst possible case", even when most such cases can be proven to be nowhere near that worst case, then what?

If I have a problem where a system program reaches "kill -9 does not work" about once a week, then what?

If I report a bug that is closed as "duplicate of bug #y", but I cannot follow bug #y and get no status updates, no feedback, etc., then why would I bother?

Make it easy for users to submit bugs.
Make it easy for users to tell you what happened.

Never put up a panel with text that cannot be copy/pasted.
Never put up a panel that won't fit on small screens if a lot of text is displayed.

If you are getting bad reports from people, start asking, not "what caused this bug?", but "What would help you provide better information next time?".

And make it so easy for people to report bugs that they can do it one-handed.
You will get bad reports. Don't try to prevent those.

Comment HTTP, in its simplest form, is simple text (Score 1) 566

With all this discussion about different kinds of detail and complexity, and the need for massive coding to properly handle all forms of HTTP headers, I'm reminded of a discussion I had once with a back-end designer.

He had built a system designed to work over HTTP. When I asked why, mentioning all the overhead and everything, he had a real simple response.

Essentially: the application wasn't a full-powered web server. In its simplest form, an HTTP request is "open a socket, write a string, read a string, close the socket". By designing it to look like HTTP and work over port 80, it fits into modern corporate systems, gets past most firewalls, etc.

The result? Sure, you could send crazy HTTP thingies to this server; it would just return an error. But it worked for the expected use case -- send data to the server, get a response back. Plain text. Easy to work with.
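As a rough sketch (not his actual code), that whole exchange fits in a few dozen lines of C -- example.com and the request line are placeholders:

    /* A minimal plain-text HTTP exchange: open a socket, write a string,
     * read a string, close the socket.  example.com is a placeholder. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct addrinfo hints, *res;
        char buf[4096];
        ssize_t n;
        int fd;

        /* Open a socket... */
        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("example.com", "80", &hints, &res) != 0)
            return 1;
        fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0)
            return 1;
        freeaddrinfo(res);

        /* ...write a string... */
        const char *req = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
        write(fd, req, strlen(req));

        /* ...read a string... */
        while ((n = read(fd, buf, sizeof buf - 1)) > 0) {
            buf[n] = '\0';
            fputs(buf, stdout);
        }

        /* ...close the socket. */
        close(fd);
        return 0;
    }

Everything on the wire is printable text; you can debug the whole thing by hand with telnet.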

What happens to such a system once everything becomes binary? No longer so simple.

Does this help the typical user? If the concern is the per-file TCP overhead/startup, there is already a way to reuse a single TCP connection for multiple file transfers. If the concern is the size of the headers, there is a much better way to ... you know, remove all the junk, cookies, and extra headers ... than just shrinking text labels down (which is about all you can do if you want to keep the data content the same).

If the concern is "perceived browser speed", well, let me remind you of Larry Wall's "rn" versus the previously dominant newsreader -- and how "rn" would parse and display on the fly, instead of having to read the entire thing in before displaying. Or compare Safari to Firefox -- again, Firefox starts displaying before Safari does, so I can start reading the page sooner. Or compare ....

Do not assume that "perceived slowness" is caused by a bad protocol.
It may be caused by a bad user agent.

If the page has to be fully loaded before being displayed, that's one thing.
If you can load the entire base text of a page, put it up, and then go back and start loading the images, or sounds, or flash thingies, that's another.
If you don't like all the screen jumping, well, guess what? You can open a bunch of separate TCP/HTTP connections, read just enough from each to determine size, and then stop reading -- let the socket data stop coming -- and put up the text of the page with placeholders for what you need. Then, once the base information is up, go back and read the rest without the "jumping" and resizing.
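As a sketch of that "read just enough, then stop" trick, here's how a user agent might pull only the dimensions out of a PNG response. png_dimensions() is a name I made up, and it assumes the HTTP response headers have already been consumed off the socket:

    /* Sketch: read just enough of a PNG off a connected socket to learn
     * its size, then stop.  Assumes the HTTP headers have already been
     * consumed, so the PNG body starts at the current read position. */
    #include <stdint.h>
    #include <unistd.h>

    /* Width and height sit at fixed offsets: an 8-byte PNG signature,
     * a 4-byte chunk length, 4 bytes of "IHDR", then width and height,
     * each 4 bytes big-endian -- 24 bytes total. */
    int png_dimensions(int fd, uint32_t *w, uint32_t *h)
    {
        unsigned char hdr[24];
        size_t got = 0;

        while (got < sizeof hdr) {
            ssize_t n = read(fd, hdr + got, sizeof hdr - got);
            if (n <= 0)
                return -1;          /* EOF or error before 24 bytes */
            got += (size_t)n;
        }
        *w = (uint32_t)hdr[16] << 24 | (uint32_t)hdr[17] << 16
           | (uint32_t)hdr[18] << 8  | (uint32_t)hdr[19];
        *h = (uint32_t)hdr[20] << 24 | (uint32_t)hdr[21] << 16
           | (uint32_t)hdr[22] << 8  | (uint32_t)hdr[23];
        return 0;                   /* and simply stop reading here */
    }

Leaving the rest of the body unread is the "let the data stop coming" part: the receive buffer fills and TCP flow control quietly backs the sender off.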

Does any of this require binary?

The goals of HTTP/2.0:

1. Substantially and measurably improve end-user perceived latency in most cases, over HTTP/1.1 using TCP.

That's a user agent behavior problem.

3. Not require multiple connections to a server to enable parallelism, thus improving its use of TCP, especially regarding congestion control.

That requires a way to say "here is control data" and "here is content data". Sure, that would help a great deal: as a user agent, while reading the text blob for a web page, I may see that I need to fetch an image just to determine its size. Today I have to either wait for the page text to finish, or open a new TCP channel to get the image. What can avoid this? If I am already sending ACKs and NACKs back to the server during my read, I can also send "put that on hold -- now give me Y instead". So this is a "win" for this behavior: I can reuse the same TCP channel, get the beginning of another data stream, determine its size, etc., and then get back to the first file. So clearly, this goal is a good goal, right? There are no flaws in the goal of using a single TCP channel to send multiple data streams, right?

Right?

No one could possibly see any flaws with transmission speed by saying that the sender is going to stop sending, and come to a complete halt until the receiver has synchronized, right?

It's a tradeoff. The old way meant filling up the TCP transmission window with data that had been sent but not yet acknowledged; that doesn't hurt total transmission time (better throughput), but the channel you actually want is slower, because it competes with the unwanted stuff. The new way has worse throughput (the pipeline has to stop and drain repeatedly), but the stuff you want doesn't compete with the stuff you don't want.
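To make the tradeoff concrete: multiplexing needs framing, meaning every chunk on the single TCP connection is tagged with which stream it belongs to and how long it is, and the receiver demultiplexes. The layout below is purely illustrative -- it is not the actual HTTP/2.0 wire format:

    /* Illustrative framing for multiplexed streams on one TCP connection.
     * Not the real HTTP/2.0 frame layout -- just the general shape. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct frame_header {
        uint32_t stream_id;   /* which request/response this chunk belongs to */
        uint32_t length;      /* payload bytes that follow the header */
    };

    /* Consume one frame from a buffer of received bytes.  Returns the
     * number of bytes consumed, or 0 if a full frame hasn't arrived yet. */
    size_t demux_frame(const unsigned char *buf, size_t avail)
    {
        struct frame_header fh;

        if (avail < sizeof fh)
            return 0;
        memcpy(&fh, buf, sizeof fh);   /* host byte order, for brevity */
        if (avail < sizeof fh + fh.length)
            return 0;                  /* payload still in flight */

        /* A real agent would append the payload to the per-stream
         * buffer for fh.stream_id; here we just report it. */
        printf("stream %u: %u payload bytes\n",
               (unsigned)fh.stream_id, (unsigned)fh.length);
        return sizeof fh + fh.length;
    }

And the stall lives right here: a frame for the stream you want can be queued in the pipe behind frames for streams you don't.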

What's the real problem? The TCP protocol lacks key, needed features, including any actual end-application control over how data is transmitted. TCP is a bad protocol today -- consider how long ago it was designed, and the network pipes it was designed for. The better solution would be to make a better TCP.

Instead, the approach being taken is: "Make everything that uses TCP re-implement better stuff on top of it".

4. Retain the semantics of HTTP/1.1, leveraging existing documentation (see above), including (but not limited to) HTTP methods, status codes, URIs, and where appropriate, header fields.

... Retain the semantics, while introducing new features? Gee, how about backwards compatibility?

5. Clearly define how HTTP/2.0 interacts with HTTP/1.x, especially in intermediaries (both 2->1 and 1->2).

... Err, you want a 1.1 and a 2.0 to talk to each other, with different framing systems, different abilities expected by the 2.0 side, and still somehow maintain the semantics???

... Yeah, let me know how that AI project works out for you.

Comment YouTube doesn't have a download button?!? (Score 1) 381

> You shall not download any Content unless you see a “download” or similar link displayed by YouTube

But ... I do see a "download" button on every YouTube page; I think it's right next to the like and dislike buttons.

And I agree: absolutely standard. Oh, and you left out the standard, built-in Firefox sync. Or that I'm using 17ESR so that I have the same profiles even on my 10.5 PPC Macs, which don't officially support it but still work (TenFourFox).

I've had that download button from YouTube for years. Is that EULA implying that they'll remove it?

Comment Re:Never trust an "app" to do anything. (Score 1) 85

> If you wanted actual security, you'd use a real program to do it instead of an app.

If you wanted actual security, you wouldn't have it on a computer.

If you wanted actual security, you wouldn't send it to someone else's computer.

If you wanted actual security, you would ensure that no other computer could access the files on your computer.
