Comment: Re:not the point (Score 1) 121

by Todd Knarr (#48925283) Attached to: Why Screen Lockers On X11 Cannot Be Secure

You download a program that appears legit (and may be mostly legit, or be a hacked version of a legit program), and are running it.

But why would I do that? Almost all the programs I use come from the repository, and to get me to download one they'd have to compromise the repository first (which is possible, but not nearly as easy as just advertising a program for download). The rest are again ones I download from known sources, usually the developers' own official site, and again it's not trivial to compromise those sites.

The situation you propose only happens in the world of Windows where downloading random software from untrusted/unknown sources is routine. And if you're routinely doing that, you've got more problems than just a way to bypass the screen lock. The best way to avoid shooting yourself in the foot is to not blithely follow instructions but to stop and ask "Wait a minute, why are they asking me to aim a loaded gun at my foot and pull the trigger?". And if after pondering that question you still think following the instructions is a good idea, please report to HR for reassignment as reactor shielding.

Comment: Re:He's Not Justifying Retribution (Score 1) 893

by Todd Knarr (#48823983) Attached to: Pope Francis: There Are Limits To Freedom of Expression

Sure, if someone curses his mother, they shouldn't be surprised if he slugs them. However, note that if the police get involved it would be the Pope going to jail and being charged with battery, not the person who cursed his mother. You may be expected to have enough self-control not to curse like that, but you're also expected to have enough self-control not to respond to ordinary words with physical violence.

Comment: Both are correct (Score 1) 249

by Todd Knarr (#48794037) Attached to: Education Debate: Which Is More Important - Grit, Or Intelligence?

The way "intelligence" is used here falls more under the heading of what I'd call "the skills you have". Some are innate physical abilities; many are probably learned, but we don't really know when or how, so they end up just being things that naturally come easy to you. They're the hand you're dealt. Grit and persistence are then useful in making the most of those skills, practicing and refining them to get the most out of that hand. Both are needed.

We all know people who just don't get math, or have bad hand-eye coordination, or other weaknesses that pretty much preclude them from being theoretical physicists or world-class tennis players no matter how much they work at it. All the grit in the world won't help much if you're focusing on something you're just bad at. We also all know people who are very good at something and have the potential to be very successful in some fields, except that they won't put in any effort they don't absolutely have to, and so they never become successful. All the potential skill in the world won't magically make you good if you don't apply yourself. The key, of course, is to apply grit and persistence to the things you're good at and the things you absolutely need, rather than to the things you're bad at.

Comment: Re:HTTP/1.1 is just fine (Score 1) 161

by Todd Knarr (#48782905) Attached to: HTTP/2 - the IETF Is Phoning It In

It's not just a matter of decoding the packets. The big problem is usually separating out the packets for one specific client's connections while ignoring the packets for all the other clients, and then assembling those packets into a coherent order so you can see individual requests and responses rather than just packets. That's fairly easy to do at each endpoint, and much harder when you're just sniffing traffic in the middle. And of course the code to decode packets and assemble them into a transaction is more complex than the code to append to a string and write that string to a file. Not everything has that kind of logging already built in, and when I need to add it I'm usually pressed for time because it's a critical problem. tcpdump or wireshark will work, given enough effort, but I've too often seen them produce valid-looking but deceptive results: the filtering, selection, and reporting were correct enough to look reasonable but not quite completely correct, so the output didn't exactly match reality. Debugging dumps finally revealed the discrepancy, and we got the problems solved.
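The reassembly problem can be sketched in a few lines of Python. This is a toy model, not a real TCP stack, and the addresses and payloads are invented: packets from many clients arrive interleaved and out of order, so you have to filter by connection and order by sequence number before the requests become readable.

```python
# Toy model of mid-path stream reassembly: captured packets from several
# clients arrive interleaved and out of order; to read one client's
# requests you filter by source, sort by sequence number, and drop
# retransmitted duplicates. Real TCP reassembly is far hairier.

def reassemble(packets, client):
    """packets: list of (src, seq, payload) tuples; client: source address."""
    ours = [(seq, data) for src, seq, data in packets if src == client]
    stream, seen = b"", set()
    for seq, data in sorted(ours):
        if seq not in seen:          # skip retransmitted duplicates
            seen.add(seq)
            stream += data
    return stream

captured = [
    ("10.0.0.7", 2, b"T / HTTP/1.1\r\n"),
    ("10.0.0.9", 1, b"POST /other HTTP/1.1\r\n"),  # a different client
    ("10.0.0.7", 1, b"GE"),
    ("10.0.0.7", 2, b"T / HTTP/1.1\r\n"),          # retransmission
]

print(reassemble(captured, "10.0.0.7"))  # b'GET / HTTP/1.1\r\n'
```

At an endpoint none of this is needed, because the socket layer hands you the stream already filtered and ordered; that's exactly the asymmetry the comment describes.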

Comment: Re:HTTP/1.1 is just fine (Score 1) 161

by Todd Knarr (#48776637) Attached to: HTTP/2 - the IETF Is Phoning It In

Most of the bandwidth for modern web sites goes to content, not to the HTTP headers, and that's even with content compression, which is already part of HTTP/1.1. Reducing overhead by making the headers binary isn't going to reduce bandwidth requirements enough to notice, and it comes at the cost of no longer being able to use very simple tools for diagnosis and debugging. I've lost count of the number of times I was able to use telnet or openssl plus copy-and-paste to show exactly what the problem with a server response was, and to demonstrate conclusively that we hadn't malformed the request or botched parsing the response. If we'd had to use special tools to encode and decode things, the vendor would have questioned whether our tools were working right, and I'd have had to figure out how to prove the tools weren't misbehaving; telnet and openssl were widely enough used that that was never a problem.
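The debuggability point is easy to demonstrate. An HTTP/1.1 request is plain ASCII you can paste into telnet or `openssl s_client` and read by eye, while every HTTP/2 frame begins with a 9-octet binary header (RFC 7540). A minimal sketch, with a made-up host name:

```python
# An HTTP/1.1 request is plain text: every byte is printable ASCII plus
# CR/LF, so you can paste it into telnet or `openssl s_client` and read
# the exchange by eye. The host name below is hypothetical.
import struct

request = (
    "GET /status HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode("ascii")

# By contrast, every HTTP/2 frame starts with a 9-octet binary header:
# 24-bit length, 8-bit type, 8-bit flags, 31-bit stream identifier.
h2_header = struct.pack(">BHBBI", 0, 16, 0x1, 0x5, 1)  # a HEADERS frame

def eyeball_friendly(data):
    """True if every byte is printable ASCII or CR/LF."""
    return all(32 <= b < 127 or b in (10, 13) for b in data)

print(eyeball_friendly(request))    # True: readable in a terminal
print(eyeball_friendly(h2_header))  # False: needs a decoder
```

The binary frame isn't wrong, it's just opaque: the moment a decoder sits between you and the wire, the decoder itself becomes something you may have to prove correct.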

Comment: Re:HTTP/1.1 is just fine (Score 1) 161

by Todd Knarr (#48776461) Attached to: HTTP/2 - the IETF Is Phoning It In

Because none of that requires a new protocol? You can do all of it in HTTP/1.0; it's entirely a matter of client programming. And yes, a protocol analyzer can decode a binary protocol for you, but it takes a bit of work to set one up to display one and only one request stream. A text-based protocol, meanwhile, can be dumped trivially at either end just by writing the raw data to the console or a log file; decoding and formatting a binary protocol takes quite a bit more code and adds work. As for bandwidth, the HTTP headers are a trivial amount of data compared to the content on modern web sites, so gains from compressing the protocol headers will be minimal (content compression already exists in HTTP/1.1, and there's little or no improvement to be had there in the new protocol).
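The bandwidth claim is back-of-the-envelope arithmetic. The byte counts below are rough assumptions for illustration, not measurements, but they show the shape of the argument:

```python
# Rough estimate of what fraction of a page load the headers account for.
# All three numbers are assumptions for illustration, not measurements.
header_bytes_per_request = 700      # uncompressed request + response headers
requests_per_page = 50              # assets fetched for one page view
content_bytes = 1_500_000           # ~1.5 MB of already-compressed content

header_total = header_bytes_per_request * requests_per_page   # 35,000 bytes
overhead = header_total / (header_total + content_bytes)
print(f"headers are {overhead:.1%} of the transfer")          # ~2.3%

# Even if binary framing shrank the headers by 80%, the net saving
# on the whole transfer stays small:
saving = 0.8 * header_total / (header_total + content_bytes)
print(f"best-case saving: {saving:.1%}")                      # ~1.8%
```

A couple of percent isn't nothing on a huge aggregate, but at the scale of one user's page load it's well below the noise floor of network variance.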

Comment: Re:It is called good coding. (Score 4, Insightful) 189

They have. But they didn't do it overnight, they did it small bits at a time and those 40-year-old systems were patched or updated and debugged with each change. The result is a twisted nightmare of code that works but nobody really understands why and how anymore. And the documentation on the requirements changes is woefully incomplete because much of it's been lost over the years (or was never created because it was an emergency change at the last minute and everybody knew what the change was supposed to be, and afterwards there were too many new projects to allow going back and documenting things properly) or inaccurate because of changes during implementation that weren't reflected in updated documentation. As long as you just have to make minor changes to the system, you can keep maintaining the old code without too much trouble. Your programmers hate it, but they can make things work. Recreating the functionality, OTOH, is an almost impossible task due to the nigh-impossibility of writing a complete set of requirements and specifications. Usually the final fatal blow is that management doesn't grasp just how big the problem really is, they mistakenly believe all this stuff is documented clearly somewhere and it's just a matter of implementing it.

Comment: Not a good comparison (Score 4, Informative) 437

by Todd Knarr (#48763231) Attached to: Is Kitkat Killing Lollipop Uptake?

I don't think the comparison holds up well, because in the case of XP users had control of the upgrade while in the case of phones it's usually the handset maker and to a lesser extent the carrier in charge. Adoption of Lollipop is mainly a function of how many handset models ship with it installed and how quickly people are upgrading to newer models of phones. Most of the flagship models are shipping with some flavor of 4.2 or 4.4 on them, and enough people seem to have bought those models in the last year that it'll probably be summer at the earliest before we see the next cycle of upgrades start in earnest. The only way we'll see Lollipop uptake pick up faster than that is if Google manages to convince the handset makers to roll 5.0 out to phones like the Galaxy S4. It'd also help if carriers stopped insisting on different "models" where the difference is strictly in branding and the actual phone hardware is identical.

Comment: Re:DNS blocking failure (Score 1) 437

by Todd Knarr (#48728219) Attached to: Netflix Cracks Down On VPN and Proxy "Pirates"

Harder and "tech savvy"? Hardly. If you're running a router based on DD-WRT (which is basically any home WiFi router these days), it already includes PPTP and OpenVPN servers. Doesn't take much on Windows to create a little script that'll do a one-click push of the necessary files to configure and enable the server and set up the firewall to allow VPN traffic to go to the WAN side as well as the LAN. Worst case is you go to your local geek and have them flash stock DD-WRT onto the router to replace the factory-modified installation (which I'd recommend anyway, the stock images are more stable and less prone to wonkiness).

Comment: DNS blocking failure (Score 1) 437

by Todd Knarr (#48727807) Attached to: Netflix Cracks Down On VPN and Proxy "Pirates"

Apparently the media companies haven't heard of this new-fangled device called a "router". It comes with this exotic, difficult-to-use feature called a "firewall". And it ensures that regardless of what DNS servers the application may try to use, it will use my DNS server while on my network. Problem solved.
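As a sketch, the usual way to do this on a Linux-based router is a NAT rule that rewrites any outbound DNS query to point at the local resolver. The interface name and address below are placeholders, not a working config for any particular firmware:

```shell
# Redirect all DNS traffic leaving the LAN to the router's own resolver,
# no matter which server the client application asked for. "br0" and
# "192.168.1.1" are placeholders for your LAN bridge and local DNS server.
iptables -t nat -A PREROUTING -i br0 -p udp --dport 53 \
         -j DNAT --to-destination 192.168.1.1
iptables -t nat -A PREROUTING -i br0 -p tcp --dport 53 \
         -j DNAT --to-destination 192.168.1.1
```

Clients still believe they're talking to whatever resolver they configured; the rewrite happens transparently at the router.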

As for VPNs, it's difficult to block router-based VPN tunnels since there's no trace on the device that a VPN's in use. All it takes is a suitable server to connect to, and I've got a selection available that aren't part of any VPN service since I set them up myself. Setting it up the first time's a bit tricky, but duplicating that first setup and changing a few address numbers to match the new system's pretty simple.

The media companies need to just grow up and accept that the world's moved on with or without them, and that their problems stem not from any overwhelming desire of consumers to pirate content but from their own adamant refusal to accept consumers' money for that content.

Comment: Why is this an issue? (Score 1) 325

It's already assumed on desktops and laptops: saying a machine has a 500GB hard drive means it has a 500GB hard drive, not 500GB of free space after Windows and all the other software is installed. Saying it has 8GB of RAM means 8GB of RAM, not 8GB free after device drivers, services, Windows, and run-on-startup programs have loaded. So why should 16GB of storage on a phone or tablet not mean 16GB of storage? Why is it supposed to mean 16GB free after the operating system and software are installed? It may simply be that phones and tablets have so much less storage than desktops that people are more sensitive to how much the pre-loaded software uses. The solution, though, is simply to buy either a model with enough storage or one with an SD card slot so you can add more.

Comment: Solutions exist (Score 1) 312

by Todd Knarr (#48706957) Attached to: Ask Slashdot: What Should We Do About the DDoS Problem?
  1. Ingress/egress filtering near the edges. Backbone providers obviously can't feasibly do this, but edge networks like consumer ISPs have a solid knowledge of what netblocks are downstream of each subscriber port and what netblocks should be originating traffic on their networks. Traffic coming up from each subscriber should be blocked if it doesn't have a source address in a block owned by that subscriber, outgoing traffic through the upstream ports should be blocked if it doesn't have a source address of a netblock that belongs on or downstream of the network, and incoming traffic through the upstream ports should be blocked if it doesn't have a destination address that belongs on or downstream of the network.
  2. Disconnection of infected systems. If a subscriber system is confirmed to be originating malicious traffic due to a malware infection, shut off the subscriber's connection until they contact the ISP and clean up the infection. Time and again it's been demonstrated that people who get repeatedly infected won't do anything as long as their connection appears to still work, and that the only thing that gets their attention is connectivity going out. Get their attention and make it clear to them that letting this continue is just not acceptable.
  3. Extend this as far into the Internet as is feasible. Even if you have so much interchange traffic that you can't filter all ports, you may also have some ports where there's a manageable number of known netblocks handled through them and you can do filtering on those ports to reinforce the filtering that should be happening on the connected network.
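A sketch of the per-port source check in item 1, using Python's ipaddress module. The port names and netblock assignments are invented for illustration; a real implementation lives in router ACLs, not application code:

```python
# Sketch of ingress filtering at subscriber ports: drop any packet whose
# source address is not inside a netblock assigned to that subscriber.
# The port names and netblocks below are invented for illustration.
import ipaddress

subscriber_blocks = {
    "port-17": [ipaddress.ip_network("203.0.113.0/28")],
    "port-18": [ipaddress.ip_network("198.51.100.64/27")],
}

def permit(port, src):
    """True if src falls inside a netblock assigned to this subscriber port."""
    addr = ipaddress.ip_address(src)
    return any(addr in block for block in subscriber_blocks.get(port, []))

print(permit("port-17", "203.0.113.5"))    # legitimate source address
print(permit("port-17", "198.51.100.70"))  # spoofed: belongs to port-18
```

The same containment test, run in the outbound direction against the union of all downstream blocks, gives you the egress half of the filtering.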

Comment: Simple: the consequences if they don't (Score 5, Insightful) 290

by Todd Knarr (#48703505) Attached to: War Tech the US, Russia, China and India All Want: Hypersonic Weapons

Yes, it can lead to an arms race. The problem is that if you hold off and your enemy doesn't, you're a sitting duck. Avoiding the arms race is only possible if everybody involved holds off, and you don't/can't trust any of them to hold off so you have to proceed as if you're already involved in an arms race whether you want to be or not. Because the only thing worse than being in a Mexican standoff is being the one guy in a Mexican standoff without any guns.

Comment: It mostly won't change anything (Score 1) 50

by Todd Knarr (#48700223) Attached to: What's the Future of Corporate IT and ITSM? (Video)

With the consumerization of IT continuing to drive employee expectations of corporate IT, how will this potentially disrupt the way companies deliver IT?

It won't. Corporate IT and how it operates is driven by the people who sign the checks, and that, BTW, is not the employees. The people who do sign them have considerations other than employee expectations in mind when they set policy, and some of those, like compliance with laws and regulations, aren't optional. Corporate IT will, as always, continue to be bound by what upper management decides, and the rest of the company will have to live with those decisions. And no, IT isn't any happier about this than the rest of the company; frankly, their job would be a lot easier if upper management would stop telling them how to do things and just let them do whatever's needed to deliver what upper management actually wants. I don't see that happening any time soon.
