
Comment: Re:Is that proven? (Score 1) 399

by thogard (#49557199) Attached to: Debian 8 Jessie Released

Lots of useful things can happen even if most file systems don't mount.

I have systems in data centers halfway around the world. I want sshd to wake up as soon as the networking is up. Once the whole thing is up and stable, the initial sshd gets killed off and the normal production one is started. The sshd started early uses no shared libraries and uses a config that lets root log in. That way, if the machine is screwed up, I can still get in without depending on the lights-out management card or some other virtual console hack.
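
A minimal sketch of how such an early rescue sshd could be wired up under systemd (the unit name, paths and hand-over details below are my own illustrative choices, not the original setup):

    # /etc/systemd/system/rescue-sshd.service  (hypothetical unit)
    [Unit]
    Description=Early rescue sshd, started as soon as networking is up
    DefaultDependencies=no
    After=network.target

    [Service]
    # A statically linked sshd with its own minimal config that permits
    # root login, so it works even when shared libraries are broken.
    ExecStart=/sbin/sshd.static -D -f /etc/ssh/sshd_rescue_config

    [Install]
    WantedBy=sysinit.target

Giving the production ssh.service a Conflicts=rescue-sshd.service line would make systemd stop the rescue instance when the real daemon starts, matching the hand-over described above.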

Remember that on very large systems there are always errors on some disk, and some systems are large enough that their mean time between failures is effectively always now. That doesn't mean those systems aren't still useful in production.

Comment: Re: Figures (Score 2) 366

by thogard (#49540877) Attached to: iTunes Stops Working For Windows XP Users

I find it odd that there isn't a well-known man-in-the-middle SSL-to-TLS 1.2 proxy for XP that can fake things well enough to work for most programs.
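
The core of such a proxy is small. A toy sketch in Python of one half of the idea: accept a plain connection locally (where the legacy client is pointed) and re-originate it over TLS 1.2. The host and port values are illustrative only, and a real shim would also have to terminate the client's legacy SSL:

    import socket, ssl, threading

    UPSTREAM_HOST = "example.com"   # hypothetical target service
    UPSTREAM_PORT = 443
    LISTEN_PORT = 8080              # where the XP-era client is pointed

    def pump(src, dst):
        # Shovel bytes one way until the source closes.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    def handle(client):
        ctx = ssl.create_default_context()
        ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # force modern TLS upstream
        upstream = ctx.wrap_socket(
            socket.create_connection((UPSTREAM_HOST, UPSTREAM_PORT)),
            server_hostname=UPSTREAM_HOST)
        threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
        pump(client, upstream)
        client.close()
        upstream.close()

    srv = socket.socket()
    srv.bind(("127.0.0.1", LISTEN_PORT))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()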

The entire XP TCP/IP stack can be replaced and there are replacement WINSOCK versions for XP.

With the large number of programs that talk to specific hardware and simply won't run on anything newer than XP, combined with how many machines are still functional for their users, XP will be around for a very long time. Remember that Microsoft has only dropped free support for the consumer version of XP; paid support (and some free support) will go on for another 4 years.

Comment: Re:How about basic security? (Score 2) 389

by thogard (#49517613) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

Scanning IPv6 isn't as hard as you make it out to be. I look at it more like a dictionary attack than a sequential scan. The 1st 64 bits are known if you're after a specific target, and it is trivial to find out whether a given /64 is even in use. A tree of all known used /64s shouldn't take long to build.

The 64 host bits are a bit different. They could be fully random (which is rare), derived from the MAC address, or statically assigned. MAC-derived addresses mean that 40 bits of the host part are known if you know anything about the target's buying habits (i.e. they tend to buy Dell or Polycom). That leaves 16 million guesses, which can be reduced further based on the vendor or the product version you intend to exploit once you find a target.
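
To make the arithmetic concrete, here is a small Python sketch of that dictionary: with a known /64 prefix and a known vendor OUI, the fixed ff:fe filler means only the 24-bit NIC-specific tail has to be enumerated. The prefix and OUI below are illustrative stand-ins, not real targets.

    import ipaddress

    PREFIX = ipaddress.IPv6Network("2001:db8:1:2::/64")   # documentation prefix
    OUI = 0x00B0D0                                        # example vendor OUI

    def eui64_candidates(prefix, oui):
        # EUI-64: flip the universal/local bit of the OUI, insert ff:fe,
        # then try every possible 24-bit NIC-specific value.
        base = (oui ^ 0x020000) << 40 | 0xFFFE << 24
        for nic in range(2**24):                          # the ~16.7M guesses
            yield prefix.network_address + (base | nic)

    for _, addr in zip(range(3), eui64_candidates(PREFIX, OUI)):
        print(addr)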

Besides, you may not be looking for one host in 2^64: you're after a network of devices, each of which may have many addresses, and you might only need one.

The statically assigned space isn't very large either, as netadmins like using :: when they type in addresses, so those addresses are unlikely to be random. Their 1st network will be 0::something and their second is likely to be 0001::something. Oddly enough, you might find they skip ::a and use ::8, ::9, ::10, or use something that matches their existing IPv4 addressing, so things like ::192:168:1:1 are very likely.
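
A dictionary for the static case is even smaller. A sketch, again against the documentation prefix and assuming a typical RFC 1918 numbering plan:

    import ipaddress

    prefix = "2001:db8:1:2"                  # illustrative /64
    guesses = []
    for n in range(1, 101):                  # ::1 .. ::100, typed as decimal digits
        guesses.append(f"{prefix}::{n}")
    for n in (1, 2, 254):                    # echoes of an existing IPv4 plan
        guesses.append(f"{prefix}:192:168:1:{n}")

    for g in guesses[:3] + guesses[-3:]:
        print(ipaddress.IPv6Address(g))      # validates and normalises each guess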

All these things mean that Monte Carlo scans of a specific IPv6 allocation on a remote network are well within the ability of small-time hackers.

Throw in a firewall that isn't filtering IPv6 properly and that will result in remote exploits of internal devices.

Comment: https^wmetadata everywhere (Score 2) 70

by thogard (#49507617) Attached to: Chrome 43 Should Help Batten Down HTTPS Sites

The push for https everywhere also means there is more metadata floating around. If all you are looking at is the metadata and not the data stream, https gives an observer more info about what is going on than plain http does. Once you get into properly verifying certs on both sides, an observer has even more info to tie a conversation to a specific client and server.
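
For a concrete example of what an observer sees, the SNI hostname in a TLS ClientHello crosses the wire in the clear. A Python sketch that pulls it out of a captured handshake record (it assumes the whole unfragmented ClientHello is in hand; the capture itself is left to the reader):

    import struct

    def sni_from_client_hello(raw: bytes):
        if len(raw) < 44 or raw[0] != 0x16 or raw[5] != 0x01:
            return None                    # not a TLS handshake / ClientHello
        pos = 43                           # record + handshake headers, version, random
        pos += 1 + raw[pos]                # skip session id
        pos += 2 + struct.unpack_from("!H", raw, pos)[0]   # skip cipher suites
        pos += 1 + raw[pos]                # skip compression methods
        ext_end = pos + 2 + struct.unpack_from("!H", raw, pos)[0]
        pos += 2
        while pos + 4 <= ext_end:
            ext_type, ext_len = struct.unpack_from("!HH", raw, pos)
            pos += 4
            if ext_type == 0:              # server_name extension
                name_len = struct.unpack_from("!H", raw, pos + 3)[0]
                return raw[pos + 5 : pos + 5 + name_len].decode()
            pos += ext_len
        return None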

You can see this yourself by running something that does netflow and looking at the data that comes out of it.

Comment: We have been using robots on farms for years (Score 1) 124

by thogard (#49498929) Attached to: Drought and Desertification: How Robots Might Help

The best modern farm equipment can grow alternate crops in alternate rows. It can be done in a way that is a sort of mix between what had historically been done with seasonal crop rotation and planting trees as windbreaks.

The system works by using a high-precision DGPS so that the tractor wheels are in the same spot every year and the rows stay in the same places. The hills can also be mapped, so the side of a hill may be processed first or last in a season, and the amount of fertilizer or the planting depth of crops can be adjusted for optimum yield or land protection.
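
A toy sketch of the wheel-track part of that idea: given a surveyed reference line and a fixed row spacing, the steering correction is just the offset from the nearest permanent track. The numbers are illustrative, not from any real system.

    ROW_SPACING_M = 3.0              # distance between permanent wheel tracks
    ORIGIN_EASTING_M = 500_000.0     # surveyed reference line (hypothetical)

    def cross_track_error(easting_m: float) -> float:
        """Signed offset from the nearest permanent row line, in metres."""
        offset = (easting_m - ORIGIN_EASTING_M) % ROW_SPACING_M
        return offset if offset <= ROW_SPACING_M / 2 else offset - ROW_SPACING_M

    print(cross_track_error(500_004.4))   # -> ~1.4 m off the nearest track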

Many of the California farming areas were settled by people who left the mid-west dust bowl. Most of the dust bowl problems were a result of not using the best farming techniques when a drought worsened, and it took a long time to rebuild those areas. Those areas also get massive amounts of rain from time to time when hurricanes hit the Gulf of Mexico; California doesn't have that advantage.

Another odd thing: there seems to be some connection between early crop failures in the midwest that predate the dust bowl and the markets. Those crop failures started screwing with the futures markets, which some have claimed was the start of the stock market crash and the Great Depression.

Comment: Re:Should be micro kernel (Score 2) 209

by thogard (#49467681) Attached to: Linux Getting Extensive x86 Assembly Code Refresh

It was a monolithic kernel. One of the interesting features was that device drivers were modules, and there was a small device-node module that would say things like "use module 'serial driver', call it tty4 at IRQ 2 and address 0x454040". The kernel would deal with all IRQs in the hardware and then run the IRQ callback function in the proper module. That allowed user-level device drivers back in the early 1980s.
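
A rough Python sketch of that device-node scheme, with the table entry echoing the example above (the dispatcher and names are my own illustration):

    # A device node binds a driver module to a name, an IRQ and an address.
    device_nodes = [
        ("serial_driver", "tty4", 2, 0x454040),
    ]

    # Stand-in driver modules: each exposes an IRQ callback.
    drivers = {"serial_driver": lambda dev, addr: print(f"servicing {dev}@{addr:#x}")}

    # Kernel side: build an IRQ table, then fan each hardware IRQ out to
    # the callbacks of whichever modules registered for it.
    irq_table = {}
    for module, name, irq, addr in device_nodes:
        irq_table.setdefault(irq, []).append((drivers[module], name, addr))

    def handle_irq(irq: int) -> None:
        for callback, name, addr in irq_table.get(irq, []):
            callback(name, addr)

    handle_irq(2)                      # -> servicing tty4@0x454040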

Another cool feature was that each software module had a CRC so the system could detect bad binaries. There were ways to whitelist and blacklist based on CRC values.
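
A minimal sketch of that vetting step in Python; zlib's CRC-32 stands in for whatever polynomial the original system used, and the list entries are placeholders:

    import zlib

    WHITELIST = {0xDEADBEEF}           # CRCs of known-good images (placeholder)
    BLACKLIST = {0x0BADF00D}           # CRCs of known-bad images (placeholder)

    def vet_module(path: str) -> bool:
        with open(path, "rb") as f:
            crc = zlib.crc32(f.read()) & 0xFFFFFFFF
        if crc in BLACKLIST:
            return False               # known-bad binary: refuse to load
        return crc in WHITELIST        # load only known-good binaries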

Comment: Re: Too many pixels = slooooooow (Score 1) 263

by thogard (#49418785) Attached to: LG Accidentally Leaks Apple iMac 8K Is Coming Later This Year

24-bit color is a problem. Of those 16 million colors, about a quarter are greys and about half are browns. The remaining 4 million work out to slightly more than a million each of the reds, greens and blues, leaving less than a million for the rest of the spectrum. When it comes to shades of orange, you're limited to only about 60 that most people won't call brown when viewed in isolation.

The flipper displays that use 18 bits are even worse and are way too common.

Of course the real fix for this is to run HSV rather than RGB to the display and let it work out how to drive the pixels.
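
One way to sanity-check the distribution claim is to sample the 24-bit cube and bucket colors in HSV, the same view suggested for driving the display. A sketch, with my own rough grey/brown thresholds:

    import colorsys, random

    counts = {"greyish": 0, "brownish": 0, "other": 0}
    random.seed(1)
    for _ in range(100_000):
        r, g, b = (random.randrange(256) / 255 for _ in range(3))
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if s < 0.25:                       # low saturation reads as grey
            counts["greyish"] += 1
        elif 0.02 < h < 0.15 and v < 0.7:  # dark oranges read as brown
            counts["brownish"] += 1
        else:
            counts["other"] += 1
    print(counts)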

Comment: Re:Contract contingency? (Score 1) 536

What is the problem with getting it installed before he moved in? I had to pay for a termite inspection for a house I bought, since I wasn't about to trust anyone else's. If I hadn't spotted a radio tower I could link to, I would have had DSL installed in the house before I moved in. The cost of pulling out of a DSL contract is far lower than the cost of coping with a house in an area where you can't get connectivity.

Comment: Re:Postgres has referential integrity (Score 1) 320

by thogard (#49298667) Attached to: Why I Choose PostgreSQL Over MySQL/MariaDB

The OID concept does fix a common problem. Take a typical CRM database where you have a customer account and a ship-to address. At some point the ship-to address for a customer gets updated to their new office, yet someone later wants to check where an old order was shipped; the programmer didn't think of that, so reprinting the old invoice shows the new address. It is amazing how many times I've seen that type of problem cause massive data-integrity issues.
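
The usual application-level fix is to snapshot the ship-to address onto each order at order time instead of joining to the live customer record. A sketch (schema and names are illustrative):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE customer (id INTEGER PRIMARY KEY, ship_to TEXT);
        CREATE TABLE orders   (id INTEGER PRIMARY KEY,
                               customer_id INTEGER REFERENCES customer(id),
                               ship_to_snapshot TEXT);  -- frozen at order time
    """)
    db.execute("INSERT INTO customer VALUES (1, '1 Old Street')")
    db.execute("INSERT INTO orders VALUES (100, 1, '1 Old Street')")
    db.execute("UPDATE customer SET ship_to = '2 New Office' WHERE id = 1")

    # Reprinting the old invoice still shows where it actually shipped:
    print(db.execute("SELECT ship_to_snapshot FROM orders WHERE id = 100").fetchone())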
