Security in Wireless Networks

Asmodeus writes "Those boys at Cambridge have done it again. The Resurrecting Duckling (where do they get these names?) is a description of the security problems in ad-hoc wireless networks, with some nifty solutions to those problems." It's a really interesting techie bit, actually. It talks about the problems of low-power wireless boxes. It's strange to think that in the wireless world, for example, a denial-of-service attack could be anything designed to drain your battery.
  • Although Bluetooth has its place - albeit not very useful for real WLANs - it's the one that will be hacked.
    Real WLANs using Direct Sequence Spread Spectrum technology have an RF signal level typically below the ambient noise level. So first you have to find the signal. Then you need to rebuild the signal - which is built using up to 51-bit encryption. Then you need an SSID that is relevant and an IP address that hasn't been denied.
    Hacking that way is tricky.

    Trying to drain battery life is also very difficult - the only sensible way is to raise interference levels so high that nothing gets through. You've got to really try hard to do this!!

    I don't think we have much to worry about.

  • While you cannot transmit information with entangled particles, you can use them to create one-time pads with a little add-on to the method described here [] (there was an old Slashdot article []), basically by having one side transmit whether a photon was polarized along or perpendicular to a filter, but not which position the filter was in.

    Since the quantum method ensures that only the sender and recipient of a message know this one-time pad, and 'sniffing' (= measurement) of the transmitted photons leads to errors and thus notification of a compromised (quantum) line, you can then use the one-time pad to transmit your message via any regular line you choose.
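    The basis-sifting step described above can be sketched in a toy simulation (pure Python, no quantum hardware modelled; the function name is illustrative):

```python
import secrets

def bb84_sift(n_photons=256):
    """Toy BB84-style sifting: the sender encodes random bits in random
    bases; the receiver measures in random bases. Only positions where
    the bases happened to match contribute to the shared one-time pad."""
    sender_bits  = [secrets.randbelow(2) for _ in range(n_photons)]
    sender_bases = [secrets.randbelow(2) for _ in range(n_photons)]
    recv_bases   = [secrets.randbelow(2) for _ in range(n_photons)]

    # Where bases match, the receiver's measurement equals the sent bit;
    # mismatched bases would yield a random result, so those are discarded.
    pad = [b for b, sb, rb in zip(sender_bits, sender_bases, recv_bases)
           if sb == rb]
    return pad

pad = bb84_sift()
# On average roughly half the photons survive sifting.
```

    An eavesdropper measuring in the wrong basis disturbs the photons and shows up as errors when the two sides compare a sample of the pad, which is the tamper-evidence property the comment relies on.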
  • I guess a really clever wireless device
    could use the broadcast power inherent in
    a sleep deprivation attack to re-charge
    its batteries... ;-)
  • Obviously, we are talking on 2 different levels. 1. You were so busy trying to pick apart my comments, you didn't even understand that I DIDN'T say anything about a hard drive being the way we can protect our systems. I think all of this can be straightened out by your statement about not being an architecture expert. Reverse engineer a program written for the original 8088, then a Microsoft operating system. Then you will get an education on how a chip can be looped into infinite processes completely inefficiently. And I can't help but question your abilities if you rely on the CPU meter on your screen to tell you how much CPU is being used by whatever process at any time. Accuracy plays no part there. You also seem to think you can't write a program like Windows to work with a simpler instruction set for an 8088 CPU, based on memory or whatever. It comes down to how you wrote your operating system. If you want to try to rip me for not knowing what I'm talking about, at least be logical and complete in your own knowledge.
  • You imprint with a secret key in addition to a MAC or network address. And then you run a protocol like AKEP2 to authenticate and set up session keys, for instance.

    This is all pretty standard cryptography, and lightweight enough to put in a single-chip embedded system.
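    A minimal sketch of the imprinting-plus-authentication idea, using a plain HMAC challenge-response rather than the actual AKEP2 message flow (class and method names are invented for illustration):

```python
import hmac, hashlib, secrets

class Duckling:
    """Illustrative 'duckling' device: the mother injects a secret key
    on first contact (imprinting), and later proves possession of that
    key via an HMAC challenge-response."""
    def __init__(self):
        self.key = None                      # imprintable soul: empty at birth

    def imprint(self, key: bytes):
        if self.key is None:                 # only a virgin duckling imprints
            self.key = key

    def challenge(self) -> bytes:
        self.nonce = secrets.token_bytes(16)
        return self.nonce

    def verify(self, response: bytes) -> bool:
        expected = hmac.new(self.key, self.nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

# The mother imprints, then authenticates with the shared key.
key = secrets.token_bytes(32)
d = Duckling()
d.imprint(key)
nonce = d.challenge()
response = hmac.new(key, nonce, hashlib.sha256).digest()
assert d.verify(response)
```

    Everything here is symmetric-key, which is why it fits in a single-chip embedded system; session-key setup as in AKEP2 would add a second nonce and a key-derivation step on top of this exchange.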

  • Y2K is an entirely separate issue, and it didn't 'get past' people. It's not a bug, it's a tradeoff... size for a 2-digit instead of a 4-digit year, in exchange for failure in the year 2000. Nor is it universal. There are plenty of systems out there without a Y2K problem.

    However, a CPU's 'power' is the instructions it performs in a given amount of time. The only way to waste computational power is to spend some cycles not executing an instruction, or to execute unnecessary instructions. So, which are you saying is happening? And what is your technique for harnessing this 'missing' 99% of our power?

  • I think that writing the software more efficiently would probably give you between 50%-100% speedup. Software is notoriously cruddy.

    However, 100% is still only a x2, not a x100. The x100 is often quoted as the degree of idleness of a machine used for desktop WPing, but it's certainly not a typical figure.

    Poor coding exists, but it's certainly not THAT bad. Linux utilizes the CPU better than Windows, true, but its code is still a long way from optimal. The TCP stack needs work, for example - the BSD stack is certainly faster, and that's still by no means perfect.

    But poor coding isn't the only factor. Linux is designed to be multi-platform, and generic code will ALWAYS be slower than tightly-written, heavily optimised routines. That's the nature of the beast. You can't be both generic AND take advantage of every little trick a specific CPU or device may have.

  • by Christopher Bibbs ( 14 ) on Wednesday October 20, 1999 @07:58AM (#1598806) Homepage Journal

    Maybe I'm missing something, but the idea of imprinting sounds a lot like what my garage door opener does now.

    I've got one of those rolling code models [] from Sears where you have to hold the opener to the remote while pushing a button on each. The door can then be opened by the remote which itself can be programmed to handle three different openers. (Maybe you have more garages than I do. *shrug*). Seems to me that it fits the model discussed here a bit.

    Can someone let me know if I've got this or not?

  • I don't think it's true that QC can instantly crack "every cryptographic standard today". The article about TWINKLE was just a confused mass of lies and wishes. While there are some proposed quantum algorithms for factoring and discrete logs, these are mostly a threat to today's popular public-key systems. Good symmetric systems still require brute-force key searching, and QC can only turn that O(2**n) problem into an O(2**(n/2)) problem. So the easy fix is to just double your key lengths. (I know, easier said than done.)
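    The key-doubling arithmetic is simple enough to state directly (a sketch; `effective_bits` is an illustrative name for the Grover-adjusted strength):

```python
def effective_bits(n_bits: int) -> int:
    """Grover's search turns a 2**n brute-force key search into roughly
    2**(n/2) quantum steps, so a symmetric key's effective strength is
    halved against a quantum adversary."""
    return n_bits // 2

# Doubling the key length restores the original security margin:
# a 256-bit key is as strong against Grover as a 128-bit key is today.
assert effective_bits(128) == 64
assert effective_bits(256) == 128
```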
  • Read it again. The 'imprint' lasts forever, unless you push the reset button in the battery compartment or some such. We're talking about _small_ devices here, stuff for which full-fledged public-key crypto is severe overkill.

    A strong solution would be to build a Faraday cage and do the imprinting in it.
  • Uhm... no. I mean, in -principle- you're right, because of equivalence... any CPU powerful enough to be a Turing Machine can do anything any other CPU can. In theory.
    In practice... well, unlike the imaginary Turing machine, CPUs do not have an infinite amount of memory. If you take an ordinary 8088 and its motherboard, you just can't -have- more than 720K of memory. It's not possible.
    Now you -could- put some memory sockets on an ISA card and write a protocol to utilize that memory, etc, but a complete 8088 machine is limited by memory.
    With those memory limitations, it is not possible to perform calculations that take up more memory than that!
    Unless, of course, we use virtual memory... so... now, with disk-access speed memory, we write a Win95 Emulator (never mind the difficulties in context switching and utter lack of runtime security caused by the lack of a protected mode) and now we're off! We download and install Quake V (it's been a few years since we started this project, you see, a couple more versions came out) and run at... 1 frame per week.
    Maybe we try something less ambitious... printing out a pdf document... at one page every six hours.

    No, I'm sorry, we utilize -much- more than 1% of our computing power in -many- everyday tasks, and what is theoretically 'possible' with older CPUs is -not- the same as what is feasible.

    Besides which, ... the security issues between 8088 and PentiumIII aren't the level of utilization, but the lack/existence of a protected mode, DMA transfers, bus protocols, etc. I'm not an expert, so I couldn't even start to tell you how we can be sure an OS is or is not secure against software taking advantage of hardware architecture.
    Obviously, -NO- OS is secure against actual tampering with the hardware directly. After all, the tamperer could always replace the OS with his own boot disk. But that, too, has little to do with the difference between 8088 and Pentium III...

  • Don't believe everything you read on /. The Israeli quantum cracker message is 100% bogus. Public-key crypto would be in trouble if a quantum computer of sufficient size ever turns out to be feasible, but symmetric crypto can just double the size of its keys and go on. A secure solution can be, and has been, built using symmetric crypto alone.
  • Sounds like you love to tinker as much as I do. Making an old hard drive play jingle bells by manipulating registers in the controller chip and other such things.

    I agree that hardware knowledge will play an important role in the security battleground of the future; However, I don't think it has been, or currently is as limited as one might imply from your comments.

    The real hackers out there are folks who have an overwhelming desire to know how everything works. To them, your comments might seem more like statements of the obvious rather than commentary about a subject you obviously enjoy. Hacking has always included hardware knowledge as a big part of it and it always will.

    A perfect example would be boxing (anyone thinking I'm referring to a sport stop reading here). The people who came up with most of those ideas weren't interested in any sort of fame, they just wanted to know how things worked. That was almost 100% hardware-based, and included knowledge that can be applied to today's electronics quite easily. Fundamentally things haven't changed very much... which was another point of yours.

    So... I agree, but I also think many so-called security experts miss the point that vulnerabilities have always existed at the hardware level. It's not just the future we're talking about here. IMO that's as much a fact now as it ever will be.

  • The first constraint on the system is that of a "peanut CPU". "The consequence of [this constraint] is that, while strong symmetric cryptography is feasible, modular arithmetic is difficult and so is strong asymmetric cryptography." Because of this, these devices cannot use IPv6. In general, the specification clearly shows why conventional solutions to these problems do not apply to these classes of devices.

  • Bear with me if you will while I make this association.

    I'm very interested in wireless and like the authors of this paper, I think it will be very important in the coming years. But I've never thought about things like this 'sleep deprivation attack' they were talking about. To me, this demonstrates one of the most powerful things about the open source/free software community, the fact that there are smart people thinking in ways others wouldn't. When big companies put a group of smart people together, they may very well come up with a great product but they probably won't be able to think of every attack/feature/etc that a larger interested group could think of.

    Another example of this is the development of so-called "side channel attacks" in cryptography. People have used things like battery drain, EMF radiation signatures, and others, to attack smart cards and their ilk. Certainly the designers of the smart cards were assured their crypto was up to snuff but they hadn't counted on these side-channel attacks. If this hadn't been discovered until everybody had a smart card in their wallet, it would be a huge catastrophe.

    Open thinking is a bit difficult for most big corporations to do, but I think things like this paper will help bring them around. The time of believing that a small group can design important projects in a closed manner is almost finished; there are too many smart people around thinking in new ways.

    I know this is a little offtopic, but that sleep deprivation attack got me thinking. Which, I guess, is the point.

  • This is precisely the difference between a true "hacker" and your every-day IRC script/packet kiddie.

    A real hacker tinkers with stuff to see how it works. A real hacker gets his pleasure watching something he hacked up run on an obscure piece of hardware.

    Packet kiddies are the children that spend their time on IRC, downloading l33t exploits of the month, running their spl01T scanZ on machines, packeting their "enemies", defacing web sites, and only in very rare cases are these kids able to code anything useful on their own (aside from simple Tcl for their eggdrop botnet).

    The hackers examine and "hack up" software and hardware for their own education and pride. Packet kiddies do it so they can get recognition (either among their fellow l33t IRC peers, by telling "hacker stories" at school so people think they're cool, or by trying to do something they hope will get them in the newspaper).

    Packet kiddies don't know squat about electronics, and I doubt will ever have a desire to learn about it (it's too hard for most script kiddies, who tend to be pretty lazy/undisciplined). Those that do take the plunge tend to easily be the more mature of both worlds. (There are exceptions, sadly.)
  • Work in quantum computing and quantum cryptography has shown that every cryptographic standard today can be broken in no time at all, so how would wireless networks be able to take advantage of this?

    To do quantum computing and quantum communication you have to have perfect control of the system, and prevent absolutely any interactions between it and its environment.

    As a theoretical exercise, quantum algorithms are certainly fascinating; but I suspect that in reality a quantum computer with enough completely isolated and non-interacting 'gates' to run the factorisation algorithm is unlikely ever to be achieved.

    Similarly, quantum communication might work along optical fibres or tightly focussed laser beams, but I think you would have a lot of problems trying to detect the very subtle single-photon correlations using wireless against a noisy RF background.

    But I'd be delighted to be proved wrong on either of the above!

  • Obviously, -NO- OS is secure against actual tampering with the hardware directly. After all, the tamperer could always replace the OS with his own boot disk

    That's not obvious at all. For example, what do you (the invader) do if there's no floppy drive? Start pulling chips? What do you do if there's a floppy drive, and there's no password protection in the bios, you can boot from your floppy, but the file system is encrypted? Or any number of other simple obstacles that could be placed in your way.

    The point is, it is possible to harden the OS (and by extension the network) against invasion, both by hardware and software means.
  • Riiiiiiight. Well, I could be wrong (it's happened before). Must've been a different city.


    "You can't shake the Devil's hand and say you're only kidding."

  • You're right, that I didn't understand that you didn't say anything about the hard drive. I didn't understand -what- you were saying was 'how we can protect our systems' but the hard drive was your most recent comment before that. As far as I can tell you've said nothing about security.

    Next... a wait-looping program does -not- waste huge amounts of CPU. It wastes no more and no less than that process's share of the CPU. Granted, wait-loops are inefficient, and it's better to explicitly relinquish the CPU with some sort of system call. However, this is not the same as 'wasting' 99% of the CPU's power.
    Which, really, is the only point I'm taking issue with, here. It simply -is not true- that the CPU is 100 times more powerful than what we make use of. If it were, a rival operating system like Linux or BSD or SCO-Unixware or Solaris x86 would 'do it right' and be 100 times more powerful than Windows! That doesn't happen because we are not, even under Windows, wasting 99 percent of our CPU time.

    Nor are the CPU meter utilities the only way that I've looked at CPU usage (although I disagree that they are inaccurate measures, but never mind that). I've used WinDbg on Windows and kdb on Unixware to step through various problems I was debugging, done various bits of profiling to test for real-time latencies, etc. We 'waste' at -most- ten percent of the CPU time doing context-switches, page-faults, and other OS tasks. I'm not making this stuff up, you know, there's plenty of literature on OS design that talks about these things. And honestly... don't you think it's a little bit arrogant of you to think you're the only person in the world who has noticed that we could get 100 times more out of our computers if only we 'did it right'? Do you really think OS designers are so blind as to have made that grievous an error?
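    The difference between a wait-loop and an explicit relinquish can be shown directly (a small sketch; the exact timings are machine-dependent, but the shape of the result is not):

```python
import time

def busy_wait(seconds: float) -> float:
    """Spin-loop: burns CPU cycles for the whole wall-clock interval."""
    start = time.process_time()
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        pass
    return time.process_time() - start          # CPU time actually charged

def polite_wait(seconds: float) -> float:
    """Relinquishes the CPU via a blocking sleep system call."""
    start = time.process_time()
    time.sleep(seconds)
    return time.process_time() - start

cpu_busy = busy_wait(0.2)
cpu_sleep = polite_wait(0.2)
# The spin-loop is charged roughly the full 0.2s of CPU time;
# the sleeping wait is charged almost none.
```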

  • That's the right idea. And it also points out an interesting dilemma that comes with multiple master remotes. "Imprinting" works great when you have one master remote - the authorization code constantly changes according to usage - but what happens when you need more than one remote? A two-car garage would likely need at least two remotes.
    This problem can be solved by storing a separate key for each remote device and having the door opener react to each one. That increases the possibility of breaking the key, but allows for multiple master remotes. The question is how many keys to store. Currently, electronic devices with remotes can be spoofed by a universal remote, providing us with a master remote, but you can still use the original remote to work the device. Even with the introduction of authorization security, that situation is not likely to change, so there is a minimum of two remotes for the device to be "imprinted" to. There may be a need for more. So each device will need to have a maximum number of remotes it can become imprinted on.

    Hmm... So the "resurrected duckling" may need to be "imprinted" to multiple "mothers". Great, another image to digest, polygamist lesbian ducks raising undead ducklings.

    -S. Louie
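    The multiple-key table suggested above might look like this (an illustrative sketch, not anything from the paper; the class name and the limit of four remotes are invented):

```python
class MultiMasterOpener:
    """Door opener that can be imprinted on up to max_remotes keys,
    so several 'mothers' can control one duckling."""
    def __init__(self, max_remotes=4):
        self.max_remotes = max_remotes
        self.keys = set()

    def imprint(self, key: bytes) -> bool:
        if len(self.keys) >= self.max_remotes:
            return False                 # table full: must be reset first
        self.keys.add(key)
        return True

    def authorized(self, key: bytes) -> bool:
        return key in self.keys

    def reset(self, key: bytes) -> bool:
        """Only an already-imprinted remote may wipe the table -
        the 'death and resurrection' step."""
        if key in self.keys:
            self.keys.clear()
            return True
        return False

opener = MultiMasterOpener(max_remotes=2)
assert opener.imprint(b"remote-A")
assert opener.imprint(b"remote-B")
assert not opener.imprint(b"remote-C")   # third remote refused
assert opener.authorized(b"remote-A")
```

    As the comment notes, every extra stored key widens the attack surface a little, so the cap on table size is a real design parameter, not just bookkeeping.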
  • For an ISP, most of these problems go away, I think. There's increased vulnerability to man-in-the-middle attacks, but there are plenty of security protocols to deal with that.

    You could, in theory anyway, use public key cryptography with good assurances... just issue
    your public key and your user's pre-generated private key to each new user with the rest of the install software...

    If you could do it, actually, it'd be a really good idea for every session to be encrypted. Plaintext passwords over a phone line is one thing... theoretically vulnerable to man-in-the-middle interception, but not likely... over the air is another thing altogether. Anyone could be listening.

  • Okay... let me get this straight:

    You support the notion that you can do anything on an 8088 that you can do on a PentiumIII with no loss of performance simply by writing the software more efficiently?

    This is the idea that I am contesting. The idea that we are only using 1% of our CPU power.

    Granted wordprocessing leaves the machine idle. So does booting it up, setting it to never start a screen saver, and not launching any applications. Or just playing solitaire. In these cases, you're just sitting around waiting for user input and then doing a little bit of drawing.

    But I contest the notion that there is a hidden 99% of our power that is not being used because of poor coding practices.

    If this were true in the Windows case, wouldn't a Linux that utilizes the CPU fully be 100 times more powerful than Windows? And can you honestly say that Linux -is- 100 times more powerful than Windows? I certainly wouldn't, not in the benchmarking sense, anyway.
  • If there's no floppy, sure, replace the boot-rom. Or pull the hard-drive and put it in a machine that -does- have a floppy.

    An encrypted OS is a cleverer idea, but it has to be implemented carefully. Now that I've broken physically into the lab and copied and/or taken the hard-drive I can take my time cracking the encryption. The initial authentication had better be stronger than 8 letters.

    If the machine can be booted up without authentication, I take off the case, replace the CPU with an ICE, and read/write directly to memory to bypass security. If you can't trust the integrity of memory, you're sunk. If it can't be booted up without authentication, well, that's awfully inconvenient. But it is more secure.

    Better yet... if you want your machine to be secure from physical attack, secure it physically. 'cause -nothing- is going to stop a DoS attack if the cracker gets physical access. It'll take hardly any amount of explosive at all for that...

  • Considering how many IP addresses IPv6 makes available, devices will probably have permanent IP addresses assigned. Consider also how wired the devices we are talking about will be. When the consumer buys these devices, the routing information can be burned in by the reseller or manufacturer, because said IPs will most probably be stored (and available for transfer to the reseller) in either all the devices you own, or your main computer.

    So, because there is a trusted IP table, each device will only listen to (recognize) devices - yours and those you allow - listed in its burned-in table. Imagine the table only being wipeable by the IPs that it trusts. That means that the duckling cannot be killed by any device other than the owner's (in theory, hacking this would require someone to physically take the device from the owner, but that level of trust regarding security is dumb), and therefore cannot be reanimated to recognize any other devices except when the owner of the controlling device deems it necessary.

    At least, that is what I understood from the article.
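    The burned-in trusted-table idea sketched in this comment could look like the following (illustrative Python; the addresses are made-up IPv6 literals, and a bare source address can of course be forged, which is this scheme's weak point):

```python
class TrustedDevice:
    """Device with a burned-in table of trusted peer addresses.
    Only a peer already on the table may rewrite it, so the duckling
    can only be 'killed' (re-imprinted) by its current owners."""
    def __init__(self, trusted):
        self.trusted = set(trusted)

    def accept(self, src_addr: str) -> bool:
        return src_addr in self.trusted

    def rewrite_table(self, src_addr: str, new_table) -> bool:
        if src_addr not in self.trusted:
            return False                  # strangers cannot re-imprint
        self.trusted = set(new_table)
        return True

dev = TrustedDevice({"fe80::1", "fe80::2"})
assert dev.accept("fe80::1")
assert not dev.accept("fe80::bad")
```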

  • But what happens when I come along and impersonate the trusted IP? And isn't this trivially easy to do? No more difficult that impersonating a different Ethernet MAC address, no? That's something I can do quite easily...


  • And right now, there's a group of rednecks in Alabama with a dozen bearcat scanners trying to intercept wireless communications. They think the Miller Lite they're drinking is going to help them.

    Brad Johnson
    Advisory Editor
  • by bain ( 1910 ) on Wednesday October 20, 1999 @06:46AM (#1598827) Homepage Journal
    As a person working for an ISP in South Africa, where there is a telecom monopoly and lines take about 6 months to a year to install (yes... that's right), we have to use wireless often to work around problems with our local telecom... security has never been much of an issue. This is an eye-opener.

    Thanks to all involved...


  • I have been in the know about hardware and software both - the knowledge of how everything works together - for over 15 years. The intricacies and fallibilities of both, and what happens when they interact. Most techs I have run across just know software (and very little at that). The whole thing is that if someone wants to disrupt or cause chaos on another's machine, the future may lie in hardware knowledge. I have always maintained we are headed to a "Max Headroom" type society faster than we may believe. Hardware gets cheaper and cheaper, while software continues to utilize less than 1% of its capability. For instance, what can't I do with my 8088 that a Pentium 3 does today? The answer is nothing. The difference is the software written for it. Nobody has written Windows to run on an 8088, for example. The result is that if the future is going to be ever-changing formats and chips and transfers, the knowledge to keep up is hardware knowledge, and how code interacts at its lowest levels with chips. (Of course, a basic knowledge of radio and electronics couldn't hurt either.) I think this may be a clue. If you guys want to keep current, catch up on your electronics education. :)
  • Since this is similar to a denial-of-service attack (continually requesting a service), the battery-drain technique should work OK. If you can keep the device broadcasting more often than normal, the battery will be drained faster.

    However, most of these devices are rated on just this sort of continual broadcast. Take a look at the specs for recent cell phones. They list total broadcast time, as well as standby time. Bluetooth specs also detail power drain on a broadcast/standby basis.

    End result? Manufacturers will get wise to these attacks, and figure out a way to ignore malicious devices. I seem to remember them talking about this, but I don't remember any documents regarding this.

    However, this is just one of the issues being addressed in the Bluetooth (pico area nets) and 3GPP (next generation mobile phones) groups. The really big problem is how do you keep others from listening in on your conversation. In both groups, part of the answer is frequency hopping, plus a small amount of encryption (allowed by the Feds). Authentication is already in place to disallow most spoofing. It is always possible to spoof, just depends on how hard you have to work at it.
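    One way a manufacturer might "ignore malicious devices", as suggested above, is simple per-peer rate limiting (an illustrative sketch; the class name and the requests-per-minute threshold are invented):

```python
import time

class RateLimitedRadio:
    """Sketch of a sleep-deprivation defense: refuse to service wake-up
    requests from any peer that exceeds a fixed request rate, bounding
    how fast a single attacker can drain the battery."""
    def __init__(self, max_per_minute=6):
        self.max_per_minute = max_per_minute
        self.history = {}                      # peer -> recent timestamps

    def handle_request(self, peer: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        recent = [t for t in self.history.get(peer, []) if now - t < 60.0]
        if len(recent) >= self.max_per_minute:
            return False                       # over budget: stay asleep
        recent.append(now)
        self.history[peer] = recent
        return True

radio = RateLimitedRadio(max_per_minute=3)
# Six back-to-back requests within one minute: only the first 3 served.
results = [radio.handle_request("attacker", now=float(t)) for t in range(6)]
```

    A real device would also have to budget the cost of the check itself, since even deciding to ignore a packet costs some energy.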

  • by fornix ( 30268 ) on Wednesday October 20, 1999 @07:19AM (#1598831) Homepage
    And right now, there's a group of rednecks in Alabama with a dozen bearcat scanners trying to intercept wireless communications.

    Actually, there is a group in Alabama who have developed time modulated ultra wideband chips that promise extraordinary wireless bit rates and nearly perfect security. Check out Time Domain []. In addition to wireless LAN, you can use the stuff for pocket sized radar (see through walls!) and GPS to within centimeters! Anyway, I think it looks cool and haven't yet seen a story about it on /. (I submitted it 10 months ago, though)

  • This sort of thing becomes very very (did I say very?) important when we start getting into high-speed wireless networking. I remember reading about a city in the mid-west (Tucson?) that already had available 1.5 mbit wireless (two-way) networking coverage. I think pretty soon -- hopefully -- more cities will get this sort of thing, and mobile connectivity will become viable. GSM connections just aren't fast enough for anything useful right now.


    "You can't shake the Devil's hand and say you're only kidding."

  • This was a very interesting article to read; the more exposure and testing and thought that is put into the nature of wireless networks, the sooner we can get support in the mainstream. What I would have liked to have seen mentioned, though, is the relevance of proximity (and/or distance), and also the idea of device detectors - i.e., if there is only meant to be a certain number of devices in an area, then another device can be a sentinel to determine if there is an imposter in the area.

    And /. keep more wireless stuff coming please.
  • I had often wondered about security issues with wireless devices. On one such occasion, a fight between my autoresponder and another left my BlackBerry [] dead in no time flat. I also wondered what kind of security wireless could provide, considering today's and yesterday's "snooping" technology such as TEMPEST. Kinda broadens the scope of the rumored ECHELON. Boy, has the NSA got a lot to snoop now.

    SL33ZE, MCSD
  • Work in quantum computing and quantum cryptography has shown that every cryptographic standard today can be broken in no time at all [], so how would wireless networks be able to take advantage of this? Since quantum computing only works because of entanglement, can wireless communications use this technology to provide any true security? Scientific study has shown that entangled particles can stay as such when separated by distances of up to 10 kilometers, which is certainly far enough if your device is communicating with another in your home, or perhaps a receiver box connected to the internet at the end of your block. Anybody care to comment?
  • Once IPv6 goes into widespread production (we're now on IPv4, but it seems like everyone is skipping IPv5 - any info on that?), this problem will be licked because IPSEC will be integrated right into the IP stack. That means that any TCP/IP communications between IPv6 devices can and should be encrypted using IPSEC.
    Neat, huh?
  • by wallace_mark ( 83758 ) on Wednesday October 20, 1999 @07:31AM (#1598837) Homepage
    I think the core assumption here is that we can trust some kernel code in the "peanut" device. I suspect that will prove to be a fairly difficult trust to establish.

    The concept of resurrected ducklings, however, might have broader implications. Indeed, it might serve to solve some of the problems with trusted kernel code.

    Suppose that we create "sickly ducklings" - processes that will die if interfered with. One way to look at hacking is that hacking is an attempt to obtain unexpected responses from a program based on unexpected inputs, and to take advantage of those responses. A fragile duckling, confronted with unexpected input would die - or perhaps enter a more sickly state.
    [Reference to the "DOOM kill process article" elsewhere on slashdot is intentional.]

    If the kernel code is fragile, then any attempt to interact with it by unauthorized entities will kill it. The program can then reinitialize itself, with a new identity. Any subsequent reference to this duckling by an authorized user will reveal the tamper.

    Obviously the code must be small, and must interact in (formally) defined ways - much like a security kernel.

    Combine this with Kerberos-style tickets, or better yet Yaksha, and I think this might form the basis of software tamperproofing.

    [Yes, a well-prepared adversary can kill a lot of ducklings to discover an "addictive duckling medicine" that will enable him/her to cure the duckling, and manipulate the cured duckling. But I suspect the ease of discovering that medicine is related to key/secret size.]
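    A toy version of the "sickly duckling": die on any malformed input, zeroize the secret, and resurrect under a fresh identity so a later authorized contact reveals the tampering (all names and the 4-byte command format here are invented for illustration):

```python
import secrets

class SicklyDuckling:
    """Fragile process: any input outside its formally defined
    interface kills it, wiping the key; it then re-initializes
    under a new identity."""
    def __init__(self):
        self.resurrect()

    def resurrect(self):
        self.key = secrets.token_bytes(16)     # fresh identity on rebirth
        self.alive = True

    def handle(self, message: bytes):
        if not self.alive:
            raise RuntimeError("duckling is dead")
        # Formally defined interface: only exact 4-byte commands allowed.
        if len(message) != 4:
            self.key = b"\x00" * 16            # die: zeroize the secret
            self.alive = False
            return None
        return b"ok"

d = SicklyDuckling()
assert d.handle(b"PING") == b"ok"
assert d.handle(b"malformed probe") is None    # unexpected input kills it
```

    The point of the fresh key on resurrection is that an authorized user's next contact fails against the new identity, which is exactly the tamper-evidence the comment describes.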
  • Actually, I remember reading something about the possibility of teleportation/action at a distance in relation to entanglement. I don't remember where, but check out this article [] from Scientific American for some cool info.
  • :) and, if you recall, tempest technology is more than 10 years old. now, we BOTH know what is being done now....
  • There was an IPv5. Two, if I remember correctly (which is part of the problem), but it/they was/were never designed for wide-scale use. They solve some problems that most of us will never encounter, but you can find the RFCs for them.

  • That's a great idea, and it brings back memories of a menu program I used under DOS. This menu program was extremely temperamental, and any little change would break it. One day, I was having some trouble getting my menu program to run. Knowing that it was temperamental, I suspected that it had been changed without my knowledge. A virus scan turned up the Jerusalem-B virus, which had infected my menu program, rendering it inoperable. Every other infected program was still working, though. That little canary in a coal mine saved me from losing any important data.

  • Here's [] a slashdot story that ran about the microwave wireless in Tucson.


    "You can't shake the Devil's hand and say you're only kidding."

  • by Signal 11 ( 7608 ) on Wednesday October 20, 1999 @07:49AM (#1598845)
    First, I commend the author(s) of this paper for making some very interesting observations and coming up with creative solutions to the problems proposed in their paper.

    There are two additional thoughts I would like to share. First - a lot of this should be considered today. Examples include wake-on-LAN and power-management systems, as well as laptops. For the first, assume a company has several hundred workstations that use wake-on-LAN technology or other power management (maybe wake on modem activity?). A lot of power is consumed while those devices are "awake", so it would seem logical to put them to sleep when not in use (to save money on power). Somebody could simply walk up to a station and start sending out rogue "Wake up!" packets across the network, wasting large amounts of electricity and costing the company hundreds of dollars each day. This is, of course, theoretical... but it underscores what these guys are talking about - conventional security wisdom isn't applicable in all situations.

    I like the message. It's a wake-up call (pardon the pun) for security analysts - consider your requirements! Locking everything down military-style does little good if an attacker can just start turning devices off at will by draining away all their power!
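    The rogue "Wake up!" packet is concrete enough to sketch: a wake-on-LAN "magic packet" is just six 0xFF bytes followed by the target MAC address repeated sixteen times, usually sent as a UDP broadcast. A hedged sketch, assuming standard wake-on-LAN behaviour (the port and broadcast address are the conventional defaults):

    ```python
    import socket

    def magic_packet(mac):
        """Build a wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
        mac_bytes = bytes.fromhex(mac.replace(":", ""))
        return b"\xff" * 6 + mac_bytes * 16

    def send_wol(mac, broadcast="255.255.255.255", port=9):
        """Broadcast the magic packet on the local segment (UDP port 9)."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
        s.close()
    ```

    Nothing in the packet authenticates the sender, which is exactly the point: anyone on the segment can wake (and keep awake) every machine that listens for it.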


  • Actually, 8-bit computers had a habit of being expanded by far more than their "theoretical" limits. There was a 256K expansion pack for the PET 8064, for example, and "Sideways RAM" expanded the BBC Micro into a monster machine for its time.

    The trick is to use software paging. If you can spare enough memory to hold paging software and a software register, and your bus can transmit that data to a card, there is NOTHING to stop you having an unlimited amount of memory in your computer.

    An 8088, expanded this way, could easily handle over one million pages, each 1 megabyte in size, totalling 1 terabyte of RAM.

    An 8088 could program the 20-bit address bus, giving it a total of 1 megabyte of addressable RAM. However, the addressable space, internal to the processor, was the full 32-bits. If you had a TSR, which read this value and programmed a card with it, you could bypass the limitations of the rather idiotic address bus design.

    CPU "Protected Mode"? Same rules apply. Write something in software to produce a similar effect. Yes, you add a layer, but it's not going to slow you -that- much, as it doesn't have to -do- much.

    I agree that modern =LINUX= software utilises the processor a lot more than 1%. At least, when I use it, it does! I'm often getting between 98%-102%, as shown by 'top'. On the other hand, I do a lot of processor-intensive stuff. Word processing leaves the machine unbelievably idle, and even regular stuff that floods the cache can end up injecting 4-5 wait-states for every machine-level instruction executed.
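    The software-paging trick described above can be modelled in a few lines: keep a small "window" in the address space, plus a bank register that selects which page of a much larger backing store the window maps onto. A toy model (the window size and base are illustrative, not the PET's or BBC's actual layout):

    ```python
    class BankedMemory:
        """Toy model of software paging: a 64 KiB address space whose top
        16 KiB window maps onto one page of a much larger backing store."""
        WINDOW_BASE = 0xC000      # start of the paged window
        PAGE_SIZE = 0x4000        # 16 KiB window

        def __init__(self, num_pages):
            self.low = bytearray(self.WINDOW_BASE)   # fixed low memory
            self.pages = [bytearray(self.PAGE_SIZE) for _ in range(num_pages)]
            self.bank = 0                            # software bank register

        def select_bank(self, n):
            self.bank = n        # one port write on real hardware

        def read(self, addr):
            if addr < self.WINDOW_BASE:
                return self.low[addr]
            return self.pages[self.bank][addr - self.WINDOW_BASE]

        def write(self, addr, value):
            if addr < self.WINDOW_BASE:
                self.low[addr] = value
            else:
                self.pages[self.bank][addr - self.WINDOW_BASE] = value
    ```

    The CPU only ever sees 64 KiB, but `select_bank` swaps what sits behind the window, so total storage is bounded only by how many pages the expansion card holds - which is the "unlimited memory" argument above.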

  • The 802.11 standard specifies some things about encryption, but the encryption is 40-bit RC4. Fast, but very weak.

    I suppose they expect to only worry about data between each hop, as then you have to guess where the new hop takes it (unless of course you KNOW where it's going by previous watching).

    At 2.4 GHz, it's interesting to note that there aren't exactly a lot of channels available up there as the bandwidths get larger. You get about 60-80 distinct channels (varies from country to country) at 1Mbit, less (not sure of the exact amount) at 2Mbit, and a grand total of 3 at 11Mbit. (This might look weird: 80/11 = 7.something, and 60/11 = 5.something. However, at higher bandwidths you get larger amounts of edge bleeding, which is why there are only 3 channels, and in the interest of international conformance, they have reduced the channel usage to fit the wider market.)

    At 11Mbit, if one of these channels is occupied then you have only 2 alternatives. If they're all blocked, well, you're going to have a real problem aren't you?

    It isn't hard to build devices to jam the entire band either. What can be more devastating, however, is the power they put out, which when received by an antenna might be of enough strength to actually fry circuitry. Many microwave ovens generate frequencies around the 2.4 GHz band, some through direct emissions at 2.4 GHz, but most through harmonics. At between 600-700 Watts total, the ordinary microwave distributes a lot more power at 2.4 GHz than the puny 500mW or 100mW that most countries allow (fortunately microwave ovens are shielded and most of this doesn't escape, especially around us humans).

    Denial of service can take many forms, and current radio networks can be easily disrupted through signal damage, power drain, or signal jamming. The problem, however, is getting everyone around the world to agree to a standard and spectrum that will allow large data bandwidth without country-specific problems, and allowing lots of channels, with ideally a lowish frequency band to allow longer-distance communications. The problem is, of course, that they're already taken.
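    To put "40-bit RC4" in perspective: RC4 itself is a simple byte-oriented stream cipher, and the weakness here is purely the key length - 2^40 keys is small enough to search exhaustively. A sketch of the keystream generator (illustrative, not a recommendation to use RC4 for anything):

    ```python
    def rc4_keystream(key, n):
        """Generate n bytes of RC4 keystream from the given key bytes."""
        # Key-scheduling algorithm (KSA): permute S under the key
        s = list(range(256))
        j = 0
        for i in range(256):
            j = (j + s[i] + key[i % len(key)]) % 256
            s[i], s[j] = s[j], s[i]
        # Pseudo-random generation algorithm (PRGA)
        out = []
        i = j = 0
        for _ in range(n):
            i = (i + 1) % 256
            j = (j + s[i]) % 256
            s[i], s[j] = s[j], s[i]
            out.append(s[(s[i] + s[j]) % 256])
        return bytes(out)

    # A 40-bit key is only 5 bytes: 2**40 (about 1.1e12) candidates,
    # well within reach of a determined brute-force search.
    ```

    The ciphertext is just plaintext XOR keystream, so an attacker who can capture traffic needs only to try keys until the XOR yields recognizable IP headers.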
  • Are you thinking of transmitting information using entangled particles? Sorry, but the way it works, no information at all can be transmitted that way. I can't recall all the details, but it fits in nicely with the rule that nothing, not even information, can travel faster than the speed of light.

  • Good in concept, lousy in practice. Take for instance Microsoft and any major computer manufacturer like Compaq, Dell, IBM, or anyone else. On a desktop, many boxes have a "sleep mode" button. In order to reactivate the machine, you hit a key, move a mouse, etc. What happened with that whole project? 2 years to come up with enough patches by the manufacturers in order to keep everyone's machine from crashing. Why? Because Microsoft incorporated it into the operating system, and the manufacturer of the computer made it work with their proprietary code as well. The result? Conflicts galore. In reality, 4 years later, problems still occur. Yes, this is Microsoft, but Microsoft isn't the problem. It's the conflict thing.
  • I never said anything about CPU time wasting. I said computational power - or capability. Again, you didn't read that. If being the first to point out that maybe something should be done differently is arrogance, then yes I am arrogant. Do I think software manufacturers are that blind? I can't believe you just asked that. If Y2K got past everyone, do you really think they care about bloat or efficiency? oh. wait. you probably think y2k is a hoax. :)
  • First: You can put up to a meg of memory on an 8088 without a card.

    720K/1Meg... it isn't really a relevant difference. The point is that for the sorts of large calculations used in cryptography, 3d rendering, etc, ... this isn't enough. A simple Xwindows running just a trivial window manager takes between 2 and 4 megs... you'd be thrashing your system before you even started an app!

    Second: The level of the code you write, and your ability as a code writer indicate what you can make your machine do. You are wrong about what machines today utilize on their chip capability.

    There are utilities for both windows and linux that show your percentage CPU usage. 2 or 3 percent is typical for an idle system, 90-something percent for an intensive videogame, maybe 50% for streaming video - bandwidth is usually the constraining factor here. Exact percentages vary from machine to machine, obviously... a PentiumI with a direct T1 connection is going to have different constraints than a Quad-Athlon machine with a 33.6 modem. :)
    At any rate, I regularly work with low-level code - driver/kernel level code, and standalone code - and I'm quite aware of what it takes to saturate a CPU.

    First- Ever try win3.1 on a pentium3? Wanna know why it runs better? The code can be executed better through the CPU because of the CPU's capability.

    Exactly. It is because the CPU was saturated, fully utilized, unable to perform any better, that a more powerful CPU allows the system to perform better. If, as you suggest, CPUs were massively under-utilized, then it wouldn't matter whether or not you had a more powerful CPU. To draw an analogy...

    If I'm trying to drive to work on a 65 mph speed limit highway, if I'm driving a Mustang, I'm underutilizing the car, and upgrading to a Ferrari doesn't get me there any faster. (Unless I break the law. Ok, so it's a weak analogy).

    If, OTOH, I'm driving a Model-T with a top speed of 45 mph... I'm fully utilized. Upgrading to a Mustang or a Ferrari -will- show improvement in my accomplishing the task.

    What you're saying is equivalent to saying 'because upgrading from a Model-T to a Ferrari lets you go faster, this proves that we don't drive our Model-T's as fast as we could. If we wanted, we could drive them as fast as Ferraris!'
    It just doesn't make any sense.

    -You can even use a weak HD and it runs better. If you've ever studied CPU architecture and how to program registers, you know this.

    Sure. Hard drive has nothing to do with actual execution unless you start swapping. I don't see how this supports your point.

    Second-this is the same way you can protect your machine against tampering. -...But as any good security expert, I will leave that a topic for another day.

    I'm sorry, come again? Because hard drive speed is not a limiting factor on program execution time, we can secure our machines... how?

    Third....WAY wrong with what is used today. In plain English, you are utilizing a 32-bit bus to process a 32-bit instruction that doesn't need to be 32 bits in length. It could be done in 8. It's called bloatware. Microsoft's famous trademark. For proof, see above about registers.

    Uhmmmm... no. Between caching, pipelining, branch prediction, and all that, this just isn't how it works. I'm no CPU architecture expert, but this violates even the basic principles. First of all, I believe the instructions themselves are still (mostly) 8 bits. So a single bus fetch of 32 bits could fetch up to four instructions. Or an instruction and three bytes of arguments. Or whatever. Granted, I haven't studied the pentium architecture that thoroughly, but... when we went from 8->16 we did not go from 8-bit instructions to 16-bit instructions, nor did the 386 have 32-bit instructions. You will recall, please, that 8088 binary code runs on a pentium unchanged. Instructions are still 8 bits. More instructions are fetched per bus cycle with a larger CPU bus, not more memory wasted per instruction.

  • ST-II was an experimental protocol which implemented bandwidth controls for IP. It really evolved as a reaction to ATM, which was supposed to take over the world about 5 years ago. ST-II relied on keeping state at every router along the path. It didn't scale terribly well, and its failure modes (when one router along the path loses its state) could be difficult to deal with.

    RSVP is an attempt to replicate this stateful bandwidth control model without having to modify the underlying IP protocols. It has many of the same problems, however, with maintaining a distributed state. RSVP did learn a number of lessons from ST-II, and can deal with partial failures (where some of the intervening hops lose their bandwidth information) much more cleanly. Still, RSVP is considered a pretty heavyweight mechanism.

    Differentiated Services is yet another Quality of Service effort at the IETF. DS takes the opposite tack. There is no global bandwidth reservation; everything is resolved hop by hop. That is, each network link in the path makes its best effort to meet the QoS defined for that packet. There are no guarantees, but it works well in practice. It's just like the IP protocol itself: there are no guarantees, but the Internet works pretty well in practice.
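    The per-packet marking that Differentiated Services relies on is visible even from an ordinary application: you set a DSCP value in the IP header's TOS/DS byte, and each hop maps that code point to its own forwarding behaviour. A hedged sketch (the DSCP value shown is Expedited Forwarding, chosen here just as an example; whether any router honours it is entirely up to the path):

    ```python
    import socket

    EF_DSCP = 46                 # Expedited Forwarding code point
    TOS_BYTE = EF_DSCP << 2      # DSCP occupies the top 6 bits of the DS byte

    def marked_udp_socket():
        """Create a UDP socket whose outgoing packets carry the EF mark.
        Each router along the path may or may not honour it - DiffServ
        gives per-hop behaviour, not an end-to-end guarantee."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
        return s
    ```

    Contrast with RSVP: no reservation message ever leaves the host, no router keeps state about this flow, and the mark travels in every packet instead.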

"Freedom is still the most radical idea of all." -- Nathaniel Branden