Wireless Mesh Networks

Roland Piquepaille writes "Robert Poor is CTO of Ember Corporation. He contends that point-to-point or point-to-multipoint networks typical of industrial wireless communications systems have limited scalability and reliability. 'In contrast, wireless mesh networks are multihop systems in which devices assist each other in transmitting packets through the network, especially in adverse conditions. You can drop these ad-hoc networks into place with minimal preparation, and they provide a reliable, flexible system that can be extended to thousands of devices.' The article is pretty technical and contains several illustrations and a case study about the deployment of a wireless mesh network in a water treatment plant. Check this column for Poor's conclusions or read this Sensors article if you have more time."
  • by puzzled ( 12525 ) on Saturday March 01, 2003 @10:03AM (#5412902) Journal


    The only way to make something like this work is to have a solid L3 encryption system between the remote and the head end - intermediate stations will certainly get snooped.

    IPsec is the way to go, but it's still something of a hassle on IPv4. I've seen a lot of noise about mesh networks - this isn't really going to take off until IPv6 gets moving under its own power - perhaps another five years.

    • by numbski ( 515011 ) <numbski&hksilver,net> on Saturday March 01, 2003 @11:10AM (#5413093) Homepage Journal
      We are deploying a city-wide 802.11b network as we speak, and although it 'eliminates the need for expensive cell towers', we are able to get on top of two well-placed high rises and cover a good portion of the area. Less equipment expense for us is ALWAYS a good thing.

      We would not be able to afford getting rights in all the places needed to make this feasible. Heck, the hassle of GETTING the rights needed would make this prohibitive. :(
    • How is IPv4 a hassle for IPsec?
    • If I actually care about something being secure, it's either done through SSH (or scp to copy files), or I use SSL encryption on my instant messaging [jabber.org], or PGP encrypted e-mails. I don't care if someone's able to tell who the recipient is and what the subject is. My ISP probably logs that anyway. Wireless networking as it is now supports all that and more. What's wrong with it then?

    • Trust is the key component missing from this picture. Mesh networks seem great for trivial information but what if I need to send someone critical or sensitive data? How do I know if I can trust the nodes to relay my data without compromising it?

      Most people do not use encryption, leaving their communications in the clear. We see this today with 802.11x networks and even e-mail. How many people do their online banking over a wireless connection? How many people send e-mail that contains sensitive information?

      Encryption is all well and good but if someone decides to flood the network with encrypted packets (remember that encryption also adds overhead and slows things down) what do you do then? Or what if someone decides to launch a DDoS, grabbing new leases as soon as the last batch of packets are sent? If they are hopping around, how do you know that it is the same person/entity?

      Users/access points need to be authenticated in this type of network environment. Presumably this would involve some sort of digital certificate. That raises all kinds of privacy questions. If you are just surfing the Web, why does someone need you to authenticate? What if you are visiting medical sites to learn about particular illnesses? You may not want your identity to be associated with such information. With authentication to establish your identity as a trusted entity, the flip side of it is that now your online movements can be tracked to your authenticated identity.

      Generally speaking the technical issues are never the most difficult or challenging with the introduction of new technologies -- the social issues are.
    • If you're not already encrypting your traffic, you're pretty hosed. You can always use mesh to VPN into your home network that's wired into the Internet. The real question is how do you prevent someone from DoSing your network. Some work has been done in securing these mesh network routing protocols; see Ariadne [google.com], SRP [google.com], SAODV [google.com], SEAD [google.com], and ARAN [google.com].
    • There are other security issues that are a tad more difficult to solve, the primary one being the security of the routing protocols. IPsec is not a solution for that; there is still research to be done in this area.

      Besides, unless one solves the scalability and ease-of-management issues with end-to-end IPsec, wide adoption will not happen.
    • I used to believe that IP was the right thing for almost all networked applications, but I have since learned better. IP may be too heavyweight a protocol stack for sensor networks.

      If you look at the challenges for untethered sensor network devices, you'll quickly realize that "every bit transmitted brings a sensor a little closer to death." That's not my quote - I heard it from Deborah Estrin of the new Center for Embedded Networked Sensing [ucla.edu] at UCLA.

      I agree that crypto is important, but for anything other than a periodically-wired transmitter like a laptop, or a device with a power source of extreme energy density, power budget is a consideration that often directly affects network stack optimization.

      If any of you receive the Research Channel on the DISH Network, try to catch Professor Estrin's sensor talk. It's a great summary of the issues involved in making this stuff scalable.
  • by James_Duncan8181 ( 588316 ) on Saturday March 01, 2003 @10:04AM (#5412905) Homepage
    Well, I'll buy it as working for a small business, but you would find it very difficult to run a big network, as all of the network chatter takes up an increasing amount of spectrum. This is the same problem P2P networks have (a similar architecture), and they can only solve it by having a network in which no node can see all the others. This is not a good idea for a wider net, for obvious reasons.
    • Metricom's Ricochet network used the mesh arrangement. It worked well for that company's multi-city network. The mesh is designed to get around local problems within a large network, i.e. your local link died, so you use the next closest link point.

      • by James_Duncan8181 ( 588316 ) on Saturday March 01, 2003 @10:47AM (#5413034) Homepage
        Yes, but the issue is what happens when the network density increases. There is only so much spectrum you can use for bandwidth, so if me, my next door neighbour, the whole street and their dog have a wireless link, our connection speeds all go down big time. This is the reason that cellphone networks sometimes have problems placing your call, even with digital compression. You thought contention on a wired network was bad? You'd better just be hoping this doesn't catch on, as spectrum is *VERY* limited.
        • by ka9dgx ( 72702 ) on Saturday March 01, 2003 @11:24AM (#5413153) Homepage Journal
          The advent of modern digitally controlled radios includes transmit power control. If the density of equipment goes up, the power levels (and range of each cell) go down (on average), thus increasing the efficiency with which the spectrum is used.

          If you assume a 2-dimensional distribution, the total power transmitted can remain the same, regardless of the number of nodes.

          --Mike--
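The 2-D power argument above can be checked with a toy model. This is a sketch under assumed free-space conditions (path-loss exponent 2); the function name and the unit service area are invented for illustration:

```python
import math

def total_power(n_nodes, area=1.0, alpha=2.0):
    """Total transmit power when every node only reaches its nearest
    neighbours.  In a 2-D layout over a fixed area, typical neighbour
    spacing shrinks as 1/sqrt(n), and the power needed to cover one
    hop scales as spacing**alpha."""
    spacing = math.sqrt(area / n_nodes)   # typical hop distance
    per_node_power = spacing ** alpha     # power to cover one hop
    return n_nodes * per_node_power

# with alpha = 2 the total stays flat as the mesh densifies
for n in (10, 100, 1000):
    print(n, round(total_power(n), 6))
```

With a path-loss exponent above 2 (typical indoors), the total actually falls as density rises, which is the sense in which smaller cells use spectrum more efficiently.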

          • And if the cells get smaller, the amount of routing hops increases, and my friend's bandwidth goes down as his node fetches and sends me, my other buddy, my dog, and my neighbour files off Kazaa because he is nearest to the wired pipe. There is fundamentally only so much bandwidth in the air, and it is not enough to support ubiquitous wireless use. The failure here seems to be not appreciating that people will all want to connect to certain nodes, as they supply the (wired) bandwidth.
            • These are all issues that a network of this design faces, but they are not insurmountable. As you said, the cells get smaller so you have to make multiple hops, but by doing so you can communicate at a higher data rate and maintain that rate across multiple hops. My company [meshnetworks.com] has a product that "meshes" 802.11, and in testing we've seen this lead to much higher aggregate throughput when compared to vanilla 802.11, some 2 to 3 times higher.

              As for your other concerns, why would you need to know of all the nodes in the network? All your node should be concerned about are your "neighbors" in the immediate area, and if necessary, how to get back to a wired access point. Traditional routing protocols like RIP and OSPF don't perform well in this kind of network, and as the network grows the overhead would quickly take up all the available bandwidth. Because of this we've been moving towards on-demand protocols, and based on modeling we've done these protocols should scale well.

              The other thing, and this is more my opinion than fact perhaps, is that when ad hoc peer-to-peer networks gain widespread use, I believe they will fundamentally change how we use networks. Yes, if you just took an ad hoc network and connected it to the Internet, then based on the apps everyone uses today, everyone would be swamping the access points and bandwidth to the wired world would drop. But once peer-to-peer wireless is ubiquitous, users will have more incentive to use more peer-to-peer oriented applications. If 20 people on the same wireless network want to view Slashdot.org, why should they have to download all the graphics 20 times? Peer-to-peer wireless will give rise to new implementations of applications we use today.

              The biggest problem I can see is, as usual, security. IPSEC can secure the payload, but in a wireless ad hoc network it would be trivial for an attacker to inject a false routing advertisement and bring the network to its knees. Routing updates and other overhead needs to be secured for these networks to work.
            • There is fundamentally only so much bandwidth in the air, and it is not enough to support ubiquitous wireless use.

              No. Actually, the bandwidth is not finite. In fact it scales up in proportion to the number of nodes, if the nodes are reasonably smart.

              The idea of a fixed pie of bandwidth is based on the Shannon limit and the idea that radio waves travel an infinite distance.

              In real life, radio waves get absorbed and attenuate with distance, and the Shannon limit only applies between any two nodes in the network - it does not represent a fundamental limit for the network.

              It's a bit like sound in an office. If there's lots of walls around the sound gets absorbed and everyone can talk to each other and pass messages around.

              The failure here seems to be not appreciating that people will all want to connect to certain nodes, as they supply the (wired) bandwidth.

              Yeah, but that's a problem we have already on the internet, and the protocols already divide the bandwidth up fairly.

              And if the cells get smaller, the amount of routing hops increases

              Yeah, but not much. The number of hops goes up with the square root of the number of nodes; so a network with a thousand nodes has 30 hops; and that's a huge wireless network. 30 hops might be a lag of 60ms.
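The sqrt(N) hop estimate works out roughly as the poster says. A sketch, where the 2 ms per-hop forwarding delay is an assumption picked to land near the quoted 60 ms:

```python
import math

def hop_count(n_nodes):
    # a cross-network path in a roughly square 2-D mesh visits
    # about sqrt(n) nodes
    return round(math.sqrt(n_nodes))

def worst_case_lag_ms(n_nodes, per_hop_ms=2.0):
    # per-hop forwarding delay is a hypothetical figure
    return hop_count(n_nodes) * per_hop_ms

print(hop_count(1000))          # 32 hops, near the "30 hops" above
print(worst_case_lag_ms(1000))  # 64.0 ms, near the "60ms" above
```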

              • There is fundamentally only so much bandwidth in the air, and it is not enough to support ubiquitous wireless use.

                This is, unfortunately, a very common misconception, and I'm afraid only a few people (RF folks and EEs) are aware of it. I remember reading a very interesting article a few months ago on this.

                It's a bit like sound in an office. If there's lots of walls around the sound gets absorbed and everyone can talk to each other and pass messages around.

                Exactly. With modern software radios, it is possible for many more devices to communicate in a given space without bothering each other. This technology didn't exist back when the FCC was formed, so they had to (and still do) allocate specific bands for each service. However, a few people are starting to realize that this isn't really necessary, and the effective amount of bandwidth can now be very large with respect to computing power. Check this Google search on open spectrum [google.com]. The first link seems to get the general gist.
            • > There is fundamentally only so much bandwidth in
              > the air, and it is not enough to support
              > ubiquitous wireless use.

              Even with current technology the technical limit on bandwidth is orders of magnitude larger than the political one. UWB will make even more bandwidth available. Scarcity of bandwidth is a political artifact.
        • So what happens when everyone in your area uses the same DSL, cable, or other landline solution? You'd better just be hoping this doesn't catch on, as wires can only push so much data. There is NO solution that will not buckle under load.
    • by Morgaine ( 4316 ) on Saturday March 01, 2003 @10:54AM (#5413049)
      you would find it very difficult to have a big network as all of the network chatter takes up an increasing amount of spectrum

      This is not so, although the article doesn't really make that clear --- the aggregate bandwidth of these networks grows as the number of nodes increases in density and in geographical extent.

      The reason why this is so is that in a wireless mesh network, RF coverage is purposely restricted by turning down the power automatically and/or by dynamic channelization using frequency, time, or code (spread spectrum) multiplexing. This in effect gives you a dynamic cellular type of architecture, with channel reuse in non-adjacent cells.

      And that of course is why it's called a mesh network --- it's not a fully connected network of nodes (which would be non-scalable and bandwidth-limited), but a mesh in which locality is strong so that nodes only hear and connect to their nearest neighbours, so each new locale contributes bandwidth to the overall aggregate capacity instead of eating up yet another slice of a dwindling pie.
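A rough sketch of the channel-reuse arithmetic above; the 11 Mbps link rate and the reuse factor of 3 are hypothetical numbers:

```python
def aggregate_capacity(n_cells, link_rate_mbps=11.0, reuse_factor=3):
    """Aggregate capacity of a cellular-style mesh.  The same spectrum
    is reused in every group of `reuse_factor` non-adjacent cells, so
    capacity grows with cell count instead of being a fixed pie."""
    concurrent_transmissions = n_cells / reuse_factor
    return concurrent_transmissions * link_rate_mbps

# one reuse group (everyone shares the channel) vs. a dense mesh
print(aggregate_capacity(3))    # 11.0 Mbps
print(aggregate_capacity(300))  # 1100.0 Mbps across the whole mesh
```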
      • This assumes that the network does not have to connect to the outside world. If and when people want to read/post on Slashdot etc. (you just know that one of the first actions would be someone irradiating the neighbourhood from the Slashdot link to their server ;), you need a link to a high-bandwidth pipe, or to the Slashdot server itself (unlikely there is a link). The network then has to coalesce around the point where the bandwidth to outside machines enters. The mesh needs to route the furthest-away machines to this, and each hop increases the bandwidth used - in contested, quite limited spectrum. So even assuming no microwaves etc. add to the fun, the network cannot function effectively unless many people share their bandwidth (and from varied locations, not just one big pipe). If you are in a situation where you can do this, there is obviously a lot of wired network provided already, and the point of a mesh network disappears. If you are somewhere with few local links to a wired net, the mesh network has fundamental problems with contention and the total amount of spectrum available for the high-bandwidth links.

        you see?

        • and each hop increases the bandwidth used. In contested, quite limited spectrum.

          No, that's not so. Each hop increases the bandwidth, not the bandwidth used. This is so because each node can whisper to the node next door rather than shout and take up everyone's bandwidth. The bandwidth scales UP with the number of nodes, since you then have multiple independent ways to route from A to B through the mesh.

          • Each hop increases the bandwidth, not the bandwidth used.

            How? When c = maximum capacity per radio, c+c+c > 3c?

            The original poster was correct; it decreases the maximum capacity, and god forbid you start bridging instead of routing. Remember, you've got a retransmission of an ethernet frame going on.

            Radio A sends out a frame destined for Radio D. B hears and repeats, C picks up B's and repeats. D hears and acts as the access point for the network where egress to the Internet occurs.

            But RF doesn't work like a normal point-to-point model; you have point-multipoint going on and in most 802.11b/a ad hoc modes, it can get rather inefficient quickly. Just look at an 802.11b repeater/bridge, for instance.

            bandwidth scales UP with the number of nodes; since you then have multiple independent ways to route from A-B through the mesh

            Sounds nice but unless you've designed some load balancing protocol into the mesh, it isn't going to happen.

            *scoove*

            • How? When c = maximum capacity per radio, c+c+c > 3c?

              Consider 4 nodes, A, B, C, D in a line.

              Now D can talk directly to A by maxing out its power and shouting over nodes B and C. But if it does that then A's conversation with B gets drowned out, likewise B and C, and C and D, because they go momentarily deaf with all the shouting.

              If instead D whispers to C, C whispers to B, B whispers to A, then the other conversations aren't affected. The overall bandwidth is 3 links, whereas if you just shout all the time, the bandwidth is one link shared between everyone. (I'm glossing over some complications, but that's the basic idea).
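A minimal sketch of the shout-versus-whisper accounting for the A-B-C-D line, with unit link capacity and the simplifying assumption that minimum-power hops don't interfere with each other at all:

```python
def shout_capacity(n_nodes, link_capacity=1.0):
    # one full-power transmission deafens the whole chain, so the
    # entire network shares a single link's worth of capacity
    return link_capacity

def whisper_capacity(n_nodes, link_capacity=1.0):
    # each low-power hop only occupies its own link, so all n-1
    # adjacent links can carry traffic at once (an idealisation;
    # half-duplex radios and adjacent-hop interference reduce this)
    return (n_nodes - 1) * link_capacity

print(shout_capacity(4))    # 1.0 shared link
print(whisper_capacity(4))  # 3.0, the "3 links" in the parent post
```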

              • If instead D whispers to C, C whispers to B, B whispers to A, then the other conversations aren't affected.

                How do you propose this whisper? In ad hoc 802.11b/a, D will be transmitting frames to whoever can hear it, regardless of whether it feels like whispering or not.

                In order to pick up omni coverage for mesh, you're probably running some sort of omnidirectional antenna which does not have the ability to discriminate and focus energy from D to C. Likewise, D is not going to calculate that it can run at a lower power level to transmit a frame to C, then bump back up to a different level to E, so on. It's a nice thought, but I'm aware of no protocol that supports this approach (someone correct me if I'm wrong please!).

                And all of this would have to be factored into the routing OS as well as any link-state protocol would need to be aware of these factors.

                I've read of experimental mesh antennas that redirect using an array - sort of a phased-array approach whereby, by sending a frame to antenna elements 1, 3 and 4, but not 2, 5 or 6, I can focus my transmission in a directional manner.

                Also, per the mesh discussion, we've run Nokia Rooftop (now discontinued) and clearly observed that A-->B-->C results in significant degradation with every additional unit added to the mesh. From an initial 3 Mbps for the FHSS mesh, a tiny network with 7 units was having a difficult time getting at best 384 Kbps to a given subscriber.

                *scoove*
                • Yes, but: 'In contrast, wireless mesh networks are multihop systems in which devices assist each other in transmitting packets through the network, especially in adverse conditions. You can drop these ad-hoc networks into place with minimal preparation, and they provide a reliable, flexible system that can be extended to thousands of devices.'

                  We're clearly talking about 'wireless mesh networks' in general not the subset of: " ad hoc 802.11b/a"

                  How do you propose this whisper?

                  Physically, it's 'merely' a question of minimising the transmitter power when transmitting a packet.

                  However, as you say, all of this would have to be factored into the routing, and of course this implies that each node has to occasionally search for all the nodes it is within range of and update the routing tables accordingly, in itself and in its neighbours.

                  Ideally, each node would have electronically steerable antennas; and multiple antennas, and filtering to make use of multipath. The more sophisticated the nodes are, the more bandwidth there is.

                • How do you propose this whisper? In ad hoc 802.11b/a, D will be transmitting frames to whoever can hear it, regardless of whether it feels like whispering or not.

                  Not entirely correct. Most 802.11b cards I have worked with and/or discussed can limit their power. It would require the device's operating system's networking stack to adjust this dynamically.


                  For example, iwconfig (which is hopefully going to become ifconfig for wireless extensions) allows me to set a card's power output, using Linux's wireless extensions (which admittedly don't work everywhere yet). This would allow a person to design a way to power down just enough to talk only to the closest couple of nodes. (Of course, there need to be ways to deal with long hops, where a card needs to be maxed out to cover a long distance, but it could be done.) Routing would have to get better, be a heck of a lot more dynamic, and be user-friendly while dynamic, which is something that computers have mostly ignored. (I mean, how many computers' routing tables change on a minute-to-minute basis? Very few. These devices would need second-to-second routing updates.) I do think most OSes could handle it, but they currently don't have the higher-level tools (that I am aware of) to do it.
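The per-neighbour power stepping described above might look something like this sketch. The free-space path-loss formula, the receiver sensitivity, the fade margin, and the neighbour table are all assumptions for illustration; a real implementation would drive the card through something like the wireless extensions iwconfig exposes:

```python
import math

RX_SENSITIVITY_DBM = -85.0   # hypothetical receiver sensitivity
FADE_MARGIN_DB = 10.0        # safety margin for fading

def min_tx_power_dbm(distance_m, freq_mhz=2412.0):
    """Lowest transmit power (dBm) that still closes the link, using
    the free-space path-loss formula (a simplification indoors)."""
    fspl_db = (20 * math.log10(distance_m)
               + 20 * math.log10(freq_mhz) - 27.55)
    return RX_SENSITIVITY_DBM + FADE_MARGIN_DB + fspl_db

# step power per neighbour instead of always transmitting at maximum
neighbours_m = {"B": 30.0, "C": 120.0}   # node -> distance (invented)
for node, dist in sorted(neighbours_m.items()):
    print(node, round(min_tx_power_dbm(dist), 1), "dBm")
```

The point of the sketch is only that the farther neighbour needs measurably more power, so a driver that can set power per frame could whisper to C and speak up for longer hops.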

  • by suqur ( 28061 ) on Saturday March 01, 2003 @10:06AM (#5412911) Homepage
    You can find a couple of demonstrations of how mesh networks can actually work and be implemented in cities and companies on MeshNetworks' [meshnetworks.com] homepage. Very cool how the p2p works....
  • mesh networks (Score:2, Interesting)

    A smart routing protocol for many-to-many wireless connections is critical for the growth of metropolitan wireless networks.

    If anybody in Sydney, Australia is interested in joining a wireless network, check out Sydney Wireless [sydneywireless.org].
    • Is anyone studying the effects of prolonged exposure to wireless signals? If something like this becomes prevalent, the density of signals in a large city is going to be quite high.
  • by Anonymous Coward on Saturday March 01, 2003 @10:20AM (#5412946)
    ...Wireless Business & Technology magazine recently, in the October 2002 issue--from the now especially timely perspective of how they will almost certainly feature in just the kind of "4G Battlefield" that we may be about to witness in Iraq.


    In their WBT article "The Unwired soldier," [sys-con.com] authors Allen H. Kupetz and K. Terrell Brown introduce their concept of the 'Wal-Mart Soldier' and explain how "every soldier's communication device will be an individual network element with a unique IP address. All the network devices on the battlefield - including those embedded in tanks or other vehicles - will instantly form, heal, and update the network as users come and go. That is, they will associate in an ad hoc manner."


    "But unlike cell-based solutions," the authors write, "network coverage and service levels will improve when soldier density increases. Network resources are also better utilized because networks are self-balancing as well. The soldier's subscriber device can hop to distant network access points, away from points of congestion, shifting network capacity to where the demand is."


    Here's the really wild part, though: "Finally, this technology will function as a PAN (personal area network), a LAN (local area network), and a WAN (wide area network), simultaneously. This means that the same network can connect a soldier to the squad/platoon, to the battalion, and to a fully mobile division. This is critical to meeting the functionality requirements of the FCS program. It is the equivalent of Bluetooth, 802.11, and 3G converging, but in a single network, with a single device."


    They also point out (before you ask!) that "The next-generation soldier's communication device has not yet been chosen. There are several DARPA/DoD projects operating simultaneously, all of which have a communications device component. These include the "Warfighter Information Network - Tactical" (WIN-T), "Future Combat Systems" (FCS - formerly known as Future Ground Combat Systems), "Small Unit Operations/Situational Awareness System" (SUO/SAS), and the "Joint Tactical Radio System - Programmable, Modular Communications System" (JTRS-PMCS)."

  • by infractor ( 152926 ) on Saturday March 01, 2003 @10:29AM (#5412973)
    Which turns a laptop or PC system into a Linux based mesh routing access point and thin client. They also sell hardware boxes. Get the bootable ISO here [mirror.ac.uk] - build 22 is recommended.
  • by Quixote ( 154172 ) on Saturday March 01, 2003 @10:56AM (#5413052) Homepage Journal
    Common sense tells you that if the number of nodes connected to the (wired) 'net is significantly less than the number of nodes not connected (i.e. on the mesh alone), then you'll have a bottleneck.

    Has someone done any simulations on the behaviour of these mesh networks as the number of nodes increases, without an increase in the number of connected (with one foot in each domain) nodes?

    Also, will the "max flow min cut" theorem come into play at some point? Will some poor sod who happens to be the "cut point" get hammered beyond belief by having to route all packets?

    It looks to me (and I could be totally wrong here - it's been known to happen quite often) that this "mesh networks" craze is in a similar vein to the "P2P" and "distributed computing" crazes that came along a couple of years ago.
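The min-cut worry above can be made concrete with a tiny max-flow computation. In this invented topology the wireless mesh is well connected internally, but all traffic to the wired gateway funnels through one node, and the max-flow/min-cut theorem caps total throughput at that node's uplink:

```python
from collections import defaultdict, deque

def max_flow(edges, source, sink):
    """Edmonds-Karp max flow on an undirected capacity graph."""
    cap = defaultdict(lambda: defaultdict(int))
    for u, v, c in edges:
        cap[u][v] += c
        cap[v][u] += c
    flow = 0
    while True:
        # BFS for an augmenting path from source to sink
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow
        # find the bottleneck along the path and push flow through it
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= push
            cap[v][u] += push
        flow += push

# fat mesh on the left, but everything funnels through "sod" -> "gw"
mesh = [("a", "b", 10), ("a", "sod", 10), ("b", "sod", 10),
        ("sod", "gw", 5)]         # the poor sod's single 5-unit uplink
print(max_flow(mesh, "a", "gw"))  # 5: capped by the min cut
```

Here the 5-unit uplink, not the 10-unit mesh links, bounds the flow - exactly the "poor sod at the cut point" scenario.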

    • Common sense also tells you that these mesh networks will most likely be deployed in universities, businesses, and homes with broadband. So the number of wired nodes might be quite high. It seems like wherever there would be a lot of users, there would be a lot of wired bandwidth.

      Some poor sod on the min cut may have to prioritize his own requests going out to the internet side of the graph. It'd be a problem if he were on a battery. Hopefully, also along the min cut, would be a few machines dedicated to mesh routing.

      Also, the P2P craze might not be quite as huge among software programmers. I'd suggest that there are still a huge number of people working on it, but the "craze" definitely has not subsided among computer users. It's not a craze, it's just one of the most desirable uses of their computers. If mesh networking becomes nearly as useful to end users as P2P file sharing, it'll have to be outlawed to go away. Wait a second...
    • Yes, it is common sense. The wired internet has this limitation. At some point, you have to get on the backbone, right? In a true ad hoc peer to peer network, nodes are equal to one another. Need more bandwidth? Plug a backhaul into one of the nodes. Yes there is a finite amount of bandwidth, that will eventually become a bottleneck, but the same can be said of most networks.

      We've done quite a bit of OPNET modeling, continue to do so, and it provides a "proving ground" for new techniques and protocols that we try. In theory, as the number of nodes increases, the aggregate capacity of the network increases. This assumes each node steps down its power to the minimum necessary to communicate at a high data rate, creating picocells and allowing greater frequency reuse than a point-to-multipoint network. The access point will have a finite throughput, but just add backhauls to other nodes, and the network will self-balance as routes propagate.

      Hopefully routing protocols will prevent any one node from carrying all the packets. Our routing takes this into account, by noting which nodes are congested and routing around them, as well as considering battery life. Also, Quality of Service is implemented to make sure important packets get through first. We model this extensively before implementing it, as well as to continue to tweak things.

      Yeah, it is quite similar to both "P2P" and distributed/grid computing, and that makes sense. You are pushing the intelligence back out onto the edges of the mesh. I believe all three will greatly change the way we compute.
    • The scalability of the mesh network depends on the connectivity patterns. Note that there is no backbone in one of these-- all links have the same capacity. Thus, if all nodes just talk to their neighbors, there is no limit to the size of the network-- obviously, distant parts have no effect on each other.

      However, if the connectivity patterns are global, the mesh won't scale as well. For example, when arbitrary pairs of nodes are just as likely to want to talk to each other, one can show that the capacity allotted to such pairs diminishes as 1/sqrt(N) on a 2-D mesh (slice the mesh in half, and note that traffic from on the order of N/2 nodes must pass through the sqrt(N) nodes along the dividing line).

      The obvious conclusion is that we won't be able to build wide area mesh networks out of a single type of link. But that's what backbones are for, and the ad hoc networks are still damn useful in local environments, such as meetings, towns, etc. The missing part is some sort of standard resource discovery protocol whereby a node can find services available to it, such as routing, exchange business cards or ebooks or mp3s, control the stereo, download photos from the digital camera, get readings from the sensor network embedded in the building, download the local building's map, etc.
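The bisection argument above in numbers, with unit link capacity assumed:

```python
import math

def per_pair_capacity(n_nodes, link_capacity=1.0):
    """Capacity left for each of the ~n/2 cross-network flows when all
    of them must share the ~sqrt(n) links crossing the bisection."""
    cut_links = math.sqrt(n_nodes)
    crossing_flows = n_nodes / 2
    return link_capacity * cut_links / crossing_flows

# quadrupling the mesh halves what each global flow gets: 1/sqrt(n)
for n in (100, 400, 1600):
    print(n, per_pair_capacity(n))
```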
  • Mesh wireless networks sound good in theory, kinda like microkernel OS's ;p, but in practice they have been unworkable to this point. Nokia bought a company, whose name I can't remember, for this type of product; Nokia called it Rooftop. The previous company had spent more than 4 years in development, and Nokia pumped in enough cash to add another year or so, but the product was a technical failure. Our company was already experienced deploying wireless systems (Alvarion/BreezeCom and Orinoco), so we liked what Nokia had to say about the product and we gave it a try. The system proved to be totally unusable; the customer premises equipment often couldn't figure out which way to send traffic if the node it was previously using went away. I don't think that a mesh system is totally unworkable, but I do think it's more complicated than most people think. Nokia has already removed the info from their site, but

    google cache here [216.239.39.100]

    Tessco was Nokia's reseller on the line and they still have info and pics on it here [tessco.com]
  • by juxter ( 533902 ) on Saturday March 01, 2003 @11:22AM (#5413146)
    ... has been running for several months in Kingsbridge, Devon (UK) [kingsbridgelink.co.uk], based on 'off the shelf' hardware, and free software downloadable from LocustWorld.com [locustworld.com]. There is also a bootable ISO that turns any PC into a Mesh node without overwriting any of the local data! You can download it here [mirror.ac.uk] - Build 22 is recommended
  • by Shoten ( 260439 ) on Saturday March 01, 2003 @11:27AM (#5413168)
    Ok then... as far as I understand it (and maybe I'm missing the point here), for objects in the mesh to assist in carrying its traffic, you have to entrust them to be part of its infrastructure. This leads to the obvious question: would you allow just anyone to put their router (or a device that acts like a router and does god knows what else) between you and your endpoint? For that matter, would you trust a network made entirely of network devices that everyone and their brother contributed, with those devices able to come and go like thieves in the night?

    It seems to me that a mesh network would inherently place trust in all users, in a world where it's clear that all users should not be trusted, just some...and there's no way yet to sort out the good from the bad. Even if you restricted the use/deployment of the network to a single organization, it still poses an absolute nightmare that an insider could subvert the functions of a node.

    I love the notion of minimal centralization (if any) and the fault tolerance that can come with it, but I think that the security risk is waaaaaay too great.

    One day, when all connections between points (I doubt this day will come, btw) are encrypted, this could work, but only as long as the mesh itself could detect and isolate the source of DoS behavior against the rest of the net. Remember, encryption keeps information secret, but it doesn't keep anyone from just plain breaking stuff :)
    • For that matter, would you trust a network made entirely of network devices that everyone and their brother contributed, with those devices able to come and go like thieves in the night?

      Would you trust the same information to a twisted pair that any old thief could patch into? Or does your plant routinely patrol the thousands of miles of wires you use? Where I used to work, there were many ways to make things go wrong, and a saboteur would not have wasted time on data links. You could program your mesh to only talk to your nodes and encrypt the information just like you do with wired connections.

      • Yes, but it's a lot harder to patch into twisted pair. Ever hear of warwalking? Try it sometime, and then try patching into the actual wire...it'll be an enlightening experience for you :)

        And then realize that being a participant in a mesh network is far more access, far more readily given, than being a participant of a Wi-Fi network.
        • Yes, but it's a lot harder to patch into twisted pair. Ever hear of warwalking? Try it sometime, and then try patching into the actual wire...it'll be an enlightening experience for you :)

          Try walking into radio range of a nuclear power plant, or any other kind of plant for that matter. Well, that's beside the point, as our saboteur must have physical access to be a real threat. It would be so much easier to misalign valves or damage critical equipment than it would be to mess with wires or, heavens, someone's customized wireless network.

          And then realize that being a participant in a mesh network is far more access, far more readily given, than being a participant of a Wi-Fi network.

          My whole point was that this does not have to be true. If you design your mesh to ignore unknown equipment it would be much harder to break than most wired networks. Judging the performance of the network you program by the way others have done Wi-Fi would be like judging computer security by the way M$ has done things.
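          The "ignore unknown equipment" approach could be as simple as an allowlist check before a node processes any frame. A toy sketch (node IDs and function names are invented for illustration); note that an ID check alone is spoofable, so real deployments would pair it with a keyed authentication check, as hinted at here:

```python
TRUSTED_NODES = {"pump-07", "valve-12", "sensor-3a"}  # provisioned at install time

def accept_frame(source_id, payload, verify_mac):
    """Drop frames from unknown nodes; authenticate the rest.

    An ID check alone can be spoofed -- verify_mac is assumed to check
    a keyed MAC over the payload, not just the claimed source ID."""
    if source_id not in TRUSTED_NODES:
        return None          # silently ignore unknown equipment
    if not verify_mac(source_id, payload):
        return None          # known ID but bad authentication tag
    return payload

# A stranger's laptop is ignored; a provisioned node is accepted.
print(accept_frame("laptop-x", b"open valve", lambda s, p: True))   # None
print(accept_frame("pump-07", b"open valve", lambda s, p: True))    # b'open valve'
```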

          • Wait...you rely on security by obscurity by ignoring "unknown equipment" there...and the key isn't whether the equipment is known or not, the key is whether the BEHAVIOR of the known equipment can be emulated. Ever set up a honeypot? Ever hack IIS to make it look like apache? Such things are possible. After all, how the hell do you set up a wireless network and still hide the underlying protocol behavior, eh? There's nothing flawed about the implementation...the very concept of it is flawed. Anyone can enter into such a network, and in the real world "anyone" includes "very bad people". Saying that a nuclear power plant is hard to get within wireless range of is a useless argument; I doubt that mesh networks are being designed strictly for use in nuclear power plants. Oh, and a quick note: wireless networking is prohibited in nuclear power plants, as well as any classified installation or similar high-security installation under federal control. Can you guess why? It ain't because wired nets are just as insecure as wireless ones, I can tell you that much :)
    • > For that matter, would you trust a network made
      > entirely of network devices that everyone and
      > their brother contributed, with those devices able
      > to come and go like thieves in the night?

      Would you trust a network controlled by the likes of Worldcom, Verizon, and SBC?
      • I think you're confusing technology with service providers. Isn't that a bit like saying that the Cisco gear sucks because you have a crappy ISP? Mesh networking technology is, in my opinion, unsafe, and the short answer to your question will be (by any security-conscious business), "yes!" Let's face it, guys like us are not the driving power behind the wireless infrastructure market.
  • All media, no mesh (Score:4, Interesting)

    by Anonymous Coward on Saturday March 01, 2003 @11:27AM (#5413169)
    Having watched him operate for several years, it seems that Rob Poor thinks that simply by 1) talking about mesh networks for several years, 2) building a half-assed mesh simulator for his M.S. thesis that didn't even work, 3) blustering his way through a Ph.D. on the strength of that old simulator, 4) raising VC for an ill-posed attack on a very difficult problem, and 5) sitting on topic committees that have many fine lunches and dinners at consortium expense, he will somehow gain insight into a problem that he still doesn't even understand.

    But if you really want to believe the hype, then perhaps you'll be impressed by the advanced level of technical sophistication evidenced by this presentation on his website [mit.edu]. Don't forget your free sample of PIC code that shows us all how gosh-dang simple it is to be a radio engineer! Want to build a mesh? Just sprinkle a few thousand PICs in the environment and they'll self-organize into a network through the emergent properties of entrainment!

    It seems so obvious; why didn't we think of that?
    • How about that working water plant? Looks like the mesh saved a bunch of money there. Now all they have to do to add new points is stick in a new node. Having worked at a plant with thousands of miles of wires and huge costs to pull new ones, I can tell you that this is revolutionary, and I would not mind if the man who proposed it dropped out of high school.

      The main reason people had not done this before was that the technology did not exist or was too expensive. We've come a long long way since $200 ethernet cards and $1,000 "portable" phones, no?

      Yeah, I know, I'm responding to a flaming troll, but the answer was so obvious I just had to post it.

  • by gmplague ( 412185 ) on Saturday March 01, 2003 @11:57AM (#5413302) Homepage
    My biggest problem with this type of network is the battery life. Sure, maybe the logistics of the network architecture are sound or whatever, but if my cell phone or my laptop is constantly rebroadcasting packets whenever it's in range of the network, then I'm pretty sure there'll be a substantial drain on my battery life. Maybe when battery life is basically a non-issue this type of network will be feasible, but until then, bleh!
  • by zerofoo ( 262795 ) on Saturday March 01, 2003 @12:03PM (#5413323)
    It seems that every node must be "connected", that is, every node must "touch" every other node. Is that the way this technology works? I couldn't tell from the article. If so, it doesn't look like this type of network will scale very well. (Much like p2p networks...they work around this problem by limiting the number of nodes any other node can see.) I remember talking about this type of network (and its limitations) in college in a graph theory class.

    -ted
    • No, that's the strength of mesh networking. As long as the network can figure out how to get from point A to point B, you only need to have one connection per node (more is better, obviously). The other tricky part is how to optimize the network, so that your packet gets from point A to point B in the shortest time possible (not necessarily the least number of hops!).

      I implemented such a system back in 1996 in VB for 1200 baud half-duplex tactical networks.
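      The routing idea above — optimizing for the fastest path rather than the fewest hops — can be sketched as a standard shortest-path computation where edge weights are link latencies, not hop counts (a minimal illustration; node names and latency values are made up):

```python
import heapq

def best_route(links, src, dst):
    """Dijkstra over per-link latencies (ms), not hop count."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, latency in links.get(node, []):
            nd = d + latency
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Rebuild the path by walking back from dst to src.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Three hops over fast links beat one slow direct link.
links = {
    "A": [("B", 5.0), ("D", 40.0)],
    "B": [("C", 5.0)],
    "C": [("D", 5.0)],
}
path, total = best_route(links, "A", "D")
print(path, total)  # ['A', 'B', 'C', 'D'] 15.0
```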
  • Clarifications (Score:3, Informative)

    by natpoor ( 142801 ) on Saturday March 01, 2003 @12:19PM (#5413373) Homepage
    Let me make a few comments, since Rob happens to be my uncle. He's got a PhD from MIT's Media Lab, and, among other things, used to work at NeXT, so trust me he knows what he's doing. Most comments here question scalability and security, so I'll address those. As some have pointed out, it's a MESH, so the nodes only see other, nearby nodes. The Ember nodes are inexpensive devices (I have a swag version on my fridge downstairs, it's small), if there is a bottleneck you add another one. These devices, as I understand it, are aimed at firms trying to do LAN-type of things where laying cable or fiber is expensive. However, a lot of such places already are wired for power, which was questioned by one poster.
    As for security, again, under the scenarios I am familiar with, these devices are local and low-power, so you'd have to be onsite to snoop. But, the Ember nodes are flexible, low-level devices, so you run what you want over them. I don't see why that wouldn't include any type of encryption.
    Granted, I don't work for Ember (IANAEE), but that's my understanding of it.
  • This is not about networking PC/PDA; it is about replacing the wires going from a sensor to the PLC. If you don't know what a PLC is, this article is not aimed at you. The example they used is sensors in a piping tunnel that would block a standard central base station radio approach. It is about connecting less than a hundred signals from a fixed location going back to another fixed location.

    It is about accepting some additional data loss for not having to run copper or fiber to the sensors. It seems to me it would be of most use to a temporary installation or a spread-out sensor array where lightning could take out your PLC.

  • by megazoid81 ( 573094 ) on Saturday March 01, 2003 @12:48PM (#5413486)
    ... there needs to be a viable 'economic model' in place.

    The main issue at stake here is that each node in the ad-hoc network is both a router and a network node in itself. Consider an ad-hoc network I am participating in when I am riding a bus. Let us say I am watching a thriller on DVD on my device locally. All of a sudden, my two neighboring co-passengers start streaming video from each other's devices and suck up so much bandwidth (and therefore processing power) from my device that my DVD starts to jump right at the climax of the movie. Clearly, this is quite unacceptable.

    In general, if Device A relays some packets on Device B's behalf, then Device B should give it some number of credits that Device A can use in the future to have Device B repay the favor. In choosing an ad-hoc route, the protocol which routes packets through ad-hoc networks must take into account not only how much each device is contributing to the network, but also how equipped they are in terms of processing power, current battery level and the like.
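    The credit idea above can be pictured with a per-peer ledger on each node: relay only while the peer's balance is positive, charge a credit when you forward for them, repay one when they forward for you. A toy sketch (the class and the credit amounts are invented for illustration, not any real protocol):

```python
class CreditLedger:
    """Toy per-peer relay-credit accounting (hypothetical scheme)."""

    def __init__(self, initial_credit=10):
        self.initial = initial_credit
        self.balances = {}

    def balance(self, peer):
        return self.balances.setdefault(peer, self.initial)

    def should_relay(self, peer):
        # Refuse to forward for peers that have exhausted their credit.
        return self.balance(peer) > 0

    def relayed_for(self, peer):
        # We spent bandwidth/battery on their behalf: charge one credit.
        self.balances[peer] = self.balance(peer) - 1

    def relayed_by(self, peer):
        # They forwarded a packet for us: repay one credit.
        self.balances[peer] = self.balance(peer) + 1

ledger = CreditLedger(initial_credit=2)
ledger.relayed_for("node-B")
ledger.relayed_for("node-B")
print(ledger.should_relay("node-B"))  # False: node-B's credit is spent
ledger.relayed_by("node-B")
print(ledger.should_relay("node-B"))  # True: node-B repaid a favor
```

A real protocol would also weigh battery level and processing power into the relay decision, as the comment suggests.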

    • There does not need to be some corporate god-on-high to declare what the viable 'economic model' is before ad-hoc multi-hop networks become commonplace.

      More likely, people will just start deploying them, deploying the software, and restricting access on their own personal machines accordingly until they have something that works. That your DVD scenario would be unacceptable simply means that this particular application won't be attractive to those interested in these mesh networks.

      Hobbyists will deploy the software first and use it when it's convenient until critical mass gets achieved. We're not sure what the precise applications will be, but those will turn up as the technology gets more commonplace.
      • I should probably add that by the term economic model, I did not mean that some corporate god-on-high will set down rules of the road for mesh network use.

        By an economic model (and note that in the original, it was in quotes), I mean that any mesh network routing protocol must take into account the resources of each node when routing packets. The consideration given to various factors in the design of such a protocol constitutes a model for judicious resource use and it is this model that I refer to as the 'economic model' in my original post.

    • That problem will be solved in the next release. Transmission speeds are increasing all the time.

  • If you think p2p software makes copyrights hard to enforce, wait till you have p2p hardware networks where they will be impossible to enforce unless the copyright lords track every single node in relation to every other node. Shall we register our p2p hardware with the government?
  • by erc ( 38443 ) <erc@noSPaM.pobox.com> on Saturday March 01, 2003 @01:02PM (#5413548) Homepage
    Been there, done that. I designed a wireless tactical network based on this idea back in 1995 or 1996. What I discovered is that it's tricky to get routing right if you use a broadcast type of protocol where each node automatically retransmits anything it hears, because the network quickly gets swamped with retransmissions unless you're careful about the timing of retransmissions. The other way to do it (which I implemented) is to exchange routing information throughout the network - sharing information on which nodes any particular node can hear - then it becomes easy to route packets efficiently through the network.

    APRS does a similar sort of thing as the former - it uses a decaying algorithm to determine when to retransmit messages, and so (mostly) avoids the congestion problems inherent in such a design.
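    The decaying-retransmission idea mentioned above can be sketched roughly like this (a deliberate simplification for illustration, not the actual APRS algorithm): each node rebroadcasts a packet only a fixed number of times, doubling the wait between copies, so stale traffic fades out of the network instead of swamping it.

```python
def rebroadcast_schedule(base_interval=30.0, max_copies=5):
    """Decaying retransmission: double the wait after each copy,
    then stop entirely, so old packets die out of the network."""
    t, interval = 0.0, base_interval
    times = []
    for _ in range(max_copies):
        t += interval
        times.append(t)
        interval *= 2  # decay: each successive copy waits twice as long
    return times

print(rebroadcast_schedule())
# [30.0, 90.0, 210.0, 450.0, 930.0] -- and then silence
```

In a real mesh, a node would also suppress its own copy if it overhears another node rebroadcasting the same packet first.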

  • I don't know the technology, so maybe someone can enlighten me: how do mesh networks deal with spoofing or MITM-type attacks? That is, if it's an unreliable network, how does anyone know that the packets came from me?

  • The Grid Project (Score:2, Informative)

    by huberj ( 12015 )
    A friend of mine worked on the Grid Project over at MIT's LCS department...sounds pretty interesting -- they have some test networks set up:

    http://www.pdos.lcs.mit.edu/grid/
  • by Bio ( 18940 )
    A similar product: DIRC [dirc.net]
