Comment Re:Consider your hardware (Score 1) 907

I have a ThinkPad T500 with the "switchable" ATI/Intel graphics. Under Vista it's fine using either the ATI or the Intel card, and I get about 5 hours of runtime off the 9-cell battery pack.

Under Linux, X uses the Intel card by default (presumably because of its lower PCI bus ID), but seemingly leaves the ATI card clocked at full speed. Not only does the machine get much hotter than when running Vista, but battery life drops to about 2.5 hours or less.

My solution - go into the BIOS and switch off the "switchable" graphics, so Linux just uses the Intel card. Battery life soars back to about the same 5 hours as under Vista, and the temperature drops substantially too. I don't really do anything at present that warrants the high-performance ATI display.

It appears that X just isn't ready for switchable graphics (or most modern ATI cards, for that matter).

Comment Re:Frequency, not just technology (Score 2, Interesting) 202

Actually, all half-duplex Ethernet, regardless of physical media, even up to 100 Mbps (Gig-E doesn't support half-duplex), uses CSMA/CD.

And any system that uses a contention-based method to determine who can transmit will be prone to jitter, due to the randomness of when a device wants to transmit. This includes 802.11, which uses CSMA/CA (collision avoidance, rather than the collision detection used by Ethernet).
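To illustrate where that jitter comes from: it's baked into the random backoff itself. Here's a minimal Python sketch of 802.11-style binary exponential backoff - the slot time and contention-window sizes are illustrative numbers, not taken from any particular PHY:

```python
import random

def csma_backoff_delay(attempt, slot_time_us=20, cw_min=16, cw_max=1024):
    """Pick a random backoff delay (in microseconds) for a given retry
    attempt.  The contention window doubles on each retry, capped at
    cw_max, as in 802.11-style CSMA/CA binary exponential backoff."""
    cw = min(cw_min * (2 ** attempt), cw_max)
    return random.randrange(cw) * slot_time_us

# Five frames sent under identical conditions still see five different
# delays -- that spread IS the jitter inherent in a contention MAC.
random.seed(1)
delays = [csma_backoff_delay(0) for _ in range(5)]
print(delays)
```

A TDMA system has none of this: each peer's slot arrives at a fixed, predictable time, so latency stays flat regardless of load.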

Most wireless technology that has to guarantee specific latency to multiple clients uses some sort of static TDMA or TDD scheme.

WiMAX / 802.16e does support QoS (and dynamic TDMA), including a real-time polling service for VoIP applications. Perhaps the telco was just using a Best Effort configuration.

I used to deploy a lot of outdoor wireless gear from Proxim (and previously Orinoco). Most of their gear used either a proprietary MAC in the same band as 802.11 (i.e., the 2.4 GHz ISM band) or some completely proprietary concoction, such as their circular-polarised gear in the 5 GHz ISM band.

Orinoco was one of the first companies to solve 802.11's "hidden node" problem - where peers can be NLOS from each other (and thus unable to hear when the others are transmitting) - by using a polling system controlled by a master node that could see all peers. A standard 802.11 network would have performed very badly in such a scenario, due to frequent collisions. This proprietary system was essentially TDMA, and it ensured relatively consistent latency (apart from frames dropped due to RF noise).

Proxim Tsunami MP gear used a strict TDMA system to ensure that peers could only TX when given permission. The base stations had a 60-degree beamwidth, and to get 360-degree coverage you simply put six of them together in a pod, on alternating channels. They used GPS time signals to sync all units in the pod, ensuring that they all had synchronised TX slots - they'd all transmit at exactly the same time, then go into RX mode at the same time.
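The GPS-synchronised slotting can be sketched like this - every unit derives its TX/RX phase from the shared clock, so no two sectors in the pod can disagree. (The frame length and TX/RX split here are made-up numbers for illustration, not the real Tsunami MP framing parameters.)

```python
def tdma_phase(gps_time_us, frame_us=10_000, tx_fraction=0.5):
    """Return 'TX' or 'RX' for a GPS-derived timestamp (microseconds).
    Every unit computes this from the same clock, so all six sectors
    in a pod flip between transmit and receive in lockstep."""
    offset = gps_time_us % frame_us
    return "TX" if offset < frame_us * tx_fraction else "RX"

# Any two units reading the same GPS time agree on the phase:
print(tdma_phase(2_500))   # first half of the frame
print(tdma_phase(7_500))   # second half of the frame
```

Because the schedule is a pure function of the shared clock, there's no over-the-air coordination traffic at all - which is exactly why collisions between sectors simply can't happen.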

They also had a similar system called the QuickBridge, which could run at up to 54 Mbps aggregate bandwidth - and unlike 802.11g, it actually delivered a throughput of 54 Mbps, not 20 Mbps (which is the best I ever saw from 802.11g). It used a TDD system, as there were only two units in the configuration. Using some simple traffic shaping, we successfully ran a 2 Mbps voice circuit across it, had terminal server traffic running (even fancy screensavers within the terminal session, to stress it out a bit), all while copying large files in BOTH directions. Everything performed perfectly, and the voice was crystal clear. OK, the traffic shaping was partially responsible, since it policed bandwidth and prioritised the VoIP - but the main thing to note is that TDD/TDMA systems can carry heavy traffic in both directions without causing massive numbers of retransmits.
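The shaping side is conceptually just strict-priority queuing: VoIP frames always dequeue ahead of bulk data, so the voice circuit never waits behind a file transfer. A minimal Python sketch of the idea (not the actual shaper configuration we used, and with hypothetical class numbers):

```python
import heapq

class PriorityShaper:
    """Strict-priority dequeue: class 0 (VoIP) always leaves before
    class 1 (bulk data).  A sequence number preserves FIFO order
    within a class, since heapq alone is not stable."""
    def __init__(self):
        self._q = []
        self._seq = 0
    def enqueue(self, prio, frame):
        heapq.heappush(self._q, (prio, self._seq, frame))
        self._seq += 1
    def dequeue(self):
        return heapq.heappop(self._q)[2]

s = PriorityShaper()
s.enqueue(1, "bulk-1")
s.enqueue(0, "voip-1")
s.enqueue(1, "bulk-2")
print(s.dequeue())  # voip-1 -- jumps the queue despite arriving second
```

A real shaper would police each class's bandwidth as well, but even this bare scheme is enough to keep voice latency flat while bulk traffic saturates the link.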
