Comment Re:orly? (Score 4, Informative) 120

Your speeds are off by about an order of magnitude. In mobile data terms and technical terms, it breaks down like this:

1G = analog / AMPS service or similar .. 2400 bps on a good day, plus whatever hardware error correction and data compression (MNP10) the modem offers -- circuit-switched technology (you're taking up a line on the tower)
2G = CDMA / GSM circuit-switched data at a base rate of 9600 bps
2.5G = packet-switched CDMA 1X / GSM GPRS or EDGE .. nominally 144 kbps max, usually 50-70 kbps .. GSM had different EDGE profiles for higher speeds, but the base was in this range
3G = CDMA 1xEV-DO / GSM HSDPA .. 3.1 Mbps on CDMA .. up to 14.4 Mbps and higher on GSM (though getting a contiguous spectrum block available for the full speed is problematic when mixed with voice traffic and paging channels)
3.5G = current-spec WiMAX and LTE .. nominally 10 Mbps down .. the biggest difference is how they scale; whereas 3G CDMA might deliver 3.1 Mbps per sector, WiMAX / LTE can deliver that per user given enough spectrum
4G = the most recently published goalpost .. something like 100 Mbps sustained while mobile, and higher in fixed / limited-mobility scenarios .. WiMAX 2 / LTE Advanced
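To make the order-of-magnitude gaps concrete, here's a quick sketch of how long a 5 MB download would take at each generation's headline rate. The rates are the rough nominal figures from the list above, not measured throughput:

```python
# Time to pull a 5 MB file at each generation's nominal headline rate.
# These are best-case numbers; real-world throughput is well below them.
rates_bps = {
    "1G (AMPS + modem)": 2_400,
    "2G (circuit-switched)": 9_600,
    "2.5G (1X / GPRS)": 144_000,
    "3G (EV-DO)": 3_100_000,
    "3.5G (WiMAX / LTE)": 10_000_000,
    "4G (LTE Advanced)": 100_000_000,
}

FILE_BITS = 5 * 1024 * 1024 * 8  # 5 MB expressed in bits

for gen, bps in rates_bps.items():
    print(f"{gen:22s} {FILE_BITS / bps:10.1f} s")
```

Roughly five hours on 1G versus under half a second on the 4G target, which is why the generation labels actually mean something.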

Comment well .. (Score 1) 75

Well, expecting to get more output from the same input is of course illogical and impossible, but if a company puts in the planning, development, and engineering resources up front to make it happen, then the scalability claims in the marketing copy can be delivered on to some extent.

But the way some (most?) deployments seem to go makes it cost-prohibitive to put the distributed database, distributed applications, and fault-tolerant components in place from the start.

Comment other ideas (Score 1) 178

1. Push everything you can into the 5 GHz range. Whereas 2.4 GHz (b/g/cheap N) only has 3-4 non-overlapping channels, 5 GHz (a/n) has dozens of fully non-overlapping channels available, which should make spectrum contention much less of an issue in that band.

2. Depending on the geographic area you need to cover, back the power levels down, using either commercial gear that allows it by default or one of the free third-party firmwares (DD-WRT/Tomato), so that you don't exacerbate cross-AP contention in the b/g range.

3. Directional antennas

4. Disable the DHCP servers in your APs and set up two or more subnets with their own physically separate DHCP servers.

5. If there is any flexibility in AP placement, it is generally better to take an "edges in" approach, with directional antennas on the perimeter APs and at least 4 quadrants covered in the central area, though if this is in the US you're going to be limited to 3 non-overlapping b/g channels.

There is a diagram in the Cisco CCNA wireless study materials with a frequency-reuse map laid out for maximum spectral efficiency and minimum overlap, though with only 3 mounting points you won't be able to use much of it.

As far as the per-user available bandwidth being small and latency climbing steeply with more users .. on paper that's true, but I find it extraordinarily unlikely that ALL users will power up and attempt access at exactly the same time. If that really were the case, then I'd say scrap the AP plan, scrounge up some 10/100 switches, and go wired.
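The shared-medium arithmetic behind that worry is just division. A quick sketch, where the 20 Mbps usable-throughput figure is my own assumption for a single g/N-era AP, not a number from the question:

```python
# How one AP's usable throughput divides among simultaneously active
# users. 20 Mbps is an assumed usable figure, not a measured one.
AP_THROUGHPUT_MBPS = 20.0

for users in (1, 5, 10, 25, 50):
    print(f"{users:3d} active users -> ~{AP_THROUGHPUT_MBPS / users:5.2f} Mbps each")
```

Even 50 genuinely simultaneous users still see ~0.4 Mbps each, and in practice only a fraction of associated clients are transmitting at any instant.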

Comment Re:Other considerations (Score 1) 325

I live in Kansas with KCP&L .. my bill itemizes kWh used multiplied by the cost per kWh. No separate charge based on time of day or anything else.

It might be true on the utility side, where one customer subsidizes another by default, but that isn't borne out in the billing for residential customers. Since the value proposition of the energy storage system in TFA is based on soaking up power during off-peak rate periods and using it during peak periods, my payback period would be much longer, which is the point I was making.
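A toy payback calculation makes the point; every number here is a made-up illustration, not from KCP&L's actual tariff or the system in the story:

```python
# Storage arbitrage only pays when there's a peak/off-peak spread.
PEAK, OFF_PEAK = 0.18, 0.06    # $/kWh on a hypothetical TOU tariff
SHIFTED_KWH_PER_DAY = 10       # energy the battery time-shifts daily (assumed)
SYSTEM_COST = 5_000            # installed cost in $ (assumed)

annual_savings = (PEAK - OFF_PEAK) * SHIFTED_KWH_PER_DAY * 365
print(f"TOU payback: {SYSTEM_COST / annual_savings:.1f} years")
print("Flat-rate payback: never -- the spread is zero, so savings are zero")
```

Even with a generous 3:1 peak/off-peak spread the hypothetical system takes over a decade to pay off; on a flat rate there is nothing to arbitrage at all.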

Comment Other considerations (Score 3, Informative) 325

Some of these technologies are of no use to those of us who live in areas where the cost of energy is the same all day, all night, and year round.

Part of that may be the problem (no intelligence in the infrastructure). But in the meantime, if I were to put up solar or any other resource that would benefit from storing energy for later use, it would throw the payback-versus-normal-utility curve so far off that I'd have to live here for decades to get my money back in anything but smugness.

As far as Li-ion battery technology goes, it seems the Prius used NiMH batteries because their number of charge/discharge cycles was greater; since the batteries in the story are expected to see one cycle per day, the owner would realistically have to replace them every 3-4 years.
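That 3-4 year figure follows directly from cycle-life arithmetic; the ~1,200-cycle rating below is my assumption for an early Li-ion pack, not a number from the story:

```python
# One full charge/discharge cycle per day against an assumed
# ~1,200-cycle rated life for an early Li-ion pack.
CYCLES_PER_DAY = 1
RATED_CYCLES = 1_200  # assumed cycles before significant capacity fade

years = RATED_CYCLES / (CYCLES_PER_DAY * 365)
print(f"~{years:.1f} years at one full cycle per day")
```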

As far as the greater energy content of Li-ion batteries goes, that is a risk that is always present with batteries. As long as the controller / charger is smart and has a layer or two of fault checking, the risk of a runaway thermal event is pretty low. (The problem people had with lithium AA cells, where they were available, came from putting them into standard NiCd or NiMH chargers, which apply too much current too quickly and make them pop and start fires. Since this is an integrated system from Panasonic, with no evident capacity to mix and match technologies, I'd say the risk is low.)

It would be possible with standard deep-cycle lead-acid batteries, but then you have to provide climate control for the batteries above and beyond what's proposed, and then you're dedicating a good chunk of floorspace to them (you can't stack them, because of heat buildup while discharging). I know the central offices I've been in dedicated a good chunk of their floorspace to power alone, and even then only for the few minutes it takes the diesel to kick over .. and you don't want to know what happens to expensive telephone equipment when it starts getting fed progressively less than 48VDC.

Comment Re:It's not the cities, it's the spaces in between (Score 4, Interesting) 108

Whatever you might believe from the commercials run by the larger players, there will never be absolutely seamless coverage across the nation, because ..

1. There are places nobody lives (or it's economically unfeasible to cover)
2. Transmit powers are 1/12 of what they were in the analog era
3. They can't just throw a tower up anywhere

Back when analog bag phones were the norm, I never found anyplace without coverage .. why? Because analog phones had a nominal 3-watt transmit power, which let carriers space towers dozens of miles apart and still deliver a reliable signal. Today's mobiles operate at 0.25 watts or less, and since the 3G-spec devices currently at or becoming the norm are based on CDMA technology (CDMA or WCDMA/HSPA), effective coverage only shrinks as the load on the tower rises. (Under CDMA, handset transmit power is driven down as load rises to keep the noise floor low and fit more devices on the tower, with the net effect of creating islands of service if the network has hot spots and the carrier doesn't plan accordingly.)
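The range difference between a 3 W bag phone and a 0.25 W handset is easy to sketch under an idealized free-space model (real terrain has path-loss exponents of 3-4, which narrows the gap, but the direction holds):

```python
import math

# Received power falls as 1/d**2 in free space, so for the same
# receiver sensitivity, usable range scales with sqrt(transmit power).
P_ANALOG_W = 3.0    # nominal analog bag-phone transmit power
P_MODERN_W = 0.25   # typical modern handset transmit power

ratio = math.sqrt(P_ANALOG_W / P_MODERN_W)
print(f"Free-space range advantage of the bag phone: ~{ratio:.1f}x")
```

Roughly 3.5x the reach per tower, which is why analog-era cells could be dozens of miles across.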

As far as towers are concerned, I remember reading an article from upstate New York about a stretch of state highway that had pristine views and a very high mortality rate in the winter, because nobody had cell service up that way. The local government bodies sued and cajoled the cell carriers into building coverage, but wouldn't let them put up towers tall enough to provide service in an economically feasible way. Net result: no coverage and more deaths, but the view was still great.

Comment signoffs (Score 2) 293

In Kansas City, most of the local stations signed off at 9 AM.

I thought it fitting that WDAF-TV4 ended their broadcast with

1. a crude "1949-2009" graphic
2. A few seconds of the old Indian-head test pattern
3. The old Stars and Stripes footage they had always run at sign-off every day

Followed by a "ceremony" with some back-office engineers pushing the big button you aren't supposed to press.

Comment My first usage (Score 1) 739

Walked into my first real tech job in 1995 at a local ISP and discovered Linux (Slackware 3.0 and I think 2.2).

We were using it to run CERN httpd web servers on Pentium 75 desktops and 486/33-class machines with 8 megs of RAM.

I remember being knee-deep in swap all the time, that we ran a 0.99 kernel forever, and that in today's environment we'd have our lunch eaten, because the boxes were running (and using) every usable service known to man at the same time (HTTP / SMTP / DNS / NNTP / POP3).

We had to set up remote-reboot capability because a local television station would flash their website on screen during the nightly news, and we'd get murdered when they posted an .au audio file of the evening newscast.

On Linux-specific stuff, I just remember the lack of loadable modules, answering hundreds of yes/no questions to recompile the kernel, no SSH anywhere to be seen, and it being a big deal when we installed things like top.
