
Comment Re:As a good Slashdotter I didn't RTFA (Score 3, Informative) 36

LTE is not done and is continuing to evolve. To give a rough idea, recent products implement LTE "release 10" (R10), and standardization work will start in a few months on R13, which will not cover anything 5G yet. R13 should arrive in the field around 2017. This Samsung demo is not a standard yet; it's more a technology evaluation / advanced work that will only land in real products in a few years.

It's likely that a real production 5G will come from within 3GPP, the organization that standardizes 2G/3G/4G. At every big transition some people try to go for it with a completely different standard (for 4G: Qualcomm UMB, WiMAX) and it may not be different with 5G, but it would be very unlikely to succeed IMHO. The technology demonstrated here is not universal: it can only work in very dense areas. Which is fine, that's also where we need added capacity. But it means that whereas in time LTE can fully replace 2G and 3G, 5G will be designed to coexist with 4G and will never replace it. At best, you'll have LTE in low-density areas and 5G in dense areas. And even in dense areas there may be a 4G coverage umbrella to provide service continuity.

There's a lot of hype and BS in wireless, so take all throughput / generation targets with a big grain of salt... LTE Advanced defines a "category 8" that goes up to 3 Gbps, for example, but it's only there to get the IMT 4G stamp. Already the initial LTE defined a category 5 that no product ever implemented; it was just there to match the WiMAX 2 peak target rate. It was bollocks and impractical, and nobody cared once WiMAX 2 died. Similarly, the people at IMT got over-excited, stuck in a hype loop, and defined real 4G as the ability to support 1 Gbps. It was nonsense at the time and is still above what's practical. So what did LTE-A do? It introduced realistic new categories 6 and 7 with 300 Mbps down, and a BS category 8 at 3 Gbps. So on paper LTE-A is 4G, because of a category 8 that nobody will implement anytime soon, if ever. I've seen pedants saying LTE is not real 4G but LTE-A is, because only LTE-A meets the 1 Gbps IMT target: what a joke!

The high rates of 5G as demoed by Samsung use a very different approach: a much higher frequency, allowing larger channels and higher data rates. Also, the size of the antennas shrinks with frequency, so it becomes possible to use many small antennas in a device. Each receive path is quite poor compared to LTE to keep the cost down, but that's compensated by having a lot of them. These many antennas are not used for massive spatial multiplexing (SM) MIMO, which would be too computationally expensive, but for a few SM layers as today plus beamforming, since beamforming is cheap. It's a bit early to say it will work well in real life, but it looks promising and worth pursuing.
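To put rough numbers on the "antenna size shrinks with frequency" argument, here's a back-of-the-envelope sketch (a minimal illustration in Python, assuming the usual half-wavelength element spacing; the 28 GHz carrier and the 8x8 array are just illustrative values, not figures from the Samsung demo):

    # Element spacing shrinks with frequency, so many antennas fit in a small device.
    C = 3e8  # speed of light, m/s

    def half_wavelength_mm(freq_hz):
        """Typical array element spacing of lambda/2, in millimetres."""
        return (C / freq_hz) / 2 * 1000

    for f in (2.6e9, 28e9):          # a classic LTE band vs. an illustrative mmWave band
        spacing = half_wavelength_mm(f)
        side_cm = 8 * spacing / 10   # rough side length of an 8x8 square array, in cm
        print(f"{f/1e9:5.1f} GHz: lambda/2 = {spacing:5.1f} mm, 8x8 array ~ {side_cm:4.1f} cm across")
    # ~2.6 GHz: lambda/2 ~ 57.7 mm -> an 8x8 array is ~46 cm across (hopeless in a handset)
    # ~28 GHz:  lambda/2 ~  5.4 mm -> an 8x8 array is ~4.3 cm across (feasible)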

Comment Re:HTC (Score 1) 201

They were sloppy with updates a few years back, at their peak of popularity. I guess they had become complacent then. With their recent challenges it seems they have understood the importance of keeping customers happy, and hence loyal long term, instead of pushing them to upgrade faster by dropping support for old models and just succeeding in annoying people. It's long-term value vs. hypothetical short-term gain. I'm glad they've taken the long view in the end.

Now I find they take updates of "old" phones seriously, at least for their flagships (can't tell for the others). I have the first One (M7); it's about 18 months old now and I have Android 4.4.3 with Sense 6 on it, just like the latest One M8. I've had a handful of updates since I bought the phone, tracking Google's new versions closely. HTC announced Lollipop will be supported on the M7 too. Typically all the big updates are made available within 3 months or so of the Google release. If they keep it like this (nice phones with updates), I'll probably stick with them when replacement time comes.

One important point: I have an unlocked phone, bought without a contract and independently of my operator. Updates may not be as fast when the phone is bought with a contract through an operator, but then the issue would be the operator, not HTC.

Comment Re:They've reinvented CB radio! (Score 1) 153

Yes it will. One of the drivers for D2D is public safety, where the network may not be available. Think of a situation after a big earthquake or hurricane, where cell towers have been damaged and cellular coverage is patchy or entirely gone in some areas. Then D2D can be used locally by public safety people to communicate with anyone having an LTE device supporting D2D (and the vision is that, in time, everybody will). D2D will support both this offline / local mode and a network-assisted mode when you're under cellular coverage. There's also a hybrid case where one party is under coverage and the other is not.

There are other use cases for D2D, but I must say I find most of the "end user" ones gimmicky. Besides public safety, which can potentially be really useful IMHO, there are other interesting use cases for M2M / infrastructure, like supporting car-to-car communications (assisted driving in the future) and coverage extension (mesh-like, although the issue there is always the impact on the relay devices: would you like a meter in the basement draining your smartphone battery? You would need user acceptance, and then it gets complicated).

Comment Re:How much more can we squeeze? (Score 2) 78

As said by others, the fundamental limit is given by Shannon. It defines a maximum throughput for a given spectrum bandwidth and signal-to-noise ratio, and current technologies are already pretty close to it. It also indicates how to increase the total throughput, which can come from the following (a rough numeric sketch follows the list):
  - Adding channels: this is what MIMO spatial multiplexing (SM) is about;
  - Increasing the spectrum bandwidth used: there is a lot of spectrum at high frequencies, with new challenges, and one option for 5G is to use it;
  - Increasing the signal-to-noise ratio: this is what beamforming is about.
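As a rough numeric illustration of the Shannon formula C = B*log2(1 + S/N) and of those three levers (a minimal sketch; the 20 MHz channel and 20 dB SNR are just illustrative numbers, not tied to any particular standard):

    from math import log2

    def shannon_capacity_bps(bandwidth_hz, snr_linear):
        """Shannon limit: C = B * log2(1 + S/N)."""
        return bandwidth_hz * log2(1 + snr_linear)

    B = 20e6                 # 20 MHz channel (a typical LTE carrier)
    snr = 10 ** (20 / 10)    # 20 dB SNR, i.e. S/N = 100

    base = shannon_capacity_bps(B, snr)
    print(f"baseline:             {base/1e6:6.1f} Mbit/s")
    # Lever 1: more parallel channels (MIMO SM), ideally ~N layers = ~N x the capacity
    print(f"2 SM layers (ideal):  {2*base/1e6:6.1f} Mbit/s")
    # Lever 2: more spectrum; capacity scales linearly with bandwidth
    print(f"5x bandwidth:         {shannon_capacity_bps(5*B, snr)/1e6:6.1f} Mbit/s")
    # Lever 3: better SNR (e.g. beamforming); note the log: +10 dB gives far less than 10x
    print(f"+10 dB SNR:           {shannon_capacity_bps(B, 10*snr)/1e6:6.1f} Mbit/s")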

Having more MIMO SM layers (i.e. concurrent channels) is not practical. The complexity of an MMSE decoder is O(L^3), with L the number of layers, so it gets ugly quickly. Today MIMO SM is typically limited to 2 layers in practice, with 4 likely coming and 8 the practical limit (and that may not be so practical, really...).
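To make the O(L^3) point concrete, a trivial bit of arithmetic (the absolute cost is of course implementation dependent; only the ratios matter here):

    # Relative cost of an O(L^3) MMSE detection step vs. the common 2-layer case.
    for layers in (2, 4, 8):
        print(f"{layers} layers: ~{layers**3 // 2**3}x the 2-layer detection cost")
    # 2 layers: ~1x, 4 layers: ~8x, 8 layers: ~64x -> "gets ugly quickly"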

Using very high frequencies (above 10 GHz) gives access to a lot of free spectrum, but the higher one goes the lower the reach for a given power budget. To compensate for the high attenuation this is coupled with massive multi-antenna arrays; the talk for 5G is 64 to 256 antennas. These are split between a few very costly MIMO SM layers and the rest for cheap beamforming. So for example 256 antennas could behave like four 64-element beamforming patches supporting 4 MIMO layers. Of course with that many antennas and RF transceivers you have to compromise on cost and quality. So it's a lot of poor receive chains, vs. a few very high quality ones today. But there's still the potential to gain overall.
It has challenges though: it will still be for small cells (low reach) and rather low mobility (the beam steering cannot track high-speed mobiles, plus small cells don't work well for highly mobile devices: too many handovers). But because most people move slowly, and the places where capacity is most needed are urban centers where small cells are OK, it can still be a win.
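To give an idea of how the beamforming compensates the attenuation, here is a crude free-space comparison (a sketch assuming the Friis free-space model and an ideal 10*log10(N) array gain; the 2.6 GHz vs. 28 GHz figures are only illustrative):

    from math import log10, pi

    def fspl_db(freq_hz, dist_m):
        """Friis free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
        return 20 * log10(4 * pi * dist_m * freq_hz / 3e8)

    d = 100.0  # metres, a small-cell-ish distance
    extra_loss = fspl_db(28e9, d) - fspl_db(2.6e9, d)
    bf_gain = 10 * log10(64)   # ideal gain of one 64-element beamforming patch

    print(f"extra free-space loss at 28 GHz vs 2.6 GHz:   {extra_loss:.1f} dB")
    print(f"ideal beamforming gain of a 64-element patch: {bf_gain:.1f} dB")
    # ~20.6 dB of extra path loss vs ~18.1 dB of array gain: beamforming buys back
    # most of the frequency penalty (before blocking, rain and implementation losses...)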

But as one can see, high-speed 5G won't be universal like 4G is. By this I mean that 4G can (and will) completely replace 2G and 3G in time, while this high-frequency / massive-beamforming 5G could only complement 4G in high-density urban places, and will never be suitable for lower-density areas (rural) where 4G would stay.

And then there's the elephant in the room: a lot of the improvements in telecoms have been riding on Moore's law. With the scaling problems now starting to be discussed more openly, how much more processing power we can use for 5G, and what users are prepared to pay (in cost and power) for all these improvements, are interesting questions.

Comment Re:what the FEC... (Score 1) 129

FEC is commonly used in streaming-over-IP applications to cope with lost packets; see for example Raptor, the "above IP" FEC used in LTE multicast (eMBMS) and DVB-H. In those applications the L2 transport has its own error detection scheme, so IP packets will typically be either fully lost or OK (in other words, the probability of receiving a corrupted packet is very low).
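A toy illustration of this kind of packet-level (erasure) FEC, the sort of thing that sits "above IP" (just a single XOR parity packet per block, nothing like the real Raptor codes, but it shows why whole-packet losses are recoverable):

    # Toy packet-level FEC: one XOR parity packet per block of k source packets.
    # Any single lost packet in the block can be rebuilt from the k remaining ones.
    def xor_packets(packets):
        out = bytearray(len(packets[0]))
        for p in packets:
            for i, b in enumerate(p):
                out[i] ^= b
        return bytes(out)

    source = [b"pkt0....", b"pkt1....", b"pkt2....", b"pkt3...."]
    parity = xor_packets(source)

    # The L2 transport drops packet 2 entirely (it never delivers a corrupted one).
    received = [p for i, p in enumerate(source) if i != 2]
    recovered = xor_packets(received + [parity])
    assert recovered == source[2]
    print("recovered:", recovered)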

Comment Re:not fast (Score 2) 117

The sloppiness in the PC ACPI world can bite Windows too. You can find nice ASRock mini PCs based on laptop hardware. If you look at a Tom's Hardware review of the ASRock VisionX 420, with a mobile Core CPU and a mobile AMD GPU, you'll see that it consumes 28 W at idle. This is crazy high for what's in effect a laptop in a small form factor box. One of the big reasons is that the system ACPI tables claim that PCIe ASPM (the low-power mode of PCIe) is not supported. Configuring laptop-mode on Linux and forcing ASPM results in idling at only ~12.5 W, and a quieter box. Enabling the ASPM power-saving mode alone saves ~8 W; the rest is due to other suboptimal Windows defaults, I guess.
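For reference, a minimal sketch of what "forcing ASPM" looks like on a Linux box (assuming a kernel with PCIe ASPM support; when the firmware claims ASPM is unsupported you typically also need the pcie_aspm=force boot parameter, and forcing it on broken hardware can hang the machine, so treat this as an experiment, not a recipe):

    # Read and switch the PCIe ASPM policy through sysfs (run as root).
    POLICY = "/sys/module/pcie_aspm/parameters/policy"

    with open(POLICY) as f:
        print("available policies:", f.read().strip())   # active one shown in [brackets]

    with open(POLICY, "w") as f:
        f.write("powersave")                              # enable the low-power link states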

So IMHO the "let's do the minimum and ship" approach of PC vendors, to save a buck, hurts Windows and Linux both. On the Windows side you'll usually get something considered good enough for its product class (here power consumption was apparently not considered relevant for an HTPC) but likely not optimal. On Linux you get a mess by default because, as you say, vendors can't be bothered with it. But with some work you can actually get something quite good.

Comment Re:The title is a bit misleading (Score 1) 32

Absolutely, and that's why I'm talking about "peak throughput" above and not just throughput. Increasing capacity allows either offering a higher average throughput, or offering the same average throughput to more devices concurrently, or any mix in between. It does matter; in fact, changing field a bit, capacity is what really matters in the cellular world. Stretching things a bit to make a point, you could say that in cellular, peak throughput is for marketing while capacity improvements are for real-life concerns (increasing average throughput / reducing cost per bit). Unfortunately the latter is harder to sell, so it is often not put forward, but that's where the real work is.

Still, it's too bad that on a geek site one has to pass off a capacity improvement as a peak improvement to make it sexier. Let's learn to love capacity for its own sake ;)

Comment Re:A rare sight (Score 2) 32

Thanks! The MAC layer principles are the same in 802.11ac as in 802.11n. The MU-MIMO feature is really made for the AP-to-stations direction (downlink). The AP can decide on the combined transmission on its own, based on queued packets: if the AP has packets buffered for several stations that can be sent using MU-MIMO, it can merge them into a single PHY access. There's a gotcha on the ACK side: you don't want the receiving stations to ACK at the same time, so block ACK is used for all stations but one. Those other stations wait for a block ACK request from the AP, and the AP makes sure there is no collision.
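A very rough sketch of that AP-side decision logic (purely illustrative, not how any real 802.11ac firmware is structured; the stream counts and the greedy grouping rule are made up for the example):

    # Toy downlink MU-MIMO grouping at the AP: pick per-station queues up to the AP's
    # stream budget, send them in one PHY access, take the immediate block ACK from one
    # station and poll the others with Block ACK Requests so they never collide.
    def mu_group(queues, ap_streams=4):
        """queues: {station: (buffered_frames, station_streams)}."""
        group, used = [], 0
        for sta, (frames, streams) in queues.items():
            if frames and used + streams <= ap_streams:
                group.append((sta, streams))
                used += streams
        return group

    def transmit(group):
        print("one PHY access to:", ", ".join(f"{s} ({n} streams)" for s, n in group))
        first, *rest = group
        print(f"immediate block ACK from {first[0]}")
        for sta, _ in rest:                  # polled one by one by the AP, so no collision
            print(f"Block ACK Request to {sta}, then its block ACK")

    transmit(mu_group({"A": (10, 1), "B": (4, 2), "C": (7, 1)}))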

For more details I can recommend "Next Generation Wireless LANs" by E. Perahia and R. Stacey. I have no relation to the authors, but I've used their book and found it good. Warning: it's only for those who really want to go into the details (but it beats the IEEE spec handily on readability ;)

Comment The title is a bit misleading (Score 5, Informative) 32

The title could lead some to believe that MU-MIMO increases the peak throughput, which is not the case. Spatial multiplexing (SM) MIMO allows as many independent concurrent streams as there are antennas on the receiver and the transmitter (the min of both sides, actually). So with 4 antennas on the AP and 3 on the station, for example, you can have 3 streams. With SU-MIMO, all three streams are used between the AP and a single station. With MU-MIMO the AP can share its streams among several stations: for example 1 stream to station A and 2 streams to station B. There is a little bit of degradation, of course, compared to single-user operation. It's a win when you have, for example, a 4-antenna AP and only 2-antenna stations: instead of leaving half the capacity on the floor, you can make use of all the streams. But it doesn't increase the peak rate possible with SU-MIMO; it increases the AP capacity when devices don't have as many antennas as the AP, which is the usual case.
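A little sketch of the stream accounting, just the min() arithmetic from the paragraph above (illustrative only):

    # Each link is limited by min(AP antennas, station antennas), and the AP cannot
    # use more streams in total than it has antennas.
    def su_streams(ap_antennas, sta_antennas):
        return min(ap_antennas, sta_antennas)

    def mu_streams(ap_antennas, stations):
        used, alloc = 0, {}
        for name, antennas in stations.items():
            s = min(antennas, ap_antennas - used)
            if s > 0:
                alloc[name] = s
                used += s
        return alloc

    print("SU, 4-antenna AP / 2-antenna station:", su_streams(4, 2), "streams (half the AP capacity idle)")
    print("MU, 4-antenna AP / two 2-antenna stations:", mu_streams(4, {"A": 2, "B": 2}))
    # -> {'A': 2, 'B': 2}: all 4 AP streams in use, same per-station peak as SU-MIMO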

Comment Re:Some context from a hardware perspective. (Score 1) 147

True, but at least on a PC the main CPU can protect itself against any device by using an IOMMU. And for smartphones ARM also has an IOMMU IP (the SMMU). I don't think it's commonly used yet, but in principle at least it is possible to hook any smart peripheral with an embedded processor (or several nowadays, for a 4G cellular modem for example) behind the SMMU, and the SMMU to the SoC NoC providing access to system memory. Then the application processor (AP) can restrict any memory access from those devices.

So even if such a smart peripheral is exploited, if it sits behind an IOMMU/SMMU the AP can restrict which parts of memory it can access, just like an MMU provides such protection between processes. The peripheral, even if inside the SoC die, can be sandboxed.
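A crude model of what that sandboxing means, just to fix ideas (nothing to do with a real SMMU/IOMMU programming interface; the device name and memory window below are invented):

    # Toy IOMMU/SMMU: each DMA-capable device only gets a window of system memory;
    # any access outside its window is rejected.
    class ToyIommu:
        def __init__(self):
            self.windows = {}                # device -> (base, size)

        def map_window(self, device, base, size):
            self.windows[device] = (base, size)

        def dma_allowed(self, device, addr, length):
            base, size = self.windows.get(device, (0, 0))
            return base <= addr and addr + length <= base + size

    iommu = ToyIommu()
    iommu.map_window("baseband", base=0x8000_0000, size=0x0010_0000)  # a 1 MiB buffer

    print(iommu.dma_allowed("baseband", 0x8000_1000, 512))   # True: inside its sandbox
    print(iommu.dma_allowed("baseband", 0x0010_0000, 512))   # False: the rest of RAM is off-limits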

Is this how it is done today? I guess not. For SoCs, I'm not sure many chip vendors pay for the additional SMMU IP unless they have another use case that requires it (like supporting virtualization). And even if the hardware is there, I'm not sure it is actually used for security containment as I described. Could a Linux expert comment on that, for the PC world at least?

Now of course an IOMMU/SMMU will never protect against an embedded chip purposely designed to take control of the system and access system memory without restriction, like the management engine on some Intel chips (the vPro feature, IIRC). But as you point out there are a lot of smart peripherals, and if we could at least mitigate exploits coming from them, it'd be that many fewer potential holes.

Last point: security always has a cost. Someone needs to handle the IOMMU/SMMU management to do the sandboxing, and a sandbox may prevent zero-copy, requiring extra copying by the AP in some cases (to move data between AP-reserved memory and a given device's sandbox).

Comment Re:Bad example. (Score 1) 459

But it has one very bad idea, which cannot be fixed: the mechanical function keys are gone. Instead you have touch-sensitive keys, with no travel. One can adapt to a new layout for a product one uses often and long (OK, after some grumbling...), and with some utilities remap keys if needed (switching Ctrl and Caps Lock is very common). But you can't add real keys where there are none, and you can't select an alternative keyboard with the usual function keys either.

It's very ironic that their new slogan is "ThinkPad, for those who do". I guess it's for those who don't do enough to remember each function key's role in the applications they work with, and need an icon on the key to remind them. And who use all this infrequently enough not to care about touch-sensitive keys.

I for one am very disappointed with this nonsense. I was looking forward to treating myself to a light laptop with a high-resolution screen and long battery life with this update. I guess Asus or somebody else can thank the "innovators" at Lenovo who pushed this crap through.

Comment Re:All the other OS, too. (Score 1) 352

Then criticize such designs for being insecure; I'm fine with that. But rightfully criticizing a weak design and condemning the whole of the cellular world are two different things. The article does the latter, and that's why I think there is exaggeration (with some errors and misunderstandings too).

Comment Re:All the other OS, too. (Score 1) 352

In theory it is possible. In practice the network operator would see the extra traffic (and so would the NSA, it seems ;), so the risk of getting caught red-handed seems way too high to me. It could kill a brand.

It's up to everyone to decide on their paranoia threshold and what they feel they need to protect against. But anyone who really cares about their own security and would trust 1) their smartphone, including a lot of closed-source software even with Android, 2) a complex baseband stack, and 3) a hugely complex operator network (or even several, for an international call), just doesn't get security. You want real security? You need a trusted terminal host stack and an end-to-end secure connection (SRTP, or VoIP over a VPN, etc.). Then you don't have to trust anything in between, as everything else only sees encrypted traffic. You still have to trust the terminal host stack, so best if one can audit it. This is overkill for most. But that's what secure phones are, with a price to match.

Comment Re:Over-the-air Security Protocols (Score 1) 352

Encryption is optional in the standard, but in practice it is always enabled in production networks.

For the downgrade attack, I completely agree and mention this elsewhere. When a next-generation cellular technology is out, there aren't many changes/updates to the previous ones, as operators focus their investments on the latest. And because of this we still have the 2G issue you mention. That's unfortunate, but in practice I believe the only fix will come when 2G is phased out. That's still a few years away in most countries, though it has already been done in some places (some big operators in Japan and South Korea).

Comment Re:Exploits for baseband processors (Score 1) 352

"Certification requirements" is the key thing here, and it's a lot of work for vendors (can't really be lazy and succeed in this space ;).

Spectrum is a shared medium, and the worst jammer is a buggy device. Because of this there are strict certification requirements before a device is legally allowed on the air. And going through all the associated tests costs a lot of money: it's a lot of time with expensive testing hardware and in the field (after passing the "safe for the network" part). It's expensive both for operators and vendors, by the way. This cost makes everyone quite conservative: any change will go through a cost assessment. So it's not that they don't care about security; they do, and even if there have been holes in the old versions, a lot of work goes into making the system secure. But not at any price.

To be practical, the target is good-enough security for the average person. And that's OK. If you really have higher protection requirements than that, there is no way around having your own controlled end-to-end scheme. I would expect anyone claiming security is critical, and taking it seriously, to have figured this out.
