It's up to everyone to decide on their paranoia threshold and what they feel they need to protect against. But anyone who really cares about their own security and would trust 1) their smartphone, which includes a lot of closed source software even with Android, 2) a complex baseband stack, 3) a hugely complex operator network, or even several of them for an international call, just doesn't get security. You want real security? You need a trusted terminal host stack and an end-to-end secure connection (SRTP, VoIP over a VPN, etc.). Then you don't have to trust anything in between, as everything else just sees encrypted traffic. You still have to trust the terminal host stack, so it's best if you can audit it. This is overkill for most. But that's what secure phones are, with a price to match.
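To make the point concrete, here is a toy Python sketch of why end-to-end encryption removes the need to trust intermediaries: only the two endpoints hold the key, so the baseband, the cells, and the operator networks in between carry nothing but ciphertext. The HMAC-based stream construction is a deliberately simplified stand-in for what SRTP or a VPN really does; don't use it for actual traffic.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream from the shared key (toy CTR-like construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        block = nonce + counter.to_bytes(4, "big")
        out += hmac.new(key, block, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes):
    nonce = os.urandom(12)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ciphertext, keystream(key, nonce, len(ciphertext))))

key = os.urandom(32)  # shared only by the two endpoints
nonce, wire = encrypt(key, b"hello over a hostile network")
# Everything in between (baseband, cell, operator core) only sees (nonce, wire).
assert decrypt(key, nonce, wire) == b"hello over a hostile network"
```

The design point is simply that the trust boundary shrinks to the two terminal host stacks; everything else becomes an untrusted pipe.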
For the downgrade attack, I completely agree and mention this elsewhere. When a next-gen cellular technology is out, there are few changes or updates to the previous ones, as operators focus their investments on the latest. And because of this we still have the 2G issue you mention. That's unfortunate, but in practice I believe the only solution will come when 2G is phased out. That's still a few years away in most countries, though it has already happened in some places (some big operators in Japan and South Korea).
Spectrum is a shared medium, and the worst jammer is a buggy device. Because of this there are strict certification requirements before a device is legally allowed on the air. And going through all the associated tests costs a lot of money: it's a lot of time with expensive testing hardware and in the field (after passing the "safe for network" part). It's expensive both for operators and vendors, by the way. This cost makes everyone quite conservative. Any change will go through a cost assessment. So it's not that they don't care about security; they do, and even if there have been holes in the old versions, a lot of work goes into making the system secure. But not at any price.
To be practical, the target is good-enough security for the average person. And that's OK. If you really have higher protection requirements than this, there is no alternative to having your own controlled end-to-end scheme. I would expect anyone claiming security is critical and taking this seriously to have figured this out.
The big thing is that the encryption is between the device and the cell (base station). The assumption is that the cell is secure, and that behind it the operator network is secured by other means. So it's important to protect the cell (the eNB in LTE) against compromise. A fake cell won't work, as in LTE the authentication is mutual: the UE won't attach to just any cell, except for an emergency call.
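A rough sketch of the mutual authentication idea, as a heavily simplified stand-in for the real LTE AKA procedure (which uses MILENAGE, sequence numbers, and a full key-derivation hierarchy): the network must prove knowledge of the SIM's shared secret before the UE answers the challenge, and that is what defeats a fake cell.

```python
import hashlib
import hmac
import os

def mac(k: bytes, data: bytes, label: bytes) -> bytes:
    """Toy keyed MAC with a domain-separation label (not the real MILENAGE f-functions)."""
    return hmac.new(k, label + data, hashlib.sha256).digest()

K = os.urandom(16)  # shared secret held by the SIM and the operator's HSS

# Network side: issue a challenge plus a network-authentication token.
RAND = os.urandom(16)
AUTN = mac(K, RAND, b"net")          # proves the network knows K

# UE side: verify the network FIRST (a fake cell cannot compute AUTN) ...
assert hmac.compare_digest(AUTN, mac(K, RAND, b"net"))
# ... and only then answer the challenge.
RES = mac(K, RAND, b"ue")

# Network side: verify the UE's response.
assert hmac.compare_digest(RES, mac(K, RAND, b"ue"))
```

In 2G only the second half exists (the network challenges the phone), which is exactly why fake 2G cells work.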
For more details have a look at the 3GPP TS 33.401 spec, for example the latest R9 version.
What is possible, however, is that when your device's cellular radio is on and the baseband is enabled, the SIM can directly use the baseband to communicate with the network using what is called the SIM Toolkit (STK). This can be done with or without the user being informed. The STK also offers many features like transforming the numbers you dial (to seamlessly add a routing prefix, or redirect), filtering calls (block or accept), getting and reporting a location, etc. The specs are public: look for 3GPP TS 31.048 and ETSI 102.223 (which use USAT and CAT instead of STK, but it's all the same thing under different names).
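To illustrate the kind of control this gives the SIM, here is a toy model of the STK/USAT "call control by SIM" feature, where the SIM sees every dialed number before the call is placed and may allow, rewrite, or block it. The rules and the prefix below are invented purely for illustration.

```python
# Possible outcomes of the SIM's call-control decision (toy model).
ALLOWED, MODIFIED, BLOCKED = range(3)

def sim_call_control(dialed: str):
    """Toy model of STK/USAT 'call control by SIM': before the baseband
    places a call, it hands the number to the SIM, which may allow it,
    rewrite it, or block it. These example rules are made up."""
    if dialed.startswith("+1900"):
        return BLOCKED, None                 # e.g. block premium numbers
    if dialed.startswith("+"):
        return MODIFIED, "*55" + dialed      # invented routing prefix
    return ALLOWED, dialed
```

The user just sees the call go through (or not); the rewrite happens below the host OS, which is why this can be invisible to the phone's software.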
The baseband can also be on a separate die, connected with some interface like SDIO (for QCOM), HSI, or USB HSIC.
I believe the article is a bit sensationalistic and misses the real danger: a compromised base station. That's what the quoted source articles talk about. If you can compromise a cell you can spy on traffic without any attack on the UE (encryption is only between the device and the cell). A fake cell is an issue with 2G, but since then authentication has been mutual: in LTE a device authenticates the cell too, and won't work with a fake one. But that doesn't protect against a compromised cell. This is mostly a risk with small and femto cells, as macro cells are easier to protect. The only interest I see in compromising the BB is to use it as a vector to attack the host processor (which has been done), where you have access to much more interesting stuff. That requires a security exploit on the host side too. On its own the BB isn't really a very interesting attack target.
While I'm at it, there are other not-so-serious claims here. The fact that one can redirect calls to voice mail with an AT command has nothing to do with baseband security. A baseband supports a control interface, and usually even two: 1) a modern but proprietary interface, and 2) the standard but old-fashioned AT interface. You can do a lot with these commands, with no need to compromise the BB. But normally such access is limited to trusted applications, so if anyone can access this it's a host security issue, not a baseband issue.
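For illustration, here is roughly what driving that AT interface looks like. AT+CCFC is the standard call-forwarding command from 3GPP TS 27.007; the shape of the modem object and the numbers are assumptions for the sketch (in practice you'd hand it a pyserial handle on the modem's tty).

```python
def forward_to_voicemail(modem, number: str) -> str:
    """Enable unconditional call forwarding with the standard AT+CCFC
    command (3GPP TS 27.007). `modem` is any object with write()/readline(),
    e.g. a serial port handle. Returns the modem's one-line reply."""
    cmd = f'AT+CCFC=0,3,"{number}"\r'   # reason 0 = unconditional, mode 3 = register
    modem.write(cmd.encode("ascii"))
    return modem.readline().decode("ascii").strip()  # typically "OK" or "ERROR"
```

No exploit anywhere: this is the documented control plane the host exposes to trusted telephony applications, which is exactly the point.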
The baseband doesn't contain one RTOS but usually several OS instances. There's at least one RISC core (typically ARM), possibly more, and at least one DSP, possibly more. And likely more than one OS: having an instance running Linux is common, with the other(s) on an RTOS or even bare-bones schedulers (depending on the complexity of the task at hand and the timing constraints). This varies a lot from one BB design to another, but as a rule of thumb, for a modern LTE-capable BB expect two RISC cores and two DSPs (YMMV).
The mutual authentication I've talked about already. The practical issue here is that when the next gen is out, there's not much interest in doing big upgrades to the previous generations. So the lack of network authentication in 2G will stay with us until 2G is phased out, which is still a few years away in most places (big Japanese networks have already killed 2G, however).
The P5600 core is being touted as supporting up to six cores in a cache-coherent link, most likely similar to ARM's CCI-400.
The CCI-400 is not relevant here. In both the MIPS and ARM worlds, CPUs are now multi-core capable out of the box. One cluster can typically be configured with 1 to 4 cores, and for this latest MIPS core up to 6. L2 management is handled as part of the cluster, which also typically supports coherency with external hardware accessing the L2 through one or several coherency ports. The L1 cache(s), the L2, and that hardware are kept coherent inside a cluster (with some limitations at times on the low end; there are variants). All this can be taken for granted at the high end, as here.
Now what the CCI-400 does is different: it extends coherency management across several clusters. This is very important in the ARM world because of the big.LITTLE scheme: you want the big cluster and the little cluster to be kept coherent to speed up and ease the transition between the low-power and high-performance modes (it also helps when all cores are in use at the same time, as the OS can migrate tasks between cores more efficiently).
Three years later and Broadcom had no working LTE. Even Intel, who started announcing its first LTE chip in 2011, is only starting to show hardware two years later. It's likely Broadcom management is not happy at all about the situation, and in the end what's happening here is that they're sacking the Beceem team and replacing it with the Renesas LTE team. As they make a point of saying they only acquired the LTE assets of Renesas, they may keep the historical Broadcom 2G/3G people. I'm only guessing from public info here, YMMV.
Now I'm NOT laying the blame on the Beceem guys here. Integration into a bigger company can be complicated, with lots of turf wars. It's very possible that the in-house Broadcom 2G/3G team had their own plans to develop LTE (skunkworks style, maybe) and didn't see the Beceem acquisition in a good light. As LTE needs to be integrated with 2G/3G, you need good cooperation between the teams. It's also possible that some key Beceem guys were unhappy and left, leaving the rest in trouble. They could have over-promised too. What I mean is that there are many ways to fail in such an acquisition; I don't know what happened and can't and won't lay the blame on anyone. I'm just trying to clarify a bit what went on.
Then there's the fact that, reportedly, only the Renesas LTE part is of interest. Renesas had a full 2G/3G/LTE system as far as I know. Are they cutting out the 2G/3G part and keeping their own to please the internal guys? How easy and fast will such surgery be? We're not at the end of the story...
Rising defect densities have created a situation where — for the first time ever — 20nm wafers won't be cheaper than the 28nm processors they're supposed to replace.
The economic side is often left out of tech site discussions, but it matters a lot. Up to now we had a sustainable situation: the cost of each new process increased regularly, but the cost per chip on the new process eventually ended up lower. This got everyone on board and also grew the reachable market, bringing in more revenue. That's why we have small microcontrollers everywhere nowadays.
Now, when the cost of new processes increases, only the part of the market that truly needs the improved density and performance will move on. And that's only a small part of the whole market. So we will have increasing costs with a shrinking addressable market. Double whammy. Expect end prices for high performance to rise quickly. That may slow things down significantly.
We'll see how it develops soon, but I would expect the economics to bite before we reach the tech limits.
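A back-of-the-envelope sketch of why rising defect densities can cancel out the benefit of a shrink, using the simple Poisson yield model Y = exp(-D*A). Every number below is made up purely for illustration.

```python
import math

def cost_per_good_die(wafer_cost: float, die_area_mm2: float,
                      defect_density_per_cm2: float,
                      wafer_diameter_mm: float = 300) -> float:
    """Illustrative cost of one working die under the Poisson yield
    model Y = exp(-D * A). Ignores edge loss and test cost."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    dies_per_wafer = wafer_area / die_area_mm2
    die_area_cm2 = die_area_mm2 / 100.0
    yield_ = math.exp(-defect_density_per_cm2 * die_area_cm2)
    return wafer_cost / (dies_per_wafer * yield_)

# Same design, shrunk to half the area on the new node, but with a more
# expensive wafer and a higher defect density (illustrative numbers):
old = cost_per_good_die(wafer_cost=5000, die_area_mm2=100, defect_density_per_cm2=0.1)
new = cost_per_good_die(wafer_cost=9000, die_area_mm2=50, defect_density_per_cm2=0.4)
# With these inputs the shrink no longer makes the die cheaper.
```

With these (invented) inputs the two costs come out within a few percent of each other: the shrink still doubles the die count, but the pricier wafer and the yield hit eat the whole gain, which is exactly the "20nm no cheaper than 28nm" situation the article describes.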
Owning its own fab means that Intel can tweak process technology to match the particulars of a given architecture (and vice-versa)
That may be understood as an Intel exclusive, but it's not entirely true. Even in the fabless world the big shots (Qualcomm, NVidia, AMD & co.) have very early access to new process nodes and can certainly tune their designs to them, and have their own specific tweaks made. So they can do both kinds of adaptation too, although it's not as integrated as at Intel. If you draw a line, Intel is at one extreme, able to have close integration, and the small fabless companies are at the other, taking the stock TSMC, GF, or UMC offering as-is. The big fabless guys are somewhere in the middle.
ARM, in contrast, is limited by the decisions of the foundry manufacturers it partners with.
It's also a bit misleading. ARM has early access to all the big fabs (GlobalFoundries and TSMC), and because ARM is so pervasive there is very high pressure on a fab to provide the best ARM implementations on their process. So sure, in the end it's the fab making the decisions on their process. But you can bet they will pay a lot of attention to any ARM feedback gained during the early-access co-work.
ARM doesn't only provide processor IP; they now cover the whole range from memory cells to GPUs to interconnects to memory controllers. And they work with the fabs to optimize their designs for them, and provide their customers "Process Optimization Packages" (POPs) that summarize how to get the best out of a process for their IP. So ARM has the know-how, the access, and the pull to have a big say in what happens in the fabs' roadmaps.
- To reduce resistance in wires (which in turn reduces power dissipation), an LP variant uses thicker wires. For a given process node, this means the distance between the walls of two parallel wires is reduced compared to a high-performance process using thinner wires. So the process must be more tightly controlled to avoid shorts and get a good yield.
- LP processes are used for mobile devices, where the selling price is lower than for high performance. A middle-of-the-road Haswell is ~$200; a high-end mobile AP is ~$15. Other high-perf devices are GPUs and FPGAs, and in both cases the selling prices are much higher than in low-power applications. This means you can get away with a higher silicon cost and a lower yield.
This is why all fabs start a new process node with the high-performance variant first, as it's easier and you can get away with a so-so yield. Then some move to the trickier low-power variant, where it's more technically challenging and on top of that you need to get the cost low.
Now, Intel is certainly the king of high-performance processes. But for low-power mobile, TSMC has been producing 28 nm parts for some time now, while what you can buy from Intel is still 32 nm. The 22 nm low-power Atoms from Intel are supposed to be available real soon now. But the key question is their cost: Intel could afford to take a hit to grab some market initially, but in the long run they need to be competitive at the lower price points seen in mobile. Whether they'll be able to do that is still an open question. Having good performance from hand-picked 22 nm Atoms and having 22 nm parts in products are good steps, but they're not sufficient. In the long run Intel must be able to make enough money on low-cost parts. TSMC knows how to do that and will be there; for Intel, it remains to be proven.
Another important point is that TSMC has experience supporting fabless customers, as that's its core business, while for Intel this is still very new. People may not appreciate how difficult all this is and how key good support is. There's a lot of know-how and process built on painful experiences, and even with a good process it takes time to build a good service organization. Intel has started down this path, but mostly with simple designs (FPGAs are the simplest you can get; that's why they're always first on any new process node), and most are not out yet.
Lastly, Apple has massive volumes and can take only limited risks on the production of their APs. TSMC looks like a much safer bet at this point. If Intel can prove they really can deliver in volume at low cost with good support (it's economics and process here, not good technical metrics on a few samples), then it will still be possible to switch in the future. At this time it would be a bit reckless IMHO. And I'm sure Apple did their due diligence on all this (with people who do understand the details of this complex business).
If blobs were ONLY firmware, they could run ONLY on the device, and could be loaded once at installation time. Very few fall into this category. (Some wifi chips do load this way upon every boot).
Even when a firmware blob runs only on the device, I would expect it to be loaded every time the device is reset, particularly for a WiFi chip. If you want the blob to be persistent you must add local flash to the WiFi subsystem, which increases the BOM cost. At the very low prices in WiFi this is just not acceptable anymore; the chipmaker would sell nothing. So there's no such local storage (except maybe for a minimal bootloader, which could be in on-chip ROM) and the chip loads its executable from the host at each reset. I've worked on systems (cellular, not WiFi) that do just that.
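A minimal host-side sketch of that download-at-every-reset scheme. The chip interface (reset/write_mem/start) and the chunk size are hypothetical stand-ins for whatever the real host interface (SDIO, USB, ...) provides.

```python
CHUNK = 2048  # transfer unit over the host interface; an arbitrary example value

def download_firmware(chip, blob: bytes) -> None:
    """Host-driver sketch: after every reset the chip's RAM is empty (no
    local flash), so the driver streams the firmware blob chunk by chunk
    into the chip's memory, then releases the core. `chip` is a
    hypothetical object exposing reset()/write_mem()/start()."""
    chip.reset()                       # chip comes up in its tiny ROM bootloader
    for off in range(0, len(blob), CHUNK):
        chip.write_mem(off, blob[off:off + CHUNK])
    chip.start()                       # jump to the freshly downloaded image
```

The blob lives on the host filesystem, which is why updating it is just a file swap, and why the chip is useless until the host driver has run.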