I would be a little concerned that the silicon monolayer would grow a native oxide very quickly and thus be consumed.
The solution in a HEMT transistor is cool in this respect. It uses an undoped III-V semiconductor next to a highly doped layer, and excess carriers form a two-dimensional electron gas at the interface. The carriers move along the interface in the undoped semiconductor, which, since it is undoped, has better mobility and fewer defects than doped material. It must be something like this property they are trying to re-create with a silicon monolayer.
If that is the case, it must be specified how a SIM card requests this blocking from the phone. Otherwise it is not likely to work across different manufacturers of phones and SIM cards. If there is a specified way of doing this, it must be part of the GSM protocol.
Alternatively, this is behavior specified by certain network operators who buy phones and SIM cards in bulk and mandate an unofficial spec extension from both the SIM card and the phone manufacturers.
In the latter case I think the problem lies with the operator. You cannot blame Nokia, Motorola, Samsung, Apple etc. for doing business with AT&T, Vodafone, Hutchison and the like. If an extra feature is a requirement for selling to these operators in the first place, what are you to do? The customer is always right, and in the subsidized markets the customer is the operator, not the punter using the phone.
I am not sure I understand the above text. If it is the SIM card that disables the setting, why is this then labeled a deliberate choice by the cell phone makers?
Also, on numerous Nokia mobile phones I have seen an icon in the display that notifies you, at least in some instances, when encryption is disabled. (This happens quite frequently in e.g. China.)
Could you elaborate on your claim that a receiver with a ~0 dB noise figure can detect a signal 15 times smaller than a receiver with a ~1 dB noise figure? I would expect around 1 dB of improvement in performance.
You are talking about the noise figure of the receiver, and that will differ between handsets, as will the receiver algorithms recovering the transmitted information from the received signal.
For a given standard, one implementation might require 3 dB s/n to reach a given BER, while other implementations might need 2 dB or 4 dB, depending on the amount of signal processing and the quality of the algorithms used. For a given sensitivity you can then design your receiver with a higher noise figure if your algorithms are better, or you can live with lower baseband performance if you design the analogue front end with lower noise.
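Something like this back-of-the-envelope Python illustrates the trade-off (a rough sketch; the bandwidth, noise figure and s/n numbers are made up for illustration, not taken from any real handset):

import math

def sensitivity_dbm(noise_figure_db, required_snr_db, bandwidth_hz, temp_k=290.0):
    # Sensitivity = thermal noise floor (kTB) + noise figure + s/n needed by the baseband.
    k = 1.380649e-23  # Boltzmann constant, J/K
    noise_floor_dbm = 10 * math.log10(k * temp_k * bandwidth_hz / 1e-3)
    return noise_floor_dbm + noise_figure_db + required_snr_db

bw = 200e3  # example channel bandwidth in Hz (roughly a GSM channel)

# Two hypothetical implementations reaching the same sensitivity:
# better baseband algorithms (lower required s/n) pay for a noisier front end.
print(sensitivity_dbm(noise_figure_db=2.0, required_snr_db=4.0, bandwidth_hz=bw))  # ~ -115 dBm
print(sensitivity_dbm(noise_figure_db=3.0, required_snr_db=3.0, bandwidth_hz=bw))  # ~ -115 dBm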
You write: "the minimum detectable signal is often defined as the point where the signal power equals the input-referred noise power." That will be the case for a system and an algorithmic implementation where you can detect a signal at 0 dB s/n. For systems with coding gain you will typically be able to detect signals at negative s/n, and in WCDMA mode a phone is likely to be able to decode a signal at -20 dB s/n.
The thermal noise floor is around -174 dBm/Hz, depending on temperature.
This will be the same for all phones.
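Putting rough numbers on this (a quick sketch; the WCDMA chip and bit rates below are the usual 3.84 Mcps and 12.2 kbit/s voice figures, used only as an example):

import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 290.0          # room temperature, K

# Thermal noise density: about -174 dBm/Hz at 290 K, identical for every handset.
print(10 * math.log10(k * T / 1e-3))        # ~ -174 dBm/Hz

# Going from a 1 dB to a 0 dB noise figure improves sensitivity by 1 dB,
# i.e. roughly a factor of 1.26 in signal power, not a factor of 15.
print(10 ** (1.0 / 10))                     # ~ 1.26

# WCDMA despreading gain for a 12.2 kbit/s channel at 3.84 Mcps:
# about 25 dB, which is why a signal well below the noise can still be decoded.
print(10 * math.log10(3.84e6 / 12.2e3))     # ~ 25 dB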
I get your point.
It might be possible to maintain a "gcc-light" compiler written fully in C and have the gcc build scripts build this bootstrap compiler first. The gcc-light does not need to be fast or efficient, since it will only be used for bootstrapping. It might even be possible to implement it as a pre-processor converting C++ into C.
Since there are platforms for which C++ compilers already exist, you can compile the compiler on one of these and then cross-compile it for the target platform.
This is also how you bootstrap a C compiler on a platform it is not initially implemented for.
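A toy sketch of the build ordering described above (the stage names and components, including "gcc-light", are hypothetical illustrations, not actual gcc build targets):

# Each entry: (stage, what it is built with, what it produces).
BOOTSTRAP_PLAN = [
    ("stage 0", "an existing C compiler (native, or on a cross-compile host)",
     "gcc-light: plain C, slow is fine, only ever run during bootstrap"),
    ("stage 1", "gcc-light (possibly as a C++-to-C pre-processor plus a C compiler)",
     "the full gcc"),
    ("stage 2", "the stage-1 gcc",
     "the full gcc again, to check that it can rebuild itself"),
]

for stage, built_with, produces in BOOTSTRAP_PLAN:
    print(f"{stage}: build {produces}, using {built_with}")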
Garbage In -- Gospel Out.