If this were a drone using the mobile frequencies only for communication, it would probably use an off-the-shelf cellular modem module and communicate normally over the cellular network. No special testing authority from the FCC would be necessary.
I won't tell you to "shut the fuck up", though . . . I can think of no greater endorsement than attention from a troll such as yourself. Now, go back to your game of "Minesweeper" and leave the grown-ups here on Slashdot to talk about grown-up stuff.
There's mandatory telemetry in Firefox, Chrome, IE, Opera, Ubuntu, Fedora, and a whole lot of other stuff. You're leaking data to everyone. Search habits, Web address look-ups, the lot. Some of it can be removed; some of it can be disabled (notably, the malware checks in Firefox); some of it is designed in (the only way to run system updates without sending a random university, ISP, or Canonical- or Redhat-controlled Web server a list of software you have installed which you intend to upgrade today is to make a complete local mirror of the entire repository).
Nobody knows what's in MS telemetry, so they presume it could be anything at all. A lot of what they presume is also what's already sent out to random actors by any Linux distribution or other free software you've been using, and the only reason nobody cares is that they don't think about it.
Do you know how often I type something into a text box on Reddit or Slashdot, pull up Google to do some research before I post something retarded, and Google immediately suggests exactly what I'm looking for despite me never having searched for that? It's like they can read the text boxes before I even submit the form--or maybe they know I've been on a certain page in some forum where such topic is being discussed, and can guess what I want to know. Either that or the Googlecluster is both self-aware and telepathic.
CentOS installations use either a local repository or they connect to a mirror. No "central server".
Mine connect to mirrors as well. They tell the mirror, "HTTP GET <path to each package>"--one request per package they fetch.
I don't think they query the yum server for every package installed on the system; instead, they download a single file that lists all the packages available in that particular repository, then fetch only the ones they need.
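To put that distinction in concrete terms: the metadata fetch is the same for everyone, but the package fetches that follow it are not. A minimal sketch (repo layout and package names invented for illustration; real paths vary by distro and release) of what a mirror's access log ends up recording:

```python
# Hypothetical sketch of what a mirror's log sees during an update.
# Paths and package names are made up for illustration.

def requests_for_update(packages):
    """Return the HTTP paths a yum-style client fetches for an update."""
    paths = ["/centos/7/os/x86_64/repodata/repomd.xml"]  # one index file, same for everyone
    paths += [f"/centos/7/os/x86_64/Packages/{p}.rpm"    # then one GET per package
              for p in packages]
    return paths

log = requests_for_update(["httpd-2.4.6-97.el7.x86_64",
                           "nginx-1.20.1-2.el7.x86_64"])
# The index fetch reveals nothing specific; the per-package fetches do:
# whoever runs the mirror now knows this IP installs httpd and nginx.
```

The single metadata file keeps the *catalog* private, but every package you actually pull still shows up, one URL at a time, in somebody else's log.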
So, when you download installation packages beyond what's installed by default--and they already know every package in the default set--each request transmits to one of the CentOS mirrors a list of some subset of your packages, and the total from installation to decommission will necessarily detail every single package you install outside the default set--and even, by virtue of pointedly not updating software you've since uninstalled, what you've removed?
There's enough going out to the archive.ubuntu.com and us.archive.ubuntu.com pools for the Canonical secret-NDA conspiracy to track my every software selection and update. This is approximately the same reality as what's going on with Microsoft, except that Microsoft doesn't see every piece of software you install--you only get Microsoft software updates through MS, plus anything you installed through the Windows Store--yet nobody panics about their distro or some untrusted third-party government-controlled university (what do you think gatech.edu is on the CentOS mirror list?) getting their IP address and the names of arbitrary software applications they update.
You leak information like a sieve, and your best argument about why it's okay sometimes is apparently that you leak that information to more people in those cases.
Some nerds want to discuss shit like this. We take stuff from Linus and from some African who bought Debian for loose change all the time.
It depends. Software is technology, too. From that end, Windows 10 actually has some pretty awesome features.
A while back, I installed GitLab on a server with 1GB of RAM. That server immediately went 700MB into swap and... proceeded to behave as if nothing had happened. 40MB of reclaimable memory, but no problem. That was a Linux server with 1GB of RAM and zram allowed to use up to 50% of memory, compressing its load to less than half its original size--about 700MB of open RAM, with 300MB housing 700MB of compressed, swapped-out pages. The compression costs about as much as a worst-case cache miss, and programs tend to do more computation than that on a given block of RAM, so it added no significant performance overhead (less than 1%). This will balloon out of control when you get to a tight enough RAM-to-swap ratio, of course.
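The arithmetic behind those figures, using the post's own numbers (the ~2.3:1 ratio is simply what "300MB housing 700MB" implies, not a measured constant):

```python
# zram back-of-the-envelope, using the figures from the post above.
ram_mb         = 1024        # physical RAM on the box
zram_stored_mb = 700         # uncompressed size of the swapped-out pages
compression    = 700 / 300   # ~2.33:1, implied by "300MB housing 700MB"

zram_physical_mb = round(zram_stored_mb / compression)  # real RAM zram occupies
free_for_apps_mb = ram_mb - zram_physical_mb            # RAM left for everything else
effective_mb     = free_for_apps_mb + zram_stored_mb    # total addressable working set

print(zram_physical_mb, free_for_apps_mb, effective_mb)  # 300 724 1424
```

In other words, a 1GB box ends up with roughly 1.4GB of effective working set, paid for with decompression time that's comparable to a cache miss.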
Windows 10 has a quite similar internal caching system, implemented extremely well. Because of it, Windows 10 can allocate around 24-32GB of RAM on a 16GB system and not care. It won't touch disk at all, and it won't appear to slow down. With the right precaching algorithms, access to this kind of compressed area takes only about twice as long as access to raw RAM--that is, in the same situations where CPU prefetch kicks in, the OS can decompress a few swapped pages into a hot area of RAM before they're needed, given any CPU downtime at all (there almost always is some, even under heavy load from high-intensity gaming or server applications).
That's not just powerful technology; it's a response to the fear of swap areas destroying SSDs. You never need to write swap to disk, and you still get the benefit of taking long-unused data out of RAM and idling it on a slower medium. In this case, you trade 500MB of idle RAM for 160MB, giving you an extra 340MB for stuff that actually matters. A good deal of RAM is never touched again, or belongs to a working set used less frequently than the block cache, so swapping it out is actually faster in the cases where doing so reclaims usable RAM.
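The SSD-sparing trade in the middle of that paragraph, spelled out (figures from the post, not measurements):

```python
# Compressing idle pages in place instead of writing them to disk.
idle_mb       = 500                      # long-unused pages sitting in RAM
compressed_mb = 160                      # the same pages after compression
reclaimed_mb  = idle_mb - compressed_mb  # handed back to data that matters
print(reclaimed_mb)                      # 340
```

No write ever hits the SSD, and the 340MB comes back for free as long as those pages really are idle.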
A lot of scheduler and memory-management changes went into Windows 10, and they're actually great features--some of which I'd wanted on Linux for years and only got a couple of years ago. If Microsoft gets a new BFS-type scheduler in before CK's goes mainline, Windows will actually be a stronger server and desktop OS than Linux--that hasn't happened yet, and MS continues to trail in that respect, but they're quickly fixing all kinds of failures and rough edges that have historically put them far behind modern Linux distributions. Windows 10 doesn't even need a reboot when you power off an HDMI display (8.1 loses the ability to play sound through HDMI).
So Windows 10 makes better use of RAM, avoids wearing out SSDs, handles HDMI displays properly, and has scheduling and memory-management improvements that better leverage modern processors and high-speed disks. It's not yet on par with the latest in Linux technology--but neither is the mainline Linux kernel or any distribution to date, and nothing has caught up to DragonflyBSD in a few key areas. Still, it's better able to leverage modern machines than any prior Microsoft OS.
I'll be excited to see BFS and BFQ in Linux mainline, and similarly-excited if we start getting Dragonfly-esque features like process freezing and, thus, the ability to just store a software session and reload the underlying system on a new kernel. If Windows can catch up, good for them.
The simple problem is that the telemetry has been overstated and overblown. Try to find a comprehensive description of what Microsoft captures about users. What you get is reports of Windows making DNS lookups against hundreds of domains, some chatter about what Windows 10 *could* be doing, and some criticism of ill-thought-out features like Wi-Fi network password sharing. Nobody knows what's happening, but everybody has assumed they do.
The result is a bunch of people talking about how Microsoft is spying on you by, say, identifying all installed software--based on Windows Update removing six particular programs (something that can be done locally, without sending information about them back to Microsoft). Meanwhile, when you run yum or apt, it sends an HTTP request for each individual piece of software you're updating or installing to a central server--which actually does what people claimed Windows 10 does, but doesn't freak anybody out because... reasons. EVERYBODY PANIC!
Every keystroke you enter into your browser's search bar is sent to a remote server, where it's logged in Web server logs. Every domain you look up goes back to a Malware service to block bad sites. Cortana used to search the web if you typed search terms into the Cortana search bar, and people freaked out.
To be fair, people freaked out when Ubuntu started searching Amazon through the Unity bar. It's not that they have legitimate fears; it's that they fear new things, and confusion in groups turns into mass hysteria. You get a few people suggesting folks are just afraid Amazon will see them trying to look for their child porn collection, but that's retarded; the truth is everyone's scared because the next ten people are scared and nobody is inclined to take the time to verify that the next ten people aren't idiots, so they do the reasonable thing and assume (incorrectly) that a million people who have no fucking clue what they're talking about can't be wrong or someone would have told them by now.
Someone like me.
Do you see the problem?