This seems like it was specifically designed to generate libel lawsuits.
So the shills have been copying Scientology's "what are your crimes" deflection tactic for a while now. I wonder who approved that plan, as it didn't work for CoS either.
Even if we assume this is accurate and this "telemetry" data is the only spying they are doing (a patently incorrect assumption), this is still an incredible amount of metadata being collected.
A lot of people - even some that should know better - have bought the propaganda that spying on "metadata" doesn't matter. In reality, metadata (or "anonymous" usage statistics) is the most valuable data that can be collected in bulk. As former CIA and NSA director Michael Hayden said, "We kill people based on metadata."
This data is obviously profitable for the businesses using surveillance as a business model - to them, you are the product - but that's not the biggest problem.
Knowing what programs you run - and when you run them - can be enough to start building a pattern-of-life profile. When do you wake up? When do you spend time near your home router's IP address running a web browser? When do you tend to run MS Office, with the telemetry coming from an IP owned by a business instead of your usual home IP? I'm sure modern data analysis tools could extract a lot more interesting information from telemetry data.
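To show how little analysis this takes, here is a minimal Python sketch. Everything in it is invented for illustration - the record format, app names, IPs, and timestamps are hypothetical, not from any real telemetry feed:

```python
from collections import Counter
from datetime import datetime

# Hypothetical telemetry records: (app_name, ISO timestamp, source IP).
events = [
    ("browser", "2016-01-04T07:12:00", "203.0.113.5"),   # home IP
    ("browser", "2016-01-05T07:05:00", "203.0.113.5"),
    ("office",  "2016-01-04T09:30:00", "198.51.100.7"),  # some business IP
    ("office",  "2016-01-05T09:40:00", "198.51.100.7"),
    ("browser", "2016-01-04T22:50:00", "203.0.113.5"),
]

def first_activity_hour(events, ip):
    """Estimate wake-up time: the earliest hour of day with activity from an IP."""
    hours = [datetime.fromisoformat(ts).hour for _, ts, src in events if src == ip]
    return min(hours)

def work_ip(events):
    """Guess the workplace IP: the most common source IP for office software."""
    counts = Counter(src for app, _, src in events if app == "office")
    return counts.most_common(1)[0][0]

print(first_activity_hour(events, "203.0.113.5"))  # earliest activity from home
print(work_ip(events))                             # where MS Office runs from
```

Two trivial aggregations already yield a wake-up estimate and a likely employer IP; real tooling with months of data would do far better.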
That's what you get when you buy a product that depends on a single vendor for its mission-critical supply chain.
As you are a Marine, you should be concerned about how much of your ability to function as an armed force depends strictly on a single vendor. Engineering fields, and especially defence suppliers, have traditionally required a second source for any mission-critical parts.
Then again, what do I know? Given that the armed forces seem to be fine depending on China for most military hardware, what's another Sword of Damocles hanging over our heads?
if the owner of the PC chooses
No, the OEM will get to choose, just like they do today in other areas. I suppose the laptops with UEFI SecureBoot enabled don't exist in your world?
I work for Intel
So you're a collaborator. I hope you like the future you're creating. Maybe you should wake up to what is actually happening in the world?
Every time I see people discussing AMT, they leave out the final piece of the puzzle: Intel's SGX ("Software Guard Extensions") instructions that are in Skylake and future CPUs. SGX lets a program set up "secure enclaves" in RAM that are encrypted in the CPU and cannot be accessed by other programs, including the OS itself. As the data is encrypted outside of the CPU, you cannot even use a cold-boot attack or a logic analyser to access the data the hard way.
The only people talking about these instructions seem to be the occasional crypto researcher musing about how this could be a nice feature for protecting private keys. I'm sure that's possible, but Intel clearly has another goal in mind.
1. Allow application developers to protect sensitive data from unauthorized access or modification by rogue software running at higher privilege levels.
5. Enable the development of trusted applications [...]
6. Enable software vendors to deliver trusted applications and updates [...]
8. Enable applications to define secure regions of code and data that maintain confidentiality even when an attacker has physical control of the platform and can conduct direct attacks on memory.
In case anybody has forgotten, "trusted applications" is a dog whistle for DRM, originally popularized by Microsoft when they announced "Palladium". Good luck investigating what AMT is doing when the RAM it uses is encrypted.
Of course, some people in this very thread are already apologizing for Intel and claiming AMT isn't a threat. They probably said the same thing about Windows 10, too, with claims that the spyware wasn't important because it could (with much hassle) be disabled. Well, good luck in future Windows versions when the spyware is an encrypted SGX enclave.
The problem is Intel's new SGX ("Software Guard Extensions"). They allow the creation of memory regions that "maintain confidentiality even when an attacker has physical control of the platform and can conduct direct attacks on memory". The CPU encrypts RAM so you cannot pull keys out of it with a cold boot attack or a logic analyser on the memory bus.
Of course, the rare news article about SGX likes to assume this is something intended for the user so they can protect their GPG keys. What nobody is talking about is that this lets, for example, Microsoft create unbreakable DRM. MS will finally have their infamous Palladium "trusted computing" platform. They have already started the chain-of-trust with UEFI's SecureBoot. I hope people are taking the hint now with the Windows 10 scandal and fleeing the platform, because you aren't going to be able to remove their spyware once it is in the "trusted" enclave.
If that isn't worrying enough, consider what hidden SGX enclaves mean for Intel's System Management Mode - the network-enabled BIOS feature that allows remote access - which is already in your computer if you have an Intel system newer than ~2010. This even works independently of the installed OS, so you can't get away from SMM by using Linux.
Ever get the feeling you don't actually own your computer? Current "trusted computing" design allows an untrusted OS to run most of the time by implementing the DRM/spyware at a lower hardware protection ring while making sure plaintext never leaves the CPU.
You're deliberately conflating ownership of a creative work's copyright with ownership of an individual copy of that work (a copy made by the party who did own the copyright). The only right granted by copyright is a monopoly on who can create new instances (copies) of a given work, and that right absolutely does not extend beyond that.
This is called the first-sale doctrine, which recognizes that reproduction rights are distinct from distribution rights: copyright grants only the former, and the distribution right ends at the first sale. If a retailer buys a copyright-protected work at wholesale, they can sell it however they like as long as they do not create any more copies. Likewise, if you buy such a work, you can use it for whatever you like, provided you don't make additional copies. If the party that owns the copyright wants more control over what happens after the first sale, they can always negotiate a contract with additional restrictions. This happens often when publishers sell wholesale to retailers. Just remember that an EULA is not a contract, and anybody that buys something in a simple retail transaction ("I pay you money, you hand me $GAME" only) has not agreed to any extra restrictions.
A lot of publishers really wish they could control their product after the first sale so they can eliminate the resale market. They can dream all they want, but that doesn't change the law.
Note: it's a mistake to assume someone is looking for the forecast for their current location, or for the GPS location reported by their network device (which may not be the same as their current location). If your service only worked by GPS, it would give the wrong forecast in some cases.
How about asking the user, and respecting their choice? Ask them if they want to give their GPS location for a specific forecast, or if they would prefer to type in a zip and get a generalized forecast. There are reasons people might want either of those options, and they might like it if your service supported both. It's not like it would be hard (just look up a default location for each zip and use that instead of the GPS; it only requires one table in the DB). You probably already do this for backwards compatibility with non-GPS-enabled devices.
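The zip fallback really is a one-table lookup. A minimal Python sketch, with a hypothetical table and function names (the coordinates are rough illustrative values, not a real geocoding source):

```python
# Hypothetical zip-to-coordinates fallback table; in production this
# would be one table in the DB rather than a literal dict.
ZIP_CENTROIDS = {
    "10001": (40.7506, -73.9972),   # roughly Manhattan, NY
    "94103": (37.7725, -122.4147),  # roughly San Francisco, CA
}

def forecast_location(gps=None, zip_code=None):
    """Prefer a GPS fix the user explicitly opted into; otherwise fall
    back to the centroid of a user-supplied zip code."""
    if gps is not None:
        return gps
    if zip_code in ZIP_CENTROIDS:
        return ZIP_CENTROIDS[zip_code]
    raise ValueError("no usable location")

# User declined GPS and typed a zip instead:
print(forecast_location(zip_code="94103"))
```

Either input path ends at the same forecast code, so supporting both costs almost nothing.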
The *only* reason not to offer that is if you aren't really interested in providing weather forecasts, but instead are trying to jump on the surveillance-as-a-business-model bandwagon. If that's the case, you should think long and hard about your new job - do you really want to be associated with peeping toms?
In the beginning was the plan.
And then came the assumptions.
And the assumptions were without form.
And the plan was without substance.
And darkness was upon the face of the workers.
And they spoke among themselves saying,
"It is a crock of shit and it stinketh."
And the workers went unto their supervisors and said,
"It is a pail of dung and none may abide the odor thereof."
And the supervisors went unto their managers and said,
"It is a container of excrement and it is very strong, such that none may abide by it."
And the managers went unto their directors, saying,
"It is a vessel of fertilizer, and none may abide its strength."
And the directors spoke among themselves, saying to one another,
"It contains that which aids plant growth and it is very strong."
And the directors went unto the vice presidents, saying unto them,
"It promotes growth and is very powerful."
And the vice presidents went unto the president, saying unto him,
"The new plan will promote the growth and vigor of the company, with powerful effects."
And the president looked upon the plan and saw that it was good.
And the plan became policy.
This is how shit happens.
There are many different types of threats on the net, and a few of them are law enforcement.
Why is it that so many people seem to think that it's no big deal to open a connection to a random host on the internet? That puts you in yet another situation where you have to enumerate badness.
In this case, what you just described allows someone to probabilistically verify that someone saw a page (regardless of how they got the HTML - email/spam, HTTP, or a README.html found in a warez dump).
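The mechanism is the classic "web bug": a per-recipient token embedded in a resource URL, so the fetch itself reveals who rendered the HTML. A minimal sketch in Python (the tracker URL and parameter names are invented for illustration):

```python
import secrets
import urllib.parse

def make_tracking_img(base_url, recipient_id):
    """Build a 1x1 image tag with a per-recipient token in the URL.
    When the tracker later sees this token fetched, it learns
    (probabilistically) that this recipient viewed the HTML."""
    token = secrets.token_hex(8)
    query = urllib.parse.urlencode({"r": recipient_id, "t": token})
    return f'<img src="{base_url}?{query}" width="1" height="1">', token

tag, token = make_tracking_img("https://tracker.example/pixel.gif", "user42")
print(tag)
```

The server side is just a request log keyed by token - it doesn't matter whether the HTML arrived by mail, HTTP, or a copied file, only that it was rendered with network access.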
I suggest thinking long and hard about what any of this data can be correlated with (temporally, or as a matching surrogate key), and remember that it doesn't have to work all the time. Single data points are usually safe on their own, but the pattern that emerges when you join someone's data trail together can be very detailed.
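The join is the dangerous part, and it is a one-liner. A sketch with two individually "harmless" datasets sharing a surrogate key (all keys and values below are made up):

```python
# Two datasets that look harmless in isolation, keyed by the same
# hypothetical surrogate key (a device ID).
telemetry = {          # device_id -> apps seen running
    "dev-a1": ["tax-software", "browser"],
    "dev-b2": ["game", "browser"],
}
wifi_logs = {          # device_id -> places the device associated
    "dev-a1": ["cafe-downtown", "clinic-eastside"],
    "dev-b2": ["airport"],
}

def join_trails(*datasets):
    """Merge every dataset sharing a surrogate key into one profile."""
    profile = {}
    for ds in datasets:
        for key, values in ds.items():
            profile.setdefault(key, []).extend(values)
    return profile

profiles = join_trails(telemetry, wifi_logs)
print(profiles["dev-a1"])  # apps *and* places - far more revealing together
```

Neither source knew both what you run and where you go; the joined profile knows both, which is why "it's only metadata" misses the point.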
In this post-Snowden world, we need a reduction in the data that browsers transmit.
I'm even fine with changes that break existing software if it is required to fix a security issue. Unfortunately, we have people confusing design with implementation, and even more that follow the Not Invented Here principle. Anybody in that camp may want to read "Things You Should Never Do" by Joel Spolsky.
When programmers see an old, messy project, they tend to want to rewrite it. They see that mess and believe they know how it could "obviously" be simplified. Usually this is incorrect, as the simplified abstraction they are picturing doesn't actually implement the same features. That "mess" is actually the accumulation of important details learned over time: subtle cases not covered by the original design, evolving requirements, and expensive bug fixes. The rewrite has none of that, and those details will eventually be added back, making the rewrite just as messy as the original at the cost of many man-years of effort. Throwing out the "mess" is throwing out a project's most valuable asset: real-world experience.
Hmm... a n00b that doesn't understand good engineering practice? (Do you buy new screwdrivers every few years to keep up with useless changes? If you want to keep your Phillips or Torx screwdrivers, you're just afraid of change.)
Or a shill that is trying to use identity politics and tribal politics to drive a wedge through the Free Software community?
Or should we just go with "idiot"?
Did you miss the GTK+ 3.0 drama?
The GNOME idiots have been making it a point to break compatibility and remove "old" (aka "working", "currently used") features. You are delusional if you think they will continue supporting X once they declare the Wayland version to be "standard".
Of course, they'll probably use their typical victim-blaming approach, where they claim that keeping the old version around is "too much work" that should be done by someone else.
Never trust an operating system.