So if he injected himself with all the marijuana the car won't drive him home?
It is true that the article is written poorly, but Oracle is in fact discriminating against everybody. This is true of many companies.
The way it works is simple:
The managers are racist. They pay X group more - usually white men.
But the company does not WANT to pay a lot of money. So the hiring managers are told to actively find people who are not white men. Then they offer these hires less money.
For this to work, it requires widespread racism among hiring companies combined with a slightly desperate population.
Honestly, if they stop there, it's not that bad. They sift off the cream of desperate people, helping them out. Theoretically the company would end up dominated by the disadvantaged group.
But it doesn't stop there. What happens next is the real problem: internal discrimination.
When it comes time to promote people, they only promote the X group (white men). After all, those were the people getting paid the most and who, because of internal discrimination, were given both the best opportunities and the most credit.
So you end up with a racist company paying X group more, while proudly proclaiming how many minorities they hire - even while they underpay them.
It's easy to save on RAM, but RAM is cheap. With the zram module in Linux, you can create a zram block device 2x the size of RAM with mem_limit set to 50% of RAM and experience approximately no performance hit--faulting out of zram is approximately twice as heavy as a worst-case cache miss. I've had a 1GB server run 700MB into zram swap trying to run Gitlab, with 40MB of available RAM (including disk cache), and not show any visible sign of performance degradation; note that that's about 230MB of RAM acting as a compressed cache area for that 700MB, and 770MB of flat RAM available.
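Back-of-envelope, the Gitlab anecdote works out like this (a sketch; the ~3:1 compression ratio is inferred from the figures above--700MB of swapped pages held in about 230MB--not measured here):

```python
# Rough check of the Gitlab-on-1GB-server numbers above. The
# compression ratio is inferred from the text, not measured.

ram_mb = 1000            # the "1GB server", in round decimal MB
swapped_mb = 700         # pages pushed out into zram swap
ratio = 700 / 230        # observed compression ratio, roughly 3:1

backing_mb = swapped_mb / ratio      # RAM actually holding the swap
flat_mb = ram_mb - backing_mb        # RAM left for uncompressed use

print(f"zram backing store: {backing_mb:.0f}MB")
print(f"flat RAM remaining: {flat_mb:.0f}MB")
```

That's how 1GB of physical RAM ends up presenting roughly 770MB flat plus another 700MB squeezed into the remaining 230MB.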
This works because the CPU isn't pegged at 100% on average across one second, and decompression requires something like 23-26 instructions per byte. That means decompressing one page per second on a 1.2GHz core consumes about 0.00887% CPU at 1 cycle per instruction, or 0.0266% at an average of 3 cycles per instruction. RAM prefetching is actually huge--a cache miss can cost 48 cycles for 64 bytes (on x86-64), or 0.000256% CPU for a 4096-byte page at a minimum with 8-cycle CAS, up to a whopping 1,200 cycles per line, or 0.0064% for 4096 bytes--although that worst case is never going to happen (it's physically impossible: sequential reads don't need the expensive row precharge before RAS after the first read).
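The percentages above fall out of straightforward arithmetic (assuming the ~26 instructions/byte figure and a single 1.2GHz core):

```python
# Decompression-cost arithmetic, assuming ~26 instructions per byte
# for the decompressor and a 1.2GHz core.

page = 4096
insns = 26 * page                  # instructions to decompress one page
hz = 1.2e9

cost_1cpi = insns / hz * 100       # % of a core at 1 cycle/instruction
cost_3cpi = 3 * insns / hz * 100   # % at an average 3 cycles/instruction

# Worst-case cache-miss comparison: 48 cycles per 64-byte line.
miss_cycles = 48 * (page // 64)    # 3072 cycles to fault a whole page
miss_cost = miss_cycles / hz * 100

print(f"{cost_1cpi:.5f}% CPU at 1 CPI, {cost_3cpi:.4f}% at 3 CPI")
print(f"cache-miss floor for one page: {miss_cost:.6f}%")
```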
Basically, if your code uses memory infrequently, it has no reason to swap; and if it uses it frequently, then the cost of swapping can be absorbed by prefetch algorithms similar to the ones used by the CPU itself to avoid the above cache miss costs. Standard LRU swap algorithms will prevent swapping out of frequently-used memory; and the delay waiting for a swap-in consumes the bits of unused CPU time in a 99.7%-pegged processor.
The performance hit explodes exponentially at a certain point. If you have 1GB RAM and use 900MB as a compressed swap such that you have 2.8GB available, you're going to have a bad time. If you have 1GB and use 500MB for swap such that you have 2GB available, you'll be fine even under high load.
The problem is the whole phone is made of an SoC (which isn't that much cheaper at 1GB versus 2GB of RAM), expensive NAND storage, radio chipsets, a battery, an expensive display, and so forth. The SoC isn't even the biggest part, with a cost of like $35--or sometimes in the $20 range--for something current-generation in a $400 phone, up to $70-ish for state-of-the-art SoCs. Slap a $100 screen, $80 of TLC NAND, and $40 of boards and components and case around a $30 chip and you have a $250 phone.
Firstly, you apparently didn't read my comment that I wasn't discussing how apt works, only yum.
When Yum downloads something, it fetches a bunch of repo information (like apt-get update), then it downloads files (like apt-get install). To do this, it does... all the shit I described apt doing.
Secondly, the critical issue that you are missing is that if I install a package from an alternate repository (e.g. EPEL), my systems don't tell the main CentOS mirrors about those EPEL packages.
No, of course not. You tell Georgia Tech, the NSA CentOS Mirror, or Microsoft's Redmond CentOS mirror, at random, who you are and what you're downloading.
Multiple distributions and mirror maintainers coordinate in secret to keep security exploit details quiet until a patch is ready from everyone. There's an entire network of quiet discussion that happens, intentionally hidden from everyone, to make sure everyone hits the ground running. If you report a remote exploit in Firefox directly to Mozilla, Debian, RedHat, Slackware, or Gentoo, marked as a security bug, they will keep the details private until everyone has patches ready; then they all release at once.
So you believe Microsoft is doing secret things, dealing your data to secret partners in secret; but that Linux distributions would never secretly collect your data, and that the various Linux mirrors those distributions don't control aren't under anyone else's influence. That is: although AT&T was sucking up your phone data and piping it to the NSA, they apparently won't collect whatever scraps of OS update telemetry hit their servers in the same way.
You're basically saying there's no network of bad actors out there, so instead of trusting "Debian", you trust everyone.
Finally, there is no fingerprinting involved in the yum transactions. If I have multiple machines behind a single IP address, the server doesn't have sufficient information to distinguish them. As well as having insufficient information to fingerprint individual systems, no user information is transmitted.
We've been able to identify individuals based on their Internet usage and TV usage, even from the same account, device, and browser. We can tell if your 16 year old daughter or her 17 year old sister is currently using the PC or watching TV.
I might have two x86-64 PCs running the same version of Ubuntu, and a Raspberry Pi; you can fingerprint at least three systems out of my usage habits, and distinctly identify at least one.
Through all of that...
In summary, yes I am leaking some information, but it is benign.
The leaking of what Microsoft software you've installed to Microsoft's servers is benign as well. Who fucking cares that Microsoft knows you have Office 2013 installed?
There's mandatory telemetry in Firefox, Chrome, IE, Opera, Ubuntu, Fedora, and a whole lot of other stuff. You're leaking data to everyone. Search habits, Web address look-ups, the lot. Some of it can be removed; some of it can be disabled (notably, the malware checks in Firefox); some of it is designed in (the only way to run system updates without sending a random university, ISP, or Canonical- or Redhat-controlled Web server a list of software you have installed which you intend to upgrade today is to make a complete local mirror of the entire repository).
Nobody knows what's in MS telemetry, but they presume it can be any of anything. A lot of what they presume is also what's sent out to random actors through any Linux distribution or other free software you've been using, and the only reason nobody cares is they don't think about it.
Do you know how often I type something into a text box on Reddit or Slashdot, pull up Google to do some research before I post something retarded, and Google immediately suggests exactly what I'm looking for despite me never having searched for that? It's like they can read the text boxes before I even submit the form--or maybe they know I've been on a certain page in some forum where such topic is being discussed, and can guess what I want to know. Either that or the Googlecluster is both self-aware and telepathic.
CentOS installations use either a local repository or they connect to a mirror. No "central server".
Mine connect to mirrors as well. They tell the mirror, "HTTP GET
I don't think that they query the yum server for every package installed on the system: instead, they download a single file that lists all the available packages in that particular repository, then they download only the necessary packages.
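From the mirror's side, that exchange looks roughly like this (a sketch: the hostname and package names are made up, and real yum fetches repo metadata like repomd.xml before the packages, but the privacy-relevant part is just the URLs requested):

```python
# What a yum-style update looks like to a mirror: a handful of
# metadata fetches, then one request per package being installed or
# updated. Mirror hostname and package names below are hypothetical.

mirror = "mirror.example.edu"
base = f"http://{mirror}/centos/6/os/x86_64"

metadata = ["repodata/repomd.xml",        # index of the repo metadata
            "repodata/primary.xml.gz"]    # full package list, one fetch
packages = ["openssl-1.0.1e-42.el6.x86_64.rpm",
            "bash-4.1.2-15.el6.x86_64.rpm"]

# The only client-specific data in each line is your IP plus the path:
requests = [f"GET {base}/{path}" for path in metadata + packages]
for r in requests:
    print(r)
```

Note that the package list itself arrives in one metadata fetch; it's the per-package download requests that reveal what you're actually installing.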
So, when you download installation packages beyond what's installed by default--of which they already know every single package--each request tells one of the CentOS mirrors about some subset of your packages. The total from installation to decommission will necessarily detail every single package you install, by virtue of telling them what software you install outside the default set--and even, by virtue of pointedly not updating it, what software you've since uninstalled.
There's enough going out to the archive.ubuntu.com and us.archive.ubuntu.com pools for the Canonical secret-NDA conspiracy to track my every software selection and update. This is approximately the same reality as what's going on with Microsoft, aside from the fact that Microsoft doesn't see every piece of software you install--you only get Microsoft software updates through MS, plus anything you installed through the Windows store--yet nobody panics about their distro or some untrusted third-party government-controlled university (what do you think gatech.edu is in the CentOS mirror list?) getting their IP and the names of arbitrary software applications they update.
You leak information like a sieve, and your best argument about why it's okay sometimes is apparently that you leak that information to more people in those cases.
Some nerds want to discuss shit like this. We take stuff from Linus and from some African who bought Debian for loose change all the time.
It depends. Software is technology, too. From that end, Windows 10 actually has some pretty awesome features.
A while back, I installed Gitlab on a server with 1GB RAM. That server immediately went 700MB into swap and... proceeded to behave as if nothing had happened. 40MB of reclaimable memory, but no problem. That was a Linux server with 1GB of zram allowed to use up to 50% of memory, compressing its load to less than 1/3 its original size--about 700MB open RAM, and 300MB housing 700MB of compressed RAM swapped out. The compression algorithm is about as fast as a worst-case cache miss, and programs tend to do more computation than that on a given block of RAM: it didn't add any significant performance overhead (like, less than 1%). This will balloon out of control when you get to a tight enough RAM-to-swap ratio, of course.
Windows 10 has an internal caching system that's quite similar and implemented extremely well. Because of this, Windows 10 can allocate around 24-32GB of RAM in a 16GB system and not care. It won't touch disk, at all, and it won't appear to slow down. With the right precaching algorithms, access to this kind of compressed area takes only twice as much time as access to raw RAM--that is, the same situations where CPU prefetch kicks in, the OS can decompress a few swap pages into a hot area of RAM before they're needed if there's any CPU downtime at all (there almost always is, even under heavy load from high-intensity gaming or server applications).
That's not just powerful technology; it's a response to fear of swap areas destroying SSDs. You don't need to write swap to disk ever, and you still get the benefit of taking long-unused data out of RAM and idling it on a slower medium. In this case, you trade 500MB of idle RAM out for 160MB of idle RAM, giving you an extra 340MB for stuff that actually matters. A good deal of RAM is never touched again, or is in a working set less-frequently-used than block cache, so it's actually faster to swap in some cases where you could actually reclaim usable RAM.
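The idle-RAM trade in numbers (assuming roughly the 3:1 compression ratio discussed earlier):

```python
# 500MB of rarely-touched pages compressed into ~160MB of backing RAM,
# at the ~3:1 ratio assumed above.
idle_mb = 500            # idle, rarely-touched pages
compressed_mb = 160      # RAM actually occupied after compression
freed_mb = idle_mb - compressed_mb
print(f"{freed_mb}MB reclaimed for data that actually matters")
```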
A lot of scheduler and memory management changes went into Windows 10, and they're actually great features--some of which I'd wanted on Linux for years and just barely got a couple years ago. If Microsoft gets a new BFS-type scheduler in before CK's goes mainline, it'll actually be a stronger server and desktop OS than Linux--that hasn't happened yet, and MS continues to trail in that respect, but they're quickly catching up on all kinds of failures and rough edges that have historically put them far behind modern Linux distributions. Windows 10 doesn't even need a reboot when you power off an HDMI display (8.1 loses the ability to play sound through HDMI).
So Windows 10 makes better use of RAM, avoids wearing out SSDs, can handle HDMI displays properly, and has some scheduling and memory management improvements that more-optimally leverage modern processors and high-speed disks. It's not yet on-par with the latest in Linux technology--but neither is Linux mainline kernel or any Linux distribution to date; and nothing's caught up to DragonflyBSD in a few key areas. Still, it's better able to leverage modern machines than any prior OS.
I'll be excited to see BFS and BFQ in Linux mainline, and similarly-excited if we start getting Dragonfly-esque features like process freezing and, thus, the ability to just store a software session and reload the underlying system on a new kernel. If Windows can catch up, good for them.
The simple problem is that telemetry has been overstated and overblown. Try to find a comprehensive description of what Microsoft captures about users. What you get is things about Windows making DNS lookups against hundreds of domains, some chatter about what Windows 10 could be doing, and some criticisms of ill-thought-out features like Wifi network password sharing. Nobody knows what's happening, but everyone assumes they do.
The result is a bunch of people talking about how Microsoft is spying on you by doing such things as identifying all software installed, based on Windows Update removing 6 particular pieces of software (something that can be done locally, without sending information about them back to Microsoft); meanwhile, when you run yum or apt, it sends an HTTP request for each individual piece of software you're updating or installing back to a central server--which actually does what people said Windows 10 does, but doesn't freak anybody out because... reasons. EVERYBODY PANIC!
Every keystroke you enter into your browser's search bar is sent to a remote server, where it's logged in Web server logs. Every domain you look up goes back to a Malware service to block bad sites. Cortana used to search the web if you typed search terms into the Cortana search bar, and people freaked out.
To be fair, people freaked out when Ubuntu started searching Amazon through the Unity bar. It's not that they have legitimate fears; it's that they fear new things, and confusion in groups turns into mass hysteria. You get a few people suggesting folks are just afraid Amazon will see them trying to look for their child porn collection, but that's retarded; the truth is everyone's scared because the next ten people are scared and nobody is inclined to take the time to verify that the next ten people aren't idiots, so they do the reasonable thing and assume (incorrectly) that a million people who have no fucking clue what they're talking about can't be wrong or someone would have told them by now.
Someone like me.
Do you see the problem?
Putting aside my opinions about "platforms", I can go anywhere and purchase software for Wii, Xbox, or PS4. Not sure what you're trying to say.
Actually, Wii, Xbox, and PS4 software has to be licensed and signed by the platform producer--that is, you can't buy Wii or PS4 software that Nintendo and Sony haven't allowed to be sold. In effect, Nintendo, Sony, and Microsoft get paid to allow certain software on their platforms, giving you a curated catalog to purchase from; you can choose a delivery vendor--an ISP to download from, or a store through which to ship the software on physical media--but you have to buy what's available from the Nintendo/Sony/Microsoft store.
There have been lawsuits about this--re: Tengen.
The point is that *Apple users* can't buy software from anywhere else.
I've had many Apple users tell me Android is a cesspool of diseased malware and Apple is secure because of its walled garden. Security via the app store is apparently a primary feature in many users' minds, although I imagine most simply don't care one way or the other.
If you bought a house in a certain neighborhood you wouldn't accept being limited to only purchasing physical goods from one specific store as a condition of living in that neighborhood.
True, although I apparently have consented to buying only those products Nintendo sells on the Wii on my Wii. I might go to Gamestop to pick them up--after Nintendo allows them to be sold on the Wii. GameStop is a local front for the Nintendo store, selling software made by companies which paid Nintendo for the right to sell said software.
Incorrect. A Big Mac cost 49 cents in 1967. 49 cents in 1967 has the same buying power, today, as $3.54 according to the BLS (prices are all in USD btw). Today's Big Mac costs $3.99. That's roughly a 13% increase in price--with inflation already accounted for.
The median income in 1967 was $26,100 in 2011 dollars. The 1967 Big Mac thus cost 0.0135% of the median income. The 2015 median income is $56,500, and the 2016 Big Mac costs 0.00706% of the median income.
So in 2015, a Big Mac cost 52% of what it cost in 1967, measured as a share of median income. In that sense, the Big Mac today costs half as much as it did in 1967.
You will work half as many hours today to earn the price of a Big Mac as you would have worked in 1967 to buy a Big Mac at that time.
A Big Mac in 2015 costs half as much as a Big Mac in 1967. Cool, huh?
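The arithmetic, using the figures quoted above (the BLS inflation adjustment of the $0.49 1967 price, and the median incomes in the dollar-years given):

```python
# Big Mac arithmetic from the figures quoted in the text.

price_1967_adj = 3.54      # $0.49 in 1967, in today's dollars per BLS
price_today = 3.99
real_increase = price_today / price_1967_adj - 1   # ~12.7% real increase

income_1967 = 26100        # 1967 median income, in 2011 dollars
income_2015 = 56500        # 2015 median income
share_1967 = price_1967_adj / income_1967 * 100    # % of median income
share_2015 = price_today / income_2015 * 100

print(f"real price increase: {real_increase:.1%}")
print(f"income share: {share_1967:.4f}% -> {share_2015:.5f}%")
print(f"2015 share as a fraction of 1967 share: {share_2015 / share_1967:.0%}")
```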
The pneumatic air gun thing was a typo--it's redundant. Pneumatic nail gun was what I was going for.
I'd guess a whole load of people lost their jobs.
They did. We have 4.9% unemployment today.
The point is their work gets displaced. If you cause a 10% uptick in unemployment, your economy has a bad time; if you cause a 1% uptick spread across 1 year, your economy hardly actually notices.
People notice. Someone loses his job and he sure as hell notices. Welfare is supposed to cover for that, and stronger welfare is possible today--the inflection point for America is 2013, so excuse the politicians for not quite catching up yet--and that's supposed to patch up the other side. It's what we exchange: sometimes you lose your job, and you get to have running water and spend less than half your income on food because people have lost their jobs over the years. As a participant in this system, you deserve some just compensation; and it turns out the economy is more efficient when we have stable welfare (but not welfare beyond our means to supply it).
Folks like to imagine businesses can take profits forever, and see a business cutting off 10% of its workforce as a business that just got $1,000,000 of additional profit forever. That might actually be true: that business might get $1,000,000 of additional profit which they never give back--not until it's eaten away by inflation over the next few years. More than likely, it'll keep those profits for 2-3 years at best--most likely, not longer than it takes the competition to catch up. In active, quickly-changing economies, the change is often rapid.
The other factor is growth. The Nest Protect smoke and CO detector didn't get any cheaper--it's still $100--but it now lasts 10 years instead of 7. It got a redesigned, upgraded sensor. It's now essentially $10/year, which is better. Cars come packed with fancy new traction control systems, complex suspensions, and other upgrades not available in earlier models. Essentially, as labor became cheaper, they employed more labor to build more stuff--cars still cost damn near 56% of the income of the purchaser, on average, by sale price.
So sometimes those added profits go away when competition comes up. Sometimes the company supplies a higher-end good, trying to be the leading-edge supplier selling what's still a $100 widget but with so many more features that you wouldn't spend the $100 for the last-generation competition's product. They slim their margin, capture a market, and dominate their competitors. Adding all that crap creates cost because it creates replacement jobs.
It happens. It's happened since the beginning of human history. It will continue to happen.
Oh god not the money argument. Let me explain money to you.
You make $20/hr. Some other person makes $10/hr. Well, as it turns out, for every 1 hour you work, you can induce that person to work 2 hours. Wage inequality.
So to make a thing, someone has to work. They have to labor in the fields to make food, they have to spin thread and sew cloth to make clothing, and so forth. This, of course, means that your work--which produces--can be traded for their work, in the same way: your hours versus their hours, exchanged in "Money".
So we find a way to get the seamstresses to make clothing in 1/10 the time by inventing the "sewing machine". Instead of hand-sewing, they just buzz shit off the line using sergers and the like. Now the seamstress works 40 hours still, but produces 10 times the garments. People buy about 5 times as many clothes... but we still only need 1/2 as many seamstresses; and those clothes only weigh your hours against 1/10 as many of their hours per unit clothing.
That is to say: where it took 10 hours of work to make a shirt, it takes 1, so you pay $10 instead of $100 for that shirt. Where we used to need 10 seamstresses to make 10 shirts in 10 hours, we now only need 1. And where you used to spend $100 on one shirt, you now buy five for $50.
That leaves you with $50, so you buy more of something else.
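The sewing-machine example in numbers (the per-buyer shirt counts are illustrative, picked to match the story above):

```python
# The seamstress example: shirts take 1/10 the labor, so the price
# drops from $100 to $10, and buyers respond by buying more shirts.

hours_before, hours_after = 10, 1      # labor hours per shirt
price_before, price_after = 100, 10    # price tracks labor hours

shirts_before, shirts_after = 1, 5     # what one buyer purchases

spend_before = shirts_before * price_before    # $100 on clothing
spend_after = shirts_after * price_after       # $50 for 5x the shirts
freed = spend_before - spend_after             # $50 left over

# Economy-wide: 5x the shirts at 1/10 the labor each = half the
# seamstress-hours, which is why we need half as many seamstresses.
labor_ratio = (shirts_after * hours_after) / (shirts_before * hours_before)

print(f"${freed} freed per buyer; seamstress labor falls to {labor_ratio:.0%}")
```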
So money doesn't do anything but represent labor; and what labor can make changes as we increase technology.
We used to spend 90% of our time making food. In 1900, we spent 40% of our time making food. Today, it's under 12% of our time.
The cost of clothing has plummeted dramatically--the industrial revolution, then later with the basic sewing machine and the serger, and then again with the trade advantages provided by globalization. Chinese manufacture isn't just about cheap labor; they're very good at what they do, and the Chinese can optimize an assembly line to any quality specifications with minimal cost to retool--which means they can hit ROI on smaller batch runs, too.
Costs of construction fell with the invention of iron furnaces that cut labor requirements by over 99.5% (yes, they made 216 times as much iron with the same labor when they brought out the hot blast furnace), pneumatic and electric power tools (nail guns and circular saws versus hammers and handsaws), and big machinery like excavators (because fuck shovels).
The wooden shipping pallet allowed a crew to carry out the loading and unloading of canned goods in 4 hours--a task which took the same crew three 16-hour days, or twelve times as long.
Costs have fallen dramatically over the years. Go back two centuries and you'll find a world that can't produce enough food to feed a billion people. Go back to 1900 and you'll find a world that's facing famine as it races past a population of 2 billion. Agricultural technological advances in the 1900s and 1920s won Nobel prizes for saving billions from starvation.
Since the late 1800s, we've cut the working week from 100 hours to 40, eliminated child labor, and dramatically increased productivity. We live in a ridiculous caricature of society in which stuff just appears out of nowhere as people go through the motions of waving a magic stick at a voodoo apparatus that just spits out piles of completed product. The army of horses and postal workers required to deliver this message to its readers all over the world has been replaced by a fraction of a fraction of a second of labor share in a network that, for each hour of human effort, can deliver literally billions of such messages to billions of people.
How do we quantify that?
I pay $83/month for 200Mbit/s Internet service and probably have a data cap of around 200GB/month. By cap, that's 284MB per hour. Comcast has something like a 10% profit margin overall, but that doesn't help us here: the gross profit margin on Internet service would translate to labor share, although the actual capability to provide the service does come down to total net profits (i.e. there might be a 40% or 90% profit margin "on Internet service", but actually providing Internet service as an ISP involves a whole lot of other stuff essentially funded by that profit margin--stuff besides routers and cables and firewalls). That just tells us that we can supply way more than, but at least, 284MB of message transfer per person per hour.
That's delivery of almost 300,000 letters to every single person who has an Internet connection every hour. The message arrives seconds after it's sent.
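The cap arithmetic behind those figures (the ~1KB-per-letter size is an assumption, purely for scale):

```python
# A 200GB/month data cap spread across a 30-day month, converted to
# letters-per-hour assuming each letter is about 1KB.

cap_mb = 200 * 1024          # 200GB cap, in MB
hours = 30 * 24              # hours in a 30-day month
per_hour_mb = cap_mb / hours         # ~284MB per hour

letter_kb = 1                # assumed size of one "letter"
letters_per_hour = per_hour_mb * 1024 / letter_kb

print(f"{per_hour_mb:.0f}MB/hour, ~{letters_per_hour:,.0f} letters/hour")
```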
Can the Pony Express do that on horseback?
No, it can't. We live in a world where everything that took a shitton of labor 100 years ago or even 50 years ago now happens by some kind of ridiculous magic, and supplying an early-1900s lifestyle to everyone would be ludicrously cheap.
So the average income can buy more stuff today than in, say, 2005... how?
"I'm not afraid of dying, I just don't want to be there when it happens." -- Woody Allen