
Comment Arithmetic, Population and Energy (Score 1) 323

This is an interesting point. However, when most people say "population growth", what they really mean is GEOMETRIC growth -- the population increasing by a roughly fixed percentage each year, which is exponential growth.

The late, great Professor Al Bartlett's arguments in Arithmetic, Population and Energy -- https://www.youtube.com/watch?... -- assume that population grows at an exponential rate, not something slower.

If it could be shown that the population was growing linearly, or even polynomially, over a long period of time, we would have significantly less cause for concern. However, if the fastest-growing term in the population function is still exponential, it doesn't matter whether growth is "decelerating"; we still have the problem.

Consider three different population growth functions where each whole number of `x` is 1 year:

C = 100000 (starting pool of people = 100,000)

L(x) = 1000x + C // Linear growth

Q(x) = 400x^2 + C // Quadratic growth

E(x) = C*(1 + 0.0113)^x // Exponential growth -- let's set r = 0.0113, the current estimated world population growth rate, 1.13% per year

So if x = 1 (1 year from now):

L(1) = 1000 + 100000 = 101000.

Q(1) = 400 + 100000 = 100400.

E(1) = 100000 * 1.0113 = 101130.

So far these are relatively close, but let's look at 50 years...

x = 50:

L(50) = 50000 + 100000 = 150000.

Q(50) = 1,000,000 + 100000 = 1,100,000.

E(50) = 100000 * (1.0113)^50 = 175388.

Quadratic jumps way ahead here, but even though I set a fairly aggressive coefficient for the quadratic, the exponential wins out in the end...

Let's say x = 500...

L(500) = 500000 + 100000 = 600000.

Q(500) = 100,000,000 + 100000 = 100,100,000.

E(500) = 100000 * (1.0113)^500 = 27,542,516.

Nope, quadratic still wins.

x = 1000?

L(1000) = 1,000,000 + 100000 = 1,100,000.

Q(1000) = 400,000,000 + 100000 = 400,100,000.

E(1000) = 100000 * (1.0113)^1000 = 7,585,902,222.

So yeah, after just 1000 years of very slow exponential growth, the exponential completely trounces the extremely fast-growing quadratic.
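If you want to check the arithmetic yourself, here's a minimal Python sketch that evaluates the same three functions. It uses only the constants already defined in this comment (C = 100,000, r = 1.13%, the 1000x and 400x^2 coefficients); nothing else is assumed.

C = 100_000          # starting pool of people
r = 0.0113           # ~1.13% annual growth rate

def linear(x):       # L(x) = 1000x + C
    return 1000 * x + C

def quadratic(x):    # Q(x) = 400x^2 + C
    return 400 * x**2 + C

def exponential(x):  # E(x) = C * (1 + r)^x
    return C * (1 + r) ** x

for years in (1, 50, 500, 1000):
    print(f"x={years:>4}:  L={linear(years):>13,.0f}  "
          f"Q={quadratic(years):>13,.0f}  E={exponential(years):>13,.0f}")

Run it and you'll see the quadratic leading at x = 50 and x = 500, and the exponential blowing past it by x = 1000, exactly as the numbers above show.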

So we need to stop looking at population figures in terms of derivatives and acceleration in the traditional sense, because historically the growth has always best been described by an exponential function, not by a polynomial. Unless we have somehow fundamentally changed our ways to stop the exponential growth, everything Al Bartlett says in his video is 100% true, even if the rate, r, in the growth function is decreasing (hint: it's not decreasing fast enough to matter).

Comment Re:Disingenuous article - so, so wrong. (Score 1, Informative) 472

Unless you have more RAM than you have persistent storage, you *always* max out your RAM within a few minutes of the machine being on. All "extra" RAM that isn't allocated to processes is used as page cache for the storage layer, which dramatically improves read performance because DRAM is much faster than even the fastest SSD (and hundreds of times faster than HDDs).
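As a quick illustration, here's a minimal sketch of how to see this for yourself. It's Linux-only because /proc/meminfo makes the numbers easy to read programmatically; macOS and Windows expose the same idea through vm_stat and Task Manager.

# Show how much RAM the kernel is currently using as page cache,
# versus how much is "truly" free.
def meminfo_kb():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # values are in kB
    return info

m = meminfo_kb()
total = m["MemTotal"]
free = m["MemFree"]
cache = m.get("Cached", 0) + m.get("Buffers", 0)

print(f"Total RAM:          {total / 1024:>8.0f} MiB")
print(f"Truly free:         {free / 1024:>8.0f} MiB")
print(f"Used as page cache: {cache / 1024:>8.0f} MiB  (reclaimed the instant a process needs it)")

On any box that has been up for a while, "truly free" is tiny and the page cache is huge -- which is the behavior I'm describing.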

My point wasn't whether or not you "max out" your RAM using programs. My point was that the core operating system and "plumbing" stuff uses a lot less memory on OS X than it does on Windows. Plus, on Windows, you have to wear a digital condom (run a resource-hungry antivirus and firewall suite) to avoid getting owned by every other malicious ad on the Internet, which artificially increases the amount of mapped RAM that has to be dedicated to processes just to keep your system running well.

So, OS X is more RAM efficient. Hardware-wise, it's also got a much faster SSD (on the Macbook Pro, at least) than is available to the vast majority of PCs. This SSD matches up favorably with the best SSDs that Samsung and Intel have to offer. It is significantly faster than the Samsung 850 Pro. So even if this Macbook Pro system starts to experience some memory pressure, it's a lot less obvious on a Mac that the system is swapping than it is on a PC.

The OS X kernel and driver stack are also very well designed for low-latency input and display. Windows has improved significantly with WDDM and the recent kernel work in Windows 10, but I can still feel the input latency difference in basic things like typing text into a textbox on a webpage, comparing OS X and my high-end Windows 10 desktop.

I've got a GTX 1080, 64 gigs of RAM and an i7-6700K in my desktop, with two SSDs in RAID. Even with Windows 10 and a fuckton of bloated services and hundreds of processes running just about everything that exists, the responsiveness is pretty good, and I rarely experience any kind of performance problem, even if I'm playing *multiple games* simultaneously. But look at how much I had to invest in that hardware to reach that point. Would I be able to do the same with Macbook Pro specs (8 gigs of RAM) running Windows? No, absolutely not. But OS X manages to handle whatever I throw at it.

Anyway, here's a summary of my experience with Apple hardware. The current-gen Macbook Pro and iPhone have a much faster *storage layer* (SSD / NAND) than the vast majority of PCs and Android phones, and yet the Apple products are not hideously more expensive than their competitors. In fact, if you were to buy a PC or an Android phone with comparable storage layer performance to what Apple offers as "standard", you'd almost certainly pay a lot more.

My thesis is that, even though they skimp a little on the RAM, their focus on supplying excellent, high-end solid state storage that far exceeds the SATA 6 Gb/s performance ceiling is a technically wise investment, because it gives them a great deal of flexibility in memory management. And their well-designed, efficient software manages the user experience extremely well, even when resources are under high demand.

That's the 2016 Apple technology landscape in a nutshell: Use "good-enough" CPUs, with "good-enough" GPUs on x86 and top-notch GPUs on mobile. Be very power efficient so the battery, and therefore the weight, can stay small. Provide just enough RAM, but not excessive amounts. Go all-out on solid state storage performance and blow away all but the most expensive enterprise SSDs. And produce the most optimized OS in the world, using that to reduce the need for expensive, high-end hardware.

Not saying you can't get a good experience on Windows or even GNU/Linux; I own and run boxes with Windows 10 and Ubuntu 16.04. But you usually need to invest significantly more for the same result. Ditto for Android vs. iOS. Hey Samsung, when are you going to offer 128 GB internal storage on your phones? Oh, what's that? We're supposed to use a dog-slow MicroSD card instead? That's what I thought. Peace out.

Comment Disingenuous article - so, so wrong. (Score 5, Informative) 472

I am so sick of Slashdot posting bald-faced lies and FUD on their front page. You can buy Macbooks with Skylake, a CPU architecture that wasn't even released until about a year ago, and Macbook Pros with Broadwell, an architecture released in early 2015.

If you buy a 13" Macbook Pro (latest generation) on apple.com right now, it will come with a CPU and chipset released to market by Intel about a year and a half ago, not four years ago.

And if you're complaining about the physical chassis, well, maybe it's just that Apple has reached what they consider to be the optimal layout and dimensions for their chassis. I mean, IBM/Lenovo hardly changed the ThinkPad's physical design for a number of years in the mid-to-late 2000s, until Lenovo started messing with a good thing, utterly ruined the ThinkPad brand, and stopped providing the features that the people who bought them wanted and needed.

I am not an Apple fanboy; I think the company is pretentious, greedy, anti-competitive, and significantly less visionary with the loss of Steve Jobs. The very little they do for open source is overshadowed by their aggressive litigiousness and the walled garden platform they created.

BUT -- and this is a big thing for me -- Apple can do *more* with 4 or 8 GB of RAM than Microsoft can do with 16 GB of RAM. Their software is extremely well-designed, optimized for fast, high-fidelity displays, and the font rendering is beautiful and second to none. They don't have a ton of old legacy code like Windows does; the legacy that does exist has easily been swept under the rug in favor of new designs. And being based on BSD is a huge plus for software dev.

The efficiency and responsiveness of the Macbook Pro and iPhone have made me appreciate and admire these *products* that I own, even though I only started buying Apple products in 2015, after spending decades swearing I never would and preferring GNU/Linux, or Windows if absolutely necessary.

I'm tired of having to grossly over-spec my machines (and often ending up paying even more than I paid for my Apple products) for trash software like Microsoft Windows and Android, two great examples of over-engineering plus bloat plus the worst parts of an open or semi-open platform (security vulnerabilities, malware, etc.). An $1800 MBP with a year-old processor and 8 gigs of RAM is faster, more enjoyable to use, lighter, and has better battery life than a $3000 13" Windows 10 "ultrabook". And my $1000 iPhone 6S Plus with 2 gigs of RAM is faster, far less buggy, completely free of bloat, and easier to use than any Android phone on the market.

Again, I'm not an Apple fanboy. I don't love the company and I have zero loyalty to them. I dare someone else to do better. For years I thought everyone else *did* do better, but it's clear to me now that I was actually deluding myself into thinking that having 4 gigs of memory wasted by background service bloat on Windows was "necessary".

I'm very satisfied with their products right now and extremely dissatisfied with their competition. I'd actually recommend that anyone in the market for a laptop seriously consider the Macbook Pro. It's not ideal for gaming, of course, but it's great for anything from content creation to heavy web surfing to Flash games, and it even handles VMs extremely well in VirtualBox or VMware. I also do some heavy C++ and Java dev on this box. It just never slows down no matter what I do. Love it.

Comment Stop claiming supposition as fact (Score 1) 422

Nobody has ever said that the threshold is 100 GB! Verizon reps specifically danced around saying the exact number in every statement they've made.

The article claims the 100 GB figure as fact, which is extremely intellectually disingenuous.

In fact, there are compelling rumors (but still not facts, so please don't update the article claiming this as the truth) that only users with 500 GB or more data usage per month (on average, per-line) will be disconnected or forced to go metered. The original guy who leaked the info on Reddit is now saying he heard from Verizon management that the threshold is 500 GB.

But until people start getting letters and we can collect a representative sample of who did and did not get letters and chart that against their monthly usage, STOP claiming that you know any number to be true and accurate. This is the first step in being an ethical journalist and Slashdot can't even do this.

Comment Re:Time and place (Score 1) 284

I forgot to mention that the next "step" in the cat-and-mouse circumvention / anti-circumvention arms race is to ban encryption, along with any traffic the DPI firewall can't "understand". This is the Brave New World of universal surveillance we are headed towards. Countries like China, Australia and the UK are leading the way, and the US is going there too, just perhaps a little more slowly because organizations like the EFF and ACLU exist to try to gum up the works.

It may take a few decades, but within the lifetime of millennials you will see encryption banned, and any traffic you try to send over the Internet that isn't fully understood by your ISP (regardless of whether you're on a home, work, or free hotspot connection) will be automatically rejected. So no VPNs or anything of the sort.

Comment Re:Time and place (Score 2) 284

The protocol negotiation and setup routines of a VPN are extremely easy to detect. When it's your "ISP" -- the network gateway providing your uplink -- that is trying to prevent you from getting on a VPN, it is trivial for the gateway to block most VPNs, because they have such well-known, "overt" setup/negotiation protocols.

Even OpenVPN on TCP port 443, which by all counts looks a helluva lot like a standard HTTPS connection, has just enough of a "tell" that it can be blocked while the gateway still allows normal HTTPS connections over the web.

While it's true that *endpoints* on an already established VPN tunnel cannot tell that the traffic is being handed to another client over a VPN, it is very easy for a *gateway* to detect all but the most stealthy VPNs.

That's why I said that specific mitigations (in terms of traffic shape and protocol "appearance", using steganography if necessary) are required if you want to bypass this kind of "anti-VPN" restriction on the gateway you are connected to (for instance, a free WiFi hotspot that's trying to block porn, and then tries to block VPNs to prevent people from circumventing the block).

It's not even about setting up your own server. I could give you a description of a certain sequence of packets that will identify OpenVPN connections (even over TCP) 100% of the time and never produce a false positive on anything else. You could safely implement that rule against *all* remote IP addresses on a stateful firewall gateway and prevent people from using OpenVPN, regardless of the port.
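To make the idea concrete, here's a rough Python sketch of the kind of first-payload fingerprint a stateful gateway could apply to new TCP connections. The byte layout (the 2-byte big-endian length prefix OpenVPN adds on TCP, followed by an opcode/key-id byte whose top five bits are 7 for the client's initial hard-reset) reflects OpenVPN's documented framing, but treat the exact thresholds as illustrative assumptions -- this is an illustration of the concept, not the precise rule described above, and a production DPI engine checks considerably more state than this.

# Hedged sketch: does the first client payload of a new TCP connection
# look like an OpenVPN handshake?
HARD_RESET_CLIENT = 7  # P_CONTROL_HARD_RESET_CLIENT_V2 opcode

def looks_like_openvpn_tcp(first_payload: bytes) -> bool:
    if len(first_payload) < 3:
        return False
    declared_len = int.from_bytes(first_payload[:2], "big")
    opcode = first_payload[2] >> 3
    key_id = first_payload[2] & 0x07
    # Initial hard-reset packets are short (roughly 14 bytes without
    # tls-auth, a few dozen with it) and always use key_id 0.
    return (opcode == HARD_RESET_CLIENT
            and key_id == 0
            and 14 <= declared_len <= 128
            and declared_len == len(first_payload) - 2)

# Synthetic example: 2-byte length (14), opcode byte 0x38, 8-byte session ID,
# empty ACK array, 4-byte packet ID.
sample = bytes([0x00, 0x0E, 0x38]) + b"\x11" * 8 + b"\x00" + b"\x00\x00\x00\x01"
print(looks_like_openvpn_tcp(sample))                 # True
print(looks_like_openvpn_tcp(b"GET / HTTP/1.1\r\n"))  # False

A gateway applying a check like this to the first data segment of every outbound TCP connection would catch stock OpenVPN on any port, while ordinary HTTP and TLS connections start with bytes that look nothing like this.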

Comment Re:Time and place (Score 3, Interesting) 284

That's why you just need to make your VPN traffic look like normal web traffic. There are various protocols out there that are so obfuscated that even a deep packet inspection firewall couldn't tell that it's not ordinary web traffic.

It adds overhead and latency, but it's really not that difficult to do. Somewhat ironically, it is based on the exact same principle as terrorists use to infiltrate countries they want to blow up: you become really, really good at looking exactly like the sheeple. You don't stand out. You look perfectly ordinary just like the rest of the law-abiding citizens. Except that the *semantics* of the data you're transferring -- which no firewall or DPI could possibly understand -- are such that porn content (or whatever) is being delivered to your computer.

Blocking is for deterring casual use, not for actually preventing something from being done. See: Great Firewall of China.

Comment Prices HAVE come down (Score 1) 729

Measured as cost per teraflop of GPU compute performance, prices have taken a nosedive this year. The release of the GTX 1080 and 1070 drove down the price of the 980 Ti, 980 and 970, which are still more than adequate for all gaming at 1080p (even on very high settings). The AMD Radeon RX 480 has given a huge boost to the $200 price point as well, providing 5-6 TFLOPS at a price point that, before around Q2 2016 (when the 16nm FinFET TSMC process cards were released), could net you *maybe* 3 TFLOPS if you shopped sales.
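Using only the round numbers already quoted above (roughly 3 TFLOPS for ~$200 on a good sale before Q2 2016, versus 5-6 TFLOPS for ~$200 from an RX 480 now), the cost-per-TFLOP math works out like this -- these are the figures from this comment, not new benchmarks:

# Back-of-the-envelope cost per TFLOP at the ~$200 tier.
cards = {
    "pre-Q2-2016 sale card (~3 TFLOPS)": (200, 3.0),
    "RX 480 (midpoint of 5-6 TFLOPS)":   (200, 5.5),
}

for name, (price_usd, tflops) in cards.items():
    print(f"{name}: ${price_usd / tflops:.0f} per TFLOP")

That's roughly $67/TFLOP before versus about $36/TFLOP now -- cost per TFLOP nearly cut in half at the $200 price point in well under a year.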

There are extremely few games that will really be GPU-bound at 1080p60 on an RX 480 or a card with a similar level of performance, even with max or nearly maxed settings. In 5 years, even AAA games will still run smoothly on Low-Medium detail on the same card.

The only reason you'd need a beefier card, or SLI/CF, is if you go above 1080p60 (which is distinctly in enthusiast territory at this time, not because of the cost of the monitor but because of the additional load that imposes on the GPU(s)) or if you want to keep playing the latest AAA titles on the highest possible graphics settings over the next half-decade.

The only game I can think of right now that gives these cards a run for their money is a 150-million-dollar, crowd-funded tech demo.

Comment Re:Please don't kill 32-bit Wine (Score 1) 378

Unless Ubuntu keeps shipping those libraries, though, you'd have to either use a prior release or run a non-Ubuntu OS in your container to handle this. A lot of people would like to use the same library versions Ubuntu ships on their 64-bit system, just compiled for 32-bit, to do their 32-bit work (e.g. Wine).

So yeah, dropping the 32-bit libraries, at the very least, would do a lot of harm to many use cases and workloads. Dropping 32-bit kernel support would affect far fewer people, because there aren't many systems still in use today that can only run x86 and not x86_64.
