Comment: What's the next moonshot? (Score 2) 382

In the 20th century, humanity took a transformational step forward when it went to the Moon. This impacted billions of lives and changed everyone's perspective on our role in the universe.

A lot of bad stuff happened, too -- weaponization of nuclear energy; oppressive governments; new tools like computers twisted to serve those governments rather than the common man; continual and destructive wars; accelerating destruction of the environment and natural resources; and so on.

If there's one objective -- one imperative with a positive end-goal that will transform humanity, or at least the way we think about ourselves -- that the current and next generation should focus on, what do you think it should be?

In short, what should be our next moonshot as a global society? I say global because I believe any objective worth achieving at this scale cannot be accomplished even by a small cadre of very powerful advanced industrial nations. We would need truly global support for any initiative on the scale I'm talking about.

Comment: Friendly OpenJDK Upstream? (Score 2) 328

Does anyone know if there exists, or can we start, a project like this:

(1) They distribute binaries for Windows (32-bit and 64-bit). Other platforms would be awesome, too, but Linux already has great OpenJDK support in package managers, so that may not even be necessary. Windows is the platform where it really sucks.

(2) They have a custom-designed updater that schedules itself to run periodically (say, every two weeks), launches, checks for an update, and then *EXITS* if it doesn't find one. If it does find one, it gives the user a simple Yes/No/Ask Later prompt: Yes silently removes the old OpenJDK version and installs the new one; No skips that version and only reminds them when the next update comes out; Ask Later bugs them again next week. Once it finishes whatever it has to do, it EXITS, rather than sitting resident in memory forever like the Oracle Java updater. (A rough sketch of this flow follows the list.)

(3) No adware. All components free and open source software. Installer should only depend on FOSS (no InstallShield, etc.).

(4) Gives user the option to enable/disable Java plugins for each browser detected to be installed on the system, at install-time, and can be configured after install via a config GUI. Default should be to NOT install the Java plugins, since they have had a history of severe vulnerabilities, but users are free to request their installation anyway.

(5) Installer should come in two forms: a "net installer" that has a tiny size (1 MB or less) and only downloads the requested components at runtime (allowing user to select whether they want the source code, the JDK or just the JRE, etc.), and an "offline installer" that contains the entire kitchen sink and does not need Internet connectivity (for environments behind a restrictive proxy, or no network connection).

(6) User should have the option to install OpenJDK without admin rights! If they don't have admin rights, stick it in AppData\Local and put the plugins in a similarly user-scoped folder (not possible with IE as far as I know, but should work with Chrome and Firefox). Auto-detect whether the user can be an admin, and only give the UAC prompt if the user's account can actually accept the prompt; otherwise, fall back to "non-admin" install.
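
If it helps make item (2) concrete, here's a minimal sketch of that update-check flow in Python. Every specific in it -- the update URL, the JSON format, the installer name and flags, the persistence helper -- is a placeholder I invented for illustration; the real thing would also need the scheduling hook (e.g. Windows Task Scheduler) and proper skip-version bookkeeping.

    import json
    import subprocess
    import sys
    import urllib.request

    UPDATE_URL = "https://example.invalid/openjdk/latest.json"  # placeholder endpoint, not a real service
    INSTALLED_VERSION = "8u45"                                  # in practice, read from the installed JDK

    def check_for_update():
        # Fetch a small JSON blob describing the newest build; return it, or None if we're current.
        with urllib.request.urlopen(UPDATE_URL, timeout=10) as resp:
            latest = json.load(resp)
        return None if latest["version"] == INSTALLED_VERSION else latest

    def remember(key, value):
        # Stand-in for whatever persistent store (registry key, ini file) the real updater would use.
        print("would persist: %s = %s" % (key, value))

    def main():
        latest = check_for_update()
        if latest is None:
            sys.exit(0)  # nothing to do: exit instead of lingering in memory
        answer = input("OpenJDK %s is available. Install? [Yes/No/Later] " % latest["version"]).strip().lower()
        if answer.startswith("y"):
            # Silently remove the old build and install the new one (installer name and flags are made up).
            subprocess.run(["openjdk-setup.exe", "/quiet", "/upgrade"], check=True)
        elif answer.startswith("n"):
            remember("skipped_version", latest["version"])  # only nag again when the *next* release appears
        else:
            remember("remind_after_days", 7)                # "Ask Later": bug them again next week
        sys.exit(0)  # every path ends with an exit

    if __name__ == "__main__":
        main()

The structural point is simply that every code path ends in an exit.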

Gee, sounds like if nothing like this exists, I have the requirements / design doc in my head...

If I disappear in my room for a week and don't emerge until this thing is on github, tell my family and my cat that I love them.

Comment: Re:Containers can be VMs *or* apps, Docker. (Score 1) 48

I'm well aware of the advantages of SmartOS, actually. I am in the process, however, of migrating my dedicated server from one system to another (I'm upgrading the hardware while staying with the same hosting provider). In doing so, I've made the difficult decision to move *away* from SmartOS and back to GNU/Linux, for the following reasons:

(1) Despite promises to the contrary, compiling most C/C++ FOSS is *not* easy on SmartOS. Also despite promises to the contrary, a vast amount of FOSS that I need is *not* available in SmartOS's repositories, nor in any SmartOS equivalent of Ubuntu PPAs. 99% of projects need extensive source-level patching, awful environment-variable hacks, symlinked binaries, edited configure scripts, or some nasty combination of the above. Some extremely complicated projects, like PhantomJS, are nearly impossible to compile on anything that isn't the Linux kernel with glibc as the C library, libstdc++ as the C++ library, and gcc 4.x as the compiler. For some of these use cases there is no alternative program I can substitute, and PhantomJS isn't the only one. I simply don't have the time or willpower to spend weeks fighting horrendous build environments that are opaque to diagnosis, just to do something I could accomplish with "aptitude install phantomjs" on Debian or Ubuntu or Devuan or Mint or... you get the point.

(2) Although the kernel and core system were sound, I experienced inexplicable random crashes of a 4 GiB KVM guest running Windows Server 2012 on SmartOS. I tried fixing this in numerous ways -- updating the host OS, updating KVM, digging through logs, reducing the amount of RAM assigned to the guest -- but after about 24 hours of uptime, the VM just crashes (on the host side). I don't experience this with KVM on the Linux kernel. And for various reasons I can't *not* have a Windows VM for certain limited use cases on my server. It's a multipurpose box and it needs to be able to do a lot of different things; I don't have the money to buy a dozen different boxes, each filling its own little niche.

(3) To run the aforementioned programs that are infinitely resistant to compiling cleanly on SmartOS, I ended up firing up a paravirtualized Linux kernel (CentOS 7) on top of SmartOS. This ran well enough, but it just felt *unclean* to need to run a UNIX on top of a UNIX, when all I'm doing is running FOSS programs. Although there is that one program that's binary-only which absolutely *does* require Linux...

(4) I tried messing with lx branded zones, but could never get them to do anything useful except print error messages. I of course googled those error messages, asked about them on IRC, and so on, but the most I could get was someone saying "Huh... that's strange" -- no offered solution. lx branded zones have a lot of promise if they can emulate an actually modern GNU/Linux distro such as CentOS 7, Ubuntu 14.04.2, or Debian Jessie, but until/unless they can do that with few or no gotchas, I really can't be bothered to mess with them in their alpha/experimental state.

(5) lxd to the rescue! Canonical, working under the Linux Containers project, is developing a daemon called lxd ("lex-dee") which brings sanity and proper isolation (through already-existing kernel facilities) to the lxc project. The `lxc` command that comes with `lxd` operates very similarly to `vmadm` in SmartOS, and the isolation guarantees you expect from SmartOS are pretty much true in lxd guests as well. At their core they just use mainline Linux kernel features; the difference is that lxd actually uses these facilities *in a smart way* to isolate guests from one another. Docker, on the other hand, encourages the guests to be friends with one another. Yuck.

So:

- I had problems with SmartOS in actually using it, despite a promising and rock-solid stable base system. It needs a lot (and I mean a LOT) of work for compatibility with existing FOSS and/or better support for lx zones based on modern (recently-released) Linux distros.

  - I still have ZFS on Linux and it works great.

  - KVM isn't broken on Linux like it is on SmartOS, so I can have my Windows VM and not constantly have to `vmadm start $WINDOZE` when it crashes.

  - Ubuntu's lxd is a viable isolated container solution for me on Linux, using the mainline Linux kernel.

  - By using native Linux, I don't have to virtualize a Linux kernel, so any software I want to run that happens to depend heavily on Linux will "just work", and I likely won't even have to compile it thanks to everything under the sun being in a PPA.

  - Rebootless kernel updates :D

P.S. - my use cases for this server are varied and diverse, from gaming to music streaming to file hosting to VPN to IDS to .... well, you name it. "But it's got nginx in the package repo!" isn't enough package support for me.

Sorry, SmartOS, but you're not ready to meet my needs, and Ubuntu Server 14.04.2 definitely is.

Comment: Re:Containers can be VMs *or* apps, Docker. (Score 1) 48

They seem viable enough; all the prerequisite container isolation concepts seem to be implemented, though I'm not sure if there are any hidden "gotchas" where certain resources would not be isolated. I'd have to investigate more.

Then I'd have to learn all the different system administration concepts and commands for using an entirely new OS that I've never used before. I've used Solaris (and variants), about 9 Linux distros, Windows, and Mac, so maybe I'm more qualified as a "new platform learner" than others, but it's still not really something I wanna do.

Especially considering there are about a dozen different, extremely complicated software products that I want to run on this system, in containers (not in full hypervisor VMs), which are either binary-only Linux/ELF executables or source code so resistant to building on non-Linux that it's ridiculous (I spent the better part of a week trying to compile one of these programs for SmartOS).

So yeah, not really going down that rabbit hole, sorry.

Comment: Containers can be VMs *or* apps, Docker. (Score 5, Interesting) 48

Unless this unified "Open Container Project" supports both the unprivileged, isolated "machine" concept of a container AND the trusted, shared "app" concept of a container, it's going nowhere fast for me.

Solaris Zones. linux-vserver containers. Now Canonical's lxd. Few of the participants in the container effort, beyond these three, seem to understand the value of having containers as *machines*. Give each machine its own static IP, isolate all of its resources (memory, processes, users and groups, files, networking, etc.) from the other containers on the system, and you have what is basically a traditional VM (in the early-2000s sense of the word) with a lot less overhead, because there's no hypervisor and only one kernel.
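
To make the "machine" idea concrete, here's a rough sketch of what that looks like with lxd's CLI, wrapped in Python only so it's one runnable snippet. The image alias, container name, and limit values are placeholders I picked for illustration, and giving the container its own static IP additionally needs a bridge or profile configured on the host, which I've left out.

    import subprocess

    def lxc(*args):
        # Thin wrapper so each CLI call fails loudly if something goes wrong.
        subprocess.run(["lxc"] + list(args), check=True)

    # Launch an unprivileged container that gets treated like a lightweight "machine".
    lxc("launch", "ubuntu:14.04", "web01")

    # Give the machine its own resource ceilings so it can't starve its neighbours.
    lxc("config", "set", "web01", "limits.memory", "2GB")
    lxc("config", "set", "web01", "limits.cpu", "2")

    # Then administer it like its own box: shell in, install packages, run services.
    lxc("exec", "web01", "--", "ps", "aux")

From the host's perspective it's all just processes under one kernel; from the inside it looks and administers like its own box, which is the whole point.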

Docker seems to pretend like VM-style containers don't (or shouldn't) exist. I disagree fundamentally with that. I dislike that Docker pushes containers so hard while ignoring this very important use case. I hope the rest of the Linux Foundation is smart enough to recognize the value of this use case and support it.

If not, I'll just have to hope that Canonical's lxd continues to mature and improve.

Comment: Nah (Score 1) 250

When PHBs think of development, they think of one of two things: either an MS Access database with code-behind in VBA, or Visual Studio. Naturally, nearly all of the most useful features of Visual Studio hook into some kind of .NET language or runtime.

As long as PHBs continue to consider Microsoft stuff as the "name brand" for software development, like Kleenex for tissues, we won't see .NET going anywhere. After all, if they're willing to bankroll $1M in license fees for a couple hundred devs to buy VS Ultimate...

Comment: Re:Anybody remember Turtle Beach? (Score 1) 76

Not sure I agree. This would only happen if the media cartels and game developers stopped trying to push the envelope on hardware requirements. They won't, though, because their hardware "partners" keep egging them on to up the ante more and more. As long as new AAA games keep shipping with ever-higher GPU requirements, you're not going to see discrete GPUs disappear.

Discrete sound hardware stopped being mainstream because developers ran out of ideas for how to plausibly use more processing power for sound I/O. But I don't see graphics slowing down any time soon. Even the best-looking games still look decidedly cartoonish compared to real life; it's completely obvious. Developers will just keep pushing pixel density and realism (more advanced shaders and effects, and so on) indefinitely as AAA games asymptotically approach real life in fidelity. We'll never GET there, but we keep getting closer and closer; as we do, though, the cost of getting a bit closer goes up exponentially, while the benefit of that extra effort keeps shrinking.

In an alternate world where they actually decided "enough is enough" and declared real-time 2.5D rendering (3D scenes on a flat screen) to have reached its end state at DirectX 12 and 4K resolutions, then yes, eventually Intel and AMD would develop low-power GPUs integrated on the CPU that are powerful enough to run those games at 60 fps. If that were the final target and nothing ever came after it (until or unless we get to true 3D, virtual reality, or holodecks), then the Turtle Beach effect would take hold.

I don't think that will come true, though. If nothing else, game developers will start intentionally slowing their games down and adding needless complexity that doesn't actually improve visual fidelity, just for the sake of their hardware partners -- making users want to upgrade to be able to play the latest titles. The economic forces at work there are far more powerful than any technical factors.

Comment: Wake me up when they stop using 28nm (Score 1) 76

In February of 2012, I ordered a graphics card made on the TSMC 28nm process node: the Radeon HD7970, which had hit the market on January 9, 2012. It has 4.3 billion transistors, a 352 mm^2 die, and 2048 GCN cores.

In June of 2015, the Radeon Fury X is also being made on the TSMC 28nm process node, with 8.9 billion transistors, a die size likely somewhere around 600 mm^2 (based on a quadratic-fit regression over the existing GCN 1.1 parts), and 4096 GCN cores.
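
For a back-of-the-envelope sense of how little has changed, here's the density math using the figures above, keeping in mind that the ~600 mm^2 number is my own estimate, so the result is rough at best:

    # Rough transistor-density comparison between Tahiti (HD7970) and Fiji (Fury X),
    # using the figures quoted above. The Fiji die size is an estimate, not a published spec.
    tahiti_transistors = 4.3e9
    tahiti_area_mm2 = 352.0
    fiji_transistors = 8.9e9
    fiji_area_mm2 = 600.0  # quadratic-fit estimate from the GCN 1.1 parts

    tahiti_density = tahiti_transistors / tahiti_area_mm2   # ~12.2 million transistors per mm^2
    fiji_density = fiji_transistors / fiji_area_mm2         # ~14.8 million transistors per mm^2

    # Die area Fiji would need at Tahiti's density -- i.e. "two Tahitis glued together":
    naive_area = fiji_transistors / tahiti_density          # ~729 mm^2

    print("Tahiti: %.1f Mtransistors/mm^2" % (tahiti_density / 1e6))
    print("Fiji:   %.1f Mtransistors/mm^2" % (fiji_density / 1e6))
    print("Fiji at Tahiti density: ~%.0f mm^2 (density gain ~%.0f%%)"
          % (naive_area, 100 * (fiji_density / tahiti_density - 1)))

Call it roughly a 20% effective density gain after three and a half years on the same process.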

Aside from notable improvements to memory bandwidth, are you really going to sit here and tell me that this card is much more than a clever packing of two Tahiti (HD7970) chips onto a single die? They had to add a more effective (liquid) cooling solution to cope with the extra heat from cramming that many transistors into such a small area, which goes to show how little they achieved in the way of power consumption savings.

What is the likelihood that, in three years' time, they have made any significant innovations on the hardware front whatsoever, aside from stacking memory modules on top of one another?

To me this looks like an attempt to continue to milk yesterday's fabrication processes and throw in a few minor bones (like improved VCE, new API support) while not really improving in areas that count, like power efficiency, performance per compute core, cost per compute core, and overall performance per dollar.

When the HD7970's Tahiti cores are being sold as a re-branded R9 280X, and most games except Star Citizen don't seem to demand more than one Tahiti's worth of horsepower to run acceptably, there's very little motivation for me to "upgrade" to a chip that's basically two of what I already have, packed onto one die with better cooling and faster memory -- especially when it's likely to come at a very steep price, much more expensive than simply buying another R9 280X and running them in CrossFireX.

As a gamer, I think I'm going to keep on waiting until TSMC and AMD/Nvidia stop dragging their heels. I've had enough of the 28nm node. That's three distinct families of GPU now that they've released on the 28nm node. It has gone on for too long. Time to move to a smaller process node. Until then, they won't be getting my money.

Comment: C++ makes sense here (Score 3, Interesting) 173

C++ is so flexible that you can write all your nasty "legwork" code (performance-sensitive stuff, like the actual facial recognition, image data manipulation, etc.) *once* and call it from whatever UI layer you write.

Granted, it's probably somewhere between hard and impossible to write a mobile platform-agnostic UI layer that actually looks good on both Android and iOS, since iOS and Android are so different in that regard; but even if they didn't bother doing that and just wrote two entirely separate view layers, they still can separate out all the heavy lifting and "write once, compile in two places". Both Android and iOS have decent to good C++ support, so if you make it platform-independent, you can have an optimized core library that works on the two major mobile platforms with no modifications.

Not sure I would go with C++ for something that was less performance-sensitive, but in this case, they can probably peg the CPU of a modern smartphone for at least a good fraction of a second with some of their heavier code.

Unless of course they are simply taking the image and uploading it to "the cloud" to do the facial recognition, in which case it's kind of a head-scratcher, since you don't need C++ to make HTTP requests.

Comment: Don't touch my HTTPS (Score 1) 231

HTTPS should be truly end to end with no MITM. Any software vendor putting stuff on my computer that bypasses this will not be supported by me financially in the future.

To be perfectly honest, I'm so strongly in favor of encrypting everything that I'll say this: if a site out there only serves traffic over plain HTTP, and some vendor wants to bundle malware on my system that *only* injects content into regular HTTP (not HTTPS) connections, I'm all for it. Go ahead and punish users and sites that run without TLS. It'll just increase the pressure on webmasters and users to get TLS up and running on absolutely every host.

And with things like StartSSL and soon that Mozilla-funded free CA, there's really no excuse not to have a trusted cert (not a self-signed or snakeoil cert).

Let's encrypt the web. But don't you dare interfere with or modify my HTTPS traffic through any means. That will immediately get your company blacklisted in my book of companies I'm willing to do business with.

Comment: Response #34591525 (Score 1) 558

Hand-assembled desktop:
Case: Corsair Obsidian 650D
PSU: Corsair Professional Series Gold 1200W
Mobo: ASUS P8Z77-V
CPU: Core i7 3770K
RAM: 32 GiB DDR3-1600 (Komputerbay)
Storage: 3 x 4 TiB HGST 7200rpm 3.5" + 1 x Seagate Barracuda 4 TiB 7200rpm consumer HDDs (in hardware RAID10)
RAID controller: Adaptec 6405E
GPU1: Sapphire Radeon HD7970 (reference design with impeller)
GPU2 (in CrossFireX): XFX Radeon R9 280X (with three large 'standard' fans and clocked at GHz Edition speeds)
Soundcard: Creative SoundBlaster Z

Accessories:
Headphones: Steelseries H Wireless connected via bidirectional Optical (Mini-TOSLink) to the SoundBlaster Z
Display: Panasonic VIERA TC-L32DT30 HDTV (1080p60)
Keyboard: Das Keyboard Model S Professional
Mouse: Steelseries Sensei
Mat: Razer Vespula

The story:

Ordered the HD7970 in February 2012 and stuck it in my old box for a few months.

Ordered the CPU, mobo, case, PSU, two HDDs (one of which has since died), and RAM in April 2012 and built the new box around the GPU I already had. Handed my old box down to a family member, along with my older GPU.

Ordered the Adaptec RAID controller a couple days after getting the box together and realizing I didn't like software RAID.

Ordered the SoundBlaster Z in February 2014 in preparation for the arrival of the Steelseries H Wireless (pre-order) in March 2014.

Ordered two HGST disks in March 2014 and combined them with the existing two Seagate disks to make a RAID10 array.

Ordered the R9 280X in June 2014 after realizing how cheap it was and that I could Crossfire it with my existing card because it's the same chipset.

One of the Seagate disks failed badly in August 2014, but I didn't lose the RAID array because the other three disks were fine. I overnighted a new HGST disk (same make and model as the other two) to replace it. At present, I have one of the original Seagate and three HGST disks still in the RAID array.

The configuration has been static since then.

Presently I estimate that this system has gone through about 75-80% of its service life *with me*. Since I'm a gamer, coder, virtual machine runner, and general all-around resource hog, I'll be looking to upgrade when Skylake mainstream processors land. I'll probably get a Skylake "K" (unlocked) i7. Of course, this system is perfectly serviceable for lighter duty gaming and web browsing, so I expect it will become the upgrade for the same family member who is using my old system today (though with a few retrofits due to some component failure).

The internals of the case are an absolute mess: a tangle of poorly organized cables. The only thing that keeps it even slightly manageable is the modular PSU; I removed (or never plugged in) all the Molex connectors I'll never need.

One of the big limitations I've run up against with this system is the limited number of PCIe lanes and slots. I'll definitely weigh this more heavily when I buy my next system, though I understand Skylake's mainstream platform is going to expand the number of lanes anyway.

Right now, this system can play 2014-and-earlier AAA games at maximum detail (or very near to it; some settings are just so poorly optimized that they're not usable), even on a single GPU. With CrossFireX I just get more consistent framerates (AMD's Frame Pacing feature is a lifesaver).

I'm starting to see significant slowdowns, even in CrossFireX, in the latest AAA titles. Dragon Age: Inquisition and The Witcher 3 are giving me a lot of trouble; I'm not sure whether that's down to poor driver maintenance, bad optimization, or Nvidia-favoring algorithms. I can probably live with this performance deficit for the remainder of the year, but I will definitely want to upgrade in time for Star Citizen.

Comment: Re:Reliability (Score 1) 229

Well, Frontier is the 6th largest local exchange carrier and 5th largest DSL provider based on coverage area (citation: Wikipedia). Being that far down on the totem pole, I'm not surprised that they have to differentiate themselves with nice things like competent tech support. The ones that are really terrible are Comcast, TWC, and Verizon.

Point taken, though. They're not all bad. Just the 2-3 of them that the vast majority of the people have access to.

Comment: Reliability (Score 3, Insightful) 229

While I have many issues with ISPs that have been covered fairly well by other responses here, one issue that few have talked about is reliability of the service, and the ability to get it fixed when it breaks.

At least around here, it seems almost 1 out of every 2 people has significant reliability problems with their Internet connectivity and isn't sure how to fix them. When they call the ISP (whether it's cable, DSL, fiber, LTE, ...), the first thing support asks them to do is reboot their modem and/or router and/or computer. When that doesn't fix it, the tech doesn't know what else to do. They often send out a guy to take a look, who'll say your cable modem is shot and have you get a replacement. If it's under warranty or owned by the cable company, that might be free; if you own the equipment and it's out of warranty, you have to pay for a new one.

But 8 times out of 10, replacing your modem or router does not fix the problem. Nor does switching from WiFi to Ethernet -- another common "fix". Sure, WiFi has problems, but if your issue is actually with some part of the cable, especially a part that's buried underground, it can be nearly impossible to convince the company that the problem is there, let alone get them to dig it up and replace it.

I'm on a grandfathered unlimited LTE data plan as my primary Internet connection now. Cellular towers are pretty reliable, thanks to their centralized infrastructure and the number of users an outage would affect. I've had a few persistent issues with my LTE connection that lasted for weeks, but each time it magically went away after very little effort on my part -- likely after the carrier received hundreds of calls from other customers about the same problem and had to send someone up the tower to fix it.

Those with landlines to the premises are in a much more difficult situation. The company is likely to pin the problem on hardware that you own, or on wiring installed within the walls of your house. They will not be willing to admit that the problem may lie with the line buried underground. Acknowledging that would mean paying a contractor a significant sum to dig up and replace the cable, so instead they treat each support call as a brand-new incident and forget the entire history of your problem -- the one where you've diligently worked by process of elimination to determine that it must be something in the line.

I remember years ago when we used deduction to determine that our DSL problem must lie with the phone line beyond the premises of our house. We replaced all our devices, hooked up to ethernet instead of WiFi, and even completely replaced all the DSL filters and phone line wiring in our house. The problem persisted. But the tech support guys kept experiencing a case of amnesia; every time we called, despite trying to ask them to refer to previous tickets and things we'd already tried, they just wanted us to reboot our modem, over and over and over and over again, as if that would help. This would happen even if we got the same tech support person on multiple calls.

At work, a lot of people come to me for advice on problems they're having with tech at home. I don't know why they do it; they just do. I get my fair share of laptop problems: Windows won't boot, they have a virus, whatever. But the #1 most frequent problem I get is that their Internet is unreliable and drops out all the time. Occasionally I'll find that replacing their cable modem fixes the problem, but in many more cases we narrow it down to the landline, or at least to an ONT or something else exterior to their dwelling that isn't owned by the resident -- at which point you're basically at a dead end.

The willingness to address problems, and to refer to case history to eliminate potential sources of problems, is seemingly absent from nearly all ISP support employees. And you wonder why their ACSI score is low...

Comment: Re:Fuck you Very Much, Disney. (Score 1) 614

Their bottom line will definitely "feel" this, but it'll be in the positive direction. People will continue to visit Disney parks, buy Disney games and watch Disney movies. As far as the vast majority of the US population is concerned, the Disney company can do no wrong, and everything it produces is gold.

It's like the character Truxton Spangler said at the end of the last episode of the AMC series "Rubicon": "Do you really think anyone is going to give a shit?"

People don't care how they treat their workers. That's just not a criterion that people use to determine whether to do business with a company. Unless their U.S. products start having their manuals and product literature written in Indian English, I doubt anyone will notice, or care, that there was a major shift inside Disney.

And to prevent everything from being written in Indian English, they'll just hire one or two fresh-out-of-school English majors (USAians) for $35k to translate all the public-facing Indian English documents into American English.
