Comment Nah (Score 1) 250

When PHBs think of development, they think of one of two things: either an MS Access database with code-behind in VBA, or Visual Studio. Naturally, nearly all of the most useful features of Visual Studio hook into some kind of .NET language or runtime.

As long as PHBs continue to treat Microsoft stuff as the "name brand" for software development, like Kleenex for tissues, we won't see .NET going away. After all, if they're willing to bankroll $1M in license fees for a couple hundred devs to have VS Ultimate...

Comment Re:Anybody remember Turtle Beach? (Score 1) 76

Not sure I agree. This would only happen IF the media cartels and game developers stopped trying to push the envelope on hardware requirements. They won't, though, because their hardware "partners" keep egging them on to up the ante. As long as new AAA games keep shipping with ever-higher GPU requirements, you're not going to see discrete GPUs disappear.

With sound hardware, it stopped being mainstream because developers ran out of ideas for how to plausibly use increasing processing power for sound I/O. But I don't see graphics slowing down any time soon. Even the best-looking games still look decidedly cartoonish compared to real life; it's completely obvious. They'll just keep pushing pixel density and realism (more advanced shaders, effects, etc.) indefinitely as AAA games asymptotically approach real-life fidelity. We'll never GET there, but we keep getting closer; the catch is that the cost of each step closer grows exponentially while the perceptible benefit of that effort shrinks correspondingly (classic diminishing returns).

In an alternate world where they actually decide "enough is enough" and declare real-time 3D rendering to a flat screen (2.5D, if you like) to have reached its end state at DirectX 12 and 4K resolution, then yes, eventually Intel and AMD will develop low-power GPUs integrated on the CPU that are powerful enough to run those games at 60 fps. If that's the final target and nothing ever comes after it (until or unless we get true 3D or virtual reality or holodecks), then the Turtle Beach effect will take hold.

I don't think that will come true, though. If nothing else, game developers will start intentionally slowing down their games and piling on needless complexity, even where it doesn't actually improve visual fidelity, purely to benefit their hardware partners by making users want to upgrade to play the latest titles. The economic forces at work there are far more powerful than any technical factors.

Comment Wake me up when they stop using 28nm (Score 1) 76

In February of 2012, I ordered a graphics card made on the TSMC 28nm process node: the Radeon HD7970, which had hit the market on January 9, 2012. It has 4.3 billion transistors, a die size of 352 mm^2, and 2048 GCN cores.

In June of 2015, along comes the Radeon Fury X: a graphics card made on the very same TSMC 28nm process node, with 8.9 billion transistors, 4096 GCN cores, and a die size likely somewhere around 600 mm^2, based on a quadratic-fit regression using the existing GCN 1.1 parts as data points.
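
To spell out that estimate: it's just an ordinary least-squares fit of die area against transistor count with a quadratic model, where the data points are the published GCN 1.1 parts and the coefficients are my own, not anything AMD discloses:

    A(n) = a*n^2 + b*n + c, with (a, b, c) chosen to minimize sum_i [A_i - A(n_i)]^2

where n is the transistor count in billions and the (n_i, A_i) pairs are the known transistor counts and die areas of the GCN 1.1 chips; evaluating A(8.9) is what yields the ~600 mm^2 figure.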

Aside from notable improvements to the memory bandwidth, are you really going to sit here and tell me that this card is much more than a clever packing of two Tahiti (HD7970) chips onto a single die? They had to add a more effective cooling solution (liquid) to cope with the heat generated by cramming that many transistors into such a small area, which goes to show that they did fairly little in the way of power consumption savings.
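
The back-of-the-envelope arithmetic supports the two-Tahitis reading:

    2 x 4.3 billion = 8.6 billion ~= 8.9 billion transistors
    2 x 2048 = 4096 GCN cores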

What is the likelihood that, over those three years, they've made any significant innovation on the hardware front whatsoever, aside from stacking memory modules on top of one another?

To me this looks like an attempt to continue to milk yesterday's fabrication processes and throw in a few minor bones (like improved VCE, new API support) while not really improving in areas that count, like power efficiency, performance per compute core, cost per compute core, and overall performance per dollar.

When the HD7970's Tahiti cores are being sold as a re-branded R9 280X, and most games except Star Citizen don't seem to demand more than one Tahiti's worth of horsepower to run acceptably, there's very little motivation for me to "upgrade" to a chip that's basically two of what I already have packed onto one die with better cooling and faster memory. Especially when it's likely to come at a very steep price, far more than simply buying another R9 280X and running the pair in CrossFireX.

As a gamer, I think I'm going to keep on waiting until TSMC and AMD/Nvidia stop dragging their heels. I've had enough of 28nm; that's three distinct GPU families now that they've released on the same node, and it has gone on for too long. Time to move to a smaller process. Until then, they won't be getting my money.

Comment C++ makes sense here (Score 3, Interesting) 173

C++ is so flexible that you can write all your nasty "legwork" code (performance-sensitive stuff like the actual facial recognition, image data manipulation, etc.) *once* and call it from whatever UI layer sits on top.

Granted, it's probably somewhere between hard and impossible to write a platform-agnostic mobile UI layer that actually looks good on both Android and iOS, given how different the two platforms are in that regard; but even if they didn't bother with that and just wrote two entirely separate view layers, they can still separate out all the heavy lifting and "write once, compile in two places". Both Android and iOS have decent-to-good C++ support, so if you keep the core platform-independent, you get one optimized library that works on both major mobile platforms with no modifications.
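
As a rough sketch of the shape that takes (every name below is hypothetical, not from any actual app): keep the core in plain C++ behind a C ABI, so the same compiled code is callable from JNI on Android and from Objective-C/Swift on iOS.

    // face_core.cpp: illustrative platform-independent core.
    // The extern "C" boundary gives one binary interface usable from both
    // JNI (Android) and Objective-C/Swift (iOS).
    #include <cstdint>

    extern "C" {

    struct FaceBox { float x, y, w, h; };

    // Scan a grayscale buffer; write up to max_faces bounding boxes to out.
    // Returns the number of faces written. (Detection is stubbed; the real,
    // performance-sensitive C++ would live in this translation unit.)
    int32_t fc_detect_faces(const uint8_t* pixels, int32_t width,
                            int32_t height, FaceBox* out, int32_t max_faces) {
        if (pixels == nullptr || out == nullptr || max_faces <= 0) return 0;
        // ...heavy image processing shared verbatim by both platforms...
        out[0] = FaceBox{width * 0.25f, height * 0.25f,
                         width * 0.5f, height * 0.5f};  // placeholder box
        return 1;
    }

    }  // extern "C"

Each platform then needs only a thin adapter: on Android, a JNI function that pins the byte[] and forwards it here; on iOS, Swift or Objective-C calls fc_detect_faces() directly through the C header.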

Not sure I would go with C++ for something that was less performance-sensitive, but in this case, they can probably peg the CPU of a modern smartphone for at least a good fraction of a second with some of their heavier code.

Unless of course they are simply taking the image and uploading it to "the cloud" to do the facial recognition, in which case it's kind of a head-scratcher, since you don't need C++ to make HTTP requests.

Comment Don't touch my HTTPS (Score 1) 231

HTTPS should be truly end-to-end, with no MITM. Any software vendor that puts stuff on my computer to bypass this will not be getting my financial support in the future.

To be perfectly honest, I'm so strongly in favor of encrypting everything that I'll say this: if a site out there only serves traffic over plain HTTP, and some vendor wants to bundle malware on my system that injects content *only* into those unencrypted HTTP (not HTTPS) connections, I'm all for it. Go ahead and punish users and sites that run without TLS enabled. It'll just increase the pressure on webmasters and users to get TLS up and running on absolutely every host.

And with things like StartSSL and soon that Mozilla-funded free CA, there's really no excuse not to have a trusted cert (not a self-signed or snakeoil cert).

Let's encrypt the web. But don't you dare interfere with or modify my HTTPS traffic by any means; that will immediately get your company struck from the list of companies I'm willing to do business with.

Comment Response #34591525 (Score 1) 558

Hand-assembled desktop:
Case: Corsair Obsidian 650D
PSU: Corsair Professional Series Gold 1200W
Mobo: ASUS P8Z77-V
CPU: Core i7 3770K
RAM: 32 GiB DDR3-1600 (Komputerbay)
Storage: 3 x 4 TB HGST 7200rpm 3.5" + 1 x 4 TB Seagate Barracuda 7200rpm consumer HDDs (in hardware RAID10)
RAID controller: Adaptec 6405E
GPU1: Sapphire Radeon HD7970 (reference design with impeller)
GPU2 (in CrossFireX): XFX Radeon R9 280X (with three large 'standard' fans and clocked at GHz Edition speeds)
Soundcard: Creative SoundBlaster Z

Accessories:
Headphones: Steelseries H Wireless connected via bidirectional Optical (Mini-TOSLink) to the SoundBlaster Z
Display: Panasonic VIERA TC-L32DT30 HDTV (1080p60)
Keyboard: Das Keyboard Model S Professional
Mouse: Steelseries Sensei
Mat: Razer Vespula

The story:

Ordered the HD7970 in February 2012 and stuck it in my old box for a few months.

Ordered the CPU, mobo, case, PSU, two HDDs (one of which has since died), and RAM in April 2012, and built the new box, moving the HD7970 over from the old one. Handed the old box down to a family member along with my older GPU.

Ordered the Adaptec RAID controller a couple days after getting the box together and realizing I didn't like software RAID.

Ordered the SoundBlaster Z in February 2014 in preparation for the arrival of the Steelseries H Wireless (pre-order) in March 2014.

Ordered two HGST disks in March 2014 and combined them with the existing two Seagate disks to make a RAID10 array.

Ordered the R9 280X in June 2014 after realizing how cheap it was and that I could Crossfire it with my existing card because it's the same chipset.

One of the Seagate disks failed badly in August 2014, but I didn't lose the RAID array because the other three disks were fine. I overnighted a new HGST disk (same make and model as the other two) to replace it. At present, I have one of the original Seagate and three HGST disks still in the RAID array.

The configuration has been static since then.

Presently I estimate that this system has gone through about 75-80% of its service life *with me*. Since I'm a gamer, coder, virtual machine runner, and general all-around resource hog, I'll be looking to upgrade when Skylake mainstream processors land. I'll probably get a Skylake "K" (unlocked) i7. Of course, this system is perfectly serviceable for lighter duty gaming and web browsing, so I expect it will become the upgrade for the same family member who is using my old system today (though with a few retrofits due to some component failure).

The internals of the case are an absolute mess: a tangle of poorly organized cables. The only thing that keeps it even slightly manageable is the modular PSU; I removed (or never plugged in) all the molex connectors I'll never need.

One of the big limitations I've run up against with this system is the limited number of PCIe lanes and slots. I'll definitely weigh that more heavily when I buy my next system, though I understand Skylake's mainstream platform is going to expand the number of lanes anyway.

Right now, this system can play 2014-and-earlier AAA games at maximum detail (or very near to it; some settings are just so poorly optimized that they're not usable), even on a single GPU. With CrossFireX I just get more consistent framerates (AMD's Frame Pacing feature is a lifesaver).

I'm starting to see significant slowdowns, even in CrossFireX, on the latest AAA titles; Dragon Age: Inquisition and The Witcher 3 are giving me a lot of trouble. I'm not sure whether that's down to AMD's spotty driver maintenance, poor optimization by the developers, or Nvidia-favoring algorithms. I can probably live with the performance deficit for the remainder of this year, but I will definitely want to upgrade in time for Star Citizen.

Comment Re:Reliability (Score 1) 229

Well, Frontier is the 6th largest local exchange carrier and 5th largest DSL provider based on coverage area (citation: Wikipedia). Being that far down on the totem pole, I'm not surprised that they have to differentiate themselves with nice things like competent tech support. The ones that are really terrible are Comcast, TWC, and Verizon.

Point taken, though. They're not all bad. Just the two or three that the vast majority of people have access to.

Comment Reliability (Score 3, Insightful) 229

While I have many issues with ISPs that have been covered fairly well by other responses here, one issue that few have talked about is reliability of the service, and the ability to get it fixed when it breaks.

At least around here, it seems almost 1 out of every 2 people has significant reliability problems with their Internet connectivity and isn't sure how to fix them. When they call the ISP (whether it's cable, DSL, fiber, LTE, ...), the first thing support asks them to do is reboot their modem and/or router and/or computer. When that doesn't fix it, the tech doesn't know what else to do. They often send out a guy to take a look, who'll say your cable modem is shot and have you get a replacement. If it's under warranty or owned by the cable company, that might be free; if you own the equipment and it's out of warranty, you have to shell out for a new one.

But 8 times out of 10, replacing your modem/router does not fix the problem. Nor does switching from WiFi to ethernet, another common "fix". Sure, WiFi has its problems, but if your issue is actually with some part of the cable plant, especially a part that's buried underground, it can be nearly impossible to convince the company that the problem is there, let alone get them to dig it up and replace it.

I'm on a grandfathered unlimited LTE data plan as my primary Internet connection now. Cellular towers are pretty reliable, owing to their centralized infrastructure and the sheer number of users an outage would affect. I've had a few persistent issues with my LTE connection that lasted for weeks, but each time it magically went away after very little effort on my part, likely after they received hundreds of calls from other customers about the same problem and had to send someone up the tower to fix it.

Those with landlines to the premises are in a much more difficult situation. The company is likely to pin the problem on hardware you own, or on wiring installed within the walls of your house. They will not be willing to admit that the problem may lie with the line buried underground: acknowledging that would force them to pay a contractor a significant sum to dig it up and replace it. So instead, they treat each support call as a brand-new incident and forget the entire history of your problem, in which you've diligently worked by process of elimination to establish that it must be something in the line.

I remember years ago when we used deduction to determine that our DSL problem must lie with the phone line beyond the premises of our house. We replaced all our devices, hooked up to ethernet instead of WiFi, and even completely replaced all the DSL filters and phone line wiring in our house. The problem persisted. But the tech support guys kept experiencing a case of amnesia; every time we called, despite trying to ask them to refer to previous tickets and things we'd already tried, they just wanted us to reboot our modem, over and over and over and over again, as if that would help. This would happen even if we got the same tech support person on multiple calls.

At work, a lot of people come to me for advice on problems they're having with tech at home. I don't know why they do it; they just do. I get my fair share of laptop problems; Windows won't boot; they have a virus; whatever. But the #1 most frequent problem I get is that their Internet is unreliable and drops out all the time. Occasionally I'll find that replacing their cable modem fixes the problem, but in many more cases, we narrow it down to the landline, or at least to an ONT or something exterior to their dwelling that isn't owned by the resident -- at which point, you're basically at a dead-end.

The willingness to address problems, and to refer to case history to eliminate potential sources of problems, is seemingly absent from nearly all ISP support employees. And you wonder why their ACSI score is low...

Comment Re:Fuck you Very Much, Disney. (Score 1) 614

Their bottom line will definitely "feel" this, but it'll be in the positive direction. People will continue to visit Disney parks, buy Disney games and watch Disney movies. As far as the vast majority of the US population is concerned, the Disney company can do no wrong, and everything it produces is gold.

It's like the character Truxton Spangler said at the end of the last episode of the AMC series "Rubicon": "Do you really think anyone is going to give a shit?"

People don't care how a company treats its workers. That's just not a criterion people use to decide whether to do business with it. Unless Disney's U.S. products start having their manuals and product literature written in Indian English, I doubt anyone will notice, or care, that there was a major shift inside the company.

And to prevent everything from being written in Indian English, they'll just hire one or two fresh-out-of-school English majors (USAians) at $35k to translate all the public-facing Indian English documents into American English.

Comment Here's looking at you, Android (Score 2) 46

"Reduce the number of private trees" --> Yeah, like the ancient (by mainline standards) kernels in most releases of Android... The sooner GOOG learns how to play nice with the rest of the Linux developers and get their customizations contributed upstream, the better off we'll all be. Though, admittedly, AOSP is doing a pretty decent job of that nowadays. The more egregious sinners are the device manufacturers.

Comment Re:100% effectiveness against any unknown attacks (Score 2) 145

I would imagine that there is a mandatory access control policy on those systems, enforced by the kernel, that prevents processes from modifying their own code or any other process's code. If not, it sounds like they need another 100kb kernel module to make sure of it. I'm pretty sure SELinux and/or grsecurity can do this. On general-purpose systems it's usually enforced with carve-outs for the few programs that legitimately generate code at runtime (JIT compilers, for instance); on an embedded, high-risk system, you would just not allow it, period.
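
For what it's worth, the effect of such a policy is easy to probe from userspace. A minimal sketch, assuming a W^X-enforcing policy along the lines of SELinux's execmem denial or grsecurity/PaX MPROTECT:

    // wx_probe.cpp: ask the kernel for a writable+executable mapping.
    // Under a W^X-enforcing mandatory policy, the request should fail outright.
    #include <sys/mman.h>
    #include <cerrno>
    #include <cstdio>
    #include <cstring>

    int main() {
        void* p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            // EACCES/EPERM here means the mandatory policy is doing its job.
            std::printf("W+X mapping denied: %s\n", std::strerror(errno));
            return 0;
        }
        std::printf("W+X mapping allowed; nothing is enforcing W^X here.\n");
        munmap(p, 4096);
        return 0;
    }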

Comment Re:And all 9 Android/MIDI users were happy (Score 2) 106

Or how about Bluetooth audio that actually *works*? https://code.google.com/p/andr...

The thing to keep in mind is that Google doesn't have a one-track mind. They don't have a single developer who can only work on one feature at a time. MIDI support has very likely been a side project brewing for years that is finally stable enough to release. Meanwhile, plenty of other people, who have little to no idea what that guy was doing with MIDI, spend their time looking into problems like the ones you've identified; obviously they aren't done, because the fix isn't in production yet.

I consider it very unlikely that some other highly useful feature would have been worked on instead, had they opted not to add MIDI support. I'm not sure why they chose to feature it so prominently in TFS or to even mention something like that at Google I/O, but my guess is that it was a 20% project for someone who has a passion for MIDI, so it's not like Google could tell them, "stop doing this useful contribution to the Android open source project in your spare time". 20% time at Google is exactly for this type of thing.

Comment Re:USB Power Delivery? (Score 2) 106

I think this has a very practical purpose: because the charging circuit can operate at the same time as power flows out of the host, it enables something like this:

USB Keyboard = K
USB Mouse = M
Powered USB Hub (connected to wall socket) = H
MHL adapter (USB-C to HDMI with a female USB-C socket for accepting peripherals and power) = P
Smartphone = S

K and M --> H --> P --> S

USB hub provides power to K and M and provides data and charging to S

Not sure if this is how it will actually work, but they definitely needed to do something to enable a use case like this, and it sounds like it might just do it.
