Now Uber shows up and declares itself exempt from taxi laws because it doesn't employ taxi drivers; it just makes money by "soliciting" "ridesharing", which is somehow different even though it seems to work exactly the same*. And they're not willing to ensure that their drivers have valid licenses because they think they're not subject to the law.
Now you have lots of taxi drivers in all but name driving around without a license and you probably can't even get Uber to disclose their identities so you can fine them because, again, Uber thinks there's no legal basis for this.
I think it's fairly easy to see why Uber isn't very popular with municipalities.
* Technically it's a form of outsourcing but to my knowledge they don't require the drivers to be their own proper taxi businesses so Uber is still blatantly ignoring the law by contracting with people they know don't qualify under existing regulations.
The problem is that GPUs usually have a uniform memory layout. If your GPU advertises 4 GiB of RAM then all 4 GiB of it behave in pretty much the same way. Accessing one part of the memory does not significantly affect accesses to other parts. Thus it's unnecessary to take special care in how to structure your memory handling; you just use whatever's there.
The 970, as I understand it, has a non-uniform memory layout where the segment between 0x00000000 and 0xDFFFFFFF cannot be accessed at the same time as the segment between 0xE0000000 and 0xFFFFFFFF. Try to access one segment and all accesses to the other segment will stall until this one access has been handled.
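(Those addresses work out exactly to a 3.5 GiB / 0.5 GiB split; a quick sanity check using the boundary above:)

```python
# Segment boundary taken from the addresses above; sizes in GiB.
FAST_END = 0xE0000000       # first address of the upper (slow) segment
TOTAL    = 0x100000000      # 4 GiB address space

fast_gib = FAST_END / 2**30            # size of the lower segment
slow_gib = (TOTAL - FAST_END) / 2**30  # size of the upper segment
print(fast_gib, slow_gib)  # 3.5 0.5
```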
This could be used without appreciable performance impacts if the software accessing the memory is aware of it and specially structures its memory management so that accesses to the upper segment are sparse and happen in bulk (i.e. it switches between blocks of lower segment accesses and blocks of higher segment accesses). That's the kind of optimization you see in game console programming and it actually smells kind of like how PS3 games had to structure their memory handling around the Cell's peculiarities. If I remember correctly, this made the PS3 somewhat unpopular to develop for.
Of course, no one in their right mind is going to add special Geforce GTX 970-specific logic to their game (potentially having to restructure half the engine for it) just to make best use of the hardware. Even making a codepath that detects the 970 and avoids the upper 0.5 GiB of VRAM entirely is unlikely. Thus, in situations where more than 3.5 GiB of VRAM are needed, the 970 will exhibit stuttering because of stalled memory accesses and there's not much anyone can do about it - except Nvidia, who could release a driver that reports the 970 as having 3.5 GiB of RAM.
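The batching idea can be illustrated with a toy cost model. The numbers here are made up for illustration (a segment switch is simply modeled as costing far more than a normal access); they are not measured 970 behavior:

```python
# Toy cost model: switching between the two memory segments stalls,
# accesses that stay within the current segment are cheap.
STALL = 100  # hypothetical penalty for touching the other segment
FAST = 1     # hypothetical cost of a same-segment access

def cost(pattern):
    """Total cost of a sequence of segment accesses ('low'/'high')."""
    total, prev = 0, None
    for seg in pattern:
        total += STALL if (prev is not None and seg != prev) else FAST
        prev = seg
    return total

interleaved = ['low', 'high'] * 500                 # naive: ping-pong between segments
batched = ['low'] * 500 + ['high'] * 500            # aware: bulk up accesses per segment
print(cost(interleaved) > cost(batched))  # True
```

The point is simply that the batched pattern pays the switch penalty once instead of on every other access; software that doesn't know about the split gets the interleaved pattern by accident.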
(I find it interesting how a Google search for "VRAM" ended up having several articles about the 970's slowness on the first page. I have never searched for the 970 before; my 660 from 2012 still has more power than I need.)
I've also had GPUs that just went completely tits up requiring a system board replacement... I'm probably forgetting a lot of the problems now, but the most reliable Macs I ever had weren't built by Apple.
That one probably wasn't Apple's fault. Apple issued a recall for certain MBPs because Nvidia managed to screw up the packaging of the Geforce 8600M GT so badly that the thermal stress of running caused the chip to slowly break itself apart.
Not that Apple is free of sin. I had an iBook with a power jack that liked to desolder itself and my current MBP has an Nvidia GPU and Yosemite, which is an explosive combination due to Yosemite's Nvidia GPU driver being unstable when switching between the Intel GPU and the Nvidia one. Apple does screw up. But not every problem is their fault - and, in fact, their speed in issuing a recall is usually directly proportional to how much it isn't. The hand grenades Sony sold them instead of regular battery packs were recalled pretty quickly, if I recall correctly.
My next Mac will still be a Lenovo but that's mainly because I find the Retina MBPs highly unappealing. While Apple has terrible customer support, my Macs do have a tendency to outlive AppleCare. In fact, the only one that really died was the one with the 8600M GT. That one died once during the AppleCare period and once shortly after it ended - it turned out that the replacement GPUs were also faulty.
(As for speed, my experiences differ but I have to deal with UAC a lot and UAC is easily the slowest privilege escalation method on any major operating system. I'd take (g)ksudo over it any day.)
As has been pointed out, they've misdeployed this to the wrong market but still - it's Samsung. Their hardware is nice but they're not terribly strong on the software side.
Okay, actually it just booted into the old Mountain Lion volume on the first HDD because the Mac keeps the preferred boot volume in NVRAM. So when clearing your NVRAM, keep in mind that the Mac will boot into whatever system volume it finds first unless you tell it otherwise.
What device would you be carrying with which you expect to use a web application over Wi-Fi? Or do "normal" people still carry laptops?
I'd ask "Do 'normal' people still carry tablets?" as the tablet-on-the-go fad seems to have cooled off quite a bit. I see a lot of people with smartphones and a sizable number of people with laptops but pretty much nobody with a tablet. Tablets are commonly found in homes but they definitely don't seem to be popular for mobile computing.
This might be because tablets suck for the two things I commonly see people do with their laptops on the train: Watching movies (big stationary screen, easy to view with more than one person) and working (big screen, physical keyboard and sometimes software that has no smartphone equivalent).
XML-RPC should mainly be disabled because of pingbacks; not too long ago these could be exploited to make your site participate in a DDoS attack. XML-RPC itself is not a significant security risk these days. You can go for a more nuanced approach by only disabling the functions used for pingbacks (there's a hook for that, too) but if you don't need XML-RPC it might be easier to just rename or delete the entire file.
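For context, a pingback is just an XML-RPC call that asks your server to fetch an attacker-chosen URL, which is what made the reflection attack possible. A sketch of the request body an attacker would POST to xmlrpc.php (both URLs are placeholders):

```python
import xmlrpc.client

# Build (but don't send) a pingback request. A server receiving this will
# try to fetch the "source" URL to verify the link - that outbound request
# is what the DDoS reflection attacks abused.
body = xmlrpc.client.dumps(
    ("http://source.example/post", "http://target.example/page"),  # placeholder URLs
    methodname="pingback.ping",
)
print("pingback.ping" in body)  # True
```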
Trackbacks should be disabled because of trackback spam. Yes, you can install plugins that help you deal with it but - seriously - pretty much no WordPress-as-a-CMS user cares about trackbacks (or pingbacks, for that matter) in the first place. Disabling them means fewer hassles.
Again, these days the biggest security risk is badly-written plugins. We once had an infected WordPress install where it turned out that the attacker never compromised any user account. They didn't need to, because a plugin allowed them to execute PHP code on the server. They just injected their attack code directly into WordPress and could do whatever they wanted, such as displaying dodgy pharma ads without even touching the database. That's the kind of danger unreviewed plugins pose.
WordPress can be quite capable when managed correctly. Just don't make the mistake of assuming that you can just install a plugin and get new functionality without any risk. Badly-written plugins are common and they can screw you just as much as an insecure admin account can.
Some WordPress plugins are well-written and secure. Most WordPress plugins are messy and were written by people who haven't even heard of code injections. If you want your WordPress to be secure, don't use plugins. Ever. At least not without a full code review by someone who knows how to write secure code in PHP.
Seriously. Most WordPress CVEs these days are for plugins and after having seen the code of a few dozen plugins I can see why. Do not trust a WordPress plugin you have not verified yourself.
You can use (a hardened) WordPress without much issue, apart from poor performance compared to plain websites. If you intend to extend it in any way, however, you really should do a full code review of every plugin you use, every time it is installed or updated. That means your customers either get their WordPress without plugins or further support, or you rack up the billable hours doing code reviews for them.
The company I work at is actually migrating away from WordPress because our customers demand non-core functionality and keeping the plugins reasonably secure is simply too expensive.
Generally, customers expect future visitors to use something similar to what they themselves use. If the customer uses IE8 they will assume that a significant number of visitors will also use IE8. Telling the customer to switch to Firefox is useless as they can't assume that all visitors will now also magically have switched to Firefox. The only argument that does work is if we can show to them that the IE version in question has a negligible market share.
If there were a legitimate new version of IE for old Windowses, it might help in driving old versions out of the market, even if it only gets the IE diehards to upgrade. Over here in Germany we already had mainstream media telling people to stop using IE (especially after the DHS and the BSI issued warnings); we might very well see computer mags reporting on an open-sourced IE for those who can't switch. That would further reduce market share and make the day when IE8/9 can be safely ignored come sooner.
(Then all we need to do is get rid of iOS <8 and Android <4.4 and we might even be able to ditch most remaining vendor prefixes.)
People generally don't use these versions of IE because some internal web app requires them. They use them because they're the most recent versions available for their version of Windows. And they're not going to upgrade Windows because they don't need to; their current setup works for them and there's no business case for upgrading before something breaks.
Internet Explorer is tied to Windows. You can't install IE10 on Vista. It's simply not possible. That means that for any SME running Vista, IE9 is the latest version of IE. And they expect their shiny new website to be equally shiny in IE9. And no, they aren't going to buy new computers or install a different browser because their web designer told them to. (Plus, they know full well that their new site's visitors might also run IE, so "just use a different browser" won't convince them even if they do switch browsers themselves.)
If Windows 8.1 were free and had the same requirements and UI as Windows Vista you could perhaps convince some of these people to upgrade. It isn't, though, and that means that either you cater to their browser choice (which usually means the latest version of IE supported by the oldest version of Windows they run) or they'll take their business elsewhere.
Having an open Trident/Son-of-Trident would at least allow people to backport it. If the mainstream tech media reported on it, word might actually reach these businesses and they might consider installing the latest OpenIE. Not all of them, but perhaps enough to drive the old-IE user base further down until we can finally declare 8 and 9 irrelevant like 6 and 7 already are. Even Microsoft wants that to happen.