
Comment: Re:Fork with extensions support (Score 1) 51

by allquixotic (#49754249) Attached to: Chrome For Android Is Now Almost Entirely Open Source

I switched over to uBlock about a month and a half ago, and it didn't noticeably improve performance. Chrome just renders the page faster. I have no idea how. It's magic. And when I say Chrome is faster, I mean it's faster *with ads* than Firefox *without ads*. You'd think the one that has less network and drawing work to do would be faster. A few string comparisons are nothing next to the amount of work that needs to be done to actually load those ads.

I'm using a Note 4, so there is ample CPU and RAM.

Comment: Fork with extensions support (Score 2) 51

by allquixotic (#49753537) Attached to: Chrome For Android Is Now Almost Entirely Open Source

I think the mainline Chrome for Android will never support extensions because they want to avoid opening up the "Pandora's Box" that will eventually lead to one of the popular adblockers showing up for Chrome on Android. And since they have such a huge installed base of phones running Chrome, there is a huge financial incentive for them to disallow adblocking extensions for Chrome.

Now that it's open source, I would be greatly appreciative if someone could work on a version of Chrom(ium/e) for Android that has either extensions support, or built-in support for AdBlock-style blocking (i.e., don't even make the HTTP request if the URL or DOM element matches a pattern).
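The kind of request-time filtering described above can be sketched in a few lines of Python. This is purely illustrative -- the patterns and the `should_block` helper are hypothetical, not any real extension or AdBlock API; the point is only that the check happens *before* the HTTP request is made:

```python
import re

# Hypothetical AdBlock-style filter list (real lists like EasyList are far larger).
BLOCK_PATTERNS = [
    re.compile(r"^https?://([^/]+\.)?ads\.example\.com/"),
    re.compile(r"/banner\d*\.(gif|jpg|png)$"),
]

def should_block(url: str) -> bool:
    """Return True if the URL matches any blocklist pattern."""
    return any(p.search(url) for p in BLOCK_PATTERNS)

# The network layer consults the filter first and never fetches blocked URLs:
print(should_block("http://ads.example.com/track.js"))    # blocked
print(should_block("https://example.com/article.html"))   # allowed
```

As the comment above notes, this per-URL check is cheap; the savings come from the network requests, parsing, and rendering that never happen.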

I want the (admittedly superior) performance of the optimized Blink layout engine and V8 JS engine, which no other browser that also offers extensions or built-in ad-blocking can match; I also want the Google-specific blobs (Chromecast support in particular); and I want/need AdBlock. Lacking this, I just end up using Firefox for Android, which has decent but not great performance, and for some reason has several site compatibility issues that Chrome doesn't.

It'd be awesome to see an adblocking fork of Chrome have a larger number of users than "mainline" Chrome.

Comment: Amazon does it better... weak (Score 1) 59

by allquixotic (#49737811) Attached to: Google Offers Cheap Cloud Computing For Low-Priority Tasks

Amazon's spot instances are better in every way. Not only are they usually cheaper than Google's fixed prices, but you can run them for way more than 24 hours. I have had a Spot Instance running for 4 months at about 1/25th the price of the on-demand instance, and way cheaper than Google's preemptible instance too.
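As a rough back-of-the-envelope comparison (the hourly rate here is a hypothetical placeholder, not an actual AWS or Google price -- only the 1/25th ratio comes from the experience above):

```python
ON_DEMAND_HOURLY = 0.50        # hypothetical on-demand rate, $/hr
SPOT_FRACTION = 1 / 25         # ~1/25th of on-demand, per the experience above
HOURS_4_MONTHS = 24 * 30 * 4   # ~4 months of continuous uptime

on_demand_cost = ON_DEMAND_HOURLY * HOURS_4_MONTHS
spot_cost = on_demand_cost * SPOT_FRACTION
print(f"on-demand: ${on_demand_cost:.2f}, spot: ${spot_cost:.2f}")
```

Under those assumptions, four months that would cost $1440 on demand comes to under $60 on spot -- which is why a 24-hour cap on the cheap tier matters so much.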

The limit of 24 hours seems to be designed to prevent people who want to run long-running tasks from using up the spare compute power on something like a VPN. That's fine with me; Amazon can have my business instead. :D

Comment: Re:Durability concerns valid, but... Tampering? (Score 1) 88

by allquixotic (#49728283) Attached to: Yubikey Neo Teardown and Durability Review

Good point; I didn't think of that.

It's a very, very rare situation where I have to actually surrender control of my key for long enough that a *physical* (mechanical) attack could take place, though. Even at airports, I just have to put my wallet through the X-ray scanner. A highly specialized robot designed explicitly for this purpose might be able to take apart the key, duplicate it and put it back together seamlessly in the few seconds it's under the hood there, but no human could. And it's apparently highly resistant to non-destructive attacks.

I *do* wish they'd make a YubiKey Neo with the same innards but with a very strong metal as the outer casing instead of plastic, and with a metal shroud over the USB connector instead of leaving the pins open to the environment (a corrosion and damage risk). It would make the end of it slightly thicker, but that's not a problem in my use case.

Comment: Durability concerns valid, but... Tampering? (Score 1) 88

by allquixotic (#49727371) Attached to: Yubikey Neo Teardown and Durability Review

Not sure what benefit "tampering" would provide. Why would you have to take it apart to extract its secrets, when you can just steal the person's smartphone/computer and the YubiKey, and use them in tandem to authenticate yourself as the user to whatever services they have locked behind it? Assuming you have exclusive physical access to the device, you can use the YubiKey all by itself to make it serve its purpose for you, the attacker.

Durability concerns are valid, but I keep it in my wallet, and it is working fine for me after some time (about a year and a half). I mainly use it via NFC, though. The USB connector being "raw" like that is probably more susceptible to damage than the NFC antenna, which is hidden inside the plastic shell.

Comment: Still in the super-early adopter phase (Score 3, Interesting) 227

If you're a developer wanting to write software or games that'll work with this kind of thing, now is a great time to gain some experience with the technology -- go out and buy one.

Otherwise, only those with a ridiculous amount of disposable income, or with some other compelling business justification, are likely to purchase an Oculus Rift, or even a lesser knockoff, for at least 5 years.

I don't think this will reach "power gamer" audiences for 5-7 years, and it won't reach the masses of the "core gamers" for probably close to 10 years.

We also need to make a few assumptions that may not necessarily be true:

(1) The capabilities of GPUs, especially at the mid-range and low end, must improve enough to push the number of pixels something this hungry demands. We were stalled for a number of years because TSMC dragged their feet on the 28nm process. If they delay another couple of years before going smaller than 20nm, the market probably will not be able to support $250-and-under GPUs that can power the Oculus Rift or anything similar.

(2) Game developers stop the exponential increase in scene complexity, fidelity, draw calls, shader complexity, etc. I don't see this slowing down at all; if anything, game developers are making their games heavier at a faster rate than the GPU manufacturers can keep up with. There used to be a time when you could buy a single discrete GPU of the highest make/model available on release day of a game, and you'd be able to run it with the maximum detail settings. Now, you either need SLI/CrossFireX, or you lower your resolution below what's "standard" for the present day. Unfortunately, if texture size and scene complexity continue to climb, it won't matter if the options menu has a detail slider -- if your GPU can't keep up with the required number of pixels per second, it doesn't matter whether you're using big textures or tiny ones.

If "VR" is really going to be a thing, we cannot continue business as usual in the game dev and GPU industries. GPU manufacturers have to pick up the slack and make up for YEARS of lost time. Game devs have to slow down the procession of ever-increasing game requirements.

If you're designing your games to run at 58 to 60 fps at 1080p on max detail with two 980s in SLI, no one is going to be able to install six 980s in SLI to chunk out the required amount of pixels for an Oculus Rift. And trust me, the people who'll be buying VR will not be willing to settle for medium detail. Not til the price of all this comes down to core gamer levels -- no more than $250 for the GPU, and $100-$200 for the VR kit.
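To put rough numbers on the pixel-throughput gap (the 2160x1200 @ 90 Hz panel is an assumption about the consumer headset; treat the figures as illustrative, and note real VR rendering adds supersampling overhead on top):

```python
def pixels_per_second(width: int, height: int, fps: int) -> int:
    """Raw fill-rate demand: pixels the GPU must produce each second."""
    return width * height * fps

monitor = pixels_per_second(1920, 1080, 60)  # a 1080p/60 desktop target
headset = pixels_per_second(2160, 1200, 90)  # assumed VR panel and refresh

# The headset needs close to double the raw pixel throughput.
print(headset / monitor)  # prints 1.875
```

So a rig that barely hits 60 fps at 1080p is nowhere near the sustained throughput a headset demands, even before the lens-distortion supersampling.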

Comment: Re:What's the Problem? (Score 1) 950

First: you misquoted me by leaving out the ", especially..." clause I added to it as a proviso. To make this explicit, that means:

(1) I think I should have the right to live my life the way I want to, regardless of whether anyone else uses any subjective or objective reasoning to determine that my choices are worse than some other choices I could have made instead.

(2) I think, based on evidence that has informed some of the choices I've made, that I've made good choices that are logical and grounded in scientific data, with an aim to minimizing the amount of harm I inflict on my environment and fellow human beings, while (hopefully...) doing my best to slightly improve the lives of at least a handful of the other people I interact with, either socially or professionally. While I could probably *technically* save some resources by offing myself, so could just about anyone, and I don't advocate suicide. I advocate making the most of what we have, while we have it.

Even if I'm wrong about (2), I think my point in (1) is still able to stand on its own. If I sound defensive, it's because someone else, namely this researcher, has "gone on the offensive" against a certain set of characteristics or behaviors or choices, many of which match some of the choices and behaviors I see in myself. Naturally, if you do not fall within Zimbardo's targeting reticule, you would have no reason to be defensive. I do, and I do.

The reason I am concerned is that influential researchers like Zimbardo are able to influence public policy to a significant level, especially because he is the guy who published that seminal result about the Stanford prison experiment. That WAS great work, indeed, and insightful information about human psychology (it also did a great service by opening up a can of worms for the public and other psychologists to dig into for the next ~50 years) -- but that doesn't entitle him to start throwing people like me under the bus.

Consider the distinction between these two statements (neither of which I am implying that Zimbardo actually said; this is just for illustration purposes):

(A): I observe, empirically, that people who exhibit 'foo' behaviors/choices experience 'bar' consequences later.

(B): I observe, empirically, that people who exhibit 'foo' behaviors/choices experience 'bar' consequences later. Therefore, 'foo' is bad, and anyone who behaves like / chooses 'foo' should reexamine their life and maybe stop doing 'foo'.

The problem with (B) is that there may be countless *positive* consequences besides 'bar' (we're assuming 'bar' is something that most people would consider undesirable) that the researcher did not anticipate. Furthermore, there may be disagreement over whether 'bar' is even "bad" at all; maybe it's fine from a certain point of view. In the specifics of this situation, I feel that there's a little bit of both going on.

If Zimbardo stopped at (A), and if TFA stopped at (A), and TFS stopped at (A), I'd be fine. I wouldn't even bat an eyelash. But, not having read the actual research study, TFS and TFA *portray* it as more like (B). If that's not what Zimbardo said, then my beef is with TFA, not Zimbardo. The article definitely reads like the research identifies a societal "problem" that needs to be fixed.

Now, let me be clear. Under normal circumstances, I do not go around living my life by telling people who've made different choices than me that they're wrong, or that they're wasting resources, or making problems worse, or whatever. That's not the type of personality I have. I'm very much "live and let live". But when someone starts taking pot-shots at my choices, I feel fully entitled to rebut their arguments, and maybe even argue past them a little bit if I feel like it, because it has a very personal impact on me when someone does that to me.

In fact, even though I feel that some of my choices are objectively better than the alternatives, I am very hesitant to point out to others, normally, that I disagree with their choices. But unfortunately, when situations like this arise, I can't resist the temptation to air out my opinions, and sometimes people who fall under *my* targeting reticule might get upset. If that's the case here, I'm sorry.

Comment: What's the Problem? (Score 4, Interesting) 950

While the guys with Neanderthal brains are out at bars trying to get laid, I'm living a comfortable, safe, and happy life. While they're getting exposed to STDs, drugs, accidental pregnancies, rough divorce settlements, paying child support, spouse abuse (either as the perpetrator or the victim), defaulting on their home because their spouse talked them into living above their means, etc. etc. etc..... I am living in a small single-bed apartment alone, making good money, playing video games (mostly MMOs) for social interaction, and listening to music to tame my biological cravings.

Not to mention, my choice not to reproduce helps the population problem -- at least in the span of a few decades, if not the long term. There is not a single problem that humankind has that can be solved by making more people. In fact, making more people does exactly the opposite for nearly all of our problems; it makes them more severe and reduces the length of time we have until those problems erupt into global catastrophes.

I don't *want or need* a woman. I don't *want or need* a romantic relationship with anyone. I don't want kids. I don't want any of the associated problems that come with either. It's been completely wired out of me.

I am basically an exact description of the type of person the study was about. And yet, I am not unhappy; I am not unsuccessful; I am not a loser. I am an environmentally-conscious, socially-responsible citizen, supporter of my community, dedicated employee, educated voter and participant in the political process, and I have my fair share of social interaction, too, on the order of 6 to 8 hours per day on MMOs. Just because I don't touch the people I socialize with doesn't somehow make me diseased. I am a very social person. I am "socially intelligent". I can pick up on body language cues, implied meaning in conversation, the intent behind vocal intonation, the significance of a touch. I deal with people in meatspace for eight hours a day, and with people in virtual space for another 6 to 8.

Medicine and academia have a tendency to call anything abnormal a disease, or a problem to be solved. Sometimes change is for the better. Sometimes the status quo is the worse of the two.

In short: I would prefer to continue to be who I am, in the situation I am in, rather than be the epitome of "masculinity" this researcher thinks I need to be, even if I had the means to become that. And quite honestly, I'm pretty sure I do have the means to become that, if I put my mind to it. I don't put my mind to it because *it's not how I want to live my life.* Who the fuck is Philip Zimbardo to tell me that my life choices are wrong, especially when, by all the objective measurements his ilk thrive on, I am of a far greater net benefit to society than many of the so-called "masculine" men he thinks I should be?

Comment: No, Stupid (Score 1) 91

by allquixotic (#49646593) Attached to: Superfish Injects Ads In 1 In 25 Google Page Views

The relevant software products that are getting extensions sideloaded into them -- Firefox and Chrome -- are both open source. If a vendor like Lenovo wants to put ads in your browser with an extension, what do you think is going to happen when Google shuts off outside extensions in Chrome? That's right -- they're going to ship a fork of Chromium and call it "Lenovo Browser" and make it the default browser. You read it here first, folks.

The solution, for consumers, is simple. Don't use the pre-loaded OS installed on your system. Use a program to get your product key back, then wipe and reinstall from the original OS media. Or if you happen to be able to tolerate a non-Windows OS, just install one of those.

It's also worth mentioning that, as long as Chromium or Firefox is open source, people who want to use ad-blockers will be able to use them, no matter how hard Google tries to stop people from using them. Even if Google used their might to convince Mozilla to take Firefox closed source, another community fork would spring up to maintain Firefox and keep it up to date.

These companies need to understand that you can't strong-arm an idea. Open source code is basically an idea, and as long as there are people, there will be people building open source projects that do things that make you lose money. If that keeps you from getting any sleep at night, tough cookies. It's exactly the same reason that we can't defeat terrorism no matter how many people we kill. You can't kill your way through an idea, unless you kill every last human on the planet. This is especially true when tightening your grip makes people want to do the forbidden thing *even more* -- and ad blocking has this characteristic, too.

Comment: *A* kernel, but not *the only* kernel, to succeed. (Score 1) 469

by allquixotic (#49639991) Attached to: Why Was Linux the Kernel That Succeeded?

Linux is definitely a success story, both according to its original author and many in the community. However, it's not *the only* success story of a widely-used open source OS kernel other than Windows.

For example, the OpenSolaris kernel (and the rest of the operating system) is free software and open source, mature, well-tested, stable, and has a pretty large install base. Solaris is a different matter entirely since it's no longer open source, but the community that used to be behind OpenSolaris is still very active on e.g. Illumos, SmartOS, etc.

Sure, OpenSolaris is no longer a legitimate contender for the desktop (there was a time around 2006-2008 when it was more or less on-par with GNU/Linux on the desktop, believe it or not!), but it's still widely deployed on servers for all sorts of tasks, and it has an incredible compatibility story, too. You can run binaries compiled in the early 90s on a modern SmartOS machine. The Linux devs would just tell you to recompile from source, after fixing any build errors.

And let's not forget that (Free/Open/Net)BSD are also widely used. Again, their viability for *modern* gaming/desktop use is pretty limited (though some would argue otherwise, they're still way behind Linux, if for no other reason than proprietary games only run properly on a "real" Linux kernel), but *BSD OSes are used in a lot of routers, home servers, and yes, production servers for pretty important websites and web services.

I don't believe that Linux is the only winner in the battle for having a viable FOSS operating system based on a FOSS kernel. It's definitely the best we have when it comes to playing games and watching video, but that's because a lot of the proprietary elements that want to protect their content are only willing to support platforms with a huge landslide of installed users, and *BSD and *Solaris/Illumos/SmartOS are definitely not that.

But it would be irresponsible for us to judge winners and losers solely by their ability to present a nice GUI, since we use software for a lot of things that don't need a GUI, or can provide their GUI via a web app (and run that web app on whateverOS, be it Windows, Mac, Linux, Android, etc.), and can very viably be hosted -- with high performance and security to boot -- on *BSD or Solaris derivatives.

Comment: My IT folks wait *months* before deploying updates (Score 1) 141

by allquixotic (#49624513) Attached to: Microsoft: No More 'Patch Tuesday' For Windows 10 Home Users

My IT shop waits at least a few weeks, if not months, before deploying updates. For critical security updates they usually wait about 2 weeks after the Patch Tuesday the patch ships on. For everything else, they eventually roll them out, but it can take a very, very long time.

I'm not sure exactly what kind of testing they're doing, or if they are just waiting for users to download the patch and see if it breaks things (prompting Microsoft to pull the patch), but we never have the latest and greatest anyway.

Honestly, I can't really blame them. There have been countless "bad" updates out of Microsoft in recent years, that break certain programs or BSOD the system or even make it unbootable. However, I don't have a sense that the testing they're doing on these updates internally is adding any value. Probably best just to take a "wait and see" approach: if the update isn't pulled in 2-3 weeks after it lands, it's probably fine.
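That "wait and see" gate is simple enough to sketch; the three-week soak window and the `ready_to_deploy` helper are hypothetical, just mirroring the policy described above:

```python
from datetime import date, timedelta

SOAK_PERIOD = timedelta(weeks=3)  # hypothetical "wait and see" window

def ready_to_deploy(released: date, pulled: bool, today: date) -> bool:
    """Deploy only if the vendor hasn't pulled the patch during the soak window."""
    return not pulled and (today - released) >= SOAK_PERIOD

# A patch released June 1st, still live three weeks later, passes the gate:
print(ready_to_deploy(date(2015, 6, 1), pulled=False, today=date(2015, 6, 22)))
```

No in-house testing required -- the wider user base does the testing, and the vendor's pull/no-pull decision is the signal.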

Comment: Duh? (Score 1) 73

by allquixotic (#49597395) Attached to: Space Radiation May Alter Astronauts' Neurons

Exposure to radiation from space is probabilistic, and there have been many more living human-hours down here on Earth than up in space. Isn't it reasonable to assume that at least a handful of people down here have been unlucky enough to just *happen* to get bombarded with an unusually high dose of gamma radiation, and to have the same thing happen to their neurons?

And then there are all these new-fangled manmade sources of gamma rays that we've been blowing up and/or using for electricity since the 40s...

Comment: Wrong pop diva (Score 1) 263

by allquixotic (#49587111) Attached to: Crashing iPad App Grounds Dozens of American Airline Flights

If you're going to reference a pop song ("haters gonna hate"), you chose the wrong pop diva. Rather, you should have used the more apt "Call Me Maybe" by Carly Rae Jepsen (whose surname is nearly "Jeppesen"):

Hey, I just bought you
And this is crazy
But here's my WiFi
So update me maybe?

Comment: Need certified, rugged "iPad-like" devices (Score 1) 263

by allquixotic (#49587057) Attached to: Crashing iPad App Grounds Dozens of American Airline Flights

The problem isn't that they are relying on a single vendor, nor that they chose iOS over Android, or anything else silly like that.

The problem here is that they are using technology that's functionally tested for basic consumer use in a situation that suggests (and may soon require) a mission-critical level of software and hardware certification.

A lot of people (business people / decision-makers, mostly) don't seem to understand the difference between consumer-oriented hardware/software and safety-critical hardware/software.

Safety-critical hardware/software is designed, developed and tested with security, safety, and stability principles that are not only there "in theory", but are also tested for in practice, with a rigorous validation program that ensures the correct operation of the system. On the hardware front, the device is built to higher standards, such that the core chassis of, say, an iPad-like device would be able to withstand more shock than a consumer-oriented iPad with an Otterbox on.

If an airplane goes into a sudden roll or dive that sends the iPad flying across the cockpit and shatters the screen, what then? The pilots need the information in that device to know how to follow the proper procedures to continue flying the aircraft safely. Without it, they can take their best guess and rely on instinct for how to operate the systems, but you cannot expect every pilot to memorize every contingency procedure. That's why EFBs exist in the first place.

If you can't ensure that your tablet is at least as rugged as a hardback book, you shouldn't be using it on an airplane.

The problem is that there are few or no vendors of extremely rugged hardware/software solutions that are available in a thin and light form factor akin to an iPad. The safety-critical rugged device sector is 5 to 10 years behind the state-of-the-art consumer device space. That's because it takes many more months to design and ship a device with a much higher level of physical and digital assurance of correct operation. The airlines seem willing to take the risk of failure of these consumer devices, because they would rather have the latest features, like capacitive multitouch, ultra-slim design, retina displays, etc. instead of using something whose technology was state-of-the-art in 2008, but is built like a brick, both physically and software-wise.

We've seen MANY first-hand examples of consumer electronics devices from ALL vendors having extremely dangerous stability and security bugs that would render the device inoperable for the use case the airlines are using it for. We can't take the risk that this important tool will be unavailable when they need it. AA and other carriers need to stop using iPads as replacements for the flight bag, and either pay for the R&D for a proper rugged replacement, or go back to paper.

I'll conclude by saying that the EFB/flight bag is, in my opinion, a safety-critical tool aboard all except the most sophisticated airplanes (e.g. the Airbus A380, which has a computer built into the cockpit, on an LCD screen, that actually tells the pilots what to do to resolve problems). The airlines are taking a big risk by implementing this with consumer technology. If they "do it right" and work with a vendor that produces rugged industrial mobile devices, it will cost more and have a much longer development cycle than shipping iPads. The devices will almost certainly be heavier, have less "whizzy" displays and UI, have a shorter battery life, and be harder to upgrade if additional features are desired later. But they will have a MUCH higher level of assurance that they will continue to operate correctly, both hardware- and software-wise, in the emergencies when they are needed most. It still won't be impossible that they'll break, but it'll be much less likely.
