Comment: Speak for yourself! ... at least in the case of FF (Score 4, Insightful) 93

by allquixotic (#49235223) Attached to: Linux Kernel Adopts 'Code of Conflict'

You seem to take it for granted that the changes made to Firefox were universally bad for all users, and that everyone hates them.

As a regular user of Firefox on multiple platforms (Android, Fedora, and Windows), I have no problem with the changes they have made. I like the customization of the new menus and I like my tabs on top (though I could of course revert to the old way if I wanted to, because they made it configurable). Almost everything they've done with Firefox from the 3.x releases up to the latest stable has been a net positive change for me, even when I've occasionally scratched my head at questionable decisions. Even their choices to completely disable certain broken websites have turned out for the better, because in every case where I depended on such a broken website, the developers have come around to fixing the problem instead of making people run IE 6 or a deliberately insecure patched browser.

The UI changes are not what is killing Firefox. The disruptive security policy enhancements that break sites are not what is killing Firefox.

What's killing Firefox is the critical mass of Google Chrome: it comes pre-loaded on PCs straight out of the shop, it's much faster for general use (faster page rendering and startup, so don't give me JS benchmark results), and it's compatible with more sites. There is huge word-of-mouth support for Chrome among Joe User types now -- people who swore by IE just a couple of years ago. There's also Chrome's app store, which is leading many third-party devs to release stuff that only supports Chrome, leaving competitors in the dust. Firefox may be able to match Chrome in some limited respects on some of these things, but it simply doesn't have the same word-of-mouth support that Chrome does among the vast majority of users. Oh, and Chrome is the default browser on the mobile OS with the largest installed base in the world (since ICS, anyway).

Rather than Firefox being especially bad in any particular way (except for its abysmal startup time on mechanical hard drives when its files aren't in the page cache), it's pretty much just that Chrome is better for the regular user who doesn't care about privacy, only functionality and speed. Mozilla is losing to a superior competitor. Even though they are accelerating the rate at which Firefox improves, Chrome is improving faster than they can keep up with.

Comment: Re:Fuck Off Dice (Score 1) 292

by allquixotic (#49225965) Attached to: Do Tech Companies Ask For Way Too Much From Job Candidates?

Nope. On my desktop at home with an i7-3770K and 32 GB of RAM, maybe. On my phone or my work computer, not a chance.

With an i5-2520M, 8 GB of RAM, a Samsung 850 Pro 1 TB SSD, a 10 Mbps internet connection (shared among a handful of users), integrated graphics, and Chrome 41, pages on Dice websites with Adblock disabled take about 10 to 35 seconds to finish loading and become responsive to scrolling. Much of that time is spent parsing all the scripts and elements on the page and loading them into the GPU, as semi-scientifically evidenced by pegged CPU usage in the task manager, the whirring of the CPU/GPU fans, the extremely low render framerate (judged by the responsiveness of scrolling), and the time it takes for the entire page to appear.

With uBlock enabled, that drops to 3 to 5 seconds, with most of that time spent in the network stack actually downloading the non-ad resources on the page, and very little time spent on rendering and parsing. And that's not even with NoScript on -- all the functionality of the site except for the ads is being loaded and executed.
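If anyone wants to reproduce numbers like these without resorting to a stopwatch and fan noise, here's a minimal sketch using Selenium and the browser's Navigation Timing data (it assumes Selenium and chromedriver are installed, and the URL is just a placeholder) -- run it once with the ad blocker loaded and once without:

```python
# Minimal sketch: report full page load time via the Navigation Timing API.
# Assumes Selenium and chromedriver are installed; the URL is just a placeholder.
from selenium import webdriver

URL = "http://slashdot.org/"  # swap in whichever page you want to time

options = webdriver.ChromeOptions()
# Point at a profile that has uBlock installed for the "ad blocker on" run:
# options.add_argument("--user-data-dir=/path/to/profile-with-ublock")
driver = webdriver.Chrome(options=options)
try:
    driver.get(URL)  # blocks until the page's load event fires
    load_ms = driver.execute_script(
        "return window.performance.timing.loadEventEnd"
        " - window.performance.timing.navigationStart;"
    )
    print("load event finished %.1f s after navigation started" % (load_ms / 1000.0))
finally:
    driver.quit()
```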

The difference is significantly less on my beastly desktop at home, but if you're going to say that everyone should always use beastly desktops or not bother using the web, I'd like you to tell that to my employer so they'll buy me a $3000 laptop.

Comment: FACT: The Internet is For... (Score 1) 375

by allquixotic (#49166231) Attached to: Google Wants To Rank Websites Based On Facts Not Links

The Internet is for porn. It's universally accepted, so therefore it must be true!

Now every library is getting bumped way down in the page rankings, because their "About Us" pages claim something along the lines of "the Internet is a place for gaining knowledge, communicating ideas," etc. etc.

Comment: Re:DoD (along with everyone else) wants clean orbi (Score 1) 253

by allquixotic (#49166041) Attached to: 20-Year-Old Military Weather Satellite Explodes In Orbit

I guess "armor" would help to a certain extent depending on the circumstances, but remember, the relative velocity between a satellite and a tiny piece of space debris can *far* exceed the velocity of a bullet on impact down here in the thick part of the atmosphere, because you don't have all that drag.

A piece of space debris the size of a fingernail can cause as much destruction in space as a much heavier bullet does in atmosphere. I don't think our materials science can produce anything strong enough to withstand the impact of even a small actual bullet (or other matter of equivalent mass) traveling at a typical orbital relative velocity.
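To put some rough numbers on that (my own back-of-the-envelope figures, not anything from the article): a roughly fingernail-sized, 1-gram chip at a 10 km/s closing speed carries dozens of times the kinetic energy of a rifle bullet at muzzle velocity.

```python
# Back-of-the-envelope comparison with illustrative numbers (not from the article):
# kinetic energy E = 0.5 * m * v^2

def kinetic_energy_joules(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s ** 2

fragment = kinetic_energy_joules(0.001, 10000.0)  # ~1 g debris chip, 10 km/s closing speed
bullet = kinetic_energy_joules(0.004, 900.0)      # ~4 g rifle bullet, ~900 m/s muzzle velocity

print("debris chip : %8.0f J" % fragment)
print("rifle bullet: %8.0f J" % bullet)
print("ratio       : %.0f x" % (fragment / bullet))
```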

Comment: Space Junk Reentry question (Score 1) 253

by allquixotic (#49165905) Attached to: 20-Year-Old Military Weather Satellite Explodes In Orbit

A question for those of you with some knowledge of how satellites are set up, where they're put in orbit, their probable trajectories, etc., keeping in mind that my "knowledge" of orbital mechanics is only slightly more sophisticated than that of an enthusiastic Kerbal Space Program player:

Is the space junk created by this explosion likely to remain around for a very long time (on the human lifespan timescale), or are we looking at a high probability that most/all of the debris will reenter the atmosphere and burn up some time soon?

If we assume the debris expanded uniformly from a point at the satellite's center of mass, a "sphere" of debris would be created, where some of it receives a fairly large push down toward the atmosphere and likely reenters very soon... but what about the pieces that were given a large delta-v *away* from the atmosphere? What will happen to those?
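For what it's worth, my KSP-level intuition, sketched out below with the vis-viva relation and made-up numbers (an ~800 km circular orbit and 100 m/s kicks; I don't know the real altitude or fragment velocities), is that every fragment's new orbit still passes through the explosion point, so even the pieces kicked "away" from the atmosphere can't raise their perigee above the original altitude -- they just end up on more eccentric orbits:

```python
# Rough sketch of what an impulsive kick does to a fragment's orbit, using the
# vis-viva relation. The starting altitude and delta-v values are made up for
# illustration; this is KSP-level reasoning, not a real debris model.
import math

MU = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3   # mean Earth radius, m

def orbit_after_kick(alt_m, dv_prograde, dv_radial_out):
    """Return (perigee_km, apogee_km) after an impulse applied on a circular orbit."""
    r = R_EARTH + alt_m
    v_circ = math.sqrt(MU / r)
    v_t = v_circ + dv_prograde                    # along-track (tangential) speed
    v_r = dv_radial_out                           # radial (away-from-Earth) speed
    energy = (v_t ** 2 + v_r ** 2) / 2 - MU / r   # specific orbital energy
    a = -MU / (2 * energy)                        # semi-major axis
    h = r * v_t                                   # specific angular momentum
    e = math.sqrt(max(0.0, 1 + 2 * energy * h ** 2 / MU ** 2))
    return (a * (1 - e) - R_EARTH) / 1e3, (a * (1 + e) - R_EARTH) / 1e3

for label, dv_p, dv_r in [("no kick", 0, 0),
                          ("100 m/s retrograde", -100, 0),
                          ("100 m/s prograde", 100, 0),
                          ("100 m/s radially outward", 0, 100)]:
    peri, apo = orbit_after_kick(800e3, dv_p, dv_r)
    print("%-25s perigee %7.1f km, apogee %7.1f km" % (label, peri, apo))
```

Whether the lingering pieces come down within years or decades would then depend on how quickly drag at perigee eats away at those orbits, which is exactly the part I can't eyeball.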

Comment: Just good ol' fashioned (in)compatibility (Score 4, Informative) 136

by allquixotic (#49140571) Attached to: Who's Afraid of Android Fragmentation?

Software fragmentation at a high level hasn't been all that scary a specter for Android yet. Hook into a few core APIs that pretty much work everywhere, and off you go. There haven't been any successful whole-system Android forks that have been able to challenge mainline Android with any significant market share.

The problem is in the details of the hardware and, to a lesser degree, the software implementations. Screen resolutions; GPU capabilities; the difference in performance between the slowest and fastest (non-obsolete) devices on the market; highly variable amounts of storage, VRAM, and network bandwidth; limited vs. unlimited data plans; the amount of other crap (that may interfere) that the user or manufacturer has already installed on the phone; etc. etc.

A lot of devs are starting to whitelist their apps for specific phones, or at least for specific SoC make/model/generations. Your OpenGL fragment shader may run fine on a Qualcomm Snapdragon platform with an Adreno GPU, but crash the app or the whole system on ARM Mali. Your game may get 30-60 FPS reliably on a modern Tegra GPU, but deliver a slideshow on a Droid Mini from late 2013.

And that brings me to my second point: the hardware advances too quickly. A lot of people expect their smartphone to last them 4 years, maybe longer. But if you look at how far phone specs have come since 2010, it's pretty ridiculous. Most 2014 Android games and even non-trivial business apps won't even *launch* on a phone with specs 1/10th as capable as the state of the art.

These problems are hard enough to solve on their own. Most devs don't even have time to think about supporting other core systems with forks or replacements of the core Android APIs.

Comment: Re:Department of Fairness can not be far behind (Score 2) 631

by allquixotic (#49140207) Attached to: FCC Approves Net Neutrality Rules

If the Federal Government can't determine what's fair, then who can?

Is it fair for someone to have exactly one choice of "broadband" ISP, when that choice is extremely unreliable, outdated, overpriced ADSL?

Is it fair that corporations get to ignore what customers want and only sell what's the most profitable for them, paying absolutely no attention to customer satisfaction, with a three-pronged "bend over and take it / don't have Internet / move house" ultimatum?

If the Federal Government won't stand up for its citizens, what recourse do citizens have left? Organize and march up to some corporate office and demand (peacefully or otherwise) to get what they want? Give up their job and completely change their life around to move to one of the handful of locations in the entire country that has actually good Internet?

Connecting to and participating in the global economy shouldn't be a privilege reserved for the upper crust elite. It should be accessible to everyone. Hell, there is an *enormous* financial incentive to do it, since without that connection, you won't sell nearly as much stuff on the 'net. Games, video, software, you name it.

You corporati would gladly tear down the national highway system to avoid paying taxes on roads, even knowing full-well that without roads, people won't be able to drive to Best Buy or Target or K-mart or Walmart to buy your shit.

Infrastructure is a special type of good. It's a GDP and productivity and economy multiplier. Infrastructure deserves special protection. Ever since man discovered the mechanical lever, we've been using infrastructure to enable us to do more than we could without it. The capitalist system has a significant weakness in that, if left completely unregulated, no one will pay for the infrastructure. Categorizing Internet service as infrastructure is exactly the move that needed to be made. IN PRINCIPLE.

Now what remains is to see what actual changes fall out in practice. The principle of the matter and the actual implementation may turn out to diverge considerably, which would be unfortunate. But leaving the system as-is would all but ensure that the current bad state of affairs continues, since the old way was backwards even in principle, let alone in practice.

Comment: My Mon Cal sense is going off... (Score 3, Interesting) 631

by allquixotic (#49139843) Attached to: FCC Approves Net Neutrality Rules

"IT'S (probably*) A TRAP!"
  - Rear Admiral Akquixotic of the Mon Calamari

*: There's a small chance that this will end up actually helping consumers. Even a broken clock is right twice a day, and a reg-captured FCC occasionally does things that benefit the common man.

For example, the Block C Open Access provisions on Verizon and AT&T's LTE bands (or at least some of them) are what stop these carriers from blocking tethering or the use of custom devices. Any FCC-certified device, rooted or not, tethering or not, can be used on those bands, and there's nothing the carrier can do to stop it without breaking the law.

Those provisions have been a lifesaver for many customers of those two carriers who want to use their phone's LTE connection to tether a laptop on the go, but don't want to pay extra or buy dedicated hardware for it. So the FCC definitely helped in a pragmatic sense with those rules.

Then again, I'm sure the industry coalitions had fully formed lawsuits written up, signed, sealed in envelopes, and just waiting to be mailed the moment this decision hit. Who knows how long it'll be until the results trickle down through carrier policies and plan offerings to affect the everyman?

Comment: Re:Overlooking one small detail... (Score 2) 71

by allquixotic (#49120991) Attached to: UK Scientists Claim 1Tbps Data Speed Via Experimental 5G Technology

Also, 100 MHz is *a lot* of spectrum to allocate to a single client, given the amount of spectrum that's currently available. They'd have to free up a lot of old spectrum that's used for obsolete stuff like 2G voice and 3G data so they could repurpose it for 5G. The only way they'd realistically be able to pull this off would be to increase tower density; 100 MHz is just too much to ask. Typical LTE carriers allocate 1.4 MHz to 20 MHz of bandwidth to serve a given client; 100 MHz increases that by a factor of 5 over the widest LTE deployments today, and by a significantly larger factor over LTE running on narrower widths.

We can't just manufacture more bandwidth. Once the usable spectrum is allocated to something, we either have to wait until that technology goes obsolete and then deallocate and repurpose it, or invent newer and better transceivers that can reliably transmit and receive over a previously unused range of frequencies (factoring in problems like building penetration, which gets harder at higher frequencies). Absent such advances in transceiver technology, we are stuck with the finite bandwidth ranges we have today -- at least for long-distance cellular.

So, while it's quite plausible that they could allocate 100 MHz for this technology, maybe even as much as 500 MHz of spectrum, each tower would only be able to serve far fewer clients at a time than it can today.
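To illustrate the scaling with made-up deployment figures (a thought experiment rather than a real capacity model, since actual schedulers share a carrier dynamically among many users):

```python
# Thought experiment with made-up figures: how many clients a tower could serve
# at once if each client got a dedicated slice of spectrum. Real LTE schedulers
# share a carrier among many users, so treat this as a rough upper bound on any
# "dedicated 100 MHz per client" design, not as a capacity model.
def simultaneous_clients(total_spectrum_mhz, per_client_mhz):
    return int(total_spectrum_mhz // per_client_mhz)

TOTAL_MHZ = 500  # a generous guess at the spectrum a carrier could scrape together

for per_client in (1.4, 20, 100):
    print("%5.1f MHz per client -> %3d clients served at once"
          % (per_client, simultaneous_clients(TOTAL_MHZ, per_client)))
```

Even with a generous 500 MHz to play with, a dedicated 100 MHz slice per client leaves room for only a handful of simultaneous users per tower, which is why denser towers (or much smarter sharing) would be the only realistic way out.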

The problem is, there will be people who are perfectly happy with their 3G or 4G devices and resist upgrading -- people who want to remain customers under their current contract and continue using the service already available. These folks give carriers a motivation to retain their existing spectrum for the legacy protocols, inhibiting its repurposing for the next generation. It may be 50 years before the regulators officially declare, say, the 700 MHz LTE band free for a new auction.

2020 seems very aggressive to me, mainly for policy reasons rather than technical ones.

Comment: Re:Oblig. XKCD (Score 1) 716

by allquixotic (#49039675) Attached to: Is Modern Linux Becoming Too Complex?

In addition to what AC said, there's another possibility. Even if the design of an existing solution is adequate to solve the problem, and even if someone isn't looking to create another standard simply to try to unify the existing standards, there are still an enormous number of *non-technical* reasons why adopting the existing solution might not be desirable.

For one thing, there will always be license purists who insist that absolutely everything in their favorite distribution comply with their specific license of choice. The two biggest offenders are the all-GPL camp and the all-BSD camp, but there are probably others too. Even if they evaluated an existing solution purely on its technical merits and determined it to be superior, they'd still have a motivation to start their own project because Licensing Matters. Licensing matters at least a little to most people, but it's the #1 priority for some people. If those people also have programming skill, they'll start their own project.

There are also other non-licensing issues that could come up. Maybe the primary maintainer of the foo project requires Contributor License Agreements, and so-and-so thinks that CLAs are the work of the devil. They don't want to fork it (even if the license allows it) because they think doing so would result in too much infighting between the communities. So they go and start their own project.

One thing that seems to help is to separate *specifications* from *implementations*. This works marvelously in the case of the networking stack (from IP to TCP to HTTP), USB device classes, etc. Implementations are free to be licensed under the GPL, BSD, whatever their authors want. As long as the interfaces between your widget and the other components of the software ecosystem are standard and consistent, people are happy to write their own implementation if they want a specific license. And you don't suffer from the proliferation problem when you have multiple conforming implementations, because the compatibility issues can be ironed out to be very minor with the right level of cooperation and standardization.
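As a toy illustration of the point (the names here are made up, not taken from any real project): the "spec" is just an interface, and differently licensed codebases can each ship a conforming implementation without callers caring which one they ended up with.

```python
# Toy illustration with made-up names: the "specification" is just an interface,
# and independently licensed projects can each ship their own conforming
# implementation without application code having to care which one it got.
from abc import ABC, abstractmethod

class WidgetTransport(ABC):
    """The 'spec': what every conforming implementation must provide."""
    @abstractmethod
    def send(self, payload: bytes) -> int:
        """Transmit payload and return the number of bytes accepted."""

class GplWidgetTransport(WidgetTransport):
    """One camp's implementation (imagine this one is GPL-licensed)."""
    def send(self, payload: bytes) -> int:
        return len(payload)

class BsdWidgetTransport(WidgetTransport):
    """Another camp's implementation (imagine this one is BSD-licensed)."""
    def send(self, payload: bytes) -> int:
        return len(payload)

def upload(transport: WidgetTransport, payload: bytes) -> int:
    # Application code targets the spec, not a particular project.
    return transport.send(payload)

for impl in (GplWidgetTransport(), BsdWidgetTransport()):
    print(type(impl).__name__, "accepted", upload(impl, b"hello"), "bytes")
```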

It's a wonder we have standards as well established as HTTP, given the number of reasons people use to justify proliferation.

Whether you agree or disagree with those justifications, including the examples I gave above, depends on where you stand on the axis between pragmatism and purism for the issue at hand. But regardless of your opinion, people will nonetheless use these reasons to fork, and hence proliferation continues apace.
