
Comment Re:Department of Fairness can not be far behind (Score 2) 631

If the Federal Government can't determine what's fair, then who can?

Is it fair for someone to have exactly one choice of "broadband" ISP, when that choice is extremely unreliable, outdated, overpriced ADSL?

Is it fair that corporations get to ignore what customers want and sell only what's most profitable for them, paying absolutely no attention to customer satisfaction, with a three-pronged "bend over and take it / go without Internet / move house" ultimatum?

If the Federal Government won't stand up for its citizens, what recourse do citizens have left? Organize and march on some corporate office to demand (peacefully or otherwise) what they want? Quit their jobs and uproot their lives to move to one of the handful of locations in the entire country that actually has good Internet?

Connecting to and participating in the global economy shouldn't be a privilege reserved for the upper-crust elite. It should be accessible to everyone. Hell, there is an *enormous* financial incentive to do so, since without that connection, you won't sell nearly as much stuff on the 'net. Games, video, software, you name it.

You corporati would gladly tear down the national highway system to avoid paying taxes on roads, even knowing full well that without roads, people won't be able to drive to Best Buy or Target or Kmart or Walmart to buy your shit.

Infrastructure is a special type of good: it's a GDP, productivity, and economy multiplier, and it deserves special protection. Ever since man discovered the mechanical lever, we've been using infrastructure to do more than we could without it. The capitalist system has a significant weakness here: left completely unregulated, no one will pay for the infrastructure. Categorizing Internet service as infrastructure is exactly the move that needed to be made. IN PRINCIPLE.

Now it remains to be seen what actual changes fall out in practice. The principle of the matter and the actual implementation may turn out to be very disjoint, which would be unfortunate. But leaving the system as-is would all but ensure that the current bad state of affairs continued, since the old way was backwards in principle alone, let alone in practice.

Comment My Mon Cal sense is going off... (Score 3, Interesting) 631

"IT'S (probably*) A TRAP!"
  - Rear Admiral Akquixotic of the Mon Calamari

*: There's a small chance that this will end up actually helping consumers. A broken clock is right twice a day, and even a regulatory-captured FCC occasionally does things that benefit the common man.

For example, the Block C open-access provisions on Verizon's and AT&T's LTE bands (or at least some of them) are what stopped these carriers from blocking tethering or the use of custom devices. Any FCC-certified device, rooted or not, tethering or not, can be on those bands, and there's nothing the carrier can do to stop it without breaking the law.

Those provisions have been a lifesaver for many customers of these two carriers who want to use the LTE from their phone to tether a laptop on the go, but don't want to pay extra or buy dedicated hardware for it. So the FCC definitely helped in a pragmatic sense with those rules.

Then again, I'm sure the industry coalitions have fully formed lawsuits written up, signed, sealed in envelopes, and just waiting to be mailed the moment this decision hit. Who knows how long it'll be until the results trickle down through carrier policies and plan offerings to affect the everyman?

Comment Re:Overlooking one small detail... (Score 2) 71

Also, 100 MHz is *a lot* of spectrum to allocate to a single client, given how little usable spectrum is currently available. Carriers would have to free up a lot of old spectrum used by legacy services like 2G voice and 3G data and repurpose it for 5G; realistically, the only other way to pull this off would be to increase tower density, because 100 MHz per client is just too much to ask of today's allocations. Typical LTE deployments allocate 1.4 MHz to 20 MHz to a given client, so 100 MHz is 5 times the widest LTE channel deployed today, and a significantly larger multiple of the narrower widths.

We can't just manufacture more bandwidth. Once the usable spectrum is allocated to something, we either have to wait for that technology to go obsolete so the spectrum can be deallocated and repurposed, or invent better transceivers that can reliably transmit and receive over previously unused frequency ranges (factoring in problems like building penetration, which gets harder at higher frequencies). Absent such advances in transceiver technology, we're stuck with the finite frequency ranges we use today -- at least for long-distance cellular.

So while it's quite plausible that they could allocate 100 MHz for this technology, maybe even as much as 500 MHz, each tower would only have enough spectrum to serve far fewer clients at a time than we can serve today.
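
To put rough numbers on this, here's a back-of-the-envelope sketch. The 500 MHz per-tower figure is just the illustrative guess from above, and it deliberately ignores frequency reuse, scheduling, and MIMO, all of which help in practice:

```python
# Back-of-the-envelope spectrum arithmetic. The 500 MHz per-tower figure is
# an illustrative assumption, not a real allocation, and this ignores
# frequency reuse, scheduling, MIMO, and everything else that helps.
LTE_WIDTHS_MHZ = [1.4, 3, 5, 10, 15, 20]   # standard LTE channel bandwidths
PROPOSED_5G_MHZ = 100

for width in LTE_WIDTHS_MHZ:
    print(f"{width:>4} MHz LTE channel -> 100 MHz is {PROPOSED_5G_MHZ / width:.0f}x wider")

# Simultaneous full-width clients per tower, old vs. new:
TOTAL_MHZ = 500
print(f"{TOTAL_MHZ // PROPOSED_5G_MHZ} clients at 100 MHz each, "
      f"vs. {TOTAL_MHZ // 20} at 20 MHz or {int(TOTAL_MHZ / 1.4)} at 1.4 MHz")
```

That's 5 full-width clients per tower versus 25 (at 20 MHz) or 357 (at 1.4 MHz) today, which is why the tower-density question dominates.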

Problem is, there will be people who are perfectly happy with their 3G or 4G devices and resist upgrading, who want to remain customers under their current contracts and keep using the service that's already available. These folks will give carriers a motivation to retain their existing spectrum for the legacy protocols, inhibiting its repurposing for the next generation. It may be 50 years before the regulators officially declare, say, the 700 MHz LTE band free for a new auction.

2020 seems very aggressive to me, mainly for policy reasons rather than technical ones.

Comment Re:Oblig. XKCD (Score 1) 716

In addition to what the AC said, there's another possibility. Even if the design of an existing solution is adequate to solve the problem, and even if someone isn't looking to create yet another standard just to unify the existing ones, there are still an enormous number of *non-technical* reasons why adopting an existing solution might not be desirable.

For one thing, there will always be license purists who insist that absolutely everything in their favorite distribution comply with their specific license of choice. The two biggest offenders are the all-GPL camp and the all-BSD camp, but there are probably others too. Even if they evaluated an existing solution purely on its technical merits and found it superior, they'd still have a motivation to start their own project, because Licensing Matters. Licensing matters at least a little to most people, but for some it's the #1 priority, and if those people also have programming skill, they'll start their own project.

There are also non-licensing issues that could come up. Maybe the primary maintainer of the foo project requires Contributor License Agreements, and so-and-so thinks that CLAs are the work of the devil. They don't want to fork (even if the license allows it) because they think it would cause too much infighting between the communities. So they go and start their own project.

One thing that seems to help is to separate *specifications* from *implementations*. This works marvelously for the networking stack (from IP to TCP to HTTP), USB device classes, etc. Implementations are free to license under the GPL, BSD, or whatever they want. As long as the interfaces between your widget and the other components of the software ecosystem are standard and consistent, people are happy to write their own implementation if they want a specific license. And multiple conforming implementations don't create the proliferation problem, because with the right level of cooperation and standardization the compatibility issues can be ironed down to very minor ones.
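
As a toy sketch of that spec-versus-implementation split (all names here are hypothetical, purely for illustration): code written against a fixed interface doesn't care which camp's implementation, under which license, is plugged in underneath.

```python
# Toy illustration of specs vs. implementations. The "spec" is just an
# interface contract; anyone can ship a conforming implementation under
# whatever license they like. All names here are hypothetical.
from abc import ABC, abstractmethod

class TransportSpec(ABC):
    """The 'standard': a fixed interface everyone agrees to implement."""
    @abstractmethod
    def send(self, payload: bytes) -> None: ...
    @abstractmethod
    def recv(self) -> bytes: ...

class GplTransport(TransportSpec):      # one camp's implementation
    def send(self, payload: bytes) -> None:
        print(f"gpl impl sending {len(payload)} bytes")
    def recv(self) -> bytes:
        return b"..."

class BsdTransport(TransportSpec):      # another camp's implementation
    def send(self, payload: bytes) -> None:
        print(f"bsd impl sending {len(payload)} bytes")
    def recv(self) -> bytes:
        return b"..."

def application(link: TransportSpec) -> None:
    """Code written against the spec works with any conforming implementation."""
    link.send(b"hello")

application(GplTransport())
application(BsdTransport())
```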

It's a wonder we have standards as well-established as HTTP, given the number of reasons people find to justify proliferation.

Whether you agree or disagree with those justifications, such as the examples I gave above, depends on where you stand between pragmatism and purism on the issue at hand. But your opinion doesn't change the fact that people will nonetheless use these reasons to fork, and hence proliferation continues apace.

Comment Re:Alien life (Score 2) 52

Would we be so fortunate as to do so this quickly, though? Six hundred-odd years ago, almost no one was aware of the full extent of the planet's land masses, much less that there were actual people living on them. After that settled down, not a lot happened over the next several hundred years in terms of expanding human life's reach or discovering new civilizations. Then, suddenly, in the 1960s, we're extraplanetary.

It would be amazing, but IMHO unlikely, for a single generation of people to live to see both Apollo 11 and the discovery of extraterrestrial life. I think we're going to have to look a lot further and for a lot longer before we bump into anyone out there.

Comment Re:Tor and systemd? (Score 1) 53

Tor's integration with systemd, if any, would be very, very small. Basically, systemd would be responsible for managing Tor's start/stop cycle and collecting its log output.

This is entirely optional, though. You can always run Tor without integrating it into systemd's service management at all. If you need it in the background and headless, just run `screen -mdS tor [tor_cmdline]`.
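
For the truly systemd-averse, here's a minimal sketch of a third option: launching Tor from Python with the Tor Project's stem controller library. The port and log path are illustrative choices, not requirements:

```python
# A sketch of running Tor headless with no systemd involvement at all,
# using the Tor Project's "stem" controller library (pip install stem).
# The SocksPort and log path below are illustrative, not mandatory.
import stem.process

tor_process = stem.process.launch_tor_with_config(
    config={
        'SocksPort': '9050',
        'Log': 'notice file /tmp/tor-notices.log',
    },
)

# ... point applications at the SOCKS proxy on localhost:9050 ...

tor_process.kill()  # shut the daemon down when finished
```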

I do not believe that Tor would be automatically running on a default Fedora install. You would have to enable it yourself.

Comment Re:Does anyone care what RMS thinks any more? (Score 1) 253

While it is true that free distribution makes it difficult or impossible to make money on software *distribution*, this does NOT mean that it is now impossible to make a profit producing software!

One way is to profit from other parts of the technology ecosystem. That includes selling support (or "patches/enhancements delivered for a price"), which is plenty profitable if you ask Red Hat. Another approach is to sell things that are inherently not copyable at zero cost, like hardware. I don't mean literally selling hardware like Intel does; I mean hosting a cloud platform where you install your software and sell it as a service: Microsoft Azure, Amazon EC2, Google Compute Engine. Yet another approach is to sell content, a la Spotify or Amazon Video on Demand or iTunes.

Companies can amortize software development as a cost of doing business. If every company does this, not only will it greatly increase net utility (think of all the individuals who "just want to use it one time" but can't stomach the $1,999 sticker shock of enterprise software), it will also reduce the investment needed from each individual company, because every company will be contributing. From an individualistic perspective each company has an incentive not to contribute, but if *no one* contributes, there will be no software commons, and each company will have to reinvent the wheel itself (or pay another vendor exorbitant license fees to do it for them).

It's entirely possible for a large company like Google, Amazon, Red Hat, Microsoft, Adobe, etc. to open source their software while still turning a net corporate profit by selling other things that are not inherently copyable -- things which integrate closely with and use their software. At the same time, by making it open source, they can benefit from the long tail of drive-by patches that reduces the overall cost of their software investment.

It's simple. Aim for net utility as a principle of doing business. Reduce your assets to their essence and sell them based on their essential properties. Software is essentially copyable, so let it be copyable. Ever hear the phrase "software wants to be free"? So let it be free. Focus your profiteering on something that can't be copied for the cost of a few megabytes of data, and help build a better world by contributing to the software commons (and eating your own dogfood if you're able).

Comment Other companies are asking... (Score 2) 331

If you work at a large tech or services company, rest assured that your top execs are scrambling right now to figure out how to emulate IBM and exploit the same loophole: using the performance-management system to shed employees without technically laying them off.

This is bad news for all US salaried job holders, but especially those at large enterprises with a lot of low-profit business. Even if the job you work is profitable in essence, these companies would gladly dump you in exchange for an H-1B or simply replace you with higher-margin work. Even profitable, high-performing employees are on the chopping block nowadays in the quest for ever-increasing profits.

Comment Re:"Wi-Fi" is fundamentally broken, period. (Score 1) 120

Considering I have never spent a penny on any Apple product or service, and hold no Apple stock, I'm not sure how the label "fanboy" makes any sense whatsoever.

Just because *you've* not had any particular problem doesn't mean that problems don't exist. I have the unfortunate pleasure of a reputation as someone generally knowledgeable about computing, so pretty much everyone I know who isn't technically savvy invariably comes to me when they have problems.

I've had to deal with a small handful of old laptop HDD crashes, USB port failures, botched Firefox updates, malware infections, etc. in my years of being unable to say "no" to a desperate user who needs my help to fix their shit. But I can count the instances of each incident type on one hand.

On the other hand, I have responded to maybe 100 different requests along the lines of "my WiFi won't connect", "my WiFi is slower than dial-up", or "my WiFi keeps dropping out". Sometimes these instances involve Apple devices; sometimes not. Often they involve devices from different manufacturers. Very often they involve people who live in tight quarters like apartments or dorms, where WiFi from next door (and downstairs, and upstairs...) pollutes the WiFi spectrum within their own dwelling.

Maybe I'm just really unlucky and I have friends who make poor choices in their purchase of WiFi-using devices, but the disproportionate ratio of WiFi-related problems to non-WiFi problems suggests to me that there are metric tons of devices out there with broken WiFi implementations.

The reputation and legacy of WiFi as a protocol will be judged by whether it could be implemented reliably and consistently, so don't tell me "that's not a critique of WiFi itself". If even a significant minority (say, 30%) of implementors can't be arsed to do it *properly*, such that you get pathetic results like a link capped at 2.8 kbit/s, that says a lot about the spec, the standards organization, and the verification and validation (or lack thereof) surrounding WiFi.

And while we're on about anecdotal personal evidence, I've got a Note 4 and a current-generation Linksys USB adapter that both claim to speak 5 GHz 802.11ac, and I get random dropouts when the devices are within 10 feet of one another and not being moved.

When I get the dropouts, I fire up a WiFi heat map on my phone, and not a single other device in the area is talking on 5 GHz. I don't own a cordless telephone, and there are no other dwellings near enough for a cordless phone to be the problem. 2.4 GHz, while noisier, exhibits the same problem. I've also tried three different driver releases, and the problem persisted through a Note 4 OTA update that claimed to fix WiFi issues.

Then again, my personal experience is just one data point. There's no way I'd claim that to be any kind of a representative sample. I've got a few dozen friends/colleagues/associates -- technically savvy and otherwise -- who would be eager to tell you about their (sometimes ongoing, sometimes former) WiFi woes.

Comment "Wi-Fi" is fundamentally broken, period. (Score 4, Interesting) 120

The problems with "Wi-Fi" are numerous. The end result is that, generally speaking, Wi-Fi is a hot mess of broken tech that doesn't work. And in the rare case that it DOES work, even the most trivial change in the environment or in the client can completely break it.

1. Early versions of the spec were too loosely worded, and allowed for too many "interpretations".

2. Vendor extensions are still a major problem. Many vendor extensions are not compatible with one another, and a device that has a vendor extension enabled may work very poorly (or not at all) with a device lacking said extension.

3. Actual implementations of Wi-Fi are all over the map in terms of quality, with ridiculous problems like: advertising support for an extension that isn't actually supported; criminally severe bugs shipping in production implementations; vendors working around other vendors' bugs while introducing yet more bugs of their own, creating a vicious cycle of workarounds to workarounds; "hide and go seek" with extensions and spec interpretations; driver implementations that hold exclusive access to very coarse-grained OS kernel locks for long periods, causing freezes and/or panics; poorly designed antennas; buggy firmware that never gets updated; etc.

4. The spectrum WiFi uses is open to literally anything else that complies with a few simple rules, such as the maximum Tx power on that frequency band. As a consequence, random electric devices can freely leak a certain amount of interference (noise) into the 2.4 GHz and 5 GHz WiFi bands, which destroys WiFi's ability to operate. Ever lose your WiFi when you turn on your vacuum cleaner or microwave? That's what's happening.

5. The spectrum WiFi uses is shared with other communications protocols that are not Wi-Fi. Some effort is made to interoperate between a few of them, such as coexistence between Bluetooth nodes and WiFi nodes (so that they don't "trample over" one another when using the same frequencies), but the interoperation protocols, specifications, and implementations have the same problems as the Wi-Fi specs themselves, as stated above.

6. The recent increased focus on power saving has led to some rather extreme power-saving techniques in Wi-Fi firmware and drivers, sacrificing performance, range, and reliability for a few microwatts or milliwatts of power. Paradoxically, some proponents of these techniques actually think that's OK, and are still trying to make the problem worse.

7. A large number of complex physical parameters affect whether two WiFi transceivers will be able to communicate, and 99% of users don't understand them at all (see the path-loss sketch after this list). The power-saving techniques mentioned above shrink the range of configurations (mainly device orientations and distances) under which the signal stays reliable and high-performance.

8. Vendors that produce Wi-Fi transceivers, or products that integrate them, usually perform inadequate interoperability testing against the very large array of existing and upcoming Wi-Fi products. Especially for smartphones, the number of possible clients and base stations to interact with is tremendous: smart TVs, DSL modem/routers, cable modem/routers, other smartphones, enterprise APs and repeaters, laptops, tablets, cars, IoT devices -- all of these need to be tested. With a LOT of work -- and I mean a LOT -- a Wi-Fi stack can eventually be designed to operate at least decently with all modern incarnations of the above. But that says nothing about older implementations, which people love to keep around for a decade or more and still expect to work. A sufficiently general Wi-Fi stack that works okay with all of the above will probably carry so many bug-detection heuristics, compromises, and polling tests that it won't work especially well even in an "ideal" scenario, and may even have to apply contradictory rules depending on the specific model on the other end... basically, developing such a thing is nearly an effort in futility, let alone making it work *WELL* with everything.
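
To make item 7 concrete, here's a minimal sketch of free-space path loss, the simplest of those physical parameters. It deliberately ignores walls, multipath, and antenna effects, so real indoor losses are strictly worse than these figures:

```python
# Free-space path loss (FSPL), the most basic physical parameter from item 7.
# Real indoor links add wall attenuation, multipath fading, and antenna
# orientation effects on top of this, so treat these numbers as lower bounds.
import math

def fspl_db(distance_m: float, freq_ghz: float) -> float:
    # FSPL(dB) = 20*log10(d_m) + 20*log10(f_Hz) - 147.55
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_ghz * 1e9) - 147.55

for freq in (2.4, 5.0):
    for dist in (1, 10, 30):
        print(f"{freq} GHz at {dist:>2} m: {fspl_db(dist, freq):5.1f} dB loss")

# 5 GHz loses about 6.4 dB more than 2.4 GHz at the same distance, which is
# part of why "just move to the less crowded band" trades noise for range.
```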

If USB and its "device class" specifications (mass storage, battery charging, RNDIS, audio class, etc.) are a ringing success story of how standardization can promote interoperability, Wi-Fi is a textbook case study of how faux "standardization" can go so, SO horribly wrong. The only fix I can see is to abandon the 2.4 GHz and 5 GHz spectra entirely and come up with a new, non-WiFi communication protocol that is much more tightly specified, an open standard, and general purpose, operating on some other band that does not overlap the WiFi bands (since those bands will remain trashed by millions of WiFi devices for at least 20-25 years after the last WiFi device is manufactured).

Comment Re: 4 paid developers yes, but (Score 4, Insightful) 288

This is a little story about four people named Everybody, Somebody, Anybody, and Nobody.
There was an important job to be done and Everybody was sure that Somebody would do it.
Anybody could have done it, but Nobody did it.
Somebody got angry about that because it was Everybody's job.
Everybody thought that Anybody could do it, but Nobody realized that Everybody wouldn't do it.
It ended up that Everybody blamed Somebody when Nobody did what Anybody could have done.

Basically, there needs to be a team of people (volunteers, paid employees, or a mix) dedicated to spending a specific number of hours explicitly assigned to security testing of a piece of software, and then held accountable for those hours. Meaning: if they produce no results over a long period, or aren't putting in the hours, even as volunteers, their position on the team should be vacated for someone else willing to do the work.

Features are completely different, and so are most types of non-security bugs. In general, people implement features because they find it genuinely fun to do so. And as long as the software has users, the absence of a feature will not normally cause millions of dollars in damage, loss of reputation, or identity theft. The consequence of a missing feature is usually annoyance or inconvenience, upper-bounded by what that feature would provide if available, rather than by the limits of human cruelty and deviousness, which are MUCH higher bounds than even the most major features.

This is why it's OK to let features develop "organically" in a bazaar fashion. Even bugs can be developed this way: if nobody is encountering the bug, who cares if it's there? And bugs that are encountered frequently will get complained about and/or fixed directly by the core devs or a drive-by patch. Security, on the other hand, almost requires a deliberate, cathedral model to provide any guarantees.

Bringing small aspects of the cathedral development philosophy -- only the best parts of the cathedral -- into projects like OpenSSL that were once purely bazaar can only be a good thing.

Comment Re:And is this a bad thing? (Score 1) 392

That's no problem. They'll just get their buddies in Congress to write them up a law that says that whatever they do is fine. Or, if that causes too much of a ruckus, they'll just provide Congress with a long laundry list of the things they do, then get Congress to copy and paste that into the law.
