
Comment Re:Ads can stay, as long as they behave (Score 1) 699

So far, the rise of locked mobile devices is not preventing the sale or use of computing devices which are not restricted in such a way. And at least on Android, even "locked" devices still allow you to install third-party apps, like Firefox, which can be used to block ads.

Locking down every possible system that consumers and enterprises can purchase (modems/routers, desktops, laptops, etc.), with NO way to buy, anywhere, a compatible, functional system that can execute arbitrary code, is a very tall order. If anyone ever even seriously threatened to put such a regime in place, there would be a social rebellion the likes of which would make the American Revolutionary War look like a playground arm wrestle.

However, to keep systems like this from being put into effect gradually over time, I believe we should do all we can to reject them, and continue to use, promote and purchase open platforms. Even (desktop) Windows, proprietary as it is, is -- relatively speaking -- very "open" compared to the locked-down environment you describe. By refusing to economically support walled gardens, we can keep them from gaining a foothold, or worse, becoming such a de facto standard that the majority of the web stops supporting open platforms.

I definitely see the danger, but I am optimistic that people will care enough that they will fight it. As usual, with matters like these, technologists such as the ones who often visit /. should be expected to lead the charge. Join the EFF and throw away your iPhones, folks.

Comment Ads can stay, as long as they behave (Score 5, Insightful) 699

There's really no other rational choice than to block most or all ads in a world where ads can do just about anything they want. The annoyance and the performance hit are trivial issues compared to the real problems. The same openness that allows Web-based ads to track you with cookies, launch plugins and pop-up windows, and hold content hostage until you watch a video or wait out a timer, also (fortunately) allows users to fight back, as a natural defense mechanism against these predatory tactics. For advertisers to abuse this openness for their own monetary gain -- presuming to control what *I* run on *my* computer while being appalled at my choice to do the same -- is ridiculous and contradictory.

Far and away the gravest problem with ads today is that the vast majority of them pose *serious* security and/or privacy issues. Most ad networks do very little to prevent bad actors from embedding malicious content that tries to exploit browser zero-days, steal cookies, track your behavior, or trick you into visiting malicious websites. Until website owners and ad networks decide to completely purge all the security and privacy risks, advertising is essentially synonymous with an opportunistic attack on each user who visits an ad-infested site.

On the open web, the only way advertisers are going to get any revenue is by earning the trust and goodwill of their customers. And we ARE customers -- customers who are currently being treated like shit. How would you like it if a car salesman walked up and started tattooing the manufacturer's logo on your arm, seconds after you got out of your car and set foot on the lot? That kind of intrusive behavior should not be tolerated. And it isn't: users are doing exactly what the advertisers should expect them to do, given how they are being treated.

Ad networks should start by having a manual screening process for each entity that wants to submit ads through their network. The integrity, ownership, and status of each entity should be scrutinized to ensure that they are a legitimate business and are registered with the proper authorities. Additionally, the network should perform constant random sampling of their current ads being run, and employ experienced security auditors or penetration testers to examine the source code and other dynamic behavior of the advertisement payload on various popular browsers, to determine if it is tracking the user or malicious in any way. If it is, all further business with that partner should be stopped immediately, and the advertisement removed from the network. Website owners and users should not be the ones having to push the ad networks to remove these abusers.
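
For the sampling piece, a minimal sketch of one audit pass might look something like this (Python; fetch_payload, audit and suspend_partner are hypothetical stand-ins for whatever ad-serving and security tooling a real network would use):

    import random

    SAMPLE_RATE = 0.05  # audit a random 5% of live ads on each pass

    def audit_pass(live_ads, fetch_payload, audit, suspend_partner):
        """Randomly sample live ads, audit each payload, and cut off
        any partner whose ad tracks users or behaves maliciously."""
        size = max(1, int(len(live_ads) * SAMPLE_RATE))
        for ad in random.sample(live_ads, size):
            findings = audit(fetch_payload(ad))    # static + dynamic analysis
            if findings.get("tracking") or findings.get("malicious"):
                suspend_partner(ad["partner_id"])  # stop all further business
                live_ads.remove(ad)                # pull the ad immediately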

The open Web is not going away. Users are in control of what displays in the web browser. Advertisers must either learn to work within a system of reasonable rules that do not attack users' systems or try to compromise their privacy, OR just keep fighting until their revenue stream is slowly strangled to death by their own despicable policies.

Comment Re:What does learning a language really mean? (Score 1) 277

That's just passing the buck, not really answering the question. I can imagine three completely different interviews purportedly about, say, "Ruby": one that a college kid could pass by going through a Ruby tutorial the night before; one that the same kid could only pass after a Programming Languages class with a 4-week unit on Ruby; and one that would unambiguously flunk everyone except Yukihiro Matsumoto. And there is an entire spectrum of interview difficulty in between these three examples, too.

The OP asked a philosophical question, and you provided a concrete answer that, while technically correct, does not actually get us any closer to an understanding of the issue. A more reasoned answer would go something like this: if you're being hired to write Ruby code as a significant part, or the entirety, of your job, the interviewer ought to ask questions challenging enough that answering them fluently and capably implies skills more than sufficient to excel at the work being performed. Even within the industry of coding shops, there is an enormous difference between writing Ruby for basic office automation (say, an automatic timesheet filler using Selenium) and writing Ruby that demands intense domain knowledge in a challenging area, like physics, quantum computing, higher mathematics, actuarial science, etc.

So the two axes of programming language competency are: domain knowledge (can you comprehend the subject matter enough to develop the algorithms/formulae/logic flows for the software you're writing?), and knowledge of the language and its libraries, including third-party libraries (are you able to use the facilities of the language itself or any third-party components in order to implement your solution?). Proper competency must be established on BOTH axes in order to determine if you're suitable for a job.

In the absolutely general, context-free case of determining whether you are "competent" or "good" or "learned" with a language, I don't think it's possible to answer in any meaningful way. One's competency must always be bounded by a realistic look at what exactly the work to be performed entails.

Comment Re:Who cares (Score 1) 216

Contract prices may be down, but the cost per gigabyte of data (because fuck everything except data; nothing else matters; data is information and information is data, and there's no point in thinking about anything else) has not decreased significantly.

When you purchase tens of gigabytes in advance, you might be lucky to pay $3 to $4 USD per GB of data transferred over a world-class LTE network like Verizon's. If you have any overages, the price shoots up to $10/GB. It's been at that level since the EvDO days.

$3 per GB is an order of magnitude more than what most people expect -- or are willing -- to pay for data once you get beyond the 10 GB tier. The convenient fact that most people still don't know how to do anything useful with their phones, and hence don't actually use that much data, does not excuse the heinous prices.

A ridiculous amount of 30+ year old local, county, state and federal legislation has kept *actually good* landlines from reaching millions of people in the US, even in densely-packed suburbs with strong median incomes. Well, let me clarify that: it's legislation that was passed due to industry lobbying, and even without the legislation, the industry would still collude to depress the rollout of things like fiber to the premises. So while yes, government is complicit in the problem, even the anarchists/libertarians having their way wouldn't fix it.

But the landline problem could be conveniently sidestepped if the wireless broadband carriers would offer reasonable tethering or home LTE modem plans with affordable prices per gigabyte. On the whole, LTE data is extremely stable, very high-throughput (many times faster than Verizon ADSL, that's for sure), and natively supports IPv6. It's usually still up if you have a localized power outage. It works fine in severe weather. It's cheaper for the carriers to roll out than to bring fiber to every house. Everybody fucking wins! Except they don't want to do it, because they're making money hand over fist as it is, and they have no regulation forcing them to change. Meanwhile, the "have-nots" who can't get FiOS or similar high-speed broadband are left in the 20th century, or trying to buy someone else's grandfathered unlimited data plan on craigslist or eBay.

I'm not even saying that unlimited data needs to happen on LTE. Sure, it would be nice, and I think it's achievable if they simply scale the tower density to the population density and fix the egregious spectrum waste problem (legacy protocols that are hideously inefficient, etc.); but even very cheap *limited* data plans would be fine. Nobody wants to pay $9.99+ to buy an HD movie, then pay another $40 in data charges just to download the damn thing. I think a reasonable price for data on a mobile network is 25 cents per GB: $1 to download a feature-length 1080p movie in high quality. That's perhaps a 5 - 10% tax on the cost of the content license. Not too terrible.
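
To put rough numbers on that claim -- assuming a high-quality 1080p feature weighs in at about 4 GB, which is my assumption, not a quoted figure:

    MOVIE_GB = 4          # assumed size of a high-quality 1080p feature
    LICENSE_USD = 9.99    # typical HD movie purchase price

    for label, per_gb in [("overage rate", 10.00),
                          ("bulk pre-purchase", 3.00),
                          ("proposed rate", 0.25)]:
        cost = MOVIE_GB * per_gb
        pct = 100 * cost / LICENSE_USD
        print(f"{label:>17}: ${per_gb:5.2f}/GB -> ${cost:6.2f} ({pct:.0f}% of the license)")

    # proposed rate: $0.25/GB -> $1.00, i.e. roughly a 10% "tax" on a
    # $9.99 license, versus $40 (400%!) at today's overage pricing.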

It's completely bogus to say that things have improved a lot in the last two years. The ditching of new unlimited data contracts on Verizon and AT&T, coupled with the stagnation of per-gigabyte pricing despite vastly expanding network capacity, is pure, unadulterated greed on the part of the carriers, with absolutely no sign of pro-consumer progress. If you believe $10/GB is reasonable, either you're shilling or you've got your head stuck in the sand.

Comment Re:Hydrogen is a nice alternative (Score 1) 194

The biggest challenge with this tech, as with most emerging tech these days, is to mass-produce it, and do so cheaply. People simply cannot afford the prices that are normally slapped on next-generation vehicles like this. That, and manufacturers only tend to produce about 1,500 of them per year -- not enough to even make a dent in the market.

I'm pessimistic, but I hope they prove me wrong. If the relatively successful mass deployment of gas hybrids is a baby learning to walk, rolling out this new fuel system is like asking that baby, fresh off its first steps, to march straight down to the track and beat Usain Bolt in a mile race.

Good luck.

Comment Re:CyberThis, CyberThat, CyberCommand (Score 1) 61

Actually, the "US military and federal contracting wanker-sphere" were among the few organizations that spent big bucks on the foundational concepts of networking that eventually led to the Internet. Look up the history of DARPA sometimes. The first letter in the acronym, D, stands for Defense.

They slap "Cyber" in front of everything for completely different reasons. Bean counters in the massive federal bureaucracy need distinctive search keywords for disparate efforts. If they just called everything "security", you would end up hiring pistol-carrying security guards who've never touched a computer, for a job description that says they're supposed to do penetration testing on mainframes.

Sure, their terminology seems a little out there (especially because much of the world doesn't feel the need to assign such specific, clumsy terms to everything), and I'm not defending their practice, nor am I claiming that they're up to date with the latest trends and technologies now that the Internet has flourished.

But it is a complete fabrication to say that the military-industrial complex / the US DoD / the US military is "30 years late discovering this whole internet thing". They BUILT it.

Al Gore didn't invent the Internet. DARPA did.

Comment Re:All based on a false-to-fact payment model (Score 1) 179

But Verizon is perfectly happy billing customers $10 per GB. They will only change if they are forced to by law. And if the FCC does actually strike against them and declare Internet service (including mobile) a public utility, I guarantee you they will sue the FCC, the courts, and even their own mothers, again and again, until the Supreme Court has to make a decision.

Comment Re:How about we hackers? (Score 1) 863

I consider myself part of the "hacker culture", and I'm sick and tired of this blind adherence to an operating system architecture from the 70s. AT&T UNIX is dead, and its descendants only mimicked it chiefly for compatibility reasons (otherwise you'd have had a rough time compiling any software written for UNIX on your new OS, and you'd never have gained market traction).

I find very little value in the UNIX philosophy; in fact, it adds a lot of needless overhead and puts MORE work on system administrators for no reason other than "we've been doing it that way since the 70s". I'm so tired of hearing this "criticism" of systemd.

More seriously though, if that's the best you can come up with -- that it's not "UNIXy enough" -- then systemd is well on its way to universal adoption. No serious maintainer of any distro except perhaps Slackware is going to take that argument seriously, because it's a strawman that should've died before many of us in the current incarnation of the hacker culture were even born. Now it's properly dying, and all the people who are stuck in the 70s are coming out of the woodwork to pitch a fit as their set-in-stone view of system architecture is rendered obsolete.

The UNIX philosophy had a time and a place, but just like any other design decision, it had major drawbacks. It worked well in the type of environment it was running in at the time, and continues to have some relevance today, but it is high time that we retire it. The new system architecture proposed by systemd may not be the best possible one, but I think it's a step forward from the old ways.

I would very broadly describe systemd's architecture as "service-based" -- you have one or more daemons that offer some kind of service over IPC, then wake up and process requests as they receive them from processes on the system. The kernel mediates the enforcement of access control, while the daemon(s) themselves mediate authentication (if required). It is a design that is pretty close to how we do web services in an enterprise environment.

We're getting away from "edit this file" and moving towards "call this service API" (or run a command that calls it for you, if you want to map the service architecture onto a model closer to UNIX). We are getting away from the rampant race conditions that come from file editing (look at the number of programs that try to touch/manage `/etc/fstab`, `/etc/resolv.conf`, etc. and end up stepping on one another's toes, trying to parse comments like "#FooProgram edited this; leave it alone!" as interoperability hacks). When you have a daemon offering a service, you get parallelism safety by design: the program can just mutex the critical section, and make sure only its user and root have access to its (ultimately) file-based backend. And root has no business touching the backend directly.
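
To illustrate the point, here is a toy sketch in Python (nothing resembling systemd's actual code): instead of every program opening the file itself, a single daemon owns the backend and serializes every write behind a mutex, so concurrent callers can't corrupt it.

    import threading

    class ConfigService:
        """Toy 'service API' that exclusively owns its file-based backend.
        Because this daemon is the only process touching the file, an
        in-process mutex is enough to serialize every write."""

        def __init__(self, path):
            self._path = path
            self._lock = threading.Lock()  # guards the critical section

        def set_entry(self, key, value):
            with self._lock:               # all writers serialize here
                entries = self._read()
                entries[key] = value
                self._write(entries)

        def _read(self):
            try:
                with open(self._path) as f:
                    return dict(line.strip().split("=", 1)
                                for line in f if "=" in line)
            except FileNotFoundError:
                return {}

        def _write(self, entries):
            with open(self._path, "w") as f:
                f.writelines(f"{k}={v}\n" for k, v in entries.items())

Other programs would call set_entry() over IPC rather than editing the file themselves, which is exactly why the "#FooProgram edited this" comment hacks become unnecessary.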

"But what if there are bugs?" you ask. Well, if everything is in a bunch of text files, you can just edit those text files as a workaround, thus avoiding the actual problem. But it's much better to get the actual bug fixed in the program itself, so that you AND all other users do not have to trip over the same bug again and again. A text-based configuration architecture just encourages lazy sysadmin practices and hacks that make a system brittle, unportable, and difficult to maintain.

If you read this and you would still rather have the UNIX philosophy, just remember: even though the service-oriented system architecture is gaining the upper hand in terms of mindshare and adoption in most GNU/Linux distros these days, there is absolutely nothing stopping you and a group of friends from starting a new distro, or contributing to an existing one, that insists on the old UNIX design. It is physically impossible for anyone making any kind of policy decision in the free software community to force you to do or run anything. The only potential threat would be if the development team behind a liberally-licensed (e.g. BSD) project decided to take their future contributions proprietary and shut down the code repos, halting development (with all the existing bugs, etc. remaining unfixed) unless you pay a license fee. That's a far greater threat than the existence of any free software could ever pose.

Kind of ironic that the people who are the most emotionally worked up about this whole systemd thing are the same ones flocking in droves to BSD. I would actively encourage the OpenBSD, NetBSD and FreeBSD dev teams to seriously consider taking their future contributions proprietary, just to show these people how silly they're being. A much better long-term solution is to continue using a GPL-based ecosystem (Linux, GNU userspace, etc.), simply run the init system of your choice, and host forks of any packages that seem to have a "hard dependency" on systemd. Yeah, it's a lot of work, but there seem to be about a million of you who hate systemd, so if only 10 or 15 of you got motivated enough, we'd see "Old Hand Linux" or something ship a 1.0 by Q4 2015, to the applause of 999,995 people over 50 and about 5 people under 50.

And not a single person would oppose you for releasing Old Hand Linux 1.0. It's completely your prerogative. Just as it is the prerogative of current distro maintainers to adopt (or not adopt) systemd. Start contributing, or get used to it.

Comment Re:No. (Score 1) 291

While I agree with most of what you said, I think the more important issue to address is that many people don't even have the opportunity to purchase a fast Internet connection, even if they desperately want one, can afford it, and are willing to pay top dollar for it -- even if they live in a fairly densely-populated suburban area. Pointless political posturing and bureaucracy at the state, county and local levels are denying viable connectivity to people whose livelihoods depend on it. For example, it's not far-fetched to believe that people who work with software -- programmers, system administrators, etc. -- will tend to need more bandwidth than people who work as HVAC repair technicians. If you work with software, you're naturally going to be interested in much more of the software that's out there, and want to download it -- IDEs, alternative operating systems, high-end content editing software, games, and so on.

But the current regulatory landscape pretty much dictates a small number of very specific locations where people who actually need high speeds to grow their knowledge and career are able to live happily. For no reason other than politics, or a whimsical decision by a bean counter at an ISP, houses 1/8th of a mile away might have 100 Mbps fiber to the home while you're stuck with 7 Mbps ADSL. The reasons may not even be related to expected revenue in your area: it may just be that the company didn't want to shell out any more money to roll out the fiber. So if you have a paid-off house from before the economy went to crap, and are not interested in a new or used house that costs 10x more, your choices are either to move (and vastly curtail your expenses to pay for the mortgage on your new shack), or to stay where you are and live with a 20th-century version of the Internet.

The real problem, as I see it, isn't about raising the bar for the average or typical expected speeds. The speeds that people get in the 50th percentile are fine. The problem is about bringing the bottom 20% of the speeds up to what the current average is, so that there aren't as many people left behind -- or AT LEAST making it so that people who are living in areas with these bottom 20% offerings can, at their option, purchase higher speeds, if they so choose.

I mean, there is always going to be *someone* left behind; if you own all the land around your house for 30 miles in all directions, you probably are going to have to pay a lot out of pocket to get any Internet faster than dial-up. But this should not be happening for people who live in areas with a population density that fits very well into the definition of "urban".

It's a tremendous loss of potential, both for the company (they're forgoing higher revenues by not offering these willing customers higher-end service) and for the individuals (they have legitimate non-entertainment reasons for wanting 50 Mbps+ speeds but are physically unable to get them without bribing a Verizon executive with a couple million bucks, or moving their entire livelihood to another place and starting the mortgage-induced poverty all over again). How many kids interested in programming went to download the Qt SDK or Eclipse or NetBeans or Ruby on Rails, only to see the estimated download time say "3 days" and give up? How many gaming enthusiasts interested in starting a career as a games critic on YouTube have been unable to do so because they only get 128 Kbps upstream? Or if you don't have any love for gamers, replace "gaming" with any other product space where being a critic/reviewer would get you enough views on YouTube to pull in decent ad revenue.

So yeah, there are plenty of potential uses for high bandwidth, but the distribution of bandwidth is pretty much like the distribution of wealth right now: you have the elite ruling class with (usually symmetrical or nearly symmetrical) 100 Mbps - 1 Gbps and up; you have the enormous preponderance of users -- the middle class -- with between 10 Mbps and 50 Mbps; and you have the lower class with dial-up to 7 Mbps ADSL. What I'd like to see is the shrinking of the number of people who are stuck being in the "lower class" of the bandwidth spectrum.

The problem is that there is no correlation between your income status, your professional need for bandwidth, and the actual bandwidth you're capable of getting. You could be a member of the (income) ruling class, a software developer, living in an urban area, and only be able to purchase ADSL. You could be a member of the (income) lower class, living in poverty, unable to afford any Internet at all and not even owning a computer, with 3 companies competing to offer you 50 Mbps+ at prices starting at $9.99 per month. The politics and financials of bandwidth availability are completely nonsensical. I am not advocating aligning the two so that only the rich can get high bandwidth and the poor can only get dial-up; rather, I am advocating that fewer people *overall* be left unable to get anything faster than ADSL, irrespective of their income or profession.

Comment Re:USB Device Recommendation (Score 3, Informative) 121

I have a Yubikey NEO. The U2F device they're selling now is the same form factor so I would assume it will work. It's a hardy little device -- it frequently clanks up against my other keys, but it still works in both USB and NFC modes. Not sure if the U2F model supports NFC, though. You'd have to check.

Still, good build quality. And there's no battery and there are no moving parts (it's completely solid-state), so these units can be expected to last a very long time. Basically the limiting factor is how much damage you accidentally do to the physical housing of the chip and/or the USB connector by dragging it with you everywhere. So far that damage is zero on mine, as far as I can detect.

Comment Just use the keyboard...! (Score 3, Informative) 107

The keyboard?! How quaint!

In all seriousness: if you don't have an unlimited data plan, you're probably going to blow your data allowance, unless by some miracle you've found a provider that values 1 GB of LTE at an order of magnitude (or more) less than $10 per GB.

If you had an unlimited data plan, you would ideally be able to use the hotspot feature that's built into nearly every smartphone these days, and forgo a dedicated hotspot device. On Verizon it's an extra $30/mo for hotspot tethering on stock firmware for phones that aren't rooted, but totally worth it for the benefit you get. This is my primary (only) Internet connection. You could make yours the same if you had unlimited data. It's not new or far-fetched at all.

In fact, if the carriers *did* reduce the amortized cost of 1 GB of data transfer on LTE by a factor of 10 or more, I'd be willing to bet that we would see many millions of people signing up for *limited* data plans on the order of 100 - 150 GB and tethering through their phones or using a hotspot as their primary internet connection. Right now "limited" data plans are simply too much money -- on Verizon XLTE with the MORE plan, you can get something like 100 GB for $700/month. That's still way too much money for too little data. Until and unless the prices become somewhat reasonable, so it "only" costs you $2 to watch that Netflix video instead of $20, unlimited-data subscribers will remain pretty much the only people using LTE as their primary connection.

Comment Re:One rule comes to mind... (Score 1) 191

My complaint is less about absolute cost than comparative cost.

The mid-grade Intel NUC model, with a Core i3, is going to ring up a bill close to $450 USD once you've purchased the core NUC unit plus all the necessary parts under the hood. For that, you get USB 3.0, a dual-core processor with hyperthreading that runs circles around any router's, and dual-band 2x2 802.11ac.

To get a device that's labeled as a router, gateway, or router+gateway with comparable specs -- just in terms of I/O throughput and total wifi bandwidth (never mind computational power, since that doesn't come into play very much) -- you have to venture into "enterprise-grade" equipment, which is intentionally priced at "as much as the market can bear" (and corporations can bear a lot). You'll easily spend $1500 to $2000. The only benefit is that you'll hopefully get a nice, stodgy, well-tested but very utilitarian web interface that lets you customize everything you could possibly want. Is a web interface worth $1000 or more? I don't really think so.

My next project is to stick some really nice antennae on a NUC and build a router based on Debian with better everything -- and cheaper -- than the enterprisey routers, while being much more featureful and customizable than a consumer router. Heck, I am tempted to put X and a lightweight desktop on it, so it can be used as a web browser in a pinch (like when you bork your main desktop's OS).

Comment Re:One rule comes to mind... (Score 1) 191

It's very hard to find affordable routers with latest-gen tech (802.11ac, USB 3.0, etc.) that support flashing and have decent driver support on Linux or *WRT, though. Many routers have such anemic SoCs that they barely run the built-in firmware, let alone something custom that isn't hand-optimized for the device.

I'm close to resigning myself to the fact that every router I have going forward is gonna have to be an Intel NUC. Even a Celery processor is many times faster than those MIPS pieces of crap they ship in most routers that cost under $1000.

Comment Re: Critics should take positive action (Score 1) 993

Please stop linking to that website. I have taken in all the information on it, and yet I find systemd to be significantly easier to use than almost every other init system except Solaris' SMF (which has significant tradeoffs compared to systemd, so I'd consider it neither better nor worse). In both the RHEL7 and Debian Testing implementations, I find my system easier to diagnose, and I find it easier to set up new services when installing stuff (both from the package manager and from source).
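
As a concrete illustration of the "setting up new services" point: installing a daemon from source mostly amounts to dropping a unit file like the sketch below into /etc/systemd/system/ and running `systemctl enable --now mydaemon`. The daemon name, path and flag here are placeholders, not a real package:

    # /etc/systemd/system/mydaemon.service -- a minimal example unit;
    # "mydaemon" and its path/flag are placeholders for your own software.
    [Unit]
    Description=Example daemon built from source
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/mydaemon --foreground
    Restart=on-failure
    User=mydaemon

    [Install]
    WantedBy=multi-user.target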

I use GNU/Linux both as a desktop/laptop distro and for (headless) dedicated servers, and systemd has never once stood in the way of me getting shit done. In fact, it is better at that than any other init system I've ever used, and I distro shopped for a decade before I started to settle on Debian and CentOS as my main two (I also tried out all three main BSD variants -- Free, Open and Net -- and OpenSolaris).

Most of the criticisms on that site are completely immaterial to me, because they're either philosophical or the typical crybabying of "what about BSD?????". Well, now even the BSD folks can shut up, because uselessd is becoming an actually useful piece of software that will hopefully maintain some degree of compatibility with systemd at the integration points that other packages have to support. This should at least fix upstream GNOME3 on BSD.

The few valid technical arguments go along the lines of "there are too many GNUisms in the code". Compiler and libc compatibility matter, so I can get behind that. But really, if you solely use GNU/Linux like I do (and it's not even a "Red Hat" thing anymore, with so many distros taking it up), it's hard to consider this a priority. I'm glad that some people do, and have created uselessd as a result. Uselessd is the opposite of a parody, in my opinion: it's a confirmation that systemd is fundamentally useful and innovative, is here to stay, and is so useful that people want to implement at least some pieces of it on other OSes. More power to them!

Hopefully the uselessd developers will take their project in a direction that is pragmatic, resulting in a better overall init system. If they pull it off, the systemd developers might consider merging their work upstream, which is the ultimate compliment -- this happened with gcc when the egcs fork was merged back in, and now the gcc community is one big happy family. Mostly. Or at least a lot happier than before.

The work they're doing on uselessd is infinity percent more constructive than what the rest of you imbeciles are doing: sitting around complaining about something being "forced down your throat". FOSS -- where forced obsolescence doesn't exist and the licenses are free as in beer -- and you talk about things being *forced* upon you? Fuck me. Go live in an actually oppressive society for a decade or so, and THEN you'll know the true definition of having something forced upon you. Everyone who thinks there is any sort of enforcement going on with systemd needs to live in North Korea until they actually understand the words that come out of their own mouths.
