Comment Re:Congress is just mad someone is beating them (Score 1) 137

I haven't found the actual bill yet, so I don't know what it actually says, but this seems to be the case.

The bill is literally linked in the summary. I'll link it again here: https://assets.documentcloud.o...

It's a page and a half of actual content, even with the narrow columns and oversized font they use for whatever reason.

Comment Re:More nation-wrecking idiocy (Score 1) 600

This is actually part of a standard practice in traffic engineering called traffic calming.

It's well known that without strict enforcement, people will drive at whatever speed they feel comfortable on a road, regardless of the posted speed limit. This is why, for example, when you have a four-lane divided highway with a speed limit of 55-60 MPH (common in most major metro areas), the average speed of free-flowing traffic is still likely 65+.

The same applies in areas where you really do want people driving slower. If you run four nice wide lanes plus a suicide lane down the middle of a neighborhood, it doesn't matter if you post a 25 MPH limit on it; people will still tend to drive it at the same speed they would any other road of that size. Traffic calming is the practice of intentionally doing things that make it less comfortable for drivers to travel at speed, thus reducing the average speed naturally: narrowing lanes, removing markings, the funky markings some countries use, center islands, chicanes, etc.

This is the sort of thing that should be encouraged, because they're doing it right rather than just tossing up unrealistic speed limits and going hardcore on enforcement.

Of course, that is all on the assumption that these are areas where reducing speeds is legitimately desirable, as opposed to places where some idiots suddenly decided a road that's been there for years is a dangerous menace around the time their kid started walking, or some old bastard whose reaction times aren't what they used to be decided the same.

Comment Re:What the doctor ordered... (Score 1) 699

So your complaint appears to be that when one recklessly uses root commands, it should only wipe out all their data, not all their data plus their motherboard? That's not terribly urgent. It'd be like complaining that handguns are too powerful, when I shoot somebody in the head it should only kill them, not kill them and also leave a hole in the wall.

Killing the motherboard is a lot worse than just losing what's on the hard drive. When a hard drive dies (effectively the same result as 'rm -rf /' in a traditional environment) in one of my clients' machines, I can replace it with a random one from stock and reinstall the OS from a USB stick within an hour. Restoring from backup takes however long it takes to move that machine's data over gigabit Ethernet.

If an otherwise good motherboard gets nuked, now I have to source parts that may be in a proprietary form factor, I have to completely pull apart the machine, etc.

For your gun analogy it'd be more like if it brought the whole building down rather than just putting a hole in the wall.

Comment Re:80 MB? Seriously? (Score 1) 116

Would be 4.5 MB if it were MODs, or .45 MB if it were MIDs

I don't think we need to go over the limitations of MIDI; I'm sure you know them already, and other posters have covered it anyway.

As for MODs, those work great when you have a hardware audio engine with enough channels that you can load the appropriate instruments into it and trigger them at the appropriate times. When you're doing all your audio in software, a single preassembled audio track is simpler and more reliable.
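To make the tradeoff concrete, here's a minimal sketch (not a real MOD player, and the sample data is invented) of what "doing all your audio in software" means: every tracker channel has to be summed and clipped sample by sample at playback time, whereas a premixed track is just one array you stream out.

```python
# Minimal sketch of software mixing of tracker-style channels. Each
# "channel" is a list of signed 16-bit samples; a premixed track would
# skip all of this and simply stream a single array.

def mix_channels(channels):
    """Sum per-channel samples and hard-clip to the signed 16-bit range."""
    length = max(len(c) for c in channels)
    mixed = []
    for i in range(length):
        total = sum(c[i] for c in channels if i < len(c))
        mixed.append(max(-32768, min(32767, total)))  # hard clip
    return mixed

# Four channels, as on the Amiga where the MOD format originated.
channels = [
    [1000, 2000, 3000],
    [500, 500, 500],
    [-200, -100, 0],
    [30000, 30000, 30000],  # a loud channel forces clipping on the last sample
]
print(mix_channels(channels))  # [31300, 32400, 32767]
```

A real player also has to resample each instrument for pitch and apply volume/effect commands per tick, which is exactly the per-frame work a preassembled track avoids.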

This, not the music, is actually what pisses me off. Use low-res graphics to make them look like low-res graphics, then use a fancy scaler to make them smooth when scaled way up. We all have supercomputers on our desks and laps now; there is no need to ship high-res textures just to save some CPU. Using low-res graphics with Quincunx or similar actually produces a better retro look than using high-res graphics anyway, so if your goal is to look retro, that's the better way to do it.

It's been a while since I've played SuperTux, but I don't recall it ever looking like they were going for a retro low-res appearance. Firing up this latest version to play a level, I don't see any textures that are wasting their resolution.

Could the game work just as well with lower-res textures? Of course, but so could pretty much any other game ever. As long as they're not doing something stupid like using a 64x64 texture for a blown-up representation of 16x16 art, I don't see the problem.

That's bigger than the whole original game!

It's infinitely higher quality than the audio in the original game too. That's what happens when we can play actual waveform recordings rather than going through the NES audio system, which was basically an extremely limited form of MIDI.

8 and 16 bit era game consoles were basically programmed like embedded devices, raw access to all the hardware and a fixed configuration meant that developers could pull all kinds of trickery to achieve much higher quality than the binary size would have you expect. I have a fairly complete NES ROM collection, including a lot of hack ROMs that never actually existed on a cartridge, and it totals just a bit over 100 megabytes. The thing is, that means those games as they are will only ever exist on that hardware because changing anything significant would throw off everything.

Building something to work the same on a variety of systems means you can't do that sort of hardware-level optimization. It's also really hard, requiring a detailed understanding of your target platform. Abstraction solves both of these problems at the cost of limiting your optimization. These days whether you're targeting mobile, PCs, or consoles it's just not worth the effort to try to shave a few dozen megabytes by significantly increasing the complexity of your code. Disk is cheap. RAM is cheap. Bandwidth is cheap. Programmer time is not cheap. Clever hacks make your code harder to understand for new contributors and introduce opportunities for bugs that need not exist.

Comment Re:Ha ha (Score 1) 181

Major mobile providers are/were not handed geographic monopolies, although there are some de facto ones cause huge country, etc. etc.

Technically correct: they didn't have monopolies, but they did have duopolies in which the ILEC was given special treatment.

In the analog days the FCC divided the available channels in half, allowing two networks per market area. The "A" channels were allocated for competitive wireless providers and the "B" channels were for the local wireline provider. Basically, the ILECs were guaranteed half the capacity and a maximum of one local competitor.

It's not as bad as a pure monopoly, but it's pretty much the same situation much of the US has with broadband, where the choices are one cable provider and the ILEC's DSL. We all know how well that works out.

Comment Re:cracked in about two years. (Score 1) 133

Where does the PS4 hold its OS? If it's on the HDD you could just back it up to one or two others and let them sit around.

I don't actually know any specifics about the current generation consoles' security, but in the last generation there were protections against downgrading the OS.

I'm most familiar with the Xbox 360, which used eFuses to prevent downgrades. Older versions of the OS would refuse to boot if more than a certain number of fuses had been burned. You could do a hardware mod to prevent the fuses from being burned, but once they're gone there's no going back.
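Here's a toy model of the idea, hedged heavily: this is not the actual Xbox 360 boot logic, just a sketch of how one-way fuse burning makes downgrades refuse to boot. The class and fuse counts are invented for illustration.

```python
# Toy model of eFuse-based downgrade protection (NOT the real console
# logic): each OS update burns fuses, and a kernel only boots if the
# number of burned fuses doesn't exceed what that kernel version allows.
# Burning a fuse is physically irreversible, so old kernels stay locked out.

class EfuseBank:
    def __init__(self, size=16):
        self.fuses = [False] * size  # False = intact, True = burned

    def burn(self, count):
        """Burn fuses up through `count`; there is no un-burn."""
        for i in range(min(count, len(self.fuses))):
            self.fuses[i] = True

    def burned(self):
        return sum(self.fuses)

def can_boot(kernel_fuse_count, bank):
    # An older kernel allows fewer burned fuses than the console now has,
    # so the check fails and the downgrade attempt is rejected.
    return bank.burned() <= kernel_fuse_count

bank = EfuseBank()
bank.burn(3)                 # three updates have been installed
print(can_boot(3, bank))     # current kernel: True
print(can_boot(2, bank))     # older, possibly exploitable kernel: False
```

The hardware mod mentioned above corresponds to intercepting the `burn()` step, which is why it only helps if done before any fuses go.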

If I remember correctly, at least the fat PS3s could be downgraded after a low-level bootloader exploit was discovered, but that still required hardware modification and came long after most non-piracy development interest had faded due to modern GPUs being much faster than the Cell.

I would assume that the current-gen consoles are at least equal if not better from a security standpoint. Both companies have been very good about not making the same mistake twice.

Comment Re:cracked in about two years. (Score 3, Informative) 133

Except that wouldn't really be a great experience because AMD's drivers are still terrible on Linux.

Both of the current PC-like consoles use AMD GPUs derived from the GCN 1.0 family. The PS4's sits roughly between a Radeon HD 7850 and a 7870, while the XB1's is harder to compare because its memory configuration is unusual (fast ESRAM cache but slow DDR3 main memory). It has fewer cores and a third the memory bandwidth, but clocks higher.

Considering that even nVidia's optimization on Linux isn't as good as it is on Windows (most benchmarks I've seen show SteamOS delivering 50-80% of the framerate of the same hardware on Windows), you'd be giving up a LOT by trying to run SteamOS on a PS4 rather than just building a cheap gaming PC.

Since the flaw being exploited will likely be patched soon after it goes public, if not before, the better plan if one wanted to switch from PS4 to something else would be to hang on to your potentially exploitable console and keep it offline until someone releases an exploit. If Sony is able to fix the hole with a patch, any unpatched boxes immediately jump up in value, like we saw with the Xbox 360 and PS3. That of course means giving up online features and possibly new game releases for a while, but if you're one of those users who doesn't game online and/or uses it mostly as a Blu-ray player, that might not be a big deal. You can then use the money to build a budget gaming PC that'll beat the pants off of any of the consoles.

Comment Re:Problem with the definition of a planet (Score 2) 73

They'll say, "oh, it's okay, there's enough of a size difference between those bodies that they don't count".

No, they'll just point out that while the orbits of the two planets appear to cross when looking at a 2D top-down view of the solar system, in 3D space they come nowhere near each other. The closest point in their orbits is 2 AU apart. Unless you want to say that the orbital zone Neptune is supposed to be clearing extends twice the distance from the Earth to the Sun, Pluto is irrelevant.
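You can check this yourself with a rough sketch: sample both orbits as simple two-body ellipses (approximate J2000 orbital elements, no perturbations) and find the minimum 3D separation. The element values below are approximations pulled in for illustration, not authoritative ephemeris data.

```python
import math

# Rough sketch: sample Neptune's and Pluto's orbits in 3D using approximate
# Keplerian elements and find the minimum separation between the curves.
# Two-body ellipses only; real ephemerides include perturbations.

def orbit_points(a, e, i_deg, node_deg, peri_deg, steps=360):
    """Points (AU) along an orbit: semi-major axis a, eccentricity e,
    inclination i, longitude of ascending node, argument of perihelion."""
    i, node, peri = (math.radians(x) for x in (i_deg, node_deg, peri_deg))
    pts = []
    for s in range(steps):
        nu = 2 * math.pi * s / steps                # true anomaly
        r = a * (1 - e * e) / (1 + e * math.cos(nu))
        u = peri + nu                               # argument of latitude
        x = r * (math.cos(node) * math.cos(u)
                 - math.sin(node) * math.sin(u) * math.cos(i))
        y = r * (math.sin(node) * math.cos(u)
                 + math.cos(node) * math.sin(u) * math.cos(i))
        z = r * math.sin(u) * math.sin(i)
        pts.append((x, y, z))
    return pts

neptune = orbit_points(30.07, 0.0087, 1.77, 131.78, 273.2)
pluto = orbit_points(39.48, 0.2488, 17.16, 110.30, 113.77)

min_sep = min(math.dist(p, q) for p in neptune for q in pluto)
print(f"Minimum sampled separation: {min_sep:.2f} AU")  # on the order of 2 AU
```

Dropping the `z` term (i.e., the 2D top-down view) is exactly what makes the orbits look like they cross; keeping Pluto's 17-degree inclination is what keeps them apart.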

For someone who seems to care a lot about Pluto you seem to have forgotten how absurdly tilted its orbit is.

Comment Do the math, figure it out for yourself. (Score 1) 325

I'm going to break from the majority here and say it could possibly go either way.

For a higher-end system, I wouldn't bother with any builder who won't tell you the exact parts they're using. I don't know what the situation is with Alienware these days in the Dell era, but if they don't say or use custom parts in key places I say skip 'em.

Something like a custom but still ATX standard case is fine, but a proprietary motherboard or GPU is no good IMO.

From there, just do the math. Look up what the same or an equivalent machine would cost you to build, then figure out how you value a central source for warranty support and the time you'd take to build it yourself.

I've only seen this on the low end rather than the high, but it's certainly possible that the volume OEM gets better prices on parts than you do, to the point that they can sell you a prebuilt that's either cheaper than you could build on your own or a better value once you factor in the warranty and your personal time.
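The "do the math" step really is just arithmetic. A back-of-the-envelope sketch, with every price and valuation invented for illustration:

```python
# Back-of-the-envelope comparison of prebuilt vs. DIY (all numbers made up):
# total the parts, then put your own price on the warranty and build time.

parts = {"cpu": 330, "gpu": 550, "board": 180, "ram": 90,
         "ssd": 120, "psu": 100, "case": 80}
diy_cost = sum(parts.values())

prebuilt_price = 1600
build_hours, hourly_value = 4, 25     # what your own time is worth to you
warranty_value = 100                  # value of central warranty support

effective_diy = diy_cost + build_hours * hourly_value
premium = prebuilt_price - diy_cost

print(f"DIY parts total: ${diy_cost}")             # $1450
print(f"DIY including your time: ${effective_diy}")  # $1550
print(f"Prebuilt premium over parts: ${premium}")    # $150
```

If the premium comes in under what you'd pay for the warranty plus your time, the prebuilt wins; otherwise build it yourself.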

All that said, I personally enjoy the process and the ability to select exactly the parts I want to a point that I doubt I'd ever buy a prebuilt for my own use outside of a few appliance-type machines where I can't easily replicate it.

Comment Re:Easiest technical solution for this (Score 1) 138

I know every Android device I've used has either had non-existent support or barely functional implementations that were obviously set up and then forgotten about.

Quit using shitty devices that have been bastardized by the carrier and/or a fuckhead OEM that thinks they need to be a special snowflake.

My phone runs a pretty much pure build of the Android Open Source Project and all three forms of tethering work great. It even supports USB tethering to WiFi, which I've used to get my desktop online when my switch failed.

If it's broken on the devices you've used, it's because someone went out of their way to change it from the stock Android solution and either didn't care to test or intentionally wanted to break tethering.

Comment WAAAAY Overblown! (Score 5, Informative) 113

Here's a link to a page that actually describes the "vulnerabilities" they found: http://www.kb.cert.org/vuls/id...

All of them apply only to Voice over LTE environments, which differ from traditional mobile phone networks in that the LTE network carries pure IP traffic. A call is effectively a voice over IP call using standard protocols like SIP, the same as an internet-based VoIP service would use.

As someone who's been working in VoIP for over a decade I just have to laugh at this crap.

Let's start:

The Android operating system does not have an appropriate permissions model for current LTE networks; the CALL_PHONE permission can be overruled with only the INTERNET permission by directly sending SIP/IP packets. A call made in such a manner would not provide any feedback to the user. Continually making such calls may result in overbilling or lead to denial of service.

Translation: A VoIP app doesn't require phone permissions if it's not accessing any of the OS' phone subsystems. No shit, sherlock.

The only way this could result in billing or denial of service is if the carrier was not properly authenticating the SIP traffic and was just assuming that anything from that phone aimed at the right IP address must be a legit call. That's 100% a carrier fault, not any flaw with the system. Do they propose that Android should be specifically watching for SIP traffic and require an app have the phone permission to be able to send it?
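To illustrate why this isn't an OS flaw: a SIP INVITE is just text over UDP, so anything with plain network access can construct one. The sketch below uses made-up numbers and a documentation-reserved IP address; a sanely configured carrier rejects it unless the dialog is properly authenticated.

```python
import socket

# Sketch: any app with plain network (INTERNET) access can hand-build a
# SIP INVITE and aim it at an IP address without ever touching telephony
# APIs. All addresses and numbers below are invented examples.

def build_invite(from_num, to_num, server_ip):
    return (
        f"INVITE sip:{to_num}@{server_ip} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP 10.0.0.2:5060;branch=z9hG4bKdeadbeef\r\n"
        f"From: <sip:{from_num}@{server_ip}>;tag=1234\r\n"
        f"To: <sip:{to_num}@{server_ip}>\r\n"
        f"Call-ID: example-call-id@10.0.0.2\r\n"
        f"CSeq: 1 INVITE\r\n"
        f"Content-Length: 0\r\n\r\n"
    )

msg = build_invite("15551234567", "15557654321", "192.0.2.10")
print(msg.splitlines()[0])  # INVITE sip:15557654321@192.0.2.10 SIP/2.0

# Sending it is just a UDP datagram -- no phone permission in sight:
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(msg.encode(), ("192.0.2.10", 5060))  # left commented: sketch only
```

The only defense that actually works here is the carrier authenticating and authorizing the SIP traffic server-side, which is exactly the point.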

Apple reports that iOS is not affected by this issue.

I smell bullshit, but I don't have an iOS device to confirm. I doubt Apple requires that VoIP clients have special permissions over anything else.

Some networks allow two phones to directly establish a session rather than being monitored by a SIP server, thus such communication is not accounted for by the provider. This may be used to either spoof phone numbers or obtain free data usage such as for video calls.

This is carrier logic if I've ever heard it. Using the data service I pay for to send IP traffic (which happens to contain voice or video) directly to another user on the data service they pay for is somehow a vulnerability? Again I'm not sure how this is platform-specific.

Spoofing numbers again would require that the carrier have their network configured in a stupidly open and trusting fashion. None of my customers can spoof numbers unless I allow them to (hint: I don't) and it wasn't rocket science to set things up that way.
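The server-side rule isn't rocket science, as noted. Here's a minimal sketch of the idea (hypothetical account table, not any particular SIP server's config): ignore whatever caller ID the endpoint claims and substitute the number assigned to the authenticated account, unless that account is explicitly trusted.

```python
# Sketch of server-side caller-ID enforcement (invented account data):
# the switch forces each authenticated account to its own assigned number
# unless spoofing has been explicitly allowed, e.g. for a trusted trunk.

ACCOUNTS = {
    "alice": {"number": "15551230001", "may_set_callerid": False},
    "trunk7": {"number": "15551239999", "may_set_callerid": True},
}

def effective_caller_id(account, claimed_number):
    acct = ACCOUNTS[account]
    if acct["may_set_callerid"]:
        return claimed_number   # trusted trunk: honor the presented header
    return acct["number"]       # everyone else: forced to their own number

print(effective_caller_id("alice", "19005550000"))   # 15551230001
print(effective_caller_id("trunk7", "19005550000"))  # 19005550000
```

A carrier that skips this check is the one creating the "spoofing vulnerability" described in the advisory.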

Some networks do not properly authenticate every SIP message, allowing spoofing of phone numbers.

Repeating themselves here, while this time acknowledging that it's the network's problem.

Some networks allow a user to attempt to establish multiple SIP sessions simultaneously rather than restricting a user to a single voice session, which may lead to denial of service attacks on the network. An attacker may also use this to establish a peer-to-peer network within the mobile network.

Well, at least this time they blame the network from the start. I wouldn't limit users to a single session, since that prevents 3-way and 4-way calls, but reasonable limits are good there. I'm still not sure what would be wrong with endpoints directly contacting each other via the data service they're paying for.
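A "reasonable limit" is a one-counter check at INVITE time. Quick sketch, with the cap of 4 chosen arbitrarily (enough for multi-party calls, low enough to stop bulk abuse):

```python
# Sketch of a per-subscriber concurrent-session cap (the limit of 4 is an
# arbitrary illustration): the server rejects new sessions over the cap.

class SessionLimiter:
    def __init__(self, max_sessions=4):
        self.max_sessions = max_sessions
        self.active = {}  # user -> current session count

    def try_start(self, user):
        if self.active.get(user, 0) >= self.max_sessions:
            return False  # reject the INVITE: subscriber is over the cap
        self.active[user] = self.active.get(user, 0) + 1
        return True

    def end(self, user):
        if self.active.get(user, 0) > 0:
            self.active[user] -= 1

limiter = SessionLimiter()
results = [limiter.try_start("bob") for _ in range(5)]
print(results)  # [True, True, True, True, False]
```

The denial-of-service scenario in the advisory only exists on networks that skip even this much accounting.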

I have no doubt that some carriers' networks are truly insecure enough to allow the spoofing and fraudulent usage described here, but that's entirely down to their own stupidity because none of these things are hard to prevent at the network level, even the ones that aren't actual problems.

Comment Re:The Line (Score 1) 928

There is a place for profanity-laced arguments. There are times when the cluebat needs to be applied. They should be the exception and preferably done in private. The problem comes when every discussion quickly devolves into name calling and profanity.

It is the exception though. LKML runs around 5000 messages a week and this sort of "Linus is mean" drama only comes up a few times a year if that. It's by no means every discussion. Linus has over 800 posts to the mailing list this year, which comes to somewhere around four a day assuming he's not working 7 days a week. All the examples I've ever seen have been preceded by much more polite conversation or have been flagrant violations of well known kernel dev policies by people who definitely should know better (such as the two cases Ms. Sharp specifically picked out a few years ago).

In at least one documented case with her, Linus actually did take it off-list, and she brought it back into the public.

Just because a woman has brought it up does not make it a gender issue. In the end this is not a man or woman issue it is a civility issue.

You say this, but then you follow it up with making it a gender issue by saying this:

To all those who say "women should get thicker skins and not take things personally" I say "certain men should stop equating being right with their worth/masculinity or go back to the cave where they belong".

In a meritocracy, which all technical projects should strive to be, being right is directly equated with your worth as far as the project's concerned. Masculinity has nothing to do with it.

If someone has been trusted with the maintainer role of an important subsystem, as both of the people in Ms. Sharp's earlier examples are/were, it's fair to expect that they understand the basic rules of kernel development. If they then try to violate those rules and go out of their way to defend those violations, it's fair to drop the politeness. If it's a recurring problem, as Linus claims and I'm very willing to believe, it's not unexpected that someone might have very little patience.

Comment Re:Build timestamps mess this up (Score 1) 130

Why would a lot of code need to be "fixed" just because someone anally retentive wants deterministic builds? If they truly care they can LD_PRELOAD fake date/time libs.

The reason for deterministic builds is to allow those of us who use binaries from our distros for convenience's sake to verify that a binary is actually built from the source it claims to come from. It only takes a few people actually doing the verification to confirm things are good for all of us.

Basically it lets the lazy masses gain the same level of confidence in what they're running as those who compile everything from source.
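The verification step itself is trivial once builds are bit-for-bit reproducible: independent rebuilders hash their artifact and compare it against the published binary. A minimal sketch (the byte strings stand in for real package files):

```python
import hashlib

# Sketch of "a few people verifying for everyone": rebuild the package from
# source, hash it, and compare against the distro's published binary. Any
# tampering changes the bytes and therefore the digest.

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

distro_binary = b"\x7fELF...package bytes..."
my_rebuild = b"\x7fELF...package bytes..."      # built from the same source
tampered = b"\x7fELF...backdoored bytes..."

print(sha256(my_rebuild) == sha256(distro_binary))  # True: reproducible, matches
print(sha256(tampered) == sha256(distro_binary))    # False: mismatch flags it
```

All the hard work in the Debian project is on the build side: eliminating timestamps, paths, and other nondeterminism so that two honest builds actually do produce identical bytes.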

I thought this problem was solved a long time ago by the bitcoin developers w/ gitian.

Bitcoin solved it as far as they needed for their own purposes; this project aims to solve it generally, across the entire Debian operating system. The goal is that the whole Debian binary package repository could be audited by anyone who cares to do it. Obviously, if it's generic enough to cover all of Debian, the same techniques should be usable pretty much across the entire computing world.
