
Comment Can we just get rid of it? (Score 3, Interesting) 74

The only bias I see is that, for some reason, Facebook seems to think I'm interested in celebrity gossip, because that's about all that ever shows up in the "Trending" section for me.

I'm interested in science and technology, but every "trending" topic I see is something about what Britney Spears ate for breakfast, or what dress Caitlyn Jenner wore to the mall, or some other equally banal and useless piece of "news" about some celebrity I don't give a crap about.

I'm not even exaggerating. My current "trending" topics include:

  • - What's coming up on Netflix next month (US Netflix, that is. I'm Canadian and watch Canadian Netflix, and we don't get the same new movies the US does, so it's even more useless)
  • - "Go topless day"
  • - Some sort of conspiracy theory about Herman Cain and EpiPens
  • - Some nonsense about whether or not some pastor endorsed Donald Trump
  • - Something about some guy I've never heard of who got roasted by Twitter due to his hairstyle
  • - Five reasons to see some movie I've never heard of
  • - The 77th anniversary of The Wizard of Oz
  • - Something about Britney Spears doing karaoke

As you can see, my "trending" doesn't have a Liberal or Conservative slant -- it just has an inanely stupid slant.

"Trending" is the least useful part of Facebook, and personally I wish they'd just get rid of it altogether.


Comment Re:"Windows exclusive" (Score 1) 123

That being the case, why the hell is this Windows exclusive? Why not open it to Macs and desktop Linux?

A Sony rep mentioned on the PlayStation Blog today that they are evaluating Mac support. Obviously they can do it, because they're already doing it with PS4 Remote Play for Mac (interesting side note: the PS4 Remote Play app for Mac is significantly smaller than the Windows version. One of these days I mean to look into why that is).


Comment Re:Input on a Windows tablet? (Score 1) 123

"With a Windows laptop or tablet, you aren't tethered to a big-screen TV. You could theoretically take these PlayStation games anywhere"

The article says it requires a DualShock 4 controller. I don't see how that will work with all Windows tablets, especially seeing as ARM-based Windows tablets (like the Surface 1 and 2 non-Pro) allow only XInput controllers (that is, Xbox 360 controllers and one Logitech model).

Sony also announced a USB dongle today for Mac and Windows that permits wireless DS4 connections. Assuming the tablet has a USB port, you could presumably use that (although as yet there's no word on whether it requires special drivers).


Comment Re:Finally! (Score 1) 211

I'm not so sold on the evils of writing passwords down as it requires the Evil Actor to have physical access in order to exploit it. And as we all know, once you have physical access it is pretty well game over for security in general.

That depends entirely on the purpose of the Evil Actor. If Evil Actor's purpose is to break into your corporate network and steal data from the outside, you're probably correct.

If, however, the Evil Actor is the guy at the next desk who wants to do something nefarious and pin it on you, then all they have to do is offer you a nice tall beverage, and wait for you to leave to use the washroom.


Comment Re: yay more emojis (Score 3, Interesting) 200

I honestly have yet to figure out what the fuck the point in most of these emojis is. In the past everybody just used a combination of existing ascii symbols to show the mood of your message, and I am still trying to figure out what the new emojis solve that that system didn't solve.

You need to understand a bit about where and why emoji started showing up in the first place. And to do that, we go back to pre-millennium Japan.

Japanese is, to put it bluntly, an insanely complex written language. Modern Japanese uses no fewer than four different scripts/alphabets, and in any given sentence, different types of words may need to be in different scripts! They are:

  • - Kanji: logographic elements taken from Chinese. These are symbols that stand for a word, phrase, or idea on their own. There are several thousand in modern use in Japan.
  • - Hiragana: a set of 46 symbols indicating syllables. These are typically used for native Japanese words that don't have a Kanji equivalent.
  • - Katakana: a set of 48 symbols also indicating syllables. Indeed, many of these syllables are identical to those available in Hiragana, but with completely different symbols. These are used for loan-words, scientific terms, names of plants and animals, and for emphasis.
  • - Romaji: as if all that isn't bad enough, some words (loanwords and trademarks) are written in the standard Latin script we use in English ([A-Za-z]).

And if all that wasn't bad enough, there is also hentaigana, which are obsolete kana sometimes used to give things like restaurants an old-timey feel (something akin to 'Ye Olde...' in English). And because the different scripts in Japanese are used for different types of words, you frequently have to switch among them within a single sentence. In short, written Japanese is f'd up.
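For the curious, the script mixing is visible right in Unicode itself. A quick Python sketch (the sample characters are my own arbitrary picks) shows how each script lives in its own Unicode block:

```python
import unicodedata

# One sample character from each script used in modern written Japanese,
# plus an emoji for comparison. The picks are arbitrary examples.
samples = [
    ("Kanji",    "\u65e5"),      # 日 (sun/day)
    ("Hiragana", "\u306e"),      # の
    ("Katakana", "\u30ce"),      # ノ (same syllable as の, different script)
    ("Romaji",   "A"),
    ("Emoji",    "\U0001f600"),  # grinning face
]

# Print each character's codepoint and its official Unicode name
for script, ch in samples:
    print(f"{script:9} {ch}  U+{ord(ch):04X}  {unicodedata.name(ch)}")
```

Note that の (Hiragana) and ノ (Katakana) represent the same syllable "no" with completely unrelated symbols, which is exactly the duplication described above.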

This is where emoji came from. Imagine a late-1990s cell phone with the 12 standard buttons, and having to send text messages in Japanese. How do you use those 12 buttons to select from thousands of Kanji symbols? How do you switch between Katakana, Hiragana, and Romaji? I'll admit I'm not a Japanese speaker (I've studied the writing systems, but not the language itself), but I'd think even typing "Hey, let's meet up with Akira at the McDonalds" would take a week on a standard flip-phone keypad. Thus emoji were invented to provide visual shortcuts for writing things that would otherwise be a major PITA to type in Japanese.

So basically, because written Japanese is so incredibly f'd up with four simultaneous scripts in modern usage...the Japanese decided to get around it by adding another script system.

Early iOS releases implemented emoji to satisfy the Japanese market, but in case you don't recall that far back, it was originally available only if you set your system language to Japanese. In those early days, someone figured out how to write an app that enabled the emoji keyboard in other languages, and due to demand (which I'm assuming was mostly from 12-to-14-year-olds), Apple eventually opened it up to everyone. At which point, hundreds of millions of people with sane written languages that use compact alphabets decided emoji were cute, and that they had to use them as much as possible.

Like yourself, I'm a bit of a curmudgeon about the whole Emoji thing. I can understand why the Japanese needed to invent it, as their writing system is horrendous. I don't tend to directly use it myself, preferring to use old-style emoticons in personal correspondence; however, at this point most e-mail and chat systems will "upgrade" typed emoticons to emoji.

So there you go. A brief history of emoji.


Comment Depends. (Score 2) 331

I've been a developer on some pretty damn big projects. The kind of projects used by Fortune 500 companies -- everything from end-user facing applications all the way down to low-level infrastructure projects.

If there's one thing I've noticed about all of these large projects over the years, it's that there is rarely ever only one programming language in use. Web apps will use JavaScript on the front end and one or more languages on the back end. Large-scale C/C++ apps will have a variety of scripts surrounding them. Every project needs an installer, some form of scripting for the build process, deployment, automated QA, and (frequently) database management. There may even be a mobile app attached to the project. I've had to switch between C/C++, Bash scripting, Java (with JNI), SQL, and REXX, all in the same project.

The point being, if you work on a large enough project, and aren't a junior developer, you're probably switching between a bunch of different languages already. Those languages are probably fairly stable (i.e., you won't often see a massive project switch from Java to C#), although I've certainly introduced new languages and processes to big projects to make "dumb" processes smarter. The ability to do that, however, often comes when you get to a point in your career where you can specify and/or contribute to significant architectural changes.

I've also been fortunate enough to work at a few places where you can spend 10% of your time working on personal interest projects. If you're fortunate enough to be in such an organization, this is a great time to try out new languages that interest you. If not, find (or start) a project in the interesting language of your choice, and work on it in your own time. If you make it Open Source, and put it on GitHub or the like, you can include it as experience on a resume.


Comment Re:Slow police response (Score 1) 1718

There are lose-lose situations. But someone who is actually worried about self defense isn't whipping their gun out as soon as they hear a shot. They are going to take cover and assess who the shooter is before drawing a weapon.

Which in this specific case wouldn't have been of any help, which was my point. From across a dark club with dance-floor lighting, filled with panicking people, you're not going to be able to assess squat. Indeed, reports now have it that two security guards on site did indeed have guns on them, and it helped them not one whit. And the shooter in this case doesn't have the same consideration for the safety of others that you do; the scenario is already lopsided in their favour by the facts that a) they already intend to kill as many people as possible, and b) they may not intend to get out of the situation alive to begin with.


Comment Re:Slow police response (Score 1) 1718

Sorry to break it to you, but sometimes there are lose-lose situations.

Sorry to break it to YOU, but abject helplessness is an idiotic survival strategy. If you can't return fire, you're done for. If you can return fire, your chances of stopping the attack go from nil to non-nil.

Evidence from this very incident proves you wrong. Nobody fired back, and yet many people survived, many with no injuries at all. That's certainly not a mathematical definition of 'nil'.


Comment Re:Slow police response (Score 2, Insightful) 1718

Gosh, you're right. Cowering on the floor and hoping that the attacker will leave you alone is a much better plan. What was I thinking?


Sorry to break it to you, but sometimes there are lose-lose situations. In this situation, your idea is about as useful as deciding that tossing grenades into the crowd is a better idea than "cowering on the floor".

Real life isn't a cowboy movie. The good guys don't shoot from the hip, their shots don't always land true, and their bullets don't disappear into the ether without repercussions. They don't always live to go home afterwards, or live out their lives with a clear conscience about what they did. Sometimes hiding is the best thing you can do to survive.

Someone who thinks they're going to be a hero by blasting away in a crowded place to down the bad guy is no hero. Ninety-nine times out of a hundred, they're simply an added danger to themselves and the people around them.


Comment Re:Slow police response (Score 2, Insightful) 1718

I'd say that shows very clearly that depending on the cops for protection is a losing strategy. If you want to protect yourself, your friends and loved ones, and innocent people around you, you should carry at all times.


So you find yourself in a nightclub one day. It's dark, there are flashing lights on the dance floor, and it's packed to the gills with revellers.

Someone pops in the front door with an AR-15 and starts mowing people down. There are roughly 50-100 people between you and the shooter. You have your trusty Glock 17 in its holster. Panicked people are shoving towards you as more people closer to the shooter go down.

Given the above, how many shots do you get off before you're dead? How many bystanders do you take down before that AR-15 is trained your way, when your first few shots miss because you're being jostled by panicking people and shooting in a still-dark place?

A gun battle in a crowded, enclosed space is just stupid. Bullets frequently don't go where you expect them to go. And a handgun against something like an AR-15 is suicide.

(Not a gun owner, but a decade ago I did have a job where I was trained to use and had to carry a C7 assault rifle)


Comment Re:IPv6 is a failed technology (Score 4, Informative) 112

Since you consider yourself an expert, would you care to explain why you think that IPv6 is especially routable?

Sure. There are a lot of things that will make IPv6 easier to route:

  • - Simplified packet processing: there are a variety of features in the IPv6 packet header that simplify processing by routers. Included here are:
    • - Fixed header size: unlike IPv4, IPv6 has a fixed header size of 40 octets, whereas IPv4 headers can vary between 20 and 60 octets,
    • - Lack of header checksum: IPv6 has no header checksum (thus removing the need to either compute or verify one). This is actually pretty big, as under IPv4 each router hop needs to recompute the checksum when it decrements the TTL in order for the header to remain valid,
    • - TTL replaced by Hop Limit: this one is a bit complex. In IPv4, Time-to-Live is specified in the header as the total number of seconds the packet should be routed before it is dropped. This is tricky to compute, so even in IPv4 many nodes simply decrement it by one regardless of how long it has taken to process. In IPv6, this is changed to be a straight hop count; the value in the header basically specifies how many times a packet can hit a router before it is dropped.
    • - Gets rid of unused fields: IPv6 gets rid of a lot of header fields present in IPv4, such as the IHL, DSCP, ECN, and everything related to fragmentation.
    • - Lack of fragmentation: IPv6 routers never fragment packets (a host that needs fragmentation handles it end-to-end via an extension header). Routers don't have to fragment or reassemble anything, which can mean fewer overall packets, and also means the router doesn't have to parse or generate a pile of fragmentation fields for the packets being sent/received.
  • - Traffic Class header field: IPv6 has a field that can be used to differentiate services, and can be used for QoS, allowing the router to more easily prioritize and arrange traffic.
  • - Flow labelling: IPv6 has a header field for flow labelling, which can be used, for example, to keep the packets of a flow on a stable route so they aren't received out of order at the destination. This is intended to make streaming data (such as video) more stable, and can replace custom heuristic algorithms at the router layer with something much simpler,
  • - Jumbogram support: IPv6 packets can be up to (2**32)-1 octets in size (1 byte less than 4GB). While not practical on the public Internet today, bigger packets mean fewer (albeit bigger) packets that need routing,
  • - CIDR and smarter address allocation: CIDR was invented for IPv4, of course, but IPv4 didn't use CIDR until about ten years after Flag Day. Pre-CIDR address allocations were ad hoc; address blocks were classful (A, B, C). Many of these classful allocations still exist, but because of the way they were assigned, it was (and is) difficult to aggregate these routes. IPv6 came about long after these lessons were learned the hard way, so the IANA is being much smarter about which addresses are allocated where in order to better aggregate routes. Thus, a given /32 will be doled out only to a single RIR, which can break it up into smaller units for LIRs, to eventually be broken into /48s, /56s, and /64s for destination routers. Modern IPv4 allocation works this way too, but with IPv6's much bigger address space (and the lack of legacy pre-CIDR allocations) and smarter allocation policies in place, route aggregation keeps the tables far saner than the current mess that is the IPv4 global routing table. From a processing perspective, this means next-hop lookups should be significantly quicker and easier. IPv4 currently has over 610,000 prefixes -- way more than should be needed. This is partly because, as addresses have run out, large CIDR blocks have been broken up into smaller blocks allocated to different routing regions, requiring more prefixes.
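To make the fixed-header point concrete, here's a rough Python sketch that packs an IPv6 fixed header per the layout above (the `ipv6_header` helper is my own illustrative construction, not a real library API). Notice there's no checksum to recompute per hop and no fragmentation fields at all:

```python
import struct

def ipv6_header(payload_len, next_header, hop_limit, src, dst,
                traffic_class=0, flow_label=0):
    """Pack the 40-octet IPv6 fixed header (illustrative sketch)."""
    # First 32 bits: version (always 6), traffic class, flow label
    vtf = (6 << 28) | (traffic_class << 20) | flow_label
    # Only 8 fields total: no checksum, no IHL, no fragmentation fields
    return struct.pack("!IHBB16s16s", vtf, payload_len, next_header,
                       hop_limit, src, dst)

# Dummy all-zero addresses just to demonstrate the size
hdr = ipv6_header(1280, 17, 64, bytes(16), bytes(16))
print(len(hdr))  # always exactly 40 octets, unlike IPv4's 20-60
```

A router can therefore parse every IPv6 header with fixed offsets and skip the checksum arithmetic entirely; the Hop Limit byte is the only thing it touches in transit.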

Second thing is, have you seen any IPv4 successor proposals? Link please.

I've seen a few, although I don't have links for all of them. The most significant example that has come up is IPv4.1, one of a class of proposals that basically suffer from the same "problems" IPv6 has. In particular, it changes the number of octets in the address from four to five. This breaks IPv4; no existing IPv4 application will actually work on it. It also doesn't fix any of the existing IPv4 routing problems, either in terms of route aggregation or processing power required per packet. It necessitates new protocol stacks at every layer to work. They "claim" backward compatibility by allowing IPv4 addresses to be encapsulated in the packet by setting the high-order address octets to zero; however, you can do the same thing with IPv6 (just with more zero octets; this was actually proposed in RFC 4291, but has been dropped/ignored in recent years as the scheme is highly problematic). In terms of aggregation, it basically inherits the mess that is the existing IPv4 global routing table, and then adds even more routes to it. It also ignores that many processors use fewer cycles to read an even number of words than an odd number; five octets per address can actually be harder for some hardware to handle. I also find it funny that they call it "IPv4.1" when the Version field is still four bits long; they don't even propose what to set it to (0100, 0101, and 0110 are already taken). Thus, it has all of the problems of both IPv4 and IPv6, without any of the benefits.

I've also seen a proposal, called IPv4+, that tries to maintain the integrity of the basic IPv4 packet structure by adding extra addressing bits to the end of the packet header, in the options field. Thus, the relative offsets of the source and destination addresses stay the same (unlike in IPv4.1, where the offset to the destination address differs by one octet). The problem with this scheme is that the addresses are now broken into two parts in different places in the packet header, requiring more effort on the router end to generate, parse, and reconstitute. Again, it has all the faults of the existing IPv4 global routing table, adds to them, and doesn't provide any real backward compatibility (an IPv4-only router will parse the options data incorrectly, producing unexpected results at best; the packet certainly won't reach its destination).

Lastly, there are proposals such as this one, which can be categorized as "I've come up with an amazing solution that solves every problem everywhere -- I'll tell you about it later!"...and later simply never comes. Oh well.

Effectively, every time someone comes up with a "backward compatible" solution, all they generally try to do is add more addressing bits. They don't bother to consider how packets will be routed, or how their proposal will affect the global routing tables. They also ignore how fast you can actually process a packet (when you're dealing with 3+ million packets per second, even a small sub-millisecond addition to per-packet processing time adds up very quickly). And they all get decimated by people who are experts in this field.

There is a whole lot more broken about "the Internet" under IPv4 than just the number of bits in the address. Routing is very key here, and any proposal that ignores that (and so far, they all have) is doomed to fail. The design of IPv6 involved a lot more than just adding more addressing bits; routability was a key factor, with lessons learned from all of the complex, crufty code and overly huge global routing tables currently needed to keep IPv4 going.
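The aggregation argument is easy to demonstrate with Python's ipaddress module (the prefixes below are made-up examples from the 2001:db8::/32 documentation range): contiguous allocations carved from one block collapse back into a single routing announcement.

```python
import ipaddress

# Hypothetical adjacent allocations carved out of one RIR block
routes = [
    ipaddress.ip_network("2001:db8::/34"),
    ipaddress.ip_network("2001:db8:4000::/34"),
    ipaddress.ip_network("2001:db8:8000::/33"),
]

# Contiguous prefixes collapse into one: a single routing-table entry
# instead of three separate announcements
collapsed = list(ipaddress.collapse_addresses(routes))
print(collapsed)  # [IPv6Network('2001:db8::/32')]
```

With IPv4's scattered legacy allocations, the equivalent blocks usually aren't contiguous, so they can't collapse like this -- which is a big part of why the IPv4 global table has ballooned past 610,000 prefixes.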


Comment Re:IPv6 is a failed technology (Score 2) 112

You seem to be unclear on the definition of backward compatibility. This means that the old protocol is a subset of the new one. There are countless examples where protocol backward compatibility has been achieved in a useful way. Unfortunately, IPv6 is not one of them.

What everyone who thinks they can create a backwards-compatible IPvX always forgets is that addressing is about more than address bits: it's about routability.

I have yet to see a single IPv4 successor proposal that features backwards compatibility and is actually routable. One of the major problems on the Internet today is that the routing system is a complete mess. And every "backward compatible" IPv4 successor people like you have proposed only makes the situation 100 times worse.

IPv6 makes routing significantly easier. Routing an IPv6 packet requires less processing overhead, permitting routers to be much more efficient.

Please leave protocol design to the experts.


Comment Re:64bit version?? (Score 2) 359

Sure that would mean using more memory?

Yes, but how much depends on the application. I haven't found a good scholarly reference on average memory-use increase for 64-bit applications; however, some rough worst-case scenarios I've found seem to indicate that a 20% memory increase is considered the high end. Most applications will see much less of an increase.

On the flip side, 64-bit applications get a lot more registers available to them in x86_64 mode -- eight additional general-purpose registers. Thus, a compiler can better optimize applications to squeeze more performance out of the processor, by loading more data into registers (or by not having to swap data in and out of registers as frequently as in 32-bit mode).

Most consider this a good trade off. Extra RAM is easy to come by when compared to extra processing speed.


Comment Re:Campaign season (Score 5, Insightful) 607

Yeah I think this is by far the biggest "douche vs turd" election I've ever witnessed, and I can't even fathom how it could possibly get even worse than this. Seriously, this year politics in America has probably hit rock bottom.

If I may speak for a second on behalf of everyone in the rest of the world...

America, you have just shy of 325 million residents. I don't know how many of those are natural-born citizens eligible to run for US President, but I assume the percentage is fairly high. Let's say at least 275 million people. How is it that, from such a huge pool, these are the best people you could come up with???

You guys really need to dig deeper for political talent. We in the outside world are getting worried about you if the current crop of clowns is the best you can find!

