
Comment Re:Seemed pretty obvious this was the case (Score 1) 311

FWIW, I agree that it may not be entirely "real-world accurate". It does presuppose that whoever is attempting to crack your password already knows something about its structure (such as it being a dictionary word followed by a repeating sequence, as in the original "Ten!!!!!!!!!!!" example). However, if we take this at face value, it does give us a better worst-case scenario for password strength than estimates which simply presume a brute-force approach.

That is, given someone looking over your shoulder (but without sufficient accuracy to see exactly what you're typing) and then applying computational tools, how quickly could your password be cracked? That's certainly an interesting question to have the answer to, and if your password is resistant to a known-pattern-based cracking approach, it's going to defeat any attempt to purely brute-force it as well.

Yaz

Comment Re: What the heck? (Score 1) 354

OK, I don't get it either. If somebody is using GPL code and refuses to issue source, it's cut and dried, guilty. But I can't make out whether this is what is going on.

Nit-pick here, but using GPL code doesn't require you to issue source, even if you've made modifications. It's not until you distribute said modified code that you need to release source (and even then, you only technically have to provide source to those you've distributed binaries to, and not just anyone who happens to request it).

Thus if I take some GPL code, modify it, and use it in an internal process that isn't shared with anyone, there is no requirement for me to make sources available. But as soon as I share the artifacts with anyone else, they gain the right to my source modifications, and everything those rights entail.

How that relates to this case, I have no idea.

Yaz

Comment Re:Seemed pretty obvious this was the case (Score 4, Informative) 311

A strong password CAN be easily remembered. How about remembering 10 and 11?

"Ten!!!!!!!!!!!"

That's 10 and eleven "!" characters.

There are a number of ways to calculate password effectiveness. If you assume zero knowledge of the password characteristics, then the 290 million years the website you linked to calculated may be accurate.

Hackers, however, have typically found that certain patterns are used by humans more frequently than others, and instead of brute-forcing the password from the beginning (following UTF-8 order: " ", "!", '"', and so on), you can instead skip a significant part of the overall password space by only testing these common patterns.

I prefer this tool, which evaluates password entropy. The figures it comes up with do tend to presume that something about the structure of the password is known (i.e., in your example, that it is a word followed by a repeating symbol), but IMO this is a good figure to base your password decisions on, as it represents a worst-case scenario rather than the best-case scenario the tool you linked presumes.

Using that tooling instead, your password's strength and estimated crack time are as follows:

  • password: Ten!!!!!!!!!!!
  • entropy: 18.669
  • crack time (seconds): 20.836
  • crack time (display): instant
  • score from 0 to 4: 0
  • calculation time (ms): 3

FWIW, (and purely for the sake of comparison) one of the passwords I use online has, according to this tool, an entropy of 61.819 and a crack time of 203355820622500.06s (about 6.4 million years). And yes, it's something I both change often and have memorized.
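(If you want to sanity-check those figures yourself, the crack time can be derived from the entropy value alone. A minimal Java sketch, assuming the tool models an offline attack at roughly 10,000 guesses per second and that an attacker finds the password after searching half the space on average:)

    public class CrackTime {
        // Assumed attack model: ~10,000 guesses/second, with the password
        // found on average after searching half the space.
        static final double GUESSES_PER_SECOND = 10_000.0;

        static double secondsToCrack(double entropyBits) {
            return 0.5 * Math.pow(2, entropyBits) / GUESSES_PER_SECOND;
        }

        public static void main(String[] args) {
            // entropy 18.669 -> ~20.8 seconds, matching the list above
            System.out.println(secondsToCrack(18.669));
            // entropy 61.819 -> ~2.0e14 seconds (roughly 6.4 million years)
            System.out.println(secondsToCrack(61.819));
        }
    }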

Yaz

Comment Edit much? (Score 4, Funny) 613

"Although there are those who think the systems debate has been decided in favour of systems, the exceedingly loud protests on message boards, forums, and the posts I wrote over the past two weeks would indicate otherwise.

"Although there are those who think bacon is tasty, a loud protests I've posted recently on message boards, forums, and here on /. over the past two weeks would indicate otherwise."

(Yeah, I've been here long enough to know that nobody at /. does any actual editing. Still, can I make fun of the submitter for making it sound like (s)he's the one who is going around and posting all the loud protests, and then trying to make it seem like some sort of movement?)

Yaz

Comment Re:Manipulated by apple (Score -1) 132

Well that sounds truthy, but I don't buy it especially since you're obviously a Mac nut (your email is mac.com). I'm hitting 40 and my big fingers and crappy eyes have a tough time navigating my 4.3" screen so the almost 6" my Note 3 has, is an outstanding upgrade. Could the possibility be that people want more phone choices than one? Nah, must be because droids are that shitty.

I have at least seven different e-mail addresses on different domains, including gmail.com. Does that also qualify me to be a Google nut?

Android phones (particularly on the high end) are big for exactly the reasons I described: they were big because of the technical obstacles to making them small. If, as a by-product, that means a phone that works for the fat-fingered, four-eyed brigade, well, I have no problem with that. But it's somewhat silly to be overly proud of carrying around a large phone when the reasons it's large come down to the technical limitations of making it small. That would be akin to claiming your portable record player is way superior to an iPod because it's bigger. This is still a technology site, isn't it? (I know -- hard to tell sometimes these days.)

Oh, and FWIW, I don't own or carry a phone of any kind, so I don't have a horse in the race either way. I'm glad you found the right phone for you, but I wouldn't go around bragging about your phone being generally better just because it's bigger, particularly when the truth of the matter is that it's primarily bigger because it requires more hardware to overcome software issues.

Yaz

Comment Re:Manipulated by apple (Score -1) 132

Apple PR again. In light of good press from Microsoft and android simply having more apps. IOS is falling behind in both quality and quantity. Posted from a 5.5" phone

Let's try to remember for a moment why Android phones were bigger in the first place.

Android apps written against Dalvik are garbage-collected; however, the garbage collection process on a phone with a typical amount of memory (a) requires frequent collection runs, and (b) causes pauses. To alleviate this, Android device manufacturers started popping multi-core CPUs into their devices simply to be able to handle garbage collection in the background, making their devices appear closer to real-time performance and reducing UI "hiccoughs". Early versions of these processors in particular were more power hungry, requiring a larger battery to meet the same per-charge runtime as the iPhone. This in turn required a larger overall package.
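(If you've never seen these pauses directly, here's a quick-and-dirty way to observe them -- a sketch only, and nothing Dalvik-specific: a watcher thread that sleeps in small increments and reports whenever it oversleeps, which under heavy heap churn is usually a stop-the-world collection freezing every thread, including this one.)

    // Minimal GC-pause detector sketch: sleep in small increments and
    // report whenever we oversleep significantly.
    public class PauseWatcher implements Runnable {
        public void run() {
            long last = System.nanoTime();
            while (true) {
                try { Thread.sleep(10); } catch (InterruptedException e) { return; }
                long now = System.nanoTime();
                long overshootMs = (now - last) / 1_000_000 - 10;
                if (overshootMs > 50) {
                    System.err.println("Paused ~" + overshootMs + " ms");
                }
                last = now;
            }
        }
    }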

As such, the Android phones aren't larger because larger is better. They're larger because they couldn't compete with the iPhone on performance or battery life if they were the same size.

Keep that in mind the next time you want to brag about your giant phone :).

Yaz

Comment Re:Lame.. (Score 1) 158

For example, this week I saw a video of a beheading. Now after watching it I probably wish that somebody had filtered that for me.

If it makes you feel any better, unless you watched a completely different video than I did (something other than what has been in the news recently), you didn't see a beheading. Did you see the blood spurt/drain out as the carotid/jugular were severed? Did you see the disarticulation of the spine? Those weren't in any version of the video I saw. It cuts from a guy making a sawing motion with a knife in front of a guy's throat to a picture of the disembodied head sitting atop the body.

That's not to say that the guy is any less dead, or that it was any less horrific. But there was a lot of somewhat creative editing going on in that video. Shadows seem to shift at different points relative to the background, indicating that some of the later parts may have been recorded an hour or two after the earlier parts. There is some analysis that seems to indicate the "terrorist" may have been two different people at different points in the video. There are a lot of cuts, and quite a bit you don't see.

I'm not saying the video is a complete fake. The guy obviously suffered a horrific death, and the perpetrators need the full weight of the Western world's power brought down upon them. But don't beat yourself up about watching a beheading -- what was shown was both sad and shocking, but it left out the actual beheading part (again, unless there is some special uncut version out there I haven't heard about).

Yaz

Comment Re: Doing it wrong? (Score 1) 113

Sure, you could cobble together all the assets and code together in no time...

That depends on the game, which supports my thesis.

Strong AI is difficult to do, and can be the real differentiator between a great game and a cheap copycat. Likewise for a physics engine or a rendering engine.

If your game doesn't feature any form of AI, or is easily reproduced with off-the-shelf physics and/or rendering engines, then your game is probably trivial. And if it took you years to put together your trivial game when it only takes the next guy days to replicate it -- then, as I've said, you did something wrong.

Yaz

Comment Re:Doing it wrong? (Score 2) 113

Not necessarily. I don't particularly care about Flappy Bird, but let's look at Chess. Chess took centuries to develop, and almost anyone could reproduce it now.

Chess has evolved over time and wasn't the product of a single development team, so it's not exactly an apples-to-apples comparison. It took roughly 900 years of evolution for chess to take on its modern form, and there have been many variations of chess along the way (Wikipedia claims more than 2,000 published variants).

Early versions of chess weren't unplayable, in-development builds; they were proper, stand-alone games. You could think of modern chess as having been a "rip-off" of these earlier games. Several of its basic mechanics appeared in games that predated chess by centuries (pieces on an X-by-Y grid, for example, appeared in Ludus latrunculorum some 600 years before the earliest variants of chess). Indeed, if chess hadn't freely borrowed from the games that came before it, it wouldn't exist today.

As such, chess evolved in exactly the way this article is railing against. Over the years, people who had nothing to do with the original "developers" of the earliest chess forked their own versions with slightly different rule-sets, and those with rule-sets that provided for an improved game were adopted by others, who then adapted those rules with their own improvements. Without these "copy-cats", we'd be sitting down to play Chaturanga right now instead of chess.

Yaz

Comment Re:Doing it wrong? (Score 1) 113

Well, you mean as an Indie developer if I start from scratch, do the design, generate graphics, coding, testing, then it should still takes me as much time as some one who can simply download the app, and replicate without having to 'think' (or like in case of Android apps, just download the apk, decompile and open it up, grab the resources) and put out a clone. Interesting.

You have a fair comment, so I should clarify somewhat. I'm assuming that whoever does the copying is not only writing their own code, but also generating their own resources. If they're copying your resources, you have the ability to go after them for copyright infringement. That's not really a new thing in game development, and there is legal recourse (and yes, I know it's a shitty thing to have to go through, as it happened to me personally with someone who ripped off both code AND resources from an OSS game a friend of mine and I coded 8 years ago).

But the summary is talking about differing orders of magnitude here. If you've developed something that took years and someone is able to replicate your work (without stealing code or resources) in days, then yes -- I still submit you're doing something wrong.

Yaz

Comment Re:Doing it wrong? (Score 1) 113

Unless that "someone else" happens to be a game studio of 500 artists and 50 devs, in which case it makes sense that they can do it faster.

Personally, I've never known a team of that size to be able to ramp up development all that quickly. With that many devs, you'd probably wind up with a month of design meetings before any coding got started.

Yaz

Comment Doing it wrong? (Score 1) 113

While coming up with good game mechanics is important to a successful game, if it takes you years to develop a game, and someone else can copy it in weeks or days, then you're probably doing something seriously wrong. Either your game is too trivial, or you weren't a very good developer to start with.

Yaz

Comment Re:thanks for the info (Score 1) 371

You're welcome -- glad to help.

I should note that I'm assuming that with 16GB of RAM you're running a 64-bit OS. If you're running a 32-bit OS (with PAE to access all the extra memory), you're going to be more constrained. While an OS with PAE can access quite a lot of RAM, 32-bit processes are still limited to a 32-bit virtual address space (a maximum of 4GB addressable, and functionally less on some OSes depending on whether they map kernel memory into the process's address space -- Windows, I'm looking at you). On some OSes, Java also expects the heap to be completely contiguous (virtually, not physically), which may not seem like a problem until you run into a virus scanner or some such that injects a library into every single process ("DLL injection") at the 2GB mark or some other fixed virtual address below the maximum (yeah, I'm looking at Windows again; this one bit a customer pretty badly a few years back who for some odd reason still insisted on running 32-bit Windows servers). 64-bit OSes still support DLL injection, but the injection address is typically so high in the virtual address space (far beyond what you can physically install in a typical system) that it doesn't cause any problems.
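(A quick way to check which situation you're in from Java itself -- a sketch; note that "sun.arch.data.model" is a HotSpot-specific property and may be absent on other JVMs:)

    // Rough check for a 32-bit JVM and the heap ceiling that implies.
    public class BitnessCheck {
        public static void main(String[] args) {
            String model = System.getProperty("sun.arch.data.model", "unknown");
            long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
            System.out.println("JVM data model: " + model + "-bit");
            System.out.println("Max heap available to this JVM: " + maxHeapMb + " MB");
        }
    }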

The point being (before I go off into too long a rant): on the off chance you're still running a 32-bit OS and 32-bit Java (I'm reasonably sure you can run 64-bit Java on OS X 10.6, even when booting the 32-bit kernel), tuning may only take you so far for Java applications that are really memory hungry -- you'll still hit a wall well before even a quarter of your installed RAM is used. In that case, upgrading to a 64-bit OS and 64-bit Java is highly recommended, if possible.

(As an aside, this is actually one reason why Android handset makers were quick to jump to quad-core processors. When programming Android in Java, all of these memory and garbage collection issues arise in a package with less RAM and less processing power than a standard PC. The best way to keep applications responsive under GC in such a model is to do as much of the collection as you can in the background, on one or more independent cores, to minimise whole-application pauses. Contrast this with iOS, which uses retain/release (or better yet, Automatic Reference Counting, aka ARC), where memory management is either hand-coded by the developer or resolved at compile time (in the case of ARC), requiring no GC at all.)

Yaz

Comment Re:Pauses my 16 GB desktop working on 4K program (Score 1) 371

I use a few Java programs on my desktop, which has 16GB of RAM. One program I use is a little editor / mini-IDE for microcontrollers which have 4k of memory. While writing these 4K programs, Java will largely lock up the machine for 30 seconds, probably while it's doing GC.

Check your settings. For better or worse, Java doesn't run like other native programs (on most platforms, at least) -- the maximum heap space a program is allowed to use is fixed when the JVM starts (i.e., it won't grow dynamically as memory allocations are requested). On the system I'm sitting at right now, the default is just under 2GB (Win7, JDK 1.7.0_45, both 64-bit, 8GB RAM). No matter how much memory my application wants (or needs), this is the maximum that can be allocated for objects on the heap. Attempting to allocate new objects once this 2GB of heap space is exhausted won't use any free RAM available on the system; it will instead trigger garbage collection.
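(The ceiling is controlled with the standard -Xms/-Xmx flags; the values below are purely illustrative, and "MyApp" is a stand-in for your own main class:)

    # Start with a 512MB heap, allow it to grow to at most 4GB:
    java -Xms512m -Xmx4g MyApp

    # From inside the program, the current ceiling is reported by:
    # Runtime.getRuntime().maxMemory()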

Additionally, with Java 7 at least, there are seven different garbage collection algorithms. The default is the Parallel Scavenge collector, which is a "stop the world" collector. You might find you have fewer pauses on your system if you enable the Concurrent Mark/Sweep collector in conjunction with the parallel copying collector. Concurrent Mark/Sweep runs in the background and only stops all application threads if it can't keep up with demand.
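(Enabling that combination is just a matter of JVM flags -- these are the HotSpot spellings as of Java 7, and adding GC logging will confirm where the pauses actually come from:)

    # Concurrent mark/sweep for the old generation, with the parallel
    # copying ("ParNew") collector for the young generation:
    java -XX:+UseConcMarkSweepGC -XX:+UseParNewGC MyApp

    # GC logging, to see which collections are causing the pauses:
    java -verbose:gc -XX:+PrintGCDetails MyApp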

Tuning those should help with your specific performance issues. I completely recognise that none of this should be necessary in a well-written application; unfortunately, for all too many Java developers, "I don't need to worry about garbage collection" also seems to mean "I don't need to care about the lifecycle of my objects", which of course isn't true. There are a lot of badly written Java applications out there. In some ways, understanding how Java handles memory is actually harder than understanding memory allocation in a lower-level language like C (garbage collection makes things pretty complex), and all too often I run into Java developers who simply have no idea how the JVM manages these things; IDEs for embedded systems always seem to be particularly bad for this. That's not entirely Java's fault -- you can write memory-efficient Java applications -- it's just that too many Java developers (and frequently their managers) seem to think you can ignore memory issues because Java has garbage collection, as if it were some magical solution to memory allocation/deallocation.
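(For a concrete -- and admittedly contrived -- example of what "caring about the lifecycle of my objects" means, compare a loop that churns out garbage on every iteration with one that reuses a single buffer:)

    // Allocation-heavy: builds a new String on every pass through the
    // loop, generating constant work for the collector.
    String renderAll(java.util.List<String> items) {
        String out = "";
        for (String item : items) {
            out = out + item + "\n";   // new String each iteration
        }
        return out;
    }

    // GC-friendlier: one StringBuilder, reused across the whole loop.
    String renderAllReusing(java.util.List<String> items) {
        StringBuilder sb = new StringBuilder(items.size() * 16);
        for (String item : items) {
            sb.append(item).append('\n');
        }
        return sb.toString();
    }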

Yaz
