
Comment Print... (Score 1) 333

... is considered to be rough at around 150dpi, ok at around 300dpi, good at around 600dpi, and anything at 1200dpi or higher is considered very fine print and is usually reserved for art prints, etc.

Of course, print dots and pixels aren't exactly the same, and a direct comparison is hard - mostly because print dots don't sit on a regular grid the way pixels do. Comparing print dots to subpixels would make more sense, but is even harder.

So, assuming that pixels and print dots are equivalent, 3200x1800 on a 13" 16:9 screen means the screen has something like a sqrt(3200*3200 + 1800*1800)/13 = 282ppi resolution (simplified maths).
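
For anyone who wants to double-check the arithmetic, here it is as a trivial C++ snippet - nothing in it beyond the numbers already quoted above:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double w = 3200.0, h = 1800.0;  // resolution in pixels
        const double diag_inches = 13.0;      // screen diagonal
        // Diagonal pixel count divided by diagonal length gives ppi.
        std::printf("%.0f ppi\n", std::sqrt(w * w + h * h) / diag_inches);
        return 0;
    }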

So we're just about moving from "rough" into "ok" territory, by some 30+ year old standards. To me, that's not "good enough", but YMMV.

Comment Put money where his mouth is... (Score 2) 346

... is exactly what Mark Shuttleworth has been doing for almost a decade now. If that amount of effort hasn't helped the Ubuntu team initiate some drastic changes in otherwise community-run projects, then I very much understand why he's fed up and wants things to Just Work (TM).

This is irrespective of who did what, and whether they did the right thing, by the way. There are always multiple people at fault in any conflict. I'm talking purely about the desire to stop wasting time and money on something for which there is overwhelming evidence that it doesn't work. If I had been in his position, I would have started my own Ubuntu-run competitor projects a lot earlier, but then I'm not a patient man.

So more power to him, I say.

Comment TL;DR (Score 1) 926

The TL;DR version is the rather less spectacular "The problem with diets that are heavy in meat, fat or sugar is not solely that they pack a lot of calories into food; it is that they alter the biochemistry of fat storage and fat expenditure, tilting the body’s system in favour of fat storage."

Yes, we've known that for a while.

Comment I've posted this before... (Score 2) 352

... I will post it again. This relates to the page views GoSquared measures, NOT to the entire internet.

GoSquared self-describes as "trusted by 30,000 businesses", which is not a small number, but it doesn't really compare to the number of businesses with websites out there. http://www.whois.sc/internet-statistics/ says there are about 150 million domains (in the most popular gTLDs). Assuming domains are a decent measure of websites (they aren't, but let's go with it for now), GoSquared measures at best 0.02% of the internet.

That means we only know that 40% of that 0.02% was affected, which is some 0.008% of the total internet.

But wait, we're talking page views, not websites. http://www.worldwidewebsize.com/ estimates 3.76 billion pages (indexed, not existing). If we assume those 3.76 billion pages are spread evenly across those 150 million domains, we get some 25 pages per domain.

It now depends on the type of businesses GoSquared represents: are they SMEs (often with no more than 1-5 pages on their domain) or large businesses (often with hundreds of pages)? With 30,000 customers, my guess would be that they represent more SMEs than large businesses, meaning the total number of pages they represent is actually less than 30,000 x 25. Which also means the 0.008% share of total page views would drop even further.
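
If you want to redo the back-of-envelope maths yourself, here's a small C++ sketch; all inputs are the rough figures quoted above, so treat the output with the same grain of salt:

    #include <cstdio>

    int main() {
        const double customers = 30000.0;  // "trusted by 30,000 businesses"
        const double domains   = 150e6;    // ~150 million domains
        const double pages     = 3.76e9;   // ~3.76 billion indexed pages
        const double drop      = 0.40;     // the reported 40% drop

        const double share = customers / domains;  // GoSquared's slice of the net
        std::printf("share of internet: %.2f%%\n", share * 100);        // 0.02%
        std::printf("affected overall:  %.3f%%\n", share * drop * 100); // 0.008%
        std::printf("pages per domain:  %.0f\n", pages / domains);      // ~25
        return 0;
    }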

Lastly, page views do not equal traffic. Most traffic these days is generated by streaming media, not pages.

All these numbers are to be taken with a grain of salt, of course, but they still seem to be in a better range than the ones GoSquared published. Never trust statistics you didn't forge yourself.

Comment Re:Update from the author (Score 1) 331

Yes, Android is somewhat fragmented. No, that doesn't play much of a role here.

Android is no more fragmented than, say, the desktop, and it provides better support for various screen sizes and aspect ratios than most other development environments.

The fragmentation of Android starts to matter once you want to support esoteric features, or use the latest APIs. For UI concerns, Google provides backwards-compatibility libraries. For audio/video processing, Android's APIs haven't changed since about version 1.5. From what I can see in this case, it shouldn't matter much at all.

None of which means that supporting all these different device configurations is going to be *fast*, and time does have a cost. Development-wise, though, you can best reduce that cost by considering the different screen sizes right from the start. A sadly common mistake I see people repeat over and over is to design for one screen size first, and then try to adapt that design to different screen sizes or aspect ratios. If you go down that path, you're likely to end up spending more time than if you design with an adaptive layout in mind from the start.

If you need help with that, let me know :)

Comment He didn't build it in 30 days. (Score 3, Insightful) 266

"it took him only 30 days to build and launch a basic open source office" and "The suite was released as an alpha version" mean's he's got the 80 (visible) percent done that take 20 percent of the time.

http://en.wikipedia.org/wiki/Pareto_principle

I wish people wouldn't get headlines with this sort of claim. It helps push the entire profession towards cutting corners in order to underbid each other, which does not bode well for the quality of future software.

Speak of prototyping instead. That's much closer to the truth.

Comment The only real argument against reviews is... (Score 1) 495

... time. And it is a good argument in many cases. Code reviews are great, but they only make sense if you provide good feedback from your review, and if the original author has time to revisit their code and make changes according to that feedback. I've seen many work environments where such things were considered too expensive (in terms of time).

Comment I do this every day... (Score 1) 213

I mean, he ends up reading the subject of every email (check), and scanning through his spam to see if there are false positives (check). My ham volume is about as large as his, and my spam volume is significantly lower (ca. 30%) because I've got a good spam filter.

I don't see what the big deal is.

Submission + - Carry-On Luggage with detachable Laptop Bag? (not.available)

unwesen writes: I'm going on more and more short business trips, and am sick of checking in luggage when a carry-on bag holds everything I need. That's also convenient, as some airlines do not allow you to carry an extra laptop bag. My current solution is less than ideal, though: I basically squash my laptop bag into a duffel bag of the right dimensions. Surely there must be a better solution? My ideal would be a fairly sturdy carry-on backpack or trolley with a detachable laptop case just large enough to hold a 15" laptop, a magazine or two and a few pens. My searches haven't led to much; I've found one or two potential candidates, but they all seem to have serious drawbacks. Surely I can't be alone amongst the /. crowd with my requirements; do you have any suggestions? There seems to be plenty of luggage with a laptop compartment, but you can't take that to the office on its own...

Comment Re:Big difference (Score 1) 1486

And how is it relevant to the topic that people have faith in some things, but take a scientific approach to trying to understand others? The two are not mutually exclusive. It's not even mutually exclusive for someone to apply both approaches to the same thing at different times.

Trying to understand science in relation to individual faith is nonsensical; science works at the scope of humanity as a whole: many individuals being able to double-check many other individuals' claims, not one individual proving their own.

Comment That post makes my brain hurt with its stupidity (Score 2) 1486

Let's make it simple: Science is not what scientific disciplines have found out. Science is a set of methods to further human knowledge.

To confuse the two is to misunderstand science so thoroughly that it pains me.

More precisely, science is a set of tools to guard against our individual fallacies (such as blind faith) contaminating the species' body of knowledge, by enabling each and every person to apply these tools and validate or disprove every piece of knowledge in existence. In other words, it doesn't bloody matter if you, the individual, believe in the tooth fairy when you can prove P != NP. Nor does it matter when you, the individual, don't even know what there is to prove, what the problem is. What matters is that someone else can verify or disprove either your proof of P != NP, or your belief in the tooth fairy. Or both.

To be fair, it's terrifying how people will accept an "expert's opinion" on blind faith. The answer to that problem, though, is to teach the scientific process, so that people can make better choices about what to believe. The answer most certainly is not to suggest that science is little more than faith in a different set of beliefs.

Comment Re:Please enlighten me... (Score 1) 755

Apologies accepted, and my apologies to you. In my experience, trolls wouldn't reply to this sort of bait; I must have been wrong.

I don't know what sort of point I was supposed to have missed, really, given that I started this sub-thread with an example; if anything - and yes, I'm nitpicking now - everyone else in this sub-thread is missing my point, pretty much by definition.

So let's get back to the point I was trying to make: efficient techniques for parallel programming are all about how you subdivide your data. This is completely language-independent. For a totally high-level approach to subdividing data (mostly) efficiently, look to MapReduce.

What you would do with MapReduce in a simple case on a single multi-core machine would be to divide your data set into N segments, where N is the number of threads you want to run in parallel. Take MapReduce to the level of networked machines, and the concept remains the same, except that you've got M machines to each process a chunk of the overall data (and maybe in addition, on each machine, you further subdivide the data into N segments).
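
As a concrete (if toy) illustration of that structure, here's a minimal C++ sketch that splits a data set into N segments and sums each segment on its own thread; the summing is just a stand-in for whatever the real per-segment work would be:

    #include <cstddef>
    #include <numeric>
    #include <thread>
    #include <vector>

    // "Map": each thread processes its own segment of the data.
    // "Reduce": the per-segment results are combined at the end.
    long parallel_sum(const std::vector<int>& data, unsigned n_threads) {
        std::vector<long> partial(n_threads, 0);
        std::vector<std::thread> workers;
        const std::size_t chunk = data.size() / n_threads;

        for (unsigned i = 0; i < n_threads; ++i) {
            const std::size_t begin = i * chunk;
            // The last thread also picks up the remainder.
            const std::size_t end =
                (i + 1 == n_threads) ? data.size() : begin + chunk;
            workers.emplace_back([&data, &partial, i, begin, end] {
                partial[i] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0L);
            });
        }
        for (auto& w : workers) w.join();
        return std::accumulate(partial.begin(), partial.end(), 0L);
    }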

The problem with talking about MapReduce in this context is that at this high level of abstraction, you might as well use OOP, too. It's a good example for illustrating the data partitioning problems that exist around writing parallel code, but not a good example for how OOP might stand in the way of efficient parallel execution.

I figured the best example would be one that takes the subdivision problem to the lowest level, which invariably means breaking down the problem to how efficiently a CPU's cache protocol handles data, based on how it's subdivided in memory. Using C++-ish code as an example made sense, as C++ is a language where you *can* influence the data layout, and where you *can* lay out data in a fashion that's typical to OOP.

As far as I am concerned, OOP is inherently great at a number of things, and at first glance none of them impact parallelism in any way. All of these things - encapsulation, access control, inheritance, etc. - have a side-effect, though, because the very concept requires you to lump data together that's conceptually related, whether or not it's used together (You might feel like rephrasing that as "the very concept IS to lump data together", but that would ignore some OOP features). In that sense, OOP is inherently about preferring some data layout in memory over another, whether you think about it in that way or not.

The downside of this data layout is that it makes it harder to split one set of relevant data off from another, that is, it makes *efficient* subdivision more difficult. Whether or not that's the case of course depends on the problem you're trying to solve; the assumption must be that you want to solve a problem involving one subset of your objects' data members in parallel to another problem involving a disjoint subset of data members.
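
To make that concrete with a made-up example (a Particle type I'm inventing here for illustration, not the code from earlier in the thread): suppose a physics pass only touches positions and a rendering pass only touches colours.

    #include <vector>

    // OOP-ish layout ("array of structs"): conceptually related data is
    // lumped into one object, so positions and colours interleave in
    // memory. Two threads working on disjoint members still drag each
    // other's data through the cache.
    struct Particle {
        float x, y, z;           // touched only by the physics pass
        unsigned char r, g, b;   // touched only by the rendering pass
    };
    using ParticlesAoS = std::vector<Particle>;

    // Split layout ("struct of arrays"): each disjoint subset is its own
    // contiguous block, so it can be partitioned and cached independently.
    struct ParticlesSoA {
        std::vector<float> x, y, z;
        std::vector<unsigned char> r, g, b;
    };

With the second layout, subdividing the positions for parallel work is a matter of slicing contiguous arrays; with the first, every slice inevitably drags the colour bytes along with it.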

You're arguing that whether or not data is, in fact, laid out in memory in a "bad" way depends on the language or compiler. To the degree that we're speaking about C or C++, that's just not the case - but of course clever enough compilers for other OOP languages might exist. I doubt it, though I would love to be proven wrong, with code to read. The reason I doubt it is primarily that it's pretty damn hard without marking your data as "please parallelize", which none of the programming languages I know or use provide a (portable, standardized) mechanism for, OOP or not. Someone mentioned Haskell here; I don't know the language well enough to say whether such a mechanism exists.

Does this explain things better?
