
Comment Re:Wait... what? (Score 1) 182

I guess I was being a bit snarky earlier.

For some things, I can see waiting to package it up at the post office. Priority / Express Mail envelopes are a big one. Since I do a fair bit of shipping via USPS, I actually go down to the PO ahead of time and pick up boxes for the things I'm going to ship, and do all the packaging at home. That's how I'm acutely aware that they often don't have what I need at a particular location, which further reinforces my desire to pack things up at home.

I also don't trust the tape-strips that are pre-applied to the packaging. I've had some that stuck like glue, and some that started peeling open a minute after packing up my boxes. So, I always end up reinforcing with some shipping tape.

I grant you, I'm probably not the typical postal customer, though. The more casual user of postal services will probably scratch their heads over many of the things you've pointed out on the USPS site. I've been using it for so long, it all feels pretty natural to me.

I imagine the "no first class stamps at USPS.com" thingy is a concession to Pitney Bowes, Stamps.Com and other vendors they've made deals with for various first class postage services. Otherwise, there'd be little reason for those other services to exist.

Comment Wait... what? (Score 5, Insightful) 182

You could argue that it's the user's responsibility to make sure their package fits into the box they select, but a user could reasonably assume that the whole point of entering the length, width and height is so that the USPS can recommend only those boxes that will hold the item. Remember, the user usually doesn't have these boxes in front of them at the time they're printing the label. They could end up selecting a box option, printing the label, taking it all the way to the post office along with their package, only to find out that the package doesn't fit into the box that they printed the label for, and that they have to wait in line anyway to pay for an alternate method.

Ah, you're one of those people who clog up the lobby boxing your stuff up at the post office, using the wrong tape (such as the tape meant to mark an Express package on something you're shipping Priority or First Class) and cutting in line to ask someone behind the desk for scissors.

You realize that the post office isn't a full-service pack-and-ship place, right? At least none of the ones I've been to around here are. You're supposed to have everything packed up and ready to go before you walk in the door. You also realize that your local PO probably doesn't stock all the sizes and shapes of shipping box the website describes, and that package weight is supposed to include the box, right?

That is, you're supposed to have boxed up your parcel by the time you get to this part of the form. The only thing missing should be the label.

Could be worse. You could be like the person I saw who tried to send a package wrapped in normal Christmas wrapping paper.... That was going to be a shredded nightmare on the other side.

Comment Re:If you don't want to upgrade your box (Score 1) 100

On some Linux distros, /tmp is a tmpfs volume, which is effectively a RAM disk; SunOS/Solaris does this too. Many files in /tmp live for very short periods and have no requirement to persist across a reboot, so building them in RAM makes sense. The filesystem can still get backed to disk via the swap partition.

The only other case I can think of where a RAM drive might make sense is if you have a set of files you need access to under tight deadlines, and the total corpus fits in RAM. Of course, if you have control over the application's implementation, you could also mmap and mlock those files to pin them in RAM. For example, in the bad old days of 4x CD burners with almost no buffer, loading the ISO into a RAM disk could help weak burning software keep up with its realtime deadlines. That is, if you had enough RAM to hold the ISO (it needed to be a smallish ISO, not a full 650MB one).
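
The mmap/mlock route is surprisingly little code. Here's a minimal sketch (my own, with error handling pared down; it assumes your RLIMIT_MEMLOCK limit allows locking the whole file):

    /* Pin one file's contents in RAM with mmap + mlock. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s FILE\n", argv[0]); return 1; }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the file, then fault it in and pin it so it can't page out. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        if (mlock(p, st.st_size) < 0) { perror("mlock"); return 1; }

        /* From here on, reads of p[0..st.st_size) are served from RAM. */
        printf("first byte: %d\n", st.st_size ? p[0] : -1);
        return 0;
    }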

Otherwise, RAM disks are usually a bad idea these days.

Comment LIDAR (Score 2) 73

LIDAR stands for Light Detection and Ranging, and it's done with lasers, typically in the near infrared. Why does the summary gloss LIDAR as "(light, radar)"? RADAR uses radio waves, not laser light.

(And yes, Mr. Pedantic, I realize radio waves and infrared light waves are both electromagnetic waves. But, our mechanisms for detecting things in the radar band vs. the infrared light band are quite different, so the distinction is meaningful.)

Comment Re:Core of the article (Score 1) 449

Eventual consistency means that the computer eventually computes the right answer if it's quiescent long enough. Intermediate values, though, are an approximation, which is often good enough.

One example that Paul McKenney gives is a distributed counter built out of per-CPU counters, with CPU-to-CPU events saying how much to update the total by. (Let's assume positive counts only.)

Each CPU will see update events from other CPUs in different orders, each saying how much to update the count by. All CPUs will eventually see all updates. So, the total seen by any given CPU might differ from the true total in the short run (and may not even be a technically valid total given the original sequence of events, since events get reordered), but eventually all of the counters will converge on the same total once updates stop pouring in. Also, the totals are still locally monotonic.

If you required all CPUs to see the same sequence of updates to the count, then you'd have to take locks and serialize memory accesses, which on a manycore system is an expensive operation that simply doesn't scale well. But if you relax the constraint to "eventual consistency" and "monotonic updates", then each core can have its local approximation that isn't too far from the real value, knowing that each core is no further from the true value than the backlog of events yet to arrive.

That's an extremely reasonable model for many types of data.
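
To make it concrete, here's roughly what such a split counter looks like in C11 (my own sketch, not McKenney's code; NCPU and the per-CPU indexing are stand-ins):

    #include <stdatomic.h>

    #define NCPU 64

    static _Atomic unsigned long counts[NCPU];   /* one slot per CPU */

    /* Each CPU updates only its own slot: no locks, no cross-CPU traffic. */
    void count_add(int cpu, unsigned long n)
    {
        atomic_fetch_add_explicit(&counts[cpu], n, memory_order_relaxed);
    }

    /* Readers sum all slots. The result is an approximation: slots may
     * change mid-sum, but with positive-only updates it is monotonic and
     * never further from the truth than the updates still in flight. */
    unsigned long count_read(void)
    {
        unsigned long sum = 0;
        for (int i = 0; i < NCPU; i++)
            sum += atomic_load_explicit(&counts[i], memory_order_relaxed);
        return sum;
    }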

Comment Re:Linux kernel is mostly 80-column (Score 1) 330

Yep, I'm with you here.

I did a stint with 132 columns a while back (when I was running Linux on a machine that wasn't quite powerful enough to run X11), and I found the extra horizontal space to be mostly wasted. If I did start putting comments or code over there, it'd often get "lost".

Now, I do use a wide format for certain debugging applications. It also works well for spreadsheets. But for source code? I've definitely got an 80-column mind, despite never having used punch cards or paper tape. I went from 28 columns to 40 to 80 as I learned programming in the 1980s. 80 columns is a very comfortable width; 132 pushed it too far.

80 columns actually corresponds pretty well to the amount of text you'd have on a typewritten page on standard letter-size paper (or A4, if you prefer). With 1" margins you get 65 to 78 characters across, depending on whether you're at 10 CPI or 12 CPI (6.5" of print width times 10 or 12 characters per inch), and that's roughly how wide programs are when written within 80-column boundaries.

Comment Re:Lamport (Score 2) 42

Indeed!

A few years back, I was implementing Leslie's Bakery Algorithm. (For that, be sure to look up his original paper, not the bastardizations you sometimes find in textbooks. That paper and more are available here.)

In my implementation, I wanted to SIMD-ize one of the steps to make it more efficient. I thought the transformation was valid, but wasn't certain, so I emailed Dr. Lamport. I was pleasantly surprised when Leslie actually replied to my email.

And yes, the transformation was valid. *whew* Our multiprocessor DSP software got a little faster that day.
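
For reference, the core of the lock looks like this. This is my reconstruction of the paper's structure, not our DSP code; the ticket-scan loop is the kind of step that invites SIMD, and a real implementation needs memory fences on modern hardware:

    /* Lamport's bakery lock for N threads, per the original paper.
     * NOTE: plain volatile is NOT sufficient on modern CPUs; real
     * code needs fences or C11 seq_cst atomics. Sketch only. */
    #define N 8

    volatile int      choosing[N];
    volatile unsigned number[N];

    void bakery_lock(int i)
    {
        choosing[i] = 1;
        unsigned max = 0;
        for (int j = 0; j < N; j++)        /* the scan that invites SIMD */
            if (number[j] > max)
                max = number[j];
        number[i] = max + 1;               /* take a ticket */
        choosing[i] = 0;

        for (int j = 0; j < N; j++) {
            while (choosing[j])            /* wait for j to pick a ticket */
                ;
            while (number[j] != 0 &&
                   (number[j] < number[i] ||
                    (number[j] == number[i] && j < i)))
                ;                          /* lower ticket (tie: lower id) goes first */
        }
    }

    void bakery_unlock(int i)
    {
        number[i] = 0;
    }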

Anyway, there's some fascinating stuff on his page full of papers. The link again: http://research.microsoft.com/en-us/um/people/lamport/pubs/pubs.html

Comment Re:Hyperbolic headlines strike again (Score 1) 181

I think part of the problem is that the axes aren't linear. If you know the problem you're trying to tackle a priori, you can tackle it with multiple orders of magnitude greater efficiency. For a fully specified, unchanging problem, I'd expect 3 orders of magnitude or better in most spaces, because you'd build exactly the hardware you need and strip away all the hardware that supports unneeded programmability: you'd build a hardwired ASIC. Even in the programmable space, spending a bit of effort matching your problem to your processor can bring huge gains in efficiency, at least 5x. Also, consider that efficiency isn't just run time, but rather a function of power, performance, and cost.

If you tried to run them on a typical desktop processor, the algorithms in a hearing aid would drain its battery before they'd even finished loading. Instead, they're baked down to a hyperefficient DSP or ASIC that's tuned specifically for the problem.

You cite a SPEC benchmark that runs faster on an A7 than an A15. Is that in clocks or wall-clock time? I suspect it's dominated by pointer dereferences, such as a linked-list traversal. Load-to-use latency (which isn't a function of cache organization, but rather pipeline depth) becomes the dominant term for those workloads.
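
The kind of loop I mean (a toy illustration, not the actual benchmark):

    /* Every iteration must wait for the previous load to complete
     * before it can even compute the next address, so load-to-use
     * latency sets the pace, not raw issue width. */
    struct node { struct node *next; };

    unsigned long chase(struct node *p)
    {
        unsigned long hops = 0;
        while (p) {          /* each p->next depends on the prior load */
            p = p->next;
            hops++;
        }
        return hops;
    }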

Backing up a bit: My problem with your thesis is that you assume there's a "best GPP" and then seek to prove that no one processor could possibly be it, on the basis that across random applications, the winner varies. Your argument seems to be, at the limit: "If you don't tell me your application ahead of time, I can't pick a best processor; therefore there are no general purpose processors."

It's the other way around. There's a cluster of processors that are OK at a range of random tasks. They're distinguished from special-purpose processors by the fact that the special-purpose processor performs at least 5x better (and likely orders of magnitude better in some cases) than the average for the cluster. That's true even if some of the processors in that cluster are 2x more efficient than the others. A processor is a GPP if there are few or no problems for which it's orders of magnitude more efficient than its cohort. 2x is nothing to sneeze at, but a specialized processor should reach much higher: 5x at a minimum.

And please note I'm talking about efficiency. It's not raw cycles or even wall-clock time. Maybe a better measure is "energy per function", or "energy per function per dollar". (Although the latter is a bit dubious: you buy the hardware once, but you use it many, many times. Lifetime costs are best approximated by energy costs over the lifetime of the device, if you're doing significant compute.)

You mention GPUs. Sure, GPUs provide cheap FLOPs, and they can even start to run arbitrary C programs. But what percentage of those FLOPs get utilized when running random programs? You might get a 4x speedup offloading some algorithm to your video card, but is that a win when your video card's raw compute power is 100x your host CPU's? Would you buy a Windows machine powered only by a GPU, running everything from your statistical regression to your web browser?

(I may exaggerate, but only slightly.)

To me, "general purpose" means, "I run the compiler, and for the most part, I get what I get. If there's some hotspots, maybe I can tune for this specific architecture. Most of the time, I don't worry." Specialized means "by selecting this processor for this task, I know up front I need to spend time optimizing the implementation of the task to this processor."

Perhaps the qualm is that that's really more a function of the application than of the processor. OK, I can buy that. But when you look across the space of processors that get deployed in that way, you'll see that most processors tend to end up on one side or the other of that line fairly consistently, and few are on the fence. You find very few DSPs and GPUs asked to run Linux or Windows kernels and applications (the core code, not the stuff they compile to be offloaded, say, in a shader language). You find some number of x86s asked to run signal processing applications, but only where they can afford the cooling.

Comment Re:Mind-blowingly cool, but... I don't get it. (Score 1) 79

Due to self-interference, the light bumps into itself on the way out, and subsequently can't get out. At least at my limited level of understanding, it's the wave-like nature of light at play here.

I imagine at some point, the trapped photons all get absorbed and the original energy dissipates as heat.

Comment Re:Also which languages that beginners choose. (Score 1) 217

FWIW, I technically had some C++ before Perl, but not enough that I count it.

Re: Perl: I'm much the same way with Perl. I use Perl for lots of quickie projects. Great for anything I'm only going to spend at most a couple days developing.

Perl's also great for much larger projects too, where runtime performance isn't absolutely critical but flexibility and ease of development are paramount. We actually have some fairly significant projects at work that are written in Perl.

One of them embeds Perl in another language as a metaprogramming layer. I couldn't imagine trying to write that in C++ without a dedicated team. But a set of hardware designers are effectively maintaining the tool in the background, thanks to it being written in Perl.
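
If you've never seen it, embedding a Perl interpreter takes surprisingly little code. This is just the stock perlembed recipe, not our actual tool; a real host would feed in the user's metaprogram instead of the hardcoded one-liner:

    /* Host a Perl interpreter inside a C program. Build flags come
     * from: perl -MExtUtils::Embed -e ccopts -e ldopts */
    #include <EXTERN.h>
    #include <perl.h>

    static PerlInterpreter *my_perl;

    int main(int argc, char **argv, char **env)
    {
        /* A trivial embedded "program" for illustration. */
        char *embedding[] = { "", "-e", "print \"hello from embedded perl\\n\";" };

        PERL_SYS_INIT3(&argc, &argv, &env);
        my_perl = perl_alloc();
        perl_construct(my_perl);
        perl_parse(my_perl, NULL, 3, embedding, NULL);
        perl_run(my_perl);
        perl_destruct(my_perl);
        perl_free(my_perl);
        PERL_SYS_TERM();
        return 0;
    }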
