

Comment: Re:Good grief... (Score 1) 681

by Mr Z (#49127891) Attached to: Bill Nye Disses "Regular" Software Writers' Science Knowledge

At a quick look, the XOR trick depends on there being an integral type large enough to hold the pointer type, and if there is, it appears to be legal. A conforming implementation apparently might not have a sufficiently large integral type, although I can't imagine anybody writing one.

The XOR trick is inherently implementation dependent, since it requires manipulating a pointer while it's an integer. I think it's fair to assume anyone using it is only using it on a machine with a sufficiently wide integral type.

What's not strictly conforming in my mind is performing any manipulation on the pointer while it's represented as an integer. However, you would be correct to point out that if reinterpret_cast< sufficient_int_type >( pointer ) gives me value X, then regardless of the shenanigans I pull with X, as long as I supply that exact same bit pattern X to reinterpret_cast< orig_ptr_type >( X ) I should get the original pointer back. And if round-tripping a pointer through an int back to a pointer is strictly conforming, then the XOR trick is strictly conforming too.

(At the risk of sounding like I'm shifting goal posts, I do know the C++11 standard tried to get some wording in there to support garbage collectors. I have no idea how that language reads against the XOR trick. I do know the XOR trick would confuse GC by hiding pointers from it though. As for whether GC could ever work out-of-the-box in real, non-trivial C++98 programs that have been around awhile, allow me to show you my deeply skeptical face. You pretty much need a C++11/14 program written in a more modern style.)

In any case, can we both agree that on most modern systems, the XOR-pointers trick is best left in the last millennium?

The extra pointer could be legally put into the padding by a sufficiently smart compiler, I believe.

I don't believe structure copies are guaranteed to copy padding.

It's also moot on most systems: if pointers have the strictest alignment of any type on a given platform, there will never be contiguous padding large enough to accommodate a pointer. The only cases I can think of where pointers don't share the strictest alignment are systems with 32-bit pointers that nonetheless require 64-bit alignment for double and/or long long. Surprisingly (or maybe not), 32-bit x86 only requires 4-byte alignment for double and long long.

So even if it were legal for the compiler to play games on you within the padding in a POD type, on most commercially viable systems you'll never have the padding you need in a contiguous field.

We're rather far into "theoretically possible, but with such restrictions that nobody would bother" territory.

Comment: Re:Good grief... (Score 1) 681

by Mr Z (#49119563) Attached to: Bill Nye Disses "Regular" Software Writers' Science Knowledge

I wasn't saying the XOR trick is illegal in C++. (It does, however, rely on implementation defined behavior, so you likely couldn't use it in a strictly-conforming program.)

I was saying that it's illegal for the compiler to undo bad data structures, such as replacing your XORed pointers with proper prev/next pointers. If you have something like this:

struct ugly {
    uint64_t prevnext;
};

Where 'prevnext' is the XORed pointer, the compiler isn't allowed to replace it with something more sane like:

struct ugly {
    ugly *prev, *next;
};

...or even...

struct ugly {
    uint32_t prevnext;
};

...if it figures out you picked an oversized integer for the storage.

TheRaven64 pointed out some cases where it can be legal for the compiler to rearrange / modify bad structures, but the gains tend to be minimal.

As I recall, the C++ standard does put some requirements on structure layout, as least for standard layout PODs:

* Minimum size of 1 for a structure so that each element of an array of empty structures has a distinct address

* Distinct addresses for all non-bitfield members

* Pointer to struct is convertible to pointer to first member and back

* Increasing addresses for members in order of specification

* Optional padding as required between members and at the end of the structure.

* Layout compatibility between identically declared standard-layout structs for their common initial sequence. (That's a mouthful!)

(There may be some others I'm forgetting... That list was off the top of my head.)

If a compiler wanted to rearrange a structure (say, to eliminate or minimize padding, or to eliminate unused members), it'd have to prove that the program didn't rely on any of the guarantees the standard offers that the compiler's otherwise violating. Violations of the standard by the optimizer are legal as long as you don't get caught. ;-)

Comment: Re:Good grief... (Score 1) 681

by Mr Z (#49118739) Attached to: Bill Nye Disses "Regular" Software Writers' Science Knowledge

Actually, it isn't if the compiler can prove that the layout is not visible outside of the compilation unit.

Ok, that's fair. You also need to ensure its memory image isn't visible to, say, a char*. (That pesky char* exemption in the standard that allows you to write memcpy and memmove is no friend to alias analysis.)

For example, if the address of one of these structs is never taken, then the struct need never live in memory at all, so its layout is irrelevant. If addresses do get taken but you can find all the uses of those addresses, then yes, you could play games with the layout if it was impossible for the program to notice. That's perhaps a stronger criterion than a compilation-unit boundary, though.

What's the justification for compilation unit boundary? It seems like you could expose the layout of the struct (and therefore any compiler shenanigans) through other means within a compilation unit. offsetof comes to mind. :-)

My initial gut reaction is that nearly any interesting data structure wouldn't qualify for this optimization. :-) Sounds like your data matches.

It's much more interesting in environments with on-the-fly compilation, because then you can adapt data structures to their use.

Cool. That reminds me of an experiment I heard about at HP. They implemented a PA-RISC to PA-RISC dynamic translator that used run-time information to reoptimize the code. The overall speedup (including the cost of the translator) was in the 5% - 10% range. Here's the paper.

Even then, you can do it outside of the compiler (for example, the NeXT implementations of the Objective-C collection classes would switch between a few different internal representations based on the data that you put in them).

I suppose you could do that in C++ with template specialization. In fact, doesn't that happen today in C++11 and later, with movable types vs. copyable types in certain containers? Otherwise you couldn't have vector< unique_ptr< > >. Granted, that specialization is based on a very specific trait, and without it the particular combination wouldn't even work.

In theory, you could also specialize based on, say, sizeof( item ). I suppose that then becomes an ABI issue with the C++ standard library. Bleh.

I have a love-hate relationship with C++. :-)

Comment: Re:Good grief... (Score 1) 681

by Mr Z (#49113109) Attached to: Bill Nye Disses "Regular" Software Writers' Science Knowledge

That's cool, and I could see how you might get something out of it if it actually digs in on some of the tougher topics. Pipelining doesn't really affect your code until you start doing things that expose the pipe's depth, such as branches, load instructions, multiply/divide and floating point. It can be a good lesson on why instruction scheduling matters, whether it's handled by software or hardware.

In my comment above, I mentioned a class I wasn't sure students were getting much out of: the bar of accomplishment seemed pretty low, and the things it focused on seemed esoteric compared to what actually matters. For example, the professor had the class design a machine to implement quicksort on bytes, with a memory that was only word-addressable. That just seems like a pointless brain teaser.

I think it'd be far more relevant to model a simple RISC pipeline and look at the impact of, say, a branch predictor in the context of some common algorithms. Or maybe the impact of memory latency on various data structures. "Hey kids, this is why linked lists suck." "Let's see the horror show behind switch-case statements and VTables!" :-)

In other words, I was picking on that particular professor and course, not the idea of such a class.

Comment: Re:Good grief... (Score 2) 681

by Mr Z (#49111955) Attached to: Bill Nye Disses "Regular" Software Writers' Science Knowledge

I have worked with compilers that auto-vectorize, unroll and jam loops, collapse/coalesce loop nests, interchange loops, software pipeline instructions, and so forth. (The compiler we ship for our DSPs at work does all of these things.) And there are some compilers that will tile loop nests into chunks that fit in L1 and L2 caches, although I don't know that I've used one, unless GCC's picked up some new tricks.

But yeah, rewriting data structures, such as undoing XORed-pointer lists? I agree with you: BS! Heck, in C / C++, such a transformation is actually illegal.

I wonder if this is a case of confusing the compiler with the standard library? Most people I know who aren't compiler geeks don't really distinguish between the compiler and the standard libraries. "I don't know; it comes with the compiler!"

For example, I know if I use std::map in C++, that I'll get certain big-O guarantees. But, I don't know how it's implemented underneath. Is it an RB tree? An AVL tree? A B* tree? Some other structure? I don't actually care, but I do expect whoever's writing the standard library to pick an appropriate representation for current hardware. And I know if I upgrade my compiler (which upgrades the standard library), I may get an improved implementation of the standard algorithms.

Comment: Re:Not this shit again (Score 1) 681

by Mr Z (#49111711) Attached to: Bill Nye Disses "Regular" Software Writers' Science Knowledge

I've met CS grad students (grad students!) who have never programmed in assembly language, or really anything lower-level than Java. I've worked with other freshly minted CS majors who have barely gotten a taste of machine organization, computer architecture, or any of these lower-level concepts. Or, if they were exposed to it, they did the bare minimum to survive the architecture class with no intention of retaining the material. And I've worked with others who were stellar and knew their stuff inside-out.

Sure, the better schools do a better job of covering computer architecture basics. But it seems a fair number cover this material rather perfunctorily.

In any case, this is a bit off the topic. I expect CS folks to know their job and their curriculum, and complaining about schools with shoddy CS curricula vs. schools that do CS right is missing the point.

Bill Nye's complaint is that most software writers (and he included them in a rather generic list of occupations, as opposed to singling them out specifically) aren't terribly scientifically literate. That has almost nothing to do with computer science, and more to do with how many folks fall for pseudoscientific claims and hokum.

Silicon Valley has an unusually high concentration of anti-vaxxers. Explain that. No amount of compiler theory, digital logic, virtual memory, pipelining, algorithm analysis, or big-O notation can fix the scientific literacy gap that anti-vaxxers fall into. And it's that sort of scientific literacy Bill Nye was going on about.

Comment: Re: Not this shit again (Score 1) 681

by Mr Z (#49111501) Attached to: Bill Nye Disses "Regular" Software Writers' Science Knowledge

I have it here on my desk. IEEE 754-2008 defines a binary representation for decimal floating point, so that people can move away from BCD (Binary Coded Decimal). Your observation doesn't invalidate Lumpy's point that the computations and electrical representation are all in binary.

Comment: Re:Good grief... (Score 1) 681

by Mr Z (#49111463) Attached to: Bill Nye Disses "Regular" Software Writers' Science Knowledge

I'd argue that that's not so much computer science as computer engineering. Computer architecture courses tend to be more the domain of EE departments than CS departments, in my experience. I'll grant you that enlightened schools do a better job of combining them. The school I went to, which has an excellent EE department, didn't combine them. (Our CS department at the time was made up primarily of math professors who couldn't hack teaching math.)

I find it hard to imagine a pure CS undergrad program going to the extreme of designing and building a pipelined processor. I have seen some CS architecture classes that try to have students architect and simulate simple machines. The bar was pretty low, and it wasn't clear to me what the students were getting out of that particular class.

Granted, I'm saying all of this from the jaded perspective of someone who's been in the chip industry 18 years, and has had the pleasure of working with a wide range of skill sets. I've also seen how much there is to learn after college. I do think EEs could stand to learn more CS, and CS folks could stand to learn more EE, but I don't know what you'd displace in the existing curriculum to fit it all in. Some of this you just have to learn on the job if you don't want to be stuck in college for 6 to 8 years.

Comment: 100% Non-fiction (Score 2) 164

by Mr Z (#49067697) Attached to: How is your book reading divided between fiction and non-fiction?

I have tons of programming books, engineering books, algorithms books, data manuals, etc., all of which I read for enjoyment and to better myself as an engineer. I have read some fiction, but it's been a long while since I spent much time reading it. And then there are puzzle books, which don't really fall into either category.

When I want fictional entertainment, I'll turn on the TV or go watch a movie.

Comment: Re:But in template metaprogramming... (Score 1) 252

by Mr Z (#49012269) Attached to: AP Test's Recursion Examples: An Exercise In Awkwardness

*chuckle* It's a power so great, it can only be used for good or evil!

Actually, it's useful for compile-time optimization of performance critical code. For example, I have a CRC calculation template class that generates its lookup tables at compile time based on the specified polynomial, shift, direction and field size. Need a new CRC in a different wacked out field? One line of code and it's there.

Previously, in C (or C++ w/out template metaprogramming), I'd need to either write a separate program to generate the lookup table for me offline and manually graft that into the code, or generate the lookup table at startup.

The consuming code is fairly clear, so in principle it makes other code easier to write. Of course, if there were something wrong with my implementation, debugging it is potentially more challenging.

Comment: But in template metaprogramming... (Score 1) 252

by Mr Z (#49011979) Attached to: AP Test's Recursion Examples: An Exercise In Awkwardness

Public Service Announcement comes on the television. Pages of code scroll by on a computer screen in the background. A strung out looking programmer stares into space, bleary eyed, obviously stressed, pulling his hair out. A voiceover begins...

Using recursion as a simple looping construct in an imperative programming language isn't normal. But, in C++ template metaprogramming, it is.

Template metaprogramming, not even once.


(Actually, I do a fair bit of template metaprogramming in C++. It can be handy for a certain class of problem.)


SpaceX Launch of "GoreSat" Planned For Today, Along With Another Landing Attempt 75

Posted by timothy
from the elon-musk-for-the-win dept.
The New York Times reports that SpaceX will again attempt to recover a Falcon 9 launch vehicle, after the recent unsuccessful try; the company believes the lessons from the earlier launch have been learned, and today's vehicle will be loaded with more hydraulic fluid. This evening, the rocket is to loft the satellite nicknamed "GoreSat," after Al Gore, who envisioned it as a sort of permanent eye in space beaming back pictures of Earth from afar. The purpose of the satellite has evolved, though. The Times writes: The observatory, abbreviated as Dscovr and pronounced "discover," is to serve as a sentinel for solar storms: bursts of high-energy particles originating from the sun. The particles from a gargantuan solar storm could induce electrical currents that might overwhelm the world's power grids, possibly causing continent-wide blackouts. Even a 15-minute warning could let power companies take actions to limit damage.

Comment: Re:really? (Score 2) 192

by Mr Z (#48959295) Attached to: Perl 6 In Time For Next Christmas?

We actually use Perl quite heavily where I work, and its use is only growing. We've built rather significant pieces of our infrastructure around it, including a rather impressive internal project that uses Perl as a metaprogramming language. You'll get yelled at if you deviate from the standard perl-based development flows we've put in place.

So, "isn't used all that much anymore" may be more anecdotal than not? I guess it really depends on the shop whether perl use is increasing or decreasing.

Comment: Re: Perl is more expressive (Score 1) 192

by Mr Z (#48959087) Attached to: Perl 6 In Time For Next Christmas?

There's that typo and the fact that the < got eaten.

Personally, I don't see the point in a pissing match between Perl 5 and C++14. I use both. Perl's great for rapid prototyping and programs that need a certain flexibility. C++14 is great for rapid execution.

The Perl code is fairly idiomatic, and a Perl programmer would type it without thinking. It'll likely compile into an optimized sort, since this type of sort is common in Perl.

The C++14 version is also idiomatic to C++14 (although I think non-member begin/end would be preferred there), and has the advantage that it'll compile an optimized sort for whatever type you're sorting.

In C++14, I can use std::vector, std::map, std::unordered_map, std::regex, std::shared_ptr, std::unique_ptr, gobs of standard algorithms, range-based for() and lambdas. These give me very similar containers and tools to what I have access to in Perl 5. That makes the conceptual leap between the two shorter. I quit worrying about syntax ages ago. Know the syntax for the language you're programming, and spend your energy on the semantics of the program and problem you're trying to solve. This is work, not a beauty contest.

I've been waiting patiently for a usable Perl 6. I did install a version of Rakudo Star a couple years ago to compete in a Perl 6 coding contest (over Christmas, it so happened). It was fun picking up the language, and I was able to implement some interesting stuff quickly, including an A* search. But, it was definitely not ready for prime time. I ran into several rough edges, and execution time was uninspiring. I've gotten accustomed to how fast Perl 5 is.

Now I'm hoping for a nice Perl 6 Christmas present this year, although I won't get my hopes up too high. :-)
