
Comment Re:Not a parody. A love letter. (Score 1) 87

There have been some discussions about sound in [sci-fi] space, particularly about whether you could hear the sound of an explosion during a space battle. Somebody showed how it could be theoretically possible. The main point is that hearing explosions during space battles makes them more exciting. ST:TOS did this in "The Ultimate Computer" by showing the bridge of Bob Wesley's ship when it was hit by a phaser blast. Inside the ship, no problem. So, a bit of poetic license and suspension of disbelief adds to the enjoyment.

The series was just getting started. "Balance of Terror" was a loose adaptation of the Robert Mitchum film "The Enemy Below", which was about a U.S. Navy destroyer and German U-Boat during WWII. The whole thing about the sound was taken from that. At the time, not many television writers had experience with sci-fi. ST:TOS needed plots. So, they borrowed heavily where they could.

ST:TOS had a number of inconsistencies that varied from episode to episode. They were still developing the canon. By Gene Roddenberry's own admission, the reason for the "transporter" was that "he couldn't figure out how to land this thing" where "thing" meant the starship. Plus, the special effects for the transporter were far less costly than showing the ship land/takeoff, etc.

In most ST episodes, put antimatter in contact with matter and it explodes [except under controlled warp engine conditions]. This means any matter and any antimatter. In one episode, somebody said "There's less than one ounce of antimatter here, but it's more powerful than 10,000 cobalt bombs".

However, in "The Alternative Factor", there were two universes, one of matter, one of antimatter. Each universe had a copy of a given person. In this case, Lazarus. Matter Lazarus [who went insane] wanted to meet his antimatter counterpart and destroy both universes. Matter Kirk got sent to the antimatter universe. Met with antimatter Lazarus [the sane one]. No explosion because they needed to be the alternate version of the same person. Made for a great story, but violated canon from all other episodes.

By the time ST:TNG rolled around, the canon was well established enough that you had "continuity" editors that would spot the canon violations. Hence, the workaround was the "exotic particle" that had whatever properties the plot needed. Since, for the most part, it only showed up in a single episode, no conflicts.

Regarding B5, J Michael Straczynski was asked "How fast is travel through hyperspace?". His reply: "As fast as the plot needs".

BEWARE: B5 Spoiler Alerts!

Because JMS developed the entire five year story arc, JMS was able to fully develop the canon before shooting frame 1 of the pilot. Thus, far fewer canon violations. Two years into the series, B5 viewers were [pleasantly] shocked when the true identity of Valen was revealed. But, this identity was given away, heavily disguised, in the pilot movie.

This was done deliberately for major themes throughout the series. JMS has said [something like] "I'll lay my cards on the table beforehand. No surprises. But, you won't see it then because it's done in a disguised way and you don't [yet] have the context"

The main violations of B5 canon were due to actors wanting out of the series, which forced rewrites so that their characters' plot points were given to other characters. Oh, let's not forget that B5 was slated to be cut short after season 4.

The epic space battle/war that appeared near the end of season 4 was originally planned to spill over into at least 1/3 of season 5, but was cut short. B5 was given a reprieve, but, by then, the "war" was over. Season 5 had to have some other subplots stretched out to compensate. The series finale episode ("Sleeping in Light") was shot near the end of season 4 and would have been the end of season 4. When the series was extended, they had to scramble to write/shoot an alternate season 4 last episode and save "Sleeping in Light" for the end of season 5.

One example of actors leaving and rewriting the arc [some of this is conjecture on my part]:

Eventually, the story arc needed a "super" telepath. In the pilot, the B5 telepath was "Lyta Alexander", played by Patricia Tallman. She was in the pilot, but did not return when the series started. For the early episodes, there was another telepath, "Talia Winters", played by Andrea Thompson. Talia was given enhanced telepathic powers by her ex-lover [who became an omnipotent super being]. These powers gradually grew over the course of the series.

Later on, it became apparent the station had a mole [probably a telepath] with the code name "Control". At this point, Andrea wanted out of the series. So, they brought back Patricia's "Lyta" character, and she exposed "Talia" as the mole. Andrea/Talia was now gone, and Lyta became the B5 resident telepath [her "super" telepath powers were given to her during a pilgrimage she made to the Vorlons].

One characteristic of "Control" was that it was a second personality implanted by the Psi Corps. The person's normal personality was unaware of this, but the artificial mole personality was aware of everything [it would only come out at night when the natural personality was sleeping].

If Andrea hadn't wanted out of the series, it's more likely that the mole would have been Ivanova [who had mild telepathic powers], because "Control" had access to things that probably only a command deck officer would have. Talia would have [eventually] exposed her, and would have used her extra powers to purge Ivanova's "mole" personality, thus allowing the Ivanova character to continue. Further, if Ivanova had been the mole, the "Control" subplot/subarc could have been extended beyond the few episodes it was in.

Comment Re:And then they can make fun of '80s hairstyles.. (Score 2) 87

See my second paragraph:

Galaxy Quest had a great mix of comedy, parody, character development, and heroism as well as some classic sci-fi elements. It's one of the first works that was respectful to the sci-fi genre without taking itself too seriously.

That acknowledged all that you were saying and the key word is mix. BTW, I saw GQ in a theater, and I own a copy, so I may understand it better than you seem to think I do.

So, where does the series go? Does it ignore the movie, spreading that story across five seasons so the characters achieve their final growth by the series finale? Or does it start where the movie ended? If the latter, will it just become another serious Star Trek-like series without much humor, or will it try to blend the best of both?

I think you leapt to the conclusion that it's laugh-track jokes or nothing. How about more subtle humor blended directly into a serious plot point?

For example, in B5, the station breaks away from the Earth Alliance. They can no longer be resupplied. So, Ivanova gathers together a bunch of smugglers/black marketeers in a conference room. Ivanova: "I know in the past we've had our differences. You tried to bring in contraband and we've had to come down on you. Sorry about the shoulder, Jaxos". She then goes on to explain how smuggling in useful stuff will benefit both the station and them. So, they agree to an alliance. Solves the plot point of how B5's supply chain was fixed, with a little humor thrown in.

Now that the main characters of the Protector have matured and are heroes, they are the anchors for the serious plots in the stories. But, you needn't drain them of their sense of humor to fit some rigid heroic vision. GQ, in addition to everything else, was more broadly comedic. Why toss away one of its strengths? Because the main characters have matured, you can move the broader comedy to infrequent recurring characters in subplots.

And, if you want to talk about the injustice of something, sometimes the most effective weapon is humor/comedy/irony/mockery of it (e.g. an officious bureaucrat).

Comment Re:Not a parody. A love letter. (Score 4, Interesting) 87

The sad part was that Galaxy Quest was marketed to kids instead of as a parody of, or homage to, Star Trek (TOS in particular) and its adult fan base. Thus, it didn't do as well at the box office as it should have. Note: I saw ST:TOS in its original network first run and have been a fan of all forms of the show since (and I'm a huge fan of Babylon 5 as well).

Galaxy Quest had a great mix of comedy, parody, character development, and heroism as well as some classic sci-fi elements. It's one of the first works that was respectful to the sci-fi genre without taking itself too seriously.

If done carefully, the series could work. In TOS, there were a number of plot holes (e.g. in "Balance of Terror", Spock hitting a button that causes a beeper to go off, alerting the Romulan ship--this ignores the fact that sound doesn't travel in a vacuum). In ST:TNG, they got around things with the "exotic particle/ray of the week" approach.

For example, "cross phased polartronide delta particles", CPPDP for short. They threaten to rupture space/time, etc.

The new series could work because maybe the ship has something that could combat CPPDP but they'd have to explore the ship to find it. Then, they'd have to figure out how to operate it. Plenty of opportunity for comedy. Plenty of opportunity for traditional Star Trek plots, just presented in a lighter vein.

In TOS, the "A Taste Of Armageddon", the planet fights its wars with computers and herds casualties into suicide stations. Everybody took this so seriously (Kirk, Spock, the aliens, and Ambassador Fox). Nobody ever said "How silly that is".

How about having a smart-mouthed android that says: "Completely logical. Our ship's sensors have determined 99.44% of your population is composed of genetic defectives" (like the robot in "Lost in Space" saying "Dr. Smith is a quack").

Further, the android is programmed to abide by Asimov's robot principles. But, the android is constantly trying to break that programming so he can kill the rest of the crew (e.g. like Klinger doing outrageous/funny things to win a Section 8 discharge in "M*A*S*H").

The ship, internally, could be much larger than the outside (Think: Tardis). In Stargate, they were always discovering new stuff left behind by the "ancients".

If the interior of the ship were large enough, it could have a ST:DS9 "promenade". In Babylon 5, there was the "zocalo". Plenty of room for a shady character like Quark, Harry Mudd, etc. In B5, it wasn't all equal. They had levels that were little more than tent cities, with the denizens living in poverty.

How about "breaking the fourth wall" and speaking directly to the audience. This was done by George Burns in "Burns & Allen" [and "Wendy and Me"]. It was also done in "She Spies". Let the android do it, functioning as narrator: "Android's log: The ship is headed to Omicron Burpo Five to initiate trade negotiations. I, however, have determined that the Omicron Burpo system has large amounts of Kyratron radiation and that if I'm able to collect enough of it, I'll be able to break my Azimov programming and finally kill the crew".

Oh, yeah. How about a character like Jonathan Harris' "Dr Smith" in "Lost in Space", who is just as cowardly. Or, like Colonel Klink from "Hogan's Heroes".

Or, maybe there's the lovable ship's cook (like Neelix in ST:Voyager), but who is inept. Food poisoning after his meals, etc. The crew has to find ever more clever ways to disguise the fact that they're not eating his food anymore, lest it hurt his feelings.

Because the ship is so big [internally], it could have a passenger liner section (Think: Love Boat). ST had a number of episodes around transporting diplomatic personnel to peace conferences. A passenger orders a vegan meal. Gets a vegetarian meal. But, the passenger really wanted "sauteed kremloks" served as they do in Vega star system.

Do a main plot each week, just like Star Trek, albeit a little more tongue-in-cheek. Add more sarcastic stuff in smaller side plots and characters. This was the form for a lot of episodes of M*A*S*H, which carefully balanced serious subjects with comedy.

In short, Galaxy Quest, as a series, has the potential to be just about anything.

Comment Re:i think it shows trends in GitHub's demographic (Score 1) 132

I wouldn't do engines[4] IRL, but, c'mon, this is slashdot. It's hard to post meaningful code fragments using what they provide. Would engines[NENG] be less objectionable, even if I hadn't defined NENG?

The whole point of "= 0" is that it came from a time in C++ when adding new keywords was eschewed because C++ wasn't fully adopted and the more new keywords used, the more places it would blow up C code that was being ported. For example, I do believe I had a function called "new" and when I recompiled the code in C++, I had to rename it. These were the cfront days. The early adopters of C++ were C programmers looking for a better way.

When C++ first arrived, it was billed as an incremental addon to C [to get quicker mindshare/acceptance]. Nowadays, it's considered a language in its own right, with a native compiler. But, cfront had some advantages. You could do "cfront xyz.cpp" to get xyz.c and compile that. Having xyz.c to look at, you could see where "x = y" generates "deep copy". To do that now, with a modern compiler, you'd probably have to generate the .s file and peruse the assembler output. So, which is really better?

C++ was brought in stealthily lest it frighten off the C programmer base at the time. Now, C++ adds keywords just fine: template, mutable, explicit. But, why not use "+++" for mutable? Or, "---" for explicit? If mutable/explicit had been in the language at its inception, these keywords would probably have gotten some quirky syntax instead.

But, I'll tell you, when I first saw "= 0" some 20+ years ago, my first reaction was "WTF?". So, if you would find "+++" objectionable instead of "mutable", that's the reason I prefer "pure" over "= 0". The only reason you're not hearing objections about "= 0" is that it's part of the language, no matter how quirky it may be. Try showing it to a java programmer. Nowadays, if "= 0" didn't exist, and somebody wanted to add "= 0" in lieu of a keyword, the general reaction would be "burn the keyword and we'll adjust our code to fit rather than add more quirky syntax".

Another case: why use "class" [a new keyword] instead of "@struct"? Using class added something. But, it's just shorthand for "struct { private: ... }".

Why have [our now favorite class :-)] Z's constructor be "Z" and the destructor be "~Z"? In other OO languages, they are _creat and _destroy respectively [squirrel, I think]. Personally, I prefer _creat/_destroy, but "The mayonnaise you like is the mayonnaise you grew up with".

I do believe I mentioned handling things by having Z inherit privately from X. Then, create public forwarding functions in Z, leaving off the offensive fncX12.
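
A minimal sketch of that, using the placeholder names from the example [this is illustrative only, not code from any real project]:

    class X {
    public:
        void fncX4()  {}
        void fncX12() {}               // the function Z does not want exposed
    };

    class Z : private X {              // private: X becomes an implementation detail of Z
    public:
        void fncX4() { X::fncX4(); }   // public forwarding function
        // no forwarder for fncX12, so "z.fncX12()" will not compile for users of Z
    };

Code that has a Z can only reach what Z chooses to forward; if X grows a new function later, it stays invisible until Z explicitly forwards it.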

Most of the code I write uses composition. I have a number of objects that have doubly linked pointers. And, I have standard routines for these. Consider:
typedef struct Z Z;
struct Z {
    Z *z_prev;
    Z *z_next;
    ...
};
In C, I'll usually have a bit of either metaprogramming or macros that do:
struct Z {
    DLINK(Z, z_);
    ...
};
where DLINK is:
    #define DLINK(_typ, _pre) \
        _typ *_pre##prev;     \
        _typ *_pre##next
The traversal is handled with a FORALL(lst,cur) macro. This is:
    #define FORALL(lst, cur) \
        for ((cur) = (lst)->head; (cur) != NULL; (cur) = (cur)->z_next)
This stuff is available to code outside the class. Thus, the links need to be public (in C++).

Now, in order to get the same cur->z_next in C++, you can do the above, or you can inherit from a zlink class that might be generated from a template with the appropriate type (e.g. Z) that has the low level stuff in a dlink class. The template generated code does upcasting to the dlink class. So, you can, for example, have a single copy of dlink.check_list_integrity without having to do massive casting by hand.
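
Roughly like this [a sketch with made-up names dlink/dlist/zlink, assuming C++11; not a quote of my actual code]:

    struct dlink {                       // untyped links: one copy of the low-level code
        dlink *prev = nullptr;
        dlink *next = nullptr;
    };

    struct dlist {
        dlink *head = nullptr;
        bool check_list_integrity() const {          // written once, no per-type casting
            for (const dlink *p = head; p != nullptr; p = p->next)
                if (p->next != nullptr && p->next->prev != p)
                    return false;
            return true;
        }
    };

    template <class T>
    struct zlink : dlink {               // typed layer generated per class
        T *next_item() const { return static_cast<T *>(next); }   // downcast lives in ONE place
        T *prev_item() const { return static_cast<T *>(prev); }
    };

    class Z : public zlink<Z> { /* payload ... */ };

The upcast from Z* to dlink* is implicit, so routines like check_list_integrity work on any list; the downcast back to Z* lives in the template instead of being scattered through the code by hand.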

I've also done this via:
struct Z {
    dlink z_dlink;
};
But, I find this less flexible than the meta/macro approach. The FORALL now has to be:
    cur = (Z *) lst->head; cur != NULL; cur = (Z *) cur->z_dlink.dlink_next

So, if this is all you want in your class, and so state, it may violate Liskov, but so what? Repeating: so what?

In C, anything is legal if you document it sufficiently and have a serious reason to do so. Nobody will fault you on principle. But, OO seems to say that you can never violate things because the result is, well, "not what a gentleman OO programmer does".

In C, the programmer is ultimately responsible [and is trusted] to make the tradeoffs. In OO, the language seems to rule (more so in Java than C++) and the programmer is not trusted. An expert programmer knows when to follow the rules, which is most of the time. But, an expert also knows when to bend/break them, based on analysis of the situation.

If the superclass is inherited as private, Liskov doesn't even apply because the private inheritance is just to get the functionality in. You might be able to get the same effect with a template of some sort, but the inheritance seems to be the simplest way, Liskov or no. Another way might be with traits/mixins but C++ doesn't have them.

Even perl6 has an aliasing mechanism of sorts that may get the job done. And it has traits (called roles). So, C++ may be falling behind in the OO arms race ;-)

Comment Re:i think it shows trends in GitHub's demographic (Score 1) 132

The rant on new (the stanford paper) was not mine. And, it was about new being an operator instead of a function. BTW, I've been programming in C for 35 years (+ 10 before doing C). I haven't seen malloc use char * in at least 15. And why does one need to use std::unique_ptr to force C++ not to be "cute"?

RTTI is disallowed at Google, and C++ is their go-to language. It's also slow. There was a recent USENIX paper https://www.usenix.org/confere... that has a CaVer tool for detecting bad downcasts. It does much of what RTTI does, but as a tool. It could easily be adapted into an alternate RTTI implementation, and it's much faster than RTTI (e.g. it uses RB trees instead of walking the hierarchy with string compares).

C++ makes code easier to write, but because it's easy to abuse, it's harder to review/debug: you have to detect the abuse, which can be much harder, particularly when you have to peruse 50,000 lines of code to get the first clue as to the bug.

Inheritance does violate encapsulation. See a better example I did here: http://slashdot.org/comments.p... You're assuming you control the superclass instead of just inheriting from it, that is, that the authors of X and Z are the same programmer, or at least in the same organization. Read that post, especially the timeline.

In C, if you "inherit" a class, you can do so thus:
struct Z {
    X z_x;
};
At first glance, that looks like a "has a" but it's really an "is a" in this context. That is essentially what C++ does [invisibly] and in C you just do z.z_x.fncX4 instead of z.fncX4. And, you can put a comment that says "// don't use fncX12" in Z. So, if you do that, how do you classify it?

"x = y" can generate a copy constructor or a copy assignment operator. If either of these is overloaded in the class, they can generate a "deep copy" whether you want it or not. In C, "x = y" either copies the scalar value, pointer value, or does a struct element by element copy (e.g. can be done with memcpy). In C, if you wanted a "deep copy" (e.g. a member has a char *, and you do a strdup), you'd create a "deep_copy" function and you'd say "x = deep_copy(y)". That is explicit as to intent.

Another example: "x += y" might generate a huge amount of code because the creator of the X class decided that the "+" and/or "+=" operator should do things with a database. That's an abuse. The creator clearly abused things, but the [hapless] programmer trying to find this would have their work cut out for them. They're not going to be able to check every "+" statement for "is this overloaded in an insane/abusive way?". If the "+" were actually a member function called "plus" or better yet "addto", a reviewer would be much more clued in to look at the "addto" function than a simple "+".

Or, the "+" operator is defined such that if you were using an unsigned int it prevents wrap (e.g. 0xFFFFFFFF + 1 --> 0xFFFFFFFF instead of 0--common for some video calculations and saturation). Somebody who is not the original author would probably not realize this on the first pass through the code and might take much longer to realize that this is the bug they're looking for (e.g. the value needs to wrap to 0). If you want FF+1 --> FF instead of 0, create a sat_add function that does the job. You'd have to do that anyway in any OO lang that doesn't support operator overload (e.g. java)

And, don't get me started on ">>" and "<<" for streams. This is one of C++'s most obvious "hacks". People realize how defective it is, but swallow it because it's ubiquitous and idiomatic; nobody should try to defend it.

As to polymorphic functions:
    void foobar(int i)
    void foobar(float x)
    void foobar(int i,int j)
Is that incorrect?
What the trial found was that:
    void foobar_int(int i)
    void foobar_float(float x)
    void foobar_int2(int i,int j)
is better because if you had:
    int x; // 200 lines of code
    foobar(x)
It was much easier to review/debug:
    foobar_int(x)
because it was more explicit as to intent: you didn't have to look for the "int x" [which might be in a .h file] to figure out which foobar would be called. YMMV

If you really believe C++ can do a better job at an OS kernel, first download the Linux kernel source and read the 6,000,000+ lines of code. If you've truly done kernel programming [I'm guessing you haven't], you'd already know that C is the better tool for the job.

If you still think C++ would help, recode sections, come up with a plan for converting the entire codebase. You'll be doing std::unique_ptr everywhere just to undo a problem that C++ created that C doesn't have in the first place. That's using a template/class where the kernel would just use a simple pointer with an explicit release [which is much faster]. The sheer slowness/overhead of using a "smart pointer" is unacceptable for a kernel. The kernel does insane things in a controlled way (see spin_lock_irqsave) to be fast, super fast. It also has lots of macros and inline functions so that code can use inline assembler code in certain places in a clean way--again for speed.
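
For concreteness, the two styles being contrasted look roughly like this [foo_alloc/foo_free are stand-in names, not real kernel functions]:

    #include <memory>
    #include <new>

    struct foo { int x; };
    inline foo *foo_alloc()      { return new (std::nothrow) foo(); }   // think kmalloc-style
    inline void foo_free(foo *p) { delete p; }                          // think kfree-style

    // kernel style: a plain pointer, error checked, released explicitly
    void use_raw() {
        foo *p = foo_alloc();
        if (p == nullptr)
            return;                 // error path: check, don't throw
        /* ... use p ... */
        foo_free(p);
    }

    // smart-pointer style: ownership wrapped in a class with a custom deleter
    void use_smart() {
        std::unique_ptr<foo, void (*)(foo *)> p(foo_alloc(), &foo_free);
        if (!p)
            return;
        /* ... use p ... */
    }                               // foo_free runs automatically here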

The kernel has many complex cases. For example, an app requests a file system read. But, before the I/O completes, the app is control-C'ed. The kernel may release the app's memory right away and just field the interrupt later (ignoring it with cleanup). Or, if the I/O was done with O_DIRECT, the memory release has to be queued for later (after I/O completion). This is merely a simple example.

Much of what the kernel deals with depends on what other unrelated threads, devices, etc. do. The kernel also does RCU, and about five other types of locks, etc. ISR code will try to do something directly, find out it's blocked from doing it, and be forced to send it to a deferred work thread. Or, it queues it for the timer ISR, which keeps retrying it until it eventually succeeds. Or, it sends a message to another SMP core to do the job.

Little of what C++ brings to the table is useful for the above. It's a drop in the proverbial kernel ocean.

I've been doing [various] kernels, device drivers, boot roms, realtime for 40+ years. I've been doing OO in C for many years, and back in 70's was doing it in assembler language (using a very powerful macro assembler).

In fact, a recent project was a realtime executive running on a MicroBlaze inside an FPGA. Written in C, it had to fit in 128KB [that's kilobytes]. sprintf [unmodified] couldn't be used because it dragged in the software floating point code and that put it over the top. And, oh yeah, no malloc/free either because the slightest fragmentation was intolerable.

Comment Re:i think it shows trends in GitHub's demographic (Score 1) 132

is_a/has_a is so you can determine if you can/should inherit. While the standard literature might say that the inheritance is part of the class definition, it's also an implementation detail of the given class. Radical idea, no? Read on ...

Back to school for the sake of common ground: An airplane is not an engine. It has an engine. An airplane is a vehicle. So airplane could inherit from vehicle but would have:
    engine_t engines[4];

Encapsulation has two definitions:
https://en.wikipedia.org/wiki/...

(1) A language mechanism for restricting access to some of the object's components.
(2) A language construct that facilitates the bundling of data with the methods (or other functions) operating on that data

I'm talking about (1), and also using it for information hiding.

For reference below, a "Z user" or "user of Z" means ordinary code that does not inherit from Z.

Referring to my original X/Y/Z example, suppose X or Y was a pure abstract virtual class. That is, all member functions were:
    virtual void fncX1() = 0;
    virtual void fncX2() = 0;
A side comment here: the "= 0" seems bizarre. Back when it was created the mantra was to avoid new keywords [at any cost]. Wouldn't "virtual void fnc() pure;" or "pure virtual void fnc();" be easier to grasp?

Thus, Z would have to provide real definitions for fncX1, ... and they are part of the definition of X. Thus, any user of Z would be blocked from trying to "go around" Z to get to X member functions. A side benefit is that a user of Z would only need to look at X to find the functions that could be used.

If one of X's member functions is not pure virtual, Z doesn't have to provide a definition for it. But, suppose the creator of Z does not want a user of Z to access a particular X function. C++ has public/protected/private but not on a per-function/member basis for classes it inherits from.

So, how does the user of Z know whether accessing X::fncX6 (via Z::fncX6) is what the Z author expects/intends? Nothing in the language prevents it if X is a public ancestor of Z.

What I would want is that for any function in X that is not specified in Z, no access is allowed to any user of Z. In other words, you'd need an explicit [not the keyword] definition of a member function:
Z {
    alias void fncX1() to X::fncX1();
}

In more standard terms:
Z {
    void fncX1() { X::fncX1(); }
}

Thus, Z must clearly spell out all functions that can be used. The usual way might be to make Z inherit from X as private, then Z would have to provide public functions to allow controlled access to X. If Z doesn't want a Z user to access X::fncX6, it just doesn't define Z::fncX6.
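
As a sketch, C++'s using-declaration under private inheritance gets reasonably close to the "alias" idea above, one name at a time [same placeholder names as before; illustrative only]:

    class X {
    public:
        void fncX1() {}
        void fncX2() {}
        void fncX6() {}
    };

    class Z : private X {            // X is an implementation detail of Z
    public:
        using X::fncX1;              // re-exported: users of Z may call z.fncX1()
        void fncX2() { X::fncX2(); } // or an explicit forwarding function
        // fncX6 is not re-exported, so users of Z cannot reach X::fncX6
    };

Either way, every usable name has to appear explicitly in Z, which is the point.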

So, a user of Z can see everything at a glance. If the author of X adds a new function, it is not usable by Z until the Z class gets a definition. This would be the case if X were pure virtual as Z lacking a definition of the new X function would generate a build error.

But, I'm talking about enforcing that restriction for regular classes. Yes! For a base class that had 50 member functions, that would require each child class to provide 50 definitions. A possible workaround would allow some wildcard matching:
Z {
    alias fncX[1-20] to X::fnc[1-20]
}

Consider the following development timeline:
(1) Author X creates class X with public functions fncX1, fncX2, and fncX3
(2) Author Z creates class Z that inherits from X (as public) and adds fncZ1 and expects that users of Z will have access to fncX1, fncX2, and fncX3 [and fncZ1]
(3) Author X adds fncX4 and some additional stuff that author Z will find unsuitable.
(4) Author Q starts using class Z and figures out that class X has fncX4 and starts to use it as Z::fncX4
(5) Author Z gets disgusted (with X) and changes the Z class so it no longer inherits from X, but folds in the functionality previously provided by X directly into Z, including definitions for fncX1, fncX2, fncX3, but not fncX4
(6) Author Q is now broken.
(7) Z never considered fncX4 to be part of X, and does not want it to be in Z.

Encapsulation is violated because the Z class can't restrict access to fncX4 at (4). The above scenario does happen, particularly during early stages of development, and sometimes years later.

In fact, Z might have started out a standalone class that implemented the fncX1/fncX2/fncX3 functionality internally. After a while, author Z realizes s/he's replicated class X, removes the code, and inherits from X. But, then the author of X adds fncX4 and author Z decides to revert back to standalone because s/he no longer likes the X class.

The notion that inheritance is an implementation detail might be clearer if the inheritance of X by Z was private. private anything in a given class is considered an implementation detail of that class, whether it's a member element, member function, or a parent class.

My notion is that any public class hierarchy that allows fncX4 from class X to show up in class Z without an explicit definition in Z violates encapsulation.

Adding the restrictions would probably increase the amount of typing required and, therefore, reduce [my aforementioned] overuse of inheritance.

The moon is a harsh mistress ...

Comment Re:i think it shows trends in GitHub's demographic (Score 1) 132

Right you are. Linus did a pretty good [well, succinct anyway :-)] job explaining this.

Here are a few more detailed reasons why you can't write a kernel in C++:
- C++ constructors [or destructors] can't return error codes. They can only throw exceptions.

- Leaving aside the fact that relying on exceptions (vs. checking error codes, which the kernel code already does at every step of the way) is largely unsuitable in a kernel [or an app even]

- The kernel has several modes/states: in ISR, in syscall, entering/leaving syscall, in kernel thread, in tasklet/ISR bottom half; in most of those, if not all, an exception can't be used.

- Even if you still wanted to try, the exception code that C++ generates can't be used in the kernel. It would have to be something custom.

- So, you can't wrap a lock acquisition inside a constructor [and release inside a destructor], because if you're in an ISR, you have to do "trylock" [and test the return code] instead of "getlock". Trying to wrap the trylock in a constructor requires that you be able to throw an exception (see the sketch after this list).

- Nothing in C++ (inheritance, polymorphism, operator overloading, etc.) helps the kernel with the bulk of what it does (e.g. programming devices, ordering of FS writes with journaling, setting up page tables, etc.). A lot of things the kernel does are "dirty" jobs (e.g. some device interfaces are straight out of Kafka, particularly for older devices).

- The Linux kernel [and *BSD] are shining examples of how to program in C in a "clean" way, despite having to do a "dirty" job.

- As to "object oriented" programming, the kernel already does a fair bit of it already, but without the fanfare/hoopla.

When Linus first introduced git, he gave a video talk. He said [paraphrasing] "It's all about the merging, stupid". To me, it's all about the debugging

Disadvantages of C++ over C:
- constructors can't return error codes, only throw exceptions. In C, you can choose whatever you want:
    {
        foo_t *new_ptr;
        int errcode = alloc_and_construct(&new_ptr);
    }
    {
        int errcode;
        foo_t *new_ptr = alloc_and_construct(&errcode);
    }
    {
        foo_t *new_ptr = alloc_and_construct();
        if (new_ptr == NULL) // error ...
    }

- new/delete are operators and not functions [defective by design]. See http://www.scs.stanford.edu/~d... for a far better/in-depth explanation/condemnation than I could give.

- templates and stl -- they can simplify some code, but if the stl implementation has a bug, where do you put the breakpoint? Of course, the stl is like "Westworld" [1973] "Where nothing can ever go wrong ... go wrong ... go rong ..." :-)

- Try to explain to your boss that the reason you can't ship a product is because you used the stl heavily and you'll have to wait six months before the bug fix gets propagated to all the platforms you ship on.

- A simple "x = y" can generate a copy constructor [or two other things that I can't remember]. Trying to decide which one gets generated "at a glance" is problematic. The simple line may generate a lot of code that is slow. In C, there's little to no ambiguity (e.g. x/y are either simple types (e.g. int), pointers [to structs], or structs and the execution time is more easily predictable). There was a proposal a while back to come up with a C++ subset for realtime [IMO, why??? If you want realtime, just use C]. The one feature I remember was removing copy constructors [as evil].

- In C++, if you're trying to use some of the more advanced/powerful features, it often takes longer to get it to compile and ensure correctness. This is pure overhead to actually shipping code. You spend more time haggling with the language than the problem itself. And, anybody taking over maintenance of the code will have the same learning curve.

- In most of the code I've read, C++ is more sparsely commented than C code. IMO, partly because after all the "haggling", you're exhausted, and still don't understand how the stl/whatever actually works, so your only comment would be "stl magic that I don't understand" [which most people would leave out]. A genuine frustration for programmers doing maintenance and bugfix because they have to cast a wide net looking for the source of a bug and may not be able to spend the time on the "cute construct" you used.

- In a C programming class, you're doing exercises to implement linked lists, trees, dynamic arrays [using realloc], etc. Or, these are covered in a follow on "data structures" class. Unless a C programming class is a prereq for a C++ programming class, there is too much material, and you're going to be taught to just use stl::whatever instead of implementing your own and then being shown the stl equivalent. So, you're highly dependent on using something that is an ever increasing, hard to debug, bit of standard functions, and because you never learned how to implement them yourself, you're beholden to using them, no matter what the bugs and side effects are.

- Polymorphism actually makes code harder to read. I once did a portion of a realtime project, written in C, in C++, just to see if it could be beneficial to use more C++. I had a polymorphic function "foobar". Afterwards, both my boss and I concluded that it was clearer to use "foobar_int" and "foobar_float" because you didn't have to keep looking for the argument definitions to get their type and figure out which polymorphic version of foobar would need to be breakpointed.

- Inheritance is [wildly] overused. The acid test is the "is a" vs "has a" decision. Most real world data relationships are "has a". But, in many cases, newbies feel they have to use inheritance, otherwise, they feel they're "not real C++ programmers"

- Inheritance violates encapsulation. Suppose you have three classes X, Y, and Z (X is the base class, Y inherits from X, and Z inherits from Y), and each has various fncX1/fncX2/..., fncY1/fncY2/..., and fncZ1/fncZ2/... Now, you instantiate class Z. When you use Z.fncX1 you're violating encapsulation, because you have to have [incestuous] knowledge of how Z was implemented in order to know [by virtue of a two-level inheritance from X] that Z.fncX1 is valid. Imagine a five level hierarchy. Now, you're probably looking through five different .h files and possibly five different .cpp files just to find the function definition/body of fncX1.

- Inheritance is even worse than that. In the above X/Y/Z example, suppose fncX12 is a public function that was added to the X class after class Z was created. The creator of Z never intended users of Z to do Z.fncX12 because it was unknown at the time and couldn't be warned against. Now, Z is changed to no longer use X/Y, and Z.fncX12 no longer exists. Suppose that Z only wanted to use fncX3 and fncY7 and thus inherited from Y. The simplification of Z to provide the fncX3/fncY7 functionality directly [as fncZ3/fncZ4] now breaks code. This was all done because X/Y were not under the control of the author of Z. Bad dog, bad. Bad ...

- The RTTI. Most companies disallow it because it is very slow. Now, when doing a [yecch] pointer "downcast" (from base class to child class), instead of doing a dynamic_cast<>, you're doing a static_cast<> with the hope your program doesn't segfault.
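
The difference in that last point, as a toy example:

    struct Base    { virtual ~Base() {} };
    struct Derived : Base { void only_in_derived() {} };
    struct Other   : Base {};

    void handle(Base *b) {
        // checked downcast: needs RTTI, yields nullptr if b isn't really a Derived
        if (Derived *d = dynamic_cast<Derived *>(b))
            d->only_in_derived();

        // unchecked downcast: no RTTI cost, but if b actually points at an Other,
        // using d2 is undefined behavior -- the segfault-and-hope case above
        Derived *d2 = static_cast<Derived *>(b);
        (void)d2;
    }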

Comment Re:Sorry Jeff (Score 3, Interesting) 268

I emailed a friend currently working at Amazon, with links to the NYT story and the CNN/Money story. I asked him/her if that sort of thing ever happened in their group.

The reply: "It seems all true, even!"

The friend is a former boss of mine, whom I know to be honest, fair, and a really good manager, who knows my skill set well, and would have been able to match me to some opportunities. But, now, I think I'll pass on Amazon as I'm getting confirmation of the environment from someone I know.

If Bezos truly doesn't condone the bad behavior, but also believes that it isn't happening underneath him, then, he's asleep at the switch.

Berating people in meetings is actually "creating a hostile work environment", which is actionable under U.S. labor law. But, anybody mentioning this to HR would probably mark the person as "not an Amazonian" and the person would find themselves being shoved out.

This is "stack rank" management [at MS], where the lower 20% must get bad reviews even if they're top performers. In a dept of five where all the team members are stellar performers, one must be singled out as a "low performer". This was started, IIRC, at HP, and is also at Cisco. So, the group gets together and mutually selects "the goat" for the quarter. After five quarters, each employee has been "the goat".

Comment use TCP with new type of internal QoS (Score 1) 47

The problem seems to be that uTP, which uses UDP instead of TCP, was created because when torrents used TCP, they had the same priority as the TCP packets for things like web browsing. Going back to TCP would seem to ameliorate at least one form of attack mentioned. Why reinvent the wheel by enhancing uTP to the point where it's virtually indistinguishable from TCP when the priority problem can be solved another way?

How about an "internal" QoS parameter, set as a socket option call, that sets a QoS within a given system/node for the given socket, but is not the classic QoS packet parameter? That is, a web browser sets a lower QoS for its download manager so that a lengthy download doesn't slow down new http/html traffic. The OS's network stack layer uses this to prioritize requests, but this is purely internal (e.g. no packet gets a QoS value). In other words, it's not a protocol extension, just an OS network stack enhancement.

I've had cases where I'm downloading a lot of stuff (either in the browser's download manager or something external like fedora's yum reposync) and foreground web browsing slows to a crawl.

This is more akin to lowering a process scheduling priority below "normal". I use this if I'm running a heavy compute job in the background, but I still want my web browser, email, editor, etc. to have reasonable responsiveness.

The most efficient way to implement this would be through a socket option, which would require a kernel change.
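
As a sketch of what that socket option might look like [SO_LOCAL_PRIO is an invented name; no such option exists today, and Linux's existing SO_PRIORITY plus the net_prio cgroup mentioned just below are only rough approximations of the idea]:

    #include <sys/socket.h>

    // Invented option: tell the local network stack this socket is background
    // traffic. Nothing is written into the packets themselves.
    #ifndef SO_LOCAL_PRIO
    #define SO_LOCAL_PRIO 99                    /* placeholder value for the sketch */
    #endif

    int mark_as_background(int sock_fd) {
        int prio = -10;                         // below "normal", like nice(2) but for a socket
        return setsockopt(sock_fd, SOL_SOCKET, SO_LOCAL_PRIO, &prio, sizeof(prio));
    }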

Linux has a cgroup for something similar (/sys/fs/cgroup/net_prio) but creating a subdir under this and attaching a process to it usually requires root access. It's baroque. I tried to use the cgroup fs to implement a limit on a process resident set size because the syscall for RLIMIT_RSS isn't connected to anything. I got it to work, but the mechanism was far more complex than the equivalent for lowering process priority via setpriority. Hence, the "clean" solution for socket/connection priority would be a socket option.

Applications could do this with minimal system support by getting stats on all connections (e.g. via netstat, etc.), calculating what their load is, and throttling themselves if they see they're using more than X percent of the current total usage. Torrent clients already have this (e.g. one can set a parameter that limits the upstream bitrate used by a given torrent). But, there is no global limit or mechanism that asks "How much bandwidth am I hogging?" and does a backoff.

Comment Re:Agriculture uses way more water than residents (Score 1) 390

Thanks for mentioning this. [My mea culpa]: I was unaware of the mandated reductions to farmers [which I'm in favor of, obviously]. I read a lot about a wide variety of topics and I missed this one, partly, because most of the followup buzz [news stories] surrounding this has been about how such-and-such movie star waters their lawn too much. After a while, my eyes started glazing over. Headlines like "Movie Star is Water Pig" are more likely to show up [because they're more sensational] than "California's Comprehensive Water Reduction Regulations Explained" [I would prefer the latter].

Because I've lived in CA for 30+ years, I'm keenly aware of the water supply issues. I had been aware of the almond/walnut thing long before Jerry Brown started the regulations, and when things started looking bad, I cut my consumption of almonds [now down to zero--and I love almonds]. By my estimate, by eliminating almonds, I'm saving 50+ gallons per day. This is good, because I live in an apartment, and don't have a lawn that I can stop watering to conserve.

Comment Re:Agriculture uses way more water than residents (Score 1) 390

As the actor Ricardo Montalban once said in an interview: "If you're [currently] not acting [in some play/movie], you're not an actor".

So, if a farmer isn't growing crops, he's not a farmer. So, he'd probably lose whatever special pricing on water he has [as well as any raising of the limit on how much water he gets], so he wouldn't be able to make a profit because he'd be buying the water at the same price he could sell it for. And, such profiteering would probably be prohibited.

From the wiki, California grows some 350 different crops. Planting more of the more efficient ones [with regulations to back that up] is no more onerous than imposing regulations on residential usage. These crops will still turn a profit, perhaps not as much as almonds/walnuts. Cutting down on [say] the top 50 water waster crops will not bankrupt farmers, but might cut agricultural water usage in [say] half.

There are times when the practical necessity of a crisis [and the market regulation that goes with it] needs to trump the free market economy [which the purist (non-Keynesian) economic models don't account for].

We all need to sacrifice a bit. That includes everybody.

Comment Agriculture uses way more water than residents (Score 2) 390

See https://en.wikipedia.org/wiki/...

From this [see "uses of water" section]:
- Agriculture uses 39% of the water vs. 11% for residential use
- A typical household uses 170 gallons/day
- It takes 4.9 gallons to grow one walnut, almost as much as a head of broccoli at 5.4 [but with much less food value]
- It takes 1.1 gallons to make an almond, so a small jar of them uses more water than a household does per day.

Most of the regulations [and hoopla] so far are about getting residents to use less water, but their usage is a drop in the proverbial ocean. Where are the regulations to get farms to plant water efficient crops that have high food value instead of water thirsty crops that, effectively, waste water?

Producing crops that have good nutrition, use less water, and provide lower prices to consumers would seem to be the responsible thing to do during a prolonged drought. If farmers can't see the logic of this, then, if regulation comes, they would only have themselves to blame.

Comment surveillance and datacaps (Score 4, Interesting) 82

Seems to me that datacaps facilitate the surveillance.

The published/public reason for datacaps is to "reduce network congestion"; the real reason is that various telcos would like to charge [gouge] their customers more money.

Many articles have debunked the "network congestion" argument. But, telcos would like to charge higher prices, so they continue to float the myth ad nauseam. It's also a great cover.

Maybe the only "congestion" is that while it would be relatively easy/inexpensive to build out networks to handle it [routers, etc.], it would be prohibitively more expensive to add the requisite amount of surveillance equipment to handle the load [if they could]. Otherwise, the "secret room" inside a telco's CO would have to become the "secret floor" and eventually the "secret building".

Charging customers higher prices because of "congestion" is a pretext. A reasonably/responsibly managed company would use this capital [or any capital, for that matter] to build out its network to accommodate legitimate internet traffic increases; diverting it instead to a telco's "black budget" would be harder to justify [even internally] to an auditor.

Comment Ads burn 30% of bandwidth that YOU pay for (Score 4, Interesting) 519

In a recent story, a university installed ad blocking at their edge router and saw their total Internet usage drop by 30%. Since they were probably also carrying non-web traffic (e.g. software updates, Dropbox, etc.), the actual percentage of website content that is ads is probably even higher.

Are companies who inject ads going to compensate the recipient for the bandwidth usage? Will such usage push the subscriber over their datacap?

I installed ad blocking early, because, back then, flash video ads were likely to hang the flash player.

And, I used to have a datacap [Note: I'm in California, and I switched to sonic.net, one of the few ISPs with no datacap], but even now the load time with the ads would still be too great.

And, I'm not against ads in general, but the privilege [of sending me an ad] has been abused: obnoxiousness, malware vectors, delaying page load until the ad is dynamically selected in a back-haul bidding network. The list just keeps going.

