Comment My personal stack overflow experience (Score 3, Informative) 167

I've been posting to and moderating slashdot for years, but I just started with stack overflow. Here's my experience.

I definitely agree with what the articles are driving at, and I've seen it firsthand. In particular, "The Decline of Stack Overflow" is absolutely 100% on the money.

I answer questions--at least five a day. In a short time [about a month], I amassed 1000+ rep points, and I'm now in the top 0.5% for the quarter. The article's comment that "SO hates new users" is true. Before I got to this point, I used to have more difficulty with certain people. As my point total got higher, the snark level went down. Ironic, because I was doing [trying to do] the best job I could at all times. My answers didn't change in terms of quality, just the tone of the comments I got back.

When I post an answer, I take several approaches. Sometimes, a simple "use this function instead" is enough. Sometimes, "change this line from blah1 to blah2". If the OP has made an honest effort to be clear, but the posted code is way off the mark (e.g. has more than two bugs), I'll download it, clean it up for style [so I can see OP's logic before I try to fix it], fix the bugs [usually simplifying the logic], and post the complete solution with an explanation of the changes and annotations in code comments.

This is the "cut-n-paste" solution. I may be just doing somebody's homework for them. But, usually, it's just somebody who spent days on the code and is "just stuck" [I've asked some OPs about this]. The controversy is that "if you do that, they'll never learn anything". Possibly. But, it's part of my judgement call on the type of response to give. IMO, in addition to research and traditional classes/exercises, one of the best ways to learn is to read more advanced "expert" code. Compare one's original to what the expert did and ask "Why did they do it that way?!". This may foster more research on their part, and they will have an "AHA! moment".

Unlike slashdot, you can edit a post [either a question or an answer], and you can delete it. Comments can be edited for five minutes and deleted anytime. Now this will seem goofy: if you comment back and forth with a given user over an answer one of you gave, whether a collegial discussion or a flame war, eventually an automatic message comes up asking if you'd like to transfer your "discussion" to a chat page. Also, because comments are limited to 500 chars, I sometimes have to post something as a partial, incomplete answer--even though it's really more appropriate as a comment--because what I need to say requires better formatting/highlighting and wouldn't fit in a comment.

The goofy thing is that you start with 1 rep point. You can post a question or a full answer. But, you can't yet post a comment!?

On SO, people edit their questions and answers based on feedback in the comments. An answer may be edited several times before the questioner accepts it. Sometimes, for complex questions, it can take a day or two to come up with the right answer.

Despite all this, once in a while I get a "heckler" who doesn't like an answer [even though it's correct]. It goes several rounds in the comments; usually the other person doesn't understand the problem space enough to realize the answer was correct [or more subtle than they realized]. So, it goes back and forth, and each time I explain how I was correct, adding clarification or highlighting what I said originally. Eventually, the heckler says "Your answer doesn't answer the question". This is for an answer the OP has "accepted" as the best one.

I've seen reasonable questions downvoted within minutes [I upvote them back]. I've seen people threaten to close a question as unclear, opinion-based, or impossible to answer as described. The last one is funny, because the question is clear to me, and I provide a correct answer [that eventually gets upvoted and/or accepted]. Sometimes I send the commenter who is threatening doom a message [you can direct a comment to a specific user--like twitter] and say "Hey! The question can be answered--as is. Please see my posted answer".

Because I have particular domain expertise, I tend to see some of the same people active on a question that I feel qualified to answer.

Some are superb angels:
- Always polite
- Extreme kindness to newbies [even if the newbie "doesn't get it"]
- Always provide a helpful and correct answer.
- Plus, a ton of helpful comments, even when not posting a full answer.
- Often post a helpful comment, then come back with a full, correct answer an hour later
- May post a comment about how an answer was wrong. They're usually right--I've had this happen to me once or twice

Some are what I'll call "keepers of the faith" or KOTFs:
- They will comment "consult man page" [without explaining _which_ manpage].
- Or [and this is popular], "please post an MCVE". An MCVE is SO jargon for a "Minimal, Complete, and Verifiable Example". This gets demanded even for questions that are already clear, concise, etc.
- Note that some "angels" will also ask for an MCVE, but the difference is tone: angels do it with love, and only when the question really is unclear. KOTFs stay nominally polite, but it comes off as abusive
- The KOTF crowd camp on the SO moderation pages. They do direct moderation, but the pages operate mechanically more like slashdot's meta-moderation section.
- The problem is that KOTFs will moderate based on form rather than domain expertise (e.g. a bash programmer moderating a question involving python)
- They moderate a question as "should be deleted" because it didn't fit the MCVE requirement--or so they thought.
- Because they don't have the domain expertise, they're not qualified to judge whether the question is clear to a person who is "skilled in the art" of the particular domain (e.g. unclear to bash person, perfectly clear to python person)
- KOTFs also leap on the littlest missing char in OP's posted code, even when it's obvious that SO's posting mechanism is to blame [SO uses markdown, and you have to indent four spaces before each line of code]. SO doesn't allow a direct upload or a clean paste like pastebin.
- Also, downvoting a question costs the downvoter only 2 rep points, so KOTFs can (and, unfortunately do) downvote quickly and frequently. Shoot first and ask questions later.
- After downvoting, KOTFs are likely to start commenting on a page about MCVE, code is terrible, consult man page, why are you asking this.
- Some do this in separate comments, as they gradually think of new things to snipe/carp about [*]

[*] I saw this happen literally on a page that had the question upvoted to +5. Multiple commenters [myself included] were, using the comment system, helping OP test/debug his code in real time (e.g. "try this", wait, "what result?", "okay, try this")--this is unusual, but not that unusual. We were "online" with OP for about two hours before OP got so intimidated by the KOTF that he deleted the question page. Fortunately, OP had gotten enough hints from the angels that he was able to find his problem, and he reposted his question the next day with a complete answer.

Frankly, even when I'm not the target of this, it can be difficult to watch. If the KOTF is genuinely wrong, I'll sometimes comment back to them directly, because it's clear their primary mission is to make OP feel as bad as possible. Doing so takes time/energy on my part that is better spent answering a question, but otherwise, KOTFs can [and do] run amok. With the "online" example, I finally saw the KOTF and composed a message telling him to back off, but the page was deleted 10 seconds before I could send it.

Still, overall, SO works. But, it could use a facelift ... Ironically, younger programmers think that older ones can't learn new things. But, on SO, it appears that the KOTFs are younger programmers who started on SO early, amassing points over a multiyear period. Because they've been there so long, they feel like "they know what's best" or that they know when a question [or answer] is "well formed" or not. Older programmers have come to the site more recently, so they're more circumspect. So, in this context, who's really the tired old man?!?!

Most OPs try to post a good question. Sometimes, they're newbies and need help to formulate it. Setting aside the KOTF comments, other commenters will ask for more specific information: post this, and this, and this. With the commenters' help, the OP can edit the question enough to get a good answer back. Usually, OPs are more than willing.

An expert answerer doesn't always [and frequently doesn't] need a perfect question to provide a good answer. So, if they're asking for more, it usually means the info really is needed--by them or by anybody else trying to answer.

However, some OPs do take offence at being asked to post more information, even if it's needed. Sometimes, a further explanation by the commenter as to why they need more info gets the OP over this. Sometimes, OPs drop the question at this point. I can only surmise this is due to ego crush, even when it's an angel asking. Some OPs do have a sense of entitlement that if they post a question, it should be answered--quickly. They're also the hardest to get adequate info from.

And, sometimes, it appears [to me, at least] that the OP thinks the code they held back [even if not proprietary] is too "special" to "give away for free". They never say this outright, but, after several rounds of multiple commenters asking for additional info that is never provided, what other conclusion can one draw? This is more likely to happen with newbie programmers or newbie SO posters (e.g. "I'm working on my first singly linked list implementation, but mine is going to be special and revolutionary for the world").

So, I will continue to post answers. Yes, I do "work" for "points". But, in addition to getting my answer "accepted" [worth points], I frequently get a comment from the OP: "Thank you! Everything is now so clear". And that, as much as anything else, is the reason to do it.

It's what allows me to continue through the minefield of SO's version of "The Good, The Bad, and The Ugly" ...

Comment Re:How can there be? (Score 1) 622

I think you have the model wrong. You can't download/upload faster than your maximum line rate, which is predictable [be it DSL, cable, or wireless]. I had AT&T DSL with a 150 GB data cap and now have Sonic DSL, which has no data cap. I pay $60/month. My downlink speed is 1.2 MB/sec, so my maximum possible usage can be calculated [roughly 3 TB/month if I ran flat out]. Sonic, BTW, is perfectly happy to say "use your line 24x7 at maximum speed".

Of the fees, I expect some to go to pay for local loop maintenance, routers, and backhaul links--and carrier profit. Unless the carrier is pricing below their costs, the model works. Carriers are simply refusing to pay for capital expenditures out of the money they are charging. They just want to have their cake and eat it, too.

Comment Re: Waaaahhhhh!! (Score 1) 688

I love technology and love programming with a passion that rivals Michelangelo's for painting. I'm all about "what's the best technical solution", just like a doctor is about "what's the best care for a patient". Check one's ego/emotions at the door.

But, sometimes, situations can make it difficult to remain civil. In one company, our product was a realtime H.264 video encoder. It had a video encoding app that, using special device drivers, interacted with custom hardware inside an FPGA. At the time, I included some .h files from /usr/src/kernel/include/linux to get at some constants/structs I needed in the app to communicate with the driver. Normally, a good portion of this has an analog in /usr/include/sys, which would be the preferred way to do it. But, at the time, /usr/include/sys was incomplete, so going to /usr/src/kernel was necessary.

Eventually, I had a conversation with a programmer who was with his manager. The manager was a peer of my boss, but could exert undue influence on the direction of my project, so I wanted to keep things calm. It goes like this:

He: there is absolutely no reason to include from /usr/src/kernel in an app
Me: that is normally true for regular apps, but our app interacts heavily with the kernel and our driver and /usr/src/kernel has definitions that are only available there.
He: there is absolutely no reason to include from /usr/src/kernel in an app
Me: We need "blah" and it's only defined in /usr/src/kernel/include/linux/blah.h
He: there is absolutely no reason to include from /usr/src/kernel in an app

This goes on for 5+ rounds.

Eventually, I ask:
Me: Why do you believe this, given what I've said?
He: stony silence
Me: If I don't use /usr/src/kernel, where would you suggest I find the definitions that the code needs?
He: stony silence
Me: If you don't/won't believe me, why don't you post your "there is absolutely no reason to include from /usr/src/kernel in an app" to the LKML, explaining that it's an app that interacts with a device driver? See what they say.
He: stony silence

This goes on for 5+ rounds. At this point, I am furious. I so wanted to say "you are being fucking silly". But, I settled on "That is the most ridiculous thing I've ever heard in my entire career".

Of course, my civility was not returned in kind. The guy's boss would get profane privately and in meetings regularly. In retrospect, I wish I had said "fucking silly"--I would have felt better and I don't believe it would have made the overall situation any worse [but, it might have made it better--see below].

Most people respond well to an empathetic answer. But, there are some personality types [particularly, those who use abuse themselves] that distrust the "soft answer" as "weak" (i.e. they think you're lying to them). If, however, you lay it all out and then close the discussion with "you're being damn stupid" and walk away, they'll think about it and come back later and say "okay". Strange, but true ...

Comment Re:Perl? LOL. (Score 1) 163

It's supposed to be ironic, considering how every N months another "perl is dead" meme gets posted to slashdot [usually as the result of an article posted by a Dice shill]. This was mentioned at a perl conference in a talk by a core developer titled [something like] "Is perl really dead?". He went on to say that there are actually more perl jobs than jobs in a number of other languages. IIRC, MS uses perl as the master control for building either their kernel or Office.

In truth, I'm actually language agnostic. I've learned, used, [and forgotten] quite a few languages over the years. To me, a language is a tool in the toolbox, not a religious icon.

I recently taught myself python, just to see if I was missing something.

Starting out, I was against the indentation thing. That turned out to not be such a problem.

But, what is a serious problem is the way globals are handled. In most languages, symbols are global if you don't declare them local to a given function. In python, it's the reverse--you say "global sym" inside the function. It also has some goofy rules about whether a symbol is global or not. If a global is written to in function x, it can be implicitly referenced in function y [without a global declaration]. But, if y tries to write to it, then y needs a "global sym" too--or the interpreter will flag it. This led to a whole host of problems. And there is no way to solve this by declaring "global sym" at file scope like in virtually every other language. This is a major defect.
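
To make the globals issue concrete, here's a tiny sketch [python 3; the names are made up]:

    count = 0

    def bump():
        global count    # needed because this function writes to count
        count += 1

    def show():
        print(count)    # reading a global needs no declaration at all

    def reset():
        count = 0       # no "global count", so this quietly creates a new
                        # local; the module-level count never changes

    def broken():
        count += 1      # read-modify-write without "global count" is the
                        # case the interpreter flags: UnboundLocalError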

Also, python has no equivalent of "for (x = 0; x < 10; ++x)". You need to say "for x in range(10)". Fair enough. But python seems to have no equivalent to "for (x = 0; x < 10; ++x, ++y, z -= 6, cur = next)" or "for (x = ...; check_x(x); x = next_x(x))" without defining an iterator of some sort.
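
The closest thing I've found to the multi-clause form is a plain while loop with a tuple assignment [my own sketch, not claiming it's idiomatic python]:

    # rough equivalent of "for (x = 0; x < 10; ++x, ++y, z -= 6)"
    x, y, z = 0, 0, 100
    while x < 10:
        print(x, y, z)
        x, y, z = x + 1, y + 1, z - 6    # all the update clauses at once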

Plus, if there's a syntax error [in, say, a 100 line file], it takes the python interpreter 2+ seconds to spit it out. In that time, my "check all perl scripts" script, which invokes "perl -c" on all files, has processed 100,000 lines of perl code.
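
For comparison, a python version of that "check everything" script would look something like this [a quick sketch; it just parses each file without executing anything]:

    import ast, pathlib, sys

    bad = 0
    # recursively syntax-check every .py file under the directory in argv[1]
    for path in pathlib.Path(sys.argv[1]).rglob('*.py'):
        try:
            ast.parse(path.read_text(), filename=str(path))
        except SyntaxError as err:
            print(f'{path}:{err.lineno}: {err.msg}')
            bad += 1
    sys.exit(1 if bad else 0)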

The stated mantras are:
Python -- there's one right/best way to do things
Perl -- there's many ways to do things

Having been used to perl's flexibility, I was surprised how difficult python was if I didn't write "idiomatic" python. In fact, despite having 200,000 lines of perl, I don't write "idiomatic" perl code either. I've never used Exporter, Carp, etc.; I've always written my own equivalents. But, instead of looking like APL, my perl code is fairly easy for a C programmer to understand [which is one of my design goals].

Comment Re:Waaaahhhhh!! (Score 1) 688

The issue was not about secure boot [as in UEFI secure boot]. This was not a debate about whether the boot loader (e.g. grub) needed a UEFI/secure version. It has one [signed by MS]. Nobody disagrees with that, not even Linus. But, Garrett was not talking about that.

Garrett was talking about PE as a format for kernel modules loaded by the kernel after it is running. This is completely different.

Here's the boot process. BTW, I'm a computer engineer with 40+ years experience and I specialize in writing boot roms, loaders, kernel/drivers and realtime on a myriad of systems, so I might just know something.

A system can have a combination of the following:
Boot firmware: BIOS or UEFI [with secure boot enabled or disabled]
Disk partition table: MBR or GPT format

Step 1: So, if you have UEFI, you can enable secure boot or not. With secure boot enabled [which mandates GPT], the firmware needs a signed PE binary in a specially marked GPT partition that has a DOS-like FS format. From within that partition, it will select a file as the boot loader. With that, the ROM will load the bootstrap loader and transfer control to it.

At this point, secure boot and PE format are no longer a consideration in the remaining boot process.

Step 2: The bootstrap loader will, regardless of how it was loaded, in turn, load the kernel. It can do so by any method it chooses.

Step 3: At this point, the kernel controls everything. It will initialize itself. It may choose to dynamically load some kernel modules for drivers, etc. But, you can gen a kernel that is statically linked with all its modules bound in.

As I said, after step 1, it has nothing whatsoever to do with PE format binaries. The boot loader is already running, regardless of how it got there.

Garrett was proposing adding PE format for step 3 [or being able to do so weeks later]. Why? And to what end?

Secure boot is the car's battery during crank. Kernel module format is the brand of gasoline you put in your car. Apples and oranges.

Comment Re:Perl? LOL. (Score 3, Informative) 163

perl6 is a complete overhaul of the language. It isn't merely perl5++. They are similar, but they aren't compatible, which is why the perl5 interpreter will be maintained in parallel [so stated]. It has a huge number of new features, including real classes [instead of implementing a class as a hash].

The perl6 interpreter [written in perl6, BTW] will be able to run perl5 code (e.g. it hooks on .pm or .pm6, etc.) and run a mix of the two. It will also be able to run python, ruby, javascript, etc., if one wants to add the front end. So, in some ways, it's like .NET. You can run a program comprised of perl6, perl5, python, C, etc., all coexisting in one program. Also, on the back end, perl6 will generate true byte code, and can generate javascript, python, or other backend languages.

You can also define your own operators (e.g. "nand" for not (x and y)), or do full metaprogramming.

perl6 classes can define "how they're implemented" (e.g. implement me as a C struct, python dictionary, javascript hash, or java class, etc.). In other words, if you request a C struct binding, the data will be stored that way. So, you fill in your object, then you can pass it off to a C function without any glue code. In and out, in and out, back and forth, at extreme speed.

perl5 is not my main language (e.g. I make my living writing C code), but I've been coding perl5 for 20 years [and I maintain a codebase of 250,000 lines just for my personal scripts]. I've been following the progress for a few years now and I've been waiting for perl6 to give it a try.

perl6 has ripped off concepts from just about every other language--and that's a good thing. Traits, mixins, interfaces(?), multimethods, a full set of functional programming operations [like Haskell, Scala, etc.], and a full complement of set operators [just like python].

What I've described is merely the tip of the iceberg, done from memory of what I was reading a year or two ago.

BTW, in case you didn't already know, slashdot runs on perl5 ...

Comment Re: Waaaahhhhh!! (Score 1) 688

Don't be such a prig. The "deep throating" comment was figurative and not literal. Would it have passed your tolerance meter if Linus had said "Have sex with Microsoft" instead? Or, "Stop being such a tool of Microsoft"? Or, "Stop being a shill that is promoting Microsoft's agenda of making it impossible to boot a non-MS operating system"?

The discussion was not about signing binaries in the kernel. You sign them with a utility and then ask the kernel to load them. The kernel checks the signature and rejects the load if verification fails.

BTW, I'm a computer engineer with 40 years of programming experience doing kernel/drivers/realtime. There is no good technical reason for Linux to use PE format binaries for applications. Even if you could load them into userland, they can't run because the ABI is different. They are radically less useful as a format for loadable kernel modules.

What would a developer say/think if you told them that before they could publish [or even load] their kernel driver/module, they would have to submit it to Microsoft and wait five days to get it signed before they could begin testing? Make a one line change to the source and recompile? Now resubmit and wait another five days ... It is this nightmare scenario that prompted the comment; the idea is so bad [and so obviously bad] that, well, what can one say? NOTE: I really meant to say "WTF can one say?"--and I rarely curse.

Once again:
- Linus does this rarely. It is not his norm with most people or threads. If you still think otherwise, you're uninformed.
- He does it to people who have a history of "not getting it" or who refuse to fix [obvious] bugs (e.g. Lennart Poettering and Kay Sievers) and who post patches they haven't even tested.

Try looking at the LKML archive. Out of the thousand and thousands of threads, see which ones actually fit the criteria.

Want to be treated well by Linus?
- Post patches that fix bugs
- Post patches that add needed features
- Keep messages short and to the point. This shows respect for Linus's time and the time of the other developers that have to read your post.
- Do your research. Show that you understand how the kernel currently operates.
- Then, when proposing/posting a fix, you'll be more likely to come up with the best one, instead of having it recoded by a maintainer before acceptance.
- Likewise, with proposing/posting a feature enhancement

Want to be treated poorly by Linus?
- Post messages that have meandering, poorly thought out logic.
- Post broken patches that won't even compile
- Post patches that break things
- Fail to provide data to support a point of view. Or, provide data that is flawed
- Show genuine lack of understanding of how the kernel actually operates when suggesting a feature or fix.
- Continue to repeat your argument, unchanged, after the logic flaws have been pointed out.
- On things that are a judgement call, continue to post messages after Linus has made his decision, trying to weasel your way in, hoping he'll change his mind.

On Poettering, it isn't just Linus that has the problem with him. A lot of developers do. He's fairly arrogant. Casting aside the merits of systemd [or not], Lennart has a long history of ignoring bug reports, claiming they're not really bugs, saying "You just don't understand it". Well, he's saying this to highly experienced software developers.

I saw this go round and round on a bugzilla entry. The other developer had already posted a clear description of the bug, with supporting attachment files. It took something like fifteen extra posts, from several developers, before the report was taken seriously.

If you do this enough times, as Lennart has, you will anger/alienate a few people. And, later, when you post on the LKML, you will get hostility.

Lennart, in addition to ego, is also a comparative newbie coder. He adds features at breakneck pace, but because of his inexperience, he has a high bug ratio. He doesn't quite yet have enough experience to discern bugs when they're pointed out to him.

Want the love? Be a cooperative developer that works with other developers to fix problems. Don't tell developers that have way more experience than you do that they don't know what they're talking about [when they do].

Comment Re:Waaaahhhhh!! (Score 1) 688

Thanks for the link. I read it back in the day [and agreed with Linus' message, if not the tone]. I had forgotten the target of this was Garrett.

IIRC, the whole thing was about getting a Microsoft key signing of a Linux kernel module. This was trying to add on the cruft surrounding UEFI secure boot into kernel module building. IMO, this is a braindamaged concept. As Linus pointed out, kernel modules can be signed with X.509 keys [an international standard].

From another part of the thread (from Linus to Garrett):

You continue to miss the big question:
  - why should the kernel care?
  - why do you bother with the MS keysigning of Linux kernel modules to begin with?
Your arguments only make sense if you accept those insane assumptions to begin with. And I don't.

And, one of my own questions: Why do we want/need PE binaries when ELF are extensible [the "E" in ELF] and have widely supported tool chains? Answer: Because MS is pushing it.

Garrett only got blasted when he failed to see the big picture and persisted. At this point, I would have blasted Garrett too, and I'm calmer than Linus.

Linus is a calm rational guy. I actually met him briefly back in the 90's. His mailing list posts start out calmly asking "why"? If someone persists and fails to back up their claims with data or clear logic, Linus will turn up the heat. But, he only does that for effect to get smart [but sometimes egocentric] developers to actually think about what they're saying.

I've read a number of these so-called "Linus rant" threads over the years. I'll tell you: at the end, Linus was right. And, I've wanted to set the other party straight.

For example, the infamous broken "code motion" optimizer bug in gcc that had hundreds of posts on LKML and the gcc mailing list in parallel saying "it's not a bug" and "Linus, you aren't a compiler developer", etc. Finally, one of the smarter gcc developers pointed out that the optimization violated the upcoming C standard--and the bug got fixed. Given what I read, I'm surprised Linus stayed as calm as he did :-)

Comment Re:That was easy (Score 1) 867

The first step is to find the disk space for the Linux distro. If your current system has multiple partitions, you can repurpose one if you can move the files off it (e.g. move D:\games to C:\games). You'll probably need to burn a bootable CD with a partition editor; it will grow/shrink partitions [adjusting the filesystems as well]. gparted is one such editor.

Linux works best with a small /boot partition [say 2GB]. It needs a root partition [to future proof, I recommend 60GB], a swap partition that is [at minimum] 2x the size of physical RAM, and a /home partition [make this "all the rest"]. You can get this by shrinking an existing big Win partition that is [say] 10% full and then creating the four partitions from the free space you just created. Or, you could just add a new SATA disk and put all of the Linux partitions there.

The one caveat: Be careful with disks that are >2TB as the /boot partition must be in the lower part of the disk due to limitations with the partition table entries in an MBR (BIOS) boot block. I'm assuming you have a BIOS based system (vs. UEFI) and that your boot block is an MBR. All UEFI systems use [mostly] GPT partitions that don't have that restriction. Some BIOS also support GPT, but converting an MBR to a GPT requires a tool.

When installing grub [the linux boot loader], it replaces the boot loader in the MBR block with grub's. This is fine, because on a pure Win7 system, the boot loader in the MBR just looked for the "active" partition, read the first block from that partition, and transferred control to it. This "second" boot block on the Win7 C: drive will then read the first N blocks from that partition [the "real" win7 boot loader] and transfer control to it.

The first partition of any MBR disk starts at block 1024. That means that blocks 1-1023 are unused. The grub installer places grub's "second stage" boot there. Grub's MBR boot will read block 1 and transfer control. The block 1 boot will read in 2-1023 and transfer control to it (the "real" grub loader). It is this "real" third stage grub loader that will give you the boot menu, etc. When you select "Win 7", it will read the first block of the Win7 partition and transfer control to it. Win7 won't know the difference from this point on.

Further, note that even if you add a second Linux-only disk, the MBR block that matters will be the one on the disk with the lowest SATA slot number (e.g. 0). So, if your Linux-only disk is slot 1, the MBR changes will still be on slot 0. grub should handle this just fine, but if you're squeamish, you could swap the cables so the Linux add-on disk is slot 0. However, the Win7 C: partition would then be on the slot 1 drive, and I honestly don't know if the Win7 boot loader is smart enough to handle this.

Before you proceed, you must create a full win7 backup. I use MS's backup tool. Use the tool to create a "system recovery disk" [a CD]. Tell the tool to "create a system image" during backup and be sure to configure it to backup all your partitions. Note: You must use an external [USB] disk as the backup media. Do not use an internal SATA drive for this [unless you disconnect it after the backup]. I had a disk wipeout, and I was able to restore everything. So, if you make a mistake, you're covered.

As to distros, I use Fedora, but I've been doing this for a while. I configured a system for a friend who was non-technical and chose Ubuntu. I added an extra root partition and put Fedora on the second. Ubuntu had some issues and eventually even my non-tech friend decided on Fedora. Mint or Arch are also good choices. YMMV, so others can probably give you better advice on choice of distro.

The CD based installers for most distros are smart enough to look at all partition tables on all disks and not mess with any Win* partitions they find and will add the Win* bootable ones as grub [boot loader] menu entries [at the bottom].

Comment Re:That was easy (Score 1) 867

You might be interested in dual boot.

When I did this a few years back for a new system I got, it had Vista. I then installed Linux over it. The Linux distro installed grub and found the Vista install and it became a choice in the grub boot menu. Later, I installed an upgrade version of Win7. It didn't touch grub or Linux, so now I've got a full dual boot system. I boot Linux by default, and override at the grub prompt for Win7 for games [or turbotax :-)].

Comment Re:Awesome (Score 1) 130

The solution to this problem can get messy.

Most packaging systems have a control file for each package that specifies dependencies on other packages, with versions, conflicts, etc. They specify deps for the things they care about (e.g. gtk3 version X needs cairo version Y), but they don't always specify the version of gcc et al. that they need, because that's not important from their perspective. That is, they're happy to build with gcc 4.0.0 or 4.1.0 or whatever. Sometimes the deps are specified as "I need package X with version >= 2.6".

To be reproducible, these packages would need to spell out exact versions for the build tools and possibly/probably the exact version for any given package it depends on. It's a bit inflexible to burden an individual package with specifying an exact version dependency [and can break a lot of stuff], so generating an "uber" version/dependency file during the "make world" seems like the sanest approach.
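
As a sketch of what such an "uber" version file could be [hypothetical, and assuming an RPM-based distro], a "make world" run could just snapshot the exact version of every installed package:

    import json, subprocess

    # ask rpm for the exact name and version-release of every installed package
    out = subprocess.run(
        ['rpm', '-qa', '--qf', '%{NAME} %{VERSION}-%{RELEASE}\n'],
        capture_output=True, text=True, check=True).stdout

    versions = dict(line.split(' ', 1) for line in out.splitlines() if line)
    with open('build-environment.lock', 'w') as f:
        json.dump(versions, f, indent=2, sort_keys=True)

The validation pass would then rebuild against exactly those versions instead of whatever ">= X" happens to resolve to.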

But, that may not work either. When distro major releases are done, they can do "make world" and get the exact versions for all packages. But, sometime later, a package may be updated to a new version. For example, it's a forward/backward compatible bug fix to a shared library, so application program packages that depend on it (via rev >= X) don't need to be rebuilt.

Also, a dependency like "gcc >= 4.0.0" [because the package needs a feature that was added in 4.0.0] may work fine for a while, but, later, the package control file may have to be changed to add an upper bound as well [e.g. "gcc >= 4.0.0" plus "less than" whatever newer gcc breaks the build]. Or suppose a system header like stat.h gets a new #define added: do all the packages that include stat.h need to be rebuilt, just to verify that the stat.h change builds the same binary? Further, maybe some apps may be updated to use this new define. It would be easy for a programmer to see that merely adding a #define is harmless and backwards compatible. That is, the smart way is to use the latest stat.h even for older programs, as they will build the same binary regardless of which version is used. But, a validation system would have a hard time determining this and would have to rely on package version numbers.

Existing package systems do have a fair number of extra fields to help with such a choice (e.g. "I'm version x.y.1 and I'm backwards compatible with x.y.0"). But, can/should a validation system rely on this? IMO, it must disregard this and still regen the package and do the compare to detect mistakes in the package control file. That is, the only proof is the final binary comparisons, regardless of what the control file says.

Now comes the fun part ...

Naively, the validation system could loop through all packages and rebuild a container, virtual machine, or even just a simple chroot target directory with the correct dependent package versions for the given package that you wish to validate. Most of the time would be spent simply extracting the packages in order to get the canonical environment for the build/verify operation.

Far better would be for the validation system to build the container initially and just change dependent packages as needed. But, this would require the validator to do global analysis of all of the dependency graph and choose the best way. For example, you'd like to install gcc 5.1.0 and verify most packages, deferring those that need the newer 5.1.1 until the end. Flipping back and forth would be wasteful and it's desirable to avoid this "thrashing".

But, such global analysis requires a satisfiability solver of some sophistication (e.g. libsolv) or you end up with massive amounts of cycles consumed.

As to disk space ... I've been using fedora and do "reposync" so I have both binary and source rpms. For a given fedora version (e.g. fc21), this runs to ~200GB for everything, which is what you'd need on each system if you wanted to "crowd source" the validation effort.

Comment Re:Build timestamps mess this up (Score 1) 130

When I needed to compare binaries, I wrote a script that would clean the source of __DATE__/__TIME__ and RCS/CVS style $Id stuff.

While the $Id keyword was okay in source comments, it was common practice to add something like static char *rcs_id = "$Id$"; to each .c file, in such a way that you'd get a bunch of these strings in the final binary.

This script can be run recursively on a copy of the source tree. Or, it can be done "on the fly" by having the script masquerade as gcc. Then, do the build.
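
To give the flavor [a simplified python sketch, not the actual script, and it skips the masquerade-as-gcc part]:

    import re, sys

    # pin anything that changes from build to build to a fixed value
    FIXUPS = [
        (re.compile(r'\$Id[^$]*\$'), '$Id$'),            # collapse expanded RCS/CVS $Id$ keywords
        (re.compile(r'\b__DATE__\b'), '"Jan  1 1970"'),  # pin the compiler's build-date macro
        (re.compile(r'\b__TIME__\b'), '"00:00:00"'),     # pin the compiler's build-time macro
    ]

    for name in sys.argv[1:]:
        with open(name, encoding='utf-8', errors='replace') as f:
            text = f.read()
        for pattern, replacement in FIXUPS:
            text = pattern.sub(replacement, text)
        with open(name, 'w', encoding='utf-8') as f:
            f.write(text)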

This works a bit better than a special binary diff util because the $Id strings are variable length and could change the offsets [slightly] within many generated instructions.

RCS/CVS also had $Log for comments which pulled in the entire commit log into a block comment. This made even a simple recursive diff on the source trees of two different versions be messy.

Comment FTC to the rescue? (Score 1) 66

In a recent court decision, the FTC's power to levy fines against a company with poor cybersecurity was affirmed.

Consider car manufacturers: sell a car with defective brakes and the FTC can order the manufacturer to recall the vehicles and fix the brakes, regardless of the model year. If the manufacturer fails to implement the recall, the FTC can fine the manufacturer up to $16,000 for each vehicle in the field. With [say] 100,000,000 vehicles in the field, that is a $1.6 trillion fine.

The FTC recently fined Fiat/Chrysler over failing to implement a recall. They tempered the amount of the fine to be enough to generate some pain but not so much as to bankrupt the company.

It's not much of a stretch to extend this to router vendors. Fix security problems and issue a patch [the recall] or face a fine. The fine would far exceed the relatively small NRE cost of fixing the problem in the first place.

As a side note, this would get security fixes issued for older Android versions (e.g. even 2.0.x) as the FTC could fine any vendor that thumbed its nose at such support: Google, phone manufacturer, and/or telco that was the "obstinate" link in the chain.

No more of this WONTFIX nonsense [except on the latest flagship gear] that leaves consumers who paid good money hung out to dry.

"For the love of phlegm...a stupid wall of death rays. How tacky can ya get?" - Post Brothers comics