Biotech

Human DNA Enlarges Mouse Brains 193

sciencehabit writes: Researchers have increased the size of mouse brains by giving the rodents a piece of human DNA that controls gene activity. The work provides some of the strongest genetic evidence yet for how the human intellect came to surpass that of all other apes. The human enhancer causes cells that are destined to become nerve cells to divide more frequently, providing a larger pool of cells that become part of the cortex. As a result, mouse embryos carrying human HARE5 have brains that are 12% larger than those of mice carrying the chimp version of the enhancer. The team is now testing these mice to see whether the bigger brains make them any smarter.

Comment Re:Not quite sure (Score 1) 199

The problem was that the odd-numbered versions weren't getting enough testing, so the real testing didn't happen until the even-numbered release. So the system seriously slowed the pace of kernel development without significantly improving stability.

True, and the solution to that should be that nothing except emergency fixes goes into the stable branch until it's tested in unstable.

And I disagree with the AC you replied to that Torvalds has a god complex. Not so - he's quite capable of changing his mind as situations change, and dropping the odd/even scheme is just one example of that. He has made mistakes in the past, and he'd probably be the first to tell you so. But he's a leader type in the sense that he's willing to make hard decisions even when they could turn out to be mistakes.

Programming

Ask Slashdot: What Portion of Developers Are Bad At What They Do? 809

ramoneThePoolGuy writes: We are looking to fill a senior developer/architect position at our firm. I am disappointed with the applicants so far, and quite frankly it has me worried about the quality of developers/engineers available to us. For instance, today I asked an engineer with 20+ years of experience to describe the basic process of public/private key encryption. This engineer had no clue. I asked another applicant a similar question: "Suppose you wanted to send me a file with very sensitive information; how would you encrypt it in such a way that I could decrypt it?" The person started off by asking me whether it was an Excel file, a PDF, etc. In general, I'm finding that an overwhelming number of the developers I've interviewed have a poor understanding of key concepts, especially when it comes to securing data. Are other firms having the same trouble finding qualified applicants? (Quite frankly, it scares me that some of these developers are building sites that need to be secure.)
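
For what it's worth, the answer being fished for fits in two GnuPG commands (the address and filename below are made up for illustration): encrypt with the recipient's public key, and only the holder of the matching private key can decrypt. The file type never enters into it.

# Sender: encrypt to the recipient's public key (address is hypothetical)
gpg --encrypt --recipient interviewer@example.com sensitive-report.xlsx

# Recipient: decrypt with the matching private key
gpg --decrypt sensitive-report.xlsx.gpg > sensitive-report.xlsx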

Comment Re:Not quite sure (Score 1) 199

Indeed.
2.0 -> 2.2 -> 2.4 -> 2.6 all heralded major changes, while 2.6 -> 3.x did not.
I think there should have been one major bump (to 2.8?) in there for devicetree - that's a big change that introduced plenty of potential incompatibilities and required changes to userland. But it went in under a minor number, which seems rather wtf to me.

My recommendation: Switch to 4.1 and go back to an odd/even system so 4.2 will be the next stable.
Also add a marker for whether a stable kernel is also designated long-term stable, because right now you have to just know which ones are LTS, or look it up on Cox's or Morton's sites.

Comment Re: Yes (Score 1) 716

Did you even try it?

Outside production, yes.
In production, no way. The amount of work required just to try it was determined to be immense, and there were incompatibilities from the start that look prohibitive - like systemd taking over cgroups when we already use cgroups for our own purposes, and HAL not working because of incompatibilities with the "improved" D-Bus, while we rely on user-triggered external hardware detection, not system-triggered detection.
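
A rough illustration of the cgroup clash, assuming a cgroup-v1 style setup where our own groups are created by hand (the paths, group name and $APP_PID are made up for illustration):

# Our own resource group, created the old-fashioned way under the cpu controller
mkdir /sys/fs/cgroup/cpu/ourapp
echo 512 > /sys/fs/cgroup/cpu/ourapp/cpu.shares
echo $APP_PID > /sys/fs/cgroup/cpu/ourapp/tasks

# On an EL7 host, systemd regards itself as the owner of the hierarchy:
# systemd-cgls shows every service already slotted into its own slice/scope,
# and groups it doesn't know about risk being cleaned up or fought over.
systemd-cgls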

I currently run a test system with EL7 that needs neither cgroups nor HAL, and when it works, it is not much of a problem - but that can be said of any software, regardless of quality. It's when it doesn't work and a boot crashes that systemd leaves you stranded. If you haven't provoked boot problems to see what happens and how to recover, you have not truly tried EL7.

Comment Re: Yes (Score 1) 716

Along the way, your RHEL6 will be fine, and it will grow cold, like they all do, as will your skills. I don't particularly care for systemd, but I learned it in a couple of hours, and yeah, it works.

The problem with systemd is when it doesn't work. You risk a system that's inoperable - you can't boot to single-user mode, and you can't even check the logs from a different system, because they're not human-readable and may not have been committed to disk. And you can't single-step through the tasks one by one and see what happens.
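
To make that concrete (paths are illustrative): with classic syslog you can mount a dead machine's disk on a rescue system and go straight at the logs with ordinary text tools; the journal is a binary store that those tools can't do anything useful with.

# Classic syslog: readable from any rescue system with plain text tools
grep -i 'segfault' /mnt/dead-box/var/log/messages

# journald: the on-disk journal is binary; cat/less/grep give you noise at best
grep -ia 'segfault' /mnt/dead-box/var/log/journal/*/system.journal
# (any hits come back wrapped in unreadable binary records)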

Never mind that it's also based on MS-DOS-style .ini files, which is about the most automation-unfriendly format there is (see the unit-file sketch below). What a key means depends on which section it sits in, and there's no inheritance whatsoever. It's a file format that should have expired in the 90s, and thankfully never caught on in the Unix world. Because of that truly WTF choice, modifying settings through scripts becomes an error-prone headache.
And at the first error - boom, you have a system that can't be brought up or troubleshot. Better keep a bare-metal backup handy.
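
For the uninitiated, a unit file looks roughly like this (service name and paths invented for illustration); the same-looking key=value lines mean entirely different things depending on the section they sit in, and there is no nesting or inheritance to lean on:

# /etc/systemd/system/ourapp.service -- name and paths invented for illustration
[Unit]
Description=Our in-house batch daemon
# "After" is only meaningful inside [Unit]...
After=network.target

[Service]
# ...while "ExecStart" and "Restart" only mean anything inside [Service]...
ExecStart=/opt/ourapp/bin/ourappd --foreground
Restart=on-failure

[Install]
# ...and "WantedBy" only inside [Install].
WantedBy=multi-user.target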

What this sysadmin cares about isn't how fast a system boots[*], but that it's stable, consistent, has as few dependencies as possible, is debuggable when the shit hits the fan, and that no single failing part can bring down the whole house.

[*]: When an IBM system spends ten minutes in pre-boot running self-checks and enumerating all the SCSI disks, I couldn't care less whether it cuts the boot that follows down from a minute to half a minute.

Comment Re:Luckily Red Hat is not the only distro (Score 1) 716

D-Bus, HAL, systemd, all this half-implemented GUI stuff... it's too unpolished and counter-intuitive for the clueless, and too complex for the clueful, basically because they keep chasing the former market at the expense of the latter.

I could live with HAL; it provided useful functionality. But whoever created the command-line interface to HAL must have been a masochist who loved typing and had an aversion to allowing more than one qualifier per command - witness what it takes just to list the device nodes of removable media that currently have something in them:

# Walk every HAL device that advertises the 'storage' capability
for udi in $(hal-find-by-capability --capability storage); do
    # Only keep removable drives that currently have media inserted
    if [ "$(hal-get-property --udi="$udi" --key storage.removable.media_available)" = "true" ]; then
        # Print the block device node, e.g. /dev/sdb
        hal-get-property --udi="$udi" --key block.device
    fi
done

Comment Re:The alternative (Score 1) 411

Imagine a language with no fluff, no cruft, no boilerplate. Everything is essential and concise. You have something akin to either assembly or too-clever Perl. The fluff is necessary. The fluff provides context, readability, and maintainability.

It also provides its own bug opportunities. Indeed, from looking at what Coverity finds, most defects wouldn't have existed without the fluff.

I'm not advocating that people migrate to assembly or Perl, but whenever you cannot point to exactly where something happens, you have overused abstraction.

Comment Re:"Not intentional". Right. (Score 1) 370

The Cox "Browser Alerts" seem to only come from three IPs. I blocked them at my router and haven't experienced any problems. I don't have those IPs handy, but found them when NoScript listed them as choices to Allow/Forbid. (In my case, they were "letting me know" that I should upgrade from a 2.x to 3.x DOCSIS modem.)

They still modify the content. Say I use wget to recursively fetch a web site I run somewhere outside, and burn it to a CD for someone to look at. That person will then see the content that Cox inserted. That's not cool at all.
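
Concretely, something along these lines (the URL is invented for illustration):

# Mirror the site over plain HTTP, exactly as Cox's middleboxes see it
wget --recursive --level=inf --convert-links --page-requisites http://www.example-site.org/
# Anything injected in transit is now baked into the local copy that goes on the CD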

Something is wrong if you have to rely on SSL to protect you from your own internet provider!
