
Comment not "decoded" (Score 3, Interesting) 292

Scientists Successfully Decode the Genome of Quinoa

Ugh. I know this is primarily a tech site, but why can't we make more of an effort to use the actual scientific terms instead of meaningless, stupid phrases?
It's kind of like saying "Company develops new method to talk to computers" instead of "Company develops new programming language, Rust".

"Scientists sequence and assemble the genome of Chenopodium quinoa (aka "quinoa")"

There, much better. Heck, that's lifted almost word-for-word from the actual scientific article, so it's not like it requires a ton of effort.

Comment Re:Gentetic modification (Score 1) 292

Breeding doesn't modify genes. You cannot breed two animals together or cross pollinate two plants and get arbitrary genes that weren't there to begin with.

Of course it does. I suggest you take a college-level Biology course and learn a little bit about mechanisms of genetic variation before saying things like:

The fact that you don't know that pretty much disqualifies you from any discussion here as you're not fooling anybody.

Comment Re:Am I supposed to hate this or not? (Score 1) 292

Those alterations have to already exist in order for breeding to get anywhere.

No, they don't. The process is just slower for non-GMO breeding. Random mutations can cause the plant to produce new chemicals that may or may not be harmful. In one case, you end up with oranges becoming blood oranges, in another you end up with a potato with way too much solanine.

Just to add to this. Humans may have accelerated the process of cross-Kingdom genetic variation by bringing together organisms from very different geographic and ecological contexts, but that doesn't make it "unnatural." Breeding is also an acceleration of a natural process, unless you think hundreds of varieties of corn all grow in neatly ordered rows to facilitate cross-pollination in the wild.

Comment Re: Am I supposed to hate this or not? (Score 1) 292

You may have learned in your high school biology class that sexual reproduction is the only or primary method of introducing genetic variation within a species, but it really isn't. Genetic variation very frequently comes from other sources. It is no accident (or malfeasance) that one of the methods for introducing genetic modifications into plants uses a bacterium (Agrobacterium) or, for that matter, that naturally-occurring human viruses are frequently used to introduce mutations into human tissue-culture cell lines. Microbes are ancient and everywhere, and they are responsible for a lot of cross-Kingdom genetic exchange.

Comment Re:Am I supposed to hate this or not? (Score 1) 292

That's because you are letting the rules of nature determine the outcome.

Guess what? Genetic engineering is also subject to "the rules of nature" to "determine the outcome." It is not some magic wand that suddenly results in a new organism. The rules that determine whether particular genetic modifications are lethal (to the plant) or effective (change the phenotype), and whether effective modifications are "safe" (do not result in phenotypic changes that are toxic to humans) or not, are a complex system of interacting regulatory networks. How the DNA modification takes place is irrelevant to the outcome. It is foolish to assume that random genetic variation followed by selection (aka "breeding") is any safer or more controlled than directed and specific modifications to the genome. It is also foolish to assume that the "safeness" of any phenotypic change to an organism is context-independent or immutable. See, for example, the increasing prevalence of type II diabetes, which is only now leading to concerns over past breeding practices that produced then-desirable sweeter-tasting (ie: more sugar) and easier-to-digest (ie: less fiber) varieties of common staples (rice, wheat, corn, etc).

Comment Re: Multicore for spreadsheets..? (Score 1) 224

We have complex financial models that are coded in C# for production use but which also exist in spreadsheets for the purposes of documentation and independent model validation.

You might want to have a look at RMarkdown (http://rmarkdown.rstudio.com/index.html). It's pretty much designed for exactly that purpose. You get to use the very nice R framework and document changes to your models very naturally. You can use revision control if you want to. You can even embed interactive widgets if you want to go that far. This guy (http://vnijs.github.io/radiant/) wrote a business intelligence platform on top of R using Shiny. There is lots of cool stuff to do here. More importantly, you get to take advantage of a lot of very robust statistical models, a very active development community, and proper data management support (ex: relational databases). R project files are just text files, so they are easy to archive and version control using whichever tools you already have for those purposes.
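A minimal sketch of what such a document looks like (the model, numbers, and file name here are made up purely for illustration):

  ---
  title: "Discount curve model"
  output: html_document
  ---

  The production C# model assumes continuous compounding; the chunk below
  documents and independently reproduces that calculation.

  ```{r model}
  discount <- function(rate, t) exp(-rate * t)   # discount factor at 'rate' for t years
  curve(discount(0.03, x), from = 0, to = 30,
        xlab = "Years", ylab = "Discount factor")
  ```

Knit it (rmarkdown::render("model.Rmd")) and you get a self-contained HTML report with the prose, the code, and the plot, all of which diffs cleanly under version control.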

Comment Re:Massive failure from all involved (Score 2) 169

No the point of the article was "questions whether more information is the same thing as more understanding."

No, that was not the point of the article at all. The point of the article was that there is an implicit assumption in the field that we just lack sufficient data: that the methodologies used to analyze the data are fine, but because we don't have enough data, we fail to understand cognition. The authors argue that, no, it is not only that there is not enough data, but also that the methodologies are flawed; the methodologies themselves need to be validated. But because we don't have a ground truth for the brain, we have no way of validating them on that data set.

So they are looking for a suitable stand-in, to validate the methodology. That is all. It is not their intention to learn anything about the brain from the microprocessor, just to replicate the known ground truth of the microprocessor using the "reverse-engineering" methods that are common and accepted in the neuroscience field to determine whether they are adequate.

Comment Re:Massive failure from all involved (Score 1) 169

No, they argue that the 6502 (or just a microprocessor) is an acceptable model for validating the approaches used in neuroscience to analyze complex data sets, which is exactly what I said in my comment above. In other words, if they can successfully determine the ground truth of the microprocessor using those approaches and with limited a priori knowledge, then the methodologies have potential. Otherwise, they need to be refined until they are able to do this. Validating against an imperfect model is better than having no validation at all.

More to the point: "Gaël Varoquaux, a machine-learning specialist at the Institute for Research in Computer Science and Automation, in France, says that the 6502 in particular is about as different from a brain as it could be."

I suggest you read the actual scientific article and not just the Economist summary blurb. It is open access.

Comment Re:Massive failure from all involved (Score 1) 169

But that logic only makes sense if microprocessors and brains were similar enough that comparable methods could be used to attempt to understand them. But that isn't true.

Actually, they are arguing that it is true. From the article,

"Obviously the brain is not a processor, and a tremendous amount of effort and time have been spent characterizing these differences over the past century [22, 23, 59]. Neural systems are analog and and biophysically complex, they operate at temporal scales vastly slower than this classical processor but with far greater parallelism than is available in state of the art processors. Typical neurons also have several orders of magnitude more inputs than a transistor. Moreover, the design process for the brain (evolution) is dramatically different from that of the processor (the MOS6502 was designed by a small team of people over a few years). As such, we should be skeptical about generalizing from processors to the brain.

"However, we cannot write off the failure of the methods we used on the processor simply because processors are different from neural systems. After all, the brain also consists of a large number of modules that can equally switch their input and output properties. It also has prominent oscillations, which may act as clock signals as well [60]. Similarly, a small number of relevant connections can produce drivers that are more important than those of the bulk of the activity. Also, the localization of function that is often assumed to simplify models of the brain is only a very rough approximation. This is true even in an area like V1 where a great diversity of co-localized cells can be found [61]. Altogether, there seems to be little reason to assume that any of the methods we used should be more meaningful on brains than on the processor."

It is a really interesting exercise because it highlights a critically important problem in the field of neuroscience: we don't know the ground truth, so analyses of complex data sets cannot be validated. There is no way to know if the methodologies being used are actually effectual. Is a microprocessor the best model system? Probably not. But it is something to start with. If we can validate successfully on that, we have a better chance of succeeding on the brain.

Comment CMS and compatibility (Score 1) 207

This may stem from a lack of awareness on the part of website designers or from the difficulty in a content-management system (CMS) getting the curl direction correct every time.

LaTeX solved this problem a long time ago.
`` = “
'' = ”

Any CMS should easily be able to make these substitutions as well. For the people commenting about code samples and not wanting smart quotes around their literal strings, this solves that problem as well. For code samples, use "" instead of ``''. Problem solved. Not sure why this is so difficult...
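A rough sketch of the substitution as a text filter (this assumes the LaTeX-style input convention above; a real CMS plugin would also need to skip <code> and <pre> blocks, and the file name is just a placeholder):

  sed -e 's/``/“/g' -e "s/''/”/g" draft.txt

Straight double quotes are left alone, so literal strings in code samples pass through untouched.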

Comment Re:What OpenRC ? (Score 1) 95

As for what starts and stops at each runlevel, that's as easy as an "ls". Beats grepping a myriad of MSDOS ini files

Agreed that it isn't quite as nice and easy as an "ls",

Scratch that. It is as easy as an "ls". I was poking around in the /etc/systemd directory and realized /etc/systemd/system is functionally equivalent to /etc/rcX.d/. It is populated by "systemctl enable" rather than edited by hand, but it consists of a bunch of symlinks to service unit files that let you very quickly and easily see which services will be pulled in by a particular target.
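For example, on a typical install (the exact services listed will of course vary from system to system):

  ls /etc/systemd/system/multi-user.target.wants/
  # NetworkManager.service  crond.service  sshd.service  ...

which is the moral equivalent of listing the S* links in /etc/rc3.d.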

Comment Re:Well that's nice (Score 2) 95

So if I update any of the libraries that init uses, all I have to do is a "telinit q"?

systemctl daemon-reexec

That one isn't mapped to a telinit equivalent (I don't think).

Last I checked, that was broken in upstart

Lots of things were broken in upstart. Systemd is a tremendous improvement over upstart, much to the chagrin of Mark Shuttleworth.

And systemd now lets me drop to single user mode? That's an improvement.

That's been there for a while. I'm not sure who these "perps" are that you speak of, but systemd will do anything you tell it to. If you want a single user mode, define a target that creates a single user mode, just like you would define runlevel 1 to not start multiple ttys. I didn't follow every development of systemd as it was happening, and only really jumped in when it came (officially) to Red Hat 7.2 and then Ubuntu 16.04. While there are still some lingering integration issues (mostly with specific daemons), I would say it works pretty well, and the distro maintainers have done a lot of good work with backwards-compatibility scripts to help people transition from sysvinit. So yes, there is a rescue.target, which is also called runlevel1.target on both Fedora and Debian.
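A quick way to see it for yourself (standard systemctl commands; the unit directory is /usr/lib/systemd/system on Fedora and /lib/systemd/system on Debian):

  systemctl list-units --type=target                 # targets currently active
  ls -l /usr/lib/systemd/system/runlevel1.target     # symlink pointing at rescue.target
  systemctl isolate rescue.target                    # drop to single-user (rescue) mode now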

Comment Re:What OpenRC ? (Score 1) 95

One of the absolute worst features of systemd (and inittab when abused) is automatic restart.

I never said anything about automatic restart. Systemd allows you to be alerted to and to respond to process failures. To me, that's predictability. If I start a bunch of network services and one of them fails, systemd will decide whether to continue (ie: the dependency tree allows it) or to fail. Regardless, the outcome is entirely predictable. Services that depend on other services (which includes the target state itself) will have all of their dependencies satisfied, or they won't be started, and anything that fails will be logged in a consistent manner that is easily parseable by a system monitoring utility. When you "telinit 3", sysvinit runs all of the scripts in /etc/rc3.d and if they start, they start; if they fail, they fail. It's up to you to scrape the logs and keep tabs on all of the daemons. The state "runlevel 3" is not guaranteed.
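For what it's worth, this is roughly what declaring those dependencies looks like in a unit file (the unit name and paths here are hypothetical):

  # /etc/systemd/system/myapp.service
  [Unit]
  Description=Example network service
  Requires=network-online.target     # hard dependency: do not start without it
  After=network-online.target        # ordering: only start once the network is actually up

  [Service]
  ExecStart=/usr/local/bin/myapp --serve
  Restart=no                         # no automatic restart; a failure is just reported and logged

  [Install]
  WantedBy=multi-user.target         # pulled in when the multi-user "runlevel" is requested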

Because the start order is 100% predictable.

Ah, ok, that's a different kind of predictable. I agree, start order is not predictable with systemd. I would argue, though, that it doesn't need to be, because you have explicit dependencies tied to the actual started-and-functioning state of a prior process (as opposed to just a numbering scheme), and logged events that let you determine precisely when and where (and often why) a dependency tree failed. You don't need to step through the boot process one script at a time because the log tells you exactly what failed, and you can start your debugging right at that point.
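When something does fail, the digging usually starts with commands along these lines (the service name is hypothetical):

  systemctl list-units --failed          # everything that failed on this boot
  systemctl status myapp.service         # current state plus the last few log lines
  journalctl -u myapp.service -b         # the full log for that unit since boot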

As for what starts and stops at each runlevel, that's as easy as an "ls". Beats grepping a myriad of MSDOS ini files

Agreed that it isn't quite as nice and easy as an "ls", but it definitely is not as complicated as grepping the unit files and trying to figure out when things start. The nice thing about explicitly declaring your dependencies is that you can have systemd show them to you. Let it do the work so you don't have to.
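For example (standard systemctl invocations; the service name will vary, e.g. ssh.service on Debian vs sshd.service on Fedora):

  systemctl list-dependencies multi-user.target        # everything that target pulls in, as a tree
  systemctl list-dependencies --reverse sshd.service   # what, in turn, depends on a given service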

And I find it to combine the worst aspects of Windows 95 .ini files

That's kind of funny, because .ini files were one of the better parts of Win95. They were simple, human-readable, human-editable text configuration files that happened to use [bracketed] section headers, but whatever. Samba actually uses that convention for smb.conf, by the way. Anyway, Windows became much worse when they took away the .ini files and replaced them with the registry. If I need to change a configuration, I would rather do it in a plain non-executable text file than in something structured but cumbersome like XML, or something with variables and conditionals built in that takes time to parse and study, like a shell script.
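For reference, the style in question, as Samba still uses it (the values are just placeholders):

  [global]
     workgroup = WORKGROUP
     server string = Example file server

  [shared]
     path = /srv/share
     read only = no

Systemd's unit files use the same [Section] key=value layout, which is part of why they are so easy to read.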

Comment Re:What OpenRC ? (Score 1) 95

I like good old-fashioned runlevels, and not named abstractions that may differ from system to system

Um, why do you think "old-fashioned" runlevels are any less abstract than named process groups? A runlevel is just a group of processes to start that happens to be named with a number (ex: runlevel 3 could just as easily be called "network-enabled" and function identically). The fact that most Linux distributions used runlevels may have been a convention, but it was hardly a standard. In fact, Red Hat famously used runlevel 5 to distinguish an X environment from a console one, whereas Debian treated runlevels 2 through 5 as identical multi-user environments (desktop session manager or not) and booted to runlevel 2 by default. So I would definitely call runlevels "named abstractions that may differ from system to system". Since derivative distributions (ex: Ubuntu from Debian and Mandrake from Red Hat) tended to adopt the original's runlevel classification, it may have given the appearance that there was a de facto standard, but there really wasn't.
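The compatibility symlinks systemd ships make the point nicely (paths as on Fedora; Debian keeps them in /lib/systemd/system):

  ls -l /usr/lib/systemd/system/runlevel*.target
  # runlevel2.target -> multi-user.target
  # runlevel3.target -> multi-user.target
  # runlevel5.target -> graphical.target
  systemctl get-default     # the named target that replaces the old initdefault entry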

Predictability is good.

Correct. And that includes knowing, when you change runlevels, that your processes actually started, not just that you told them to start and maybe they failed.

So are posix scripts, which continue working even on systems where /bin/sh is lightweight ash or some other bourne family shell that isn't bash.

Some do, some don't. It depends on who wrote the script. When Ubuntu pointed /bin/sh at dash years ago (one of the first attempts to speed up boot times), quite a few of the boot scripts broke and had to be rewritten. If you upgraded Ubuntu and suddenly one of your services didn't start, switching /bin/sh back to bash was usually the easiest fix. They eventually ironed out all of the bugs, but it was shaky for a while.
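The usual culprits are bashisms sitting behind a #!/bin/sh shebang, something like this contrived example:

  #!/bin/sh
  # Fine under bash, broken under dash:
  if [[ "$1" == start ]]; then    # dash has no [[ ]] builtin
      opts=(--daemon --quiet)     # arrays are a bash extension
      echo "starting with ${opts[@]}"
  fi

The POSIX equivalents ([ "$1" = start ] and a plain string of options) run the same everywhere.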
