Comment: Re:Confused about how this works (Score 4, Informative) 105

by paskie (#47334813) Attached to: Fixing Faulty Genes On the Cheap

CRISPR is a tool that allows you to cut DNA into two pieces at a specific point (the specification of this point is a parameter of a particular CRISPR instance). What happens then depends on your setup: bacteria will just insert some junk at the break point, or you can pack custom DNA sequences in along with the CRISPRs and they will be spliced in, joining onto each of the two loose ends. Thanks to this, at that specific point, you can disable a gene, modify it, or add an extra sequence.
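
To make the "parametrized cut point" idea concrete, here's a toy sketch in Python that treats DNA as a plain string; the names are made up and it ignores the actual molecular biology (PAM sites, both strands, imperfect repair):

    # Toy model of a CRISPR-style cut-and-splice on DNA-as-a-string.
    # Purely illustrative; real genome editing is vastly more involved.

    def cut(dna: str, guide: str) -> tuple[str, str]:
        """Cut the DNA into two pieces at the site matching the guide."""
        site = dna.find(guide)
        if site == -1:
            raise ValueError("guide sequence not found")
        cut_point = site + len(guide)  # cut right after the matched site
        return dna[:cut_point], dna[cut_point:]

    def knock_in(dna: str, guide: str, insert: str) -> str:
        """Splice a custom sequence in at the cut, joining both loose ends."""
        left, right = cut(dna, guide)
        return left + insert + right

    genome = "ATGGCCTTTGACCGTAAAGGG"
    print(knock_in(genome, guide="TTTGAC", insert="GATTACA"))
    # ATGGCCTTTGACGATTACACGTAAAGGG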

We had tools to do this before - restriction enzymes or TALENs. They weren't really usable for therapeutic purposes, though, due to much less reliable targeting, more laborious engineering (parametrizing your instance for a specific sequence) and low efficiency (the break happens in only a few percent of cases). CRISPRs are easily parametrized, can be precisely targeted, and have efficiency in the tens of percent (in general; it can vary organism by organism). It's still a work in progress, but it looks pretty promising!

Comment: Re:The 'test' was fixed (Score 1) 432

by paskie (#47193945) Attached to: Turing Test Passed

+1 Insightful. :-) Now, this is something I completely agree on - we need a better test than the original imitation game, with some restrictions and incentives. Hmm, that actually almost sounds like a TV show!

Your proposal sounds fairly reasonable, though I think "exposing chatbots" is way too aggressive - we don't need Blade Runner style interrogations; that just doesn't seem like a sensible goal. We just want to push the conversations to a higher, intellectual level to test the computers' ability to deduce and relate like a human; pick people accordingly and also offer incentives for winning against the computer.

Comment: Re:Turing Test Failed (Score 1) 432

by paskie (#47193941) Attached to: Turing Test Passed

I don't think pretending to be a person who isn't fluent in English is cheating in the imitation game, as long as the conversation still happens in English; remember, the judges are still talking to the human too! This result does say a lot about computer capabilities, and may have implications for spam, but also e.g. call center automation.

I agree that, based on this experience, we can add some extra restrictions to the imitation game to make it a much more useful benchmark for progress in AI.

Comment: Re:A pretty low requirement (Score 1) 432

by paskie (#47192319) Attached to: Turing Test Passed

I'm developing an open source IBM Watson analog, and I don't really care *how* my brain works when solving this task, because I am dealing with a different computation platform. My point was about what *function* the brain performs at a high level. And my brain, in this task, acts like a search engine over the facts I have learnt - no matter how it does it.

Comment: Re:A pretty low requirement (Score 3, Insightful) 432

by paskie (#47192143) Attached to: Turing Test Passed

...and what is your brain, during a game of Jeopardy, if not a search engine?

Of course, (at least) advanced deductive capabilities are also important for general intelligence. That's the next goal now. (Watson had some deductive capabilities, but they were fairly simple and somewhat specialized.) We gotta take it piece by piece; give us another few years. :-)

Comment: Re:Turing Test Failed (Score 4, Insightful) 432

by paskie (#47192119) Attached to: Turing Test Passed

What has been conducted precisely matches Turing's proposed imitation game. I don't know what you mean by a "full-blown Turing test"; the imitation game is what that has always meant, including the 30% bar (because the human judge has three options - human, machine, don't know). Of course, it is nowadays not considered a final goal, but it is still a useful landmark even if we have a long way to go.

That's the trouble with AI: the expectations are perpetually shifting. A few years back, a hard task is considered impossible for computers to achieve, or at least many years away. Then it's passed, and the verdict promptly shifts to "well, it wasn't that hard anyway and doesn't mean much", and a year from now we take the new capability of machines as a given.

Comment: Re:Thirty percent? (Score 1) 432

by paskie (#47192107) Attached to: Turing Test Passed

The reason is simple - the human judge is also allowed to answer "don't know" in Turing's imitation game. So with purely random answers, the probability of each option is 1/3.

(I think forcing the judges to pick one would make the results more clear-cut; I'm not sure about Turing's reasons here.)

Anyway, the 30% bar was proposed in the original paper, and it is what "Turing's test" has _always_ meant.
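
As a quick sanity check of that 1/3 baseline, here's a hypothetical little simulation in Python where the judge guesses uniformly at random among the three verdicts:

    # Sanity check of the random-guessing baseline for a three-way verdict.
    # If a judge picks uniformly among "human", "machine" and "don't know",
    # the machine gets labelled "human" in about 1/3 of conversations.
    import random

    random.seed(42)
    verdicts = ["human", "machine", "don't know"]
    trials = 100_000
    fooled = sum(random.choice(verdicts) == "human" for _ in range(trials))
    print(f"machine judged human in {fooled / trials:.1%} of trials")  # ~33.3%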

Comment: Games! (Score 1) 172

by paskie (#46935473) Attached to: Ask Slashdot: Beginner To Intermediate Programming Projects?

Make a game. Or contribute to an existing open source game. You can easily set and adjust the scope and depth of the project so that it's fun and challenging. Chances are you already play some games you like, and you can find inspiration for your own game project there. And perhaps others will even find it fun to play.

Somehow, when I've been playing a game for any period of time, sooner or later I slowly switch to hacking the codebase, as it ends up being even more fun. :-) If you're interested in building a non-trivial game, you may find it interesting to take a look at the code of existing open source games and start hacking on them. You will find fun and rewarding low-hanging-fruit features lying all around. In strategy games - Freeciv, OpenTTD, Wesnoth, Widelands... - in arcades like Supertux or Stepmania, or even an FPS like Xonotic. Or a UI or computer player for a board game.

Games are also nice because they are very multi-faceted - you can start by adding simple features, but also work on optimization and better core algorithms, graphics programming, network programming, improving the user interface, porting to a new platform, or having a go at building an AI computer opponent. Hey, try building an AI for OpenTTD; none of the existing ones is perfect, and it has a nice plugin system. And if you get more involved, imho such projects look pretty cool on any programmer's CV.

Comment: Re:It's not underresourced (Score 1) 175

by paskie (#46909457) Attached to: Free Can Make You Bleed: the Underresourced Open Source

I actually think it's not really possible to make it fool-proof. You may eventually get it right, as in mathematically right in some formal system, but then the problem is the quality of your formal system.

10 years ago, people often wouldn't account for timing attacks (though I admit they were proposed ~20 years ago) and things like that. It's still quite possible that there are attacks no one has conceived of yet, and implementations may or may not be vulnerable. Heck, it's possible that a specific sequence of instructions your single true implementation compiles to on some future architecture triggers a subtle bug.
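
For the curious, timing attacks are the classic example of this kind of oversight. A sketch in Python: a naive == comparison can return sooner the earlier the first mismatching byte occurs, which is exactly what an attacker measures; hmac.compare_digest is the standard library's constant-time alternative.

    # A naive equality check short-circuits at the first mismatching byte,
    # so an attacker timing many requests can recover a secret byte by byte.
    import hmac

    def check_token_naive(supplied: str, secret: str) -> bool:
        return supplied == secret  # short-circuits on the first mismatch

    def check_token_safe(supplied: str, secret: str) -> bool:
        # Runs in time independent of where the strings first differ.
        return hmac.compare_digest(supplied.encode(), secret.encode())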

I still believe that even for the most basic plumbing, diversity is a good thing, and that it's not possible to get any slightly complex software 100% right and 100% foolproof in the real world, even if you manage to do it in an abstract formal system.

Comment: Re:It's not underresourced (Score 4, Insightful) 175

by paskie (#46907341) Attached to: Free Can Make You Bleed: the Underresourced Open Source

In some cases, fragmentation is bad. In the case of critical infrastructure, fragmentation is great!

Having multiple interoperating implementations has always been one of the basic requirements for internet standards; it ensures future growth and leaves out the worst warts, dependencies on undocumented behavior, etc. But most importantly, if a bug is found in one of the implementations, it cannot take out the complete internet infrastructure, because large parts of it are running a different implementation. Even if a bug is found at the protocol level, some implementations may not implement that feature, or may implement it slightly differently, and aren't affected. Fragmentation is essential to the robustness of the internet.

Comment: Re:Parent SHOULD NOT be modded flamebait (Score 2) 178

by paskie (#46871325) Attached to: New Zero-Day Flash Bug Affects Windows, OS X, and Linux Computers

I just, like many others, wish someone would actually fucking *elaborate* on the *concrete* *technical* hurdles of HTML5. We are not denying they exist, but just saying "you are clueless if you need to ask" is not going to help your position. We don't want to argue with you; we want you to actually explain yourselves. Gee, this thread is so frustrating.

Comment: Re:more modern == less useful ? (Score 1) 57

by paskie (#46866579) Attached to: GNU Mailman 3 Enters Beta

I completely agree that the mail archives UI is awful. Mailman2 archives could use many improvements (nicer thread browsing including cross-month threads, _optional_ thread collapsing, web-form replies, fulltext search, ...), but I don't really follow the direction in which HyperKitty is going - views like https://lists.stg.fedoraprojec... are a complete mess; having a concise one-mail-per-line view had great value...

It's still beta, so I'm not hopeless; I think HyperKitty could be made much more usable with a few simple UI tweaks (and hopefully things like comment voting are optional). Perhaps we will get / can make a "classic theme". :-)

Comment: Re:WTF? (Score 2) 188

by paskie (#46786925) Attached to: Heartbleed Sparks 'Responsible' Disclosure Debate

"Very well known?" This is very much *not* the way how for example many security bugs in linux distributions are handled (http://oss-security.openwall.org/wiki/mailing-lists/distros). Gradual disclosure along a well-defined timeline limits damage of exposure to blackhats and at the same time allows enough reaction time to prepare and push updates to the user. So typically, once the software vendor has fixed the issue, they would notify distributions, which would be given some time to prepare and test an updated package, then the update is pushed to users at a final disclosure date.

For a bug of such severity, I'd agree that the embargo time of 7-14 days used by distros@ is way too long. But a 12-24 hour advance announcement would be quite reasonable. Large website operations typically have the staffing to bring a fix for a critical bug (similar in potential damage to a service outage) online within 6-12 hours, so a next step would be passing the information from the distributions to these users (e.g. via a support contract with a distros@-subscribed vendor).

In this timeframe, you have a good chance to prepare updated packages for the major architectures and do an emergency rollout. At the same time, even if there is a leak, it needs to propagate to skilled blackhat developers, they need to develop an exploit, and that exploit needs to reach the people who would deploy it - all within the remaining time.
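
To make the staging concrete, here's a sketch in Python of what such a timeline could look like; the stage names and hour offsets are purely illustrative assumptions on my part, not the actual distros@ policy:

    # Illustrative embargo schedule; the stages and offsets are made up.
    from datetime import datetime, timedelta

    EMBARGO_SCHEDULE = [
        (0,  "vendor fix ready; distributions notified under embargo"),
        (6,  "distributions finish building and testing updated packages"),
        (12, "large operators with support contracts start emergency rollouts"),
        (24, "public disclosure; updates pushed to all users"),
    ]

    def print_schedule(notified_at: datetime) -> None:
        for offset_h, stage in EMBARGO_SCHEDULE:
            when = notified_at + timedelta(hours=offset_h)
            print(f"{when:%Y-%m-%d %H:%M}  {stage}")

    print_schedule(datetime(2014, 4, 7, 9, 0))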

Comment: Re:I take it this is a server concern (Score 2) 303

by paskie (#46689895) Attached to: OpenSSL Bug Allows Attackers To Read Memory In 64k Chunks

I *think* it might be feasible to exploit your web browser to steal cookies or saved credentials if you connect to a rogue https site. Credentials are always nice for spamming. If the attacker convinces people to keep the site open in another tab, they might get lucky and snoop some credit card numbers or banking credentials too. A regular person should fear mainly automated attacks like this.

(Please do prove me wrong if I didn't get the attack potential here right.)
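
For illustration, here's a toy model in Python of the over-read itself. The real bug is a missing bounds check around a memcpy in C; this just mimics the effect, and all the names below are made up:

    # Toy model of the Heartbleed over-read. The server echoes back as many
    # bytes as the peer *claims* its payload is, instead of as many as it
    # actually sent, leaking whatever sits next to the payload in memory.
    # (Python slicing clamps at the buffer end; the C code read past it.)

    def heartbeat_response(memory: bytes, payload_offset: int,
                           claimed_len: int) -> bytes:
        # Vulnerable pattern: trust the peer-supplied length.
        return memory[payload_offset:payload_offset + claimed_len]

    # Process "memory": a 5-byte payload followed by unrelated secrets.
    memory = b"hello" + b" ... session_cookie=s3cr3t; privkey=..."
    print(heartbeat_response(memory, payload_offset=0, claimed_len=64))
    # b'hello ... session_cookie=s3cr3t; privkey=...' -- far more than 5 bytes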
