Comment Re:Crap, the sky is falling (Score 1) 334

Actually, that's not the case. Apps running on your phone use simplified payment verification (SPV), in which the contents of blocks are not validated (the block headers themselves are). So they were agnostic to the kind of issue that led to the unexpected hard fork. Yes, this kind of consensus failure is pretty disastrous, but it didn't actually affect many end users, and it will only get rarer in future as testing improves.
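The header-only check an SPV client performs can be sketched roughly as follows (Python; the helper names are mine, but the 80-byte header layout, the compact-target encoding, and the genesis block header used as test data are well-known constants):

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def check_header_pow(header: bytes) -> bool:
    """SPV-style check: validate an 80-byte block header's proof of
    work without ever looking at the block's transactions."""
    assert len(header) == 80
    bits = int.from_bytes(header[72:76], "little")    # compact ("nBits") target
    exponent, mantissa = bits >> 24, bits & 0x007FFFFF
    target = mantissa * 256 ** (exponent - 3)
    block_hash = int.from_bytes(dsha256(header), "little")
    return block_hash <= target

# The genesis block header, serialised (version, prev hash, merkle
# root, timestamp, nBits, nonce - all little-endian):
genesis = bytes.fromhex(
    "01000000" + "00" * 32 +
    "3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a"
    "29ab5f49" "ffff001d" "1dac2b7c")
print(check_header_pow(genesis))  # → True
```

A client doing only this (plus checking that each header links to its predecessor) never parses transactions, which is why the block-validity bug behind the fork couldn't trip it up.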

Comment Re:It's not a full node (Score 5, Informative) 150

A full node is a really, really large amount of work. I feel that lots of people don't realise this, get enthusiastic and think, "I love Bitcoin! I love Go! I'll write Bitcoin in Go!", where for Go you can substitute basically any language that's fun or popular. Then they write the easy bits (like wire marshalling), and eventually the project dies around the time it becomes necessary to implement the wallet or Bloom filtering or robust test suites. Possibly Conformal is different; we'll have to wait and see, but the feature set they advertised in their blog post is very much what has been seen many times before. In particular, there's no handling of the block chain or re-orgs, no wallet, and no infrastructure to test edge cases.

One reason implementing Bitcoin properly is not fun is an entire class of bugs that doesn't exist in normal software - chain-splitting bugs - which can be summed up as "your software behaves how you thought Bitcoin is supposed to work rather than how the original bitcoind actually works". Bitcoin is highly unusual in that it implements group consensus: lots of nodes have to perform extremely complicated calculations and arrive at exactly the same result in lockstep, to a far, far higher degree of accuracy than other network protocols require. This means you have to replicate the same set of bugs bitcoind has. Failure to do so can open up security holes via consensus failure, which can in turn lead to double spending (and thus your users losing money!).

Being compatible with the way bitcoind is written (bugs and all) may require you to break whatever abstractions you introduced to make the code cleaner or more elegant, or whatever your reason was for reimplementing Bitcoin. Here's a trivial example: signatures in Bitcoin carry an additional byte that selects between a few different modes - actually one of three modes plus a flag. So a natural way to implement this is as an enum representing the three modes plus a boolean for the flag. But that won't work. There is a transaction in the block chain with a sighash byte that doesn't fit any of the pre-defined values (it's zero), and because Satoshi's code uses bit testing, it still works. But if you turn the flag into an enum, you'll re-serialise that byte wrongly and arrive at an incorrect hash. So you have to pass these flags around as raw integers and select between modes via bit testing as well.
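A minimal sketch of the trap (Python; the constants match Bitcoin's actual sighash values, but the function names are mine and the "clean" version is deliberately the naive design described above):

```python
# Bitcoin's actual sighash constants.
SIGHASH_ALL, SIGHASH_NONE, SIGHASH_SINGLE = 1, 2, 3
SIGHASH_ANYONECANPAY = 0x80

def mode_bits(hash_type: int) -> int:
    """How bitcoind selects the mode: mask the raw byte. Unknown
    values (including 0) simply fall through to default behaviour,
    and the raw byte itself is preserved for hashing."""
    return hash_type & 0x1F

def naive_roundtrip(hash_type: int) -> int:
    """The 'clean' enum-style reimplementation: classify into one of
    three modes plus a boolean, then re-serialise. This silently
    normalises the historical sighash byte 0 to SIGHASH_ALL (1),
    producing a different signature hash than bitcoind."""
    anyone_can_pay = bool(hash_type & SIGHASH_ANYONECANPAY)
    masked = hash_type & 0x1F
    if masked == SIGHASH_NONE:
        mode = SIGHASH_NONE
    elif masked == SIGHASH_SINGLE:
        mode = SIGHASH_SINGLE
    else:
        mode = SIGHASH_ALL        # 0 gets "cleaned up" here - bug!
    return mode | (SIGHASH_ANYONECANPAY if anyone_can_pay else 0)

print(naive_roundtrip(0))  # → 1: the byte no longer round-trips
print(0 & 0x1F)            # bit testing keeps the raw value: 0
```

For every sighash byte that actually fits the enum the two approaches agree; only the one odd historical transaction exposes the difference, which is exactly why this class of bug survives casual testing.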

Bitcoin is full of these kinds of weird edge cases. Eventually you come to realise that reimplementing it is dangerous, and whatever benefits you thought a reimplementation had, it probably doesn't. Some people believe there should be independent reimplementations anyway, and I can understand and respect that, but doing it safely is an absolutely massive piece of work. You have to really, really, really believe in diversity to do it - the features of the language-of-the-day aren't good enough to justify the effort.

Comment Re:fly brains (Score 1) 209

Thank you, your comments give me a much better understanding.
However, let me add a few thoughts.
"perceptions ... no continuous real-world sensors"
To me, that sounds a lot like conceptual learning isolated from perception. YMMV.
Though I do not know whether "intelligent" behaviour can emerge without the challenges posed by a body full of sensors, plus (parallel) means to cope with them, interfaced to a brain that (in my view) at a high level (call it consciously; think focus of attention) concentrates on controlling one task, namely generating "intention" or "goals".
"Forgetting could be an actively decided optimization parameter, as opposed to a byproduct of capacity."
Which may occur in the "real world" as well, though presumably focussed in the realm of "emotions" (BTW, this raises the question of how emotions interact with more or less cognitive processes).
"not a constantly active information stream"
Crucial, and I am fine with the whole paragraph, especially as you somehow emphasize the "tool" aspect, which gives you a lot more degrees of freedom compared to efforts to engineer some "reality".
"Also, being self-destructive indicates 'not intelligent'?"
This is taken out of context, namely "immediate trust". My remark was triggered by an (admittedly dim) recollection of a classification Stegmüller made (K1, K2, K3 systems) with regard to teleological systems. IIRC, one can extend the scheme to a continuum, from acting immediately in response to an input to tailoring the action to the outcome of building a "complete" model/simulation of the context (warning: recursion ahead).
I agree that suicide might be an "intelligent choice". Ethics and morals add yet another layer.
"Besides, an artificial intelligence ..."
You are probably better off if you name your envisioned system along the lines of "cognitive augmentation". This lowers expectations while remaining complex enough, shifts the focus from "basic" to "applied" (funding? I speculate "applied" has more appeal), and makes the goal scalable (creating backdoors when confronted with too many nontrivial problems) by redefinition of the target group.
"Intelligence requires weariness? Intelligence negates meticulousness? The pursuit of goals is not intelligent?"
For an autonomous system, which a tool is not, yes to both: sleep, fuzziness.
It was not "pursuit of goals" but "follow instructions". Anyhow, with the "tool focus", this is irrelevant.
"Given proper sharing of context, instructions in natural language can be unambiguous"
For practical purposes, yes. IMHO, theoretically, no (Gödel).

Disclaimer: I am only expressing my opinions here, which are based on what is left from working in the field in the '80s and loosely following the (more or less meagre) developments since then.

CC.

Comment Re:fly brains (Score 1) 209

So, more detail.
"perfect recall"

Conflicts with prioritizing if you have provisions for priority zero (forgetting; irrelevant whether the link goes away or the information is erased).
"could converse about its knowledge and thought processes"
Telling more than we can know (Nisbett & Wilson, 1977, Psychological Review, 84, 231-259), protocol analysis, expert interviews: evidence that this is at least not always possible. My hypothesis is that too much metaprocessing would lead to a deadlock.
"conversational feedback would have immediate application without lengthy retraining"
Would imply that the system trusts immediately. That would probably be rather self-destructive, and thus not intelligent.
"tirelessly and meticulously follow instructions given in natural language"
The antithesis of intelligent behaviour?
So now I say that I see a recursive combinatorial explosion happening during conflict resolution.
CC.

Comment Re:fly brains (Score 1) 209

I'd love to have a system that had perfect recall, could converse about its knowledge and thought processes such that conversational feedback would have immediate application without lengthy retraining, and could tirelessly and meticulously follow instructions given in natural language.

I see a combinatorial explosion at the horizon.

CC.
