Science

Ternary Computing

eviltwinimposter writes: "This month's American Scientist has an article about base-3, or ternary, number systems and their possible advantages for computing and other applications. Base-3 hardware could be smaller because it needs fewer components, and ternary logic can return less-than, greater-than, or equal rather than just the binary true or false, although as the article says, '...you're not going to find a ternary minitower in stock at CompUSA.' Ternary also comes closest of any integer base to e, the ideal base in terms of efficiency, and it has some interesting properties, such as unbounded square-free sequences." Also in other formats.
This discussion has been archived. No new comments can be posted.

  • ... the choices will be 0, 1, and Maybe :)

    Actually not a bad step. I wonder, when they look at quantum computers using light, whether this might be an easier step to integrate. There was a previous article here talking about light-based quantum computing; give it a few years :)
    • Not a bad step? Towards what? What do you mean, an easier step to integrate? I don't understand much about quantum computers using light. What does ternary computing have to do with quantum computers using light?
      • Light has infinite wavelengths (not in reality, since only certain wavelengths are emitted, but you can combine those with different techniques to get arbitrarily many). I'm sorry you don't understand much about quantum computers constructed with light; I suggest reading up on them.

        Since you have an infinite number of selections to choose from, and since it was demonstrated that base e is the most efficient base in which to represent numbers (i.e., representation in base e is better than in other bases), it stands to reason that quantum computers based on light should be designed to utilize base e; but since that isn't very practical, ternary might be the first logical step.

        And how come I got rated offtopic? Quantum computing is the logical next application of ternary computing, since binary is pretty much entrenched in everything from your local computer reseller to every time you toss a dime for 'heads or tails'.

      • I don't understand much about quantum computers using light

        Nobody knows much about quantum computers using anything...
    • by Tumbleweed ( 3706 ) on Tuesday October 30, 2001 @03:26PM (#2498729)
      > the choices will be 0, 1, and Maybe

      You're all wrong.

      There can BE only ONE!

      :)
    • the choices will be 0, 1, and Maybe

      So does this mean that computers and consumer electronics devices' power switches would stop being labeled with just 0 and 1?

      Doesn't it make sense that your ternary computer from BestBuy would have a three-state power switch?
    • I wonder when they look at quantum computers using light

      Quantum computers using photons are a good idea because photons are very well insulated from noise and decoherence; however, it is a bitch to make them interact with each other for the same reason, so gates like controlled-NOTs will be next to impossible to implement.

      There is, however, a non-deterministic QC based on linear optics where multi-qubit gates work 1/16th of the time or something. I don't expect it will ever be useful for doing real computing, though.

      The paper is here [lanl.gov].
  • by Amazing Quantum Man ( 458715 ) on Tuesday October 30, 2001 @02:43PM (#2498444) Homepage
    They've finally invented my favorite circuit... the Maybe gate?
    • They've finally invented my favorite circuit... the Maybe gate

      Good... Hopefully this will let us design computers with much less Bill gates.
    • All those modal logic guys can finally implement the decidable portions of their theory without base-level translation. woot. or wait, I forgot about all those other truth value schemes... like the boolean complete truth value system. drat.

      <g>
      -l
    • Dear god, Intercal in hardware. I'll never sleep again.
  • well known (Score:2, Insightful)

    by 4im ( 181450 )
    In theoretical CS classes we learned all about it; it's not exactly news.
    The thing is, it's simpler to manufacture binary logic than ternary.
    So, no big deal really... the choices were made some time ago.
    Next step: quantum computing.
  • Not base3 again (Score:4, Insightful)

    by GunFodder ( 208805 ) on Tuesday October 30, 2001 @02:45PM (#2498463)
    Seems like someone has to bring up base3 computing every once in a while, just like asynchronous circuit design. I'm sure there are plenty of reasons why they are technically superior. But it has taken us 50+ years to get to this point with synchronous circuit design and binary logic. It would take many years to get to this point using totally new technology, and in the meantime the current computer industry would continue to grow exponentially. I'll believe in these technologies when I see a useful example of them.
    • Re:Not base3 again (Score:3, Interesting)

      by Mannerism ( 188292 )
      I agree that a practical ternary computer is unlikely. Rather, the value of the theory might lie in helping us to realize the shortcomings of the binary approach, and the way our familiarity with it molds our thinking. How many of us would have come up with the ternary solution to the coin balance problem?
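
      The ternary solution is quantitative, too: each weighing of a pan balance has three outcomes (left heavy, right heavy, balanced), so n weighings distinguish at most 3^n cases. A quick sketch (Python) for the classic 12-coin puzzle, where one coin is heavier or lighter:

          import math

          cases = 12 * 2   # 12 candidate coins, each possibly heavy or light
          weighings = math.ceil(math.log(cases, 3))
          print(weighings, 3**weighings)   # 3 weighings suffice: 27 >= 24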
    • Re:Not base3 again (Score:3, Insightful)

      by Tom7 ( 102298 )

      The reason that the computer industry grows exponentially is exactly these kinds of paradigm-changing technologies. Most of these have happened in manufacturing processes, but I think as we exhaust that field we will be pushing the changes higher up the architecture. (x86, your days are numbered!)

      That said, base 3 is probably pretty stupid. Asynchronous circuits, however, might really make a difference some day...
      • Another reason that the computer industry grows exponentially is because the computer industry uses computer technology... which is growing exponentially. In other words, if I design my chip using a computer, as computers get faster so does chip design. As chip design gets faster, it becomes feasible to design more complicated chips... thereby allowing me to design more complicated chips which allow me to design more complicated chips...
  • by ponos ( 122721 ) on Tuesday October 30, 2001 @02:45PM (#2498465)
    Try reading Knuth's The Art of Computer Programming, Vol. 2, Section 4.1, Positional Number Systems.

    There is an extended discussion there of the balanced ternary system and some other exotic number systems (base 2i, etc.). There are some merits to the ternary system, but it would be harder to implement with transistors.
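
    For the curious, here is a small sketch (Python, illustrative, not Knuth's code) of converting an integer to balanced ternary, where every trit is -1, 0, or +1. One nice property shows up immediately: negating a number just flips the sign of every trit.

        def to_balanced_ternary(n):
            # Digits are -1, 0, +1, most significant first; 0 -> [0].
            if n == 0:
                return [0]
            digits = []
            while n != 0:
                r = n % 3              # remainder in {0, 1, 2}
                if r == 2:             # a 2 becomes -1 plus a carry into the next place
                    digits.append(-1)
                    n = n // 3 + 1
                else:
                    digits.append(r)
                    n //= 3
            return digits[::-1]

        print(to_balanced_ternary(8))    # [1, 0, -1]  since 8 = 9 - 1
        print(to_balanced_ternary(-8))   # [-1, 0, 1]  negation = flip every trit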
    • by Asic Eng ( 193332 ) on Tuesday October 30, 2001 @03:42PM (#2498839)
      but it would be harder to implement with transistors.

      Very apt. A binary transistor has two states, an idealized "on" and "off". From a more analog view, that's low current and high current; appropriately connected with a resistor, that results in low and high voltages.

      The nice feature is that a high voltage at the input opens the transistor, and a low voltage closes it. So we get a relatively complete system: I can get from hi to lo, and from lo to hi.

      Ternary would add a "middle" voltage. But middle on the input creates middle on the output, with no direct way to get either high or low, making basic circuits more complex.

      But the real killer with "middle" is manufacturing. Let's say we use 2.8 volts for the high level, 0.2 volts for the low level. Due to manufacturing tolerances, some chips' transistors would be "fully" open at 2.3 volts, others at 2.7 volts. That's easy to compensate for in binary designs: you just use the 2.8 to switch the transistor. But for the middle level? What's required to switch a transistor to middle on one chip is sufficient to open the transistor completely on another chip...

      So your manufacturing tolerances become way smaller, and that of course reduces yield, which increases cost.

      Add to that that chips today work with a variety of "hi" voltages like 5, 3.3, 2.8... Most lower-voltage chips are compatible with higher-voltage ones: they produce voltages which are still over the switching point, and they accept higher voltages than they operate on.

      With ternary that becomes impossible, and chip manufacturers need to progressively lower their voltages for higher speed.

      Plus disadvantages in power consumption and and and...

      Admittedly, the article doesn't seem to suggest that ternary is viable, just that it's pretty. Which may be true, for a mathematician. :)

      • by nels_tomlinson ( 106413 ) on Tuesday October 30, 2001 @04:33PM (#2499137) Homepage
      • As long as we're turning the world on its ear, let's go all the way and use triacs. These have two (non-linear) on states plus off, so we could implement the tri-state gate more or less like an inverter. They'd probably be great for trinary logic, too.

        I just dug out my old physical electronics book (Microelectronics, by Jacob Millman, first edition) and can't find them in there, so here's a slightly less academic reference. [americanmicrosemi.com]

        There might be some problems with trying to get the clock speed high enough to compete with the Intel/AMD marketing, though; it says that they can be triggered into conduction by high dV/dt.
      • What about MOSFETs? As I understand it, there is a channel which is either opened or closed (depending on how it is created). A third state would be neither. In this case, you can either fully shut the channel, fully open the channel, or leave it in a half-open state.

        Now, I'm obviously not an EE major (I just happen to know a smidgen from my current job), so I may be WAY off base. But...?

        weylin

        • What about MOSFETs? As I understand it, there is a channel which is either opened or closed (depending on how it is created). A third state would be neither. In this case, you can either fully shut the channel, fully open the channel, or leave it in a half-open state.


          "neither" is vague. It's close to saying it's either light, dark or neighter. Transistors have two basic modes of operation (that I'm aware of) that fits a garden hose analogy; You either step on the hose (blocking current) or physically remove the foot (to allow current). You have to physically attach the gate to either a positive source or a negative drain, otherwise static charge will keep it in what-ever state it was previously. The output of a MOSFET is usually the gate of another mostfet. Moreover, the gate of a MOSFET is like a capacitor, so the role of the gate-controller logic is to charge or discharge the gate-capacitor. Thus having some "dim" state that is neither blindingly bright nor pitch dark only slows the process of opening or closing (charging/discharing) the targeted gate.

          You might hear about a third "Z" state (cheater latches), but that's just when you don't connect the source of the FET to power or ground, but instead to the output of some other combinational logic. The idea is that when the FET is pinched off, you've electrically isolated one logic component from another. This is good for multiplexing (like memory arrays), where you'll have the output of multiple logic devices connected to a single input of another logic device, and you "activate" only one output device at a time. (The alternative would be to risk having the output of one device +5V, and the output of another device 0V, thereby short-circuiting).
          Bipolar transistors are even worse since they never truely pinch-off, and even leak current into the control. But they have more efficient current characterisitcs (higher current swings) and thus work well to amplify the charging /discharing of the FET capacitors. Hence the buzword BiCMOS.

          These are the basic uses of FETs that I've encountered in my undergraduate EE cariculum. End result, there's no free lunch with trinary logic systems and current transistors.

          -Michael
      • by Anonymous Coward
        It may be difficult using conventional transistors, but an interesting approach might be a spin-valve transistor.

        Simply put, it's a transistor that has different transport properties for spin-up and spin-down electrons. The giant magnetoresistive (GMR) head in your fancy new 100GB hard disk is a fine example of spin effects being used in everyday life. A similar device could be used for doing base-3 arithmetic.

        A while back I did some simulations (admittedly simple and first-order) of separating different spins without using ferromagnetic materials, which are used in GMR devices and are basically nasty things to use in device fabrication, to be avoided if at all possible. I found that you can get pretty good separation from the geometry of the device and an applied external magnetic field. This was all done with device sizes and parameters that were reasonable for real life (a few microns in size, no huge magnetic fields, etc.).

        Imagine a transistor with a single source and two drains, one for spin up and one for spin down.

        Not to say it would be easy, just that it's possible, given a little ingenuity, to make a transistor that has 3 states.

      • Ternary would add a "middle" voltage. But middle on the input creates middle on the output, with no direct way to get either high or low, making basic circuits more complex. But the real killer with "middle" is manufacturing. Let's say we use 2.8 volts for the high level, 0.2 volts for the low level. Due to manufacturing tolerances, some chips' transistors would be "fully" open at 2.3 volts, others at 2.7 volts. That's easy to compensate for in binary designs: you just use the 2.8 to switch the transistor. But for the middle level? What's required to switch a transistor to middle on one chip is sufficient to open the transistor completely on another chip...

        Why would you choose such a brain-dead scheme? 2.8V as your "middle" choice? A sensible scheme would have been +ve rail, -ve rail, and ground. This builds upon 100 years of analog electronics and op-amps. Locking a voltage to a rail is extremely easy AND fast.

        Plus disadvantages in power consumption and and and...

        The benefit of a ternary scheme is that you have LESS power consumption to achieve the same work. Your individual flip-flap-flops are more complex than a binary flip-flop, but you need fewer flip-flap-flops. Overall you'll have fewer transistors and subsequently less heat than the equivalent binary circuit.

        So your manufacturing tolerances become way smaller, and that of course reduces yield, which increases cost.

        The fact that fewer transistors are required to achieve the same work (despite the fact that there are more transistors per gate) will INCREASE the yields. This DECREASES costs.

        How in hell did your post get modded up?

    • A really cool number system that is rarely mentioned is factorial base notation. What makes factorial base interesting is that all rational numbers are represented by finite factorial-base expansions, and transcendental numbers like e and pi are represented by infinite but nonrandom factorial-base expansions. So, somehow, factorial notation "captures" and "tames" the complexity of the real-number continuum in a way that decimal notation can't.
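
      A quick illustration of the integer half of that system (a Python sketch, not from the comment): digits come from dividing by 2, then 3, then 4, and so on, keeping the remainders; and numerically, e really is "2 point all ones" when expanded in factorial reciprocals.

          import math

          def to_factorial_base(n):
              # Factorial-base digits of n >= 0, least significant first:
              # digit i (1-based) has place value i! and ranges over 0..i.
              digits, divisor = [], 2
              while True:
                  digits.append(n % divisor)
                  n //= divisor
                  if n == 0:
                      return digits
                  divisor += 1

          print(to_factorial_base(5))    # [1, 2]    -> 2*2! + 1*1! = 5
          print(to_factorial_base(23))   # [1, 2, 3] -> 3*3! + 2*2! + 1*1! = 23

          # e = 2 + 1/2! + 1/3! + 1/4! + ... : every "fractional digit" is 1.
          print(2 + sum(1 / math.factorial(k) for k in range(2, 18)), math.e)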
  • Trits? (Score:4, Funny)

    by ceswiedler ( 165311 ) <chris@swiedler.org> on Tuesday October 30, 2001 @02:46PM (#2498469)
    Setun operated on numbers composed of 18 ternary digits, or trits

    Awww...they shied away from the obvious choice, tits.
    • Re:Trits? (Score:2, Funny)

      by miked50 ( 466948 )
      I thought tits were binary... unless you've seen the movie Total Recall
    • Re:Trits? (Score:5, Funny)

      by tswinzig ( 210999 ) on Tuesday October 30, 2001 @03:00PM (#2498576) Journal
      Awww...they shied away from the obvious choice, tits.

      No, I think that was a good decision. When I think of tits, I always imagine them in pairs.
    • Re:Trits? (Score:2, Informative)

      by kerrbear ( 163235 )
      Setun operated on numbers composed of 18 ternary digits, or trits

      Awww...they shied away from the obvious choice, tits.

      Just to be more serious and perfectionistic about it: shouldn't the word digit in this case be trigit, since the very word digit is prefaced with di-, which means two? I guess I could be wrong about that, but it seems to make sense.

      • Re:Trits? (Score:2, Informative)

        by rgmoore ( 133276 )

        IIRC, the origin of digit is not from di- meaning two, but from digit meaning finger or toe. This makes some sense if you think about where numbering systems came from. FWIW, one advantage of binary is that it's very easy to count in binary on your fingers; your thumb is the ones bit, index finger twos bit, middle finger fours bit, etc. Not quite as easy to do in ternary.

    • Yes! Tits! (Score:5, Funny)

      by corebreech ( 469871 ) on Tuesday October 30, 2001 @03:16PM (#2498658) Journal
      If bits becomes tits, then I say bytes should become teats.

      And, instead of a 'nibble' being four bits, we'd have a 'suckle' equaling three tits, like that babe in the movie Total Recall.

      Instead of dealing in megabits or gigabytes, we'd have gigatits, which could be abbreviated as DD, saving vast amounts of bandwidth -- which might as well be called handwidth now -- or terateets [terapatrick.com], abbreviatable as DDD.

      With all the sexual content in technical lingo (e.g., male and female plugs, master/slave, unix, etc.) this is only a natural development, and given that half of these machines are used for nothing but downloading pictures of naked breasts anyways...
  • by RGreen ( 15823 ) on Tuesday October 30, 2001 @02:46PM (#2498471)
    Ternary numbers are an interesting sidetrack, and some similar techniques are used in fast chip-based systems to speed up adding (each bit also carries its own overflow and sign bits, turning the classic serial add-with-carry into a more parallel operation).

    It must be remembered that, for floating-point numbers, base 2 is *the* most efficient representation, as argued in the classic paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic [sun.com]" by David Goldberg. The deep understanding behind IEEE 754 is a masterpiece of numerical engineering that is often overlooked, IMO.
  • 2 4 8 16 32 64 128 256 512 1024 2048 4096...
    10 seconds

    3 9 27 81...ummmm...crap
    10 seconds

    This'll make all my computer-numbering knowledge obsolete
  • What will we call the digits? Click here to find out:

    [schlockmercenary.com]
    http://www.schlockmercenary.com/d/20001226.html
  • by Anonymous Coward on Tuesday October 30, 2001 @02:50PM (#2498497)
    As far as I know, radio astronomers use 3-level data recording for their VLBI (Very Long Baseline Interferometry) data. Some of the equipment was from the JPL lab at Caltech. Their problem is to detect a weak signal in the presence of strong noise. In this case, it doesn't make sense to do 8-bit digitization. Instead, people do 1-bit or 2-bit digitization and average many samples of data. They found that the recording efficiency was highest when they used 3-level digitization.
    I myself worked on VLBI in the same lab, but our machines were using 1-bit digitization (BTW, we used regular video cassettes and a somewhat modified VCR to record 7 GBytes on a single 2-hour tape).
  • by m_ilya ( 311437 ) <ilya@martynov.org> on Tuesday October 30, 2001 @02:56PM (#2498547) Homepage
    I have seen in one book that a ternary computer was built a long time ago. I tried to find anything with Google and found this page [computer-museum.ru].
  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Tuesday October 30, 2001 @02:59PM (#2498569)
    Comment removed based on user account deletion
  • Best Quote (Score:2, Funny)

    by bn557 ( 183935 )
    "Cheaper by the Threesome"

    I can always go for a cheap three-some.

    Pat

  • what a pain... (Score:2, Interesting)

    by inepom01 ( 525367 )
    One of the advantages of digital over analog is that between the 1 and the 0 you have a ton of space that you can treat as an error band, so if your signal is distorted (long wires), you still know what's going on, and you know when you have an error in transmission. In fact, when you read a binary signal, you can have true, false, and no output. With 3 possible input levels, the neutral zones between accepted values would become smaller and would significantly increase the chances of error. In fact, the higher the base, the smaller the margin for error.
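
    A toy calculation makes the poster's point concrete (a sketch; the 3.3V swing is an assumed figure, and levels are idealized as evenly spaced):

        # With evenly spaced logic levels across a fixed voltage swing, the gap
        # between adjacent levels (your budget for noise and dead zones) shrinks
        # as the base grows.
        V_SWING = 3.3  # volts, assumed
        for base in (2, 3, 4, 8):
            gap = V_SWING / (base - 1)
            print(f"base {base}: {gap:.2f} V between adjacent levels")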
  • Something is: a member of the set of elements called 'yes', a member of the set of elements called 'no', or a member of the set of elements called the intersection of 'yes' and 'no'.

  • ... I might find that that adaptive image compression scheme I was using years ago might turn out to be useful after all. (Some parts of it would have been tons more practical if you used a ternary coding.) Now if I can find the source code amongst all those 360KB floppies that I've been meaning to burn onto CDs and convert it from FORTRAN...

  • if Y < X switch left, if Y > X switch right

    It seems to me that this is valid logic; now if we can just come up with a transistor that can do this sort of thing.

    I am sure we can come up with a multi-transistor system made of 2 transistors, but what would be the economic savings of having a ternary logic system if you double the transistors?
  • by Uttles ( 324447 ) <[moc.liamg] [ta] [selttu]> on Tuesday October 30, 2001 @03:16PM (#2498659) Homepage Journal
    I looked over the article, and it made a good argument for a ternary computing architecture; however, there are some big problems with it that were not addressed in the article. Although I'm not a math expert, I did earn a math minor in college during my computer engineering curriculum, and I have to say ternary computing seems to have too many complex problems that need solving to be worth it.

    First of all, hardware is getting smaller and smaller all the time, so the whole premise behind ternary computing (that base 3 would use less hardware) doesn't apply, especially since brand-new gates would have to be made in order to distinguish between 3 signal levels rather than 2, and that would be taking a HUGE step backwards.

    Secondly, doing things on a chip or two is great, but the main problem in computing is communications. The major part of creating efficient communications protocols is determining the probability of a bit error. Probability is a very complicated science, even using the binomial distribution, which is a very simple function (that just happens to escape me at the moment). Now, add another state, and you have to use a trinomial distribution, which I'm sure exists but isn't very common (and, not surprisingly, I can't recall that one either). Long story short, this theoretical math has been made practical in computer communications over a long period of time dating back 50 years; starting all over with 3 states rather than 2 would be extremely complicated and VERY, VERY expensive.
    Finally, figuring out logical schemes for advanced, specialized chips is a daunting task. Engineers have come up with shortcuts over the years (K-maps, state diagrams, special algorithms, etc.), but adding a 3rd state to each input would make things almost impossibly complicated. All computer engineers working at the hardware level would have to be re-educated, starting with the simplest of logic gates.

    Overall, in my humble opinion, we'll never see large-scale use of ternary computing. There's just too much overhead involved in switching over to a new way of doing things at such a fundamental level. The way hardware advances each year, things are getting smaller and smaller without switching the number base, so until we reach the limit using binary, we'll probably stick with it.
    • Well, I hope this does not bode the same for quantum computing. I really hope that when the final breakthrough of having a working machine is complete, we make the move. The computational value is much too great to ignore.
    • The major part of creating efficient communications protocols is determining the probability of a bit error.

      You have made some very good points, and the bit error problem is one of the big ones. When you go to ternary logic levels, you reduce the noise margin, so you have to slow down the clock and/or spread out the logic (more space), which offsets the gains you might get from ternary logic.

      I once saw a point-to-point ternary logic data bus design that looked very clever on paper. It allowed simultaneous transfer of data in two directions on the same wires. Both ends of the bus were allowed to drive 0 or 1, and both ends watched the ternary logic level on the bus. If the middle state, "M", was observed, then the other end must be driving the opposite logic level.

      This looks like a big win, since the same wires can carry twice as much traffic as a normal binary bus, but the reality of noise margin made the bus impractical. The noise from the dynamic voltage swing between 0 and 1 made it difficult to reliably discriminate the smaller 0/M or M/1 voltages at high clock rates. The clock rate had to be slowed to less than half the speed of a binary bus, which made the ternary bus lose its apparent advantage.

      I won't even get into the headaches that ternary logic design would cause. It makes my binary brain hurt.
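
      That bus trick is compact enough to model directly. An idealized sketch (Python; real drivers and levels are messier than this, as the parent notes):

          # Idealized ternary "simultaneous bidirectional" bus: each end drives 0 or 1;
          # the wire shows 0 or 1 when the ends agree and the middle level "M" when
          # they disagree, so each end can always deduce what the other is sending.
          def wire_level(a, b):
              return a if a == b else "M"

          def decode_other_end(my_drive, observed):
              if observed == "M":
                  return 1 - my_drive   # disagreement: far end drives the opposite bit
              return observed           # agreement: both drive the observed bit

          for me in (0, 1):
              for other in (0, 1):
                  assert decode_other_end(me, wire_level(me, other)) == other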

    • On the contrary: the "theoretical math" was never developed for a specific representation of information, much less binary. In fact, information theory accommodates any representation, all the way back to Shannon [bell-labs.com].

      The real difficulty is physical implementation. Coming up with coding schemes is trivial.

    • Now, add another state, and you have to use a trinomial distribution, which I'm sure exists but isn't very common (and, not surprisingly, I can't recall that one either).

      Well, I don't think that the probability is really much worse. Instead of binomial we have, in general, multinomial; here, trinomial: pdf = n!/(x_1! x_2! x_3!) * p_1^{x_1} * p_2^{x_2} * p_3^{x_3}, where x_1 + x_2 + x_3 = n.
      See Berger's Statistical Decision Theory and Bayesian Analysis. Or here [home.cern.ch] or here [uah.edu].
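
      As a numeric sanity check on that formula, a small sketch (Python; the channel probabilities are invented for the example):

          from math import factorial

          # Trinomial pmf: probability of counts (x1, x2, x3) in n = x1+x2+x3 trials
          # with per-trial probabilities p1 + p2 + p3 = 1.
          def trinomial_pmf(x1, x2, x3, p1, p2, p3):
              n = x1 + x2 + x3
              coeff = factorial(n) // (factorial(x1) * factorial(x2) * factorial(x3))
              return coeff * p1**x1 * p2**x2 * p3**x3

          # 10 trits through a noisy channel: each arrives intact w.p. 0.9, or as one
          # of the two wrong symbols w.p. 0.05 each. Chance all 10 arrive intact:
          print(trinomial_pmf(10, 0, 0, 0.9, 0.05, 0.05))   # 0.9**10, about 0.349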

      There are some hardware problems; I posted a possible solution [slashdot.org]. (It's a joke, mostly!)

      A more serious problem is mentioned by another poster: floating point [sun.com] is where we really, really care about speed and efficiency, and it seems that binary has that sewn up.

      ... we'll never see large-scale use of ternary computing. There's just too much overhead involved in switching over to a new way of doing things at such a fundamental level.

      Quite right. This is the only argument against it which doesn't have an answer, I suspect.
    • First of all, hardware is getting smaller and smaller all the time, so the whole premise behind ternary computing (that base 3 would use less hardware) doesn't apply, especially since brand-new gates would have to be made in order to distinguish between 3 signal levels rather than 2, and that would be taking a HUGE step backwards.

      You really couldn't be more wrong! Ternary logic is at the basis of some of the hottest research in asynchronous logic design right now.

      For instance, if you had a group of transistors that computed a multiplication and stored the output in a register, you might see the value of that register change several times until the computation was completed. Right now, the only way you know a computation is complete is that the logic is designed to complete an action in X cycles; as long as you feed in the data and wait X cycles, you will get the proper result. Clock cycles can be wasted here, because a simple multiplication might complete in a single cycle while harder multiplications might take the full amount of time the logic area is spec'ed for.

      Using async logic, this can be done much more efficiently. The multiplication happens as soon as input data is given, and the next stage of the logic knows when the operation is complete because its wires have three states: 0, 1, and not-yet-done. As soon as all the wires are 0 or 1, the computation is finished (consequently, this is how input works too). There are no "wasted" clock cycles; stuff moves through the logic as quickly as it is completed.

      Of course, there has been some debate about whether three states are needed on each wire, or whether just an additional acknowledgement wire will do (say 8 wires + 1 for an 8-bit computation block). But, believe it or not, there are already patents for both methods!

      I guess, by having true ternary logic on each wire, you could have logic that grabs a result as soon as X% of the wires report they are done, to get a "good enough" answer when the logic is iteratively improving a solution.

      -AP
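
      A toy model of that completion-detection idea (a Python sketch; NOTDONE is a made-up marker standing in for the third wire state):

          # Each output wire is 0, 1, or NOTDONE. The next stage latches the word
          # only once every wire has resolved; no clock is involved.
          NOTDONE = "?"

          def try_latch(wires):
              # Return the resolved word, or None if any wire is still settling.
              if any(w == NOTDONE for w in wires):
                  return None
              return list(wires)

          snapshots = [
              [NOTDONE, NOTDONE, 1, 0],
              [0, NOTDONE, 1, 0],
              [0, 1, 1, 0],          # all wires resolved: safe to consume
          ]
          for s in snapshots:
              print(s, "->", try_latch(s))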

    • You're right about the communication problem, IMO. That's why, if ternary computing takes off, it will be limited to the inside of a single chip for a while. For example, an FPU or a DSP processor could make use of ternary arithmetic internally, converting to and from binary when you need to go off-chip. That may have advantages. A general-purpose ternary computer, however, probably won't be useful for a very long time, if at all.

  • by Chris Burke ( 6130 ) on Tuesday October 30, 2001 @03:23PM (#2498706) Homepage
    As if in precognition, a language has already been developed for ternary computers:
    TriINTERCAL [muppetlabs.com]! (the link is about INTERCAL, chapter 6 is about the TriINTERCAL extension)

    I can't wait until college courses are taught in this truly wondrous and -- who would have thought -- futuristic language.

  • ...the use of the operators <<< and >>> to mean ternary left and right shift, i.e.

    x <<< 1 is (usually) x*3

    and x >>> 1 is (usually) x/3.


    The representation of negative numbers is interesting, but there is a 3's-complement scheme that works. E.g., ...22222222222 is -1 (like the 3-adics), so -1 <<< 1 would be ...222220, i.e. -3. So everything works out fine.
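
    A quick check of that 3's-complement arithmetic (a Python sketch; the 12-trit width is an arbitrary choice):

        # Fixed-width "3's complement": store x as x mod 3**WIDTH. Then -1 is all 2s,
        # and a one-trit left shift (multiply by 3) shifts in a 0 on the right.
        WIDTH = 12

        def trits(x):
            x %= 3**WIDTH
            digits = []
            for _ in range(WIDTH):
                digits.append(str(x % 3))
                x //= 3
            return "".join(reversed(digits))

        print(trits(-1))       # 222222222222
        print(trits(-1 * 3))   # 222222222220, i.e. -3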

  • With a binary system, you can change from a 1 to 0 state, and a 0 to 1 state without having to pass through an intervening state. (You can call 1 high voltage, 0 low, or 1 a high tone and 0 a low tone) Only two states to worry about.

    I don't see how one can design something as fast as a binary system and still allow us to go from 0 to 2 without going through 1. If you are using voltages, the intermediate value theorem forces you to pass through state 1. Similarly if you are using tones.

    One can design a logic system with "forbidden" state transitions. But then you would have to argue that "ternary logic with forbidden transitions" is significantly better than "binary logic." It seems to me that you would lose 90% of your advantage if you forbade 0-to-2 and 2-to-0 transitions.
  • by nadador ( 3747 ) on Tuesday October 30, 2001 @03:33PM (#2498766)
    and rain on the computer scientist's parade, but...

    The reason that you can't get, and won't for a long time, anything greater than base 2 is that setting and sensing more than two logic levels in a given voltage range is very hard. Those ones and zeros you like to look at and think about discretely are not really ones and zeros, but voltages close to those that represent one and zero, close enough not to confuse the physics of the device in question.

    For example, if you arbitrarily define 0 volts to be a 0 and 1 volt to be a 1 in an equally useless and arbitrary circuit, and you monitor the voltage, what do you assume is happening if one of your discrete samples is .5? Is it your ternary maybe, or is it the circuit switching from 0 to 1? And what about the case when your manufacturing process introduces errors greater than you expected? What if 1 comes out as .75? Is that in the maybe range or the 1 range?

    Now, I remember something about doubling flash memory densities by sensing 4 voltage ranges in each cell, but I imagine the timing precision required to do that correctly is orders of magnitude easier (and still a royal pain) than putting ternary logic into a modern microprocessor (with tens of millions of transistors, implementing everything created in the entire history of computing that might be even marginally useful, so that you can get 3 more frames per second in Quake 3).

    • Gratuitous rain? (Score:5, Interesting)

      by leighklotz ( 192300 ) on Tuesday October 30, 2001 @03:55PM (#2498920) Homepage

      I am shocked, shocked to discover that a fundamental computer architecture explored in the 1950's, rejected as unworkable, and forgotten is in fact unworkable.

      The feeling that this induces has no word in English, but in Japanese it's called yappari.

    • by booch ( 4157 )
      How is the problem of distinguishing a .75 voltage in your ternary system different from distinguishing a .5 voltage in a binary system?

      A/D converters do this all the time, for larger bases, often on the order of 256 to 24 million levels. Granted, the digital results don't have logical consequences. But you can't ignore the fact that binary systems have to set tolerance levels just the same as ternary systems.

  • I vaguely remember discussing this in a computer science class on circuit design four or five years back. While this might be possible for some sort of non-copper processor, I imagine the difficulty would be in rapidly distinguishing the correct voltage for each digit with today's technology.

    In simplistic terms: presently, if you have a wire, at a clock cycle the signal is either 0 (0 volts) or 1 (3.3 volts). Theoretically, you could have any number of levels, provided you had infinite voltage precision. Thus, 0=0v, 1=.1v, 2=.2v, ..., 33=3.3v: a 34-level signal.

    However, your processor is probably designed with a tolerance in mind, so 3.1 volts is probably a 1, and .2 volts is probably a 0. I really don't know the specs, but you might even presume 0-1.65v=0 and 1.66-3.3v=1. The amount of effort required, and the reduction in clock speed needed to ensure that .5v is ACTUALLY .5v and not .6v, would probably impact performance too greatly.

    I'm sure there's a PhD EE somewhere in this crowd who can explain this even better, but my point is that I don't think anything but binary computers are useful with current electrical technology. Presently, there's a reason we use two levels: it's easy and *fast* to check "on or off" without having to determine how "on" is "on". Now, if one were able to use fiber and send colors along with the pulses, then you might have something...
  • I like what was written, and it is interesting, but I don't think that this will change much in terms of how computation is performed or perceived.

    One of the earlier posters mentioned something about it all being two-dimensional... actually, a good way to look at computation is using what Turing devised: a one-dimensional model of computation based upon a single tape.

    In studying Turing machines, the mathematical model based upon (potentially infinitely long) tapes is used extensively. Move the tape right, move it left, and modify what is under the head are the primitive operations. A set of functions defines how symbols are changed and when computation halts, as well as the resulting halt state.

    A basic examination of binary versus ternary systems, based on Turing machines and some (basic) complexity theory:

    With n binary cells you can distinguish 2^n configurations; with n ternary cells, 3^n.

    So ternary is denser only by a constant factor: the same information fits in log 2/log 3 (about 0.63) of the cells.

    And any ternary system can be simulated on a binary one by 'chunking' the data, i.e., encoding each ternary symbol (or block of symbols) in binary, at the cost of an at worst polynomial (in fact constant-factor) blowup.

    Constant-factor gains are nice, but at best we don't have an earth-shattering enhancement.

    P.S. Some of this may be a bit rusty, so if anyone has a more concrete analysis or corrections, feel free...

    Sam Nitzberg
    sam@iamsam.com
    http://www.iamsam.com
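
    The chunking overhead is easy to make concrete. A sketch (Python, illustrative only): pack each group of 3 trits into 5 bits, since 3^3 = 27 fits in 2^5 = 32.

        import math

        # Ideal cost is log2(3) ~= 1.585 bits per trit; 3-trit/5-bit chunks cost 5/3.
        print(math.log2(3), 5 / 3)

        def pack(trits):
            # trits: list of 0/1/2 values, length a multiple of 3
            out = []
            for i in range(0, len(trits), 3):
                a, b, c = trits[i:i+3]
                out.append(a * 9 + b * 3 + c)   # one value in 0..26, storable in 5 bits
            return out

        print(pack([2, 2, 2, 0, 1, 0]))   # [26, 3]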
  • Been done (Score:2, Informative)

    by apilosov ( 1810 )
    As a matter of fact, there was a ternary computer built in Russia, called Setun' (apostrophe signifies a soft n).

    The Russian translation of Knuth's Volume 2 was quite funny. Knuth writes that somewhere, some strange person might actually build a ternary computer. The Russian translation has a translator's footnote: "Actually, it was built in Russia in the 1960s."

    See this page for more information about setun:
    http://www.computer-museum.ru/english/setun.htm
  • Ternary computing is an old idea; in fact, it is said that a group of computer scientists in the old USSR were quite enamored of the idea, especially in its balanced ternary form (-1, 0, +1).

    The problem is that although you reduce the number of gates, the gates themselves get horribly complex. There are only 16 possible two-input binary gates, of which two are trivial, two are wires, two are NOTs, two are ANDNs, two are ORNs, and one each of AND, NAND, OR, NOR, XOR, and XNOR. All of these are familiar gates. However, there are no fewer than 19,683 two-input ternary gates. If you sacrifice some of the combinations, you are suddenly doing less than true ternary computation, and you're wasting the power of your machine.

    That, in combination with the sheer commonality of true/false-type states, means, in my opinion, that binary is here to stay.
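
    Those gate counts check out: a two-input gate over base b is any function from b*b input pairs to b outputs, so there are b^(b^2) of them. A one-line check (Python):

        for b in (2, 3):
            print(b, "->", b ** (b * b))   # 2 -> 16, 3 -> 19683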
  • Units? (Score:4, Funny)

    by ajs ( 35943 ) <ajs.ajs@com> on Tuesday October 30, 2001 @04:05PM (#2498981) Homepage Journal
    A friend and I were thinking about representations on a ternary system. We had to figure out what units of storage would be available.

    Obviously, there's the basic unit of storage (1, 0, -1; on, off, undefined; true, false, maybe; whatever). We called this a trit, for obvious reasons of parallelism with the binary world.

    Ok, good enough so far. Then there's the basic unit that's used to store characters or very simple numbers. We decided that 9 trits would be good (this was to allow for UNICODE-like representations). This seemed to be a shoo-in for the title, tryte.

    Then, you occasionally want to have something that is used in firmware to sub-divide trytes into various fields. In binary we call this a nibble, so in honor of Star Trek we called this one (3 trits) a tribble.

    But, there it stopped, as we soon realized what we'd be measuring the system's word-size in.... Man, I thought SCSI was a painful phrase to use all the time ;-)
  • It occurred to me while reading the article that even if you could build a ternary microprocessor with more bang for the buck than the standard binary variety, in order to be useful in a microcomputer, you would need ternary RAM as well, so I began think about the problem.

    Most desktop computers use dynamic RAM to achieve high densities at low cost. (It's not unusual for new desktop systems to have 1G of RAM in about 4 modules.) It works on the principle of charging tiny capacitors: charged represents a 1, for instance, and not charged represents a 0. But capacitors can be charged in one of two polarities (one plate negative with respect to the other, or positive). Thus it seems it would be a small step to go from binary dynamic RAM to ternary. The supporting electronics and refresh circuitry would be a bit more sophisticated, but the capacitor array might be basically the same, resulting in increased capacity (a factor of log 3/log 2, about 1.58) with about the same real estate! So perhaps dynamic RAM is really optimal in ternary as well.

    I'm not an electrical engineer; the above is merely speculation off the top of my head. Does anyone more qualified than myself have any thoughts on this?
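
    The claimed capacity gain is easy to quantify (a sketch; the array size is hypothetical): information per cell is log2 of the number of levels, so a 3-level cell holds log2(3), about 1.585, bits.

        import math

        cells = 512 * 1024 * 1024   # cells in a hypothetical 512M-cell array
        print(cells / 8 / 2**20, "MB if each cell holds a bit")                 # 64.0
        print(cells * math.log2(3) / 8 / 2**20, "MB if each cell holds a trit") # ~101.4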

  • This is interesting stuff. Does anyone have links to software experiments in ternary computing? Obviously, running on binary hardware, such projects probably won't have storage or execution advantages, but are there application domains where ternary representation makes a more elegant software solution?
  • So I may have to exchange my geek insult "couldn't figure out the logarithm to the base 2 of 65536 without a calculator" into "couldn't figure out the logarithm to the base 3 of 19683 without a calculator."
  • Intercal (Score:3, Funny)

    by mikeee ( 137160 ) on Tuesday October 30, 2001 @04:15PM (#2499040)
    Excellent. This will translate Tri-Intercal, with native bitwise trinary operators, into much more efficient machine code.
  • Ternary trees (Score:2, Offtopic)

    by mauddib~ ( 126018 )
    Dr. Dobb's recently had an interesting article about ternary trees (http://www.ddj.com/articles/1998/9804/9804a/9804a.htm [ddj.com]), which also discussed some performance comparisons between binary trees and hashes.
    We just did some testing, comparing those search algorithms with each other. Although hashes are more or less comparable in speed with ternary trees, binary trees are much slower.

    Some sample output (btw, we didn't balance the ternary tree, although we did some really basic balancing on the binary tree):


    testing binary tree
    elements = 235807
    235807 insertions in 97.995633 seconds
    tree depth = 7882
    235807 lookups in 95.111857 seconds

    testing hash table
    elements = 235807
    235807 insertions in 0.442643 seconds
    tree depth = 63709 (number of buckets in use)
    235807 lookups in 0.345933 seconds

    testing ternary tree
    elements = 235807
    235807 insertions in 0.744229 seconds
    tree depth = 93
    235807 lookups in 0.386081 seconds


    Clearly the ternary tree and the hash are much faster than the binary tree. Although there are still some optimisations to make, we believe that the ternary tree will outperform the binary tree at all times.

    We also made some (very) cool graphs with Graphviz, but unfortunately have no good place to share them with the rest of the /. reading audience.
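
    For anyone curious what the structure looks like, here is a minimal ternary search tree in the style of the Bentley/Sedgewick article (an illustrative Python sketch, not the poster's benchmark code): each node holds one character, with less-than, equal (next character), and greater-than children.

        class TSTNode:
            def __init__(self, ch):
                self.ch = ch
                self.lo = self.eq = self.hi = None   # <, ==, > children
                self.end = False                     # True if a key ends here

        def insert(node, key, i=0):
            ch = key[i]
            if node is None:
                node = TSTNode(ch)
            if ch < node.ch:
                node.lo = insert(node.lo, key, i)
            elif ch > node.ch:
                node.hi = insert(node.hi, key, i)
            elif i + 1 < len(key):
                node.eq = insert(node.eq, key, i + 1)
            else:
                node.end = True
            return node

        def contains(node, key, i=0):
            while node is not None:
                ch = key[i]
                if ch < node.ch:
                    node = node.lo
                elif ch > node.ch:
                    node = node.hi
                elif i + 1 < len(key):
                    node, i = node.eq, i + 1
                else:
                    return node.end
            return False

        root = None
        for word in ("cat", "cap", "car", "dog"):
            root = insert(root, word)
        print(contains(root, "cap"), contains(root, "cow"))   # True False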
  • Why this is useful (Score:3, Insightful)

    by ChenLing ( 20932 ) <slashdot.ilovedancing@org> on Tuesday October 30, 2001 @04:43PM (#2499189) Homepage
    I've read a lot of posts on how this will be difficult to implement using voltages and circuits... and you know what? It *IS* difficult to sense 3 different voltages.
    The solution? Don't use electric circuits... don't use transistors.

    Electric circuits will only get us so far, and then we'll have to move on to more 'exotic' hardware -- optical computing, molecular computing, quantum computing.......

    Suppose a qubit's state is described by the spin polarization of an electron pair: the spins can be both up, both down, or one of each -- you can't tell which one is which, so it's actually 3 states (balanced, at that)......

    In optical computing, suppose you can push the frequency of the lasers a little to either side of 'neutral'... this is also base 3.

    So what I'm trying to say is, don't just say "base-3 computing is not practical with current technology" -- because it isn't, but it WILL be practical (perhaps even more so than binary computing) with future technology.

    And to finish with something lighter...
    troolean x, y, z;
    x = true;
    y = false;
    z = existentialism;

    :)
  • by mortenf ( 191503 )
    I don't like this.

    Any system that can't spell "42" is not worth it.
  • Then came my Epiphany of the File Cabinet a few weeks ago ...

    When counting, thou shalt not stop at One. Neither shalt thou go on to Three.
  • C is defined for a binary machine, like most programming languages (if they are defined properly at all). That's why it's hard to believe that we're going to see non-binary computing anytime soon. It would be very inefficient to reuse old software, making the theoretical efficiency gain rather worthless. (Hmm, but this didn't prevent Intel/HP et al. from developing the IA-64 architecture; maybe they'll start trinary computing next?)

    By the way, Ada does have support for trit operations (in some bizarre way), but this was merely an accident.
  • They tried this on Star Trek, and it went berserk and blew up starships. It'll all end in tears, mark my words.
  • The hypothetically "cheap" ternary system assumes that the need for hardware scales linearly with the base: that if, e.g., a binary gate requires 2 transistors, a ternary gate needs 3. In that case, a system with 2*3=6 transistors can count to 3^2=9 in ternary but only to 2^3=8 in binary. When seeking the maximum of f(x) = x^(const/x), one ends up with x=e for all const>1. That's why the article mentions that 3 is the integer closest to e, the ideal number base. I remember having that problem in a mathematics competition way back in 8th grade.

    In engineering practice, that is quite far from the truth. In ECL logic, a ternary half-adder requires the same number of transistors as a binary one, and it takes the same number of wires to carry a ternary digit as a binary one. However, we all know why ECL is nearly extinct: its high power consumption prevents high integration.

    The benefits of binary logic can be seen in CMOS, where we have two states, each of which consumes almost no power and still has low output impedance.

    Petrus
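
    That cost model (hardware proportional to base times digit count) is exactly what makes e the optimum. A sketch (Python; N = 10^6 is an arbitrary example):

        import math

        # Representing N values takes ceil(log_b N) digits; under the linear-cost
        # model each digit costs b. The continuous cost (b / ln b) * ln N is
        # minimized at b = e, and base 3 narrowly beats base 2.
        N = 10**6
        for b in (2, 3, 4, 10):
            digits = math.ceil(math.log(N, b))
            print(f"base {b:2d}: {digits:2d} digits, cost {b * digits}")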
  • I'm glad to know that unbounded square-free ternary sequences exist. Life has a meaning now.

  • Imagine what you could do with Cesium OS [slashdot.org] running on a ternary computer. Even better, a distributed system of Beowulf clusters running Cesium-based ternary processes! And perhaps a Natalie Portman wallpaper, while looking at your fake Britney Spears porno at 3am, eating a taco with an old X-Files episode on television in the background...
