Tags: AI, Bitcoin, China

Leading Chinese Bitcoin Miner Wants To Cash In On AI

hackingbear writes: Bitmain, the most influential company in the bitcoin economy by the sheer amount of processing power, or hash rate, that it controls, plans to apply its bitcoin-mining ASIC technology to AI applications. The company has designed a new deep learning processor, Sophon, named after an alien-made, proton-sized supercomputer in China's seminal science-fiction novel, The Three-Body Problem. The idea is to etch some of the most common deep learning algorithms into silicon, greatly boosting efficiency. Users will be able to apply their own datasets and build their own models on these ASICs, allowing the resulting neural networks to generate results and learn from those results at a far quicker pace. The company hopes that thousands of Bitmain Sophon units could soon be training neural networks in vast data centers around the world.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Can you push Dogecoin to $100?

    Thanks in advance.

  • Is it just me (Score:4, Interesting)

    by gweihir ( 88907 ) on Monday August 21, 2017 @06:48PM (#55060061)

    Or did anybody else find "The Three-Body Problem" pretty much unreadable? Maybe I just have the wrong cultural background to understand it.

    • This is an interesting remark. I am about to start reading the trilogy because I wanted to wait until all three books were published. Since it won important genre awards, I expect that many people find it readable. I did enjoy some of Liu's shorter work, and the Chinese cultural background indeed made for a somewhat different reading experience than SF stories written by western genre practitioners - something I personally find interesting rather than off-putting.
      • by gweihir ( 88907 )

        I have to say I found it boring. From the Amazon reviews, about 10% or so of readers agree with me, but something like 80% found it pretty good. Maybe it really is me.

      • It's not boring, and it doesn't require any Chinese background beyond general historical knowledge. But the books don't have much action in them, so I can see how some people might find that off-putting. I also immensely enjoyed the third book, especially how it subverted the usual trope of "love wins in the end".
    • I enjoyed it immensely, but it truly is from a very different cultural standpoint.

    • Well, it isn't packed with action, and there are large timeline jumps, viewpoint changes, and sudden 180-degree turns in the narrative and in the outcome the reader expects. But it all serves the purpose of painting an immense picture with surprising breadth, containing a lot of brilliant ideas. I like this kind of SF, where ideas about the future are perhaps more important than character development, storytelling, etc. - but I get that some people might not. I LOVED the whole trilogy.
  • ... but now they might have the bulk of processing power for the next big thing, scary shit. Maybe the "Meanwhile in America" meme is in order here.

    Can anyone in the know point to the tools and/or open resources an empiricist can use to get started in the AI field? What are the must-reads in this field? Do we really need all that processing power to do anything meaningful in/with AI?
    • I was dabbling with some deep learning and a single simulation I ran took almost 6 weeks to complete without a GPU.

      I was reading that one nice thing about TensorFlow's "pictures of flowers" dataset is that almost all the "layers" in the neural net are already trained, so you can play with it. Only the last one or two layers need any additional training. That might be an interesting point to start at (though I couldn't say how long the last one or two layers would take to train without a GPU, as I haven't tried).
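      That retrain-only-the-last-layer idea can be sketched without any framework at all. A minimal, hypothetical illustration in plain NumPy: pretend a frozen, pretrained backbone has already turned each image into a feature vector, and train only a single logistic unit on top. The features and labels here are synthetic, invented purely for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are feature vectors emitted by a frozen, pretrained
# backbone (e.g. the penultimate layer of an image model). In real
# transfer learning you would compute them once and cache them.
n, d = 200, 32
features = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
labels = (features @ true_w > 0).astype(float)  # synthetic binary labels

# Train ONLY the final "layer": one logistic unit, plain gradient descent.
w = np.zeros(d)
b = 0.0
lr = 0.5
for _ in range(300):
    z = features @ w + b
    p = 1.0 / (1.0 + np.exp(-z))              # sigmoid
    w -= lr * (features.T @ (p - labels) / n)  # gradient of log loss w.r.t. w
    b -= lr * np.mean(p - labels)              # gradient w.r.t. bias

acc = np.mean((p > 0.5) == labels)
print(f"train accuracy of the retrained head: {acc:.2f}")
```

      Because only `w` and `b` are updated, this trains in seconds on a CPU, which is the whole appeal of starting from a pretrained network.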

    • Rent Google/Amazon's K80s if you're just dipping your toe in. Or buy an external GPU. This will give you all of the horsepower that you need, even if you're training giant models.
  • Those tags though...
  • This sounds great for people who want to commit to using an algo for an extended timeframe. But since it's etched, you won't get any benefit from the constant stream of deep learning papers being released. Keep in mind that the big ML competitions like ImageNet usually see a new high-score record set every year. Not really a field in which I want to be behind the curve. Everybody could start using a spicy new activation function tomorrow for all we know. It's happened before. Not to m
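    The activation-function point is easy to see in software, where swapping the nonlinearity is a one-argument change; on a chip with the function etched into silicon, it isn't. A toy sketch (all shapes and values are synthetic, chosen just for illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def gelu(x):
    # tanh approximation of GELU, one of the "spicy new" activations
    # that displaced ReLU in some models.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi)
                                    * (x + 0.044715 * x**3)))

def layer(x, w, activation):
    # In software the activation is just a parameter; on an ASIC with the
    # nonlinearity baked into the datapath, this swap is not possible.
    return activation(x @ w)

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8))
w = rng.normal(size=(8, 8))
out_relu = layer(x, w, relu)
out_gelu = layer(x, w, gelu)
print(np.allclose(out_relu, out_gelu))  # the two activations give different outputs
```

    A fixed-function chip that hard-wired `relu` would need a respin to follow the field to `gelu`, which is the commenter's point about being behind the curve.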
