Comment Re:Dominion & Munchkin (Score 1) 382

Want a fun game? Try one where a 5 year old might beat you with a random turn of a card and absolutely no strategy, instead of one in which you can feel good about yourself by constantly beating a 5 year old.

Is that even really a game, by definition?

That's like two people rolling a die, where the higher roll wins. There's nothing to play: no input or decisions on the part of the players, and precious little interaction between them. I don't think that would be very fun at all.

Comment Re:Verilog (Score 1) 365

To be fair, the definition of "well" I intend isn't an arbitrary X/Y value. There are already very well-defined numbers for the hardware that currently runs the algorithm. Transferring "well" to custom hardware would mean something in the vicinity of: beating the original general-purpose CPU by enough to justify the design effort involved, without costing MORE to manufacture. All engineering decisions are trade-offs, and if the trade-off isn't worth the effort and resource cost, you don't do it. For a transfer effort to go "well," you have to come out ahead somewhere at the end of the day.

If you have to spend 3 million dollars on custom hardware development just to get performance parity with a COTS general-purpose CPU... you'd be hard-pressed to call that "well" by any measure. This is what the setup of the original Ask Slashdot question implies: it's an engineering question about feasibility and cost.

Comment Re:Verilog (Score 1) 365

Not really. The biggest conversion issues I deal with (when converting algorithms to hardware) are related to how software treats RAM vs. how hardware treats RAM. They are fundamentally different methods of operation. In software, RAM is cheap (effectively free), so it is preferred over CPU cycles. In hardware, processing is generally cheaper and RAM is more expensive.

Buffering and holding a megabyte of data between each stage of processing is natural and very easy in software, but in hardware it's a very inefficient way to do things. Converting from one method to the other can be quite difficult, depending on the algorithm.
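
To make the contrast concrete, here's a minimal VHDL sketch of the hardware-friendly alternative: a streaming stage that transforms one sample per clock, with a valid flag traveling alongside the data, instead of parking a big block in RAM between stages. The entity name, port names, and the placeholder transform are all made up for illustration.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical streaming stage: consumes and produces one sample per
-- clock cycle rather than buffering a whole block in RAM between stages.
entity stream_stage is
  port (
    clk      : in  std_logic;
    din      : in  unsigned(15 downto 0);
    din_vld  : in  std_logic;
    dout     : out unsigned(15 downto 0);
    dout_vld : out std_logic
  );
end entity stream_stage;

architecture rtl of stream_stage is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      dout     <= din + 1;   -- placeholder transform
      dout_vld <= din_vld;   -- valid flag travels with the data
    end if;
  end process;
end architecture rtl;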

Comment Re: Verilog (Score 1) 365

Nice point: "for" is used for iteration in software, while "for ... generate" in hardware is used to create new instantiations. The similarity in words/syntax is a dangerous trap. The closest software analogue is putting malloc() or new() inside a loop. It's a great construct when you need many similar bits of hardware, and completely wrong for iteration.
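
For anyone who hasn't seen it, a minimal sketch of what "for ... generate" actually does (the entity and signal names are hypothetical): nothing here loops at run time; the generate statement stamps out eight physical copies of the hardware at elaboration.

library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical example: "for ... generate" elaborates eight physical
-- registers side by side in silicon; it does not iterate at run time.
entity reg_bank is
  port (
    clk : in  std_logic;
    d   : in  std_logic_vector(7 downto 0);
    q   : out std_logic_vector(7 downto 0)
  );
end entity reg_bank;

architecture rtl of reg_bank is
begin
  gen_regs : for i in 0 to 7 generate
    process (clk)
    begin
      if rising_edge(clk) then
        q(i) <= d(i);  -- one flip-flop per generate iteration
      end if;
    end process;
  end generate gen_regs;
end architecture rtl;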

Comment Re:Verilog (Score 1) 365

The original wording was "Some C algorithms may never transfer well into a hardware implementation." At least in my mind, the transfer process is what might not go well... not how the final product may or may not run. Having some experience here, I understood the transfer to be where the work and expense would be. And those are ultimately the key factors on which you'd base your decision about whether to go ahead with the conversion.

I don't think we disagree on content, just on what Andy's post might have meant. Considering exactly what you said, for the reasons you said, making claims of impossibility would indeed be silly.

Comment Re:Verilog (Score 1) 365

Unless the algorithm requires all those special instructions and monster RAM to run... at which point your custom hardware looks very much like the CPU and system it is intended to replace, and it's definitely not cheaper unless you're selling a whole lot of units. Reliable hardware is expensive to build even when it's a simple design iterating on previously known-good hardware. Starting from scratch on raw silicon takes millions of dollars just for your first chip lot, not to mention all the man-hours to get there and the subsequent revisions. There are lots of algorithms that make no sense (from a cost-versus-efficiency standpoint) to port to custom hardware. That's the whole reason the generic CPU exists in the first place.

I guess I'm disagreeing with your definition of "better." If it's faster but costs too much for anyone to actually buy, it isn't better.

Comment Re:Verilog (Score 1) 365

He didn't say "may not transfer at all"; he said "may not transfer well." Also remember that the algorithm isn't running on any old bit of hardware: it's running on a modern CPU with lots of special instructions, a gigantic RAM attached, and potentially other peripherals for special functions (a hardware RNG, etc.). It might very well not be reasonable to convert all of that to a custom FPGA/ASIC for the cost involved.

Comment Re:Matlab has a solution for this, but $$$ (Score 2) 365

They have a tool that can do this, though I don't know if I'd call it a 'solution' just yet. We've just finished ripping that 'solution' out of our project because we wanted a device that was actually small enough (and thus cheap enough) to sell.

Give it input designed as hardware and it makes good hardware; give it input designed as software and it makes shit hardware. It also doesn't play well with version control: you need proprietary tools to even VIEW the design files, and the output that actually describes the hardware (VHDL) is so obfuscated as to be nearly illegible. The build times are also 4-5 times longer than they need to be, so it takes a whole day to place and route the designs this tool outputs. Unless you're building something trivial, I wouldn't advise depending on MathWorks/Simulink tools for a solution.

Comment Re:Difficult, but... (Score 1) 365

VHDL is basically programming

Sure, in the same way that software is basically just English letters and numbers, and if you understand those you can write most any software yourself! /sarc
VHDL is code, but having cleaned up after software people who think they can write VHDL, I can tell you it's not the same thing at all. The key statement is:

Sure, you'll need to develop a fair understanding of the hardware

This is by no means a light or trivial task; there are entire university degrees dedicated to it. ;) But if you have all THAT, then sure, writing hardware code is a snap! In all seriousness, the statement above basically says it's easy if you already have the skill set.

Just don't make the mistake of thinking that because you understand BOTH hardware and software, the two are equivalent, or that everyone else shares your expanded understanding. I've seen programs fail because people tried to treat hardware like software simply because both are captured as text. It's a dangerous viewpoint if you want your project to succeed.

Comment I've done this before (Score 4, Informative) 365

Several people have suggested using a high-level synthesis tool to convert your software (C/C++) directly to an HDL (Verilog/VHDL) of some kind. This can work; I've been on this task and seen the output before. The catch is that unless the software was expressly and purposely written to describe hardware (by someone who understands the hardware, its limitations, and how that particular converter works), it almost always makes awful, extraordinarily inefficient hardware.

Case in point: we had one algorithm developed in Simulink/MATLAB that needed to end up in an FPGA. After 'pushing the button' and letting the tool generate the HDL, the result consumed not one but about four FPGAs' worth of logic gates, RAMs, and registers. Needless to say, the hardware platform had only one FPGA, and a good portion of it was already dedicated to platform tasks, so only about 20% was available for the algorithm. We got it working after basically re-implementing the algorithm with hardware in mind; the generation tool's output was 20 times larger than what was even feasible. If you're doing an ASIC you can just throw a crap-load of extra silicon at it, but that gets expensive very quickly. Plus, closing timing on that will be a nightmare.

My job recently has been to take algorithms written by very smart (but software-oriented) people and re-implement them so they fit on reasonably sized FPGAs. It can be a long task, and there's no push-button solution for getting something good, fast, and cheap. Techies usually say you can pick two during the design process, but when converting from software to hardware you usually get only one.

Granted, this all varies a lot and depends heavily on the specifics of the algorithm in question. But the most reliable way to get a reasonable estimate is to explain the algorithm in detail to an ASIC/FPGA engineer and let them work up a preliminary architecture and estimate. The high-level synthesis push-button tools will give you a number, but it probably won't describe something people actually want to build, sell, or buy.
