If you have to spend 3 million dollars on custom hardware development just to get performance parity with a COTS general-purpose CPU... you'd be hard pressed to call that "well" by any measure. That's the implication of how the original Ask Slashdot question was set up: it's an engineering question about feasibility and cost.
Buffering and holding a megabyte of data between each stage of processing is natural and very easy for software. But in hardware this is a very inefficient way to do things. Converting from one method to the other can be quite difficult depending on the algorithm.
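To make that concrete, here's a minimal C sketch (the stage operations and buffer size are made up for illustration). The first version holds a full intermediate buffer between stages, which is trivial in software but implies a large RAM in hardware; the second streams each sample through both stages, so the only inter-stage state is a single value, which maps to a pipeline register.

```c
#include <stdint.h>

#define N 1024  /* stand-in for the "megabyte" of samples */

/* Software style: finish stage A over the whole block, hold the
 * result in a full-size intermediate buffer, then run stage B.
 * In hardware that buffer becomes a large (and slow) RAM. */
void pipeline_buffered(const int32_t in[N], int32_t out[N]) {
    int32_t tmp[N];                 /* whole-block intermediate storage */
    for (int i = 0; i < N; i++)
        tmp[i] = in[i] * 3;         /* stage A */
    for (int i = 0; i < N; i++)
        out[i] = tmp[i] + 7;        /* stage B */
}

/* Hardware style: push each sample through both stages as it
 * arrives; the only state between stages is one value, which
 * maps to a pipeline register instead of a RAM. */
void pipeline_streamed(const int32_t in[N], int32_t out[N]) {
    for (int i = 0; i < N; i++) {
        int32_t a = in[i] * 3;      /* stage A */
        out[i] = a + 7;             /* stage B */
    }
}
```

Whether a real algorithm can be restructured this way depends on its data dependencies, and that restructuring is exactly the difficult part of the conversion.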
The original wording was "Some C algorithms may never transfer well into a hardware implementation." At least in my mind, the transfer process is what might not go well... not how the final product may or may not run. Having some experience here, I understood the transfer to be where the work and expense would be. And those are ultimately the key factors on which you'd base your decision about whether or not to go ahead with the conversion.
I don't think we disagree on content, just on what might have been meant by Andy's post. Especially considering exactly what you said, for the reasons you said, claims of impossibility would indeed be silly.
Unless, that is, the algorithm requires all those special instructions and monster RAM to run... at which point your custom hardware looks very much like the CPU and system it's intended to replace, and it's definitely not cheaper unless you're selling a whole lot of units. Reliable hardware is expensive to build even when it's a simple design iterating on previously known-good hardware. Starting from scratch on raw silicon takes millions of dollars just for your first chip lot, not to mention all the man-hours to get there and the subsequent revisions. There are lots of algorithms that make no sense (from a cost-vs-efficiency standpoint) to port to custom hardware. That's the whole reason the generic CPU exists in the first place.
I guess I'm disagreeing with your definition of better. If it's faster but costs too much for anyone to actually buy it, it isn't better.
It takes input designed to be hardware and makes good hardware. It takes input designed to be software and makes shit hardware. It also doesn't handle version control very well; you need proprietary tools to even VIEW the design files... and the output that actually describes the hardware (VHDL) is so obfuscated as to be nearly illegible. The build times are also 4-5 times longer than they need to be, so it takes a whole day to place and route the designs this tool outputs. Unless you're building something trivial, I wouldn't advise depending on MathWorks/Simulink tools for a solution.
VHDL is basically programming
Sure, the same way software is basically just English and letters and numbers, and if you understand those you can do most any software yourself!
VHDL is code, but having cleaned up after software people who think they can write VHDL, I can tell you it's not the same thing at all. The key statement is
Sure, you'll need to develop a fair understanding of the hardware
This is by no means a light or trivial task. There are entire university degrees dedicated to it.
Just don't make the mistake of thinking that if you understand BOTH hardware and software, the two are equivalent, or that everyone else shares your expanded understanding. I've seen programs fail because people tried to treat hardware like software simply because both are captured as text. It's a dangerous viewpoint if you want your project to succeed.
Several people have suggested using a high-level synthesis tool to convert your software (C/C++) directly to HDL (Verilog/VHDL) of some kind. This can work; I've been on this task and seen the output before. The catch is: unless that software was expressly and purposely written to describe hardware (by someone who understands that hardware, its limitations, and how that particular converter works), it almost always makes awful and extraordinarily inefficient hardware.
Case in point: we had one algorithm developed in Simulink/MATLAB that needed to end up in an FPGA. After 'pushing the button' and letting the tool generate the HDL, it consumed not just one but about four FPGAs' worth of logic gates, RAMs, and registers. Needless to say, the hardware platform had only one FPGA, and a good portion of it was already dedicated to platform tasks, so only about 20% was available for the algorithm. We got it working after basically re-implementing the algorithm with hardware in mind from the start. The generation tool's output was 20 times worse than what was even feasible. If you're doing an ASIC you can just throw a crap-load of extra silicon at it, but that gets expensive very quickly. Plus closing timing on that would be a nightmare.
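For a flavor of what that re-implementation involves, here's a toy C comparison (the functions and the fixed-point scaling are my own invention, not output from any real tool). The first version is typical software-oriented code; the second is the kind of restructuring that actually synthesizes to compact hardware: a loop bound fixed at synthesis time, fixed-point arithmetic instead of double precision, and no dynamic allocation.

```c
#include <stdint.h>
#include <stdlib.h>

/* Software-oriented mean: data-dependent loop bound, double-precision
 * float, heap allocation for a scratch copy. An HLS tool either
 * rejects this or synthesizes something huge and slow. */
double mean_sw(const double *x, int n) {
    double *copy = malloc(n * sizeof *copy);   /* unsynthesizable */
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        copy[i] = x[i];
        sum += copy[i];
    }
    free(copy);
    return sum / n;
}

/* Hardware-oriented rewrite: bound known at synthesis time,
 * 16-bit fixed-point input, 32-bit accumulator, and a divide by
 * a power of two so it reduces to a shift. */
#define N 256
int32_t mean_hw(const int16_t x[N]) {
    int32_t acc = 0;
    for (int i = 0; i < N; i++)
        acc += x[i];                 /* maps to one adder + register */
    return acc >> 8;                 /* N == 256, so /N is a shift */
}
```

The two aren't drop-in equivalents; choosing the fixed-point widths and proving they cover the algorithm's dynamic range is part of the conversion work.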
My job recently has been to take algorithms written by very smart people (but oriented to software) and re-implement them so they fit on reasonably sized FPGAs. It can be a long task, and there's no push-button solution for getting something good, fast, and cheap. Techies usually say you can pick two during the design process, but when converting from software to hardware you usually only get one.
Granted, this all varies a lot and depends heavily on the specifics of the algorithm in question. But the most reliable way to get a reasonable estimate is to explain the algorithm in detail to an ASIC/FPGA engineer and let them work up a preliminary architecture and estimate. The high-level synthesis push-button tools will give you a number, but it probably won't be something people actually want to build, sell, or buy.
Here's a comment I made a while back about this same situation:
If I recall correctly it had more to do with some arbitrary and insane insistence on 'Consumer Imaging' being the business focus, which is why you got cheap consumer cameras (EasyShare), printer docks (with attempts to cash in on printer-paper consumables), but little prosumer gear, and only the occasional/rare super-high-end imagers (like those used in telescopes, etc.).
This is also why they sold off or spun off their profitable medical imaging and chemicals groups, and have tried several times to get rid of their profitable Document Imaging group (high-end, high-speed document scanners). They've constantly pushed themselves into the most difficult and price-competitive market possible: cheapo consumer cameras. I think the ultimate goal was to keep some kind of grasp on the photo printing business as their cash cow, with consumable manufacturing and sales. To be fair, they still do a good job printing pictures, but people don't really want or need to do that anymore, with rare exceptions. And the people who still do prints do them in-house or have local labs do the work.
Kodak's management has always been married to consumables and services, as encapsulated by the mantra "You push the button, we do the rest." It's like some creepy love affair with George Eastman. Most of the outright wrong directions Kodak has taken can be traced back to clinging to that philosophy. Being from the Rochester area made Kodak's fall a bit sad to watch, but it was still very predictable.
To those people saying Kodak wasn't a camera company: Kodak made the first and best professional digital cameras, as well as medium/large-format digital camera backs and other digital sensors. It was a management decision not to aggressively pursue that tech in the consumer space with gear that didn't treat the consumer like a moron. Not to mention all the custom-designed software/drivers with non-standard GUIs, which were expensive to build from scratch and horrendous to use. Every Kodak-made product or service was focused on consumables and draining the customer of as much cash as possible, not on providing as much value as possible.
Incidentally Eastman Chemical (spun off several years ago) seems to be doing just fine.
Are you serious? These guys are in damage control now that their complicit behaviour towards the NSA has been revealed. They are protecting their profits and that is it.
IF that's all it is, then that means sufficiently many of their customers care about privacy to noticeably affect their profits. How is THAT not at least a little bit of good news? Up till now I assumed nobody but a few hardcore geeks/techs cared at all. Maybe all this public discussion is bearing some fruit after all?
Think carefully about those statements. Here are some possible consequences of SteamMachine:
Failure - Status quo is maintained.
Success (even moderate success) - Linux gains a huge user base dedicated to gaming. The calculus of game developers and publishers with regard to Linux development and Linux ports does a complete 180. Native support for Linux games becomes something publishers might actually consider worthwhile instead of "WTF is LINUX?".
Success and Valve turns evil - Games will be made to natively support Linux so they run on the Steam console hardware platform of the day. DRM can and will be circumvented as always, but now the games will run on Linux instead of Windows.