No, I'm suggesting that the article explains all the things that buyers should be wary of and therefore makes an excellent guide for buyers.
"CAVEAT EMPTOR, BITCHES."
Good thing someone wrote a guide explaining all this then eh?
The problem is:
Trading forums deliberately suppressing information on actual prices and alternatives.
Package resellers on Amazon & eBay charging large markups because the buyers don't know the mechanics.
Star Citizen MODERATORS in charge of enforcing the trading bans on the official forum directing users to their own trading service.
In short, the problem is information asymmetry, which this article attempts to address.
Putting aside, for the moment, all the Slashdot griping about whether this is or is not a productive use of human time and energy (I agree it's probably not in a macro sense, but hey, that's the world we live in), this is indeed an old idea.
I worked as a consultant on a hardware-based HFT system back in 2007 for a Silicon Valley company called Xambala. They were using reconfigurable logic-style chips of their own design, specialized for text processing applications, rather than more general-purpose FPGAs, though we discussed chaining their chips with FPGAs for more computationally intensive algorithms.
The edge you are going to get from doing processing in silicon is quite limited. You can conceivably cut a few tens of microseconds, maybe even 100 microseconds, out of a computation - but you still have to have all the other pieces of the puzzle just right. If you are doing straight news/information-driven trades in situ at an exchange and can get the same timing of feed data to respond to, then you'll have a good edge (i.e. "buy if X > 0.2, sell if X < 0.1, do nothing otherwise").
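Just to spell out how trivial the decision logic itself is (the hard part is the latency of everything feeding it), here's a toy Python sketch. The 0.2/0.1 thresholds are the made-up example from above and the signal values are invented for illustration, not anyone's real strategy:

    # Toy threshold-driven trading rule. The thresholds and signal
    # values are placeholders for illustration only.
    def decide(x, buy_threshold=0.2, sell_threshold=0.1):
        """Return 'buy', 'sell', or 'hold' for a signal value x."""
        if x > buy_threshold:
            return "buy"
        if x < sell_threshold:
            return "sell"
        return "hold"

    # A few example signal values and the resulting decisions.
    for x in (0.35, 0.05, 0.15):
        print(x, decide(x))
    # 0.35 buy
    # 0.05 sell
    # 0.15 hold

The point is that computing X and getting the order out the door is where all the microseconds go; the rule itself is an if-statement.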
If you are trying to do intermarket arb (futures/ETF arb, for example) your edge is smaller, since differences in network routes, networking hardware, and other infrastructure are generally larger in magnitude than what you gain from cutting a few tens of microseconds out of the picture in hardware - but that edge would probably serve existing players who already have top-tier infrastructure well.
For the more sophisticated, "game"-driven trading algorithms out there in equity markets, how much value doing stuff in hardware gives you is variable. There's a lot of decision logic involved in spiking orders around, changing behavior states based on other participants, and so on. A better set of algorithms running on top tier infrastructure in software will probably do better than inferior algorithms running in hardware without top tier infrastructure.
Other than Xambala, I am sure there are other players doing similar things. I've also used CUDA on NVIDIA GPUs for calculating option market prices really fast. These are just tools and other people definitely are using these tools in the right scenarios. What really matters in making money is combining the right tools with good implementation, excellent infrastructure, and testing and adaptiveness to market conditions.
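For a sense of what "calculating option market prices really fast" looks like: pricing a large book of European options under Black-Scholes is embarrassingly parallel, which is why it maps so well onto GPUs. Here's a rough CPU-side numpy sketch of that kind of computation (the inputs are made up; a CUDA kernel would just run the same arithmetic across thousands of threads):

    import numpy as np
    from scipy.stats import norm

    def black_scholes_call(S, K, T, r, sigma):
        """Vectorized Black-Scholes European call prices.
        S: spot, K: strike, T: time to expiry (years), r: rate, sigma: vol."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    # Price a million strikes at once; each element is independent,
    # which is exactly the shape of work GPUs are good at.
    K = np.linspace(50, 150, 1_000_000)
    prices = black_scholes_call(S=100.0, K=K, T=0.25, r=0.01, sigma=0.3)
    print(prices[:3])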
Python scales pretty well to a medium-size codebase (Java scales better to a large codebase than anything else I know of - old-school Java may have been painful to develop in, but this is one thing it did well).
If you just need to do very simple processing of data in a fast, asynchronous manner, node.js does it (and it's fast as hell, no doubt, because V8 is the best-tuned JITed engine for weak-ish typed languages out there, thanks to years of browser competition). But if you want to do anything interesting with that data, I don't know how you are going to accomplish that with node - where are the NLP, vector math, linear algebra, search infrastructure, etc. libraries that Java or Python have in huge volumes? Non-existent or extremely immature.
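To make that concrete: in Python the interesting part of a text-processing pipeline is a couple of library imports away. A toy TF-IDF/cosine-similarity example (purely illustrative; scikit-learn is standing in here for the NLP/vector-math/linear-algebra libraries I mean):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "node.js is great for fast asynchronous I/O",
        "Python has mature NLP and linear algebra libraries",
        "Java has a huge ecosystem for search infrastructure",
    ]

    # Turn text into vectors and compare documents - two library calls.
    vectors = TfidfVectorizer().fit_transform(docs)
    print(cosine_similarity(vectors))  # 3x3 similarity matrix

Try writing the equivalent in node without reimplementing half of it yourself.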
Also - for anything serious, node.js has changed way too much in way too short a period of time to use as a platform. Maybe in a few years it will be more stable for some applications, but not quite yet.
Dear smart, grumpy engineers of Slashdot who live elsewhere in the US: here in Silicon Valley it's hard to hire good people.
I am very much trying to hire excellent engineers with experience in search infrastructure/Lucene and recommendation systems, as well as great mobile app developers with experience building top-tier iOS or Android apps. I will pay well for good talent and offer fair benefits and an excellent option package in an early-stage startup founded by a guy who has built several successful businesses, including a multi-hundred-million-dollar company backed by top-tier venture firms.
If you can prove to me that you are smart and capable and have relevant experience, I don't care if you have a degree from a top college or not (a degree will affect my baseline expectations, but if you seem smart and competent, I'll give you the opportunity for a phone call to show me how good you are).
If you are a Slashdot regular, that is worth bonus points too (the fewer digits in your UID, the better).
Seriously. If you meet any of the parameters above and think you are a great programmer and would like to come out to the Palo Alto area and work with other top tier people building a product that pushes boundaries in the social space and helps people get more out of their mobile devices, send a resume and cover letter to email@example.com.
I always like to point out that in Scotland a Bing is a spoil heap: the pile of dirt you take out of the ground and discard to get at the minerals you actually want. Worst name for a search engine ever.
I like working with people who get the job done, quickly and simply, and focus on functional completeness and minimizing defects. People I can tell "here's what it needs to do" and count on to deliver something that does what we need.
I don't like working with people who obsess about every line of code they produce and who worry more about documenting things internally than about getting working code out the door.
Sure, given the choice I prefer clean, maintainable code to shitty, sloppy code. But complaining about diagram quality in internal documentation? Unless you are making components for NASA or MRI machines, I think you're obsessing about things that don't matter that much.
The reason the guy in question is senior to you is because management likes people they can count on to get shit done.
The 1200 was a second-gen Amiga. My first was a 1000 (with the optional 256K RAM module in front), and I preferred it to my Mac. I remember spending $600 for a RAM module the size of a hardback book that hooked into a huge port on the side and gave me (gasp) 1MB of RAM. That was enough to run the whole OS in RAM. This was my BBS machine, and my CI$ and GEnie box. I used a C128 to run Quantum Link.
I got a 3000 in 1990 but soon went Mac and Linux for good.
Cloud computing definitely doesn't have to be pay-as-you-go. The pay-as-you-go hosted computing services are certainly banner examples of "cloud computing" but they are by no means the only thing that can be accurately described as "cloud computing".
In Britain a 'Bing' is a spoil heap: a pile of dirt taken from mining and discarded.
i.e. it's all the worthless crap left behind after you've taken the good stuff out.
Hey dummies, you broke scrolling during the beta. It simply doesn't scroll properly on the default Android browser. Something is mucked up with touch event handling. Please fix this. That is why everybody keeps saying they prefer classic mode - at least you can scroll properly.
Funny, I thought what was rare was finding them in high concentrations in places where labor is cheap and environmental laws lax.