That said, the front page also has a 'Video Bytes' line halfway down full of crap, so I guess someone is really keen on killing the site. Thank $DEITY for Soylent News...
Nokia has also been selling the infrastructure for mobile networks for a long time and, unlike handsets, that is a very profitable place to be. Both Nokia and Ericsson saw the commoditisation of the handset market coming; Nokia in particular watched their margins evaporate and decided it was time to get out. But because they're no longer in the public eye, they're perceived as losing. Their customers are now people who make money from the products they buy, so they're willing to pay a reasonable premium, because a few minutes of downtime costs far more.
Of course, when Apple decides to concentrate on the high-margin part of a business, no one claims that they're dying, because they concentrate on a consumer-visible part of the market.
There's some overlap. Altera FPGAs have lots of fixed-function blocks on them, ranging from simple block RAMs to fast floating point units. There's a good chance that Intel could reuse some of their existing designs (which, after all, are already optimised for their manufacturing process), such as the AVX units and caches on x86 chips. A lot of the FPGAs also include controllers for PCIe, USB, Ethernet, and so on. Again, Intel makes these in their chipset division and, again, they're optimised for Intel's process, so it would make sense to put them on FPGAs in place of the Altera ones.
The main reason that you're probably right is that Intel is generally pretty bad at getting their own internal divisions to play nicely together, let alone ones that are used to being in a completely separate company.
My guess would be coarse-grained reconfigurable architectures. Altera FPGAs aren't just FPGAs, they also have a load of fixed-function blocks. The kinds of signal processing that the other poster talks about work because there are various floating point blocks on the FPGA and so you're using the programmable part to connect a sequence of these operations together without any instruction fetch/decode or register renaming overhead (you'd be surprised how much of the die area of a modern CPU is register renaming and how little is ALUs).
FPGAs are great for prototyping (we've built an experimental CPU as a softcore that runs on an Altera FPGA at 100MHz), but there are a lot of applications that could be made faster by being able to wire a set of SSE / AVX execution units together into a fixed chain and just fire data at them.
A Kickstarter-like model would work. Release a single for free, designate an amount that you think the full album is worth. If enough people are willing to pay, then you release the album for free. For the second album, hopefully enough people have copied the first that you don't need to do much to encourage them to pay for the second. As an added bonus, you can reduce your up-front costs by only renting the studio time to record the first track and only record the rest once people have paid for it.
Recording a song (at least, a song that people want to buy) requires talent, creativity, and often expensive instruments and studio time. Copying a song once it's recorded is basically free. Any business model that relies on doing the difficult thing for free and then trying to persuade people to pay for you to do the easy thing is doomed to failure. Imagine if Ford had noticed that people wanted coloured cars and decided to give away unpainted cars and charge for painting them, then bribed politicians to pass laws so that only Ford was allowed to paint cars Ford sold and driving an unpainted car on the road was illegal. It wouldn't take people long to realise that this was a stupid business model and that you could get rid of the laws and charge for the cars, but in the case of copyright people are still trying very hard to make the 'free car, expensive and exclusive paint' model work with different variations.
It's an ABI mismatch, and the summary is nonsense, saying almost the exact opposite of TFA (which I actually read, precisely because the summary was so obviously wrong). The issue is that the Windows ABI defines long double as a 64-bit floating point value (which is fine, because the only requirement on long double is that it have no less precision than double. If you're using it and expecting some guaranteed precision in vaguely portable code, then you're an idiot). For some reason, MinGW (which aims to be ABI-compatible with MS, at least for C libraries) uses 80-bit x87 values for long double, so you get truncation. I forget the exact calling conventions for Windows i386, but I believe that in some cases this will be silently hidden, as the value will be passed in an x87 register and so be transparently extended to 80 bits in the caller and truncated in the callee anyway. It's only a problem if it's passed on the stack (or indirectly via a pointer).
It's not obvious which definition of long double is better. On modern x86, you'll use SSE for 32- and 64-bit values, and may lose precision moving between x87 and SSE registers. You also get worse IEEE compliance out of the x87 unit, which may matter more than the extra 16 bits of precision. 80-bit floats are not available on any platform other than x86 (128-bit is more common, though PowerPC has its own special non-IEEE version of these and on some other platforms they're entirely done in software), so they're a bad choice if you want portable code that generates the same output on different platforms.
My 486 won't display this site
I think you took my comment as sarcasm. I am well aware of the bandwidth-saving benefits of unformatted text. I'm also a big fan of the fact that it's cross-platform: paired with interpreters and interfaces written for many platforms, it can be used and contributed to by anyone on an equal footing. Slashdot itself is a good example of this; I can actually still post using my Nokia 6230i running Opera Mini over GPRS. I would, in fact, despair somewhat if I couldn't!
The 'universal participation, from anywhere, with any device' theme I refer to in my OP implicitly covers low bandwidth, low memory, 8-16 bit CPUs, tape decks as storage, etc.
Also, why would most technically-inclined people disagree with me about Dreamweaver et al.? I thought we'd got over the difference between tools and lazy macro code generators decades ago... A compiler is a tool. Dreamweaver is a Lego set, with no real tools for creating new Lego pieces. And the pieces it provides are almost always far less efficient than raw HTML, CSS, and PHP written by any half-decent developer! So whilst the majority of "drones in the current IT hegemony" might disagree with me, an educated engineer learning to program, or a very experienced one, would not at all!
Who the fuck are you? Go die in a fire, you disrespectful worthless turd.
Next you'll be shouting web developers down for not using an automated tool like Dreamweaver, or advocating driverless cars with no manual controls. Our forefathers and our freedom are closely connected, forget one, you may as well forget the other. Long live text only devices! Long live being able to connect from anywhere, with anything, and participate based on one's intellectual prowess rather than one's socio-economic status!
I certainly can't argue with that; there were indeed.
The same is true of university exams. My undergraduate exams, for example, mostly required that you answer two of three questions per exam. To get a first (for people outside the UK: the highest classification), you needed 70%. Most questions were around 40% knowledge and 60% application of that knowledge. If you could predict the topics that the examiner would pick, you could immediately discard a third of the material. To get the top grade, you needed 100% in one question and 40% in another: you could understand one third of the material really well, understand another third just well enough to get the repetition marks but not the understanding ones, and still come out with the top grade. In other words, you could study half the material and still do very well in the exams, as long as you picked the correct half. And some of the lecturers were very predictable when setting exams...