Industry is having trouble convincing people they need HD? A large majority of the market switched to HD. They're not having trouble convincing the market to adopt HD. They already did.
That's why they moved on to 3D: their market dried up once everyone had an HDTV.
Any TV you can buy today can do 60 fps over HDMI. The frame-rate push was finished years ago; the content just never showed up.
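For the skeptical, a back-of-the-envelope check shows why 60 fps was never a link problem. The figures below (24-bit RGB, HDMI 1.3's roughly 8.16 Gbit/s TMDS payload after 8b/10b overhead) are nominal, and the sketch ignores blanking intervals:

```python
def video_bitrate_gbps(width, height, bits_per_pixel, fps):
    """Uncompressed video payload in Gbit/s (ignores blanking intervals)."""
    return width * height * bits_per_pixel * fps / 1e9

rate_1080p60 = video_bitrate_gbps(1920, 1080, 24, 60)
print(f"1080p60 payload: {rate_1080p60:.2f} Gbit/s")  # ~2.99 Gbit/s

# HDMI 1.3/1.4 carry 10.2 Gbit/s of TMDS; ~8.16 Gbit/s is usable payload.
HDMI_1_3_PAYLOAD_GBPS = 8.16
assert rate_1080p60 < HDMI_1_3_PAYLOAD_GBPS
```

So 1080p60 uses well under half the available bandwidth; the bottleneck was always the content pipeline, not the cable.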
It's also arguable whether that's the future. Everyone seems pretty happy with film's current frame rate, and the 48 fps Hobbit wasn't well received.
I've never seen a DVD that looks good on an HDTV above 40 inches. Same goes for H.264 SD content, which has better encoding potential.
It always just looks like a blurry mess. Not totally unwatchable, but not as enjoyable either.
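The blurriness is easy to quantify. Assuming DVD's 720x480 NTSC frame and a 1920x1080 panel, every source pixel has to cover about six display pixels:

```python
def upscale_factor(src_w, src_h, dst_w, dst_h):
    """How many display pixels each source pixel must be stretched over."""
    return (dst_w * dst_h) / (src_w * src_h)

factor = upscale_factor(720, 480, 1920, 1080)
print(f"Each DVD pixel covers ~{factor:.1f} display pixels")  # 6.0
```

No upscaler can invent six pixels' worth of detail from one, which is why SD sources go soft on big panels.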
Isn't Microsoft announcing a new web browser intended to replace Internet Explorer today? Maybe it'll be open source. Maybe it'll even be based on Webkit.
I sure hope not. We need competing browser engines to keep things honest. The competition between them is the only way we ever get standards compliance.
Spoken by someone who wasn't around for the web browser wars of the 90s...
Multiple browsers led to less compliance, not more. Both Netscape and IE were in a rush to add their own non-standard HTML elements to "outdate" the other. ActiveX didn't come along at a time when IE owned the market. ActiveX came along at a time when IE was in fierce competition with Netscape and needed to BREAK the standard to push Netscape out of the market.
Having lived through that, I've never understood the logic of "we need multiple browsers to maintain standards." That's never actually happened in practice. It's like free-market philosophy gone amok. Even today we still see a bit of this, with draft or even pre-draft features getting added to web browsers outside of standards. Stuff like NaCl is not part of any web spec and is entirely proprietary to Google, but hey, even with all the competition that's supposed to stop that, it still exists. Competition encourages people to create their own proprietary stuff to beat the other browsers with.
I also disagree about C being "incredibly complex for a beginner". I found C to be very easy to grasp and very good at exposing what the computer is actually doing under the hood. I would agree that programming C well is complex (and also time-consuming), but that is because it is simple, not because it is complex.
As someone who went down this path and is now a professional software developer many years later... C sucks as an intro language.
Yes, it's very important for the work I do now. It's clean, simple, and easy to understand. It's also totally useless to a beginner.
I used BASIC (and HyperCard) way back when I was a kid because I could actually do things in it. Want to code a 3D game? I could do that. How about something that could play movies? Yep, easy. A basic text editor? I could do that too. With C? Lots of hello world and "add these numbers together." C's minimalistic nature may be a strength for actual practice, but for a kid who wants to create things quickly, it sucks. I'd argue early education is about getting kids interested in coding, with less emphasis on the actual practices they might use in a job 10 years down the road. Just get them in the door first.
If C had been my only intro to programming I would have lost my mind. Try to get more kids interested in development with only a command-line interface and see how that goes. (Yes, there are libraries like SDL, but those take you well outside the realm of novice student development.)
This article only takes into account direct emissions. It neglects the CO2 emitted by the energy used to manufacture said airplanes, which is roughly proportional to their cost.
Actually, the number of developers (and effective developer effort) that would work on such a library is not constant. Once you get too many developers on a single project, more and more effort goes into just syncing code and coordinating development. Another issue, especially with open source developers, is that not everyone is interested in working under the same development model, which means single projects tend to discourage some potential contributors from joining in. There's also the matter of developers wanting to feel helpful: a good developer will often feel more appreciated working on a smaller project.
Finally, just because implementations are separate does not mean there is no cross-pollination. Fundamental flaws found in one project are very likely to be searched for in other projects, and good ideas introduced in one project are often picked up by the others. Furthermore, with a diverse ecosystem of libraries, good ideas can be experimented with safely in individual projects.
Or a spreadsheet? (Sure, a small fraction of people will have monster multi-tab sheets, but they're idiots.)
Web browsers get a big win from multi-processing, but not parallel algorithms.
Linus is right: most of what we do has limited need for massive parallelization, and the work that does benefit from parallelization has been parallelized.
This is kind of silly. Rendering, indexing and searching get pretty easy boosts from parallelization. That applies to all three cases you've listed above. Web browsers especially love tiled parallel rendering (very rarely these days does your web browser output get rendered into one giant buffer), and that can apply to spreadsheets too.
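As a toy illustration of why tiled rendering parallelizes so easily, here's a sketch that shades independent tiles in a thread pool and stitches them back into one framebuffer. The "shader" is a made-up gradient, not anything a real browser runs; the point is that tiles share no state, so they map cleanly onto a worker pool:

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILE = 256, 256, 64

def shade(x, y):
    # Stand-in for real rasterization work: a simple gradient.
    return (x + y) % 256

def render_tile(origin):
    # Each tile depends only on its own coordinates: no shared state.
    ox, oy = origin
    return [[shade(ox + x, oy + y) for x in range(TILE)] for y in range(TILE)]

def render_parallel():
    origins = [(x, y) for y in range(0, HEIGHT, TILE)
                      for x in range(0, WIDTH, TILE)]
    with ThreadPoolExecutor() as pool:
        tiles = list(pool.map(render_tile, origins))
    # Stitch the tiles back into a single framebuffer.
    frame = [[0] * WIDTH for _ in range(HEIGHT)]
    for (ox, oy), tile in zip(origins, tiles):
        for y in range(TILE):
            frame[oy + y][ox:ox + TILE] = tile[y]
    return frame

frame = render_parallel()
```

Real compositors do essentially this, except the per-tile work (layout, painting, compositing) is heavy enough that the parallel win is substantial.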
A better question is how much parallelization we need for the average user. While the software algorithms should scale nicely to any reasonable processor/thread count, on the hardware side you do have to ask how many cores we really need, especially since a lot of users are happy right now. But targeting these sorts of operations as a single thread is also the entirely wrong approach. It's not power efficient for mobile users, and it drastically limits the gains your code will see on new hardware, while competing code bases pass you up.
To be fair, teacher pay sucks. We all know it. There isn't a debate there.
So you don't exactly make a solid point by saying "Hey look! Women dominate in all the crappy low paying jobs! How are they oppressed?"
Do women dominate in teaching because they choose to go into teaching, or because society is corralling them into teaching because it's pretty much on the bottom end of the career ladder?
I'll tell you, any time I look at teaching (which I'd be interested in), I go "hell no" when I see the pay scales, and go back to my normal engineering job.
If you think discrimination is not a thing, perhaps you're deluded enough to think that women, as a large group, are all collectively putting themselves into low paying, low recognition work.
For bonus points, break out the gender ratios in teaching by pay scale. You'll find that as the level of academia and pay increases, the ratio of women declines. Gee, that's funny.
3: Other countries' laws. People in the US don't realize that Thailand's lese majeste laws apply here? Well, they do, and an American can get shipped over there for breaking them, due to extradition treaties. Same with Turkey and the Kingdom of Saudi Arabia. In theory, someone handing out flyers for their pagan festival or church bulletins could be shipped over there to be executed for violating Islamic sharia law. Privacy is important, since it isn't just domestic LEOs but LEOs of foreign countries who can press charges and have US citizens answer for them. Right now it tends not to be enforced, but the laws are on the books, and the pastor who was televised burning a Koran might find himself in Riyadh facing an imam and a crowd with rocks and a can of gasoline.
Errrr, no, that's totally wrong. Where did you learn this stuff?
If you commit an illegal act in Thailand and then enter the United States, there is a chance that the US could return you to Thailand. If you do something in the United States that is illegal in Thailand but not illegal in the United States, it does not matter at all. Only US law applies to acts committed in the US.
I don't know where you learned your understanding of extradition law, but this is way out in left field. Maybe you should lay off the internet conspiracy crack for a while.
Seriously, learn a few things about extradition. It only applies to crimes committed in the country trying to get their hands on the person.
Well, now I'm reading specs on USB 3.0 controllers. Ugh. There's a lot on mapping a bus address to a memory address for DMA, but nothing addressing the security implications of doing so, or what devices are allowed to do; just broad hints like "the buffer has to exist in a DMA-able part of memory," without saying whether that's a security constraint or a hardware one.
It would be nice to see a follow-up article on whether and how USB 3.0 protects against these things, because I'm not a kernel USB developer, so while I know DMA is there, I don't feel like I could dissect these implementation specs myself.
Same thing as a PCIe / PCI / CardBus / ExpressCard device with a boot ROM or flash: they load pre-boot. At least on non-Mac systems you can go into the BIOS and turn off option ROMs, or set it to EFI-only mode.
Apple exposes a bunch of pre-boot firmware options on the command line, but I'm not sure if you can disable pre-boot EFI drivers from there.
I'm pretty sure that in the case of USB 3, DMA is a function of the host controller; a device by itself cannot inject into arbitrary memory. This Thunderbolt "vulnerability" is the equivalent of the Windows autorun-on-insertion feature that was disabled years ago. Except this one functions above the level of the current user (i.e., much worse).
I'm looking up DMA for USB 3. Although there are some ways to secure DMA (like a whitelist of addresses/sizes that are safe to write to), all of the advertised functionality of USB 3, such as the sustained data rates, would be very hard to achieve without direct access to memory. That's why FireWire ruled live data streaming for so long: DMA made its rates reliable, whereas USB's dependence on the controller and CPU for memory transfers made the throughput flakier.
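To make the whitelist idea concrete, here's a sketch of the kind of check an OS or IOMMU could apply before letting a device touch a requested address range. The addresses and region descriptions are made up for illustration; this models the policy, not any real controller's interface:

```python
# Regions the OS has marked safe for device DMA: (base address, size).
# These values are invented for the example.
ALLOWED_REGIONS = [
    (0x1000_0000, 0x0010_0000),  # e.g. a driver's pre-pinned bounce buffer
    (0x2000_0000, 0x0004_0000),  # e.g. a USB transfer ring
]

def dma_allowed(addr, length, regions=ALLOWED_REGIONS):
    """True only if [addr, addr+length) lies wholly inside one safe region."""
    if length <= 0:
        return False
    end = addr + length
    return any(base <= addr and end <= base + size
               for base, size in regions)

assert dma_allowed(0x1000_0000, 4096)      # inside the bounce buffer
assert not dma_allowed(0x0000_1000, 4096)  # arbitrary kernel memory: denied
assert not dma_allowed(0x100F_F000, 8192)  # straddles a region boundary
```

The catch, as the post says, is that every transfer now pays for a lookup on the host side, which is exactly the kind of overhead that threatens USB 3's sustained-rate claims.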
Thunderbolt is more like USB to the user - it's a thing you use to connect untrusted devices to your system. You wouldn't expect that plugging in a USB thumbdrive would magically own your system (well, maybe you should, because it's happened in the past, but I think it's fair to say that it shouldn't). You'd think that plugging in a random Thunderbolt device would be designed to be safe. Apparently not: apparently Thunderbolt is unsafe by design.
USB 3.0 has this exact same feature (DMA), so yes, yes you should expect a USB thumb drive to be able to do this.