Comment 2GB? (Score 1) 827

What I don't get about the new MacBook Air is the default 2GB of memory. When every $500 PC at Best Buy is shipping with 4GB, you need to make it standard. We're spending $1000 on a MacBook Air, so it's silly to cheap out on the memory. Yeah, you can upgrade to 4GB for another $100. But you shouldn't need to special-order to get what should be the standard.

Core 2 Duo is disappointing but not unexpected. NVIDIA's chipset doesn't work with Nehalem and probably never will.

SSD is nice, but we'll have to see what the performance is. Depending on the controller it could range from poor to excellent.

Honestly, Apple did what they could. If you need to buy now, both of the MacBook Air models are nice - if expensive - machines. Getting a 1.6GHz Core 2 Duo and decent graphics in a 2.3lb package is really cool. Paying $1400 to get the configuration that this machine should have as stock (1.6GHz, 4GB, 128GB) is less so, but compared to other premium machines (ThinkPad X201s, Vaio Z) you're not paying much of a premium - you're just trading less performance for less size/weight.

The problem is that this category is about to be redefined. AMD is releasing Ontario and Zacate early next year, which will contain an out-of-order processor with similar performance to the Core 2 Duo in the Air, plus a Radeon 5400-class GPU that will handily beat the GeForce 320M in the Air. All of this in 9/18W (less than the Air) and a single chip, at a low price.

Intel is releasing Sandy Bridge next year. It will have similar graphics performance to the GeForce 320M, plus CPU performance that will blow it away. All while using less power, in a single chip.

You can already buy 11.6" notebooks with better CPU performance than the Air. The Acer 1830 series runs around $700 with an i5 and 4GB of DDR3. It has the same resolution screen as the 11.6" Air. It has a hard drive, which increases the size and weight. It also enables you to have 500GB of storage or to upgrade to a fast SSD (Intel, SandForce, etc.) for around $200. The Acer also has Gigabit Ethernet and an HDMI port.

The Air's advantage is that it's built better (aluminum vs plastic), that it's thinner/lighter (2.3lbs instead of ~3lbs), and that it runs OS X. But I can't help but think that the Mac would be better off with an i5 instead. Most people are not going to play games on an 11.6" notebook, both because of thermal issues (25W+ of CPU+GPU in that form factor means lots of heat/noise) and because PC gaming isn't that popular in general. I think most people would trade a slower Intel GPU for a faster CPU, and the Air could easily take a ULV Core i5 or i7 (18W).

Ultimately, Sandy Bridge or Zacate is the answer to this category, not a last-gen Core CPU. Apple made compromises that are acceptable but not ideal. Unfortunately, that's hard to swallow in a $1000+ machine.

Comment The problem with C++ (Score 5, Insightful) 553

C++ is successful for one big reason: it provides most of the advantages of C with the conveniences of an object-oriented language. Performance is excellent (close to C, which with a good compiler is close to hand-written assembly in most cases) and there's enough capability that you can write just about anything in it, including things that you would never consider writing in managed languages (like device drivers or the VM for those managed languages).

The problem is that the developers of C++ have trouble saying "no". There are a bunch of C++ features that aren't really necessary, but that exist either out of legacy or because someone thought it would be a good idea.

Look at Google's C++ style guide: http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Inheritance

Like most users of C++, Google uses a severely restricted subset of the language. The thing is, most of what Google has left out is quite frankly unnecessary for 99.9% of C++ users. But we're all stuck with it anyway.

Once you get past some of the C-legacy anachronisms and restrict C++ to a small subset of its functionality, it's actually a nice language. The problem is that we can't take things out at this point.

Comment Re:The Volt uses a planetary gearset (Score 2, Interesting) 657

The Volt uses a planetary gearset where the main gear is driven by the primary electric motor. The planet and ring gears can also optionally be driven by the engine and a second assist electric motor when needed. This allows the computer to continuously vary the power source that is driving the wheels. The only part of this equation that was not previously known was that the engine can directly deliver torque to the wheels under certain circumstances (without going through a generator).

This is exactly how the Prius works. The plug-in version of the Prius (currently in testing) even has the ability to charge the battery from the grid, just like the Volt.

It's a sensible, efficient design. The problem is that it's neither particularly new nor particularly innovative, and it underscores the fact that the Volt is probably overpriced, which leaves GM open to being undercut by competitors like Honda or Toyota.

What's particularly ridiculous to me is that the Volt only goes ~40 miles on a 16kWh battery pack (2.5 mi per kWh). The Leaf goes ~100 miles on a 24kWh battery pack (4.2 mi per kWh). That tells me that the Volt is too big and too heavy.

Comment Re:Want Open - Get a Cheap NetTop (Score 1) 299

The simple answer is noise. Many of the nettops (such as the Foxconn Netbox, which I suspect is what you bought on Newegg) are annoyingly loud.

What we need is a nettop using AMD's Ontario APU.

An Atom D525 is 13W (and the Atom 330 is actually worse despite being only 8W because you also need a multi-watt chipset). For that you get a slow (by modern standards) in-order CPU and absolutely terrible graphics (that don't support HDMI or hardware video decoding).

The Ontario APU is supposed to be 9W, including the memory controller and a Radeon 5400 class GPU. That means that you don't need an external GPU (as you do in NG-ION). It also has a dramatically faster out-of-order CPU.

Once you get down to that level, passive cooling starts to become an option.

Comment Re:What open channels? (Score 1) 107

No they did not. Analog 2-51 and Digital 2-51 are exactly the same spectrum. In fact a lot of the stations had to do a "live cutover" from analog to digital at midnight June 12, because they occupy the exact same spot. These stations include WPVI, WGAL, WBAL, WHYY, WJZ, and so on.

While that's technically true, ATSC also allows channels to be remapped - so what you see as "Channel 9" might actually be UHF channel 31.

I don't know of any market where all 51 channels are being used.

Comment Re:not protects (Score 4, Informative) 1066

The disk drives are also controlled. The disk drives don't let you just get the bits out - they will only give you data if you have a key, etc. I don't know the specifics but this is a *well* thought out system. They have serious control over this shit.

That's not actually true. You can absolutely get almost all of the data off of a Blu-ray disc without breaking AACS. What you can't get (without a hacked drive or an un-revoked player certificate) is the volume ID, which you need to decrypt or duplicate the disc.

Note that Blu-ray drives have basically been irrevocably broken at this point, so this is sort of moot.

Comment Re:Works fine on my e52 (Score 1) 657

What can I say.

Leave the country, move somewhere with a 21st century mobile infrastructure.
Learn to smoke, casually.
Lose weight.
Wear better clothes.
Talk with an accent.
Use a Nokia.

In short, become European. Life is better.

800x480 AMOLED display, 1GHz Snapdragon CPU, and 512MB of memory beats pretty much the entire Nokia lineup.

And I like 48-oz beverages that are strategically shaped to fit in my cupholder, $3.50 T-shirts from Wal-Mart and Target, double cheeseburgers, and $2.65/gallon gas.

Comment Re:Why would Intel care about Rambus? (Score 4, Informative) 95

They basically thought everyone was going to start using their computers for watching movies, video editing, and little else. So they designed the P4 with a horribly long pipeline that meant any context switching resulted in terrible performance.

If you don't know much about CPU architecture, please don't make a bunch of random statements about the P4.

First, the pipeline length has minimal impact on the speed of context switches. Context switches are relatively infrequent (compared with the CPU frequency) and relatively slow (typically several hundred cycles at a minimum).

The major downside of pipeline length comes from branch mispredicts. Branch mispredicts hurt you more because you have to flush more wrong instructions. Additionally, the scheduler is less able to parallelize instructions because instructions with data dependencies need to be spaced further out in the pipeline (forwarding doesn't help you unless the result has actually been computed, and in long pipelines there are typically several execution stages). Some of this can be improved with tactics like better branch prediction or multi-threading, but ultimately you give up IPC in a longer pipelined design.

Second, the P4 was not designed for "watching movies, video editing, and little else". It was designed to be fast. When Intel was designing the P4, the IPC-bag-of-tricks was starting to run out. The P6 (Pentium Pro, later evolved into the Pentium II/III) already had all the common improvements including multi-level, fast on-chip caches, a fully pipelined design, out-of-order execution, branch prediction, and multi-issue. The bottom line is that Intel realized (like everyone else) that making the chip wider or increasing caches really didn't do much for performance anymore. To keep seeing dramatic improvements in single-threaded performance, we either needed a completely new bag of tricks or we needed much higher clocks. Intel figured that they would make a CPU that (architecturally) could hit very high clocks, which means very deep pipelines to meet timing constraints. Yes, P4 would have lower IPC, but it would more than make it up in clock speed.

For a while, it worked. P4 was not a huge winner at first but over time (with Northwood) the P4 managed to out-gun AMD's lineup and become one of the fastest CPUs available. It doesn't matter if the Athlon could retire more instructions per clock, the P4 was clocked dramatically higher.

The problem is that somewhere around Prescott, the process technology ran out of gas. Leakage current became an issue more quickly than Intel had anticipated, thermal issues became problematic, and despite Intel's tricks (sockets that could handle more power, BTX, etc.) it became clear that people just weren't going to put a 400W CPU in their machine.

None of this is really a problem with the P4 architecture. With the right cooling and power, P4 can hit 8GHz. That's higher than any Intel or AMD CPU before or since.

You'll hear people say that P4 was a marketing decision. While I'm sure that the high clocks did benefit marketing, people who know the actual architects will tell you that it had more to do with chasing single-threaded performance than it had to do with marketing.

Some people say that the P4 was optimized for media. While it's true that highly predictable code (e.g. loopy scientific code and media encoding) performs especially well on the P4, compared with the Athlons of the day (before Athlon 64) so did everything else. You can't compare a 1.5GHz Athlon XP to a 1.5GHz P4 and argue that the Athlon is better because it's faster. P4 was specifically designed to make up for its lower IPC with very high clocks.

The whole thing was just a bad idea. AMD pretty quickly realized what was going on, avoided Rambus RAM like the plague, and concentrated on better performance at lower clockspeeds. AMD made huge inroads against Intel during this time.

AMD made inroads very late in P4's life after they launched Athlon 64, which was a much newer architecture with a lot of improvements (like a lower latency on-die memory controller). This was long after Intel had stopped using RDRAM.

Intel knew P4 was a turkey before Prescott had even shipped. They canceled Tejas as a result. But designing a new CPU architecture is a multi-year process, and Pentium-M (despite being an excellent notebook CPU) would not have competed well against Athlon 64.

I'm not going to argue that P4 was the right design choice. It clearly wasn't, and even Intel will acknowledge that now. But it wasn't a terrible design and the problems that it did have later in its life were not due to architectural issues.

Since P4 there has been considerably more emphasis on integrating things like power modeling into the design of CPUs. In 10 years architects went from basically not caring about power to it being one of the most important considerations with every design element.

Comment Re:I bought shoes. (Score 1) 762

There is no subway around here (Boulder, CO). There is a decent bus system, which I use when it makes sense.

I have friends in Golden, CO. It's 45 minutes by car and about an hour and a half by bus (and requires two transfers). The direct (GS) bus doesn't run after 6PM and doesn't run on weekends, which means that you have to take a very indirect route that can take 3+ hours depending on the schedules.

I have other friends in Fort Collins, CO. It's 60 minutes by car, and there is no scheduled bus service. The best you can do is take the bus to the airport (1 1/2 hours), then take the bus back to Fort Collins (another 1 1/2 hours). All in all, you're looking at 3+ hours and $35 each way.

I know this because for 3 years in college in Boulder, I didn't have a car. I bummed rides from friends, walked, and rode the bus. It is *possible* to get almost anywhere in this area without a car, but most of the time it's not practical.
