
Comment: Re:Battery capacity loss over time (Score 1) 65

by dgatwood (#48684855) Attached to: My laptop lasts on battery for ...

Not really. Ever since flat LiPo packs became the norm, I haven't seen a huge decline in performance over time. Long gone are the days when a two-year-old battery got half the life of a new one.

However, with current hardware, battery life varies wildly, depending on whether you're actually doing something of consequence with your laptop or merely using its screen to show a picture. The people who sit there running basic apps like Word, PowerPoint, Safari/Firefox/Chrome (with Flash disabled), etc. are likely to get very close to the rated battery life, because for all intents and purposes, their laptops are just sitting there idling with the screen lit for 99% of that time, with all but one CPU core powered down completely. By contrast, people who are actually using the CPU—compiling, running Photoshop, running Lightroom, using audio, etc.—burn through the battery in a fraction of the rated time.

Comment: Re:I hate to do it (Score 1) 65

by dgatwood (#48683843) Attached to: My laptop lasts on battery for ...

Apple got a lot of bad press a few years ago for massively overestimating their battery life and is now quite a bit more conservative. They've gone from claiming 6 hours to claiming 8, but at the same time they've shipped lower-power CPUs and doubled the size of the battery. There was a Kickstarter for an open-source-compatible laptop with very similar specs to the MBP floating around last week; they were also claiming 8 hours on battery, but they were shipping a battery half the size of the MBP's. I guess they think Linux users keep the screen turned off.

Yes, all of those things can help. Of course, if you're running builds in Xcode or similar, you'll still be lucky to get three hours from that eight-hour battery. And if you're using musical notation software like Finale (which keeps the audio hardware "hot" continuously), you'll be lucky to get four. Lightroom? Photoshop? Same deal.

The problem is, what I want in a laptop is to be able to use it all day without running out of battery, and by "use", I actually mean use, not sit around and browse the web.

IMO, Apple still needs to get serious about battery life, which can only be achieved by putting in a much higher-capacity battery. If they offered one model of MacBook Pro 15" Retina in the old (pre-retina) case (but sans optical drive), they could stick in a battery that would truly last an entire day under actual use.

Comment: Re:Now we're getting somewhere (Score 1) 125

by dgatwood (#48682395) Attached to: Tesla Roadster Update Extends Range

They can be a great option for folks who only occasionally travel long distances, because 98% of the time, you're not dragging the extra weight of an ICE around, and you're (ostensibly) using clean energy to power your car, and you only use gasoline when you're traveling too far for electric cars to otherwise be practical. For people who drive long distances regularly, obviously a hybrid or even a traditional automobile would be a better choice (less pollution, better emissions controls, and better fuel economy in all likelihood).

Comment: Re:uh - by design? (Score 1) 163

by dgatwood (#48670849) Attached to: Thunderbolt Rootkit Vector

All drivers on OS X are already required to tell the operating system ahead of time that a device is about to DMA to memory. That's how the kernel is able to configure the VT-d IOMMU hardware to allow those devices to access RAM without worrying about 64-bit address spaces. So the OS already knows precisely which pages of physical RAM should be accessible by PCIe devices using DMA. If other pages of RAM are accessible, that's a bug.

Similarly, making the Thunderbolt controller's IOMMU mappings be driven by that part of the kernel should not break any drivers at all, by definition, because PCIe devices shouldn't be issuing DMA requests except at driver-preapproved locations. So AFAIK, the only way such a fix could break any device would be if that device was trying to do something really dangerous, like reprogramming one of the PCI bus bridges, or reflashing the computer's EFI firmware....

I mean, I suppose that some drivers might be inadvertently configuring a mapping for a page of memory that also contains executable code or class instances (with function pointers). In that case, fully fixing this would also require Apple to modify the IOMemoryDescriptor class to ensure that the DMA-enabled pages are whole pages owned by the descriptor, but that change should still be pretty minor and should result in only a modest amount of wired kernel memory bloat.

In the worst case, such a change might require a CPU-driven copy-on-prepare and/or copy-on-complete to work around drivers that provide their own, non-page-aligned virtual addresses for a memory descriptor. That would cause a big performance hit for those few drivers, but I'd expect most driver developers to quickly fix those design mistakes to eliminate it. (And that's assuming this isn't done already; for some reason, I thought those buffers had to be page-aligned or you'd get a panic, but I'm not seeing anything about it in the docs, so I might be remembering wrong.)
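
To make that concrete, here's a minimal IOKit-style sketch of the allocation pattern I'm describing: give the device a dedicated, page-aligned buffer so that the pages eventually exposed through the IOMMU contain nothing but that buffer. The helper name is mine and error handling is trimmed; this is an illustration of the idea, not Apple's actual implementation.

    // Illustrative sketch only: allocate a dedicated, page-aligned DMA buffer
    // so that the pages mapped for the device contain no unrelated kernel data.
    #include <IOKit/IOBufferMemoryDescriptor.h>
    #include <mach/vm_param.h>

    static IOBufferMemoryDescriptor *
    AllocateDeviceDMABuffer(vm_size_t length)        // hypothetical helper name
    {
        // Round the request up to whole pages so the descriptor owns every page
        // that will later be exposed to the device.
        vm_size_t rounded = (length + PAGE_SIZE - 1) & ~((vm_size_t)PAGE_SIZE - 1);

        IOBufferMemoryDescriptor *buf =
            IOBufferMemoryDescriptor::inTaskWithOptions(
                kernel_task,            // kernel-owned allocation
                kIODirectionInOut,      // device may read and write it
                rounded,                // whole pages only
                PAGE_SIZE);             // page-aligned starting address
        if (buf == NULL)
            return NULL;

        // A real driver would also zero the buffer here so that stale kernel
        // data never becomes visible to the device.
        return buf;
    }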

Comment: Re:uh - by design? (Score 1) 163

by dgatwood (#48664559) Attached to: Thunderbolt Rootkit Vector

The whole point of requiring every driver to call the prepare method on an IOMemoryDescriptor object before telling a device to do DMA, and to call the matching complete method when the I/O is done, is so that the OS can create and tear down mappings in the various IOMMU hardware to protect the system as a whole from buggy devices (particularly those that don't understand 64-bit address spaces). If that isn't happening, I'd argue that it is a kernel bug, and given the security implications, a pretty serious one.
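
For anyone who hasn't written an IOKit driver, that contract looks roughly like this from the driver's side. This is a hypothetical sketch rather than code from any real driver, and a production driver would normally get the device-visible address through an IODMACommand rather than getPhysicalSegment:

    // Hypothetical sketch of the prepare/complete contract described above.
    #include <IOKit/IOMemoryDescriptor.h>

    static IOReturn
    DoOneDMATransfer(IOMemoryDescriptor *md)    // md describes the I/O buffer
    {
        // prepare() wires down the pages and is the OS's opportunity to set up
        // IOMMU (VT-d) mappings that cover exactly this descriptor's memory.
        IOReturn ret = md->prepare(kIODirectionOut);
        if (ret != kIOReturnSuccess)
            return ret;

        // Look up an address the device is allowed to target, then program the
        // hardware with it (register programming omitted here).
        IOByteCount segLength = 0;
        addr64_t    deviceAddr = md->getPhysicalSegment(0, &segLength);
        (void)deviceAddr;

        // ... start the transfer and wait for the completion interrupt ...

        // complete() is the matching teardown: the wiring and any IOMMU
        // mappings for these pages go away, so nothing on the bus can reach
        // this memory anymore.
        return md->complete(kIODirectionOut);
    }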

Comment: Re:uh - by design? (Score 2) 163

by dgatwood (#48664515) Attached to: Thunderbolt Rootkit Vector

Thunderbolt is rather different, because the devices are basically PCI-E cards with a Thunderbolt transceiver bolted on. As such, they can do anything that a PCI-E card can do, including accessing all RAM. PC Card devices have the same issue, and so does FireWire. It's a serious issue, and tools that exploit it have been available for a while, both open source and commercial.

Here's what I don't get. Back when the G5 came out, Apple used a custom piece of hardware called DART to create a boundary between the I/O address space used by PCI devices and the physical address space used by RAM. It required device drivers to explicitly configure mappings before a PCI device could scribble on RAM, and limited those devices to scribbling over the ranges specified by the OS. That hardware went away with the Intel transition, of course, but most of the newer 64-bit Intel hardware has a feature called VT-d that does essentially the same thing. AFAIK, the 64-bit OS X kernel uses that functionality by default if the hardware supports it, so all of those tools should be completely non-functional on recent Macs running Mountain Lion and later. And I think I remember reading somewhere that Thunderbolt controllers contain an address translation table as well.

With that in mind, how is this Thunderbolt device somehow gaining the ability to tickle hardware that probably doesn't live on the PCI bus, on the opposite side of the Thunderbolt controller, at a location that wasn't explicitly configured for DMA by a device driver? Does it involve rebooting the machine and exploiting a driver bug in EFI?

Comment: Re:No, not "in other words" ... (Score 1) 292

by dgatwood (#48664425) Attached to: Hotel Group Asks FCC For Permission To Block Some Outside Wi-Fi

On the other hand, there is only so much wireless spectrum available that is set aside for 802.11x. Ever been to a big event in a hotel where everybody and their brother has the hotspot function enabled on their phones, is carrying around those mobile hotspot things, folks are running classes in conference rooms with their own wireless APs set up for their students, etc.?

IIRC, cell phone hotspots deliberately limit their maximum gain to minimize interference. They typically have an indoor range of about 66 feet—essentially, your hotel room and one or two rooms on either side. Based on that, I suspect that those personal hotspots are more likely to be a symptom of the problem than its actual cause.

If you're seeing poor performance on a hotel's infrastructure Wi-Fi network, odds are good that either:

  • The hotel doesn't have enough APs.
  • The hotel's external bandwidth is insufficient for the traffic.
  • The hotel's DHCP server ran out of IP addresses for the number of clients.
  • The hotel's DHCP server is buggy and sends out offers based on what the client asked for, without properly checking that the request is sensible (e.g. that it is in the right subnet, that no other client is using that address, etc.); see the sketch after this list.
  • The hotel's systems are, in fact, down.
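
As a rough illustration of the DHCP sanity check mentioned in the list above (hypothetical names and types, not any real DHCP server's code):

    // Hypothetical sketch of the sanity check a DHCP server should apply
    // before honoring a DHCPREQUEST for a specific address.
    #include <cstdint>
    #include <set>

    // Addresses and masks as host-order IPv4 values; illustrative types only.
    static bool ShouldAckRequestedAddress(uint32_t requested,
                                          uint32_t subnet,
                                          uint32_t netmask,
                                          const std::set<uint32_t> &activeLeases)
    {
        // The requested address must fall inside the subnet this server serves...
        if ((requested & netmask) != (subnet & netmask))
            return false;   // respond with DHCPNAK instead of blindly ACKing

        // ...and must not already be leased to some other client.
        if (activeLeases.count(requested) != 0)
            return false;

        return true;
    }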

The first one is usually the main problem. Most hotels' networks were designed under the assumption that folks will have at most one Wi-Fi-capable device per room, and that most folks won't be using them at any given point in time. When you have a bunch of geeks with three or four devices, all talking at once, the spectrum can get clogged pretty badly.

There are two possible fixes for that problem. The first fix is to deploy 802.11ac more broadly. For clients that support it, this reduces congestion considerably, both by providing more channels and by reducing interference through beamforming. The second fix is to greatly increase the number of 802.11b/g APs (and, to a lesser degree, 802.11n APs) so that you can reduce their maximum receive and transmit gain settings, effectively creating a large number of very small clusters of nodes instead of a few big ones. Note that these solutions are not mutually exclusive.

Comment: Re:Fine (Score 1) 292

by dgatwood (#48664115) Attached to: Hotel Group Asks FCC For Permission To Block Some Outside Wi-Fi

Because they don't allow you to bring your own drinks and snacks...and you don't want to be "forced to purchase theirs at a dramatically inflated price". I'm just curious how strong your principles are.

That's not really a fair comparison, for several reasons:

  • Movie theaters are considered public locations. Hotel rooms are considered private locations. Just as a hotel cannot authorize the police to search your hotel room without a warrant, it also has limited authority to govern what you do in your room, so long as your actions do not cause damage to the room.
  • You stay in a movie theater for the duration of a single movie. Unless your metabolism is insane, you can trivially eat before you go inside and wait to eat again until after you leave. By contrast, you stay at a hotel for several days. Going without Internet service for a week is usually a much bigger burden than going without snacks for 90 minutes.
  • Health code regulations often prohibit businesses that sell food from allowing outside food to be consumed on the premises, so even if theaters wanted to allow you to bring food in, they may not be able to do so.
  • Movie theaters charge high prices for food to make up for their minuscule profit on movie tickets. Hotels that charge hundreds of dollars per night are making a lot more than a buck per person, so they really don't have any good excuse for ripping people off by charging $15 per day for Internet service.

So although they might be similar in principle, the differences in practice are so large as to render one a meaningless annoyance that we can live with, while rendering the other a serious act of interference that cannot be tolerated.

