
Comment Re:Or, it could be unrelated to actually extending (Score 1) 286

more charging stations

In the city, there's a (slow) "charging station" located within 10 metres of the road. It's called an "electrical socket," and while 110V 15A is a slow charge, the "infrastructure" is plentiful and extremely common.
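
To put "slow" in numbers, here's a back-of-the-envelope sketch. The pack size, breaker derating, and charger efficiency are assumptions for illustration, not figures from anyone's spec sheet:

    # Back-of-the-envelope: charging an EV from a standard 110V/15A outlet.
    # Assumptions (illustrative): 24 kWh pack (early Leaf-sized), 80%
    # continuous-load derating on the breaker, ~90% charger efficiency.
    voltage_v = 110
    breaker_a = 15
    continuous_a = breaker_a * 0.8                      # 12 A continuous draw
    charger_efficiency = 0.9

    power_kw = voltage_v * continuous_a / 1000          # ~1.32 kW at the plug
    pack_kwh = 24
    hours = pack_kwh / (power_kw * charger_efficiency)  # ~20 h for a full charge

    print(f"{power_kw:.2f} kW -> about {hours:.0f} hours for a {pack_kwh} kWh pack")

Slow indeed, but fine for overnight top-ups, which is the point.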

Of course, public use plugs are extremely rare, but given how many people gather around plugs to charge their smartphones...

Comment Re:Do that for the laptops as well (Score 1) 51

Although I question how much of a benefit this will really be. As it is, even without heat pipes, smartphone thermal throttles are usually set WELL below the CPU's junction temperature limit - the reason is to prevent other components (like the battery) from getting too hot. I remember talking to some Sony engineers, and IIRC, the CPU thermal throttle in most Xperia Z family units is not set to protect any of the internal components, but to protect the user's hand. Fujitsu's tricks might actually raise the junction temperature at which a CPU can operate without burning the user.

Actually, thermal control is a compromise. ARMs traditionally consumed about 1mW/MHz, which was great back when everything ran at 500MHz or less. These days, with "octacore" processors running at 2.5GHz, you're looking at 10-20W in a tiny package with poor cooling (because Package-on-Package stacks the RAM on top of the SoC). Add in everything else and, going full tilt, you could easily have to deal with 15+W.

Given the thermal resistance of the package, you can reach the junction temperature limit (125C) stupidly quickly.

One analysis of a chip I saw had 2 cores going 100%, while cores 3 and 4 had to be thermally limited to 50% to keep the junction temperature down. And the throttling needed to start well before the limit was reached - otherwise you'd overshoot it.
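
To see how quickly the numbers close in on that limit, here's a minimal sketch of the steady-state estimate Tj = Ta + P x theta_ja. The theta_ja and ambient values are assumptions picked for illustration, not from any datasheet (2 full cores plus 2 at 50% is roughly the 3-core case):

    # Rough steady-state junction-temperature estimate for a phone SoC.
    # Illustrative assumptions: 1 mW/MHz per core (the rule of thumb above),
    # theta_ja of 10 C/W for a poorly cooled PoP package, 35 C ambient.
    MW_PER_MHZ = 1.0
    THETA_JA_C_PER_W = 10.0
    T_AMBIENT_C = 35.0
    T_JUNCTION_MAX_C = 125.0

    def junction_temp_c(cores: int, freq_mhz: float) -> float:
        """Steady-state Tj = Ta + P * theta_ja with `cores` cores pegged."""
        power_w = cores * freq_mhz * MW_PER_MHZ / 1000.0
        return T_AMBIENT_C + power_w * THETA_JA_C_PER_W

    for cores in (2, 3, 4):
        tj = junction_temp_c(cores, 2500.0)  # 2.5 GHz, as above
        state = "THROTTLE" if tj > T_JUNCTION_MAX_C else "ok"
        print(f"{cores} cores @ 2.5 GHz -> Tj ~ {tj:.0f} C [{state}]")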

Then there's the system configuration - is the CPU beside the battery? Then you'd want to keep it from getting too hot (the junction can be at its maximum while the case top is only at 45C, because of thermal resistance). Or use the metal in the screen as a heatsink.

A large amount of heat in a SoC is also conducted away through the balls - ground and power planes that double as heatsinks aren't uncommon.

Comment Re:Long time... (Score 1) 240

You'd think. But you'd be wrong. Digg did it. Firefox did it. GNOME did it. Even Slashdot damn near did it. UI is about elegant discoverable interfaces between user and computer, and if this means expensive testing and actually listening to feedback that says "don't fix what isn't broken," so be it. UX, by contrast, relies on bogus metrics to justify change for its own sake - said change always requiring the hiring of more UX people, for some strange reason.

UX has become a cancer upon the profession. UXtards destroy products in order to leave their creative stamp on them. They lie to the marketroids and the C-suite by convincing them that change for its own sake is value-add. The existence of UX personnel in your organization ultimately results in a loss of marketshare and mindshare. Fire them all before your customers do.

And don't forget the compliant press, which believes that shiny means different.

Because you know what the biggest complaint about iOS6 was? That the UI, which had changed little since iPhone OS 1.0, was "dated" and "outmoded".

UIs SHOULD be incredibly stable - they SHOULD get out of the way. The only way a UI is dated or outmoded is if it gets in the way of the user. (E.g., how iOS used to do notifications).

It's not just UX designers deciding to revamp everything - it's the press deciding that just because your windows have looked the same for 2 years, they need a redesign.

That's really why Apple bothered to go flat in iOS7, partially implement it in Mavericks, etc. The press was basically calling Apple out for being stubborn with "stale" UIs.

Comment Re:The profession is in decline (Score 1) 154

Sure, but there are many areas of EE where demand has fallen. Programmable logic has drastically reduced the need for boards full of TTL chips. FPGAs, and even many ASICs, are designed with fully synchronous digital logic that requires zero knowledge of most EE concepts, and can be handled by any kid bright enough to master Verilog/VHDL. My company has done several successful FPGA projects, none of which involved anyone with an EE degree. ADCs, DACs, PWM, and DSPs come built into many microcontrollers, which themselves increasingly come on standard PCBs, with free downloadable libraries to handle all the interfacing.

And who do you think designs those things?

EEs are very much in demand; however, they're not in demand for the old "computer engineer" or "programmer" style jobs.

Digital logic design still commands a small premium because it involves a LOT of advanced technology.

Though if you want to be in a field that's in resurgence, analog IC design is it. Even in the digital world, high-speed digital signals behave in fundamentally analog ways that, if your experience is in HDLs, you're not going to figure out easily. Analog designers can command six figures easily, especially as modern PHYs are analog in nature, so you have to do mixed-signal ICs. Just because Joe End User doesn't have to worry about it doesn't mean someone doesn't.

Then there's plenty of analog design out there. A popular field is power engineering - you know, utility scale. Utilities all over the world are hurting because there really are only a handful of graduates in power engineering - not enough to replace the growing crowd that is retiring. (Yes, the flashy nature of computers and technology has sapped the talent pool for other subdisciplines.) Enough so that starting salaries are close to, if not above, six figures. Even those who want to retire are often asked to hang on because there's no one to replace them, or no one to pass the institutional knowledge to.

There's plenty of RF work as well - WiFi and the like are easy to use, but only because the RF guys made them simple enough to use. Even so, goof the design and you'll be wondering why you have limited range.

Comment Re:Wireless charging hit mainstream ~ 1-2 years ag (Score 1) 184

Personally, I hate fumbling with MicroUSB cables and my phone. I don't exactly have sausage fingers, but trying to put in that cable when I'm half asleep, the light on my nightstand is off (and I've been reading an eBook), and the end of the cable is loose *somewhere* on the nightstand is really annoying.

Other than the loose-cable "problem" (which most people solve with a bit of tape, a binder clip, or some other mechanism, including $10 "solutions" that basically hold the end down), the real problem is that you're complaining about micro USB, a horrendous connector.

Sorry, this is an Apple article - and the Lightning connector is much nicer to deal with. If you wonder why Apple still made their own connector instead of using micro USB, think about why the USB Type-C connector was invented.

Everyone blasts Apple over their proprietary connectors. Yet there are valid reasons why they exist - just because something is a standard doesn't mean it doesn't suck (like micro USB).

Comment Re:Intel chip better than Qualcomm? (Score 1) 77

It is pretty difficult to get perfect performance out of a cell modem; the underlying theory is pretty complex, and translating these complex algorithms into a practical working implementation is incredibly difficult. Neither Intel nor Mediatek knows how to close the gap. Qualcomm is probably the only company in the world that has the knowhow and brainpower to do this.

Actually, Intel probably has plenty of experience making modems - Intel's chips aren't their own designs; they purchased Infineon, one of the big modem chip manufacturers out there. If it wasn't Qualcomm, it was Infineon.

In fact, the first iPhones used Infineon modems. Though one reason to use Infineon was extreme power management - so much so that it was responsible for killing AT&T's network. Basically, the modems immediately killed the data channel when the transfer was over, so if you were web surfing, the phone would open a bunch of data channels, transfer the content, then shut them down. The end result was an overloaded control channel (opening and closing data channels is a control message). With enough iPhones, this consumed all the control channel bandwidth. End result? Dropped calls, because a tower with a full control channel means the phone cannot do a handoff. It's why AT&T, despite having the worst call-drop history, had some of the best data transfer rates (a full control channel has no relation to how many voice/data channels are available).
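
A toy model of why that hurts - every number below is made up for illustration, not an AT&T figure; the one real mechanic is that each channel setup and teardown costs a control-channel message:

    # Toy model: aggressive data-channel teardown vs. control-channel capacity.
    # Capacity and rates are invented; setup + teardown = 2 control messages.
    CONTROL_MSGS_PER_SEC = 200       # assumed tower control-channel capacity
    SETUP_TEARDOWN_MSGS = 2

    def control_load(phones: int, transfers_per_min: float) -> float:
        """Fraction of control-channel capacity eaten by short transfers."""
        msgs_per_sec = phones * transfers_per_min / 60 * SETUP_TEARDOWN_MSGS
        return msgs_per_sec / CONTROL_MSGS_PER_SEC

    # A modem that holds the channel open pays the cost once per session;
    # one that drops it after every fetch pays it per object.
    for phones in (100, 1000, 5000):
        load = control_load(phones, transfers_per_min=6)
        note = "  <- handoffs start failing" if load >= 1 else ""
        print(f"{phones} phones: {load:.0%} of control channel{note}")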

As for Intel, maybe Intel is trying to court Apple - with modem chips at first, then maybe with the SoC business.

Comment Re:LAPD Police? (Score 1) 160

Of course, it could be the LAPD needs to justify the huge expense of patrolling from Ghetto Birds instead of ground-based black-and-whites, and they're not at all bothered by the statistical insignificance of the small sample trotted out here as causation.

Ground patrol might be difficult in LA traffic, I would imagine... and flying LA isn't much fun either (lots of air traffic to contend with), but at least there's no chance of running into gridlock.

Comment Re:The next big bubble? (Score 2) 54

Uber is just forcing free market economics on governments that don't want it, and surprise surprise, prices plummet while service improves greatly. Get rid of the damn medallions and be done with it.

The problem is the free market sucks for utilities.

Uber works, but only on cherry-picked routes and times. Taxis are heavily regulated not just in drivers and licensing, but also in what they can do. For example, most taxis are required to pick up drunks and take them home, and dealing with a drunk is not an easy thing (think having to clean up your car afterwards). Likewise, most taxis must pick up fares regardless of color, creed, or other discriminatory factors. And they have to cover the whole city - they may not want to go into a low-rent district, but if they accept the call, they have to.

Uber drivers, though, are free to do none of those things. If you don't want to pick up some guy because he's black, just drive on. (In many places, a taxi driver doing this would be forced to call another taxi AND wait for that cab to arrive - they're not allowed to drive off.)

Then again, taxi companies are evil. But I suppose it's all OK until you find yourself partying on a Friday night, unable to get home because there are no taxis and Uber isn't willing to pick up people who might throw up in the vehicle.

Comment Re:What of other more disastrous modes of failure? (Score 1) 204

This experiment only documents the survivability of the NAND Flash itself, really. I've had two consumer SSDs and at least one SD fail completely for other reasons; they became completely un-usable, not just un-writable. In the case of the SSDs at least, I was told it was due to internal controller failure, meaning the NAND itself was fine but the circuits to control and access it were trashed. I suppose a platter-drive analog to that would be having the platters in mint condition with all data intact but the servo coil melted, or something.

Since I've only owned three consumer SSDs and two of those died from a mode of failure that wasn't even addressed by this experiment, what am I to make of the real value of the results? They certainly have no meaning for me, but YMMV.

Well, this test is to figure out whether the lifetime of an SSD is adequate - everyone knows flash life is limited, but is it limited to the point where you should cringe at every write, or will it handle enough data that you needn't worry about it?

In this case, the tests show it's closer to the latter.
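
For a sense of scale, a quick endurance estimate - the P/E cycle count, write amplification, and daily write volume are illustrative assumptions, not numbers from the test:

    # Back-of-the-envelope SSD endurance estimate.
    # Assumptions: 3,000 P/E cycles (consumer MLC), 250 GB drive,
    # write amplification of 2, 20 GB of host writes per day.
    PE_CYCLES = 3000
    CAPACITY_GB = 250
    WRITE_AMPLIFICATION = 2.0
    HOST_WRITES_GB_PER_DAY = 20

    nand_writes_gb = PE_CYCLES * CAPACITY_GB                 # 750 TB to the NAND
    host_writes_gb = nand_writes_gb / WRITE_AMPLIFICATION    # what the user sees
    years = host_writes_gb / HOST_WRITES_GB_PER_DAY / 365

    print(f"~{host_writes_gb / 1000:.0f} TB of host writes, "
          f"~{years:.0f} years at {HOST_WRITES_GB_PER_DAY} GB/day")

Even with pessimistic assumptions, something else (like the controller) will likely die long before the NAND wears out.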

As for disastrous failure modes, the most common one is FTL (flash translation layer) table corruption. The tables map the externally visible sectors to the internal flash (you want to wear-level the flash so no one block gets unduly worn out, even if you repeatedly write to the same sector).
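
A minimal sketch of what such a table does - a toy page-level FTL with naive wear leveling, not any real controller's design:

    # Toy flash translation layer: logical sectors -> physical pages.
    # Real FTLs add garbage collection, journaling, and much more.
    class ToyFTL:
        def __init__(self, num_pages: int):
            self.mapping = {}                    # logical sector -> physical page
            self.erase_counts = [0] * num_pages  # wear per physical page
            self.free = set(range(num_pages))

        def write(self, sector: int, data: bytes) -> int:
            # (The data itself isn't modeled here - only the mapping.)
            # Pick the least-worn free page so no single page wears out first.
            page = min(self.free, key=lambda p: self.erase_counts[p])
            self.free.remove(page)
            old = self.mapping.get(sector)
            if old is not None:                  # old copy becomes garbage:
                self.erase_counts[old] += 1      # erase and recycle it
                self.free.add(old)
            self.mapping[sector] = page          # THIS table is what dies if
            return page                          # power fails mid write-back

    ftl = ToyFTL(num_pages=8)
    for _ in range(5):
        ftl.write(0, b"same sector, new physical page each time")
    print(ftl.mapping, ftl.erase_counts)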

The problem is that in cheap SSDs those tables are cached in RAM for speed, but the drives don't have much in the way of a backup power supply to dump the cache back to the media. If power fails during a write-back, the table gets corrupted. On the next boot, the drive fails to load its tables, and it's dead.

Those you can usually save by issuing an ATA Secure Erase command, which resets the table mappings back to the defaults.

Or get better SSDs that build in enough capacitance to do an emergency write-back.

Comment Re:But it's still a Chromebook... (Score 1) 139

Actually that means it runs Linux natively, which is kind of a big draw from my perspective. I'm considering getting one, but will not be running ChromeOS on it if I do.

Only if by "big draw" you mean you like kludges. Sure, it may be the Linux way, but still.

Yeah, you CAN run Linux on it. You can also run Windows (it has SeaBIOS in it). But doing either means you have to hit Ctrl-D within 30 seconds of every power-up (or reboot) to boot into your "alternate" (non-ChromeOS) OS; otherwise it times out and drops into a recovery mode that waits for you to insert a recovery USB stick. Not a huge hassle, in that you can turn it off and on again, wait for it to reach the point where it finds an unsigned OS, and hit Ctrl-D - but still not elegant.

So yes, you can, but it's not a Linux laptop by far.

Comment Re:This ex-Swatch guy doesn't have a clue (Score 1) 389

The other fatal assumption is that Apple is going to sell 20M watches.

Android and Pebble combined barely tilt at 1M (Samsung's latest-generation offerings barely sold 300K combined).

So now Apple is going to sell not just as many Apple Watches as all existing smartwatches combined, but 20 times as many? Granted, interest is high, and even a product like the iPad was mocked as being completely without reason - but still.

Comment Re:Squeezeplay (Score 1) 37

but connected to proprietary products instead of a server that runs on almost any hardware

I never understood why people liked the Squeezebox better than the alternatives. I had an AudioTron, and it required zero software installation on Windows and OS X, and one useful package on Linux. It relied solely on SMB and didn't need any indexing server or anything. You gave it a user account and it could either self-discover the shares, or you could explicitly point it at your music share, and it indexed that.

Needing a special server is sorta like having to use iTunes to load your MP3 player.

Comment Re:Tools for modifying open hardware designs (Score 1) 78

There's also gEDA which is an open (GPL'd) EDA suite including a schematic editor, PCB layout tool, and a bunch of other EDA tools.

The big thing with open hardware is simply getting the hardware - RPi and Arduinos are popular because it's easy to get the hardware for minimal cost, and many people make it on behalf of others (well, not the Pi, but that's because of Broadcom).

Open hardware requires the ability to make money from (i.e., commercialize) the design. This is not the evil "we will sell your design to make millions" theme; it's so companies can take open hardware designs and build them for you. Or at least assemble you a kit.

Nothing screams "useless" more than seeing an NC (non-commercial) label slapped on an open hardware design, because it means that if I want to build it, I have to source everything myself instead of going to some company that has it all kitted up, or even assembled, so I can just click "buy it now".

Comment Re:A few embedded strings and timestamps? (Score 1) 129

What the summary said was that the timestamps are consistent with an 8-5 day in those time zones, not that the timestamps came from those timezones. Timestamps aren't UTC anything -- they're milliseconds since epoch (generally), and the OS converts on the fly when displaying. I can't speak for the NSA, but core hours are 10-3 for many government workers, and many people go in to the office early to beat traffic. Also, the NSA is under the DoD, and DoD tends to get an early start. All of that is consistent with what one would expect to see.

And to address the GP, the odds of finding a string that matches a codeword, especially a unique codeword, are very slim. Probably millions to one. You're not going to find, say, "XKEYSCORE" in Microsoft or Apple source code. That's the most convincing evidence -- the timestamp stuff is just icing.

I expect to see future exploits released with standardized timestamps and obfuscated strings.

I find it very circumstantial, and more akin to fitting the evidence to the crime. I mean, are the only software developers who work normal business hours on normal workdays in the Eastern timezone all working for the NSA? I find that extremely hard to believe, even more so when you consider that a lot of developers do work on the East Coast (sorry, software development is not an exclusively West Coast thing).
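
The quoted point about epoch timestamps cuts both ways - the same instant reads as a different "work day" depending on the zone you render it in. A quick illustration (the timestamp and zone list are arbitrary, chosen only for the example):

    # One epoch timestamp, three renderings. The value itself is zone-free;
    # an "8-to-5 Eastern" pattern only appears after you pick a zone.
    from datetime import datetime, timezone, timedelta

    ts = 1390000000  # seconds since epoch, chosen arbitrarily
    for name, offset_h in [("UTC", 0), ("US Eastern", -5), ("Moscow", 4)]:
        local = datetime.fromtimestamp(ts, timezone(timedelta(hours=offset_h)))
        print(f"{name:>10}: {local:%Y-%m-%d %H:%M}")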

Even a symbol like "Backsnarf" sounds like something that could plausibly be used in malware to indicate reverse snarfing of whatever it is.

Ditto XKEYCODE. Sounds like something someone might call a keyboard map - either the mapping driver or a keymap.
