The kids will never learn, but I wager the dog won't piss on it more than once.
Yeah, but how is the dog supposed to pass that knowledge onto its successor?
Ironically, while Apple executives laugh at the president at the suggestion of iPhone manufacture in the States, Samsung makes its chips in the US. Really, it's a mystery why Apple is being considered at all; they are as anti-American as they come. If I were cynical, I would suggest it's part of the deal to bring some Mac mini manufacturing back to the States... although we have seen very little actual manufacturing as yet.
But continue your hater-ade. Samsung still supplies the A6 processors to Apple. And for the final irony: the SGS4 uses the US-made Exynos processor OUTSIDE North America, and the TSMC-fabbed Qualcomm Snapdragon INSIDE North America.
On my supposedly "archaic" x86 desktop, I can download any Linux distro I feel like using and use the exact same installer to set up a 5-year-old desktop or next month's Haswell.
I bet it wouldn't work on a NEC PC-Engine. Or an original Xbox.
You see, the thing is your PC is just ONE platform. Everything about the PC has followed the same standard dating all the way back to the original IBM PC. RAM is always in the same location on every PC, and even today we still have the stupid 640KiB-1MiB memory hole (for display). I/O ports are still in the same locations they always were.
Whereas on ARM, NOTHING is standard. Some SoCs have RAM at 0x0. Others at 0x40000000. Or 0x80000000. Or 0xC0000000. Boot ROM can be at 0x0. Or 0x10000000. Or wherever else. The serial ports? Anywhere. Display? Registers are randomly here or there, at least the memory is somewhere in RAM space and usually programmable.
There are pluses and minuses on both sides. The minus for the PC is the horrendously discontiguous RAM space (there's another RAM hole around 3GiB-4GiB for memory-mapped peripherals). The plus is that one OS image will work on all platforms, because the kernel knows where everything is and that won't change.
ARM Linux has actually undergone huge revisions to accommodate the fact that each SoC is different - it started with platform_device, which separates I/O addresses from drivers, and proceeded to the device tree, which expands on that even further. With proper coding, it's possible for one kernel binary to boot several different SoCs. Of course, getting it to work across multiple manufacturers and multiple OEMs is much harder.
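The device-tree approach works by describing each SoC's layout as data instead of hard-coding it into drivers. A minimal sketch of what such a node looks like (the node name, compatible string, addresses, and interrupt numbers here are all made up for illustration, not from any real SoC):

```dts
/* Hypothetical UART node. The kernel matches a driver on "compatible"
   and reads the register window's base address and size from "reg",
   so the same kernel binary can drive the same UART IP block even if
   another SoC places it at a completely different address. */
uart0: serial@10009000 {
    compatible = "acme,example-uart";
    reg = <0x10009000 0x1000>;  /* base address, window size */
    interrupts = <0 5 4>;       /* interrupt specifier for this SoC */
    status = "okay";
};
```

Move the same peripheral to 0x40009000 on another chip and only the `reg` property in that board's device tree changes, not the driver.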
You mean like that stupidity of charging twice for the same shopping cart serial number when the final button is pressed twice? You get this shit when you let morons design it.
You mean the brilliance of being able to ding a customer for twice their shopping cart value? Extra profit from stupid and/or impatient people.
And when they chargeback, you can provide proof and cancel their order and still keep the other payment. And tie it up with confusion because you can easily switch which payment you're talking about to cause mass confusion.
The moronic thing would be to not accept a credit card, like one site I go to that always claims the payment was denied - call the bank and they show the payment was authorized. Now that is leaving money on the table by not accepting an order from a customer. But charging a customer multiple times for the same order? Brilliant business tactic.
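The double-charge bug being mocked above is usually fixed by making the final "place order" action idempotent - charging the same cart twice becomes a no-op. A minimal sketch, assuming the cart's serial number can serve as the idempotency key (all names here are hypothetical, not any real payment API):

```python
# Sketch of idempotent checkout: pressing the final button twice
# results in exactly one charge, keyed on the cart's serial number.

processed = {}  # cart_id -> result of the first (and only) charge


def charge(cart_id, amount):
    """Charge a cart at most once, no matter how many times it's submitted."""
    if cart_id in processed:
        return processed[cart_id]  # duplicate press: return the original result
    result = {"cart": cart_id, "charged": amount}  # stand-in for the real payment call
    processed[cart_id] = result
    return result


first = charge("cart-42", 19.99)
second = charge("cart-42", 19.99)  # impatient double-click on the final button
assert first is second             # only one charge was ever made
```

Real payment processors expose the same idea explicitly (a client-supplied idempotency key on the charge request), so a retried or double-submitted request returns the original result instead of creating a second charge.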
The radio, and associated amplifiers, will generate the majority of the heat. Just look how much longer a cell phone will operate if you disable wireless. One must also take into consideration that wireless routers operate at higher power levels.
Actually, cellphones are higher powered - I believe they top out between 0.5 and 1W max transmit power. Your wireless router is typically anywhere from under 50mW to 100mW, though it's possible to get "long-range" ones that do 250mW.
Of course, a cellphone dynamically adjusts its power - in urban areas it's typically close enough to a cell tower that it can crank the transmit power way down. This, of course, saves battery (RF power amplifiers aren't efficient at all - they waste a lot of power). If you live in a poorly covered area, you'll notice your battery life is a lot lower as a result of the phone having to crank up the power to maintain the link.
CDMA phones are interesting - the power each one uses scales with how busy the cell is: the more phones transmitting, the higher the noise floor and the lower the SNR each one sees. You've hit the capacity limit when everyone's transmitting at max power and the SNR is still too low for successful correlation.
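The effect described above can be sketched with a toy interference model: every other active phone's signal raises the noise floor for yours. All the numbers here are illustrative, not real link-budget figures:

```python
# Toy CDMA uplink model: each phone's signal arrives at the tower with
# power p_watts, and every other active phone is interference.
# Illustrative numbers only -- not a real link budget.

def per_phone_snr(n_phones, p_watts=1.0, noise_watts=0.01):
    """SNR seen by the tower for one phone when n_phones all transmit at p_watts."""
    interference = (n_phones - 1) * p_watts
    return p_watts / (interference + noise_watts)


# As more phones join the cell, per-phone SNR falls even at max transmit power:
assert per_phone_snr(2) < per_phone_snr(1)
assert per_phone_snr(50) < per_phone_snr(10)
```

Once `per_phone_snr` drops below whatever the receiver needs for successful correlation, the cell is at capacity - exactly the limit described above.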
Google Glass is visible, right there on the wearer's face. What about all those cell phones that can do video recording, right from your shirt pocket, with no visible indication? Cameras are getting pretty small these days. If someone is up to something nefarious, the camera lens is going to be one of his shirt buttons.
Thing is, Glass is like spectacles - once it becomes common enough, you don't know who's recording and who isn't. (And apparently the recording indicator is a lightpipe to the display, so a properly crafted "glassware" app can simply display black to avoid cluing others in to the recording.)
For cellphones, either they're in breast pockets with the camera sticking out (and I've very rarely seen people put cellphones in their breast pockets), or people are holding them out in ways that make it obvious they're recording.
With Glass, people get used to seeing them like glasses, whereas most people don't hold their cellphones out at arm's length in front of them to use them - recording that way is generally pretty obvious. As would be a cellphone sticking out of someone's breast pocket.
One other reason is, well, if you need to bend over, breast pockets tend to empty out...
My first thought when I read the summary was that hell had frozen over: Congress is thinking about privacy!
My second thought was that *Congress is thinking about privacy*. This can only be a good thing. I think we should encourage them, saying "you're on the right track, keep going that way" rather than being derisive.
Parent is right, government surveillance/data collection is a huge privacy issue. That does not mean it's the only privacy issue. It is easier for our inherently timid Congresscritters to start by pointing the finger outward from Washington, and I'm OK with that because it at least starts the policy discussion we so desperately need.
No, what happened is that the interest of politicians and the people they're supposed to represent aligned in this one case.
You see, imagine people using Glass and recording stuff around them. Let's say it captures a politician coming out of a less-than-completely-upstanding business (which could be anything someone can raise a fuss about). That image is stored, uploaded to Google, and possibly tagged. Now any political opponent can go and claim that said politician believes in X because they just came from a store that supports it.
Think of anything mildly controversial and see how it can get blown up. Perhaps it was a store selling porn - I'm sure the family-first groups would use that at any opportunity (and I'm sure it's probably a common enough event, but one that can be used as leverage).
Basically, they're worried about politicians being captured on film doing stuff. It may be normal behavior that gets twisted around like a quote out of context, or it could be someone capturing actual backroom deals taking place, etc.
And the cynical side of me says it's because the politicians don't want any recording of them doing anything "bad" like being seen with industry executives that support them, or being hypocritical, etc.
Amtrak is a commuter service, not freight.
The only difference is whether the cargo is self-loading or not.
RIM certainly has SOME part of the code and as such they can give out the relevant stuff to the authorities, including the BASE KEYS.
That 'government certification' nonsense is just that.
You're confusing the consumer "internet edition" with "Enterprise edition".
Internet edition BlackBerrys are what you get when you go to your carrier and buy one on a BlackBerry plan with email and all that. What happens is your BlackBerry connects to RIM's servers and gets your mail through RIM proxying to your carrier's email inbox.
BES, though, is different. You pair a BlackBerry with BES and the two generate a set of keys. Your BlackBerry proxies its connection through RIM to reach BES, and from then on the data transferred is encrypted using the keys set up during pairing. End to end, it's encrypted.
What RIM did with India was set up a RIM server there, so internet edition phones proxy through it and then on to the carrier's email server. When a BES-attached phone goes through it, the link is still encrypted because that server does not have the key.
Basically, all BB traffic goes through RIM or a RIM server set up in the Middle East or India or wherever. From there, the server is what contacts the mail server you're using. As that part is unencrypted, they can decrypt your email and such at that point.
HOWEVER, use BES, and what happens is the RIM server connects to your BES server, and your BES server then communicates with your BlackBerry via the pre-shared key. No one can snoop on that email because it's encrypted with keys only your BlackBerry and BES know. Even with those in-country servers, they can't examine your traffic because the server does not have the key.
Of course, the bigger question is who buys a BlackBerry and does NOT use BES with it...
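The BES model described above - a relay in the middle that forwards traffic it cannot read, because only the two endpoints hold the key - can be sketched in a few lines. The "cipher" here is a hash-based XOR keystream, a deliberate toy for illustration only, NOT real cryptography and nothing like BES's actual cipher:

```python
# Toy sketch of the BES model: handset and BES server share a key set up
# at pairing time; the RIM relay in the middle just forwards opaque bytes.
# The hash-based XOR "cipher" below is for illustration only -- NOT real crypto.
import hashlib


def keystream(key, length):
    """Deterministic byte stream derived from the key (SHA-256 in counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


def crypt(key, data):
    """Symmetric XOR cipher: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))


shared_key = b"paired-at-BES-setup"  # known only to the handset and the BES server
message = b"quarterly numbers attached"

ciphertext = crypt(shared_key, message)  # handset encrypts
relayed = ciphertext                     # the RIM relay just passes bytes along
assert relayed != message                # the relay sees only ciphertext
assert crypt(shared_key, relayed) == message  # BES decrypts with the shared key
```

The point of the sketch: moving the relay server into any particular country changes nothing, because the relay never holds `shared_key`.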
Reforming patent law would be simple: software should simply not be patentable. You can copyright it, sure, but no patents.
Thing is, software is very special.
Prior to the computer age, humans generally created "stuff" or "art". Stuff meant mechanical things, which are easily patentable but not copyrightable. "Art" things were copyrighted because they generally didn't serve any purpose other than aesthetics or entertainment. Of course, one could create mechanical art, but the utility generally wasn't there, and useful machines that also looked good were generally patented because they were stuff.
But software is neither. It's written, which implies copyright, but it can be stuff as well - like when it's a fundamental part of hardware (embedded software). Or maybe it IS hardware, when you write your RTL code for an FPGA or ASIC.
And therein lies the problem. Software is everywhere, and saying "we can't patent software" means that if I invent something that uses software in a fundamental way, the whole system of software+hardware may be unpatentable - yet if I were to implement the same logic as a complex piece of hardware, it suddenly IS patentable. Say I come up with a way to do radio transmissions more efficiently, closer to Shannon's limit than ever before. In an SDR it would be implemented in software, but implemented with standard radio building blocks - mixers, detectors and the rest - it would obviously be hardware. The former would get no protection, while the latter would.
The reality is we probably need to come up with a new form of IP protection called, well, software: it would cover software, or anything written that requires hardware to perform some action. So for my hardware+software invention, I could patent the hardware and protect the software inside it the same way I could protect a complex assembly of chips and mechanical pieces used in place of a line of code. Likewise, RTL code would be covered under the software protection. As would Windows. Or Word. Or Linux. Or whatever else.
The problem is software is unique - it does things, and sometimes in clever, novel ways. At the same time, it's also fixed onto a medium, which means you have two competing protections for it, neither completely adequate. Copyright may be good for the source, but it doesn't handle the binary side very effectively (is the output of a compiler copyrightable, or just the part that underwent human creativity, i.e., the source code?). Patents make sense for some software (say, better ways of controlling a piece of hardware) but not for others (e.g., application software). And it's unique because software can be hardware.
That's because the reality is that most people don't use advanced mathematics (or, these days, hardly any mathematics at all) in their day-to-day lives. Most simple mathematical exercises in the modern world have been automated, and the complex stuff is largely the purview of engineers and other specialized pros. Academia is the only place most people ever encounter it, and very few people spend their whole lives as students (my son being a rare exception).
The article doesn't refer to advanced math, just basic arithmetic - addition, subtraction, multiplication, division. These are such basic skills that they're used daily in many activities, including ones you don't realize involve them.
A basic use is shopping - is the 12oz bottle for $4 a better deal than the 16oz bottle for $5? What is the current approximate value of your shopping cart? Including tax? (Do you have enough money?) If you're having a party and they like your cake, how much extra ingredients do you need to buy so you can bake enough for everyone?
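The first shopping question above is just a unit-price comparison, which takes one line of arithmetic:

```python
# The 12oz-for-$4 vs 16oz-for-$5 comparison from above, as unit prices.

def unit_price(price, ounces):
    return price / ounces


small = unit_price(4.00, 12)  # ~$0.333 per oz
large = unit_price(5.00, 16)  # $0.3125 per oz
assert large < small          # the 16oz bottle is the better deal
```

Doing the same thing in your head - "4/12 is a third of a dollar per ounce, 5/16 is a bit less" - is exactly the kind of daily arithmetic the article is talking about.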
One thing I found that helped was the "brain training" games. They fell out of favor when it turned out they can't increase your general brain health, but the footnotes all noted that players improved significantly in the specific areas they trained. So doing 100 basic math problems daily did improve basic arithmetic ability.
Heck, doing 100 problems shouldn't take more than a couple of minutes. You don't need much - adding, subtracting, multiplying and dividing with the digits you find on a multiplication table is good enough, i.e., add/sub/mul with single digits, and division with a double-digit dividend and a single-digit divisor.
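A daily-drill generator following those constraints is only a few lines. This is a hypothetical sketch, not any particular brain-training product's algorithm:

```python
# Sketch of a daily arithmetic drill matching the constraints above:
# add/sub/mul on single digits; division keeps to the times table,
# with a two-digit dividend and a single-digit divisor (no remainder).
import random


def make_problem(rng):
    op = rng.choice("+-*/")
    if op == "/":
        divisor = rng.randint(2, 9)
        quotient = rng.randint(2, 9)
        dividend = divisor * quotient      # stays on the multiplication table
        if dividend < 10:                  # force a two-digit dividend
            dividend = divisor * 9
        return f"{dividend} / {divisor}", dividend // divisor
    a, b = rng.randint(0, 9), rng.randint(0, 9)
    if op == "-" and b > a:
        a, b = b, a                        # keep answers non-negative
    answer = {"+": a + b, "-": a - b, "*": a * b}[op]
    return f"{a} {op} {b}", answer


rng = random.Random(0)                     # seeded so the daily set is reproducible
drill = [make_problem(rng) for _ in range(100)]  # the daily 100
```

Each entry is a `(problem_text, answer)` pair, so checking yourself - or timing the couple of minutes - is trivial.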
Sounds like a liability. "That car accident wasn't my fault. That particular robotic bartender always manages my alcohol intake perfectly, so it is the one at fault for screwing up and letting me drink a little too much, too fast. It was programmed to cut me off one drink sooner but a bug let it give me one more drink, and made me t-bone a taxi full of nuns on vacation."
I'll take the bartender I know, who pours me a beer the moment I walk in the door and makes sure I have a ride on the rare occasion when I allow myself to get carried away. He's a better bullshitter than any robot I've met, too.
In a lot of places, that is actually true - the establishment can be found responsible for a DUI incident as well and forced to provide compensation. And yes, the establishment includes commercial public bars as well as hosts of a private party.
The general reasoning goes that the best person to judge is the person providing the drinks. It could be the bartender (since all the drinks come from them), the waitstaff (who carry them to you and can tell if you've perhaps had a bit too much), or the hosts (it's their party; they should be encouraging responsible drinking). Alcohol impairs judgement, so expecting the drinker to stay in control while impaired is generally not feasible.
So yeah, a robotic bartender will be held to similar standards - first, it knows who had what drinks and thus should have some basic intelligence about saying some guy should not have put back 10 shots in the past hour, and second, the establishment should be monitoring its guests even if the robot is unable to properly determine a patron's inebriation, because the patron is not in a position to judge for themselves.
So not only is your bartender doing you a favor by checking that you have a ride, they're covering their ass as well, because the family of the kid you accidentally ran over can sue the bar you went to for damages.
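The "10 shots in the past hour" cut-off a robot bartender would need can be sketched with the well-known Widmark BAC estimate. All constants and thresholds below are illustrative textbook figures, not a real system's parameters (and certainly not medical advice):

```python
# Rough cut-off check using the Widmark BAC estimate. The constants and
# the 0.08% threshold are illustrative; a real system would need far more care.

GRAMS_ALCOHOL_PER_SHOT = 14.0   # roughly one US standard drink
WIDMARK_R_MALE = 0.68           # Widmark body-water distribution ratio
ELIMINATION_PER_HOUR = 0.015    # % BAC metabolized per hour (typical figure)


def estimated_bac(shots, weight_kg, hours, r=WIDMARK_R_MALE):
    """Estimated blood alcohol content (percent by mass) after `hours`."""
    grams = shots * GRAMS_ALCOHOL_PER_SHOT
    bac = grams / (weight_kg * 1000 * r) * 100
    return max(0.0, bac - ELIMINATION_PER_HOUR * hours)


def should_cut_off(shots, weight_kg, hours, limit=0.08):
    return estimated_bac(shots, weight_kg, hours) >= limit


# Ten shots in an hour for an 80 kg patron is well past the limit:
assert should_cut_off(10, 80, 1)
assert not should_cut_off(1, 80, 1)
```

This is exactly the "basic intelligence" the post above asks for: the robot already knows the count, the pace, and (plausibly) the patron, so the check is trivial to run before each pour.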
do not be worried that you're not embracing all the stuff that the masses embrace.
True, argumentum ad populum is usually a fallacy. But sometimes it isn't. Economies of scale in manufacturing are one case: as the masses have moved from "netbooks" (10" laptops) to tablets, it has become more difficult for a happy netbook user to find a replacement for failed hardware. Communication platforms are another case: if people aren't willing to make their writing available through an open technology such as an Atom feed but instead lock their communication inside the closed systems of Twitter and Facebook because everyone else is doing it, one has to join what everyone else is doing in order to communicate with everyone else.
The reason the masses embrace reinventions of old technology is that modern technology makes it easier to use and understand. Facebook makes it easy to get in contact with your friends - no longer do you have to understand a cryptic email address - you just shoot off a message to "J. Doe" on your list of contacts and can be reasonably sure that you're talking to the right Doe (because you can click their profile to look it up) but also not send it to the wrong Doe (your boss, say). And in one place, you can see what they were up to. At least, that's the marketing glitz that draws people in.
Sure, they could email, view blogs and such, but that's more complex and takes more time than having the information in one place - no cryptic addresses (is this the right Doe?), no uploading and converting photos, etc. (Remember the old jokes about people emailing 50MB worth of photos? They're true.)
Basically, the reinventions are aimed at the non-technical market. It's also core to what Apple does - they repackage existing technology for the masses.
Netbooks were basically a fad because the masses wanted a cheap computer to do their Facebooking and other stuff with - to passively consume content. But the form factor was wrong: small screens, crappy keyboards, and all the other corners cut to repackage a PC for under $300. They killed themselves off when manufacturers, fed up with not being able to make money, started charging more and more ($400 and $500 netbooks became REALLY common - and at $500, they were running into low-end laptops).
Then Apple introduced the iPad, which basically answered a lot of people's netbook needs (like a tablet, a netbook isn't for creating, with its puny keyboard - and the larger ones, again, cost the same as laptops).
Other manufacturers jumped into the same ring realizing they could make much more money selling a tablet rather than a netbook.
The old netbooks are still around - I still see them for $300 - but they aren't movers anymore, as for $100 more you get a glitzier tablet with a larger screen, multimedia, less hassle, etc.
As for techies, there's nothing wrong with the old methods - stability is good too. Just because the masses aren't going to take up vi or emacs doesn't mean vi and emacs users are outdated and antiquated.
Obsolete is what happens when there are far better ways of doing what you're doing but you keep to the old methods. Like, say, using a typewriter over a word processor. With the latter, if you make a typo, you hit Backspace and fix it; you don't have to retype the entire page or dig out the liquid paper or correction tape. Thus the word processor obsoletes the typewriter, even though it's not entirely perfect (after all, a mechanical typewriter works with no power). But the common case was people who had power (and electric typewriters), making word processors far more useful for creating documents.
The old methods of shell windows and vi/emacs text editing haven't been obsoleted because there generally aren't good replacements, and vi and emacs have generally kept up with the times. And people still text and email despite stuff like Facebook, because address books and office productivity suites provide a lot of the same functionality, especially in a business context. Hell, people love to complain about Outlook, but its integration of everything keeps it relevant for business. For personal stuff, though, Outlook requires a corporate server to be effective (as would most office suites, which want to integrate with one), which puts it at a disadvantage against something like Facebook, which is available everywhere with the "server" provided.
Both Larry and Sergei are no longer connected with reality. I don't begrudge them anything, but they are seriously in outer space.
I would add Eric to the list as well. There must be something in the water at Mountain View. Or maybe Google has an RDF that's more powerful than Apple's. Something very bizarre is coming out of their mouths.
Perhaps it's what happens when one is a Glasshole?
That might fly for the advertising, but including download functionality requires a deliberate effort - Microsoft is willfully including a tool with no function except to facilitate the violation of Google's license agreement, and thus copyright. If this ever turns into a court case, MS would probably lose - but they could still drag it on long enough to cost both sides a few million dollars in legal fees, and get a lot of good press if they spin it right.
Given that the way YouTube works effectively gives you a download of the video anyway, all Microsoft did was add the ability to save the file already being downloaded.
Aren't all the open-source folks angry when some company tries to pull this stunt? After all, YouTube doesn't stream the video - it basically shoves the entire video to you. Granted, Google traffic shapes the videos now, but they don't really protect it. Hell, one could in theory use the HTML5 version, right click and save target as.
Microsoft's implementation of a download function is no different from open-source projects doing the same thing with many other services.
If we're going to force these companies to adapt their businesses to not do stupid things, perhaps Google should have to do so for YouTube as well. Hell, there are plenty of YouTube downloaders out there already.
Or it could be a bunch of Google apologists willing to overlook anything Google does, even though it's the same as what others have done - only the others were mocked for the very same actions.
Certainly the game is rigged. Don't let that stop you; if you don't bet, you can't win. -- Robert Heinlein, "Time Enough For Love"