Comment Re:ADD -- Billionaire Edition (Score 1)
As someone who lives in Mountain View, I'd like to second this post. Google WiFi has never worked well. In my experience it hardly ever worked at all. I'd be happy to be rid of it.
Uh, Syria was until quite recently one of our supporters in the region. We've had generally decent-to-good relations with the Assad regime. Relations cooled a bit once he started killing his people, but we tend to take a dim view of those who kill their own people for talking about democracy.
You might be confusing Syria with Jordan, because Syria was most definitely not our friend at any point. They have been a client state of the Soviet Union / Russia for many years, their Alawite (a form of Shia) minority runs the military dictatorship that is sponsored by Iran, and it props up Hizbollah in Lebanon. The US has had an arms and technology embargo on Syria since at least the 1970s, there are no direct flights between the two countries, and there are no banking ties between them.
CNN's comments are equally terrible. It's hard to believe a reputable news organization allows them on its website.
Cost is not a big issue. The big issue is power.
A laser of this type is almost always a chemical laser, because that is one of the best ways to portably produce that much energy in a hurry. The drawbacks of this are:
1) The laser reactant supply needs to be reloaded after every shot or every few shots, which is time-consuming.
2) The power-generating reaction leaves toxic chemical byproducts. If you're defending a base in the middle of the desert, this may not be an issue. If you're defending land that you care about, like your home town, it is a big problem.
If the laser is battery-powered instead, then prepare to wait a long time between firings for it to charge. Even a nuclear reactor on an aircraft carrier doesn't deliver the kind of wattage this needs in a burst; there will have to be a charge-discharge system.
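To put rough numbers on it (purely illustrative assumptions): a 100 kW-class beam held on target for 5 seconds delivers 0.5 MJ. At an optimistic 20% wall-plug efficiency, that's 2.5 MJ drawn from the supply per shot, with 2 MJ of waste heat to dump. If you want to fire again 10 seconds later, the charging system has to sustain 250 kW continuously. That's the scale any battery or capacitor charge-discharge system has to be built around.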
Until the power problems are solved, don't expect to see many lasers shooting down missiles.
Both are faster than the fastest Tegra3.
For what definition of "faster than"? A7 is a pretty weak core. It was optimized for very low power and die area, not for high performance. Tegra3 uses Cortex-A9, which is an older design but actually faster if all else is held equal (same clock speed, equivalent memory subsystem).
(The reason you see quad A7 popping up in cheap Allwinner SoCs is that A7 is tiny. Really tiny. Area has a direct relationship with cost in semiconductor manufacturing. Also, ARM probably charges lower per-unit royalties for smaller / lower performance cores like the A7.)
Someone mod this guy up. The A7 is a single-issue, in-order core with basically no frills. An A9 like the one in the Tegra 3 is a triple-issue, out-of-order processor with a much faster FPU. It blows an A7 away in performance, although it is a much larger and more expensive core.
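To make the in-order vs. out-of-order difference concrete, here's a minimal sketch (function names and sizes are mine, and the numbers are illustrative; compile at -O1 so the loop structure survives, and time each function with your favorite timer). Both loops do the same work, but one is a single dependency chain while the other splits the sum across two independent accumulators:

    #include <stdio.h>

    #define N (1 << 22)
    static int v[N];

    /* One long dependency chain: every add needs the previous sum,
     * so even a wide out-of-order core is serialized on it. */
    long long sum_chain(void) {
        long long s = 0;
        for (int i = 0; i < N; i++)
            s += v[i];
        return s;
    }

    /* Two independent accumulators: an out-of-order, multi-issue
     * core like the A9 can keep both adds in flight and approach
     * double the throughput; a single-issue, in-order A7 gains
     * essentially nothing from the split. */
    long long sum_split(void) {
        long long s0 = 0, s1 = 0;
        for (int i = 0; i < N; i += 2) {
            s0 += v[i];
            s1 += v[i + 1];
        }
        return s0 + s1;
    }

    int main(void) {
        for (int i = 0; i < N; i++) v[i] = i & 0xff;
        printf("%lld %lld\n", sum_chain(), sum_split());
        return 0;
    }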
These are cheap for a reason, and they're unpopular in the rest of the world for a reason.
The Allwinner chips used in these tablets are all ARM Cortex-A8 based. A Cortex-A8 is basically unfit for a tablet. The lowest end tablets sold by Apple, Samsung, Motorola, Sony, Acer, and Asus 4 years ago didn't have a CPU this slow. Just because they can get away with selling these in China doesn't mean that they are worth anything.
"Don't be evil" is the greatest marketing line in the history of technology because so many doe-eyed nerds wholeheartedly believed an advertising company has their best interests in mind.
Intel brings x86 compatibility. But that's no benefit on mobile, and will often be a slight liability.
It's actually a massive liability. I can boot an ARM Cortex-A9 with 1000 lines of assembly. An Atom requires a multi-megabyte binary blob delivered by Intel (source unavailable) simply to turn on. And yes, it does start out in 16-bit real mode, as if anyone wants that. Can you guess which one of these is more debuggable?
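For a sense of scale, here's a toy sketch of the ARM side (the section name, stack address, and build flags are my assumptions, and a matching linker script placing .vectors at the reset address is assumed): a reset vector that sets a stack and jumps into C is nearly the whole job.

    /* Toy bare-metal entry for an ARMv7-A core, in GNU C. Assumed
     * build: arm-none-eabi-gcc -ffreestanding -nostdlib, plus a
     * linker script that puts .vectors at the reset address. The
     * stack address is made up for illustration. */
    void kmain(void);

    __attribute__((naked, section(".vectors")))
    void _start(void) {
        __asm__ volatile(
            "ldr sp, =0x20000\n"   /* set up a stack (hypothetical address) */
            "bl  kmain\n"          /* jump straight into C */
            "b   .\n");            /* hang if kmain ever returns */
    }

    void kmain(void) {
        for (;;) { }               /* "booted": spin forever */
    }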
And of course, the ARM architecture is offered by multiple makers, in all kinds of configurations of core types and numbers, clock speeds and so on. With Intel you get what one single company decides to offer, and that's it. Not directly relevant to us consumers of course, but it does mean it's more likely the ARM set-up in your phone or tablet is adapted specifically for that hardware, not a more generic one-size-fits-all spec.
Today Intel sells three versions of the same desktop chip: a high-end one with all of the features enabled, a midrange one with some features disabled, and a low-end one with most features disabled. Device makers know that once Intel is in a position of strength, they will do to mobile chips exactly what they are doing to desktop chips today.

Let's look at one feature: the SHA hashing engine. Intel Core i7 devices have this built in. Core i3 also has it, but disabled. What if you want a low-end SoC with a SHA engine? Any ARM vendor will sell you one; SHA engines are cheap and easy, and everyone has one. Intel makes you pay for the high-end part just to get a minor feature. Multiply this by all of the other SoC components and you'll see why nobody wants Intel to be in the lead. With ARM, chip vendors are competing on features, performance, and price. Nobody is picking ARM the company; they are picking the ARM ecosystem of chip designers and chip fabricators where everyone competes.
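And this is what SKU segmentation costs you in software: you can't assume the feature exists, so you probe at runtime and carry a fallback path forever. A minimal sketch (x86, GCC 7+/Clang; the bit position is from Intel's CPUID documentation):

    /* Runtime probe for the SHA extensions.
     * CPUID leaf 7, subleaf 0, EBX bit 29 = SHA extensions. */
    #include <cpuid.h>
    #include <stdio.h>

    static int has_sha_extensions(void) {
        unsigned int eax, ebx, ecx, edx;
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            return 0;              /* CPUID leaf 7 not supported */
        return (ebx >> 29) & 1;    /* SHA extensions feature bit */
    }

    int main(void) {
        printf(has_sha_extensions()
                   ? "SHA engine present: use the hardware path\n"
                   : "No SHA engine: fall back to software SHA\n");
        return 0;
    }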
It has a lot of BSD code in it and continues to share code with the other BSDs.
Really? I was under the impression that Apple does not distribute any source code for Darwin on ARM. Please show me where I can obtain the XNU ARM kernel source that is used in iOS.
Why would you need that? The platform-specific part of the kernel is a fairly minor part of the overall code. There's far more code invested in the VM, the FS, the network stack, and the other major kernel subsystems, which are all generic code and distributed to the public, than in the platform-specific implementations of low-level locks, interrupts, and page-table management. The fact that we can't build and run XNU on ARM doesn't mean that we can't share code with it.
iOS is doing even better.
There seem to be some uninformed posters here, so here is the OS X relationship to BSD:
The OS X/iOS kernel is based on Mach, which is a microkernel mashed together with a BSD kernel. It has a lot of BSD code in it and continues to share code with the other BSDs: the PF firewall, the networking subsystem, kqueue, and others, along with ports of DTrace and (before it was removed) ZFS. While Mach is fundamentally different in some ways, to a POSIX binary it looks and feels just like any other BSD system.
The OS X userland is also based on BSD and was originally derived from FreeBSD. It uses the BSD libc and many of the command line tools are from the BSD world (from grep to ssh). It also includes some GNU tools, such as bash. Apple is actively working on replacing many of these, and they recently dropped GCC and GDB and replaced them with Clang and LLDB.
Oh god. Imagine if developer unions do happen and then I'd have to choose between working at a unionized company full of idiots or working at a non-union company that is staffed with Libertarians. I'd probably quit programming forever.
It does sound like fun and I would enjoy it given the right working conditions, though I imagine these are highly unlikely to be found in a military operation.
However, no lawyer can get you the guarantee you're looking for. If you are a male and a United States citizen, you'll remember having registered for Selective Service ("The Draft") before your 18th birthday. Under the right conditions any registered person can be called up for service, all it takes is an act of Congress.
the ARMv8 back end for LLVM was written entirely by one guy in under a year and already performs well (although there's still room for optimisation).
Where can I find this LLVM back end? It does not appear to be on llvm.org.
In the first half of next year, there should be three almost totally independent[1] implementations of the ARMv8 architecture, with the Cortex-A50 series appearing later in the year.
Can you name the three vendors? Qualcomm for sure. Marvell seems likely. Nvidia says they will have a chip out, but I have serious doubts about their ability to deliver.
ARMv8 is not eliminating them, it's reducing the number of instructions that have them. Conditional instructions are useful because you can eliminate branches and so keep the pipeline full. For example, consider this contrived example:
if (a < b)
a++;
On ARMv7 and earlier, this would be a conditional add. The pipeline would always be full; the add would always be executed, but the result would only be retired if the condition were true. On MIPS, it would be a branch (complete with the insanity known as branch delay slots, which if you look at the disassembly of most MIPS code typically means a wasted nop, so you get to burn some i-cache as well), and if it's mispredicted then you get a pipeline stall.
On ARMv8, you don't have a conditional add, but you do have a conditional register-register move and you have twice as many registers. The compiler would still issue the add instruction and then would do a conditional move to put it in the result register. From the compiler perspective, this means that you can lower PHI nodes from your SSA representation directly to conditional moves in a lot of cases.
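To make the lowering concrete, a hedged sketch in C (the function names are mine): writing the select explicitly, as in the second function below, is what compilers typically turn into a conditional select on AArch64, e.g. a CMP followed by CSINC at -O2.

    #include <stdio.h>

    /* Branchy form: on MIPS this is a compare-and-branch (plus a
     * delay-slot nop); a mispredict costs a pipeline flush. */
    int inc_branchy(int a, int b) {
        if (a < b)
            a++;
        return a;
    }

    /* Branchless form: both outcomes are computed and one is
     * selected. On ARMv8 this lowers to CMP followed by CSINC --
     * a conditional select, no branch, no misprediction to pay for. */
    int inc_branchless(int a, int b) {
        return (a < b) ? a + 1 : a;
    }

    int main(void) {
        printf("%d %d\n", inc_branchy(3, 5), inc_branchless(3, 5));
        return 0;
    }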
Basically, 32-bit ARM is designed for assembly writers; ARMv8 is designed for compilers. As a compiler writer, it's hands-down the best ISA I've worked with, although I would prefer to write assembly by hand for ARMv7. I wouldn't want to do either with MIPS, although I am currently working on a MIPS-based CPU with some extra extensions.
Actually, ARM's reasoning is that modern branch predictors on high-end APs can do a good enough job of following a test-and-branch and keeping the pipeline(s) full that there is very little value in conditional instructions on future chips. It's hard to cause a pipeline stall or bubble by branching a few instructions forward or back on these CPUs, since they are decoding well in advance of the execution pipelines. Added to that, there is an energy cost in executing an instruction and throwing away the result. Obviously, not all cases are wins. In the example you noted, a register-to-register mov on a register-renaming system is basically a 0-cycle operation (it never makes it out of the instruction decoder), so it's hard to do better than that.