
Comment: Re:quiet = powerful (Score 3, Insightful) 116

by Christian Smith (#47823955) Attached to: The Quiet Revolution of Formula E Electric Car Racing

A lot of car makers have left F1 in the past; Mercedes has returned, but Honda, BMW and Toyota have left.

Only because they were having their asses handed to them on a plate. Toyota achieved literally nothing in their F1 stint. BMW did get some wins, but weren't competitive enough to justify the investment. Honda ditto, but left at the wrong time (the post-Honda Brawn team won the 2009 championship with the Honda-designed car).

And there are other racing series which may be more road relevant. The Audi R18 e-tron has a diesel hybrid drivetrain with flywheel-based energy storage. Very road relevant and innovative in the field.

Motor racing does help drive innovation, but in a sport where the FIA have virtually done away with any concept of innovation, it's difficult to see how this new formula will enhance the sport or spur innovation in day-to-day cars. Fans are leaving, sponsors are worried, and that means no money and a dead series coming soon.

It's not all about innovation. It's also about the grunt work of refining what you have. That's why Mercedes are dominating even the other teams with identical power units: they've done the best job within the rules as defined.

And there are lots of ways to innovate in chassis and aerodynamic design. The current crop of F1 cars have a very diverse array of front end designs.

And let's be honest, most F1 innovations don't translate to road cars anyway. The biggest influence of F1 and other motor racing has been in engine management and fuel injection. Racing aerodynamics? Moot. Suspension design? Not applicable to most road cars. Sequential gearboxes? Came from bikes anyway. Tires? Irrelevant, unless you only want your tires to last a week.

Comment: Final nail in the Itanium coffin (Score 1, Interesting) 161

by Christian Smith (#47774165) Attached to: Research Shows RISC vs. CISC Doesn't Matter

20 years ago, RISC vs CISC absolutely mattered. The x86 decoding was a major bottleneck and transistor budget overhead.

As the years have gone by, the x86 decode overhead has been dwarfed by everything else on the die: functional units, reorder buffers, branch prediction, caches, etc. The years have been kind to x86, and the decode overhead now looks like noise in overall performance, just an extra stage in an already long pipeline.

All of which paints a bleak picture for Itanium. There is no compelling reason to keep Itanium alive other than existing contractual agreements with HP. SGI was the only other major Itanium holdout, and they basically dumped it long ago. And Itaniums are basically just glorified space heaters in terms of power usage.

Comment: Re: Clever editors. (Score 2) 288

How far is it from Amsterdam to Luxembourg anyway?

A four-hour drive by car (359.5 km), according to Google. I'm curious to know if taking a plane is more energy efficient than a car or train.

Depends on how full the plane, train and car are. A single person in a largish car (he's a CEO, remember) probably won't beat a full short haul flight for the same distance.

Trains are among the most efficient transportation methods (hard wheels on smooth rails = low rolling resistance), but the journey may not be the fastest or the most direct.

Comment: Re:Another misleading headline (Score 1) 236

by Christian Smith (#47475397) Attached to: Nearly 25 Years Ago, IBM Helped Save Macintosh

PowerPC had good performance for several years. When the 603 and 604 were around they had better performance than x86 did. The problems started when the Pentium Pro came out. Even then it was not manufactured in enough numbers to be a real issue. Then the Pentium II came out...

No, I think it was more the Pentium 4 era when Intel overtook Motorola. The PPC G4 design had started to hit clock speed walls, and couldn't scale the FSB up either. While Netburst was a disaster for performance per watt, it did scale clock-wise and had a very fast FSB and memory subsystem, and while everyone else was hovering around the sub-2GHz mark, Intel got plenty of high clock frequency practice.

Once the Netburst FSB was moved to the P6 architecture in the form of the Pentium M, Intel had a winner on their hands, one that has kept them ahead of everyone else to this day.

Comment: Re:Pairing? (Score 1) 236

by Christian Smith (#47475001) Attached to: Nearly 25 Years Ago, IBM Helped Save Macintosh

No, this is stupid, wasteful, unoptimized software that performs like feces compared to a platform optimized piece of software.

Eh? What are you on about?

Yeah, those hand-tweaked 16-bit binaries performed really well on the pipelined i486 processors of the time. Really extracted all the potential out of the advances that were taking the CPU industry by storm.

In case you missed it, I was being sarcastic. "Platform optimised" (read: DOS) programs held the industry back at least a decade, and it's only after we left the 16-bit shackles^W^Wplatform optimized software behind that x86-based platforms started to reach parity with their RISC-based peers.

After all, Doom was famously written in C on a NeXT cube, then ported to x86/DOS and "platform optimised" as a final step.

The whole myth I've heard about software portability for most of my life has never borne fruit that didn't need tweaks for different platforms.

The software I write has had no tweaks since we stopped supporting HP-UX 10. The biggest headache is GUI code, but libraries such as Qt take care of that.

The only performance tweaks we do are upgrading compilers and ensuring we use efficient algorithms (i.e., not O(N^2) when O(N log N) is available).
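
To make the algorithms point concrete, here's a minimal C++ sketch (an illustration, not anything from our actual codebase): finding duplicates with a nested loop versus sorting first.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // O(N^2): compare every pair of elements.
    bool has_duplicates_quadratic(const std::vector<int>& v) {
        for (std::size_t i = 0; i < v.size(); ++i)
            for (std::size_t j = i + 1; j < v.size(); ++j)
                if (v[i] == v[j]) return true;
        return false;
    }

    // O(N log N): sort a copy once, then scan adjacent elements in one pass.
    bool has_duplicates_sorted(std::vector<int> v) {
        std::sort(v.begin(), v.end());
        return std::adjacent_find(v.begin(), v.end()) != v.end();
    }

Same answer either way; only the second one stays usable when N gets large.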

The whole notion in the first place was to expand programming to the masses by giving the appearance of eliminating the need for specialists.
A good intention, to be sure, except for the specialists.
The problem was that a specialist with knowledge of how the hardware operates could write software that took more advantage of, and/or performed better on, a given platform. Things like CPU instruction set options, memory alignment, etc.

There is now a resurgence of platform-optimized specialization thanks to big data. Do you want your humongous data sets processed and analyzed in months or years by the average programmer, or do you want it in days and weeks by the programmer that really, really knows how to squeeze the hardware?

That's right, the demand for hand-optimizing assembly programmers is through the roof.

Do you want your big data software written in months or years, as the programmer tries to squeeze every ounce of performance from the CPU, while your competitor has had the software running for months already and compensated for the lack of optimization by buying an extra rack of servers?

Big data is processed faster by better algorithms, not platform tweaking.

Facebook optimized their platform by JIT compiling their PHP, but the stuff was still written in PHP in the first place by "non-specialists", and the optimization was a relatively small final step. As an added bonus, they're also porting to ARM by basically re-implementing just the JIT compiler for ARM. So not really optimized for any particular platform, just x64 by virtue of it being their primary target platform.

Google use C++, Java and Python, and I'd bet there isn't any hand-optimized Google assembler in any of that mix. They kicked big data butt by using clever, scalable algorithms.

Comment: Re:"Very Long Time?" (Score 1) 79

by Christian Smith (#47422995) Attached to: Study: Why the Moon's Far Side Looks So Different

So.... At the risk of stating the obvious: modern man has been on this planet for around 50,000 years;

Australia has been colonised by "modern man" for longer than 50,000 years. Modern man left Africa more like 100,000 years ago, and if you lifted one of those babies out and plonked him in the "modern world", no one would notice the difference.

Anatomically modern humans are more like 200,000 years old, and I dare speculate that their predecessors gazed at the stars and moon.

Comment: Re:USD/GB? (Score 1) 85

by Christian Smith (#47364105) Attached to: Samsung Release First SSD With 3D NAND

Parent poster here. I use the 840 Pros I mentioned above on my laptop. I already have extensive caching going on with about 12GB of my 32GB of RAM, but it still saturates the SATA bus due to the mostly random nature of the I/Os. It's basically a giant 300GB B+tree with 2MB leaf nodes and roughly a 40% insertion, 40% lookup, and 20% deletion ratio.

Wouldn't something like this or another enterprise drive be a better match for you?

Comment: Re:Run a completely new OS? (Score 1) 257

by Christian Smith (#47223949) Attached to: HP Unveils 'The Machine,' a New Computer Architecture

Everything should be by reference. Copying crap all over is bullshit.

No.

How do you atomically validate and use data that is passed by reference? You might validate the data, then use it, but in between, the source of the data might change it in nefarious ways, leaving you open to a timing-based security attack. Some copies are unavoidable, and single or multiple address spaces make no difference whatsoever in this case.
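
A minimal C++ sketch of that check-then-use race (Request and process() are invented names, purely for illustration):

    #include <cstddef>
    #include <cstring>

    struct Request { std::size_t length; char payload[256]; };

    void process(const char* data, std::size_t len);  // placeholder for real work

    // UNSAFE: checks the shared buffer, then uses it. Whoever else holds a
    // reference can change 'length' between the check and the use.
    void handle_by_reference(const Request* shared) {
        if (shared->length > sizeof(shared->payload)) return;  // validate
        process(shared->payload, shared->length);              // use (may now be stale)
    }

    // SAFER: snapshot the data into private memory first, then validate and
    // use the copy. The untrusted source can no longer pull the rug out.
    void handle_by_copy(const Request* shared) {
        Request local;
        std::memcpy(&local, shared, sizeof(local));
        if (local.length > sizeof(local.payload)) return;
        process(local.payload, local.length);
    }

The copy is exactly the "copying crap all over" being complained about, and it's there for a reason.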

Comment: Re:Run a completely new OS? (Score 1) 257

by Christian Smith (#47223917) Attached to: HP Unveils 'The Machine,' a New Computer Architecture

As I pointed out above, go check out the design specs for OS/400 (System i). It's got a flat address space and was one of, if not the first mid-range system to achieve C2 certification. But I suppose you're talking about a flat address space with an open-source system - you're probably right.

OS/400 is a bit different in that programs are not shipped as native CPU code; they are compiled into native code at installation time. In that sense, OS/400 can enforce security and separation statically at compile time, and so doesn't need isolated address spaces.

For native processes requiring any semblance of isolation, processes would have to be tagged to determine which addresses in a flat address space they can access, which implies some sort of segmentation or page tagging. And once we're validating page accesses anyway, we might as well have a full MMU.
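
As a rough software stand-in for what would really be hardware (invented names, nothing to do with HP's actual design), the per-access check would look something like this:

    #include <cstdint>
    #include <unordered_map>

    using PageId = std::uint64_t;
    using ProcessId = std::uint32_t;

    constexpr std::uint64_t kPageSize = 4096;

    // Toy "page tag" table: which process owns which page of the flat space.
    std::unordered_map<PageId, ProcessId> page_owner;

    // A check like this would have to run on every single access; doing it
    // in hardware per access is essentially what an MMU already does.
    bool access_allowed(ProcessId proc, std::uint64_t address) {
        auto it = page_owner.find(address / kPageSize);
        return it != page_owner.end() && it->second == proc;
    }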

Comment: Re:Run a completely new OS? (Score 0) 257

by Christian Smith (#47216967) Attached to: HP Unveils 'The Machine,' a New Computer Architecture

From what I gather, memory management, which is a large part of what an OS does, would be completely different on this architecture as there doesn't seem to be a difference between RAM and disk storage. It's basically all RAM. This eliminates the need for paging. You'd probably need a new file system, too.

Paging provides address space isolation and protection, and separation of instructions and data (unless you advocate going back to segmented memory). It won't be going anywhere anytime soon.

A single flat address space would be a disaster for security and protection.

Still, it would make the filesystem potentially simpler, making non-mmap reads/writes basically a memcpy in kernel space.
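
A toy sketch of what such a read path might look like if storage is just addressable memory; lookup_extent() and the extent layout are invented for illustration, and this is speculation rather than HP's actual design:

    #include <algorithm>
    #include <cstddef>
    #include <cstring>

    struct Extent { const char* base; std::size_t length; };

    // Toy "file table": a real system would map an fd/inode to the
    // persistent-memory region holding the file's bytes.
    static const char kFileData[] = "hello, persistent memory";
    static Extent lookup_extent(int /*fd*/) { return { kFileData, sizeof(kFileData) - 1 }; }

    // The whole non-mmap read path collapses to a bounds check and a memcpy.
    std::size_t simple_read(int fd, void* buf, std::size_t count, std::size_t offset) {
        Extent e = lookup_extent(fd);
        if (offset >= e.length) return 0;
        std::size_t n = std::min(count, e.length - offset);
        std::memcpy(buf, e.base + offset, n);
        return n;
    }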

Comment: Re:Useless (Score 2) 97

... All Japanese, although TFA is at pains to point out that there was some seemingly minor American involvement too. Are there any major camera manufacturers left in the US?

Well, it is the International Space Station, not the American Space Station (though that would have a much better initialism.)

Comment: Re:Don't tell them that... (Score 1) 332

by Christian Smith (#46804831) Attached to: Why Portland Should Have Kept Its Water, Urine and All

People get into high positions by rising as those above are destroyed in the public eye. Those above are destroyed in the public eye when they fail to respond to every absurd panic with equal panic and alarm. A rational leader is soon removed from power.

Or, more succinctly, shit floats!

Comment: Re:RAID? (Score 1) 256

by Christian Smith (#46780747) Attached to: SSD-HDD Price Gap Won't Go Away Anytime Soon

Doesn't creating a striped RAID make up for most of the performance gap between using an HDD over an SSD? At that point, it's more the bus or CPU that's the limiting factor?

When doing anything random-IO intensive, even an SSD won't saturate a SATA 3 link. But it'd still be orders of magnitude faster than a mechanical disk.

Consider: a 15,000 RPM enterprise disk might top out at perhaps 250 random IOPS. The lowest of low-end SSDs will beat that by perhaps 10x. So to get performance equivalent to an entry-level SSD, you might need 10+ enterprise 15K RPM disks. Once your SSD storage becomes big enough, it might actually be more cost effective to use SSDs instead of 10x the HDDs, with their associated management and enclosure costs.
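
The back-of-envelope arithmetic, using the rough figures above (these are the guesses quoted in the paragraph, not measurements):

    #include <cmath>
    #include <iostream>

    int main() {
        const double hdd_iops = 250.0;   // rough figure for a 15K RPM enterprise disk
        const double ssd_iops = 2500.0;  // ~10x that, for a very low-end SSD
        const int disks_needed = static_cast<int>(std::ceil(ssd_iops / hdd_iops));
        std::cout << "15K disks needed to match one entry-level SSD: "
                  << disks_needed << "\n";  // prints 10
        return 0;
    }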

In fact, in the TPC benchmarks so favoured by DB vendors, a large proportion of the cost of the rig is the massive storage required to meet the IO rates needed (no, I don't have a citation to hand), even if the vast majority of the storage space is never actually used (HDDs in these scenarios tend to be "short stroked").

However, for most people a compromise probably works best: bulk storage on slow HDDs, with hot data and journals that need fast random IO on SSDs. ZFS is a prime example, using the L2ARC and ZIL on SSD.
