Comment Re:Which means (Score 1) 347

There's also a semantic looseness that bothers me. The proposed solution doesn't really require changing the speed of light in a vacuum. Rather, it points out that photons undergo certain interactions, which means that light as a bulk phenomenon will appear to travel slower than the maximum speed light can reach in a vacuum.

When computing relativistic effects such as Lorentz contraction, the upper speed (not including all those interactions) still remains the limit, at least as I understand it.
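To make that concrete (standard textbook formulas, not anything from the article): the Lorentz factor behind time dilation and length contraction uses the vacuum value c, not the slower effective speed of light through a medium:

    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad t = \gamma\, t_0, \qquad L = L_0 / \gamma

with c = 299,792,458 m/s regardless of what the photons appear to be doing in bulk matter.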

Comment Do you remember the future? Forget it! (Score 1) 97

"Hey Paolo! He broke the President!"

I remember many years ago reading an article (probably in Wired; these days, it'd be a blog post) where someone described walking around EPCOT Center while listening to this exact album. Sounds like quite a trip, really.

And then there's this article from several years ago that's also fitting. Apparently Disney was working on their version of the Holy-Grams too.

Comment Re:Designed for safety & performance (Score 1) 636

Oy, is that how they're selling it? As if none of these features existed before Apple did it?

Swift code is transformed into optimized native code

As is any other language that passes through an optimizing compiler that outputs native code. They crowed about a 30% speedup above, which in my experience is sometimes achievable just by tweaking your compiler flags.

Comment Re:Is this an ad ? (Score 1) 304

You realize, of course, that a 26" 1920x1080 monitor is only about 85 DPI, so the same font size (in pixels) would actually look about 40% larger there. And you'd get more text on the screen to boot.

1280x720 at 120 DPI makes for a small screen: 10.7" x 6", which is approx 12" diagonal. Do you do all your coding on a subnotebook or MacBook Air or something?
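Here's where those numbers come from (just my own back-of-the-envelope arithmetic, in C):

    #include <math.h>
    #include <stdio.h>

    /* Back-of-the-envelope DPI math for the figures above. */
    int main(void)
    {
        double diag_px = sqrt(1920.0 * 1920.0 + 1080.0 * 1080.0); /* ~2203 px */
        double dpi     = diag_px / 26.0;                           /* ~85 DPI  */
        printf("26\" 1920x1080: %.0f DPI\n", dpi);
        printf("Same pixel font size looks ~%.0f%% larger than at 120 DPI\n",
               (120.0 / dpi - 1.0) * 100.0);                       /* ~40%     */
        printf("1280x720 @ 120 DPI: %.1f\" x %.1f\", %.1f\" diagonal\n",
               1280.0 / 120.0, 720.0 / 120.0,
               sqrt(1280.0 * 1280.0 + 720.0 * 720.0) / 120.0);     /* ~12.2 in */
        return 0;
    }

(Compile with -lm.)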

Comment Re:Amazon "lose $ on each book, make it up on volu (Score 3, Interesting) 462

Well, they don't magically get cheaper to build just by building more. They get cheaper to build as the manufacturer refines the process, improves the technology, and scales the production lines to amortize the fixed costs of a production facility over a larger number of vehicles. That is, it takes work to make them cheaper, above and beyond just making more.

As long as there's sufficient demand, producers will have enough reason to scale up the production and work to bring the production cost down. Eventually, if all goes well, this begins a virtuous cycle where decreased price increases demand, and increased demand drives further cost reduction and innovation.

This works great if there's enough demand to kick-start the process. Unfortunately, the price of EVs today is too high to drive sufficient demand. Hence the carrot-and-stick incentives to try to jumpstart the virtuous cycle. On the carrot side are tax breaks and government subsidies / loan guarantees. On the stick side are fleet-wide fuel economy standards, price caps and quotas.

Right now, it seems as if most traditional auto manufacturers treat their electric cars either as halo cars, or as tasks they're required to do by law/regulation/whatever but would rather not. I doubt anyone at GM is staking the quarterly numbers on Chevy Volt sales, for example, but that doesn't stop them from advertising it. The only competition at this point, though, is positioning, posturing and establishing a brand. That is, competition on the marketing front. The market's still too small to have meaningful competition driving product development. At least, that's how it seems to me.

Eventually they'll figure out how to bring the costs down. Meanwhile, the early adopters hopefully help build interest and therefore demand in the future. When that happens, I'd expect the real competition to start. You'll see Toyota or GM or someone get into the mega-battery business, like Tesla is currently. Or some other major, bold move like that.

In the meantime, the carrot-and-stick will push both the supply and demand curves to the right, elevating the total units shipped to a modest number until the market can sustain itself.

Comment Re:not in the field, eh? (Score 1) 634

I did give a proviso for run-time alias checks in my comment above. Our compiler will generate a run-time check for that as well, with a small code-size and run-time cycle penalty. The FORTRAN equivalent doesn't need the alias check.

I'd expect ICC to be very aggressive, given that Intel has one of the largest (if not the largest) paid, full-time compiler teams in the world.

So how about all of FORTRAN's other nifty features, such as array slices? To get the same functionality in C / C++, you have to put explicit strides and bounds everywhere, and sometimes checks to reverse loop directions. In FORTRAN, you can write things like "A(1:100,1:200:2) = B(101:300:2,51:150)", and the compiler is free to choose the best way to do it.

In C / C++, you leave it to the programmer to dictate the loop explicitly and hope the compiler can figure out what you're doing. In my experience, real world programmers get unusually creative with this task, creating awful code. If you write clearly enough and pay enough attention to compiler vectorization reports or other feedback, maybe the compiler + user eventually figures it out. Realistically, most programmers aren't that sophisticated. And even among the ones who are, not all have the time or inclination.
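Just to make that concrete, here's roughly what that one FORTRAN statement turns into in plain C (hypothetical array dimensions, 0-based indexing, and I'm glossing over the column-major vs. row-major difference):

    /* Roughly A(1:100,1:200:2) = B(101:300:2,51:150), spelled out by hand.
     * Every stride and bound is now the programmer's problem, and the
     * compiler has to re-derive what the FORTRAN compiler was simply told. */
    void slice_copy(double A[100][200], const double B[300][150])
    {
        for (int i = 0; i < 100; i++)          /* 100 rows in the slice    */
            for (int j = 0; j < 100; j++)      /* 100 columns in the slice */
                A[i][2 * j] = B[100 + 2 * i][50 + j];
    }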

Looking back to my array slice example: Now take those slices across function call boundaries in both languages and see how much work the programmer has to do in each language...

My point is, the more work the programmer has to do to help the compiler succeed, the more evidence it's a poor fit for the problem domain. FORTRAN can make it easier for compiler writers because they start with a higher level specification of what the programmer is trying to achieve. FORTRAN also makes it easier for programmers because they stop at that higher level specification of what they're trying to achieve.

Comment Re:not in the field, eh? (Score 1) 634

I'd argue that if you're programming in processor-specific intrinsics, you're not really programming in C++ any more. Standard C and C++ semantics for pointers and arrays really get in the way of autovectorization. So, you have to go to language extensions and kludges (like C99's restrict keyword, which C++ only supports as a vendor extension) to throw the compiler a bone.

Sure, tricks such as whole-program analysis and run-time alias tests can help the compiler find the guarantees it needs to have in order to vectorize. The fact of the matter (and I heard this straight from the mouths of my employer's vectorizing compiler team members) is that stock FORTRAN is simply much friendlier than stock C/C++ for this due to those semantic differences.

Our compiler will autovectorize C code if you pass it enough hints, such as minimum loop trip counts, pointer alignment, pointer aliasing guarantees (a.k.a. restrict) and so forth. Even then, there are limits to what it can do. We offer processor-specific intrinsics so you can vectorize the code yourself.
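As a rough illustration (a generic example, not our toolchain's actual syntax for hints), this is the sort of loop I mean; the restrict qualifiers tell the compiler the arrays can't overlap, and alignment or trip-count hints usually come in on top of that via pragmas or build flags:

    #include <stddef.h>

    /* Without the restrict qualifiers the compiler must prove the pointers
     * don't alias, fall back to a run-time overlap check, or give up on
     * vectorizing this loop altogether. */
    void scale_add(float *restrict dst, const float *restrict a,
                   const float *restrict b, float k, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = a[i] + k * b[i];
    }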

Once you start coding in vector intrinsics, you're taking the vectorization out of the compiler's hands and doing it yourself. Each of those intrinsics usually maps directly to an instruction or small sequence of instructions, so there's little left for the compiler to figure out. The compiler then just schedules and register-allocates the code, and handles the non-vector bits around the edges. Sure, you still compile with the C++ compiler, but the C++ compiler is no longer providing the vectorization: You are.
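For what that looks like in practice, here's a trivial hand-vectorized loop using x86 SSE intrinsics (purely illustrative, not from any particular codebase):

    #include <xmmintrin.h>  /* SSE intrinsics */

    /* Hand-vectorized a[i] += b[i]. Each intrinsic maps more or less directly
     * to a single instruction (movaps / addps), so the vectorization decisions
     * are the programmer's, not the compiler's. Assumes n is a multiple of 4
     * and both pointers are 16-byte aligned. */
    void add_arrays(float *a, const float *b, int n)
    {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_load_ps(&a[i]);            /* load 4 floats      */
            __m128 vb = _mm_load_ps(&b[i]);
            _mm_store_ps(&a[i], _mm_add_ps(va, vb));   /* add and store back */
        }
    }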

Programming

Why Scientists Are Still Using FORTRAN in 2014 634

New submitter InfoJunkie777 (1435969) writes "When you go to any place where 'cutting edge' scientific research is going on, strangely the computer language of choice is FORTRAN, the first computer language commonly used, invented in the 1950s. Meaning FORmula TRANslation, no language since has been able to match its speed. But three new contenders are explored here. Your thoughts?"

Comment Re:Terabyte flash drives are 10% overprovisioned (Score 1) 264

I wasn't actually thinking of DoS'ing, but I guess that is a valid concern. If a particular write pattern could crap out a server, then you may have to worry about a user doing that to your server. I was just putting my "DV engineer" hat on and trying to think of how I'd break an SSD in the minimum number of writes. It's the kind of analysis I'd hope the engineers who come up with lifetime specs use, so the spec is bulletproof. For example, X years at YY MB/day even if you're writing like an a**hole. ;-)

I don't have a formalized attack against any particular drive, manufacturer or filesystem.

For a multi-user system, just a thought: Could you address it with quotas? If a given user can't write to more than X% of the filesystem, you can bound the "badness" of their behavior.

Comment Re:Oh goody (Score 1) 264

I'm not challenging the 30 day number, to be sure.

It's not entirely true that write amplification won't appear to speed up the rate at which an SSD erases sectors. SSDs generally have multiple independent flash banks, and each can process an erasure independently of the others. To maximize your erasure rate, you need a pattern of writes that triggers erasures across all banks as often as possible. Each bank splits its time among receiving data to write, committing write data to flash cells, and erasing flash cells. (My assumption is that a given bank can only be doing one of these operations at a time, which was certainly true for the flash devices I programmed.)

Consider a host sending a stream of writes as fast as it can. The writes will land on the drive as fast as the SSD controller can process them and direct them to flash cells. If there are any bottlenecks in that path, such as generating ECC codes or allocating physical blocks in the FTL, they will slow down the part of the duty cycle devoted to receiving and committing write data.

A "friendly" write stream would minimize the number of GC cycles the SSD performs, and thus the amount of write amplification that occurs. Thus, the total number of writes to the SSD media is at most slightly larger than what the PC sends, and the "receive-write" portion of the "receive-write-erase" cycle gets lengthened by whatever bottlenecks might be in the PC-controller-flash path. A "hostile" write stream triggers a larger number of GC cycles to migrate sectors. It seems reasonable to me that an on-board chip-to-chip block migration might be quite a bit faster than receiving data from the PC. For one thing, you don't necessarily need to recompute ECC. The block transfer itself could be handled by a dedicated DMA-like controller transferring between independent banks in parallel with other activity. So, generating more write data locally to the SSD could reduce the time spent in the receive-write portion of the receive-write-erase cycle, so you can spend a greater percentage of your time erasing as opposed to receiving or writing.

It seems a little counter-intuitive, but it's in some ways similar to getting a super-linear speedup on an SMP system, which is indeed possible with the right workload. How? By keeping more of the traffic local.
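To put completely made-up numbers on it (illustrative only, no real drive timings): if a bank spends R units of time taking in data, W units committing it, and E units erasing, the fraction of time it can spend erasing is E / (R + W + E). GC-fed writes skip most of the host-transfer cost, so R shrinks and the erase fraction grows.

    #include <stdio.h>

    /* Purely illustrative duty-cycle arithmetic with invented timings; the
     * point is only that shrinking the "receive" phase leaves more time for
     * erases. */
    int main(void)
    {
        double R = 4.0, W = 2.0, E = 3.0;  /* hypothetical host-fed write      */
        double R_gc = 0.5;                 /* hypothetical on-drive GC copy-in */
        printf("host-fed writes: %.0f%% of time erasing\n",
               100.0 * E / (R + W + E));
        printf("GC-fed writes:   %.0f%% of time erasing\n",
               100.0 * E / (R_gc + W + E));
        return 0;
    }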

The main effect of write amplification, though, is on the SSD wear specs themselves, as I said. They're stated in terms of days/months/years of writes at a particular average write rate. So really, when you multiply that out, they're specified in terms of total writes from the PC. There's at least one flash endurance experiment out there showing that drives often exceed their rated maximum total writes by very large factors. One reason for that, I suspect, is that they aren't sending challenging enough write patterns to the drive to trigger worst-case (in terms of bytes written, not wall-clock time) failure rates.
