Comment Re:Schedule D?! (Score 2) 450

I bought the professional version of TurboTax on Amazon and it cost $64. And I get 10% extra on any part of my refund that I put into Amazon gift certificates. So with TurboTax this year I could theoretically take my whole refund as Amazon gift certificates and pay off TurboTax a few times over. But I don't think I spend enough on Amazon, even with my prolific Amazon purchasing, to justify taking all of it back as gift card balance.
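
As a rough sanity check (the refund amount below is a hypothetical figure of mine; only the $64 price and the 10% bonus come from above), the break-even point is low:

# Hypothetical sanity check: how big a refund makes the 10% Amazon
# gift-certificate bonus cover the $64 TurboTax purchase?
TURBOTAX_COST = 64.00
BONUS_RATE = 0.10

break_even_refund = TURBOTAX_COST / BONUS_RATE
print(f"Break-even refund: ${break_even_refund:,.2f}")      # $640.00

refund = 2000.00  # hypothetical refund, not from the post
bonus = refund * BONUS_RATE
print(f"Bonus on a ${refund:,.0f} refund: ${bonus:,.2f}")    # $200.00, ~3x the $64 price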

Comment Re:That target already captured elsewhere (Score 1) 300

I'm a Global Entry card holder. There's almost no line at all for customs. If they're doing quarantine and smuggling checks, they can do them on the train en route. It would be like gate-checking: check in at the train station, then have a white-glove security service run your luggage through adequate screening and take it directly from the train to the spacecraft.

Comment Re:That target already captured elsewhere (Score 1) 300

This is a fair point. I just flew from my home town to my current city.

Time to drive to airport: 15 minutes
Time in the security/waiting area: 40 minutes
Taxi out to the runway: 5 minutes
Flight time: 45 minutes
Taxi in after landing: 15 minutes
Time to get to the taxi stand: 10 minutes
Time for the taxi to get downtown: 35 minutes

Total time on the ground: about 2 hours.
Time in the air: 45 minutes.
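
A trivial tally of those numbers (just re-adding the figures above, with my own segment labels):

# Re-adding the trip segments above (all figures in minutes).
segments = {
    "drive to airport": 15,
    "security/waiting": 40,
    "taxi out to runway": 5,
    "flight": 45,
    "taxi in after landing": 15,
    "walk to taxi stand": 10,
    "taxi ride downtown": 35,
}

overhead = sum(v for k, v in segments.items() if k != "flight")
print(f"Time in the air: {segments['flight']} min")
print(f"Time on the ground: {overhead} min (~{overhead / 60:.1f} hours)")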

Then again, LA to Sydney is about a 15-hour flight. If this is the future and you had a maglev/hyperloop-type transport, you could reach a remote spaceport, completely isolated from any urban area, in under an hour. That's about equal to the drive through traffic to the nearest small airfield for a private jet. It also wouldn't need much security, since the only real threat would be a bomb, and your luggage could be pre-screened en route to keep things efficient. All in all, with a good high-speed rail link you could easily beat a business jet. The fixed timetables are harder, but with roughly 18 hours of door-to-door travel time to beat, you could stay overnight in a hotel, be productive, leave the next morning, and still arrive ahead of the flight.

Comment Re:I think the thing being missed here (Score 1) 300

A flight from New York to Singapore is usually around $1,300. A Suite Class ticket from New York to Singapore is $23,000.
https://medium.com/travel-adve...

People already pay nearly 18x coach to fly comfortably for 18 hours. If you reduced the flight time to 2-3 hours, so that people didn't need a bed, a shower, and the other amenities that come with a full day in the sky, you would be price competitive.

Here is another example. I rent out a camera for $1,500+ for roughly 36 hours at a time. If there were a cargo flight that could do point-to-point delivery from Indonesia to my door, priced at 10x a passenger ticket ($10,000) per 160 lbs, then a 16-lb package (a tenth of that weight) would cost about $1,000 to ship versus $1,500 for an extra day of rental. That's a $500 saving, and you could even include a courier to the spaceport and back. I'm certain there are items today that would save money with sub-orbital delivery even at $100 per lb.
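
The back-of-the-envelope version of that math (the $1,000 ticket is the round number implied by "10x the price of a ticket ($10,000)" above):

# Back-of-the-envelope check of the rental-vs-shipping math above.
ticket_price = 1_000                        # round-number ticket assumed above
cargo_rate_per_160lb = 10 * ticket_price    # $10,000 per 160 lbs
cost_per_lb = cargo_rate_per_160lb / 160    # $62.50/lb

package_lbs = 16
shipping_cost = package_lbs * cost_per_lb   # $1,000
extra_rental_day = 1_500

print(f"Cost per lb: ${cost_per_lb:.2f}")
print(f"Shipping a {package_lbs} lb package: ${shipping_cost:,.0f}")
print(f"Savings vs an extra rental day: ${extra_rental_day - shipping_cost:,.0f}")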

Comment Re: Nosedive (Score 1) 598

Apparently, contrary to all those science fiction stories, people in general really don't want videophones after all, even after they became practical. To my knowledge, only uber-geeks are using it, and only because they can.

Phone calls, period, are barely used. People prefer asynchronous communication.

But video chat clearly has two big fan bases:
1) People showing someone something (real estate, Christmas presents, things in a store, etc.)
2) Long-distance romantic partners.

The advantages are pretty obvious for both use cases. :D

Comment Re:Pullin' a Gates? (Score 1) 449

Torvalds dismisses photo editing as a task for "professional photographers", but our amateur cameras are taking phenomenally detailed pictures, and even making fairly simple edits is a compute-intensive task. He may be right, but he may equally be wrong.

Torvalds is being completely ridiculous here. Avid used to be the domain of professional film editors, but iMovie is incredibly popular. We even see cell phones these days sporting 4K cameras. My Lumia has a 41-megapixel sensor! I have a RED camera and it's "only" 18 megapixels. In fact, the less professional you are, the more processing power you need. Photoshop's paint brush can accomplish wonders in the hands of a professional touch-up artist, but Photoshop's Content-Aware Fill is processor murder and designed specifically to intelligently replace that professional artist. Take something like 3D rendering: you could have someone hand-paint every frame, which would without question require a professional artist. But if you want a pretty picture at the push of a button, you want ray tracing.
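
For a feel of what a content-aware-fill-style edit involves, here's a minimal sketch using OpenCV's inpainting. The file names are placeholders I made up, and OpenCV's algorithms are far simpler than Photoshop's, but the idea of synthesizing plausible pixels for a masked region is the same:

# Minimal sketch of an inpainting ("content-aware fill"-style) operation
# using OpenCV. "photo.jpg" and "mask.png" are hypothetical file names;
# the mask is white where pixels should be replaced.
import cv2

img = cv2.imread("photo.jpg")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Navier-Stokes based inpainting; every masked pixel is synthesized from
# its surroundings, which is why this kind of edit chews CPU on big images.
result = cv2.inpaint(img, mask, 5, cv2.INPAINT_NS)
cv2.imwrite("filled.jpg", result)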

This is actually something you see happening today in the high-end VFX market. It used to be that ray tracing was too compute-intensive for films, but for amateurs and non-artists, ironically enough, ray tracing was fine. The architect only needed to render 3 frames; waiting a day was perfectly acceptable because there weren't another 100,000 frames that also needed rendering. In film there wasn't time for something like global illumination, and the shortcuts caused unacceptable flickering. Now the film industry is starting to embrace advanced lighting like GI, and it's automatically getting all of the bounce light and detail that used to take hundreds of lights to fake. It's making artists more productive, but it comes at the cost of increased compute time. Again, a professional lighter can, as an artist, fake global illumination; an amateur can simply position the sun, turn on GI, and wait 18 hours.
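
To illustrate why GI eats compute, here's a toy example of my own (not from any production renderer): a brute-force Monte Carlo estimate of sky light at a single surface point. The noise only falls as 1/sqrt(samples), so halving it costs 4x the rays, at every pixel of every frame:

# Toy Monte Carlo irradiance estimate: a surface point under a uniform sky
# of radiance 1. The exact answer is pi. Error shrinks as 1/sqrt(samples),
# which is why noise-free GI takes so many rays per pixel.
import numpy as np

rng = np.random.default_rng(0)

def irradiance_estimate(n_samples: int) -> float:
    # Uniform hemisphere sampling: cos(theta) is uniform on [0, 1),
    # pdf = 1 / (2*pi), so the estimator is 2*pi * mean(cos(theta)).
    cos_theta = rng.random(n_samples)
    return 2.0 * np.pi * cos_theta.mean()

for n in (16, 64, 256, 1024):
    runs = [irradiance_estimate(n) for _ in range(200)]
    print(f"{n:5d} samples/pixel: mean={np.mean(runs):.3f}  noise={np.std(runs):.3f}")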

The future will be an automagical button that not only fixes your photo *cough* Instagram *cough* but also performs even more advanced editing like "remove the gray clouds and put in a photorealistic blue sky. Oh yeah, and also relight the photo to make it look sunny!" That's going to be far more CPU-intensive than any Photoshop filter currently in existence, and it'll be targeted as much at your average cell phone user as at a professional.

Comment Re:Pullin' a Gates? (Score 1) 449

You assume that narrowly defined, fixed tasks are all that people will come up with. If you have to spin a new ASIC every time you want to improve your software, we aren't going to innovate. ASICs make sense for something like 10Gb networking, which is a defined standard, but most tasks aren't defined standards. Changing specs is the norm, not the exception, outside of core OS functionality like storage or networking. Fixed-function GPUs couldn't keep up, so they moved to a compiled per-pixel shading model so that developers could rapidly iterate and invent new uses, and in the process GPUs by necessity became pretty general purpose. But GPUs are still frustratingly limited in their general-purpose applications. There is a huge domain of problems that need more than 4 cores but also need more memory and larger caches than a GPU offers. You could legitimately call whatever processor manages to handle them a "CPU" or a "GPU".

Comment Re:Pullin' a Gates? (Score 1) 449

The point isn't to pick any one approach or technology (say, neural nets); the point is that we *already* have an application that comfortably uses more than Linus' mythically adequate 4 cores. A 4-core CPU is fantastic at running a word processor with an email client in the background, but that's not the future of computing. The future of computing is doing the work of the human brain, but better, and the human brain is one example of the sort of application we're going to see more of. An improved Microsoft Word is not the future. An improved Chrome is not the future. We see the future in science fiction, and it's an interface that can communicate with us naturally. Natural human/computer communication means a whole new set of problems, and these are not problems relegated to "niche" marketplaces like research-lab supercomputers. The applications for machine vision are everywhere. The applications for voice recognition are everywhere. The applications for 'common sense' in your interactions are everywhere. These aren't problems that I expect will be solved best with fast, linear, serial processes. To date, all of these classes of problems have been best approached with multi-threaded parallel computing.

You mention the GPU. It's true the GPU was a custom, semi-specialized piece of hardware; in fact the original 3D accelerators weren't even in the display card, they were pass-through cards. But you know what else used to be a semi-specialized chip? Math co-processors. Even today GPUs are slowly blending back into the CPU. Once something like a math co-processor becomes sufficiently critical to the average user, it becomes part of the CPU's die. AMD has already integrated pretty substantial GPUs into their "APUs", and SoCs integrate the GPU by definition. If we do develop a chip that is critical to the average user, like a magic AI chip, then they'll just integrate it into the CPU.

It used to be that video playback was a niche market, and now just about every CPU, GPU, and combination thereof has video decoding integrated into the chip. So what makes you think they won't integrate AI and call it a "CPU"?

Comment Re:Pullin' a Gates? (Score 1) 449

If that whole process takes 3 seconds (which would be amazing), then by that logic your computer performed only one "operation" in 3 seconds. But computers don't work in "operations" at that granularity; they have to perform millions of sub-actions to accomplish your goal. It would be like saying "rendering a game's frame is only a single task, so it's a very serial task without any potential for multithreading," when in reality "rendering a frame" is a massively parallel job of rasterizing millions of triangles (or intersecting rays), sampling textures, computing lighting values, and performing table lookups.
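
A crude sketch of that decomposition (entirely my own toy example, with made-up per-pixel math): "render a frame" splits into per-row, per-pixel work that spreads trivially across cores:

# Toy example: "one frame" decomposes into per-pixel shading work that
# parallelizes across however many cores you have.
from multiprocessing import Pool, cpu_count
import math

WIDTH, HEIGHT = 640, 360

def shade_row(y: int) -> list:
    # Stand-in for rasterizing/texturing/lighting one row of pixels.
    return [0.5 + 0.5 * math.sin(x * 0.05) * math.cos(y * 0.05)
            for x in range(WIDTH)]

if __name__ == "__main__":
    with Pool() as pool:  # defaults to one worker per CPU core
        frame = pool.map(shade_row, range(HEIGHT))
    print(f"Shaded {len(frame) * WIDTH:,} pixels using up to {cpu_count()} cores")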

Take interpreting voice. By applying multiple models simultaneously you can get better results. Seems pretty obvious.
http://devblogs.nvidia.com/par...

For the flyer, maybe it'll generate 1,000 candidate flyers simultaneously and then compare them against award-winning graphic design work to see which of its 1,000 ideas match historically good ones.

Comment Re:Pullin' a Gates? (Score 1) 449

I say you are the one moving the goal posts. Linus and *most* of the other people working on parallelism solutions are working/speaking in the context of computers like the ones we know today; you're the guy trying to apply what they say to *any* computer. Linus will probably be proved correct there. Past n cores, the fundamental architecture in use today will not scale except for niche cases.

Within the context of traditional von Neumann computers, we already have voice recognition, we already have SLAM 3D positioning, we already have databases like Wolfram Alpha that can give us insights, and we already have applications that crunch massive 3D datasets. Some of these run fine on GPGPUs and some need the larger cache sizes of a CPU to run efficiently.

My point isn't that we need some completely exotic system; my point is that even with the very limited set of AI-driven applications today, there are plenty that can and would use hundreds of cores. Computers were once a "niche" tool for rich people. The internet was once just a niche tool for academics. Only gamers needed a GPU, and so on. All the way back through history, when something becomes accessible, someone finds an application. Build it and they will come.

Comment Re:Pullin' a Gates? (Score 3, Interesting) 449

It is a niche which will need specific algorithms tuned for the hardware (GPU or other); the pipeline must be kept busy to observe a performance gain. It doesn't scale to general-purpose computing.

I feel like this is moving the goal posts: "You will never do massively parallel computing on a CPU, because if it's massively parallel it's a GPU, not a CPU."

Linus is 100% wrong. What's the "general purpose" computing that we all want? The NCC-1701D's main computer from Star Trek. If I say "Cortana/Siri/Google Now, please rough me out a flyer for our yard sale on Saturday," you're looking at a massively parallel task for the neural networks to not only interpret the voice but then make sense of the words and finally produce a printable flyer suitable for hanging. Programming is still a really fancy version of "IF A THEN B", "for X in GROUP do Z", "X = Y". Yeah, if your application is incredibly serial, then a serial processor is all you'll need. But when computing advances to the next phase of neural networks, AI, and directed (not instructed) computing, then it'll need to be more like our brain: massively parallel.

Now there are two obnoxious tautological arguments against this:
A) "That's not a "CPU" that's like a NeuroProcessorUnit, an NPU if you will"
B) "Yes we'll need a giant mainframe, but it'll be a server in the cloud!"

A is moving the goal posts. Just because the processor isn't an ARM- or x86-instruction-compatible chip doesn't mean it's not worthy of the label CPU. As mentioned above, you can't say that there'll never be a CPU with massive parallelism if, as soon as it has massive parallelism, it's by definition no longer a CPU. B is just saying that nobody will need a computer because we'll have a giant mainframe. That might be true, but then you just need a basic DSP, not even a CPU, if it's a pure thin client transmitting video, audio, and input streams to the cloud for processing. In which case all of the CPUs in existence... need to be massively parallel AI processors.

Comment Re:Nth verse, same as the first (Score 1) 248

Who cares about developers? Microsoft is rewriting their browser to make it faster and use less battery and fewer resources. The Trident rendering engine is already good. The JS engine is already one of the fastest. Developers should be happy to develop for Trident, and rewriting the browser so that it's more cross-platform compatible and smoother on mobile seems like a "good thing" to me.
