
Comment: Re:Few Million a Year is a BIG Stretch Goal (Score 4, Interesting) 181

by imgod2u (#48813667) Attached to: Tesla To Produce 'a Few Million' Electric Cars a Year By 2025

The stated ambition was "about a million cars." The article's quote is incorrect; the liveblog (and video) are much more accurate.

One thing unique about Tesla's manufacturing is that its supply chain is raw materials. Almost everything else is produced by Tesla itself at the Fremont factory. Many questioned why this decision was made, but there are long-term benefits. When your supply chain is all raw materials, availability becomes much more predictable, and influencing supply by pumping some money into a mine is far easier than, say, getting a different company to shape up and manufacture more parts.

The only parts of a Tesla that aren't produced at Fremont are the batteries, and that's why the Gigafactory is coming online.

Comment: Re:Tell me it ain't so, Elon! (Score 1) 181

by imgod2u (#48813593) Attached to: Tesla To Produce 'a Few Million' Electric Cars a Year By 2025

Maybe not, but every politician represents the needs of his electorate, and that electorate certainly wants to keep its jobs.

I'm not saying it's a good reason, but oftentimes technological changes can blindside a good portion of the population, and we have to consider that. Perhaps we shouldn't stop progress, but we should slow adoption enough to give the population time to find new jobs.

Comment: Re:Tell me it ain't so, Elon! (Score 2) 181

by imgod2u (#48813505) Attached to: Tesla To Produce 'a Few Million' Electric Cars a Year By 2025

In such a situation, the franchise owner should have had enough foresight (especially given the vast amount of prior history) to add a non-compete clause to the franchise agreement. Free market and all.

Sometimes one side of such a contract has too much power and we need the government to step in and make a law. The problem with that approach is that those laws often outlive their intent. The franchise laws protecting auto dealers were enacted in a day when the Big Three automakers were the only business in town and continually abused that position. Nowadays they're scrambling for their lives.

The laws in place are no longer needed, and they now hamper innovation by presenting a major barrier to entry for upstart car companies -- something the people who wrote those laws never considered possible. Therefore they should be repealed.

Comment: Re:Core of the article (Score 1) 449

by imgod2u (#48721987) Attached to: How We'll Program 1000 Cores - and Get Linus Ranting, Again

How about graceful seg faults instead of program crashes? Obviously modern architectures don't really support such things, but one can imagine a processor that detected bad pointers instead of crashing the program. In fact, each program, or even each transaction, could register a pre-determined fault handler.

What'll happen is:

1. Thread A sets a "start of code snippet" and programs an address that has a fault handler.
2. Thread B starts its processing as well.
3. Thread A at some point tries to dereference a pointer at address X.
4. Thread B races ahead and deletes the pointer at address X.
5. Normally, in protected memory, the processor would throw a fit as thread A tries to access an illegal memory address.
6. Instead, the processor jumps to thread A's custom fault handler.
7. Thread A's fault handler sees "hey, my code snippet tried to access an illegal address and I, the thread, am not guaranteed to be thread safe". It then rolls back all of the work it's done up until the instruction that faulted.
8. Thread A tries again from step 1. At some point (if it faults too many times) it could decide not to try the thread-unsafe method and fall back to the old mutex-locking method.

The idea is that the majority of the time, thread A and thread B don't actually conflict, or thread A wins the race. In those cases, you get a parallel speedup.

It's up to the programmer (or compiler, probably a JIT) to recognize when to exploit this by analyzing the algorithm and the likelihood of conflict. A JIT would probably use profiling information it gets in real time.

Nobody's saying this will replace 100% of all synchronization methods. But we don't need to. Technically, to get a speedup you only need to replace one use case. But most likely you can replace a lot (90%) of use cases.
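
As a rough illustration of steps 1-8 above, here's a minimal C++ sketch using Intel's RTM intrinsics (_xbegin/_xend), which are the closest existing analog to the hypothetical custom fault handler: any conflict or fault inside the region rolls the speculative work back and returns an abort code instead of crashing. This assumes TSX-capable hardware and compiling with -mrtm; everything other than the intrinsics themselves is made up for the example.

    #include <immintrin.h>   // _xbegin/_xend/_XBEGIN_STARTED; compile with -mrtm
    #include <mutex>

    std::mutex fallback_lock;        // the "old mutex locking method" from step 8
    constexpr int kMaxRetries = 3;   // stop speculating after a few faults

    // Run the critical section speculatively; on repeated aborts, take the lock.
    template <typename Fn>
    void run_optimistically(Fn&& critical_section) {
        for (int attempt = 0; attempt < kMaxRetries; ++attempt) {
            unsigned status = _xbegin();        // step 1: open the speculative region
            if (status == _XBEGIN_STARTED) {
                critical_section();             // steps 3-4: do the work in parallel
                _xend();                        // commit: no conflict, parallel speedup
                return;
            }
            // A conflict or fault rolled everything back (steps 6-7); retry (step 8).
        }
        std::lock_guard<std::mutex> guard(fallback_lock);  // give up and lock
        critical_section();
    }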

Comment: Re:Core of the article (Score 3, Insightful) 449

by imgod2u (#48715247) Attached to: How We'll Program 1000 Cores - and Get Linus Ranting, Again

The idea isn't that the computer ends up with an incorrect result. The idea is that the computer is designed to be fast at doing things in parallel, with the occasional hiccup that flags an error and re-runs using the traditional slow method. How much of a window you can have for "screwing up" will determine how much performance you gain.

This is essentially the idea behind transactional memory: optimize for the common case, where threads that would use a lock don't actually access the same byte (or page, or cache line) of memory. Elide the lock (pretend it isn't there), have the two threads run in parallel, and if they do happen to collide, roll back and re-run the slow way.
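
For a software-only flavor of the same "optimize for the common case, retry on conflict" idea, here's a small sketch using optimistic compare-and-swap (this is lock-free optimistic concurrency, not transactional memory proper, and the names are invented for the example):

    #include <algorithm>
    #include <atomic>

    std::atomic<long> counter{0};

    // Optimistically compute the new value assuming no one else touched 'counter';
    // if another thread won the race, the CAS fails and we recompute and retry.
    void add_clamped(long delta, long max_value) {
        long old_val = counter.load(std::memory_order_relaxed);
        long new_val;
        do {
            new_val = std::min(old_val + delta, max_value);
            // compare_exchange_weak reloads old_val on failure, so the retry
            // starts from the value that actually won the race.
        } while (!counter.compare_exchange_weak(old_val, new_val,
                                                std::memory_order_acq_rel,
                                                std::memory_order_relaxed));
    }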

We actually see this concept play out in many hardware and software algorithms. Hell, TCP/IP is built on letting packets flow freely and possibly collide or get dropped, with the idea that you can resend them. It ends up speeding up the common case: packets making it to their destination along one path.

Comment: Re:One fiber to rule them... (Score 2) 221

by imgod2u (#48713235) Attached to: Google Fiber's Latest FCC Filing: Comcast's Nightmare Come To Life

REAL proponents of free market capitalism should have no problem with that idea. Those who do are those who either (A) don't understand that currently we have an oligopoly not a free market, or (B) want to protect their privileged position.

Or (C) think they should be able to sell faster access to some and/or priority services to others.

The whole problem with net neutrality is that it wants everyone to be the same even though everyone doesn't want to be the same. Suppose your Aunt Mary only checks email and recipes on the internet, so she decided to get the cheapest broadband she could. Now suppose Netflix says, "We want to serve her, but she only has a 1.5 Mbps connection and needs a 4 Mbps connection to use our service effectively." So they pay to have her service sped up for the packets that stream from their servers, so that they don't have to convince Aunt Mary not only to pay them the monthly rate, but also to pay her provider more for faster service.

So now Aunt Mary can keep the slow service that she likes and still have Netflix for those nights when the cats and cable TV just aren't enough. But net neutrality proponents say they don't want that; Aunt Mary will have to pony up all the money herself.

Except Title II isn't about net neutrality. Title II is about letting more companies access the physical lines so that there's competition -- so that even if priority access is something the market wants, ISPs won't get to overtly abuse their ability to sell paid priority lanes. It's about encouraging more competition (similar to anti-trust laws) so that market forces can work.

Comment: Re:Still pretty affordable (Score 1) 393

by imgod2u (#47930777) Attached to: Is the Tesla Model 3 Actually Going To Cost $50,000?

Yes on the latter question. In northern CA, they also offer "EV" plans with no tiers. The base rate is higher (11 cents/kWh after 11pm, with a peak of about 35 cents/kWh during the day) than the tiered system (which starts at 5 cents/kWh after 11pm with a peak of ~15 cents/kWh, but grows exponentially with usage).

But if you're charging an EV, you'll likely blow past the tiers anyway, so the EV plan works out better. With a Model S, at least, you really only need to charge it at night, and the software lets you schedule charging.
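
A rough worked example of why the flat overnight rate wins (the 85 kWh pack size is my assumption about the Model S being charged, not a number from the comment): a full charge off-peak costs about 85 kWh × $0.11/kWh ≈ $9.35, versus roughly 85 kWh × $0.35/kWh ≈ $29.75 at the EV plan's daytime peak -- which is why scheduling charging after 11pm is the whole point.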

Comment: Re:The suck, it burns .... (Score 5, Interesting) 179

by imgod2u (#47672053) Attached to: Microsoft Black Tuesday Patches Bring Blue Screens of Death

I think the criticism isn't really about whether they're responsive to consumers -- they obviously listen. The criticism is that there are so many holes to begin with, and that their attempts to fix things that are obviously broken -- things their competitors seem able to make work just fine -- often don't work or cause other problems. Knowing the Microsoft engineering culture, their stuff is mostly a patchwork of different groups not talking to each other. In the Windows API, there are something like 17 different representations of strings, depending on which engineer/department wrote the code!

When you're disorganized like that in a giant company with a giant piece of software, it's easy to see how bugs can get out of hand.

Comment: Re: Is Tesla making cars... (Score 4, Interesting) 195

by imgod2u (#45921525) Attached to: Tesla Sending New Wall-Charger Adapters After Garage Fire

People haven't stopped beta testing, either in hardware or in software. They have been quicker to release because the vast majority of software nowadays is built inside a sandbox (mobile apps, cloud servers, etc.) rather than from scratch.

It's not like software or hardware back then was any more reliable. Office, OS 9, and Windows (all versions) have always been plagued with problems, and one can argue they have fewer obvious bugs now than they did before -- when's the last time you got a BSOD?

The counterbalance is that the consumer base is far, far larger now. Some of us who were at Intel at the height of the Pentium 4 were happy to have sold 40M units in a year. Mobile phone processors at Qualcomm nowadays clear 400M per quarter.

If it seems like hardware and software bugs show up faster, it's because the user base that encounters and reports such bugs (easy to do now via social media) is much, much larger.

Comment: Re:Time for ARM to invest in GCC (Score 1) 82

by imgod2u (#44272027) Attached to: Casting a Jaundiced Eye On AnTuTu Benchmark Claims Favoring Intel

Well, no. There are better compilers out there for ARM -- Keil, for one. More important, though, is that real code that cares about performance won't just write a loop and let the compiler take care of it; it will use optimized libraries (which both Intel and ARM provide).

Compiler features like auto-vectorization are neat and do improve spaghetti-code performance somewhat, but anyone really concerned with performance will take Intel's optimized libraries over them. So if we're going to compare performance that the end user cares about, we'd use a benchmark that mimics not only the functions we'd see in actual software but also the libraries they use.
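
A minimal sketch of the contrast (assuming a standard CBLAS implementation such as OpenBLAS or Intel MKL is linked in; cblas_sdot is the standard CBLAS name, everything else here is made up for the example):

    #include <cblas.h>     // standard CBLAS interface (OpenBLAS, Intel MKL, ...)
    #include <cstddef>

    // Naive loop: the compiler may or may not auto-vectorize this well.
    float dot_naive(const float* a, const float* b, std::size_t n) {
        float sum = 0.0f;
        for (std::size_t i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }

    // Library call: hand-tuned by the vendor for the target CPU, which is what
    // performance-sensitive code tends to reach for in practice.
    float dot_library(const float* a, const float* b, int n) {
        return cblas_sdot(n, a, 1, b, 1);
    }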

Comment: Re: 1000 times better? (Score 3, Insightful) 103

by imgod2u (#43887517) Attached to: Graphene-Based Image Sensor To Enhance Low-Light Photography

Exposure is exponential as well. So a camera with 2x the exposure goes from 80% QE to 90% QE, for example, and the next 2x gets you to 95%.

That may not seem like much, but keep in mind that vision itself is logarithmic, so going from 98% to 99% QE gets you dramatically better results than going from, say, 40% to 41%.
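
One way to read those numbers (my interpretation, not something stated in the summary): each doubling of gathered light halves the fraction of photons the sensor misses, i.e.

    \[ \mathrm{QE}_n = 1 - \frac{1 - \mathrm{QE}_0}{2^{n}} \]

which with QE_0 = 80% gives 90%, 95%, 97.5%, ... for successive doublings, matching the 80 -> 90 -> 95 progression above.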

"Free markets select for winning solutions." -- Eric S. Raymond
