
Comment: Re:Open source was never safer (Score 2) 582

by hajile (#46762993) Attached to: How Does Heartbleed Alter the 'Open Source Is Safer' Discussion?
I think this says more about the prevailing view of security. Every programmer is told "NEVER roll your own encryption". The default result is that most programmers never even look at the code and instead assume it MUST be safe since the infallible "experts" wrote it. What we are seeing here is not the fault of open source vs closed source; it is about voodoo programming being considered good security practice.

I'm not saying that everyone should be rolling their own encryption, but people should be looking over the experts' implementations instead of assuming they are perfect (this bug could have been caught by any number of "normal" programmers had they simply taken the time to look).

Comment: Re:uh (Score 3, Interesting) 221

A medicinal dosage of LSD is an order of magnitude lower than the quantity needed for most individuals to experience hallucinogenic side effects, making it far safer than THC or opiates. In addition, it treats medical conditions such as chronic joint pain and cluster headaches which aren't very treatable otherwise (and once again, it allows the person to remain lucid). The US government stopping clinical trials halfway through during the drug craze (trials that were already showing amazing potential) was criminal.

Comment: Re:I don't think so (Score 2) 153

by hajile (#46105923) Attached to: Samsung's First Tizen Smartphone Gets Leaked
Tizen's biggest problem is Samsung. Here's a chat log from rasterman (head of EFL, which Tizen uses under the hood -- third parties aren't allowed anything but webapps):

https://www.tizen.org/irclogs/...

He talks about how stagnant and copycat the Samsung development bureaucracy is and how it's practically impossible to make any real innovative moves in development.

Another major issue is that the Tizen SDK (despite all the "Linux Foundation" branding) is proprietary and gives Samsung near-complete control of your code. I don't see any developer agreeing to those terms when he/she is already risking so much on a new OS to begin with.

Comment: Re:Despite it's name (Score 1) 168

by hajile (#46105273) Attached to: AMD Announces First ARM Processor
All criticisms of the 286 ISA are relevant today, as the current x86_64 ISAs are merely extensions of that old ISA (of all the ISAs in use today, only x86 and ARM carry excessive cruft, and of those, only ARM has been willing to step away from it for newer designs). People talk of continuing to bolt on new x86 instructions, but adding instructions to the decoder increases complexity at a faster-than-linear rate. What x86 really needs is a hard reset like ARM's, but at that point, why use x86?

x86 (like javascript and many other things in technology) is proof that throwing large amounts of money at inferior technology yields results. How much power is lost decoding x86 instructions? If Loongson's hardware x86 translation is any indicator, x86 reaches about 80% of the performance of native RISC (which is what Intel and AMD use internally anyway). If Intel wanted a real game-changer, they would use Alpha internally, add a secondary x86-to-Alpha decoder, and offer Alpha on the mobile end and Alpha/x86 on the mid-to-high end.

Itanium is gone because Intel knew from the start that the chip had no future (they had already tried this some years before with the failed i860). The long-term effect of Itanium was that it generated enough smoke and mirrors to convince the know-nothings at the top to kill all the good RISC architectures (all the R&D was pulled except for SPARC, which was mismanaged by Sun Microsystems).

VAX was killed because DEC had Alpha. Intel killed Alpha, but every advance they have made since came from the Alpha EV8 and EV9 (AVX, SMT, QuickPath, etc.); in fact, Haswell has only just managed to go as wide in execution as the EV8 designs from 2003. Alpha was the apex of processor design. It's telling that ShenWei was/is so power efficient despite being a reverse-engineered copy of the EV56 (a processor designed in the mid-90s) with an updated FPU.

RMI made a MIPS64 chip that is an 8-core (32-thread) monster with 12MB of cache, running at 1.2GHz (specs say up to 2.0GHz). It was designed for network switches and handled 40Gbps of traffic per chip (10Gbps encrypted/compressed on the fly) while rated at less than 50 watts, all on 40nm. No ARM chips are anywhere near that performance (and x86 would struggle to match it at 40nm and around 50 watts). MIPS is already big, and Loongson seems to indicate that China is willing to throw a lot of weight behind the MIPS architecture (for example, using MIPS in their upcoming 100-petaflop supercomputer scheduled for 2015).

Comment: Re:RISC is not the silver bullet (Score 1) 403

by hajile (#41357549) Attached to: The Linux-Proof Processor That Nobody Wants
Intel killed most RISC through Itanium marketing. Everyone was convinced that RISC was better than CISC. Everyone was also convinced that VLIW (EPIC in this case) was better than RISC. The problem is that VLIW has two glaring weaknesses. The first is that compilers still haven't (years after Itanium's release) come close to the optimization they were supposed to be capable of (a little like the old belief that AI was just a few years away). The second is that some optimizations can't be predicted by the compiler at all.

IBM's POWER, DEC/Compaq's Alpha (later bought by Intel... I assume Intel's internal x86 micro-ops take inspiration from Alpha), MIPS's non-low-power division, HP's PA-RISC, etc. all bet that Itanium would be the next great thing (Intel was claiming outrageous and unjustifiable performance numbers). Meanwhile, Intel continued development of its high-end x86 chips (the screw-up known as the P4). With almost all of Intel's competitors shutting down or drastically scaling back, Intel gained several years' worth of advantage (something most of the companies involved still haven't recovered from).

Apple's change to x86 is part of the fallout from this decision. IBM and Apple didn't want to spend the R&D money needed to make a competitive chip (first betting on the Itanic and later not wanting to play catch-up). Thus Apple switched from POWER to x86, serving as a (seemingly lost) lesson in why you should never trust a company's claims that something is great when that company doesn't have a product in hand (the Itanic launched, and all its competitors' chips -- already too far along to cancel -- offered better performance).

Comment: Re:Blast in time (Score 1) 403

by hajile (#41357423) Attached to: The Linux-Proof Processor That Nobody Wants
Look at the labeled die shot from AMD's presentation on Jaguar. The parts that make CISC differ from RISC are still major parts of the architecture (as written in the old RealWorldTech article www.realworldtech.com/risc-vs-cisc/ ). CISC adds lots of instructions, forcing a (much) bigger decoder and instruction cache. CISC adds a multi-stage decoder which requires huge increases in branch-prediction size just to break even. 32-bit x86 code takes a lot more room due to the lack of registers (AMD stated, about Hammer, that switching to 16 general-purpose registers for 64-bit operation decreased code size 5% and increased performance 20%, solely from less time spent grinding the wheels in the dirt). RISC chips are easier to compile for. x86 uses lots of 1- and 2-register operations which require extra mov instructions whenever data shouldn't be overwritten (and that's a lot of the time). There's extra circuitry to deal with instruction dependencies. Finally, all this additional complexity increases the number of possible errors (Intel and AMD processors have hundreds of unfixable errata that must be coded around; many could be avoided with a simpler design).

Comment: Re:Ah, America! (Score 1) 562

by hajile (#38532222) Attached to: Verizon Adds $2 Charge For Paying Your Bill Online

Basically, if something can be sent to a collections agency (in this case Verizon's internal collections agency), then it is a DEBT and thus the creditor (Verizon Wireless) CANNOT refuse payment.

To clarify, payment with Federal Reserve Notes cannot be refused, else things will not go well for them in court.

Comment: Re:Ah, America! (Score 3, Informative) 562

by hajile (#38532058) Attached to: Verizon Adds $2 Charge For Paying Your Bill Online
Wrong. From the Federal Reserve FAQ: http://www.federalreserve.gov/faqs/currency_12772.htm

Is it legal for a business in the United States to refuse cash as a form of payment? Section 31 U.S.C. 5103, entitled "Legal tender," states: "United States coins and currency [including Federal reserve notes and circulating notes of Federal reserve banks and national banks] are legal tender for all debts, public charges, taxes, and dues." This statute means that all United States money as identified above is a valid and legal offer of payment for debts when tendered to a creditor. There is, however, no Federal statute mandating that a private business, a person, or an organization must accept currency or coins as payment for goods or services. Private businesses are free to develop their own policies on whether to accept cash unless there is a state law which says otherwise.

Basically, if something can be sent to a collections agency (in this case Verizon's internal collections agency), then it is a DEBT and thus the creditor (Verizon Wireless) CANNOT refuse payment. Since payment of a Verizon bill typically comes after some or all of the service has been given, there is a debt (money owed for services previously rendered). Retail is different because there is no debt owed to the retailer: ownership of the merchandise is only exchanged (barring a contract) after payment has been given. (Technically, it could be argued in some cases that a verbal contract has been established and thus a debt agreed to.)

Aren't you glad you're not getting all the government you pay for now?
