
New Type-C USB connector ready for production

Submitted by orasio
orasio (188021) writes "One of the most frustrating first-world problems ever, trying to plug in a Micro-USB connector upside down, is bound to disappear soon.
Type-C connector for USB is declared ready for production by the USB Promoter Group (http://www.usb.org/press/USB_Type-C_Specification_Announcement_Final.pdf)."


Comment: Re:Will AMD APUs ever support ECC RAM? (Score 1) 117

by steveha (#47584557) Attached to: AMD Launches New Higher-End Kaveri APUs A10-7800 and A6-7400K

The socket AM3+ does support ECC (if you choose the right motherboard, ASUS usually do...)

Yeah, I have standardized on Asus for all my builds, and the ECC support is one of the reasons.

If you want ECC for cheap you could buy a lower-end socket AM3+ processor like the FX4350

My most recent build was an FX8xxx part. FX8350 I think.

otherwise Xeon is clearly the better choice.

I have made the choice to not give Intel any of my money if I can help it. I don't like the unethical games Intel plays (example).

Processors are so fast these days anyway that the difference between the best AMD and the best Intel is not that big a deal for my purposes. And while AMD loses on absolute performance, they generally win on performance per dollar spent.

Comment: Will AMD APUs ever support ECC RAM? (Score 0) 117

by steveha (#47583469) Attached to: AMD Launches New Higher-End Kaveri APUs A10-7800 and A6-7400K

I have a strong preference for using ECC RAM when I build a new computer.

I would be perfectly happy to use an APU to make a very quiet computer, but the chipsets that support the APUs don't have ECC support.

I admit I'm probably a weird outlier. People who want APUs probably don't want to pay extra for ECC RAM most of the time. Still, will there ever be even one chipset that will add ECC support?

Is there any technical reason why ECC shouldn't be used with an APU?
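For anyone curious what ECC actually buys you, the idea can be sketched with a toy Hamming(7,4) code in Python. (Real ECC DIMMs use a wider SECDED code over 64 data bits plus 8 check bits, but the principle is the same: any single flipped bit can be located and corrected.)

```python
def hamming74_encode(nibble):
    """Encode 4 data bits into a 7-bit Hamming codeword.

    Codeword positions 1..7; positions 1, 2, 4 hold parity bits.
    """
    d1, d2, d3, d4 = [(nibble >> i) & 1 for i in range(4)]
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    """Locate and fix a single flipped bit, then return the data nibble."""
    c = list(code)
    # Recompute the parity checks; together they spell out the
    # 1-based position of the bad bit (0 means no error).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the bad bit back
    d1, d2, d3, d4 = c[2], c[4], c[5], c[6]
    return d1 | (d2 << 1) | (d3 << 2) | (d4 << 3)
```

Flip any one of the seven bits in transit and the decoder still hands back the original data, which is exactly the guarantee ECC RAM gives you against the occasional cosmic-ray bit flip.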

Comment: Re:GPLv4 - the good public license? (Score 1) 140

by steveha (#47535863) Attached to: The Army Is 3D Printing Warheads

There is an upper bound to how much stuff people will tolerate in a license. If you add even one restriction too many, people will stop using the software at all. If possible, people may fork an older version of the software; if not possible, people will switch to something else, or perhaps start their own project with a different license.

For an example from history, look at what happened to XFree86 when they changed the terms of their license. Pretty much overnight, almost everyone stopped using XFree86 and switched to the then-new X.org project. I'm sure that the XFree86 guys thought that the world would just accept the changes to the license, but that's not what happened; what happened instead is that XFree86 became instantly irrelevant.

So, if RMS takes your advice and adopts the restrictions you propose, some nonzero number of users will fall away, and new forks of the software will begin to appear. Meanwhile the military users will shrug and just deal with it. There is exactly zero chance that your proposed GPLv4 will change the plans of the military, even a little bit.

So now the question becomes: what are you trying to accomplish with your proposed GPLv4? If the benefits outweigh the costs, do it. But do it with full knowledge that there will be costs, and among the costs will be increased fragmentation of open-source software projects (more forks and more new projects).

A CNC machine or a 3D printer can be used to make medical parts, or weapons. It follows that if the military contributes code to control a CNC machine or 3D printer, the contributed code could be used for good purposes. One consequence of your proposed GPLv4 license: code under such a license would no longer receive contributions from the military. Is that part of what you wanted to achieve? I don't see this as a win, myself.

Comment: Re:What a fatuous, nebulous piece of crap??? (Score 1) 161

by steveha (#47496033) Attached to: Microsoft's Missed Opportunities: Memo From 1997

If licensed like DOS, it would have every bit as many compatibility problems.

Oh, not as bad, at least at first. The companies licensing MacOS would have had to make suitable hardware, and Apple could have held their feet to the fire to get compatibility and quality.

In those days, there was so much pent-up demand for Mac laptops that there were companies that would buy a Mac, crack it open and pull out the ROMs, build a laptop with the ROMs, and provide some sort of docking station so the original Mac would not be useless. This was about the most expensive way to make a laptop ever, but it was the only legal way to do it. Apple took forever to release a laptop product, and when they did, it was not what the customers wanted (heavy due to the lead-acid battery for one thing). Third-party Macs could have cost significantly more than generic "beige box" PCs and customers would have paid happily.

The thing is, Apple was charging crazy money for Macs. If Apple had adopted the Microsoft model, they would have had to accept lower margins on each Mac, and made it up on volume. Third-party Macs would have cost less than Apple's official Macs but still would have sold a lot and buried the DOS-on-x86 PC. Apple was marking up Macs by about 100%; in other words, they were successfully getting a 50% margin on each Mac (a 100% markup over cost works out to a 50% margin on the selling price). Nobody else got away with that kind of markup, before or since.

It was great for Apple while it worked. But eventually Windows got to the point where it was kind of usable. And a Compaq running Windows would cost less than half what Apple was getting for a Mac. Hastings's Law: Adequate and cheaper tends to win against better but more expensive. Windows sales took off and Apple nearly died.

What saved Apple was the PowerBook, a laptop that really was what customers wanted. And a string of other successful products. And now Apple is doing very well. But IMHO, Apple could have had success like Microsoft in the 1990's had they adopted the Microsoft strategy of licensing to everyone and making a small profit on a huge volume; instead they nearly went out of business.

Even now, Apple isn't getting anything close to 50% margins on Macs. Those days are over.

Comment: US and UK "spreading the blame"?? (Score 4, Insightful) 503

by steveha (#47481807) Attached to: Russia Prepares For Internet War Over Malaysian Jet

From the summary:

U.S. and U.K. news organizations are studiously trying to spread the blame

WTF? Is this intended to somehow suggest that the USA and/or UK share some portion of blame?

The article linked in that part of the summary is a CNN article making the case that shoulder-fired missiles cannot reach 33,000 feet, so it must have been military gear. That's it... it even notes that both Russia and the Ukraine have such missiles.

This is news, and a news organization is reporting on it. Go figure. "trying to spread the blame"? "studiously", even! Really?

Comment: Re:What a fatuous, nebulous piece of crap??? (Score 2) 161

by steveha (#47481037) Attached to: Microsoft's Missed Opportunities: Memo From 1997

At the time, discontinuing the licensing of Mac clones was the right thing to do. All they did was tarnish Apple's image.

Actually, I agree with both you and the person to whom you are responding. Apple could have killed Windows by licensing out Mac OS, but it was the wrong thing at the time they actually tried it.

The Microsoft approach was to license out DOS and Windows to anyone who wanted it, taking a small royalty per copy and making money on huge volume. The Apple approach was to make more money per unit while selling fewer units. I firmly believe that if Apple had tried the Microsoft approach in, say, 1988, they would have won big-time. Windows was still a joke in 1988, and people were spending crazy money to buy Macs.

Licensing out Mac OS in small volume gains the benefits of neither approach. If Apple only got small volumes, they couldn't make Microsoft levels of money on a small royalty; yet cheap "clones" reduced their ability to charge large amounts on small volumes.
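The trap is easy to see with invented numbers (these are purely illustrative, not actual 1990s figures):

```python
def profit(units, profit_per_unit):
    """Toy model: total profit is just units times per-unit take."""
    return units * profit_per_unit

# Two viable strategies with hypothetical numbers:
microsoft_style = profit(units=100_000_000, profit_per_unit=20)     # tiny royalty, huge volume
apple_style     = profit(units=2_000_000,   profit_per_unit=1_000)  # fat margin, small volume

# Licensing in *small* volume combines a small royalty with few units,
# which is the worst of both worlds:
worst_of_both = profit(units=500_000, profit_per_unit=20)
```

With these made-up figures the two coherent strategies each clear two billion, while the half-hearted middle path clears two orders of magnitude less; that's the "benefits of neither approach" problem.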

Steve Jobs never wanted the Microsoft approach anyway. He wanted to sell premium stuff that looked awesome and commanded a premium price. But I wish that Apple had embraced the Microsoft model early; we'd all be running Motorola processors rather than x86.

Comment: What I am going to buy (Score 1) 502

I am about to buy an external audio device. To my knowledge, this is the best device you can get for a similar amount of money... you can spend a lot more money to get something about as good, or spend less money and get something worse.

The device is called an O2 amplifier plus ODAC. It was designed by someone who went by the name of "NwAvGuy".

http://spectrum.ieee.org/geek-life/profiles/nwavguy-the-audio-genius-who-vanished

The O2 is a really clean analog amplifier, and is actually open source hardware. You can get the parts list, order the parts yourself, solder everything together, and have your own O2. You can pair it with any DAC, but NwAvGuy also designed a DAC called the ODAC. He(?) said that he would have liked to make the DAC open source as well, but it wasn't practical.

I will buy mine from a company called JDS Labs. They sell a single nice integrated device with O2 and ODAC in one enclosure.

http://www.jdslabs.com/products/48/o2-odac-combo/

There are audiophiles who sneer at the O2 because it doesn't cost enough. At my previous job I spent hours listening to music on an O2 with Sennheiser 650 headphones, and I want to be able to listen to music with that level of quality again. I am willing to spend my own money to do it.

I thought about buying a really nice DAC but I always hesitated to spend the money because it can be hard to figure out what is worth the extra money, and what is just extra expense. I am friends with a world-class audio geek, and he agrees that this is a good quality audio device. If you want top quality and you are spending your own money, get or make an O2.

Comment: How will history judge the F-35? (Score 5, Interesting) 417

by steveha (#47176951) Attached to: Canada Poised To Buy 65 Lockheed Martin F-35 JSFs

Sometimes a new thing looks like a disaster for a while, but in the long run proves itself. The M-16 rifle is a tremendously successful design, but there were issues with the first models that made it look like a huge mistake.

So I am watching the F-35 and I am wondering: will this be as big a disaster as the nay-sayers claim, or will this work out in the long run?

I'm guessing it will limp along as a middle-of-the-road thing: not a complete horrible disaster, just a really expensive airplane that doesn't live up to its expectations.

Also, I have read that it is intended that a bunch of F-35s will share data with each other, and help each other detect and deal with threats; but the giant costs of the program have made it much less likely that enough F-35s will fly together at one time for this to work out.

One thing I am certain about: It's a mistake to try to replace the A-10 Warthog with F-35s. I don't even understand how the F-35 is supposed to do the same mission.

http://www.motherjones.com/mojo/2012/01/a-10-f-35-air-force-budget

Comment: Apple TV? (Score 5, Insightful) 147

Apple is known for limiting the number of different products. IMHO Apple is unlikely to ship a "microconsole" and continue to ship the Apple TV.

Much more likely: the "4th generation" Apple TV, which will not only do everything an Apple TV does, but will also play games if you buy a controller.

According to Wikipedia, the current Apple TV uses a single-core ARM chip. For gaming, Apple should put in a more powerful chip, which may imply a price hike. Perhaps Apple will continue to sell the current generation as a less-expensive model, for those who don't care about games.

Comment: I wonder if "Big Wind" would work on wildfires (Score 4, Interesting) 80

by steveha (#47059747) Attached to: Researchers Experiment With Explosives To Fight Wildfires

In the movie Fires of Kuwait, my favorite part showed a modified tank called "Big Wind".

Instead of a cannon, "Big Wind" has two jet engines from a MiG fighter plane, and it uses those to blow out fires the same way you might blow out a candle on a birthday cake, only at epic scale.

http://www.caranddriver.com/features/stilling-the-fires-of-war

It's probably more practical, for wildfires, to use a helicopter to deliver explosive devices rather than drive a tank around. Setting up the water reservoirs in advance would be a problem also. The tank worked very well in Kuwait, though!

Comment: Re:Author is missing the point entirely (Score 1) 255

by steveha (#47050995) Attached to: The Sci-Fi Myth of Robotic Competence

Stuff like this:

SF writers invented the robot long before it was possible to build one. Even as automated machines have become integral to modern existence, the robot SF keeps coming. And, by and large, it keeps lying. We think we know how robots work, because we've heard campfire tales about ones that don't exist.

And this:

The myth of robotic competence is based on a hunch. And it's a hunch that, for the most part, has been proven dead wrong by real-life robots.

Actual robots are devices of extremely narrow value and capability. They do one or two things with competence, and everything else terribly, or not at all.

The article doesn't contain the phrase "the Three Laws would only work on a true AI", but it really does make the point that fiction depicts true AI and we don't have that in the real world.

Comment: Re:Author is missing the point entirely (Score 1) 255

by steveha (#47050217) Attached to: The Sci-Fi Myth of Robotic Competence

Sorry to say it, but I think it is you who has missed the author's point entirely.

The author asked the question: if a car can save two lives by crashing in a way that kills one life, should it do so? And many people rejected the question out of hand.

The author listed three major ways people rejected the question:

"Robots should never make moral decisions. Any activity that would require a moral decision must remain a human activity."

"Just make robots obey the classic Three Laws!"

"Robots will be such skillful drivers that accidents will never happen, so we don't need to answer this question!"

None of those responses is well-reasoned, and that is the whole point of TFA.

The author went on to point out that the Three Laws are fictional laws that were applied to fictional full AIs that we don't have in the real world.

P.S. I do think that robot car drivers will rarely have crashes. As others have pointed out, the AI never gets sleepy or bored, and never takes stupid chances due to impatience. AI cars drive in a boring way, and if the majority of all cars were doing that, there would be a great reduction in crashes.

That said, of course the AI must be programmed with some strategy to cope with a crash. I'll bet that in the current generation it's mostly "swerve in a direction that doesn't appear to have any obstacles" and "stomp on the brakes" but there has to be something.
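That guessed-at strategy could be sketched in a few lines of Python. This is a hypothetical toy, not anything from an actual autonomous-driving stack, which would weigh sensor confidence, closing speeds, and road geometry:

```python
def choose_evasive_action(lanes_clear):
    """Pick a crash-mitigation action given which adjacent lanes look clear.

    lanes_clear: dict mapping 'left'/'right' to True when that lane has
    no detected obstacle. Purely illustrative.
    """
    # Prefer swerving into a lane that appears to have no obstacles...
    for direction in ('left', 'right'):
        if lanes_clear.get(direction):
            return 'swerve_' + direction
    # ...and if there is no clear lane, stomp on the brakes.
    return 'brake_hard'
```

For example, `choose_evasive_action({'left': False, 'right': True})` returns `'swerve_right'`, and with nothing clear it falls back to `'brake_hard'`.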

This is a specific case of a general problem: navigating cost/benefit tradeoffs. Suppose I have a new car design, and it is safer than old car designs. Then the more people switch to the new car, the more lives are saved. But the more expensive the car is, the fewer people buy the car. Now, I could add one more feature, and it makes the car even safer but it also makes the car even more expensive. Do I add the feature? Then fewer people get the safe car, but those people are extra safe. Do I omit the feature? More people get the safe car but it isn't as safe as it could be. How do you decide?

You use math, and do your best. But some people will reject the question. "It's immoral and shocking to reduce human lives to numbers in an equation..." Oh yeah, it's so much more moral to just guess at what to do, rather than try to apply math to the problem.
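The kind of math involved is just expected value over the whole fleet. A sketch with invented numbers (the fatality rates and sales figures here are made up purely for illustration):

```python
def expected_deaths(units_sold, fatal_rate_per_unit):
    """Expected fatalities across all cars sold; a toy fleet-level model."""
    return units_sold * fatal_rate_per_unit

# Hypothetical: the extra feature halves the per-car fatality rate,
# but the higher price cuts sales by a third.
base         = expected_deaths(units_sold=300_000, fatal_rate_per_unit=1e-4)
with_feature = expected_deaths(units_sold=200_000, fatal_rate_per_unit=5e-5)
```

With these numbers the feature wins despite the lower sales (10 expected deaths versus 30). Flip the assumptions, a tiny safety gain with a big sales hit, and the answer flips too; the point is that you can only see which way it goes by running the numbers.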

Comment: Asimov's Three Laws wouldn't work (Score 4, Interesting) 255

by steveha (#47049983) Attached to: The Sci-Fi Myth of Robotic Competence

Asimov's Three Laws of Robotics are justly famous. But people shouldn't assume that they will ever actually be used. They wouldn't really work.

Asimov wrote that he invented the Three Laws because he was tired of reading stories about robots running amok. Before Asimov, robots were usually used as a problem the heroes needed to solve. Asimov reasoned that machines are made with safeguards, and he came up with a set of safeguards for his fictional robots.

His laws are far from perfect, and Asimov himself wrote a whole bunch of stories taking advantage of the grey areas that the laws didn't cover well.

Let's consider a big one, the biggest one: according to the First Law, a robot may not harm a human, nor through inaction allow a human to come to harm. Well, what's a human? How does the robot know? If you dress a human in a gorilla costume, would the robot still try to protect him?

In the excellent hard-SF comic Freefall, a human asked Florence (an uplifted wolf with an artificial Three Laws design brain; legally she is a biological robot, not a person) how she would tell who is human. "Clothes", she said.
http://freefall.purrsia.com/ff1600/fc01585.htm
http://freefall.purrsia.com/ff1600/fc01586.htm
http://freefall.purrsia.com/ff1600/fc01587.htm

In Asimov's novel The Naked Sun, someone pointed out that you could build a heavily-armed spaceship that was controlled by a standard robotic brain and had no crew; then you could talk to it and tell it that all spaceships are unmanned, and any radio transmissions claiming humans are on board a ship are lies. Hey presto, you have made a robot that can kill humans.

Another problem: suppose someone just wanted to make a robot that can kill. Asimov's standard explanation was that this is impossible, because it took many people a whole lot of work to map out the robot brain design in the first place, and it would just be too much work to do all that work again. This is a mere hand-wave. "What man has done, man can aspire to do" as Jerry Pournelle sometimes says. Someone, somewhere, would put together a team of people and do the work of making a robot brain that just obeys all orders, with no pesky First Law restrictions. Heck, they could use robots to do part of the work, as long as they were very careful not to let the robots understand the implications of the whole project.

And then we get into "harm". In the classic short story "A Code for Sam", any robot built with the Three Laws goes insane. For example, allowing a human to smoke a cigarette is, through inaction, allowing a human to come to harm. Just watching a human walk across a road, knowing that a car could hit the human, would make a robot have a strong impulse to keep the human from crossing the street.

The Second Law is problematic too. The trivial Denial of Service attack against a Three Laws robot: "Destroy yourself now." You could order a robot to walk into a grinder, or beam radiation through its brain, or whatever it would take to destroy itself as long as no human came to harm. Asimov used this in some of his stories but never explained why it wasn't a huge problem... he lived before the Internet; maybe he just didn't realize how horrible many people can be.

There will be safeguards, but there will be more than just Three Laws. And we will need to figure things out like "if crashing the car kills one person and saves two people, do we tell the car to do it?"

Comment: Re:640k isn't enough for everybody (Score 3, Informative) 522

Dos can access a lot more than 640k - the limit on real mode access is 1mb.

True! So, if DOS can access 1 MB, where does the 640K limit come from? Long story short, it's because IBM's BIOS sucked.

Okay, longer story:

Everyone was supposed to use the BIOS for basic operations including writing text to the screen. But the BIOS was poorly designed; the only way it had to write to the screen was to write one character at a time per call into the BIOS. And calling into the BIOS was kind of slow (remember we are talking about computers three orders of magnitude slower than current computers... 4.7 MHz processor).

Since the BIOS was too slow, people didn't use it. Instead, they figured out the address of the screen buffer in the graphics card, and just wrote the desired text directly into the buffer. So much faster!

But this meant that all the most popular software for DOS was not using the BIOS, and had a particular hardware dependency hard-coded. And the memory region reserved for video hardware started right at the 640K mark. (There were actually two text buffer addresses, depending on whether the user had a mono or color card, and both sat just above the 640K line.) Those addresses were chosen back in the days when RAM was really expensive, and computers might only have 64K or even less. So nobody saw a problem coming... and besides, everyone was going to be using the BIOS, right? So you should have been able to move the graphics card, change the BIOS, and all the software would still work. Whoops.

With the benefit of hindsight, what should have happened was: a DOS program uses the BIOS to query the address of the frame buffer, so the graphics card can move around anywhere in memory. And the BIOS should have had a "write whole string" function from the beginning. (Much later versions of the BIOS had a "write whole string" function but I don't think any popular software ever used it, as it was not available in the giant installed base of old DOS computers.)
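The addressing those "fast" DOS programs hard-coded is simple enough to sketch in Python. (The segment values are the standard ones for IBM text modes, recalled from memory, and the buffer here is just a stand-in for real video RAM.)

```python
def linear_address(segment, offset):
    """8086 real mode: 20-bit linear address = segment * 16 + offset."""
    return (segment << 4) + offset

# Color text buffer lived at segment 0xB800; mono at 0xB000.
COLOR_TEXT = linear_address(0xB800, 0)   # 0xB8000

def cell_offset(row, col, columns=80):
    """Each cell in 80x25 text mode is 2 bytes: character, then attribute."""
    return (row * columns + col) * 2

# The equivalent of the "poke" DOS programs did instead of calling the BIOS:
vram = bytearray(80 * 25 * 2)            # stand-in for the video buffer
off = cell_offset(0, 0)                  # top-left corner of the screen
vram[off] = ord('A')                     # character byte
vram[off + 1] = 0x07                     # attribute byte: light grey on black
```

Every program that did this baked both the segment value and the 2-bytes-per-cell layout into its own code, which is exactly the hardware dependency that a "query the frame buffer address through the BIOS" design would have avoided.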
