Comment: Re:Do pilots still need licenses? (Score 1) 336

by Dutch Gun (#49186761) Attached to: Would You Need a License To Drive a Self-Driving Car?

Well, I wasn't really talking about adaptive cruise control, which has actually been available for quite a while. But those only control throttle and braking. I was thinking more like fully autonomous steering on freeways within a decade.

I was poking around to see if I could find some other predictions, and ran across this Wikipedia article. The section on predictions by major automobile manufacturers and others in the industry was really interesting. It looks like it may only be a year or two until cars can drive themselves on the freeway. Some highlights:

By 2016, Audi and Nissan plan to market vehicles that can autonomously steer, accelerate, and brake at lower speeds, such as in traffic jams.
By 2016, Mercedes plans to introduce "Autobahn Pilot" (aka Highway Pilot), a system that allows hands-free highway driving with autonomous overtaking of other vehicles.

So, damn... it's probably going to happen in just a couple of years. No way they'd make those predictions if they weren't already well prototyped and gearing up for production. So, how about fully autonomous?

By 2020, the head of Google's autonomous car project aims to have all outstanding problems with the autonomous car resolved.
By 2025, Daimler and Ford expect autonomous vehicles on the market. Ford predicts it will have the first mass-market autonomous vehicle.

Again, faster than I would have predicted. Possibly optimistic, but who knows? And longer term predictions?

By 2035, an IHS Automotive report predicts, most self-driving vehicles will be operated completely independently of a human occupant's control.
By 2040, expert members of the Institute of Electrical and Electronics Engineers (IEEE) have estimated that up to 75% of all vehicles will be autonomous.

Awesome. We may never see flying cars ourselves, but self-driving cars won't be a bad runner-up.

Comment: Re:Do pilots still need licenses? (Score 1) 336

by Dutch Gun (#49186133) Attached to: Would You Need a License To Drive a Self-Driving Car?

I agree that adoption will be gradual. The first generation of "self-driving" cars will probably have "smart cruise-control" and "self-parking" modes, but the driver will still be expected to be at the wheel and ready to take control if needed. Next, the vehicle will be smart enough to take you from start to destination by itself, but only in good weather and relatively common driving circumstances. Eventually, engineers will probably figure out how to make these systems so smart and reliable that we can simply take out the manual controls, or at least have them stowed away for emergencies.

At some point, people will be far worse drivers than their cars, not just because the AI drivers are better, but also because humans won't have as much opportunity to practice. At that point, it will actually be safer to prohibit manual driving on the road except in emergencies, as human drivers would be more of a liability. You'll only need a driver's license if you have a need to operate a car manually, and that will be an increasingly rare occurrence.

The only part I really disagree with is your timeline. My guess is that early "limited" self-driving cars will be here well within a decade, while completely hands-off, license-free driving will happen about thirty years from now. Between those two points will be a very gradual transition from partially to fully autonomous, as systems improve and people learn to trust their cars more and more.

Comment: Re:There might be hope for a decent adaptation (Score 1) 317

by Dutch Gun (#49185839) Attached to: 'The Moon Is a Harsh Mistress' Coming To the Big Screen

In the first Heinlein book I read, The Number of the Beast, the already-insufferable-yet-amazingly-forgettable characters eventually ended up in the land of Oz in a flying car... What. The. Fuck. At that point I put the book down and decided my time was better spent with other authors. Or sorting my lint collection. Anything else.

Starship Troopers was better, but nothing really special. I also read it after seeing the movie, so I had pretty low expectations going into it - but people kept telling me how much better the book was.

I never bothered with anything of Heinlein's after those two utter disappointments. Maybe those weren't representative of his best works, but The Number of the Beast was so atrocious and the Starship Troopers movie so vapid, it probably forever tainted my opinion of his other works.

Comment: Re:Breakthrough? (Score 4, Interesting) 411

Yeah, Windows has some penetration on low-end devices, but you know that's not where they really want to be.

Interestingly enough, Microsoft is now in the same position on the phone as Linux is on the desktop. They have an extremely competent offering, but they can't seem to break through and make significant gains in the market. As we've seen time and time again with Linux, it's not enough to offer something "almost as good" to get someone to switch. You can't even compete with "just as good". You need to provide something that's significantly better than the competition in some fashion - some significant advantage that will compel people to move from Android or iOS to Windows phones.

In the article, Microsoft stated that a Microsoft phone would provide a "more consistent experience across smartphones, tablets, and PCs". Interestingly, that's exactly why I hated Windows 8 so much: it was obviously a mobile UI bolted rather clumsily on top of my desktop. Windows 10 is unfortunately keeping the same "modern fugly" visual design, but it at least fixes the usability and integration problems. So, in theory, a cross-platform app store could end up being a win for them. If you can buy an app once and run it on all three of those platforms, I could see that being attractive for consumers.

Another possibility is if they provide businesses some great tools to help manage mobile corporate devices. Apple has been notoriously bad at this - not sure how easy it is with Android. But for consumers? I don't know. At the moment, I just don't really see how they're going to crack into this extremely competitive market.

Comment: Re:Bad idea (Score 1) 648

by Dutch Gun (#49177811) Attached to: Snowden Reportedly In Talks To Return To US To Face Trial

Do you see Cheney up on charges? Or Bush? Or Obama? Or the head of the CIA?

Of course not, because those clowns are operating under a different set of laws than you and I do.

Not that I disagree with your general point, but... the US president and members of Congress actually do constitutionally operate under a different set of rules than everyone else.

Comment: Re:Easier to Analyze or Change == More Maintainabl (Score 4, Insightful) 244

by Dutch Gun (#49177145) Attached to: Study: Refactoring Doesn't Improve Code Quality

Nope, it's when I take the awful, unmaintainable spaghetti code someone else produced when they were in a deadline crunch and convert it into something maintainable.

Sigh... I wish I could say that with a straight face.

Interestingly, in my experience, poorly structured code comes about less often because of "rushed code" and more often because of a lack of foresight in the original structure of a system to deal with continuously evolving features (which happens in most projects), along with a lack of willingness to refactor those systems as soon as it's apparent the structure is starting to break down.

This is the "golden time" to refactor code, because it's just now become apparent where the structural flaws are in the architecture, but it's still early enough to refactor without causing a significant amount of pain. It's often hard to justify, because you've only got a couple of ugly special cases that complicate things here and there. However, if you procrastinate too long, you're going to start piling on more and more "ugly special cases", and the code is going to get harder and harder to read and maintain.
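To make that concrete, here's a contrived sketch (the shape example and every name in it are invented for illustration, not from any real codebase) of what accumulating special cases looks like, and the shape of the refactor:

#include <iostream>
#include <memory>
#include <vector>

// Before: every new case means another branch here, and in every other
// function that switches on the same type field scattered around the code.
enum class Shape { Circle, Square /*, Triangle <- the next "ugly special case" */ };
double areaByType(Shape s, double size) {
    if (s == Shape::Circle) return 3.14159 * size * size;
    else                    return size * size;  // Square
}

// After: the structure absorbs new cases. Adding a shape means adding a
// class, not hunting down every branch site in the codebase.
struct IShape {
    virtual ~IShape() = default;
    virtual double area() const = 0;
};
struct Circle : IShape {
    explicit Circle(double r) : r_(r) {}
    double area() const override { return 3.14159 * r_ * r_; }
    double r_;
};
struct Square : IShape {
    explicit Square(double s) : s_(s) {}
    double area() const override { return s_ * s_; }
    double s_;
};

int main() {
    std::vector<std::unique_ptr<IShape>> shapes;
    shapes.push_back(std::make_unique<Circle>(1.0));
    shapes.push_back(std::make_unique<Square>(2.0));
    for (const auto& s : shapes) std::cout << s->area() << "\n";
}

The longer the first version festers, the more branch sites there are to hunt down when you finally get around to the second.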

Comment: Re:c++? (Score 1) 393

It has dick-all to do with "correctness" or whatever. It's simply because even if a subclass has its own implementation of a parent's method, it'll still call the parent method - this goes against one of the core principles of OO: polymorphism. This means that even *if* you wanted to override a method from a parent class in your subclass, unless the parent has it marked as virtual, you're SOL.

If you read about Stroustrup's design intentions, he absolutely believed that strong type safety was an important part of making programs safer and less bug-prone. Obviously, the creators of Objective-C (and many other languages) chose to follow a different design philosophy. It's probably not helpful to get into a debate about which one is "correct", as that's obviously going to be pretty subjective.

As far as the example you give, this is because C++ adheres to the "zero-overhead principle": the language designers don't believe C++ developers should pay for features they're not using. Virtual functions are more expensive to call than non-virtual ones, and the compiler can't always determine at compile time whether a call site needs dynamic dispatch, so it's left to the programmer to tag the function. It has nothing to do with "going against the core principles of OO". It just means C++ will happily let you shoot yourself in the foot if you ignore the rules it requires the programmer to follow.
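To make the distinction concrete, here's a minimal sketch (invented names, not from any real codebase) of static versus virtual dispatch:

#include <iostream>

struct Base {
    void greet() const         { std::cout << "Base::greet\n"; }   // non-virtual
    virtual void speak() const { std::cout << "Base::speak\n"; }   // virtual
    virtual ~Base() = default;
};

struct Derived : Base {
    void greet() const          { std::cout << "Derived::greet\n"; } // hides Base::greet
    void speak() const override { std::cout << "Derived::speak\n"; } // overrides it
};

int main() {
    Derived d;
    Base& b = d;
    b.greet();  // prints "Base::greet"    -- resolved at compile time, no vtable lookup
    b.speak();  // prints "Derived::speak" -- dispatched through the vtable at runtime
}

Only speak() pays for the vtable lookup; greet() costs the same as any plain function call. That's the zero-overhead trade the language is making.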

This is why people consider it one of the more difficult languages to master. It's a powerful tool, but it has many sharp edges. For programmers like me who value C++ primarily for its runtime efficiency, this is absolutely the correct design decision. There are plenty of better languages to use if your primary goal is programmer efficiency.

BTW, you're a bit out of date regarding C++ and allocation. Modern C++ now has several built-in smart pointers (including ref-counted versions), which make modern C++ feel a lot closer to C# with its garbage collection than to C-style manual memory management.
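For anyone who hasn't touched C++ in a while, a minimal sketch of what that looks like in practice (C++14, with an invented Widget type):

#include <iostream>
#include <memory>

struct Widget {
    ~Widget() { std::cout << "Widget destroyed\n"; }
};

int main() {
    auto solo = std::make_unique<Widget>();  // sole owner; freed at end of scope

    auto a = std::make_shared<Widget>();     // ref count = 1
    {
        auto b = a;                          // ref count = 2
        std::cout << a.use_count() << "\n";  // prints 2
    }                                        // b destroyed, count back to 1
    std::cout << a.use_count() << "\n";      // prints 1
}   // both Widgets destroyed here, deterministically -- no explicit delete anywhere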

Comment: Re:Then, How Best To Learn? (Score 1) 393

I think this question has been answered pretty well on Stack Overflow. C++ is too deep and complex to learn from a few web tutorials of unknown quality. Get a good tutorial book and just follow along - that's how I learned it. Then get a good reference book for more in-depth knowledge as you move past the basics.

Anything that at least covers C++ 11 is fine, as C++ 14 was more about some minor tweaks that you can learn on your own.

Be careful about learning bad habits from C++ code examples you find on the web, or in older projects. Essentially, if you see the code using new, delete, and raw pointers, it's likely outdated. You may still need these on occasion, but their use is vanishingly rare in modern C++.
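A quick before-and-after sketch (contrived names), so you know the smell when you see it:

#include <memory>
#include <string>
#include <vector>

struct Thing { std::string name; };

// Outdated style, still common in old web examples:
void oldStyle() {
    Thing* t = new Thing;
    t->name = "legacy";
    // anything that throws or returns early between new and delete leaks t
    delete t;
}

// Modern style: ownership is explicit, cleanup is automatic on every path.
void modernStyle() {
    auto t = std::make_unique<Thing>();
    t->name = "modern";
    std::vector<Thing> many(3);  // containers instead of new[]/delete[]
}

int main() {
    oldStyle();
    modernStyle();
}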

Comment: Re:c++ (Score 1) 393

Or you should in theory at least.
In the 8-bit arena (Embedded or old) C++ support is pretty much nonexistent or incomplete.

Heck, once you leave the architectures supported by GCC the portability of C++ becomes questionable.

Was anyone confused that I might be talking about an Apple II or Commodore 64 when I said "just about every platform"? And regarding embedded, it seems relatively easy to find C++ compilers for embedded hardware.

Comment: Re:c++ (Score 1) 393

Well, it definitely wasn't perfect. I cross-compiled code with Microsoft, Borland, gcc, Watcom, and others years ago, but I don't recall it ever being as bad as you seem to remember. If you used a reasonably conservative set of language features, it was definitely possible to keep your code quite portable.

Nowadays, as you mentioned, it's an entirely different situation, as full standards compliance seems to be getting a lot more attention.

Comment: Re:serious question (Score 1) 167

by Dutch Gun (#49169341) Attached to: Marissa Mayer On Turning Around Yahoo

The fact that few people seem to know this could be part of Yahoo's problem. They just don't seem to have a strong corporate identity. I'll bet that just about anyone here could tell you what products and services Microsoft, Google, Apple, Amazon, HP, IBM, Oracle, Cisco, and other companies are best known for. For Yahoo, I probably would have answered "mediocre e-mail, crappy search, and some decent services like news and Flickr", but beyond that, I really had no idea.

Comment: Re:c++ (Score 1) 393

I agree. C++ has really seen something of a renaissance in the last few years with C++ 11 and 14. CPU core speed has flattened, and people are realizing that efficiency isn't really something that can be ignored in many cases. Moreover, C++ is and always has been a very portable language, as you can compile it on just about every platform imaginable.

Nowadays, you can write C++ and be assured that you'll rarely have to even think about explicit memory management or leaks. Moreover, what really surprised me was that I now actually prefer the simpler, more versatile, and more predictable reference-counted paradigm over managed memory and garbage collection. In garbage-collected languages, the lack of a destructor mechanism means that releasing resources in a predictable manner tends to be a bit less elegant, because memory is handled differently from every other resource. In C++, memory is just like any other resource.
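A minimal sketch of that idea (the File wrapper and file name are hypothetical), showing a destructor releasing a non-memory resource at a predictable point:

#include <cstdio>
#include <mutex>

// RAII: the destructor releases the resource at a predictable point -- the
// closing brace -- whether we leave normally or via an exception. In a GC
// language you'd reach for try/finally, using, or with instead, because
// finalizers run at some unpredictable later time, if at all.
class File {
public:
    explicit File(const char* path) : f_(std::fopen(path, "w")) {}
    ~File() { if (f_) std::fclose(f_); }  // runs right here, every time
    File(const File&) = delete;
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};

std::mutex m;

int main() {
    {
        File log("example.log");              // hypothetical file name
        std::lock_guard<std::mutex> lock(m);  // same idiom works for locks
        if (log.get()) std::fputs("hello\n", log.get());
    }  // file closed and mutex released here, deterministically
}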

A lot of people talk about the complexity of C++. There are a couple of things to remember. C++ IS a pretty big and complex language, of course, but you don't necessarily have to deal with much of that complexity in many circumstances. First, a lot of the complexity is related to its backward compatibility, both with C and with its own early features. Unless you're maintaining or interfacing with old code, many of those features are largely irrelevant when writing modern C++ - if you're still leaning on them, you either have some exceptional circumstances, a very old codebase, or you're not really using the language correctly. Second, C++ can be viewed as two different languages: one suited for library writers, and one for library users (or application programmers). Writing C++ libraries can actually be rather difficult - it should be viewed as an expert-level skill. However, C++ actually makes it extremely easy to use a library. In fact, a well-designed library should be very difficult to use incorrectly, especially when compared to C.

The language definitely has its strengths and weaknesses, and I certainly wouldn't recommend it for everything. I'd say C++ starts to really shine at the extremes: if your program needs to run on a lot of different platforms, needs to run extremely fast, has to operate within extremely tight constraints, or is exceptionally large and complex, then C++ may be a good fit.

Comment: 80% of statistics are made up (Score 1) 187

by Dutch Gun (#49159947) Attached to: Foxconn Factories' Future: Fewer Humans, More Robots

As of January 2015, the U6 rate is at 11.3%, from a high of 17.1% in 2009-10. U6 includes discouraged workers (U4 and up) and even "underemployed" workers (part-timers that would prefer to be full time), and so is probably a bit high if you're talking about actual unemployment. No, we're absolutely not at record levels of unemployment.

Moreover, no one uses "percentage of working age people not working" as an unemployment metric (unless you want to inflate the figure), because that includes people who choose not to work, such as spouses of full time workers, students, or those who retire early.

How about the baby boomers? Awesome, more wildly inaccurate statistics. It's not great news, but it's a far cry from what you indicated:

* 33 percent of Boomers have put aside less than $50,000
* Baby Boomers have saved an average of $262,541, about a third of the $805,398 they predict they’ll need at retirement.

I'm not claiming things aren't tough out there, but just pulling made-up statistics out of the air isn't going to inspire confidence in your arguments.

Comment: Re:Pharming? (Score 1) 39

by Dutch Gun (#49159563) Attached to: Pharming Attack Targets Home Router DNS Settings

"Phishing" actually makes a bit of sense, as in an attempt to snare victims with a false lure of sorts, such as a phony website. "Spear phishing" is a logical extension of this, a very directed phishing attack made at a particular company, or even a specific person, used to gain corporate access. I thought those were sort of clever, and gave us an accurate way to describe those very common attacks.

This one... yeah, not so much.

According to Wikipedia:

The term "pharming" has been controversial within the field. At a conference organized by the Anti-Phishing Working Group, Phillip Hallam-Baker denounced the term as "a marketing neologism designed to convince banks to buy a new set of security services". Scott Chasin, a former CTO of McAfee and founder of email security firm MX Logic, coined the term in 2005.

Let's just call it what it is: a specific type of phishing attack.
