
## Comment Problem Solving Is Why (Score 1)656

Parent got it right; it's about problem solving. I have dual math and CS degrees, and while most of the actual math escaped me decades ago (I couldn't solve half the diff-EQs or integrals now that I could in college), the practices and thought processes have (IMnsHO) made me a better programmer. Programming is about efficiency as much as or more than it is about knowing any specific language or being able to execute a particular task. Most important, I think, is the ability to have faith that your code is correct and complete... proofs in linear algebra and number theory were immensely helpful for that. Testing edge cases and knowing that your loops will terminate properly flex the same muscles as proofs by induction. I think of Pollard's rho more when doing database programming than I did in math classes, but I'm glad someone pointed it out to me there.
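For anyone who hasn't run across it: Pollard's rho is a factoring trick built on exactly that loop-termination reasoning (Floyd's tortoise-and-hare cycle detection). A minimal Python sketch, with an arbitrary seed and iteration map, not any particular production implementation:

```python
import math

def pollard_rho(n, seed=2):
    """Pollard's rho: find a nontrivial factor of an odd composite n."""
    f = lambda v: (v * v + 1) % n  # pseudo-random iteration map
    x = y = seed
    d = 1
    while d == 1:
        x = f(x)       # tortoise takes one step
        y = f(f(y))    # hare takes two (Floyd cycle detection)
        d = math.gcd(abs(x - y), n)
    return d  # a factor of n (can be n itself on unlucky seeds)

print(pollard_rho(8051))  # 8051 = 83 * 97
```

The termination argument is the interesting part: the sequence must eventually cycle mod each prime factor, and the gcd catches the collision.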

Math can also be directly applicable depending on what you're going into. Visual and game programming is full of geometry and trigonometry. Artificial Intelligence, Big Data, Data Mining all require statistics, hashing algorithms, efficient tree traversal, and all sorts of things that span the boundary between CS and Math. In the end, though, all of programming is just implementing algorithms, and all algorithms are just math problems. The two complement each other brilliantly.

## Comment Commodore "Compute's Gazette" Magazines (Score 1)623

My dad brought home a Commodore and I subscribed to Compute!'s Gazette (I think that's what it was called) -- a magazine with a lot of Commodore stuff in it. One thing they had was pages and pages of bytecode that you had to type in, with no debugger or syntax to speak of. I learned a LOT from that, and from the BASIC built into the machine. The first thing I really remember programming to completion was a Julia and Mandelbrot set generator... in Commodore BASIC. It was not fast; I could watch the program draw pretty much every pixel. Good times.
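That BASIC program was really just the escape-time loop, run pixel by pixel. A quick Python equivalent (ASCII art instead of a C64 screen; the viewport coordinates are just values I picked to frame the set):

```python
def escape_time(c, max_iter=30):
    """Iterate z -> z*z + c; return steps before |z| exceeds 2."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter  # never escaped: treat as inside the set

def render(width=60, height=24, max_iter=30):
    """Map each character cell to a point c and test it."""
    rows = []
    for row in range(height):
        rows.append("".join(
            "#" if escape_time(
                complex(-2.0 + 2.8 * col / width,
                        -1.2 + 2.4 * row / height),
                max_iter) == max_iter else " "
            for col in range(width)))
    return rows

print("\n".join(render()))
```

Watching it draw one cell at a time is roughly the 1980s experience, minus the hour-long wait.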

I ended up with a degree in computer science, but I'd say that was more an opportunity to practice than it was how I "learned" to program. Algorithms and operating-systems classes had some concepts I hadn't run across, and every class was an opportunity to learn or find new snippets of knowledge. But the formal things I learned, like Bresenham's circle algorithm and topological sorts, or anything from the Dragon Book, the volumes of Knuth (ah), or the numerical programming books, were important conceptually... good for writing efficient code, and great ways not to reinvent the wheel. But the only way to "learn" to code is to code.
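Since I name-dropped it: Bresenham's (midpoint) circle algorithm is a good example of the "efficient code" point -- it plots a circle using only integer adds, no floating point and no trig. A from-memory Python sketch, using the usual eight-way symmetry formulation:

```python
def midpoint_circle(r):
    """Integer-only midpoint circle: compute one octant, mirror to all eight."""
    pts = set()
    x, y = r, 0
    err = 1 - r  # decision variable: which pixel is closer to the true circle
    while x >= y:
        # reflect the octant point (x, y) into all eight octants
        for sx, sy in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pts.add((sx, sy))
        y += 1
        if err < 0:
            err += 2 * y + 1          # stay at the same x
        else:
            x -= 1
            err += 2 * (y - x) + 1    # step diagonally inward
    return pts
```

On a machine where a floating-point multiply cost real time (hello, Commodore), this was the difference between usable and not.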

## Comment Wedding down the street (Score 1)217

A neighbor down the street is apparently getting married, because we keep receiving their wedding RSVPs. The RSVP postcards are addressed in a fancy script, and the Post Office routinely misreads a "1" as a "4". That has definitely made this year's count spike.

## Comment Re:What else is in the "industry"? (Score 1)209

Military still seems "proprietary" to me. If they meant "commercial", I could see a difference. I also considered "embedded" or "firmware" style code that, while software, is more closely tied to a physical hardware implementation. All of those still seem either "proprietary" or "open source", though, and you're right (@stillnotelf) that these would raise rather than lower industry averages.

It could include things like JavaScript that's just out in the wild. If you were to strip out the programmatic pieces of websites that were one-offs... things that were neither marketed nor sold, and not really managed as software, just put out there, code quality would probably drop. I'm thinking of websites with funny animations, or hand-coded scripts for navigation and whatnot. These wouldn't be "open source" in the sense that there's no statement of open copyright, and they wouldn't be "proprietary" in the sense that no one is marketing or working to save or publish or reuse the code, and while copyright may exist, no one is really worried about protecting it (due to the one-off nature). Still, it's a weird statistic.

I wonder if they meant a "standard" as in a target or an accepted limit that's somewhat arbitrary, rather than an "average" (the word the article actually uses) of real-world code. That would make sense, since the average they cite is exactly 1.

## Comment Re:and all the children are above average (Score 2)209

Wow, yeah, I posted an almost identical sentence myself. Eerie. (Although I didn't have a Wobegon reference... sorry.) But yeah, it seems like an odd sentiment. Internal-use software is still either "proprietary" or "open source"... isn't it? But good point. If someone counted the bugs in my Excel macros as if they could be used for general-purpose computing, I'd be in sad shape. (ObNote: I use Excel macros as rarely as possible, and normally only at gunpoint.)

## Comment What else is in the "industry"? (Score 4, Insightful)209

> and both [proprietary and open-source software] continue to surpass the industry standard for software quality

... What else is there? And why is this unknown third type of code dragging down the "industry"?

## Comment Playing along with the ridiculousness... (Score 1)629

I agree with most posters that the logic in the post is hugely flawed (predicting something about the future by arguing that we can't know enough to predict it is inane). But more constructively: our constant access to information hasn't sated our desire for more information. Information collection is driving the recent knowledge boom as much as, if not more than, ease of access. Besides, no matter how much time passes, if we haven't visited another world, we won't have information about that world at our disposal. You have to collect information before you can use it... that's WHY further exploration will always be a goal (unless, you know, we obliterate ourselves somehow in the meantime, or find something similarly more important in any given short term).

## Submission + - Giving away personal info byte by byte for "security"?1

ZahrGnosis writes: "Through work, and bad planning, I've been subscribed to a lot of trade magazines. I don't generally hate them; most come electronically now, and some are actually useful. What I despise is the annual calls to renew my "free" subscription and update my info. OK, giving away my work e-mail, title, and line-of-business info isn't that bad; it's practically good marketing. But at the end of each call the phone people say "in order to verify that we talked to you, can you tell us..." followed by something tiny and seemingly innocuous: the last digit of the month you were born in, or the first letter of the city you were born in. Today it was the country you were born in. I've started lying, of course, because no one needs to aggregate that sort of personal information byte by byte, but now I'm convinced that's what they're doing. Does anyone have any details on this? Can someone confirm or deny that these seemingly innocuous "security" questions are being aggregated? Yes, I tried some Google searches, but my fu seems weak in this area... or no one is talking about it yet. Bueller?"

## Comment Same point, two days ago... (Score 2)365

I said it before and I'll say it again: taking something that exists and making it digital (or putting it on the internet or on your phone, or any other "just-add-technology" change) doesn't make it new. Prior art should still apply... I refer you to my comment on Tuesday:

Obvious, Novel, and Prior Art aren't just digital

## Comment Re:Obvious, Novel, and Prior Art aren't just digit (Score 1)240

I completely agree, but I think that was my point (and I reply largely because I took umbrage at the "Um, no," while really I think we agree)... design patents are intentionally very narrow in scope, and I'm arguing that software patents should be similarly narrow, specifically to avoid the problems with the current patent system that we're both talking about.

I used the shopping-cart example because of recent news that such a patent had been awarded, and upon reading the patent, it does not appear to cover a specific implementation at all; rather, a very generic implementation that closely parallels an existing physical concept (a shopping cart). It's not that specific implementations shouldn't be patentable (although I might argue that copyright, rather than patents, should apply in most cases where the differences are aesthetic rather than functional -- thus my comparison to design patents); I'm just trying to find a reasonable, testable limit to what should qualify.

As you say, patenting the entire store is not what patents are about. I think patenting shopping carts is similar (unless they have a "novel" feature). Things that work well in the physical world are quickly gaining internet-based analogues.

You're also right about inevitability, of course, but I think there's a difference between inevitable, and obvious or novel. Lending digital objects (the case in point) is an interesting example. A narrow patent on a particular combination of encryption and centrally controlled methods for tracking and limiting how many times something can be shared probably could be patentable, and makes sense. But in this case, the patent has grown to include practically any conceivable implementation, which seems wrong. The limiting factor I suggested was whether there is a physical analog or not. There IS a physical analog for lending, so no matter how long it takes the first person to build and market code for it, lending should, IMHO, remain unpatentable. I'm not aware of a physical analog for lending something only a specific number of times, so I'm OK with that part (although I'd be glad to be corrected).

Definitely an interesting conversation; I think we agree on the salient points.

## Comment Obvious, Novel, and Prior Art aren't just digital (Score 3, Insightful)240

The patent office needs to adopt a simple fact: doing something digitally that has been done physically before (like lending purchased objects just like a used book and music store, or having a digital "shopping cart" like, you know, a shopping cart) is "obvious". Someone will eventually get around to implementing it, so it is not novel and should not be patentable. At best maybe the site should get design patent coverage, or some very specific encryption algorithms should be protected in some way if in fact they are proprietary, but the idea of patenting an entire store concept should be ridiculous.

## Comment Re:Biology, Economics, Chemistry, Politics... (Score 1)265

It's been a long time since I've read Kuhn (it was required in college along with a counter position by someone whose name escapes me), and I remember not fully agreeing with every nuance that he wrote. But I completely agree with parent that Biology has gone through paradigm shifts, and I'd say it's clear that other things have as well. We live in a world where cloning is a reality -- this is a paradigm shift from when it was science fiction. Microbiology changed the way everyone interacts with the world right down to the soap we buy.

Obviously economists are having some identity crises lately that I suspect will be looked back upon as something of a paradigm shift. Chemistry has moved quite far from alchemy, and nuclear chemistry was unthinkable, or at least not understood, not long ago. Manufacturing went from fully manual through the assembly line and is now in a phase of robotics and desktop printing. Spectrum disorders are fairly recent diagnoses (as a specific category, at least) in the psychological sciences. Triceratops were recently reshuffled, and we're far more likely to imagine dinosaurs with feathers now than when I was a kid. Don't get me started on the Brontosaurus. Warfare is changing; in some cases it's barely important for a person with a weapon to directly see the person opposing them. The concept of water on other planets is far more accepted, and direct evidence is changing common thinking on the topic. We have a near-permanent presence in space. Gender views are changing... gay rights are probably at a social tipping point, at least in some places, and every social science is strongly affected.

All of these things are dramatic changes. Are they paradigm shifts? Some of them... one of the problems with paradigm shifts is that they're nearly impossible to see from the inside. And yes, they may appear more subtle than "the earth is round - no it's not" debates, but remember that even those took quite a long time for large populations to accept. It's difficult to predict how any new way of looking at something will affect the future, but some of these changes will eventually be looked back upon as paradigm shifts.

## Comment Re:BIG data? (Score 3, Insightful)155

Ah, good, I'm glad someone already mentioned this. "Big" did not deserve to be italicized there, not only because 260 million minutes of video isn't "that much" (!) in terms of internet streaming, but because the statistics aren't really based on the number of minutes of video analyzed... the main statistics are more about viewership and certain events (video startup, video freezing), which could be surrounded by hours of uninteresting video time that didn't really contribute to some of the metrics.

Netflix has, what, 20+ million individual viewers per month? Ten hours apiece isn't hard to imagine. As the parent pointed out, YouTube is much larger than that.

It's still very interesting analytics; it's not always the size that matters with "big" data. But let's not get carried away with the italics now, people... that way madness lies.

## Comment Re:Ouch. (Score 5, Informative)362

To my dismay, I worked on this project. The project started with controversy -- the Oracle bid that beat out SAP some seven years ago was surrounded by complaints. The article skips some details. CSC (Computer Sciences Corp., which is quoted) was the main driver of about $800 million of that spending. It is accurate to say that this change didn't affect them, but that's because hundreds of people had already been laid off or moved off the project between last September and last March.

There's enough blame to go around. Years were spent on requirements that never turned into code; time was spent passing blame back and forth across development teams so large and segregated that they rarely communicated properly, both within the Air Force and within CSC, and between the various teams. At its peak, I believe the project had roughly 800 people on it. I don't know what the maximum size of a development project should be, but it's got to be smaller than that. That number includes everyone -- trainers, managers, and some key initial users and testers -- but it's still a very high number.

The Air Force tried several times to realign the project, but there were contractual disputes and, once those were over, difficulty deciding what to keep and what to scrap, which led to a death spiral where everything went back on the drawing board; I think ultimately leadership just lost hope.

It wasn't a complete loss, though. A few small teams, including the one I was previously on, have survived. We built a robust data-quality system and are working on some enterprise data dictionary and master-data tools, which will help the systems that are left behind. With hundreds of systems supporting half a million users, $1 billion probably isn't off the chart -- at least it wouldn't have been had the project succeeded. The worst part is that there's still much work to be done, and now someone will have to start over... again.
