
Comment Bell & Howell (Score 1) 523

Way back when, Apple hadn't yet established relationships with public schools. School administrators didn't know how to classify computer equipment, anyway. The Bell & Howell company came to the rescue: they were vendors of audiovisual equipment like film projectors. Bell & Howell agreed to let Apple use their connections with school districts in exchange for the computers being rebranded as Bell & Howell equipment, in Bell & Howell livery.

This is why the first computer I ever got time on was an all-black sleek Apple II+ that looked like it belonged on the Death Star.

The original Bell & Howell Apples are now almost completely forgotten about. But man, do I ever have fond memories of them.

See one for yourself here.

Submission + - Slashdot Alum Samzenpus's Fractured Veil Hits Kickstarter

CmdrTaco writes: Long time Slashdot readers remember Samzenpus, who posted over 17,000 stories here, sadly crushing my record in the process! What you might NOT know is that he was frequently the Dungeon Master for D&D campaigns played by the original Slashdot crew, and for the last few years he has been applying those skills with fellow Slashdot editorial alum Chris DiBona to a survival game called Fractured Veil. It's set in a post-apocalyptic Hawaii with a huge world based on real map data to explore, as well as a careful balance between PvP and PvE. I figured a lot of our old friends would love to help them meet their Kickstarter goal and then help us build bases and murder monsters! The game is turning into something pretty great and I'm excited to see it in the wild!

Comment Re:How to automate C++ auditing? (Score 1) 341

And before you start spewing about vectors vs C-style arrays performance, I would suggest you revisit some simple profiling code; vectors are just as fast as c-style arrays these days.

Actually, no.

The C++ standard doesn't give any guarantees on how quickly your request for a dynamically allocated block on the heap will be serviced. Most modern desktop CPUs and OSes do this very quickly, but there are embedded platforms where allocating memory on the heap takes multiple milliseconds.

Likewise, modern CPUs tend to aggressively cache vectors in ways that are absolutely beautiful. But in embedded systems with simpler CPUs, they may very well be accessing each element of the vector via a pointer indirection.

The world's a much bigger place than server farms and laptops. In those areas, yeah, std::vector<T> is a lifesaver. But in the IoT world there's still a very real need for C-style stack arrays.

This is, incidentally, the entire motivation for std::array<T, N> in C++11. The embedded guys insisted on a container with an STL-compatible API that compiles down to a straight-up C array. I've never found anything in the embedded space where a std::array<T, N> was an inappropriate choice, but I've been on lots of projects where a std::vector<T> was simply not in the cards.
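To make that concrete, here's a minimal sketch (names like samples and sum_samples are illustrative, not from any particular codebase): a std::array lives on the stack with the same layout as a C array, no heap allocation anywhere, yet it still plays nicely with the standard algorithms.

```cpp
#include <array>
#include <numeric>

// Fixed-size, stack-allocated, contiguous -- same layout as a C array T[N],
// but with begin()/end() so every standard algorithm works on it.
std::array<int, 8> samples{1, 2, 3, 4, 5, 6, 7, 8};

int sum_samples() {
    // No allocator, no heap, no pointer indirection per element.
    return std::accumulate(samples.begin(), samples.end(), 0);
}
```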

Comment Lots of errors. (Score 1) 341

I've been programming in C++ since 1989 and have been getting paid to do it since 1993. I've seen the language grow up, and I don't think you're very near to right. You've got some serious misunderstandings of the language.

Because it does not have momentum, it will probably never develop momentum. Like Lisp, it may be a neat language, but it will almost certainly be consigned to the dustbin of history.

I'm also an old LISP hacker (since the early '80s). LISP is not "consigned to the dustbin of history". It was foundational in the development of modern languages, and the concepts it introduced to programming are still with us today. I personally side with Google's Peter Norvig: LISP is still around, we just call it Python.

If Rust is doomed to share LISP's fate, I think the Rustaceans would consider that a victory beyond their wildest imaginings.

It should also be noted that well written C++ can be just as good as Rust.

Meaningless. Well-written X can of course be as good as Y. Well-written C is just as good as Rust. The question that's relevant to software engineering is "how much effort is required to do the task well in X versus well in Y?"

As someone who's entering his fourth decade of C++ programming: yes, the modern dialect of C++ is wonderful. It still has an absurd number of gotchas and corner cases, though, to the point where unless you've already made that massive investment in learning them ("will the universal initializer syntax initialize my object, or will it be interpreted as a std::initializer_list? And someone remind me again why vector<bool> is a bad idea and will screw me over if I try to use it?") I would genuinely recommend using another language for low-level work.
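Both of those gotchas fit in a few lines; this is just a sketch of the standard behavior, with throwaway variable names:

```cpp
#include <vector>

// Brace-initialization prefers the std::initializer_list constructor:
std::vector<int> a{10, 2};  // two elements: {10, 2}
std::vector<int> b(10, 2);  // ten elements, each equal to 2

// vector<bool> is a bit-packed specialization; operator[] returns a proxy
// object rather than a bool&, which breaks code that is generic over
// vector<T> and expects a real reference.
std::vector<bool> flags(4, true);
bool copied = flags[1];     // OK: the proxy converts to bool
// bool& ref = flags[1];    // would not compile: no addressable bool exists
```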

The first, and somewhat less important principle of C++ is template metaprogramming. This approach to software development is entirely geared towards reduction of redundancy in code writing.

No. That's generic code, period. Template metaprogramming is different: it involves exploiting the fact that the C++ template instantiation facility is Turing-complete to perform at compile time things which in other languages would be deferred until run time.

This can be a really big deal. In C you might assert that the size of an integer is what you're expecting, but you won't know until you try to run the code. Your assert will give you a clue as to why your code failed in production at 3:00am, but you'll still get the call at 3:00am telling you everything blew up. In C++, a static_assert evaluates that same question at compile time, and if the architecture you're compiling for doesn't have the necessary word size you'll know it when the compilation halts.
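A minimal sketch of that contrast (the message string and constant name are mine, purely illustrative):

```cpp
#include <climits>

// Evaluated during compilation: if the target's int is narrower than
// 32 bits, the build halts right here instead of paging someone at 3:00am.
constexpr bool int_is_wide_enough = sizeof(int) * CHAR_BIT >= 32;
static_assert(int_is_wide_enough, "this code assumes a 32-bit (or wider) int");
```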

And that's just scratching the surface! Template metaprogramming is also used in things such as the Blitz++ numerical libraries (I'm old, yes, I remember Blitz++) to optimize matrix operations to the point where it beats FORTRAN matrix multiplication. Nothing like unrolling loops at compile time, automatically, to give your code a performance boost. And of course, libraries like Boost are continually pushing out the frontiers of what we can do with template metaprogramming.

Templatized code is actually about abstract algebra (as no less than Stepanov has said), allowing us to separate algorithms from the types they operate on. But template metaprogramming is mostly about moving things normally done at runtime into the compile-time cycle. It's not about code reuse.
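The textbook demonstration of that compile-time shift is the template factorial; it's a toy, not Blitz++, but it shows the same mechanism those libraries exploit:

```cpp
// Classic template metaprogramming: the recursion runs inside the
// compiler's template instantiation machinery, so the result is a
// compile-time constant -- zero runtime cost.
template <unsigned N>
struct Factorial {
    static constexpr unsigned value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {   // base case terminates the instantiation recursion
    static constexpr unsigned value = 1;
};

static_assert(Factorial<5>::value == 120, "computed before the program runs");
```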

This is an extension of the idea of inheritance where you re-use base class code, and only modify what you need to adapt the base to your use case.

Not really. Inheritance is inevitably about types: you can't talk about inheritance without talking about the type of the parent and what behaviors get inherited. Generic programming is about separating type out from the discussion and instead talking about the algorithm in an abstract-algebra sense. (And then you have abominations like the Curiously Recurring Template Pattern which fuse them both.)
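For the curious, a minimal CRTP sketch (Shape/Square are invented names for illustration): the base is templated on its own derived class, so the "polymorphic" call resolves statically, fusing inheritance and generic programming with no vtable involved.

```cpp
// CRTP: Derived passes itself as the template argument to its own base.
template <typename Derived>
struct Shape {
    double area() const {
        // Statically dispatched: the cast target is known at compile time.
        return static_cast<const Derived&>(*this).area_impl();
    }
};

struct Square : Shape<Square> {
    double side;
    explicit Square(double s) : side(s) {}
    double area_impl() const { return side * side; }
};
```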

And then we get to...

Object oriented programming is a mechanism for creating a software structure that forces most types of bugs to be compile time bugs.

No. Just. No.

There is no universe in which this paragraph is anywhere near right. OOP in C++ was introduced in the 1970s in the very first iteration of the language, and let me tell you, as someone who has actually had to work with cfront, that compiler did absolutely nothing to turn my run-time bugs into compile-time ones, nor did object-orientation magically provide this capability.

But by the early '00s, around GCC 3, when C++ compilers started to produce scarily optimized template code? About that time is when template metaprogramming took off, and that was when I began to replace my C-style casts with static_casts and began to get warnings about "uh, boss, that cast isn't valid, it'll blow up on you at runtime".

Your example is also weird: you say the non-OO way would involve a struct that contains data fields and a type ID, but, uh -- that's what an object is: a struct containing function pointers and data members. The example you give can easily be written in C by anyone who understands function pointer syntax. Remember, C++ classes were explicitly designed to be translatable into C structs-and-function-pointers. (That's how the first major C++ compiler, cfront, worked: it parsed the language and spat out equivalent C, which was then compiled.)
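Here's a rough sketch of that translation (Animal, dog_legs, etc. are made-up names; this is the general shape, not literal cfront output): data fields plus a function pointer, dispatched by hand.

```cpp
// The C-style rendering of a class with one virtual method: a struct
// carrying data plus a function pointer.
struct Animal {
    const char* name;
    int (*legs)(const Animal* self);  // the "vtable," one slot wide
};

int dog_legs(const Animal* /*self*/)  { return 4; }
int bird_legs(const Animal* /*self*/) { return 2; }

Animal dog{"dog", &dog_legs};
Animal bird{"bird", &bird_legs};

// Calling through the pointer is exactly what virtual dispatch does
// underneath: look up a function, pass the object along as "this".
int leg_count(const Animal& a) { return a.legs(&a); }
```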

OOP converts this type of flow control into virtual dispatch, which can be validated at compile time,

... except that virtual dispatch explicitly cannot be validated at compile time. Virtual dispatching is done at runtime. Only static dispatch can be done at compile-time. (Note: before you put together a toy example that uses the virtual keyword, be careful: if the compiler can statically determine the type, it's allowed to implicitly convert your virtual call into a static call.)
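A minimal sketch of the distinction (Base/Derived are illustrative names): through a base pointer, the override is chosen at runtime via the vtable, and the compiler cannot in general know which one will run.

```cpp
// Runtime dispatch: which id() runs depends on the dynamic type of *b,
// which is generally unknowable at compile time.
struct Base {
    virtual int id() const { return 0; }
    virtual ~Base() = default;
};

struct Derived : Base {
    int id() const override { return 1; }
};

int dispatch(const Base* b) { return b->id(); }  // resolved via vtable

Base base;
Derived derived;
```

(Per the caveat above: if the compiler can prove the dynamic type, it may devirtualize this call, which is exactly why toy benchmarks mislead.)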

I'm afraid that I'm sounding like an angry old man standing on my porch talking about these kids today and looking around for my shawl. I apologize if that's the way I'm coming off. But you seem to have some really weird misapprehensions about the C++ language, and I really hope you'll correct them.

Comment Re:You would need (Score 1) 210

Another comment on "big grid."

Any time you go for very high availability, it costs. The big grid delivers that availability, an availability that is absolutely critical. Because it is geographically dispersed, it achieves more redundancy than microgrids or smaller grids. It does cost to build the power lines, but it doesn't cost much to use them unless you actually need the power they transmit, and the fact that they do move a lot of energy suggests that they really are needed.

Small nuclear reactors are really the equivalent of building small fossil plants, minus the CO2 emissions. We already have those all over the place, and yet we still need the long transmission lines.

If we want to engineer a much smaller grid to have the same reliability, what we save in long-line costs would probably be offset by spending on a higher degree of required redundancy.

Comment Re:You would need (Score 1) 210

I agree on modular nukes.

The problem with all of this is twofold:
    -cost - electricity is the fuel of our civilization. Raise the cost and you hurt people, and hurt the economy
    -reliability ("grid stability") - we require highly reliable electricity

On wind/solar:

If you do the math, you'll see that batteries just aren't going to cut it, and I have little hope for dramatic breakthroughs - the trend has been more incremental than dramatic, and at a fairly slow rate. Batteries are just too expensive to do anything other than cushion minor load/generation shifts.

Other methods of storage don't seem to be going anywhere, and there has to be a reason other than big bad power companies (and I'm no fan of regulated power companies, *except* that their grid stability (reliability) is an incredible achievement upon which our civilization depends). One nation in Europe is building, at high cost, pumped hydro, by excavating a deep underground reservoir to put the water into. Again, if you don't mind the cost, things like that work.

There are "obvious" storage technologies: batteries, compressed air, hydrogen, pumped hydro. That they are achieving almost no success, other than for bragging, says that those technologies are just too expensive. If you are willing to spend enough money, you can make pretty much anything work. But if you want to not raise our power costs too high, then nothing works except, maybe, nuclear after first coasting along on CCGT natural gas.

What we see today is wind and solar starting to destabilize the grid. This happens due to a combination of their intermittency and their low incremental cost (incurred only when generating), which drives down the capacity factor of the more reliable power plants, causing them to be permanently shut down. It isn't that wind and solar are cheap (other than their fuel costs); it is that the true costs are being borne by the grid, invisibly. As long as the penetration is low, this isn't a huge problem. But add more wind and solar and grid stability becomes a huge problem, which translates into a lot more cost.

BTW, the resource costs and impacts of wind and solar are immense. Each wind turbine is enormous and greatly impacts the area around it. If it weren't for the fad factor, no environmentalist would be for putting thousands and thousands of skyscraper-sized concrete, steel, and plastic wind turbines all over the landscape. I see these things when I'm storm chasing in the Midwest, and they are hideous. A single wind turbine is pretty graceful. Put a line of them on a ridge ten miles away and they just look like unnatural clutter. They require special exceptions for the protected species that they kill.

Comment Re:what about the fuel needed to get that cargo up (Score 1) 210

You'd need to put a whole lot of stuff up there. You can't just boot up heavy, high-tech industry from a metal-rich rock and some solar cells. Lots and lots of interacting technologies go into producing almost anything these days, and those technologies depend on a whole bunch of highly specialized equipment operated and maintained, to some extent, by humans.

Comment Re:You would need (Score 1) 210

The "big grid system" is a feature, not a bug. It takes advantage of economy of scale, and provides redundancy, which is critical. The vested interests - utilities regulated by the government - may indeed be a problem.

But yes, we do need to shore up (actually, increase) electric generation. The best short-term solution for the US is combined cycle gas turbine (CCGT), because of its low cost and relatively low emissions. But if you are really concerned about CO2, then nuclear is the way to go. Unfortunately, regulators, unreasonably scared NIMBYs, and radical environmentalists have pretty much destroyed nuclear power advancement in the US.

Beyond that, I don't see a next generation of technology. Solar and wind are hitting their limits - to use much more of them will be extremely expensive due to their low-quality (i.e. intermittent) power. With intermittent power, you either need extremely expensive storage technologies, or you need to maintain the existing technology (nuclear, fossil) as backup. Batteries and other storage technologies are not getting better at any reasonable rate, which is not surprising, since they have been under intense development for over 100 years, with even more focus for the last 30 or so.

Nuclear is expensive, but less so than the equivalent solar/wind - because of intermittency. Also, beware Levelized Cost of Energy comparisons, which make intermittent power look really cheap. LCOE only covers the cost of the plant, not of transmission, and far more importantly, not the grid stability cost - i.e. the backup generation required. Throw that in, and things change a lot.

Comment Re:I just don't buy the shit MIT... (Score 5, Insightful) 435

For a good computer scientist...

Ah, the No True Scotsman fallacy.

that basic world view is ingrained in their soul.

No. Definitively, no.

I was born in 1975. By 1979 I knew I was going to be a hacker. No kidding: I was sitting on Mrs. Walters' kitchen floor discovering recursion by drawing geometric shapes. I remember looking at this Easter egg I'd decorated in a recursive pattern and being in awe, and thinking I wanted to draw recursive patterns on eggs forever.

I was there for Flag Day in 1983 when ARPANET became the Internet. I was eight years old and the local college computer science department viewed me as their mascot, I guess. I'm grateful to them for the time I got to spend on LISP Machines.

Today I'm 44. I hold a Master's degree in computer science and am a thesis away from a Ph.D. I've worked for the United States government's official voting research group (the now-defunct ACCURATE) and private industry. I've spoken at Black Hat, DEF CON, CodeCon, OSCON, and more. I think that I meet your, or anyone's, definition of a good computer scientist with a long career.

And I am telling you, brother, you are wrong.

In the late '80s and early '90s there was a USENIX T-shirt given to attendees: "Networks Connect People, Not Computers." It was a neat shirt and I wore mine until it was in shreds, not because I liked wearing a ratty T-shirt but because there are so many of us who need to learn that lesson.

Logic is the tool we use to serve humanity. But if you let logic blind you to the fact other people are human beings with human feelings who need to be treated like human beings, then you just stopped being a hacker and you started becoming a tool.

Hackers serve humanity. We don't rule it. And we're not excused from the rules of human behavior.

I really wish RMS had learned this. It's too late for him. It's not too late for you.

Comment Re:Pathetic (Score 2) 275

They were not "innocent bystanders," although some innocents were killed. Japan was highly militarized, and every civilian adult was expected to fight if Japan was invaded. Japan had been butchering civilians throughout its "Greater East Asia Co-Prosperity Sphere" - the conquered countries. They used biological warfare on the Chinese, and in experiments on prisoners of war. They tortured and murdered prisoners of war. Japan was a racist society that accorded no humanity to anyone not Japanese.

The Allies were planning to invade Japan, as that was the only way to end the very real threat from their vicious regime. The atomic bombs were dropped to shorten the war and lessen the number of lives lost (although my Japanese relatives would still disagree). While saving Japanese lives wasn't the intent - in those days, the enemy was the enemy - the effect of the bombings saved millions of Japanese lives.

Also, the atomic bombs killed fewer Japanese than a single night's firebombing of Tokyo.

So no, the US was hardly morally culpable for nuking the Japanese, and we in fact did them a favor!
