Comment: Exponential growth (Score 1) 445

Assume for a second that you have a pond, and a new type of algae has been introduced into it. Algae grows quickly, so let's assume a doubling time of a day: 24 hours. The concern is that this new algae is gross and smells bad, and nobody wants a pond full of this disgusting algae. Unfortunately, treating the algae is expensive, and nobody wants to treat the entire pond.

The question is: one week before the pond is entirely covered in algae, would enough have appeared that you would even notice? At a "gut instinct" level, we'd guess that perhaps a quarter or a third, or at least a tenth, of the pond would be covered in algae, but that gut-level instinct would be completely wrong. Just 0.78% of the pond - one part in 2^7, or 1/128th - would be covered, right about the point where it becomes noticeable at all.
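A quick sanity check of the arithmetic: with a doubling time of one day, the fraction covered d days before full coverage is simply 1/2^d.

```python
# Fraction of the pond covered d days before full coverage,
# given that the algae doubles every 24 hours.
def fraction_covered(days_before_full: int) -> float:
    return 1 / 2 ** days_before_full

for d in (7, 3, 1):
    print(f"{d} day(s) before full coverage: {fraction_covered(d):.2%}")
```

Seven days out, the pond is only 1/128th covered; even three days out it's only one-eighth covered, and then it's over.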

The point is this: information processing capabilities, globally, aren't just growing exponentially; the rate of growth is itself also growing exponentially. Just about the moment we notice actual, verifiable intelligence of any kind is just about the moment we have to assume its ubiquity.

Previous discussions talk about the number of cross connects and how far away we are from the mark, without noting that the Internet itself allows for a practically unlimited number of cross connects - my laptop can connect directly to billions of resources immediately, with an average 10-25 ms delay. Now, it's very likely that what is meant by "cross connects" in the context of AI is substantially different from the "cross connect" capability that global networking enables, but it's equally true that people generally fail to understand exponential growth. It's why 401(k)s are so universally underutilized, why credit cards are such big business, and why the concept of the "singularity" seems like such hocus pocus at the gut level.
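The 401(k)/credit-card point is the same exponential curve cutting both ways. A minimal illustration (the rates and amounts below are made up for the example, not real figures):

```python
# Compound growth: the same formula describes a credit card balance
# you ignore and a retirement account you neglect to fund.
def grow(principal: float, annual_rate: float, years: int) -> float:
    return principal * (1 + annual_rate) ** years

# Illustrative numbers only: $5,000 carried on a card at 24% APR
# versus $5,000 invested at a 7% annual return.
print(f"Card debt after 10 years:  ${grow(5000, 0.24, 10):,.0f}")
print(f"Investment after 10 years: ${grow(5000, 0.07, 10):,.0f}")
```

The gut expects both numbers to be "somewhat bigger"; the exponent makes the first several times larger than the second.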

Comment: Re:The original 68000 interrupts were inadequate (Score 1) 147

by Dadoo (#48442557) Attached to: Linux On a Motorola 68000 Solder-less Breadboard

Interrupts worked fine. It was bus errors (i.e. for off-chip memory protection and/or mapping units) that were a problem. The 68010 fixed that particular issue if I recall.

You're correct, except for the fact that it wasn't a bug. The original 68000 simply wasn't designed for use with demand-paged virtual memory. To make that happen, you need to either save the processor state somewhere (which the 68010, 68020, etc. did) or have restartable instructions (the approach used by National Semiconductor, for their 32000 series). I vaguely remember reading that Motorola switched to restartable instructions in the 68040 or 68060, but I'm not sure.

Comment: Re:68010/@2MB ran a unix variant (Score 1) 147

by Dadoo (#48442495) Attached to: Linux On a Motorola 68000 Solder-less Breadboard

My first exposure to both UNIX and 68K was with a Motorola VME/10 system

I've actually used one of those. Pretty decent machines, for their day. I especially liked how they had two ways to access the graphics memory: one by bit-planes, and the other by pixels.

You're lucky; my first 68K experience was on a Vicom image processor. It was a 68000-based machine, running VersaDOS. Talk about a terrible OS - even MS-DOS would have been better.

Comment: Re:Nice... (Score 1) 147

by Dadoo (#48442457) Attached to: Linux On a Motorola 68000 Solder-less Breadboard

The '020 supported external memory management (MC68451)

No, the 68451 was for the 68010 - though since it was a segmented MMU (rather than demand-paged), I imagine it could have worked with the 68000, too. The 68020 used the 68851, which was a demand-paged MMU.

Comment: Re:Hey, congratulations (Score 1) 147

by Dadoo (#48442397) Attached to: Linux On a Motorola 68000 Solder-less Breadboard

The 68030 could hold short loops in its chip logic with some tricks, despite not really having a cache. Unfortunately, the 68040's on-chip cache implementation was horrible and created all sorts of problems for implementers, and by then Intel chips were running much much faster

No, you're thinking of the 68010's "loop mode", where tight loops didn't require memory accesses for instruction fetches (after the initial instruction fetch). Both the 68020 and 68030 had caches.

Comment: Lovin' that smell of BIAS (Score 1) 226

by mcrbids (#48406815) Attached to: Coding Bootcamps Presented As "College Alternative"

See, anybody who has a CS degree will be motivated to HATE boot camp guys. Employers who want more (cheaper) labor will be motivated to LOVE any force that lets them hire more people at less cost.

As a self-taught programmer myself managing a 10+ year project that's highly profitable, you'll probably guess which side of that divide you'll tend to see me on.

Comment: Re:The measurements in question: (Score 1) 142

by mcrbids (#48374457) Attached to: Data Center Study Reveals Top 5 SMART Stats That Correlate To Drive Failures

Your later comments about ignoring RAID controller warnings for a *year* strike me as callous. But we all have our standards, and standards vary greatly from place to place, as the needs that drive those standards also vary greatly. (Financial institutions care much more about transactional correctness than Reddit does.)

After months of testing, our organization has wholeheartedly adopted ZFS and has found that not only is it technically far superior to other storage technologies, it's significantly faster in many contexts, it's actually more stable than even EXT4 under continuous heavy read/write loads, and it brings capabilities to the table that even expensive hardware RAID controllers have a tough time matching. Best of all, since it runs on plain JBOD, the cost is somewhere between insignificant and irrelevant.

I was wondering if you had investigated ZFS at all, and if so, why you aren't using it?

Comment: THIS problem solved long ago... (Score 1) 488

by mcrbids (#48370753) Attached to: Denmark Faces a Tricky Transition To 100 Percent Renewable Energy

Large-scale internal combustion engines are extremely efficient and can run on just about anything burnable: vegetable oil, powdered coal, agricultural dust, wood gas from trees, dried leaves, etc. Yes, you can literally run an engine on banana peels. The trick is getting the carburetion to balance the fuel/air mixture correctly.

From the perspective of a generator for a hospital, it would be relatively straightforward to design a generator running an engine like this with whatever renewable fuel is most convenient and readily available locally. Large scale wood gas installations typically work with fuel pre-processed into pellets.

Comment: Re:Ok... just turned two score, but... (Score 2) 438

by mcrbids (#48356163) Attached to: The Students Who Feel They Have the Right To Cheat

You make it sound like it was paradise in the 80s. It had its suckiness, just like today does.

1) There were constant threats of terrorism in the media in the 80s. Take a look at the "Libyans" in "Back to the Future".

2) Helicopter parents were definitely a thing in the 80s.

3) There were plenty of poor example adults in the 80s.

4) I'll 100% grant that entry level jobs are *much* harder to find now.

5) NSA and FBI watched us in the 80s. Ma Bell logged every call ever made. What was that you were saying on the CB Radio, back when the FCC actually gave a damn?

6) Granted: massive student debt, partially offset by the relative ease of getting into school. Yes, debt is a problem, especially when you pick a lame degree. It was always a problem; it's more so now.

7) There was no "online", so no posting stupid stuff online, and no online bullying. Bullying back then wasn't some insult posted in a chat room; it was a broken jaw. I remember well facing my bully with a stick in my hand, and being knocked flat repeatedly by a kid with 30 pounds on me, while I cursed defiantly and got up to face him again.

8) Education system was "declining" then too.

9) I'd argue that the cold war and the constant threat of total, global annihilation far outweighs a few school shootings. Or did you forget that little detail?

Comment: Re:They ARE a utility. (Score 2) 706

by mcrbids (#48352861) Attached to: President Obama Backs Regulation of Broadband As a Utility

The only reason the airline industry is not a natural monopoly is the massive public infrastructure provided by the US government's FAA: public-use airports and related flight-control infrastructure. In every meaningful sense, an airport solves the "last mile problem" for airplanes. Why wouldn't we expect a similar investment in the "last mile problem" for Internet service?

Southwest doesn't own the Oakland Airport; they merely lease a terminal. Can you imagine what would have happened if Delta had owned the airports, too?

Comment: Re:Here we go again (Score 1) 139

by mcrbids (#48306101) Attached to: Ask Slashdot: How Useful Are DMARC and DKIM?

I've seen this lame list for 10 years; it's pretty much trolling bait. But based on this, I wonder if you even know how DKIM works.

(X ) It will stop spam for two weeks and then we'll be stuck with it

Pretty tough to crack legitimate encryption.

(X ) Requires immediate total cooperation from everybody at once

Not at all. You can use it, or not. If you don't use it, you essentially give permission for black hats to spoof your identity. Also, if you are an admin, you can choose what you do with DKIM.

(X ) Many email users cannot afford to lose business or alienate potential employers

How is being able to protect your account from being spoofed going to affect business?

(X ) Lack of centrally controlling authority for email

Why would you need one? DKIM is done via DNS and is under the control of the record holder.

(X) Asshats
(X ) Huge existing software investment in SMTP
(X ) Armies of worm riddled broadband-connected Windows boxes
(X ) Eternal arms race involved in all filtering approaches

Do you actually know how DKIM works? Each of these points is either effectively made better by DKIM or is irrelevant.

(X ) Ideas similar to yours are easy to come up with, yet none have ever
been shown practical

Care to name one?

(X ) Whitelists suck
(X ) Countermeasures should not involve sabotage of public networks
(X ) Why should we have to trust you and your servers?
(X ) Killing them that way is not slow and painful enough

How is DKIM a whitelist? You really have no idea how this works, do you? Did you just fill in some boxes at random?

I'll address a single point on here, to show how DKIM works rather well even in the worst of the points:

(X ) Mailing lists and other legitimate email uses would be affected

One of the products my company provides for schools is a "mailing list reflector" that in practice works very much like your average mailing list. In order to ensure delivery, all outbound email is signed with DKIM, even though we're really just forwarding the original message to the mailing list recipients.

How is this done? Well, we use a dummy address for the "From" field like " " and then set the Reply-To field to match the original sender. Thus, DKIM passes because we provide the signing keys for the "From" address, and the end user is still able to reply and get a message back to the original sender without involving our mail server at all.

It's a compromise, but it works well and we've had virtually no complaints.
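The header rewrite described above can be sketched with Python's stdlib email module. This is purely illustrative - the reflector address and domains are made up, and a real reflector would also carry over attachments and other headers before DKIM-signing the result:

```python
from email.message import EmailMessage

def prepare_for_reflector(original: EmailMessage) -> EmailMessage:
    """Rewrite headers so the reflector's own DKIM signature aligns."""
    out = EmailMessage()
    out.set_content(original.get_content())
    out["Subject"] = original["Subject"]
    # The From is a dummy address in *our* domain, so our DKIM d= domain
    # matches the visible sender (hypothetical address for illustration).
    out["From"] = "list-noreply@reflector.example"
    # The original author goes in Reply-To, so replies reach them
    # directly without ever touching the reflector's mail server.
    out["Reply-To"] = original["From"]
    return out
```

Replies follow Reply-To, delivery checks follow the signed From - which is the compromise the comment describes.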

Comment: Re:Here's why (Score 1) 468

by mcrbids (#48289405) Attached to: Boo! The House Majority PAC Is Watching You

Voters worry about irrelevant issues like abortion, gay marriage, inequality, and racism, while not worrying enough about the stuff that matters, like banking regulation, tax policy, nepotism, and crony capitalism.

And, in my opinion, that's largely because of the Centrally Controlled Media in the United States. And if you think "Main Stream Media" doesn't include Faux[sp?] News, you're also a victim of this control.

Comment: For all the idiots (Score 5, Insightful) 87

by mcrbids (#48273973) Attached to: Vulnerabilities Found (and Sought) In More Command-Line Tools

... to the masses of the sarcastic "I thought Open Source was more secure!" crowd: in an Open Source project, when vulnerabilities are found, they are patched. Since it's a public forum, the vulnerabilities are disclosed, and patches/updates are made available. The poor, sorry state of the first cut gets rapidly and openly improved.

With closed source, the vulnerabilities merely stay hidden and undisclosed, and you have no ability to know about them or fix them yourself. The poor, sorry state of the first cut never improves. Yes, there are some companies that take security seriously - but you have no way of knowing which.

This, right here, is what "more secure" looks like: public notification of the vulnerabilities and patches to distribute.

In every non-trivial program there is at least one bug.