The Internet was created because the guy in charge of things didn't want a teletype in his office for each and every machine he could access. A network to access all of them from a single terminal made more sense.
It had NOTHING to do with nuclear war, or reliability, at first.
No mention of the Superb Owl watching over all this?
Access control lists are not adequate security, no matter how careful you are. You need Bell-LaPadula, or something like it that implements mandatory access control, to actually secure a system.
SELinux is an attempt to push a little bit towards a secure system, but it's not the real deal.
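For anyone who hasn't seen it: the core of Bell-LaPadula is just two rules. Here's a toy sketch of them in Python (the level names are mine, purely illustrative; a real implementation also needs lattices of categories, not just a linear ordering):

```python
# Back-of-envelope sketch of the two core Bell-LaPadula rules:
# the simple security property ("no read up") and the *-property
# ("no write down"). Levels here are a simple linear order.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level, object_level):
    # "No read up": a subject may only read objects at or below its clearance.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # "No write down": a subject may only write objects at or above its level,
    # so secret data can't leak into lower-classified objects.
    return LEVELS[subject_level] <= LEVELS[object_level]
```

The point is that these rules are enforced by the system, not by trusting the application: a process running at "secret" simply cannot write into an "unclassified" file, no matter what its code tries to do.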
If we started building bunkers out of blocks of TNT, someone would rapidly figure out it was a bad idea.... but not so when it's abstracted several layers deep.
In conventional munitions, it's necessary to deliver an explosive to a target. Thanks to the Unix security model, with its lack of any notion of multi-level security, we've created an entire infrastructure that's ready to self-destruct at a moment's notice. The military went on to actually procure and use multi-level security in a number of cases, while the idea is perceived as impossible, or unnecessary in the civilian space.
All of our Linux, Mac OS, and Windows machines share the same brain-dead security model. When you run code, you have to trust it not to be a virtual grenade, each and every time.
The existence of billions of computers which blindly run code without actual security protecting the operating system (as a multi-level secure system does) is astoundingly stupid, and yet 99.9% of the "tech" community is just fine with this state of affairs.
The infrastructure IS the weapon; it's your job to change that over the next 20 years.... get crackin'
This is the kind of thing that happens when you trust an application to do what it says on the tin. An OS based on a capability architecture would have made this pretty much impossible.
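To illustrate the capability idea in plain Python (this is a sketch of the discipline only; a real capability OS makes the handles unforgeable at the system level, not by convention):

```python
# Toy illustration of capability-style access, as opposed to ambient
# authority (where any code you run can open any path your user can).
# Untrusted code gets an explicit handle to exactly what it may touch.

class ReadCap:
    """A token granting read access to one file, and nothing else."""
    def __init__(self, path):
        self._f = open(path, "r")

    def read(self):
        return self._f.read()

def untrusted_word_count(cap):
    # This function can only use what it was handed. In a real capability
    # system (not just by Python convention) it has no way to name
    # ~/.ssh/id_rsa or anything else and exfiltrate it.
    return len(cap.read().split())
```

Under that model, "trusting an application" shrinks to "trusting it with exactly the objects you handed it," which is a much smaller leap of faith.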
This isn't about being a Luddite, it's about pointing out the economic disparity at play in the world. When you create conditions in the rest of the world such that we give them pieces of paper, and they are willing to die trying to get something to sell for those pieces of paper... we have some social responsibility.
The US exports paper promises of payment, and imports real goods in return.
We're all running systems based on some derivative of Unix. The user-based permission model was fine for 1970s computer science departments, but it's totally crap for the world we now live in. We all should be running systems that are at least Orange Book A1 level secure, but we aren't. The resources are available to do it; we could totally pump this out in a year or two in the open source world.... but we won't.
Everyone thinks they have secure enough systems... but they don't, not by a country mile. Nobody seems to understand that trusting applications to do their jobs, and not subvert the systems, is a stupid thing.
We have persistently insecure computing... encryption, even if done perfectly, doesn't help fix that.
Fair enough... the propagation delays would suck, yes... but we're talking about general purpose computing here, not picosecond timing. The main goal is throughput, and if you can get most of the transistors in the thing doing computation, instead of sitting idle outside the 100 picoseconds they're actually needed, you've solved the "dark transistor" problem.
The gain is from being able to process all parts of a given problem in parallel, so you get at least 1 result per clock.... imagine being able to do 1024-bin FFTs at 1 GHz, or faster.
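To put rough numbers on that claim (my arithmetic, using the standard radix-2 operation counts, and assuming a fully unrolled, pipelined transform that retires one FFT per clock):

```python
# Back-of-envelope throughput for a fully unrolled, pipelined
# radix-2 1024-point FFT retiring one transform per clock cycle.
N = 1024
stages = 10                       # log2(1024)
butterflies = (N // 2) * stages   # 512 butterflies/stage * 10 = 5120

clock_hz = 1e9                    # 1 GHz, as in the comment above
butterflies_per_sec = butterflies * clock_hz   # ~5.1e12 butterflies/sec

# A radix-2 butterfly is roughly 10 real FLOPs (1 complex multiply
# plus 2 complex adds), so:
flops = butterflies_per_sec * 10  # ~5e13, i.e. about 50 TFLOPS
```

So even this one modest-sized kernel, laid out spatially, lands in tens-of-TFLOPS territory; the win comes entirely from every butterfly computing on every clock.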
You have to route signal, but at least in the bitgrid, that's flexible, and not the huge constraint on things that existing FPGAs force you to work around. You should be able to get 90% usage... I'm writing a simulator to try to figure that out, in Delphi for Windows; it's on GitHub.
My solution to the dark transistor problem came to me back around 1982... I call it the BitGrid. It's a Cartesian grid of 4-bit-input, 4-bit-output lookup tables. Each cell can replicate any logic function, and such tables are the basis of modern FPGAs. The thing that makes the BitGrid different is the total lack of routing fabric. This makes the grid homogeneous and symmetric. As long as you know of a defect in a cell, you can route around it trivially, at load time. You can shift any given logic configuration left, right, up, down, without having to do any work. You can rotate and mirror it.
The down side is that you have to pass through every cell to get from one side to the other... which could be a waste of logic, or a tremendous opportunity to do computing in parallel.
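A single cell is simple enough to sketch in a few lines. Here's a minimal Python version (the input packing order and table layout are my guesses at a plausible encoding, not a spec):

```python
# Minimal sketch of one BitGrid cell: 4 input bits (one per side),
# 4 output bits, each output driven by its own 16-entry lookup table.

def make_cell(tables):
    """tables: list of four 16-entry bit lists, one per output bit."""
    def step(n, e, s, w):
        idx = (n << 3) | (e << 2) | (s << 1) | w   # pack 4 inputs into 0..15
        return tuple(t[idx] for t in tables)        # one bit per output side
    return step

# Example cell: output 0 computes XOR of all four inputs (parity),
# outputs 1-3 are held at 0.
xor_table = [bin(i).count("1") & 1 for i in range(16)]
zero = [0] * 16
cell = make_cell([xor_table, zero, zero, zero])
```

A full simulator is just a 2-D array of these cells, each clock feeding every cell's outputs to its four neighbors' inputs.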
Imagine a big enough grid (let's say 64k * 64k), implemented in CMOS. You could take a program, unwrap all the instructions into their logical equivalents, and then map all that out into the grid. This would let you run the entire program all at the same time.
Exaflops... here we come.
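Taking the 64k x 64k figure at face value, the arithmetic does land at exa-scale, with one caveat worth stating (the 1 GHz clock is my assumption):

```python
# Sanity check on the exa-scale claim for a 64k x 64k BitGrid,
# assuming every cell does useful work on every cycle.
cells = 65536 * 65536          # 2**32, about 4.3 billion cells
clock_hz = 1e9                 # 1 GHz clock assumed

bit_ops_per_sec = cells * clock_hz   # ~4.3e18 single-bit LUT ops/sec

# That's exa-scale in *bit* operations. A 32-bit floating point op
# spans many cells, so actual FLOPS would be some orders of
# magnitude lower -- still enormous, but not literal exaFLOPS.
```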
All of this points out what I'm saying... they've optimized for small(ish) systems that have to run very quickly, with a heavy emphasis on "routing fabric" internally. This makes them hard to program, as they are heterogeneous as all get out.
Imagine a grid of logic cells, a nice big, homogeneous grid that was symmetric. You could route programs into it almost instantly; there'd be no need for custom tools to program it.
The problem IS the routing grid... it's a premature optimization. And for big scale stuff it definitely gets in the way.
I would have a 4-bits-in, 4-bits-out lookup table as the basis of this, and I call it the "bitgrid".... I've been writing about it for years. Feel free to make the chip, and send me an email (or preferably a sample, please), because that puppy is disclosed as far as patents go.... I have none, and can't get any now.
You should be able to get a 64k x 64k grid on a chip for a few bucks, in any kind of quantity. It should do Exaflops, or consume almost nothing if you idle it.
Now the blind ants (researchers) will need to explore more of the tree (the computing problem space)... there are many fruits out there yet to be discovered; this is just the end of the very easy fruit. I happen to believe that FPGAs can be made much more powerful because of some premature optimization. Time will tell if I'm right or wrong.
What do you think the founders believed? In the early revolutionary period, the US had no navy. They issued letters of marque to privately owned, armed ships. As in: private individuals owned war ships.
Wrong... dead wrong. The States each had their own Navy, and they were combined in 1775. The first Continental Navy ship was launched in September, 1775.
I applaud your Libertarian worldview, but it is not consistent with reality in this instance.
Well... I checked the distance to make sure old-fart-ism wasn't fscking with my memory... didn't think about the temperature. Can't find any records of the local temps, just the Chicago records you found.... oh well.
The current temperature outside my house is 47 degrees of frost. The only sensible scale of frackin cold I know of... which I learned of from Ernest Shackleton.