Wish I had mod points!
Anyway I agree -- Adobe made an excellent toolchain to do what you're asking, in the form of Adobe AIR.
I realize you asked for a ruggedized laptop. However, the fact that everyone else replaces theirs regularly suggests you should consider that as a serious alternative, not discard it out of hand. I called this 'out of your box' because they're all doing it and you're rejecting it a priori. I see basically three legitimate issues with this solution:
a) maintaining a consistent interface for you to be used to
b) providing easy data migration to the replacement device
c) total cost of multiple non-ruggedized devices compared to the realistic lifespan of ruggedized ones.
I'm not suggesting that my parent post had the right thoughts in mind, but Apple does provide surprisingly good, quick and easy solutions for a&b in OS X and the iPhone; I would expect the iPad to continue this.
Apple is not historically great about 'c', but that sand environment is hard even on the modestly ruggedized ones so it's not impossible.
Of course you might need to account for shipping, purchasing, processing, or environmental costs in 'c', but even on the environmental front it's not a given that one device is better than 3, esp if it gets recycled well (many parts of the sandworn one will still work, and it'll be early enough that those, minus your HD, are reasonable used replacement parts in the right shop...)
I'd go further and say that, in my opinion, RAID5 is a hindrance rather than a benefit to price-performant backup, especially at today's dataset sizes that are only small multiples of a physical drive. It requires the validity of all-but-one of the drives in your array... and typically in the CORRECT array, so swapping mirrors in and out can be quite a headache.
Don't use any data-level striping; break your data into a few drive-sized chunks at the filesystem level. Keep mirrors of each chunk on separate drives, both onsite and (one or more) offsite. Bit-compare the drives occasionally to look for loss.
I recommend at least 3 drives for any dataset; at least one onsite and at least one in your lockbox; that leaves one to be in transit at a time.
Replace the drives with newer versions every few years. Use a variety of brands/models.
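The bit-compare step can be automated. Here's a minimal sketch of my own (not anything from the post): hash every file on two mirror drives and report anything missing or differing, so silent corruption on either copy shows up without reading both drives into memory.

```python
import hashlib
from pathlib import Path

def sha256_of(path, bufsize=1 << 20):
    """Stream a file through SHA-256 so huge archives never sit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def compare_mirrors(dir_a, dir_b):
    """Report files that are missing from one mirror or differ between them."""
    a, b = Path(dir_a), Path(dir_b)
    names_a = {p.relative_to(a) for p in a.rglob("*") if p.is_file()}
    names_b = {p.relative_to(b) for p in b.rglob("*") if p.is_file()}
    # Files present on only one side are already a problem.
    problems = [("missing", n) for n in sorted(names_a ^ names_b)]
    # For files on both sides, compare content hashes.
    for name in sorted(names_a & names_b):
        if sha256_of(a / name) != sha256_of(b / name):
            problems.append(("differs", name))
    return problems
```

Run it against the onsite copy and the freshly returned offsite drive before rotating them; an empty result means the mirrors still agree bit-for-bit.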
Archival quality printing is also not cheap, but at least it's a fairly solved problem.
Personally, I don't think you can do much better than printing it, for an option that doesn't involve frequent migration. Density isn't great, but I'm confident there will still be optical scanning devices, at least for historical works, so if you print out all your bits in an OCR-friendly font it won't be TOO much work for someone else to read them (if they really want to!). You should also include, in the same format, the source code for the decoder. Even if that's not directly compilable in the future, it'll be a relatively clear indicator of how to do it, to the limits of what's possible.
You could probably do even better by e.g. punching holes into gold sheets a la The Baroque Cycle. Or stone tablets, etc. But those are all questions of "what's the most resilient format for PRINTED text" which is a topic at least we have a bunch of data on.
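To make the 'print your bits' idea concrete, here's one possible encoding (my own sketch, not anything from the post): Base32 text, whose alphabet happens to avoid the look-alike characters 0/1/8/9, with a short per-line CRC so a future reader can localize scanning errors to a single line.

```python
import base64
import zlib

def to_printable(data: bytes, width: int = 64) -> str:
    """Render arbitrary bytes as OCR-friendly text: Base32 broken into
    fixed-width lines, each ending with a 16-bit CRC of that line."""
    b32 = base64.b32encode(data).decode("ascii")
    lines = []
    for i in range(0, len(b32), width):
        chunk = b32[i:i + width]
        crc = zlib.crc32(chunk.encode("ascii")) & 0xFFFF
        lines.append(f"{chunk} {crc:04X}")
    return "\n".join(lines)

def from_printable(text: str) -> bytes:
    """Inverse: verify each line's CRC, then decode the Base32 payload."""
    chunks = []
    for line in text.splitlines():
        chunk, crc = line.rsplit(" ", 1)
        if zlib.crc32(chunk.encode("ascii")) & 0xFFFF != int(crc, 16):
            raise ValueError(f"scan error in line: {line!r}")
        chunks.append(chunk)
    return base64.b32decode("".join(chunks))
```

The decoder here is short enough to print alongside the data, per the suggestion above.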
Actually, MS has tried to implement some Trusted Computing pieces that would do exactly that - restrict what will run so any DRM-broken content can't possibly be played.
Perhaps we could amend your sentence to: "never SUCCESSFULLY locked down..." - because they can't manage to have backward compatibility with all the terrible niche Windows apps and also do things like that.
I have no idea what this Orange Goo does, and I haven't read TFA. But I want to comment on your comment:
1) Most electronics are not made with incredibly strong surfaces and shells. If you were to encase your electronics in a perfectly fitting, thick-walled steel cradle, you'd reduce all events (esp. a floor hitting a corner) to only the shock (G-force), leaving out the impact (concentration of force on the surface of the device). Both of these parts of an impact are damaging. The fundamental momentum limit you discuss only applies to the shock, which is most likely to damage internal parts.
2) Some crazy materials can do a surprisingly good job of momentarily pretending to be that idealized steel case. I presume the egg-video above shows that.
3) Typical elastic padding will not spread the momentum distribution out EVENLY over the time it takes to decelerate that 1/8". So even the shock part can be improved.
4) Don't forget that in addition to momentum, you must satisfy the conservation of energy equation, too. The most common way to do this is to bounce, and at least some of this energy gets converted to heat in each material that compresses. Dissipating more energy is also valuable.
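To put a rough number on the shock part: under the usual constant-deceleration approximation, the average G-force is just drop height divided by stopping distance. This is my own back-of-envelope sketch (the 1/8" figure echoes point 3 above; the 1 m drop is illustrative):

```python
def average_g_force(drop_height_m: float, stop_distance_m: float) -> float:
    """Average deceleration, in multiples of g, for a fall from
    drop_height_m brought to rest over stop_distance_m of compression.
    With constant deceleration: v^2 = 2*g*h and a = v^2 / (2*d),
    so a/g simplifies to h/d.
    """
    return drop_height_m / stop_distance_m

# A 1 m drop stopped over 1/8 inch (~3.2 mm) of compression:
eighth_inch = 0.0254 / 8
print(round(average_g_force(1.0, eighth_inch)))  # prints 315
```

And that 315 g is the best case; per point 3, uneven padding means the peak is higher still.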
Wow, you're amazingly not good at following plot points. Maybe there are holes in the plot, but they sure aren't the ones you listed.
As much as I want to rebut everything in great detail, it turns out I don't care quite enough. So a few... and I'll endeavor not to add any spoilers you didn't already give, and I'm dekarmaing this post to help.
6. Worker Prawns lack initiative. Definitely said early in the movie.
5. is because of 6.
3. If you PROCESS some material/chemical, it probably has different effects/uses than it had before you processed it. Otherwise why would you process it? Did you miss that whole bit of the movie?
2/3. That part about "powering the command module" - you made that up. At no point is powering a command module ever anything any characters are aspiring to. In the interest of avoiding spoilers, I'll leave it to you to figure out which part of those 4 words you might've gotten wrong.
I was going to say this, but of course you beat me to it. District 9 is one of the most legitimate serious science fiction / extrapolative fiction movies I've seen in a long, long time - things you usually only get in books. A limited number of fantastical assumptions, and then the exploration of the very rational ramifications of those assumptions.
And it was made on a relative shoestring, the effects are perfect -- and the acting is amazing. But if you're expecting a 100% crazy action/effects movie, District 9 isn't it. (Neither is Inglourious Basterds, which is also awesome.)
I appreciate the OP's concern, but really, any minimum-wage peon at a credit or collection agency can look up any SSN in a couple of minutes. The people you need to sue are not the ones using SSNs as IDs, but the credit reporting agencies themselves, and anyone else who skipped doing any actual verification of who you are in favor of the much cheaper use of your SSN as a password, in direct violation of all the government documentation saying it was NOT secret.
You're right about this - you can distribute it as long as the source is available. The GPL was never and will never be about free as in beer, it's about being able to verify, to persistently use, and to extend the software you have.
You can charge $1,000,000 for the first copy, if you want - and if you can get someone to pay. But they'll be free to take the source you give them and redistribute copies for $50,000... or for $0.
Even if the App Store might not allow a clone app from your source, they would certainly (as certainly as any other App Store submission) allow an app with new levels, or one targeted at blind users (somehow!), etc. Or maybe someone wants to make it into a psych test. More likely: you vanish, and someone wants to take advantage of iPhone 4.0 features when they come out.
The PRIME scenario is that users are never encumbered by the lack of source or lack of permissions, EXCEPT that they have to pass that forward.
That's what it's about, guaranteeing innovation and stability.
Oh, and everyone ELSE's opinions about the GPL should be based on the text. It was carefully constructed so that you can't violate the "spirit" of it without violating the letter of it. So if you and he can find an attorney you both trust, the letter of the GPL DOES tell you the spirit of it.
The only big advantage of the externals is that the connectors are a bit more robust, so if you're going to plug/unplug them a LOT, you're a bit better off.
But for maximum longevity you should take 'vibration free' seriously. That is, you shouldn't lay a drive on a hard table, because when you set it there, there's a surprisingly large impact. Set it on a layer of bubble wrap or foam instead.
If you have humidity issues, I believe you can collect desiccant packets from other things and bake them on low heat to 'refresh' them (bake out the existing humidity). Ideally do this baking with good ventilation.
Well as TFA gives several examples of, there are good uses of such frames. That's the problem... there IS some value here in many cases.
But reading this I think it's clear that we need a browser feature here. That is, something between an extension and straight HTML.
It could even just be that they use the code they already have for backward compatibility, but add some kind of hint like 'toolbarframe=true' (OK, that's terrible, but you get my point). It has to identify, in the frameset, which part is a toolbar and which part is the 'main' page.
If that's present, the browser realizes this frame is supposed to behave like a toolbar. So it:
a) Adds some kind of display of the toolbar URL and an 'x' to close the toolbar 'frame' and automatically go to the main site.
b) Uses the right 'target' URL for the main forward/back/refresh/navigation bar etc., without dropping the toolbar... Basically be aware that it's using frames as a persistent wrapping, not as some other part of layout.
c) Becomes a feature you can explicitly disable in your browser preferences to have no frame toolbars.
Then shame any providers who don't use the hint. Google will figure out how to PageRank the right internal sites with those hints pretty fast, I'd say. Content providers will have no more to complain about than they do with any other toolbar.
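For concreteness, the hint might look something like this -- to be clear, 'toolbarframe' is the made-up attribute from above, not anything real browsers support:

```html
<!-- Hypothetical markup: 'toolbarframe' is the invented hint, not real HTML. -->
<frameset rows="80,*">
  <frame src="toolbar.html" toolbarframe="true">
  <frame src="http://example.com/article" name="main">
</frameset>
```

A browser seeing the hint would treat the second frame as the 'real' page for the URL bar, history, and PageRank purposes, and render the first as a closable toolbar.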
A camera is something that can take photos, not something with some parts that could have made a camera. So:
a) pierce the lens if you really want to be crazy, screwing up some nice camera innards. This is relatively risky.
b) Sand the lens and surrounding area a bit.
c) Get some good 2 part epoxy and apply over the camera.
Voila, you no longer have a camera.
Obviously they won't warranty YOUR CAMERA, but you don't have to open it up. If you skip step a, you're not even 'breaking' anything... but if b/c is done right, you won't ever be able to use that camera again, because you'd have to break apart the body of the laptop along with the lens, too.
Basically, as long as each virtual node isn't doing any real WORK, you don't need any special hardware -- and even if they're doing some work, just not a lot, you're still fine. We have 5 Linux Xen VMs in production on a 1600 MHz Celeron with 768 MB of RAM; it works fine, no problems.
The CPU is almost irrelevant - you'll need whatever CPU you'd need to do all the things you're doing, plus some overhead, but it's not like it falls apart.
RAM is the only critical thing. You need at least 96 MB for the host and 24 MB for each additional live Xen VM, as I recall (that's probably not precisely right), but you'll naturally be swapping a ton if you do that. A more reasonable VM has 128-256 MB of RAM itself, so you need that much for each active VM. But again, that's only for each one running at a time.
Or, if you are going to swap a bunch, get better disks.
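Plugging in the (admittedly fuzzy) numbers above -- 96 MB for the host plus 128-256 MB per comfortable VM -- shows why a 768 MB box covers 5 VMs. A quick budget calculation, using my recollected figures rather than official Xen minimums:

```python
def host_ram_mb(n_vms: int, per_vm_mb: int = 128, host_mb: int = 96) -> int:
    """Rough RAM budget for a Xen host: host overhead plus one comfortable
    allocation per simultaneously running VM. The defaults are the fuzzy
    recollections from the post, not official Xen requirements."""
    return host_mb + n_vms * per_vm_mb

print(host_ram_mb(5))  # prints 736 -- which is why 768 MB runs 5 VMs fine
```

Only VMs actually running at the same time count toward the total, so paused or shut-down guests are free.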
In any case, I definitely wouldn't climb the price curve of equipment to do this; don't buy anything on the bleeding edge. Look at Ars Technica and just max the RAM on a value box -- or maybe upgrade the motherboard to something that takes more RAM.
Used, commodity computer equipment is usually not price-effective compared to the cheap end of what's still available new. But pay attention to the price point where it's cheaper to get (and power, while they're on) TWO value boxes than to pump the one box you've been considering even higher.
I agree both with the parent's GENERAL point and with the other replies that say it's too confusing. That is, for actual, and probably rural users, your proposed system is way too complex. In addition, the POST itself is complex.
The OP's goal is clearly to be nice about this. As the parent suggests, the key to being nice without paying for a bigger pipe is to properly encourage users toward off-peak downloads. You need a simple, fair system that just works for users who aren't thinking about it. And I agree, filtering by traffic type is lame.
So from a bulk-downloader point of view you want a system that limits everyone's bandwidth during peak times only - and you want to publish when the offpeak times are so that aggressive downloaders can choose to download stuff during those times if they so desire.
The peak limits should be stiff enough that you aren't quite pegged in either upload or download (separate limits) so everybody gets a relatively low latency connection. Feel free to add more than one tier of "peak" if you need to, especially internally. Or if you're really cool, it will automatically detect when you're about to be at 100% and throttle based on that... so you're not actually 'setting' peak times, you're just publishing guidance on what times tend to be peak.
This kind of traffic shaping - limiting everyone's bandwidth fairly when there isn't enough - is basically good for your users as a whole.
Another key thing is HOW this bandwidth is limited. What you want is not really "no more than 200 kb/s". What you really want is more like "no more than 12000 kb/min, and no more than 2000 kb/s". There are more complex algorithms for this, but the important thing is to average bandwidth over a modest time period -- somewhere between 5 seconds and a couple of minutes is probably right. Most typical web users who AREN'T bulk downloading need a lot of bandwidth for very short periods, and to keep the interactive web experience fast you need to give it to them.
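A toy token-bucket version of that dual-limit idea (the 2000 kb/s and 12000 kb/min numbers are the examples above; the code is purely my own illustration -- a real router would do this in the kernel, not in Python):

```python
class DualWindowLimiter:
    """Allow short bursts up to a per-second cap, while a second, slower
    bucket enforces a lower sustained average per minute."""

    def __init__(self, per_sec_kb=2000, per_min_kb=12000):
        # (capacity in kb, refill window in seconds) for each bucket.
        self.caps = [(per_sec_kb, 1.0), (per_min_kb, 60.0)]
        self.tokens = [per_sec_kb, per_min_kb]  # both buckets start full
        self.last = 0.0

    def allow(self, kb, now):
        """True if a transfer of `kb` kilobits may proceed at time `now`."""
        elapsed = now - self.last
        self.last = now
        # Refill each bucket in proportion to elapsed time, up to its cap.
        for i, (cap, window) in enumerate(self.caps):
            self.tokens[i] = min(cap, self.tokens[i] + cap * elapsed / window)
        # The transfer must fit in BOTH buckets, or it's throttled.
        if all(t >= kb for t in self.tokens):
            for i in range(len(self.tokens)):
                self.tokens[i] -= kb
            return True
        return False
```

The effect is exactly what interactive users need: a page load can burst at full speed, but someone pulling 2000 kb every second runs the minute bucket dry after a few seconds and gets squeezed back toward the 200 kb/s average.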
The 2.6 kernel does this pretty easily; 2.4 might but I can't remember. Of course, I don't have a clue whether you're using a linux router. TrafficControl or tc, I think the module was called. But I haven't had to adjust mine in a good long time.
Make it right before you make it faster.