I really want to like libressl. But it pretends to be openssl badly. They refused a patch that would have mitigated this whole RAND_egd problem by simply returning that it doesn't work when someone tries to use it, which means that you commonly need a patch to use it at all. If it's not going to work like openssl, then it shouldn't occupy the same space in the filesystem.
This is not news to most people, but I just tried it for the first time on my first-ever normal Debian Wheezy install (I've always done minimal, netinst etc. and built it up from there for a purpose) and wow, GNOME3 is amazingly horrible. It makes Unity look usable. If that was the idea, mission accomplished, I guess.
Welp, I can use Slashdot in Chrome and not in Firefox, which implies that something I'm blocking in Firefox is preventing the new improved Slashdot from working. What new spyware bullshit do I have to enable to use Slashdot now? Thanks, DICE! You'll run this place the rest of the way into the ground any day now.
SSDs already use wear-leveling technology that effectively turns all file updates into copy-on-write operations.
If SSD devices kept track of the old copies, an operating system or SSD-vendor-supplied data-rescue utility could easily treat non-overwritten data as a "shadow copy." And if the SSD hid that data from the host computer unless a particular switch or jumper was set, it would aid in data recovery after a ransomware attack.
Why hide it from the host when the switch is not set? If the "shadow copy" IS visible to the OS, all the ransomware has to do is write to the disk until the data it wants to erase is no longer there in the "shadow copy." If it is invisible to the host, the ransomware has to write enough data to overwrite all existing "shadow copies" to guarantee success.
Why would a user have the switch on all the time? Backups.
Having a hardware-based "shadow copy" mechanism that the backup software or host OS understood would make backups easier, without the host OS or filesystem having to implement a shadow-copy system of its own.
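The idea above can be sketched as a toy model. Everything here is invented for illustration (the class name, the "recovery jumper" flag); it just demonstrates the behavior being proposed, not any real drive firmware:

```python
class ShadowSSD:
    """Toy model of an SSD whose wear-leveling turns every write into
    copy-on-write, retaining the superseded block as a shadow copy."""

    def __init__(self):
        self.current = {}             # logical block -> latest contents
        self.shadows = {}             # logical block -> older versions
        self.recovery_jumper = False  # the physical switch on the drive

    def write(self, block, data):
        # Copy-on-write: old contents move to the shadow list instead of
        # being overwritten in place.
        if block in self.current:
            self.shadows.setdefault(block, []).append(self.current[block])
        self.current[block] = data

    def read(self, block):
        return self.current.get(block)

    def read_shadow(self, block):
        # Shadows stay invisible to the host unless the jumper is set, so
        # ransomware running on the host cannot enumerate or target them.
        if not self.recovery_jumper:
            raise PermissionError("recovery jumper not set")
        return self.shadows.get(block, [])
```

After a simulated ransomware overwrite, the old contents are unreachable until the jumper is flipped, at which point the rescue utility can read them back.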
Then they modded down five of my comments in a row. Why doesn't the system catch this kind of obviously abusive moderation? Oh right, because this is slashdot, not someplace with competent employees.
If moderation on slashdot were intelligently designed, this person's abusive moderation would have been autodetected and they would have been banned from moderation permanently.
Anyone still lurking around this place? I see several people are still active here.... quite a huge pile of names I have not seen in a long long time.
All your e are belong to Mother Nature.
Lately, I'm seeing more and more "reign in" on Slashdot and I've got to say, I'm disappointed.
I need a new system on which to run asterisk, bonus points if I don't have to configure it from scratch. I'd like to spend less than $200 (ideally I'd pick up something used if necessary for $100) but I have storage devices available, whether CF, SD, USB, or what have you. It can have wireless, but it doesn't have to because I have a routerboard for that. I have found my pogoplugs to be unreliable at best.
I am tired of the trolls here and I have made quite a few enemies over the years. I feel like this is now a 35-year-old version of high school, where it is popular to say certain things, complete with squeaky cheerleader girls, rather than a place of intellectual thought.
Arstechnica.com serves this much better.
I got modded down "0, Troll" for putting iOS in development requirements because it wasn't an official real language, by self-righteous asshats who feel threatened in their own unique C/unix way regardless of market demand, which was my point. I do not belong here anymore. I am not a Linux fanboy anymore, as I feel it is no longer keeping up with the times, and I refuse to be brainwashed into an idea, never change, and grow old and set in my ways. I change with the times and adjust accordingly. I want a place where I can do this. Most importantly, I want to spend less time here and go better myself like a good middle-aged person is supposed to do.
Storing a private key "in the cloud":
The key is K1. K1 is thousands of seemingly random bits, probably based on a pair of 1024-bit-or-larger prime numbers. You typically store K1 on your computer using a good encryption algorithm. Your password to decrypt the key is P1. P1 is typically tens of characters. Decrypting K1 with P1 is a fast operation on a human time scale, under a second.
Although K1 is typically used to encrypt or decrypt data, for the purposes of this document, K1 is the thing to be encrypted. It will not be used to encrypt or decrypt anything.
The goal: safely store a backup of key K1 online such that the end user can access it from any device if he has both the password P1 and something else that is not mathematically related to K1.
Method 1, the "something else" is a one-time pad:
Create a random one-time pad, R1, which is the same size as K1.
"Encrypt" (XOR) K1 with R1 then encrypt both with P1, creating the safe copy S1. Store S1 online.
Print off a copy of R1 such that it can be easily photographed and re-constructed. Store R1 or an encrypted version of it in a safe place, such as a safe-deposit box or distributed in parts to trusted secret-keepers.
Without R1 it is provably impossible to extract K1 from S1, so S1 is "safe."
R1 by itself is useless.
R1 with S1 constitutes a compromise but it will mean the attacker has to either guess P1 or exhaustively search for it.
If the person loses their local copy of K1, they can use R1, P1, and S1 to reconstruct K1.
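Method 1 can be sketched in a few lines. One assumption to flag: "encrypt both with P1" is stood in for here by XORing with a PBKDF2-derived keystream, which is not an authenticated cipher; a real implementation would use a proper AEAD scheme for that step. The salt handling is also simplified:

```python
import hashlib
import secrets

def keystream(password, salt, n):
    """Stand-in for 'encrypt with P1': a PBKDF2-derived XOR keystream.
    A real implementation would use an authenticated cipher instead."""
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=n)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_safe_copy(k1, p1, salt):
    r1 = secrets.token_bytes(len(k1))   # one-time pad, same size as K1
    # S1 = (K1 xor R1) encrypted under P1; store S1 online, print R1 on paper
    s1 = xor(xor(k1, r1), keystream(p1, salt, len(k1)))
    return s1, r1

def recover(s1, r1, p1, salt):
    # Undo the P1 layer, then undo the pad: ((S1 ^ ks) ^ R1) == K1
    return xor(xor(s1, keystream(p1, salt, len(s1))), r1)
```

Without R1, S1 is information-theoretically independent of K1 (the pad sees to that); without P1, an attacker holding both R1 and S1 still faces the password layer, matching the compromise analysis above.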
Method 2, create a file S2 from which it is computationally hard to extract K1 without P1, moderately (but acceptably) difficult to extract K1 with P1 and no other information, and easy to extract K1 with P1 and "something else" not related to K1.
For example, create a one-time pad R2 which consists of P1 combined with some random-ish filler-number B2 whose size is dependent on how "moderately difficult" it can be to extract K1 given only P1.
If this pad R2 is at least as long as K1, proceed as in Method 1: "encrypt" (XOR) K1 with R2, then encrypt the result with P1, creating a safe copy S2. As neither P1 nor B2 is known or predictable, S2 is safe.
The time to recover K1 from S2 with only P1 will be the time it takes to go through all (or, on average, half) of the possible values of B2. Since the length of B2 was chosen in advance based on how hard this decryption should be, K1 will be recoverable in a predictable, acceptable amount of time. With B2 and P1, recovering K1 from S2 is quick.
If the pad R2 is not as long as K1, one option is to re-use it; this will not satisfy the goal of being "computationally hard to extract K1 without P1," but it may be good enough for some applications.
A different solution is to encrypt K1 with P1 (the file that is normally stored on the person's local computer will qualify) then encrypt the result with either B2 or some combination of P1 and B2 to create S2. The difficulty of extracting K1 from S2 with only P1 depends on the time it takes to go through all (or, on average, half) of the possible values of B2. Depending on the lengths of P1 and B2 and the encryption algorithms used, this may not be safe enough. With B2 and P1, recovery is quick.
This method has the advantage that the "something else," B2 in this case, need not be kept at all.
A typical scenario where the "B2" method would be preferred over the "R1" method is where it is acceptable if key K1 becomes unavailable for an extended period of time in exchange for a zero-risk that an adversary will acquire or discover R1.
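The B2 variant can be sketched too. Two assumptions to flag: the cipher below is a cheap hash-based XOR keystream (deliberately weak so the brute-force demo runs fast; a real design would use a proper cipher), and a hash of K1 is stored alongside S2 so the brute-forcer can recognize when it has found the right B2, a detail the text leaves open:

```python
import hashlib
import itertools
import secrets

def cipher(key_material, n):
    """Stand-in cipher: XOR keystream derived by hashing the key material.
    Deliberately cheap so brute-forcing B2 stays fast in this demo."""
    stream = b""
    counter = 0
    while len(stream) < n:
        stream += hashlib.sha256(key_material + counter.to_bytes(4, "big")).digest()
        counter += 1
    return stream[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_s2(k1, p1, b2):
    s2 = xor(k1, cipher(p1 + b2, len(k1)))
    # The hash of K1 lets a recovery run recognize success (an assumption).
    return s2, hashlib.sha256(k1).digest()

def recover_with_b2(s2, p1, b2):
    return xor(s2, cipher(p1 + b2, len(s2)))

def recover_without_b2(s2, p1, k1_hash, b2_len):
    # With only P1, walk every possible B2; expected cost grows as 256**b2_len,
    # so the chosen length of B2 sets the "moderately difficult" dial.
    for candidate in itertools.product(range(256), repeat=b2_len):
        guess = recover_with_b2(s2, p1, bytes(candidate))
        if hashlib.sha256(guess).digest() == k1_hash:
            return guess
    return None
```

With a 2-byte B2 the exhaustive search finishes in well under a second here; each additional byte multiplies the work by 256, which is exactly the predictable-recovery-time property Method 2 is after.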
A self-proving identification card:
Display in human-readable and computer-readable form:
Identifying information such as name, card number, issuer/certifying agent, expiration date, face or thumbprint, signature, etc.
Display the same in a computer-readable form. For easy-to-scan things like letters and numbers that are on the card in a pre-defined layout, the human-readable form and computer-readable form may be identical.
For things like a photo, the computer-readable form may be a simpler version, such as an 8- or 16-color 64x64 bitmap.
Have the computer-readable form be digitally signed by the issuer/certifying agent and have the signature on the card in both a computer- and human-readable form.
Have the scanning device display the computer-read data in a human-readable form so that a human being can compare what is on the screen with what is on the card.
The same human being would compare what is on the card with either another form of ID or, if the card had a picture or thumbprint, with that of the person presenting the card.
Some information on the card could be encrypted and require a password or other authentication token to decrypt.
Other than this optional part, the card would be "self proving" provided that the public key of the issuer/certifying agent was available to the authentication terminal.
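The sign-and-verify flow above can be demonstrated end to end. To keep this self-contained, the sketch uses textbook RSA built from two publicly known Mersenne primes, so this keypair proves nothing cryptographically and the scheme lacks the padding a real deployment needs; it only shows how a terminal holding the issuer's public key can check a card offline:

```python
import hashlib
import json

# Toy issuer keypair: publicly known Mersenne primes, unpadded textbook
# RSA. Illustration only; a real card would use a standardized signature
# scheme with properly generated keys.
P = 2**127 - 1
Q = 2**521 - 1
N = P * Q
E = 65537
D = pow(E, -1, (P - 1) * (Q - 1))  # issuer's private exponent

def card_digest(card):
    # Canonical serialization of the computer-readable card fields.
    payload = json.dumps(card, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(payload).digest(), "big")

def issue(card):
    """Issuer signs the card data; the signature is printed on the card."""
    return pow(card_digest(card), D, N)

def verify(card, signature):
    """Any terminal holding the issuer's public key (N, E) can check the
    card without contacting the issuer: the card is 'self proving'."""
    return pow(signature, E, N) == card_digest(card)
```

Tampering with any signed field (the name, the expiration date, the low-resolution photo bitmap) changes the digest and the check fails, which is the property that lets the human at the terminal trust what the screen shows.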
Ok, I need to expand a bit on my excessively long post on education some time back.
The first thing I am going to clarify is streaming. This is not merely a distinction by speed, which is the normal (and therefore wrong) approach. You have to distinguish by the nature of the flows. In practice, this means distinguishing by creativity (since creative people learn differently than uncreative people).
It is also not sufficient to divide by fast/medium/slow. The idea is that differences in mind create turbulence (a very useful thing to have in contexts other than the classroom). For speed, this is easy: normal +/- 0.25 standard deviations for the central band (i.e., everyone essentially average), plus two additional bands on either side, making five in total.
Classes should hold around 10 students, so you have lots of different classes for average, fewer for the bands on either side, and perhaps only one for the outer bands. This solves a lot of timetabling issues, as classes in the same band are going to be interchangeable as far as subject matter is concerned. (This means you can weave in and out of the creative streams as needed.)
Creativity can be ranked, but not quantified. I'd simply create three pools of students, with the most creative in one pool and the least in a second. It's about the best you can do. The size of the pools? Well, you can't obtain zero gradient, and variations in thinking style can be very useful in the classroom. 50% in the middle group, 25% in each of the outliers.
So you've 15 different streams in total. Assume creativity and speed are normally distributed and that the outermost speed streams contain one class of 10 each. Starting with speed for simplicity, I'll forgo the calculations and guess that the upper/lower middle bands would then have nine classes of 10 each and that the central band will hold 180 classes of 10.
That means you've 2000 students, of whom the assumption is 1000 are averagely creative, 500 are exceptional and 500 are, well, not really. Ok, because creativity and speed are independent variables, we have to have more classes in the outermost band - in fact, we'd need four of them, which means we have to go to 8000 students.
These students get placed in one of 808 possible classes per subject per year. Yes, 808 distinct classes. Assuming 6 teaching hours per day x 5 days, making 30 available hours, which means you can have no fewer than 27 simultaneous classes per year. That's 513 classrooms in total, fully occupied in every timeslot, and we're looking at just one subject. Assuming 8 subjects per year on average, that goes up to 4104. Rooms need maintenance and you also need spares in case of problems. So, triple it, giving 12312 rooms required. We're now looking at serious real estate, but there are larger schools than that today. This isn't impossible.
The 8000 students is per year, as noted earlier. And since years won't align, you're going to need to go from first year of pre/playschool to final year of an undergraduate degree. That's a whole lotta years. 19 of them, including industrial placement. 152,000 students in total. About a quarter of the total student population in the Greater Manchester area.
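The back-of-envelope arithmetic in the last two paragraphs checks out if you take the 808-classes-per-subject-per-year figure at face value (that figure is the author's own estimate; the other numbers follow from it):

```python
# Sanity check of the classroom arithmetic above.
classes_per_subject = 808
hours_per_week = 6 * 5                        # 6 teaching hours x 5 days
simultaneous = -(-classes_per_subject // hours_per_week)   # ceiling division

years = 19                                    # preschool through degree
rooms_one_subject = simultaneous * years      # fully occupied every timeslot
subjects = 8
rooms_all_subjects = rooms_one_subject * subjects
rooms_with_spares = rooms_all_subjects * 3    # maintenance and spare capacity

students_per_year = 8000
total_students = students_per_year * years
```

Running the numbers gives 27 simultaneous classes, 513 rooms for one subject, 4104 for eight, 12312 with spares, and 152,000 students, matching the figures in the text.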
The design would be a nightmare with a layout from hell to minimize conflict due to intellectual peers not always being age peers, and neither necessarily being perceptual peers, and yet the layout also has to minimize the distance walked. Due to the lack of wormholes and non-simply-connected topologies, this isn't trivial. A person at one extreme corner of the two dimensional spectrum in one subject might be at the other extreme corner in another. From each class, there will be 15 vectors to the next one.
But you can't minimize per journey. Because there will be multiple interchangeable classes, each of which will produce 15 further vectors, you have to minimize per day, per student. Certain changes impact other vectors, certain vector values will be impossible, and so on. Multivariable systems with permutation constraints. That is hellish optimization, but it is possible.
It might actually be necessary to make the university a full research/teaching university of the sort found a lot in England. There is no possible way such a school could finance itself off fees, but research/development, publishing and other long-term income might help. Ideally, the productivity would pay for the school. The bigger multinationals post profits in excess of 2 billion a year, which is how much this school would cost.
Pumping all the profits into a school in the hope that the 10 uber creative geniuses you produce each year, every year, can produce enough new products and enough new patents to guarantee the system can be sustained... It would be a huge gamble, it would probably fail, but what a wild ride it would be!
Two American fighter jets dropped four unarmed bombs into Australia's Great Barrier Reef Marine Park..."Have we gone completely mad?" asked Australian Senator Larissa Waters. "Is this how we look after our World Heritage area now? Letting a foreign power drop bombs on it?"