
Comment Re:Most of them won't accept bankruptcy (Score 1) 917

Judging from your subject line, you seem to be under the false impression that bankruptcy is a solution. Unfortunately, it's not, because of decades of highly successful lobbying by banks and Sallie Mae.

Student loans cannot be discharged in bankruptcy. This is federal law: the 2005 bankruptcy reform extended the protection to private-party loans as well as federally backed ones, and it applies retroactively, covering loans issued before 2005.

Creditors can garnish wages without a court order to pay off student loans. Creditors can confiscate tax refund checks, disability checks, and social security checks without a court order. Notice the part about social security -- there is no statute of limitations on student loans, so creditors can do all of the above for as long as you live, even into your retirement years. If you die, they can pursue your cosigners for as long as they live.

The only way to win forgiveness for a student loan is to prove undue hardship in court. This is not the same as bankruptcy -- it's a much higher standard of proof. The burden of proof is on the debtor. Few borrowers have the resources to hire the legal representation that this process requires.

A huge part of the problem is that most Americans have no idea just how one-sided the student lending laws have become. Unfortunately, you seem to be contributing to that problem.

Comment Re:The lottery system is a joke (Score 1) 210

the best evidence available shows that Asians have the greatest intelligence on average of any race of people.

You have no clue what you're talking about.

I take it you live in the USA? The set of Asians who live in the USA is a heavily biased and unrepresentative sample of the set of all Asians. The US immigration system is designed to select the best and brightest immigrants; that's why the Asians in the US are so smart and hard-working. The average Asian from an Asian country would be nothing special in America, but Asian Americans as a group are drawn from roughly the top 0.5% of all Asians, because US immigration laws are designed to keep out the stupid people. It's completely the opposite of what you claim.

If you actually go to an Asian country, you'll find that the people there are no smarter than Americans. But from your condescending attitude, it's clear that you're happy to claim international expertise without ever having left the USA. Try traveling or even moving to another country sometime -- it'll work wonders on your worldview.

With blacks and Hispanics, it's a totally different story. Most African Americans are descended from people brought here as slaves rather than selected by immigration law, and Hispanic numbers are skewed by illegal immigration, which bypasses that selection. That's why the selection effects of US immigration law are significant only for Asians and not other races.

Comment Re:There are fewer than 50 (Score 3) 588

No, it's not a fact. The "fewer than 50" claim is outrageously false. Wikipedia alone lists dozens of western speakers.

I personally know three westerners, none of whom was born or raised in China, who are completely fluent in Chinese (they could pass a spoken or written Turing test), and another five who are fluent except for a foreign accent. It's absurd to claim "fewer than 50" when I can think of eight firsthand without even trying.

Having visited foreign consulates in China, I can make a quick estimate: there are likely at least 500 westerners with total fluency in Chinese in the embassies and consulates alone.

Comment Re:Don't know why - but I like it (Score 1) 2288

That's complete and utter hogwash. You think imperial is "natural" simply because you are more used to it. Any non-American (except for a few Brits, Aussies, and Canucks) thinks metric units are more "natural".

In the first sentence of the post to which you are replying, the GP explained convincingly that s/he is not American and is more used to metric.

Comment Re:Care to elaborate? (Score 1) 2288

I live in Canada as a permanent resident. I've imported and registered American cars in Canada (permanent registration, not temporary, and yes I've done this more than once, in different years). The process is a pain, but not as difficult as you imply.

The Canadian authorities require a speedometer capable of displaying km/h. A speedometer dial that shows both sets of tick marks is fine, even if one is larger than the other. A digital speedometer with a metric option is also fine. I've seen cars with analog dials and only one set of markings, where you press a button on the dash to change the meaning of the needle from mi/h to km/h. (If you press the button while the car is moving, the needle jumps to the equivalent value -- from 60 mi/h to roughly 97 km/h, for example.) That's fine too.

There is no requirement that the odometer display support kilometers. This is a fact that I have personally verified with border agents during my previous importations.

The main difficulties in importing American cars to Canada are:

  1. Daytime running lights: Basically the car must have low-intensity headlights or (at a minimum) fog lights that are on at all times while the car is in operation, and the driver must not be capable of turning the lights off.
  2. No automatic seat belts (prohibited in Canada).
  3. Attachment points for car seats (mandatory in Canada).

It's quite possible that converting American cars into Canadian cars is cost-prohibitive, but I bet the cost has much more to do with things like daytime running lights than the relatively trivial issue of units.

Comment Re:woman's unwitting sabotage had catastrophic.... (Score 1) 282

Ok, could we sensationalize this one up more? Catastrophic? really? So how many people died? how many places exploded or burned to the ground?

Your reasoning is fallacious, and (unfortunately) quite common. Although it is not politically correct to put a price on human life, in reality money is a finite resource that can directly save lives (food aid, etc.). A crime that causes monetary or productivity loss can certainly be viewed as catastrophic, depending on the amount of the loss involved. 3.2 million people losing internet access for 5 hours can certainly affect a country's economy and measurably reduce its tax revenue. Presumably the government is doing something productive and (dare I say) life-saving with that tax revenue. Indirectly, massive financial crimes can in fact cause loss of life, and that loss can be quantified.
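
To make the "can be quantified" point concrete, here is a back-of-envelope sketch. The 3.2 million users and 5 hours come from the comment above; the dollar value per online hour is an assumption chosen purely for illustration, not a measured figure.

    # Back-of-envelope only: the per-hour value below is assumed for illustration.
    users = 3_200_000            # people who lost internet access (from the story)
    hours_offline = 5            # duration of the outage (from the story)
    value_per_user_hour = 1.00   # assumed average economic value of an online hour, in dollars

    loss = users * hours_offline * value_per_user_hour
    print(f"Estimated productivity loss: ${loss:,.0f}")   # $16,000,000 under these assumptions

Change the assumed value per hour and the total scales accordingly; the point is only that the loss is a number, not an abstraction.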

If you think just a little bit outside the box, you'll see that financial crimes can be just as devastating as murder in terms of society-wide effects.

Comment Re:Sounds like liberal arts grad students (Score 1) 332

Dozens of applicants for professorships? I've applied for teaching/generalist English professorships in the last year for which there have been 500-800 applicants. No kidding. Those are extreme cases, but most searches, even in specialist areas, are netting at least 150 applications.

The GP said that the ratio of Ph.D. candidates to positions was dozens to one, not that the ratio of applications to positions was dozens to one. The two numbers are not the same, unless each candidate applies to exactly one position on average.

In reality, each candidate applies to dozens of academic positions on average. (Some apply to hundreds, some apply to none; the average is probably on the order of a few dozen.) A few dozen people per position, multiplied by a few dozen applications per person, is entirely consistent with the range of 150-800 applicants per position.
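
A quick sanity check of that arithmetic (the specific round numbers are mine, chosen only to illustrate the multiplication):

    # Illustrative numbers only -- chosen to show the arithmetic, not measured.
    candidates_per_position = 20     # "dozens" of Ph.D. candidates chasing each opening
    applications_per_candidate = 25  # each candidate applies to a few dozen openings

    # applications per position = (candidates / positions) * (applications / candidate)
    applications_per_position = candidates_per_position * applications_per_candidate
    print(applications_per_position)  # 500 -- comfortably inside the reported 150-800 range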

You say that you are (applying to be) an English professor. I am a math professor. I have no sympathy for mathematicians who can't write. Writing is a big part of my research, and every individual on this planet is better off having rudimentary skill in communication. One can even reasonably argue that English in particular is the most important language worldwide. But, by the same token, I also consider foundational math, like English, to be a basic skill that every individual needs. Those who lack mathematics skills are bound to make the same kind of mistake that you displayed.

Comment Re:SSL certs are both over-trusted and under-trust (Score 1) 194

The solution to this absurdity is to build a time machine, go back to the 80s and define three protocols "http:", "httpe:" (encrypted) and "httpv:" (identity validated) so users don't grow up thinking https: is secure.

Well said. But why do we need a time machine? https is broken and we need to fix it.

Your whole line of posts is based on some sort of premise that we must maintain compatibility with the status quo. My whole point is that the status quo is so irretrievably broken that we must fix it, even if we need drastic steps such as eliminating compatibility with prior notions of "URL" or "https".

Firefox's hysteria against self-signed https goes in the opposite direction. It reinforces the status quo and makes https (or httpe or whatever you would want to call it in an ideal world) even more unusable.

The problem can be fixed. SSH uses no certificates whatsoever (trust is pinned to host-key fingerprints, typically accepted on first use), and yet people successfully trust SSH encryption for root-level access. SSH is a far more robust and secure protocol than SSL ever will be.
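
For reference, here is a minimal sketch of that trust-on-first-use idea. The file path and function names are made up for illustration; real SSH clients implement the same logic via known_hosts.

    # Sketch of trust-on-first-use key pinning (the SSH model, no CA involved).
    # File path and function names are illustrative, not from any real tool.
    import hashlib, os

    KNOWN_HOSTS = os.path.expanduser("~/.toy_known_hosts")

    def fingerprint(public_key_bytes):
        return hashlib.sha256(public_key_bytes).hexdigest()

    def check_host(host, public_key_bytes):
        fp = fingerprint(public_key_bytes)
        known = {}
        if os.path.exists(KNOWN_HOSTS):
            with open(KNOWN_HOSTS) as f:
                known = dict(line.split() for line in f if line.strip())
        if host not in known:
            with open(KNOWN_HOSTS, "a") as f:   # first contact: remember the key
                f.write(f"{host} {fp}\n")
            return "accepted on first use"
        if known[host] != fp:
            return "KEY CHANGED -- possible man-in-the-middle, refuse to connect"
        return "ok"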

Comment Re:SSL certs are both over-trusted and under-trust (Score 1) 194

There's no reason why browsers have to display "https://" or even "http://".

Except, that's a significant part of the address. "http://somesite.org" and "https://somesite.org" could, potentially, point to different content (certainly different vhosts).

Web servers already display different content to users based on their geographical location or their login cookies or any number of state variables, and these content changes are not reflected in the URL. Your point means nothing.

Sites using https generally do so because they want to exchange sensitive data, and the use of a self-signed certificate might indicate that a MiM attack is in progress, or (possibly more likely) that the site is being run by a cowboy outfit who can't be arsed to get proper certificates. So, a self-signed https connection is always slightly fishy (there are plenty of innocent explanations, but identifying those requires human judgement + technical understanding).

This is circular reasoning. Because today's browsers are so alarmist about self-signed certificates, the use of self-signed certificates automatically looks fishy. If the alarmism were removed, the amount of legitimate usage of self-signed certificates would increase dramatically.

self-signed https = someone could be mounting a man-in-the-middle attack or you may have been spoofed/phished to the wrong website.

The same holds for regular http. Someone could be mounting a man-in-the-middle attack with regular http.

Meanwhile, there is one big difference between http and self-signed https that you omitted. With regular http (and only regular http), large-scale attacks like police surveillance and content filtering become possible. https (even self-signed) prevents large-scale passive attacks.
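
A quick illustration of that distinction using Python's standard ssl module: the connection below never validates the certificate (roughly what accepting a self-signed certificate amounts to), yet the traffic is still encrypted. The hostname is just a placeholder.

    # Encrypted but unauthenticated TLS: defeats passive eavesdropping,
    # but an active man-in-the-middle is not ruled out.
    # "example.org" is a placeholder, not a recommendation.
    import socket, ssl

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE      # accept any certificate, self-signed included

    with socket.create_connection(("example.org", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
            print(tls.version(), tls.cipher())   # traffic on this socket is encrypted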

I'd suggest (3) is by far the best place at which to start nagging - most users will rarely encounter this situation (only sites with very small user bases, like home servers or in-development sites have a real excuse for not getting a cert) so you're not going to swamp typical users with bogus warnings. For the typical user, this does mean that something out-of-the-ordinary is happening.

Again, the fact that self-signed certificates are out of the ordinary is a self-fulfilling situation that you helped to create by insisting that they be treated as out of the ordinary.

And remember at the end of the day, all browsers like firefox actually do is warn you, encourage you to view the certificate and decide whether you want to trust it temporarily or permanently

NO! That's not what Firefox does. If Firefox did in fact do what you claim it does, then I would be happy.

In practice, Firefox effectively blocks self-signed certificates. It takes five (count them, five) mouse clicks to connect to a self-signed https site in Firefox, compared with one mouse click in IE. A regular user is scared away after even one warning click, let alone five, so in practice Firefox ends up blocking self-signed certificates entirely.

Regular http has no warnings whatsoever, even though every attack against self-signed https is also possible against http, and some attacks against http are not possible against self-signed https. This situation is absurd beyond belief.

Comment Re:SSL certs are both over-trusted and under-trust (Score 1) 194

Yes, a self-signed https connection can be more dangerous than a plain http one if you see the "https" or the "golden padlock" and assume you have a secure connection.

The obvious solution is: don't display "https" or the "golden padlock."

There's no reason why browsers have to display "https://" or even "http://". The average non-technical user doesn't care about the protocol; they just care about the "golden padlock." On the other hand, the average technical user already knows what's going on.

Nobody here is arguing that self-signed https connections deserve a "golden padlock." That's your own straw man.

The proposal is that we should treat self-signed https connections the same as unencrypted http connections. The same. Not worse. Not better. The same.

I have yet to see anybody articulate an even remotely coherent argument against this proposal.

Comment Re:trim/discard (Score 3, Interesting) 491

2 - Defragging: similarly, if you're moving data around in dead space without safely duplicating it or having a filename pointing to the blocks in use at any given time, you're not being careful. Also, which defraggers have random 3-minute gaps in operation that would even allow GC to kick in?

I think it is time to start bringing the discussion to a close, as it appears that we do share at least some common ground.

I will comment only on this one question. Your implication that the 3-minute rule somehow makes the GC "safe" misses my point entirely. Yes, in practice there are checks and balances such as you describe that make the GC unlikely to screw up. But, in my view, "unlikely" is not good enough. I want, and need, perfect (logical) block storage and retrieval. That should and must be the design goal. Of course, perfection is impossible to achieve in practice. For example, if firmware (such as older, pre-SSD firmware) is designed with the goal of providing logical block storage but fails at that task because of an honest bug, I can understand that. At least in that case the code was written unambiguously with the correct goal in mind (and no directly conflicting goals), even if the goal was not achieved in practice due to an unintentional bug.

However, when a manufacturer deliberately designs firmware with the goal of deleting logical sectors, no matter how well-intentioned or well-implemented, this design goal (by definition) must come into conflict with the original, core goal of reliable (logical) data retrieval. I do not care what happens in the underlying physical layer, but I do care very greatly about data accuracy at the logical layer. The existence of certain checks and balances to prevent data loss is better than no checks and balances, but it is not better than the REAL alternative, namely, firmware that stores and retrieves logical blocks correctly, and is designed for this and only this purpose, without any other directly contradictory design goals.

No one, not even rocket scientists, has ever succeeded in writing bug-free software. But one should make an effort to minimize the number of opportunities for data loss bugs to arise. Firmware-based logical-sector garbage collection fundamentally and irreconcilably contradicts every reliability design principle known to man. That is why I consider the idea to be so abhorrent.

Comment Re:trim/discard (Score 1) 491

This GC only works with NTFS filesystems. If you are operating an SSD device using an NTFS file tree to store data, then you (as a programmer) are not using the drive as a block device as you suggest; you're accessing it (or you should be accessing it) via the abstract file tree.

The ATA standard (parallel or serial) does mandate that the drive appear as a logical block device. A drive with an ATA connector must honor that requirement or (IMO) it violates the standard. USB Mass Storage is another example where the standard mandates logical block storage -- what happens if I put one of these defective drives into a USB enclosure?

The only difference is that people seem to take block-level access to the disk for granted;

For many non-edge-case applications, like whole-disk encryption (remember, I'm a cryptographer), block-level access is exactly what you need, and anything less is unacceptable.

I understand that the firmware is supposed to (obviously) ignore non-NTFS volumes and fall back to block-storage semantics. But the mere presence of active garbage collection is unwelcome to me: it adds another opportunity for failure.

The problem, from my perspective, is that your arguments hinge on the idea that marking data as 'deleted, and the filesystem can now overwrite it at some random future point, perhaps instantly or never' (the HDD model) is in some way better than 'deleted, purge at first opportunity' (the SSD model). From my perspective, I'd prefer the latter; at least then I know what's happened to data after it's been marked for deletion.

The latter can (and should) be implemented with explicit TRIM support. The operating system must have control over purging.
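
For what it's worth, this is how Linux exposes it today: discards are issued by the OS, on demand, against a mounted filesystem. A minimal sketch follows; the mount point is a placeholder, and fstrim must be installed and run with sufficient privileges.

    # OS-controlled purging: the administrator (or a periodic timer) explicitly
    # tells the device which unused ranges may be discarded, instead of the
    # firmware guessing by parsing the filesystem. "/mnt/data" is a placeholder.
    import subprocess

    subprocess.run(["fstrim", "-v", "/mnt/data"], check=True)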

I'm still keen to see those realistic real-world use cases. If another poster has posted them, can you provide a link?

Link is here. In short:

  • Lazy conversion from one filesystem to another might involve (re)using the deleted space in ways that the firmware did not anticipate.
  • Defragmenting might use the deleted space.
  • A raw filesystem image might be included as a file inside another filesystem.
  • Some filesystems (like UDF) don't clear NTFS headers upon formatting, and the drive might be confused as to what filesystem is on disk.
  • Microsoft itself might update NTFS in a way that makes use of the deleted space and conflicts with what the drive expects.

Comment Re:trim/discard (Score 1) 491

I'm guessing you don't consider networked computers (e.g. SMB shares, FTP sites, NFS mounts) to be storage devices either then, since the remote host will merrily overwrite deleted files with other people's data however it likes there too?

This is of course a spurious comparison. SMB, FTP, and NFS are presented to the operating system as file trees. A drive with automatic garbage collection is presented to the operating system as a block device ... but it does not actually implement the correct semantics for a storage block device.

What other somethings do you have in mind?

It seems that another poster has already enumerated more ways for automatic garbage collection to break.

p.s. Just thought of another example. RAM. Where you store data in memory logically, and how it is arranged physically - including zeroing of dead pages - are completely out of your control and even out of your view. Does this mean you consider RAM not to be a storage device, since you can't reliably construct a stego side-channel using dead pages of memory?

Again, RAM is presented to the OS as logical addresses, and it does faithfully return the data that was stored at those logical addresses.

A hard drive, like RAM, presents a logical block layer to the OS which is decoupled from the underlying physical data storage. Correct data storage and retrieval is required at that logical block layer. Automatic garbage collection violates the integrity requirements of a hard drive even at the logical layer. It imposes a secondary logical layer which assumes you are using a standard filesystem in a standard way. This introduces an additional and very scary mode of failure: the possibility now exists that the firmware might actively delete certain logical block data without the knowledge of the operating system. Of course, this could happen even with older, regular firmware, but only as an accident--by default, older firmware is programmed to store everything at the logical layer, no matter what it is. Active deletion raises the stakes considerably.

Honestly, the more I think about this, the more appalled I am that any manufacturer would actually do what you describe. I will be sure to make every possible effort to avoid such drives.

Comment Re:trim/discard (Score 1) 491

Well, if you think I (and the tech support staff on various SSD manufacturers forums) are wrong, you're welcome to buy an SSD and check for yourself. It's not quite as easy as typing 'it's impossible' a bunch of times, but it's a lot more likely to be correct.

An SSD that performs automatic garbage collection by interpreting the filesystem in firmware is not, in my opinion, a storage device.

Suppose I am a filesystem developer. Suppose I want to modify NTFS in such a way that deleted segments of an NTFS disk layout become (in my modified filesystem) a repository for meaningful data. This is not as absurd a concept as it appears. In my line of work (cryptography), storing actual meaningful data in deleted segments might be something that you want to do, for example in steganography.

If the SSD deletes disk sectors behind my back, it becomes impossible for me to develop such a filesystem. A storage device should store what I tell it to store; if it doesn't, it's not a storage device. In this sense it is, by definition, impossible for a valid storage device to implement automatic garbage collection at the filesystem level. Any device that does so fails the primary requirement of a storage device: retaining data without alteration.

Sure, those deleted sectors are safe to erase in an NTFS volume, but how do you know that my operating system is using this NTFS volume as an NTFS volume? What if I'm doing steganography or something where those deleted sectors matter?
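
That block-device contract is easy to state in code. The sketch below is illustrative only -- the device path and offset are placeholders, and running it against a real disk would clobber data.

    # The contract being relied on: whatever is written at a logical offset must
    # read back unchanged, no matter what any filesystem thinks of that sector.
    # DEV and OFFSET are placeholders; do NOT run this against a disk you care about.
    import os

    DEV = "/dev/sdX"            # hypothetical block device
    OFFSET = 4096 * 1000        # some sector-aligned logical byte offset
    payload = os.urandom(4096)

    fd = os.open(DEV, os.O_RDWR)
    try:
        os.pwrite(fd, payload, OFFSET)
        os.fsync(fd)
        assert os.pread(fd, len(payload), OFFSET) == payload   # must hold for a block device
    finally:
        os.close(fd)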

The same way they added GC to older models of SSD drives where it didn't already exist of course, and the same way they update features on any hardware. You flash the firmware with revised code.

Remind me never to upgrade the firmware on any hard drive ever again. I do want TRIM support, but I do not want automatic garbage collection, for the reasons outlined above.

I will concede that it is possible for a write-only device to implement automatic garbage collection at the filesystem level, but I maintain that no valid storage device can do so, since to do so violates the core requirement of a storage device in a fundamental and unfixable way.

Comment Re:trim/discard (Score 1) 491

Actually, this is no longer correct. SSDs (such as the one in this study) are quite capable of examining the filesystem stored on the drive, independently, and the concept of 'dutifully' and ignorantly maintaining deleted data goes out of the window as a result.

What you're describing is impossible in general. It might work for some of the more common filesystems, such as FAT or NTFS (although, given the difficulty of supporting NTFS in Linux, I highly doubt that embedded firmware on a drive can parse the NTFS format), but it is utterly impossible in the case of new filesystems. Think about it -- if a piece of hardware predates the creation of ext4, or ext5, or whatever, how can that hardware understand the filesystem?
