Comment Re:Mod Summary Troll. (Score 1) 490

The context of the grandparent's quote is an answer to this question in the FAQ:

Even if it's legal to hire international students, won't it cost a lot of money and involve a lot of paperwork?

The document, and this question in particular, is clearly designed to be informative and tackle a specific bias that exists in the hiring process.

However, the wording is *may*, not *will*, which makes it less informative, and instead of eliminating a bias it creates one: a bias in favor of hiring international students for reasons other than who is the best candidate. Who benefits from that? Certainly not the candidates hired for the wrong reasons, nor the employers who hired them for those reasons.

The preceding statements in the answer were sufficient on their own:

No. The only cost to the employer hiring international students is the time and effort to interview and select the best candidate for the job. The international student office handles the paperwork involved in securing the work authorization for F-1 and J-1 students.

Comment Re:Huh? (Score 1) 490

Nowhere does it advocate hiring international over U.S. students, nor does it state what the benefits of using international students are.

Sure, except for this unnecessary statement in the FAQ:

In fact, a company may save money by hiring international students because the majority of them are exempt from Social Security (FICA) and Medicare tax requirements.

Comment Re:The problem is performance not SQL (Score 1) 423

Cheap, unimportant data is not the only factor. A more significant factor is scale. Some of the big "NoSQL" players in TFA have a very real monetary stake in the data they are putting into these systems.

No one is saying "no to SQL" because they can do without the reliability. Quite the opposite. Put a DBMS under crushing load, and availability is the first thing to go. The big players want a system that is highly available and maintains data integrity.

A typical DBMS makes strong consistency guarantees across the entire dataset: after an update is committed, all subsequent reads MUST reflect the change. It turns out this costs a lot; it sacrifices much of the throughput the same hardware could otherwise achieve, and it fundamentally limits how far the dataset can scale. Strong consistency adds nothing to reliability, and many apps simply don't need it.

You are pretty close with your point that when you upload something to Facebook, it doesn't matter whether everyone sees it instantly the next time they refresh their browser. That is absolutely true. However, this is not to say that the underlying system lacks integrity or reliability. An "eventually consistent" data store can reliably guarantee that the data will eventually be reflected in all queries, without requiring it to be resubmitted.

Comment Re:Supplement, not replace (Score 1) 350

This can be viewed as a problem with the current browser caching paradigm. It is an important concern, and I believe it can be solved with some design changes in web apps and in the browser.

Today your web browser generally hits the remote server first, even if you already have a cached copy of the content, to check whether its cache is still valid. If the site is down, you see an error (read: your app won't run); if the site decided to delete or change the content, your browser obliges and caches the new version, whether you wanted the old version blown away or not.

Once you start talking about full-blown web apps served up through the browser, what you really want is to connect only for software update distribution and for network-oriented features that only make sense online (e.g. live chat, featured content, ads). Local-only features, e.g. word processing, should be cached in such a way that they work offline and can be rolled back if you get a bad update.

Comment Re:No - there are plenty of safer alternatives (Score 3, Insightful) 486

Technically one size argument is enough, but in a large enough software project the code that allocates the destination buffer is maintained separately from the code that copies into it. Any failure in communication (e.g. building against an outdated library) will lead to a binary whose code overruns a buffer.

With an explicit destination size parameter, the buffer copy code is no longer as sensitive to changes at the allocation site. A breakdown in communication will lead to a binary that produces a controlled runtime error instead of a buffer overrun.
