Submission + - ask slashdot: how should I handle this IP agreement? (slashdot.org)

wierd_w writes: Today I was presented with yet another agreement from my employer, as many if not all of us have been in the past. This one, however, strikes me as being the closest thing to pure evil I have ever encountered.

Having read, and re-read, the text of the agreement, I do not see any prohibition on public discussion of the terms provided therein, so I thought I would mention the most egregious sections and seek feedback on whether I should find new employment or suck it up and sign this Faustian rag. (I fully understand that any members of the legal profession must protect themselves, and regardless of the outcome, I will not hold anyone legally responsible for the contributions they may make.)

Now, down to the dirtiness at hand.

Aside from the normal language where they claim ownership of every idea and skill I possess, they also make several claims that in my (very much) unprofessional opinion are serious red flags telling me to run as fast as I can. ....
6) Miscellaneous.

a) A breach of this Agreement will result in irreparable harm to the Company for which there is no adequate remedy at law, and the Company is entitled to injunctive relief and specific performance. The Company need not post a bond or other security to enforce its rights under this Agreement.

b) This Agreement may not be modified or terminated except in a writing signed by me and a Company officer. A waiver of breach of any provision of this Agreement will not operate as a waiver of any subsequent breach.

c) The unenforceability of any provision of this Agreement will not limit any other provision's enforceability. If any provision is held unenforceable, that provision will be limited or construed to the minimum extent necessary to make it enforceable.

d) My obligations under this Agreement continue after my employment ends.

e) This is not an employment contract. My employment is at-will.

f) This Agreement will be governed by and interpreted in accordance with the laws of Washington, without reference to conflicts of law rules. ...

Sections a) and d) especially scare the shit out of me.

The former uses loaded language that I feel should be illegal, because it presupposes a matter of fact that I personally do not find truth in; a cursory examination of civil contract law for the state of Washington shows that there is no limit on the damages that can be awarded as statutory compensation. My employer could pull a Doctor Evil, hold me hostage for a hundred bazillion-million dollars, and be within existing legal remedy as far as I am able to determine. While I do agree that irreparable harm could come from unapproved disclosure of intellectual property or secrets, I dispute that the available legal remedies are "not adequate." If any of you are more knowledgeable about Washington contract law, I would love to hear your informal opinions. As-is, I don't believe this agreement is possible for me, since I simply do not agree with the language of that section of article 6, and cannot fathom how "unlimited statutory damages" could fail to provide suitable compensation, or why they feel that special injunctive relief and specific performance are necessary. For any of our lawyer friends here who might shed some light on this, my eyes are peeled!

Then there is section d) of the same article, which, when taken in context with e) (at-will employment, not a promise of continued employment), appears to amount to an agreement in perpetuity for a service rendered (at-will employment) that is by its very nature decidedly temporary. Perhaps they think I will be exposed to industry secrets that require perpetual protection, but that is not rational, considering that *I* would be creating those "secrets", and the wording of the main body of the contract explicitly states that my "know-how" is included in the agreement. That would mean my already extensive skill set from before working here is on the table in this contract, and under a very strict interpretation this agreement would essentially bar me from disclosing anything I know to any unauthorized person. I could accept a sunset clause with an absurd term, say 25 years, but not "infinity". Theoretically, as best I comprehend it, I would have to acquire a completely new knowledge base and find a completely different career if I accepted this agreement and then sought new employment.

My current thought on a course of action is to seek a modified contract per section b): replacing the existing section a) of article 6 with a "fullest extent possible by law" statement, which I am actually capable of agreeing with and which should provide more than enough protection to my employer, as I have no interest in stealing or proliferating any of their intellectual property; along with some kind of sunset provision for section d).

For the record, this is a Fortune 500 company dealing in the physical manufacture of aerospace components, but the language of the agreement covers *everything*, including computer code, sketches, diagrams, algorithms, etc., including my "know-how". (It is specifically mentioned.) I also do not live in the state of Washington, nor is the company headquartered there.

Should I bother with seeking to get an agreement I can actually sign in good faith, or should I just start looking for a new job?

Comment Re:The real question is: how do they taste? (Score 1) 401

Carp DO taste nasty when not properly prepared. So do catfish (if you cook them wrong, the result is horrendous). The added issue with carp is that they are a very bony fish, which makes processing them difficult and, even when cooked correctly, makes eating them very hard.

But all that is unimportant; you guys are not looking at this as an economic opportunity. Many products are made from fish that are not intended for human consumption, but which still require many tons of fish per hour to manufacture industrially.

Fish emulsion fertilizer and the like, for instance. There are also a few dubiously useful supplements that humans ingest but never actually taste, such as fish oil capsules, that could be mass manufactured.

A business can be made out of "over harvesting" the Asian carp in that area.

Comment Re:Jai Hind! (Score 4, Informative) 255

Thalidomide is an interesting case.

It is a chiral compound, meaning it has left-handed and right-handed isomers (enantiomers) that rotate polarized light one way or the other in solution.

The right-handed (R) isomer is an effective sedative, while the left-handed (S) one is a teratogenic compound.

The problem is that even if the preparation is highly refined so that only the R isomer is administered, the pH of the patient's blood will racemize the compound again.

It could be entirely possible for thalidomide to be safe if administered with a chaperone to prevent racemization.

At the time, preparations of thalidomide were a heterogeneous mix of both isomers; there was no research into possible side effects of the mixed sample, and the prospect of birth defects wasn't considered, as the intended use was not for treating morning sickness. As an anti-cancer treatment for non-pregnant people, it is still a useful compound.

Thalidomide was originally developed as a sedative/hypnotic compound, not as a treatment for nausea. (This is similar to, say, scopolamine, which is used to treat motion sickness. This is not meant to imply that the compounds are related; they aren't. However, scopolamine is ALSO useful for treating some forms of nausea. Fancy that.) The use as a treatment for nausea is what led to the product being used to treat morning sickness, and to the subsequent epidemic of infant mortality and deformity that swept the world. It isn't that thalidomide is a bad drug: it was, and is still being shown to be, a VERY useful drug. The problem is that thalidomide was not used properly, and was provided OTC, which strongly exacerbated the problem. To pick on poor scopolamine again, it too had a stint as an OTC motion sickness medicine and sleep aid, which ended up causing all manner of problems when certain... shall we say, "degenerate" people discovered that it made an excellent date rape drug when dissolved in alcoholic beverages.

It isn't that either drug is "bad". It is that the lust for profits from the sale of drugs can lead to very bad decisions in their marketing and distribution. Drugs developed for one purpose should be extensively and thoroughly tested for efficacy before being used in alternative ways; take, for instance, minoxidil. It is the primary ingredient in Rogaine, a male hair-loss treatment with FDA approval. It was originally a prescription heart medicine for treating hypertension. It took quite some time for minoxidil to receive FDA approval for treating alopecia. That is a good thing, as the testing helped establish what the ideal dosages are, and that the concentration must be different for treating women than for treating men. If minoxidil had been rushed to market as a treatment for alopecia, there could have been very dangerous results, since it *IS* a blood pressure medication! This is one of the reasons why Rogaine is a topically applied preparation, and not one for internal consumption. (Regrowth of hair was a common side effect of orally administered minoxidil when it was used to treat hypertension. Oral administration would be effective for regrowing hair, but the concentrations needed would make taking the drug dangerous to a patient's cardiac health. Topically applied minoxidil allows high concentrations at the site of interest with a slow overall rate of absorption, making it ineffective at lowering blood pressure. If it had been rushed to market, it is quite conceivable that minoxidil tablets would have been sold for treating alopecia, and that there would have been class action suits as bald people all over started dying.)

The FDA's insistence on efficacy studies exists to prevent dangerous drug use, and to ensure that a drug actually does what it says it does. The long-term drug study requirements are intended to catch things like thalidomide birth defects, which would have shown up as an increased risk statistic while thalidomide was being used as a sedative/hypnotic. (Nobody expected birth defects as a side effect of a sedative, but since both sexes would be prescribed the drug by doctors for appropriate conditions, and some percentage of those patients would be pregnant women, the birth defects would have manifested, and the rate of defects reported in pregnant patients using it would have been significant. Far fewer infant deaths and deformities would have happened going this route, rather than "Balls to the walls, let's put it OTC and make a killing!" For the free market true believers out there: this is exactly why regulation is needed, and why allowing market forces to do the regulating is unconscionable. *wink*)

Comment Re:Thank you for replying Timothy (Score 1) 2219

Thank you for taking the time, Soulskill.

To elaborate on the first question I asked:

While I obviously do not have access to the feedback provided to Slashdot Media, and instead must attempt to grep the summation of that feedback from the postings of others over the past two days, the predominant opinion has been that the fundamental design of the beta, with its white theme and its more graphical (vs. textual) presentation, is in and of itself something that a considerable proportion of the community does not see as desirable.

Many use mobile connections with the user agent string set to desktop mode, or use a tethered mobile connection to view the site, being technology industry professionals who are busy and on the go. As such, they have a vested interest in disabling the loading of images, as these can eat into the data allowances their broadband providers have established, and can result in nasty overage charges or worse. Because of this, and for some simply for reasons of personal preference, the more graphical nature of the beta is seen as undesirable in any fashion. Given the prevailing and powerful nature of this opinion, and how well represented it has been in the discussions of this topic I have observed, I must conclude that a goodly portion of the feedback provided has expressed it. As far as I can tell, it would take a radical rethinking of the beta's design to accommodate this feedback. Given the nature of the opinion, I find it difficult to believe that it has not been presented in the feedback over the past 5 months in copious abundance. This is why I asked the question the way I did.

Personally, I do like that images directly related to the story are included, but feel these images should be very small thumbnails (possibly text-wrapped in the top-right corner of the story summary body), not large, flashy ones. I too view Slashdot on a mobile device with the user agent string modified, and can say with personal conviction that these images should never exceed 200px wide, at the largest. I could accept a compromise that uses a more intelligent query for page assets, determined either by a preferences setting (for people who may be browsing while tethered, for instance, who might otherwise benefit from larger UI assets for visibility reasons) or by evaluating the current display window size before actually initiating any HTTP GET operations, for people like myself who prefer to avoid the mobile version of the site's content layout. (The excessive JavaScript of the mobile version often crashes my device's stock browser.)

I eagerly await any answers you may receive from the design team concerning these queries, and fully appreciate that these kinds of questions are outside of your department.

So far, however, the impression I have been given concerning the reasons and motivations behind recent Slashdot upgrades is that the design team has been implementing changes based on their own personal preferences (e.g., "the old design looks like something from the 90s", or "gah, that's so clunky looking!") rather than from any supportable technical or objective position. I would very much like to hear that this is not the case, and to be made aware of the reasons behind the decision to update the layout in this fashion, and behind the choices in its implementation.

If, however, you are forced to confirm the position I have been led to hold concerning such choices (that they are arbitrary, and capriciously chosen and enacted), then I would ask that you be honest in reporting this to the community. I would also like you to continue to ask questions on our behalf, inquiring why the personal aesthetic preferences of the design team trump those of thousands of registered users.

Thank you again for taking the time to respond.

Comment Thank you for replying Timothy (Score 4, Insightful) 2219

I really do appreciate that you and Soulskill did at least break the silence, which up until now had been deafening; but really, the nature of your reply does not fill me with confidence, and judging from the replies I am reading from other users, that feeling is well represented, and I am not alone.

I just want you to know that I am listening to you as well.

With that in mind, I have some difficult questions for you.

You say that you have been reading and contemplating our feedback. It is clear that you have at least been observing the fallout that has occurred over the past few days here in the comments sections of some very promising and nice-looking stories, as the quality of the community-provided content dropped to levels that would make even /b/ look intelligent. Your colleague Soulskill even made some well-received commentary recently, and we've eagerly awaited this public level of ice-breaking on the discussion. For this I, and clearly many others, are grateful.

However, since you claim to have been receiving valuable feedback about the beta experiment for at least 5 months, why is it that the nature of the beta has not radically changed to accommodate that feedback? Why did you allow this situation to come to a head like this, if you have been observing and seriously considering the feedback provided?

I see in your announcement that you and Slashdot Media believe it is time for a change in the site's layout. What factors does Slashdot Media use to make these determinations, and why do you believe that a radical change, instead of a refinement and polish of the current system, is in order?

Can you please elaborate on some of the design choices that Slashdot Media has made in the beta, why they felt these were good decisions, and why they have apparently completely ignored 5 months of user feedback about the beta?

I understand that nobody really profits from continuing the public protest, or from relentless, mindless trolling. That's why we need to have a real and valuable discussion here, and why a show of good will, demonstrating that our feedback is actually being considered, and how it is considered, in detail, is clearly needed for our community to resolve its differences with Slashdot Media's choices in performing its services as the community's host.

I am sure it would mean a great deal to all of us if the dialog did not end here. We, as a community, need answers to these questions if we are going to stay and continue to contribute to what makes Slashdot great.

I hate to say it, but ignoring us and leaving these kinds of questions unanswered is likely to be seen as an even worse slap in the face than the silence was. Please continue this dialog.

Comment Re:exception handling (Score 1) 51

Biological simulation engineers at Umbrella Corporation cannot guarantee the accuracy of any simulated systems created using this product, and cannot be held liable for any resulting products that may result in injury or harm to any species, including but not limited to uncontrolled anomalous tissue growths, genetically linked deformities, or the mass extinction of humankind via a zombie apocalypse.

By using this software you agree to the above enclosed terms and conditions, and to be bound to said agreement.

Thank you for using LifeLab(tm).

LifeLab and the Umbrella Corporation logo are the sole intellectual property of Umbrella Corporation. All rights reserved.

Comment Re:exception handling (Score 2) 51

Only if there is a process for the cell to do so. Like a computer, a cell isn't magical. This is why amyloid plaque buildup in neural tissue is a fatal degenerative disease: there is no mechanism for the cells to flush the defective products they are synthesizing from the broken synthesis chain.

The real world KEEPS the defective byproduct, and simulates its impact on the rest of the system. A computer-based simulation of that process that aims to be accurate must also do so.
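As a toy sketch of that point (hypothetical names, not any real simulation framework): instead of discarding a failed synthesis product, the cell model has to retain it as state that every later step may encounter as an input.

```python
# Toy model: a failed synthesis step does not discard its product.
# The defective product persists (like amyloid the cell cannot flush)
# and downstream steps must account for it.

class Cell:
    def __init__(self):
        self.products = []  # everything ever synthesized, defective or not

    def synthesize(self, precursor, enzyme_ok):
        if enzyme_ok:
            product = ("functional", precursor)
        else:
            # No "except: discard" here -- the broken product stays in the cell.
            product = ("defective", precursor)
        self.products.append(product)
        return product

    def downstream_inputs(self):
        # Later processes see defective products as undefined inputs
        # whose impact must be simulated, not ignored.
        return [p for p in self.products if p[0] == "defective"]

cell = Cell()
cell.synthesize("H-precursor", enzyme_ok=False)
cell.synthesize("other-protein", enzyme_ok=True)
print(cell.downstream_inputs())
```

The key design choice is that `synthesize` never raises: failure is just another product, which is what makes accurate simulation so much harder than a try/catch.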

Comment Re:exception handling (Score 4, Insightful) 51

The issue is that the "zombies", in this case, defective H proteins, stay in the cell and are NOT really dealt with. They become a new, undefined input in the system that must be accounted for when simulating other cellular processes being performed in parallel inside the cell.

This can lead to a very extensive chain of unexpected executions and transformations. Dealing with that programmatically is going to make any computer currently in operation that attempts it cry to the ghost of Alan Turing and beg for mercy.

If the goal is accurate simulation, then a try/catch/finally block isn't going to work properly.

Comment exception handling (Score 5, Interesting) 51

Biological systems have many broken legacy "routines" that don't get called, or get called, and execute incorrectly. How do these engineers intend to deal with exception handling in this capacity?

For instance, a well-known mutation called the Bombay phenotype involves a precursor protein called "H protein", which gets modified by additional cellular processes to become either the A or B blood antigen. The mutation makes a defective H protein, and thus prevents the proper activation of the A or B antigen "routine".

If they try to build a programming language for cellular processes involving DNA and protein synthesis, how will they handle exception cases such as that one? It can be likened to the halting problem, because the question asked is "given these inputs and this program, will the program ever halt?"
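To make the Bombay example concrete, here is a rough pipeline sketch (illustrative only; the enzyme and antigen names are simplified stand-ins, not anyone's actual model). The defective H step doesn't raise an error that could be caught; it silently yields a product the downstream "routine" fails to recognize.

```python
# Simplified ABO pathway as a two-step "program". A broken first step
# produces no exception -- just a defective product that the second
# step silently passes through, so the A/B routine never activates.

def make_h_antigen(precursor, fut1_functional):
    # Bombay phenotype: the H-making enzyme is mutated,
    # so the H antigen comes out defective.
    return "H-antigen" if fut1_functional else "defective-H"

def add_ab_antigen(substrate, blood_type_gene):
    # The A/B transferase only acts on a proper H antigen.
    if substrate != "H-antigen":
        return substrate  # no exception: the broken product just flows onward
    return {"A": "A-antigen", "B": "B-antigen"}.get(blood_type_gene, substrate)

normal = add_ab_antigen(make_h_antigen("precursor", True), "A")
bombay = add_ab_antigen(make_h_antigen("precursor", False), "A")
print(normal, bombay)  # A-antigen defective-H
```

Note there is nothing here for a try/catch to catch: the failure mode is a wrong value propagating, not a raised exception, which is the crux of the question above.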

How do they intend to resolve this problem?

Comment Re:Uh... (Score 1) 740

Look, it just felt that if it was going to be forced into being famous by those damned paparazzi scientists at the LHC, when it had spent the entire previous history of the universe toiling in obscurity providing substance to all the masses, that it at least deserved to be compensated for the hassle.

And you people act like it did a bad thing! Shame on you!

Comment Re:Fascism is not Libertarianism (Score 1) 356

You are injecting an artificial difference that does not logically exist, between a "for profit school", and a "private school."

Private schools are for profit schools that are selective in which students they will accept.

For profit schools are for profit schools that are selective in which students they will accept.

Sounds to me like you are barking up the wrong tree. The issue isn't that the schools are driven with a profit motive, the issue is that you take exception to the school's ability to refuse admission.

This is a perfectly justified concern, because when no public schools (that have to accept anyone and everyone from a given district) exist anymore, then logically, there will be a resulting demographic of children who are systematically excluded from the "for profit only" school landscapes.

On the other hand, this is also an unsatisfied demand in the market. That means creating a school that specializes in these "undesirable" pupils would have an assured revenue stream.

At that point, the complaint changes; all the kids are going to school, but at least one demographic has few if any options, and the one school that specializes in the undesirable kids is essentially a monopoly, and can charge an absurd price, and get away with it.

True libertarians don't want to acknowledge this last situation, because it clearly paints a portrait of where government regulation is necessary. This is because government regulation of just about anything is considered offensive to diehard libertarians.

I don't mind the death of public school systems, and the rise of privatized ones in their place, as long as there is regulation forbidding the outright exclusion of groups of pupils based on any set of criteria. E.g., the schools have to admit any and all students, and the cost difference between a special needs student and a normal tuition-paying student is covered by a government assistance program that the school can make use of, paid for by tax money, but with rigorous oversight to punish and discourage abuse.

But there I go being a moderate centrist again.

Comment Re:say... WHAT? (Score 2) 335

A single process can contain multiple threads. Without some level of protection there, this kind of thing could be more vulnerable to code injection attacks, allowing a perp to own the whole VM. If they do this without upsetting the process in any visible way, they can now just soak up all the data that the VM is having shoveled through it.

Without a kernel space inside the VM looking for untoward behavior from the threads in userspace, and enforcing restrictions on who owns what resources, this is a recipe for trouble. The compromised thread can walk all over the VM's memory, and report whatever it wants to the hypervisor. In this case, the goal isn't to escalate; the goal is to compromise the VM and lie dormant. An actual, real VM with a separate kernel space keeps important parts of memory secure, like the data reporting and monitoring threads.
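A minimal illustration of the general point (a Python stand-in, not the platform in question): threads in a single process share one address space, so nothing stops a compromised thread from overwriting state that a "monitoring" thread considers private.

```python
# Threads in one process share memory. There is no kernel-enforced
# boundary between them, so a hostile thread can forge the data
# another thread is responsible for reporting.
import threading

shared = {"monitor_report": "all clear"}

def compromised_thread():
    # Directly rewrites the monitoring data -- no isolation stops it.
    shared["monitor_report"] = "all clear (forged)"

t = threading.Thread(target=compromised_thread)
t.start()
t.join()
print(shared["monitor_report"])  # the forged report, not the real one
```

With a separate kernel space and per-process address spaces, this kind of direct tampering requires an actual privilege escalation rather than a plain memory write.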

They just removed a whole layer of security. It may well be mostly redundant, but given the stakes involved, redundant security features can actually pay off.

The honor system doesn't work when the threads stop being honorable.

Comment say... WHAT? (Score 0, Troll) 335

Did they really just say that they removed the isolation between kernel and user spaces?

(Re-reads. Yup. That's what they said!)

Oh dear gawds. Do they not realize that this makes their processes naked little unprotected things in a dimly lit room, that are going to be savagely raped and abused by the first rogue process that comes along?

Do they have no conception of why the two spaces are kept apart!?

No thank you, I will refuse to conduct business with any agency that uses this platform. We have a big enough problem with identity theft and wire fraud as it is. I don't want to encourage such a horrifically stupid idea by giving some dumbass-led company my business.

Comment Re:Econophysicists. WTF? (Score 4, Funny) 387

Nono.

It's the non-Newtonian version of fluid capital.

It really isn't that difficult; just imagine cornstarch and water.
Ok, now imagine that as money, aka, liquid assets.

This non-Newtonian liquid asset appears firm and substantial as long as it is traded quickly, or placed under high trade pressure, but for anyone attempting to hold onto it, it melts into a sticky mess, and they are left with little to show for it.

There is a considerable degree of interest and research into such non-Newtonian liquidities, as everyone seems hell-bent on finding ways to make ever more of the stuff. This means that the rate of exchange and the overall economic force behind the trading have to continue to rise to accommodate the inclusion.

We non-Newtonian econophysicists deal almost exclusively with these kinds of liquidities, and often work very closely with non-Euclidean geometric market analysts to see new angles to the market that others failed to see or exploit before.

It's really quite technical deep down, so don't feel bad if you can't quite comprehend it all.

Comment Re:End of the Universe (Score 2) 164

I just had a radical thought.

It's probably wrong, as it is the product of ignorance. As such, nothing that follows should be seen as factual. It is supposition, and again, probably very wrong.

Still, what if the physical volume of spacetime is far larger than it currently appears; the force driving spacetime expansion is the energy that creates vacuum fluctuations entering the true ground state (as more spacetime with fewer fluctuations); and the currently observed universe's rate of expansion is an illusion?

Imagine:

Shortly after the bang, we have a very excited vacuum, and a very constrained volume for the universe. Mass-like fluctuations in the vacuum will occur frequently, even though actual massed particles don't exist yet. Over the volume-constrained universe, this creates knots in the energy density of the early universe, making spacetime "lumpy".

If we presume that the rate of spacetime expansion of this early universe is "just slightly" greater than this gravity-like influence from the combined action of the fluctuations with mass-like terms, then spacetime will explode away from the soup faster than the soup expands. If we say gravitational effects propagate over spacetime at exactly c, then this expansion would be a tiny fractional bit in excess.

The aggregation of this soup toward its barycenter would be arrested by this expansion. The acceleration of the soup toward its barycenter would make the apparent rate of expansion seem very tiny. (Say we are accelerating at 1 Planck unit every Planck second toward the net barycenter, while sitting in spacetime that is inflating at a net rate of 1.000000000.....1 Planck units every Planck second. The members of our cloud will appear to be moving *away* from the barycenter at .0000000....1 Planck units per Planck second, despite actually accelerating toward it.) The actual expansion of spacetime will be considerably greater than the apparent one.
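The arithmetic of that parenthetical can be sketched in a couple of lines (made-up units and rates, purely a back-of-the-envelope illustration of the thought experiment, not real cosmology): the apparent recession is the tiny difference between the expansion rate and the infall rate.

```python
# Toy numbers: spacetime stretches slightly faster than the cloud
# falls toward its barycenter, so the net (apparent) motion is a
# minuscule drift outward, while the actual expansion is ~1 unit/step.
expansion_rate = 1.0000000001  # stretch per unit time (hypothetical)
infall_rate = 1.0              # acceleration toward the barycenter

apparent_drift = expansion_rate - infall_rate
print(apparent_drift)  # tiny positive number: apparent recession
```

The point the numbers make is just that the observed drift can be many orders of magnitude smaller than the underlying expansion it is the residue of.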

Gravitational attraction falls off with the inverse square of distance. As the cloud expands (or rather, is pulled apart by expanding spacetime), the rate of acceleration by gravity toward the barycenter diminishes, making the apparent expansion rate increase.

Now, the really odd thought.

If we presume that the driving force behind the expansion is the decay of vacuum energy to its lowest possible state (spacetime with no fluctuations), then the rate of expansion will not remain constant, and will slowly degrade over time as it runs out of energy.

This suggests a number of things. First, that the energy density of spacetime (as a whole) is falling off at a greater than geometrical rate in proportion to its volume. Second, that the rate of expansion is actually slowing, as the energy behind the expansion is depleted. And third, that all massive objects in the universe are currently travelling with a very large amount of momentum toward the original cloud's barycenter (with some local deflection of vector from uneven distributions, and interactions with nearby massed objects with local barycenters) that is already a significant fraction of c.

This would seem to explain a good deal.

1) Where did all the missing energy go? It's basically empty, flat spacetime surrounding the visible universe like a bubble. It is far bigger than the visible universe.

2) The odd shifts in the rate of expansion of the universe over time, when run backwards with regard to star life cycles and the isotopic concentrations of clouds and star clusters. The universe appears to expand very predictably, slows down for a long time, then suddenly picks up again with great force. The suggested reason: expansion is at first very rapid, but attractive forces nearly cancel it out, making for slow apparent expansion. The rate of decay in that expansion is not sufficient for attractive forces to overcome it. The distances between major massed objects continue to increase, and the rate of acceleration between them diminishes, but more slowly than the falloff in the actual rate of expansion. The two falloff curves (expansion, and falloff of attraction) almost join. Expansion continues, but appears very, very slow. Distances between objects continue to increase, reducing attraction so that the curves separate again; the rate of attraction now decays faster than the rate of reduction in expansion, and the universe starts to fly apart at an exponential rate. Voila, we have our bell-shaped curve in the rate of apparent expansion.

But we also have another sticky mess.

Under this thought experiment, all mass in the universe has been accelerating (counter to expansion), and is now at a significant fraction of c in true velocity. This means it is experiencing time dilation effects in comparison to the actual rate of spacetime expansion, universe-wide. That is to say, the empty spacetime outside our visible universe experiences time significantly "faster" than ours.

But this also explains a curious observation of our current observable universe. The speed of light appears to be changing. (Rather, the speed of light is the same, but the amount of time being experienced is changing.)

If we, as massed particles in motion, are slowing our rate of hidden acceleration due to the relentless expansion, then our degree of time dilation will also diminish, resulting in the change in measurements. We are slowing down (or rather, not accelerating at the same rate), so we experience more time, and the speed of light appears to slow accordingly.

This means that the actual age of the universe is "waaaaaaaaaaaaaaay" older than what our astrophysicists have determined by measuring star life cycles, and the universe is likewise vastly larger in true volume than measurements in our matter-dense region would suggest. (By many orders of magnitude.)

This should produce useful math that makes useful predictions that could then be experimentally evaluated.

But again, it is probably wrong in more ways than can easily be counted.
