Comment Re:Ageing can be seen as a treatable disease. (Score 1) 478

Global warming was caused by a simple thing: too damned many people consuming energy on this rock, and it gets worse every year.

I thought the cause was our unwillingness to reengineer the orbit of Earth to be a teensy bit further out. It's not like we aren't going to have to do that anyway, as soon as the solar expansion phase starts in a bit...

Comment Re:Comparable? Not really. (Score 1) 126

Yeah, there's long term downside risk to the stock as a straight financial instrument (along with significant historical upside), but you know what? I don't really feel the need to destroy things just because that's the path to my highest ROI over time.

Which leads to the question: how do we force management to stop making short-term decisions that destroy companies, and the greater profits available over the longer term?

My personal suggestion? Keep investors away from decisions impacting the day to day operations of the company. One good way of accomplishing this is having two classes of stock, voting and non-voting, and keeping the voting stock in the hands of people who care about the long term interests of the company. :)

Comment The problem is the Windows 98 SP2 effect. (Score 2) 504

Apple devices "degrade" with OS updates in the same way that Windows updates do on PCs, gradually. But even after an Apple starts no being upgradeable to the latest OS release, it stays useful for years to come. My mother is still using my hand-me-down 2002 desk-lamp iMac, which has the old PowerPC processor.

The problem is the Windows 98 SP2 effect.

The last service pack supporting Windows 98 turned it from a usable system into an utter buggy, crashing heap of crap at, coincidentally, the same time they started trying to sell you Windows XP.

Note that generally I don't think this is an intentional destruction of usability on the part of Microsoft (or Apple). I just think that all their testing takes place on newer hardware, better than what the user is actually using, and so the usability test engineers never see how terrible it's going to be on (nominally) supported older devices.

Comment Re:Comparable? Not really. (Score 1) 126

When someone buys a share in Apple, they actually get an ownership share in Apple.

Apple, yes. Google or Facebook, no. Google and Facebook have two classes of stock. The class with all the voting rights is in both cases controlled by the founders. The publicly traded shares cannot outvote them, even if someone bought all of them.
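
To make the arithmetic concrete, here's a sketch with invented share counts; the 10-votes-per-share Class B ratio mirrors Google's actual structure, but everything else is hypothetical:

```python
# Dual-class voting arithmetic. Share counts are invented for illustration;
# the 10-votes-per-share Class B ratio mirrors Google's setup.
class_a_shares = 280_000_000   # publicly traded, 1 vote each (hypothetical)
class_b_shares = 60_000_000    # founder-held, 10 votes each (hypothetical)

public_votes = class_a_shares * 1
founder_votes = class_b_shares * 10

print(f"public votes:  {public_votes:,}")    # 280,000,000
print(f"founder votes: {founder_votes:,}")   # 600,000,000
# Even buying 100% of the public float leaves you outvoted better than 2:1.
```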

Until recently, multiple classes of stock were prohibited for NYSE-listed companies, which tended to discourage doing this. (The classic exception was Ford, which has two classes of stock, the voting shares controlled by the Ford family. This predates that NYSE rule.)

This matters when the insiders make a big mistake and the stock starts going down. There's no way to kick them out.

It also matters when someone has built something of value and then becomes publicly traded, since it keeps the financial vampires from descending on the company and sucking the blood out of it, leaving a husk which dies in 6 months. That's what's currently going on with the Olive Garden proxy fight, where a fund group has acquired a large position in the company, and now wants to spin off the real estate holdings to a separate company (taking about $1B of the $2.5B portfolio value as a one-time dividend, and putting its own sock puppets on the board to pump the stock short-term by changing the employee mix, etc.).

The problem with Google and Facebook maintaining one class of stock is ISOs/RSUs. Stock given as an incentive to employees can, after the vesting period, be sold on the open market, and if that stock position becomes larger than the founders', then the people who made the decisions that created the large value in the first place are no longer in control, and Gordon Gekko (or Carl Icahn) can come in and do what's best short term for the shareholders, rather than what's best long term for the shareholders, company, employees, and customers.

Who do I trust more to make the best decisions not totally motivated by short term profit, Carl Icahn, or Larry, Sergey, and Eric?

Yeah, there's long term downside risk to the stock as a straight financial instrument (along with significant historical upside), but you know what? I don't really feel the need to destroy things just because that's the path to my highest ROI over time.

For better or worse, I'd rather have the founders, not Wall Street, making the decisions that guide the future of the thing they built.

Comment Re:Place of Business. (Score 4, Insightful) 126

If a US company listed in the US decided to screw its shareholders, it and the board can be held accountable in US courts.

LOL, when has that ever happened?

It's happened many times; it's called "malfeasance" or "misconduct", and it's punishable as criminal fraud.

This is why corporate board members these days are all about "fiduciary responsibility", even if they have to club baby seals to death in the shallow waters where they are coated in oil from the Exxon Valdez.

Comment This is why you outsource manufacturing. (Score 1) 408

This is why you outsource manufacturing.

Outsource to a big company like Foxconn or Solectron that has already invested in all the expensive equipment and processes (in both cases, some of it actually paid for by Apple), and have them do your manufacturing for you.

The incremental cost ends up pretty tiny, relative to COGS, and you get a better finished product at only a fractionally higher cost than if you were stupid enough to do your own manufacturing. The argument in the article only holds up if you are stupidly building the widgets yourself.

Comment "How do you explain..." (Score 1) 408

"How do you explain..."

I don't really follow Microsoft acquisitions enough to speculate on their reasoning, but the Facebook reasoning was pretty obvious: WhatsApp cost (predominantly non-US) telephone companies $19B in per-SMS revenue over a period of 2 years, so owning it gave Facebook incredible leverage with those phone companies, and making the purchase ensured that a small group of phone companies couldn't drive WhatsApp out of business by increasing data costs to compensate (which would hurt Facebook).

Comment Given the relative percentages... (Score 1) 460

Given the relative percentages... it's likely that the "harassment escalating to assault" numbers for men are underreported by a factor of 2.5, which would be about on a par with the underreporting of men being raped in the general population. There's a real cultural stigma against reporting by men, who are, by stereotype and therefore societal norms, "supposed to be" on the other end of the power equation.

Comment They've already screwed the pooch. (Score 2, Informative) 270

They've already screwed the pooch.

They've published the source archive under the original TrueCrypt license. As a result, unless there's a legal entity (person or company) to which all contributors make an assignment of rights, or they keep the commit rights down to a "select group" that has agreed already to relicense the code, they will not be able to later release the code under an alternate license, since all contributions will be derivative works and subject to the TrueCrypt license (as the TrueCrypt license still in the source tree makes clear).

The way you do these things is: sanitize, relicense, THEN announce. Anyone who wants to contribute as a result of the announcement can't do so without first addressing the relicensing issue, and that issue can't be addressed without having already picked a new license.

Comment Re:This. (Score 1) 234

Now add to this that most major contributions in any scientific field occur before someone hits their mid 20's...

Tell me, does this account for the fact that the majority of people working in a scientific field graduate with a PhD in their mid 20s, or is it simply a reflection of that?

I expect that it's a little bit of both. Look, however, at Kepler and Tycho Brahe. Brahe's observational contributions aided Kepler, but he started well before he was 30. Kepler had his theories before 30, and was aided by Brahe into his 30s proving them out. Counterexamples include Newton, and so on. Most large contributions that aren't ideas themselves are contributions based on the wealth of the contributor, e.g. the Allen Telescope Array.

Like the GP, I'm in my late 30s and have found that my current field is less than optimal. It is a) unfulfilling, b) extremely underpaid (if I do more than 13 hours a week, the CEO running the studio is just as likely to steal my hours from me as not), and c) unlikely to go anywhere.

Reason (a) is motivation to do something that could be big, if the new pursuit is driven by passion.
Reason (b) is a piss poor reason to do something big; there's no passion involved.
Reason (c) is ennui.

If you get into something solely to satisfy (a), you have a chance at greatness; if you do it for the other two reasons, even in part, you are unlikely to have the fire to spark the necessary effort. For example, the OP's willingness to dedicate 10 hours a week out of the 24x7 = 168 total hours in a week really speaks to someone acting on a dilettante's interest, rather than a passion. Excluding sleep, you could probably argue for 86 available hours a week for a passion, and 10 hours is less than 12% of the "every moment of every day" you'd expect with a passion.

Comment This. (Score 1) 234

I can only spend maybe 10 hours a week on this

Since you already have a full life, something would have to give. The amount of time you estimate to be available would get to hobby level: the same as the other thousands of amateur astronomers in the country. But it's not enough to do any serious studying, get qualified or do research to a publishable quality.

This.

I read through the comments to find this comment so that I didn't just post a duplicate if someone else had covered the ground.

Let me be really blunt about the amount of time you are intending to invest in this project. If you were taking a college course, you should expect to spend 2 hours out of class for each hour you spend in class, and given that you only have 10 hours a week to dedicate to the idea, that's effectively 3 credit hours per term. So if you picked a community college, and it offered all the classes you needed, you should expect to have your Bachelor of Science in any given degree field in about 23 years. At three terms a year, that gets you to the necessary 210 credit hours for an Astronomy degree.

Let's say, though, that you are a super genius, and can do 1:1 instead of 1:2 for in/out of class. That only cuts your time by 1/3, which means that you get that degree in 15 years instead.
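
If you want to check my arithmetic, here's the sketch; the whole-credits-per-term rounding and the three-terms-a-year pace are my assumptions:

```python
# Back-of-the-envelope degree math from the paragraphs above. Assumptions
# (mine): credits per term round down to whole credit hours, and the
# community college lets you take three terms a year.
hours_per_week = 10
degree_credits = 210   # the Astronomy figure used above
terms_per_year = 3

def years_to_degree(out_of_class_per_class_hour):
    # Each credit hour costs 1 in-class hour plus N out-of-class hours weekly.
    credits_per_term = hours_per_week // (1 + out_of_class_per_class_hour)
    terms_needed = degree_credits / credits_per_term
    return terms_needed / terms_per_year

print(f"{years_to_degree(2):.0f} years at the normal 1:2 ratio")   # ~23
print(f"{years_to_degree(1):.0f} years for the super genius 1:1")  # ~14, call it 15
```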

Now add to this that most major contributions in any scientific field occur before someone hits their mid 20's; there are exceptions, but let's say again that you are exceptional. What contributions do you expect to be able to make after age 61 / 53, with your shiny new Bachelor's, since you're unlikely to find someone to hire you at that age, and you're unlikely to be able to afford instrument time on the necessary equipment on your own?

Comment I would say you have it right. (Score 1) 336

I would say you have it right.

Apple initially didn't open up the iPhone to Apps at all because Steve was deathly afraid of building another Newton.

Then they wanted to open them up, but there was no rational set of APIs, there was just an internal morass, because the system had never been designed with the idea of hardening one app on the iPhone against interference by another app on the phone, or hardening the phone's functions against a malicious app.

This is a single app on a single-use, incomplete API, one which was built only to host this app and nothing else. Could that API be exposed and used for other applications? Yeah. Would that enable all possible NFC applications which you might want to implement in the future? Not a chance in hell.

This is just Apple wanting some bake time so that they can rationally support an API that they happily demonstrated opening hotel doors and doing other things, but which they are not prepared to open up at this point in time.

Comment Re:TDD FDD (Score 1) 232

Tests need to be fast and repeatable (among other characteristics). Tests must be of as high quality as your production code. If you would fix "timing related" issues in your production code, there is no reason your tests should suffer from "timing related" issues either.

There's no reason they *should*, but they do unless you correct the test. The problem is in the test code, or in the wrapper that runs the test code. But consider an automated login test on an isolated network with a credentials server that races to come up with the browser that's attempting the login in the test case. If the login happens to start before the login server gets up and stable, then your login fails, and so does your test case, even though it's not a problem with the browser you are nominally testing.
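
The standard mitigation is to gate the timed part of the test on the dependency actually being up, so an environment problem reports as an environment problem. A minimal sketch; the health URL and the login stub are hypothetical, not from any particular framework:

```python
import time
import urllib.error
import urllib.request

def wait_until_up(url, timeout=60.0, interval=0.5):
    """Poll a health endpoint until the server answers 200, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet; keep polling
        time.sleep(interval)
    return False

def do_browser_login():
    pass  # placeholder for the real browser-driven login under test

def test_login():
    # If the credentials server loses the race, fail loudly as an
    # environment problem instead of as a bogus browser-login failure.
    assert wait_until_up("http://creds.test.local/healthz"), \
        "credentials server never came up: environment, not product, failure"
    do_browser_login()
```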

This is/was a pretty common failure case on the ChromeOS build waterfall, because Chrome was considered an "upstream" product, and therefore changes in Chrome, when they occurred, could throw off the timing. There wasn't a specific, separate effort to ensure that the test environment was free from timing issues. And since you can't let any test run forever, if you intend to get a result that you can act upon in an automated way, you get transient failures.

Transient test failures can (sort of) be addressed by repeating failed tests; by the time you attempt to reproduce, the cache is likely warmed up anyway, and the transient failure goes away. Problem solved. Sort of. But what if everyone starts taking that tack? Then you end up with 5 or 6 transient failures, and any one of them is enough to shoot you in the foot on any given retry.
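
The arithmetic behind that is unforgiving: independent flake probabilities multiply across the suite, and per-test retries mostly (but only mostly) claw the losses back. A toy calculation with an invented 2% flake rate:

```python
# Toy flakiness math; the 2% transient-failure rate is invented.
flake_rate = 0.02    # chance a flaky test fails for environmental reasons
n_flaky = 6          # flaky tests that have crept into the suite
retries = 1          # re-run each failing test once

p_clean_no_retry = (1 - flake_rate) ** n_flaky
p_test_eventually_green = 1 - flake_rate ** (retries + 1)
p_clean_with_retry = p_test_eventually_green ** n_flaky

print(f"clean run, no retries: {p_clean_no_retry:.1%}")    # ~88.6%
print(f"clean run, one retry:  {p_clean_with_retry:.1%}")  # ~99.8%
# The retry also hides any real regression that happens to look transient.
```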

Now add that these are reactive tests: they're intended to avoid the recurrence of a bug which has occurred previously, but is probabilistically unlikely to occur again; when do you retire one of these tests? Do you retire one of these tests?

Consider that you remove a feature, a login methodology, a special URL, or some other facility that used to be there; what do you do with the tests which used to test that code? If you remove them, then your data values are no longer directly comparable with historical data; if you don't remove them, then your test fails. What about the opposite case: what are the historical values, necessarily synthetic, for a new feature? What about for a new feature where the test is not quite correct, or where the test is correct, but the feature is not yet fully stable, or not yet implemented, but instead merely stubbed out?

You see, I think, the problem.

And in theory your build sheriff, or whoever else is under fire to reopen the tree, should root-cause the problem, but in practice doesn't have time to actually determine a root cause. At that point, you're back to fear driven development, because for every half hour you keep the tree closed, you have 120 engineers unable to commit new code that's not related to fixing the build failure. Conservatively estimate their salary at $120K/year; their TCO for computers and everything else is then probably $240K/year, and for every half hour you don't open the tree back up, that's ~$14K of lost productivity. Then once you open it up, there's another half hour before the next build is ready, so even if you react immediately, you're costing the company at least $25K every time one of those bugs pops on you and you don't just say "screw it" and open the tree back up. Have that happen 3X a day on average, and that's $75K of lost money per day, so let's call it $19.5M a year in lost productivity.
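
If you want to sanity-check those numbers, they hang together if you spread TCO over productive hours rather than calendar hours; the ~1,000 productive hours a year is my assumption to make the rounding work:

```python
# Back-of-the-envelope tree-closure cost from the figures above. The
# productive-hours figure is my assumption: TCO per *productive* hour.
engineers = 120
tco_per_year = 240_000             # ~$120K salary, fully loaded to ~2x
productive_hours_per_year = 1_000  # assumption, not from the comment

cost_per_hour = engineers * tco_per_year / productive_hours_per_year
print(f"per half hour closed: ${cost_per_hour / 2:,.0f}")   # ~$14K
print(f"per incident (1 hr):  ${cost_per_hour:,.0f}")       # "at least $25K"
print(f"per year at 3 a day:  ${25_000 * 3 * 260:,.0f}")    # $19.5M
```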

This very quickly leads to a "We Fear Change" mentality for anyone making commits. At the very least, it leads to a "We Fear Large Change" mentality, which won't stop forward progress, but will ensure that all forward progress is incremental and evolutionary. The problem then becomes that you never make anything revolutionary, because sometimes there's no drunkard's walk from where you are to the new, innovative place you want to get to (eventually). So you don't go there.

The whole "We Fear Large Change" mentality - the anti-innovation mentality - tends to creep in any place you have the Agile/SCRUM coding pattern, where you're trying to do large things in small steps, and it's just not possible to, for example, change an API out from everyone, without committing changes to everyone else at the same time.

You can avoid the problem (somewhat) by adding the new API before taking the old API away. So you end up with things like "stat64" that returns a different structure from "stat", and then when you go and try to kill "stat" after you've changed everywhere to call "stat64" instead, with the new structure, you have to change the "stat" API to be the same as the "stat64" API, and then convert all the call sites back, one by one, until you can then get rid of the "stat64".
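
Sketched in Python by analogy (the real story is C and libc; these names are just illustrative), the dance looks like this:

```python
# The add-new-API / migrate / rename-back dance, by analogy with stat/stat64.
import os
from dataclasses import dataclass

@dataclass
class StatResult:        # old, narrow structure
    size: int

@dataclass
class StatResult64:      # new structure with the extra fields
    size: int
    blocks: int

def stat64(path) -> StatResult64:
    # Step 1: introduce the new API alongside the old one.
    st = os.stat(path)
    return StatResult64(size=st.st_size, blocks=getattr(st, "st_blocks", 0))

def stat(path) -> StatResult:
    # Step 2: the old API becomes a shim while call sites migrate to stat64.
    r = stat64(path)
    return StatResult(size=r.size)

# Step 3, much later: once every caller uses stat64, change stat() to return
# the new structure, convert the call sites *back* one by one, then delete
# stat64. Or, per the Solaris option below, give up and keep both forever.
```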

That leads to things like Solaris, where the way you ensure binary compatibility is "give the hell up; you're never going to kill off the old stat, so just live with carrying around two APIs, and pray people use the new one so you can kill off the old one in a decade or so". So you're back to another drunkard's walk of very slow progress, but at least you got the new API out of it.

And maybe someday the formal process around the "We Fear Change" mentality, otherwise known as "The Architectural Board" or "The Change Control Committee" or "Senior VP Bob" will let you finally kill off the old API, but you know, at that point, frankly you don't care, and the threat to get rid of it is just a bug in a bug database somewhere that someone has helpfully marked "NTBF" because you can close "Not To Be Fixed" bugs immediately, and hey, it gets the total number of P2 or P3 bugs down, and that looks good on the team stats.
