
Comment Re:Lack of vision (Score 2) 157

Sometimes, Google just baffles me. The lack of direction in their product lines makes me shake my head.

We have several distinct software platforms:

1) Android. Development in XML with Java used as glue to hold everything together. Unless you don't. You can use standard C libraries and call the Linux kernel directly, bypassing the Dalvik Java VM.

2) Chrome browser. Development largely in JavaScript, though again there are some obvious exceptions. JavaScript is, of course, preferred because it's safer, so Chrome protects you by having everything done in JavaScript. Except that it isn't.

3) ChromeOS. Kinda/sorta like using the Chrome browser, except that it's not, because you are developing things that run as if they were actual clients. In JavaScript. And of course this, too, is just as strictly enforced.

4) But let's not forget the fourth platform in the trio: Google's Go language is clearly a contender, and it's designed to replace C, except for a few bone-headed decisions, like linking everything statically and producing enormous binaries. Because you really, really need a separate copy of the same library baked into every app, so that you get to recompile everything installed on your system whenever a security update comes out for your favorite library. Except that, of course, there are exceptions here too.

And most importantly, you cannot target all these platforms from any single codebase written in any one language. It's as if they are trying to make their product suite as difficult to use together as products from multiple vendors anyway.

It's really quite simple. A lot of Google projects started from a handful of people going "you know what would be a cool idea?" and doing it with very little approval or red tape (the fabled 20% time). That's certainly the only explanation I can think of for Dart, at any rate.

Go is basically what you get when you hire a former Plan 9 developer, expose him to Google's internal hermetic build system (where a 100MiB binary is small), then let him build cool stuff to keep him from getting bored.

Disclaimer: I work at Google but do not speak for my employer. I don't work on any of the teams mentioned in your post. The information in this post is already available to the public in various places.

Comment Re:eh, Google no eat own dogfood? (Score 2) 308

Care to share the Distro of choice on those linux based non chromebook machines? Is it a free employee option ? Are there a set number of pre-approved distros? Is there a top-secret Google Gnu-Linux Distro that dispenses chocolates on the half hour?

Only Goobuntu is available. It's Ubuntu Precise Pangolin plus some light policy customization (internal base-install *.debs; some Puppet stuff).

Comment Re:eh, Google no eat own dogfood? (Score 4, Informative) 308

why use so many Apple computers when there's your own awesome Chromebook?

Google employee here (but I don't speak for my employer and I am basing this purely on anecdotal observation, not hard data).

I'm only familiar with my impressions from the engineering side, so I don't know much about sales and marketing, but nearly all of the engineers use Linux desktops (unless they're developing client software, like Chrome). Laptops are a different story. As a Bay Area-wide phenomenon, software engineers sure like their MacBooks, and this place is no exception. A few of us run Linux laptops, but my impression is that MacBooks outnumber Linux laptops plus Chromebooks combined. But the internal hardware requisition site is now offering the Pixel (indeed, recommending it instead of MacBooks), so this should change with time.

There's also the matter of hardware refresh cycles. The Pixel is not even a year old, and it hasn't been available for requisition for its entire lifespan, so a good number of employees haven't yet had the chance to switch even if they want to. (Returned working laptops are refurbished and reused, so turning over the inventory will take longer than you might expect.) Also, the lack of VPN and native SSH impeded the Chromebook's internal usefulness in the early days, but today hardly anything still requires VPN (and VPN works on Chromebooks now anyway), and the Secure Shell app is pretty workable (set it to "Open as Window" so that ^W goes to the terminal). And... well, the early Chromebooks had anemic hardware specs, which is not true of the Pixel.

Comment Re:Movie idea (Score 1) 127

You could make a film about a pile of dead body parts assembled into the form of a man being shocked by lightning and being given the will to live. You could even add some wanton violence and philosophical questions of existence to make the story interesting.

You mean Frank Henenlotter's 1990 masterpiece, Frankenhooker, of which Bill Murray said (and I quote) "if you see one movie this year, it should be Frankenhooker"?

Comment Re:Freedom of thought (Score 3, Insightful) 392

Uhhh...just FYI? Rohm and the SA leadership were pretty much ALL gay and Hitler and pals didn't have a problem with it until Rohm started talking about a "second revolution" because he thought "the little colonel" had betrayed the socialist part of national socialism, just FYI.

Hitler had a pretty firm "babies good, homosexuals bad" policy for the common folk. Rohm was a party insider long before Hitler was elected Chancellor; in general, Hitler was pretty willing to give special treatment to party insiders, even ones less senior than Rohm. Even so, I'm not aware of any other SA leaders who got a pass for the same reason; care to name names?

For that matter, Hitler's family doctor Eduard Bloch was Jewish, and he got special treatment too (the only Jew in Linz under special protection from the Gestapo, notes Wikipedia). Adolf reportedly had quite the soft spot for him after he did everything he could to treat Klara Hitler's horrifically advanced breast cancer, despite the family's financial hardship. Basically, Hitler was a giant hypocrite who tried to ignore the brutality of his own policies by shielding only the people he cared about and could personally see suffering under them.

Comment Re:Mysterious quantum mechanical connection? (Score 4, Interesting) 186

I am not a physicist.

But I keep hearing that there is actually nothing mysterious about entanglement at all... Something along the lines of:

You post 2 envelopes containing cards in opposite directions, one with a printed letter A, the other card with the letter B.

At one destination, the envelope is opened to reveal the letter A. ... then through some mysterious quantum mechanical connection.... you know that the envelope at the remote destination contains the letter B.

And that's about all there is to entanglement....

Can any physicist confirm?

I'm not a physicist, just a well-read layman, but...

It is more mysterious than that, but if you go with the Many Worlds interpretation it's not much more mysterious.

Basically, if you entangle letters A and B and send them in opposite directions, you're really creating two universes corresponding to the two possibilities: universe P (A here, B there) and universe Q (B here, A there). If you open the envelope to reveal A, for instance, then the copy of you in universe P now knows it exists in universe P, and likewise for B and Q. But unlike in classical physics, universe P is not completely separated from universe Q. P and Q still exist as a single mathematical object, P-plus-Q, and you can manipulate that mathematical object in ways that don't make sense from a classical standpoint.

Basically, it all comes down to one small thing with big consequences. The real world is NOT described by classical probability (real numbers in the range [0,1]). Instead, the real world is described by quantum probability (complex amplitudes whose squared magnitudes sum to 1).

As it turns out, "system P-plus-Q has a 50% chance of P and a 50% chance of Q" is really saying "system P-plus-Q lies at a 45deg angle between the P axis and the Q axis". Starting from P-plus-Q, you can rotate 45deg in one direction to get pure P (A always here), or you can rotate 45deg in the opposite direction to get pure Q (B always here), thus deleting the history of whether A or B was "originally" here. (If P and Q were independent universes, this would decrease entropy and thus break the laws of physics.) Even more counterintuitively, you can rotate P-plus-Q by just 15deg to get a 75% chance A is here and a 25% chance B is here (or vice versa, depending on which quadrant the starting angle was in). Circular rotations in 2-dimensional probability space are what make quantum probability different from classical probability, and thus what distinguishes quantum physics from classical physics.
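A toy numerical sketch of the 2-D rotation picture above (illustrative only; real quantum amplitudes are complex, but real-valued angles are enough to show the rotation behavior):

```python
import math

# The state sits at some angle between the P axis and the Q axis;
# the squared projections onto each axis give the outcome probabilities.
def outcome_probs(theta_deg):
    t = math.radians(theta_deg)
    return math.cos(t) ** 2, math.sin(t) ** 2

outcome_probs(45)        # ~(0.50, 0.50): P-plus-Q, maximally uncertain
outcome_probs(45 - 15)   # ~(0.75, 0.25): after a 15deg rotation
outcome_probs(0)         # (1.0, 0.0): pure P, "A always here"
```

Note that the 15deg rotation turns 50/50 into 75/25 without ever "opening the envelope" -- the operation that classical probability forbids.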

Classically, A is either definitely here or definitely there, and until we open the envelope and look we are merely ignorant of which is the case. Classical physics is time-symmetric, and it therefore forbids randomness from being created or destroyed; classical probability actually measures ignorance of starting conditions. In a classical world obeying classical rules, you can't start from "50% A-here, 50% B-here" and transform it into "75% A-here, 25% B-here" without cheating. The required operation would be "flip a coin; if B is here and the coin lands heads, swap envelopes", and you can't carry that out without opening the envelope to check if B is here or not. Quantum physics is also time-symmetric and also forbids the creation and destruction of randomness, but quantum probability (also called "amplitude") is not a mere measure of ignorance. In the Many Worlds way of thinking, physics makes many copies of each possible universe, and the quantum amplitude determines how many copies of each universe to make. At 30deg off the P axis, cos(30deg)^2 = 75% of the copies are copies of universe P, and you experience this as a 75% probability of finding yourself in a universe with "A here, B there".

(Or something like that. It'll probably make more sense once we eliminate time from the equations. At the moment not even Many Worlds can help us wrap our heads around the fact that quantum entanglement works backward the same as it does forward. The equations as they stand today imply that many past-universes containing past-yous have precisely converged to become the present-universe containing present-you.)

One last complication. If the information of A's location spreads to more particles than A and B, then P and Q become more and more different, and as a consequence the quantum probability rules become harder and harder to distinguish from the classical ones. If you open the envelope and learn "A is here", for instance, then P now contains billions of particles that are different from Q (at the very least, the particles in your brain that make up your memory) and it now becomes impossible-ish to perform rotations on P-plus-Q, because you would need to find each particle that changed and rotate it individually. (Not truly impossible, but staggeringly impractical in the same sense that freezing a glass of room-temperature water by gripping each molecule individually to make it sit still is staggeringly impractical. And both are impractical for the same reason: entropy.)

When so many particles are involved that we can't merge the universes back together, we call the situation "decoherence", but it's really just "entanglement of too many things to keep track of". Entanglement itself isn't really that special; what's special is limiting the entanglement to a small group of particles that we can keep track of and manipulate as a group.

Comment Re:Democracy? (Score 1) 371

Just under what legal theory before the FDA was poisoning people a legitimate business ?


Back in the U.S. robber-baron era (1870-1905) it used to be the case that it was your own fault if you put it in your mouth. It didn't matter if the seller marketed it as edible despite knowing or suspecting that the product was poisonous (such as radium water or formaldehyde-preserved milk). As the buyer you were supposed to know better, as summarized by the legal doctrine caveat emptor ("let the buyer beware"). It was only later that caveat emptor was _partially_ overturned by the invention of the "implied warranty", as federally formalized in the Uniform Commercial Code of 1952 (though the concept was kicking around decades earlier than that on a state-by-state basis). In the absence of a warranty (explicit or otherwise), the seller had made no promise to the buyer about the product sold, and with no promise to break there was therefore no fraud on the seller's part. No fraud, therefore no wrong and no restitution: no wrongful death damages, no medical bill expenses, not even a "satisfaction or your money back" refund guarantee.

To this day, there's still quite a bit of caveat emptor in the law. For example, cigarette smoke is poisonous at the intended dosage, full stop. Habitual smoking of cigarettes is known to inactivate hemoglobin by way of carbon monoxide, to reduce lung capacity by accumulation of scar tissue, to damage the cardiovascular system by hardening the arterial walls, and to dramatically increase the risk of lung and other cancers. But despite their documented toxicity, to this day tobacco companies are not held liable for selling cigarettes as such. They have been sued many times, but generally over their advertising, and many of the advertising suits have been over ads that played up false benefits or downplayed real drawbacks -- i.e., the companies made a promise (implied warranty of fitness) that was then broken (fraud). But so long as the buyer is duly warned (no false advertising, the Surgeon General's Warning is present), the situation reverts to caveat emptor and it's again the buyer's own fault if they put poison in their mouth.

Comment Re:Security? (Score 5, Informative) 123

How they maintain security with C and C++ applets?

-- hendrik

NaCl (in its standard, non-Portable flavor) is essentially a bytecode that happens to be directly executable as machine code (either x86-64 or ARM). The bytecode can be statically verified to mathematically prove that the instructions obey certain rules (e.g. exactly one interpretation for any bytecode, execution only leaves the verified bytecode by calling trusted functions, can only read/write memory in the sandbox, cannot write to bytecode, etc.). As I understand it, PNaCl is similar to classic x86/ARM NaCl but trades fake bytecode for real bytecode (LLVM's intermediate representation, last I heard) and statically compiles it to native machine code after the bytecode verification step. Basically, in this scheme the verified C code can run at near-native speed, but it can only communicate with the world outside the sandbox by calling trusted functions that the enclosing app chooses to expose.
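As a rough illustration of the static-verification idea (a toy sketch, nothing like the real NaCl validator; the instruction names, the sandbox address range, and the "ppapi_call" trusted entry point are all made up for illustration):

```python
SANDBOX = range(0x1000, 0x2000)   # hypothetical sandbox memory range
TRUSTED = {"ppapi_call"}          # hypothetical trusted entry points

def verify(program):
    """Statically reject any instruction that touches memory outside
    the sandbox or calls anything but a trusted function -- all before
    a single instruction executes."""
    for op, arg in program:
        if op in ("load", "store") and arg not in SANDBOX:
            return False
        if op == "call" and arg not in TRUSTED:
            return False
    return True

verify([("load", 0x1004), ("call", "ppapi_call")])   # True: stays inside
verify([("store", 0x9999)])                          # False: out of bounds
```

The point is that the check happens once, up front, on the whole program; nothing needs to be interpreted or monitored at runtime.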

Theoretically, Java ought to be just as strongly sandboxed as NaCl: Java code in a JVM sandbox can only call trusted functions that the JVM chooses to expose, too. But in practice the Java standard library exposes a ridiculously broad attack surface, giving sandboxed apps plenty of chances to exploit bugs and escape the sandbox. (For instance, java.lang.String is a final class today because folks discovered that you could subclass it to make it mutable, pass a sandbox-approved value to e.g. a file I/O function, then modify the value to a sandbox-forbidden value after the security check but before the OS system call.) Basically, Java's attack surface is broad and leaky because Java was designed for running embedded devices and servers, not for sandboxed applets downloaded from hostile sites on the Internet. Applets were a distant afterthought compared to Java's "let's write an OS for set-top cable boxes" origin.

In contrast with Java, Chrome's implementation of [P]NaCl only exposes the Pepper API, and the Pepper API was designed from the ground up to be called by sandboxed code fetched from a malicious website. Looking at the Pepper C API site, the attack surface seems... bigger... than I would have expected. But most of the functionality I see there is also exposed to JavaScript, where the code is every bit as hostile. Almost any "attack surface, WTF" argument would also argue against JavaScript and all modern web design. And if they're smart, one API is hopefully built on top of the other (plus a thunk layer made of machine-generated code), so that there's only one pool of security bugs to fix.

Comment Easy solution: measure budgets in Iraq War Days (Score 4, Insightful) 205

A repost of a Google+ post I wrote a year and some change ago:


From today forward, all federal government expenditures will be priced in "Iraq War Days" (IWD) or "Iraq War Years" (IWY). For quick reference:

  - MSL mission w/ Curiosity rover: 3.5 IWD
  - Cost of giving $10 to all 312M US citizens: 4.33 IWD
  - 2012 "General Science, Space and Technology" budget: 43.04 IWD
  - Cost of giving $100 to all 312M US citizens: 43.3 IWD
  - 2012 Welfare budget: 210.3 IWD (0.6 IWY)
    ~ Computed as 26% of the 2012 "Income Security" budget
    ~ Includes TANF (22%) welfare, SNAP (70%) and WIC (8%) food stamps
    ~ All ratios from 3rd-party analysis of 2010 data; see "How much do we REALLY spend on Welfare?"
  - 2012 "Medicare" budget: 672.9 IWD (1.8 IWY)
  - Cost of giving $2250 to all 312M US citizens: 975 IWD (2.7 IWY)
  - 2012 "National Defense" budget: 994.9 IWD (2.7 IWY)
  - 2012 "Social Security" budget: 1081 IWD (3.0 IWY)
  - 2012 Total budget: 4986 IWD (13 IWY)

Source: "United States Federal budget, 2012" and "Mars Science Laboratory" pages on Wikipedia for budgets, for US population, National Priorities Project via "Cost of War" Wikipedia page for IWD exchange rate.
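For anyone who wants to reproduce the arithmetic, here's a sketch of the conversion. The ~$720M/day "exchange rate" is my own reverse-calculation from the figures above ($10 x 312M citizens = $3.12B = 4.33 IWD), not a quoted figure, so the National Priorities Project numbers may differ slightly:

```python
# Back-of-the-envelope IWD converter; daily rate is an estimate.
IRAQ_WAR_DOLLARS_PER_DAY = 720e6

def to_iwd(dollars):
    """Convert a dollar amount to Iraq War Days."""
    return dollars / IRAQ_WAR_DOLLARS_PER_DAY

round(to_iwd(10 * 312e6), 2)   # roughly 4.33 IWD, matching the list
round(to_iwd(2.5e9), 1)        # MSL's ~$2.5B comes out near 3.5 IWD
```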


Something I didn't note in my original post that's probably worth mentioning in passing: Social Security is huge, "bigger than the National Defense budget" huge, but it's basically self-funding because it's a retirement investment paid for by payroll taxes (modulo population bumps, e.g. the post-WW2 "baby boom"). Person A pays in, person A cashes out, theoretical net cost to taxpayers $0.

Comment Re:Government waste (Score 1) 257

So then the question becomes, could an actual fission reactor be designed small and powerful enough to power a car (or horse) -like vehicle?

Short version: no. There are no nuclear fuels with the right balance of properties to achieve that. Long version: look up nuclear fission, fissile, and critical mass on Wikipedia.

Comment Re:Dispensing our reserves? (Score 1) 255

[...] Meanwhile, engineers will continue to look at alternate cooling solutions, such as liquid hydrogen. [...]

This doesn't work. There's no viable substitute for helium, not even hydrogen. The reason helium is so useful is that it boils at 4 K (by far the coldest boiling point of any substance), remains liquid all the way down to absolute zero at standard pressure, and becomes superfluid at 2 K (the only bulk superfluid achievable on Earth).

The boiling point is important because that's how cryogenic cooling works: when you use a circulating liquid coolant, the temperature of the (coolant plus apparatus) system cannot exceed the boiling point of the coolant until the coolant has entirely boiled away, so you get a very consistent and predictable temperature (right up until the coolant is gone). 4 K is below the critical temperature of the most common materials for superconducting electromagnets: niobium-titanium (10 K, relatively cheap) and niobium-tin (18 K, among the highest T_c values for practical conventional superconductors). Hydrogen is not a substitute, because it boils at 20 K; that's noticeably too warm for either material, and even if it weren't, superconductors tolerate stronger magnetic fields the colder you chill them, so they'd be less useful in an MRI machine. And you can't chill liquid hydrogen much below its boiling point before you hit its melting point, 14 K, at which point it stops circulating and becomes much less useful as a coolant.
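The selection rule can be sketched in a few lines (temperatures are the rough figures quoted above, in kelvin; illustrative only):

```python
BOILING_POINT_K = {"He": 4.2, "H2": 20.3}
CRITICAL_TEMP_K = {"NbTi": 10.0, "Nb3Sn": 18.0}

def usable_coolants(tc_kelvin):
    """A circulating liquid coolant pins the system at its boiling
    point, so that boiling point must lie below the magnet material's
    critical temperature."""
    return [c for c, bp in BOILING_POINT_K.items() if bp < tc_kelvin]

usable_coolants(CRITICAL_TEMP_K["NbTi"])    # only helium qualifies
usable_coolants(CRITICAL_TEMP_K["Nb3Sn"])   # hydrogen's 20 K is still too warm
```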

The superfluidity is not quite as useful day to day, but it's used to study the behavior of other quantum mechanical systems, such as neutron star interiors, that we can't recreate in a lab. It also forms a rigorous analogy with superconductivity, especially in the case of fermionic He-3, so it gives us a chance to play with a bulk fluid that propagates fluid currents in the same way that superconductors propagate electrical currents. Nothing else can replace it for this purpose.

(Side note: helium is not a truly expendable resource. Of the helium present on Earth, not a single gram is left over from the formation of the solar system; Earth doesn't have the mass to retain helium in its atmosphere. All our helium comes from the alpha particle decay of heavier radioactive elements, like radon. When the alpha particles relax and become neutral helium gas, the gas is trapped by the same gas-impermeable rock formations that trap natural gas. However, the natural recharge rate from radioactive decay is much slower than the rate that we're extracting it and venting it, so if we don't curtail our waste we're going to run out regardless.)

Comment Re:I do not understand why this is a story (Score 1) 740

Trades were executed in Chicago before the change was announced in Washington D.C. in a relativistic physics sense.

Actually, in a relativistic-physics sense, the trades in Chicago were outside the light cone of the Washington event (neither in the future cone nor in the past cone). That being said, since Washington and Chicago do not move at relativistic speed with respect to each other, the trades still happened at a later time than the announcement, even if there's no possible causality.

But the DC announcement was not in the past light cone for the Chicago trade. Therefore the information had not yet reached the Chicago public. That is the criterion being judged, not simultaneity. Insider trading, case closed.

(And even if we take the classical limit of c approaches infinity, are we really to believe that a trade conducted within single-digit milliseconds of the announcement was based on consideration of the contents of the announcement? There exist fully automated flash trading systems hooked up to news wire services, but AFAIK even those don't react quickly enough to explain the speed of this trade. Shakier conclusion, but still insider trading.)
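For a sanity check on the timescale involved (the ~960 km D.C.-to-Chicago great-circle distance is my own rough estimate):

```python
# Light-travel time from Washington, D.C. to Chicago.
C = 299_792_458            # speed of light, m/s
DISTANCE_M = 960e3         # approximate D.C. -> Chicago distance, meters

travel_ms = DISTANCE_M / C * 1000
# Roughly 3.2 ms. Any trade executed in Chicago sooner than this after
# the announcement left D.C. was outside the announcement's light cone,
# so the announcement itself could not have caused it.
```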

Comment What the hell (Score 1) 55

Package your ruleset.xml into DeploymentRuleSet.jar

Packaging your ruleset allows the desktop administrator to apply cryptographic signatures [emphasis mine] and prevent users from overriding your policy. This requires usage of a trusted signing certificate. The easiest route to get a signature is to buy one from a certificate authority like Symantec/Verisign, Comodo, GoDaddy, or any other; [...]. The default certificate authority list contains about 80 authorities from which you may purchase a signing certificate [emphasis mine].

-- Introducing Deployment Rule Sets, Java Platform Group blog

Why in the name of the everliving fuck would anyone think this step was a good idea? The file is already located in a directory that can only be written by root (or Administrator, as appropriate for the OS). Why require a signature? This adds zero security: if you have root on the machine, you can add a self-signed CA to the trusted CA list anyway. Do they have a kickback arrangement with Verisign or something?

Comment Re:maintenance (Score 1) 195

Since you obviously know that a *file* can be fragmented, obviously you already know that a file doesn't have to be contiguously written.

Thus, you don't need to defragment it. The directory structure knows that the 'file' is in blocks 1-5, 8, 14.

As other people pointed out, disk seeks are most assuredly something to avoid on spinning media. But even when seeks are free, as they are on SSD, fragmentation still sucks and you should avoid it like you owe it money.

For one, some filesystems use run-length encoding for the list of blocks in a file. Basically, instead of recording "1, 2, 3, 4, 5, 8, 14", they notice the pattern and record "1-5, 8, 14" like you just did in your post. (ext2 and ext3 don't do this, but ext4 and other modern filesystems use extents.) RLE lets you inline more metadata directly in the inode without resorting to indirect blocks, which basically means you get your data with fewer round trips to the disk. (It might save you from needing to read a meta-meta-block to find the meta-blocks that tell you where the blocks are. Instead you can fit all the blocks in one meta-block and skip a round trip.)

For two, even filesystems that don't do RLE still suffer under fragmentation on SSD. Unfragmented files make it easy for the kernel I/O scheduler to coalesce sequential block reads into big, happy multi-block SATA reads when you're streaming through the file. As before, fragmentation = more round trips to the disk, but it also means fragmentation = spamming the SATA controller with more commands and spamming the CPU with more interrupt handlers for the command completions. (In other words, copying a big fragmented file slows down everything else on the computer, more so than copying a big unfragmented file.)
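The run-length/extent trick is easy to sketch (an illustrative toy, not any real filesystem's on-disk format):

```python
def to_extents(blocks):
    """Run-length-encode a sorted block list into (start, end) extents:
    [1, 2, 3, 4, 5, 8, 14] becomes [(1, 5), (8, 8), (14, 14)] -- the
    same trick as writing "1-5, 8, 14". Real extent-based filesystems
    store (logical, physical, length) records instead."""
    extents = []
    for b in blocks:
        if extents and b == extents[-1][1] + 1:
            extents[-1] = (extents[-1][0], b)   # extend the current run
        else:
            extents.append((b, b))              # start a new run
    return extents

to_extents([1, 2, 3, 4, 5, 8, 14])   # [(1, 5), (8, 8), (14, 14)]
```

A heavily fragmented file degenerates into one extent per block, which is exactly when the inline metadata stops fitting and the extra round trips begin.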

Disclaimer: I am not a filesystem designer, I just play one on Slashdot.
