
Comment Re:more info, pls (Score 4, Informative) 40

The goal of the feature is basically similar to "go to this page, then Ctrl+F for this text". One of the Google docs linked in the article suggests that for Chrome, that might even literally be the implementation. The privacy concern is that the linked website could detect the act of this automated scrolling, then infer roughly what text snippet was linked.

(The Google doc mentions timing-based detection as a possibility, as well as scroll-event-based detection.)
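As a rough illustration of the scroll-event-based variant, here's a minimal sketch of what a snooping page could run. It assumes the browser fires an ordinary scroll event when it jumps to the matched text; the /log-fragment-hit endpoint is made up:

```typescript
// Hypothetical sketch: the linked page tries to notice the automated
// scroll that a text-fragment match would trigger on page load.
let userHasInteracted = false;

// Any real input before the scroll means the user probably scrolled.
for (const evt of ["mousedown", "keydown", "wheel", "touchstart"]) {
  window.addEventListener(evt, () => { userHasInteracted = true; }, { once: true });
}

window.addEventListener("scroll", () => {
  // A scroll within ~2s of load, with no input, suggests a fragment match;
  // the scroll position hints at WHICH snippet was linked.
  if (!userHasInteracted && performance.now() < 2000) {
    navigator.sendBeacon("/log-fragment-hit", String(window.scrollY));
  }
}, { once: true });
```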

This sort of search is only worth doing if the snippet being searched for is relatively unique on the page, and thus has high information content. Because the user clicked on such a link rather than finding the page through a web search, or even following a link to the site's top level, the text is presumably very relevant to the link-follower. For example, you won't create a link with the text "from the" because it will misfire more often than not, but you might for something like "from the office of the Surgeon General".

This also assumes that the attack is conducted by the linked site. That means this is a website you want to go to, but which you don't necessarily trust. That's actually a pretty safe assumption for any site in an age when everything is used to fingerprint the user for advertising purposes, and that information is then sold off. But the example of someone you want to control the flow of information towards -- an insurance company, a customer, an employer, a competitor, a lawyer -- is also a solid one.

An alternative attack is to use the linked page to infer something about the reader. For example, if the page contains a string like "User Type: Admin", then the person who followed the link is a system admin who can be targeted for attack.

An alternative attack is a cross-site search attack. The success/failure in finding the linked text leaks information. This can give you a yes/no answer to questions like "did they receive an email from hiring@snap.com?" (The trick is linking to the search results and probing for text like "No messages matched your search", which indicates they did NOT receive such a message.)
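As a sketch of what such a probe could look like (webmail.example.com and the query format are invented; the #:~:text= syntax is the proposed text-fragment directive):

```
https://webmail.example.com/search?q=from:hiring@snap.com#:~:text=No%20messages%20matched%20your%20search
```

If the victim's webmail renders "No messages matched your search", the fragment matches and the detectable scroll/highlight happens; if a matching message exists, it doesn't.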

So far, they've figured out a few cases that demonstrate this can leak 1 bit (yes or no) of information, which is a privacy concern but not a security concern. More information could be leaked if following such a link didn't require a user action; so far it does, but that sounds like the sort of requirement that's easy to forget about over time.

Comment Re: English is a phonetic language. (Score 2) 333

English is horribly un-phonic, as numerous reformers over the years have observed. Consider the word COLOR (read "culler"), which uses two Os, neither of which makes an OH or OOH sound. Consider anything with a GH in it, which could be read as G, as FF, or as silent. Consider the terminating B in words like JAMB, but not in its homophone JAM. English is schizophrenic.

And part of the reason is that the letters in the simplest (least-foreign) words are decidedly not phonic. H is used in WHEN and WHERE (but not WIN and WEAR) as mostly-silent, but often indicating "breath". The only hard rule for vowels, their diphthongs, and digraphs is that an A can't make an OOH sound: anything else goes in some word somewhere. The whole purpose of a terminating E is to convey "that L or vowel doesn't sound like you might first guess, try again." And digraphs like PH, TH, and CH are basically appeals to the ghost of a letter like Phi, Thorn, or Chi that just didn't make it into the modern Latin alphabet. The letters are markers, not sounds in and of themselves.

The words in English that ARE phonic are mostly foreign. And then they inherit their readings from twenty-odd different languages. (So, e.g., the Os in COLOUR are both pronounced the same as each other and as every other French O, but not like most German or English Os.)

Phonics is a useful stepping stone for young children. I have no familiarity with the whole-word method. But English is largely aphonic, and reading unfamiliar words is more about recognizing whether a word is French-looking or Greek-looking before you can even try to phoneticize it.

Whatever else is said, let's never pretend that phonics is anything but a stepping stone.

Comment Sounds like a great extension (Score 2) 50

This is one of numerous features -- like phishing protection -- which would be great as an official extension for people who actually want it, and easily uninstalled by people who don't.

(A reminder that the classic extension model wasn't just about piling on new crap: it was also about opting out of things that are useless to you for a leaner browser.)

Comment Re:It would be a wonderful world (Score 4, Informative) 161

There's no reason to argue... it's actually pretty easy to explain how the (modern) English are wrong:

Separated by a Common Language: Math(s)

The British often linguistically treat "mathematics" as though it were the plural of some noun "mathematic". But the -s is the nominalizing -s, not the plural -s.

How do we know that these are really different affixes, and not just the same affix doing a range of jobs? Partly we know from history. The plural -s comes from an Old English case suffix (-es or -as). The verb one has derived from the suffix -eth (or -ath) in earlier Englishes. The adverbial one is related to the possessive 's. And our friend the nominali{s/z}ing (=noun-making) suffix generally affixes to roots from classical Greek.

It's easy to find other uses of the nominalizing -s -- for example, almost any high-level subject of study, such as mechanics, physics, economics, linguistics -- but not many are long and common enough to be frequently abbreviated by ordinary people. For example, few people talk about "economics" often enough to shorten it to "econ" or "econs" (though when they do, it's usually "econ").

This is also one of the cases that led me to the rule of thumb "(modern) English people can't speak English". Americans seem to hang on to the "old way" of speaking longer than the British do.

Comment Re:More accurate (Score 1) 140

Wikipedia is also more accurate than many people give it credit for.

YMMV. My friend in organic chemistry said it was better than her textbook: accurate, and mostly free of junk contributions, because most people don't understand the subject well enough to argue about it. The math pages seem to be aimed at college graduates who are all at least conversant in abstract algebra. The electrical engineering articles are aimed at a high-school level of understanding -- and God help you if a topic has the slightest relationship to audio, because that brings out all the horribly ill-informed A/V nuts.

I'd speculate that the more readily a high school sophomore can understand the basics, the lower the education level of those involved will be, and the more people who know what they're talking about will be squeezed out.

Comment Re:May emit showers of sparks (Score 4, Interesting) 233

Supposedly, their power transmission system is based on some sort of high-energy plasma being piped throughout the ship, and every piece of technology they have can be made more or less effective by simply funneling more power into it, even if you have to steal that power from things like lighting or life support. (For example, the computer calculates more quickly, shields become firmer, sensors extend their range and resolution, etc.)

So -- bearing in mind that "pipe more high-energy plasma from one part of the ship to a completely unrelated part" is the default behavior in any crisis -- it's amazing things don't blow up constantly.

Comment Because they're easy to miss (Score 5, Interesting) 233

In the documentary "Crumb", on underground comix icon Robert Crumb, R.C. demonstrated how he took a lot of photos and drew scenes from them rather than from memory, because it's easy to mentally tune out a lot of very big, annoying things about modern life: billboards, power lines, transformers. He didn't want to miss them when he drew, so he used photos to force himself to acknowledge them.

Sure enough, once he pointed it out, I realized that was one of the things that made his work both very solid/real and also very gritty. When there are no panels with large swaths of empty blue sky, it really forces you to acknowledge everything we've put in the way.

In anime, it could similarly be an attempt at heightened awareness/realism, or a form of social commentary, or a subtle nod that the characters are in the Ugly Real World and not the Sparkling Virtual Reality or Romantic Past.

Comment Re:The way not to do it... (Score 5, Interesting) 127

The old extension model gave you access to everything. I hesitate to call them "hooks", because when people hear that they usually think of a well-enumerated API option or port. It wasn't that. Everything was made of modules with a defined module API. Any extension could leverage any module by talking to it. That meant you could do literally anything with their modules.

Do you want to cut out the (then) Gecko rendering engine and instead use IE's (then) Trident rendering engine? An extension did that. Do you want to filter everything that goes to the (then) SpiderMonkey JavaScript engine before the code is run? An extension let you do that. Do you want to leverage the rendering engine to view a webpage in a way that it was never intended (say, as hierarchical text)? An extension did that. Do you want to inject your own code into every page you're shown? An extension did that. Do you want to add support for a new or obsolete protocol (like gopher), or new image formats? You could do that. Do you want to implement completely new UI features, such as dragging-to-rearrange tabs? That was an extension, later added to the main program.
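For flavor, here's roughly what the plumbing looked like: a classic extension declared an overlay that got merged straight into the browser's own UI definition. This is a from-memory sketch, so treat the details as illustrative rather than exact:

```xml
<!-- chrome.manifest: ask Firefox to merge our XUL into the main window -->
<!--   overlay chrome://browser/content/browser.xul chrome://myext/content/overlay.xul -->

<!-- overlay.xul: the elements below get grafted onto the browser's own
     chrome, running with the same privileges as the browser itself -->
<overlay id="myext-overlay"
         xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <statusbar id="status-bar">
    <statusbarpanel id="myext-panel" label="Hello from the old model"/>
  </statusbar>
</overlay>
```

That same merge-anything mechanism is what let extensions reach any module in the browser -- and also why one stale overlay could break the whole UI.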

The trouble is that most extensions did really banal shit like changing the UI by modifying the chrome. And when Firefox revs and redefines chrome elements (and the mediocre extensions don't update at all), all of a sudden the browser gets laggy and leaky and doesn't work like customers expect, and Mozilla looks incompetent. They had the same problem with their CSS-based themes, which is why they started moving back to customization via mostly-meaningless skins (in their Jetpack initiative).

Now, by moving to Google Chrome's API model, they've finished cutting out most of the wild, woolly, user-generated code that made the browser less stable... but that was also the only thing that made Firefox unique or useful as a modern browser. It did its job and ended IE dominance. With the passage of time it has forgotten the goal of making a browser that was lightweight. Modern web features make it impossible for it to be nearly as cross-platform as it was. They rarely supported user choices over API standards (they always had to be badgered into things like 'never deny the user access to the menu'), so it's hard for me to believe them when they claim any kind of moral superiority. They only care about user choice so long as it doesn't make work for them. So other than the fact that we have a _different flavor_ of Chrome now, what's there to be thrilled about?

Comment Not sure about Apps, but... (Score 2) 70

... as an engineer for an IC company, I had to translate bug reports into firmware release notes suitable for corporate customers. (These days we automate it through a combination of JIRA and repo comments.)
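For the curious, that automation boils down to something like the sketch below. The JIRA REST search endpoint is real, but the instance URL, project, version, and credentials are invented:

```typescript
// Hypothetical sketch: pull the bugs fixed in a release out of JIRA and
// print them as draft release-note bullets for an editor to clean up.
const JIRA = "https://jira.example.com"; // assumed instance URL
const jql = encodeURIComponent(
  'project = FW AND fixVersion = "2.4.1" AND issuetype = Bug AND status = Done'
);

const res = await fetch(`${JIRA}/rest/api/2/search?jql=${jql}&fields=summary`, {
  headers: { Authorization: `Basic ${btoa("release-bot:app-password")}` },
});
const { issues } = await res.json();
for (const issue of issues) {
  console.log(` * ${issue.fields.summary} (${issue.key})`);
}
```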

For those of you who actually write release notes, what guidelines do you use?
There was no formal standard or best practices to follow in our group: the de facto standard was a balance between whatever our engineer (usually the same guy over many releases) thought was enough info, and what our customers (also engineers) badgered us about not being enough info.

Should one make any attempts at levity, or keep it strictly to business?
If somebody is reading your document, it's probably because something went wrong. They are short on time and reading a document they didn't want to deal with, trying to solve it quickly. They are already angry. All humor in documentation is thus inherently tone-deaf and insulting. It is never worth doing, and we all want to punch you.

How much information is appropriate in release notes?
You must first understand our customers: they build systems of many ICs (and don't devote a lot of time to any one vendor), their companies are segmented by function (so the driver guy and the platform guy never talk to each other, everything is write once/change never, and iterative design is a dirty word), and they are very risk averse (they think it's our job to prove to them that nothing is ever risky). Here's what we end up giving them:

* Bug fixes: Sometimes too many little ones to be worth enumerating, collected under "bugs fixed". But usually there are some big ones, or at least specific ones that your customer noticed and called you on. Give a description of what the issue did and something that suggests you traced it to root cause and didn't just move data around until a test passed. "Resolved an issue whereby the widget exhausted streamer overhead and data was lost." An ugly fact: giving too much detail about what went wrong and how you fixed it actually makes customers ask more questions, which means more busywork. (The Japanese notion of process means asking a lot of nonsensical questions, to be answered in the form of a spreadsheet, repeating ad nauseam.)

* Errata: customers won't always move to the latest/greatest firmware: they'll stick with what they last validated internally. Some bugs turn out to have been around for a long time. You need to note these newly-discovered bugs in your errata for the previous releases when you rev the document. This is covering your ass; you've warned them. Also, it lets you sum up many bug fixes as "fixed all other previous errata".

* New unexpected features: If a customer does actually upgrade mid-project, it'll be to fix an intolerable bug, and then they'll get new features they weren't expecting along with the fix. Assume that the release notes are the only new document that they will ever read after switching... and thus the only document that describes new features. Give the interface, an example, and any required system configuration changes needed to live with it. Think of it as a one-page white paper on the new feature.

* New expected features: "(finally) added support for Industry-Expected Feature." Done. (Paradoxically, the bigger the new feature, the smaller the release note comment can be.)

* Changed/removed features and changed interfaces. Most customers will hate that you change an interface at all, but it's unavoidable.
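Put together, a typical entry ends up looking something like this (the version number, feature, and register names are invented for illustration):

```
Firmware 2.4.1 Release Notes

Fixed:
  * Resolved an issue whereby the widget exhausted streamer overhead
    and data was lost under sustained load.
  * Fixed all other previously published errata.

New features:
  * Added support for Industry-Expected Feature X. The interface, an
    example, and required system configuration changes are in Section 3.

Changed interfaces:
  * The INIT_MODE default changed from 0 to 1; update platform
    configurations accordingly.

Errata (also applies to 2.3.x and earlier):
  * Under rare conditions the widget may drop the final packet of a
    burst. Workaround: pad the burst by one packet.
```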

On a related note: early in life, I found that I'm the only one I know who ever reads help files, release notes, and EULAs. I taught myself Matlab as a grad student in about a week by using their immaculate help system. I knew about long-standing but poorly-advertised features and bugs my coworkers didn't, because I actually read the damn release notes. I was not surprised at all that my mother-in-law's Samsung TV is spying on her, or that Windows 8 phoned "home" to various universities, because they told you they were going to do so in the EULAs. If you don't read the documentation, you're going to be worse off. If you're an engineer who thinks documentation is a "someday" task or a "check box item" for a release, then you're horrible, and I hate you.

Comment Re:Their app reads your contacts... (Score 1) 635

Every single "Like" widget on every webpage phones home to FaceBook. This basically lets them track everyone, everywhere.

If you want to keep two identities separate, you need to clear cookies, probably clear the various super-cookie mechanisms, and/or turn off javascript. Or at least have NoScript set up with ABE rules to keep FaceBook javascript only on FaceBook. Or, more easily, use a separate computer in addition to all the basic identity segregation that Leila used.
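An ABE ruleset for that would look something like this (from memory, so check NoScript's ABE documentation for the exact syntax and domain list):

```
# Allow requests to Facebook's domains only from Facebook itself;
# everything else (e.g. Like buttons embedded on other sites) is denied.
Site .facebook.com .facebook.net .fbcdn.net
Accept from .facebook.com .facebook.net .fbcdn.net
Deny
```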

And bear in mind that this will only get worse as time goes on. As web sites move from a page model to an app model, the Web API standards will give sites more and more vectors to track users, undermine privacy protections, and enforce compliance with excessive requirements.

Comment No Credit Cards, no online gambling (Score 5, Interesting) 103

Even if they try to do a legal run-around based on tribal sovereignty, the simple fact remains that it's against Federal law for credit card companies to process payments for online gambling. This is what originally killed the American online gambling industry. (And while I think that basic goal was short-sighted, it is what it is.)

Credit card companies care a lot more about not pissing off the Feds than they do about doing business with what even they would admit is a shady, untested casino scheme. The money is good, I'm sure, but the legal theory would have to be rock-solid to convince them that they're not going to just burn through it all in legal fees and penalties.

It would actually be easier to go to President Trump -- literally the most sympathetic possible person for this cause -- and bitch about how all those casino dollars are going off-shore to GoldenPalace.com, and get him to put a pet bill through a Republican-controlled Congress.

Comment Re:Dumb (Score 4, Informative) 219

Of course, you're not competing with the real world. You're competing with the past. For the most part, this means film, but it can also just mean "at whatever resolution we eliminate all the digital artifacts we accidentally put in". This matters both for Hollywood movies and for most TV shows.

Say you're watching a DVD source on a 4K screen. You interpolate to fill in the missing data, but that's more missing data than available data, and the contrast is terrible. When your screen resolution is better than your source like this, you have to rely on little (but still visible) tricks like digital grain to make it look less unnatural (as they did relatively successfully with, say, the old A&E/BBC version of Pride & Prejudice, and less successfully with the movie 300). To avoid this entirely, you have to re-sample the original source at a higher rate, which means going back to a higher-resolution master. For older material that means film. For newer material, it may mean you're just SOL.

A case study: All the episodes of the original Star Trek were shot (including special effects), edited, and mastered on film. That master was broadcast using analog technology, or digitized to some resolution for DVDs, Blu-Ray, etc. When screen resolution goes up, you can't just upscale the DVD or Blu-Ray and get good results indefinitely: you have to go back to the master and re-capture a higher-resolution digital version from that. The resolution of 35mm film is roughly equivalent to 20 megapixels.
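For ballpark numbers (standard frame dimensions; the film figure is the commonly cited scan resolution):

```
DVD (NTSC):   720 x  480  =  ~0.35 megapixels
HD:          1920 x 1080  =  ~2.1  megapixels
4K UHD:      3840 x 2160  =  ~8.3  megapixels
8K UHD:      7680 x 4320  = ~33    megapixels
35mm film:   roughly 20 megapixels per frame
```

So a film master still out-resolves a 4K capture, and only around 8K does the digital copy stop being the bottleneck.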

Most of The Next Generation was shot on film, but it was edited and composited on video, and a few effects were produced digitally in 3D. For the HD remaster they went back to the film elements, and for those 3D models they had to do some digital archaeology and re-creation to replace (not scale up) those effects without artifacts. And then you get to Star Trek: Deep Space Nine, where a lot was modeled in 3D, and it was all edited and mastered digitally assuming the TV resolution of the time. There's no film master of higher resolution to go to, so DS9 (even the human actors) will just look worse and worse as screen resolution goes up, forever.

Theoretically, 8K is approaching the point where you can well and truly digitally re-master most older media -- getting as good as you ever got with film -- and thus the point where the technology tops out... at least until Hollywood starts digitally filming in 16K.
