
Comment Sounds like 23andMe gave the FDA the finger (Score 5, Informative) 371

If even half of the FDA's letter is correct (and I see no reason to doubt it), they've been bending over backwards to work with 23andMe for years. The company made a deliberate decision to both ignore the regulators and (more damningly) fail to study the effectiveness of their own product, and now they're paying for it. Here's the relevant section of the letter:

Your company submitted 510(k)s for PGS on July 2, 2012 and September 4, 2012, for several of these indications for use. However, to date, your company has failed to address the issues described during previous interactions with the Agency or provide the additional information identified in our September 13, 2012 letter for (b)(4) and in our November 20, 2012 letter for (b)(4), as required under 21 CFR 807.87(l). Consequently, the 510(k)s are considered withdrawn, see 21 C.F.R. 807.87(l), as we explained in our letters to you on March 12, 2013 and May 21, 2013. To date, 23andMe has failed to provide adequate information to support a determination that the PGS is substantially equivalent to a legally marketed predicate for any of the uses for which you are marketing it; no other submission for the PGS device that you are marketing has been provided under section 510(k) of the Act, 21 U.S.C. 360(k).

The Office of In Vitro Diagnostics and Radiological Health (OIR) has a long history of working with companies to help them come into compliance with the FD&C Act. Since July of 2009, we have been diligently working to help you comply with regulatory requirements regarding safety and effectiveness and obtain marketing authorization for your PGS device. FDA has spent significant time evaluating the intended uses of the PGS to determine whether certain uses might be appropriately classified into class II, thus requiring only 510(k) clearance or de novo classification and not PMA approval, and we have proposed modifications to the device’s labeling that could mitigate risks and render certain intended uses appropriate for de novo classification. Further, we provided ample detailed feedback to 23andMe regarding the types of data it needs to submit for the intended uses of the PGS. As part of our interactions with you, including more than 14 face-to-face and teleconference meetings, hundreds of email exchanges, and dozens of written communications, we provided you with specific feedback on study protocols and clinical and analytical validation requirements, discussed potential classifications and regulatory pathways (including reasonable submission timelines), provided statistical advice, and discussed potential risk mitigation strategies. As discussed above, FDA is concerned about the public health consequences of inaccurate results from the PGS device; the main purpose of compliance with FDA’s regulatory requirements is to ensure that the tests work.

However, even after these many interactions with 23andMe, we still do not have any assurance that the firm has analytically or clinically validated the PGS for its intended uses, which have expanded from the uses that the firm identified in its submissions. In your letter dated January 9, 2013, you stated that the firm is “completing the additional analytical and clinical validations for the tests that have been submitted” and is “planning extensive labeling studies that will take several months to complete.” Thus, months after you submitted your 510(k)s and more than 5 years after you began marketing, you still had not completed some of the studies and had not even started other studies necessary to support a marketing submission for the PGS. It is now eleven months later, and you have yet to provide FDA with any new information about these tests. You have not worked with us toward de novo classification, did not provide the additional information we requested necessary to complete review of your 510(k)s, and FDA has not received any communication from 23andMe since May. Instead, we have become aware that you have initiated new marketing campaigns, including television commercials that, together with an increasing list of indications, show that you plan to expand the PGS’s uses and consumer base without obtaining marketing authorization from FDA.

Reading this, I get the impression that 23andMe doesn't particularly care about the reliability of their tests as long as the money keeps rolling in. I don't see how 23andMe or its customers can determine that reliability without the studies 23andMe is refusing to do. The fuss over the FDA's action is probably just because this is a new company with a new technology; if Pfizer or GSK were acting like this, nobody would think twice about the FDA busting them.

Comment Re:Damn, that sucks. (Score 1) 154

Or voodoo, or 3dfx, or creative, or.......

The Voodoo (and sequels) was 3dfx's chipset. I'm not sure where Creative comes from; they were never big in the 3D graphics arena. The GP is mostly correct in context -- it's been around 15 years since anyone other than Nvidia or ATI/AMD had a competitive 3D graphics card for gaming, and developers were favoring certain vendors even in the pre-3dfx days, so the GGP's statement:

Now all the high profile game engine developers are in bed with NVIDIA and to a lesser extent AMD ... Carmack was the last voice and force of independence.

is wishful thinking.

Comment A logic analyzer doesn't count? (Score 1) 215

If you have a device that performs most of the same functions as a scope on digital systems, and you mostly work on digital systems, then no, you probably don't need a scope. Still, a scope is sufficient for most tasks, easy to acquire, and has more educational value, and if you ever want to try anything analog, even if it's just scoping a power outlet, you'll need one.

I recommend an auto-ranging multimeter, a three-output power supply, and a super-cheap scope to start with. For embedded systems, don't forget that you can also use software debugging techniques.
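
For embedded work, "software debugging" can be as simple as pushing state out a serial port. A bare-bones sketch (the UART address and register layout here are hypothetical; take the real ones from your part's datasheet):

#include <stdint.h>

/* Hypothetical memory-mapped UART data register -- substitute the
   address and layout from your own chip's datasheet. */
#define UART0_DR (*(volatile uint32_t *)0x4000C000u)

static void debug_putc(char c)
{
  UART0_DR = (uint32_t)c; /* real hardware would also poll a ready/busy flag */
}

static void debug_puts(const char *s)
{
  while (*s)
    debug_putc(*s++);
}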

Other useful hardware: a good soldering iron (for moving beyond breadboards), fine-tip tweezers (for surface-mount work), and a clean desk, preferably with shelves for your equipment.

Comment They don't actually mean 15 hours in front of a TV (Score 4, Informative) 53

Note that the report (third link) treats data (bytes) and time (hours) as separate measurements. The various summaries are mixing 15.5 hours with 9 DVDs, which is not correct. Also of note: media consumed at work is not included in the estimate.

Also, as the summary points out, consumption is defined in terms of what goes over the network and for how long, not what actually gets attention. Thus, it's possible to double or even triple your rate of media "consumption" without spending any more time or attention than you did before.

Still disturbing, though.

Comment Re:Seems a bit verbose (Score 1) 84

Now compare it with LaTeX: x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}. 31 characters. I know which I'd rather write. Or read.

Yeah, I was wondering about LaTeX when I wrote my comment. I don't see why we can't just have something like <math eq="x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}"> and let the browser convert it to whatever internal format it likes.

Comment Seems a bit verbose (Score 5, Insightful) 84

Maybe it's just because I'm unfamiliar with MathML, but this seems like a *very* verbose way of writing equations. One of the examples in the article is the quadratic formula:


<mtd><mrow>
  <mi>x</mi><mo>=</mo>
  <mfrac>
    <mrow>
      <mo>-</mo><mi>b</mi><mo>±</mo>
      <msqrt><mrow>
        <msup><mi>b</mi><mn>2</mn></msup>
        <mo>-</mo><mn>4</mn><mi>a</mi><mi>c</mi>
      </mrow></msqrt>
    </mrow>
    <mrow><mn>2</mn><mi>a</mi></mrow>
  </mfrac>
</mrow></mtd>

That's 236 characters (ignoring whitespace) to write a 13-character equation, which is around 5% efficiency. Maybe that doesn't matter so much for bandwidth, but forget writing it by hand. (When would you do that? Well, commenting on web forums, for one thing...) Granted, there's some text formatting, but does every character really need a separate tag around it?

Comment Re:Fighting Game Players Going to Be Bummed (Score 1) 202

I'm not a fighting gamer but I am sensitive to input lag, and my 50" Panasonic plasma was the answer to my prayers. Not only is the lag low, but there are real blacks and no ghosting. I might have to get another one before they shut down production. The downsides are high power consumption (it noticeably heats up my room) and the fact that they only come in large sizes, mostly greater than 50".

Comment Re:The paper gives examples (Score 1) 470

It's not so much a violation as assuming behaviour is defined when it is not.

Yes, that's a better way to say it.

The passing of parameters and return values falls under the purview of calling conventions and ABIs. These are discussed in the compiler manual (yes, yours has one), but usually ignored on PCs. (Or really anything with an OS.) In embedded programming, that stuff is much more useful, since you're more often interfacing C and assembly functions. It's also helpful for debugging, since the debugger is often confused by optimization. Put a breakpoint on the branch instruction and the parameters will be in the registers or on the stack.
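
As a concrete sketch (assuming the x86-64 System V ABI used on Linux; other platforms differ), here's where the parameters and return value actually live:

/* Under the x86-64 System V calling convention, the first two integer
   arguments arrive in rdi and rsi, and the result returns in rax. */
long add(long a, long b) /* a in rdi, b in rsi */
{
  return a + b;          /* result left in rax */
}

/* gcc -O2 compiles this to roughly:
     lea rax, [rdi+rsi]
     ret
   Break on the call and the arguments are sitting in the registers,
   optimizer or no optimizer. */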

Someone posted a link to Deep C elsewhere in the comments, which goes over some of these details.

Comment The paper gives examples (Score 4, Informative) 470

The article doesn't summarize this very well, but the paper (second link) provides a couple examples. First up:

char *buf = ...;
char *buf_end = ...;
unsigned int len = ...;
if (buf + len >= buf_end)
  return; /* len too large */

if (buf + len < buf)
  return; /* overflow, buf+len wrapped around */

/* write to buf[0..len-1] */

To understand unstable code, consider the pointer overflow check buf + len < buf shown [above], where buf is a pointer and len is a positive integer. The programmer's intention is to catch the case when len is so large that buf + len wraps around and bypasses the first check ... We have found similar checks in a number of systems, including the Chromium browser, the Linux kernel, and the Python interpreter.

While this check appears to work on a flat address space, it fails on a segmented architecture. Therefore, the C standard states that an overflowed pointer is undefined, which allows gcc to simply assume that no pointer overflow ever occurs on any architecture. Under this assumption, buf + len must be larger than buf, and thus the "overflow" check always evaluates to false. Consequently, gcc removes the check, paving the way for an attack to the system.
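
For what it's worth, one well-defined way to write that check is to compare len against the remaining space instead of forming buf + len at all. A sketch, assuming buf and buf_end point into the same array with buf <= buf_end (one comparison replaces both tests, since nothing can wrap):

#include <stddef.h>

/* Pointer subtraction within one array is defined, so compute the
   remaining space and compare lengths. Mirrors the original strict
   check buf + len < buf_end. */
static int write_fits(const char *buf, const char *buf_end, size_t len)
{
  return len < (size_t)(buf_end - buf);
}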

They then give another example, this time from the Linux kernel:

struct tun_struct *tun = ...;
struct sock *sk = tun->sk;
if (!tun)
  return POLLERR;

/* write to address based on tun */

In addition to introducing new vulnerabilities, unstable code can amplify existing weaknesses in the system. [The above] shows a mild defect in the Linux kernel, where the programmer incorrectly placed the dereference tun->sk before the null pointer check !tun. Normally, the kernel forbids access to page zero; a null tun pointing to page zero causes a kernel oops at tun->sk and terminates the current process. Even if page zero is made accessible (e.g. via mmap or some other exploits), the check !tun would catch a null tun and prevent any further exploits. In either case, an adversary should not be able to go beyond the null pointer check.

Unfortunately, unstable code can turn this simple bug into an exploitable vulnerability. For example, when gcc first sees the dereference tun->sk, it concludes that the pointer tun must be non-null, because the C standard states that dereferencing a null pointer is undefined. Since tun is non-null, gcc further determines that the null pointer check is unnecessary and eliminates the check, making a privilege escalation exploit possible that would not otherwise be.
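
The fix is as simple as the bug: move the dereference below the check, so the compiler can no longer infer that tun is non-null. A sketch of the reordering:

struct tun_struct *tun = ...;
if (!tun)
  return POLLERR;
struct sock *sk = tun->sk; /* dereference only after the null check */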

The basic issue here is that optimizers are making aggressive inferences from the code based on the assumption of standards-compliance. Programmers, meanwhile, are writing code that sometimes violates the C standard, particularly in corner cases. Many of these seem to be attempts at machine-specific optimization, such as this "clever" trick from Postgres for checking whether an integer is the most negative number possible:

int64_t arg1 = ...;
if (arg1 != 0 && ((-arg1 < 0) == (arg1 < 0)))
  ereport(ERROR, ...);
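
The portable version doesn't need the negation at all, since INT64_MIN is the only int64_t whose negation overflows. A sketch, keeping the paper's placeholders:

int64_t arg1 = ...;
if (arg1 == INT64_MIN) /* the only value for which -arg1 overflows */
  ereport(ERROR, ...);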

The remainder of the paper goes into the gory Comp Sci details and discusses their model for detecting unstable code, which they implemented in LLVM. Of particular interest is the table on page 9, which lists the number of unstable code fragments found in a variety of software packages, including exciting ones like Kerberos.

Comment Re:i wonder.. (Score 1) 530

You (like most people with a poor grasp of this physics, thanks to some shitty analogy someone used to explain it to them) are making the common mistake of thinking that the person with the flashlight in hand would see the light traveling at 1c away from him (for a total of 1.5c), but they wouldn't see any such thing.

No, the GP is correct. Both observers see the light moving at 1c relative to them, even though they're moving relative to each other.
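
Concretely, the relativistic velocity-addition formula w=\frac{u+v}{1+uv/c^2} shows why: put u=c for the beam and v for the flashlight-holder's speed relative to you, and w=\frac{c+v}{1+v/c}=\frac{c(c+v)}{c+v}=c. Velocities don't simply add; they compose so that light comes out at exactly c in every frame.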
