The last time I saw a presentation on brain interface technology was almost a year ago, so I'm not fully up to date either, but the current state of the art isn't that great.
The fundamental problem is that the brain/hardware interface breaks down with time. In simple terms, it looks like the extremely soft brain tissue doesn't stand up to being in hard contact with the rigid electrodes (there's a nice picture in the article: they look like meat tenderizers). In the long run, a buffer zone of unusable tissue forms between viable brain matter and the electrodes, which blocks the signal. This is an area of substantial research: building nanomaterials that serve as a good physical buffer between brain and electrode is a non-trivial problem, and success there would directly lead to longer-lived devices.
So when they say
no evidence has emerged of any fundamental incompatibility between the sensor and the brain
that's not entirely honest. Yes, their sensor still works fine, but they clearly need to adapt it to be more brain-compatible. My personal guess is that this one patient just happens to have a lucky brain composition/response.
Do you mention FTs just for reference, or are you implying that they are typically used in deconvolutions? In my experience, signals with any amount of noise are much better handled with iterative algorithms.
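To make concrete the kind of iterative algorithm I mean: here's a minimal sketch of Richardson-Lucy deconvolution in NumPy (the function name, toy kernel, and signal are all mine, just for illustration). Unlike FT-based division, the multiplicative update stays positive and degrades gracefully in the presence of noise.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Iterative Richardson-Lucy deconvolution (1-D, toy version)."""
    # Start from a flat, positive estimate.
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        # Forward model: blur the current estimate with the PSF.
        blurred = np.convolve(estimate, psf, mode="same")
        # Ratio of observed data to the model prediction.
        ratio = observed / np.maximum(blurred, 1e-12)
        # Multiplicative update: correlate the ratio with the flipped PSF.
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy example: two sharp spikes blurred by a Gaussian PSF.
x = np.zeros(64)
x[20], x[40] = 1.0, 0.5
psf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)
psf /= psf.sum()
blurred = np.convolve(x, psf, mode="same")
restored = richardson_lucy(blurred, psf, iterations=200)
```

After a couple hundred iterations the restored signal is visibly sharper than the blurred input, with the peaks back where they started.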
Graphics: Matlab has the most feature-rich and usable graphical environment of any of its would-be competitors, none of which do 3D well.
I'm interpreting that to say that Matlab does a better job at 3D than the competitors, which is exactly the opposite of my experiences.
I work 100% in Python/Scipy/etc, and my brother does 100% Matlab. He had to come to me for suggestions when Matlab failed to handle visualization of his extremely large 3D datasets (I can't comment on whether he really had exhausted Matlab's functionality for that purpose). Although it's true that Matplotlib has pretty poor 3D support, Python gives you many more avenues: Blender+Python actually gave incredibly good performance for an interactive data environment, although it certainly wasn't designed for it. And VTK will give you anything you want, if you're prepared to put in the time.
Isn't it fair to say that if you're worried about roundoff noise in repeated calculations, you've passed the point of being just a scientist and become someone who should be concerned with general programming theory and conventions, and hence at least familiar and comfortable with notation that denotes type?
My introduction to IEEE 754 was brought about via Python, when my chemistry kinetic simulations weren't running right (many millions of iterations, scaling factors with huge and tiny exponents). Understanding and fixing that problem took an hour or two, which far overshadowed the minute-long pause when I first found that 5/2 = 2.
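A contrived illustration of the failure mode (not my original simulation): IEEE 754 doubles carry roughly 16 significant decimal digits, so adding a tiny increment to a huge accumulator can silently lose the increment entirely, and over millions of iterations the ordering of operations changes the answer.

```python
# At 1e16 the spacing between adjacent doubles is 2.0, so adding 1.0
# does nothing at all.
big = 1.0e16
assert big + 1.0 == big

# Accumulating tiny terms one at a time onto a huge value picks up a
# rounding bias on every addition; summing the small terms first does not.
small_terms = [1e-8] * 100000
naive = 1.0e8
for t in small_terms:
    naive += t                      # each add rounds to the nearest double
better = 1.0e8 + sum(small_terms)   # small terms summed at full precision
# naive and better disagree, despite being mathematically identical.
```

(The `5/2 == 2` surprise, for the record, was Python 2's integer division; Python 3 gives `2.5`.)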
In the end, the benefits of having the power of a real programming environment far outstrip the very small entry barrier. I personally feel that in the modern world you have no business calling yourself a scientist of any kind unless you can write a basic data manipulation script (parse and write a flatfile csv/tabs/CRLF etc) in some language of your choice. Massive quantities of data and meta-analyses are now the norm, making manual transcription or even copy/paste a thing of the past, and it is not acceptable to be hobbled by the feature set of existing software.
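For the record, the bar I'm describing really is this low: a flat-file parse-and-summarize script using nothing but the standard library. (The column names and values here are made up; a real script would `open()` a file rather than use an in-memory stand-in.)

```python
import csv
import io

# In-memory stand-in for a flat file on disk (e.g. open("data.csv")).
raw = io.StringIO("sample,value\nA,1.5\nB,2.5\nC,4.0\n")

# Parse: one dict per row, keyed by the header line.
rows = list(csv.DictReader(raw))
mean = sum(float(r["value"]) for r in rows) / len(rows)

# Write a summary back out in the same flat format.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["n_samples", "mean_value"])
writer.writerow([len(rows), mean])
summary = out.getvalue()
```

Swapping the delimiter for tabs or handling CRLF line endings is a one-argument change to `csv.reader`/`csv.writer`, which is exactly why there's no excuse for manual transcription.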
I have a bad feeling that this ruling will just shift the focus of patents, without changing their effective targets.
If a kit for diagnosis etc can be patented, the net result is unchanged: the knowledge remains locked-down under the control of the patent-holder. While this wouldn't be a huge problem if the generation of a detection kit was a novel work worthy of protection, the reality is that genetics as a field is a rigorously predictable, formulaic, and mathematical study.
The techniques for working with genetics are very well-understood, and it will likely be a computer algorithm (and not a new one) that designs the tools (DNA primers etc) for performing a particular diagnostic test. Why should that be patentable? At best, it certainly isn't non-obvious, and I would argue that when a machine (which you didn't invent) is designing your invention, you certainly shouldn't be able to claim it as your own!
Of course, there should be room for patenting truly new genetic tools, such as a new type of DNA-binding fluorescent probe, but such inventions come around once or twice a year. The hundreds or thousands of unique gene-detection methods developed per year which use that new tool should not be patentable.
With just a few modifications, that would be perfect for feeding my cat.
She tends to eat so rapidly that she makes herself sick, but the Food Lift could drop the food a little at a time into a bowl over the course of a few hours.
If it weren't $100 CAD, I'd have ordered one already.
If your take-home pay is >$42,000 a month (very conservative estimate for 1M/year gross), do you really need a car loan?
And if you're in that situation, and you do need a loan, that doesn't speak well for your money management skills. I wouldn't want to loan it to you.
If they have paid their dues, and they are fully rehabilitated, then why does it matter if they are mentioned by name? After all, they're just normal citizens again.
Clearly there is a disconnect between the theory of rehabilitation and what the public considers to be sufficient stigma for past offenders.
The angular accuracy is very poor. It's not designed to pick out just one vehicle from a group.
The ones with which I have some familiarity are designed to report back multiple signals: the fastest signal detected within the spread of the radar, and the largest. So theoretically they can get a signal off one vehicle traveling very fast through traffic, but if you are always traveling slower than people around you, your car will never be reported to the officer.
The use of money is all the advantage there is to having money. -- B. Franklin