... both by people and software.
Agreed, it's a good book, and one that seems to get most of the technicalities right. The thing that bothers me the most is the author's very flat and sometimes a bit boring writing style - there is a lot of "I did this, and then that happened. Then I did something else, and exactly the same thing happened again. Then I tried something completely different, got a bit lucky, and now it worked. Yay.". The same goes for the characters - with some exceptions for the main character, they are all very much portrayed as "cardboard cutouts".
But seriously, the blended edges do look a lot like my Samsung (which does not bend). Given that Apple had a case with "rectangle with rounded corners", Samsung may have a case with "thin rectangle with blended edges".
The algorithm combines data from several sensors.
Quite a few US universities are heavily involved in CERN. And European universities. And Russian. And, to an increasing extent, Chinese. And also many others.
The people "teaching" (implying that there is something worthwhile to learn) creationism are not scientists - they come up with neither new data nor reasonable interpretations. Thus no US scientists are "teaching" that steaming pile of poo.
As a European, it would be great if
> and they probably buy them fpga's and boards in industrial quantities anyway
Njaaa. Define "industrial quantities". Mostly I've seen people use a few 10s of them, not 100s or 1000s.
The really expensive part of ASICs is making the masks for lithography etc., not how many chips you make. Thus you don't want to make a new chip unless you *really* need to.
FPGAs are very different beasts from normal CPUs - as far as I understand, they are very well suited to doing relatively simple tasks ridiculously fast, and one chip can treat tons of data in parallel. However, they do not do so well on really complex algorithms, algorithms requiring lots of fast memory and branches, and they are harder to program than CPUs.
In this case, I would think each cell in the (m,q) parameter space is handled by one "block" of the FPGA, and you then feed all the blocks the data stream coming off the detector. When you are finished reading the data into the FPGA, you can then read the result back from each block.
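A toy software sketch (my own illustration, not the paper's actual implementation) of what those parallel (m,q) blocks would compute: each cell of the parameter grid counts how many hits (z,y) from the stream are compatible with the track hypothesis y = m*z + q, and the best-scoring cell gives the rough track parameters. The grids, tolerance, and hit format are all made up here:

```python
import numpy as np

def accumulate(hits, m_vals, q_vals, tol=0.5):
    """Score every (m, q) cell against the hit stream.

    hits: iterable of (z, y) detector coordinates. In the FPGA picture,
    each (i, j) cell is one "block", and all blocks see every hit.
    """
    acc = np.zeros((len(m_vals), len(q_vals)), dtype=int)
    for z, y in hits:                          # the shared data stream
        for i, m in enumerate(m_vals):
            for j, q in enumerate(q_vals):
                if abs(m * z + q - y) < tol:   # hit compatible with this (m, q)?
                    acc[i, j] += 1
    return acc

# Toy usage: hits lying exactly on y = 2*z + 1 should peak at that cell.
m_vals = np.linspace(0, 4, 9)   # slope grid (invented values)
q_vals = np.linspace(0, 4, 9)   # intercept grid (invented values)
hits = [(z, 2 * z + 1) for z in range(5)]
acc = accumulate(hits, m_vals, q_vals)
i, j = np.unravel_index(np.argmax(acc), acc.shape)
print(m_vals[i], q_vals[j])     # rough track parameters read back from the "blocks"
```

On an FPGA the inner two loops would of course be physical parallelism rather than loops, which is the whole point of the scheme.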
When you "burn" a chip from an FPGA design, what it means is that you take the VHDL (etc.) code and compile it into a format which you can use to produce specialized chips, instead of a bitstream for programming an FPGA.
Oh, and 2 m should have been 2 um. Slashdot ate my alt-gr+m = \mu...
Part of the method may very well be to put the clustering algorithm directly onto the same chip as is doing the digital readout of the sensor, i.e. bump-bonded on the back of the sensor, directly providing estimated (x,y) coordinates of the particle hits instead of raw pixel data with zero-suppression as is traditionally done.
However, this is not what this paper is discussing. It discusses mapping the parameter space (m,q) of the gradient and intercept of a particle track y=m*z+q into some kind of matrix, and then applying an algorithm which describes how well the data fits with each of the points in the parameter space. This is thus integrating the information from several sub-detectors, and can thus not be done on the "image sensor" (which is usually a "hybrid", i.e. a chip with an array of detector diodes, coupled to another chip which has the electronics).
While this paper is pretty light on details (I'm guessing some sort of conference paper), it references another single-author paper in NIM A (whose author is also a co-author on this paper) from 2000:
It appears to be open-access, at least I can read it without logging in to VPN.
So, to summarize the paper:
They have developed an algorithm for quickly giving a rough interpretation of the raw data stream coming out of the detector, i.e. converting the information that "value of pixel A = 12, value of pixel B = 43,
This is needed, because in a few years, we will upgrade the LHC such that it produces many more collisions per second, i.e. the data rates will be much higher. We do this to get more statistics, which may uncover rare physics processes (such as was done for the Higgs boson). Not all of this data deluge can be written to disk (or even downloaded from the detector hardware), so we use a trigger which decides which collisions are interesting enough to read out and store. This trigger works by downloading *part* of the data to a computing cluster that sits in the next room (yes, it does run on Linux), quickly reconstructing the event, and sending the "READ" signal to the rest of the detector if it fits certain criteria indicating that (for example) a heavy particle was created. If the data rate goes up, so must the processing speed, or else we will run out of buffers on the detector.
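As a toy illustration of the trigger idea described above (all names, the event format, and the pt cut are invented; the real trigger reconstructs actual physics quantities from partial detector data):

```python
import random

def quick_reconstruct(partial_event):
    # Stand-in for the fast partial reconstruction; here the event
    # "summary" is just a made-up transverse-momentum estimate.
    return partial_event["pt_estimate"]

def trigger(event_stream, pt_threshold=80.0):
    # Yield only events whose quick reconstruction passes the cut.
    # In the real system this is where the "READ" signal goes out and
    # the full event is fetched from the detector buffers before they fill.
    for event in event_stream:
        if quick_reconstruct(event) > pt_threshold:
            yield event

random.seed(0)
stream = ({"pt_estimate": random.uniform(0.0, 100.0)} for _ in range(1000))
kept = list(trigger(stream))
print(len(kept))  # only a fraction of the 1000 events survives the cut
```

The point is that the selection runs on a cheap, partial view of each event, so it can keep up with the collision rate; only the survivors cost full readout bandwidth and disk.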
Still, there has to be some kind of mechanism to do the initial pairing, even if this requires removing a PCB and hooking it up to the diag/programming equipment they have at the factory. Even counting a few hours of an engineer's time, it would be much, much less than 70k.
> Do you think everyone needs the same speed? Does your grandmother need the same speed as an MIT researcher?
This is actually quite an interesting case: Without net neutrality, the grandmother would get the speed she paid for when she streams grandmothery movies from grandmaflix (who paid her ISP to not make it impossible for her to access their webpage at the speed she paid for). The MIT researcher, who today probably pays for a much fatter connection, would not get to use all of his/her bandwidth to access the data stored in some computing center, because this computing center would not want to pay everyone's ISP so that they can connect to it.
The solution today (i.e. with net neutrality) is fair: The grandmother pays for the bandwidth she needs to send emails to her grandkids and watch grandmaflix in low resolution (because she can't see HD content anyway), while the researcher pays much more for the bandwidth he/she needs to upload hundreds of gigabytes of data from NERSC and use the university's terminal services at low lag.
This is a true fallacy when the conclusion is already drawn, such as media trying to present "both sides" of climate change as if the relevant sides were "yes, it's warming" and "no, it's cooling" -- while the actual discussion is more like "is the impact of effect X on K equal to A or to B = A + 0.01*A, while taking the interaction with effect Y into account?", where the relevant sides of the discussion are those saying it's A and those saying it's 1.01*A.
In this case (paint dust and zooplankton), I'm less sure if the effect is that well known, so presenting the argument between "it's important" and "it's less important" might be correct.
So in conclusion, the journalists can usually present a "balance" and be factually correct, but then it has to be between two sides of a non-settled question.
There are languages, such as the Scandinavian languages, which are "mostly latin". This means we have the full A-Z as used in English (although C, Q, W, X, Z are rarely used) PLUS some extra letters "Æ/Ø/Å" (dunno if this displays correctly here). There are also domains which use these letters, like "lånekassen.no", which is the state agency handling student loans. (They are also available at the alternative address "laanekassen.no".)
Thus a hard-and-fast rule disallowing domain names with mixed types of characters won't work well; it needs to be slightly more nuanced.
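For illustration, the way registries and browsers actually handle such names is IDNA/punycode, which maps the Unicode name to an ASCII-compatible xn-- form. A sketch using Python's built-in idna codec (the exact encoded label is left to the codec rather than spelled out here):

```python
# Encode an IDN domain to its ASCII-compatible (punycode) form and back.
# The idna codec handles each dot-separated label separately; pure-ASCII
# labels like "no" pass through unchanged.
ascii_form = "lånekassen.no".encode("idna").decode("ascii")
print(ascii_form)                               # an xn--... ASCII form

roundtrip = ascii_form.encode("ascii").decode("idna")
print(roundtrip)                                # back to the Unicode form
```

So any "no mixed scripts" heuristic has to operate on the decoded Unicode labels, and it has to allow legitimate single-script names like this one while still flagging lookalike mixtures.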