It took me 2 hours. Nothing about it was very hard, but most cells can only be filled in after quite a few other cells in the row are already filled in. This makes the number of logically deducible cells available at any given time somewhat low.
Progress bars are all about using past history to predict future performance. The problem is that past history doesn't always say anything about what will happen in the future.
If you only use very recent history, you can usually predict the very near future better, but it also makes the progress and remaining-time estimates very unstable, jumping all over the place.
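A common compromise is to smooth the observed rate with an exponential moving average, which turns this tradeoff into a single knob. A toy sketch (the class, names, and alpha value are my own illustration, not from any particular progress-bar library):

```python
class EtaEstimator:
    """Toy ETA estimator: smooth the observed work rate with an
    exponential moving average. alpha near 1 tracks recent history
    closely (responsive but jumpy); alpha near 0 averages over the
    long run (stable but slow to notice changes)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.rate = None  # smoothed units of work per second

    def update(self, work_done, elapsed_seconds):
        sample = work_done / elapsed_seconds
        if self.rate is None:
            self.rate = sample  # first sample seeds the average
        else:
            self.rate = self.alpha * sample + (1 - self.alpha) * self.rate

    def eta(self, work_remaining):
        return work_remaining / self.rate
```

With alpha=1 this degenerates into "use only the latest sample" (jumpy); with a small alpha it behaves like a whole-run average (stable, but stale after a speed change).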
You're a human, so use your own intuition: predict progress partly from what the program tells you and partly from your knowledge of the work involved and the work yet to be done.
The code writes 48 1kb vars. The summary is wrong.
It's very nice. I was in the process of setting up a tunnel between my home gateway and a Linode machine (Linode provides native v6) and making Linode my publicly visible exit point to the Internet. A few weeks into the project, Comcast implemented v6, making my tunneling efforts redundant.
Comcast currently allocates a
I currently use "privacy addressing" with my Linux machine which I do with:
# IPv6 privacy stuff
echo 209600 >
echo 10800 >
echo 128 >
echo 2 >
This is mostly so that I can try out the most extreme end of IPv6, where I go through addresses quickly and hold up to 128 at a time.
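The commands above got truncated; as a hypothetical reconstruction, the values line up with the standard Linux IPv6 privacy-extension (RFC 4941) sysctls, so the full commands would look something like this. The paths and the `eth0` interface name are my assumption, not from the original:

```shell
# Assumed reconstruction of the truncated commands -- paths match the
# standard Linux IPv6 privacy-extension knobs; adjust eth0 as needed.
echo 209600 > /proc/sys/net/ipv6/conf/eth0/temp_valid_lft     # temp addr valid lifetime (s)
echo 10800  > /proc/sys/net/ipv6/conf/eth0/temp_prefered_lft  # preferred lifetime (s); kernel spelling
echo 128    > /proc/sys/net/ipv6/conf/eth0/max_addresses      # allow up to 128 addresses
echo 2      > /proc/sys/net/ipv6/conf/eth0/use_tempaddr       # prefer temporary addresses
```

The `max_addresses` value of 128 would explain the "up to 128 at a time" figure below.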
I have a native, public, non-tunneled IPv6 address at home through my non-business Comcast cable Internet service. My computer and phone automatically use IPv6 whenever available.
I can use IPv6 at work too.
It's already here and adoption seems to be accelerating.
I use the "gp" calculator which is a programmable front-end to the PARI library of functions. See https://en.wikipedia.org/wiki/PARI/GP
It's great for number theory and discrete math. I primarily use it for cryptography. My TI-86 and TI-89 used to sit on my desk at all times, but after I discovered gp I don't have any use for them.
Nonsense. It is still unknown whether it is possible (even theoretically) to scale this up. One of the main reasons is quantum decoherence, which seems to introduce errors faster than you can scale the machine.
There are plenty of reasons to abandon RSA (which assumes factoring is hard) in favor of elliptic curves, but these quantum factoring advances are not one of them. RSA keys must be huge to provide security similar to what symmetric and elliptic-curve algorithms provide with small keys. Also, it's somewhat likely that the NSA has 1) improved GNFS or another factoring algorithm and 2) built dedicated cracking hardware. I fully expect the NSA to be able to factor 1024-bit numbers today (perhaps even at a rate of one or more a month).
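To put numbers on "huge": the commonly cited NIST SP 800-57 key-strength equivalences, quoted here from memory (treat as approximate), show how fast RSA moduli grow relative to symmetric and ECC key sizes. The helper function is just an illustration:

```python
# Approximate key-size equivalences (bits), per NIST SP 800-57 --
# quoted from memory, so treat as illustrative rather than normative.
EQUIVALENT_STRENGTH = {
    # symmetric bits: (RSA modulus bits, ECC key bits)
    80:  (1024,  160),
    112: (2048,  224),
    128: (3072,  256),
    192: (7680,  384),
    256: (15360, 512),
}

def rsa_size_for(symmetric_bits):
    """RSA modulus size giving roughly the same strength as an
    ideal symmetric cipher with this many key bits."""
    return EQUIVALENT_STRENGTH[symmetric_bits][0]

print(rsa_size_for(128))  # → 3072
```

Note the sub-linear scaling on the ECC side versus the explosive growth on the RSA side; at the 256-bit symmetric level an RSA key is roughly 30 times larger than the equivalent ECC key.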
Of course it is doing work. Work == energy (same units). Instead of making the rocket go forward, it creates sound, pushes air, raises the temperature, etc. Exactly the same amount of work is done in a static test as in an actual launch.
Peter had a pretty good first-glance reaction to the paper: http://www.math.columbia.edu/~woit/wordpress/?p=5104
I haven't seen any good discussions of the actual math content of the paper yet though.
Just a few weeks ago I upgraded my cable plan with Comcast. Now I'm at 50 down, 15 up for $90 a month. It's nice having a cable company that supports DOCSIS 3.0.
Yeah you're right, it is H -> ZZ -> llll
The WW decay is H -> WW -> lvlv
Sorry about that.
You're also right about it being LEP and not SLAC that studied the W boson with so much accuracy.
Thanks for the corrections.
The LHC was built to find any new physics, not just the Higgs. The fact that we've been able to rule out SUSY for large mass ranges is part of that. To measure the specific properties of one particle, though, you need something a bit more purpose-built. They'll be able to measure a lot about the Higgs boson, but not anywhere near as much as a linear collider could.
Also, for part of the year they stop injecting protons and instead inject lead nuclei. This is meant to study extremely messy but very high-energy collisions that should generate quark-gluon plasmas.
It's unknown but very likely. There is definitely a particle at around 125 GeV, but there is certainly a (very small) chance it could be something else.
The Standard Model predicts a number of different ways the Higgs boson can decay, and the probability of each decay mode.
The most common easy-to-measure decay modes are:
Higgs -> Two Photons (high energy gamma rays)
Higgs -> Two Z Bosons -> 4 leptons (electrons or muons)
So what they are actually seeing is the decay products; they measure the energy and momentum of each component of the decay and combine them to reconstruct the mass of the original Higgs.
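Concretely, what gets reconstructed is the invariant mass of the decay products. A toy sketch (my own illustration in natural units, not actual analysis code): for massless products like photons, the parent mass falls out of summing the four-momenta.

```python
import math

def invariant_mass(particles):
    """Invariant mass of a set of (effectively massless) decay products.
    Each particle is a four-momentum (E, px, py, pz) in GeV, c = 1."""
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Two back-to-back 62.5 GeV photons reconstruct to a 125 GeV parent.
photons = [(62.5, 62.5, 0.0, 0.0), (62.5, -62.5, 0.0, 0.0)]
print(invariant_mass(photons))  # → 125.0
```

The discovery plots are essentially histograms of this quantity over many events: a bump near 125 GeV on top of a smooth background.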
The measurement of the two photons is called the "gamma-gamma" channel or "diphoton" channel. They call the 4 lepton channel the "golden channel" because it's a pretty clean signal with a low "background" (noise). That is, they get a good signal to noise ratio from the 4 lepton channel.
The theory says that the two photons should happen a certain percentage of the time, the 4 leptons a different percentage, and the other decay modes with other probabilities.
One of the reasons to believe they have found the Higgs boson and not some other particle is that the relative rates of each type of decay are pretty close to what the theory predicts.
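For a sense of scale, here are approximate Standard Model branching ratios for a 125 GeV Higgs, quoted from memory — treat them as illustrative, not authoritative:

```python
# Approximate SM branching ratios for a 125 GeV Higgs boson,
# quoted from memory -- illustrative figures only.
BRANCHING_RATIOS = {
    "b bbar":      0.577,
    "W W*":        0.215,
    "g g":         0.086,
    "tau tau":     0.063,
    "c cbar":      0.029,
    "Z Z*":        0.026,
    "gamma gamma": 0.0023,
}

# The clean discovery channels (diphoton and ZZ* -> 4 leptons) are
# actually among the rarest decays -- they matter because of their
# low background, not their rate.
total = sum(BRANCHING_RATIOS.values())
print(round(total, 3))
```

The dominant b-quark mode is nearly impossible to pick out of the QCD background at the LHC, which is why the rare photon and lepton channels carry the discovery.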
The best way to study the Higgs would be to produce lots of them cleanly, without producing other particles. The best-known way to do that is with a linear collider that smashes leptons (usually electrons) together. The energy of the collisions can be tuned to exactly the value needed to produce the Higgs. This is how the W boson was studied so accurately at LEP. A new international linear collider (ILC) would need to be built to reach the energy needed to make the Higgs. Luckily, it's a pretty low, easy-to-reach energy compared to what it could have been, which makes an ILC somewhat reasonable to build.
It's not exactly sniffing but take a look at all of the host detection scripts for IPv6: targets-ipv6-multicast-echo, targets-ipv6-multicast-invalid-dst, targets-ipv6-multicast-mld, targets-ipv6-multicast-slaac.
These scripts use the new pre-scan feature: "The new pre-scan occurs before Nmap starts scanning. Some of the initial pre-scan scripts use techniques like broadcast DNS service discovery or DNS zone transfers to enumerate hosts which can optionally be treated as targets." So if you want to sniff an IPv4 network to add targets, Nmap now has all of the tools you need to do that (NSE, libpcap bindings, the ability to add targets).
The issue is mostly that this isn't usually a useful feature for IPv4.
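As a rough example of how one of those IPv6 discovery scripts is invoked (the script-args are my assumption from NSE conventions; check the docs for your Nmap version):

```shell
# Discover live IPv6 hosts on the local link via multicast echo and
# add them as scan targets. Requires root; the interface name is an
# assumption -- substitute your own.
nmap -6 --script=targets-ipv6-multicast-echo \
     --script-args 'newtargets,interface=eth0'
```

The `newtargets` argument is what lets a pre-scan script feed the hosts it finds back into Nmap's target list.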
The CPU is actually a Harvard architecture, so we have 256 bytes of instruction memory and 256 bytes of data memory.
The tape is a piston-driven loop of sand and glass that is cycled once, and all of the instructions on the tape are read into instruction memory. The tape works because non-glass blocks propagate redstone but glass stops it (it's an insulator). So we encode zeros with glass and ones with sand. We actually use colored wool instead of sand every 8 places so that it's easy to keep track of where we are in the tape when we have to make adjustments.
The tape is the program to be read into memory. We did it this way so that we can swap in a different program by replicating the tape over a few blocks and encoding something else onto it.
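As a toy illustration of the encoding scheme described above (a hypothetical helper, not anything that runs in-game):

```python
def encode_tape(bits):
    """Map program bits to tape blocks: 0 -> glass (insulator, blocks
    redstone), 1 -> sand (propagates redstone). Every 8th position
    that carries a 1 uses colored wool instead of sand, purely as a
    visual byte marker -- my reading of the scheme, so an assumption."""
    blocks = []
    for i, bit in enumerate(bits):
        if bit == 0:
            blocks.append("glass")
        elif (i + 1) % 8 == 0:
            blocks.append("wool")   # byte-boundary marker
        else:
            blocks.append("sand")
    return blocks

print(encode_tape([1, 0, 1, 1, 0, 0, 0, 1]))
```

Swapping programs then amounts to laying down a different sequence of blocks before cycling the loop.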