Probably looks like this.
This modulation scheme is called QRSS, and it can also be used to send very-low-power (milliwatt and microwatt) signals around the world ionospherically, and on bands such as VLF (very low frequency). Here's the open source from a couple of projects by Hans Summers, from a book I edited for the ARRL on the Arduino: http://hamradioprojects.com/authors/g0upl/+qrss-attiny/ http://hamradioprojects.com/authors/g0upl/+mm-shield/ and there are plenty of links about QRSS from there.
Voltage? Not 5V? I took a quick look through the USB Power Delivery docs and didn't see that.
Wikipedia doesn't mention it either, though it does discuss the raising of the pre-negotiation current limit from 0.5A to 1.5A, and the max negotiated limit at 5A, which would be 25W.
Do you have any links on the higher voltages?
You probably already understand this, but many do not: you cannot push or provide current at 5V that the device doesn't want. If your device will draw only 500mA due to its internal design, attaching it to a 2A or 5A port won't do anything.
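The point above is just Ohm's law: the load determines the draw, and the port rating is only a ceiling. A minimal sketch with made-up numbers (the 10-ohm load is hypothetical, chosen to draw 0.5 A at 5 V):

```python
# Sketch: current drawn by a resistive load depends on the load itself,
# not on the supply's rated capacity. Numbers are illustrative.

def current_drawn(supply_volts, load_ohms):
    """Ohm's law: the load sets the draw, I = V / R."""
    return supply_volts / load_ohms

LOAD_OHMS = 10.0  # a device whose internals draw 0.5 A at 5 V

for port_rating_amps in (0.5, 2.0, 5.0):
    drawn = current_drawn(5.0, LOAD_OHMS)
    # The port rating is only an upper bound; the draw is the same on every port.
    print(f"{port_rating_amps} A port: device draws {drawn} A")
```

Whatever the port can supply, the device draws the same 0.5 A on all three.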
Carl Hewitt's "Actor" model, which is the basis for Erlang, was first implemented on multi-server systems on Symbolics Lisp Machines at the MIT-AI lab. The CADR machines could not be produced fast enough to dedicate enough of them to the project, but when commercial ones were available, Carl got a grant, bought 6 of them, and they called it the Apiary. They didn't use it all the time, so I thought of it mostly as a source of free machines. We are only now getting to the point where the multi-CPU, network-based, shared-nothing architecture is becoming a mainstream approach.
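For anyone unfamiliar with the model: an actor owns its state privately and interacts with the world only through messages, which is what makes the architecture "shared nothing." A toy sketch in Python (illustrative only; real actor runtimes like Erlang's add supervision, distribution, and fault isolation):

```python
# Toy actor: private state, a mailbox, and message-only communication.
import queue
import threading

class CounterActor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0  # private state: only the actor's own thread touches it
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        # Process one message at a time, in arrival order.
        while True:
            msg, reply = self.mailbox.get()
            if msg == "incr":
                self.count += 1
            elif msg == "get":
                reply.put(self.count)
            elif msg == "stop":
                return

    def send(self, msg, reply=None):
        self.mailbox.put((msg, reply))

actor = CounterActor()
for _ in range(3):
    actor.send("incr")
reply = queue.Queue()
actor.send("get", reply)
print(reply.get())  # 3
actor.send("stop")
```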
In late 1999, we tested a product by rolling the date forward to 2000-01-01 and it worked fine. Then we rolled the date back to the normal date, and files that got touched during the test period caused trouble, because their modification date was "IN THE FUTURE!?!?!?" as one piece of code put it. The most broken was the timestamp data for a time-based UID generator, which flat out refused to run, saying that it was in danger of generating collisions.
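The refusal makes sense if you sketch how such a generator typically works: if IDs embed a timestamp plus a per-tick sequence number, handing out IDs against an earlier timestamp risks duplicating one already issued. A hypothetical reconstruction (the class and its layout are my invention, not the actual product's code):

```python
# Sketch of a time-based UID generator that refuses to run when the clock
# moves backwards, since a reused timestamp could produce a duplicate ID.
import time

class TimeBasedUidGenerator:
    def __init__(self):
        self.last_ms = 0
        self.seq = 0

    def next_uid(self, now_ms=None):
        now = int(time.time() * 1000) if now_ms is None else now_ms
        if now < self.last_ms:
            # The "refused to run" behavior: a timestamp in the past could
            # collide with an ID already handed out.
            raise RuntimeError("clock moved backwards; refusing to risk collisions")
        if now == self.last_ms:
            self.seq += 1  # same tick: bump the sequence instead
        else:
            self.last_ms, self.seq = now, 0
        return (now << 16) | self.seq

gen = TimeBasedUidGenerator()
gen.next_uid(now_ms=2_000_000)      # ID issued during the "future" test date
try:
    gen.next_uid(now_ms=1_000_000)  # clock rolled back to the real date
except RuntimeError as e:
    print(e)
```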
Try Quixey app search (where I work):
Or search for sciency things you might want to do with a three-year old:
Yes, that's in the original submission, as you see above. For the record, Brewster Kahle (who founded Archive.org), Jeff and Danny (who did this project), and I are all MIT alums, and the "Internet Archive scanning robot" is from a company called Kirtas, which also has ties to Xerox.
I remember thinking the same thing then.
Yes, it was in my submission but apparently edited for brevity. TL;DW?
In point of fact, for individual scanning, the video even mentions that this linear scanner is SLOWER than a manual scanner such as the diybookscanner. The gain is that since it's automatic, a single person could keep 8 or 10 of them running at a time.
Yup. Progress in clock speeds has pretty much slowed down, and Google appears to expect future performance enhancements to come in the form of parallelism.
Disclaimer: I worked with Jeff when we were at Xerox (where he did the awesome hack Gnu Chess on your Scanner), but this is more awesome because it saves books.
I'm sure this study is testing cultural bias, not human propensity. In Japan, for example, it's considered rude and direct to look into someone's eyes, and many people look at the mouth, or even slightly away.
How many bits for an IPv6 address vs. an IPv4 address?
Yes, of course they should have thought about this before designing the hardware with a maximum ability to comprehend an IPv4 address...
I remember having this discussion with people close to the principals about the NCP-to-TCP/IP transition, when the 32-bit (four-octet) address size was picked.
The sound bite was that it's bigger than the biggest European phone number, so they planned ahead for a time when there would be as many computers as phones, which seemed like plenty. (Remember, NCP had a hosts.txt file that listed all the hosts.)
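The arithmetic behind that trade-off, and the answer to the bits question above, can be checked with Python's standard ipaddress module (the example addresses are the documentation-reserved ones, picked arbitrarily):

```python
# IPv4's four octets give 2**32 addresses; IPv6's sixteen give 2**128.
import ipaddress

ipv4_bits = ipaddress.ip_address("192.0.2.1").max_prefixlen    # 32
ipv6_bits = ipaddress.ip_address("2001:db8::1").max_prefixlen  # 128

print(ipv4_bits, ipv6_bits)  # 32 128
print(2 ** ipv4_bits)        # 4294967296 -- "as many computers as phones"
print(2 ** ipv6_bits)        # about 3.4e38, astronomically larger
```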
For DNS, they designed an hierarchical system, but events overtook the hierarchy and people got fetishistic about names, leading to most names being in ".com" and being public-facing. The original theory was that the hierarchy would be more important, with more hosts in organizations and so on.
But on the IP side, segmentation with subnetting (and later, classless subnetting) made things more complicated, so it became possible to run out of IP addresses even though plenty were still available, just fragmented. Along the way, with all the subnetting, routing got more complicated; there were a few routing table crises that required new algorithms and lots of new designs. That all works pretty much miraculously now, but it doesn't solve the walled-off, inaccessible IP address problem.
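The fragmentation effect is easy to demonstrate with the ipaddress module: a block can have plenty of free addresses overall, yet no contiguous run big enough for a new subnet. A small sketch (the 10.0.0.0/22 block and the alternating allocations are made up for illustration):

```python
# Half the block is free, but no free /23 (512-address) run exists.
import ipaddress

block = ipaddress.ip_network("10.0.0.0/22")       # 1024 addresses
subnets = list(block.subnets(new_prefix=24))      # four /24s
allocated = [subnets[0], subnets[2]]              # alternating allocations
free = [s for s in subnets if s not in allocated] # 10.0.1.0/24 and 10.0.3.0/24

free_set = set(free)
# A /23 is usable only if both of its /24 halves are free:
available_23 = [c for c in block.subnets(new_prefix=23)
                if all(h in free_set for h in c.subnets(new_prefix=24))]

print(sum(n.num_addresses for n in free))  # 512 addresses still free...
print(available_23)                        # [] ...but no contiguous /23
```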
If you can figure out a way to transparently change a firewalled-off Class A subnet over to a non-routable private net and then release the Class A net, you could reset the clock back to the problem IPv4 thought it was solving, and become a zillionaire in the process.