In terms of stumble/dollar vodka has it beat, hands down.
Stumble/dollar is one of the best descriptions I've ever heard to rate drugs, just FYI.
But you see you are in the Windows CE embedded niche. Your vision is clouded.
I'm not in a "windows CE embedded" niche and the grandparent poster is right.
It's not an issue with the target. It's an issue with the platform(s) supported by the development tool vendors and the chip manufacturers.
For instance: With Bluetooth 4.0 / Bluetooth Low Energy (BLE), two of the premier system-on-a-chip product families are from Texas Instruments and Nordic Semiconductors.
TI developed their software in IAR's proprietary development environment and supports only that. Their Bluetooth stack is distributed only in object form - for IAR's tools - under "no reverse engineering" and "no linking to open source (which might force disclosure)" clauses. IAR, in turn, doesn't support anything but Windows. (You can't even use Wine: the IAR license manager needs real Windows to install, and the CC Debugger dongle, used for burning the chip and necessary for hooking the debugger to the hardware debugging module, keeps important parts of its functionality in a closed-source Windows driver.) IAR runs about $3,000/seat after the one-month free evaluation (though they also offer a perpetual evaluation that is size-crippled, and too small to build the stack).
The TI system-on-a-chip comes with some very good and very cheap hardware development platforms. (The CC Debugger dongle, the USB/BLE-radio stick, and the Sensor Tag - a battery-powered BLE device with buttons, magnetometer, gyro, barometer, humidity sensor, ambient-temperature sensor, and IR remote-temperature sensor - go for $49 per kit.) Their source code is free-as-in-beer, even when built into a commercial product, and gives you the whole infrastructure on which to build your app. But if you want to program these chips you either do it on Windows with the pricey IAR tools, or build your own toolset and program the bare metal, discarding ALL of TI's code and writing a radio stack and OS from scratch.
Nordic is similar: their license lets you reverse-engineer and modify their code (at your own risk). But their development platforms are built by Segger, and the Windows-only development kit comes with TWO licenses. The Segger license (under German law), covering the burner dongle and other debug infrastructure, not only has a no-reverse-engineering clause but also a non-compete: use their tools (even for comparison while developing your own) and you've signed away your right to EVER develop anything similar, or any product that competes with any of theirs.
So until the chip makers wise up (or are out-competed by ones who have), or some open-source people build something from scratch - with no help from them - to support their products, you're either stuck on Windows or stuck violating contracts and running afoul of the law.
I don't know about Acetaminophen, but I've heard compelling cases made that if Aspirin were discovered today it would be a prescription drug. Think of the side effects, the modern day "think of the children!" attitude, and pathetic need of the body politic to feel "safe" from any and everything.
- If this deviation is the result of burning fossil fuels, they are expected to run out in about 800 years - after which the temperature might crash toward the "Ice age already in progress" as the excess carbon is removed from the atmosphere by various processes, or simply be overwhelmed by the orbital-mechanical function if it remains.
Does this scenario count as supporting or opposing anthropogenic global warming?
The percentages come from looking at all studies, papers, research, etc. and determining the number on one side or the other.
When the administrators of research funding withhold future grants from scientists who publish papers questioning some aspect of the current global warming scenario, while giving additional funding to scientists who publish papers supporting it (or claiming some global-warming tie-in to whatever phenomenon they're examining), the count becomes skewed. This is political action, not science.
This happened in the '70s with research into the medical effects of the popular "recreational" drugs - before such research was effectively banned. Among the results were a plethora of papers whose conclusions obviously didn't match the data presented, and a two-decade delay in the discovery of medical effects and the development of treatments. Only NOW are we finding evidence that PTSD might be aborted by adequate opiate dosages in the weeks immediately following the injury, or that compounds in marijuana may be a specific treatment for it - as they are for some forms of epilepsy, and may be for some cancers, late-stage Parkinson's, and so on.
The same happens when the editors of a journal and their selection of reviewers systematically approve and publish only research supporting the current paradigms, to the point that scientists with contrary results must find, or create, other journals or distribution channels (which can then be smeared as non-authoritative - creations of the fossil fuel industry, right-wing politicians, or conspiracy nuts - and their articles LEFT OUT OF THE COUNT). Again, this is politics, not science.
Then there's the question of the methodology of the count itself. What was counted as "support for" versus "opposition to"? What was counted as a scientific paper? Were well-established survey methods used? Was the count reviewed? By whom? Was it done by scientists with no established position on the issue, by scientists supporting one side, by pollsters, by an advocacy group, by politicians? (Hell, was it done at all? Truth is the first casualty of politics, and fake polls are one of the commonest murder weapons.)
For instance: How would you interpret the study behind the Scientific American article that seems to indicate:
- Planetary temperatures have tightly tracked a function of three orbital-mechanics effects on the earth's orbit and axial orientation - up to the time of human domestication of fire.
- That occurred as the function was just starting to inflect downward into the next ice age.
- The deviation amounted to holding the temperature stable as the function slowly curved downward. (Perhaps a feedback effect - more fires needed for comfort in colder winters?)
- This essentially flat temperature held up to the industrial revolution, when the temperature began to curve upward, overcoming the gradually steepening decline of the function.
- If this deviation is the result of burning fossil fuels, they are expected to run out in about 800 years - after which the temperature might crash toward the "Ice age already in progress" as the excess carbon is removed from the atmosphere by various processes, or simply be overwhelmed by the orbital-mechanical function if it remains.
In twenty-four hours this will go from "illegal" to "high demand professional camera service" for promotions, events, etc.
Sorry, that's already illegal (according to the FAA).
Just a few weeks ago the FAA issued an interpretation of existing rules that declared illegal any commercial use of video from a drone.
I don't get it. The average depth of oil/gas wells here in Oklahoma is approx 5,000 ft. The typical depth of earthquakes here in Oklahoma is approx 16,000 ft. I'm not seeing a connection between the two.
First: You're looking at the wrong wells. What's the depth of the injection wells?
Second: The depth of the well doesn't particularly matter, as long as it connects the water to a fault system. The water spreads out through the fault, turning it into a hydraulic jack the size of a small eastern state or so. The faults aren't purely horizontal and the pressure (except for an added component at greater depth from the weight of the water above it) is the same everywhere.
So of course the earthquakes take place at the usual depths where the "last straw" rock finally gives way.
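To put rough numbers on that "added component at greater depth": here's a back-of-the-envelope Python sketch of the hydrostatic pressure alone, using the parent post's illustrative 5,000 ft and 16,000 ft figures (it ignores pump pressure, brine density, and rock overburden, so treat it only as an order-of-magnitude check):

```python
# Back-of-the-envelope: extra hydrostatic pressure from a water column.
# Depths are the parent post's illustrative figures, not measurements.

RHO_WATER = 1000.0   # kg/m^3, fresh water (injection brine is a bit denser)
G = 9.81             # m/s^2
FT_TO_M = 0.3048

def hydrostatic_pressure_mpa(depth_ft):
    """Pressure (in MPa) at the bottom of a water column depth_ft deep."""
    return RHO_WATER * G * depth_ft * FT_TO_M / 1e6

# ~15 MPa at 5,000 ft (injection-well depth); ~48 MPa if the water
# column reaches 16,000 ft quake depth through a connected fault.
well_mpa = hydrostatic_pressure_mpa(5_000)
quake_mpa = hydrostatic_pressure_mpa(16_000)
```

The point of the arithmetic: the deeper the connected fault reaches, the bigger the pore-pressure boost where the rock finally slips, so the well depth itself isn't what matters.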
This was on Gizmag yesterday... like many of Slashdot's articles...
Give it a rest.
Slashdot is not an investigative journal or a follower-and-repeater of press releases. It's a bunch of nerds pointing out interesting stuff to each other, and talking it over, with a few nerds vetting the postings before they go up on the "front page".
That means, like Wikipedia, it's not generally a primary source. It also means that, for real news items, it is generally about a day behind.
If you want news in a timely fashion, go read Gizmag and a bunch of other actual reportage sites. If you're willing to wait a little bit and then talk it over with a crowd of acquaintances (some of whom might actually know more about it than the newsies), this is the place for you.
Is the end result graphene, a lattice of carbon atoms, or not? What exactly is a "substitute carbon nanosheet" if not graphene itself?
It sounds to me like they're hedging because they haven't fully characterized what they get.
As I understand it, producing carbon fiber from plastic consists of stretching a plastic (such as rayon - a string of carbon hexagons joined by oxygen links - or polyacrylonitrile - a carbon backbone with a CN group hanging off every other carbon) so the long chains are aligned, then baking off the other elements (hydrogen, oxygen, nitrogen). This leaves just the carbon backbones (with additional carbon-carbon bonds formed by the loss of the hydrogen and whatever). Result: long, narrow, straight or crumpled ribbons of graphite-like hexagons, in a bundle, perhaps with occasional crosslinks, side-bumps, and other debris.
So I'd think that, if they did this on a surface, with something that didn't polymerize in two dimensions, they wouldn't end up with the nice, clean, carbon chicken-wire fence of graphene. Instead they'd end up with little graphene patches and strips, interconnected irregularly, and not restricted to an atom-thick plane.
But I'd expect the result, like graphene, to conduct well and be very strong - just not as strong and conductive as a perfect graphene layer, perhaps with some odd electrical activity from the deviations from the regular structure acting as "impurities", and higher resistance due to shorter mean free paths for charge carriers as they bump into these irregularities.
[suggests] relocat[ing] GitHub (servers, company and all) outside the US to avoid those DMCA takedowns?
How about Antigua?
Antigua recently won a suit against the US over its ban on online gambling (a major source of foreign-exchange income for the country). As a penalty, the WTO awarded Antigua the right to freely distribute "American [copyrighted] DVDs, CDs and games and software", up to $21 million per year.
GitHub doesn't charge for the software it distributes (getting revenue mainly from things like companies storing their OWN, PRIVATE repositories on its servers). So I'd think a company like GitHub, incorporated, owned, and hosted there, would consume $0 of the $21MM/year allocation, and could freely and legally distribute copyrighted material with US copyright holders - at least until the year after the US Congress finally repeals the anti-online-gambling laws.
Oh, that DMCA notice was issued by Cyveillance.
According to an Ausdroid "exclusive", a "Qualcomm representative" has already:
- repudiated and retracted the takedown notices,
- promised to pursue any issues directly with the project maintainers, and
- apologized to the project maintainers.
Unfortunately, this was in a communication with Ausdroid, and apparently not in a form that would let GitHub's over-the-holiday staff put the repositories back up immediately.
That's a pity. Many of the contributors to open source projects are volunteers with day jobs. That makes three-day holiday weekends "prime time" for a hackfest. Taking down the repositories over such a period is a serious hit to productivity. If they'd done it early in the week, rather than just before a three-day holiday, their error could have been corrected in hours rather than (exceptionally important) days.
(Fortunately, since the revision control system is git, where each checkout is a full copy of the repository, the hit is mainly impeding inter-member cooperation, rather than bringing all work on the projects to a screeching halt.)
I hope both Qualcomm and some of the affected projects bring actions against Cyveillance, if only to make them leery of issuing anti-FOSS takedowns at such sensitive times.
The DMCA does not allow you to refuse to process notices due to unpaid processing fees.
Does it allow something like this?
1) OSP charges the takedown filer a $1,000 (or $10,000, or whatever) fee to process a notice.
2) The fee is waived if the alleged infringer fails to file a counter-notice.
3) If a counter-notice is filed, the takedown filer is notified, perhaps with a check-box list of the alleged infringer's claim(s), but DOES NOT RECEIVE THE CONTACT INFORMATION until the fee is paid (or satisfactory payment arrangements are made).
4) The fee (or the bulk of it, or a pro-rata share) is waived if the takedown filer notifies the OSP, in a timely fashion, that it does not wish to pursue the takedown at this time and the OSP may put-back the material immediately, rather than waiting for the statutory time.
Assuming the OSP may legally withhold the counter-filer's contact information pending payment without jeopardizing the safe harbor, this could be implemented entirely by an OSP. A troll operation would have to pay up to get the information needed to pursue its extortion. The OSP would not be stiffed for its fees if the trolls want to move on to the next step (and could still pursue collection even if the trolls DON'T pay up after the counter-notice is filed).
It would have the advantage (over "losing filers take a big financial hit" approaches) that it does not create a financial incentive for copyright claimants to pursue an iffy or bogus suit in order to avoid a large fine or damages payment.
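For concreteness, the four rules above could be sketched as a few lines of hypothetical Python - the $1,000 figure, the field names, and the functions are all made up for illustration, not anything in the DMCA:

```python
from dataclasses import dataclass

PROCESSING_FEE = 1_000  # hypothetical per-notice fee, in dollars (rule 1)

@dataclass
class Notice:
    """State of one takedown notice at the OSP (illustrative model)."""
    counter_notice_filed: bool = False  # did the alleged infringer respond?
    filer_withdrew: bool = False        # did the filer withdraw in time?
    fee_paid: bool = False              # has the filer paid the fee?

def fee_due(n: Notice) -> int:
    """Fee owed by the takedown filer under the sketched scheme."""
    if not n.counter_notice_filed:
        return 0            # rule 2: waived if no counter-notice is filed
    if n.filer_withdrew:
        return 0            # rule 4: waived on timely withdrawal + put-back
    return PROCESSING_FEE   # rule 1: otherwise the full fee applies

def may_release_contact_info(n: Notice) -> bool:
    """Rule 3: contact info is released only after the fee is paid."""
    return n.counter_notice_filed and n.fee_paid
```

The key property is visible in the logic: a troll who mass-files notices pays nothing until someone pushes back, but can't get the contact information needed to escalate without paying.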
How hard would it be to send signals from the power plant or substations across different parts of the grid creating a signature that could be detected in recorded hums?
It wouldn't have to come from the substations. It could be injected at any power feed (though the higher-capacity the feed, the better). B-)
It might also drive the power company nuts - especially if it was close to the line frequency, because that would look like a large and rapidly varying power factor.
I've been trying to think of how there could possibly be enough variation to fingerprint someone based on the hum caused by that 60 Hz frequency noise. I've been in transmission control centers where they monitor, regulate, and occasionally wet themselves over frequency shifts, and I've seen that the amount of variation needed to cause sheer panic is shockingly low - and it rarely ever happens for even a second. And those tolerances have been the same everywhere I've gone.
The frequency is synchronized across the whole grid.
The phase shifts, though, due to several factors: which way the power is flowing on the lines (treated as signal-transmission lines), power factors of loads switching on and off, etc. Much of this shifting is local (motors on your transformer starting and stopping, and the like). Some of it is regional (for starters: the average, across a distribution block, of all those motor loads switching).
The combining of the various contributions to the phase offset is essentially linear. So if you have a recording system that picks up power-line hum and is sufficiently stable on a tens-of-seconds time scale, the phase can be extracted and correlated with a recording from a nearby part of the grid. The closer they are (in electrical terms), the stronger the correlation.
I could imagine the NSA recording this phase signal from one or several places in each city or rural region and archiving it, then using a cross-correlation against such a signal extracted from a recording. The amount of data to be stored and processed would be pretty small and a hit would stand out like a beacon.
First run against a national average (or several regional signals) to get enough of a hit to identify the time of the recording. Then run against that time segment of the whole database of local samples to get a rough location. (With enough samples this should get you down to a "which cell tower" level.) Then see what suitable recording studios are in the identified region and look for other clues.
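If that's the technique, a toy version is easy to sketch. This is my guess at the method, not anything published: the complex-demodulation approach, the one-second window, and the function names are all my own assumptions.

```python
import numpy as np

def hum_phase(signal, fs, f0=60.0, win_s=1.0):
    """Estimate the power-line hum phase (radians) in successive windows,
    modeling the hum as cos(2*pi*f0*t + phase)."""
    n = int(fs * win_s)
    t = np.arange(n) / fs
    ref_cos = np.cos(2 * np.pi * f0 * t)
    ref_sin = np.sin(2 * np.pi * f0 * t)
    phases = []
    for start in range(0, len(signal) - n + 1, n):
        w = signal[start:start + n]
        i = np.dot(w, ref_cos)   # in-phase component
        q = np.dot(w, ref_sin)   # quadrature component
        phases.append(np.arctan2(-q, i))
    return np.unwrap(np.array(phases))

def best_alignment(recording_phase, archive_phase):
    """Slide the recording's phase track along the archived track and
    return the offset (in windows) with the strongest correlation."""
    r = recording_phase - recording_phase.mean()
    a = archive_phase - archive_phase.mean()
    corr = np.correlate(a, r, mode="valid")
    return int(np.argmax(corr))
```

Extract the phase track from the mystery recording with `hum_phase`, then run `best_alignment` against each archived regional track: the region (and time offset) with the sharpest correlation peak is your candidate location and recording time.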
Countermeasures, if you don't want your recording fingerprinted:
- Notch-filter out the power-line frequency and its first few harmonics.
- Bandpass-filter out the low end of the audio.
- Add in a small amount of hum of your own, with pseudo-random phase jitter (and still more phase jitter on the harmonics). Be sure to use a set of pseudo-random generators that they won't be able to identify and cancel out - for example, by using several of them to continuously adjust the amount of phase noise added, and so on.
- Jitter the sampling rate.
- Re-record it with deliberate injection of a larger amount of real power line hum at a different time and location, before releasing the recording. B-)
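A minimal sketch of the notch-and-decoy steps above, assuming a crude FFT notch and a decoy hum whose phase wanders as a random walk - the filter width, jitter step, and decoy amplitude are arbitrary placeholders:

```python
import numpy as np

def scrub_hum(signal, fs, line_f=60.0, harmonics=3, width_hz=1.0, rng=None):
    """Remove power-line hum from a recording and replace it with decoy hum.

    1. FFT-notch the line frequency and its first few harmonics.
    2. Add back a faint synthetic hum whose phase is jittered by a random
       walk, so the real grid's phase fingerprint is gone.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    for h in range(1, harmonics + 1):
        # Crude notch: zero all bins near the h-th harmonic.
        spectrum[np.abs(freqs - line_f * h) <= width_hz] = 0.0
    cleaned = np.fft.irfft(spectrum, n)

    # Decoy hum: line frequency with a slowly wandering (random-walk) phase.
    t = np.arange(n) / fs
    jitter = np.cumsum(rng.standard_normal(n)) * 0.002  # phase random walk
    decoy = 0.01 * np.sin(2 * np.pi * line_f * t + jitter)
    return cleaned + decoy
```

A real implementation would want proper notch filters rather than FFT-bin zeroing (which rings at segment boundaries), but the sketch shows the idea: kill the genuine hum, then plant a fake one with a phase history that matches no grid.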
Identifying edits in a recording consists of looking for a gross jump in the phase of the hum. Identifying the recording location from the pattern of small phase shifts (and other artifacts) in the power-line signal means finding a much smaller signal in a much larger amount of noise. I'm not yet convinced how doable it is. But with the above description of what I think they're doing, I expect a bunch of slashdotters will soon be playing with their audio cards, hacking up code to analyze recordings. B-)