This is the outcome of the government's 2016 budget which imposes huge savings on research and education.
You seem to have misspelled "cuts".
I have to admit that when I first read the headline, my mind processed it as
Exploitable Backhoe Accidentally...
I figured that some nitwit had decided that large construction machinery needed to be part of the Internet of Things, and that the expected outcome had come to pass.
Citing "medical privacy" for a wild animal really makes the person offering that excuse look like a fool.
The person who rejected the FOIA request 'signed' his name in Comic Sans. This is not a person who is concerned about looking like a fool.
I'm totally with you on the lack of naked-eye night-sky access and I definitely don't want to minimize the loss it represents to city-dwelling humanity. I live in a large city at the bright end of the Bortle scale (class 8 or 9), in the center of one of those whited-out you-can't-see-a-thing patches on the light pollution maps. From time to time I'm lucky enough to get away to a little cabin in the woods with impeccable dark skies, but the rest of the time I have to make do with viewing from the parking lot next to my apartment building.
That said - and the article mentions this, but it's worth reiterating - a surprising amount of the sky becomes accessible again for those of us with even basic digital SLRs (and even some of the more fully-featured point-and-shoots). Last January, the nominally-naked-eye comet Lovejoy (C/2014 Q2) was in the sky near the Pleiades. No hope of seeing it with my own eyes, but it was an easy target for just about any lens in my camera bag. It's trivial to capture stars down below ninth magnitude. There's a little bit of - a different sort of - magic to being able to pull so much of the night sky out of the muck.
I could probably do some moderately impressive things with binoculars, too, but I'm a bit concerned about what the neighbors would think.
Incidentally, the suggested camera settings provided in the CBC article (ISO 1600, 30 second exposure) may be a bit aggressive for very bright city skies, and will definitely show at least some star trailing. Don't be afraid to play around. My skies start to get too bright if I go beyond ISO 800, f/3.5, 5 s or equivalent. And a five-second exposure is close to the limit if you want to avoid perceptible star trails at a medium-wide focal length.
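For anyone who wants to compare settings quantitatively, here's a quick sketch of the arithmetic. It assumes the CBC settings use the same aperture as mine (the article's f-number isn't quoted above), so only ISO and shutter time are compared, in photographic stops:

```python
import math

def exposure_ev_offset(iso_a, time_a, iso_b, time_b):
    """Relative exposure (in stops) of setting A versus setting B,
    assuming the same aperture for both. One stop = a factor of two in light."""
    return math.log2(iso_a / iso_b) + math.log2(time_a / time_b)

# CBC's suggestion (ISO 1600, 30 s) versus my city-sky limit (ISO 800, 5 s):
stops = exposure_ev_offset(1600, 30, 800, 5)
print(f"{stops:.1f} stops brighter")  # about 3.6 stops, i.e. roughly 12x more light
```

Under bright city skies, that extra 3.6 stops mostly goes into recording skyglow, which is why the shorter, lower-ISO exposure can actually look better.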
What the hell does that mean?
If it's parabolic but really really long, "near-hyperbolic" would be a reasonable description -- that's not out of the ordinary for comets.
Presumably it means that its orbit is closed - elliptical - but is only very loosely gravitationally bound--perhaps even more so than most comets. In other words, its velocity is only just shy of escape velocity, hence near-parabolic. Yes, mathematically speaking, that means that its orbit must also be near-hyperbolic; an infinitesimal increase in velocity converts a parabolic path into a hyperbolic one (and an infinitesimal decrease in velocity converts a parabolic path into a long-period ellipse).
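The classification above comes straight from the sign of the specific orbital energy. Here's a minimal sketch of that test (the function name and the 1 AU example are mine, not from the article):

```python
import math

MU_SUN = 1.32712440018e20  # solar gravitational parameter, m^3/s^2

def classify_orbit(v, r, mu=MU_SUN):
    """Classify an orbit from speed v (m/s) at distance r (m) via the
    specific orbital energy eps = v^2/2 - mu/r:
    eps < 0 -> bound ellipse, eps = 0 -> parabola, eps > 0 -> hyperbola."""
    eps = v * v / 2 - mu / r
    if eps < 0:
        return "elliptical"
    if eps > 0:
        return "hyperbolic"
    return "parabolic"

r = 1.496e11                       # 1 AU in metres
v_esc = math.sqrt(2 * MU_SUN / r)  # escape (parabolic) speed at 1 AU, ~42 km/s
print(classify_orbit(v_esc * 0.999, r))  # just under escape: long-period ellipse
print(classify_orbit(v_esc * 1.001, r))  # just over escape: hyperbolic
```

The two sample calls illustrate the point in the comment: an infinitesimal change in speed around the parabolic threshold flips the orbit between a long-period ellipse and a hyperbola.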
No it's not. Weight energy and volumic energy are two different things. The article does not say which is which.
It's a good thing that the summary (didn't even have to click through to the article) indicates that it's using volumetric energy density for both:
"Sony is developing a new type of battery chemistry that can boost runtimes by 40 percent compared to lithium-ion batteries of the same volume. Sony's batteries use a sulfur compound instead of lithium compounds for the positive electrodes, reportedly allowing for much greater energy density. Sulfur batteries can also supposedly be made 30 percent smaller than traditional lithium-ion cells while maintaining the same run times."
Weight - and therefore energy density per unit mass - isn't mentioned or implied.
The grandparent's observation is spot on--the summary is indeed saying exactly the same thing in two different ways. If you can have the same runtime in 30% less volume, you can always get 40% more runtime with the original-sized package. To within a trivial rounding error, 140% and 70% are reciprocals; they're just saying "40% improvement in volumetric energy density".
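The reciprocal claim is a one-liner to check:

```python
# If the same runtime fits in 70% of the volume, then the original volume
# holds 1/0.70 times the energy. The runtime gain at constant volume is:
gain = 1 / 0.70 - 1
print(f"{gain:.1%}")  # 42.9%, which the article rounds down to "40%"
```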
Just for clarification from TFA, they did not disqualify over 1,000 candidates. What they found was: 67 Binary Stars and 3 Brown Dwarfs out of the 129 candidates they actually looked at.
That seems like an awfully small sample size to me, but hey I'm not a scientist.
Actually, if that represents a random selection from their initial pool of candidates - that is, if they didn't do any initial pre-sorting to enrich their selection for stars over planets - then that's a reasonable sample size. As long as their sample was random, it's actually the absolute number of stars in their sample that matters. The standard deviation in their estimate of the number of non-planets goes as roughly the square root of the number of non-planets in the sample. We'll say the square root of 67 is about 8, so there's an estimated error of plus-or-minus 8 out of 129--about 6%.
If, before an election, you do a telephone survey of 1000 people, you'll be able to estimate the election's outcome with about the same confidence whether the country has a hundred thousand or a hundred million voters. Essentially the same statistical principle.
If that sounds weird, try it with inanimate objects instead. If I pull 100 jelly beans from a large and well-shaken bag, and 50 happen to be red, then I'm going to be pretty confident that roughly half of all the jelly beans are red--no matter how big the bag is. If I pull a 100 planet candidates from the Kepler survey and 50 turn out to be stars, then I'm going to be pretty confident that roughly half of the planet candidates are stars--no matter how big the list of Kepler candidates is.
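The square-root estimate from a couple of comments up, plus the jelly-bean intuition, can be sketched in a few lines (the function name and the simulated bag are mine, for illustration only):

```python
import math
import random

def false_positive_estimate(n_bad, n_sample):
    """Estimated non-planet fraction with a rough one-sigma (Poisson) error bar."""
    return n_bad / n_sample, math.sqrt(n_bad) / n_sample

frac, err = false_positive_estimate(67, 129)  # 67 binaries out of 129 inspected
print(f"{frac:.0%} +/- {err:.0%}")            # roughly 52% +/- 6%

# The error bar doesn't care how big the "bag" is, only the sample size:
random.seed(1)
for bag_size in (1_000, 1_000_000):
    bag = [1] * (bag_size // 2) + [0] * (bag_size // 2)  # half "red"
    sample = random.sample(bag, 100)
    print(bag_size, sum(sample))  # both counts hover around 50, give or take ~5
```

Run it a few times with different seeds: the spread of the 100-bean counts is essentially identical for the thousand-bean and million-bean bags.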
If 40% of those university graduates are still overqualified by their mid-thirties, they've already been typecast by their experience in the 25-35 range.
That's certainly a problem with the data provided--it bundles together the fresh-out-of-school 25-year-olds with the decade-plus-in-the-workforce 34-year-olds. There's a lack of resolution. It could be that 40% of 25-year-olds and 40% of 34-year-olds are "overqualified". Or it could be that 60% in the 25-29 age group are overqualified, and just 20% of the 30-34 bracket.
Actually, that brings to mind another confounder to the interpretation of these data. As more young people get more years of formal education (3-year college diploma to 4- or 5-year bachelor's degree to 7-year bachelor-plus-master's degree) they enter the workforce later. A 25-year-old with a high school diploma might have been working for 7 years (and is also more likely to be working in a job for which they are not "overqualified" by their lower level of formal educational attainment). A 25-year-old with a master's degree might have graduated this summer and could still be job-hunting.
... an increasing number of university graduates are overqualified for their jobs.... 40 per cent of university graduates aged 25-34 were overqualified for their job.... The problem is bigger than that, because those young workers spent money, time, and resources to get those qualifications.
It could be a problem, but we're missing some information. This is looking at people aged 25-34. A lot of them are taking crappy entry-level jobs. A lot of them don't have any significant work experience, and have trouble breaking into their preferred fields. A lot of them have student loans and other financial obligations, and just need to take a job - any job - to keep food on the table and a roof overhead. (That, in itself, is another kettle of problems that I'm not going to go into right now.)
An important question is, then, how many of them are still overqualified by the time they're into the 35-44 age bracket? Was the extra education actually "wasted", or did they eventually come out ahead because they didn't have to drop out of the workforce later on to go back to school to get the education they missed in their twenties? Did their extra "unnecessary" knowledge help them move up the ladder faster than they would have without it? (I'm not looking for anecdotes - of which I am sure there exist examples to suit any preferred narrative - but rather real data.)
And that leaves aside the rather more philosophical question of whether or not it's generally a Good Thing to have more university-educated individuals in it, even if they don't need those degrees specifically as job training. Are universities now only vocational schools, and only of value to society in that context? If I can't cash in my degree for a high-paying job, is it worthless?
...but then the stupidity of taking off at less than 100% throttle to save a little bit of fuel at the expense of increasing risk is also a pretty dumb thing to do, engineering wise.
Taking off at less than 100% throttle means reduced acceleration, which reduces stress on the airframe (and passengers). It reduces wear on the engines and - more important - reduces the risk of turbine failure. It makes the aircraft easier to control (less unbalanced thrust) if it does lose an engine immediately before or after takeoff.
So...not just to save fuel.
There is no diffraction...20,000 miles is nothing. A laser beam that measures several microns wide at its origin will still be several microns wide at its destination.
This is fundamentally incorrect. Even under ideal conditions laser beams will diverge in proportion to their wavelength and in inverse proportion to their narrowest diameter. Effectively, the laser light interferes with itself - diffracts - as it passes through the aperture from which it emerges. At visible or near-infrared wavelengths, a "collimated" 10-micron-wide beam will be more than 30 meters across at 1 km from its source. (I confess to doing the math in my head, but the order of magnitude is about right.) At 20,000 miles, the beam will be more than 100 km across. Wikipedia has the formulas if you'd like to play with them: beam divergence.
You can improve performance by increasing the aperture (beam diameter) or decreasing the wavelength, but there are limits. Beam divergence gets a hell of a lot better with a 1-centimeter (or 1-meter) beam rather than a 10-micron one, but for the same total power the wider beam starts out with about one millionth (or one ten-billionth) as much power per unit area at the aperture.
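The divergence numbers above are easy to reproduce with ideal (diffraction-limited) Gaussian-beam optics; real beams only do worse. A sketch, using a 1-micron near-infrared wavelength and the distances from the comment:

```python
import math

def beam_diameter(wavelength, waist_radius, distance):
    """Far-field diameter of an ideal Gaussian beam at the given distance.
    Half-angle divergence: theta = wavelength / (pi * waist_radius)."""
    theta = wavelength / (math.pi * waist_radius)
    return 2 * distance * theta

lam = 1e-6  # 1 micron, near-infrared
print(beam_diameter(lam, 5e-6, 1_000))  # 10-micron beam at 1 km: ~127 m across
print(beam_diameter(lam, 5e-6, 3.2e7))  # same beam at ~20,000 miles: ~4,000 km
print(beam_diameter(lam, 0.5, 3.2e7))   # 1-metre beam at the same range: ~41 m
```

Note how dramatically the 1-metre aperture wins at range: the spot shrinks from thousands of kilometres to tens of metres, which is exactly why serious laser-weapon proposals involve large optics.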
This isn't to say that space-based anti-satellite lasers aren't possible, but your assumptions about the behavior and performance of lasers over long ranges (and the associated technical challenges) are not grounded in adequate physics knowledge. The Soviets took a stab at launching an anti-satellite laser weapon back in 1987. Polyus weighed 80 tons, required a massive booster, used a 1-megawatt carbon dioxide laser, and was still only intended for low-orbit targets. (And suffered a launch failure, but that's not important.)
If I remember correctly, the noise floor of the previous instrument was approximately the level of the signal they were looking for. A better detector may help.
Indeed. It's hard to overstate the sensitivity of these instruments, or their vulnerability to noise. To take one example, here's an ArXiv preprint calculating that the original LIGO detectors would need to be physically shielded from tumbleweeds, since the impact of a wind-borne tumbleweed on the building exterior (100 feet from the detector) could produce a vibrational or gravitational transient large enough to register as a spurious gravitational wave signal.
Neither the summary nor the linked article provide the necessary statistics to tell us how well this algorithm actually works. We're told it has a 68% success rate, which presumably means that 68% of the time it gives the same answer as de Vries (the human subject/programmer).
The problem is, we're not told anything about the sensitivity or specificity of the technique. What is the rate of false positives? False negatives?
Let's say that de Vries typically finds 1 out of 3 (33%) of the profile pictures "attractive". His computer could score 67% accuracy just by rejecting every single picture. (Such an algorithm would have zero sensitivity, but perfect specificity, and a terrible false negative rate. The "reject-everything" algorithm also scores better the more picky de Vries gets.)
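The reject-everything baseline is easy to demonstrate. A minimal sketch, with a hypothetical ground truth in which 1 profile in 3 is "attractive" (the numbers are mine, not from the article):

```python
def evaluate(predictions, labels):
    """Accuracy, sensitivity (true positive rate), and specificity
    (true negative rate) for binary predictions."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    tn = sum(not p and not l for p, l in zip(predictions, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    accuracy = (tp + tn) / len(labels)
    sensitivity = tp / pos if pos else 0.0
    specificity = tn / neg if neg else 0.0
    return accuracy, sensitivity, specificity

labels = [True, False, False] * 100       # 1 in 3 pictures is "attractive"
reject_all = [False] * len(labels)        # the trivial "reject everything" model
acc, sens, spec = evaluate(reject_all, labels)
print(acc, sens, spec)  # ~0.67 accuracy, 0.0 sensitivity, 1.0 specificity
```

A headline "67% accuracy" can thus be matched by a model that has learned nothing at all, which is why the sensitivity and specificity numbers matter.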
This sort of story is only interesting if it includes specific information about where and how his algorithm fails (and succeeds).
Over the past ten years, the only area of education not dominated by women has been STEM, and even there men are far behind women in biology and related sciences, and in math, leaving really only computer science, the engineering fields, and physics to men.
Looking at the 2012/13 numbers, women do indeed significantly outnumber men as recipients of bachelor's and master's degrees. Women received about 1,052,000 bachelor's degrees to men's 787,000. (That's 57% to women.)
The source of that disparity - about 265,000 degrees - is interesting. About a quarter of the difference (a surplus of 61,000 degrees) is in education--principally teaching degrees. Another third (a surplus of over 84,000 degrees) is in nursing. Another quarter (another surplus of about 61,000 degrees) comes from psychology. There are good-sized surpluses in social work and other social and community services (14,000), in family and consumer sciences (18,000), and in visual and performing arts (21,000). That's about a quarter million degrees right there.
In other words, a lot of that surplus is 'job training'- or 'job certification'-type degrees, mostly in areas that are traditionally associated with soft, squishy notions of womanhood, and often in occupations associated with relatively lower salaries.
Presumably he made this for a class, and if so, why didn't that teacher stand up for him and tell them it was for his class?
And if it wasn't for a class or club or something, that does admittedly seem a bit suspicious.
He brought it to school to show the teacher in his engineering class, and then kept it out of sight in his bag. The alarm on the clock beeped during an English class later in the day, so he showed the project to his English teacher after class by way of explanation.
The only obviously wrong thing he did was (presumably inadvertently) let the alarm go off during a class. If he were a kid with a cell phone, the teacher would confiscate the phone for the rest of the class and possibly assign some other standard, trivial punishment. And that would be fine. Instead, we have a hopelessly irrational overreaction, almost certainly enhanced by the kid's race.
In the future, you're going to get computers as prizes in breakfast cereals. You'll throw them out because your house will be littered with them.