A friend of mine was able to pick up a cheap used PDP-8 in the 1990s with many of the bells and whistles (paper tape reader/writer & teletype, etc), and a full set of software. I remember toggling in the bootstrap loader to start the whole bootstrapping of the operating system. Ah...memories.
Yes, you can't tell the difference in the spectrum of the light when you are staring at the light directly. So when looking at objects whose color is purely emissive, like TVs and monitors, you can represent the entire gamut of color that the human eye can see by combining three primary colors.
But this breaks down when you are looking at objects that reflect that light, because the way materials reflect light absolutely is wavelength specific. If two lights appear to be the exact same color when you stare at them (or shine them on a white wall) but have different spectra, then objects illuminated by those two lights can look very different, because the objects absorb and reflect those spectra differently. A normal person won't be able to quantify why they look different, but they will know something is "off" and may describe the lighting in vague terms like mood or character.
So no, you can't fake a lighting spectrum with just 3 primaries, which is why producing good LED lighting has been much harder than producing good LED monitors.
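A toy calculation makes the metamerism point concrete. The sensitivity curves, spectra, and reflectance below are invented four-band numbers, not real colorimetric data; they just demonstrate two lights that produce identical three-channel responses yet render a reflective surface differently:

```python
# Toy metamerism demo: two light spectra that look identical to a
# 3-channel eye, yet illuminate a colored object differently.
# All numbers are made-up illustrative values, not real colorimetry.

# Rows = red/green/blue sensor sensitivity across 4 wavelength bands.
SENS = [
    [0, 0, 1, 1],   # "red" channel
    [0, 1, 1, 0],   # "green" channel
    [1, 1, 0, 0],   # "blue" channel
]

def tristimulus(spectrum):
    """What a 3-channel eye reports for a given 4-band spectrum."""
    return tuple(sum(s * e for s, e in zip(row, spectrum)) for row in SENS)

light_a = [2, 2, 2, 2]   # broadband ("full spectrum") light
light_b = [3, 1, 3, 1]   # spiky light, tuned to match light_a's response

# Staring at the lights (or a white wall): identical appearance.
assert tristimulus(light_a) == tristimulus(light_b)

# A surface that absorbs some wavelength bands more than others.
reflectance = [1.0, 0.25, 0.5, 1.0]

seen_a = tristimulus([r * e for r, e in zip(reflectance, light_a)])
seen_b = tristimulus([r * e for r, e in zip(reflectance, light_b)])
print(seen_a, seen_b)   # (3.0, 1.5, 2.5) (2.5, 1.75, 3.25)
```

The two lights are indistinguishable on their own, but the object's color shifts between them, which is exactly why three emissive primaries can't stand in for a full lighting spectrum.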
For the entire history of the human race nearly all the lighting we have encountered has been black-body radiation, and a black-body spectrum will always look better and more natural to us than other light spectra. So fluorescent and sodium vapor will finally die off as LEDs become less expensive, but variations in color temperature will never go away. Warm lights will always feel cozy and intimate, just as campfires and candles always have. Cool light will always feel a bit dreary, like an overcast day. And daylight spectra will always feel bright and cheerful. Opinions on whether a living room should be bright and cheerful or warm and comforting may vary. But unless we somehow stop experiencing natural lighting altogether and evolve into Morlocks, variants of black-body light will retain their historical associations.
These are really cool. But it did make me chuckle when the article talked about how current LED candelabra bulbs in particular are quite ugly. Candelabra bulbs were made to (poorly) mimic the shape of a candle flame, and now we are attempting to mimic that imitation because we have gotten used to the way it looks.
If this is a case of NIH, then it is reinventing the framebuffer, not X11, Wayland or Mir. And it makes sense to do so, since the kms/drm interfaces provide better performance and more features than fbdev.
I think you're misguided. The criteria for patentability have never been bad, and have actually gotten worse since the recent change to "first to file".
Yes they have been, and your following paragraphs demonstrate clearly why this is so.
The problem is it's impossible for anyone to know what can or cannot be patented without spending hundreds of thousands of dollars hiring an entire team of lawyers to search through the back catalogue of patents and inventions and court precedents.
The patent office does not have enough staff to do proper research while a patent is being filed. If they did proper research, they would only be able to approve a handful of patents per year with the number of employees currently working at the PTO.
The problem with the current system is that the PTO has taken the approach of only rejecting patents if they can find documented evidence that someone has done the exact same thing before. If there is a single independent claim for which they can't find exact prior art in a timely manner, then they approve the patent, regardless of how similar it is to other prior art. They deliberately ignore the obviousness of the patent because they don't want to have to defend subjective decisions against appeal.
The recent Supreme Court rulings have forcefully asserted that this is not acceptable. The law clearly states that obviousness is one of the criteria for patentability and therefore the USPTO and courts must take that into consideration when deciding patentability. Furthermore, they have stated that if the improvement that an invention makes on prior art is not patentable by itself, then the invention is not patentable. This is a huge decision because it rules out a ton of "on a computer" and business model patents that combined things that weren't patentable on their own into something that was patentable in aggregate. This second issue is likely to have an even bigger impact as it can be applied more objectively than the first which increases the chances that the USPTO will embrace it. Furthermore, if anything these changes decrease the amount of research the PTO has to perform for an average application.
It simply isn't possible for a small company to defend themselves at all; their only viable option is to settle out of court, which inevitably means nobody actually knows whether or not the patent is valid. After years of watching this issue closely I have never seen a small company defend themselves in court. Some have tried, but every single one gives up and settles out of court halfway through the process.
Agreed, which is why we need these reforms. They proposed two important changes. The first strictly limits how much information the plaintiff can subpoena during discovery. This prevents fishing expeditions and keeps discovery from turning into a war of attrition, which will make defending oneself against patent claims faster and less expensive. The second allows the defendant to challenge the validity of the patent before discovery has taken place, potentially avoiding the vast majority of the expense of defending oneself if the patent is determined to be invalid under the new post-Alice standards.
Personally I don't see how any reform could possibly fix the problem. There are certainly ways to improve the situation but I don't think anything can truly fix it. I've never seen anybody suggest a viable solution.
I have no illusions that these changes will magically make the patent system perfect. In fact I expect the USPTO and the lower courts to continue to be slow to adopt them, but they address the two biggest issues with the patent system today - the low standards for patents and the cost of defending against them - which is more than I can say about any other proposed changes to the patent system in the last 50 years.
Apart from the loser-pays part (which I dislike as well), the rest of the reforms were about limiting the ability of either party to draw out the pre-trial proceedings, which wouldn't harm legitimate small plaintiffs.
Granted, the biggest problem with the patent system has been that the criteria for patentability have been so loose, and the recent Supreme Court rulings will certainly do more to fix that root cause than the recent patent reform bills. Hopefully going forward these new rulings will improve the quality of patents approved and upheld in court, which is by far the single most important reform needed in the long run.
But in the meanwhile there are more than 20 years of bad patents that have been granted, and the cost of defending against a patent lawsuit is still far greater than the cost of settling. We need to make it less expensive to challenge existing patents if we don't want them to continue to be a burden for the next 20+ years. That is exactly what the reform bills were about. They were designed to be complementary to the Supreme Court rulings, addressing different parts of the problem.
Yeah, but the cash registers don't record anything. That eliminates all the automated tracking of your purchases, which is 99% of the problem. It is still possible to track what you buy through manual investigation, but that would be true even without the ATM info (security camera footage correlated with register records, etc).
$12 is cheap for something that lasts years (with occasional use) and prevents you from going deaf at rock concerts, while still allowing you to hear the music like it was supposed to sound, instead of sounding like you are underwater. These are not audiophile pseudoscience garbage, the frequency response of the earplugs is scientifically quantifiable, and the difference in sound quality is immediately obvious to anyone who tries them, not just idiots with "golden ears" who can hear differences that don't exist. Like the AC posted, these aren't the only brand, but AFAIK they all are pretty much in the same price range until you get into custom fit professional models, at which point you are paying for comfort more than quality.
Yeah, and good earplugs like these have a nearly flat frequency response, which makes it easier to have a conversation in a loud room, unlike foam earplugs or headphones that muffle the sound in addition to attenuating it.
Well, it was a little more than that. For some months, automounting of USB drives was broken for every combination of X11 display manager and window manager except GDM and GNOME 3, because systemd's udev apparently handles that stuff differently than the old udev.
And this is why people get upset about systemd. I actually like the idea of systemd as a boot manager. Eliminating pointless boilerplate daemon scripts, exposing all sorts of cool kernel process-management features, and using file-handle activity to manage the order in which daemons are launched (rather than explicit declaration of daemon dependencies) dovetails very nicely with the Unix philosophy that everything is a file.
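That launch-on-file-activity idea can be sketched in a few lines. This is a toy illustration in Python, not systemd's actual sd_listen_fds API: the "init" side creates the listening socket up front, and the daemon code only runs once the socket shows activity:

```python
import select
import socket

def init_listen(host="127.0.0.1", port=0):
    """The 'init' side: create the listening socket before any daemon runs."""
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen()
    return srv

def activate_on_demand(srv, daemon_main):
    """Block until the socket shows activity, then 'launch' the daemon,
    handing it the already-open socket (the essence of socket activation)."""
    select.select([srv], [], [])     # readable => a client has connected
    return daemon_main(srv)

def echo_daemon(srv):
    """A trivial daemon: accept one client and echo its data uppercased."""
    conn, _ = srv.accept()
    data = conn.recv(1024)
    conn.sendall(data.upper())
    conn.close()
    return data
```

Because the kernel completes the TCP handshake against the listen backlog, a client can connect and send before the daemon ever starts; nothing is lost, which is what lets init defer launching daemons until they're actually needed.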
But it has become a sprawling, feature-creeping monster, and I don't like that. I don't like that the developers put on false airs about how they aren't forcing you to use the other 68 daemons under the systemd umbrella, while making design decisions that make it next to impossible for distros to deploy anything but an all-or-nothing solution. I don't like how they are unilaterally making compatibility-breaking system-level decisions that affect everyone without giving adequate consideration to the rest of the ecosystem. That sort of attitude and approach is only going to cause more problems in the future, not fewer, which makes me very wary of getting on the systemd train, even though I like the technical core.
Like you said, this is a beta distribution, so as a user I'm not upset that it was broken for a while. I'm upset at the undue upheaval that one project is causing across the entire Linux ecosystem.
From the article, most of the spending goes to things that are beneficial to society as a whole, not just the NSA. These include K-12 funding for science fairs, math clubs, and STEM summer camps. Unless the NSA is influencing these in harmful ways, such as pushing ideology beyond the normal "if you do well in school, you could do cool spy work for us" recruiting, I don't see a problem with taking their money. The same goes for the research grants and conferences, which all result in publicly published fundamental research that helps the entire cryptographic and big-data communities. The only programs I would have a problem with are classified research and the sabbaticals to do classified work at the NSA.
No, you could use a conductive rail, like a subway, and a rack-and-pinion system to move the elevator. The rack and rail would add a fair bit more total weight to the building compared to a cable. But more importantly, the motors would have to be much, much more powerful. Modern elevator systems have a counterweight balanced on the other side of that cable, which means the motor only has to overcome friction plus the small difference in weight between the elevator and the counterweight (which varies with the current payload). The motor on an elevator like Noah is suggesting would have to provide enough force to counteract the entire weight of the elevator + payload + motor + friction, which is at least an order of magnitude more than a traditional elevator.
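A back-of-the-envelope comparison makes the difference concrete. All the masses below (2000 kg car, 800 kg payload, etc.) are illustrative assumptions, not specs for any real elevator:

```python
# Rough force comparison: counterweighted (traction) elevator vs. a
# rack-and-pinion climber that must carry its own weight.  All numbers
# are made-up illustrative values; friction is ignored in both cases.
G = 9.81                             # m/s^2

car_mass = 2000.0                    # kg, empty car (assumed)
payload = 800.0                      # kg, passengers (assumed)
counterweight = car_mass + 400.0     # kg, balanced for ~half load (common practice)
motor_mass = 300.0                   # kg, climber's onboard motor (assumed)

# Traction elevator: the motor only lifts the *imbalance* between the
# car side and the counterweight side.
traction_force = abs((car_mass + payload) - counterweight) * G

# Rack-and-pinion climber: the motor must support the full weight of
# the car, the payload, and itself.
climber_force = (car_mass + payload + motor_mass) * G

print(f"traction: {traction_force:.0f} N, climber: {climber_force:.0f} N")
print(f"ratio: {climber_force / traction_force:.1f}x")
```

With these invented numbers the climber's motor needs roughly 8x the force of the traction motor, and that gap only grows the better the counterweight is matched to the typical load.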
No, good scientists understand significant digits. As far as geological epochs go, the time elapsed between the start of the industrial revolution and the start of the nuclear age is insignificant. Furthermore, while the technology began with the industrial revolution, its impact didn't reach a global environmental scale until later. We don't mark other geological boundaries at the point where precursors to change appeared; we mark them where the change became significant. If you look at graphs of human energy or CO2 output, the knee in the curve does occur around the mid-1900s. The fact that there happens to be an easily observable geological marker from that time makes it a convenient dividing point, and as good as any of the other arbitrary dates picked to divide otherwise well-distinguishable geological epochs.
If anything, I would argue they risk jumping the gun, not setting the date too late, as there may very well be a much bigger global change in the next tens of thousands of years, for which the last millennium will be regarded as just a precursor.