Is that the new word for 'adult?'
But you do nonetheless. My current machine was bought for one reason - price - and lacks it. When I've built my own systems in the past I have always used it. Scoping out parts to build a new one, I see the price of sane memory has only gotten further out of line than I remember.
This is one aspect of a market where the buyer does not understand the product well enough to make intelligent choices. If computer buyers understood the technology, at least 70% of them would insist on ECC, and as a result economy of scale would have eliminated the price premium long ago. Instead, manufacturers continue to skimp a few pennies on the RAM by default, creating an economy of scale advantage in the other direction, which only reënforces the bad allocation and ensures it continues.
Instead of ECC memory they should call it 'sanity-checking memory.' Maybe then people would understand what it is enough to realize they want it. But since no one in particular stands to make a windfall by doing it, no one promotes it.
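For the curious: real ECC DIMMs use a single-error-correct, double-error-detect code over each 64-bit word, but a toy Hamming(7,4) code in Python (function names mine, purely illustrative) shows what the "sanity check" actually buys you: the extra parity bits let the memory controller not just notice a flipped bit but pinpoint and repair it.

```python
def encode(d):
    """Hamming(7,4): 4 data bits -> 7-bit codeword (positions 1..7).

    Parity bits sit at positions 1, 2 and 4; each covers the
    positions whose 1-indexed binary representation includes it.
    """
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def correct(c):
    """Recompute the parity checks; the syndrome spells out the
    1-indexed position of a single flipped bit (0 means clean).
    Fix it and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s4
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Flip any single bit of a codeword and `correct()` still hands back the original data; that is the whole pitch, done in hardware on every memory access.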
Still not what I want on my system. I don't really care how long it takes to boot; I just want to make sure that when it's finished it's really finished. Systemd in so many ways copies Windows concepts instead - like how they make it supposedly boot faster by rushing along to draw a GUI before things are actually ready to use.
I'm not saying systemd is as bad as Windows - and the massive improvements in boot speed are not all illusory! But they do come at the cost of reliability and correctness, and that's simply not a good tradeoff for people using the OS in a traditional manner.
I don't think people understand the Unix philosophy. They think it's about limiting yourself to pipelines, but it's not. It's about writing simple robust programs that interact through a common, relatively high level interface, such as a pipeline. But that interface doesn't have to be a pipeline. It could be HTTP Requests and Responses.
The idea of increasing concurrency in a web application through small, asynchronous event handlers has a distinctly Unix flavor. After all the event handlers tend to run top to bottom and typically produce an output stream from an input stream (although it may simply modify one or the other or do something orthogonal to either like logging). The use of a standardized, high level interface allows you to keep the modules weakly coupled, and that's the real point of the Unix philosophy.
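That idea can be sketched in a few lines of Python (all names here are hypothetical, just for illustration): the actual logic is one small stream transformer, and whether it fronts a Unix pipeline or an HTTP response is a thin adapter. The modules couple to the interface, not to each other.

```python
from http.server import BaseHTTPRequestHandler

def transform(line: str) -> str:
    """The program's entire logic: one line in, one line out."""
    return line.upper()

def run_pipeline(lines):
    """Pipeline flavor: consume an input stream, yield an output stream,
    exactly as a filter between two pipes would."""
    for line in lines:
        yield transform(line)

class ShoutHandler(BaseHTTPRequestHandler):
    """HTTP flavor: the same transform() behind a request/response
    interface instead of stdin/stdout."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode()
        out = "\n".join(transform(l) for l in body.splitlines())
        self.send_response(200)
        self.end_headers()
        self.wfile.write(out.encode())
```

Swap `transform()` for anything else and neither adapter changes; that weak coupling through a standard interface is the point being made above.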
I think he's off his meds again.
Unfortunately that's not quite true. You *can* configure systemd to spit out text logs as well as the binary ones, but that is a delayed process, so in the one case where you MOST want text logs (where a crash has occurred with the file open) it's absolutely worthless.
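For reference, the knob in question lives in journald's config; a minimal excerpt (this assumes a separate classic syslog daemon such as rsyslog is installed to do the plain-text writing):

```ini
# /etc/systemd/journald.conf
[Journal]
# Keep the binary journal, but also hand each message to a classic
# syslog daemon, which appends to plain-text files as messages arrive.
ForwardToSyslog=yes
```

Note this forwards messages as they pass through journald, so the objection above stands: anything buffered at the moment of a hard crash never reaches the text file either.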
If your application program has a flaw, it's probably not a huge deal. Maybe it crashes occasionally. You save often, you have autosave, it's not a big deal.
But a system component that can crash the system, render it unbootable, hand control to a hostile third party, etc - it's much more important in that case to keep things clean and proper to keep the machine itself stable.
Part of the disconnect between the Sysd cabal and the traditionalists here is about what we mean by the machine. We are often running Linux on bare metal as our workstation. From what I have been told, they typically run it in virtual machines on server farms instead, and use Apple workstations. So from their point of view, it is just another application, and it shouldn't be a big deal to restart it occasionally - especially after they put so much work into improving boot times. But from our point of view, we don't care much about fast boot times; we want a stable system that doesn't need to be rebooted all the time.
What, you are surprised? You expected accuracy in a slashdot blurb?
It depends on your design goals.
In Asimov's story universe, the Three Laws are so deeply embedded in robotics technology they can't be circumvented by subsequent designers -- not without throwing out all subsequent robotics technology developments and starting over again from scratch. That's one heck of a tall order. Complaining about a corner case in which the system doesn't work as you'd like after they achieved that seems like nitpicking.
We do know that *more* sophisticated robots can be designed with more subtle ethical systems -- which is another sign of a robust fundamental design. The simplistic ethics is what subsequent designers get "for free" when they use an off-the-shelf positronic brain to control a welding robot or bread-slicing machine.
Think of the basic positronic brain design as a design framework. One of the hallmarks of a robust framework is that easy things are easy and hard things are possible. By simply using the positronic framework the designers of the bread slicing machine don't have to figure out all the ways the machine might slice a person's fingers off. The framework takes care of that for them.
I don't think you've really grasped Apple's design sensibility. Job one for the designers is to deliver a product that consumers want but can't get anywhere else.
The "camera bulge" may be a huge blunder, or it may be just a tempest in a teapot. The real test will be users' reactions when they hold the device in their hand, or see it in another user's hand. If the reaction is "I want it", the designers have done their job. If it's "Holy cow, look at that camera bulge," then it's a screw-up.
The thinness thing hasn't been about practicality for a long, long time; certainly not since smartphones got thinner than 12mm or so. There have always been practical things they could have given us other than thinness, but what they want is for you to pick up the phone and say, "Look how thin they made this!" The marketing value of that is that it signals that you've got the latest and greatest device. There's a limit of course, and maybe we're at it now. Otherwise we'll be carrying devices in ten years that look like big razor blades.
At some point in your life you'll probably have seen so many latest and greatest things that having the latest and greatest isn't important to you any longer. That's when you know you've aged out of the demographic designers care about.
Install iTunes somewhere, sign up for an account (you can do so without providing a credit card number), and download the album. Apple has been selling music DRM free for the last several years, so it's just standard AAC. Once you have it, remove your account, delete iTunes, and add the music to whatever music program you prefer to use.
Unless, of course, you live in Canada, where copying music from a friend is still perfectly legal.
Sorry, forcing a download of an entire album is *not* giving you an option that "you don't have to tune into". This is not you giving the kids an album you like, this is you strapping them to a chair to listen to it à la "Clockwork Orange". If everyone got an email saying "Click for a free download of the album!" there would be no complaints. (Mockery, perhaps, but not complaints.)
Except this is pretty much exactly how the system was set up.
In "releasing" the album, Apple pretty much just added a database entry for every user on iTunes to say that they had already purchased the album. It was then supposed to show up in your iTunes library as "in the cloud", with an option to download it.
Nobody was forced to download the album. The only way you'd download it without needing to do so specifically is if you had previously turned on the option to automatically download all new iTunes purchases (which defaults to off). And the only way you'd have to worry about using cellular data for this is if you had the option to download iTunes Music purchases over mobile enabled as well (otherwise, it would wait until you're on WiFi). So yeah -- this is completely a tempest in a teapot from people who don't like U2 seeing a free album available for download showing up in their libraries.
Hopefully Apple have learned their lesson. It was a publicity stunt, and while it upset some people, here we are talking about it. I don't think it went off the way they were hoping it would, and hopefully they've learned some lessons in the process.
Here in Canada at least, it appears the setup for this album didn't work for a very large number of users. I know in my case, the U2 album did not show up on my iPad as it was supposed to, nor did it show up in any of my iTunes libraries. And I do have the auto-download option enabled. In order to get the album, I had to go into iTunes and find the section that shows all your existing purchases, and then select the "Not on This Device" list, and only then could I download the album. And looking at the album reviews on iTunes Canada, it seems that I was hardly the only person to experience this -- nearly every review when I checked last night was from people trying to figure out how to get their "free" album. I haven't seen this level of complaints outside of Canada, so I'm assuming either a) something screwed up with the iTunes Canada edition of the album's launch, or b) iTunes Canada did something different in order to not run afoul of some legislation (although I can't for the life of me guess what legislation that might be). This situation seems to have been lost in the noise of everyone else complaining about getting a free album, so I haven't heard much commentary on it.
You paid for this music when you bought the phone.
I'd argue that we do try to write about the future, but the thing is: it's pretty damn hard to predict the future.
The problem is that if we look at history, we see it littered with disruptive technologies and events which veered us way off course from that mere extrapolation into something new.
I think you are entirely correct about the difficulty in predicting disruptive technologies. But there's an angle here I think you may not have considered: the possibility that the cultural values and norms of the distant future might be so alien to us that readers wouldn't identify with future people or want to read about them and their problems.
Imagine a reader in 1940 reading a science fiction story which accurately predicted 2014. The idea that there would be women working who aren't just trolling for husbands would strike him as bizarre and not very credible. An openly transgendered character who wasn't immediately arrested or put into a mental hospital would be beyond belief.
Now send that story back another 100 years, to 1840. The idea that blacks should be treated equally and even supervise whites would be shocking. Go back to 1740. The irrelevance of the hereditary aristocracy would be difficult to accept. In 1640, the secularism of 2014 society would be distasteful, and the relative lack of censorship would be seen as radical (Milton wouldn't publish his landmark essay Areopagitica for another four years). Hop back to 1340. A society in which the majority of the population is not tied to the land would be viewed as chaos, positively diseased. But in seven years the Black Death will arrive in Western Europe. Displaced serfs will wander the land, taking wage work for the first time in places where they find labor shortages. This is a shocking change that will resist all attempts at reversal.
This is all quite apart from the changes in values that have been forced upon us by scientific and technological advancement. The ethical issues discussed in a modern text on medical ethics would probably have frozen Edgar Allan Poe's blood.
I think it's just as hard to predict how the values and norms of society will change in five hundred years as it is to accurately predict future technology. My guess is that while we'd find things to admire in that future society, overall we would find it disturbing, possibly even evil according to our values. I say this not out of pessimism, but out of my observation that we're historically parochial. We think implicitly like Karl Marx -- that there's a point where history comes to an end. Only we happen to think that point is *now*. Yes, we understand that our technology will change radically, but we assume our culture will not.