
Comment Re:Robots First, then Humans? (Score 1) 352

" send autonomous robots to various locations in space to create infrastructure using local resources with advanced manufacturing technology, such as 3D printing"

So we send robots to terraform and prepare a new habitat for humans.
Eventually, after many years, the robots send us a message that says "Everything is ready. We are waiting to meet you all for dinner."

Anyone see a problem with this?

Hey, if it was good enough for Columbus and the European powers in their colonization of America and Australia, "send robots first" is surely good enough for our colonization efforts for the moon and Mars...

Comment Re:You have it wrong. (Score 2) 323

If you want to understand whether the school should be blamed, ask yourself: would the school be blamed if the person were an adult? No. Of course not; that would be silly. The school had as much to do with the activity as the ISP serving the school.

They didn't control access to the computers in such a way as to prevent this kind of usage by students. In the case of an adult in the same situation, the school is guilty of, at a bare minimum, presenting an attractive nuisance in the form of a computer that could be used for this.

But this is all hemming and hawing about "how can we blame someone other than the kid for the actions of the kid?", which is pretty stupid on the face of it. In any other bullying situation, such as assault and battery, you don't blame the parents; you send the kid to juvie, and they get to go to school there, with all of the other genetic sociopaths.

Comment You have it wrong. (Score 2) 323

If your kids happen to make money, parents control that money until they are 18. They should suffer the liability as well.

You can't have one without the other. Either children are responsible or they are not.

You have it wrong.

The page was created on a school computer while the school was acting in loco parentis for the child. If anyone should be held responsible *instead of the child, whose fault it is*, it would be the school, along with a charge of "contributing to the delinquency of a minor" for the "friend" who helped them create the page.

At worst, the parents are guilty of "contributory negligence" for not being software engineers.

Comment Re:I can answer these, as I was there. (Score 1) 240

"10.5 ran on Intel. Making Classic run, but only if you were using PPC hardware, was not an option, due to the large number of kernel and interface changes necessary to support Intel."

Back up for a moment, though, and you'll recall that 10.4 supported Intel, which I believe would mean that those changes had already been made, wouldn't it? The way you guys avoided having to do a rewrite of Classic was by maintaining separate builds for PPC and Intel Macs during Tiger's run, with the PPC builds featuring Classic support and the Intel builds lacking it. Couldn't that pattern have been carried forward into 10.5, given that PPC hardware was not long for this world anyway? Instead, Apple merged the builds in 10.5 (which only lasted for that one version), which necessitated dropping Classic for the reasons you said.

The 10.4 Intel builds were "a" versions. The intent was always to integrate them, but they were run as a separate "build train". This is the same way that development versions are handled, and it was the same way that Mac OS X Server versions used to be handled.

To carry it along, it was necessary to build and designate an entire "build fleet" - systems in a server room used to do the builds - and to staff build engineers and integration engineers, and to duplicate roles between Intel and PPC development, etc. So there was a significant cost associated with it, including that things were not being "built fat". This meant that if you made a change in the PPC code that happened to break the Intel build - and you personally didn't have an Intel system to do test builds on, let alone to run the code to make sure it still worked - someone had to clean it up for you so that it worked on both Intel and PPC. It was incredibly burdensome, and it meant that those people, who were bright people, were stuck working on that rather than the next great thing.

Technically, it could have been done, but it didn't make economic sense, and it didn't make resource sense (at the time, in Silicon Valley, the feeling was that Google was hiring all available talent so nobody else could hire them, regardless of whether or not Google actually had anything useful for them to do with their time).

I'd agree that it was an economic decision - partly about getting people to stop buying PPCs, and to stop holding off on buying Intel when a purchase was going to happen anyway, while there was still a choice of one or the other - but it was also a resource and process overhead problem.

Comment Re:I can answer these, as I was there. (Score 1) 240

If Intel support for Classic was the issue, then why not ditch Classic after 10.5, instead of 10.4? After all, PowerPCs were still supported for 10.5, so Classic could have been kept around without needing to port it to Intel.

10.5 ran on Intel. Making Classic run, but only if you were using PPC hardware, was not an option, due to the large number of kernel and interface changes necessary to support Intel. It would basically have required a complete rewrite of Classic. At the time there were just two people supporting Classic against changes to the main OS (Alex Rosenberg and one other person); it would have required dedicating a team of people to work under them in a non-protected-mode environment in order to do the work.

Most of the original Classic team had already escaped to the iPod division to avoid having to learn how to deal with the "inconvenience" of memory protection (think "thePort"), had buckled down and learned to live with protected-mode restrictions, or had left Apple for other companies (Adobe, etc.).

It would have required a lot of hiring to deal with the amount of change that Intel brought to all of the Mach interfaces and other parts of Mac OS X, and frankly, it would have taught the new hires bad/obsolete programming habits and provided little value to the company, while at the same time pushing the trade-off between the Intel and PPC value propositions in the wrong direction, when both types of systems were still for sale. Not only would this have caused new PPC sales to cannibalize Intel sales, it would also have stretched out the support timeline for the PPC by another two years.

All in all it would have sucked, a lot, from many perspectives.

Comment Re:I can answer these, as I was there. (Score 1) 240

Not trolling, just a serious question: why can I compile any software from the early '90s to today on any Linux distro I've attempted, without issue? It sounds like I shouldn't be able to, but it takes less than 20 minutes of finagling to compile ancient POVray software from the '90s on literally any current Linux distro.

Technically, you can't. A lot of people were still using pre-ANSI C compilers in the early 1990s, so the code will be K&R C rather than ANSI C.
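
To make the dialect gap concrete, here's a minimal sketch (my own, not from any actual archive) contrasting an old-style K&R function definition with its ANSI C equivalent. K&R definitions carry no prototype, were merely obsolescent in C89, and were finally removed outright in C23:

    /* K&R-style definition, typical of early-'90s sources: parameter
       types are declared between the parameter list and the body, and
       the compiler does no argument type-checking at call sites. */
    int
    add(a, b)
        int a;
        int b;
    {
        return a + b;
    }

    /* The ANSI C (C89) equivalent, with a checked prototype: */
    int add_ansi(int a, int b)
    {
        return a + b;
    }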

Any software that uses a function or variable name that is now a keyword in C89 or later won't compile correctly; it'll barf. Try naming a function "void", "restrict", "const", or "volatile", or typedef'ing something named "_Bool". Likewise, try compiling most of the contents of the comp.unix.sources archive, and note that -std=c89 requires ANSI C constructs rather than K&R C constructs.
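
As a quick illustration (again my own sketch), each declaration below was accepted by pre-C99 compilers as an ordinary identifier, and each is a hard error under -std=c99 or later because the name has since become a keyword or built-in type:

    int restrict;              /* "restrict" became a keyword in C99     */
    double inline(double x);   /* "inline" became a keyword in C99       */
    typedef int _Bool;         /* "_Bool" became a built-in type in C99  */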

Global variables may be cached in registers by later compilers while the code depends on the value being changed from another thread of control (a thread, a signal handler, etc.); changing the backing value won't modify the register contents, because the contents are not re-fetched on each reference. In other words, you're compiling with a compiler that depends on the "volatile" keyword for the validity of its optimization assumptions. (Technically, compiler writers could have introduced a "nonvolatile" keyword instead, but didn't, because they preferred to make the assumption lazily and have the programmer correct it, rather than being conservative about their assumptions.)
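
Here's a minimal, self-contained sketch of that hazard (my example, not from any particular package): without the volatile qualifier on the flag, an optimizer is entitled to cache it in a register and spin forever, never observing the signal handler's write.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* volatile forces a re-fetch of "done" on every loop iteration;
       sig_atomic_t guarantees the read/write isn't torn by the signal. */
    static volatile sig_atomic_t done = 0;

    static void on_sigint(int sig)
    {
        (void)sig;
        done = 1;               /* updates the backing store */
    }

    int main(void)
    {
        signal(SIGINT, on_sigint);
        while (!done)           /* re-fetched each time, thanks to volatile */
            pause();            /* sleep until a signal arrives */
        puts("caught SIGINT, exiting");
        return 0;
    }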

I expect that you will also see a lot of "cast discards qualifier", "comparison is always true", and similar warnings, which would be errors should you enable -Werror.

Additionally, you're going to find incompatibilities in the use of ptys, and in the use of termio vs. termios, and other SVR4-isms vs. BSD-isms (for example: do system calls automatically restart after a signal handler fires, or do they terminate with EINTR?). You will also find threading differences (such as PTHREAD_MUTEX_INITIALIZER) that bear on whether you can declare a statically initialized pthread_mutex_t or need to call pthread_mutex_init(). You'll also find that OOB signalling on TCP/IP sockets behaves differently on 4.2BSD-based TCP implementations - and in code that expects a 4.2BSD sockets implementation - vs. the later 4.3BSD sockets interfaces. For SVR3 and SVR4.0.2 code through 1994, you'll also see portability problems with ntoa vs. the libTLI interfaces, which lost out to the BSD interfaces for socket and host name lookup.
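
The EINTR half of that is easy to show. Here's a minimal sketch (mine) of the retry idiom portable code has to use, since classic BSD restarted interrupted slow system calls automatically while SVR4-style systems fail them with EINTR:

    #include <errno.h>
    #include <unistd.h>

    /* Behave the same under both signal semantics: retry the call when
       it is interrupted by a handled signal instead of surfacing EINTR. */
    static ssize_t read_retry(int fd, void *buf, size_t len)
    {
        ssize_t n;
        do {
            n = read(fd, buf, len);
        } while (n == -1 && errno == EINTR);
        return n;
    }

    int main(void)
    {
        char buf[256];
        ssize_t n = read_retry(0, buf, sizeof buf);   /* 0 = stdin */
        return n < 0 ? 1 : 0;
    }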

If you are using setuid/seteuid/etc., you will also find that older code assumes the non-existence of POSIX saved IDs - in other words, it believes you can flip back and forth easily. Not only will this not work without jumping through an additional hoop, you'd also open security holes if you were to "fix" the system call interfaces to behave in the old way.
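
A sketch of the difference (a hypothetical program of my own): old BSD code flipped privilege by swapping the real and effective UIDs with setreuid(), whereas the POSIX saved-set-UID model wants seteuid(), which can restore the privileged ID precisely because the saved ID remembers it.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uid_t invoker = getuid();
        uid_t privileged = geteuid();   /* the UID this binary is setuid to */

        /* Pre-POSIX BSD idiom: setreuid(geteuid(), getuid()) swapped the
           IDs to drop privilege, then swapped again to get it back.
           Under POSIX saved IDs, the portable idiom is seteuid(): */
        if (seteuid(invoker) == -1)     /* temporarily drop privilege */
            perror("seteuid(drop)");

        /* ... do work as the invoking user ... */

        if (seteuid(privileged) == -1)  /* regain it via the saved set-UID */
            perror("seteuid(regain)");
        return 0;
    }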

What you actually mean is that you haven't *noticed* any problems in a not-very-exhaustive survey of older software. If you tried to use an older gcc that supported the K&R C dialect as a -std= argument, you'd find that the system header files, with their ANSI C prototypes, wouldn't compile correctly.

That is not possible on Mac OS X (I've tried repeatedly, even with Homebrew).

I would like to agree with you - maybe I don't understand what the problem truly is - but there is definitely a difference between compiling software for Linux vs. Mac OS X. As in, one is so easy a talented kindergartener can do it, while the other is literally impossible without rewriting Mac OS X. Why is that?

Are you only trying to use POSIX interfaces on Mac OS X, or are you trying to use additional libraries that the OS supplied historically but the current OS version doesn't? If the latter, then the problem is that your software isn't portable, because it doesn't restrict itself to a minimal set of standard-defined interfaces.

You should also be aware that Mac OS X is based on Rhapsody, which is a 4.4BSD system. So the same setuid/signal/socket problems that apply to the comp.unix.sources archives on Linux - old code is going to assume BSD interfaces rather than POSIX compatibility - apply equally to Mac OS X.

Comment I can answer these, as I was there. (Score 1) 240

I can answer these, as I was there.

"Hey, how come this new version of Mac OS doesn't work with any of my old Mac OS 9 software?", said Mac users in response to Classic support being dropped with the release of Mac OS X 10.5.

Because Apple was unwilling to port the Classic 68K emulator to Intel: the difference in processor byte order, among other things, made such a port not worthwhile in terms of the performance of Classic software. The user experience would have been crap, and so the decision was made by upper management not to support Classic going forward on Intel.
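
To see why byte order mattered so much, consider this illustrative fragment (mine, not Apple's emulator code): the 68K is big-endian, so on a little-endian Intel host every multi-byte load or store in emulated memory has to be byte-swapped - a cost a big-endian PPC host never paid.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Assemble a big-endian 32-bit value byte-by-byte, which is what an
       emulator must do on every memory access when host and guest
       endianness differ. */
    static uint32_t load32_be(const uint8_t *mem)
    {
        return ((uint32_t)mem[0] << 24) |
               ((uint32_t)mem[1] << 16) |
               ((uint32_t)mem[2] <<  8) |
                (uint32_t)mem[3];
    }

    int main(void)
    {
        const uint8_t guest_memory[4] = {0x12, 0x34, 0x56, 0x78};
        printf("emulated 68K load: 0x%08" PRIx32 "\n", load32_be(guest_memory));
        return 0;
    }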

The PPC versions of Classic could have been supported under Rosetta, but it would have meant approximately doubling the number of APIs dragged forward onto Intel, for the dubious benefit of "some Classic software will run, and some won't; sorry, the stuff you personally care about doesn't".

"Hey, how come this new version of OS X doesn't work with any of my old PowerPC software?", said Mac users in response to Rosetta being dropped with the release of OS X 10.7.

This one was more related to an unwillingness to "freeze-dry" old versions of the libraries. The Mac OS X B&I ("Build and Integration") process required building the libraries from scratch when building them fat, even though this was a process decision rather than a technical one.

Effectively, it's not practically possible within the Apple process to reproduce a binary build that's identical to a previous binary build. This is because Mac OS X builds are hosted on a system where the builds depend on being incrementally built into the host environment, rather than actually cross-built to the target. What this means is that the "build from scratch" process producing the PPC portion of the fat system libraries couldn't be pegged at a particular Mac OS X version while the Intel portion of the binaries moved forward, and therefore it was not possible to maintain binary backward compatibility - which was what both Rosetta, and Classic before it, existed to do.

Debian Linux has this same problem when you build in a chroot to avoid "polluting" the host build environment. It's one of the reasons Linaro uses a build chroot to support ARM architecture builds on Intel, and it's the same reason ChromeOS for x86 is built in a chroot environment on a Debian variant maintained internally for desktops at Google, rather than being simply cross-built directly into a directory tree hierarchy.

Unlike FreeBSD, neither Linux nor Mac OS X is capable of targeted cross-builds without polluting the build/host environment and without using build products as part of the build.

So technically, the answer for both Linux and Mac OS X is the same: poor build environment engineering - "Worse is Better", exactly what the original article talks about - infects both operating systems.

Comment OK, as long as they *selectively* kill birds. (Score 2) 610

It's true that they kill birds. But so do cars and skyscrapers. And I'd wager that coal - between the waste disposal, emitted mercury, and mining - kills birds, too.

OK, as long as they *selectively* kill birds.

I mean, if all they killed were pigeons, that'd be fine, right? We might even build more of them, even without the subsidies...
