Slashback: HETE, HP, Regression 233
The computing equivalent of Area 51? A short while back, HP closed its calculator division. Many assumed HP's calculator department was unprofitable; it wasn't. Many assumed it had stopped innovating; it hadn't. It turns out that management had to cut 4% of the workforce, and the calculator folks were part of the cut.
This article explains more. It turns out they had designed several Linux-based PDAs, ready for production, that were killed by management. Sound interesting? Go check it out.
The biggest expense was the 12 gross of Estes D engines ... Satellite Designer writes: "The topic of low-cost satellites having been mooted here recently, I thought I'd alert readers to another such project. The HETE-2 satellite recently located a cosmic gamma-ray burst precisely enough that (with a lot of help from friends) an afterglow was detected, identifying its source. HETE-2 cost $26 million, only 1/3 of what a 'small' scientific satellite normally costs. A lot of commercial 'off the shelf' technology went into HETE. Nothing from Radio Shack, but there are quite a few parts from Digi-Key onboard. You can't save money by using cheap parts (but you *can* save money by using easily obtainable parts), and you can't achieve reliability by using expensive parts (but you *can* help reliability by using the parts best suited for your application). The radical thing about HETE's parts selection was that it considered parts in the application context (as one would do in a normal engineering process), rather than restricting selection to a QPL assembled to meet irrelevant requirements.
The real trick to keeping costs down is to do the job with as small a team as possible in the minimum time possible. Rather than employing a large team of specialists, HETE's scientific investigators did much of the engineering and technical work. A small, carefully selected engineering team filled in the knowledge gaps."
Quitting isn't easy, and why bother? dmarsh writes: "This new article from C|Net seems to be a total contradiction of last week's "Dump Broadband, Dig Out Your Modem!" thread's article. I guess the important difference is that this one is backed up by an actual survey by the National Cable and Telecommunications Association."
Goes to show, in a large group of people you can probably find at least some who fit nearly any premise. As always, question the source ;)
I'm retreating to ISDN (Score:2, Insightful)
But if people don't need DSL, then dropping back makes sense. After all, it IS money!
Re:I'm retreating to ISDN (Score:1)
seattlewireless.net [seattlewireless.net]
Neighborhood Area Networks? [slashdot.org]
Wireless along the Main Coast [slashdot.org]
Re:Consider the stats evened (Score:2, Funny)
Surveys. (Score:2, Funny)
Certainly sounds impartial.
Re:Surveys. (Score:2)
The RIAA (boo) would LOVE it if album sales plummeted at the same time that Napster was taking off. Yet, that wasn't the case (although CD singles did suffer a drop last year, the increased sales of full albums was significantly greater), and the RIAA reported the numbers correctly. Although they put their spin on it, ("Look at the drop in Singles sales"), they reported valid sales figures.
It looks like the first article [cnet.com] was a guy trying to create a news story when there really wasn't one. Sure, people will switch from high bandwidth to low bandwidth, but if the general trend overshadows it, we end up with a very different story.
The newer story, which is just a rehash of an NCTA press release [ncta.com] says nothing about the people who are installing (or uninstalling) cable modems. It talks about sales trends. This doesn't negate the other story, it just indicates that it isn't all that big a trend.
It's the equivalent of C|Net reporting that I just bought an AMD processor, so Intel had better watch out. Who cares about me? If there are 50,000 people like me, then you notice.
So, yes there are people switching. But no, it doesn't seem to be affecting the industry. Two separate stories, no conflict.
I switched from dsl to modem.. (Score:3, Funny)
Re:I switched from dsl to modem.. (Score:2)
I wonder about the statistics (Score:4, Insightful)
I wonder whether (1) this many people signed up for the service during the period, or (2) this many people finally received their hardware/installation. Everybody knows that the pool of broadband installers is vastly outnumbered by the pool of broadband salespeople. No flamebait here, just wondering if the mass sign-up occurred in 2Q or 3Q...
Also, consider the source of the statistics ("Our research shows that our product is 100% safe...")
My broadband provider started sticking extra fees into my bill earlier this year. It's only $6/month, but it's still lame as hell. I'm revolting by dusting off my ol' 56K USR at home & taking advantage of that T-1 at work. BellSouth can rip off someone else.
Re:I wonder about the statistics (Score:1)
if (value cost) (Score:3, Interesting)
Hell, my employer hasn't hired anyone or let anyone go from my group in the last year, so just to make up for raises and whatnot our product will cost at least 7% more. If our customers thought like you, we'd be screwed (but so would our competitors).
Broadband isn't a commodity good (Score:2)
Now hop over to the cable broadband industry. It takes (gasp) skill to implement a WAN/MAN. The technology isn't so simple that you can just pick random parts off a shelf and expect everything to work brilliantly. We should hope that either companies like yours begin to dominate and spread their philosophy of good engineering, or that technology improves to the point that setting up a WAN is as simple as setting up a LAN for a game of Quake.
HP was the greatest (Score:3, Interesting)
Maybe some slashdotters don't know it, but before the current palm-craze, HP's calculators were *the* portable thing to program for (at least in my university, I remember being amazed that somebody got pacman working on the HP).
To think that a whole division like that, with great products and a great vision was axed just to get the stock price a few bucks up in the short term seems really backwards, but I guess that's what's happening far too often in this period of stock-price-driven management.
:(
Re:HP was the greatest (Score:1)
This thing has roughly the power of an XT, and with 512K of onboard memory (shared RAM disk and system memory) and a 512K RAM card it does everything I need from a PDA: notes, phonebook, games, even room for an ebook. It runs FROTZ fine as well. If you can find one of them (or its bigger brothers, the 100LX and 200LX), grab it. Excellent design, and it will run for weeks on 2 AA batteries.
Re:HP was the greatest (Score:2, Funny)
No family member would steal it because of the reverse Polish logic. Perfect.
Re:HP was the greatest (Score:1)
RPN was the greatest (Score:3, Interesting)
I don't think I would ever have passed all the number-crunching civil engineering classes and the dreaded EIT (engineer licensing test) without my trusty 32S.
I loathe regular calculators now...
When it got stolen with my bookbag (uggg) I got the 42S. Even better! Two lines of stack on the screen!!! I still use it. Durable, too.
Maybe it's not as big a deal now that calculators can enter equations with parens...
I was thinking of whipping up a desktop calculator that did RPN... Maybe it's time...
Re:HP was the greatest (Score:3, Insightful)
HP was founded by engineers. Engineering is what they knew, and that's how they competed. Today, HP is run by b-schoolers; engineering really isn't their forte. But they know advertising and finance and marketing. So that's what they rely on; that's how they compete. They leave the real innovation to their "partners" (guess who I'm talking about) who promise them success in terms they can understand: market share and intrinsic stock option value. Meanwhile, the company dissolves from the inside into yet another sales staff and yet another brand for the same old Same Old.
The Hewletts and the Packards might stop the Compaq deal, but all the rats together still can't stop their sinking ship from taking them under. It will take great innovation, not great speeches about "innovation". Good luck HP, you're gonna need it.
HP: Get a new CEO. (Score:2)
Fire Carleton "Carly" Fiorina.
Not all of them use RPN. (Score:2)
DSL.... (Score:2, Funny)
Cable vs. DSL (Score:5, Interesting)
I used to have DSL. When I moved, I tried a Cable Modem instead. I found the quality of my connection was better, and the service technicians were far more knowledgeable. Of course, that reflects more on the individual companies (Verizon for DSL vs. Charter for Cable) than it does on DSL vs. Cable, but considering the number of people I know who gave up on DSL because of technical problems, I wouldn't be surprised if DSL is losing business to Cable.
Here in Pasadena, Cable is cheaper and they can come install it within a day or two of your order. When I got DSL, I had to wait six weeks for the first visit, and it took them quite a few tries to get it working.
It all depends. (Score:2)
My experience is the general case, but other people like yourself have had different results. I think it all comes down to the number of subscribers in your area, and the competency of your provider.
Here in California, Cingular Wireless seems to have the worst service of any cell phone provider. However, I consider GSM (the type of network they use) to be the best network technologically. So why do they have all these problems? It all happened when they made the name-switch from PacBell to Cingular, and I believe the major problem is they have reached capacity. Bad planning? Bad management?
It's a mixed bag wherever you go.
Re:It all depends. (Score:1)
(location == Greenville, SC
cable provider == Charter@Home
dsl provider == BellSouth)
Re:Cable vs. DSL (Score:2)
Verizon is well-known as being probably the worst DSL provider in the world.
I have DSL, and while my DSL provider is quirky and occasionally erratic, the kinds of war stories I hear from Verizon people are insane.
FWIW, when I originally got DSL, I didn't have to wait for a visit. They Purolatored the modem to me, turned on the line remotely, and I set it up myself. Took about a week IIRC.
Re:Cable vs. DSL (Score:2, Informative)
The Verizon PPPoE software for Windows was reported to chew CPU prodigiously, as if you were running SETI@home. That's where a lot of the discontent with Verizon originated. I heard about that when doing research, but by then I had already made other plans. I set up Verizon DSL with a local ISP providing DHCP (instead of Verizon or Earthlink) and a Linux floppy-based firewall. It's been fabulously great (for the people I set it up for) ever since the splitter was installed. Literally like dialtone.
Oh yes, and another thing: if it isn't a DSL-to-Ethernet bridge, it was never designed to perform with stability in the first place. Whoever your DSL provider is, they don't necessarily want you connected 24/7. So as the technology "matures", meaning as they get past the tech-savvy, nitpicky early adopters, they think: why give away hardware that contributes to high uptime when they can buy the cheapest POS USB devices and microfilters instead, saving themselves money and keeping the customers offline a little more?
Oh, but maybe I'm just too cynical about the telco monopolies.
Gov't run broadband (Score:1)
I have some reservations about this, but at least it should be more stable (i.e., much less chance of bankruptcy) than a lot of these poor companies going out of business.
What I think. (Score:2)
My reservation is that, if it's government run, a few whiny idiots in the community can turn around and slap filters on it, and start using it to regulate what THEY don't like.
So.. as long as the charter that runs it is about being purely available... it's great.
Re:Gov't run broadband (Score:1)
It might stay around, but it will probably only be as fast and reliable as the United States Postal Service delivers mail or Amtrak delivers people.
Broadband as an industry is here to stay, regardless of what happens to any one company. So in the long run, government broadband is not really more stable than the private broadband industry as a whole.
In a truly competitive market, other companies would come in to fill the gap left by the departing ones. The problem is, the companies that currently dominate broadband come from industries that are used to having government imposed monopoly status: cable and telephone. The monopoly status is starting to go away in the cable industry, but is persisting for telephone, especially in regards to the "final mile."
The first wave of DSL providers had tremendous problems getting the incumbent carriers (ILECs) to give them support when there were line problems. The ILECs didn't want them to succeed because they wanted to offer their own DSL but hadn't managed to get their act together yet. They had no incentive to provide good service and every incentive to provide bad service. Result: bad service. Now that the first wave of DSL providers has gone bankrupt, the ILECs are moving in to dominate DSL. A typical consequence of government interfering in markets.
So what you're really talking about is a government "solution" to a problem that was created by government in the first place. No thanks.
Re:Gov't run broadband (Score:3, Interesting)
Yeah, and in the best of all possible worlds... Monopoly status is not imposed by the government in the sense that the government forbids competition; by and large, what they do is ameliorate the effects of an existing monopoly (price controls, etc.). Government does not impose monopoly status so much as it acknowledges an existing reality. You seem to forget that it was government "interference" that opened telephone lines up to DSL competitors in the first place, but that's inconvenient, so we'll just forget that, right? Of course, the RBOCs' incentive for doing so was access to long-distance markets they couldn't get into after the AT&T breakup. One of the many "woes" that breakup introduced to the average consumer was no longer having to hide extra telephones when the repairman came by. Don't forget choosing your long-distance carrier.
Cable was deregged under George I. Guess what? Prices went up. Natural gas prices in GA went up when they deregged last year. CA's electricity woes are partially due to a badly-planned dereg, but the consumers still had to take it up the ass. While competition is always good for the competitors (i.e., drive wholesale up by bidding because we're different companies and therefore not a monopoly,) it's not always good for consumers. Rather than parrot armchair libertarianism, maybe you should look at deregulation on a case-by-case basis and support it where it lowers costs to consumers and oppose it where it doesn't. Unless you have a financial stake in a company assraping consumers in the name of the "free market" you really shouldn't have a dog in this fight. If you do have a financial stake in such a company, you should say so up front so there's no confusion. If your interest is strictly ideological I can't see any explanation other than that you favor the concentration of wealth in the hands of a very few people even when that doesn't include you because you somehow find these people more accountable than politicians who can be voted out or recalled.
The first wave of DSL providers had tremendous problems getting the incumbent carriers (ILECs) to give them support when there were line problems. The ILECs didn't want them to succeed because they wanted to offer their own DSL but hadn't managed to get their act together yet. They had no incentive to provide good service and every incentive to provide bad service. Result: bad service
Who's going to provide those incentives to good service? The Tooth Fairy, the Easter Bunny, Santa Claus, or the government? Remember it took legislation just to get the cable companies to answer the phone.
So what you're really talking about is a government "solution" to a problem that was created by government in the first place. No thanks.
In a truly free market you could be bought and ground up for pet food. Never forget that.
Re:Gov't run broadband (Score:1)
The government itself may not go out of business, but what will stop them from deciding next year that its broadband services are losing too much money and should be either privatised, discontinued, price increased dramatically, etc?
It's not like those politicians will be saying "sure it loses money but this is way more important than elementary education, so let's subsidise it just a little longer until it starts breaking even". Most governments (well, local governments) have fairly tight purse strings.
PR hogwash (Score:5, Insightful)
-Slashback
Goes to show, in a large group of people you can probably find at least some who fit nearly any premise. As always, question the source
-Timothy
Well, OK, let's question the source. The National Cable & Telecommunications Association [ncta.com] is "the principal trade association of the cable television industry in the United States". So basically, they're the RIAA of the cable industry. And they just published a survey that says consumers are subscribing to broadband in mass quantities.
Ok, I question the source. This is like Shell Oil publishing a study that concludes that burning gasoline provides valuable fertilizer for wetlands. Why give PR machines free press?
HP calcs unprofitable?? (Score:2, Insightful)
If their calculator division was making money, then why on earth was it chosen to be closed down? They should have chosen something that was losing them money. If there were no departments losing money, then they shouldn't have had to cut *any* departments.
Re:HP calcs unprofitable?? (Score:2, Insightful)
HP has a giant cash cow in the printer business. But printers aren't very buzzword compliant, and don't give analysts anything interesting to talk about. So the money coming in from printers is used to finance whatever projects Carly thinks will give the stock price a boost.
Re:HP calcs unprofitable?? (Score:5, Insightful)
In the last couple years, HP's philosophy has been to concentrate on a few areas. It was the reason that they spun off their test and measurement division as Agilent Technologies. HP currently wants to concentrate on computers and the internet. I guess the calculators did not fit into their vision of a computer and internet world.
Personally, I think they should have given the calculator division to Agilent when it was spun off. It seems to line up with Agilent's mission of making specialized electronic devices.
Re:HP calcs unprofitable?? (Score:1, Troll)
Other Slashback... (Score:2, Informative)
Re:Other Slashback... (Score:1)
system call vs library call (Score:2, Informative)
Well, duh. Just look at the section of the man pages -- semop is in section 2 (system calls) and pthreads are in section 3 (library calls). As a general rule of thumb, system calls will be slower than library calls (a context switch is involved).
Re:system call vs library call (Score:3, Interesting)
Furthermore, any library function that does the same thing as a system function will undoubtedly call the system function (fopen calls open, fork calls clone, etc.).
Perhaps this just reflects that the implementation of IPC in Linux, while complete, is not as fast or optimized as it should be. This is probably because everyone uses sockets, mmaps and stuff to do the same things, all of which are already fast, so nobody bitches enough about it to prompt someone to rework it.
Note that I make this statement purely from an observational standpoint; most code to apps I see forgo IPC for other methods. Would somebody care to give an example of some common Linux app that uses IPC heavily?
Re:system call vs library call (Score:1)
Of course, I'm overgeneralizing again and someone will jump on me for that
Re:system call vs library call (Score:1)
Is it really worth using threads when coroutines can be done? Why do round-robin scheduling when you can simply have your functions call each other?
Re:system call vs library call (Score:2)
True; the fact they don't under linux is an artifact of LinuxThreads. As Xavier Leroy notes in his FAQ [inria.fr], a one-to-one (every thread maps to a kernel thread) thread implementation implies that every context switch must be at the kernel level, which is more expensive than a pure user-space context switch. It's the price you pay for simplicity. This is somewhat mitigated by linux's fast context switching.
Processor instructions such as "test and set" don't typically need supervisor privileges.
Correct, so you can do atomic variables in user space :)
Re:system call vs library call (Score:2)
Be careful when you throw that term around...
I can't tell whether you're talking about sysV IPC ('man ipc'), since it's the only other interface that provides shared memory similar to mmap(), or the posix threading mechanisms.
They have nothing to do with each other; and that's a good thing, since sysV IPC is a legacy, poorly designed POS (I can give examples on demand).
Link to IPC benchmarks (Score:1)
I am as confused as heck (Score:3, Funny)
2. "Goes to show, in a large group of people you can probably find at least some who fit nearly any premise. As always, question the source
Link to Semaphore, mutex results, etc. (Score:1)
I wonder what the results would have been if he used the non-portable (non-pthread) interfaces to the sync/threading primitives in linux... because Windows gets an extra boost not having to go through a compatibility API. Are there non-pthread abstractions for mutexes and such? I don't know much about low-level threading stuff in linux beyond clone.
Missing link for Ed Bradford's article (Score:5, Informative)
Empty <a> blocks aren't terribly useful...
Re:Missing link for Ed Bradford's article (Score:1)
co-routines [greenend.org.uk]?
Don't they both have to do with communication without a context change?
Broadband defections. (Score:4, Informative)
Reason: downloads could hit 400+ KB/s, uploads could hit 200 KB/s (not bits, bytes).
After a year, down ~= 200+ KB/s, upload capped at 128 KB/s. OK, fine and dandy.
Insult to injury came when the download rate varied (no biggie), but then came a second cap, at 128 kbits.
When questioning the provider and calling the corporate office, I got: "Oh, we meant 128 kbits, not kbytes."
Uh, huh.
The sad part is that no one noticed the drop-off in cable revenues at, or shortly after, two things:
killing off the *.divx groups, and 'capping people off at the knees' as far as uploads go.
By capping uploads and killing off the divx groups, @home completely negated the purpose of broadband.
Include the caving in to the MPAA without so much as a defense of its own customers, much less adherence to the "innocent until proven guilty" theorem.
If DSL could provide a 128Kbyte up/down rate and eliminate the install hassles and provide the service for 20 to 25 bucks a month...I'd jump on that in a heartbeat.
If they had a "you want faster, you pay more" scheme (which @home does not do...WTF?), I'd use it, and I'd *recommend* other cable users do it as well.
I can't tell you how many people I've recommended cable to, because I lost count.
Now I tell them DSL first, cable second if they don't mind "getting less" for the same amount of money.
"once bitten, twice shy"
Ok, in my case it was a nip first then a bite.
Now I am shying away from recommending cable as a first step; the second step is getting away entirely, especially if the 'veeceedee' groups start disappearing.
Then a lot of us will have absolutely *NO* reasons for sticking with cable.
Re:Broadband defections. (Score:1)
Re:Broadband defections. (Score:1)
Maybe there should be a T-shirt that says:
"I got broadband and all I got was a large pr0n collection..."
Wait. What was the downside again?
I forgot to mention that @home scans your machine daily to make sure you are not running a news server.
Never mind that they *don't provide the bandwidth* to run a news server, and more often than not the *scans* will disrupt your downloads!
As for my previous post and your question, I think the "hint" of fraud is just one more example of @home's...what is the word I'm looking for...incompetence, stupidity, (again) fraud, backward-assedness?
I'm sure someone else could think of a more eloquent way to put it, but this kind of reverse logic escapes me.
Seriously, look at the heart of what I am saying: you are paying the same, or more, and getting less and less, as much as @home can take from you. Is this the way to run a business?
Is this the kind of "e-commerce" we can expect?
This kind of business "hara-kiri" lends new meaning to "e-viscerated", does it not?
(I apologise for avoiding your question as to moderation. It was intentional. I've never moderated and I'm sure there are guidelines.
Heck I got a chuckle out of the moderations of this comment [slashdot.org].
What is even funnier is that I agreed with the totals because it was too far over the top.
Don't get me wrong. Getting karma points is nice, but I prefer to be challenged on my thinking not on how I'm moderated.
That might be another point you missed, perhaps?)
cheers
Re:Broadband defections. (Score:2)
I forgot to mention that @home scans your machine daily to make sure you are not running a news server.
They were pressured [cnet.com] to do this by Usenet administrators. If they had not, their IPs would have been blocked by many usenet servers. The levels of spam from @Home addresses were unacceptable. These scans fixed the problem.
Never mind they *don't provide the bandwith* to run a news server
Right. They provide fucking insane downstream bandwidth and fairly modest upstream, suitable for clients. I would prefer more upstream, too, but not if it means paying more...which of course it would. Bandwidth costs money. If you haven't noticed, @Home isn't in the best financial shape.
Why would you run a Usenet server anyway? This is a huge resource drain (much more content than you actually read is sent to you), when there are plenty of other usenet servers (for modest fees, or even using the ones @Home provides) or alternatives to Usenet entirely.
Bullshit. Their scans consist of SYN packets to the nntp port (119/tcp). If nothing is listening and your machine properly refuses the connection (a TCP RST in reply), nothing more will happen. I am an @Home customer and was when they started doing this. I have not experienced any problems due to these scans.
Seriously, look at the heart of what I am saying: you are paying the same, or more, and getting less and less as @home can take from you. Is this the way to run a business?
Given their terrible financial situation [nytimes.com], they must do this or go broke. In that case, they would charge you nothing and provide no service. You have that option now. Take it if you like.
My complaint with @Home is that their support is absolutely terrible. When I call about service interruptions, I'm put on hold for way too long before talking to someone who does not have a clue. I'd much rather see them pour money into fixing this problem than into a little more upstream bandwidth.
Re:Broadband defections. (Score:2)
For leaf sites, it's usually more efficient to run a caching news server. This only downloads the articles you read, but caches them, plus XOVERs and other repetitive stuff, which reduces the bandwidth required by a large amount.
Re:Broadband defections. (Score:2)
You and the damn cable people... (Score:1)
Instantly, a thousand of you are now saying "But you're depriving them of income they would otherwise have." To that I say NO! I am not keeping anyone from seeing a movie at a theater.
Re:Broadband defections. (Score:1)
ROTFL.
Stealing is such an ugly word...how about "creative (re)distribution techniques".
usenet service (Score:3, Insightful)
Subscribe to external news sources -- that'll probably put you down $10/mo. Sure, that's ANOTHER $10 a month out of your pocket. But if you're feeling squirrelly, consider what that costs the provider.
The traffic used to have a set cost defined by upkeep of the internal network -- call it "internal cost". Now the same traffic has that internal cost as well as the cost of increased traffic from the upstream provider. It's possible that the cost of this external traffic is less than the cost of providing better usenet service. It's also very possible this same traffic now has a considerably higher cost.
In any case - you get better usenet service.
critical sections are not equivalent (Score:3, Informative)
Re:critical sections are not equivalent (Score:2)
Re:critical sections are not equivalent (Score:2)
But in any case, if your application is spending a significant amount of time grinding on waits for mutex constructs (i.e. any of the syscalls discussed), you're having a bad day -- it means your threads are spending a lot of time in critical sections, and are going to spend a lot of time waiting for each other.
There are schools of concurrent design where you typically have threads blocked on a mutex waiting to move forward, but I don't think those are particularly high-performance models in the first place. Better to stick to the old dictum: "minimize the critical section", both in length and in frequency.
No surprise. (Score:2, Informative)
Of course critical sections are fast -- that's what they were designed to be. The tradeoff is that they can't be used for IPC, so the comparison in the article is misleading.
Fast? Often just wishful thinking (Score:4, Interesting)
More Comparison (Score:1)
But I would like to see Solaris benchmarked in the same way...
getting out (?) of broadband (Score:2)
The exact determination is that "more people than ever are leaving broadband". Not that the ranks are shrinking, but that a greater number of people are terminating accounts. Obviously, as you increase your customer base, if the same percentage of people unhook every month due to dissatisfaction or because they can't afford it any more, then of course the gross number will increase.
Threads article link? Did I miss it? (Score:2, Informative)
The reason why pthreads 'look pretty good' speed-wise is because the pthread library provides user-level threads as opposed to kernel-level threads. User-level threads have their own scheduler and are much quicker to swap out: there is less data to save than during a kernel thread context switch. Meanwhile, pthread semaphores (and condition variables) should also be faster depending on the user-to-kernel thread mapping scheme (Windows 2K maps each user thread to a kernel thread, for example; I think Linux uses a many-to-many mapping). This will be reflected in how fast threads go through their critical sections, because they may have to wait a shorter or longer time to get access to them.
Re:Threads article link? Did I miss it? (Score:2, Informative)
Technically, "pthreads" ("POSIX threads") is just an API which can be provided by any thread library. And yes, technically, you can get a user-level threads package that implements pthreads.
But I think you were referring to Linuxthreads, the pthreads implementation used by GNU libc on Linux. Linuxthreads is kernel-level, not user-level.
Semaphores and mutexes may be implemented mostly in user space (I don't know for sure), but thread creation/destruction/scheduling is definitely based on real kernel threads.
IPC under Windows (Score:2)
In Windows, the critical section code becomes a single bit test-and-set instruction on a uniprocessor system (which, being a single machine instruction, is very fast), but a much more complicated operation in a multiprocessor build.
Under Linux, you don't have to explicitly compile your program to support multiprocessor, so I would guess that Linux is using a more SMP friendly implementation of a mutex than a uniprocessor build of Windows.
HP Contact Info (Score:2)
The article author also pulls no punches on his opinions of these fine folks.
I think I have some email to send, myself.
What calc now? (Score:2, Interesting)
Does anyone else make high quality calculators? Or are there any good math programs for PDAs?
Test not complete... (Score:2)
great... (Score:1)
Re:Two strains of Windows, eh? (Score:2, Insightful)
It might be good news, but not for alternative OSes. It simply means that M$ has saturated the market with previous versions of Windows, and there aren't any compelling reasons to change. Anybody who was going to switch from Win98 already switched to Win2K or ME, and isn't about to run out and buy XP. That said, they ain't buying Linux either.
Re:Two strains of Windows, eh? (Score:4, Insightful)
I'm not saying any of those technologies are in XP; I don't know. I have it (via MSDN) but have no intention of installing it on any machines since, as you say, there simply isn't any real incentive.
Re:Two strains of Windows, eh? (Score:1)
Bullshit. XP uses essentially the same bootloader as NT4 and 2K (sure, with some minor cosmetic changes and performance enhancements, but for all intents and purposes it's the same NT loader written way back in the early '90s for NT 3.x). As well, it's never been hard to dual-boot with the NT loader. There are two mini-HOWTOs on LinuxDoc [linuxdoc.org] that outline two different ways of dual-booting Linux with NT (using LILO):
Both methods work, and I have used both in the past. Interestingly enough, NT-Loader is flexible enough that it can work with pretty much any OS. I've personally used it to dual-boot BeOS 4.5 and Windows 2000, in the past, and never had any problems.
First off, what does this have to do with Windows XP? You've obviously confused Windows XP with Office XP. Second, this is not new, and it's unlikely it'll change (although with Microsoft moving more and more towards XML, don't be surprised if you start seeing XML-based Word documents that can thus be easily parsed by anything that understands XML).
Given all the above, I still don't see how these are anti-open source. Hell, even WPA isn't "anti-open source". It's anti-piracy, sure, but I don't see how it has anything to do with open source at all.
Re:Two strains of Windows, eh? (Score:1)
BUYING Linux?
Re:Two strains of Windows, eh? (Score:2, Insightful)
Seeing is not using (Score:5, Insightful)
You saw Windows XP at Fry's? I'm assuming you mean you saw a demo computer running XP, and not that you merely saw the box sitting on a shelf. By your logic, I could say "I saw Linux at my friend's house and was not impressed. It was nothing but text and stuff."
I shouldn't have to tell you that the interface isn't the OS. If everyone judged Linux by its interface and nothing else (which, unfortunately, is often the case), people would have an absurdly skewed view of Linux. Think about how many different window managers and themes there are for Linux. Just because one of them looks like shit doesn't mean the underlying OS kernel sucks.
The same holds true for Windows. Sure, the interface may be full of goofy alpha blending and unnecessary menu fade-ins and mouse pointer shadows and other things, but when you replace explorer.exe with a third-party shell (or merely disable the extra eye candy via the Control Panel), all that stuff goes away and you're left with what is without a doubt the most stable version of Windows I've ever seen.
Re:Seeing is not using (Score:3, Insightful)
The stablest Windows version isn't saying a whole heck of a lot. An analogous quote would be something like "the new Twinkie XP is the healthiest Twinkie Hostess has ever made".
So after paying for 3.1, 3.11, Win95, Win98, Win2K, and WinME (forget WinNT for the moment, because it was never marketed for home consumer use), we finally have a Windows product that might actually be stable enough to be worth its cost. Now if only I could trade in all those old MS licenses I have kicking around for a stable Windows product. MS calls it a new OS; I call it a sorely needed basic upgrade. Too bad I have to pay through the nose once again for basic functionality I should have had a decade ago.
As far as interface != Windows XP: show me a major Windows application that can fully function from the command line. Show me a useful scriptable terminal shell environment that ships with Windows XP. The interface IS MS Windows. You might be able to graft on a less functional third-party window manager/file manager other than Explorer, but what you are paying for when you buy XP is the interface, and all the time and effort spent getting the bells and whistles (and MSN ads, don't forget those) in place. If you were paying for the effort MS put into stability from OS release to OS release, each version of Windows would have a fair price of about $2, and the upgrade to XP would be a free patch, like the virus patches are. I've never really understood that; poor stability leads to data loss just like viruses do, but MS doesn't hand out free stability upgrades, they sell them as new OS releases. I shouldn't have to keep paying for promised stability. Paying for new features is one thing; paying for basic features I should have had when I bought the OS is extortion. But that's okay, pretty soon we will all be paying a monthly fee to get access to our Windows system thanks to .NET.
rant off
-jef
Re:Seeing is not using (Score:1)
How about this: install vim (yes, there's a native Win32 port, not just via Cygwin) and Visual C++. Now, using only cmd.exe, you can code your entire application and build it. The VC compiler doesn't require the GUI. Sure, you get the GUI when you install it, but that doesn't mean you have to use it. You can build by hand or write a makefile, just like with gcc.

Alternatively, you can configure nearly everything either through command-line tools (try "net help" from cmd.exe, "ipconfig /?", "route /?", and so on) or via the Windows Scripting Host (wscript.exe if you want GUI stuff from your script, cscript.exe if you don't). Hell, you can even install software solely from the command line (look up "msiexec" in the Help and Support Center), given that the software is provided as an MSI (Microsoft Installer package, which most new applications are using, and which is required for XP Logo certification).

As far as "major Windows applications" running from the command line, I'm going to ignore that as flamebait. Windows is generally used as a GUI environment (and when it's not, it's being used as a server, where you shouldn't be firing up stuff like Word anyway), so major applications (Word, Excel, IE, Photoshop, whatever) are obviously GUI-oriented. If you need to use those remotely, Terminal Services (now called Remote Desktop in XP) is very nice, and is even better in XP: 32bpp color depth, tweakable options to help performance, optional audio over the network, full backwards compatibility in both the client and the server (so you can connect to Win2K or NT4 terminal servers, or connect to XP from NT4 or 2K), and more. You can use TS for remote administration as well, or you can set up the included telnet server, or you can install a third-party SSH server.
The first option gives you the most control over the system as you have both console and gui to work with, but the latter two give you nearly as much flexibility even just through the commandline.
You've obviously never looked at Windows Update [microsoft.com]. Microsoft does a pretty good job of offering critical updates, not-so-critical updates, minor Windows updates, new versions for things like Messenger, and even some drivers. As far as "paying for patches", maybe so. But historically, all the important features from win98 that could be patched back into 95 without significant changes were made available. Same for 98 -> 98SE, and even 98SE->ME. Granted, there's no way you can just patch 98SE and end up with ME, but any critical updates and such were always offered for the older systems (well, maybe not 95, since it was declared obsolete as of 98se, and 98 and 98se were declared obsolete as of ME, but mainly that just means you won't be able to buy them in the store any more -- they will still be supported with critical updates). As far as the path from WinME to WinXP, there's no way you can make a patch to upgrade between the two. That's like saying you can just get a patch to upgrade from DOS to Linux. Not going to happen. WinME was still Win9x. WinXP is based on 2K, which in turn was based on NT. Completely different kernel, completely different driver architecture, no more legacy 16-bit code, etc.
And just as a note on the whole .NET thing you brought up: it's very likely that at least initially (and probably for the next 5+ years after), both subscription and stand-alone packages will be offered. In other words, you can pay $99 for your XP->2004 (or whatever) upgrade and be done, or you can pay $30/year to get 2004, and then 2005, and then 2006, and so on. Maybe not a great idea for businesses that need to standardize on a platform, but do you really believe Microsoft hasn't thought of this? Just as with XP's anti-piracy activation measures, where site licenses for larger companies (I believe any package of 5+ licenses) do not require activation, standalone licenses would be offered for any software that also has a subscription license (Office.NET, Windows.NET, whatever).
Re:Seeing is not using (Score:3, Insightful)
True enough, and this is the point. Windows IS generally used as a GUI environment in the home consumer market. The original post in this thread was talking about the GUI interface being shoddy, and the reply I called flamebait was trying to make the point that XP's interface isn't all that important, that the underlying OS kernel is much, much better and that's what matters to MS. I disagree. Let me also say that I didn't think clearly through the last post. I was trying to avoid the server arguments (hence why I said to ignore NT), but my comment about a useful shell environment was totally wrong-headed, because command-line tools really are a server argument, and that just garbled my main thrust: XP as a consumer desktop solution is all about the interface. That's what MS has put the time and effort into developing in XP, the things geared toward the consumer desktop market (including product activation). My point about the command-line and shell interfaces was that in the consumer desktop market these are not important factors.
MS is putting the big development dollars into the spit and polish of XP. As a home desktop PC owner you are not paying for the promise of stability; you are paying for the features, and MS knows this. The people who paid for stability were the companies that shelled out big bucks for NT support in the good old days, and MS is finally giving the home consumer a taste of a stable system in W2K and now XP.
You've obviously never looked at Windows Update [microsoft.com]. Microsoft does a pretty good job of offering critical updates
Are you honestly telling me that I can get enough Windows updates for my Win98 systems to bring their stability up to the point of matching XP? I'm not talking about new feature-rich Explorer updates or Messenger updates; I'm talking about basic stability issues, which I think are as critically important to keeping data intact as updates that prevent viruses and internet exploits. I don't expect any release of any software to be perfect, but I don't think it unreasonable to expect that purchasing a product gives me access to continued updates that help prevent system crashes and lockups. MS wants to release XP chock full of new kernel features and extra abilities? Fine, that's great. But to drop support for the older OSes, which still have glaring stability problems, and force people to buy into a new OS yet again, with new hardware yet again, seems a tad disrespectful.
Good thing the EULA washes MS clean of any responsibility to make a best effort to ensure the product actually works as claimed before you even open the software box. I'm not asking for a path from 95 to XP; I don't want XP's features. I want a computer I bought four years ago, one that met the specs of Win98, to reach a decent level of stability. I don't think I should have to buy a whole new OS with a whole new hardware spec to finally get to the point where the OS can claim to be stable and last a week without rebooting. That's why I run BSD and Linux on the older boxes now: I can be confident that updates affecting stability will be made available for the older architecture. I have no problem paying for productivity updates (new features, new tools), but I have a big problem being told I have to buy a stability update when the product I bought should have been stable to begin with.
-jef
Microsoft is, partly, the enemy of its customers. (Score:2)
I very much agree with this. Part of my definition of an operating system is that it is stable. Windows 98 is not stable. Therefore, it cannot be truly called an operating system.
I should not have to pay for junk, especially when it is deliberate junk. If Win XP is stable, then it should be a free upgrade to all those who paid for Windows 95 and 98 and ME, and suffered enormously from the shortcomings that were deliberately left there to try to get us to pay more.
Microsoft is, partly, the enemy of its customers.
Re:Seeing is not using (Score:2)
Alternatively, you can configure nearly everything either through commandline tools (try "net help" from cmd.exe, "ipconfig
Ok, here's a big problem we face at my job site with hundreds of student accounts that must be reset every two months (when the next batch come in).
Reset a range of user accounts (xxx100-xxx600) to a specified default password WITH the flag marked to force a password change on initial login. Do that from the command line so it could be batched. I've STFTN. I've STFKB. If you can figure out how to do it, I'll grovel at your feet.
It's really fun doing this one-by-one in the GUI.
Re:Seeing is not using (Score:2)
Just as you would script this in Unix, you can script this in Windows. Obviously, the scripts you write will be different. NT != UNIX and UNIX != NT, so that should be expected. Have a look at this link [microsoft.com] for more information (you'll probably need to use IE to get to that link, but since we're talking about Windows here, that shouldn't be an issue). That link discusses the IADsUser scriptable interface for ADSI in Windows. Based on the connect string you use, you can change a local user account, an NT4 domain user account, or an Active Directory user account. Figuring out what properties you need to change for your problem and what glue you need to write to loop through all indicated users is left as an exercise for you.
Re:Seeing is not using (Score:1)
Yeah, how annoying, a product that wasn't perfect in version 1.0, and had to be improved over several years of development. Boy, those fellows at Microsoft just have to do things differently, I guess. Heh, those folks working on MacOS seem to have the same attitude. OS X!? Shoulda been perfect at OS 1! Sheesh, what is the OS world coming to?
forget winNT for the moment becuase it was never marketed for home consumer use
Then I assume you are forgetting Linux too, for the purposes of this discussion. (Of course you are, since Linux wasn't perfect on its first release either.)
How about this one? (Score:2)
Granted, if it's really as stable as Microsoft promises this time (and about half of the Windows 2000 users I know didn't have any stability problems), then that may be worth it. I get similarly curtailed framerates in Linux by making the same tradeoff, and I think it's worth it... but I'd like to know how many game players who went out to buy XP were making a conscious decision for stability over speed.
Re:Seeing is not using (Score:1)
I only have three working brain cells, and I knew what he meant U:>
using is no better (Score:2)
But the biggest problem with XP is its rampant commercialism. Windows, other Microsoft applications, and third party applications constantly bug you for personal information, registration information, etc. And who knows what information it's sending out behind my back. And I already spent about $100 on third party utilities.
Altogether, XP is something I could do without: the applications I want to run don't need it, and the software I do have to run on it is not particularly high quality. The only reason I have it is that Microsoft has managed to monopolize the market so thoroughly that there are applications you simply have to use in business that only run on Windows. Yuck.
Re:Seeing is not using (Score:2)
Compared to the year-plus uptimes of Linux, I'm not even sure how you're able to describe XP as having any sort of decent uptime whatsoever.
Uptime != Stability (Score:2)
Uptime is a measure of how long a system has been running without a reboot. Uptime generally requires stability, assuming the machine in question is actually doing something. But I could boot up a fresh, clean install of Windows 95 and (after patching the 49.7-day tick-counter rollover bug) let it sit in a corner doing nothing, and the damn thing would probably keep running till the next Ice Age.
Stability, on the other hand, is a measure of many things. Mostly it is a measure of how well an operating system responds to instability in software. Linux is incredibly good at this; when a program on Linux crashes or has a problem, the OS steps over it and keeps right on going. Windows has been notoriously bad at this, until Windows 2000 and XP.
Now, if you re-read my message, you'll notice that nowhere did I claim that I thought Windows XP was more stable than Linux. I merely claimed that it was more stable than previous versions of Windows. Furthermore, since Windows XP, as you said, has been out for about a month now, it would be impossible (and incredibly stupid) to rate its stability by comparing the uptime of a Windows system with that of a Linux system.
To illustrate my point (that uptime does not always equal stability), back when uptimes.net was running full force, I achieved an uptime of about 155 days from a beta version of Windows 2000 running on a Pentium 166 with 64 megs of RAM, serving up lots of dynamic webpages at wonko.com [wonko.com]. In the end, I had to turn the machine off because I moved.
Now, the only reason I achieved that incredible uptime with a beta OS running on inferior hardware was that it wasn't doing a whole lot. It was just running IIS and MSSQL Server, and that was about it. Now, if I had been serving Slashdot off that box, it probably wouldn't have lasted a week. Thus, we see that uptime != stability.
Re:Uptime != Stability (Score:2)
To sum it all up: Measuring uptime is all well and good, and uptime can be an indicator of stability, but uptime, by itself, is a very bad way to measure stability.
I was not defending a server or an OS, I was objecting to your misleading usage of the word "uptime".
Re: (Score:3, Informative)
Re:Two strains of Windows, eh? (Score:2)
>operating environments run
Last time I looked [sun.com], Solaris cost absolutely nothing. You can download ISO images of the latest release from Sun, burn them yourself, and run it without any license fees, etc, at all on any Sun box with less than eight CPUs, no matter what you're using the machine for (business or personal). If you want a development environment, you can get the Forte compiler suite and a 30-day license (which can be renewed indefinitely) as a free download, or you can get all the GNU goodies at Sunfreeware [sunfreeware.com]. When it comes to applications, the StarOffice suite is also a free download. All you have to pay for is the machine itself, electricity to run it, and an Internet connection for the downloads.
Re: (Score:2)
Re:Two strains of Windows, eh? (Score:2)
I've got a SunBlade 1000 (their UltraSPARC-III based "big daddy" workstation) on my desk here at home, and a Blade 100 (with Expert3D-Lite, an additional $1K graphics option) at work, and for day-to-day use (windowmaker, Mozilla, SSH, etc.) they're "just about" the same for the tasks I do all the time, despite the 400MHz CPU speed difference and 256KB of L2 cache versus 8MB.
Sun has been giving Solaris away for free for a little over two years now, AFAIK.
Re: (Score:2)
Re:Two strains of Windows, eh? (Score:2)
OTOH, if you just want 32-bit Unix performance, you're going to be rather envious of your cubemate who bought a Dell OptiPlex GX 150 P3/1.13 512M 80G for the same price, and plopped Debian on it....
Signed, :-/
An Ultra 10 owner who switched from Solaris 8 to Debian/SPARC
Re:Two strains of Windows, eh? (Score:2)
You should work on your reading comprehension, friend. What that article is implying, and not surprisingly, is that MS is spreading FUD about XP sales: that they are changing their tune for no good reason when they most assuredly know exactly how many they've sold, through each channel, and EXACTLY what the numbers are. The quotes from M$ have been contradictory and vague, more likely a sign that XP isn't doing as well as the marketroids would like the public to believe.
In short, your comment is a perfect non-sequitur.
Re: (Score:2)
Re:HETE (Score:2, Informative)
Some of their problems have been related to the fact that their team is very small. So, it is possible to make things too cheap.
HETE's operations team is indeed too small. HETE-2 was ready for launch in January, 2000 (it was integrated with the rocket!), but after the Mars lander failure NASA got cold feet and ordered it shipped back to MIT for additional testing. HETE-2's operations were also funded below the HETE project's minimum estimate of operations costs. Since people without long term support need to find new jobs, this combination meant that several people left for new employment either before launch (having already lined up new jobs before the delay) or shortly afterward. While a reduction in the team's size post launch was intended, what happened was too drastic. This definitely made it harder.
It would have been fine if they had no time constraints, but it seems that spending most of the first year essentially in a kind of engineering mode is a bad thing.
Part of the trouble is that HETE needed to be well calibrated before it could generate useful results. A mission like Chandra could do a lot of interesting stuff (especially pretty pictures) before its calibration was finished. Astrometric calibration takes time, however.
In any case, this extra time has not added much to the overall mission cost.
Is there any way that the community (esp. NASA) could have helped bring things online sooner?
A launch in January 2000, when HETE was ready, would have helped. Adequate funding of the operations phase would have helped.
Patience also would have helped, I think. The HETE team, NASA, and the community were all impatient for results. This meant that there was an emphasis on working through the inevitable operational problems rather than taking the time to fix them. A team that is too small cannot do both in parallel. Once some of the more time consuming problems had been fixed, positive feedback set in: operations became less labor intensive which meant more time was available to fix problems.
Re:I must be lucky (Score:2)
The funny thing is the difference I see in download speeds between the Linux and Windows computers on my network. I run everything through a Linux NAT box. Nothing out of the ordinary: a P200 with a pair of plain old 10/100 PCI NICs, but I can regularly pull 250K bytes/sec through it if I go somewhere like www.kernel.org that has screaming fast servers.
However, I run the same download on any one of the Windows computers behind the firewall, all of which have faster processors than my main Linux box, and the best they can squeeze out is something like 50K bytes/sec. Same site, same file, same firewall, similar NICs, and I get about 1/10 the effective download speed.
Now there's a great testimonial for how bloody fast Win98 is... :-/