Handhelds

Major Problems with Cingular Network 382

Wabin writes "It looks like the Cingular GSM network is having serious trouble. My phone stopped working today completely, though my wife's was still able to make outgoing calls. Talking to tech support, they claimed some kind of massive failure across the country starting around 4PM yesterday and possibly a virus attack. Howard Forums is all abuzz, but there really doesn't seem to be any hard info. Glad I haven't totally given up the land line yet... redundancy is good."
Linux

Linux Clustering 162

SPK writes "A colleague and I recently discussed how New Riders' most highly regarded book -- Paul DuBois's MySQL -- corresponds to O'Reilly's worst dud: MySQL & mSQL. Charles Bookman's Linux Clustering does nothing to improve New Riders' reputation. The book is divided into eleven chapters, unevenly distributed among three sections: an overview of clustering for Linux, building clusters, and maintaining clusters. Four appendices provide brief information about online clustering resources, options for Red Hat's 'Kickstart,' options for DHCP, and information on 'Condor ClassAd Machine Attributes.'" To find out why the reviewer was so displeased with this book, read on below for his review.
Science

GZipping Life Forms: Deflate Reveals Bare-Bones 245

An anonymous reader writes "To distinguish images derived from living vs. non-living sources, USC and NASA JPL researchers report today using the standard gzip compression utility. As a measure of overall pattern complexity, they find that the inherent pixel content of biologically generated fossils produces higher image compression ratios [more data redundancy], compared to their non-biological counterparts. The more the file shrinks, the more likely it is that a living process was involved. A test is live online here. This extends the simple, but powerful, uses of gzip to biogenic fossil detectors, in addition to spam cop filters, DNA sequence comparisons, digital camera image crunchers, etc. In nine months, the two Mars rovers will send back the first microscopic-scale images of Mars rocks, which should be amenable to some of these same techniques: thus gzipping is apparently pretty zippy."
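The principle the researchers exploit is easy to demonstrate yourself: structured data contains redundancy that a compressor can squeeze out, while noise does not. Below is a minimal sketch of that compression-ratio measure using Python's standard gzip module; it illustrates the general idea only, not the researchers' actual image pipeline, and the test data is invented.

```python
import gzip
import random

def compression_ratio(data: bytes) -> float:
    """Original size / gzip-compressed size; higher means more redundancy."""
    return len(data) / len(gzip.compress(data))

# A repetitive, "biological-looking" pattern compresses dramatically...
patterned = b"ABAB" * 2500                      # 10,000 bytes of structure
# ...while pseudo-random "abiotic" noise barely compresses at all.
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10_000))

assert compression_ratio(patterned) > compression_ratio(noisy)
```

The same one-liner is what makes gzip useful for the other applications mentioned: any two inputs can be ranked by how much exploitable pattern they contain.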
Linux Business

What Goes into an Enterprise Network? 61

Komi asks: "I work for a big semiconductor company, and I'm part of a group that is spearheading the Linux movement here. Right now everyone uses Sun machines to design, but you can get a cheaper Linux x86 machine that is four times faster. So it is my job to prove that Linux works. The problem is that I'm an analog circuit designer stuck in the role of sysadmin, so I need some advice on what goes into a network. It won't be that large right now, but it has to scale to a couple of hundred machines. If this works, then hopefully we'll convince all the designers at my company to make the switch."
Spam

ISP Operator Barry Shein Answers Spam Questions 373

Barry mentions his "sender pays" spamfighting plan more than once in his answers to your questions, and discussed it at length in an InternetWeek.com article published on Feb. 20. Is Barry's plan workable? Do you have a better idea? Or should we all just get used to spam as part of the online experience, and learn to live with it and block it as best we can?
The Internet

Multihoming Suggestions w/o at Least a /24? 55

An anonymous reader asks: "I work for a small company that is looking to get a multihomed Internet connection for redundancy. The logical conclusion would be to get a second Internet connection from another provider. However, in the case of a primary connection failure, we need to be running BGP to keep our internally-hosted sites accessible to the Internet via the second connection. The problem is that we only have a /28 (16 IPs), which is too small to make it past most route filters, meaning we still couldn't be reached if the primary T1 went down. So, what are our options? (and no, lying and getting a /24 isn't a valid choice)"
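The size gap at the heart of the question is easy to see with Python's ipaddress module. Commonly, route filters on the public Internet drop IPv4 announcements more specific than /24, so a /28 announcement never propagates. The addresses below are from the documentation range and purely illustrative:

```python
import ipaddress

# The poster's block: too specific for most providers' route filters.
ours = ipaddress.ip_network("203.0.113.0/28")      # hypothetical /28
# The smallest announcement commonly accepted Internet-wide.
minimum = ipaddress.ip_network("203.0.113.0/24")

print(ours.num_addresses)       # 16 addresses
print(minimum.num_addresses)    # 256 addresses
print(ours.subnet_of(minimum))  # True: the /28 lives inside a /24
```

In other words, only whoever legitimately holds the covering /24 (usually the upstream provider) can announce a prefix the rest of the Internet will actually carry.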
The Internet

VRRP 85

Peter H. Schmidt writes "As the world increasingly relies upon the Internet and TCP/IP-based networks, their reliability and availability have become pressing topics for both enterprises and service providers. The Internet was designed for resiliency, not the 99.999% uptime of the PSTN, but it is now being required to do both jobs. This book on VRRP -- the Virtual Router Redundancy Protocol -- details the open, RFC-track protocol which has been developed to help ensure that edge router failures can be handled automatically without affecting connectivity." Peter's complete review (below) is interesting even if you never have to deal with the network at this level.
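The core of VRRP is simpler than the book's length suggests: routers in a virtual group advertise a priority, the highest priority becomes master and answers for the virtual IP, and when the master's advertisements stop the backups re-elect. A toy sketch of just that election rule (router names are invented; real VRRP breaks priority ties by highest primary IP address, modeled here by name):

```python
def elect_master(routers):
    """Return the name of the router with the highest priority.

    `routers` maps router name -> VRRP priority (1-254 for backups,
    255 for the address owner). Ties fall to the greater name, standing
    in for VRRP's highest-IP tiebreak.
    """
    return max(routers, key=lambda r: (routers[r], r))

group = {"edge-a": 200, "edge-b": 100}
assert elect_master(group) == "edge-a"     # edge-a is master

del group["edge-a"]                        # master stops advertising
assert elect_master(group) == "edge-b"     # backup takes over the virtual IP
```

Hosts keep pointing their default gateway at the one virtual IP throughout, which is the whole point: the failover is invisible to them.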
Linux

Sharing a SCSI Drive Between Two Boxes Using Linux? 112

yppasswd asks: "I'm looking for a (cheap) solution for filesystem sharing between two Linux servers and, since the target is just redundancy, I've come up with the following idea: two SCSI controllers, one per machine, with different IDs (say 7 and 6), sharing the same disk. Only one of them would mount the disk; the other is just ready in case of failure. I've googled around and found many different opinions (yes, no, perhaps, don't do it or it'll explode...), but nobody saying 'OK, I've tried this and here is what happened...'. Suggestions are welcome, but keep in mind that many other solutions (Fibre Channel, SSA, NFS mounts, various network filesystems) were already rejected because they were either too expensive, unreliable, or not supported under Linux."
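Whatever the hardware verdict, the cold-standby scheme the poster describes reduces to one invariant: the backup mounts the shared disk only after the primary is confirmed dead, and never while a mount is already held, because two simultaneous mounts of a non-cluster filesystem mean corruption. A deliberately tiny sketch of that rule (the genuinely hard parts, like fencing the failed node and SCSI reservations, are outside it):

```python
def should_mount(peer_alive: bool, already_mounted: bool) -> bool:
    """True only when the standby should take over the shared disk.

    Never mount while the peer might still be writing, and never
    mount twice -- either case risks filesystem corruption.
    """
    return not peer_alive and not already_mounted

assert should_mount(peer_alive=False, already_mounted=False)      # take over
assert not should_mount(peer_alive=True, already_mounted=False)   # stay cold
assert not should_mount(peer_alive=False, already_mounted=True)   # already own it
```

The danger is the "confirmed dead" check: a primary that is merely partitioned, not dead, will keep writing, so real setups power-cycle the lost node before the standby mounts.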
The Internet

Craig Silverstein Answers Your Google Questions 292

On June 20, we requested questions to submit to Google Director of Technology Craig Silverstein, and got a heck of a lot of them. Here are Craig's answers to the 10 we sent him, along with a "bonus" answer to an additional question he chose himself. (Yes, Craig reads Slashdot. His answers make that pretty obvious.)
Technology

Using IR Lasers Instead of Fiber 209

Artifice_Eternity writes: "Can't deal with the trouble, time or expense of digging up the street to get fiberoptic cable to your building in the big city? There's another way: line-of-sight infrared lasers between your building and another one nearby. Repeaters and redundancy can keep the chain going reliably for miles, with gigabit data transmission rates."
Hardware

Inexpensive Network Servers? 45

Linuxthess asks: "I work in a small company with only 20 or so employees. Being the most tech-savvy of them, I find myself doing less work as a salesman and more as their unpaid tech support. I was asked for a solution to provide a domain for login authentication, a DHCP server, a webserver, file & printer services, and e-mail. I found three companies with inexpensive yet solid products aimed at what we need: Celestix, with their Aries and Taurus products; a company in Chicago called Dartek, which sells a custom-built box called iMass that comes in three flavors; and lastly a company in Canada named Net Integration Technologies Inc., whose Net Integrator box is available in various flavors. Does anyone have experience with these solutions? I think we will go with the Taurus, but I want to hear a little more about the quality of doing this job inexpensively (these things start at $800 and go up to $3000). I spoke with a tech-support guy, and he told me customers buy a couple of these, since they're so cheap, for redundancy and clustering."
Programming

When Making a Comprehensive Retrofit of your Code... 385

chizor asks: "My programming team is considering making some sweeping changes to our code base (150+ Perl CGIs, over a meg of code) in the interest of consistency and reduced redundancy. We're going to have to make some hard decisions about code style. What suggestions might readers have about tackling a large-scale retrofit?" Once the decision has been made for a sweeping rewrite, what can you do to make sure things go smoothly and you don't run into development snags, especially as the development cycle progresses?
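The codebase in question is Perl, but the first step of a redundancy-reduction retrofit, finding the duplication, is language-agnostic: hash fixed-size windows of normalized lines and report any window that appears in more than one place. A rough sketch (filenames and sample contents are invented):

```python
import hashlib

def duplicate_chunks(files, window=4):
    """Map hash-of-N-consecutive-lines -> list of (filename, line_no).

    Lines are stripped of surrounding whitespace before hashing, so
    re-indented copies still match. Returns only chunks seen twice+.
    """
    seen = {}
    for name, text in files.items():
        lines = [ln.strip() for ln in text.splitlines()]
        for i in range(len(lines) - window + 1):
            chunk = "\n".join(lines[i:i + window])
            key = hashlib.sha1(chunk.encode()).hexdigest()
            seen.setdefault(key, []).append((name, i + 1))
    return {k: v for k, v in seen.items() if len(v) > 1}

files = {
    "a.pl": "use strict;\nopen_db();\nquery();\nclose_db();\nprint 1;\n",
    "b.pl": "use CGI;\nuse strict;\nopen_db();\nquery();\nclose_db();\n",
}
dupes = duplicate_chunks(files, window=4)
assert len(dupes) == 1      # one repeated 4-line chunk across the two files
```

Each hit is a candidate for extraction into a shared module, which gives the rewrite a concrete, measurable target instead of a vague mandate for "consistency."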
Technology

Are Hybrid Solar/Grid Houses Practical? 54

Controlio asks: "With the continuing power crisis and the announcement of major power rate hikes, I figure now is an excellent time to pose this question. Instead of paying these inflated prices for power, I'd like to sink my money into a long-term solution. Cutting myself off from the power grid isn't practical, as I periodically use too much power to go 'solar-only'. But how practical is adding solar for either power redundancy (in case of a blackout) or as supplemental power? Redundancy would be nice, but going supplemental would mean using solar as my primary power when it's available (and I hear tell that you CAN have negative electric bills if you produce more than you use). Do the costs/advantages of either provide enough incentive to be worth investing in? How would one go about creating a hybrid house? And finally, of course, which is cheaper: investing in expensive solar paneling, or paying the outrageous charges the power company wants?"
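The "which is cheaper" question comes down to a simple payback calculation: system cost divided by the yearly value of the electricity it displaces (or sells back under net metering). Every number below is an assumed placeholder; plug in your own installer quote, production estimate, and utility rate:

```python
def payback_years(system_cost, annual_kwh, rate_per_kwh):
    """Years until avoided/net-metered electricity pays off the system.

    Ignores rate escalation, rebates, maintenance, and panel degradation,
    all of which shift the answer in practice.
    """
    return system_cost / (annual_kwh * rate_per_kwh)

# Hypothetical: $12,000 installed, producing 4,000 kWh/yr at $0.15/kWh.
years = payback_years(12_000, 4_000, 0.15)
# 12,000 / (4,000 * 0.15) = 12,000 / 600 = 20 years
```

The sensitivity is the interesting part: double the utility rate (exactly what a rate hike does) and the payback halves, which is why questions like this one resurface whenever prices spike.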
The Internet

Whatever Happened to Internet Redundancy? 200

blueforce asks: "At one time, there was this really neat concept built into the Internet that said there's all this redundancy, like a spider web. If one segment or router went down, the Internet would re-route traffic around the faulty segment and keep on chuggin'. So, as I sit here today and can't get to a whole bunch of places on the net, I'm wondering: what gives? Where's all the redundancy? I'm not referring to mirrors or co-location. It almost seems like a script kiddie with some real ambition could bring the world to its knees. What really happens when routers go down, and what goes on when something like a Cable & Wireless pipe or someone else's OC-something backbone goes down?" Redundancies are nice, but not infinite. Planned DoS attacks can take out dozens or hundreds of routers at once, and as the number of downed nodes increases, rerouting becomes increasingly difficult. What are some of the largest problems with the systems in use today, and are there ways to improve them?
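The "spider web" behavior the poster remembers is just shortest-path recomputation over whatever links remain, and it works only until the failures partition the graph. A toy illustration with breadth-first search on a four-node diamond network (node names invented):

```python
from collections import deque

def shortest_path(graph, src, dst):
    """BFS shortest path by hop count; returns None if dst is unreachable."""
    prev, frontier = {src: None}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:                      # rebuild path by walking back
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in prev:
                prev[nbr] = node
                frontier.append(nbr)
    return None

net = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}
assert shortest_path(net, "A", "D") in (["A", "B", "D"], ["A", "C", "D"])

# Link A-B fails: traffic reroutes through C...
net["A"].discard("B"); net["B"].discard("A")
assert shortest_path(net, "A", "D") == ["A", "C", "D"]

# ...but cut A-C as well and A is partitioned: redundancy exhausted.
net["A"].discard("C"); net["C"].discard("A")
assert shortest_path(net, "A", "D") is None
```

Real BGP convergence is messier (policy, propagation delay, flap damping), but the final state is the same: redundancy buys you tolerance of some failures, and no protocol can route around a node with zero surviving links.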
Hardware

Inexpensive Storage of Terabytes on WORM Media? 30

noSpaceleftonDevice asks: "The company I work for stores large amounts of data on magneto-optical platters. We currently buy these for about $60 per 4.6 GB platter. So far, we have over a terabyte stored, which really adds up (especially considering each item is stored on multiple disks and kept off site for redundancy and safety), and my guess is that we will write that much again in the next 12 months. We could use standard write-once CDs (which are much cheaper), but each terabyte would require well over 1000 CDs (not counting redundancy). Writing to, storing, and managing that many platters quickly becomes unmanageable. I'm wondering if anyone in the Slashdot community knows of a better/faster/cheaper way to write large amounts of data to Write-Once Read-Many media."
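The poster's figures make the trade-off concrete: media count and cost per terabyte for MO platters versus write-once CDs. The CD price and capacity below are assumed ballpark numbers for the era, not from the submission:

```python
def per_terabyte(media_gb, media_cost):
    """(discs needed, total media cost) to store one decimal terabyte."""
    discs = 1000 / media_gb
    return discs, discs * media_cost

mo_discs, mo_cost = per_terabyte(4.6, 60.0)   # MO: ~218 platters, ~$13,000/TB
cd_discs, cd_cost = per_terabyte(0.65, 1.0)   # CD-R: ~1,539 discs, ~$1,540/TB
```

So CDs are roughly an order of magnitude cheaper per terabyte but need seven times as many pieces of media, which is exactly the handling problem the poster wants to escape; the cost that dominates at this scale is managing discs, not buying them.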
IBM

If IBM Is Serious About Linux, What Do WE Want? 167

bfree asks: "Robert LeBlanc, Vice President, Software Strategy, Software Solutions Division, says both that IBM would open source any part of AIX and that we would be better off taking bits and pieces, along with the expertise IBM brings. IBM's AIX Web site touts the Linux compatibility of AIX and the new AIX 5L, only slightly behind statements such as 'A robust, scalable UNIX platform for critical applications.' It's clear IBM wants to be involved with Linux, and I feel that we should want that too. What should we ask them to do for us in return for their involvement? Networking scalability and redundancy, optimization and facilities for database systems (as JFS has started), or systems-management applications? It seems to me we have an offer on the plate from IBM to create a new joint project to bring Linux up another level, if we can find a way there from AIX. Surely we must take them up on this?"
The Internet

Web And Database Synchronization? 11

crazney asks: "I am developing a medium-sized Web site with a friend, and we have stumbled across a small problem. Our hosting will consist of several load-balanced Web servers and possibly two or three database servers. The extra database servers may be used for load balancing or for redundancy. My question is: what can we use to keep these in sync? We would like to ensure that all the Web servers always serve an identical copy of the site, and that any change to one database propagates to the others. These servers may be geographically separated, so something low-bandwidth would be helpful. For the Web servers, a quick and dirty answer could be CVS, but I'm stumped on the database servers (which will probably be running PostgreSQL)."
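Whatever replication scheme the poster ends up with, a low-bandwidth way to verify it is working is to compare per-row checksums rather than shipping the rows themselves. A sketch with invented in-memory "tables" (real replication must also handle ordering and conflicts, which this ignores):

```python
import hashlib

def row_digests(table):
    """Map primary key -> short hash of the row's contents.

    Only these small digests cross the wire, not the row data.
    """
    return {pk: hashlib.md5(repr(row).encode()).hexdigest()
            for pk, row in table.items()}

def out_of_sync(primary, replica):
    """Primary keys whose rows differ or exist on only one side."""
    a, b = row_digests(primary), row_digests(replica)
    return sorted(pk for pk in a.keys() | b.keys() if a.get(pk) != b.get(pk))

primary = {1: ("alice",), 2: ("bob",), 3: ("carol",)}
replica = {1: ("alice",), 2: ("bobby",)}        # one stale row, one missing
assert out_of_sync(primary, replica) == [2, 3]
```

Only the rows flagged as divergent need to be re-shipped, which keeps a geographically separated replica cheap to reconcile over a thin link.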
The Internet

120 Gigabit Pipe To Oz Begins Operation 236

dustpuppy writes: "The new Southern Cross Cable Network connecting Australia to the US is now operational. Featuring 120 Gigabit capacity and with a latency of 70 msec, the new trans-Pacific cable is 120 times the capacity of the existing Australasia/North America connection. Now us poor Aussies can download our mp3s that much faster! You can read more about it here." Interesting, too, how it's constructed. From the article: "The network consisted of two separate cables configured in three self-healing rings, with all three rings to be completed early next year. The duplicate-ring construction gave the network greater redundancy - if one side of the network was damaged or became inoperable, traffic could be transferred to the other side instantly." Neat.
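Worth noting what those two headline numbers mean for an actual download: total transfer time is serialization (size over bandwidth) plus at least the link latency. A back-of-envelope sketch with an assumed 5 MB mp3:

```python
def transfer_seconds(size_bytes, gbit_per_s, latency_ms):
    """Idealized transfer time: serialization delay + one-way latency.

    Ignores TCP slow start, which in practice makes latency matter
    even more than this simple model suggests.
    """
    return size_bytes * 8 / (gbit_per_s * 1e9) + latency_ms / 1e3

# A 5 MB file over the full 120 Gbit/s pipe at 70 ms latency:
t = transfer_seconds(5e6, 120, 70)
# serialization is ~0.3 ms; the 70 ms of latency dominates completely
```

The takeaway: for small files across the Pacific, the cable's latency (set by the speed of light in glass) is the floor, and no amount of added capacity lowers it; capacity helps when many users, or very large transfers, share the pipe.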
Technology

Practical Issues In Database Management 55

Fabian Pascal has written this work on issues that come up in database administration, with recommendations for solving them. It's not really a practical guide, despite the title, but many administrators would probably benefit from reading the book and keeping it handy. Yes, I mean you, the one who's got a copy of FileMaker Pro at home and thinks he knows it all.
