Remote Data Access Solutions?

magoldfish asks: "Our company has several terabytes of data, typically chunked into 2-6 GB files, that we need to share among users at sites around the US. Our users range in technical skill from novice to guru, data access needs to be secure, and we require client access from multiple platforms (Linux, Mac, and Windows). These files will likely be accessed infrequently, and cached on local hard drives by our power users. We've been running WebDAV, primarily to accommodate non-savvy users and guarantee access from firewalled environments, but the users are really complaining about download speed — maybe 50 KB/sec serving from a Mac. Any suggestions for better alternatives, or comments on the pros and cons of alternative access techniques?"
This discussion has been archived. No new comments can be posted.

  • by LiquidCoooled ( 634315 ) on Thursday November 09, 2006 @02:37PM (#16787753) Homepage Journal
    Save bandwidth, time and support headaches.
  • by kperrier ( 115199 ) on Thursday November 09, 2006 @02:38PM (#16787769)
    Without knowing how the data will be used, it will be hard for anyone to supply you with recommendations.
  • by jp10558 ( 748604 ) on Thursday November 09, 2006 @03:20PM (#16788071)
    I would expect you could use something like VNC or remote X sessions over ssh/VPN (Hamachi, OpenVPN, etc.) and keep the data local.

    Or, if you need it spread out for some reason, iFolder or rsync seem like the best choices. However, you could also look at AFS.

    Basically, you have to cut the long-haul data transfers down somehow, or else get faster connections.
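
    If rsync is the route, here is a minimal sketch of a client-side pull over ssh (the host, paths, and chunk name are made up for illustration):

        # Pull one large chunk from a central server, transferring only
        # the blocks that changed since the cached copy was made.
        import subprocess

        def sync_chunk(remote, local):
            # --partial keeps interrupted transfers resumable, --compress
            # helps on slow WAN links, and --inplace lets rsync's delta
            # algorithm update just the changed blocks of an existing
            # local copy instead of rewriting the whole file.
            subprocess.run(
                ["rsync", "--partial", "--compress", "--inplace",
                 "-e", "ssh", remote, local],
                check=True,
            )

        sync_chunk("user@fileserver.example.com:/data/chunk-0042.bin",
                   "/var/cache/chunks/chunk-0042.bin")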
  • More of a question (Score:3, Insightful)

    by east coast ( 590680 ) on Thursday November 09, 2006 @03:42PM (#16788211)
    These files will likely be accessed infrequently, and cached on local hard drives by our power users.

    But how often do these files need to be updated? Is the end user in a read-only situation? How infrequent is infrequent? How many users are you talking about, and what's their density? Even though the access is "infrequent," does that access modify data that would have to be shared across your entire user base?

    Your scenario needs some gaps filled in as far as requirements go. I see a lot of people suggesting large-capacity media shipped to the user, but if this data is updated frequently, that's not a solution. If you have a large number of users and the data is not updated often, you would still have to weigh the cost of sending updates to X number of users against the cost of keeping the data centralized and upgrading the infrastructure to meet their needs.

    If changes need to be shared from the user end, then physical media via postal services is going to cause coordination problems over which version of the files the other end users should be working from. And God forbid several users need to update this data in a short timeframe: you'll have disks mailed in by users who don't know what changes the others have made. You'd doubtless run into data integrity problems, and even if you're talking about users updating their data only every few months, you're still going to need to hire someone to coordinate these efforts and ensure that every end user gets every update in a timely fashion.

    Without more information it's hard to suggest something and still be confident that we're not leading you toward a solution that is completely inappropriate.
  • by jnik ( 1733 ) on Thursday November 09, 2006 @03:43PM (#16788215)
    Your data are in 2-6GB chunks. When a user needs a chunk, do they really need the whole chunk, or just a few megabytes of it? Downloading 6GB when you need 6MB really sucks. The solutions mentioned here all work by breaking down the chunk size. Thin client reduces the data transfer to what you need to see on the screen right then. Putting in a filesystem layer allows transfer of bits of files. Using a SQL database reduces it all to queries and the records you actually need to hit. (An aside: I've had a friend beat me over the head with "The single most expensive thing your program can do is open that database connection. Once you have it, USE IT. Make the DBM on the big beefy server do the work...they've optimized the heck out of it.")
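
    A minimal sketch of that aside, with Python's sqlite3 standing in for the big beefy server (the table and query are invented): open the connection once, reuse it, and push the work into SQL so only the answer crosses the wire, not the rows.

        import sqlite3

        # Open once, reuse everywhere -- connection setup is the
        # expensive part, per the advice above.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE records (site TEXT, payload BLOB)")

        def record_count(site):
            # The database does the filtering and counting; the client
            # receives a single integer instead of every matching row.
            cur = conn.execute(
                "SELECT COUNT(*) FROM records WHERE site = ?", (site,))
            return cur.fetchone()[0]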

    Figure out how the data can be broken into smaller chunks and managed...that will probably indicate what sort of tech will enable things for you.
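
    On the filesystem-layer point: even plain WebDAV/HTTP can serve pieces of a file if the server honors byte ranges. A minimal sketch (the URL and offsets are hypothetical):

        # Fetch a 6 MB slice of a 6 GB file instead of the whole thing.
        import urllib.request

        def fetch_range(url, start, length):
            req = urllib.request.Request(url)
            req.add_header("Range",
                           "bytes=%d-%d" % (start, start + length - 1))
            with urllib.request.urlopen(req) as resp:
                # 206 Partial Content means the server honored the range;
                # 200 would mean it sent the entire file anyway.
                if resp.status != 206:
                    raise RuntimeError("server ignored the Range header")
                return resp.read()

        data = fetch_range("https://dav.example.com/data/chunk-0042.bin",
                           1 << 30, 6 * 1024 * 1024)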

  • by beaststwo ( 806402 ) on Saturday November 11, 2006 @07:04PM (#16808822)
    The questioner doesn't have a data-transfer problem per se, but a wide-area information-processing problem (of which data transfer may be a part).

    While the answer may lie with any of the main themes recommended by responders (improving transfer speed, reducing the amount of data to transfer, or eliminating the need to transfer via remote-desktop solutions such as Citrix, MS Terminal Services, and VNC), the questioner really needs to define his needs. Does the data really need to be local at each site? If so, does each site really need such large chunks? What OS platforms are used? How important are privacy, data integrity, and "chain of custody"? Answering about ten basic questions can shed enough light on the problem to determine which of the recommended solutions (if any) makes sense.

    It's tough to get a good answer without asking good questions. On the other hand, when you ask the right question, the answer is generally obvious...
